scale - free networks , i.e. networks with power - law degree distributions , have recently been widely studied ( see refs . for a review ) .such degree distributions have been found in many different contexts , for example in several technological webs like the internet , the www , or electrical power grids , in natural networks like the network of chemical reactions in the living cell and also in social networks , like the network of human sexual contacts , the science and the movie actor collaboration networks , or the network of the phone calls .the topology of networks is essential for the spread of information or infections , as well as for the robustness of networks against intentional attack or random breakdown of elements .recent studies have focused on a more detailed topological characterization of networks , in particular , in the degree correlations among nodes .for instance , many technological and biological networks show that nodes with high degree connect preferably to nodes with low degree , a property referred to as disassortative mixing . on the other hand ,social networks show assortative mixing , i.e. highly connected nodes are preferably connected to nodes with high degree . in this paper we shall study some aspects of this topology , specifically the importance of the degree correlations , in three related models of scale - free networks andconcentrate on the two important characteristics : the tomography of shell structure around an arbitrary node , and percolation .our starting model is the one of barabasi and albert ( ba ) , based on the growth algorithm with preferential attachment . starting from an arbitrary set of initial nodes , at each timestep a new node is added to the network .this node brings with it proper links which are connected to nodes already present .the latter are chosen according to the preferential attachment prescription : the probability that a new link connects to a certain node is proportional to the degree ( number of links ) of that node .the resulting degree distribution of such networks tends to : krapivsky and redner have shown that in the ba - construction correlations develop spontaneously between the degrees of connected nodes . to assess the role of such correlations we shall randomize the ba - network .recently maslov and sneppen have suggested an algorithm radomitzing a given network that keeps the degree distribution constant . according to this algorithm at each steptwo links of the network are chosen at random .then , one end of each link is selected randomly and the attaching nodes are interchanged . however , in case one or both of these new links already exits in the network , this step is discarded and a new pair of edges is selected .this restriction prevents the apparearance of multiple edges connecting the same pair of nodes . 
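The rewiring step just described can be written down compactly. The sketch below is a minimal illustration, assuming the network is stored as a list of undirected edges; the function name and data layout are ours, not taken from the paper.

```python
import random

def link_randomize(edges, n_swaps):
    """Maslov-Sneppen style rewiring: repeatedly pick two links, pick one
    end of each at random, and interchange the attaching nodes, rejecting
    any swap that would create a self-loop or duplicate an existing edge.
    The degree of every node is conserved by construction."""
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    m = len(edges)
    for _ in range(n_swaps):
        i, j = random.sample(range(m), 2)
        a, b = edges[i]
        c, d = edges[j]
        if random.random() < 0.5:   # choose one end of the first link at random
            a, b = b, a
        if random.random() < 0.5:   # ... and one end of the second link
            c, d = d, c
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        # discard the step if it would produce a self-loop or a multiple edge
        if len(new1) < 2 or len(new2) < 2 or new1 in edge_set or new2 in edge_set:
            continue
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges
```

Repeating the step many times (a common choice is several successful swaps per edge) randomizes the wiring while preserving every node's degree.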
a repeated application of the rewiring step leads to a randomized version of the original network .we shall refer to this model as link - randomized ( lr ) model .the lr model can be compared with another model which is widely studied in the context of scale - free networks , namely with the configuration model introduced by bender and canfield .it starts with a given number of nodes and assigning to each node a number of `` edge stubs '' equal to its desired connectivity .the stubs of different nodes are then connected randomly to each other ; two connected stubs form a link .one of the limitations of this `` stub reconnection '' algorithm is that for broad distribution of connectivities , which is usually the case in complex networks , the algorithm generates multiple edges joining the same pair of hub nodes and loops connecting the node to itself .however , the cofiguration model and the lr model get equivalent as . one can also consider a node - randomized ( nr ) counterpart of the lr randomize procedure .the only difference to the link - radomized algorithm is that instead of choosing randomly two links we choose randomly two nodes in the network .then the procedure is the same as in the lr model .as we proceed to show , the three models have different properties with respect to the correlations between the degrees of connected nodes .while the lr ( configuration ) model is random , the genuine ba prescription leads to a network which is dissortative with respect to the degrees of connected nodes , and the nr model leads to an assortative network .this fact leads to considerable differences in the shell structure of the networks and also to some ( not extremely large ) differences in their percolation characteristics .we hasten to note that our simple models neglect many important aspects of real networks like geography but stress the importance to consider the higher correlations in the degrees of connected nodes .referring to spreading of computer viruses or human diseases , it is necessary to know how many sites get infected on each step of the infection propagation .thus , we examine the local structure in the network .cohen et al . examined the shells around the node with the highest degree in the network . in our studywe start from a node chosen at random .this initial node ( the root ) is assigned to shell number 0 .then all links starting at this node are followed .all nodes reached are assigned to shell number 1. then all links leaving a node in shell 1 are followed and all nodes reached that do nt belong to previous shells are labelled as nodes of shell 2 . the same is carried out for shell 2 etc ., until the whole network is exhausted .we then get , the number of nodes in shell for root .the whole procedure is repeated starting at all nodes in the network , giving , the degree distribution in shell .we define as : we are most interested in the average degree of nodes of the shell . in the epidemiological context, this quantity can be interpreted as a disease multiplication factor after steps of propagation .it describes how many neighbors a node can infect on average .note that such a definition of gives us for the degree distribution in the first shell : where and are the degree distribution and the number of nodes with degree in the network respectively .we bear in mind that every link in the network is followed exactly once in each direction .hence , we find that every node with degree is counted exactly times . 
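The shell tomography described here amounts to a breadth-first search from every root node. Below is a minimal sketch, assuming the network is given as an adjacency list; the function names are ours.

```python
from collections import deque, defaultdict

def shells_from_root(adj, root):
    """Assign every node reachable from `root` to a shell (root = shell 0)
    by breadth-first search and collect the degrees of the nodes found in
    each shell; the mean over shell m gives k_m for this root."""
    shell = {root: 0}
    queue = deque([root])
    degrees = defaultdict(list)
    degrees[0].append(len(adj[root]))
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in shell:
                shell[v] = shell[u] + 1
                degrees[shell[v]].append(len(adj[v]))
                queue.append(v)
    return degrees

def mean_degree_per_shell(adj):
    """Average the shell degrees over all choices of the root node."""
    totals, counts = defaultdict(float), defaultdict(int)
    for root in adj:
        for m, degs in shells_from_root(adj, root).items():
            totals[m] += sum(degs)
            counts[m] += len(degs)
    return {m: totals[m] / counts[m] for m in sorted(totals)}

def first_shell_mean_degree(adj):
    """Because every link is followed once in each direction, a node of
    degree k is counted k times in shell 1, so the mean degree there is
    <k^2>/<k>, independently of any degree correlations."""
    degs = [len(nbrs) for nbrs in adj.values()]
    k1 = sum(degs) / len(degs)
    k2 = sum(d * d for d in degs) / len(degs)
    return k2 / k1
```

The last function anticipates the result derived next: the first two shells are fixed by the degree distribution alone, while the deeper shells are where the three models differ.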
from eq.( ) follows that .this quatity , that plays a very important role in the percolation theory of networks , depends only on the first and second moment of the degree distribution , but not on the correlations .of course .note that as we have : for our scale - free constructions the mean degree in shell 1 depends significantly on the network size determining the cutoff in the degree distribution .however , the values of are the same for all three models : the first two shells are determined only by the degree distributions . in all other shells the three models differ . for the lr ( configuration )model one finds for all shells in the thermodynamic limit .however , since these distributions do not possess finite means , the values of are governed by the finite - size cutoff , which is different in different shells , since the network is practically exhausted within the first few steps , see fig.1 . in what followswe compare the shell structure of the ba , the lr and the nr models .we discuss in detail the networks based on the ba - construction with . for larger same qualitative results were observed . in the present workwe refrain from discussion of a peculiar case . for topology of the ba - model is distinct from one for since in this case the network is a tree .this connected tree is destroyed by the randomization procedure and is transformed into a set of disconnected clusters . on the other hand , for creation of large separate clusters under randomization is rather unprobable , so that most of the nodes stay connected .[ fig1 ] shows as a function of the shell number .panel ( a ) corresponds to the ba model , panel ( b ) to the lr model , and panel ( c ) to the nr model .the different curves show simulations for different network sizes : ; ; ; and .all points are averaged over ten different realizations except for those for networks of 100,000 nodes with only one simulation . in panel( d ) we compare the shell - structure for all three models at .the most significant feature of the graphs is the difference in . in the ba and lr modelsthe maximum is reached in the first shell , while for the nr model the maximum is reached only in the second shell : .this effect becomes more pronounced with increasing network size . in shells with large for all networksmostly nodes with the lowest degree are found .the inset in graph ( a ) of fig .[ fig1 ] shows the relation between average age of nodes with connectivy in the network as a function of their degree for the ba model .the age of a node and of any of its proper links is defined as where denotes the time of birth of the node . for the randomized lr and nr models agehas no meaning .the figure shows a strong correlation between age and degree of a node .the reasons for these strong correlations are as follows : first , older nodes experienced more time - steps than younger ones and thus have larger probability to acquire non - proper bonds . 
moreover , at earlier times there are less nodes in the network , so that the probability of acquiring a new link per time step for an individual node is even higher .third , at later time - steps older nodes already tend to have higher degrees than younger ones , so the probability for them to acquire new links is considerably larger due to preferential attachment .the correlations between the age and the degree bring some nontrivial aspects into the ba model based on growth , which are erased when randomizing the network .let us discuss the degree distribution in the second shell .in this case we find as that every link leaving a node of degree is counted times .let be a probability that a link leaving a node of degree enters a node with degree . neglecting the possibility of short loops ( which is always appropriate in the thermodynamical limit ) and the inherent direction of links ( which may be not totally appropriate for the ba - model ) we have : .\label{p2}\ ] ] 2 the value of gives important information about the type of mixing in the network . to studymixing in networks one needs to divide the nodes into groups with identical properties .the only relevant characteristics of the nodes that is present in all three models , is their degree .thus , we can examine the degree - correlations between neighboring nodes , which we compare with the uncorrelated lr model , where the probability that a link connects to a node with a certain degree is independent from whatever is attached to the other end of the link : .all other relations would correspond to assortative or disassortative mixing .qualitatively , assortativity then means that nodes attach to nodes with similar degree more likely than in the lr - model : for .dissortativity means that nodes attach to nodes with very different degree more likely than in the lr - model : for or . inserting this in eq.([p2 ] ) , and calculating the mean , one finds qualitatively that for assortativity , and for dissortativity . in the following weshow where the correlations of the ba and nr model originate .a consequence of the ba - algorithm is that there are two different types of ends for the links .each node has exactly proper links attached to it at the moment of its birth and a certain number of links that are attached later . since each node receives the same number of links at its birth , towards the proper nodes a link encounters a node with degree with probability . to compensate for this , in the other direction a node with degree encountered with the probability , so that both distributions together yield .on one end of the link nodes with small degree are predominant : for small . 
on the other end nodes with high degreeare predominant : for large .this corresponds to dissortativity .actually the situation is somewhat more complex since in the ba model these probability distributions also depend on the age of the link .assortativity of the nr model is a result of the node - randomizing process .since the nodes with smaller degree are predominant in the node population , those links are preferably chosen that have on the end with the randomly chosen node a node with a smaller degree ( for small ) .then the randomization algorithm exchanges the links and connects those nodes to each other .this leads to assortativity for nodes with small degree , which is compensated by assortativity for nodes with high degree .percolation properties of networks are relevant when discussing their vulnerability to attack or immunization which removes nodes or links from the network . for scale - free networksrandom percolation as well as vulnerability to a deliberate attack have been studied by several groups .one considers the removal of a certain fraction of edges or nodes in a network .our simulations correspond to the node removal model ; is the fraction of removed nodes . below the percolation threshold a giant component ( infinite cluster ) exists , which ceases to exist above the threshold . a giant component , and consequently is exactly defined only in the thermodynamic limit : it is a cluster , to which a nonzero fraction of all nodes belongs .in and a condition for the percolation transition in random networks has been discussed : every node already connected to the spanning cluster is connected to at least one new node . gives the following percolation criterion for the configuration model : where the means correspond to an unperturbed network ( ) . for networks with degree distribution eq.( ) , diverges as .this yields for the random networks with a such degree distribution a percolation threshold in the thermodinamic limit , independent of the minimal degree ; in the epidemiological terms this corresponds to the absence of herd immunities in such systems .crucial for this threshold is the power - law tail of the degree distribution with an exponent .moreover , ref . shows that the critical exponent governing the fraction of nodes of the giant component , , diverges as the exponent of the degree distribution approaches .therefore approaches zero with zero slope as . in fig .[ fig2 ] we plotted for the three models discussed as a function of .the behavior of all three models for a network size of nodes is presented in panel ( a ) . in the insetthe size of the giant component was measured in relation to the number of nodes remaining in the network and not to their initial number .other panels show the percolation behavior of each of the models at different network sizes : panel ( b ) corresponds to the ba model , ( c ) to the lr model , and ( d ) to the nr model . for the largest networks with nodes we calculated 5 realizations for each model , for those with ; ; and nodes averaging over 10 realizationwas performed .for all three models within the error bars the curves at different network sizes coincide .this shows that even the smallest network is already close to the thermodynamical limit .r. albert et al . have found a similar behavior in a study of ba - networks .they analyze networks of sizes and concluding `` that the overall clustering scenario and the value of the critical point is independent of the size of the system '' . 
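Before turning to the simulation results, two of the quantities discussed above can be estimated directly from an edge list: the sign of the degree-degree correlations, summarized here by the Pearson correlation of the degrees at the two ends of a link (Newman's assortativity coefficient, one standard diagnostic of the mixing described, not necessarily the one used in the paper), and the size of the giant component after random removal of a fraction q of the nodes. A hedged sketch:

```python
import random

def degree_assortativity(edges, degree):
    """Pearson correlation between the degrees at the two ends of a link;
    r > 0 signals assortative and r < 0 disassortative mixing.  Each
    undirected link contributes in both directions."""
    xs, ys = [], []
    for u, v in edges:
        xs += [degree[u], degree[v]]
        ys += [degree[v], degree[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    return cov / (var_x * var_y) ** 0.5

def giant_component_fraction(adj, q):
    """Remove a random fraction q of the nodes and return the size of the
    largest surviving cluster, normalized here by the original number of
    nodes (the inset of fig. 2 instead normalizes by the surviving nodes)."""
    nodes = list(adj)
    removed = set(random.sample(nodes, int(q * len(nodes))))
    seen, largest = set(), 0
    for start in nodes:
        if start in removed or start in seen:
            continue
        stack, size = [start], 0   # depth-first search over the survivors
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        largest = max(largest, size)
    return largest / len(nodes)
```

Averaging the giant-component fraction over many realizations of the removal, and over several network realizations, produces curves of the type shown in fig. 2.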
in the simulations we find two regimes : for moderate we find , that the sizes of the giant components of the ba , lr , and nr model obey the inequalities , while for close to unity the inequalities are reverted : .however , in this regime the differences between and are subtle and hardly resolved on the scales of fig .we note that similar situation was observed in ref .however , there the size of the giant cluster was measured not as a function of but of a scaling parameter in the degree distribution .the observed effects can be explained by the correlations in the network . for one has .now , the probability that single nodes loose their connection to the giant cluster depends only on the degree distribution , and not on correlations .so , the difference in the must be explained by the break - off of clusters containing more than one node .the probability for such an event is smaller in the ba than in the lr model , since dissortativity implies that one finds fewer regions , where only nodes with low degree are present .however , when we get to the region of large , as nodes with low degree act as bridges between the nodes with high degree , the connections between the nodes with high degree are weaker in the case of the ba model than in the case of the lr model .so , the probability that nodes with high degree break off is higher for the ba model than for the lr model .there is no robust core of high - degree nodes in the network .the correlation effects for the nr model , when compared with the lr model , are opposite to those for the ba model . 2we consider three different models of scale - free networks : the genuine barabasi - albert construction based on growth and preferential attachment , and two networks emerging when randomizing it with respect to links or nodes .we point out that the ba model shows dissortative behavior with respect to the nodes degrees , while the node - randomized network shows assortative mixing .however , these strong differences in the shell structure lead only to moderate quantitative difference in the percolation behavior of the networks .partial financial support of the fonds der chemischen industrie is gratefully acknowledged .r. albert and a .-barabsi , rev .mod . phys . * 74 * , 47 ( 2002 ) .s.n . dorogovtsev and j.f.f .mendes , adv .phys . * 51 * , 1079 ( 2002 ) .m. faloutsos , p. faloutsos , and c. faloutsos , comput .commun . rev .* 29 * , 251 ( 1999 ) .r. pastor - satorras , a. vazquez , and a. vespignani , phys .lett . * 87 * , 258701 ( 2001 ) .r. albert , h. jeong , and a .-barabasi , nature ( london ) * 401 * , 130 ( 1999 ) .a. broder , r. kumar , f. maghoul , p. raphavan , s. rajagopalan , r. stata , a. tomkins , and j. wiener , comput .netw . * 33 * , 309 ( 2000 ) .d. j. watts , and s. h. strogatz , nature ( london ) * 393 * , 440 ( 1998 ) .h. jeong , b. tombor , r. albert , z.n .oltvai , and a .-barabsi , nature * 407 * , 651 ( 2000 ) .d.a . fell , and a. wagner , nat .* 18 * , 1121 ( 2000 ) .h. jeong , s.p .mason , a .-barabsi , and z.n .oltvai , nature * 411 * , 41 ( 2001 ) .f. liljeros , c. edling , l.a.n .amaral , h.e .stanley , and y. berg , nature * 411 * , 907 ( 2001 ) .newman , proc .sci . * 98 * , 404 ( 2001 ) .newman , phys .e * 64 * , 016131 ( 2001 ) .a. l. n. amaral , m. barthlmy , and h. e. stanley , proc .sci . * 97 * , 11149 ( 2000 ) .r. albert , and a .-barabsi , phys .* 85 * , 5234 ( 2000 ) .j. abello , a. buchsbaum , and j. westbrook , lect .notes comput .sci . * 1461 * , 332 ( 1998 ) .m. e. j. newman , phys .lett . 
* 89 * , 208701 ( 2002 ) .j. berg , and m. lssig , phys .rev . lett . * 89 * , 228701 ( 2002 ) .v. m. eguluz , and k. klemm , phys .lett . * 89 * , 108701 ( 2002 ) .m. bogu , and r. pastor - satorras , phys .e * 66 * , 047104 ( 2002 ) .s. maslov , and k. sneppen , science * 296 * 910 ( 2002 ) .a. vzquez , and y. moreno , phys . rev .e * 67 * , 015101 ( 2003 ) .goh , e. oh , b.kahng , and d. kim , phys . rev .e * 67 * , 017101 ( 2003 ) .m. bogu , r. pastor - satorras , and a. vespignani , phys .lett . * 90 * , 028701 ( 2003 ) .m. e. j. newman , phys .e * 67 * , 026126 ( 2003 ) .m. a. serrano , and m. bogu , e - print cond - mat/0301015 .barabsi , and r. albert , science * 286 * , 509 ( 1999 ) .p. l. krapivsky , s. redner , and f. leyvraz , phys .. lett . * 85 * , 4629 ( 2000 ) .dorogovtsev , j.f.f .mendes , and a.n .samukhin , phys .lett . * 85 * , 4633 ( 2000 ) .p. l. krapivsky , and s. redner , phys .e * 63 * 066123 ( 2001 ) .e. a. bender , and e. r. canfield , journal of combinatorial theory ( a ) * 24 * , 296 ( 1978 ) .m. molloy , and b. reed , random struct .algorithms * 6 * , 161 ( 1995 ) .sander , c.p .warren , and i.m .sokolov , phys .e * 66 * , 056105 ( 2002 ) .d. ben - avraham , a.f .rozenfeld , r. cohen , and s. havlin , e - print cond - mat/0301504 .r. cohen , d. dolev , s. havlin , t. kalisky , o. mokryn , and y. shavitt , leibniz center technical report 2002 - 49 r. cohen , k. erez , d. ben - avraham , and s. havlin , phys . rev . lett .* 85 * , 4626 ( 2000 ) .r. cohen , d. ben - avraham , and s. havlin , phys .e * 66 * , 036113 ( 2002 ) .r. albert , h. jeong , and a .-barabsi , nature * 406 * , 378 ( 2000 ) .r. cohen , k. erez , d. ben - avraham , and s. havlin , phys .lett . * 86 * , 3682 ( 2000 ) .d. callaway , m. newman , s. strogatz , and d. watts , phys .lett . * 85 * , 5468 ( 2000 ) .
We discuss three related models of scale-free networks with the same degree distribution but different correlation properties. Starting from the Barabasi-Albert construction based on growth and preferential attachment, we discuss two other networks that emerge when it is randomized with respect to links or nodes. We point out that the Barabasi-Albert model displays disassortative behavior with respect to the node degrees, while the node-randomized network shows assortative mixing. These kinds of correlations are visualized by discussing the shell structure of the networks around an arbitrary node. In spite of the different correlation behavior, all three constructions exhibit similar percolation properties.
several second generation gravitational wave detectors are under construction now ( advanced ligo , advanced virgo , geo - hf ) . in general , the sensitivity of these instruments will be limited at low and high frequencies by quantum noise , however , in a critical mid - band around 100 hz , the thermal noise of the test mass mirror coatings is a significant limit . a typical sensitivity curve for a second generation detector is shown in fig . [ adligonoise ] .in light of this fact , the study of thermal noise in dielectric coatings is an active area of research in the gravitational wave community .the optical cavities at the heart of the length - sensing mechanism of gravitational wave interferometers use mirrors made with multilayer dielectric coatings to produce the high reflectivities that are required . for good performance ( low absorption ) at nm ( the operating wavelength of most gravitational wave detectors ) , the coatings are usually made of alternating layers of silica ( ) and tantala ( ) . in an elegantly conceived experiment that studied mechanical loss in these coatings , penn et al . showed that the primary source of mechanical dissipation in these coatings was in the tantala layers rather than in the silica layers or at the interfaces between layers .this result suggests two possible mechanisms for reducing the mechanical dissipation and , therefore , the in - band brownian noise .first , one can alter the chemistry of the tantala to reduce its mechanical loss .this has been accomplished by doping the tantala with titania ( ) to good effect .second , one can modify the geometry of the coating to reduce the total amount of tantala while preserving the reflectivity of the coating .such a coating is referred to here as an `` optimized coating '' and is the subject of this paper .projected noise floor of advanced ligo . as it stands , brownian noise in the test mass coatings will prevent the instrument from reaching the quantum limit.,width=307 ] a standard high - reflectivity multilayer dielectric coating consists of alternating layers of high and low index of refraction materials where the thickness of each layer is the local wavelength of the light .this design requires the minimum number of layers to achieve a prescribed reflectance .it does not , however , yield the lowest thermal noise for a prescribed reflectance .if there is a difference in the mechanical loss of the high and low index materials then the overall coating dissipation can be reduced by decreasing the total amount of the high loss material .we have developed a systematic procedure for designing minimal - noise coatings featuring a prescribed reflectivity . we have manufactured mirrors based on such a design at the laboratoire des matriaux avancs and measured the broadband noise floor of the thermal noise interferometer ( tni ) at the california institute of technology with these mirrors installed .we then compared this to an earlier measurement of the noise floor of the tni when mirrors with standard quarter - wavelength ( qwl ) coatings manufactured at research electro - optics , inc .were in place .this paper summarizes the relevant theoretical background , and presents the results of these measurements .until 2002 , surface fluctuations of thermal origin in the mirrors of an optical cavity had never been directly observed . 
up to that pointthe small scale of the fluctuations meant that they were always below the level of other noise sources such as laser frequency noise , shot noise , etc .the tni is a test bed interferometer specifically designed to detect these surface fluctuations .it consists of a lightwave model 126 npro ( non - planar ring oscillator ) laser , a triangular mode cleaner cavity used to spatially filter and stabilize the frequency of the laser beam , and two high finesse test cavities where cavity length noise measurements are made .the key features of the tni design that enable resolution of thermal noise are a relatively high power laser of nearly 1 w , frequency stabilization by the mode cleaner by a factor of 1000 compared to the free - running laser frequency noise in the detection band , a relatively small beam radius of 164 m to amplify the effect of the surface fluctuations , short test cavities to reduce the laser frequency stabilization requirement , and two identical test cavities to permit common mode noise rejection .see reference for more details about the tni .schematic of the thermal noise interferometer.,width=326 ] the layout of the tni is shown in fig .[ layout ] .all three optical cavities are under vacuum and every cavity mirror is suspended like a pendulum from a loop of steel wire to preserve its high mechanical q and to isolate it from external sources of noise .the pound - drever - hall technique is used to keep the cavities locked on resonance . in the case of a test cavity ,a signal is generated proportional to the deviation from the cavity resonance and this signal is fed back , after suitable amplification and filtering , to electromagnetic actuators at the end mirror of the cavity to control the cavity length .a block diagram of a test cavity servo is shown in fig . [ servodiagram ] .it is a simple matter to show that a fluctuation measured at the readout point , , can be converted to its equivalent fluctuation in cavity length , , with the following formula , in the spectral ( frequency ) domain : it is imperative that the ( spectral ) transfer functions of the servo elements , , , and are known accurately so that this conversion is valid .this calibration error is the largest source of uncertainty at the tni .block diagram of the feedback loop of a test cavity . and denote fluctuations in cavity length and laser frequency , respectively .data is taken at the point .,width=268 ] in a single test cavity at the tni , laser frequency noise begins to dominate the noise floor above about 6 khz .the frequency range where thermal noise is dominant can be greatly extended if common mode rejection is implemented between the two test cavities .this is done by using a commercial pre - amplifier ( stanford research systems sr560 ) to subtract the readout signal of one test cavity from the other as shown in fig .[ difservo ] . if the two test cavity servos are well matched , the difference between the readout signals , , is related to the difference between the cavity length fluctuations in the following manner : we refer to the two test cavities as the `` north '' and `` south '' cavities and the subscripts and reflect this . and are the transfer functions given by eq .( 1 ) for the north and south cavities , respectively . 
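The calibration step can be illustrated generically. The sketch below does not reproduce the paper's eq. (1); it only encodes the generic closed-loop relation that a spectrum measured inside a feedback loop is suppressed by the loop and must be scaled by |1 + G(f)| and divided by the sensing gain to recover the equivalent free-running cavity length fluctuation. The decomposition of G and of the sensing gain into the individual servo elements has to be taken from the paper.

```python
import numpy as np

def readout_to_length(freq, readout_asd, sensing_gain, open_loop_gain):
    """Generic closed-loop calibration sketch (not the paper's eq. (1)):
    `sensing_gain` converts cavity length to readout volts and
    `open_loop_gain` is the full loop transfer function, both complex and
    evaluated on `freq`.  Well above the unity-gain frequency |G| << 1,
    and the conversion reduces to a frequency-independent constant, as
    noted in the text."""
    return np.abs(1.0 + open_loop_gain) / np.abs(sensing_gain) * readout_asd
```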
by converting the difference readout in this manner, we can compare the noise spectrum of the tni to the theoretically predicted thermal length noise .block diagram illustrating how the difference signal readout , , is related to the length noise in the `` north , '' , and `` south , '' , test cavities . and are attenuators used to balance out any difference in the responses of the servos and .,width=268 ] at the tni , thermal noise in the coatings can be observed for over a decade of frequencies starting at around 800 hz .this is above the unity gain frequency of the servos ( typically around 500 hz ) where the conversion formula simplifies to : here we have used the fact that the servo element , which is the ratio of the frequency of the laser to the length of the cavity , is the same for both test cavities .all the servo responses in eq .( 3 ) are frequency independent constants .the elements and are attenuators whose gains are adjusted so that the product , to maximize the common mode rejection ratio .once the common mode noise is removed from the tni by subtracting the test cavity readouts from each other , the noise floor of the instrument above about 20 khz becomes dominated by shot noise . for each test cavity ,the magnitude of the shot noise is determined by the power incident on the cavity s photodetector .this is light that is reflected from the test cavity and it consists primarily of the component of the test cavity input beam that is not mode - matched to the cavity ( the mode - matched component is transmitted through the cavity when it is resonant ) .the power depends on how well the cavities are aligned with their input beams and this can vary between measurements .the shot noise as a function of power on the photodetectors has been measured using a heat lamp as a shot noise limited source of radiation . accordingly , determining the magnitude of shot noise at the time a thermal noise measurement is made is simply a matter of noting the power on the photodetectors at that time and using the results from the heat lamp measurement .designing an optimized coating is a matter of finding the configuration of layer thicknesses that minimizes the brownian noise of the coating as a whole for a prescribed reflectance .since the qwl design has the maximum reflectance for a given number of layers , the reflectance can only be maintained by adding more layers to the coating . to design the optimized coating , a means of calculating the reflectance of a coating as a function of the layer thicknessesis needed . for time - harmonic ( ) normal plane wave incidence, the electric and magnetic fields at the -th interface of a coating are related to the fields at the -th one by the propagation matrix of the -th layer , . 
= \left [ \begin{array}{cc } \cos \delta_i & j n_{i}^{-1 } z_0\sin \delta_i \\ j n_i z_{0}^{-1 } \sin \delta_i & \cos \delta_i \\\end{array } \right ] \left [ \begin{array}{c } e_{i+1 } \\ h_{i+1 } \end{array } \right]\ ] ] where is the characteristic impedance of the vacuum , is the refractive index of the layer , and is the phase thickness of the layer .it is convenient to write the phase thickness as , where is the thickness of the layer in units of the local wavelength , .see fig .[ multidiagram ] for an illustration of a multilayer coating .the input impedance of the coating , defined as the ratio of the electric and magnetic fields at the vacuum - coating interface , , can then be obtained by chain multiplying the propagation matrices of all the layers in the coating : = \mathbf{m}_1 \cdot \mathbf{m}_2 \cdot \ldots \mathbf{m}_m \left [ \begin{array}{c } e_{m+1 } \\h_{m+1 } \end{array } \right]\ ] ] = \mathbf{m}_1 \cdot \mathbf{m}_2 \cdot \ldots \mathbf{m}_m \left [ \begin{array}{c } 1 \\n_s / z_0 \end{array } \right ] e_t.\ ] ] in the last step we used the fact that the fields at the coating - substrate boundary can both be written in terms of the electric field of the transmitted beam , .the reflection coefficient of the coating , , in terms of the input impedance is : the ( power ) reflectance of the coating is . with this, we have a means of calculating the reflectance of a coating from the layer thicknesses .we also need a means of calculating the brownian noise of the coating from the layer thicknesses .diagram of a layer coating . is the thickness of the -th layer and is its index of refraction . and are the electric and magnetic fields , respectively , at the -th interface.,width=326 ] the power spectral density ( psd ) of the brownian noise of a mirror is given by the following formula : where is boltzmann s constant , is the temperature , is the half - width of the gaussian laser beam , is the poisson ratio , and the young s modulus of the substrate .the psd is proportional to the effective loss angle of the mirror , , and , if the substrate has a high factor , this is dominated by the loss angle of the coating , so that .a precise formula for the coating loss angle is very complex , nonetheless , in the limit of small poisson s ratios , it can be written as a weighted sum of the thicknesses of the materials in the coating : where is the total thickness ( in units of local wavelength ) of all the low ( ) and high ( ) index layers , which , in our case , are made of silica and tantala , respectively .the weighting factors are given by : where , and denote the young s modulus and loss angle and the subscripts refer to silica , tantala , and substrate , respectively .unfortunately , there is much uncertainty in the values of the loss angles for thin - film materials so the ratio can only be said , with confidence , to lie somewhere between 5 and 10 .the optimized design was chosen among a few alternative ones yielding close - to - minimum noise , as the least sensitive to the inherent uncertainty in the value of as regards the noise psd reduction . 
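Equations (4)-(6) and (8) translate directly into a small amount of code. The sketch below assumes normal incidence, vacuum on the input side, and layer thicknesses expressed in units of the local wavelength; the weighting factors of eq. (9) are left as inputs because of the uncertainty in the thin-film loss angles discussed above.

```python
import numpy as np

Z0 = 376.73  # characteristic impedance of the vacuum (ohm)

def coating_reflectance(thicknesses, indices, n_substrate):
    """Chain-multiply the layer propagation matrices (eq. (4)) from the
    substrate side up, form the input impedance Z_in = E_1/H_1 (eq. (5)),
    and return the power reflectance |r|^2 from eq. (6)."""
    field = np.array([1.0, n_substrate / Z0], dtype=complex)  # [E, H] at the substrate, E_t = 1
    for t, n in reversed(list(zip(thicknesses, indices))):
        delta = 2.0 * np.pi * t                                # phase thickness of the layer
        m = np.array([[np.cos(delta), 1j * Z0 * np.sin(delta) / n],
                      [1j * n * np.sin(delta) / Z0, np.cos(delta)]])
        field = m @ field
    z_in = field[0] / field[1]
    r = (z_in - Z0) / (z_in + Z0)                              # vacuum on the incidence side
    return float(np.abs(r) ** 2)

def coating_loss_angle(total_low, total_high, w_low, w_high):
    """Weighted-thickness form of eq. (8): total_low and total_high are the
    summed silica and tantala thicknesses in units of the local wavelength,
    and the weights w_low, w_high come from eq. (9)."""
    return w_low * total_low + w_high * total_high
```

As a check, a single quarter-wavelength layer (t = 1/4) of index n on a substrate of index n_s gives r = (n_s - n^2)/(n_s + n^2), the textbook result.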
the problem of findingthe optimal configuration of coating layer thicknesses may seem a difficult one if the thicknesses of each and every layer were allowed to be free variables .however , preliminary simulations based on genetic minimization of the coating loss angle at prescribed reflectance , using two refractive materials and no prior assumptions about coating geometry , support the conclusion that the optimal configuration is periodic and consists of a stack of identical high / low index layer pairs or `` doublets , '' with the exception of the terminal ( top / bottom ) layers . accordingly , in designing our optimized coatings we restrict to only these structurally simple stacked doublet configurations with tweaked terminal layers .contours of constant reflectance ( ellipse - like curves ) and thermal noise ( straight lines ) for a stack of four identical tantala / silica doublets as functions of the silica ( ) and tantala ( ) layer thicknesses ( in units of local wavelength ) .the arrow pointing from darker to lighter areas indicates the direction of increasing thermal noise . in this plot .the dashed line consists of points where .the white dot marks the point of minimum thermal noise on that particular reflectance contour .the black dot marks a point on the same reflectance contour that satisfies the condition .this point has slightly greater thermal noise than the minimum noise point but the difference becomes negligible as the number of doublets ( 4 in this case ) increases.,width=307 ] further simplification is possible , though .figure [ multicontours ] is a plot of contours of constant reflectance and constant thermal noise ( in the case where ) as functions of the thickness ( in units of local wavelength ) of the silica ( ) and tantala ( ) layers for a coating consisting of four identical tantala / silica doublets ( for a total of eight layers ) . in the figure , the reflectance of the coating increases to a maximum at the point , the qwl configuration .the figure indicates that , for a given reflectance , there is not much difference in thermal noise between the point of minimum thermal noise ( the white dot in the figure ) and the point along the same reflectance contour where ( the black dot ) .actually , as the number of doublets is increased , the reflectance contours become even more squeezed along the half - wavelength doublet dashed line and the difference in thermal noise between the above two points becomes negligible .thus , to simplify matters further , we focus on configurations where the doublets are a half - wavelength thick .this leaves only two free parameters to adjust : the number of doublets , , and the thickness of the tantala layers , , the thickness of the silica layers being .the coating design then proceeds as follows :1 . starting with the standard qwl design with doublets and reflectance , use equation ( 8) to calculate the loss angle of the coating .2 . add one doublet to the coating then reduce the thickness of the tantala layers ( since they are more lossy ) and increase the thickness of the silica layers ( to maintain the half - wavelength doublet condition ) until the desired reflectance , ( calculated from eqs .( 4)(6 ) ) , is recovered .recalculate the loss angle of the coating .3 . 
repeat step 2 and keep repeating until a design with a minimum coating loss angle , , is obtained .the structure of the standard quarter wavelength coating.,width=316 ] the reference qwl coating measured with the tni consists of 14 silica / tantala doublets and is shown in fig .[ qwldesign ] . for protective purposes , the topmost layer of the coatingis silica for its hardness .it is half - wavelength , so as to have no influence on the coating transmission properties , as seen from eq .the reflectance of the coating is ( the transmittance is 278 ppm ) .following the procedure outlined above , we designed an optimized coating with the same reflectance .the material parameters used in the design process are shown in table [ parameters ] .figure [ phivn ] is a plot that shows how the coating loss angle varies as the number of doublets is increased .as the figure indicates , the minimum loss angle design consists of 17 identical doublets . as suggested by the genetic simulations ,this design was improved upon by tweaking the thicknesses of the end layers to further reduce the thermal noise while keeping the reflectance unchanged .the final optimized design is shown in fig .[ optdesign ] .if lies somewhere between 5 and 10 then the ratio of the coating loss angles , , should lie somewhere between 0.876 and 0.817 .assuming ( the value used in the design process ) , then .plot of the coating loss angle ( normalized by the loss angle of the qwl coating ) as a function of the number of doublets , .the corresponding values of the ratio are shown at the top of the plot .the qwl and the minimum noise ( optimized ) coatings are indicated in the plot.,width=326 ] this is the structure of the optimized coating .this coating was designed for minimal brownian noise and a transmittance of 278 ppm.,width=316 ]to obtain the length noise in the test cavities at the tni , we used a spectrum analyzer ( stanford research systems sr780 ) to measure the power spectral density ( psd ) at the instrument readout , .after calibrating the instrument by measuring the responses of the servo elements in fig .[ servodiagram ] , we converted this readout psd ( actually the square root of the psd ) to its equivalent length noise spectrum using eq .the resulting length noise spectrum of the qwl coating is shown in fig .[ qwlspectrum ] .the spectrum of the optimized coating is similar .note that from about 500 hz to 20 khz the measured length noise has a slope of that is characteristic of brownian noise ( the square root of the psd in eq .above 20 khz the shot noise begins to dominate the noise spectrum .the large peaks above 20 khz are body modes of the mirror substrates . to extract the loss angles of the coatings , we focused on the region around 3 khz since it is approximately at the center of the coating noise dominated region .plot of the spectral density of cavity length noise for the qwl coating along with the shot noise ( dashed line ) and the brownian noise ( solid line with frequency dependence ) . to extract the loss angle of the coating the incoherent sum of the brownian and shot noise ( solid black curve ) is fit to the data between the vertical lines at 2 khz and 4 khz with the loss angle as the free parameter in the fit.,width=326] in the process of taking measurements of the noise floor of the instrument , we observed a certain degree of instability that is mainly due to instability of the optical gains of the servos , represented by the element in the servo diagram in fig .[ servodiagram ] . 
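Restricting to half-wavelength doublets, the scan in steps 1-3 above reduces to a one-parameter root find per doublet count. The sketch below reuses coating_reflectance() and coating_loss_angle() from the previous sketch; it ignores the half-wavelength protective cap (which does not affect the transmission) and the final tweak of the terminal layers, and the layer ordering within a doublet is an assumption.

```python
from scipy.optimize import brentq

def optimize_half_wave_doublets(n_qwl, n_low, n_high, n_sub, w_low, w_high,
                                max_extra_doublets=10):
    """Step 1: reflectance and loss angle of the reference quarter-wavelength
    stack of n_qwl doublets.  Steps 2-3: add doublets one at a time, thinning
    the lossier tantala layers (and thickening the silica layers to keep each
    doublet half a wavelength thick) until the reference reflectance is
    recovered, and keep the design with the smallest coating loss angle."""
    def stack(n_doublets, t_high):
        t_low = 0.5 - t_high
        return [t_high, t_low] * n_doublets, [n_high, n_low] * n_doublets

    r_target = coating_reflectance(*stack(n_qwl, 0.25), n_sub)
    best = (coating_loss_angle(0.25 * n_qwl, 0.25 * n_qwl, w_low, w_high),
            n_qwl, 0.25)

    for n in range(n_qwl + 1, n_qwl + 1 + max_extra_doublets):
        # tantala thickness that restores the target reflectance with n doublets
        f = lambda t_h: coating_reflectance(*stack(n, t_h), n_sub) - r_target
        t_h = brentq(f, 1e-3, 0.25)
        phi = coating_loss_angle((0.5 - t_h) * n, t_h * n, w_low, w_high)
        if phi < best[0]:
            best = (phi, n, t_h)
    return best   # (minimum loss angle, doublet count, tantala thickness)
```

A scan of this kind underlies fig. [phivn], which locates the minimum at 17 identical doublets before the terminal layers are tweaked.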
to mitigate this, we took multiple measurements for each coating so that we could average the results . for each coating, we measured eight spectra from 2 to 4 khz .of these eight , four had a frequency bin width of 4 hz , for a total of 500 points in each spectrum , and four had a 16 hz bin width , for a total of 125 points in each spectrum .for each spectrum , we performed a least - squares fit of the function : where is the psd of brownian noise ( eq . ( 7 ) ) , and is the psd of shot noise , which is obtained from the heat lamp measurement as described earlier .the coating loss angle is the only free parameter in the fit .the factor of 4 in the equation above is due to the fact that there are a total of four mirrors in the two test cavities , so the total psd of brownian noise is four times that of a single mirror .the resulting loss angles are shown in table [ angles ] and fig .[ anglesplot ] is a graphical representation of these values .the mean loss angles are for the qwl coating and for the optimized coating .to be conservative , we used the standard deviations of the two sets of eight loss angles for the errors of their respective means rather than the standard deviations divided by the square - root of eight ( the number of measurements for each coating ) .this was done because at least some of the variation in the loss angles of each coating arises from the instability in the optical gain mentioned earlier and this variation is non - gaussian .the ratio of the means is .recall that the predicted ratio of optimized to qwl coating loss angles ( assuming ) was 0.843 , which is within the uncertainty of the measured ratio .it is apparent from fig .[ anglesplot ] that for the qwl coating , two of the measured loss angles ( the fourth and sixth ) are somewhat high compared to the other six . for the optimized coating ,one of the values ( the eighth ) is high relative to the others .each of these three values is significantly more than one standard deviation above the mean for that coating .these anomalously high measurements may be due to transients that have been observed to suddenly boost the noise floor of the instrument .they are not related to brownian noise in the coatings .if these measurements are discarded , the means for the coatings are reduced slightly to the values indicated by asterisks in table [ angles ] , and .the ratio of the means then becomes and the remaining loss angles for each coating are more tightly grouped . while there is no compelling justification for removing these points , the new means may reflect the true coating loss angles more accurately ..[angles ] table of the coating loss angles ( ) that give the best fit to the data along with the mean value .* mean after discarding anomalously high values . [cols="^,^,^,^,^,^,^,^,^,^,^ " , ] plot of the loss angles in table [ angles ] .the black squares are the loss angles of the qwl coating , the black dots are the loss angles of the optimized coating .the solid gray lines are the means of each set of measurements , and the dashed gray lines are one standard deviation from the mean.,width=326 ] it should be noted that there is an uncertainty associated with each value in table [ angles ] since each is obtained from a least - squares fit to a spectrum with some scatter in it .this is evident in fig .[ savgol0 ] , a plot of one of the spectra of the qwl coating overlaid by one of the optimized coating . 
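The extraction of the loss angle from each spectrum is a one-parameter least-squares problem. The sketch below is schematic: it writes the single-mirror Brownian PSD of eq. (7) as prefactor * phi_c / f, which captures its 1/f shape and its linearity in phi_c, and leaves the prefactor (temperature, beam radius, elastic constants) to be evaluated from that equation; the scan range for phi_c is an assumption.

```python
import numpy as np

def fit_loss_angle(freq, measured_asd, shot_psd, prefactor, phi_max=1e-3):
    """Least-squares fit of eq. (10) over the 2-4 kHz band: the model for
    the amplitude spectral density is sqrt(4 * prefactor * phi_c / f +
    S_shot), the factor of 4 accounting for the four test-cavity mirrors.
    phi_c is the only free parameter and is found here by a dense scan."""
    band = (freq >= 2.0e3) & (freq <= 4.0e3)
    f, y, s = freq[band], measured_asd[band], shot_psd[band]
    phis = np.linspace(0.0, phi_max, 4001)
    model = np.sqrt(4.0 * prefactor * phis[:, None] / f + s)
    chi2 = ((y - model) ** 2).sum(axis=1)
    return phis[np.argmin(chi2)]
```

Averaging the eight fitted values per coating, and quoting the standard deviation of that set as the uncertainty, gives the means reported in table [angles].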
while there is some overlap between the spectra of the two coatingsthere is none between the confidence intervals of the fits ( indicated by the dashed lines in the plot ) , because the error of the fit is much smaller than the standard deviation of the residuals to the fit .figure [ qwlhistogram ] is an histogram of the residuals of the fit to the qwl spectrum .the standard deviation of the histogram is m/ or about 10% of the length noise at 3 khz and nearly equal to the separation between the spectra of the two coatings .the error of the fit , however , is smaller than the standard deviation by a factor of , where is the number of data points . finding the least - squares fit to the data is akin to finding the center of the histogram which can be done with much more precision than its standard deviation , particularly if there are many data points .in fact , this fitting error is smaller than the errors quoted above for the mean loss angles and can be ignored .one more thing to note from fig .[ qwlhistogram ] is that the distribution of residuals is consistent with the normal distribution , suggesting that the scatter in each spectrum is random in nature and that our method of extracting the loss angles from least - squares fits to the length noise spectra is valid . the separation between the spectra of the two coatings in fig .[ savgol0 ] can be made visually more evident by using the savitzky - golay smoothing filter to filter out the random noise in the spectra .this technique generates a smoothed data set from the raw data where each point in the smoothed data is derived from a polynomial regression performed using neighboring points in the raw data .the smoothed data can be viewed as a generalized running average of the raw data . in the smoothed spectra shown in fig .[ savgol5 ] , each data point was derived from a first order polynomial regression using the corresponding point ( at the same frequency ) in the raw spectra of fig .[ savgol0 ] plus the five points to the left and the five points to the right of that point . plot of one of the spectra of the qwl coating ( light ) overlaid by one of the optimized coating ( dark ) .the solid line through each spectrum is the best fit of eq .( 10 ) to the data . for each fit, the region between the dashed lines represents the 95% confidence interval.,width=326 ] histogram of the residuals of the fit to one of the spectra taken for the qwl coating .the distribution of the residuals is well approximated by a normal distribution with a standard deviation of m/.,width=326 ] this is the plot in fig .[ savgol0 ] after the savitzky - golay smoothing filter was applied to each spectrum .the separation between the spectra of the two coatings can be seen more clearly.,width=326 ]we have directly measured broadband brownian noise of two different high reflectivity optical coating designs : a standard qwl coating , and an optimized coating specifically designed to minimize the brownian noise .the ratio of the coating loss angles that we observed was , which agrees with the predicted ratio , to within the margin of error .the results validate the proposed coating optimization strategy , and suggest its use to improve the sensitivity of future generations of interferometric gravitational wave detectors at relatively little cost .this research was supported by the nsf under cooperative agreement phy-0757058 and the italian national institute for nuclear physics ( infn ) under the committee - v coat grant ) .
a standard quarter - wavelength multilayer optical coating will produce the highest reflectivity for a given number of coating layers , but in general it will not yield the lowest thermal noise for a prescribed reflectivity . coatings with the layer thicknesses optimized to minimize thermal noise could be useful in future generation interferometric gravitational wave detectors where coating thermal noise is expected to limit the sensitivity of the instrument . we present the results of direct measurements of the thermal noise of a standard quarter - wavelength coating and a low noise optimized coating . the measurements indicate a reduction in thermal noise in line with modeling predictions .
the science of seti has always suffered from a lack of quantitative substance ( purely resulting from its reliance on one - sample statistics ) relative to its sister astronomical sciences . in 1961 ,frank drake took the first steps to quantifying the field by developing the now - famous drake equation , a simple algebraic expression which provides an estimate for the number of communicating civilisations in the milky way .unfortunately , its simplistic nature leaves it open to frequent re - expression , hence there are in fact many variants of the equation , and no clear canonical form . for the purpose of this paper, the following form will be used ( walters , hoover & kotra ( 1980 ) ) : with the symbols having the following meanings : + = the number of galactic civilisations who can communicate with earth + = the mean star formation rate of the milky way + = the fraction of stars that could support habitable planets + = the fraction of stars that host planetary systems + = the number of planets in each system which are potentially habitable + = the fraction of habitable planets where life originates and becomes complex + = the fraction of life - bearing planets which bear intelligence + = the fraction of intelligence bearing planets where technology can develop + = the mean lifetime of a technological civilisation within the detection window + the equation itself does suffer from some key weaknesses : it relies strongly on mean estimations of variables such as the star formation rate ; it is unable to incorporate the effects of the physico - chemical history of the galaxy , or the time - dependence of its terms .indeed , it is criticised for its polarising effect on `` contact optimists '' and `` contact pessimists '' , who ascribe very different values to the parameters , and return values of between and ( ! ) .+ a decade before , enrico fermi attempted to analyse the problem from a different angle , used order of magnitude estimates for the timescales required for an earthlike civilisation to arise and colonise the galaxy to arrive at the conclusion that the milky way should be teeming with intelligence , and that they should be seen all over the sky .this lead him to pose the fermi paradox , by asking , `` where are they ? '' .the power of this question , along with the enormous chain of events required for intelligent observers to exist on earth to pose it , has lead many to the conclusion that the conditions for life to flourish are rare , possibly even unique to earth ( ward and brownlee ( 2000) . the inference by lineweaver( 2001) that the median age of terrestrial planets in the milky way is gyr older than earth would suggest that a significant number of earthlike civilisations have had enough time to evolve , and hence be detectable : the absence of such detection lends weight to the so - called `` rare earth '' hypothesis .however , there have been many posited solutions to the fermi paradox that allow eti to be prevalent , such as : * they are already here , in hiding * contact with earth is forbidden for ethical reasons * they were here , but they are now extinct * they will be here , if mankind can survive long enough some of these answers are inherently sociological , and are difficult to model .others are dependent on the evolution of the galaxy and its stars , and are much more straightforward to verify . 
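The inline form of the equation was lost in extraction; as commonly written, the Walters, Hoover & Kotra (1980) variant referred to above is the product below, with the eight factors carrying, in order, the meanings listed in the text (the symbol names are our notation, not taken from the paper):

\[
N = R_{*} \, f_{g} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L .
\]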
as a whole , astrobiologists are at a tremendous advantage in comparison with drake and fermi : the development of astronomy over the last fifty years - in particular the discovery of the first extra solar planet ( mayor et al 1995 ) and some hundreds thereafter , as well as the concepts of habitable zones , both stellar ( hart 1979 ) and galactic ( lineweaver et al 2004 ) - have allowed a more in - depth analysis of the problem .however , the key issue still affecting seti ( and astrobiology as a whole ) is that there is no consensus as to how to assign values to the key _biological _ parameters involved in the drake equation and fermi paradox . furthermore , there are no means of assigning confidence limits or errors to these parameters , and therefore no way of comparing hypotheses for life ( e.g. panspermia - review , see dose ( 1986) - or the `` rare earth '' hypothesis ( ward and brownlee ( 2000) ) .this paper outlines a means for applying monte carlo realisation techniques to investigate the parameter space of intelligent civilisations more rigorously , and to help assign errors to the resulting distributions of life and intelligence .+ the paper is organised as follows : in section [ sec : method ] the techniques are described ; in section [ sec : inputs ] the input data is discussed ; in section [ sec : results ] the results from several tests are shown , and in section [ sec : conclusions ] the method is reviewed .the overall procedure can be neatly summarised as : 1 .generate a galaxy of stars , with parameters that share the same distribution as observations 2 . generate planetary systems for these stars 3 .assign life to some of the planets depending on their parameters ( e.g. distance from the habitable zone ) 4 . for each life - bearing planet , follow life s evolution into intelligence using stochastic equationsthis will produce one monte carlo realisation ( mcr ) of the milky way in its entirety .the concept of using mcr techniques in astrobiology is itself not new : recent work by vukotic & cirkovic ( 2007,2008) uses similar procedures to investigate timescale forcing by global regulation mechanisms such as those suggested by annis ( 1999) . in order to provide error estimates , this procedure must be repeated many times , so as to produce many mcrs , and to hence produce results with a well - defined sample mean and sample standard deviation .the procedure relies on generating parameters in three categories : stellar , planetary and biological .the study of stars in the milky way has been extensive , and their properties are well - constrained . assuming the stars concerned are all main sequence objects allows much of their characteristics to be determined by their mass .stellar masses are randomly sampled to reproduce the observed initial mass function ( imf ) of the milky way , which is taken from scalo & miller ( 1979) ( see * figure [ fig : imf ] * ) .+ stellar radii can then be calculated using ( prialnik 2000 ) : ^{\frac{n-1}{n+3}}\ ] ] where if the primary fusion mechanism is the p - p chain ( ) , and if the primary fusion mechanism is the cno cycle ( ) .( please note that in this paper , the subscript denotes the sun , e.g. 
indicates the value of one solar mass ) .the luminosity is calculated using a simple mass - luminosity relation : ^{3}\ ] ] the main sequence lifetime therefore is : ^{-2}\ ] ] the stars effective temperature can be calculated , assuming a blackbody : ^{1/4}\ ] ] the star s age is sampled to reproduce the star formation history of the milky way ( twarog 1980 ) , see * figure [ fig : sfh]*. + the metallicity of the star is dependent on the metallicity gradient in the galaxy , and hence its galactocentric radius , .this is sampled so that the surface mass density of the galaxy is equivalent to that of the milky way ( assuming a simple two - dimensional disc structure ) : where is the galactic scale length ( taken to be 3.5 kpc ) .therefore , given its galactocentric radius , the metallicity of the star , in terms of ] {ijaduncanforgan14a.eps } & \includegraphics[scale = 0.4]{ijaduncanforgan14b.eps } \\ \end{array} ]the results for all three hypotheses show clear trends throughout . in all cases , the inhabited planets orbit low mass stars ( a symptom of the hot jupiter bias present in current data ) ; the signal lifetimes are correspondingly dependent on these masses ( as expected , given its functional form ) .the signal history of all three hypotheses shows a period of transition between low signal number and high signal number at around , again symptomatic of the biological copernican principle used to constrain , , etc .+ this paper has outlined a means by which key seti variables can be estimated , taking into account the diverse planetary niches that are known to exist in the milky way and the stochastic evolutionary nature of life , as well as providing estimates of errors on these variables .however , two notes of caution must be offered : 1 .the reader may be suspicious of the high precision of the statistics quoted : it is worth noting that the standard deviations of these results are indeed low , and the data is precise : but , its accuracy is not as certain .the output data will only be as useful as the input data will allow ( the perennial `` garbage in , garbage out '' problem ) .current data on exoplanets , while improving daily , is still insufficient to explore the parameter space in mass and orbital radii , and as such all results here are very much incomplete .conversely , as observations improve and catalogues attain higher completeness , the efficacy of the monte carlo realisation method improves also .future studies will also consider planetary parameters which are sampled as to match current planet formation theory , rather than current observations .the method currently does not produce a realistic age metallicity relation ( amr ) .age , metallicity and galactocentric radius are intrinsically linked in this setup : to obtain realistic data for all three self - consistently requires an improved three - dimensional galaxy model which takes into account its various components ( the bulge , the bar , etc ) , as well as the time evolution of the galaxy .future work will attempt to incorporate a more holistic model which allows all three parameters to be sampled correctly .in particular , future efforts will be able to take advantage of better numerical models for the star formation history ( e.g. rocha - pinto et al 2000 ) , and the spatial distribution of stars ( e.g. 
dehnen and binney 1998 ) .although this paper applies the `` hard step scenario '' to the biological processes modelled , the method itself is flexible enough to allow other means of evolving life and intelligence .the minutiae of how exactly the biological parameters are calculated do not affect the overall concept : this work has shown that it is possible to simulate a realistic backdrop ( in terms of stars and planets ) for the evolution of eti , whether it is modelled by the `` hard step '' scenario or by some other stochastic method . incorporating new empirical input from the next generation of terrestrial planet finders , e.g. kepler( borucki et al 2008 ) , as well as other astrobiological research into the model , alongside new theoretical input by adding more realistic physics and biology will strengthen the efficacy of this monte carlo technique , providing a new avenue of seti research , and a means to bring many disparate areas of astronomical research together .the author would like to thank the referee for valuable comments and suggestions which greatly improved this paper .the simulations described in this paper were carried out using high performance computing funded by the scottish universities physics alliance ( supa ) .99 _ annis , j. , 1999 , j. br .soc , 52 , pp 19 - 22 _ _ borucki , w. , koch , d. , basri , g. , batalha , n. , brown , t. , caldwell , d. , christensen - dalsgaard , j. , cochran , w. , dunham , e. , gautier , t. n. , geary , j. , gilliland , r. , jenkins , j. , kondo , y. , latham , d. , lissauer , j.j . ,monet , d. , 2008 , proc .iau , 249 , pp17 - 24 _ _ carter , b. , 2008 , ija , 7 , pp 178 - 182 _ _ cirkovic , m.m . , 2004 , j. br . interplanet .soc , 57 , pp 209 - 215 _ _ cirkovic , m.m . , 2004 ,astrobiology , 4 , pp 225 - 231 _ _ cirkovic , m.m . , 2007 ,ija , 6 , pp 325 - 329 _ _ crick , f.h.c , orgel , l.e . , 1973 ,icarus , 19 , pp 341 - 346 _ _ dehnen , w. , binney , j. , 1998 , mnras , 294 , 429 _ _ dose , k. , 1986 , adv .research , 6 , pp 181 - 186 _ _ dyson , f.j ., 1960 , science , 131 , pp 1667 - 1668 _ _ hart , m.h . , 1979 ,icarus , 37 , pp 351 - 357 _ _ hou , j.l , prantzos , n. , boissier , s. , 2000 , a & a , 362 , pp 921 - 936 _ _ ida , s. , lin ., d.n.c . , 2004 ,apj , 616 , pp 567 - 572 _ _ leger , a. , selsis , f. , sotin , c. , guillot , .t , despois , d. , mawet , d. , ollivier , m. , labque , a. , valette , c. , brachet , f. , chazelas , b. , lammer , h.,2004 icarus , 169 , 499 - 504 _ _ lineweaver , c.h . , 2001 , icarus , 151 , pp 307 - 313 _ _ lineweaver , c.h . , fenner , y. , gibson , b.k . , 2004 ,science , 303 , pp 59 - 62 _ _ mayor , m. , et al , 1995 , iau circ ., 6251 , p 1 _ _napier w.m . , 2004 , mnras , 348 , pp 46 - 51 _ _ prialnik , d. , 2000 , `` an introduction to the theory of stellar structure and evolution '' , cambridge university press , pp 121 - 122 _ _ raup , d.m . ,sepkoski , j.j . , 1982 , science , 215 , pp 1501 - 1503 _ _ rocha - pinto , h.j . ,scalo , j. , maciel w.j . , flynn , c. , 2000 , a & a , 358 , pp 869 - 885 _ _ scalo , j.m . ,miller , g.e ., 1979 , apjs , 41 , pp 513 - 547 _ _ twarog , b.a . , 1980 ,apj , 242 , pp 242 - 259 _ _vukotic , b. , cirkovic , m.m . , 2007 , seraj , 175 ,pp 45 - 50 _ _vukotic , b. , cirkovic , m.m . , 2008 ,seraj , 176 , pp 71 - 79 _ _ wallis , m.k . ,wickramasinghe , n.c . , 2004 , mnras , 348 , pp 52 - 61 _ _ walters , c. , hoover , r.a . ,kotra , r.k . , 1980 ,icarus , 41 , pp 193 - 197 _ _ ward , p.d . , brownlee , d. 
, 2000 , `` rare earth : why complex life is uncommon in the universe '' , springer , new york _
the search for extraterrestrial intelligence (seti) has been heavily influenced by solutions to the drake equation, which returns an integer value for the number of communicating civilisations resident in the milky way, and by the fermi paradox, glibly stated as: `` if they are there, where are they? ''. both rely on using average values of key parameters, such as the mean signal lifetime of a communicating civilisation. a more accurate answer must take into account the distribution of stellar, planetary and biological attributes in the galaxy, as well as the stochastic nature of evolution itself. this paper outlines a method of monte carlo realisation which does this, and hence allows an estimation of the distribution of key parameters in seti, as well as allowing a quantification of their errors (and the level of ignorance therein). furthermore, it provides a means for competing theories of life and intelligence to be compared quantitatively.
keywords: numerical, monte carlo, extraterrestrial intelligence, seti, drake equation, fermi paradox.
scottish universities physics alliance (supa), institute for astronomy, university of edinburgh, royal observatory edinburgh, blackford hill, edinburgh eh9 3hj, uk. tel: 0131 668 8359, fax: 0131 668 8416, email: dhf.ac.uk
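as a concrete illustration of the stellar-sampling step of the realisation procedure described above, the sketch below draws stellar masses from a power-law imf, derives main-sequence radius, luminosity, lifetime and effective temperature from the scaling relations quoted in the text, and draws galactocentric radii so that the disc surface density falls off exponentially with the quoted 3.5 kpc scale length. the imf slope and mass range, the 10 gyr lifetime normalisation, the exponents n of the energy-generation law for the p-p chain and the cno cycle, and the mass at which the dominant fusion mechanism is assumed to switch are illustrative placeholders, not the values adopted in the paper.

```python
# Stellar-sampling sketch for one Monte Carlo realisation (all numerical
# choices below are placeholders; see the caveats in the text above).
import numpy as np

SIGMA_SB = 5.670374e-8            # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN, R_SUN = 3.828e26, 6.957e8  # solar luminosity [W] and radius [m]
R_SCALE_KPC = 3.5                 # galactic disc scale length quoted in the text

def sample_imf(n_stars, rng, alpha=2.35, m_min=0.1, m_max=10.0):
    """Masses (solar units) from dN/dM ~ M^-alpha by inverse-transform sampling."""
    u = rng.uniform(size=n_stars)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def main_sequence_properties(mass):
    """Radius, luminosity, lifetime and T_eff from the quoted scaling relations."""
    # exponent n of the energy-generation law: ~4 for the p-p chain,
    # ~16 for the CNO cycle; the 1.3 M_sun switch point is an assumption
    n = np.where(mass < 1.3, 4.0, 16.0)
    radius = mass ** ((n - 1.0) / (n + 3.0))   # R / R_sun
    lum = mass ** 3                            # L / L_sun, simple mass-luminosity law
    t_ms = 10.0 * mass ** (-2.0)               # main-sequence lifetime [Gyr], ~10 Gyr for the sun
    teff = (lum * L_SUN / (4 * np.pi * (radius * R_SUN) ** 2 * SIGMA_SB)) ** 0.25
    return radius, lum, t_ms, teff

def sample_galactocentric_radius(n_stars, rng):
    """Radii for an exponential disc: p(r) ~ r exp(-r / r_s), i.e. Gamma(2, r_s)."""
    return rng.gamma(shape=2.0, scale=R_SCALE_KPC, size=n_stars)

rng = np.random.default_rng(42)
mass = sample_imf(100_000, rng)
radius, lum, t_ms, teff = main_sequence_properties(mass)
r_gal = sample_galactocentric_radius(mass.size, rng)
```

each star generated in this way would then be given a planetary system, a metallicity consistent with its galactocentric radius and, where the conditions allow, a biosphere in the subsequent steps of the realisation loop; repeating the whole loop many times yields the sample means and standard deviations discussed in the results.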
principal components analysis is one of the most useful statistical tool to extract information by reducing the dimension when one has to analyze large samples of multivariate or functional data ( see _ e.g. _ or ) .when both the dimension and the sample size are large , outlying observations may be difficult to detect automatically .principal components , which are derived from the spectral analysis of the covariance matrix , can be very sensitive to outliers ( see ) and many robust procedures for principal components analysis have been considered in the literature ( see , and ) .the most popular approaches are probably the minimum covariance determinant estimator ( see ) and the robust projection pursuit ( see and ) .robust pca based on projection pursuit has been extended to deal with functional data in and .adopting another point of view , robust modifications of the covariance matrix , based on projection of the data onto the unit sphere , have been proposed in ( see also and ) .we consider in this work another robust way of measuring association between variables , that can be extended directly to functional data .it is based on the notion of median covariation matrix ( mcm ) which is defined as the minimizer of an expected loss criterion based on the hilbert - schmidt norm ( see for a first definition in a more general -estimation setting ) .it can be seen as a geometric median ( see or ) in the particular hilbert spaces of square matrices ( or operators for functional data ) equipped with the frobenius ( or hilbert - schmidt ) norm .the mcm is non negative and unique under weak conditions .as shown in it also has the same eigenspace as the usual covariance matrix when the distribution of the data is symmetric and the second order moment is finite . being a spatial median in a particular hilbert space of matrices ,the mcm is also a robust indicator of central location , among the covariance matrices , which has a 50 % breakdown point ( see or ) as well as a bounded gross sensitivity error ( see ) .the aim of this work is twofold .it provides efficient recursive estimation algorithms of the mcm that are able to deal with large samples of high dimensional data . by this recursive property, these algorithms can naturally deal with data that are observed sequentially and provide a natural update of the estimators at each new observation .another advantage compared to classical approaches is that such recursive algorithms will not require to store all the data .secondly , this work also aims at highlighting the interest of considering the median covariation matrix to perform principal components analysis of high dimensional contaminated data .different algorithms can be considered to get effective estimators of the mcm .when the dimension of the data is not too high and the sample size is not too large , weiszfeld s algorithm ( see and ) can be directly used to estimate effectively both the geometric median and the median covariation matrix .when both the dimension and the sample size are large this static algorithm which requires to store all the data may be inappropriate and ineffective .we show how the algorithm developed by for the geometric median in hilbert spaces can be adapted to estimate recursively and simultaneously the median as well as the median covariation matrix . then an averaging step ( ) of the two initial recursive estimators of the median and the mcm permits to improve the accuracy of the initial stochastic gradient algorithms . 
a simple modification of the stochastic gradient algorithm is proposed in order to ensure that the median covariance estimator is non negative .we also explain how the eigenelements of the estimator of the mcm can be updated online without being obliged to perform a new spectral decomposition at each new observation .the paper is organized as follows .the median covariation matrix as well as the recursive estimators are defined in section 2 . in section 3 ,almost sure and quadratic mean consistency results are given for variables taking values in general separable hilbert spaces .the proofs , which are based on new induction steps compared to , allow to get better convergence rates in quadratic mean even if this new framework is much more complicated because two averaged non linear algorithms are running simultaneously .one can also note that the techniques generally employed to deal with two time scale robbins monro algorithms ( see for the multivariate case ) require assumptions on the rest of the taylor expansion and the finite dimension of the data that are too restrictive in our framework . in section 4, a comparison with some classic robust pca techniques is made on simulated data .the interest of considering the mcm is also highlighted on the analysis of individual tv audiences , a large sample of high dimensional data which , because of its dimension , can not be analyzed in a reasonable time with classical robust pca approaches .the main parts of the proofs are described in section 5 .perspectives for future research are discussed in section 6 .some technical parts of the proofs as well as a description of weiszfeld s algorithm in our context are gathered in an appendix .let be a separable hilbert space ( for example or , for some closed interval ) .we denote by its inner product and by the associated norm .we consider a random variable that takes values in and define its center as follows : } .\label{defmed}\end{aligned}\ ] ] the solution is often called the geometric median of .it is uniquely defined under broad assumptions on the distribution of ( see ) which can be expressed as follows .[ eq : supportcdtnmed ] there exist two linearly independent unit vectors , such that if the distribution of is symmetric around zero and if admits a first moment that is finite then the geometric median is equal to the expectation of , } ] .we now consider the special vector space , denoted by , of matrices when , or for general separable hilbert spaces , the vector space of linear operators mapping . denoting by an orthonormal basis in , the vector space equipped with the following inner product : is also a separable hilbert space . in , we have equivalently where is the transpose matrix of .the induced norm is the well known frobenius norm ( also called hilbert - schmidt norm ) and is denoted by when has finite second order moments , with expectation }=\mu ] can be defined as the minimum argument , over all the elements belonging to , of the functional , }.\ ] ] note that in general hilbert spaces with inner product , operator should be understood as the operator .the mcm is obtained by removing the squares in previous function in order to get a more robust indicator of `` covariation '' . for , define by } .\label{def : popriskcov}\end{aligned}\ ] ] the median covariation matrix , denoted by , is defined as the minimizer of over all elements .the second term at the right - hand side of ( [ def : popriskcov ] ) prevents from having to introduce hypotheses on the existence of the moments of . 
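the appendix later describes a weiszfeld-type fixed-point iteration for this minimisation, both for the geometric median and for the median covariation matrix. the batch sketch below implements that idea for a finite sample; the starting points, the tolerance and the iteration caps are arbitrary choices of this sketch, not those of the paper, and materialising all the rank-one matrices is only reasonable for moderate sample sizes and dimensions.

```python
# Batch Weiszfeld-type iterations for the geometric median and for the
# median covariation matrix (MCM) of a sample X of shape (n, d).
import numpy as np

def weiszfeld_median(X, n_iter=200, tol=1e-8):
    """Geometric median of the rows of X."""
    m = np.median(X, axis=0)                          # arbitrary starting point
    for _ in range(n_iter):
        dist = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)
        w = 1.0 / dist
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        converged = np.linalg.norm(m_new - m) < tol
        m = m_new
        if converged:
            break
    return m

def weiszfeld_mcm(X, m=None, n_iter=200, tol=1e-8):
    """MCM: geometric median of the rank-one matrices (X_i - m)(X_i - m)^T
    with respect to the Frobenius norm."""
    if m is None:
        m = weiszfeld_median(X)
    C = X - m
    Y = np.einsum('ij,ik->ijk', C, C)                 # n matrices of shape (d, d)
    G = np.median(Y, axis=0)                          # arbitrary symmetric start
    for _ in range(n_iter):
        dist = np.maximum(np.linalg.norm(Y - G, axis=(1, 2)), 1e-12)  # Frobenius distances
        w = 1.0 / dist
        G_new = (w[:, None, None] * Y).sum(axis=0) / w.sum()
        converged = np.linalg.norm(G_new - G) < tol
        G = G_new
        if converged:
            break
    return G

# robust principal components are then the leading eigenvectors of the MCM
X = np.random.default_rng(0).standard_normal((500, 10))
eigvals, eigvecs = np.linalg.eigh(weiszfeld_mcm(X))
```

this weighted-average form also helps explain the 50% breakdown point mentioned in the introduction: observations whose rank-one matrix lies far from the current iterate receive a small weight, so a few grossly outlying observations cannot drag the estimate arbitrarily far.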
introducing the random variable that takes values in ,the mcm is unique provided that the support of is not concentrated on a line and assumption 1 can be rephrased as follows in , [ eq : supportcdtncov ] there exist two linearly independent unit vectors , such that we can remark that assumption [ eq : supportcdtnmed ] and assumption [ eq : supportcdtncov ] are strongly connected . indeed ,if assumption [ eq : supportcdtnmed ] holds , then for . consider the rank one matrices and , we have which has a strictly positive variance when the distribution of has no atom .more generally unless there is a scalar such that = \mathbb{p}\left [ \langle u_1 , x - m \rangle= -a\right ] = \frac{1}{2} ] ) .furthermore it can be deduced easily that the mcm , which is a geometric median in the particular hilbert spaces of hilbert - schmidt operators , is a robust indicator with a 50% breakdown point ( see ) and a bounded sensitive gross error ( see ) .we also assume that [ eq : invmomentcov ] there is a constant such that for all and all } \leq c. \\ ( b ) & : \quad { { \mathbb{e}}\left [ { \left\| ( x- h)(x- h)^t - v \right\|}^{-2}_f \right ] } \leq c. \\\end{aligned}\ ] ] this assumption implicitly forces the distribution of to have no atoms .it is more `` likely '' to be satisfied when the dimension of the data is large ( see and for a discussion ) .note that it could be weakened as in by allowing points , necessarily different from the mcm , to have strictly positive masses .considering the particular case , assumption [ eq : invmomentcov](a ) implies that for all , } \leq c , \label{cond : mominv2x}\end{aligned}\ ] ] and this is not restrictive when the dimension of is equal or larger than 3 . under assumption [ eq : invmomentcov](a ), the functional is twice frchet differentiable , with gradient }. \label{def : gradv}\end{aligned}\ ] ] and hessian operator , , }. \label{def : hev}\end{aligned}\ ] ] where , is the identity operator on and for any elements and belonging to .furthermore , is also defined as the unique zero of the non linear equation : remarking that previous equality can be rewritten as follows , }}{{\mathbb{e}}\left[\frac { ( x - m)(x - m)^t } { { \left\| ( x- m)(x - m)^t - \gamma_m \right\|}_f}\right ] } , \label{def : baseweiszfled}\end{aligned}\ ] ] it is clear that is a bounded , symmetric and non negative operator in .as stated in proposition 2 of , operator has an important stability property when the distribution of is symmetric , with finite second moment , _i.e _ } < \infty ] , which is well defined in this case , and share the same eigenvectors : if is an eigenvector of with corresponding eigenvalue , then , for some non negative value .this important result means that for gaussian and more generally symmetric distribution ( with finite second order moments ) , the covariance operator and the median covariation operator have the same eigenspaces . note that it is also conjectured in that the order of the eigenfunctions is also the same .we suppose now that we have i.i.d .copies of random variables with the same law as . for simplicity, we temporarily suppose that the median of is known .we consider a sequence of ( learning ) weights , with and and we define the recursive estimation procedure as follows this algorithm can be seen as a particular case of the averaged stochastic gradient algorithm studied in . 
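the recursive procedure and the averaging step can be sketched as follows. the exact update formulas are not reproduced in the extracted text, so the sketch below uses the standard normalised-gradient (robbins-monro) step for a geometric median, applied in the space of matrices for the mcm and in the observation space for the median, with learning rates of the form gamma_n = c_gamma / n^alpha; the tuning constants, the zero initialisations and the choice to centre at the current averaged median are assumptions of this sketch.

```python
# Two averaged recursive estimators running simultaneously on a data stream:
# one for the geometric median m, one for the median covariation matrix.
import numpy as np

def recursive_mcm(stream, d, c_gamma=2.0, alpha=0.75):
    m = np.zeros(d)            # Robbins-Monro iterate for the median
    m_bar = np.zeros(d)        # its running (averaged) estimator
    V = np.zeros((d, d))       # Robbins-Monro iterate for the MCM
    V_bar = np.zeros((d, d))   # its running (averaged) estimator
    for n, x in enumerate(stream, start=1):
        gamma = c_gamma / n ** alpha
        # median update: move towards x along the normalised residual
        r = x - m
        m = m + gamma * r / max(np.linalg.norm(r), 1e-12)
        m_bar = m_bar + (m - m_bar) / n
        # MCM update: same idea for the rank-one matrix centred at m_bar
        c = x - m_bar
        R = np.outer(c, c) - V
        V = V + gamma * R / max(np.linalg.norm(R, 'fro'), 1e-12)
        V_bar = V_bar + (V - V_bar) / n
    return m_bar, V_bar

rng = np.random.default_rng(1)
m_hat, Gamma_hat = recursive_mcm(rng.standard_normal((20_000, 5)), d=5)
```

only the current iterates and their running averages are kept in memory, so both estimators can be updated at each new observation without storing the data. the paper additionally describes an online update of the eigenelements of the averaged estimate and a simple modification that keeps it non-negative; neither refinement is included in this sketch.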
indeed, the first recursive algorithm ( [ def : algormcov ] ) is a stochastic gradient algorithm , } = \nabla g_m(w_n)\ ] ] where is the -algebra generated by whereas the final estimator is obtained by averaging the past values of the first algorithm .the averaging step ( see ) , _ i_.e .the computation of the arithmetical mean of the past values of a slowly convergent estimator ( see proposition [ prop : rmvn ] below ) , permits to obtain a new and efficient estimator converging at a parametric rate , with the same asymptotic variance as the empirical risk minimizer ( see theorem [ theo : asymptnorm ] below ) . in most of the casesthe value of is unknown so that it also required to estimate the median . to build an estimator of , it is possible to estimate simultaneously and by considering two averaged stochastic gradient algorithms that are running simultaneously . for , where the averaged recursive estimator of the median is controlled by a sequence of descent steps .the learning rates are generally chosen as follows , , where the tuning constants satisfy ] . when is fixed , this averaged recursive algorithm is about 30 times faster than the weiszfeld s approach ( see ) .when is known , can be seen as an averaged stochastic gradient estimator of the geometric median in a particular hilbert space and the asymptotic weak convergence of such estimator has been studied in .they have shown that : ( , theorem 3.4 ) .[ theo : asymptnorm ] + if assumptions 1 - 3(a ) hold , then as tends to infinity , where stands for convergence in distribution and is the limiting covariance operator , with }. ] and can be thought as a discretized version of a brownian sample path in ] .the covariance matrix of is .trajectories when and for the three different contamination scenarios : student with 1 degree of freedom , student with 2 degrees of freedom and reverse time brownian motion ( from left to right).,width=680,height=340 ] for the averaged recursive algorithms , we have considered tuning coefficients and a speed rate of .note that the values of these tuning parameters have not been particularly optimised .we have noted that the simulation results were very stable , and did not depend much on the value of and for ] . thus , is a sequence of martingale differences adapted to the filtration . indeed , = \nabla g_{\overline{m}_{n}}(v_{n } ) - \mathbb{e}\left [ u_{n+1}|\mathcal{f}_{n } \right ] = 0 ] .since converges almost surely to , one can conclude the proof of the almost sure consistency of with the same arguments as in the proof of theorem 3.1 in and the convexity properties given in the section b of the supplementary file .finally , the almost sure consistency of is obtained by a direct application of topelitz s lemma ( see _ e.g. 
_ lemma 2.2.13 in ) .the proof of theorem [ theol2l4 ] relies on properties of the -th moments of for all given in the following three lemmas .these properties enable us , with the application of markov s inequality , to control the probability of the deviations of the robbins monro algorithm from .[ lemmajordre ] under assumptions 1 - 3(b ) , for all integer , there is a positive constant such that for all , & \leq m_{p}.\end{aligned}\ ] ] [ lem1 ] under assumptions 1 - 3(b ) , there are positive constants such that for all , & \leq c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{c_{2}}{n^{\alpha } } + c_{3}\sup_{e ( n/2)+1 \leq k \leq n-1}\mathbb{e}\left [ \left\| v_{k } - \gamma_{m } \right\|^{4}\right ] , \end{aligned}\ ] ] where is the integer part of the real number .[ lem2 ] under assumptions 1 - 3(b ) , for all integer , there are a rank and positive constants such that for all , & \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\right)\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{4}\right ] + \frac{c_{1,p'}}{n^{3\alpha } } + \frac{c_{2,p'}}{n^{2\alpha}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] + \frac{c_{3,p'}}{n^{3\alpha -3\frac{1-\alpha}{p'}}}.\end{aligned}\ ] ] we can now prove theorem [ theol2l4 ] .let us choose an integer such that .thus , , and applying lemma [ lem2 ] , there are positive constants and a rank such that for all , \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\right)\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{4}\right ] + \frac{c_{1,p'}}{n^{3\alpha } } + \frac{c_{2,p'}}{n^{2\alpha}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] .\ ] ] let us now choose and such that . note that .one can check that there is a rank such that for all , with the help of a strong induction , we are going to prove the announced results , that is to say that there are positive constants such that and ( with defined in lemma [ lem1 ] ) , such that for all , & \leq \frac{c_{p'}}{n^{\alpha } } , \\ \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4 } \right ] & \leq \frac{c_{\beta}}{n^{\beta } } .\end{aligned}\ ] ] first , let us choose and such that \right\rbrace , \\ c_{\beta } & \geq \max_{k \leq n_{p'}'}\left\lbrace k^{\beta}\mathbb{e}\left [ \left\| v_{n_{p ' } ' } - \gamma_{m } \right\|_{f}^{4}\right ] \right\rbrace .\end{aligned}\ ] ] thus , for all , & \leq \frac{c_{p'}}{k^{\alpha } } , \\\mathbb{e}\left [ \left\| v_{k } - \gamma_{m}\right\|_{f}^{4 } \right ] & \leq \frac{c_{\beta}}{k^{\beta } } .\end{aligned}\ ] ] we suppose from now that and that previous inequalities are verified for all . applying lemma [ lemmajordre ] and by induction , & \leq c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{c_{2}}{n^{\alpha } } + c_{3}\sup_{e((n+1)/2 ) + 1 \leq k \leq n}\left\lbrace\mathbb{e}\left [ \left\| v_{k } - \gamma_{m}\right\|_{f}^{4}\right ] \right\rbrace \\ & \leq c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{c_{2}}{n^{\alpha } } + c_{3}\sup_{e((n+1)/2 ) + 1 \leq k \leq n}\left\lbrace \frac{c_{\beta}}{k^{\beta } } \right\rbrace \\ & \leq c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{c_{2}}{n^{\alpha } } + c_{3}2^{\beta}\frac{c_{\beta}}{n^{\beta}}. 
\end{aligned}\ ] ] since and since , factorizing by , & \leq c_{p'}c_{1}e^{-c_{1}'n^{1-\alpha } } + c_{p'}2^{-\alpha -1}\frac{1}{n^{\alpha } } + c_{3}2^{\beta}\frac{2c_{p'}}{n^{\beta } } \\ & \leq \frac{c_{p}'}{(n+1)^{\alpha}}(n+1)^{\alpha}c_{1}e^{-c_{1}'n^{1-\alpha } } + 2^{-\alpha}\left(\frac{n}{n+1}\right)^{\alpha}\frac{c_{p'}}{2(n+1)^{\alpha } } + \frac{c_{3}2^{\beta + 1}}{(n+1)^{\beta - \alpha}}\frac{c_{p'}}{(n+1)^{\alpha } } \\ & \leq \frac{c_{p}'}{(n+1)^{\alpha}}c_{1}(n+1)^{\alpha}e^{-c_{1}'n^{1-\alpha } } + \frac{1}{2}\frac{c_{p'}}{(n+1)^{\alpha } } + c_{3}2^{\beta + 1 } \frac{1}{(n+1)^{\beta -\alpha } } \frac{c_{p'}}{(n+1)^{\alpha } } \\ & \leq \left ( ( n+1)^{\alpha}c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{1}{2 } + c_{3}2^{\beta + 1}\frac{1}{(n+1)^{\beta - \alpha } } \right ) \frac{c_{p'}}{(n+1)^{\alpha } } .\end{aligned}\ ] ] by definition of , \leq \frac{c_{p'}}{(n+1)^{\alpha}}.\ ] ] in the same way , applying lemma [ lem2 ] and by induction , & \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p ' } } \right ) \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] + \frac{c_{1,p'}}{n^{3\alpha } } + \frac{c_{2,p'}}{n^{2\alpha}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] \\ & \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p ' } } \right)\frac{c_{\beta}}{n^{\beta}}+ \frac{c_{1,p'}}{n^{3\alpha } } + \frac{c_{2,p'}}{n^{2\alpha}}\frac{c_{p'}}{n^{\alpha}}.\end{aligned}\ ] ] since , factorizing by , & \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p ' } } \right)\frac{c_{\beta}}{n^{\beta}}+ \left ( c_{1,p ' } + c_{2,p'}\right ) \frac{c_{\beta}}{n^{3\alpha } } \\ & \leq \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p ' } } \right ) \left ( \frac{n+1}{n}\right)^{\beta}\frac{c_{\beta}}{n^{\beta } } + 2^{3\alpha}\frac{c_{1,p ' } + c_{2,p'}}{(n+1)^{3\alpha - \beta}}\frac{c_{\beta}}{(n+1)^{\beta } } \\ & \leq \left ( \left ( 1-c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p ' } } \right ) \left ( \frac{n+1}{n}\right)^{\beta } + 2^{3\alpha}\frac{c_{1,p ' } + c_{2,p'}}{(n+1)^{3\alpha - \beta } } \right ) \frac{c_{\beta}}{(n+1)^{\beta}}.\end{aligned}\ ] ] by definition of , \leq \frac{c_{\beta}}{(n+1)^{\beta}},\ ] ] which concludes the induction and the proof . in order to prove theorem [ th : cvgeqm ] , we first recall the following lemma .[ lemsumg ] let be random variables taking values in a normed vector space such that for all positive constant and for all , < \infty ] . applying theorem [ theol2l4 ] , & \leq \frac{1}{n^{2}}\frac{c'c_{\gamma}^{-2}}{n^{-\alpha } } = o \left ( \frac{1}{n}\right ) .\end{aligned}\ ] ] moreover , since , the application of lemma [ lemsumg ] and theorem [ theol2l4 ] gives & \leq \frac{1}{n^{2}}\left ( \sum_{k=2}^{n } \left| \gamma_{k}^{-1 } - \gamma_{k-1}^{-1 } \right| \sqrt{\mathbb{e}\left [ \left\| t_{k } \right\|_{f}^{2}\right ] } \right)^{2 } \\ & \leq \frac{1}{n^{2}}4\alpha^{2}c_{\gamma}^{-2}c'\left ( \sum_{k=2}^{n } \frac{1}{k^{1-\alpha /2 } } \right)^{2 } \\ & = o \left ( \frac{1}{n^{2-\alpha}}\right ) \\ & = o \left ( \frac{1}{n } \right ) , \end{aligned}\ ] ] since . 
in the same way , since , applying lemma [ lemsumg ] and theorem [ theol2l4 ] with , & \leq \frac{1}{n^{2}}\left ( \sum_{k=1}^{n } \sqrt{\mathbb{e}\left [ \left\| \delta_{k } \right\|_{f}^{2}\right ] } \right)^{2 } \\ & \leq \frac{36c^{2}}{n^{2}}\left ( \sum_{k=1}^{n } \sqrt{\mathbb{e}\left [ \left\| t_{k } \right\|_{f}^{4}\right ] } \right)^{2 } \\ & \leq \frac{36c^{2}c_{\beta}}{n^{2}}\left ( \sum_{k=1}^{n } \frac{1}{k^{\beta /2 } } \right)^{2 } \\ & = o \left ( \frac{1}{n^{\beta}}\right ) \\ & = o \left ( \frac{1}{n}\right ) , \end{aligned}\ ] ] moreover , let .since , and since there is a positive constant such that for all , \leq c''n^{-1} ] , and since is a sequence of martingale differences adapted to the filtration , & = \frac{1}{n^{2}}\left ( \sum_{k=1}^{n } \mathbb{e}\left [ \left\| \xi_{k+1}\right\|_{f}^{2}\right ] + 2\sum_{k=1}^{n}\sum_{k'=k+1}^{n}\mathbb{e}\left [ \left\langle\xi_{k+1},\xi_{k'+1}\right\rangle_{f } \right ] \right ) \\ & = \frac{1}{n^{2}}\left ( \sum_{k=1}^{n } \mathbb{e}\left [ \left\| \xi_{k+1}\right\|_{f}^{2}\right ] + 2\sum_{k=1}^{n}\sum_{k'=k+1}^{n}\mathbb{e}\left [ \left\langle \xi_{k+1},\mathbb{e}\left [ \xi_{k'+1}\big| \mathcal{f}_{k'}\right ] \right\rangle_{f } \right ] \right ) \\ & = \frac{1}{n^{2}}\sum_{k=1}^{n } \mathbb{e}\left [ \left\| \xi_{k+1 } \right\|_{f}^{2}\right ] \\ & \leq \frac{1}{n } .\end{aligned}\ ] ] thus , there is a positive constant such that for all , \leq \frac{k}{n}.\ ] ] let be the smallest eigenvalue of .we have , with proposition b.1 in the supplementary file , that and the announced result is proven , & \leq \frac{k}{\lambda_{\min}^{2}n}.\end{aligned}\ ] ]the simulation study and the illustration on real data indicate that performing robust principal components analysis via the median covariation matrix , which can bring new information compared to classical pca , is an interesting alternative to more classical robust principal components analysis techniques . the use of recursive algorithms permits to perform robust pca on very large datasets , in which outlying observations may be hard to detect .another interest of the use of such sequential algorithms is that estimation of the median covariation matrix as well as the principal components can be performed online with automatic update at each new observation and without being obliged to store all the data in memory .a simple modification of the averaged stochastic gradient algorithm is proposed that ensures non negativeness of the estimated covariation matrices .this modified algorithms has better performances on our simulated data .a deeper study of the asymptotic behaviour of the recursive algorithms would certainly deserve further investigations .proving the asymptotic normality and obtaining the limiting variance of the sequence of estimators when is unknown would be of great interest .this is a challenging issue that is beyond the scope of the paper and would require to study the joint weak convergence of the two simultaneous recursive averaged estimators of and .the use of the mcm could be interesting to robustify the estimation in many different statistical models , particularly with functional data .for example , it could be employed as an alternative to robust functional projection pursuit in robust functional time series prediction or for robust estimation in functional linear regression , with the introduction of the median cross - covariation matrix .* acknowledgements . 
*we thank the company mdiamtrie for allowing us to illustrate our methodologies with their data .we also thank dr .peggy cnac for a careful reading of the proofs .suppose we have a fixed size sample and we want to estimate the geometric median .the iterative weiszfeld s algorithm relies on the fact that the solution of the following optimization problem satisfies , when , for all where the weights are defined by weiszfeld s algorithm is based on the following iterative scheme .consider first a pilot estimator of . at step , a new approximation to is given by the iterative procedure is stopped when , for some precision known in advance .the final value of the algorithm is denoted by .the estimator of the mcm is computed similarly .suppose has been calculated at step , then at step , the new approximation to is defined by the procedure is stopped when , for some precision fixed in advance .note that by construction , this algorithm leads to an estimated median covariation matrix that is always non negative .in this section , we first give and recall some convexity properties of functional .the following one gives some information on the spectrum of the hessian of .[ convexity ] under assumptions 1 - 3(b ) , for all and , admits an orthonormal basis composed of eigenvectors of .let us denote by the set of eigenvalues of . for all , moreover, there is a positive constant such that for all , finally , by continuity , there are positive constants such that for all and , and for all , the proof is very similar to the one in and consequently it is not given here .furthermore , as in , it ensures the local strong convexity as shown in the following corollary .[ corforconv ] under assumptions 1 - 3(b ) , for all positive constant , there is a positive constant such that for all and , finally , the following lemma gives an upper bound on the remainder term in the taylor s expansion of the gradient .[ lemdelta ] under assumptions 1 - 3(b ) , for all and , let , since + , we have as in the proof of lemma 5.1 in , under assumptions 1 - 3(b ) , one can check that for all , and ] defined for all ] .thus , by dominated convergence , }\mathbb{e}\left [ \left\| \varphi_{\overline{m}_{n}-m}'(t ) \right\|_{f } \big| \mathcal{f}_{n } \right ] .\ ] ] moreover , one can check that for all , we now bound each term on the right - hand side of previous equality .first , applying cauchy - schwarz s inequality and using the fact that for all , , & \leq \left\| h \right\| \mathbb{e}\left [ \frac{\left\| x - m - th \right\|}{\left\| y(m+th ) - \gamma_{m } \right\|_{f}}\right ] \\ & \leq \left\| h \right\| \mathbb{e}\left [ \frac{\sqrt{\left\| y(m+th ) \right\|_{f}}}{\left\| y(m+th ) - \gamma_{m } \right\|_{f } } \right ] \\ & \leq \left\| h \right\| \left ( \mathbb{e}\left [ \frac{\sqrt { \left\| \gamma_{m } \right\|_{f}}}{\left\| y(m+th ) - \gamma_{m } \right\|_{f } } \right ] + \mathbb{e}\left [ \frac{1}{\sqrt{\left\| y(m+th ) - \gamma_{m } \right\|_{f } } } \right ] \right ) .\end{aligned}\ ] ] thus , since \leq c ] , and since for all positive constants , , & \leq \left\| h \right\| \left ( \mathbb{e}\left [ \frac{\sqrt { \left\| \gamma_{m } \right\|_{f}}}{\left\| y(m+th ) - \gamma_{m } \right\|_{f } } \right ] + \mathbb{e}\left [ \frac{1}{\sqrt{\left\| y(m+th ) - \gamma_{m } \right\|_{f } } } \right ] \right ) \\ & \leq \left\| h \right\| \left ( c\sqrt { \left\| \gamma_{m } \right\|_{f } } + \sqrt{c } \right ) .\end{aligned}\ ] ] finally , & \leq \left\| h \right\| \left ( c\sqrt { \left\| \gamma_{m } \right\|_{f 
} } + \sqrt{c } \right ) ,\\ \label{inequality4}\mathbb{e}\left [ \left| \left\langle y(m+th ) - \gamma_{m } , \left ( x - m - th \right)h^{t } \right\rangle_{f}\right| \frac{\left\| y(m+th ) - \gamma_{m}\right\|_{f}}{\left\| y(m+th ) - \gamma_{m}\right\|_{f}^{3 } } \right ] & \leq \left\| h \right\| \left ( c\sqrt { \left\| \gamma_{m } \right\|_{f } } + \sqrt{c } \right ) .\end{aligned}\ ] ] applying inequalities ( [ inequality1 ] ) to ( [ inequality4 ] ) with , the announced result is proven , * bounding * for all and , we define the random function ~\longrightarrow ~ \mathcal{s}(h) ] , note that ] , finally , & \leq \mathbb{e}\left [ \frac{\left\| h \right\| \left\| x - m - th \right\|}{\left\| y(m+th ) - \gamma_{m } \right\|_{f}^{2}}\left\| v \right\|_{f } \right ] \\ \notag & \leq \left\| h \right\| \left\| v \right\|_{f } \mathbb{e}\left [ \frac{\sqrt{\left\|\gamma_{m } \right\|_{f}}}{\left\| y(m+th ) - \gamma_{m } \right\|_{f}^{2 } } \right ] \\\notag & + \left\| h \right\| \left\| v \right\|_{f}\mathbb{e}\left [ \frac{1}{\left\| y(m+th ) - \gamma_{m } \right\|_{f}^{3/2}}\right ] \\ \label{inequality1 ' } & \leq \left ( c\sqrt{\left\| \gamma_{m}\right\|_{f } } + c^{3/4}\right)\left\| h \right\| \left\| v \right\|_{f}. \end{aligned}\ ] ] then the announced result follows from an application of inequality ( [ inequality1 ] ) with and , decomposition ( [ decxi ] ) , note that for all and we have .moreover , and . since for all , is a convex function , we get with cauchy - schwarz s inequality , let , let us recall that .we now prove by induction that for all integer , there is a positive constant such that for all , \leq m_{p} ] .* let us apply inequality ( [ majord2 ] ) , for all and use the fact that is a sequence of martingales differences adapted to the filtration , \leq \mathbb{e}\left [ \left ( \left\| v_{n } - \gamma_{m}\right\|_{f}^{2 } + 36\gamma_{n}^{2 } + 2\gamma_{n}\left\| r_{n } \right\|_{f } \left\| v_{n } - \gamma_{m}\right\|_{f } \right)^{p } \right ] \\+ \sum_{k=2}^{p } \binom{p}{k } \mathbb{e}\left[\left ( 2\gamma_{n } \left\langle v_{n } - \gamma_{m } , \xi_{n+1 } \right\rangle_{f } \right)^{k } \left ( \left\| v_{n } - \gamma_{m } \right\|_{f}^{2 } + 36 \gamma_{n}^{2 } + 2\gamma_{n } \left\| r_{n } \right\|_{f}\left\| v_{n } - \gamma_{m } \right\|_{f } \right)^{p - k } \right ] .\label{decordp } \end{gathered}\ ] ] let us denote by the second term on the right - hand side of inequality ( [ decordp ] ) . 
applying cauchy - schwarz s inequality and since , \\ & \leq \sum_{k=2}^{p}\binom{p}{k}2^{2k}\gamma_{n}^{k}\mathbb{e}\left[\left\| v_{n } - \gamma_{m } \right\|_{f}^{k}\left ( \left\| v_{n } - \gamma_{m } \right\|_{f}^{2 } + 36 \gamma_{n}^{2 } + 2\gamma_{n } \left\| r_{n } \right\|_{f}\left\| v_{n } - \gamma_{m } \right\|_{f } \right)^{p - k}\right ] .\end{aligned}\ ] ] with the help of lemma [ lemtechnique ] , + \sum_{k=2}^{p}2^{2k}3^{p - k-1}36^{p - k}\gamma_{n}^{2p - k}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{k}\right ] \\ & + \sum_{k=2}^{p}2^{p+k}3^{p - k-1}\gamma_{n}^{p}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{p - k}\left\| v_{n } - \gamma_{m}\right\|_{f}^{p } \right ] .\end{aligned}\ ] ] applying cauchy - schwarz s inequality , & = \sum_{k=2}^{p}2^{2k}3^{p - k-1}\gamma_{n}^{k } \mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{p-1}\left\| v_{n } - \gamma_{m } \right\|_{f}^{p+1-k}\right ] \\ &\leq \sum_{k=2}^{p}2^{2k}3^{p - k-1}\gamma_{n}^{k}\sqrt{\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{2(p-1)}\right]}\sqrt{\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{2(p+1-k)}\right]}. \\\end{aligned}\ ] ] by induction , & \leq \sum_{k=2}^{p}2^{2k}3^{p - k-1}\gamma_{n}^{k}\sqrt{m_{p-1}}\sqrt{m_{p+1-k } } \\ & = o \left ( \gamma_{n}^{2 } \right ) .\label{eq1}\end{aligned}\ ] ] in the same way , applying cauchy - schwarz s inequality and by induction , & = \sum_{k=2}^{p}2^{2k}3^{p - k-1}36^{p - k}\gamma_{n}^{2p - k}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}\left\| v_{n } - \gamma_{m } \right\|_{f}^{k-1 } \right ] \\\notag & \leq \sum_{k=2}^{p}2^{2k}3^{p - k-1}36^{p - k}\gamma_{n}^{2p - k}\sqrt{m_{1}}\sqrt{m_{k-1 } } \\ \label{eq2 } & = o \left ( \gamma_{n}^{2 } \right ) , \end{aligned}\ ] ] since .similarly , since and since , applying cauchy - schwarz s inequality and by induction , & \leq \sum_{k=2}^{p}2^{2p}3^{p - k-1}\gamma_{n}^{p}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{p}\right ] \\\notag & \leq \sum_{k=2}^{p}2^{2p}3^{p - k-1}\gamma_{n}^{p}\sqrt{m_{1}}\sqrt{m_{p-1 } } \\ \label{eq3 } & = o \left ( \gamma_{n}^{2}\right ) .\end{aligned}\ ] ] finally , applying inequalities ( [ eq1 ] ) to ( [ eq3 ] ) , there is a positive constant such that for all , \leq a_{1}'\gamma_{n}^{2}.\ ] ] we now denote by the first term at the right - hand side of inequality ( [ decordp ] ) . 
with the help of lemma [ lemtechnique ] and applying cauchy - schwarz s inequality, + \sum_{k=1}^{p}\binom{p}{k}\mathbb{e}\left [ \left ( 36\gamma_{n}^{2 } + 2\gamma_{n}\left\langle r_{n } , v_{n } - \gamma_{m } \right\rangle_{f}\right)^{k}\left\| v_{n } - \gamma_{m}\right\|_{f}^{2p-2k}\right ] \\ & \leq \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right ] + \sum_{k=1}^{p}\binom{p}{k}2^{k-1}\mathbb{e}\left [ \left ( 36^{k}\gamma_{n}^{2k } + 2^{k}\gamma_{n}^{k}\left\| r_{n } \right\|_{f}^{k } \left\| v_{n } - \gamma_{m}\right\|_{f}^{k } \right ) \left\| v_{n}-\gamma_{m}\right\|_{f}^{2p-2k } \right ] .\end{aligned}\ ] ] moreover , let \\ & = \sum_{k=1}^{p}\binom{p}{k}2^{k-1}36^{k}\gamma_{n}^{2k}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{2p-2k } \right ] + \sum_{k=1}^{p}\binom{p}{k}2^{2k-1}\gamma_{n}^{k}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{k}\left\| v_{n } - \gamma_{m } \right\|_{f}^{2p - k}\right ] .\end{aligned}\ ] ] by induction , & = \sum_{k=1}^{p}\binom{p}{k}2^{k-1}36^{k}\gamma_{n}^{2k}m_{p - k } \\ & = o \left ( \gamma_{n}^{2 } \right ) .\end{aligned}\ ] ] moreover , & = \sum_{k=2}^{p}\binom{p}{k}2^{2k-1}\gamma_{n}^{k}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{k}\left\| v_{n } - \gamma_{m } \right\|_{f}^{2p - k}\right ] \\ & + 2p\gamma_{n}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}\left\| v_{n } - \gamma_{m}\right\|_{f}^{2p-1}\right ] .\end{aligned}\ ] ] applying cauchy - schwarz s inequality and by induction , since , & \leq \sum_{k=2}^{p}\binom{p}{k}2^{3k-1}\gamma_{n}^{k}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{2p - k}\right ] \\ & \leq \sum_{k=2}^{p}\binom{p}{k}2^{3k-1}\gamma_{n}^{k}\sqrt{m_{p+1-k}}\sqrt{m_{p-1 } } \\ & = o \left ( \gamma_{n}^{2 } \right ) .\end{aligned}\ ] ] moreover , applying theorem 4.2 in and hlder s inequality , since , & \leq 2c ' p \gamma_{n } \mathbb{e}\left [ \left\| \overline{m}_{n } - m\right\| \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p-1}\right ] \\ & \leq 2c ' p \gamma_{n } \left ( \mathbb{e}\left [ \left\| \overline{m}_{n } - m \right\|^{2p}\right]\right)^{\frac{1}{2p}}\left( \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right]\right)^{\frac{2p-1}{2p } } \\ & \leq 2c'p\gamma_{n } \frac{k_{p}^{\frac{1}{2p}}}{n^{1/2}}\left ( \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right]\right)^{\frac{2p-1}{2p}}.\end{aligned}\ ] ] finally , \right)^{\frac{2p-1}{2p } } & \leq 2c'p\gamma_{n } \frac{k_{p}^{\frac{1}{2p}}}{n^{1/2 } } \max\left\lbrace 1 , \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right]\right\rbrace \\ & \leq 2c'p\gamma_{n } \frac{k_{p}^{\frac{1}{2p}}}{n^{1/2 } } \left ( 1 + \mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right]\right ) .\end{aligned}\ ] ] thus , there are positive constants such that + a_{1}''\frac{1}{n^{\alpha + 1/2 } } .\ ] ] finally , thanks to inequalities ( [ maj1 ] ) and ( [ maj2 ] ) , there are positive constants such that & \leq \left ( 1+a_{0}'\frac{1}{n^{\alpha + 1/2}}\right)\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2p}\right ] + a_{1}'\frac{1}{n^{\alpha + 1/2 } } \\ & \leq \prod_{k=1}^{n}\left ( 1+a_{0}'\frac{1}{k^{\alpha + 1/2}}\right)\mathbb{e}\left [ \left\| v_{1 } - \gamma_{m}\right\|_{f}^{2p}\right]+ \sum_{k=1}^{n}\prod_{j = k+1}^{n}\left ( 1+a_{0 } ' \frac{1}{j^{\alpha + 1/2}}\right ) a_{1}'\frac{1}{k^{\alpha + 1/2 } } \\ & \leq \prod_{k=1}^{\infty}\left ( 1+a_{0}'\frac{1}{k^{\alpha + 1/2}}\right)\mathbb{e}\left [ \left\| v_{1 } - 
\gamma_{m}\right\|_{f}^{2p}\right]+ \prod_{j=1}^{\infty}\left ( 1+a_{0 } ' \frac{1}{j^{\alpha + 1/2}}\right)\sum_{k=1}^{\infty } a_{1}'\frac{1}{k^{\alpha + 1/2 } } \\ & \leq m_{p},\end{aligned}\ ] ] which concludes the induction and the proof .let us define the following linear operators : using decomposition ( [ decdelta ] ) and by induction , for all , with we now study the asymptotic behavior of the linear operators and .as in , one can check that there are positive constants such that for all integers with , where is the usual spectral norm for linear operators .we now bound the quadratic mean of each term in decomposition ( [ decbeta ] ) .* step 1 : the quasi deterministic term . * applying inequality ( [ majbeta ] ) , there is a positive constant such that & \leq \left\| \beta_{n-1}\right\|_{op}^{2}\mathbb{e}\left [ \left\| v_{1}-\gamma_{m}\right\|_{f}^{2}\right ] \\\notag & \leq c_{0}e^{-2\lambda_{\min}\sum_{k=1}^{n}\gamma_{n}}\mathbb{e}\left [ \left\| v_{1 } - \gamma_{m } \right\|_{f}^{2}\right ] \\ & \leq c_{0}e^{-c_{0}'n^{1-\alpha}}\mathbb{e}\left [ \left\| v_{1}- \gamma_{m } \right\|_{f}^{2 } \right ] .\end{aligned}\ ] ] this term converges exponentially fast to . * step 2 : the martingale term . * since is a sequence of martingale differences adapted to the filtration , & = \sum_{k=1}^{n-1}\mathbb{e}\left [ \left\| \beta_{n-1}\beta_{k}^{-1}\gamma_{k}\xi_{k+1 } \right\|_{f}^{2}\right ] + 2 \sum_{k=1}^{n-1}\sum_{k'=k+1}^{n-1}\gamma_{k}\gamma_{k'}\mathbb{e}\left [ \left\langle \beta_{n-1}\beta_{k}^{-1}\xi_{k+1 } , \beta_{n-1}\beta_{k'}^{-1}\xi_{k'+1 } \right\rangle_{f } \right ] \\ & = \sum_{k=1}^{n-1}\mathbb{e}\left [ \left\| \beta_{n-1}\beta_{k}^{-1}\gamma_{k}\xi_{k+1 } \right\|_{f}^{2}\right ] + 2 \sum_{k=1}^{n-1}\sum_{k'=k+1}^{n-1}\gamma_{k}\gamma_{k'}\mathbb{e}\left [ \left\langle \beta_{n-1}\beta_{k}^{-1}\xi_{k+1 } , \beta_{n-1}\beta_{k'}^{-1}\mathbb{e}\left[\xi_{k'+1}|\mathcal{f}_{k'}\right ] \right\rangle_{f } \right ] \\ & = \sum_{k=1}^{n-1}\mathbb{e}\left [ \left\| \beta_{n-1}\beta_{k}^{-1}\gamma_{k}\xi_{k+1 } \right\|_{f}^{2}\right ] .\end{aligned}\ ] ] moreover , as in , lemma [ sumexp ] ensures that there is a positive constant such that for all , \leq \frac{c_{1}'}{n^{\alpha}}.\ ] ] * step 3 : the first remainder term . 
* remarking that , & \leq \mathbb{e}\left [ \left ( \sum_{k=1}^{n-1 } \gamma_{k } \left\| \beta_{n-1}\beta_{k}^{-1 } \right\|_{op}\left\| r_{k } \right\|_{f } \right)^{2}\right ] \\ & \leq 16 \left ( \sqrt{c } + \sqrt{\left\| \gamma_{m}\right\|_{f}}\right)^{2 } \mathbb{e}\left [ \left ( \sum_{k=1}^{n-1 } \gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op } \left\| \overline{m}_{k}-m \right\| \right)^{2}\right ] .\end{aligned}\ ] ] applying lemma 4.3 and theorem 4.2 in , & \leq 16 \left ( \sqrt{c } + c\sqrt{\left\| \gamma_{m}\right\|_{f } } \right)^{2}\left ( \sum_{k=1}^{n-1}\gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op}\sqrt{\mathbb{e}\left [ \left\| \overline{m}_{k } - m \right\|^{2}\right]}\right)^{2 } \\ & \leq 16 \left ( \sqrt{c } + c\sqrt{\left\| \gamma_{m}\right\|_{f } } \right)^{2}k_{1}\left ( \sum_{k=1}^{n-1}\gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op}\frac{1}{k^{1/2}}\right)^{2}.\end{aligned}\ ] ] applying inequality ( [ majbeta ] ) , & \leq 16 \left ( \sqrt{c } + c\sqrt{\gamma_{m } } \right)^{2}k_{1}\left ( \sum_{k=1}^{n-1}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j}}\frac{1}{k^{1/2}}\right)^{2 } \\ & \leq 16 \left ( \sqrt{c } + c\sqrt{\gamma_{m } } \right)^{2}k_{1}\left ( \sum_{k=1}^{n}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j}}\frac{1}{k^{1/2}}\right)^{2}.\end{aligned}\ ] ] splitting the sum into two parts and applying lemma [ sumexp ] , we have & \leq 32 \left ( \sqrt{c } + c\sqrt{\left\| \gamma_{m}\right\|_{f } } \right)^{2}k_{1}\left ( \sum_{k=1}^{e ( n/2)}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j}}\frac{1}{k^{1/2}}\right)^{2 } \\ & + 32 \left ( \sqrt{c } + c\sqrt{\left\| \gamma_{m}\right\|_{f } } \right)^{2}k_{1}\left ( \sum_{k = e(n/2 ) + 1}^{n}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j}}\frac{1}{k^{1/2}}\right)^{2 } \\ & = o \left ( \frac{1}{n}\right ) .\end{aligned}\ ] ] thus , there is a positive constant such that for all , \leq \frac{c_{2}'}{n}.\ ] ] * step 4 : the second remainder term .* let us recall that for all , with .thus , & \leq \mathbb{e}\left [ \left ( \sum_{k=1}^{n-1}\gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op}\left\| r_{k } ' \right\|_{f } \right)^{2}\right ] \\ & \leq 144d^{2}\mathbb{e}\left [ \left ( \sum_{k=1}^{n-1}\gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op}\left\| \overline{m}_{k } - m \right\| \left\| v_{k } - \gamma_{m}\right\|_{f } \right)^{2}\right ] .\end{aligned}\ ] ] applying lemma 4.3 in , \leq 144d^{2}\left ( \sum_{k=1}^{n-1}\gamma_{k}\left\| \beta_{n-1}\beta_{k}^{-1}\right\|_{op}\sqrt{\mathbb{e}\left [ \left\| \overline{m}_{k } - m \right\|^{2 } \left\|v_{k } - \gamma_{m } \right\|_{f}^{2 } \right]}\right)^{2}.\ ] ] thanks to lemma 5.2 , there is a positive constant such that for all , ~\leq~m_{2} ] .thus , splitting the sum into two parts and applying inequalities ( [ majbeta ] ) and lemma [ sumexp ] , there are positive constant such that for all , & \leq 72c^{2}m_{2}^{2}\left ( \sum_{k=1}^{e(n/2)}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j } } \right)^{2 } \\ & + 72c^{2}\sup_{e(n/2)+1 \leq k \leq n-1 } \left\lbrace \mathbb{e}\left [ \left\| v_{k } - \gamma_{m}\right\|_{f}^{4}\right]\right\rbrace \left ( \sum_{k = e(n/2)+1}^{n}\gamma_{k}e^{-\sum_{j = k}^{n}\gamma_{j } } \right)^{2 } \\ & \leq c_{2 } ' \sup_{e(n/2)+1 \leq k \leq n-1}\left\lbrace \mathbb{e}\left [ \left\| v_{k } - \gamma_{m}\right\|_{f}^{4}\right]\right\rbrace + o \left ( e^{-2c_{0}'n^{1-\alpha } } \right ) .\end{aligned}\ ] ] thus , there is a positive constant such that for all , \leq c_{0}'e^{-2c_{0}'n^{1-\alpha } } + 
c_{2}'\sup_{e(n/2)+1 \leq k \leqn-1}\left\lbrace \mathbb{e}\left [ \left\| v_{k } - \gamma_{m}\right\|_{f}^{4}\right]\right\rbrace .\ ] ] * conclusion : * applying lemma [ lemtechnique ] and decomposition ( [ decbeta ] ) , for all , & \leq 5 \mathbb{e}\left [ \left\| \beta_{n-1}\left ( v_{1}-\gamma_{m}\right)\right\|_{f}^{2}\right ] + 5 \mathbb{e}\left [ \left\| \beta_{n-1}m_{n}\right\|_{f}^{2}\right ] + 5 \mathbb{e}\left [ \left\| \beta_{n-1}r_{n } \right\|_{f}^{2}\right]\\ & + 5\mathbb{e}\left [ \left\| \beta_{n-1}r_{n } ' \right\|_{f}^{2}\right ] + 5 \mathbb{e}\left [ \left\| \beta_{n-1}\delta_{n}\right\|_{f}^{2}\right ] .\end{aligned}\ ] ] applying inequalities ( [ eqdec1 ] ) to ( [ eqdec5 ] ) , there are positive constants such that for all , \leq c_{1}e^{-c_{1}'n^{1-\alpha } } + \frac{c_{2}}{n^{\alpha } } + c_{3}\sup_{e(n/2)+1 \leq k \leqn-1}\mathbb{e}\left [ \left\| v_{k } - \gamma_{m } \right\|_{f}^{4}\right ] .\ ] ] let us define and use decomposition ( [ decxi ] ) , since , and the fact that for all , , , we get with an application of cauchy - schwarz s inequality thus , since is a sequence of martingale differences adapted to the filtration , and since ( this inequality follows from proposition [ convexity ] and from the fact that for all , is a convex application ) , & \leq \mathbb{e}\left [ \left\| w_{n } \right\|_{f}^{4}\right ] + 2\gamma_{n}\mathbb{e}\left [ \left\| r_{n } \right\|_{f } \left\| w_{n } \right\|_{f}^{2}\left\| v_{n } - \gamma_{m}\right\|_{f}\right ] \\ & + 40 \left ( 1+c^{2}c_{\gamma}^{2}\right)\gamma_{n}^{2}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] \\ & + 4\gamma_{n}^{2}\mathbb{e}\left [ \left\langle \xi_{n+1 } , v_{n } - \gamma_{m}\right\rangle_{f}^{2}\right ] + 400\gamma_{n}^{4 } + 40\gamma_{n}^{3}\mathbb{e}\left [ \left\| r_{n } \right\|_{f } \left\| v_{n } - \gamma_{m}\right\|_{f}^{2 } \right ] \\ & + 4\gamma_{n}^{2}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{2}\left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] .\end{aligned}\ ] ] since and , applying cauchy - schwarz s inequality , there are positive constants such that for all , \leq \mathbb{e}\left [ \left\| w_{n } \right\|_{f}^{4}\right ] + 2\gamma_{n}\mathbb{e}\left [ \left\| r_{n } \right\|_{f } \left\| w_{n } \right\|_{f}^{2}\left\| v_{n } - \gamma_{m}\right\|_{f}\right ] + \frac{c_{1}'}{n^{3\alpha}}+ \frac{c_{2}'}{n^{2\alpha}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right].\ ] ] we now bound the two first terms at the right - hand side of inequality ( [ majoord4 ] ) . * step 1 : bounding ] . 
since and since there is a positive constant such that for all , we have & \leq \left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}\mathbb{e}\left [ \left\|v_{n } - \gamma_{m}\right\|_{f}^{4 } \mathbf{1}_{a_{n , p'}^{c}}\right ] \\ & \leq \left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}c_{0}^{4}n^{4 - 4\alpha}\mathbb{p}\left [ a_{n , p'}^{c}\right ] \\ & \leq \left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}c_{0}^{4}n^{4 - 4\alpha } \left ( \mathbb{p}\left [ \left\| \overline{m}_{n } - m \right\| \geq \epsilon \right ] + \mathbb{p}\left[ \left\| v_{n } - \gamma_{m}\right\|_{f } \geq n^{\frac{1-\alpha}{p'}}\right ] \right ) .\end{aligned}\ ] ] applying markov s inequality , theorem 4.2 in and lemma 5.2 , & \leq \left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}c_{0}^{4}n^{4 - 4\alpha } \left ( \frac{\mathbb{e}\left [ \left\| \overline{m}_{n}-m \right\|^{2p''}\right]}{\epsilon^{2p '' } } + \frac{\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2q}\right]}{n^{2q\frac{1-\alpha}{p ' } } } \right ) \\ & \leq \frac{k_{p''}}{\epsilon^{2p''}}\left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}c_{0}^{4}n^{4 - 4\alpha - p '' } + \left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}c_{0}^{4}m_{q}n^{4 - 4\alpha - 2q \frac{1-\alpha}{p'}}. \end{aligned}\ ] ] taking and , = o \left ( \frac{1}{n^{3\alpha}}\right ) .\ ] ] thus , applying inequalities ( [ majwnan ] ) and ( [ majwnanc ] ) , there are positive constants , and a rank such that for all , \leq \left ( 1- 2c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\right ) \mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{4}\right ] + \frac{c_{1,p'}}{n^{3\alpha}}.\ ] ] * step 2 : bounding $ ] . * since , applying lemma [ lemtechnique ] , let \\& \leq 2\left ( 1+c_{\gamma}^{2}c^{2}\right ) \gamma_{n } \mathbb{e}\left [ \left\| r_{n } \right\|_{f}\left\| v_{n } - \gamma_{m}\right\|_{f}^{3}\right ] \\ & \leq \frac{2}{c_{p'}}\left ( 1+c_{\gamma}^{2}c^{2}\right)^{2}\gamma_{n}n^{\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{2}\left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] + \frac{1}{2}c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\mathbb{e}\left[ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] \\ & \leq \frac{2}{c_{p'}^{2}}\left ( 1+c_{\gamma}^{2}c^{2}\right)^{4}\gamma_{n}n^{3\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| r_{n } \right\|_{f}^{4}\right ] + c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] .\end{aligned}\ ] ] since and applying theorem 4.2 in , + c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] \\\notag & \leq \frac{2}{c_{p'}^{2}}k_{2}\left ( 1+c_{\gamma}^{2}c^{2}\right)^{4}\left ( \sqrt{c } + c\sqrt{\left\| \gamma_{m}\right\|_{f}}\right)^{4}\gamma_{n}n^{3\frac{1-\alpha}{p'}}\frac{1}{n^{2 } } + c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] \\ & = c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{4}\right ] + o \left ( \frac{1}{n^{2 + \alpha -3(1-\alpha)/p'}}\right ) .\end{aligned}\ ] ] * step 3 : conclusion . 
* applying inequalities ( [ majoord4 ] ) , ( [ majwn ] ) and ( [ majdn ] ) , there are a rank and positive constants such that for all , \leq \left ( 1- c_{p'}\gamma_{n}n^{-\frac{1-\alpha}{p'}}\right)\mathbb{e}\left [ \left\| v_{n } - \gamma_{m } \right\|_{f}^{4}\right ] + \frac{c_{1,p'}}{n^{3\alpha } } + \frac{c_{2,p'}}{n^{2\alpha}}\mathbb{e}\left [ \left\| v_{n } - \gamma_{m}\right\|_{f}^{2}\right ] + \frac{c_{3,p'}}{n^{2+\alpha -3\frac{1-\alpha}{p'}}}.\ ] ]first , the following lemma recalls some well - known inequalities .[ sumexp ] let be non - negative constants such that , and , be two sequences defined for all by with .thus , there is a positive constant such that for all , where is the integer part function .we first prove inequality ( [ sumexp1 ] ) . for all , moreover , for all , thus , we now prove inequality ( [ sumexp2 ] ) . with the help of an integral test for convergence , , with the help of an integral test for convergence , there is a rank ( for sake of simplicity , we consider that ) such that for all , {e(n/2)+1}^{n } + \beta\int_{e(n/2)+1}^{n}e^{t^{1-\alpha}}t^{-1-\beta}dt \\ & = e^{(n+1)^{1-\alpha}(n+1)^{-\beta } } + o \left ( \int_{e(n/2)+1}^{n+1}e^{t^{1-\alpha}}t^{-\alpha - \beta } dt \right ) , \end{aligned}\ ] ] since .thus , as a conclusion , we have cardot , h. , cnac , p. , and chaouch , m. ( 2010 ) .stochastic approximation to the multivariate and the functional median . in lechevallier , y. and saporta , g. , editors , _ compstat 2010 _ , pages 421428 .physica verlag , springer .kemperman , j. h. b. ( 1987 ) .the median of a finite measure on a banach space . in _ statistical data analysis based on the -norm and related methods ( neuchtel , 1987 ) _ , pages 217230 .north - holland , amsterdam .mttnen , j. , nordhausen , k. , and oja , h. ( 2010 ) .asymptotic theory of the spatial median . in_ nonparametrics and robustness in modern statistical inference and time series analysis : a festschrift in honor of professor jana jureckov _ , volume 7 , pages 182193 .ims collection .
the geometric median covariation matrix is a robust multivariate indicator of dispersion which can be extended without any difficulty to functional data. we define estimators, based on recursive algorithms, that can be updated simply at each new observation and can deal rapidly with large samples of high dimensional data without storing all the data in memory. asymptotic convergence properties of the recursive algorithms are studied under weak conditions. the computation of the principal components can also be performed online, and this approach can be useful for online outlier detection. a simulation study clearly shows that this robust indicator is a competitive alternative to the minimum covariance determinant estimator when the dimension of the data is small, and to robust principal components analysis based on projection pursuit and on spherical projections when the dimension is high. an illustration on a large sample of high dimensional data, consisting of individual tv audiences measured at a one-minute resolution over a period of 24 hours, confirms the interest of robust principal components analysis based on the median covariation matrix. all studied algorithms are available in the r package ` gmedian ` on cran. keywords: averaging, functional data, geometric median, online algorithms, online principal components, recursive robust estimation, stochastic gradient, weiszfeld's algorithm.
in the past decade tremendous progress has been made toward understanding the genetic basis of disease .this challenging endeavor has given rise to numerous study designs with a vast arsenal of statistical machinery . a common theme , however , is the pivotal role played by familial relationships . traditionally relationships are encoded in pedigrees of known relatives [ thompson ( ) , , , ] , but for more distantly related individuals , pedigree information can sometimes be erroneous or difficult to obtain .relatedness can also be calculated from large panels of genetic markers [ , , , , , , , ] . while this approach has greatly expanded the scope of inference for relationships , empirical estimates are noisy , especially regarding distant relatives .the search for a disease gene begins with finding unusual sharing of genetic material among individuals who share a trait ( phenotype ) .linkage analysis involves the study of joint inheritance of genetic material and phenotypes within relatives [ , ] .typically , these studies are restricted to relatives within a pedigree , but more recently the approach has been extended to samples of people who are more distantly related and without known pedigree structure [ ] .alternatively , genetic associations can be discovered from population samples , which are usually based on case control studies . in these studies the sampleis assumed to be unrelated , but the presence of distant relatives ( i.e. , cryptic relatedness ) can reduce power or generate spurious associations [ , ] .numerous methods have been proposed to deal with familial structure in genetic association studies [ , , , ] , all of which require an estimate of family relationships among individuals within the study .relationships are also critical for quantitative genetics .a common problem for quantitative genetics is to estimate the fraction of variance of a continuous trait , such as height , due to genetic variation amongst individuals in a population .this feature , known as heritability , delineates the relative contributions of genetic and nongenetic factors to the total phenotypic variance in a population .heritability is a fundamental concept in genetic epidemiology and disease mapping . using a variety of close relatives, the heritability of quantitative and qualitative traits can be estimated directly [ , ] . with complex pedigrees , applying the same principles , heritability can be estimated using random effects models [ ] .heritability of height , weight , iq and many other quantitative traits have been investigated for nearly a century and continue to generate interest [ ] .interest in the genetic basis of disease is high because greater understanding of disease etiology will in principle lead to better treatments .large population - based samples are enhancing our ability to identify dna variants affecting risk for disease and it has become the standard to search for genetic variants associated with common disease using genome - wide association studies ( gwas ) .thousands of associations for common diseases / phenotypes have already been validated [ ] . nevertheless , even in the most successful cases , such as inflammatory bowel disease studied in and ,discoveries account for only a fraction of the heritability .given the relatively limited discoveries thus far , a reasonable question is whether the heritability of a trait estimated from relatives truly does trace to genetic variation . 
offer a novel approach to genetic analysis that shows that indeed much of it does .they propose to analyze population samples , rather than pedigrees , for the heritability of the trait .to do so they first estimate the correlation between all pairs of individuals in the population sample using a dense set of common genetic variants , such as those typically used for a gwas .they then take this matrix and relate it to the covariance matrix of phenotypes for these subjects to derive an estimate of heritability .thus , in their application , where essentially all relatives are removed from the sample , heritability refers to the proportion of variance in the trait explained by the measured genetic markers .they provide a fascinating example of how this approach works in the case of human height and they and others applied these techniques to many other traits [ reviewed by ] . the work of yang et al .( ) inspired us to consider applying a related approach to answer a different question .could estimates of relatedness obtained from a population sample be improved by using smoothing techniques on the variance covariance matrix ?if so , population samples could be used to estimate heritability in the traditional sense without requiring close relatives .this approach has application to phenotypes for which extended pedigrees are difficult to obtain . for instance , there is controversy in the literature concerning the heritability of autism , which is typically estimated from twin studies [ ] .smoothing techniques could also be used to estimate relatedness in samples of distantly related individuals for many other genetic analyses .for example , a version of linkage analysis could be applied to distant relatives .we propose treelet covariance smoothing a novel method for smoothing and multiscale decomposition of covariance matrices as a means to improving estimates of relationships .treelets were first introduced in and as a multi - scale basis that extends wavelets to unordered data .the method is fully adaptive .it returns orthonormal basis functions supported on nested clusters in a hierarchical tree . unlike other hierarchical methods , the basis and the tree structure are computed simultaneously , and both reflect the internal structure of the data . in this work, we extend the original treelet framework for smoothing of one - dimensional signals to smoothing and denoising of variance covariance matrices with hierarchical block structure and unstructured noise .smoothing is achieved by a nonlinear approximation scheme in which one discards small elements in a multi - scale matrix decomposition .the basic idea is that if the data have underlying structure in the form of groupings of correlated variables , then we can enforce sparsity by first transforming the data into a treelet representation by a series of rotations of pairs of correlated variables , and then thresholding covariances .we refer to this new regularization approach for covariance matrices with groupings on multiple scales as _ treelet covariance smoothing _( tcs ) .we apply tcs to genetically inferred relationship matrices , with the goal of improving estimates of pairwise relationships from large pedigrees and population - based samples . 
on both simulated and real data ,we show that tcs leads to better estimates of the relatedness between individuals .using these estimates allows us to estimate the heritability from population - based samples provided they include some distantly related individuals , a property that is almost inevitable in practice .finally , we discuss how estimating heritability is simply a case of variance component estimation for an error - in - variables random effects model .therefore , our method can be applied to a whole family of more general models of similar structure .the human genome contains many millions of single nucleotide polymorphisms ( snps ) and other genetic variation distributed across the genome . in a gwasit is now typical to measure a panel of at least 500,000 snps from each subject .snps typically have only two forms or alleles within a population .whichever allele is less frequent is called the minor allele .the genotype of an individual at a snp can then be coded as 0 , 1 or 2 depending on the number of minor alleles the individual has at that snp .alleles at snps in close physical proximity are often highly correlated ( i.e. , in linkage disequilibrium ) . when multiple snps are in linkage disequilibrium , we say one of these snps `` tags , '' or represents , the others .although estimates vary , well - designed panels of 500,000 snps do not tag all of the common snps in the genome and they tag very few of the snps with rare minor alleles [ ] . nevertheless , gwas provide considerable information about familial relationships .the relatedness between a pair of individuals is defined by the frequency by which they share alleles _identical by descent _formally , two alleles are considered ibd if they descended from a common ancestor without an intermediate mutation . within a pedigree relativesshare very recent common ancestors , hence , many alleles are ibd . for a more detailed exposition of genetic relationships , see .the quantity of interest in this investigation is the _ additive genetic relationship _ which is defined as the expected proportion of alleles ibd for a pair of individuals . for individuals and we use to denote this quantity , which is more familiar when viewed as the _ degree of relationship _ , where .for example , for siblings , first cousins and second cousins , who are 1st , 3rd and 5th degree relatives , is , and , respectively . within a noninbred pedigree can be computed using a recursive algorithm [ ] .for example , if individual has parents and , then . for distantly related individuals ,detailed pedigree information is not often available ; however , with gwas data one can calculate genome - average relatedness directly [ ] . even with complete information regarding ibd status of the chromosomes ,the fraction of genetic material shared by relatives will differ slightly from the expectation calculated from the pedigree due to the stochastic nature of the meiotic process [ ] .for the purpose of genetic investigations , one could argue that genome - average relatedness is a truer measure of relatedness .for example , while two distantly related individuals are expected to share a small fraction of their genetic material , if they do not inherit anything from their common ancestor , it seems appropriate to consider them unrelated . under many population genetic models also be interpreted as a correlation coefficient .let denote the scaled minor allele count for individual at snp : , where is the minor allele count and is the minor allele frequency . 
for individuals and at genetic variant , it follows from our model that = a_{ij}. \label{eqsnpcov}\ ] ] exploiting this feature leads to a method of moments estimate of from a panel of genetic markers . to see this ,let denote a column vector of observed scaled allele counts for all individuals at the snp , then let where .the genome - wide complex trait analysis ( gcta ) software from computes this estimator .the method of moments estimator is unbiased if the population allele frequencies are known [ ] . in practice ,the s are estimated from the sample data .a criticism of this estimator is that some off - diagonal elements are negative , which does not conform to the interpretation of as a probability .viewed as a correlation coefficient , however , negative quantities suggest the pair of individuals share fewer alleles than expected given the allele frequencies .alternatively , maximum likelihood estimators of have been developed [ , ] , but these estimators are quite computationally intensive for gwas panels . hence , while method of moments estimators are typically less precise than maximum likelihood estimators , they are more commonly used when a large snp panel is available . by definition ,the heritability of a quantitative trait ( ) such as height is determined by the additive effect of many genes and genetic variants ( ) , each of small effect ( i.e. , the polygenic model ) . for individuals , suppose that the genetic effects are explained by causal snps , and we can express the genetic effect as where is the additive random effect of the causal variant , weighted by the scaled number of minor alleles at this variant .let be the vector of random effects corresponding to the additive genetic effects for individuals .for and ] , it follows that where . in the traditional model for quantitative traits a continuous phenotype is modeled as where is the vector of residual effects , and is the vector of phenotypes . in matrix notation , .the residuals are assumed to be independent with variance covariance equal to and the random effects and residual error are assumed to be normally distributed .consequently , = \frac{z_cz_c^t}{j}\sigma^2_g + i \sigma_e^2 . \label{eqknownzc}\] ] the heritability of the phenotype is defined as this quantity is more accurately known as the additive or narrow - sense heritability , in contrast to the broad - sense heritability , which includes nonadditive genetic effects such as gene gene interactions .our inferences will be confined to narrow - sense heritability .if the causal snps ( or good tag snps ) and the phenotype were directly measured , then one could estimate based on equation ( [ eqqtmodel ] ) and the implied random effects model using maximum likelihood ( reml ) [ ] .notationally , is an matrix that picks out columns of the full snp panel . in practice, is not known .few of the causal snps are known for any phenotype , and many causal snps will be missing from ( i.e. , not tagged by any measured snps ) .how then is estimated in practice ?assuming various subsets of individuals in the sample are related with relationship matrix ( defined previously ) , heritability can be estimated without any knowledge of causal genetic variants that constitute . from equation ( [ eqsnpcov ] ) and the polygenic model it follows that as gets large .this inspires an alternative random effects model which has long been utilized in population genetics : = a\sigma^2_g + i \sigma_e^2 . 
\label{equnknownzc}\ ] ] historically , has been derived from known pedigree structure .however , provided some subsets of the individuals in the sample are related ( even distantly ) , one can estimate from genetic markers using either method of moments or maximum likelihood estimation techniques .this approach has been applied frequently in quantitative genetics , especially in breeding studies [ , , , ] .we conjecture that by using tcs , we can improve estimates of and obtain better estimates of heritability without knowledge of causal variants . alternatively ,if the sample is completely unrelated , then substituting the result of equation ( [ eqzzprime ] ) for ( [ eqknownzc ] ) does not lead to an estimate of unless all of the causal snps have been recorded . instead this approach estimates , the proportion of the variance in phenotype explained by the snp panel [ ] . in thissetting , tcs will not improve estimates of .the genetic relationship matrix is a measure of the additive covariance structure that exists between individuals due to a common genetic background .we estimate the relationship matrix using genotyped snps , but this estimate is usually noisy .hence , we propose a method for improving upon this estimate using treelets .treelets simultaneously return a hierarchical tree and orthonormal basis functions supported on _ nested clusters _ in the tree both reflect the underlying structure of the data . herewe extend the original treelet framework [ , ] for smoothing one - dimensional signals and functions , to a new means of smoothing and denoising variance covariance matrices with hierarchical block structure and unstructured noise .the main idea is to first move to a different basis representation through a series of local transformations , and then impose sparsity by thresholding the transformed covariance matrix .we refer to the approach as treelet covariance smoothing ( tcs ) .the general setup is as follows .[ see appendix in for details on how to compute the treelet transformation .the treelet algorithm , as well as its implementation , is available in r on cran as the ` treelet ` library . ]let be a random vector in with variance covariance matrix . in our context, represents the scaled minor allele counts for a set of individuals at any snp , and the covariance , the additive genetic relationship matrix of the individuals [ equation ( [ eqsnpcov ] ) ] .now at each level of the treelet algorithm , we have an orthonormal multiscale basis .let denote the basis at the top of the tree [ corresponding to level if using the notation in ] .we write where represent the orthogonal projections onto local basis vectors on different scales .it follows that the covariance of can be written in terms of a _ multi - scale matrix decomposition _ where and . the first term in equation ( [ eqcov ] )describes the diagonally symmetric block structure of the variance covariance matrix .these blocks are organized in a hierarchical tree .the second term describes a more complex structure , including off - diagonal rectangular blocks , which are also hierarchically related to each other in a multi - scale matrix decomposition . in practice ,the covariance is unknown , and both the covariance matrix and the treelet basis need to be estimated from data . 
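As a concrete illustration of the marker-based, method-of-moments estimate referred to above (and used as the noisy input to the smoothing procedure described next), the sketch below computes a relationship matrix from a genotype matrix of minor-allele counts. This is not the GCTA code; the scaling used here is the usual one, and estimating allele frequencies from the sample is an approximation, as noted in the text.

```python
# Sketch of the method-of-moments genetic relationship matrix (GRM).
# X is an (n individuals x J SNPs) matrix of minor-allele counts in {0, 1, 2}.
# Monomorphic SNPs (estimated frequency 0 or 1) should be removed beforehand.
import numpy as np

def grm_method_of_moments(X):
    n, J = X.shape
    p_hat = X.mean(axis=0) / 2.0                     # allele frequencies estimated from the sample
    Z = (X - 2.0 * p_hat) / np.sqrt(2.0 * p_hat * (1.0 - p_hat))   # scaled allele counts
    A_hat = Z @ Z.T / J                              # noisy estimate of the relationship matrix A
    return A_hat
```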
for relationship matrices , one can , for example , derive an estimate from marker data using method of moments or maximum likelihood methods .denote the treelet basis derived from by , and write where and .let be the covariance estimate after a treelet transformation , that is , after applying a full set of jacobi rotations of pairs of correlated variables .a calculation shows that {ii } \qquad \mbox{and}\qquad \widehat { \gamma}_{i , j } = \widehat { \operatorname{cov}}(c_i , c_j ) = \bigl[t(\widehat{\sigma})\bigr]_{ij } , \label{eqtreeletcoeffs}\vspace*{-1pt}\ ] ] where and .this suggests and , where denotes the kronecker delta function , corresponds to simple thresholding of the original covariance estimate .here we consider more general groupings of correlated variables on different scales . ]a smoothed estimate of the covariance by thresholding : \widehat{\mathbf{v}}_i(\widehat { \mathbf{v}}_i)^t + \sum_{i\ne j}^nf_\lambda [ \widehat{\gamma}_{i , j}]\widehat{\mathbf{v}}_i(\widehat { \mathbf{v}}_j)^t , \label{eqcovsmooth}\ ] ] with the thresholding function = \cases { a , & \quad |a| \ge\lambda \vspace*{2pt}\cr 0 , & \quad |a| < \lambda } \label{eqcovthresholding}\ ] ] where is a smoothing parameter . to summarize and in matrix short - hand notation ,the smoothed genetic relationship matrix is given by b^t , \label{eqasmoothmat}\ ] ] where and , respectively , denote the treelet basis and the covariance matrix at the top of the tree , and corresponds to element - wise thresholding [ equation ( [ eqcovthresholding ] ) ] .note that to compute we only need to know the jacobi rotations at each level of the tree , more precisely , the treelet basis , , where the jacobi rotation matrix is the rotation matrix at level .the covariance estimate after a treelet transformation and before smoothing is .the goal is to choose a threshold ( ) that reduces noise in the estimated relationships .traditional cross - validation is not an option because we can not predict without including persons and .alternatively , we have an abundance of genetic information from which to estimate .we propose a snp subsampling procedure to estimate the tuning parameter .we begin by breaking the genome into independent _ training _ and _ test _ sets by randomly placing half the chromosomes into each set . to improve the efficiency of our estimate of , we utilize a `` blackout window '' of length to avoid sampling snps that are highly correlated. this can be considered either in terms of physical location along the chromosome or the number of snps between any two snps in question . from the set of training chromosomes ,select a relatively large sample of independent snps to get a reliable estimate of .we train our algorithm by smoothing using tcs to get , for all , where is a grid of reasonable threshold values .once we have , for a given , we subsample snp sets of size from the test set of chromosomes . here , and the snps within each of the subsampled sets follow our defined blackout window , . 
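Before completing the description of the tuning procedure, here is a small sketch of the smoothing step itself, "smooth using TCS for each threshold in the grid", following the smoothing and thresholding equations above: rotate the estimated relationship matrix into the treelet basis, hard-threshold the off-diagonal coefficients, and rotate back. The orthonormal basis `B` (columns equal to the treelet basis vectors at the top of the tree) is assumed to have been computed already, for example with the `treelet` package mentioned above; the code is a sketch, not the authors' implementation.

```python
# Sketch of treelet covariance smoothing given a precomputed treelet basis B.
import numpy as np

def tcs_threshold(A_hat, B, lam):
    Gamma = B.T @ A_hat @ B                   # coefficients in the treelet basis
    G = Gamma.copy()
    off = ~np.eye(G.shape[0], dtype=bool)     # diagonal coefficients are kept untouched
    G[off & (np.abs(G) < lam)] = 0.0          # hard thresholding of off-diagonal terms
    return B @ G @ B.T                        # smoothed relationship matrix
```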
then , for all , estimate the relationship matrix , , based on the subset of snps .we then compare our smoothed relationship matrix , , from the training chromosomes to each of the nonsmoothed relationship matrices , , via a weighted risk function : where is a weight associated with each element in .clearly , the optimal tuning parameter is .the reason for introducing the weighting scheme is because many subjects are nearly unrelated .thus , we aim to upweight the loss function so that the preponderance of near - zero elements in the off - diagonal do not overwhelm the loss function .we suggest using the learned hierarchical tree to get the weights .more specifically , {ij}| ] and = zdz^t + e = \sum _ { i=1}^c \sigma_i^2 z_i z_i^t + e,\ ] ] where the variance components and are unknown and to be estimated .now consider an _ error - in - variables _ scenario in which the matrix of regressors of fixed effects is known , but we only have _ noisy _ estimates of some or all of the positive semi - definite ( p.s.d . )matrices associated with the random effects .if these matrices have block structure and the noise is unstructured , then one could potentially improve estimates of variance components by first applying tcs . in our application , for example, we looked at a special case where we first estimate the p.s.d .matrix in an additive polygenic model using marker - based data , and then use a denoised estimate of to estimate the variance components , and in a random effects model where and . in summary , we have introduced a new method , called treelet covariance smoothing ( tcs ) , that regularizes a relationship matrix estimated from a large panel of genetic markers . in the context of a gwas study a huge number of snps are measured , each of which provides information about the relationship between individuals in the sample .we proposed a snp subsampling procedure that exploits this rich source of information to choose a tuning parameter for the algorithm .we illustrated one instance of the utility of such estimates by substituting the resulting smoothed relationship matrix into a random effects model to estimate the heritability of body mass index .while others have used genetically inferred estimates of relatedness from samples of close relatives to estimate heritability , we believe this is the first time such estimates have been applied to population - based samples when the goal is to estimate heritability in the traditional sense .we would like to thank daniel weeks , nadine melhem and cosma shalizi for comments on the manuscript , elizabeth thompson for guidance in designing the simulations , and evan klei for assistance with the simulations .
recent technological advances coupled with large sample sets have uncovered many factors underlying the genetic basis of traits and the predisposition to complex disease , but much is left to discover . a common thread to most genetic investigations is familial relationships . close relatives can be identified from family records , and more distant relatives can be inferred from large panels of genetic markers . unfortunately these empirical estimates can be noisy , especially regarding distant relatives . we propose a new method for denoising genetically inferred relationship matrices by exploiting the underlying structure due to hierarchical groupings of correlated individuals . the approach , which we call treelet covariance smoothing , employs a multiscale decomposition of covariance matrices to improve estimates of pairwise relationships . on both simulated and real data , we show that smoothing leads to better estimates of the relatedness amongst distantly related individuals . we illustrate our method with a large genome - wide association study and estimate the `` heritability '' of body mass index quite accurately . traditionally heritability , defined as the fraction of the total trait variance attributable to additive genetic effects , is estimated from samples of closely related individuals using random effects models . we show that by using smoothed relationship matrices we can estimate heritability using population - based samples . finally , while our methods have been developed for refining genetic relationship matrices and improving estimates of heritability , they have much broader potential application in statistics . most notably , for error - in - variables random effects models and settings that require regularization of matrices with block or hierarchical structure . , , ,
consider a scenario where a large volume of data is collected on a daily basis : for example , sales records in a retailer , or network activity in a telecoms company .this activity will be archived in a warehouse or other storage mechanism , but the size of the data is too large for data analysts to keep in memory . rather than go out to the full archive for every query , it is natural to retain accurate summaries of each data table , and use these queries for data exploration and analysis , reducing the need to read through the full history for each query .since there can be many tables ( say , one for every day at each store , in the retailer case , or one for every hour and every router in the network case ) , we want to keep a very compact summary of each table , but still guarantee accurate answers to any query .the summary allows approximate processing of queries , in place of the original data ( which may be slow to access or even no longer available ) ; it also allows fast ` previews ' of computations which are slow or resource hungry to perform exactly .[ eg : motivate ] as a motivating example , consider network data in the form of ip flow records .each record has a source and destination ip address , a port number , and size ( number of bytes ) .ip addresses form a natural hierarchy , where prefixes or sets of prefixes define the ranges of interest .port numbers indicate the generating application , and related applications use ranges of port numbers .flow summaries are used for many network management tasks , including planning routing strategies , and traffic anomaly detection .typical ad hoc analysis tasks may involve estimating the amount of traffic between different subnetworks , or the fraction of voip traffic on a certain network .resources for collection , transport , storage and analysis of network measurements are expensive ; therefore , structure - aware summaries are needed by network operators to understand the behavior of their network .such scenarios have motivated a wealth of work on data summarization and approximation.there are two main themes : methods based on random sampling , and algorithms that build more complex summaries ( often deterministic , but also randomized ) .both have their pros and cons .sampling is fast and efficient , and has useful guaranteed properties .dedicated summaries can offer greater accuracy for the kind of range queries which are most common over large data , albeit at a greater cost to compute , and providing less flexibility for other query types .our goal in this work is to provide summaries which combine the best of both worlds : fast , flexible summaries which are very accurate for the all - important range queries . 
to attain this goal, we must understand existing methods in detail to see how to improve on their properties .summaries which are based on random sampling allow us to build ( unbiased ) estimates of properties of the data set , such as counts of individual identifiers ( `` keys '' ) , sums of weights for particular subsets of keys , and so on , all specified after the data has been seen .having high - quality estimates of these primitives allows us to implement higher - level applications over samples , such as computing order statistics over subsets of the data , heavy hitters detection , longitudinal studies of trends and correlations , and so on .summarization of items with weights traditionally uses poisson sampling , where each item is sampled independently .the approach which sets the probability of including an item in the sample to be proportional to its weight ( ipps ) enables us to use the horvitz - thompson estimator , which minimizes the sum of per - item variances .`` '' samples improve on poisson samples in that the sample size is fixed and they are more accurate on subset - sum queries .in particular samples have _ variance optimality _ : they achieve variance over the queries that is provably the smallest possible for any sample of that size . since sampling is simple to implement and flexible to use , it is the default summarization method for large data sets .samples support a rich class of possible queries directly , such as those mentioned in example [ eg : motivate ] : evaluating the query over the sampled data ( with appropriately scaled weights ) usually provides an unbiased , low variance estimate of the true answer , while not requiring any new code to be written .these summaries provide not only estimates of aggregate values but also a representative sample of keys that satisfy a selection criteria .the fact that estimates are unbiased also means that relative error decreases for queries that span multiple samples or larger subsets and the estimation error is governed by exponential tail bounds : the estimation error , in terms of the number of samples from any particular subset , is highly concentrated around the square root of the expectation .we observe , however , that traditionally sampling has neglected the inherent structure that is present , and which is known before the data is observed .that is , data typically exists within a well - understood schema that exhibits considerable structure .common structures include _ order _ where there is a natural ordering over keys ; _ hierarchy _ where keys are leaves within a hierarchy ( e.g. geographic hierarchy , network hierarchy ) ; and combinations of these where keys are multi - dimensional points in a _product structure_. over such data , queries are often _ structure - respecting_. for example , on ordered data with possible key - values , although there are possible subset - sum queries , the most relevant queries may be the possible range queries . in a hierarchy, relevant queries may correspond to particular nodes in the hierarchy ( geographic areas , ip address prefixes ) , which represent possible ranges . 
in a product structure ,likely queries are boxes intersections of ranges of each dimension .this is observed in example [ eg : motivate ] : the queries mentioned are based on the network hierarchy .while samples have been shown to work very well for queries which resemble the sums of _ arbitrary _ subsets of keys , they tend to be less satisfactory when restricted to range queries .given the same summary size , samples can be out - performed in accuracy by dedicated methods such as ( multi - dimensional ) histograms , wavelet transforms , and geometric summaries including the popular q - digest .these dedicated summaries , however , have inherent drawbacks : they primarily support queries that are sum aggregates over the original weights , and so other queries must be expressed in terms of this primitive .their accuracy rapidly degrades when the query spans multiple ranges a limitation since natural queries may span several ( time , geographic ) ranges within the same summary and across multiple summaries .dedicated summaries do not provide `` representative '' keys of selected subsets , and require changes to existing code to utilize .of most concern is that they can be very slow to compute , requiring a lot of i / o ( especially as the dimensionality of the data grows ) : a method which gives a highly accurate summary of each hour s data is of little use if it takes a day to build !lastly , the quality of the summary may rely on certain structure being present in the data , which is not always the case .while these summaries have shown their value in efficiently summarizing one - dimensional data ( essentially , arrays of counts ) , their behavior on even two - dimensional data is less satisfying : troubling since this is where accurate summaries are most needed .for example , in the network data example , we are often interested in the traffic volume between ( collections of ) various source and destination ranges .motivated by the limitations of dedicated summaries , and the potential for improvement over existing ( structure - oblivious ) sampling schemes , we aim to design sampling schemes that are both and _ structure - aware_. at the same time , we aim to match the accuracy of deterministic summaries on range sum queries and retain the desirable properties of existing sample - based summaries : unbiasedness , tail bounds on arbitrary subset - sums , flexibility and support for representative samples , and good i / o performance .we introduce a novel algorithmic sampling framework , which we refer to as _probabilistic aggregation _ , for deriving samples .this framework makes explicit the freedom of choice in building a summary which has previously been overlooked . working within this framework , we design _ structure - aware _ sampling schemes which exploit this freedom to be much more accurate on ranges than their structure - oblivious counterparts .= 1em for hierarchies , we design an efficient algorithm that constructs summaries with bounded `` range discrepancy '' . that is , for any range , the number of samples deviates from the expectation by less than .this scheme has the minimum possible variance on ranges of any unbiased sample - based summary . for ordered sets ,where the ranges consist of all intervals , we provide a sampling algorithm which builds a summary with range discrepancy less than .we prove that this is the best possible for any sample . 
for -dimensional datasets , we propose sampling algorithms where the discrepancy between , the expected number of sample points in the range , and the actual number is , where is the sample size .this improves over structure - oblivious random sampling , where the corresponding discrepancy is .discrepany corresponds to error of range - sum queries but sampling has an advantage over other summaries with similar error bounds : the error on queries which span multiple ranges grows linearly with the number of ranges for other summaries but has square root dependence for samples .moreover , for samples the expected error never exceeds ( in expectation ) regardless of the number of ranges .* construction cost .* for a summary structure to be effective , it must be possible to construct quickly , and with small space requirements .our main - memory sampling algorithms perform tasks such as sorting keys or ( for multidimensional data ) building a kd - tree .we propose even cheaper alternatives which perform two read - only passes over the dataset using memory that depends on the desired summary size ( and independent of the size of the data set ) . when the available memory is , we obtain a sample that with high probability is close in quality to the algorithms which store and manipulate the full data set . *empirical study . * to demonstrate the value of our new structure - aware sampling algorithms , we perform experiments comparing to popular summaries , in particular the wavelet transform , -approximations , randomized sketches and to structure - oblivious random sampling .these experiments show that it is possible to have the best of both worlds : summaries with equal or better accuracy than the best - in - class , which are flexible and dramatically more efficient to construct and work with .this section introduces the `` probabilistic aggregation '' technique . for more background ,see the review of core concepts from sampling and summarization in appendix [ prelim : sec ] .our data is modeled as a set of ( key , weight ) pairs : each key has weight .a sample is a random subset .a sampling scheme is ipps when , for expected sample size and derived threshold , the sample includes key with probability .ipps can be acheived with _poisson _ sampling ( by including keys independently ) or sampling , which allows correlations between key inclusions to achieve improved variance and fixed sample size of exactly .there is not a unique sampling scheme , but rather there is a large family of sampling distributions : the well - known `` reservoir sampling '' is a special case of on a data stream with uniform weights .classic tail bounds , including chernoff bounds , apply both to and poisson sampling .structure is specified as a _ range space _ with being the key domain and _ ranges _ that are subsets of . the _ discrepancy_ of a sample on a range is the difference between the number of sampled keys and its expectation .we use to denote the maximum discrepancy over all ranges .disrepancy means that the error of range - sum queries is at most .if a sample is poisson or , it follows from chernoff bounds that the expected discrepancy is and ( from bounded vc dimension of our range spaces ) that the maximum range discrepancy is with probability . with structure - aware sampling , we aim for much lower discrepancy . 
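To make the ipps primitive concrete, the sketch below computes inclusion probabilities of the assumed standard form p_i = min(1, w_i / tau), choosing the threshold tau by binary search so that the probabilities sum to the target sample size k, and then answers a subset-sum query with the Horvitz-Thompson rule mentioned earlier. The paper's own threshold routine (referenced further below as get_tau_k) may proceed differently; this is just one simple, correct way, and all names here are ours.

```python
# Sketch: ipps probabilities p_i = min(1, w_i / tau) for expected sample size k.
# tau is found by binary search (the paper's get_tau_k may differ).
# Assumes strictly positive weights.
def ipps_probabilities(weights, k, iters=100):
    assert 0 < k <= len(weights)
    def expected_size(tau):
        return sum(min(1.0, w / tau) for w in weights)
    lo, hi = 0.0, max(weights) * len(weights) / k   # bracket: expected_size(hi) <= k
    for _ in range(iters):                          # expected_size is non-increasing in tau
        mid = (lo + hi) / 2.0
        if expected_size(mid) > k:
            lo = mid                                # too many expected samples: raise tau
        else:
            hi = mid
    tau = hi
    return {i: min(1.0, w / tau) for i, w in enumerate(weights)}, tau

# Horvitz-Thompson estimate of a subset-sum from a sample: each sampled key i
# contributes w_i / p_i, which makes the estimate unbiased.
def subset_sum_estimate(sampled_keys, weights, probs, predicate):
    return sum(weights[i] / probs[i] for i in sampled_keys if predicate(i))
```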
* defining probabilistic aggregation .* let be the vector of sampling probabilities .we can view a sampling scheme that picks a set of keys as operating on .vector is incrementally modified : setting to 1 means is included in the sample , while means it is omitted . when all entries are set to 0 or 1 ,the sample is chosen ( e.g. poisson sampling independently sets each entry to 1 with probability ) . to ensure a sample ,the current vector must be a _probabilistic aggregate _ of the original .a random vector ^n ] if the following conditions are satisfied : * ( _ agreement in expectation _ ) =p^{(0)}_i$ ] , * ( _ agreement in sum _ ) , and * ( _ inclusion - exclusion bounds _ ) & \quad \leq \quad \prod_{i\in j } p^{(0)}_i \\\mbox{(e ) : } & & { \textsf{e}}[\prod_{i\in j } ( 1-p^{(1)}_i ) ] & \quad \leq \quad \prod_{i\in j } ( 1-p^{(0)}_i ) \ .\end{aligned}\ ] ] ; ; ; ; * return * we obtain samples by performing a sequence of probabilistic aggregations , each setting at least one of the probabilities to 1 or 0 . in appendix [ probaggapp ]we show that probablistic aggregations are transitive and that set entries remain set .thus , such a process must terminate with a sample .* pair aggregation . *our summarization algorithms perform a sequence of simple aggregation steps which we refer to as _ pair aggregations _ ( algorithm [ pairagg : alg ] ) .each pair aggregation step modifies only two entries and sets at least one of them to .the input to _pair aggregation _ is a vector and a pair with each .the output vector agrees with on all entries except and one of the entries is set to or .it is not hard to verify , separately considering cases and , that pair - aggregate correctly computes a probabilistic aggregate of its input , and hence the sample is .pair aggregation is a powerful primitive .it produces a sample of size exactly .. this can be ensured ( deterministically ) by choosing as described in algorithm [ get_tau_k : alg ] . ]observe that the choice of which pair to aggregate at any point can be arbitrary and the result is still a sample .this observation is what enables our approach .we harness this freedom in pair selection to obtain samples that are structure aware : intuitively , by choosing to aggregate pairs that are `` close '' to each other with respect to the structure , we control the range impact of the `` movement '' of probability mass .we use pair aggregation to make sampling structure - aware by describing ways to pick which pair of items to aggregate at each step . for now , we assume the data fits in main - memory , and our input is the list of keys and their associated ipps probabilities .we later discuss the case when the data exceeds the available memory . for hierarchy structures ( keys associated with leaves of a tree and contains all sets of keys under some internal node ) we show how to obtain samples with ( optimal ) maximum range discrepancy .there are two special cases of hierarchies : ( i ) _ disjoint ranges _( where is a partition of )captured by a flat 2-level hierarchy with parent nodes corresponding to ranges and ( ii ) _ order _ where there is a linear order on keys and is the set of all prefixes the corresponding hierarchy is a path with single leaf below each internal node . 
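One way to implement the pair-aggregation step just described is sketched below: only entries i and j of the probability vector change, their sum is preserved, the expectation of each entry is preserved, and at least one of the two entries ends at 0 or 1. Which pair to pass in is exactly the structure-aware choice discussed next; the function and variable names are ours, not the paper's.

```python
# Sketch of one pair-aggregation step, consistent with the description above.
# Assumes the pair is not already both set to 1 (such a pair would not be selected).
import random

def pair_aggregate(p, i, j, rng=random):
    s = p[i] + p[j]
    if s <= 1.0:
        # one entry drops to 0, the other absorbs the combined mass s;
        # probabilities chosen so that E[p'[i]] = p[i] and E[p'[j]] = p[j]
        if rng.random() < (p[j] / s if s > 0 else 0.5):
            p[i], p[j] = 0.0, s
        else:
            p[i], p[j] = s, 0.0
    else:
        # one entry is rounded up to 1, the other keeps the remainder s - 1
        if rng.random() < (1.0 - p[j]) / (2.0 - s):
            p[i], p[j] = 1.0, s - 1.0
        else:
            p[i], p[j] = s - 1.0, 1.0
    return p
```

Because every step preserves the sum of the probabilities, repeatedly applying it until all entries are 0 or 1 yields a sample whose size equals the integer sum of the initial ipps probabilities.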
for order structures where is the set of `` intervals '' ( all consecutive sets of keys )we show that there is always a sample with maximum range discrepancy and prove that this is the best possible .* disjoint ranges : * pair selection picks pairs where both keys belong to the same range .when there are multiple choices , we may choose one arbitrarily .only when there are none do we select a pair that spans two different ranges ( arbitrarily if there are multiple choices ) .* hierarchy : * pair selection picks pairs with lowest ( lowest common ancestor ) .that is , we pair aggregate if there are no other pairs with an that is a descendant of . the kd - hierarchy algorithm ( algorithm [ kdhierarchy : alg ] ) aims to minimize the discrepancy within a product space .figure [ fig : kdpart_all][fig : kdpart ] shows a two - dimensional set of keys that are uniformly weighted , with sampling probabilities , and the corresponding kd - tree : a balanced binary tree of depth 6 .the cuts alternate vertically ( red tree nodes ) and horizontally ( blue nodes ) .right - hand children in tree correspond to right / upper parts of cuts and left - hand children to left / lower parts .we now analyze the resulting summary , based on the properties of the space partitioning performed by the kd - tree .we use to refer interchangeably to a node in the tree and the hyperrectangle induced by node in the tree .a node at depth in the tree has probability mass .we refer to the set of minimum depth nodes that satisfy as s - leaves ( for _ super leaves _ ) ( is an s - leaf iff and its immediate ancestor has .the depth of an s - leaf ( and of the hierarchy when truncated at s - leaves ) is at most .consider the hierarchy level - by - level , top to bottom .each level where the axis was not perpendicular to the hyperplane at most doubles the number of nodes that intersect the hyperplane .levels where the partition axis is perpendicular to the hyperplane do not increase the number of intersecting nodes . because axes were used in a round - robin fashion , the fraction of levels that can double the number of intersecting nodes is .hence , when we reach the s - leaf level , the number of intersecting nodes is at most .an immediate corollary is that the boundary of an axis - parallel box may intersect at most s - leaves .we denote by this set of boundary s - leaves .let be a minimum size collection of nodes in the hierarchy such that no internal node contains a leaf from .informally , consists of the ( maximal ) hyperrectangles which are fully contained in or fully disjoint from .figure [ fig : kdpart_all][fig : kdpart_query ] illustrates a query rectangle ( dotted red line ) over the data set .the maximal interior nodes contained in ( ) are marked in solid colors ( and green circles in the tree layout ) and the boundary s - leaves in light stripes ( magenta circles in the tree layouts ) .for example , the magenta rectangle corresponds to the r - l - l - r path .each node in must have a sibling such that the sibling , or some of its descendants , are in .if this is not the case , then the two siblings can be replaced by their parent , decreasing the size of , which contradicts its minimality .we bound the size of by bounding the number of potential siblings .the number of ancestors of each boundary leaf is at most the depth which is .thus , the number of potential siblings is at most the number of boundary leaves times the depth . 
by substituting a bound on , we obtain the stated upper bound .these lemmas allow us to bound the estimation error , by applying lemma [ hierarcyunionbound : lemma ] .that is , for each such that we have a 0/1 random variable that is 1 with probability and is otherwise ( the value is if includes samples and otherwise ) . for each , we have a random variable that is 1 with probability .this is the probability that contains one key from ( can contain at most one key from each s - leaf ) .the sample is over these random variables with
in processing large quantities of data , a fundamental problem is to obtain a summary which supports approximate query answering . random sampling yields flexible summaries which naturally support subset - sum queries with unbiased estimators and well - understood confidence bounds . classic sample - based summaries , however , are designed for arbitrary subset queries and are oblivious to the structure in the set of keys . the particular structure , such as hierarchy , order , or product space ( multi - dimensional ) , makes _ range queries _ much more relevant for most analysis of the data . dedicated summarization algorithms for range - sum queries have also been extensively studied . they can outperform existing sampling schemes in terms of accuracy on range queries per summary size . their accuracy , however , rapidly degrades when , as is often the case , the query spans multiple ranges . they are also less flexible being targeted for range sum queries alone and are often quite costly to build and use . in this paper we propose and evaluate variance optimal sampling schemes that are _ structure - aware_. these summaries improve over the accuracy of existing _ structure - oblivious _ sampling schemes on range queries while retaining the benefits of sample - based summaries : flexible summaries , with high accuracy on both range queries and arbitrary subset queries . [ theorem]lemma [ theorem]definition
there is perhaps no topic in biology more fascinating and yet more mysterious than the origin of life . with only one example of organic life to date, we have no way of knowing whether the appearance of life on earth was an extraordinarily rare event , or it if was a commonplace occurrence that was unavoidable given earth s chemistry .were we to replay earth s history one thousand times , how often would it result in a biosphere ? and among the cases where life emerged , how different or how similar would the emergent biochemistries be ?the role of historical contingency has been studied extensively in the evolution of life ( see , e.g. , and references therein ) . herewe endeavour to ask an even more fundamental question : what is the role of historical contingency in the origin of life ?the best evidence suggests that the first self - replicators were rna - based , although other first self - replicators have been proposed .given the large number of uncertainties concerning the possible biochemistry that would lead to the origin of self - replication and life , either on earth or other planets , researchers have begun to study the process of emergence in an abstract manner .tools from computer science , information theory , and statistical physics have been used in an attempt to understand life and its origins at a fundamental level , removed from the peculiarities of any particular chemistry .investigations along those lines may reveal to us general laws governing the emergence of life that are obscured by the nature of our current evidence , point us to experiments that probe such putative laws , and get us closer to understand the inevitability or perhaps the elusiveness of life itself .at the heart of understanding the interplay between historical contingency and the origin of life lies the structure of the fitness landscapes of these first replicators , and how that landscape shapes the biomolecules subsequent evolution .while the fitness landscapes of some rna - based genotypes have been mapped ( and other rna replicators have been evolved experimentally ) , in all such cases evolution already had the chance to shape the landscape for these organisms and dictate " , as it were , the sequences most conducive for evolution .the structure of primordial fitness landscapes , in comparison , is entirely unknown . while we know , for example , that in realistic landscapes highly fit sequences are genetically close to other highly fit sequences ( this is the essence of kauffman s central massif " hypothesis , see also ) , we suspect that this convenient property which makes fitness landscapes traversable " is an outcome of evolution , in particular the evolution of evolvability . what about primordial landscapes not shaped by evolution? how often are self - replicators in the neighborhood of other self - replicators ?are self - replicators evenly distributed among sequences , or are there ( as in the landscapes of evolved sequences ) vast areas devoid of self - replicators and rare ( genetic ) areas that teem with life ?can evolution easily take hold on such primordial landscapes ?these are fundamental questions , and they are central to our quest to understand life s origins . if the fitness landscape consist of isolated fitness networks , as found in some modern rna fitness landscapes , then one may expect the effects of historical contingency to be strong , and the future evolution of life to depend on the characteristics of the first replicator . 
however , if there exist `` neutral networks '' that connect genotypes across the fitness landscape ( as found in computational rna landscapes ) then the effect of history may be diminished .can we learn more about these options ?recently , we have used the digital evolution platform avida as a model system to study questions concerning the origin of life . in avida ,a population of self - replicating computer programs undergo mutation and selection , and are thus undergoing darwinian evolution explicitly .because the genomic content required for self - replication is non - trivial , most avidian genomes are non - viable , in the sense that they can not form colonies " and thus propagate information in time .thus , viable self - replicators are rare in avida , with their exact abundance dependent on their information content .further work on these rare self - replicators showed that while most of them were evolvable to some degree , their ability to improve in replication speed or evolve complex traits greatly varied .furthermore , the capability of avidian self - replicators to evolve greater complexity was determined by the _ algorithm _ they used for replication , suggesting that the future evolution of life in this digital world would be highly contingent on the original self - replicator .however , all of this research was performed without a complete knowledge of the underlying fitness landscape , by sampling billions of sequences of a specific genome - size class , and testing their capacity to self - replicate .sequences used to seed evolution experiments in avida are usually hand - written , for the simple reason that it was assumed that they would be impossible to find by chance .indeed , a typical hand - written ancestral replicator of length 15 instructions is so rare were it the only replicator among sequences of that length that it would take a thousand processors , executing a million sequences per second each in parallel , about 50,000 years of search to find it .however , it turns out that shorter self - replicators exist in avida .an exhaustive search of all 11,881,376 sequences of length , as well as all 308,915,776 sequences of length previously revealed no self - replicators .however , in that investigation six replicators of length turned up in a random search of a billion sequences of that length , suggesting that perhaps there are replicators among the 8 billion or so sequences of length . here ,we confirm that the smallest replicator in avida must have 8 instructions by testing all sequences , but also report mapping the entirety of the landscape ( sequences ) to investigate the fitness landscape of primordial self - replicators of that length .mapping all sequences in this space allows us to determine the relatedness of self - replicators and study whether they occur in clusters or evenly in sequence space , all without the usual bias of studying only sequences that are among the chosen " already . of the almost 209 billion possible genomes, we found that precisely 914 could undergo self - replication and reproduction , and thus propagate their information forward in time in a noisy environment .we found that these 914 primordial replicators are not uniformly distributed across genetic space , but instead cluster into two broad groups ( discovered earlier in larger self - replicators ) that form 13 main clusters . 
by analyzing how these groups ( and clusters ) evolve , we are able to study how the primordial landscape shapes the evolutionary landscape , and how chance events early in evolutionary history can shape future evolution .we used avida ( version 2.14 ) as our computational system to study the origin of self - replication .avida is a digital evolution system in which a population of computer programs compete for the system resources needed to reproduce ( see for a full description of avida ) .each of these programs is self - replicating and consists of a genome of computer instructions that encode for replication . during this asexual reproduction process, mutations can occur , altering the speed at which these programs reproduce .as faster replicators will out - reproduce slower replicators , selection then leads to the spread of faster replicators .because avidian populations undergo darwinian evolution , avida has been used to explore many complex evolutionary processes .the individual computer programs in avida are referred to as avidians .they consist of a genome of computer instructions and different containers to store numbers .each genome has a defined start point and instructions are sequentially executed throughout the avidian s lifetime .some of these instructions allow the avidian to start the replication process , copy their genome into a new daughter avidian , and divide into two avidians ( see for the full avida instruction set ) . during this replication process, mutations can occur , causing the daughter avidian s genome to differ from its parent .these mutations can have two broad phenotypic outcomes .first , mutations can alter the number of instruction executions required for replication ; these mutations can increase or decrease replication speed and thus fitness .second , the fixation of multiple mutations can lead to the evolution of complex traits in avida .these traits are the ability to input binary numbers from the avida environment , perform boolean calculations on these numbers , and then output the result of those calculations . in the experiments described here, avidians could evolve any of the nine one- and two - input logic functions ( not , nand , ornot , and , or , andnot , nor , xor , and equals ) .this is usually referred to as the logic-9 " environment .the ability to perform the above boolean logic calculations ( possess any of these nine traits ) , increases its bearer s replication speed by increasing the number of genome instructions the bearer can execute per unit of time .the more instructions an avidian can execute during a unit of time , the fewer units of time that are required for self - replication .these units of time are referred to as updates ( they are different from generations ) . during each update ,the entire population will execute instructions , where is the current population size .the ability to execute one instruction is called a single instruction processing " unit , or sip .if the population is monoclonal , each avidian will receive , on average , 30 sips .however , every avidian also has a _ merit _ which determines how many sips they receive per update .the greater the merit , the more sips that individual receives . 
the ability to perform the nine calculations multiply an individual s merit by the following values : not and nand : 2 , ornot and and : 4 , andnot and or : 8 , nor and xor : 16 , and equals : 32 .the avida world consists of a fixed - size toroidal grid of cells .the total number of cells sets the maximum population size .each cell can be occupied by at most one avidian . after successful reproduction , a new avidianis placed into one of the world s cells . in a well - mixed population , any cell in the population may be chosen . in a population with spatial structure ,the new avidian is placed into one of the nine cells neighboring the parent avidian ( including the cell occupied by the parent ) . if there are empty cells available , the new avidian occupies one of these cells .if all possible cells are occupied , a cell is chosen at random , its occupant removed from the population , andthe new avidian then occupies this cell .this random removal implements a form of genetic drift in avida . for the experiments performed here ,the population structure was spatial . in order to map the entire avida fitness landscape , we constructed all genomes and analyzed whether they could self - replicate .this operation was performed by running these genomes through avida s _ analyze mode _( described in the data analysis section ) and checking whether these genomes gave their bearer non - zero fitness , and whether they were _viable_. next , we described the fitness landscape by looking for the presence of genotype clusters among the discovered self - replicators .we constructed a network of the fitness landscape where each genotype is a node and the length between two nodes is the square of the hamming distance between the genotypes .we also examined the frequency of single instruction motifs ( monomers ) , as well as double instruction motifs ( dimers ) . to test the evolvability of the 914 self - replicators, we evolved 10 monoclonal populations of each replicator with 3,600 individuals for updates in the logic-9 environment ( see above ) .point mutations occurred at a rate of mutations per copied instruction , while single - instruction insertion and deletion mutations both occurred at a rate of mutations per division . at the end of each population s evolution, we analyzed the most abundant genotype from each population . in order to test the role of historical contingency when the appearance of self - replicators was frequent , we ran experiments where we evolved all 914 self - replicators in the same population ( a primordial soup " of replicators ) . in each population, we placed 10 individuals of each self - replicator .the ancestral population then had 9140 individuals and could expand to individuals at maximum capacity .these populations evolved for updates in the logic-9 environment .mutation rates were the same as in the previous evolvability experiments .this experiment was performed in 200 replicates . to identify the ancestral genotype that outcompeted all of the other genotypes , we isolated the most abundant genotype at the end of the experiment and traced its evolutionary history back to its original ancestor . statistics on different avidianswere calculated using avida s _ analyze mode_. 
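As a side note before describing analyze mode: a tiny sketch of how the task bonuses listed above combine into an avidian's merit. The bonus factors come directly from the values in the text; treating them as cumulative multiplications is our reading of the avida merit scheme, and the names below are ours.

```python
# Bonus factors for the logic-9 environment, as listed above; performing a
# task multiplies merit by its factor (cumulative multiplication is assumed).
LOGIC9_BONUS = {"not": 2, "nand": 2, "ornot": 4, "and": 4,
                "andnot": 8, "or": 8, "nor": 16, "xor": 16, "equals": 32}

def merit_multiplier(performed_tasks):
    m = 1
    for task in performed_tasks:
        m *= LOGIC9_BONUS[task]
    return m

# e.g. an avidian performing NOT and OR receives a 2 * 8 = 16-fold merit boost
```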
in analyze mode , a single genotype is examined in isolation as it executes the instructions in its genome , runs through its life - cycle , and possibly creates an offspring .this confers on experimenters the ability to calculate the fitness for an individual avidian ( number per offspring generated per unit time ) and examine other characteristics , such as whether it can reproduce perfectly ( all offspring are genetically identical to each other and the mother genome ) or which traits this avidian possesses .analyze mode was also used to calculate quantities such as genome size .avida s analyze mode code is available along with the entire avida software at https://github.com/devosoft/avida. across - population means and standard errors were calculated using the numpy python software package .the clusters of replicators were rendered using neato , which is an undirected graph embedder that creates a layout similar to that of multi - dimensional scaling .figures were plotted using the matplotlib python package .of the ( approximately 209 billion ) genomes with 8 instructions , we found 914 that could self - replicate .we also searched for self - replicators with seven - instruction genomes but found none , establishing that is the minimal self - replicator length in avida . by discovering all self - replicators in this fitness landscape, we can now calculate the precise information content required for self - replication in avida , using previously - established methods , as mers ( a mer " is a unit of entropy or information , normalized by the number of states that each instruction can take on , see ) .our previous estimate of the information content of length-8 replicators , based on finding 8 replicators among a billion random samples , was mers .to study the genetic structure of these replicators , we obtained the distribution of instructions ( monomers ) across the replicators genomes ( fig .[ fig : dist]a ) .this distribution is biased , as every single replicator contained at least the three instructions required for replication : h - copy , h - alloc , and h - divide ( denoted by , , and , respectively , see the mapping between instructions and the letter mnemonic in table 1 in the appendix ) .in addition , 75% of replicators have a ( nop - b ) , an ( if - label ) , and a ( mov - head ) instruction , while 25% have a ( nop - c ) , an ( jmp - head ) , and an ( swap ) instruction in their sequence .we also analyzed the distribution of sequential instruction pairs ( dimers ) and found that while most dimers do not occur in any self - replicators , the dimers and occur in approximately 70% of the replicators ( fig .[ fig : dist]b ) and are highly over - represented .other dimers such as , , and dimers containing ,,,,, , and occur in approximately 20%-30% of replicators .-axis ( the proportion of fg , gb , rc , and hc dimers are labeled . 
) ] if there were no constraint on the genetic architecture , we would expect self - replicators to be distributed uniformly across the fitness landscape .however , we found instead that self - replicators are not distributed uniformly in the landscape , but are grouped into 41 distinct genotype clusters , shown in fig .[ fig : motifs ] .the dimer distribution function we analyzed above separates primordial self - replicators into two major categories : those that carry fg / gb motifs ( fg - replicators " for short ) , as opposed to those carrying hc / rc motifs ( hc - replicators " ) instead .this separation into two classes was noted earlier from a smaller sample of the landscape , which we corroborate here . by scanning the entire landscapewe can confirm that these two types are the only types of self - replicators in the landscape , and the clusters of genotypes are homogeneous in the sense that fg - replicators and hc - replicators do not intermix ( fig .[ fig : motifs ] ) .[ fig : clusters ] shows four examples of clusters pulled from the landscape , showing that they are tightly interconnected .many self - replicators are isolated and 20 of these clusters consist of only 1 genotype . however , most self - replicators are located in large clusters .almost 75% of the self - replicators are located in four major clusters with 212 , 199 , 165 , and 95 genotypes each , and almost 96% are contained within the 13 clusters that have at least 14 members .there is thus a distinct gap in the cluster size distribution , with small clusters ranging from 1 - 3 connected members , while the next largest size class is 14 . .a : a 23-node cluster of hc - replicators , b : the third - largest cluster in the network : an fg - replicator cluster with 165 members .c : another large fg - replicator cluster with 96 genotypes .d : a 15-node hc - replicator cluster . , width=384 ] we find that clusters of replicators are highly connected among each other , with a degree distribution that is sharply peaked around the mean degree of a cluster ( see fig .[ fig : edge_dist ] ) , which is similar to what is seen in neutral networks of random rna structures .we find that fg - replicators form the denser clusters . .as each cluster has a particular edge distribution , the distributions of the two different kinds of replicators ( fg - types and hc - types ) do not overlap .red : fg - replicators , blue : hc - replicators , scaledwidth=80.0% ] the 914 self - replicators we found vary in fitness , but consistently we find that the fittest self - replicators contain the fg / gb motifs and many of the lowest fitness self - replicators contain the hc / rc motifs . in fig .[ fig:3dplot ] we show the fitness as a function of the mds - coordinate . in that figure, color denotes fitness according to the scale on the right .the highest peaks and plateaus all belong to fg - replicators .the hc - replicators appear as a valley ( dark blue ) bordering the group of fg - replicators . , where x - y coordinates are the same as the network in fig [ fig : motifs ] .] 
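The clustering and degree statistics discussed above can be obtained from the genotype list alone. The sketch below is a simplified reconstruction, not the paper's exact pipeline: it joins two genotypes by an edge when they differ by a single instruction (Hamming distance 1) and takes clusters to be the connected components of that graph; the adjacency rule actually used for fig. [fig:motifs] may differ.

```python
# Minimal sketch: build a genotype graph (edges between genomes within a Hamming
# distance cutoff), extract connected components as clusters, and report cluster
# sizes and per-genotype degrees.  The cutoff of 1 is an illustrative assumption.
from collections import defaultdict

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_edges(genomes, cutoff=1):
    edges = defaultdict(set)
    for i, gi in enumerate(genomes):
        for gj in genomes[i + 1:]:
            if hamming(gi, gj) <= cutoff:
                edges[gi].add(gj)
                edges[gj].add(gi)
    return edges

def connected_components(genomes, edges):
    seen, components = set(), []
    for g in genomes:
        if g in seen:
            continue
        stack, comp = [g], []
        seen.add(g)
        while stack:
            node = stack.pop()
            comp.append(node)
            for nb in edges[node] - seen:
                seen.add(nb)
                stack.append(nb)
        components.append(comp)
    return components

# Hypothetical usage with a list `replicators` of the 914 genome strings:
# edges = build_edges(replicators)
# clusters = connected_components(replicators, edges)
# sizes = sorted((len(c) for c in clusters), reverse=True)   # cluster-size distribution
# degrees = [len(edges[g]) for g in replicators]             # edge (degree) distribution
```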
in order to explore the subsequent role of historical contingency after the emergence of life , we tested the evolvability of all 914 self - replicators .first , we evolved each replicator separately .almost all self - replicators could evolve increased fitness ( fig .[ fig : evol]b ) .however , there was a wide range of mean relative fitness ; fg - replicators clearly undergo more adaptation than hc - replicators .to explain why fg - replicators were more evolvable , we first looked at the evolution of genome size .replicators with the fg / gb motifs grew larger genomes than replicators with the hc / rc motifs ( fig .[ fig : evol]c ) . as larger genomes can allow for the evolution of novel traits in avida , and thus fitness increases , we next checked whether the fg - replicators had evolved more _ computational traits _ than the hc - replicators . in avida , traits are snippets of code that allow the avidian to gain energy from the environment , by performing logic operations on binary numbers that the environment provides ( see methods ) .replicators with the fg / gb motifs did evolve more novel traits than replicators with the hc / rc motifs ( fig .[ fig : evol]d ) .in fact , only fg - replicators evolved traits in these experiments .self - replicators before and after evolution . a : ancestral fitness of all replicators .b : log mean relative fitness after updates of evolution .c : final genome size after updates of evolution .d : number of evolved traits after updates of evolution . in all plots ,fg - replicators are in red and hc - replicators are in blue .error bars ( black ) are twice the standard error of the mean .all plots are sorted in increasing order . ] finally , we looked at the effect of historical contingency when all 914 replicators were competed against each other in one population .after 50,000 updates , we identify the most abundant genotype in 200 replicate experiments and reconstruct the line - of - descent to determine which of the replicators gave rise to it ( we call that replicator the progenitor " ) .most replicators did not emerge as the progenitor of life in these experiments ( fig .[ fig : soup ] ) .three genotypes , vvwfgxgb , vwvfgxgb , and wvvfgxgb , outcompete the other genotypes in 37 , 49 , and 45 populations out of 200 , respectively , or in about 65% of the competitions .the other progenitors of life were not distributed randomly among the other self - replicators either ; most of them were present in the same clusters as the three genotypes from above.thus , while history is a factor in which of the replicators becomes the seed of all life in these experiments , more than half the time the progenitor is one of the three highest - fitness sequences .thus , life predominantly originates from the highest peaks of the primordial landscape .here , we tested the role of fitness landscape structure and historical contingency in the origin of self - replication in the digital evolution system avida .we characterized the complete fitness landscape of all minimal - genome self - replicators and found that viable genotypes form clusters in the fitness landscape .these self - replicators can be separated into two replication classes , as we previously found for self - replicators with larger genomes .we also found that one of these replication classes ( the fg - replicators ) is more evolvable than the other , although the evolvability of each genotype varies .finally , we show that , when all self - replicators are competed against each other in a digital 
primordial soup " , three genotypes win over 65% of the competitions and many of the other winners " come from the same genotype cluster . in a previous study with avida , we found that 6 out of spontaneously - emergent genomes with 8 instructions could self - replicate . here, we found that 914 out of genomes could replicate , consistent with our previous results .this concordance suggests that the information - theoretic theory of the emergence of life , originally proposed by adami and tested with avida by adami and labar , can accurately explain the likelihood of the chance emergence of life .thus , the emergence of self - replication , and life is dependent on the information required for such life . by enumerating all of the length-8 self - replicators, we were able to show that self - replicators are not uniformly distributed across the fitness landscape and that viable genotypes cluster together .the size of these clusters varies : there are few clusters with many genotypes and many clusters with few genotypes , but the cluster size distribution has a gap .the edge distribution of the clusters is similar to what has been found in random rna structures , and the mean degree differs between replicator types .genotypes with different replication mechanisms were in different clusters with no evolutionary trajectory between the two .empirical studies of rna - based fitness landscapes , biochemical model systems for the origin of life , also show that these landscapes consist of isolated fitness peaks with many non - viable genotypes .the fact that both rna - based landscapes and these digital landscapes have similar structures suggests that the evolutionary patterns we see in these avida experiments may be similar to those one would have seen in the origin of life on earth .the presence of isolated genotype clusters in both digital and rna fitness landscapes further suggests that the identity of the first self - replicator may determine life s future evolution , as other evolutionary trajectories are not accessible .however , if populations can evolve larger genomes , non - accessible evolutionary trajectories may later become accessible , as mathematical results on the structure of high - dimensional fitness landscapes suggest . to test for the effects of historical contingency in the origin of self - replication in avida , we evolved all of the 914 replicators in an environment where they could increase in genome size and evolve novel traits .previously , we found that the evolvability of spontaneously - emergent self - replicators varied and was determined by their replication mechanism .however , those genotypes possessed fixed - length genomes of 15 instructions . 
here , we confirmed that the genotype of the first self - replicator , and more specifically the replication mechanism of the first replicator , determine the future evolution of novel traits in avida .the fg - replicators showed high rates of trait evolution , while hc - replicators failed to evolve novel traits in most populations .however , we did not detect any trade - off in evolvability , as we previously found .this difference is likely due to their differences in capacity to increase in genome size , as genome size increases enhance the evolution of novel traits and fitness increases in avida .would a similar dynamic occur in a hypothetical population of rna - based replicators ?while experimental evolution of rna replicators has been performed , the selective environments resulted in genome size decreases .it is unknown how simple rna replicators vary in their evolvability. we also performed experiments to test for the role of historical contingency in scenarios where any self - replicator could become the progenitor of digital life . here, we found that only three self - replicators ( or their neighbors in the fitness landscape ) became the last common ancestor in the majority of populations .this suggests a lack of contingency in the ancestral self - replicator , but emphasizes the role of the ancestral genotype in determining its future evolution .if life emerges rarely , then its future evolution will be determined by the specific genotype that first emerges , as shown from our first set of evolvability experiments ( fig .[ fig : evol ] ) .however , if simple self - replicators emerge frequently , then the future evolution is determined by the evolvability of the fittest replicators , a sort of clonal interference among possible progenitors of life . in this case , the self - replicators that most successfully invaded the population happened to also be of the type that evolved the largest genomes and most complex traits .however , it can be imagined that the opposite trend could occur , and then the progenitor of life would limit the future evolution of biological complexity .in this work we have performed the first complete mapping of a primordial sequence landscape in which replicators are extremely rare ( about one replicator per 200 million sequences ) and found two functionally inequivalent classes of replicators that differ in their fitness as well as evolvability , and that form distinct ( mutationally disconnected ) clusters in sequence space . in direct evolutionary competition ,only the highest - fitness sequences manage to repeatedly become the common ancestor of all life in this microcosm , showing that despite significant diversity of replicators , historical contingency plays only a minor role during early evolution . while it is unclear how the results we obtained in this digital microcosm generalize to a biochemical microcosms , we are confident that they can guide our thinking about primordial fitness landscapes .the functional sequences we discovered here are extremely rare , but likely not as rare as putative biochemical primordial replicators . however , from a purely statistical point of view , it is unlikely that a primordial landscape consisting of sequences that are several orders of magnitude more rare would look qualitatively different , nor would we expect our results concerning historical contingency to change significantly . 
after all , random functional rna sequences ( but not replicators , of course ) within a computational world , chosen only for their ability to fold , show similar clustering and degree distributions as we find here .follow - up experiments in the much larger landscape ( currently under way ) will reveal which aspects of the landscape are specific , and which ones are germane , in this digital microcosm .a comparison between fitness landscapes across a variety of evolutionary systems , both digital and biochemical , will further elucidate commonalities expected for simple self - replicators .as the landscapes for these simple self - replicators are mapped , we expect general properties of primordial fitness landscapes to emerge , regardless of the nature of the replicator .as long as primordial self - replicators anywhere in the universe consist of linear heteropolymers that encode the information necessary to replicate , studies with digital microcosms can give us clues about the origin of life that experiments with terrestrian biochemistry can not deliver .this work was supported in part by the national science foundation s beacon center for the study of evolution in action under cooperative agreement dbi-0939454 .we wish to acknowledge the support of the michigan state university high performance computing center and the institute for cyber enabled research ( icer )[ cols="<,<,<",options="header " , ] blount z , borland c , lenski r. 2008 historical contingency and the evolution of a key innovation in an experimental population of escherichia coli ._ proceedings of the national academy of sciences of the united states of america _ * 105 * , 78997906 .adami c , labar t. 2017 from entropy to information : biased typewriters and the origin of life . in _ from matter to life : information and causality _ ( ed .si walker , pcw davies , gfr ellis ) , pp . 130154 .cambridge , ma : cambridge university press .jimnez ji , xulvi - brunet r , campbell gw , turk - macleod r , chen ia . 2013 comprehensive experimental fitness landscape and evolutionary network for small rna ._ proceedings of the national academy of sciences _ * 110 * , 1498414989 .mills d , peterson r , spiegelman s. 1967 an extracellular darwinian experiment with a self - duplicating nucleic acid molecule . _ proceedings of the national academy of sciences of the united states of america _ * 58 * , 217 .ofria c , bryson dm , wilke co. 2009 avida : a software platform for research in computational evolutionary biology . in _ artificial life models in software _aa maciej komosinski ) , pp .springer london . covert aw , lenski re , wilke co , ofria c. 2013 experiments on the role of deleterious mutations as stepping stones in adaptive evolution. _ proceedings of the national academy of sciences _ * 110 * , e3171e3178 .
while all organisms on earth descend from a common ancestor , there is no consensus on whether the origin of this ancestral self - replicator was a one - off event or whether it was only the final survivor of multiple origins . here we use the digital evolution system avida to study the origin of self - replicating computer programs . by using a computational system , we avoid many of the uncertainties inherent in any biochemical system of self - replicators ( while running the risk of ignoring a fundamental aspect of biochemistry ) . we generated the exhaustive set of minimal - genome self - replicators and analyzed the network structure of this fitness landscape . we further examined the evolvability of these self - replicators and found that the evolvability of a self - replicator is dependent on its genomic architecture . we studied the differential ability of replicators to take over the population when competed against each other ( akin to a primordial - soup model of biogenesis ) and found that the probability of a self - replicator out - competing the others is not uniform . instead , progenitor ( most - recent common ancestor ) genotypes are clustered in a small region of the replicator space . our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life . department of computer science & engineering + beacon center for the study of evolution in action + department of microbiology & molecular genetics + program in ecology , evolutionary biology , and behavior + department of integrative biology + department of physics and astronomy + michigan state university , east lansing , mi 48824 +
the output signals of complex systems exhibit fluctuations over multiple scales which are characterized by absence of dynamic scale , i.e. , scale - invariant behavior .these signals , due to the nonlinear mechanisms controlling the underlying interactions , are also typically non - stationary and their reliable analysis can not be achieved by traditional methods , e.g. , power - spectrum and auto - correlation analysis . on the other hand , the detrended fluctuation analysis ( dfa) has been established as a robust method suitable for detecting long - range power - law correlations embedded in non - stationary signals .this is so , because a power spectrum calculation assumes that the signal is stationary and hence when applied to non - stationary time series it can lead to misleading results .thus , a power spectrum analysis should be necessarily preceded by a test for the stationarity of the ( portions of the ) data analyzed . as for the dfa, it can determine the ( mono ) fractal scaling properties ( see below ) even in non - stationary time series , and can avoid , in principle , spurious detection of correlations that are artifacts of non - stationarities .dfa has been applied with successful results to diverse fields where scale - invariant behavior emerges , such as dna , heart dynamics , circadian rhythms , meteorology and climate temperature fluctuations , economics as well as in the low - frequency ( hz ) variations of the electric field of the earth that precede earthquakes termed seismic electric signals and in the relevant magnetic field variations .monofractal signals are homogeneous in the sense that they have the same scaling properties , characterized locally by a single singularity exponent , throughout the signal .thus , monofractal signals can be indexed by a single global exponent , e.g. , the hurst exponent , which suggests that they are stationary from viewpoint of their local scaling properties ( see ref. and references therein ) . since dfa can measure only one exponent , this method is more suitable for the investigation of monofractal signals .in several cases , however , the records can not be accounted for by a single scaling exponent ( i.e. , do not exhibit a simple monofractal behavior ) . in some examples, there exist crossover ( time- ) scales separating regimes with different scaling exponents .in general , if a multitude of scaling exponents is required for a full description of the scaling behavior , a multifractal analysis must be applied .multifractal signals are intrinsically more complex , and inhomogeneous , than monofractals ( see ref. and references therein ). a reliable multifractal analysis can be performed by the multifractal detrended fluctuation analysis , mf - dfa or by the wavelet transform ( e.g. , see ref. ) .dfa has been applied , as mentioned , to the ses activities .it was found that when dfa is applied to the original time series of the ses activities and artificial ( man - made ) noises , both types of signals lead to a slope at short times ( i.e. , ) lying in the range =1.1 - 1.4 , while for longer times the range =0.8 - 1.0 was determined without , however , any safe classification between ses activities and artificial noises . 
on the other hand ,when employing natural time ( see section ii ) , dfa enables the distinction between ses activities and artificial noises in view of the following difference : for the ses activities the -values lie approximately in the range 0.9 - 1.0 ( or between 0.85 to 1.1 , if a reasonable experimental error is envisaged ) , while for the artificial noises the -values are markedly smaller , i.e. , =0.65 - 0.8 .in addition , mf - dfa has been used and it was found that this multifractal analysis , when carried out in the conventional time frame , did not lead to any distinction between these two types of signals , but does so , if the analysis is made in the natural time domain . versus the scale in natural time : we increase the percentage of data loss p by removing segments of length samples from the signal of fig.[fig1](a ) .the black ( plus ) symbols correspond to no data loss ( p=0 ) , the red ( crosses ) to 30% data loss ( p=0.3 ) , the green ( asterisks ) to 50% data loss ( p=0.5 ) and the blue ( squares ) to 70% data loss ( p=0.7 ) . except for the casep=0 , the data have been shifted vertically for the sake of clarity .the slopes of the corresponding straight lines that fit the data lead to .95 , 0.94 , 0.88 and 0.84 from the top to bottom , respectively .they correspond to the average values of obtained from 5000 surrogate time - series that were generated with the method of surrogate by ma et al. ( see the text ) . ]( a ) , (b ) and (c ) to recognize the signal of fig.[fig1](a ) as true ses activity when considering various percentages of data loss p=0.2 , 0.3 , 0.5 , 0.7 and 0.8 as a function of the length of the contiguous samples removed .the removal of large segments leads to better results when using dfa in natural time ( a ) , whereas the opposite holds when using the conditions of eqs.([eq1 ] ) and ( [ eq2 ] ) for , and ( b ) .the optimum selection ( c ) for the identification of a signal as ses activity consists of a proper combination of the aforementioned procedures in ( a ) and ( b ) , see the text .the values presented have been obtained from 5000 surrogate time - series ( for a given value of and ) , and hence they a have plausible error 1.4% ( ) . ]the aforementioned findings of dfa for ses activities , are consistent with their generation mechanism which could be summarized as follows . beyond the usual intrinsic lattice defects that exist in solids , in ionic solids in particular ,when doped with aliovalent impurities , extrinsic defects are formed for the sake of charge compensation .a portion of these defects are attracted by the nearby impurities , thus forming electric dipoles the orientation of which can change by means of a defect migration . 
in the focal area of an impending earthquakethe stress gradually increases and hence affects the thermodynamic parameters of this migration , thus it may result in a gradual decrease of their relaxation time .when the stress ( pressure ) reaches a _critical _ value , a _ cooperative _ orientation of these dipoles occurs , which leads to the emission of a transient signal .this signal constitutes the ses activity and , since it is characterized by _critical _ dynamics , should exhibit infinitely range temporal correlations .this is consistent with the above findings of dfa that for ses activities .hereafter , we will solely use dfa in view of its simplicity and its ability to reliably classify ses activities .it is the basic aim of this study to investigate how significant data loss affects the scaling behavior of long - range correlated ses activities inspired from the new segmentation approach introduced recently by ma et al to generate surrogate signals by randomly removing data segments from stationary signals with different types of long - range correlations .the practical importance of this study becomes very clear upon considering that such a data loss is inevitable mainly due to the following two reasons : first , failure of the measuring system in the field station , including the electric measuring dipoles , electronics and the data collection system , may occur especially due to lightning .second , noise - contaminated data segments are often unavoidable due to natural changes such as rainfall , lightning , induction of geomagnetic field variations and ocean - earth tides besides the noise from artificial ( man - made ) sources including the leakage currents from dc driven trains .the latter are common in japan where at some sites they may last for almost 70 of the time every day .we clarify , however , that even at such noisy - stations in japan , several clear ses activities have been unambiguously identified during the night ( when the noise level is low ) .in addition , prominent ses activities were recently reported at noise - free stations ( far from industrialized regions ) having long duration , i.e. , of the order of several weeks. as we shall see , our results described in section iv , are in essential agreement with those obtained in the innovative and exhaustive study of ma et al . before proceeding to our results, we will briefly summarize dfa and natural time analysis in section ii , and then present in section iii the most recent ses data along with their analysis in natural time . in sectionv , we summarize our conclusions .we first sum up the original time series and determine the profile ; we then divide this profile of length into ( ) non overlapping fragments of -observations .next , we define the detrended process , in the -th fragment , as the difference between the original value of the profile and the local ( linear ) trend .we then calculate the mean variance of the detrended process : where if , the slope of the log versus log plot , leads to the value of the exponent .( this scaling exponent is a self - similarity parameter that represents the long - range power - law correlations of the signal ) .if =0.5 , there is no correlation and the signal is uncorrelated ( white noise ) ; if .5 , the signal is anti - correlated ; if .5 , the signal is correlated and specifically the case =1.5 corresponds to the brownian motion ( integrated white noise ) .we now summarize the background of natural time . 
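Before turning to natural time, the DFA procedure just outlined can be written compactly. The sketch below uses first-order (linear) detrending and non-overlapping windows, as described above; function and variable names are ours.

```python
# Minimal DFA sketch: integrate the series into a profile, split the profile into
# non-overlapping windows of s samples, remove a local linear trend in each window,
# and read the exponent alpha from the slope of log F(s) versus log s.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # profile
    F = []
    for s in scales:
        n_win = len(y) // s
        var = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)      # local linear trend
            var.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(var)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return np.array(F), alpha

# Sanity check: uncorrelated (white) noise should give alpha close to 0.5.
rng = np.random.default_rng(0)
_, alpha = dfa(rng.standard_normal(10_000), scales=[16, 32, 64, 128, 256, 512])
print(round(alpha, 2))
```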
in a time series comprising events, the natural time serves as an index for the occurrence of the -th event .the evolution of the pair ( ) is studied , where denotes a quantity proportional to the _ energy _ released in the -th event . for dichotomous signals , which is frequently the case of ses activities , the quantity stands for the duration of the -th pulse . by defining , we have found that the variance where , of the natural time with respect to the distribution may be used for the identification of ses activities . in particular, the following relation should hold the entropy in the natural time - domain is defined as it exhibits lesche ( experimental ) stability , and for ses activities ( critical dynamics ) is smaller than the value ) of a `` uniform '' ( u ) distribution ( as defined in refs . , e.g. when all are equal or are positive independent and identically distributed random variables of finite variance . in this case , and are designated and , respectively . ) .thus , . the same holds for the value of the entropy obtained upon considering the time reversal , ( the operator is defined by ) , which is labelled by . in summary , the ses activities , in contrast to the signals produced by man - made electrical sources , when analyzed in natural time ( see section i ) exhibit _ infinitely _ ranged temporal correlations and obey the conditions : and it should be recalled that ses activities are publicized _ only _ when the magnitude of the impending earthquake is ms(ath).0in fig.[fig1](a ) , we depict the ses activity recorded at ioannina station ( northwestern greece ) on 18 april , 1995 . it preceded the 6.6 earthquake on 13 may , 1995 .since this earthquake was the strongest one in greece during the 25 year period 1983 - 2007 , we focus on this example in the next section to present the dfa results . in addition , the most recent ses activity in greece is depicted in figs.[fig1](b),(c ) .this has been recorded at lamia station located in central greece during the period 27 december - 30 december , 2009 .almost three weeks later , two strong earthquakes of magnitude (ath)=5.7 and 5.6 occurred in central greece with an epicenter at 38.4 22.0 ( but see also refs. ) . the two signals in figs .[ fig1](a),(b ) have been classified as ses activities after analyzing them in natural time .in particular , for the signal in fig .[ fig1](a ) , straightforward application of natural time analysis leads to the conclusion that the conditions ( [ eq1 ] ) and ( [ eq2 ] ) are satisfied ( see table i of ref. ) .further , the dfa analysis of the natural representation of this signal gives an exponent .( we also note that the classification of this signal as ses activity had been previously achieved by independent procedures discussed in the royal society meeting that was held during may 11 - 12 , 1995 before the occurrence of the 6.6 earthquake on may 13 , 1995 . ) for the long duration signal of fig .[ fig1](b ) , the procedure explained in detail in ref. was followed .following ma et al , we now describe the segmentation approach used here to generate surrogate signals by randomly removing data segments of length from the original signal .the percentage of the data loss , i.e. 
, the percentage of the data removed , also characterizes the signal .the procedure followed is based on the construction of a binary time - series of the same length as .the values of that correspond to equal to unity are kept , whereas the data of when equals zero are removed .the values of kept , are then concatenated to construct .the binary time - series is obtained as follows : ( i)we first generate the lengths with of the removed segments , by selecting to be the smallest integer so that the total number of removed data satisfies the condition . ( ii ) we then construct an auxiliary time - series with when and when of size .( iii)we shuffle the time - series randomly to obtain .( iv)we then append to obtain : if we keep it , but we replace all with elements of value 0 and one element with value 1. in this way , a binary series is obtained , which has a size equal to the one of the original signal .we then construct the surrogate signal by simultaneously scanning the original signal and the binary series , removing the -th element of if and concatenating the segments of the remaining data to .the resulting signal is later analyzed in natural time , thus leading to the quantities , and as well as to the dfa exponent in natural time .such an example is given in fig .[ figx ] . in what remains we present the results focusing hereafter , as mentioned , on the example of the ses activity depicted in fig .[ fig1](a ) .typical dfa plots , obtained for =200 and and are given in fig .[ fig2 ] . for the sake of comparison, this figure also includes the case of no data loss ( i.e. , ) .we notice a gradual decrease of upon increasing the data loss , which affects our ability to classify a signal as ses activity . in order to quantify ,in general , our ability to identify ses activities from the natural time analysis of surrogate signals with various levels of data loss , three procedures have been attempted : let us call procedure 1 , the investigation whether , resulted from the dfa analysis of the natural time representation of a signal , belongs to the range . if it does , the signal is then classified as ses activity .figure [ fig3](a ) shows that for a given amount of data loss ( =const ) , upon increasing the length of the randomly removed segment , the probability of achieving , after making 5000 attempts ( for a given value of and ) , the identification of the signal as ses activity is found to gradually increase versus at small scales and stabilizes at large scales .for example , when considering the case of data loss ( magenta color in fig .[ fig3](a ) ) the probability is close to for =50 ; it increases to for =100 and finally stabilizes around for lengths to 500 .let us now label as procedure 2 , the investigation whether the quantities , and ( resulted from the analysis of a signal in natural time ) obey the conditions ( [ eq1 ] ) and ( [ eq2 ] ) , i.e. , and , .0966 .if they do so , the signal is classified as ses activity .figure [ fig3](b ) shows that for a given amount of data loss , the probability of achieving the signal identification as ses activity -that results after making 5000 attempts for each value- gradually decreases when moving from the small to large scales .note that for the smallest length scale investigated , i.e. 
, =10 ( which is more or less comparable -if we consider the sampling frequency of 1 sample / sec- with the average duration sec of the transient pulses that constitute the signal ) , the probability reaches values close to even for the extreme data loss of .this is understood in the context that the quantities , and remain almost unaffected when randomly removing segments with lengths comparable to the average pulse s duration .this is consistent with our earlier finding that the quantities , and are experimentally stable ( lesche s stability ) meaning that they exhibit only slight variations when deleting ( due to experimental errors ) a small number of pulses . on the other hand , at large scales of , markedly decreases .this may be understood if we consider that , at such scales , each segment of contiguous samples removed , comprises on the average a considerable number of pulses the removal of which may seriously affect the quantities , and . as an example , for data loss ( cyan curve in fig . [ fig3](b ) ) , and for lengths =400 - 500 , the probability of identifying a true ses activity is around .interestingly , a closer inspection of the two figures [ fig3](a ) and [ fig3](b ) reveals that and play complementary roles . in particular , at small scales of , increases but decreases versus . at large scales , where reaches ( for considerable values of data loss ) its largest value, the value becomes small .inspired from this complementary behavior of and , we proceeded to the investigation of a combined procedure , let us call it procedure 3 . in this procedure ,a signal is identified as ses activity when _ either _ the condition _ or _the relations ( [ eq1 ] ) and ( [ eq2 ] ) are satisfied .the probability of achieving such an identification , after making 5000 attempts ( for a given value of and ) , is plotted in fig .[ fig3](c ) .the results are remarkable since , even at significant values of data loss , e.g. , or , the probability of identifying a ses activity at scales =100 to 400 remains relatively high , i.e. , and , respectively ( cf .note also that the value of reaches values close to at small scales =10 ) .this is important from practical point of view , because it states for example the following : even if the records of a station are contaminated by considerable noise , say of the time of its operation , the remaining of the non - contaminated segments have a chance of to correctly identify a ses activity .the chances increase considerably , i.e. , to , if only half of the recordings are noisy .the aforementioned results have been deduced from the analysis of a ses activity lasting around three hours . in cases of ses activities with appreciably longer duration ,e.g. , a few to several days detected in greece or a few months in japan , the results should become appreciably better .we start our conclusions by recalling that the distinction between ses activities ( critical dynamics , infinitely ranged temporal correlations ) and artificial ( man - made ) noise remains an extremely difficult task , even without any data loss , when solely focusing on the original time series of electrical records which are , of course , in conventional time . 
on the other hand, when combining natural time with dfa analysis, such a distinction becomes possible even after significant data loss. in particular we showed for example that even when randomly removing of the data, we have a probability ( ) around, or larger, to identify correctly a ses activity. this probability becomes somewhat smaller, i.e., 75%, when the data loss increases to 70%. to achieve this goal, the proper procedure is the following: the signal is first represented in natural time and then analyzed in order to deduce the quantities, and as well as the exponent from the slope of the log-log plot of the dfa analysis in natural time. we then examine whether the latter slope has a value close to unity _ or _ the conditions and, are obeyed. in other words, the consequences caused by an undesirable severe data loss can be markedly reduced upon taking advantage of the dfa and natural time analysis.
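The procedure summarized in these conclusions can be sketched end to end: generate a surrogate by randomly removing segments of the pulse-duration series, compute the natural-time quantities, and apply the combined criterion (procedure 3). In the sketch below the acceptance thresholds (alpha roughly between 0.85 and 1.1, kappa_1 near 0.070, and entropies below s_u of about 0.0966) are the values commonly quoted in the natural-time literature and stand in for the numbers not restated above; the segment-removal step is a simplified version of the segmentation approach of Ma et al., and all function names are ours.

```python
# Minimal sketch of the identification pipeline: random segment removal, natural-time
# analysis (kappa_1, S, S under time reversal), and the procedure-3 decision rule
# (DFA exponent in natural time close to 1  OR  the kappa_1/entropy conditions).
import numpy as np

def natural_time_stats(Q):
    """kappa_1, S and S_reversed for a sequence of pulse durations Q_k."""
    Q = np.asarray(Q, dtype=float)
    chi = np.arange(1, len(Q) + 1) / len(Q)          # natural time chi_k = k/N

    def one_direction(q):
        p = q / q.sum()
        mean_chi = np.sum(p * chi)
        kappa1 = np.sum(p * chi ** 2) - mean_chi ** 2
        S = np.sum(p * chi * np.log(chi)) - mean_chi * np.log(mean_chi)
        return kappa1, S

    kappa1, S = one_direction(Q)
    _, S_rev = one_direction(Q[::-1])                # time-reversed sequence
    return kappa1, S, S_rev

def removal_mask(N, p_loss, seg_len, rng):
    """Keep/remove mask: shuffle removed segments of length seg_len among the kept
    samples.  Assumes the number of removed segments times seg_len does not exceed N."""
    n_seg = int(np.ceil(p_loss * N / seg_len))
    blocks = [np.zeros(seg_len, dtype=bool)] * n_seg \
           + [np.ones(1, dtype=bool)] * (N - n_seg * seg_len)
    order = rng.permutation(len(blocks))
    return np.concatenate([blocks[i] for i in order])

def is_ses(Q, dfa_alpha_fn, alpha_window=(0.85, 1.1),
           kappa1_target=0.070, kappa1_tol=0.01, s_u=0.0966):
    # kappa1_tol is an illustrative tolerance, not a value taken from the text.
    kappa1, S, S_rev = natural_time_stats(Q)
    proc1 = alpha_window[0] <= dfa_alpha_fn(Q) <= alpha_window[1]
    proc2 = abs(kappa1 - kappa1_target) <= kappa1_tol and S < s_u and S_rev < s_u
    return proc1 or proc2                            # procedure 3

# Hypothetical usage, with `durations` the original pulse-duration series and
# `my_dfa_alpha` any routine returning the DFA exponent in natural time (for example,
# a wrapper around the dfa() sketch given earlier that returns only alpha):
# rng = np.random.default_rng(1)
# surrogates = (np.asarray(durations)[removal_mask(len(durations), 0.7, 300, rng)]
#               for _ in range(5000))
# p_identify = np.mean([is_ses(q, my_dfa_alpha) for q in surrogates])
```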
electric field variations that appear before rupture have been recently studied by employing the detrended fluctuation analysis ( dfa ) as a scaling method to quantify long - range temporal correlations . these studies revealed that seismic electric signals ( ses ) activities exhibit a scale invariant feature with an exponent over all scales investigated ( around five orders of magnitude ) . here , we study what happens upon significant data loss , which is a question of primary practical importance , and show that the dfa applied to the natural time representation of the remaining data still reveals for ses activities an exponent close to 1.0 , which markedly exceeds the exponent found in artificial ( man - made ) noises . this , in combination with natural time analysis , enables the identification of a ses activity with probability 75% even after a significant ( 70% ) data loss . the probability increases to 90% or larger for 50% data loss . * keywords : * detrended fluctuation analysis , complex systems , scale invariance * complex systems usually exhibit scale - invariant features characterized by long - range power - law correlations , which are often difficult to quantify due to various types of non - stationarities observed in the signals emitted . this also happens when monitoring geoelectric field changes aiming at detecting seismic electric signals ( ses ) activities that appear before major earthquakes . to overcome this difficulty the novel method of detrended fluctuation analysis ( dfa ) has been employed , which when combined with a newly introduced time domain termed natural time , allows a reliable distinction of true ses activities from artificial ( man - made ) noises this is so , because the ses activities are characterized by infinitely ranged temporal correlations ( thus resulting in dfa exponents close to unity ) while the artificial noises are not . the analysis of ses observations often meet the difficulty of significant data loss caused either by failure of the data collection system or by removal of seriously noise - contaminated data segments . thus , here we focus on the study of the effect of significant data loss on the long - range correlated ses activities quantified by dfa . we find that the remaining data , even after a considerable percentage of data loss ( which may reach ) , may be correctly interpreted , thus revealing the scaling properties of ses activities . this is achieved , by applying dfa _ not _ to the original time series of the remaining data but to those resulted when employing natural time . *
the solar cycle is not regular .the individual cycles vary in strength from one cycle to another. therefore prediction of future cycles is a non - trivial task .however forecasting future cycle amplitudes is important because of the impact of solar activity on our space environment .unfortunately , recent efforts to predict the solar cycle did not reach any consensus , with a wide range of forecasts for the strength of the ongoing cycle 24 ( pesnell 2008 ) .kinematic dynamo models based on the babcock - leighton mechanism has proven to be a viable approach for modeling the solar cycle ( e.g. , muoz - jaramillo et al .2010 ; nandy 2011 ; choudhuri 2013 ) . in such models ,the poloidal field is generated from the decay of tilted active regions near the solar surface mediated via near - surface flux transport processes . in this modelthe large - scale coherent meridional circulation plays a crucial role ( choudhuri et al .1995 ; yeates , nandy & mackay 2008 ; karak 2010 ; nandy , muoz - jaramillo & martens 2011 ; karak & choudhuri 2012 ) .this is because the meridional circulation is believed to transport the poloidal field generated near the solar surface to the interior of the convection zone where the toroidal field is generated through stretching by differential rotation .the time necessary for this transport introduces a memory in the solar dynamo , i.e. , the toroidal field ( which gives rise the sunspot eruptions ) has an in - built `` memory '' of the earlier poloidal field .yeates , nandy & mackay ( 2008 ) systematically studied this issue and showed that in the advection - dominated regime of the dynamo the poloidal field is mainly transported by the meridional circulation and the solar cycle memory persists over many cycles ( see also jiang , chatterjee & choudhuri 2007 ) . on the other hand , in the diffusion - dominated regime of the dynamo , the poloidal field is mainly transported by turbulent diffusion and the memory of the solar cycle is short roughly over a cycle .recent studies favor the diffusion - dominated solar convection zone ( miesch et al .2011 ) and the diffusion - dominated dynamo is successful in modeling many important aspects of the solar cycle including the the waldmeier effect and the grand minima ( karak & choudhuri 2011 ; choudhuri & karak 2009 ; karak 2010 ; choudhuri & karak 2012 ; karak & petrovay 2013 ) . using an advection dominated b - l dynamo dikpati de toma & gilman ( 2006 ) predicted a strong cycle 24 . on the other hand , choudhuri et al .( 2007 ) used a diffusion - dominated model and predicted a weak cycle ( see also jiang et al .however in most of the models , particularly in these prediction models , the turbulent pumping of magnetic flux an important mechanism for transporting magnetic field in the convection zone was ignored .theoretical as well as numerical studies have shown that a horizontal magnetic field in the strongly stratified turbulent convection zone is pumped preferentially downward towards the base of the convection zone ( stable layer ) and a few m / s pumping speed is unavoidable in many convective simulations ( e.g. , petrovay & szakaly 1993 ; brandenburg et al .1996 ; tobias et al . 2001 ;dorch & nordlund 2001 ; ossendrijver et al . 2002 ; kpyl et al . 2006 ; racine et al .recently , we have studied the impact of turbulent pumping on the memory of the solar cycle and hence its relevance for solar cycle forecasting ( karak & nandy 2012 ) . 
herewe provide a synopsis of our findings and discuss its implications for solar cycle predictability .the evolution of the magnetic fields for a kinematic dynamo model is governed by the following two equations . = \eta_{t } \left ( \nabla^2 - \frac{1}{s^2 } \right ) b + s({{\bf b}}_p.\nabla)\omega + \frac{1}{r}\frac{d\eta_t}{dr } \frac{\partial}{\partial{r}}(r b)~~\ ] ] with . here is the vector potential of the poloidal magnetic field , is the toroidal magnetic field , is the meridional circulation , is the internal angular velocity , is the source term for the poloidal field by the b - l mechanism and , are the turbulent diffusivities for the poloidal and toroidal components . with the given ingredients , we solve the above two equations to study the evolution of the magnetic field in the dynamo model .the details of this model can be found in nandy & choudhuri ( 2002 ) and chatterjee , nandy & choudhuri ( 2004 ) .however for the sake of comparison with the earlier results we use the exactly same parameters as given in yeates , nandy & mackay ( 2008 ) . in the mean - field induction equation, the turbulent pumping naturally appears as an advective term .therefore to include its effect in the present dynamo model , we include the turbulent pumping term shown by the following expression in the advection term of the poloidal field equation ( eq . [ pol_eq ] ) .+ \left [ 1 - \rm{erf } \left ( \frac{r-0.97}{0.1}\right ) \right ] \left [ \rm{exp}\left ( \frac{r-0.715}{0.25}\right ) ^2 \rm{cos}\theta + 1\right],\\ \ ] ] where determines the strength of the pumping what we vary in our simulations .note that we introduce pumping only in the poloidal field because turbulent pumping is likely to be relatively less effective on the toroidal component ( e.g. , kpyl et al .the toroidal field is stronger , intermittent and subject to buoyancy forces and therefore it is less prone to be pumped downwards .also note that we do not consider any latitudinal pumping . to study the solar cycle memory we have to make the strength of the cycle unequal by introducing some stochasticity in the model .presently we believe that there are two important sources of randomness in the flux transport dynamo model the stochastic fluctuations in the b - l process of generating the poloidal field and the stochastic fluctuations in the meridional circulation . in this work, we introduce stochastic fluctuations in the appearing in eq .[ pol_eq ] to capture the irregularity in the b - l process of poloidal field generation .we set . throughout all the calculations we take m s ( i.e. , level of fluctuations ) .the coherence time is chosen in such a way that there are around 10 fluctuations in each cycle .we have carried out extensive simulations with stochastically varying at different downward pumping speed varied from 0 to 4 m s .we have performed simulations in two different regimes of dynamo the diffusion - dominated regime with parameters m s , s and the advection - dominated regime with m s , s . in the previous casethe diffusion of the fields are more important compared to the advection by meridional flow whereas in the latter case it is the other way round .other than some obvious effects of the turbulent pumping on the solar cycle period and the latitudinal distribution of the magnetic field ( which have already been explored by guerrero & de gouveia dal pino 2008 ) we are interested here to see the dependence of the toroidal field on the previous cycle poloidal fields . 
to do thiswe compute the correlation between the peak of the surface radial flux ( ) of cycle with that of the deep - seated toroidal flux ( ) of different cycles . herewe consider as the flux of radial field over the solar surface from latitude to , and as the flux of toroidal field over the region and latitude to . in table 1 , we present the spearman s rank correlation coefficients and significance levels in two different regimes with increasing pumping speed . from this tablewe see that in the advection - dominated regime , in absence of pumping , the polar flux of cycle correlates with the toroidal flux of cycle , and , whereas in diffusion dominated regime , only one cycle correlation exist ( i.e. , the polar flux of cycle only correlates with the toroidal flux of cycle ) .this is consistent with yeates , nandy & mackay ( 2008 ) .however , it is interesting to see that with the increase of the pumping speed in the advection - dominated region , the higher order correlations slowly diminish and even just at 2.0 m s pumping speed only the to correlation exists and other correlations have destroyed .however the behavior in the diffusion - dominated regime remains qualitatively unchanged .[ corr_ap2 ] shows the correlation plot with 2.0 m s pumping amplitude for the advection - dominated regime whereas fig .[ corr_dp2 ] shows the same for the diffusion - dominated case .another important result of these analyses is that with the increase of the strength of the pumping the to correlations are also decreasing rapidly in both the advection - dominated and in the diffusion - dominated regime ( see table 1 ) . and the peak ( deep - seated ) toroidal flux of cycle ( a ) ( b ) , ( c ) , and ( d ) in the advection - dominated regime with a pumping speed amplitude of 2 m s .the flux values are in units of mx .the spearman s rank correlation coefficients ( ) along with significance levels are inscribed .reproduced from karak & nandy ( 2012).,width=384 ] .correlation coefficients ( ) and percentage significance levels ( ) for peak surface radial flux of cycle versus peak toroidal flux of different cycles for 275 solar cycles data .the first column denotes the amplitude of the turbulent pumping speed in various simulation studies .the top row corresponds to the case without turbulent pumping and subsequent rows correspond to simulations with increasing pumping speeds . [ cols="<,>,>,>,>,>",options="header " , ]we have introduced turbulent pumping of the magnetic flux in a b - l type kinematic dynamo model and have carried out several extensive simulations with stochastic fluctuation in the b - l with different strengths of downward turbulent pumping in both advection- and diffusion - dominated regimes of the solar dynamo .we find that multiple cycle correlations between the surface polar flux and the deep - seated toroidal flux in the advection - dominated dynamo model degreades severely when we introduce turbulent pumping . 
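The memory analysis described above reduces to rank correlations between per-cycle peak fluxes at different lags. The sketch below shows the computation with scipy; the flux arrays are hypothetical inputs (one peak value per simulated cycle), and the function name is ours.

```python
# Minimal sketch: Spearman rank correlation between the peak surface radial (polar)
# flux of cycle n and the peak deep-seated toroidal flux of cycles n, n+1, n+2, n+3.
import numpy as np
from scipy.stats import spearmanr

def memory_correlations(polar_peaks, toroidal_peaks, max_lag=3):
    out = {}
    for lag in range(max_lag + 1):
        rho, pval = spearmanr(polar_peaks[:len(polar_peaks) - lag],
                              toroidal_peaks[lag:])
        out[lag] = (rho, pval)   # the table in the text quotes significance levels rather than p-values
    return out

# Example with synthetic, uncorrelated peak values for 275 cycles (as in Table 1):
rng = np.random.default_rng(2)
print(memory_correlations(rng.random(275), rng.random(275)))
```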
with 2 m s as the typical pumping speed , the timescale for the poloidal field to reach the base of the convection zone is about 4 years , which is even shorter than the timescale of turbulent diffusion ( and much shorter than the advective timescale due to meridional circulation ) .consequently the behavior found in the advection - dominated dynamo model with pumping is similar to that seen in the diffusion - dominated dynamo model indicating that downward turbulent pumping short - circuits the meridional flow transport loop for the poloidal flux .this transport loop is first towards the poles at near - surface layers and then downwards towards the deeper convection zone and subsequently equatorwards .however , when pumping is dominant , then the transport loop is predominantly downwards straight into the interior of the convection zone .an interesting and somewhat counter - intuitive possibility that our findings raise is that the solar convection zone may not be diffusion - dominated , or advection - dominated , but rather be dominated by turbulent pumping .note that this does not rule out the possibility that in the stable layer beneath the base of the convection zone , meridional circulation still plays an important and dominant role in the equatorward transport of toroidal flux and thus , in generating the butterfly diagram .our result implies with turbulent pumping as the dominant mechanism for flux transport , the solar cycle memory is short .this short memory , lasting less than a complete 11 year cycle implies that solar cycle predictions for the maxima of cycles are best achieved at the preceding solar minimum , about 4 - 5 years in advance and long - term predictions are unlikely to be accurate .this also explains why early predictions for the amplitude of solar cycle 24 were inaccurate and generated a wide range of results with no consensus .the lesson that we take from this study is that it is worthwhile to invest time and research to understand the basic physics of the solar cycle first , and that advances made in this understanding will lead to better forecasting capabilities for solar activity .nandy , d. 2012 , iau symp .286 , comparative magnetic minima : characterizing quiet times in the sun and stars , ed . c. h. mandrini & d. f. webb ( cambridge : cambridge univ .press ) , 54 nandy , d. , & choudhuri , a. r. 2002 , _ science _, 296 , 1671 nandy , d. , muoz - jaramillo , a. , & martens , p. c. h. 2011 , _ nature , _ 471 , 80
having advanced knowledge of solar activity is important because the sun s magnetic output governs space weather and impacts technologies reliant on space . however , the irregular nature of the solar cycle makes solar activity predictions a challenging task . this is best achieved through appropriately constrained solar dynamo simulations and as such the first step towards predictions is to understand the underlying physics of the solar dynamo mechanism . in babcock leighton type dynamo models , the poloidal field is generated near the solar surface whereas the toroidal field is generated in the solar interior . therefore a finite time is necessary for the coupling of the spatially segregated source layers of the dynamo . this time delay introduces a memory in the dynamo mechanism which allows forecasting of future solar activity . here we discuss how this forecasting ability of the solar cycle is affected by downward turbulent pumping of magnetic flux . with significant turbulent pumping the memory of the dynamo is severely degraded and thus long term prediction of the solar cycle is not possible ; only a short term prediction of the next cycle peak may be possible based on observational data assimilation at the previous cycle minimum .
although schrdinger introduced the term _ entanglement _ to quantum mechanics in 1935, most physicists did not begin using the term until the 1990s or later .even today there are quantum mechanics textbooks in use that do not use the word `` entanglement '' at all. more importantly , our teaching often glosses over the underlying concept : that for any quantum system with more than one degree of freedom , the vast majority of allowed states exhibit `` correlations '' or `` non - separability . '' when we finally introduce students to entangled states , it is usually in the context of spin systems , such as the singlet state of a pair of spin-1/2 particles .this example is unparalleled for its mathematical simplicity and direct applicability to bell s theorem and quantum information science .however , spin systems are also rather abstract and disconnected from the spatial wave functions that are more familiar to most students .students also encounter entangled wave functions when they apply quantum mechanics to atoms , but there the concept of entanglement tends to get muddied by the complications of three spatial dimensions and identical particles .moreover , neither entangled spins nor entangled atomic wave functions are easy to visualize .fortunately , it is easy to include simple pictorial examples of entangled spatial wave functions in any course that discusses wave functions : an upper - division quantum mechanics course , a sophomore - level modern physics course , and in many cases an introductory physics course .the purpose of this paper is to illustrate some ways of doing so .the following section introduces non - separable wave functions for a single particle in two dimensions .section iii then reinterprets these same functions for a system of two particles in one dimension .section iv explains how entanglement arises from interactions between particles , and sec .v shows how to quantify the degree of entanglement for a two - particle wave function .each of these sections ends with a few short exercises to help students develop a conceptual understanding of entanglement .the appendix reviews some of the history of how the term `` entanglement '' finally came into widespread use , more than a half century after schrdinger coined it .and .positive and negative portions of the wave function are indicated by the and symbols , and by color online ; black represents a value of zero . this wave function factors into a function of and a function of , drawn along the top and right . ]imagine that you are teaching quantum mechanics to undergraduates and you have just finished covering wave mechanics in one dimension .the natural next step is to explore wave mechanics in multiple dimensions , and a typical first example is the two - dimensional square infinite squarewell , with potential energy inside this idealized potential well the separable solutions to the time - independent schrdinger equation are where and are positive integer quantum numbers . the corresponding energies , assuming the particle is nonrelativistic , are proportional to .figure [ box23plot ] shows one of these wave functions . 
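A density plot like the one in fig. [box23plot] is easy to generate numerically. The sketch below evaluates the (n_x, n_y) = (2, 3) state on a grid, assuming the usual normalization 2/L for the two-dimensional box of side L; the plotting details are illustrative choices.

```python
# Minimal sketch: evaluate and plot the separable (2, 3) eigenstate of the
# two-dimensional infinite square well, psi(x, y) = (2/L) sin(2 pi x/L) sin(3 pi y/L).
import numpy as np
import matplotlib.pyplot as plt

L = 1.0
nx, ny = 2, 3
x = np.linspace(0.0, L, 200)
X, Y = np.meshgrid(x, x)                      # default 'xy' indexing: X varies along columns
psi = (2.0 / L) * np.sin(nx * np.pi * X / L) * np.sin(ny * np.pi * Y / L)

plt.imshow(psi, origin="lower", extent=(0, L, 0, L), cmap="RdBu_r")
plt.colorbar(label="psi(x, y)")
plt.xlabel("x"); plt.ylabel("y")
plt.title("Separable (2, 3) state of the 2-D infinite square well")
plt.show()
```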
after listing the allowed energies and perhaps drawing some of the separable wave functions , it is customary to put this problem aside and go on to the next example , perhaps a three - dimensional infinite square well , or a central force problem . typically these further examples also admit separable solutions , and so our students run the risk of acquiring a serious misconception :
* misconception 1 : * all multidimensional wave functions are separable .
although i am not aware of any physics education research that documents the prevalence of this misconception , i think most of us who have taught quantum mechanics have encountered it . often students do understand that in order for the time - independent schrödinger equation to have separable solutions the potential energy function must have a good deal of symmetry . but as long as there is enough symmetry for separable solutions to exist , few students will spontaneously realize that these are not all the allowed wave functions . of course , mature physicists could never suffer from misconception 1 . we know that quantum states live in a vector space , where every normalized linear combination of two or more allowed states is also an allowed state . the separable wave functions of eq . ( [ separablepsi ] ) are merely a set of basis states , from which all the others can be built . but beginning students of quantum physics know none of this . many of them lack the vocabulary to even say it . whether or not students know about vector spaces and orthonormal bases , it is easy enough to show them examples of superposition states . figure [ boxnonseparableplots](a ) shows the state in which i have combined an admixture of the degenerate ( 3,2 ) state with the ( 2,3 ) state of fig . [ box23plot ] . although it is built out of two separable pieces , this function is not itself separable : you can not factor it into a function of x times a function of y . you can readily see from the plot that its x dependence changes as you vary y , and vice versa .
[ figure ( boxnonseparableplots ) : ( a ) a non - separable admixture of the ( 3,2 ) state with the ( 2,3 ) state . ( b ) the same mixture as in ( a ) , with a further admixture of the ( 5,1 ) state . ( c ) a `` cat state '' with isolated peaks at two well - separated locations . ]
separability , or the lack thereof , is not merely a mathematical abstraction . a separable wave function has the important _ physical _ property that a measurement of one degree of freedom has no effect on a subsequent measurement of the other degree of freedom . for example , if a particle is in the state shown in fig . [ box23plot ] and you measure its x coordinate and happen to obtain some value $x_0$ , the probability distribution for a subsequent measurement of its y coordinate is still proportional to $\sin^2(3\pi y/L)$ , exactly the same as before you measured . on the other hand , if the particle starts out in the state shown in fig . [ boxnonseparableplots](a ) and you measure its x coordinate and happen to obtain the value $x_0$ , a subsequent measurement of y is then considerably more likely to yield values near some parts of the well , and less likely to yield values near others , than it was before you measured .
in this case the outcomes of the two measurements are correlated ; we could even say they are _ entangled _ , although that word is usually reserved for systems of two or more distinct particles , as discussed in the following section . although the probability claims made in the previous paragraph should be fairly intuitive just from looking at the wave function density plots , they can also be quantified . if you measure y before x , then you calculate the probability distribution for y by integrating the square of the wave function over all x : $P(y) = \int |\psi(x,y)|^2\,dx$ . on the other hand , if you measure x first and obtain the result $x_0$ , then you calculate the probability distribution for a subsequent measurement of y by setting $x = x_0$ in the wave function ( to `` collapse '' it along the x direction ) , renormalizing it , and then squaring : $P(y\,|\,x_0) = |\psi(x_0,y)|^2 \big/ \int |\psi(x_0,y')|^2\,dy'$ . figure [ beforeandafterdistributions ] compares these two probability distributions for the wave function shown in fig . [ boxnonseparableplots](a ) and a particular measured value $x_0$ .
[ figure ( beforeandafterdistributions ) : the probability distribution for y in the state shown in fig . [ boxnonseparableplots](a ) , before measuring x ( solid ) and after measuring x and obtaining the result $x_0$ ( dashed ) . ]
in constructing the superposition state shown in fig . [ boxnonseparableplots](a ) i chose to mix two basis states with the same energy , and therefore the result is still a solution to the time - independent schrödinger equation . but this was a pedagogically poor choice on my part , because it could reinforce another common misconception :
* misconception 2 : * all allowed wave functions must satisfy the time - independent schrödinger equation .
physics education researchers have convincingly documented the prevalence of this misconception , but even without documentation it should not come as a surprise , because we expect students of quantum mechanics to spend so much of their time solving the time - independent schrödinger equation . a better example of a superposition state might therefore be the one shown in fig . [ boxnonseparableplots](b ) , which adds a component of the higher - energy ( 5,1 ) state to the superposition of fig . [ boxnonseparableplots](a ) and eq . ( [ degeneratemix ] ) . again , this wave function is non - separable and therefore has the property that a measurement of x will change the probability distribution for a subsequent measurement of y ( and vice versa ) . but why restrict our attention to superpositions of two or three square - well basis states ? figure [ boxnonseparableplots](c ) shows an even clearer example of non - separability : a `` cat state '' consisting of two isolated peaks , one centered at one pair of coordinates and the other at a distant pair . of course the completeness of the basis states guarantees that this state can be expressed as a linear combination of them , but if the goal is to understand non - separability ( or `` entanglement '' of x and y ) , then there is no need to mention any basis or even to assume that this particle is inside an infinite square well .
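the collapse - and - renormalize prescription above is easy to implement numerically . the sketch below ( python assumed ) computes the two distributions compared in fig . [ beforeandafterdistributions ] ; the equal - weight mixture of the ( 2,3 ) and ( 3,2 ) states and the choice of $x_0$ are illustrative assumptions , not values taken from the paper .

```python
# A sketch (Python/NumPy assumed) of the two distributions compared in
# fig. [beforeandafterdistributions]. The equal-weight mixture of the (2,3) and (3,2)
# states stands in for the state of fig. [boxnonseparableplots](a), and x0 = 0.25 L is
# an arbitrary choice; neither value is taken from the paper.
import numpy as np

L, N = 1.0, 400
x = np.linspace(0.0, L, N)
y = np.linspace(0.0, L, N)
dy = y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

psi = (np.sin(2*np.pi*X/L) * np.sin(3*np.pi*Y/L) +
       np.sin(3*np.pi*X/L) * np.sin(2*np.pi*Y/L))    # overall normalization drops out below

# P(y) before any x measurement: integrate |psi|^2 over x, then normalize
P_before = np.sum(np.abs(psi)**2, axis=0)
P_before /= (np.sum(P_before) * dy)

# P(y) after measuring x and obtaining x0: collapse to the slice x = x0, then renormalize
x0 = 0.25 * L
i0 = int(np.argmin(np.abs(x - x0)))
P_after = np.abs(psi[i0, :])**2
P_after /= (np.sum(P_after) * dy)

print("P(y) at y = 0.5 L, before vs after:", P_before[N // 2], P_after[N // 2])
```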
by inspection we can see that if a particle is in this cat state then a measurement of either x or y will have a 50 - 50 chance of giving a result near either of the two peak locations . however , if we measure x first and happen to get a result near one of the peaks , then a subsequent measurement of y is guaranteed to give a result near that same peak .
[ figure ( complexfunctions ) : density plots of several complex - valued wave functions , including ( c ) a separable circular wave . the color hues ( online ) indicate the complex phases , with the arrows pointing in the direction of increasing phase . ]
further examples abound . for instance , we can consider complex - valued wave functions such as those shown in fig . [ complexfunctions ] . each of these plots uses color hues to represent the complex phases , and shows only a square portion of a function that extends over a larger area . recognizing separable and non - separable functions from such plots can be tricky , because the phase factor shifts the hues rather than scaling the brightness .
* exercise 1 : * determine the missing normalization constants in eqs . ( [ separablepsi ] ) and ( [ degeneratemix ] ) .
* exercise 2 : * use a computer to reproduce fig . [ boxnonseparableplots](b ) , adjusting the relative coefficient of the ( 5,1 ) state to obtain a good match . ( a tutorial on plotting wave functions with mathematica is included in the electronic supplement to this paper . ) then find the overall normalization constant and write down the full formula for this wave function .
calculate and plot the probability distribution for a measurement of for this state , both before any measurement of is performed and after measuring and obtaining the value ._ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 3 : * write down a qualitatively accurate formula , in terms of gaussian functions , to represent the `` cat state '' shown in fig . [ boxnonseparableplots](c ) . show both pictorially and algebraically that this function _ is _ separable if you rotate the coordinate axes by 45 .thus , the `` entanglement '' of a two - variable wave function can be a coordinate - dependent concept .describe ( and draw ) at least two conceptually distinct ways in which you could modify this wave function so that it is not separable in any rotated coordinate system . 
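a quick numerical check of separability is sometimes useful alongside exercises like these . a wave function sampled on a grid is separable exactly when its matrix of values has rank 1 , so the singular values of that matrix ( proportional to the schmidt coefficients ) reveal non - separability at a glance . the sketch below ( python assumed , not part of the paper s supplement ) applies this test to the ( 2,3 ) state and to an equal - weight degenerate mixture .

```python
# A sketch (Python/NumPy assumed): a wave function sampled on a grid is separable
# exactly when its matrix of values has rank 1, so the singular values expose
# non-separability numerically.
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 300)
X, Y = np.meshgrid(x, x, indexing="ij")

def mode(n, m):
    # separable infinite-square-well basis state (unnormalized)
    return np.sin(n * np.pi * X / L) * np.sin(m * np.pi * Y / L)

states = {
    "separable (2,3) state": mode(2, 3),
    "equal mixture of (2,3) and (3,2)": (mode(2, 3) + mode(3, 2)) / np.sqrt(2),
}

for name, psi in states.items():
    s = np.linalg.svd(psi, compute_uv=False)
    rank = int(np.sum(s > 1e-10 * s[0]))
    print(f"{name}: numerical rank = {rank}")
# expected: rank 1 for the separable state, rank 2 for the mixture
```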
* exercise 4 : * suppose that you measure the components $p_x$ and $p_y$ of the momentum for a particle with the wave function shown in fig .
[ complexfunctions](a ) . what values might you obtain ( in terms of the constants that appear in the wave function ) , and with what probabilities ? answer the same question for the wave function shown in fig . [ complexfunctions](b ) . are the outcomes of the $p_x$ and $p_y$ measurements correlated ?
* exercise 5 : * for the wave function shown in fig . [ complexfunctions](c ) , suppose that you measure the x component of the momentum and obtain a value near zero . what does this tell you about the particle s location ? what does it tell you about the y component of the particle s momentum ? has the probability distribution for the y component of the momentum changed as a result of the measurement ? answer the same questions for the wave function shown in fig . [ complexfunctions](d ) .
in standard usage , the word _ entanglement _ seems to be reserved for correlations between two particles , rather than between two different degrees of freedom ( x and y in the previous section ) of a single particle . but this restriction is an arbitrary matter of semantics , because every wave function described in the previous section can be reinterpreted as a wave function for a system of _ two _ particles in _ one _ dimension , merely by relabeling x and y as $x_1$ and $x_2$ , the positions of the two particles . before proceeding to discuss a system of two particles , however , we need to confront a third misconception :
* misconception 3 : * every particle has its own wave function .
again i am not aware of any research to document the prevalence of this misconception . but i routinely see a look of surprise on students faces when they learn otherwise , and i have even encountered this misconception among phd physicists . we reinforce it whenever we ( and our chemistry colleagues ) teach atomic physics and speak of the first two electrons being in the 1s shell , the next two in the 2s shell , and so on . the english language naturally evokes classical images of particular objects in particular places , not entangled quantum states . the best way to fight misconception 3 is to give students plenty of opportunities to work with entangled two - particle wave functions : plot them , interpret them in words , and do calculations with them . even for those who accept in the abstract that a two - particle system has only a single wave function that can not in general be factored , working with specific examples can deepen understanding and build intuition . note that each point on a density plot of the two - dimensional wave function now gives the joint amplitude for finding particle 1 at $x_1$ _ and _ particle 2 at $x_2$ . to find the probability density for a position measurement of just one particle , we must integrate over the position of the other particle as in eq . ( [ yprobdensity ] ) . mentally switching between one - dimensional physical space and two - dimensional configuration space requires a good deal of practice .
with the replacement of x and y by $x_1$ and $x_2$ , the two - dimensional infinite square well becomes a system of two particles confined in a one - dimensional infinite square well . alternatively , the two particles could be confined in two separate one - dimensional wells . to make a precise analogy we must assume that the two particles are distinguishable , either by being in separate potential wells or by some other physical property ; otherwise there would be symmetry constraints on the two - particle wave function . the separable wave functions of eq . ( [ separablepsi ] ) are still energy eigenfunctions as long as the particles do not interact , and the energy eigenvalues are then the same as before if the particles have equal masses . for the two - particle system , however , it is more natural to think about separate energy measurements for the two degrees of freedom . for example , if the system is in the state depicted in fig . [ box23plot ] , then we know that particle 1 has four units of energy and particle 2 has nine units , relative to the ground - state energy for a single particle . whether or not the two particles are confined inside an infinite square well , it is easy to construct two - particle wave functions that are entangled , that is , not separable . we can form combinations of two or three of the separable square - well basis states , as shown in figs . [ boxnonseparableplots](a ) and ( b ) . we can imagine `` cat states '' with two or more separated peaks , as in fig . [ boxnonseparableplots](c ) . and we can build states out of complex exponential functions , as shown in fig . [ complexfunctions ] . the exercises below explore all of these types of entangled states .
* exercise 6 : * for the wave function shown in fig . [ boxnonseparableplots](a ) , with the coefficients of eq . ( [ degeneratemix ] ) , what are the possible outcomes , and their probabilities , of a measurement of the energy of particle 2 ? suppose next that you first measure the energy of particle 1 , and find that it has four units of energy ( in terms of the single - particle ground - state energy ) ; now what can you predict about the energy of particle 2 ? what if instead you had found that particle 1 has nine units of energy ?
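energy - measurement probabilities of this kind can also be checked numerically by projecting the two - particle wave function onto the product basis states . the sketch below ( python assumed ) does this on a grid ; the equal - weight mixture is only a stand - in , since the coefficients of eq . ( [ degeneratemix ] ) are not reproduced here .

```python
# A sketch (Python/NumPy assumed) of the projection technique: expand a two-particle
# state in the product basis and read off energy-measurement probabilities. The
# equal-weight mixture of the (2,3) and (3,2) basis states is only a stand-in for the
# state of fig. [boxnonseparableplots](a); substitute the coefficients of
# eq. ([degeneratemix]) to treat that state exactly.
import numpy as np

L, N = 1.0, 400
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

def basis(n, m):
    # normalized product state sin(n pi x1 / L) sin(m pi x2 / L)
    return (2.0 / L) * np.sin(n * np.pi * X1 / L) * np.sin(m * np.pi * X2 / L)

psi = (basis(2, 3) + basis(3, 2)) / np.sqrt(2)        # assumed equal-weight mixture

for n in range(1, 5):
    for m in range(1, 5):
        c = np.sum(basis(n, m) * psi) * dx * dx       # overlap integral on the grid
        prob = abs(c)**2
        if prob > 1e-6:
            print(f"P(particle 1 has {n**2} units, particle 2 has {m**2} units) = {prob:.3f}")
# for the assumed state: probability 1/2 each for (n, m) = (2, 3) and (3, 2)
```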
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 7 : * repeat the previous problem for the wave function shown in fig . [ boxnonseparableplots](b ) .consider all possible outcomes of the measurement of the energy of particle 1 .( before working this exercise you should work exercise 2 . )_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 8 : * when we reinterpret the `` cat state '' of fig . 
[ boxnonseparableplots](c ) to apply to two particles in one dimension , it is tempting to assume that each of the two wave function peaks represents one of the two particles .why is this assumption wrong ?what _ does _ each of the peaks represent ?explain carefully ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 9 : * for the `` cat state '' shown in fig .[ boxnonseparableplots](c ) , with , sketch the probability distributions and for measurements of the positions of the two particles . now sketch at least two other wave functions , one entangled and one not , that are different from the one shown yet still yield the same probability distributions for both particles .explain the physical differences among all three wave functions , in terms of outcomes of successive measurements of and .for instance , if you measure first and obtain a value near , what can you predict about the outcome of a subsequent measurement of ? 
* exercise 10 : * for the wave function shown in fig .
[ complexfunctions](b ) , reinterpreted for two particles in one dimension , sketch the probability distributions for measurements of the momenta $p_1$ and $p_2$ of the two particles . suppose now that you measure $p_1$ and obtain a positive value ; what can you now predict about the outcome of a subsequent measurement of $p_2$ ?
* exercise 11 : * imagine an infinite square well containing two particles whose wave function has the form $\sin(\pi x_1/L)\sin(\pi x_2/L)$ ( to meet the boundary conditions ) times a gaussian factor that tends to put the two particles close to each other , such as $\exp[-(x_1-x_2)^2/(2a^2)]$ , where $a$ is much smaller than the well width $L$ . sketch this wave function , then sketch the probability distribution for a measurement of $x_1$ . now imagine that you measure $x_2$ and happen to obtain a particular result ; sketch the new probability distribution for a subsequent measurement of $x_1$ . ( instead of merely sketching , you could use a computer to make quantitatively accurate plots .
)
* exercise 12 : * suppose that for a calculation or a plot you need to know the wave function of a particle in one dimension , confined within a width $L$ , to a resolution of $L/100$ . for a single particle you then need to know 100 complex numbers ( minus one if we neglect the normalization constant and unphysical overall phase ) . how many numbers must you know to represent the wave function of two particles at this same resolution ? three particles ? if your computer has eight gigabytes of memory and each complex number takes up eight bytes , what is the maximum number of particles for which your computer can store an arbitrary wave function ?
just because a quantum state is allowed does not mean it will occur in practice . students will naturally wonder how to create entangled two - particle states in the real world . we owe them an answer , even if we illustrate that answer with idealized examples in one spatial dimension . the answer , in a word , is _ interactions _ : particles tend to become entangled when they interact with each other . schrödinger himself said it well :
when two systems , of which we know the states by their respective representatives [ i.e. , wave functions ] , enter into temporary physical interaction due to known forces between them , and when after a time of mutual influence the systems separate again , then they can no longer be described in the same way as before , viz .
by endowing each of them with a representative of its own . i would not call that _ one _ but rather _ the _ characteristic trait of quantum mechanics , the one that enforces its entire departure from classical lines of thought . by the interaction the two representatives ( or $\psi$ - functions ) have become entangled .
for a specific example , let us first go back to the familiar context of two equal - mass ( but distinguishable ) particles trapped in a one - dimensional infinite square well . if we merely add an interaction term of the form $V_{\rm int}(x_1 - x_2)$ to the hamiltonian of this system , then all the stationary - state wave functions will be entangled . for example , fig . [ twointeractingparticlesgroundstate ] shows the ground - state wave function for the case of a repulsive gaussian interparticle interaction , $V_{\rm int} = V_0\,\exp[-(x_1-x_2)^2/(2a^2)]$ with $V_0 > 0$ . in the two - dimensional configuration space of this system , this potential is simply a barrier running along the main diagonal , centered on the line $x_1 = x_2$ . the barrier divides the square region into a double well , so the system s ground state consists of a symmetrical double peak , similar to the cat state of fig . [ boxnonseparableplots](c ) . in other words , as we would expect , the repulsive interaction tends to push the particles to opposite sides of the one - dimensional square well , but neither particle has a preference for one side or the other .
[ figure ( twointeractingparticlesgroundstate ) : the ground - state wave function for two equal - mass particles in a one - dimensional infinite square well with the repulsive gaussian interaction described in the text , in natural units with $\hbar$ , the particle mass , and the well width all equal to 1 . see ref . for details on how this wave function was calculated . software for doing such calculations is included in the electronic supplement to this paper . ]
[ figure ( scatteringsequence ) : snapshots of two particles scattering from each other through a short - range interaction , indicated by the gray diagonal band . the brightness indicates the magnitude of the wave function ( scaled differently in each frame ) , while the color hues ( online ) indicate the phase . the arrows show the direction of increasing phase , i.e. , the direction of motion . software for performing simulations of this type is provided in the electronic supplement to this paper . ]
figure [ scatteringsequence ] shows an example involving a temporary interaction of the type that schrödinger described .
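before turning to the scattering example , here is a sketch of how an interacting two - particle ground state of this kind can be computed . it is not the paper s variational - relaxation code ( described below ) ; instead it builds the hamiltonian on an ( $x_1$ , $x_2$ ) grid and finds the lowest eigenstate by sparse diagonalization in python , with placeholder values for the coupling strength and width .

```python
# A sketch, not the paper's method: set up the two-particle Hamiltonian on an (x1, x2)
# grid and find its ground state by sparse diagonalization. Units follow the figure
# caption: hbar = particle mass = well width = 1. V0 and a are placeholder values.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N = 80                                   # interior grid points per coordinate
L = 1.0                                  # well width
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)           # infinite well: wave function vanishes on the walls

# one-particle kinetic energy, -(1/2) d^2/dx^2, by finite differences
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
T1 = -0.5 * lap
I = sp.identity(N)

# repulsive Gaussian coupling V0 * exp(-(x1 - x2)^2 / (2 a^2))
X1, X2 = np.meshgrid(x, x, indexing="ij")
V0, a = 100.0, 0.05
V = sp.diags((V0 * np.exp(-(X1 - X2)**2 / (2 * a**2))).ravel())

# full two-particle Hamiltonian on the configuration-space grid
H = sp.kron(T1, I) + sp.kron(I, T1) + V

E0, vec = eigsh(H, k=1, which="SA")      # lowest eigenvalue and eigenvector
psi0 = vec[:, 0].reshape(N, N)
psi0 /= np.sqrt(np.sum(psi0**2) * dx * dx)   # normalize on the grid
print("ground-state energy:", E0[0])
```

plotting psi0 as a density plot should show the symmetrical double peak described above , split by the diagonal barrier .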
here two equal - mass particles , in one dimension , are initially in a state consisting of separated gaussian wave packets , moving toward each other . ( note that these physically separated wave packets appear as a single peak in two - dimensional configuration space . ) the particles interact via a short - range finite rectangular barrier , whose height and width have been chosen to give transmission and reflection probabilities that are approximately equal . after the interaction , therefore , the particles are in an entangled state whose probability distribution resembles that of the cat state in fig . [ boxnonseparableplots](c ) , but with peaks moving away from each other as time goes on . one peak puts the two particles back near their starting positions , indicating reflection ; the other peak puts them in interchanged locations , indicating transmission or tunneling . notice that for this state , a measurement of one particle s position affects not only the probability distribution for the other particle s position , but also the probability distribution for its momentum . fundamental though they are , examples like these rarely appear in quantum mechanics textbooks . the reason is probably that despite their conceptual simplicity , a quantitative treatment of either scenario requires numerical methods . the wave function plotted in fig . [ twointeractingparticlesgroundstate ] was calculated using a variational - relaxation algorithm , while fig . [ scatteringsequence ] is the result of a numerical integration of the time - dependent schrödinger equation . although neither calculation takes more than a few seconds on today s personal computers , learning to do such calculations is not a standard part of the undergraduate physics curriculum . teaching students to do these numerical calculations would serve the dual purpose of augmenting their computational skills and helping them develop intuition for entangled two - particle systems . on the other hand , students who have already studied single - particle examples of double - well bound states and wave packet scattering should not need any computational skills to make qualitative predictions or to understand the pictorial results . the bottom line is that interactions between particles generically create entanglement .
* exercise 13 : * describe and sketch the first excited state for the system whose ground state is depicted in fig . [ twointeractingparticlesgroundstate ] . ( hint : what does the first excited state look like for a one - dimensional double well ?
) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 14 : * consider the system of two particles in an infinite square well , with a gaussian interaction as in eq .( [ gaussianrepulsion ] ) , but with , so the interaction is attractive .sketch what the ground - state wave function of this system might look like , and interpret it physically in terms of measurements of the two particles positions . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * exercise 15 : * when two particles do _ not _ interact with each other , the system s potential energy has the form .prove that in this case ( a ) the time - independent schrdinger equation separates into an equation for each particle , so that there exists a complete set of unentangled stationary states ; and ( b ) if the system s initial 
state is not entangled , then its state will remain unentangled as time passes .
the scattering example of the previous section makes it clear that not all interactions produce equal amounts of entanglement . by adjusting the range and strength of the interaction potential , we can obtain transmission probabilities ranging from 0 to 1 , and in either of these limits the final state would not be entangled . there seems to be a sense in which the entanglement is maximized when the reflection and transmission probabilities are equal . more generally , consider any wave function built as a normalized superposition of two separable and orthogonal terms : $\psi(x_1,x_2) = \alpha\,A_1(x_1)\,A_2(x_2) + \beta\,B_1(x_1)\,B_2(x_2)$ , where $A_1$ and $B_1$ are orthonormal functions of $x_1$ , $A_2$ and $B_2$ are orthonormal functions of $x_2$ , and $|\alpha|^2 + |\beta|^2 = 1$ . there is no entanglement when $\alpha$ or $\beta$ is zero , and we intuitively expect the `` amount '' of entanglement to increase as $|\alpha|$ and $|\beta|$ approach each other , reaching a maximum when $|\alpha| = |\beta|$ . but how can we quantify this intuition to obtain a formula for the amount of entanglement ? a general approach is to calculate a quantity called the _ interparticle purity _ of the two - particle state : $P = \int\!\int\!\int\!\int \psi(x_1,x_2)\,\psi^*(x_1,x_2')\,\psi^*(x_1',x_2)\,\psi(x_1',x_2')\;dx_1\,dx_1'\,dx_2\,dx_2'$ . experts may recognize this quantity as the trace of the squared one - particle reduced density matrix ; for the rest of us , the best way to develop an understanding of this quantity is to work out some special cases . first notice that if $\psi$ is separable , then each of the four integrals in eq . ( [ purity ] ) becomes a simple normalization integral , so $P = 1$ . next consider the two - term superposition of eq . ( [ twotermsuperposition ] ) . plugging this expression into eq . ( [ purity ] ) results in 16 terms , but 14 of them are zero by orthogonality and the other two reduce to normalization integrals , yielding the simple result $P = |\alpha|^4 + |\beta|^4$ , which equals 1 when $\alpha$ or $\beta$ is zero and reaches a minimum of 1/2 when $|\alpha| = |\beta| = 1/\sqrt{2}$ . thus the interparticle purity is inversely related to the intuitive notion of entanglement described above , at least for a wave function of this form . the following exercises explore the interparticle purity through further examples , and the software in the electronic supplement calculates $P$ for scenarios of the types considered in the previous section .
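as a concrete illustration of how such a calculation can be organized ( a python sketch , not the supplement s code ) , the interparticle purity of a wave function sampled on an ( $x_1$ , $x_2$ ) grid can be read off from the singular values of the matrix of samples , which are proportional to the schmidt coefficients :

```python
# A sketch (Python/NumPy assumed, not the supplement's code): the interparticle purity
# of a wave function sampled on an (x1, x2) grid, computed from the singular values of
# the matrix of samples (these are proportional to the Schmidt coefficients).
import numpy as np

def interparticle_purity(psi_grid):
    """psi_grid[i, j] ~ psi(x1_i, x2_j) on a uniform grid; normalization is optional."""
    s = np.linalg.svd(psi_grid, compute_uv=False)
    p = s**2 / np.sum(s**2)          # Schmidt probabilities
    return float(np.sum(p**2))

# example: a two-term superposition of orthogonal product states with |alpha|^2 = 0.7
L, N = 1.0, 300
x = np.linspace(0.0, L, N)
X1, X2 = np.meshgrid(x, x, indexing="ij")
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)
psi = (alpha * np.sin(2*np.pi*X1/L) * np.sin(3*np.pi*X2/L)
       + beta * np.sin(3*np.pi*X1/L) * np.sin(2*np.pi*X2/L))
print(interparticle_purity(psi))     # close to |alpha|^4 + |beta|^4 = 0.58
```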
in general , the lower the value of $P$ , the more a measurement on one particle tends to change the probability distribution for a subsequent measurement on the other particle .
* exercise 16 : * write out a full derivation of eq . ( [ twotermpurity ] ) , showing which terms are nonzero , which are zero , and why .
* exercise 17 : * work out the formula for the interparticle purity of a superposition state of the form of eq . ( [ twotermsuperposition ] ) , but with three terms , still built from orthogonal functions , instead of just two . what is the smallest possible value of $P$ for such a state ? can you generalize to a superposition of four or more such terms ?
* exercise 18 : * determine $P$ for each of the three wave functions depicted in fig . [ boxnonseparableplots ] , reinterpreted for two particles in one dimension .
before doing this for (b) you should work exercises 2 and 17.

*exercise 19:* make a rough estimate of the interparticle purity of the wave function considered in exercise 11, representing two particles in an infinite square well that tend to be much closer together than the size of the well. what happens in the two limiting cases? (you may also wish to calculate the purity numerically. to do so, it is probably best to sample the wave function on a grid to make a matrix of values; then you can show that the purity is proportional to the trace of the square of the product of this matrix with its conjugate transpose, or simply to the trace of the fourth power of the matrix if it is real and symmetric.)

*exercise 20:* equation ([purity]) generalizes straightforwardly to systems in more than one spatial dimension. consider, then, an ordinary hydrogen atom, consisting of an electron and a proton, in its ground state. the average distance between the two particles is then known to be on the order of 10^{-10} m. if the atom as a whole is in a state that is spread over a volume of a cubic millimeter, what is the approximate interparticle purity of the two-particle state?
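the numerical route suggested in exercise 19 is easy to prototype. the sketch below is an illustration only: it samples a correlated gaussian toy wave function on a grid (the gaussian form and the width parameters `a` and `b` are assumptions chosen for concreteness, not the state of exercise 11) and evaluates the purity from the sampled matrix exactly as described in exercise 19.

```python
import numpy as np

def grid_purity(psi, dx):
    """purity from a sampled wave function psi[i, j] on a uniform grid:
    rho[i, k] ~ sum_j psi[i, j] * conj(psi[k, j]) * dx,
    purity ~ trace(rho @ rho) * dx**2."""
    rho = psi @ psi.conj().T * dx
    return np.real(np.trace(rho @ rho)) * dx**2

def psi_pair(x, a, b):
    """toy state: narrow in the relative coordinate (width a),
    broad in the center-of-mass coordinate (width b) -- an assumption,
    not the wave function of exercise 11."""
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    psi = np.exp(-(x1 - x2)**2 / (4*a**2) - (x1 + x2)**2 / (4*b**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx**2)   # normalize on the grid
    return psi

N, L = 600, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
for a in (5.0, 2.0, 0.5):
    print(a, grid_purity(psi_pair(x, a, b=5.0), dx))
```

when the two widths are equal the toy state is separable and the estimate returns a value close to 1; making the relative-coordinate width much smaller than the center-of-mass width drives the purity toward zero, in line with the intuition behind exercise 19.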
the goal of this paper is to illustrate some ways of introducing students to entangled wave functions at a relatively early stage in their physics education. there are at least three reasons to do so.

first, as emphasized above, we want to prevent misconceptions. when students learn only about separable wave functions they can develop an over-simplified view of how quantum mechanics works.

second, entangled quantum systems are important. from atoms and molecules to quantum computers, entanglement is central to a large and growing number of real-world applications. students will be better prepared to understand these applications if they become reasonably comfortable with entanglement first, in the relatively familiar context of wave functions in one spatial dimension. when the time comes to make the transition to a system of two spin-1/2 particles, one could emphasize the correspondence using ``density plots'' as shown in fig. [2by2densityplots]. unfortunately, i do not know of a good visual representation for the wave function of two particles in three spatial dimensions.

[figure caption: ``density'' plots for the singlet and triplet states of a system of two spin-1/2 particles, drawn to emphasize the analogy to the continuous wave functions plotted in the earlier figures, with larger magnitudes shown as brighter shades and zero shown as black. the component of the spin of the first particle is plotted horizontally, and that of the second particle is plotted vertically.]

third, entanglement is essential to the quantum measurement process. measurement requires interaction, and this interaction entangles the system being measured with the measurement apparatus, as schrödinger first emphasized in his famous ``cat paradox'' paper. students are naturally curious about the cat paradox in particular and the measurement controversy more generally. although understanding entanglement is not sufficient to resolve the controversy, it is surely necessary.

that it took roughly 60 years for the term ``entanglement'' to come into common use, after schrödinger introduced it in 1935, is astonishing. explaining the long delay is a job for historians of science. here i will merely document some of the relevant publications and dates. a convenient way to see the big picture is to use the google books ngram viewer to search for the phrase ``quantum entanglement.
'' as fig .[ ngramplot ] shows , the phrase does not occur at all in this large database of books until 1987 .there are then a very small number of occurrences through 1993 , after which the number rises rapidly .`` entanglement '' was not completely dormant during the first 50 years after 1935 , but its use was sporadic even among specialists in the foundations of quantum mechanics .the first uses of the term i have found after schrdinger s were by hilbrand groenewold in 1946, by henry margenau in 1963, and by james park , a student of margenau , in 1968 ( in the american journal of physics ) .the term continued to appear occasionally in articles on quantum foundations during the 1970s. then , in a 1980 publication of a 1979 talk , john bell pointed out that if we try to explain the ongoing experiments with correlated photons by suggesting that the orientations of the polarizers can not actually be chosen independently , then `` separate parts of the world become deeply entangled , and our apparent free will is entangled with them . ''an almost identical sentence appears in his better - known `` bertlmann s socks '' article, published in 1981 .peres and zurek quoted this sentence the following year in the american journal of physics. still , `` entanglement '' remained outside the standard lexicon well into the 1980s .it does not appear in the 1978 review article on bell s theorem by clauser and shimony, or in the 1984 review article on `` nonseparability '' by despagnat, or even in the 1987 resource letter on foundations of quantum mechanics by ballentine. in wheeler and zurek s 800-page annotated compilation of papers on quantum measurement theory , published in 1983 , `` entanglement '' appears only in the 1935 paper by schrdinger. the 1986 book _ the ghost in the atom_, edited by davies and brown , presents interviews with eight experts in quantum foundations ( including bell ) , none of whom use the word `` entanglement . '' but by 1984 , abner shimony was saying `` entanglement '' regularly , apparently becoming the term s main champion .he used the word several times in a paper published that year in a philosophy journal, then used it in a danish television documentary that aired in 1985. also in 1985 , nick herbert s popular book _ _quantum reality__ described how two - particle quantum states are not necessarily separable , referring to this property ( somewhat confusingly ) as `` phase entanglement . ''the decisive year for `` entanglement '' was probably 1987 .a conference was held in london that year to celebrate schrdinger s 100th birthday , and bell s contribution to the conference proceedings highlighted schrdinger s phrase `` quantum entanglement , '' even using it as a section title .that article was also included in the collected volume of bell s writings on quantum foundations that was published the same year , so it reached many other physicists .the subject of bell s article was the newly published ( 1986 ) `` spontaneous collapse '' proposal of ghirardi , rimini , and weber ( grw). the 1986 grw paper did not use the word `` entanglement , '' but these authors did use it in 1987 in an answer to a comment on their paper , and in this answer they cited bell s contribution to the schrdinger volume .this answer is the earliest use of the term that i can find in any of the physical review journals . another year and a half passed before `` entanglement '' appeared in physical review letters , in a paper by horne , shimony , and zeilinger. 
by thenshimony had also said `` entangled '' once in a scientific american article, and used the term repeatedly in his chapter on quantum foundations in _ the new physics_, a book intended for interested laypersons . in 1990 he and coauthors greenberger , horne , and zeilinger used it in the american journal of physics. at that point it was just a matter of time before `` entanglement '' entered the vocabulary of most physicists and interested laypersons .roger penrose used the term in popular books published in 1989 and 1994. its first appearance in physics today seems to have been in 1991 , in eugen merzbacher s retiring address as president of the american physical society. curiously , in this transcript merzbacher attributed the term to margenau , though without any citation .getting `` entanglement '' into textbooks took somewhat longer . the earliest textbook to use the term appears to be merzbacher s third ( 1998 ) edition, which gives a clear and general definition of the concept ( and correctly attributes the term to schrdinger ) .five more years went by before it appeared in an undergraduate textbook , gasiorowicz s third edition. since then most new quantum mechanics textbooks have mentioned the term at least briefly , although many apply it only to spin systems .the work of dan styer inspired many parts of this paper , and he specifically contributed fig .[ ngramplot ] .david griffiths , scott johnson , david kaiser , david mcintyre , tom moore , and dan styer read early drafts of the manuscript and provided comments that greatly improved the final version .i am also grateful to weber state university for providing a sabbatical leave that facilitated this work .e. schrdinger , `` die gegenwrtige situation in der quantenmechanik , '' naturwissenschaften * 23 * , 807812 , 823828 , and 844849 ( 1935 ) .translation by j. d. trimmer , `` the present situation in quantum mechanics : a translation of schrdinger s ` cat paradox ' paper , '' proc .soc . * 128 * , 323338 ( 1980 ) . reprinted in j. a. wheeler and w. h. zurek , eds . , _quantum theory and measurement _ ( princeton university press , princeton , 1983 ) , pp .schrdinger s term for `` entanglement '' in the original german was _verschrnkung_. the same year as the famous epr paper , which used the concept of entangled states ( thoughnot the word `` entangled '' ) to argue that quantum mechanics is incomplete : a. einstein , b. podolsky , and n. rosen , `` can quantum - mechanical description of physical reality be considered complete ? , '' phys . rev . * 47 * , 777780 ( 1935 ) .as discussed in the appendix , it appears that no quantum mechanics textbook used the word `` entanglement '' until 1998 .some more recent textbooks that do nt use the word at all include r. w. robinett , _ quantum mechanics _ , second edition ( oxford university press , oxford , 2006 ) , and b. c. 
reed , _ quantum mechanics _( jones and bartlett , sudbury , ma , 2008 ) .see the electronic supplement to this paper at [ url to be inserted by aip ] for answers to the exercises , a tutorial on plotting two - dimensional wave functions with mathematica ( also at < http://physics.weber.edu/schroeder/quantum/plottutorial.pdf>[<http://physics.weber.edu/schroeder/quantum/plottutorial.pdf > ] ) , and software for calculating wave functions such as those shown in figs .[ twointeractingparticlesgroundstate ] and [ scatteringsequence ] ( also at < http://physics.weber.edu/schroeder/software/entanglementinbox.html>[<http://physics.weber.edu/schroeder/software/entanglementinbox.html > ] and < http://physics.weber.edu/schroeder/software/collidingpackets.html>[<http://physics.weber.edu/schroeder/software/collidingpackets.html > ] ) .see , e.g. , j. r. taylor , c. d. zafiratos , and m. a. dubson , _ modern physics for scientists and engineers _ , second edition ( prentice hall , upper saddle river , nj , 2004 ) , sec . 8.3; a. goswami , _ quantum mechanics _ , second edition ( wm .c. brown , dubuque , ia , 1997 ) , sec . 9.2 .there does exist at least one textbook that has a nice treatment of superposition states for the two - dimensional infinite square well : d. park , _ introduction to the quantum theory _, third edition ( mcgraw - hill , new york , 1992 ; dover reprint , 2005 ) , sec .notably , park s illustrations are mere line drawings showing the wave function node locations presumably because his earlier editions predate today s computer graphics tools .now that those tools are widely available , there is one less barrier to visualizing two - dimensional wave functions .the term `` cat state '' refers to schrdinger s cat ( see ref . ) , which is supposedly in a superposition of alive and dead states .nowadays many physicists use this term to describe any quantum state that is best thought of as a superposition of two other states , especially when the difference between those two states is large or important . in the exampleused here the two states are localized around points that are well separated from each other , so in a sense the particle is in two places at once . for a detailed discussion of representing complex phases as color hues ,see , for example , b. thaller , _ visual quantum mechanics _ ( springer , new york , 2000 ) , and _ advanced visual quantum mechanics _ ( springer , new york , 2005 ) . see the electronic supplement to this article , ref ., for a tutorial on plotting wave functions using mathematica .misconception 3 is closely related to the misconception that the wave function is a function of `` regular three - dimensional position space '' rather than configuration space , as described by styer , ref . ,item 3 .few quantum mechanics textbooks contain even a single plot of a two - particle wave function .an exception is d. h. mcintyre , _ quantum mechanics : a paradigms approach _( pearson , boston , 2012 ) , pp . 418419 .this exercise was inspired by d. styer , _ notes on the physics of quantum mechanics _( unpublished course notes , 2011 ) , < http://www.oberlin.edu/physics/dstyer/qm/physicsqm.pdf>[<http://www.oberlin.edu/physics/dstyer/qm/physicsqm.pdf > ] , p. 150 .the point was made in more generality by r. p. feynman , `` simulating physics with computers , '' intl .* 21 * , 467488 ( 1982 ) .although it is interactions that cause unentangled particles to become entangled , not every entangled state must result from an interaction . 
for example , a system of two identical fermions is inherently entangled , although the indistinguishability of the particles and the presence of spin introduce further subtleties . this system has been discussed previously by j. s. bolemon and d. j. etzold , `` enriching elementary quantum mechanics with the computer : self - consistent field problems in one dimension , '' am .j. phys . *42*(1 ) , 3342 ( 1974 ) ; j. r. mohallem and l. m. oliveira , `` correlated wavefunction of two particles in an infinite well with a delta repulsion , '' am .j. phys . * 58*(6 ) , 590592 ( 1990 ) ; j. yang and v. zelevinsky , `` short - range repulsion and symmetry of two - body wave functions , '' am . j. phys . *66*(3 ) , 247251 ( 1998 ) ; and e. a. salter , g. w. trucks , and d. s. cyphert , `` two charged particles in a one - dimensional well , '' am .j. phys . *69*(2 ) , 120124 ( 2001 ) . for more sophisticated analyses of entanglement in one - dimensional scattering interactions , see c. k. law , `` entanglement production in colliding wave packets , '' phys .a * 70*(6 ) , 062311 - 14 ( 2004 ) ; n. l. harshman and p. singh , `` entanglement mechanisms in one - dimensional potential scattering , '' j. phys .a * 41*(15 ) , 155304 - 112 ( 2008 ) ; h. s. rag and j. gea - banacloche , `` wavefunction exchange and entanglement in one - dimensional collisions , '' am . j. phys . *83*(4 ) , 305312 ( 2015 ) .when the potential for a two - particle system depends only on the separation distance ( here ) , the time - dependent schrdinger equation is separable in terms of the center - of - mass and relative coordinates .see , for example , d. s. saxon , _ elementary quantum mechanics _ ( holden - day , san francisco , 1968 ; dover reprint , 2012 ) , sec . viii.2 . whether this separation is physically useful depends on how the initial state is prepared and on what measurements on the final state one might perform .the idea for these plots comes from d. styer , `` visualization of quantal entangled states '' ( unpublished ) , < http://www.oberlin.edu/physics/dstyer/teachqm/entangled.pdf>[<http://www.oberlin.edu/physics/dstyer/teachqm/entangled.pdf > ] .similarly , one can say that a stern - gerlach device `` entangles '' the spin state of a particle with its spatial wave function .see , for example , d. j. griffiths , _ introduction to quantum mechanics _ , second edition ( pearson prentice hall , upper saddle river , nj , 2005 ) , eq .( 4.173 ) , and e. merzbacher , _ quantum mechanics _ , third edition ( wiley , new york , 1998 ) , p. 406. two insightful accounts of the history of quantum mechanics during this time period are l. gilder , _ the age of entanglement _( knopf , new york , 2008 ) , and d. kaiser , _ how the hippies saved physics _( norton , new york , 2011 ) .for example , i. fujiwara , `` quantum theory of state reduction and measurement , '' found .phys . * 2*(2/3 ) , 83110 ( 1972 ) ; n. maxwell , `` toward a microrealistic version of quantum mechanics .part ii , '' found .6*(6 ) , 661676 ( 1976 ) . j. s. bell , `` atomic - cascade photons and quantum - mechanical nonlocality , '' comments atom .* 9 * , 121126 ( 1980 ) ; reprinted in j. s. bell , _ speakable and unspeakable in quantum mechanics _ ( cambridge university press , cambridge , 1987 ) , pp .105110 .a. peres and w. h. zurek , `` is quantum theory universally valid ?. j. phys . * 50*(9 ) , 807810 ( 1982 ) .the quote from ref . in this paper contains the only use of `` entangled '' that i can find in ajp from between 1968 and 1990 .p. c. 
w. davies and j. r. brown , _ the ghost in the atom _( cambridge university press , cambridge , 1986 ) . the introductory chapter of this book does speak of a quantum system being `` entangled '' with the macroscopic experimental apparatus ( p. 12 ) and with our knowledge of the system ( p. 34 ) .jrlunde film denmark , `` atomic physics and reality '' ( television documentary , 1985 ) .posted by youtube user muon ray at < https://www.youtube.com/watch?v=bfvjoz51tmc>[<https://www.youtube.com/watch?v=bfvjoz51tmc > ] .this video also includes interviews with aspect , bell , bohm , and wheeler yet only shimony uses the word `` entanglement '' in the included footage .j. s. bell , `` are there quantum jumps ? , '' in _ schrdinger : centenary celebration of a polymath _ , edited by c. w. kilmister ( cambridge university press , cambridge , 1987 ) , pp . 4152 ; reprinted in bell ( 1987 ) , ref . , pp .201212 .g. c. ghirardi , a. rimini , and t. weber , `` disentanglement of quantum wave functions : answer to ` comment on `` unified dynamics for microscopic and macroscopic systems , ' '' '' phys .d * 36*(10 ) , 32873289 ( 1987 ) .
quantum entanglement occurs not just in discrete systems such as spins , but also in the spatial wave functions of systems with more than one degree of freedom . it is easy to introduce students to entangled wave functions at an early stage , in any course that discusses wave functions . doing so not only prepares students to learn about bell s theorem and quantum information science , but can also provide a deeper understanding of the principles of quantum mechanics and help fight against some common misconceptions . here i introduce several pictorial examples of entangled wave functions that depend on just two spatial variables . i also show how such wave functions can arise dynamically , and describe how to quantify their entanglement .
a connected graph is called transient ( resp . recurrent ) if the simple random walk on it is transient ( resp .recurrent ) .benjamini , gurel - gurevich and lyons showed the cerebrating result claiming that the trace of the simple random walk on a transient graph is recurrent almost surely .if a connected subgraph of an infinite connected graph is transient , then the infinite connected graph is transient .therefore , the trace is somewhat smaller " than the graph on which the simple random walk runs .now we consider the following questions : how far are a transient graph and the trace of the simple random walk on ?more generally , how far are and a recurrent subgraph of ? how many edges of do we need to add to so that the enlargement of becomes transient ?there are numerous choices of edges of to be added to .if we add finitely many edges to , then the enlarged graph is also recurrent .therefore , we add _ infinitely _ many edges to and consider whether the enlarged graph is transient . in this paper , we add infinitely many edges of to _randomly_. specifically , we add open edges of bernoulli bond percolation on to , and consider the probability that the enlargement of is transient .we more precisely state our purpose as follows .let be the bernoulli measure on the space of configurations of bernoulli bond percolation on such that each edge of is open with probability .consider the probability that the number of vertices of connected by open edges from a fixed vertex is infinite under .then hammersley s critical probability is the infimum of such that the probability is positive .similarly , we consider the probability that the enlarged graph is transient under and either of the following two values : the infimum of such that the probability is positive , or the infimum of such that the probability is one .we regard these two values as certain critical probabilities , and compare them with hammersley s critical probability .we also consider questions of this kind , not only for transience , but also for other graph properties .let be an infinite connected graph and be a subgraph of .let be a property of the subgraphs of .assume that satisfies and does not .let be the graph obtained by adding open edges of bernoulli bond percolation on to .( see definition [ enl ] for a precise definition . )let be the bernoulli measure on the space of configurations of bernoulli bond percolation on such that each edge of is open with probability .then we consider the probability that satisfies under .let ( resp . ) be the infimum of such that the probability is positive ( resp .for example , if is infinite and is a subgraph consisting of a vertex of and no edges , then and .the main purpose of this paper is to compare and with hammersley s critical probability .we focus on the following cases that is : being a transient subgraph , having finitely many cut points or no cut points , being a recurrent subset , or being connected . and depend heavily on the choice of .assume is being a transient subgraph .then there is a triplet such that . on the other hand, there is a triplet such that .there is also a triplet such that .see theorem [ tr ] for details .we also consider the case that is chosen _ randomly _ , specifically , is the trace of the simple random walk on . 
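as a concrete picture of the random subgraph considered here -- the trace of the simple random walk, i.e. the set of vertices visited and edges traversed -- the following sketch (an illustration only, not part of the paper; the dimension, number of steps and seed are arbitrary choices) samples such a trace on the integer lattice.

```python
import random

def srw_trace_zd(d=3, steps=20000, seed=1):
    """sample the trace (visited vertices and traversed edges) of a
    simple random walk on Z^d started at the origin."""
    rng = random.Random(seed)
    pos = (0,) * d
    vertices = {pos}
    edges = set()
    for _ in range(steps):
        axis = rng.randrange(d)
        sign = rng.choice((-1, 1))
        nxt = tuple(c + sign if i == axis else c for i, c in enumerate(pos))
        edges.add(frozenset((pos, nxt)))   # undirected edge of the trace
        vertices.add(nxt)
        pos = nxt
    return vertices, edges

V, E = srw_trace_zd()
print(len(V), len(E))   # the trace is typically much "thinner" than Z^3 itself
```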
finally , we refer to related results .benjamini , hggstrm and schramm considered questions of this kind with a different motivation to ours .their original motivation was considering the conjecture that for all , there is no infinite cluster in bernoulli percolation on with probability one at the critical point .if an infinite cluster of bernoulli percolation satisfies -a.s . for any ,then the conjecture holds .a question related to this is considering what kinds of conditions on a subgraph of assure .they introduced the concept of _ percolating everywhere _( see definition [ pe ] for a precise definition . ) and considered whether the following claim holds : if we add bernoulli percolation to a percolating everywhere graph , then the enlarged graph is connected , and moreover , , -a.s . for any .this case can be described using our terminology as follows . is , is a percolating everywhere subgraph , and is connected and .they showed that if , then , and conjectured that it also holds for all .recently , benjamini and tassion showed the conjecture for all by a method different from . in this paper , we will discuss the values , for percolating everywhere subgraphs of . is not necessarily assumed to be , and the result depends on whether satisfies a certain condition .see theorem [ conn ] for details . in this paper ,a graph is a locally - finite simple graph .a simple graph is an unoriented graph in which neither multiple edges or self - loops are allowed . and denote the sets of vertices and edges of a graph , respectively .if we consider the -dimensional integer lattice , then it is the nearest - neighbor model .let be an infinite connected graph . in this paper , we consider bernoulli _ bond _ percolation and do not consider site percolation .denote a configuration of percolation by .we say that an edge is open if and closed otherwise .we say that an event is increasing ( resp .decreasing ) if the following holds : if and ( resp . ) for any , then . let be the open cluster containing .we remark that holds . by convention, we often denote the set of vertices by .let be hammersley s critical probability of .that is , for some , this value does not depend on the choice of .[ enl ] let be a subgraph of .let be a random subgraph of such that if is connected , then is also connected .if consists of a single vertex with no edges , then is identical to . in this paper ,a _ property _ is a subset of the class of subgraphs of which is invariant under any graph automorphism of .we consider a property which is well - defined _ only _ on a class of subgraphs of , and call the class the _ scope _ of the property . for example , being a transient subgraph is defined only for connected subgraphs of , and the scope of being transient is the class of connected subgraphs of .we denote ( resp . ) if a subgraph of is in the scope of and satisfies ( resp . does not satisfy ) .let be the cylindrical -algebra on the configuration space .we assume that an infinite connected graph , a subgraph of , and a property satisfy the following : + ( i ) , and are in the scope of .+ ( ii ) and .+ ( iii ) the event that is -measurable and increasing .if is chosen according to a probability law , then we assume that ( i ) and ( ii ) above hold -a.s ., and the event is -measurable and increasing for -a.s . denotes the product -algebra of and . in section 2, we will check that the event is -measurable for those properties , and give an example of such that is _ not _ -measurable . 
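before turning to the formal definitions, a minimal finite-box sketch of the enlargement may help fix ideas. the code below is an illustration only (networkx is an assumed dependency, and the box size, the choice of subgraph h -- a single horizontal line -- and the marked vertex are arbitrary): it adds each edge of a finite grid independently with probability p to the fixed subgraph and reports the relative size of the cluster of the enlargement containing the marked vertex.

```python
import random
import networkx as nx

def enlargement_demo(n=60, p=0.3, seed=0):
    """finite-box illustration of the enlargement: start from a subgraph H of
    the n x n grid (here the horizontal line y = n//2) and add every remaining
    grid edge independently with probability p (bernoulli bond percolation)."""
    rng = random.Random(seed)
    G = nx.grid_2d_graph(n, n)                       # finite patch of Z^2
    U = nx.Graph()
    U.add_nodes_from(G.nodes())
    U.add_edges_from(((x, n // 2), (x + 1, n // 2)) for x in range(n - 1))  # H
    for e in G.edges():
        if rng.random() < p:
            U.add_edge(*e)                           # open edges of percolation
    comp = nx.node_connected_component(U, (n // 2, n // 2))
    return len(comp) / (n * n)

for p in (0.1, 0.3, 0.5, 0.7):
    print(p, enlargement_demo(p=p))
```

on the infinite lattice the analogous questions -- whether the enlarged graph has a given property with positive probability, or with probability one -- are exactly what the critical values defined next are meant to capture.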
: { \mathbb{p}_p}({\mathcal{u}}(h ) \in { \mathcal{p } } ) > 0 \right\}.\ ] ] : { \mathbb{p}_p}({\mathcal{u}}(h ) \in { \mathcal{p } } ) = 1 \right\}.\ ] ] if obeys a law , then we define , , by replacing above with the product measure of and .the main purpose of this paper is to compare the values , , with .if is a single vertex and is being an infinite graph , then the definitions of and are identical and , hence , .it is easy to see that . in this paper , we focus on each of the following properties : ( i ) being a transient subgraph , ( ii ) having finitely many cut points or having no cut points , ( iii ) being a recurrent subset , and ( iv ) being a connected subgraph .the scopes for properties ( i ) and ( ii ) are connected subgraphs of , and the scopes for ( iii ) and ( iv ) are all subgraphs .we now state our main results informally .the following four assertions deal with the four properties ( i ) - ( iv ) above respectively .some of the assertions are special cases of full and precise versions of them appearing in sections 3 to 6 .[ tr ] let be being a transient graph . then ,+ ( i - a ) there is a pair such that . + ( i - b ) there is a pair such that .+ ( ii ) let .then + ( ii - a ) for any , there exists a subgraph such that .+ ( ii - b ) if is the trace of the simple random walk on , then ( iii ) let be an infinite tree . then + ( iii - a ) for any . + ( iii - b ) there is a subgraph such that . + ( iii - c ) there is a subgraph such that .we now consider the number of cut points .let be the law of two independent simple random walks on which start at and , respectively .let .let be the trace of the two - sided simple random walk on .let be the product measure of and . then , + ( i ) if , then has infinitely many cut points -a.s .+ ( ii ) if , then has no cut points -a.s .+ in particular , if is having finitely many cut points , or having no cut points , then the term cut points above is similar to the notion of a cut point of a random walk .see definition [ cut - def ] for a precise definition .it is known that the trace of the two - sided simple random walk on has infinitely many cut points -a.s .lawler ( * ? ? ?* theorem 3.5.1 ) ) the result above means that in the subcritical regime , there remain infinitely many cut points that are not bridged by open bonds of percolation .now we consider the case that is being a recurrent subset . in this paper , we regard this as a subgraph and consider the induced subgraph of the subset .let be being a recurrent subset .then , + ( i - a ) there is a pair such that .+ ( i - b ) there is a pair such that .+ ( ii ) let and be the trace of the simple random walk on . then + ( ii - a ) ( ii - b ) ( iii ) let be an infinite tree .then for any .the following concerns the connectedness of the enlargement of a percolating everywhere subgraph .[ conn ] let be being connected and be a percolating everywhere subgraph of an infinite connected graph . + ( i ) assume satisfies the following : for any infinite subsets and , the number of edges connecting a vertex of and a vertex of is infinite . 
then ( ii )otherwise , there is a percolating everywhere subgraph such that the remainder of this paper is organized as follows .section 2 states some preliminary results including the measurability of .we consider the case that is being a transient graph , the case that is a property concerning the number of cut points of graphs , the case that is being a recurrent subset , and the case that is being connected and is percolating everywhere , in sections 3 to 6 respectively .this section consists of three subsections .first we give a lemma estimating .then we state some results concerning random walk and percolation .finally we discuss the measurability of the event . roughly speaking , in the following , we will show that under a certain condition , can be arbitrarily small , if there is a suitable " subgraph .let be the set of neighborhoods of a vertex .[ epsilon ] fix an infinite connected graph and a property for subgraphs of .let .assume that there is a subgraph of such that then for any there is a subgraph such that .we show this assertion for .let be the map defined by ( here and henceforth means the maximum of and . )then the push - forward measure of the product measure on by is . since , we have that for any , there is such that it is easy to see that by ( [ near ] ) , therefore , since , there is a configuration such that and hence we can show this for in the same manner .we now define recurrent and transient subsets of by following lawler and limic ( * ? ? ?* section 6.5 ) . here andhenceforth denotes the simple random walk on .we regard a recurrent subset as a subgraph and consider the induced subgraph of the recurrent subset .[ def - recur ] we say that a subset of is a _ recurrent subset _ if for some otherwise , is called a _ transient subset_. this definition does not depend on choices of a vertex . for a graph ,we let be the graph distance between and in and we now briefly state the notion of cayley graphs .let be a finitely generated countable group and be a symmetric finite generating subset of which does not contain the unit element .then the _ cayley graph of with respect to _ is the graph such that the set of vertices is and the set of edges is .this graph depends on choices of . in this paper ,all results concerning cayley graphs of groups do not depend on choices of .we say that a graph has the _ degree of growth _ if for any vertex of , [ recur - cluster ] let be a cayley graph of a finitely generated group with the degree of growth .let be the unit element of the finitely generated group .assume and .then , + ( i ) there is a unique infinite cluster , -a.s .+ ( ii ) is a recurrent subset of , that is , ( iii ) let be the first hitting time of to a subset . by woess ( * ? ? ?* theorem 12.2 and proposition 12.4 ) , cayley graphs of a finitely generated group with polynomial growth is amenable graphs .therefore , by bollobs and riordan ( * ? ? ?* theorem 4 in chapter 5 ) , the number of infinite clusters is -a.s . or it is -a.s . by ,the latter holds .thus we have ( i ) . we will show ( ii ) .let be the product measure of and . using the shift invariance of bernoulli percolation and the markov property for simple random walk , here is the inverse element of as group .hence , since we have thus we have ( [ recur - posi ] ) . by (* corollary 25.10 ) all bounded harmonic functions on are constant . 
by following the proof of (* lemma 6.5.7 ) , we have ( [ recur - one ] ) .recall that is the cylindrical -algebra of .first , we consider the case that is a non - random subgraph .\(i ) let be a recurrent subgraph of a transient graph . then the event that is a transient subgraph of is -measurable .+ ( ii ) let be a recurrent subgraph of a transient graph .then the number of cut points of is an -measurable function .see definition [ cut - def ] in section 4 for the definition of cut points .+ ( iii ) let be a transient subset of a transient graph . then the event that is a recurrent subset is -measurable .+ ( iv ) let be a non - connected subgraph of an infinite connected graph . then the event that is connected is -measurable .\(i ) let be the effective resistance from to the outside of .it suffices to show that + is an -measurable function for each . since is a connected subgraph of , contained in .therefore , is determined by configurations in and hence is -measurable .\(ii ) it suffices to show that for any , the event that and is a cut point of is -measurable . is a cut point of if and only if is a cut point of for any .\(iii ) by fubini s theorem , it suffices to see that is -measurable .this follows from \(iv ) if are connected in , then there is such that and are in a connected component of . this event is determined by configurations of edges in .we now consider the case that is a random subgraph of .let be the -algebra on the path space defined by the simple random walk on and be the product -algebra of and .the following easily follows from that the event that the trace of the simple random walk is identical with a given connected subgraph is -measurable .assume that the event satisfies is -measurable for any infinite connected subgraph .let be the trace of the simple random walk .then the event satisfies is -measurable .we first show that there is a non - measurable subset of with respect to the cylindrical -algebra of . here and henceforth, denotes the set of natural numbers .let be the one - sided shift and be an uncountable subset of such that ( i ) and ( ii ) for any and any .assume that is measurable .let be the product measure of the probability measure on with .since for , since is countable , .since preserves , we see that for any , and but this is a contradiction .hence is not measurable .let be the connected subgraph of whose vertices are then any graph automorphism of is the identity map between vertices of .let be the connected subgraph of whose vertices are then let be the projection of to .regard as .let be the property that a graph is isomorphic to a graph in the class .then this event is not measurable with respect to the cylindrical -algebra of .( 70 , 30 ) ( 0 , 15)(10 , 0)10 ( 10 , 15)(0 , 10)10(10 , 0)10 ( 10 , 5)(0 , 10)10 ( 20 , 15)(0 , 10 ) ( 20 , 15)(10 , 0)10 ( 20 , 5)(0 , 10)10 ( 30 , 15)(0 , 10 ) ( 30 , 15)(10 , 0)10 ( 30 , 5)(0 , 10)10 ( 40 , 15)(0 , 10 ) ( 40 , 15)(10 , 0)10 ( 40 , 5)(0 , 10)10 ( 50 , 15)(0 , 10 ) ( 50 , 15)(10 , 0)10 ( 50 , 5)(0 , 10)10in this section , we consider the case that is being a transient graph and assume that is connected .[ extre - graph ] ( i ) there is a graph such that ( ii ) there is a graph such that for any infinite recurrent subgraph of , we remark that if is finite , then .\(i ) let be the graph which is constructed as follows : take and attach a transient tree such that to the origin of .this appears in hggstrm and mossel ( * ? ? 
?* section 6 ) .then for any recurrent subgraph , is the graph obtained by the union of and . is recurrent .let . then -a.s ., is the graph obtained by adding at most countably many finite graphs to .hence is also recurrent , -a.s . since the intersection of and is the origin , is recurrent , -a.s .\(ii ) let be an infinite connected line - graph in benjamini and gurel - gurevich ( * ? ? ?* section 2 ) . in their paper , it is given as a graph having multi - lines , but we can construct a simple graph by adding a new vertex on each edge .let be an infinite connected recurrent subgraph of .then . let . if the number of edges between and is , then hence by this and the recurrence / transience criterion by effective resistance ( see ( * ? ? ?* theorem 2.12 ) for example . ) , since is infinite , we can use the 0 - 1 law and have we give rough figures of the two graphs in the proof above .the proof of ( ii ) above heavily depends on the fact that has unbounded degrees .now we consider a case that has bounded degrees .[ epsilon - thm ] let .then for any there is a recurrent subgraph such that .let be a recurrent subgraph of such that . by ( * ?* ( 2.21 ) ) such exists .if , then contain the unique infinite open cluster a.s . by grimmett , kesten and zhang , is transient .hence now the assertion follows from this and lemma [ epsilon ] . in the proof of theorem [ epsilon - thm ], we choose a subgraph such that and apply lemma [ epsilon ] .however , if is a connected proper subgraph of an infinite tree with , then ( [ near ] ) in lemma [ epsilon ] fails .[ tree - graph ] let be an infinite transient tree. then + ( i ) if is a recurrent subtree of , then ( ii ) if is an infinite recurrent subtree of , then \(i ) by peres ( * ? ? ?* exercise 14.7 ) , if , then for any . since for any , therefore , . if , then -a.s ., is an infinite tree obtained by attaching at most countably many finite trees to .hence , is also a recurrent graph -a.s .therefore , .\(ii ) assume that there is a transient subtree of such that and .there is a finite path from to a vertex of .since is transient , the probability that random walk starts at and , then , goes to a vertex of and remains in after the hitting to is positive .hence the probability that is still recurrent is positive .hence .assume that since there are infinitely many transient connected subtrees of such that , contains at least one infinite transient cluster in .hereafter denotes the -regular tree , . by theorem [ tree - graph ] ,let and be a recurrent subgraph . then the value depends on choices of a subgraph as the following example shows .let be the graph obtained by attaching a vertex of to a vertex of .+ ( i ) if is a subgraph of which is isomorphic to , then ( ii ) if is a subgraph of which is isomorphic to , then we give a short remark about stability with respect to rough isometry .let be the graph obtained by attaching one vertex of the triangular lattice to a vertex of the -regular tree .if , then and if is large , then as this remark shows , there is a pair such that we are not sure that there is a pair such that [ trace3 ] let be a cayley graph of a finitely generated countable group with the degree of growth .let be the trace of the simple random walk on .then hereafter denotes the expectation with respect to a probability measure .let .we show that the volume growth of is ( at most ) second order .we assume that simple random walks start at a vertex . 
since is contained in , by mensikov , molchanov and sidrenko , < + \infty , \ \p < p_c(g).\ ] ] therefore , \le e^{{\mathbb{p}_p}}[|c_o| ] e^{p^o}\left[\left|v(h ) \cap b_{g}(0 , n)\right|\right].\ ] ] using hebisch and saloff - coste ( * ? ? ?* theorem 5.1 ) and summation by parts , & \le \sum_{x \in b_{g}(o , n ) } \sum_{m \ge 0 } p^{o}(s_m = x ) \\ & = \sum_{x \in b_{g}(o , n ) } d_g(o , x)^{2-d } = o(n^2 ) . \end{aligned}\ ] ] using this and fatou s lemma , < + \infty.\ ] ] hence the assertion follows from this and ( * ? ? ?* lemma 3.12 ) .let , and be the trace of the simple random walk on .then by lemma [ recur - cluster ] and the transience of infinite cluster by , by this and theorem [ trace3 ] , let , , and be the trace of the simple random walk on . then this section , we assume that is a transient graph and is a recurrent subgraph of . [ cut - def ] we say that a vertex is a _ cut point _if we remove an edge containing , then the graph splits into two _ infinite _ connected components .the graph appearing in the proof of theorem [ extre - graph ] ( ii ) ( see figure 3 ) has a vertex such that if we remove it , then the graph splits into two connected components .however , it is _ not _ a cut point in the sense of the above definition .[ thm - cut ] let be a cayley graph of a finitely generated countable group with the degree of growth .let be the trace of the two - sided simple random walk on . then if , then has infinitely many cut points , -a.s .let fix a vertex .first we will show that we give a rough sketch of proof of ( [ cut-1 ] ) .first we show there exists a vertex such that two simple random walks starting at and respectively do not intersect with positive probability. then we make " vertices in a large box closed and show the two random walks do not return to the large box with positive probability .finally we choose a path connecting the two traces in a suitable way .using , , and ( * ? ? ?* theorem 5.1 ) , hence for large let then there is a vertex such that . if , then ( [ cut-1 ] ) holds .assume .let and be the event that all edges in are closed . since and is decreasing , since and are transient , there is such that now we can specify two finite paths of .there are vertices such that now we can pick up a path in connecting and .we can let , for , , and . this event is contained in the event and hence we have ( [ cut-1 ] ) .let be the generating set of the cayley graph .consider the following transformation on defined by here we let for an edge and a point .we have that preserves .define a transformation on by by following the proof of ( * ? ? ? * lemma 1 in chapter 5 ) , the family of maps is ergodic . by kakutani (* theorem 3 ) , is ergodic with respect to . by applying the poincar recurrence theorem ( see pollicott and yuri ( * ? ? ?* theorem 9.2 ) for example . ) , to the dynamical system , we have ) ) \cap { \mathcal{u}}(\widetilde s([n+1 , + \infty ) ) ) = \emptyset \text { infinitely many } n \in \mathbb{z } \right ) = 1,\ ] ] where we let the following considers this problem at the critical point in high dimensions .it is pointed out by itai benjamini .( personal communication ) [ itai ] let .let be the trace of the two - sided simple random walk on .. then has infinitely many cut points -a.s .we will show that in below , and are constants depending only on and .fitzner and van der hofstad ( * ? ? 
?* theorem 1.4 ) claims that the decay rate for the two - point function is as , if .therefore , since if , the rest of the proof goes in the same way as in the proof of theorem [ thm - cut ] . the following deals with supercritical phases .let .let be the trace of the two - sided simple random walk on .if , then has no cut points -a.s . using the two - arms estimate by aizenman , kesten and newman , using the shift invariance of ,the unique infinite cluster has no cut points -a.s .in this section , we assume that is a transient graph . recall definition [ def - recur ] .we regard a recurrent subset as a subgraph and consider the induced subgraph of the recurrent subset . in other words , if is a recurrent subset of , then we consider the graph such that the set of vertices is and the set of edges .we proceed with this subsection as in subsection 3.1 .the following correspond to theorem [ extre - graph ] .[ extre - subset ] ( i ) there is a graph such that for any transient subset of , ( ii ) there is a graph such that for any infinite transient subset of , we show this in the same manner as in the proof of theorem [ extre - graph ] .even if we add one edge to a transient subset , then the enlarged graph is also a transient subset . if not , the random walks hit an added vertex infinitely often , a.s . , which contradicts that is a transient graphtherefore , we can show ( i ) in the same manner as in the proof of theorem [ extre - graph ] ( i ) .let be the graph defined in the proof of theorem [ extre - graph ] ( ii ) .then hence is a recurrent subset , -a.s . for any .second we consider the case is .lemma [ recur - cluster ] implies that let , .then for any transient subset of third we consider the case that is a tree .let be an infinite tree and be a transient subset of .then , . for and , we let be the connected subtree of such that and . since is a transient subset ,there are an edge and a vertex such that is a transient _ subgraph _ of and .then we can take an infinite path in such that , and for each , , and is a transient subgraph .if , then there is a number such that does not intersect with -a.s .hence is a transient subset of , -a.s .we do not give an assertion corresponding to theorem [ epsilon - thm ] .we are not sure that there is a recurrent subset such that the induced subgraph of it satisfies ( [ near ] ) in lemma [ epsilon ] .let be a cayley graph of a finitely generated countable group with the degree of growth .let be the trace of the simple random walk on . then + ( i ) if , ( ii ) if , let be the unit element of the group .let we remark that .first we show .let .it follows from ( * ? ? ?* theorem 5.1 ) and that by following the proof of ( * ? ? ?* theorem 6.5.10 ) , = \sum_{x \in v(g ) } \theta_p(x)\theta(x ) = o\left(\sum_{x \in v(g ) } d_{g}(o , x)^{4 - 2d}\right).\ ] ] using and summation by parts , hence and .thus we have if , then by lemma [ recur - cluster ] , if , this clearly holds .thus we see ( i ) .we show ( ii ) by following the proof of ( * ? ? ?* theorem 6.5.10 ) .let and be two independent simple random walks on .let let be the event that is strictly positive .in below , , , are positive constants depending only on .it follows from a generalized borel - cantelli lemma that if + ( 1 ) ( 2 ) for some constant hold , then holds i.o . , -a.s . and assertion ( ii ) follows .* theorem 5.1 ) states that by grigoran and telcs ( * ? ? ?* proposition 10.1 ) the elliptic harnack inequality holds .therefore , we have ( 2 ) .now we show ( 1 ) . 
& = \sum_{x \in b_g(o , 2^k ) \setminus b_g(o , 2^{k-1 } ) } p^o\left(t_x < t_{v(g ) \setminus b_g(o , 2^k)}\right)^2 \\ & = c_4 \sum_{x \in b_g(o , 2^k ) \setminus b_g(o , 2^{k-1 } ) } \theta_{b_g(o , 2^k)}(x)^2 . \end{aligned}\ ] ] since for any , \ge c_6 2^{k(4 - 2d ) } \left|b_g(o , 3 \cdot 2^{k-2 } ) \setminus b_g(o , 2^{k-1 } ) \right|.\ ] ] using this and an isoperimetric inequality ( cf .* theorem 7.4 ) ) , \ge c_7 2^{k(4-d)}.\ ] ] we have = \sum_{x , y \in b_g(o , 2^k ) \setminus b_g(o , 2^{k-1 } ) } p^o\left(t_x \vee t_y <t_{v(g ) \setminus b_g(o , 2^k)}\right)^2.\ ] ] since we have that by using summation by parts therefore , = \begin{cases } o(4^{k } ) \ \ \ d = 3 , \\o(k ) \ \ \ d = 4.\end{cases}\ ] ] using ( [ one ] ) , ( [ two ] ) and the second moment method , for , ^ 2}{e^{p^{o , o}}[z_k^2 ] } \ge \frac{c_8}{k}.\ ] ] thus we have ( 1 ) .we say that a subgraph of is _ connected _ if for any two vertices and of there are vertices of such that , , and is an edge of for each . by definition [ enl ] ,if is connected , then is also connected . on the other hand ,if is _ not _ connected , then can be non - connected .for example , if and , then the following is introduced by . [ pe ] we say that a subgraph of is _ percolating everywhere _ if and every connected component of is infinite .we introduce a notion concerning connectivity . for , we let [ sc ] we say that satisfies ( ti ) if for every satisfying is an infinite set .\(i ) , satisfy ( ti ) .+ ( ii ) , does not satisfy ( ti ) .+ ( iii ) the trace of the two - sided simple random walk on , does not satisfy ( ti ) a.s .let and .since is connected , there is a path connecting and . then contains at least one edge in and is contained in a box . since and are infinite , there are points and .there is a path connecting and in . then contains at least one edge in and is contained in a box . since and are infinite , we can repeat this procedure and have infinitely many disjoint paths .thus we have ( i ) . since and trace have infinitely many cut points , ( ii ) and ( iii ) hold .\(i ) if is ( ti ) , then for any percolating everywhere subgraph if the number of connected components of is finite , then ( ii ) if does not satisfy ( ti ) , then there is a percolating everywhere subgraph such that let be the product measure on such that if and if . denote if and are connected by an open path in this percolation model .define if and only if .this is an equivalent definition .let ] be the probability ] are connected by an open edge with respect to the induced measure of by the quotient map .then , [ y]\right ) = 1 - ( 1-p)^{\left|e([x ] , [ y])\right|}.\ ] ] hence \in a , [ y ] \in b } p([x ] , [ y ] ) \ge p \sum_{[x ] \in a , [ y ] \in b } 1_{\left\{\text{ and are connected by an edge of }\right\ } } = + \infty.\ ] ] by kalikow and weiss ( * ? ? ?* theorem 1 ) , each connected component of is contained in an equivalent class , and conversely , each equivalent class contains each connected component of , due to the percolating everywhere assumption .therefore , is connected if and only if the random graph on is connected .hence , for any and hence if the number of connected components of is finite , then for any decomposition .therefore , , and hence , thus we have ( i ) .assume that does not satisfy ( ti ) .then there is two infinite disjoint sets and such that and . since and may have finite connected components , we modify and .let and be the inner boundaries of and , respectively . 
for any vertex in , there is a vertex in such that they are connected in .since and are infinite , there are and such that infinitely many vertices of are connected to in and infinitely many vertices of are connected to in .let where let be a subgraph of such that .then and are not connected in .assume that there is a vertex such that it is not connected to in .then consider a path from to _ in . let be the first edge which intersects with . since , pass before it pass .there is a path from to which does not pass any edges of .hence , there is a path from to in .therefore , there are just two connected components of , and due to the choices of and , they are both infinite .thus is percolating everywhere . since is finite , .since is non - empty , .thus we have ( ii ) .we are not sure whether if satisfies ( ti ) and is a percolating everywhere subgraph with infinitely many connected components then author wishes to express his gratitude to n. kubota for stimulating discussions , to i. benjamini for pointing out theorem [ itai ] and to h. duminil - copin for notifying me of the reference .the author was supported by grant - in - aid for jsps research fellow ( 24.8491 ) and grant - in - aid for research activity start - up ( 15h06311 ) and for jsps research fellows ( 16j04213 ) .99 m. aizenman , h. kesten , and c. m. newman , uniqueness of the infinite cluster and continuity of connectivity functions for short and long range percolation , _ comm ._ 111 ( 1987 ) 505 - 531. i. benjamini and o. gurel - gurevich , almost sure recurrence of the simple random walk path , preprint , available at arxiv math 0508270 .i. benjamini , o. gurel - gurevich , and r. lyons , recurrence of random walk traces , _ ann ._ 35 ( 2007 ) 732 - 738 .i. benjamini , o. hggstrm and o. schramm , on the effect of adding -bernoulli percolation to everywhere percolating subgraphs of , _ j. math. phys . _ 41 ( 2000 ) 1294 - 1297. i. benjamini and v. tassion , homogenization via sprinkling , arxiv 1505.06069 .b. bollobs and o. riordan , _ percolation _ , cambridge university press , new york , 2006 .r. fitzner and r. van der hofstad , nearest - neighbor percolation function is continuous for , arxiv 1506.07977 .a. grigoran , and a. telcs , sub - gaussian estimates of heat kernels on infinite graphs _ duke math .j. _ 109 ( 2001 ) , no . 3 , 451 - 510 . g. r. grimmett , h. kesten and y. zhang , random walk on the infinite cluster of the percolation model , _ probab .theory related fields _ 96 ( 1993 ) 33 - 44 .o. hggstrm and e. mossel .nearest - neighbor walks with low predictability profile and percolation in dimensions , _ ann ._ 26 ( 1998 )1212 - 1231 . w. hebisch , and l. saloff - coste , gaussian estimates for markov chains and random walks on groups , _ ann ._ 21 ( 1993 ) , no . 2 , 673 - 709 .s. kakutani , random ergodic theorems and markoff processes with a stable distribution , _ proc .second berkeley symp . on math .stat . and prob ._ , 247 - 261 , univ .of california press , 1951 .s. kalikow , and b. weiss , when are random graphs connected ._ israel j. math ._ 62 ( 1988 ) , 257 - 268. g. f. lawler _ intersections of random walks _ , birkhauser , 1996 .g. f. lawler and v. limic , _ random walk : a modern introduction _, cambridge university press , cambridge , 2010 .m. v. mensikov , s. a. molchanov and a. f. sidrenko , percolation theory and some applications , translated in j. soviet math .42 ( 1988 ) , no . 4 , 1766 - 1810 .i tekhniki , _ probability theory . 
mathematical statistics .theoretical cybernetics , vol .24 ( russian ) _ , 53 - 110 , i , akad .nauk sssr , vsesoyuz .moscow , 1986 .y. peres , probability on trees : an introductory climb , _ lectures on probability theory and statistics ( saint - flour 1997 ) _ 193 - 280 .lecture notes in math . 1717 , springer , berlin , 1999 .m. pollicott and m. yuri , _ dynamical systems and ergodic theory _ , cambridge university press , 1998 .w. woess , _ random walks on infinite graphs and groups _ , cambridge university press , 2000 .
We consider how the properties of a subgraph of an infinite graph change when the open edges of Bernoulli percolation on the infinite graph are added to the subgraph. We consider a triplet consisting of an infinite graph, one of its subgraphs, and a property of subgraphs. Then, in a manner similar to the way Hammersley's critical probability is defined, we can associate two values with the triplet. We regard these two values as critical probabilities and compare them with Hammersley's critical probability. In this paper, we focus on the following graph properties: being a transient subgraph, having finitely many cut points or no cut points, being a recurrent subset, and being connected. Our results depend heavily on the choice of the triplet.
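To make the setting of the above abstract concrete, the following is a minimal Python sketch (not taken from the paper) of the kind of experiment it describes: start from the subgraph of a finite box of Z^2 consisting of all horizontal edges, so that each row is a separate component (a finite-box analogue of a percolating-everywhere subgraph), add each remaining vertical edge independently with probability p as in Bernoulli bond percolation, and count the connected components of the union. The box size, the probabilities, and the union-find bookkeeping are choices of this sketch, not of the paper.

```python
import random

def union_find_components(n, edges):
    """Count connected components of an n*n grid graph with the given edge set."""
    parent = list(range(n * n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in edges:
        union(a, b)
    return len({find(v) for v in range(n * n)})

def sprinkle(n=50, p=0.01, seed=0):
    """Start from the 'all horizontal edges' subgraph of an n*n box of Z^2
    (every row is one component) and add each vertical edge independently
    with probability p, as in Bernoulli bond percolation."""
    rng = random.Random(seed)
    idx = lambda x, y: x * n + y
    edges = [(idx(x, y), idx(x, y + 1)) for x in range(n) for y in range(n - 1)]   # within rows
    vertical = [(idx(x, y), idx(x + 1, y)) for x in range(n - 1) for y in range(n)]
    edges += [e for e in vertical if rng.random() < p]                             # sprinkled edges
    return union_find_components(n, edges)

if __name__ == "__main__":
    for p in (0.0, 0.01, 0.05, 0.2):
        print(p, sprinkle(p=p))
```

Even a small sprinkling probability merges the rows quickly in this toy, which is the finite-volume intuition behind the connectivity statements proved above; the theorems themselves, of course, concern the infinite graph.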
in this study , we are interested in the ave with non - hermitian toeplitz matrix of the form where denotes the component - wise absolute value of the vector . a slightly more generalized form of the ave , was discussed in and investigated in a more general context in .moreover , the theoretical and numerical aspects of these problems have been extensively investigated in recent literature .generally speaking , the ave ( [ ku1 ] ) arises from quadratic programs , linear programs , bimatrix games and other problems , which can all be resulted in an linear complementarity problem ( lcp ) , and the lcp is equivalent to the ave ( [ ku1 ] ) .this means that the ave is np - hard in its general form . if , then generalized ave ( [ ku2 ] ) reduces to a system of linear equations , which have several applications in scientific computation .the recent research concerning the ave contents can be summarized as the following aspects , one is the theoretical analysis , which focuses on the theorem of alternatives , various equivalent reformulations , and the existence and nonexistence of solutions ; refer to for details .and the other is how to solve the ave numerically . in the last decade ,based on the fact that the lcp can be reduced to the ave , which enjoys a very special and simple structure , a large variety of numerical methods for solving the ave ( [ ku1 ] ) can be found in the recent literature ; see e.g. and references therein .for example , a finite computational algorithm that is solved by a finite succession of linear programs ( slp ) in , and a semi - smooth newton method is proposed in , which largely shortens the computation time than the slp method .furthermore , a smoothing newton algorithm was presented in , which was proved to be globally convergent and the convergence rate was quadratic under the condition that the singular values of exceed 1 .this condition was weaker than the one applied in . during recent years, the picard - hss iteration method and nonlinear hss - like method are established to solve the ave in succession , respectively . the sufficient conditions to guarantee the convergence of this method and some numerical experimentsare given to show the effectiveness of the method .however , the numbers of the inner hss iterative steps are often problem - dependent and difficult to be determined in actual computations .moreover , the iterative vector can not be updated timely .it has shown that the nonlinear hss - like iterative method is more efficient than the picard - hss iteration method in aspects of the defect mentioned above , which is designed originally for solving weakly nonlinear systems in . in order to accelerate the nonlinear hss - like iteration method ,zhang had extended the preconditioned hss ( phss ) method to solve the ave and also exploit the relaxation technique to accelerate his proposed methods . meanwhile, numerical results also show the effectiveness of his proposed method in . in this paper, we consider the special case of involving the non - hermitian toeplitz structure .similar to the strategies of , two kinds of circulant and skew - circulant splitting ( cscs)-based methods are proposed to fast solve the ave ( [ ku1 ] ) .the rest of this paper is organized as follows . in section 2we review the cscs iteration method and its relative topics . 
in section 3 , we devote to introduce two cscs - based iteration methods to solve ave ( [ ku1 ] ) and investigate their convergence properties , respectively .numerical experiments are reported in section 4 , to shown the feasibility and effectiveness of the cscs - based methods .finally , the paper closes with some conclusions in section 5 .here let be a non - hermitian toeplitz matrix of the following form i.e. , is constant along its diagonals ; see , and be a zero matrix , the general ave ( [ ku2 ] ) reduced to the system of linear equations it is well - known that a toeplitz matrix possesses a circulant and skew - circulant splitting , where and note that is a circulant matrix and is a skew - circulant matrix. a circulant matrix can be diagonalized by the discrete fourier matrix and a skew - circulant matrix can be diagonalized by a discrete fourier matrix with diagonal scaling , i.e. , .that is to say , it holds that where and is the imaginary unit . and are diagonal matrices formed by the eigenvalues of and , respectively , which can be obtained in operations by using the fft . moreover , ng established the following cscs iteration method to solve non - hermitian toeplitz system of linear equations ( [ ku3 ] ) .* algorithm 1 the cscs iteration method*. + _ given an initial guess , compute for using the following iterative scheme until converges , where is a positive constant and is the identity matrix . _ in the matrix - vector form, the cscs iteration can be equivalently rewritten as where it is easy to see that cscs is a stationary iterative method obtained from the splitting where on the other hand , we have here , is the iterative matrix of the cscs method .we remark that the cscs iteration method is greatly similar to the hss iteration method and its variants , see e.g. .when the circulant part and skew - circulant part of the coefficient matrix are both positive definite , ng proved that the spectral radius of the cscs iterative matrix is less than 1 for any positive iterative parameters , i.e. , the cscs iteration method unconditionally converges to the exact solution of for any initial guess ; refer to for details .motivated by the pioneer works of , we extend the classical cscs iteration method to two types of cscs - based methods for solving the ave ( [ ku1 ] ) .these methods will fully exploit the toeplitz structure to accelerate the computation speed and save storage .next , we will devote to constructing these two new methods , i.e. , the picard - cscs iterative method and nonlinear cscs - like iterative method .recalling that the picard iterative method is a fixed - point iterative method and the linear term and the nonlinear term are separated , the ave can be solved by using of the picard iterative method we assume that the toeplitz matrix is non - hermitian positive definite . 
in this case , the next iterate of can be approximately computed by the cscs iteration by making use of as following ( see ) where and are the matrices defined in the previous section , is a positive constant , a prescribed sequence of positive integers , and is the starting point of the inner cscs iteration at outer picard iteration .this leads to the inexact picard iteration method , called picard - cscs iteration method , for solving the system ( [ ku1 ] ) which can be summarized as following ( refer to ) .* algorithm 2 the picard - cscs iteration method * + _ let be a non - hermitian toeplitz matrix ; and are the circulant and skew - circulant parts of given in ( [ ku4x ] ) and ( [ ku4y ] ) and they are both positive definite . given an initial guess and a sequence of positive integers , compute for , using the following iteration scheme until satisfies the following stopping criterion : _ * set ; * for , solve the following linear systems to obtain : where is a given positive constant . *set .the advantage of picard - cscs iterative method is obvious .first , the two linear subsystems in all inner cscs iterations have the same shifted circulant coefficient matrix and shifted skew - circulant coefficient matrix , which are constant with respect to the iteration index .second , the exact solutions can be efficiently achieved via ffts in operations .hence , the computations of the picard - cscs iteration method could be much cheaper than that of the picard - hss iteration method .the next theorem provides sufficient conditions for the convergence of the picard - cscs method to solve system ( [ ku1 ] ) .[ theorem1 ] let be a non - hermitian toeplitz matrix ; and are the circulant and skew - circulant parts of given in ( [ ku4x ] ) and ( [ ku4y ] ) and they are both positive definite .let also .then the ave ( [ ku2 ] ) has a unique solution , and for any initial guess and any sequence of positive integers , the iteration sequence produced by the picard - cscs iteration method converges to provided that , where is a natural number satisfying * proof*. the proof uses arguments similar to those in the proof of the convergence theorem of the picard - hss iteration method ; see .in fact , we only need to replace the hermitian matrix and the skew - hermitian matrix of the convergence theorem of the picard - cscs iteration method by the circulant matrix and the skew - circulant matrix , and then obtain the convergence theorem of the picard - cscs iteration method . according to theorem [ theorem1 ], we see that the picard - cscs iteration method to solve the ave ( [ ku2 ] ) is convergent if the matrix is positive definite , ( see for the definition of ) and the sequence , is defined as in theorem [ theorem1 ] .similar to , the residual - updating form of the picard - cscs iteration method can be written as following .* algorithm 3 the picard - cscs iteration method * ( residual - updating variant ) + _ let be a non - hermitian toeplitz matrix ; and are the circulant and skew - circulant parts of given in ( [ ku4x ] ) and ( [ ku4y ] ) and they are both positive definite .given an initial guess and a sequence of positive integers , compute for , using the following iteration scheme until satisfies the following stopping criterion : _* set and ; * for , solve the following linear systems to obtain : where is a given positive constant .* set . 
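As a concrete illustration of Algorithms 2 and 3, the following NumPy sketch assembles the circulant and skew-circulant parts of a Toeplitz matrix, solves the two shifted sub-systems by FFT diagonalization, and runs the Picard-CSCS outer/inner loop. It is a minimal sketch rather than the code used in the experiments below: the reconstruction of the splitting indices, the parameter alpha, the inner iteration count and the tolerances are placeholder choices, and a dense Toeplitz matrix is formed only to evaluate the residual.

```python
import numpy as np
from scipy.linalg import toeplitz

def cscs_split(col, row):
    """CSCS splitting A = C + S of a Toeplitz matrix with first column
    col = (t_0, t_1, ..., t_{n-1}) and first row row = (t_0, t_{-1}, ..., t_{-(n-1)})."""
    n = len(col)
    c = np.empty(n, dtype=complex)   # first column of the circulant part C
    s = np.empty(n, dtype=complex)   # first column of the skew-circulant part S
    c[0] = s[0] = col[0] / 2.0
    for k in range(1, n):
        t_k, t_k_minus_n = col[k], row[n - k]    # t_k and t_{k-n}
        c[k] = (t_k + t_k_minus_n) / 2.0
        s[k] = (t_k - t_k_minus_n) / 2.0
    return c, s

def make_solvers(c, s, alpha):
    """FFT solvers for (alpha*I + C) y = v and (alpha*I + S) y = v, using
    C = F^{-1} diag(fft(c)) F and D S D^{-1} circulant with D = diag(omega^k)."""
    n = len(c)
    omega = np.exp(-1j * np.pi * np.arange(n) / n)
    lam_c, lam_s = np.fft.fft(c), np.fft.fft(omega * s)
    solve_c = lambda v: np.fft.ifft(np.fft.fft(v) / (alpha + lam_c))
    solve_s = lambda v: np.fft.ifft(np.fft.fft(omega * v) / (alpha + lam_s)) / omega
    return solve_c, solve_s

def picard_cscs(col, row, b, alpha, inner=10, outer=100, tol=1e-6):
    """Picard outer iteration for A x = |x| + b with CSCS inner sweeps."""
    n = len(b)
    c, s = cscs_split(col, row)
    solve_c, solve_s = make_solvers(c, s, alpha)
    omega = np.exp(-1j * np.pi * np.arange(n) / n)
    lam_c, lam_s = np.fft.fft(c), np.fft.fft(omega * s)
    C_mul = lambda v: np.fft.ifft(lam_c * np.fft.fft(v))                 # circulant product
    S_mul = lambda v: np.fft.ifft(lam_s * np.fft.fft(omega * v)) / omega  # skew-circulant product
    A = toeplitz(col, row)                                               # dense copy, residual check only
    x = np.zeros(n, dtype=complex)
    res = np.inf
    for _ in range(outer):
        rhs = np.abs(x) + b                   # nonlinear term frozen at the outer iterate
        y = x.copy()
        for _ in range(inner):                # CSCS sweeps for A y ~= rhs
            y = solve_c(alpha * y - S_mul(y) + rhs)
            y = solve_s(alpha * y - C_mul(y) + rhs)
        x = y
        res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if res < tol:
            break
    return x, res
```

The nonlinear CSCS-like variant of the next subsection is obtained, in the usual HSS-like fashion, by dropping the frozen inner loop and re-evaluating the absolute-value term after every half sweep, so that the iterate is updated immediately.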
in the picard - cscs iteration ,the numbers of the inner cscs iterative steps are often problem - dependent and difficult to be determined in actual computations . moreover , the iterative vector can not be updated timely .thus , to avoid the defection and still preserve the advantages of the picard - cscs iterative method , based on the nonlinear fixed - point equations we propose the following nonlinear cscs - like iteration method .* algorithm 4 the nonlinear cscs - like iteration method * + _ let be a non - hermitian toeplitz matrix ; and are the circulant and skew - circulant parts of given in ( [ ku4x ] ) and ( [ ku4y ] ) and they are both positive definite . given an initial guess and compute for , using the following iteration scheme until satisfies the following stopping criterion : where is a given positive constant ._ define and then the nonlinear cscs - like iterative scheme can be equivalently expressed as the ostrowski theorem , i.e. , theorem 10.1.3 in , gives a local convergence theory about a one - step stationary nonlinear iteration .based on this , zhu and zhang established the local convergence theory for the nonlinear cscs - like iteration method in .however , these convergence theory has a strict requirement that is -differentiable at a point such that . obviously , the absolute value function is non - differentiable .leveraging the smoothing approximate function introduced in , we can establish the following local convergence theory for nonlinear cscs - like iterative method .but firstly , we must review this smoothing approximation and its properties , which will be used in the next section .define by it is clear that is a smoothing function of , now we give some properties of , which will be used in the next section .( ) is a uniformly smoothing approximation function of , i.e. , ( ) for any , the jacobian of at is [ lemma3 ] assume that is -differentiable at a point such that .suppose that and are the circulant and the skew - circulant parts of the matrix given in ( [ ku4x ] ) and ( [ ku4y ] ) , and and both are positive definite matrices .denote by and then holds ; in other word , is a point of attraction of the nonlinear cscs - like iteration , provided .leveraging the smoothing approximate function in ( [ ku9x ] ) , we define and then the nonlinear cscs - like iterative scheme can be equivalently expressed as [ the1 ] assume that the condition of lemma [ lemma3 ] are satisfied , and be circulant and skew - circulant parts of the toeplitz matrix , respectively . for any initialguess , the iteration sequence produced by the nonlinear cscs - like iteration method can be instead approximately by that produced by its smoothed nonlinear cscs - like iterative scheme ( [ ku12x ] ) , i.e. , provided * proof*. first , we give the well - known inequality and the result achieved in . then based on iterative scheme ( [ ku9x ] ) and ( [ ku12x ] ) , we obtain for holds , provided this completes the proof . assume that the conditions of theorem [ the1 ] are satisfied .denoted by and then the spectral radius of the matrix is less than 1 , where and is the the jacobian of at defined in ( [ ku11 ] ) , provided that that is to say , for any initial guess , the iteration sequence produced by the nonlinear cscs - like iteration method converges to , or is a point of attraction of the ninlinear cscs - like iteration , provided the condition ( [ ku16 ] ) . *proof*. for , we only need to prove where is defined in ( [ ku9x ] ) and is defined in ( [ ku12x ] ) . 
via using the theorem [ the1 ] , the former part holds for , provided as the uniformly smoothing approximation function of is -differentiable at a point such that , according lemma [ lemma3 ] , is a point of attraction of the nonlinear cscs - like iteration , that is the second part in ( [ ku17 ] ) holds for , provided .next we prove . via straightforward computations we have where is the jacobian of the smoothing approximation function at ,also since we obtain now , under the condition , we easily obtain . * remark 1 . * an attractive feature of the nonlinear cscs - like iterative method is that it avoids the use of the differentiable in actual iterative scheme , although we employe it in the convergence analysis .thus , the smoothing approximate function in ( [ ku9y ] ) is not necessary in actual implementation . at the end of this subsection, we remark that the main steps in nonlinear cscs - like iteration method can be alternatively reformulated into residual - updating form similar to those in the picard - cscs iterative method as follows . *algorithm 5 ( the nonlinear cscs - like iteration method * ( residual - updating variant ) + _ let be a non - hermitian toeplitz matrix ; and are the circulant and skew - circulant parts of given in ( [ ku4x ] ) and ( [ ku4y ] ) and they are both positive definite . given an initial guess and compute for , using the following iteration scheme until satisfies the following stopping criterion : where is a given positive constant .in this section , the numerical properties of the picard - cscs and the nonlinear cscs - like methods are examined and compared experimentally by a suit of test problems .all the tests are performed in matlab r2014a on intel(r ) core(tm ) i5 - 3470 cpu @ 3.2 ghz and 8.00 gb of ram , with machine precision , and terminated when the current residual satisfies where is the computed solution by each of the methods at iteration step , and a maximum number of the iterations 200 is used . in addition , the stopping criterion for the inner iterations of the picard - cscs method are set to be where is the number of the inner iteration steps and is the prescribed tolerance for controlling the accuracy of the inner iterations at the -th outer iteration step . if is fixed for all , then it is simply denoted by . in our numerical experiments ,we use the zero vector as the initial guess , the accuracy of the inner iterations for both picard - cscs and picard - hss iterative methods is fixed and set to , a maximum number of iterations 15 ( ) for inner iterations , and the right - hand side vector of aves ( 1 ) is taken in such a way that the vector with be the exact solution .the two sub - systems of linear equations involved are solved in the way if , then . moreover ,if the two sub - systems of linear equations involved in the picard - cscs and the nonlinear cscs - like iteration methods are solved by making use of the method presented in and using parallel computing , the numerical results of the picard - cscs and the nonlinear cscs - like iteration methods must be better . in practical implementations ,the optimal parameter recommended in is employed for the picard - hss and the nonlinear hss - like methods , where and are the minimum and the maximum eigenvalues of the hermitian part of the coefficient matrix .similarly , we adopt the optimal parameters given in ( * ? ? ?* theorem 2 ) for the picard - cscs and the nonlinear cscs - like methods . 
at the same time, it is remarkable that they only minimize the bound of the convergence factor of the iteration matrix , but not the spectral radius of the iteration matrix .admittedly , the optimal parameters are crucial for guaranteeing fast convergence speeds of these parameter - dependent iteration methods , but they are generally very difficult to be determined , refer to e.g. for a discussion of these issues . to show that the proposed iteration methods can also be efficiently applied to deal with complex system of aves ( [ ku1 ] ), we construct and test the following example , which is a toeplitz system of aves with complex matrix .* example 1*. we consider that is a complex non - hermitian , sparse and positive definite toeplitz matrix with the following form where .it means that the matrices in the target aves are defined as eq .( [ matrix2 ] ) . according to the performances of hss - based methods ,see , compared with other early established methods , we compare the proposed cscs - based methods with hss - based methods in example 1. then we will choose different parameters and and present the corresponding numerical results in tables [ tab2]-[tab3 ] ..the optimal parameters for example 1 . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] [ tab7 ] based on the numerical results in tables [ tab5]-[tab7 ] ,it is notable that these four iterative solvers , i.e. , the picard - cscs and the nonlinear cscs - like , can successfully obtain approximate solutions to the system of aves for all different matrix dimensions ; whereas both the gn - gmres(5 ) and the gn - tfqmr iterative methods fully fail to converge .it should be because the newton - like iterative methods are usually sensitive to the initial guess and the accuracy of solving inner linear systems .when the matrix dimension is increasing , the number of outer iteration steps are almost fixed or or decreasing slightly for all iteration methods , whereas the number of inner iteration steps show the contrary phenomena for the cases with and . meanwhile , the total cpu times and total iteration steps for both the picard - cscs and the nonlinear cscs - like iterative methods are increasing quickly except the cases of with and . on the other hand , from tables [ tab5]-[tab7 ] , we also observe that both the nonlinear cscs - like method is almost more competitive than the picard - cscs iteration methods in terms of the number of iteration steps and the cpu elapsed time for solving the system of aves .in particular , it is remarkable that the nonlinear cscs - like method can use slightly less number of iteration steps to converge than the picard - cscs iterative solver , but the picard - cscs iterative solver can save a little elapsed cpu time with compared to the nonlinear cscs - like iterative method in our implementations .however , it still concludes that the nonlinear cscs - like iterative method is the first choice for solving the aves concerning in example 2 . at the same time , the picard - cscs iterative method can be viewed as a good alternative .in this paper , we have proposed two cscs - based methods for solving the system of aves with non - hermitian toeplitz structure .two cscs - based iterative methods are based on separable property of the linear term and nonlinear term as well as the circulant and skew - circulant splitting ( cscs ) of involved non - hermitian definite toeplitz matrix . 
by leveraging the smoothing approximate function ,the locally convergence have been analysed .further numerical experiments have shown that the picard - cscs and the nonlinear cscs - like iteration methods are feasible and efficient nonlinear solvers for the ave . moreover ,in particular , the nonlinear cscs - like method often does better than the picard - cscs method to solve ave is that the smoothing approximate function is introduced in the convergence analysis although is avoid in implement algorithm .hence , to find a better theoretical proof for cscs - like will be a topics and suitable accelerated techniques in the future research .99 s .- l .hu , z .- h .huang , a note on absolute value equations , optim .lett . , 4 ( 2010 ) , pp .417 - 424 .o. prokopyev , on equivalent reformulations for absolute value equations .appl . , 44 ( 2009 ) , pp .363 - 372 .mangasarian , absolute value equation solution via concave minimization , optim .lett . , 1 ( 2007 ) , pp . 3 - 8. j. rohn , v. hooshyarbakhsh , r. farhadsefat , an iterative method for solving absolute value equations and sufficient conditions for unique solvability , optim .lett . , 8 ( 2014 ) ,35 - 44 .bai , g.h .golub , c .- k .li , covergence properties of preconditioned hermitian and skew - hermitian splitting methods for non - hermitian positive semidefinite matrices , math .comput . , 76 ( 2007 ) , pp .287 - 298 .ortega , w.c .rheinboldt , iterative solution of nonlinear equations in several variables , siam , philadelphia , usa , 2000 .l. yong , particle swarm optimization for absolute value equations , j. comput .systems , 6 ( 7 ) ( 2010 ) , pp .2359 - 2366 .huang , a practical formula for computing optimal parameters in the hss iteration methods , j. comput . appl ., 255 ( 2014 ) , pp .142 - 149 .choi , s.k .chung , y.j .lee , numerical solutions for space fractional dispersion equations with nonlinear source terms , bull .korean math .soc . , 47 ( 2010 ) , pp .1225 - 1234 .gu , t .- z .huang , h .- b .li , l. li , w .- h .luo , on -step cscs - based polynomial preconditioners for toeplitz linear systems with application to fractional diffusion equations , appl .lett . , 42 ( 2015 ) , pp .
Recently, two kinds of HSS-based iteration methods have been constructed for the absolute value equation (AVE), a family of non-differentiable NP-hard problems. In the present paper, we focus on developing CSCS-based methods for solving the AVE involving a Toeplitz matrix, and propose the Picard-CSCS method and the nonlinear CSCS-like iterative method. With the help of a smoothing approximate function, we give a theoretical analysis of the convergence of the CSCS-based iteration methods for the AVE. The advantage of these methods is that they do not require storage of the coefficient matrix, and the linear sub-systems can be solved efficiently via the fast Fourier transform (FFT). Therefore, computational cost and computer storage may be saved in actual implementations. Extensive numerical experiments involving the numerical solution of fractional diffusion equations are employed to demonstrate the robustness and effectiveness of the proposed methods and to compare them with recent methods. * Key words *: absolute value equation; CSCS-based iteration; Toeplitz matrix; convergence analysis; smoothing approximate function; fast Fourier transform. * AMS classification *: 65F12; 65L05; 65N22.
microlensing is one of the most important methods that can detect and characterize extrasolar planets ( see the review of perryman 2000 ) .microlensing planet detection is possible because planets can induce perturbations to the standard lensing light curves produced by primary stars . once the perturbation is detected and analyzed , one can determine the mass ratio , , and the projected separation , ( normalized by the einstein ring radius ) , between the planet and host star .recently , a clear - cut microlensing detection of an exoplanet was reported by .planet detection via microlensing is observationally challenging .one of the most important difficulties in detecting planets via microlensing lies in the fact that planet - induced perturbations last for a short period of time . for a jupiter - mass planet ,the duration is only a few days and it decreases as . to achieve high monitoring frequency required for planet detections ,current lensing experiments are employing early warning system to issue alerts of ongoing events in the early stage of lensing magnification and follow - up observation programs to intensively monitor the alerted events .however , follow - up is generally done with small field - of - view instrument , and thus events should be monitored sequentially . as a result , only a handful number of events can be followed at any given time , limiting the number of planet detections .an observational strategy that can dramatically increase the planet detection efficiency was proposed by .when a microlensing event is caused by a star possessing a planet , two sets of caustics are produced . among them , one is located away from the primary lens ( planetary caustic ) and the other is located close to the primary lens ( central caustic ) . the location of the planetary caustic varies depending on the planetary separation , which is not known , and thus it is impossible to predict the time of planetary perturbation in advance .on the other hand , the central caustic is always located very close to the primary lens , and thus the central perturbation occurs near the peak of high - magnification events .therefore , by focusing on high - magnification events , it is possible to dramatically increase the planet detection efficiency , enabling one to maximize the number of planet detections with a limited use of resources and time .an additional use of high magnification events was noticed by .they pointed out that multiple planets with separations of the einstein ring radius significantly affect the central region of the magnification pattern regardless of the orientation and thus microlensing can be used to detect multiple - planet systems .they noted , however , that characterizing the detected multiple - planet systems by analyzing the central perturbations would be difficult due to the complexity of the magnification pattern combined with the large number of lensing parameters required to model multiple - planet systems . 
in this paper, we demonstrate that in many cases the central planetary perturbations induced by multiple planets can be well approximated by the superposition of the single - planetary perturbations where the individual planet - primary pairs act as independent binary lens systems ( binary superposition ) .the validity of the binary - superposition approximation implies that a simple single - planet lensing model is possible for the description of the anomalies produced by the individual planet components , enabling better characterization of these systems .the layout of the paper is as follows . in 2 , we describe the multiple - planetary lensing and magnification pattern in the central region . in 3 , we illustrate the validity of th binary - superposition approximation in describing the central magnification pattern of multiple - planet systems . in 4 , we discuss the usefulness of the superposition approximation in the interpretation of the multiple planetary signals .the equation of lens mapping from the lens plane to the source plane ( lens equation ) of an point masses is expressed as where , , and are the complex notations of the source , lens , and image positions , respectively , denotes the complex conjugate of , are the masses of the individual lens components , is the total mass of the system , and thus represent the mass fractions of the individual lens components . hereall angles are normalized to the einstein ring radius of the total mass of the system , , i.e. ^{1/2 } , \label{eq2}\ ] ] where and are the distances to the lens and source , respectively .the lensing process conserves the source surface brightness , and thus the magnifications of the individual images correspond to the ratios between the areas of the images and source . for an infinitesimally small source element , the magnification is , the total magnification is the sum over all images , . for a single lens ( ) ,the lens equation is simply inverted to solve the image positions and magnifications for given lens and source positions .this yields the familiar result that there are two images with magnifications and separations from the lens of and ] is the total magnification .a planetary lensing with a single planet is described by the formalism of a binary ( ) lens . in this case, the lens equation can not be inverted algebraically .however , it can be expressed as a fifth - order polynomial in and the image positions are then obtained by numerically solving the polynomial .one important characteristic of binary lensing is the formation of caustics , which represent the set of source positions at which the magnification of a point source becomes infinite .planetary perturbations on lensing light curves occur when the source approaches close to the caustics .the location and size of these caustics depend on the projected separation and the mass ratio . for a planetary case ,there exist two sets of disconnected caustics .the planetary caustic(s ) is ( are ) located away from the primary star on or very close to the star - planet axis with a separation from the primary star of .the central caustic , on the other hand , is located close to the primary lens with a size of .caustics are located within the einstein ring when the planetary separation is within the range of .the size of the caustic , which is directly proportional to the planet detection efficiency , is maximized when the planet is located within this range , and thus this range is referred as `` lensing zone '' . 
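Since the displayed single-lens expressions above are garbled in this copy, the following short sketch records the standard point-lens formulas they refer to: the two image positions and their magnifications in units of the Einstein radius, and the resulting light curve. It is a textbook reconstruction offered for reference, not a quotation of the paper's equations.

```python
import numpy as np

def point_lens_images(u):
    """Signed image positions on the lens-source axis (Einstein radii) for
    a single point lens with lens-source separation u."""
    root = np.sqrt(u**2 + 4.0)
    return 0.5 * (u + root), 0.5 * (u - root)   # major (outside ring) and minor (inside ring) images

def point_lens_magnification(u):
    """Total magnification A(u) and the individual image magnifications A_+ and A_-."""
    root = np.sqrt(u**2 + 4.0)
    a_total = (u**2 + 2.0) / (u * root)
    return a_total, 0.5 * (a_total + 1.0), 0.5 * (a_total - 1.0)

def light_curve(t, t0, tE, u0):
    """Standard single-lens light curve with u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = np.hypot(u0, (t - t0) / tE)
    return point_lens_magnification(u)[0]

if __name__ == "__main__":
    t = np.linspace(-20.0, 20.0, 9)   # days, placeholder sampling
    print(light_curve(t, t0=0.0, tE=10.0, u0=0.1))
```

These are the single-lens quantities against which the planetary perturbations discussed next are measured.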
for a multiple - lens system , the lens equation is equivalent to a polynomial with an order of ( . therefore , a multiple - lens system produces a maximum of and a minimum of images , with the number of images changing by multiple of two as the source crosses a caustic . for a system with two planets ( and thus ) , there are thus a maximum of 10 images and a minimum of 4 images . unlike the caustics of binary lensing ,those of multiple lensing can exhibit self - intersecting and nesting .the mass ratio of a planet to its primary star is very small , and thus the lensing behavior of planet - induced anomalies can be treated as perturbation . due to the perturbative nature of planetary anomalies , it was known that the magnification pattern in the region around _planetary _ caustics of multiple - planet systems can be described by the superposition of those of the single - planet systems where the individual planet - primary pairs acts as independent binary lens systems , i.e. , where , , , and represent the magnifications of the exact multiple lensing , binary - superposition approximation , binary lensing with the pairs of the primary and individual planets , and single - mass lensing of the primary alone , respectively . by contrast, it was believed that the binary - superposition approximation would not be adequate to describe the magnification pattern in the central region because the central caustics produced by the individual planet components reside at the same central region and thus non - linear interference of the perturbations would be large . unlike this belief about the magnification pattern in the central perturbation region , we find that non - linear interference between the perturbations produced by the individual planets of a multiple - planet system is important only in a small confined region very close to the central caustics .this implies that in many cases binary - superposition approximation can also be used for the description of the magnification pattern in the central perturbation region . to demonstrate the validity of the binary - superposition approximation in the central region, we construct a set of _ magnification excess _ maps of example multiple - planet systems containing two planets with various orientations .the magnification excess represents the deviation of the magnification from the single - mass lensing as a function of the source position , and it is computed by where is the magnification of the triple ( primary star plus two planets ) lensing . in figure[ fig : one ] , we present the constructed contour ( drawn by black lines ) maps of magnification excess . in the map ,the parameters of planet 1 are held fixed at and , while the projected separation and the angle between the position vectors to the individual planets from the primary star ( orientation angle ) , , are varied for a second planet with .greyscale is used to represent positive ( bright , ) and negative ( dark ) deviation regions .also drawn are the contours ( drawn in white lines ) of magnification excess based on binary - superposition approximation , i.e. , for the maps based on binary superposition , we consider slight shift of the effective lensing position of the primary star ( ) toward the individual planets ( ) with an amount of to better show the magnification pattern in the very central region and the detailed caustic structure , we enlarge the maps and presented them in figure [ fig : two ] . 
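Before turning to the detailed discussion of figures [fig:one] and [fig:two], the following sketch spells out the bookkeeping behind the excess maps just described: the binary-superposition combination of per-planet binary magnifications and the fractional magnification excess. The binary-lens magnification itself requires solving the fifth-order polynomial mentioned earlier, so here it enters only as a user-supplied placeholder callable `binary_magnification`; that name and its signature are assumptions of this sketch, not of the paper.

```python
import numpy as np

def single_lens_mag(u):
    """Point-lens magnification of the primary alone."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def superposed_mag(zeta, planets, binary_magnification):
    """Binary-superposition approximation for N planets:
    A_BS = sum_i A_binary,i - (N - 1) * A_single,
    where binary_magnification(zeta, s, q, phi) is a user-supplied binary-lens
    solver (placeholder) for the primary plus planet i with separation s,
    mass ratio q and orientation angle phi, at complex source position zeta."""
    a_single = single_lens_mag(abs(zeta))
    a_sum = sum(binary_magnification(zeta, s, q, phi) for (s, q, phi) in planets)
    return a_sum - (len(planets) - 1) * a_single

def magnification_excess(a, u):
    """Fractional excess over the single-lens magnification, eps = (A - A_single) / A_single."""
    a0 = single_lens_mag(u)
    return (a - a0) / a0
```

Evaluating `magnification_excess` for the exact triple-lens magnification and for `superposed_mag` over a grid of source positions reproduces, in outline, the two sets of contours compared in the figures.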
in each map ,the figures drawn in thick black and white lines represent the caustics for the cases of the exact triple lensing and binary superposition , respectively . from the comparison of the excess maps constructed by the exact triple lensing and binary - superposition approximation, one finds that binary superposition is a good approximation in most of the central perturbation region as demonstrated by the good match between the two sets of contours .slight deviation of the binary - superposition approximation from the exact lensing magnification occurs ( a ) in a small region very close to the central caustics and ( b ) in the narrow regions along the primary - planet axes .this can be better seen in figure [ fig : three ] , where we present the greyscale maps for the difference between the excesses of the triple lensing and binary - superposition approximation , i.e. , the difference in the region close to the central caustics is caused by the non - linear interference between the perturbations produced by the individual planets . on the other hand ,the difference along the primary - planet axes is caused by the slight positional shift of the triple - lensing caustics due to the introduction of an additional planet .however , the area of the deviation region , in general , is much smaller than the total area of the perturbation region , and thus the binary superposition approximation is able to well describe most part of planetary anomalies in lensing light curves .this can be seen in figure [ fig : four ] , where we present example light curves of multiple - planetary lensing events and compare them to those obtained by binary superposition .the lensing behavior of a multiple planetary system is determined by many parameters including , , , , and , and binary - superposition might not be a good approximation in some space region of these parameters .we therefore investigate the region of parameter space where binary - superposition is a poor approximation .due to the numerousness of the parameters , we choose a method of investigation where we inquire the validity of the approximation on the individual parameters by varying one parameter and fixing other parameters .the validity of the approximation is quantified by the ratio of where and represent the area of the central perturbation region and the area of the difference region between the exact triple lensing and the binary - superposition approximation , respectively . in some cases , the perturbation regions caused by the planetary and central caustics are connected , making the boundary between the two regions ambiguous .we thus define the central perturbation region as the region within the lens - source impact parameter of ( corresponding to the region with magnifications ) . the areas and are computed by setting the threshold values of and , respectively . in figures [ fig : five ] , [ fig : six ] , and [ fig : seven ] , we present the dependence of on the orientation angle ( ) , the mass ratios of the component planets ( and ) , and the separations to them ( and ) , respectively .from the variation of depending on the orientation angle , we find that the difference between the exact lensing and binary - superposition approximation becomes bigger as decreases and thus the two planets are located closer to each other .we interpret this tendency as the increase of the non - linear interference between the perturbations caused by the two planets as the separation between them decreases . 
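The ratio used above to quantify the validity of the approximation can be evaluated on a regular grid of excess values as in the following sketch. The impact-parameter cut and the two threshold values are placeholders, since the paper's exact numbers are garbled in this copy, and equal-area grid cells are assumed.

```python
import numpy as np

def validity_ratio(excess_triple, excess_bs, u_grid,
                   u_central=0.1, thr_perturb=0.05, thr_diff=0.01):
    """Ratio f = (area where the triple-lens and binary-superposition excesses
    differ by more than thr_diff) / (area of the central perturbation region),
    evaluated on a regular grid of source positions with lens-source
    separation u_grid.  Thresholds and the impact-parameter cut are placeholders."""
    central = u_grid <= u_central
    perturbed = central & (np.abs(excess_triple) >= thr_perturb)
    differs = central & (np.abs(excess_triple - excess_bs) >= thr_diff)
    return np.count_nonzero(differs) / max(np.count_nonzero(perturbed), 1)
```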
from the dependence on the mass ratio and separation, we find that binary superposition becomes a poor approximation as either the planet mass increases or the separation approaches to unity .these tendencies are the natural results of the breakdown of the perturbation treatment for companions with high mass ratios and ( or ) separations of , because the perturbation treatment is valid when and .besides these extreme regions of parameter space , however , we find that in most regions , implying that binary - superposition approximation well describes the lensing behavior of most multiple planetary systems .in the previous section , we demonstrated that lensing magnification patterns of multiple - planet systems can be described by using binary - superposition approximation not only in the region around planetary caustics but also in most part of the central perturbation region . in this section, we discuss the importance of the binary - superposition approximation in the analysis and characterization of multiple - planet systems to be detected via microlensing . exact description of lensing behavior of events caused by multiple - planet systems requires a large number of parameters . even for the simplest case of a two - planet system , the number of parameters is ten , including four single - lensing parameters of the einstein timescale , time of the closest lens - source approach , lens - source impact parameter , and blended light fraction , and another four parameters of the two sets of the planetary separation and mass ratio , and , and the source trajectory angle and the orientation angle . as a result , it was believed that analyzing anomalies produced by multiple planets would be a daunting task . with the validity of binary - superposition approximation, however , the analysis can be greatly simplified because the anomalies induced by the individual planets can be investigated separately by using relatively much simpler single - planetary lensing analysis .anomalies for which this type of analysis is directly applicable are those where the perturbations induced by the individual planets are well separated , e.g. , the anomalies in the lensing light curves presented in figure [ fig : four ] with , , , , , and . in some cases , the region of perturbations caused by the individual planetsare located close together and thus part of the anomalies in lensing light curves can be blended together . however , even in these cases , the non - linear interference between the anomalies is important only in small confined regions and thus the superposition approximation would still be valid for a large portion of the anomalies , allowing rough estimation of the separations and mass ratios of the individual planets .once these rough parameters are determined , then fine tuning of the parameters by using exact multiple - planet analysis will be possible by exploring the parameter space that was greatly narrowed down .however , care is required for the case when the two perturbations caused by the individual planets happen to locate at the same position ( or very close to each other ) .this case occurs when the two planets are aligned ( ) or anti - aligned ( ) . in this case, the anomaly appears to be caused by a single planet and thus naive analysis of the anomalies can result in wrong characterization of the planet system .however , this type of anomalies will be very rare .we would like to thank a. 
Gould for making useful comments on the paper. This work was supported by the Astrophysical Research Center for the Structure and Evolution of the Cosmos (ARCSEC) of the Korea Science & Engineering Foundation (KOSEF) through the Science Research Center (SRC) program.
To maximize the number of planet detections by increasing efficiency, current microlensing follow-up observation experiments are focusing on high-magnification events to search for planet-induced perturbations near the peak of lensing light curves. It is known that by monitoring high-magnification events it is possible to detect multiplicity signatures of planetary systems. However, it was believed that the interpretation of the signals and the characterization of the detected multiple-planet systems would be difficult due to the complexity of the magnification pattern in the central region combined with the large number of lensing parameters required to model multiple-planet systems. In this paper, we demonstrate that in many cases the central planetary perturbations induced by multiple planets can be well approximated by the superposition of the single-planetary perturbations in which the individual planet-primary pairs act as independent binary lens systems (binary superposition). The validity of the binary-superposition approximation implies that the analysis of perturbations induced by multiple planets can be greatly simplified, because the anomalies produced by the individual planet components can be investigated separately by using a much simpler single-planet analysis, which enables better characterization of these systems.
safety systems in the modern building environments uses sensors that monitor atmospheric parameters and alert in the eventuality of an accident . with the present dayincreased threat of use of chemical and biological warfare by terrorist organizations , such a scenario has become a real danger .currently , designers are increasingly focussing on development of sensor systems that can detect accidental / deliberate release of hazardous contaminant , and also suggest an appropriate evacuation plan to ensure safety of occupants .since prolonged exposure of the occupants to the hazardous contaminants may result in serious health conditions including death , rapid source localization by the sensor system is essential . considering that majority of individualsare expected to spend upto 90% of time in an indoor environment , it is imperative to design a sensor system that can detect , characterize and rapidly locate the accidental or deliberate contaminant release .the system is expected to aid in detection of airborne contaminant , real - time interpretation of the information to characterize and localize the contaminant source , computationally efficient prediction of contaminant dispersion with associated uncertainty quantification , and subsequent evacuation decisions based on the predictions .the sensor system often uses contaminant fate and transport models to predict the contaminant dispersion that can aid in source localization and characterization .multizone , zonal and computational fluid dynamics ( cfd ) models are used for simulation of indoor airflow and contaminant dispersion patterns . owing to ease of implementation and computational efficiency ,multizone models are most widely used for predicting the contaminant dispersion and source localization / characterization .a multizone model represents any building as a network of well - mixed zones connected by flow paths like doors , windows , leaks etc .the airflow and contaminant transport between the zones is calculated using adjustment of zone pressures that balances mass flow through the paths .the outdoor environment is modeled as an additional unbounded zone .although used widely , limitations of the multizone models , especially related to the well - mixed assumption , are extensively reported in the literature .zonal models represent intermediate fidelity between multizone and cfd models , wherein large well - mixed zones are further divided into smaller subzones .zonal models use conservation of mass , conservation of energy and pressure gradients to model airflow and contaminant dispersion . 
computational fluid dynamics ( cfd ) models numerically solves governing equations of fluid flow and contaminant dispersions .the cfd models provide detailed airflow and contaminant distribution inside a room .although most accurate amongst three , computational cost requirement prohibits use of cfd models for rapid source localization and characterization .there are recent research efforts to integrate cfd with multizone models ( termed hereafter as mutlizone - cfd model ) .the multizone - cfd modeling approach models one of the zones using cfd , while the resultant solution is coupled with other well - mixed zones using appropriate boundary conditions .this paper uses the integrated multizone - cfd model for rapid source localization and characterization .traditional deterministic approaches for sensor data fusion and interpretation , like optimization , kalman filtering and backward methods , are found inappropriate by the researchers in the context of rapid contaminant source localization and characterization .owing to the ability to provide the event probability distribution , and associated ease in the uncertainty analysis post event detection , current state of the art for source localization and characterization mainly focusses on probabilistic methods .liu and zhai have explored adjoint probability method for rapid contaminant source localization .the method derives adjoint equations for backward probability calculations using the multi - zone contaminant fate and transport model .efficacy of the method is demonstrated for contaminant release in a multi - room residential house and a complex institutional building .main aim of the present research work is to develop a mcmc - based bayesian framework that can aid the sensor system to rapidly localize and characterize the contaminant source in case of the event detection .main advantage of the bayesian inference method is that it can admit prior information and estimates complete probability distributions of the uncertain parameters , as against point estimates provided by optimization based methods .sohn et al . have proposed a computationally efficient bayes monte carlo method for real - time data interpretation and rapid source localization .the method is divided in two stages . in first stage , a large database of simulation runs for all the possible scenarios is collected that sufficiently represent uncertainty . in the second stage , bayesian updating of the probability for each collected data is obtained after the event detection .see sreedharan et al . for details of recent applications of the bayes monte carlo method .though computationally efficient , the bayes monte carlo method essentially is an approximate formulation of the bayesian inference which can not exploit full capabilities of the bayesian framework , including ability to handle arbitrary priors and uncertainty in the simulation model .rather , if a large number of simulation runs are possible in real time , the mcmc - based bayesian inference is preferred over the bayes monte carlo method . 
however , currently there is no reported exposition of the mcmc - based bayesian inference for rapid source localization and characterization in the open literature .implementation of the mcmc based bayesian framework for sensor systems is challenging due to : 1 ) necessity of rapid real - time inference to ensure successful evacuation with minimum losses ; 2 ) transient nature of the underlying phenomenon ; and 3 ) requirement of large number of mcmc samples ( often in the range of - ) for acceptable accuracy .the problem is further exacerbated by the often large scale nature of the phenomenon being monitored .note that items 2 ) and 3 ) necessitates large number of dynamic simulator runs , which contradicts with item 1 ) , rendering the mcmc based bayesian framework intractable for the sensor systems .this paper proposes computationally efficient gaussian process emulator ( gpe ) based approach for rapid real - time inference in view of dynamic simulators . considering the improved fidelity of the multizone - cfd model over the multizone model , coupled with the accuracy of the mcmc - based bayesian inference over the bayes monte carlo method , the mcmc - based bayesian inference using the multizone - cfd modelis expected to provide more accurate source localization and characterization as compared to the multizone model based bayes monte carlo method . however , despite of the significant computational advantage over the cfd implementation , the multizone - cfd model remains computationally prohibitive for mcmc - based rapid source localization and characterization .this paper proposes a gaussian process emulator ( gpe ) based bayesian framework , that can use multizone - cfd model in the context of rapid source localization and characterization .the proposed approach follows bayesian inference method of kennedy and ohagan , where computer simulator is calibrated using limited number of experimental observations and simulation runs ( see also higdon et al. , goldstein and rougier ) . the proposed approach treats computer output as a random function , with the associated probability distribution modeled through a gaussian process prior .the gaussian process prior for representation of uncertain simulator outputs is extensively explored in the literature , with associated hyper - parameters predicted using the maximum likelihood estimates or bayesian inference .conditional on the hyper - parameters and a set of simulator outputs obtained at different input settings , mean of the gaussian process acts as a computationally efficient statistical emulator of the simulator .see ohagan for detailed tutorial on building the gpe for a simulator , while kennedy et al . may be referred for discussion on some of the case studies . however , these approaches concern statistical emulation of single - output static simulators . 
conti andohagan have extended the gpe method for statistical emulation of dynamic simulators .this paper adapts the gpe for dynamic simulators proposed by conti and ohagan to the multizone - cfd model .the resultant emulator is used in the bayesian framework , wherein computational efficiency of the emulator over the simulator is used for rapid source localization and characterization .the proposed method first uses dynamic simulator output data to derive the gpe , which is then used in the bayesian framework to infer source location and characteristics using the experimental observations .the method proposed in this paper advances the current state of the art as follows : a ) the method provide mcmc - based bayesian inference using multizone - cfd model , whereas earlier methods reported in the literature are limited to bayes monte carlo approaches using the multizone models ; b ) gaussian process emulator based approach is proposed for efficient bayesian inference ; c ) the method provide ability to consider model structural uncertainty , which is not treated in the earlier expositions .rest of the paper is organized as follows : detailed problem formulation is presented in section 2 .section 3 provide details of the emulator for dynamic system simulators . in section 4 ,the proposed bayesian framework for rapid source localization and characterization is discussed in detail . in section 5 ,efficacy of the proposed method is demonstrated for a synthetic test case of a hazardous contaminant release in a single storey seven room building .the paper is summarized and concluded in section 6 .this paper concerns a sudden accidental / deliberate release of contaminant in a building that may cause serious health hazards , including death , to the occupants if exposed over a prolonged period of time .although released locally , the contaminant diffuses rapidly through flow paths like doors , windows and leakages , affecting occupants throughout the building .the building is often equipped with sensors that can detect and measure the amount of contaminant present in a room .the sensor data is collected over a period of time , which is then used to decide the evacuation strategy and the containment plan , including appropriate air - handling unit actions and source extinguishing strategies. however , success of the control and evacuation strategy depends on the knowledge of source location and characteristics , which is inferred using the bayesian framework .typically , the source is characterized by specifying the time of activation , , and the amount released , .present paper demonstrates the proposed method for possibly multiple number of sources , , while each source is localized by specifying the zone in which the sources are active , , and xy - coordinate of each source in the zone , .note that the bayesian framework relies on ability to accurately predict the contaminant fate and transport for a given source location and characteristics .multizone model represents a building using a network of well - mixed zones , each zone often representing a room or compartment connecting to rest of the building through flow paths . the model account for influences of the internal air flows , which are generated by pressure differences between the zones .the multizone model uses internal air flows , coupled with the atmospheric and outdoor wind conditions , to predict contaminant dispersion inside a building .wang et al . 
have coupled a multizone model contam with a zero - turbulence cfd model .the program define one of the zone as a cfd - zone , where full cfd analysis is used , while the resultant air and contaminant properties are linked with other zones to embed the cfd - zone with contam .further , an external coupling is provided to link information on outdoor air pressure and contaminant concentration to indoor building .this subsection briefly describes the integrated multizone - cfd model .the multizone model estimates the airflow and the contaminant dispersion between the zones and , through the flow path , using the pressure drop across the path .the model uses a power - law function to calculate the airflow rate , , through the flow path as where is flow coefficient , is flow exponent while and are total pressures in zone and respectively . for each zone ,the multizone model evaluates steady state air mass balance using where is the air mass source in the zone .contaminant steady state mass balance for a species is similarly obtained by where is the contaminant source in the zone , while is a contaminant concentration defined such that and are the contaminant concentrations in zone and respectively .the cfd model solves a set of partial differential governing equations for conservation of mass , momentum and energy inside the cfd zone .the governing equations for steady state flow are given by where is a variable of conservation equations , is density , is velocity vector , is diffusion coefficient , and is source . at each time step ,cfd model solves steady state conservation equations [ cfd ] .let the cfd zone , , be connected to a zone , , using a flow - path .for each grid point of the discretized flow path , cfd model calculates mass flow rate normal to the cell , , by where is a linear flow coefficient , is pressure in zone , is a pressure difference between zones and , while is pressure at a grid point .thus , the total mass flow through flow path predicted by the cfd model is given by where is total number of grid points for the flow path .the multizone model predicts the total mass flow through flow path as where is the average downwind total pressure for path .thus , the coupling between cfd and multizone models is obtained by ensuring for all connecting flow paths , where is a convergence criterion .using the total mass flow , contaminant concentration in each zone is estimated using eq .( [ cont_massb ] ) . in the present paper ,the coupled multizone - cfd model available with contam is used to simulate the contaminant fate and transport .the room containing active contaminant sources is always defined as a cfd - zone , while other rooms are simulated using multizone model .transient contaminant concentration in each zone is output of the multizone - cfd model . to motivate the choice of multizone - cfd model over the multizone model , it is imperative to investigate the difference between transient contaminant concentration predictions , as shown in figure [ comp_cfd ] . 
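Before turning to the comparison in figure [comp_cfd], the following toy sketch illustrates the multizone ingredients written above: the power-law flow through a path and an explicit time-stepped, well-mixed contaminant balance over a few zones. It is a didactic stand-in, not the pressure-network solution that CONTAM actually performs; the zone volumes, airflows and source strength are placeholder values.

```python
import numpy as np

def path_flow(C, n_exp, p_i, p_j):
    """Power-law flow through one flow path, signed so that air moves from
    the high-pressure zone to the low-pressure zone."""
    dp = p_i - p_j
    return np.sign(dp) * C * abs(dp) ** n_exp

def step_concentrations(conc, volumes, flows, sources, dt):
    """One explicit Euler step of well-mixed zone contaminant balances.
    flows[i][j] is the airflow from zone i to zone j; contaminant leaves
    zone i at concentration conc[i] and enters zone j at the same value."""
    n = len(conc)
    new = conc.copy()
    for i in range(n):
        inflow = sum(flows[j][i] * conc[j] for j in range(n) if j != i)
        outflow = sum(flows[i][j] for j in range(n) if j != i) * conc[i]
        new[i] += dt * (inflow - outflow + sources[i]) / volumes[i]
    return new

if __name__ == "__main__":
    conc = np.zeros(3)
    volumes = np.array([30.0, 40.0, 50.0])          # m^3, placeholder zones
    flows = np.array([[0.0, 0.02, 0.0],
                      [0.02, 0.0, 0.03],
                      [0.0, 0.03, 0.0]])            # m^3/s, placeholder balanced airflows
    sources = np.array([1e-4, 0.0, 0.0])            # kg/s release in zone 0
    for _ in range(600):                            # 10 minutes at dt = 1 s
        conc = step_concentrations(conc, volumes, flows, sources, 1.0)
    print(conc)
```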
from the figure , significant difference between predictionscan be observed , which may result in erroneous localization and characterization of contaminant sources .the main motivation for the present research work is to develop a bayesian inference method that can use the more accurate multizone - cfd model for rapid localization of contaminant source in an indoor environment .this subsection presents reformulation of the rapid source localization and characterization problem in the bayesian inference terminology . for notational convenience and brevity ,the formulation is presented for a single contaminant species , however , the method can be extended without any change for multiple species .let , represent the multizone - cfd model , where is a set of deterministic inputs , is a set of uncertain parameters , while is a contaminant concentration in the zone at time . for the multizone - cfd model , consists of building description including rooms and flow path specifications , air - handling unit , atmospheric and wind conditions , etc . , while , the uncertain parameters are ] is a vector of regression functions , while is a matrix of regression coefficients with each column given by ^t ] and .covariance function of the gaussian process is given by where is a positive - definite correlation function , while is a positive definite matrix .in the present work , a square exponential correlation function is used where is a diagonal matrix with diagonal elements given by a vector of correlation length parameters .parameters , and are treated as uncertain hyper - parameters .weak non - informative prior is used for and , while prior for is left unspecified . a set of simulation runs at design points \subset \boldsymbol{\theta} ] and is a correlation matrix for a design set . using eq .( [ probd ] ) as likelihood and prior given by eq .( [ prior ] ) , posterior distribution of hyper - parameters is given by conditional on the posterior distribution of hyper - parameters and , the emulator is defined as where while \in \mathcal{r}^n ] using design of experiments select temporal locations simulate define with row given by , where represents the simulator output , , at the time instance .estimate gls using eq .( [ bgls ] ) estimate gls of using eq .( [ sgls ] ) estimate mpe of by maximizing eq .( [ postlambda ] ) with respect to using the estimates of , and , the emulator is defined by the present paper , efficacy of the proposed method is demonstrated for localization and characterization of multiple sources in a building .since the current version of coupled multizone - cfd simulator allows only one zone as cfd - zone , the method assumes all the sources be active in a single zone . 
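a stripped-down sketch of the emulator construction is given below: a single-output gaussian process with linear regression mean h(theta) = [1, theta], a squared-exponential correlation and fixed hyper-parameters, conditioned on simulator runs via generalized least squares. the full method places priors on the hyper-parameters and treats the multivariate dynamic output jointly, which this sketch does not; the regression functions and lengthscales used here are assumptions.

    import numpy as np

    def sqexp_corr(X1, X2, lengthscales):
        """squared-exponential correlation matrix between two sets of inputs."""
        d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
        return np.exp(-np.sum(d ** 2, axis=-1))

    def fit_gp_emulator(X, y, lengthscales, sigma2=1.0, nugget=1e-8):
        """condition a gp with mean h(x) = [1, x] on simulator runs (X, y);
        returns a predictor giving posterior mean and variance at new inputs."""
        H = np.hstack([np.ones((len(X), 1)), X])
        A = sigma2 * (sqexp_corr(X, X, lengthscales) + nugget * np.eye(len(X)))
        A_inv = np.linalg.inv(A)
        # generalized least squares estimate of the regression coefficients
        beta = np.linalg.solve(H.T @ A_inv @ H, H.T @ A_inv @ y)
        resid = y - H @ beta

        def predict(Xs):
            Hs = np.hstack([np.ones((len(Xs), 1)), Xs])
            t = sigma2 * sqexp_corr(Xs, X, lengthscales)
            mean = Hs @ beta + t @ A_inv @ resid
            var = sigma2 - np.sum((t @ A_inv) * t, axis=1)   # ignores the extra term from estimating beta
            return mean, np.maximum(var, 0.0)

        return predict

    # toy usage on a one-dimensional stand-in for a simulator output
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(15, 1))
    y = np.sin(6 * X[:, 0])
    emulator = fit_gp_emulator(X, y, lengthscales=np.array([0.2]))
    mu, var = emulator(np.array([[0.25], [0.75]]))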
for a given number of active sources in the zone, multizone - cfd simulator provides averaged transient contaminant concentration in each zone .thus , each transient response is indexed by number of active sources ( ) , the zone in which sources are active ( ) , and the zone in which contaminant concentration is measured ( ) .separate gpes are built for each combination of ( ) .an initial design set , , is selected using latin hypercube sampling and transient simulator responses are obtained for each design point .a typical response of the simulator is shown in figure [ contam_resp ] .each transient is divided into two parts , first part consists of closely spaced data points collected just after the source activation , while the second part consists of much more coarsely spaced points .a set of data points , , is defined using transients obtained at .conditional on , mpe estimate of are obtained by maximizing eq .( [ postlambda ] ) .conditional on the mpe estimate of , an additional set of design points is selected as the additional set of design points is generated sequentially till the maxima of is below certain pre - defined value .it may be noted that during the process of selecting additional design points , is kept constant , while generalized least square estimates and are calculated after addition of each new design point . for thisenhanced design set , set of data points , , and data points , , are defined .conditional on , generalized least square estimates and are calculated using . using the same value of , estimates of and are similarly calculated using points can then be used for predicting the long term fate and transport of the contaminant . ] .let and be the time instances at which data sets and are defined , respectively . for an arbitrary ,let and define the predicted contaminant concentration obtained using gpes at time instances and respectively .further , define a vector , , and a matrix where is a matrix of zeroes .conditional on and , the contaminant concentration at any time is given by using multivariate normal theory , mean and variance of the normal distribution ( [ cont_conc ] ) are given by where , and .equation ( [ predtime ] ) is used as an emulator to predict long term fate and transport of the contaminant . the overall procedure for building the proposed gpeis summarized in algorithm [ alg1 ] .select using latin hypercube sampling run multizone - cfd for each using transient response at time instances , create estimate by maximizing eq .( [ postlambda ] ) conditional on conditional on and , create using and create using transient response at time instances for all conditional on and , calculate and conditional on and , calculate and use , , , , , and to predict long term transient contaminant concentration . consider a building with total zones , with maximum possible active sources in each zone . for each possible combination of , and the emulator built using algorithm [ alg1 ] .the proposed gpe is used in the bayesian framework for rapid source localization and characterization in the indoor building environment . in the present paper ,the proposed method is demonstrated for maximum possible 3 sources in a zone . the prior uncertainty in number of sources , ,is given by location of each source is assumed to be completely unknown with prior given by uniform distribution .thus , where is area of zone . 
and are assumed to be completely unknown with the range of possible values as only available information .let and be the ranges of and .thus , let the sensors be placed in zones , where represents total number of sensors , while the observations are collected at time instances .the observations are used in the bayesian inference given by eq .( [ bayfin ] ) , with prior defined using eqs .( [ priorns])-([priorat ] ) , for rapid source localization and characterization . in the mcmc implementation of the bayesian inference ,the multizone - cfd simulator is replaced by an appropriate gpe emulator .details of the implementation are provided in algorithm [ alg2 ] . to ensure ergodicity ,the chain is restarted after initial burn - out period .sensor locations and observations at time instances ,r_z\in [ 0,1],({x_i},{y_i}),s_a , s_t\} ] , and predict contaminant concentration at time instances using emulator calculate posterior probability using and emulator prediction in eq .( [ bayfin ] ) calculate acceptance probability generate a uniform random variable of the proposed method is demonstrated for localization and characterization of a hypothetical pollutant release in a seven room building .the building plan is shown in figure [ build_plan ] .case study is carried out for a single storey 3 m high building with one hallway , three bedrooms , a bathroom , a kitchen and a 1 m wide open passage .rooms are connected internally by doors , while each bedroom is connected to the outside environment by two windows each .further , the hallway is connected to the outside environment by a main door . at the time of contaminant release, all the doors and windows are assumed to be open .outside temperature is assumed to be with the wind blowing at 3 m / s . to build an emulator for the multizone - cfd simulator , an initial set of 121 design pointsis selected using latin hypercube sampling . for each design point ,contaminant concentration at five temporal locations ( i.e. ) in the interval of one minutes , starting from one minute after the source activation , is used as a set of initial simulator outputs .conditional on , correlation length parameters are estimated by maximizing eq .( [ postlambda ] ) . in the present work , complex box method is used for optimization . to avoid local optima ,the optimizer is repeatedly run for pre - determined number of times and the best point amongst the resultant optima is chosen as an estimate of .the initial set of 121 design points is further augmented by sequentially selecting 29 points as described in the algorithm [ alg1 ] .the resultant set of 150 design points , , is used to build the gpe . for each design point from , a second set of simulator outputs , , is created by using contaminant concentration values at five temporal locations ( ) in the interval of four minutes , starting from .conditional on , and are estimated using , while and are estimated using .fate and transport of the contaminant for first five minutes after the source activation is reconstructed by using estimates of the emulator conditional on . 
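the metropolis-hastings step of algorithm [alg2] can be sketched as follows, with the emulator standing in for the simulator inside the likelihood. the gaussian random-walk proposal, the step size and the gaussian measurement model are assumptions of this sketch rather than details taken from the paper.

    import numpy as np

    def metropolis_hastings(log_post, theta0, n_samples=20000, burn_in=10000, step=0.1, seed=0):
        """random-walk metropolis-hastings over the source parameters."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        samples = []
        for it in range(n_samples + burn_in):
            prop = theta + step * rng.normal(size=theta.shape)   # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:             # acceptance test
                theta, lp = prop, lp_prop
            if it >= burn_in:
                samples.append(theta.copy())
        return np.array(samples)

    def make_log_post(emulator_predict, observations, noise_sd, log_prior):
        """gaussian measurement model around the emulator mean prediction."""
        def log_post(theta):
            lp = log_prior(theta)
            if not np.isfinite(lp):
                return -np.inf
            pred = emulator_predict(theta)   # emulated concentrations at the sensor zones/times
            return lp - 0.5 * np.sum(((observations - pred) / noise_sd) ** 2)
        return log_post

because each posterior evaluation only calls the emulator, the chain can be run to tens of thousands of samples in the time a handful of multizone-cfd runs would take.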
the long term contaminant fate and transport from six minutes onwards from the source activation is reconstructed using estimates of the emulator conditional on along with eq. ([predtime]). figure [emul_comp] shows a comparison of the transient contaminant concentration obtained using the proposed method with the multizone-cfd simulator. efficacy of the proposed bayesian framework for rapid source localization and characterization is investigated for a release of contaminant in the hallway (zone 1). for the present test case, two sources are assumed to be activated at time , with each source releasing carbon monoxide (co) at a rate of 0.09 g/s. inside the hallway, source 1 is located at , while source 2 is located at . sensors are assumed to be present in six zones (zones 1-6, i.e. all zones except the passage, zone 7). all the sensors are assumed to be collaborating with each other. sensor measurements are simulated by running the multizone-cfd model with the specified source characteristics and location. the transient multizone-cfd prediction in time steps of 1 min is used as the sensor observations, while the experimental uncertainty in each sensor observation is assumed to be . a total of 5 data points per sensor (i.e., 5 mins of data) is used for source localization and characterization. note that for the present test case all the zones are connected with the hallway, thus the contaminant is detected by the sensors in all the zones. bayesian inference is used after collecting sensor data for five minutes. to investigate the efficacy of the proposed method, the bayesian inference is also implemented using direct mcmc sampling, where the integrated multizone-cfd model is used in the metropolis-hastings algorithm (algorithm [alg_metr]) to sample from the posterior distribution. a total of 20,000 samples is collected after a burn-in period of 10,000 samples. the resultant posterior distribution is compared with the posterior distribution obtained using the proposed method. table [zone_source_prob] summarizes the posterior probability of the source being located inside a given zone and the posterior probability of the number of active sources. for the present test case, the method infers the zone and the number of sources accurately with probability one. (table [zone_source_prob]: posterior probabilities of room & number identification.) this paper has presented a gaussian process emulator (gpe)-based bayesian framework for rapid contaminant source localization and characterization in the indoor environment. the framework can be used with a computationally expensive integrated multizone-cfd model. the framework approximates the multizone-cfd model using a gpe during the pre-event detection stage, which is then used for bayesian inference of the source location and characteristics after the contaminant is detected by the sensors. the framework provides a methodology for rapid localization and characterization of multiple sources. in conjunction with rapidly advancing digital and sensor technologies, the framework can be used for planning evacuation and source extinguishing strategies in an indoor building environment in view of a sudden contaminant release. the framework can also be used to test different sensor networks and investigate the performance tradeoffs.
in the present paper, the efficacy of the framework has been investigated for a hypothetical contaminant release in a single storey seven room building. the posterior distributions of the uncertain parameters obtained using the proposed method are found to match those of the direct mcmc implementation closely, at a significantly lower computational cost. the performance and robustness of the proposed method have been investigated for a dynamic incremental sensor network. the various test cases presented in the paper demonstrate the robustness of the proposed method, although only in a limited sense, for one possible sensor network. in future work, the authors propose to investigate the presented approach as an inference machine for informative sensor planning.
this paper explores a gaussian process emulator based approach for rapid bayesian inference of contaminant source location and characteristics in an indoor environment . in the pre - event detection stage , the proposed approach represents transient contaminant fate and transport as a random function with multivariate gaussian process prior . hyper - parameters of the gaussian process prior are inferred using a set of contaminant fate and transport simulation runs obtained at predefined source locations and characteristics . this paper uses an integrated multizone - cfd model to simulate contaminant fate and transport . mean of the gaussian process , conditional on the inferred hyper - parameters , is used as an computationally efficient statistical emulator of the multizone - cfd simulator . in the post event - detection stage , the bayesian framework is used to infer the source location and characteristics using the contaminant concentration data obtained through a sensor network . the gaussian process emulator of the contaminant fate and transport is used for markov chain monte carlo sampling to efficiently explore the posterior distribution of source location and characteristics . efficacy of the proposed method is demonstrated for a hypothetical contaminant release through multiple sources in a single storey seven room building . the method is found to infer location and characteristics of the multiple sources accurately . the posterior distribution obtained using the proposed method is found to agree closely with the posterior distribution obtained by directly coupling the multizone - cfd simulator with the markov chain monte carlo sampling . bayesian framework , gaussian process emulator , multizone models , integrated multizone - cfd , contam , rapid source localization and characterization
the equivalent widths ( ) of absorption lines , observed in the spectrum of astronomical sources can be seen as compressed , but highly informative , representation of the whole spectrum .for example , the of the absorption lines observed in galaxies spectra reveals insights about their stellar populations , like the ages and metallicities of the stars which dominates the light of the host galaxy ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) . regarding the spectrum of a star ,it is possible to make use of the of absorption lines to determine directly the fundamental atmospheric parameters such as : surface gravity ( ) , effective temperature ( ) and the chemical abundances of many elements ( see for example * ? ? ?* ; * ? ? ?* ) . however , the price to be paid when using the powerful informations contained in the is the long time needed to make a reliable measurement of this observables .commonly , the are measured using interactive routines like _ splot _ provided by the iraf team or with independent codes like liner .both softwares are hand operated " , which means the user need to look for the line limits and continuum points in the spectrum .the next step , is to mark them manually " .this procedure is very time - consuming and introduce many uncertainties which are propagated to a posterior analyses of the quantities involving the measurements . with the growing of spectral surveys ( e.g. sloan digital sky survey ) , it becomes necessary to accelerate and automate some process such as the analysis of stellar populations of galaxies , as well as the determination of fundamental atmospheric parameters of individual stars . in order to help in such task we present in this paper a new automatic code : _ pacce : perl algorithm to compute continuum and equivalent widths_. this software , written in perl ,can be used to compute the of absorption lines as well as to determine continuum points , being very helpful to perform stellar population synthesis following , for example the method developed by and .this paper is structured as follows : in sec .[ code ] we describe the system requirements as well as the numerical procedures behind the code .the input parameters and the outputs of the code are discussed in sec .[ run ] . a comparison with the measures made whit and hand - made "measurements are presented in sec .the final remarks are made in sec .[ final ] .the idea behind is to reproduce the manual " procedure used to measure the of absorption lines in a spectrum , as well as to measure mean continuum fluxes in defined regions and compute the continuum value at line center .in addition , using the same inputs , it does exactly reproduce the measured values being user " independent ( e.g. the uncertainties introduced by the user in hand operated " procedures are removed ) , and thus allowing for a better comparison between measured by different users .was written in perl , allowing anyone to use it without having any problems with software licenses .it is freely distributed under gnu general public license(glp ) .all the libraries used in the code are also free and distributed under glp license .source code can be freely downloaded from http://www.if.ufrgs.br//software.html .all requirements to run can easily be installed in any linux machine ( e.g. 
_ apt - get , synaptic _ or _ yum _ ) , they are : * perl + ( http://www.perl.org ) ; * perl s `` math::derivative '' package + ( http://search.cpan.org ) ; * perl s `` math::spline '' package + ( http://search.cpan.org ) ; * gnuplot + ( www.gnuplot.info ) .in addition , we call the attention to the fact that perl is installed as default in any linux flavor and can easily be converted to run under microsoft windows system ( without plots ) . in general , absorption feature indicesare composed by measurements of relative flux in a central wavelength interval corresponding to the absorption feature considered ( line limits and in fig . [ ew ] ) and two continuum sidebands passband regions ( spectrum ranges - dots - in the boxes of fig . [ ew ] ) . such sidebands provide a reference level ( pseudo - continuum , solid line fig . [ ew ] ) from which the strength of the absorption feature is evaluated ( see * ? ? ?* for details ) .computes the pseudo - continuum , , in three ways : ( i ) a linear regression is computed using all the continuum points in the passband regions and a straight line ( y = ax+b ) is computed using the regression coefficients ; ( ii ) a straight line is drawn connecting the mid - points of the flanking passband continuum regions ; ( iii ) as a cubic spline .the form in which the pseudo - continuum is adjusted is chosen by the user ( see sec . [ run ] ) . and .shaded region ( a ) is the area below the line limits and filled ( a ) region is the area of the line .dotted line ( red ) represents the spectrum . ]considering as observed flux per unit wavelength an is then : considering fig .[ ew ] one can write the area a as : and , thus , the a is the absorbed flux between and .similarly , the area below the pseudo - continuum between and is : thus , the of the absorption line between and can be written as : for more details see for example . inwe compute both areas , a and c , using the trapezium method .in addition , does compute the uncertainties in the equivalent widths , ) , considering the photon noise statistics pixel by pixel .for this purpose we follow , assuming that the ratio between c and a are similar to the normalized ratio between these quantities ( for details see * ? ? ?in the case of linear pseudo - continuum adjustment we estimate the signal - to - noise ratio ( s / n ) as being the ratio between the square root of the variance and the mean flux of the points in the bandpass interval ( i.e. boxes in fig [ ew ] ) . 
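as an illustration of the definitions above, here is a small sketch of the linear pseudo-continuum and the trapezium-rule equivalent width. it follows the standard definition of the equivalent width as the integral of (1 - F_lambda / F_continuum) over the line limits; it does not reproduce pacce's exact bookkeeping of the areas a and c, nor its photon-noise error estimate, and the spectrum in the example is synthetic.

    import numpy as np

    def trapz(y, x):
        """trapezium rule, matching the integration scheme described above."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def equivalent_width(wl, flux, line_lim, blue_band, red_band):
        """equivalent width with a linear pseudo-continuum: a least-squares straight
        line is fitted to the two sideband regions and the line is integrated
        between the line limits."""
        wl, flux = np.asarray(wl, float), np.asarray(flux, float)
        in_bands = ((wl >= blue_band[0]) & (wl <= blue_band[1])) | \
                   ((wl >= red_band[0]) & (wl <= red_band[1]))
        a, b = np.polyfit(wl[in_bands], flux[in_bands], deg=1)   # pseudo-continuum y = a*x + b
        in_line = (wl >= line_lim[0]) & (wl <= line_lim[1])
        wl_l, fl_l = wl[in_line], flux[in_line]
        cont_l = a * wl_l + b
        return trapz(1.0 - fl_l / cont_l, wl_l)

    # synthetic spectrum with a gaussian absorption line near h-beta
    wl = np.linspace(4820.0, 4900.0, 400)
    flux = 1.0 - 0.4 * np.exp(-0.5 * ((wl - 4861.0) / 3.0) ** 2)
    print(equivalent_width(wl, flux, (4847.9, 4876.6), (4827.9, 4847.9), (4876.6, 4891.6)))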
in the case of a pseudo - continuum defined with a cubic spline the s /n is calculate in the same way as in the linear adjustment , but using points in a region free from emission / absorption lines defined by the user ( see sec .[ run ] ) .besides the the code also computes the mean continuum fluxes ( , last line of tab .[ inpt ] ) and the continuum at line center .such measurements are useful , for example , to perform stellar population studies using the technique described by .the point are calculated following the equation : where is the flux of each in the defined interval and n is the number of points considered .the errors are estimated as being the square root of the variance .the continuum at the line center is taken as being the pseudo - continuum flux where .to run the user needs a ascii , one dimensional , spectrum and a ascii input table containing the line definitions .the ascii spectrum is easily created using the _ wspectext _ iraf task .an example of a input table is shown in tab .note that the continuum points used to determine the line pseudo - continuum can be defined into three different ways ( linear , mid - point or spline ) .the fact that such input table is very easy to be edited / created , makes also an interactive tool , allowing for fast changes and tests in and measurements . besides the calculations , outputs some auxiliary files ( fig .[ fluxog ] ) .these files are the pseudo - continuum data points , the plots showing the regions used in the calculations ( see fig . [ hb ] ) , as well as the gnuplot commands , which are stored for future use ..... # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # index definitions table example .# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # i d line limits left side cont .right side cont .linearc ( 4847.875,4876.625 ) [ 4827.875:4847.875],[4876.625:4891.625 ] midpntc ( 4847.875,4876.625 ) [ 4827.875:4847.875],[4876.625:4891.625 ] * # note the * in the mid - point continuum adjustment .# # # # # # # # # # # # # # # # # # # # # # # # # spline continuum example # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # i d line band pass points for a spline cont .s / n cont .spline ( 4847.875,4876.625 ) { 4827.875,4833.0,4847.875,4876.625,4891.625}<4880:4890 > # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # intervals for mean continuum # # see bica ( 1986 ) # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # cont .|5290:5310,5303:5323,5536:5556,5790:5810,5812:5832,5860:5880,6620:6640| .... index definitions .mid - point , linear and spline pseudo - continuum adjustments where used in left , center and right plots respectively .the shown region is around h absorption line . ]in order to test we perform hand - made " measurements in a set of simple stellar population ( ssp ) models as well as in a library of observed stellar optical spectra . 
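the mean-continuum measurement described above is simple enough to sketch directly; one value per window is returned here, with the square root of the variance as the error, though whether pacce reports the windows individually or combines them is an assumption of this sketch.

    import numpy as np

    def mean_continuum(wl, flux, intervals):
        """mean flux in each continuum window (as in the 'cont.' line of the
        example index-definition table above), with sqrt(variance) as the error."""
        wl, flux = np.asarray(wl, float), np.asarray(flux, float)
        means, errors = [], []
        for lo, hi in intervals:
            pts = flux[(wl >= lo) & (wl <= hi)]
            means.append(float(pts.mean()))
            errors.append(float(np.sqrt(pts.var())))
        return means, errors

    # windows copied from the example index-definition table above
    windows = [(5290, 5310), (5303, 5323), (5536, 5556), (5790, 5810),
               (5812, 5832), (5860, 5880), (6620, 6640)]

the continuum at line center then follows directly from the fitted pseudo-continuum evaluated at the central wavelength of the line.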
in fig .[ models ] we show a comparison measured using and hand - made " measurements with iraf _ splot _ task .we also plot the differences between the measurements and make a histogram of these differences to give a better idea of the dispersion between both sets .it is clear from fig .[ models ] that does reproduce very well the _ splot _ manual " values .note that there seems to be a very slight systematic under - prediction of the computed with if compared with those of _splot _ , however , the differences are .05 .similar results were obtained by ( * ? ? ?* see their fig .6 ) , thus , suggesting that iraf _ splot _measurements may slightly overestimate the values .it is even harder to properly measure in observed spectra than in theoretical ones . to properly test our code capability in dealing with real spectrawe have measured a set of ( 950 ) in the library of observed stellar spectra presented by .the results of such test are shown in fig .it is clear that is able to properly reproduce the measurements of _ splot _ task .however , a larger dispersion in the differences is observed between both measurements than when using ssp models .in addition , these differences are clearly within the errors and are lower than .5 .we present a perl algorithm to compute continuum and .we describe the method used in the computations , as well as the requirements for its use .we compare the measurements made with and manual " ones made using iraf _splot _ task .these tests show that for ssp models ( i.e. high s / n ) the values are very similar ( differences .2 ) . in real stellar spectra ,the correlation between both values is also very good , but with differences of up to 0.5 .however , these small differences can be explained by the intrinsic errors in subjectiveness determinations of continuum levels caused in manual " measurements .is also able to determine mean continuum and continuum at line center values , which are helpful in stellar population studies .in addition , it is also able to compute the using photon statistics .we thank an anonymous referee for helpful suggestions . we thank miriani g. pastoriza and baslio x. santiago for helpful discussions .tbv thanks brazilian financial support agency cnpq .
we present the perl algorithm to compute continuum and equivalent widths (pacce). we describe the methods used in the computations and the requirements for its usage. we compare the measurements made with pacce and manual ones made using the iraf _splot_ task. these tests show that for ssp models the equivalent width strengths are very similar (differences 0.2) for both measurements. in real stellar spectra, the correlation between both values is still very good, but with differences of up to 0.5. pacce is also able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies. in addition, it is also able to compute the uncertainties in the equivalent widths using photon statistics. the code is made available for the community through the web at http://www.if.ufrgs.br//software.html .
structured information plays an increasingly important role in applications such as information extraction , question answering and robotics . with the notable exceptions of cyc and wordnet ,most of the knowledge bases that are used in such applications have at least partially been obtained using some form of crowdsourcing ( e.g. freebase , wikidata , conceptnet ) . to date ,such knowledge bases are mostly limited to facts ( e.g. obama is the current president of the us ) and simple taxonomic relationships ( e.g. every president is a human ) .one of the main barriers to crowdsourcing more complex domain theories is that most users are not trained in logic .this is exacerbated by the fact that often ( commonsense ) domain knowledge is easiest to formalize as defaults ( e.g. birds typically fly ) , and , even for non - monotonic reasoning ( nmr ) experts , it can be challenging to formulate sets of default rules without introducing inconsistencies ( w.r.t . a given nmr semantics ) or unintended consequences . in this paper , we propose a method for learning consistent domain theories from crowdsourced examples of defaults and non - defaults . since these examples are provided by different users , who may only have an intuitive understanding of the semantics of defaults , together they will typically be inconsistent .the problem we consider is to construct a set of defaults which is consistent w.r.t .the system p semantics , and which entails as many of the given defaults and as few of the non - defaults as possible .taking advantage of the relation between system p and possibilistic logic , we treat this as a learning problem , in which we need to select and stratify a set of propositional formulas .the contributions of this paper are as follows .first , we show that the problem of deciding whether a possibilistic logic theory exists that perfectly covers all positive and negative examples is -complete .second , we formally study the problem of learning from defaults in a standard learning theory setting and we determine the corresponding vc - dimension , which allows us to derive theoretical bounds on how much training data we need , on average , to obtain a system that can classify defaults as being valid or invalid with a given accuracy level .third , we introduce a heuristic algorithm for learning possibilistic logic theories from defaults and non - defaults . to the best of our knowledge ,our method is the first that can learn a consistent logical theory from a set of noisy defaults .we evaluate the performance of this algorithm in two crowdsourcing experiments .in addition , we show how it can be used for approximating maximum a posteriori ( map ) inference in propositional markov logic networks .reasoning with defaults of the form `` if then typically '' , denoted as , has been widely studied .a central problem in this context is to determine what other defaults can be derived from a given input set .note , however , that the existing approaches for reasoning about default rules all require some form of consistency ( e.g. the input set can not contain both and ) . as a result , these approaches can not directly be used for reasoning about noisy crowdsourced defaults . to the best of our knowledge ,this is the first paper that considers a machine learning setting where the input consists of default rules .several authors have proposed approaches for constructing possibility distributions from data ; see for a recent survey. 
however , such methods are generally not practical for constructing possibilistic logic theories .the possibilistic counterpart of the z - ranking constructs a possibilistic logic theory from a set of defaults , but it requires that these defaults are consistent and can not handle non - defaults , although an extension of the z - ranking that can cope with non - defaults was proposed in .some authors have also looked at the problem of learning sets of defaults from data , but the performance of these methods has not been experimentally tested . in ,a possibilistic inductive logic programming ( ilp ) system is proposed , which uses a variant of possibilistic logic for learning rules with exceptions . however , as is common for ilp systems , this method only considers classification problems , and can not readily be applied to learn general possibilistic logic theories .finally note that the setting of learning from default rules as introduced in this paper can be seen as a non - monotonic counterpart of an ilp setting called _ learning from entailment _ .a stratification of a propositional theory is an ordered partition of .we will use the notation to denote the set of all such ordered partitions and to denote the set of all ordered partitions into at most subsets of . a theory in possibilistic logic a set of formulas of the form , with a propositional formula and ,1] ] for which the classical theory is consistent .an inconsistency - tolerant inference relation for possibilistic logic can then be defined as follows : we will write as an abbreviation for . it can be shown that can be decided by making calls to a sat solver , with the number of certaintly levels in .there is a close relationship between possibilistic logic and the rational closure of a set of defaults .recall that is tolerated by a set of defaults if the classical formula is consistent .let be a set of defaults .the rational closure of is based on a stratification , known as the z - ordering , where each contains all defaults from which are tolerated by .intuitively , contains the most general default rules , contains exceptions to these rules , contains exceptions to these exceptions , etc .given the stratification we define the possibilistic logic theory , where we assume .it then holds that is in the rational closure of iff .we now cover some basic notions from statistical learning theory .we restrict ourselves to binary classification problems , where the two labels are and .let be a set of _examples_. a _ hypothesis _ is a function .a hypothesis is said to cover an example if .consider a set of labeled examples that have been iid sampled from a distribution .a hypothesis s sample error rate is where if and otherwise . a hypothesis s expected error w.r.t . 
the probability distribution is given by .$ ] statistical learning theory provides tools for bounding the probability , where is known to be sampled iid from but itself is unknown .these bounds link s training set error to its ( probable ) performance on other examples drawn from the same distribution , and therefore permits theoretically controlling overfitting .the most important bounds of this type depend on the vapnik - chervonenkis ( vc ) dimension .a hypothesis set is said to shatter a set of examples if for every subset there is a hypothesis such that for every and for every .the vc dimension of is the cardinality of the largest set that is shattered by .upper bounds based on the vc dimension are increasing functions of the vc dimension and decreasing functions of the number of examples in the training sample .ideally , the goal is to minimize expected error , but this can not be evaluated since is unknown . _structural risk minimization _ helps with this if the hypothesis set can be organized into a hierarchy of nested hypothesis classes of increasing vc dimension .it suggests selecting hypotheses that minimize a risk composed of the training set error and a complexity term , e.g. if two hypotheses have the same training set error , the one originating from the class with lower vc dimension should be preferred .in this section , we formally describe a new learning setting for possibilistic logic called _ learning from default rules_. we assume a finite alphabet is given .an example is a default rule over and a hypothesis is a possibilistic logic theory over .a hypothesis predicts the class of an example by checking if covers , in the following sense .a hypothesis _ covers _ an example if .the hypothesis predicts positive , i.e. , iff covers , and else predicts negative , i.e. .let us consider the following set of examples the following hypotheses over the alphabet cover all positive and no negative examples : the learning task can be formally described as follows. given : : : a multi - set which is an iid sample from a set of default rules over a given finite alphabet .do : : : learn a possiblistic logic theory that covers all positive examples and none of the negative examples in .the above definition assumes that is perfectly separable , i.e. it is possible to perfectly distinguish positive examples from negative examples . in practice, we often relax this requirement , and instead aim to find a theory that minimizes the training set error .similar to learning in graphical models , this learning task can be decomposed into _ parameter learning _ and _ structure learning_. in our context ,the goal of parameter learning is to convert a set of propositional formulas into a possibilistic logic theory , while the goal of structure learning is to decide what that set of propositional formulas should be .parameter learning assumes that the formulas of the possibilistic logic theory are fixed , and only the certainty weights need to be assigned .as the exact numerical values of the certainty weights are irrelevant , we will treat parameter learning as the process of finding the most suitable stratification of a given set of formulas , e.g. the one which minimizes training error or structural risk ( cf .section [ sec : vc ] ) .[ ex : learning1 ] let and a stratification of which minimizes the training error on the examples from is which is equivalent to because . 
note that correctly classifies all examples except .given a set of examples , we write and ) .a stratification of a theory is a _ separating stratification _ of and if it covers all examples from and no examples from .[ exzrankingcomparison ] let us consider the following set of examples let .the following stratification is a separating stratification of and : .note that the z - ranking of also corresponds to a stratification of , as contains exactly the clause representations of the positive examples . however using the z - ranking leads to a different stratification , which is : note that whereas .because arbitrary stratifications can be chosen , there is substantial freedom to ensure that negative examples are not covered .this is true even when the set of considered formulas is restricted to the clause representations of the positive examples , as seen in example [ exzrankingcomparison ] .unfortunately , the problem of finding an optimal stratification is computationally hard .[ thm - complexity ] deciding whether a separating stratification exists for given , and is a -complete problem .the proof of the membership result is trivial .we show the hardness result by reduction from the -complete problem of deciding the satisfiability of quantified boolean formulas of the form where and are vectors of propositional variables and is a propositional formula .let be a propositional theory , let and .we need to show that is satisfiable if and only if there exists a separating stratification for , and .( ) let be an assignment of variables in such that is true. then we can construct the separating stratification as since will always be true in any model consistent with the highest level of the stratification , because of the way we chose and for this level , so will .( ) let be a stratification of which entails the default rule .we can assume w.l.o.g .that has only two levels .since is a separating stratification , we must have .therefore the highest level of must be a consistent theory and must be true in all of its models .let and .we can construct an assignment to variables in by setting for and for .it follows from the construction that must be true .as this result reveals , in practice we will need to rely on heuristic methods for parameter learning . in section [ secheuristicalgorithm ]we will propose such a heuristic method , which will moreover also include structure learning .we explore the vc dimension of the set of possible stratifications of a propositional theory , as this will allow us to provide probabilistic bounds on the generalization ability of a learned possibilistic logic theory .let us write for the set of all stratifications of a propositional theory , and let be the set of all stratifications with at most levels .the following proposition provides an upper bound for the vc dimension and can be proved by bounding the cardinality of .[ prop : upperbound ] let be a set of propositional formulas. then . 
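the coverage relation used throughout can be made concrete with a small brute-force sketch: levels are dropped from the least certain end until the remaining formulas are consistent with the antecedent, and the default is covered iff what remains, together with the antecedent, classically entails the consequent. a real implementation would use a sat solver instead of enumerating models, and the penguin example below is illustrative only.

    from itertools import product

    def models(atoms):
        """all truth assignments over the given atoms."""
        for bits in product([False, True], repeat=len(atoms)):
            yield dict(zip(atoms, bits))

    def consistent(formulas, atoms):
        return any(all(f(m) for f in formulas) for m in models(atoms))

    def entails(formulas, query, atoms):
        return all(query(m) for m in models(atoms) if all(f(m) for f in formulas))

    def covers(stratification, alpha, beta, atoms):
        """does the stratified theory entail 'if alpha then typically beta'?
        stratification is a list of levels, most certain first."""
        for k in range(len(stratification), -1, -1):
            theory = [f for level in stratification[:k] for f in level]
            if consistent(theory + [alpha], atoms):
                return entails(theory + [alpha], beta, atoms)
        return False

    # toy example: birds typically fly, penguins are birds that typically do not
    atoms = ["bird", "penguin", "flies"]
    strat = [
        [lambda m: not m["penguin"] or m["bird"]],        # most certain level
        [lambda m: not m["penguin"] or not m["flies"]],
        [lambda m: not m["bird"] or m["flies"]],          # least certain level
    ]
    print(covers(strat, lambda m: m["bird"], lambda m: m["flies"], atoms))      # True
    print(covers(strat, lambda m: m["penguin"], lambda m: m["flies"], atoms))   # False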
in the next theorem, we establish a lower bound on the vc dimension of stratifications with at most levels which shows that the above upper bound is asymptotically tight .[ thm - vc2 ] for every , , there is a propositional theory consisting of formulas such that to prove theorem [ thm - vc2 ] , we need the following lemmas ; some straightforward proofs are omitted due to space constraints .[ lemma : orders ] if is a totally ordered set , let denote the -th highest element of .let be a set of cardinality where .let be a set of inequalities .then for any there is a permutation of satisfying all constraints from and no constraints from .[ lemma : pos_order ] let denote a boolean formula which is true if and only if at least of the arguments are true .let be a set of propositional logic variables and be a permutation of elements from .let .let be a possibilistic logic theory .let and be disjoint subsets of . then iff w.r.t . the ordering given by the permutation .[ lemma : vc3 ] for every there is a propositional theory consisting of formulas such that let be a set of propositional variables where , and let be defined as in lemma [ lemma : orders ] .let i.e. contains one default rule for every inequality from .it follows from lemma [ lemma : orders ] and lemma [ lemma : pos_order ] that the set can be shattered by stratifications of the propositional theory .the cardinality of is . therefore the vc dimension of stratifications of is at least .we show that if and are powers of two then is a lower bound of the vc dimension .the general case of the theorem then follows straightforwardly .let and let be a set of default rules of cardinality shattered by .it follows from lemma [ lemma : vc3 ] that such a set always exists .let .then has cardinality and is shattered by . to see that the latter holds , note that the sets of formulas are disjointtherefore , if we want to find a stratification from which covers only examples from an arbitrary set and no other examples from then we can merge stratifications of which cover exactly the examples from , where merging stratifications is done by level - wise unions . combining the derived lower bounds and upper bounds on the vc dimension together with the structural risk minimization principle, we find that given two stratifications with the same training set error rate , we should prefer the one with the fewest levels .furthermore , when structure learning is used , it is desirable for learned theories to be compact .a natural learning problem then consists in selecting a small subset of , where corresponds to the set of formulas considered by the structure learner , and identifying a stratification only for that subset . the results in this section can readily be extended to provide bounds on the vc dimension of this problem .let be a propositional theory of cardinality and let be a positive integer .the vc dimension of the set of hypotheses involving at most formulas from and having at most levels is bounded by .this can simply be obtained by upper - bounding the number of the different stratifications with at most levels and formulas selected from a set of cardinality , by . 
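this counting argument is easy to reproduce numerically: for any finite hypothesis class h one has vc(h) <= log2 |h|, so a loose over-count of the stratifications already yields a bound. the count used below, each chosen formula assigned to one of at most k level labels, is deliberately cruder than the one in the proofs, so the constants differ from the exact bounds stated above.

    from math import comb, log2

    def vc_upper_bound(n_formulas, max_levels, max_selected=None):
        """crude upper bound on the vc dimension of stratified theories via
        vc(h) <= log2 |h|, over-counting the hypotheses."""
        m = n_formulas if max_selected is None else max_selected
        n_hypotheses = sum(comb(n_formulas, j) * max_levels ** j for j in range(m + 1))
        return log2(n_hypotheses)

    print(vc_upper_bound(n_formulas=50, max_levels=5))                    # all formulas usable
    print(vc_upper_bound(n_formulas=50, max_levels=5, max_selected=10))   # at most 10 selected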
in this section ,we propose a practical heuristic algorithm for learning a possibilistic logic theory from a set of positive and negative examples of default rules .our method combines greedy structure learning with greedy weight learning .we assume that every default or non - default in is such that and correspond to clauses .the algorithm starts by initializing the `` working '' stratification to be an empty list .then it repeats the following revision procedure for a user - defined number of iterations , or until a timeout is reached .first , it generates a set of candidate propositional clauses as follows : * it samples a set of defaults from the examples that are misclassified by . * for each default which has been sampled , it samples a subclause of and a subclause of .if is a positive example then is added to ; if it is a negative example , then is added instead , where is obtained from by negating each of the literals .the algorithm then tries to add each formula in to an existing level of or to a newly inserted level .it picks the clause whose addition leads to the highest accuracy and adds it to .the other clauses from are discarded . in case of ties , the clause which leads to the stratification with the fewest levelsis selected , in accordance with the structural risk minimization principle and our derived vc dimension .if there are multiple such clauses , then it selects the shortest among them .subsequently , the algorithm tries to greedily minimize the newly added clause , by repeatedly removing literals as long as this does not lead to an increase in the training set error .next , the algorithm tries to revise by greedily removing clauses whose deletion does not increase the training set error . finally , as the last step of each iteration, the weights of all clauses are optimized by greedily reinserting each clause in the theory .we evaluate our heuristic learning algorithmthe data , code , and learned models are available from https://github.com / supertweety/. ] in two different applications : learning domain theories from crowdsourced default rules and approximating map inference in propositional markov logic networks . as we are not aware of any existing methods that can learn a consistent logical theory from a set of noisy defaults , there are no baseline methods to which our method can directly be compared .however , if we fix a target literal , we can train standard classifiers to predict for each propositional context whether the default holds .this can only be done consistently with `` parallel '' rules , where the literals in the consequent do not appear in antecedents .we will thus compare our method to three traditional classifiers on two crowdsourced datasets of parallel rules : random forests , c4.5 decision trees , and the rule learner ripper .random forests achieve state - of - the - art accuracy but its models are difficult to interpret .decision trees are often less accurate but more interpretable than random forests .finally , rule learners have the most interpretable models , but often at the expense of lower accuracy . in the second experiment ,approximating map inference , we do not restrict ourselves to parallel rules . 
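the revision step at the heart of the learner described above can be sketched as follows. clauses are assumed to be represented as tuples of literals, and accuracy(stratification) is assumed to be the training-set accuracy computed with a coverage test such as the one sketched earlier; the candidate sampling, literal pruning, clause pruning and weight re-optimization steps of the full algorithm are omitted.

    import copy

    def candidate_positions(strat):
        """all ways to place a new clause: into an existing level or as a new level."""
        for i in range(len(strat)):
            yield ("existing", i)
        for i in range(len(strat) + 1):
            yield ("new", i)

    def insert_clause(strat, clause, position):
        strat = copy.deepcopy(strat)
        kind, i = position
        if kind == "existing":
            strat[i].append(clause)
        else:
            strat.insert(i, [clause])
        return strat

    def greedy_step(strat, candidates, accuracy):
        """try every candidate clause at every position, keep the change with the best
        training accuracy, preferring fewer levels and then shorter clauses on ties."""
        best_key, best = (accuracy(strat), -len(strat), 0), strat
        for clause in candidates:
            for pos in candidate_positions(strat):
                cand = insert_clause(strat, clause, pos)
                key = (accuracy(cand), -len(cand), -len(clause))
                if key > best_key:
                    best_key, best = key, cand
        return best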
in this case , only our method can guarantee that the predicted defaults will be consistent .our learning algorithm is implemented in java and uses the sat4j library .the implementation contains a number of optimizations which make it possible to handle datasets of thousands of default rules , including caching , parallelization , detection of relevant possibilistic subtheories for deciding entailment queries and unit propagation in the possibilistic logic theories .we use the weka implementations for the three baselines .when using our heuristic learning algorithm , we run it for a maximum time of 10 hours for the crowdsourcing experiments reported in section [ sec : exp - crowd ] and for one hour for the experiments reported in section [ sec : exp - map ] . for c4.5 and ripper ,we use the default settings . for random forests, we used the default settings and set the number of trees to 100 .we used crowdflower , an online crowdsourcing platform , to collect expert rules about two domains . in the first experiment, we created 3706 scenarios for a team on offense in american football by varying the field position , down and distance , time left , and score difference .then we presented six choices for a play call ( punt , field goal , run , pass , kneel down , do not know / it depends ) and asked the user to select the most appropriate one .all scenarios were presented to 5 annotators .a manual inspection of a subset of the rules revealed that they are of reasonably high quality . in a second experiment , users were presented with 2388 scenarios based on texas holdem poker situations , where users were asked whether in a given situation they would typically fold , call or raise , with a fourth option again being `` do not know / it depends '' . each scenario was again presented to 5 annotators .given the highly subjective nature of poker strategy , it was not possible to enforce the usual quality control mechanism on crowdflower in this case , and the quality of the collected rules was accordingly found to be more variable . in both cases ,the positive examples are the rules obtained via crowdsourcing , while negative examples are created by taking positive examples and randomly selecting a different consequent . to create training and testing sets , we divided the data based on annotator i d so that all rules labeled by a given annotated appear only in the training set or only in the testing set , to prevent leakage of information .we added a set of hard rules to the possibilistic logic theories to enforce that only one choice should be selected for a game situation .the baseline methods were presented with the same information , in the sense that the problem was presented as a multi - class classification problem , i.e. given a game situation , the different algorithms were used to predict the most typical action ( with one additional option being that none of the actions is typical ) .the results are summarized in table [ tab : mrfs ] . in the poker experiment ,our approach obtained slightly higher accuracy than random forest and ripper but performed slightly worse than c4.5 .however , a manual inspection showed that a meaningful theory about poker strategy was learned .for example , at the lowest level , the possibilistic logic theory contains the rule `` call '' , which makes sense given the nature of the presented scenarios . at a higher level, it contains more specific rules such as `` if you have three of a kind then raise '' . 
at the level above , it contains exceptions to these more specific rules such as `` if you have three of a kind , there are three hearts on the board and your opponent raised on the river then call '' . in the american football experiment ,our approach obtained lower accuracy than the competing algorithms .the best accuracy was achieved by c4.5 .again , we also manually inspected the learned possibilistic logic theory and found that it captures some general intuitions and known strategy about the game .for example , the most general rule is `` pass '' which is the most common play type .another example is that second most general level has several rules that say on fourth down and long you should punt .more specific levels that allow for cases when you should not punt , such as when you are in field goal range . despite not achieving the same accuracy as c4.5 in this experiment, it nonetheless seems that our method is useful for building up domain theories by crowdsourcing opinions .the learned domain theories are easy to interpret ( e.g. , the size of the poker theory , as a sum of rule lengths , is more than 10 times smaller than the number of nodes in the learned tree ) and capture relevant strategies for both games .the models obtained by classifiers such as c4.5 , on the other hand , are often difficult to interpret . moreover , traditional classifiers such as c4.5 can only be applied to parallel rules , and will typically lead to inconsistent logical theories in more complex domains .in contrast , our method can cope with arbitrary default rules as input , making it much more broadly applicable for learning domain theories ..test set accuracies . [ cols="<,^,^,^,^",options="header " , ] the implementation available online contains also an optimized version of the exact algorithm.note that due to its high complexity the algorithm described in this section does not scale to problems involving large numbers of default rules . for practical problems ,it is therefore preferable to use the heuristic algorithm described in section [ secheuristicalgorithm ] .in this section we briefly describe two examples of theories , one learned in the crowd - sourced poker domain ( see section [ sec : exp - crowd ] ) and the other learned in map - inference approximation experiments for the nltcs domain ( see section [ sec : exp - map ] ) .the learned theory for the poker domain is shown in table [ tab : poker ] . since default rules in the dataset from which this theory was learned were all `` parallel rules '' , most of the formulas in the theory are clausal representations of implications of the form `` if situation then action '' ; an exception to this is one of the rules in the lowest level which has the form `` not situation '' , where is in this case `` the flop cards have just been dealt and you have a straight draw and a flush draw '' , and this rule basically serves to block the other rules in this level for evidence .the top level of the theory consists of hard integrity constraints .table [ tab : nltcs ] shows a small theory which was learned in the nltcs domain after 20 iterations of the algorithm ( the complete learned theory available online is larger ) . since in this domainthe default rules were not restricted to be of the `` parallel '' form , also the structure of the rules in the theory is more complex .
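for the map-approximation use of a learned theory, one natural reading is sketched below: given evidence, keep the most certain levels that remain consistent with it and return a model of what is left. this is an assumption about how a stratified theory can be turned into map-style predictions, not a description of the authors' exact procedure, and the brute-force search is only suitable for small alphabets.

    from itertools import product

    def map_state(stratification, evidence, atoms):
        """a most-plausible complete assignment given partial evidence, read off a
        stratified theory (levels listed most certain first) by dropping levels
        from the least certain end until consistency with the evidence is restored."""
        def satisfies(m, formulas):
            return all(f(m) for f in formulas)

        assignments = [dict(zip(atoms, bits))
                       for bits in product([False, True], repeat=len(atoms))]
        for k in range(len(stratification), -1, -1):
            theory = [f for level in stratification[:k] for f in level]
            consistent_models = [m for m in assignments
                                 if satisfies(m, theory)
                                 and all(m[a] == v for a, v in evidence.items())]
            if consistent_models:
                return consistent_models[0]
        return None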
we introduce a setting for learning possibilistic logic theories from defaults of the form `` if alpha then typically beta '' . we first analyse this problem from the point of view of machine learning theory , determining the vc dimension of possibilistic stratifications as well as the complexity of the associated learning problems , after which we present a heuristic learning algorithm that can easily scale to thousands of defaults . an important property of our approach is that it is inherently able to handle noisy and conflicting sets of defaults . among others , this allows us to learn possibilistic logic theories from crowdsourced data and to approximate propositional markov logic networks using heuristic map solvers . we present experimental results that demonstrate the effectiveness of this approach .
some techniques for hiding data in executables are already proposed ( e.g. , shin et al ) . in this paperwe introduce a very simple technique to hide secret message bits inside source codes as well .we describe our steganographic technique by hiding inside html source as cover text , but this can be easily extended to any case - insensitive language source codes .html tags are basically directives to the browser and they carry information regarding how to structure and display the data on a web page .they are not case sensitive , so tags in either case ( or mixed case ) are interpreted by the browser in the same manner ( e.g. , `` '' and `` '' refers to the same thing ) .hence , there is a redundancy and we can exploit this redundancy . to embed secret message bits into html ,if the cases of the tag alphabets in html cover text are accordingly manipulated , then this tampering of the cover text will be ignored by the browser and hence it will be imperceptible to the user , since there will not be any visible difference in the web page , hence there will not be any suspect for it as well . also , when the web page is displayed in the browser , only the text contents are displayed , not the tags ( those can only be seen when the user does ` view source ' ) .hence , the secret messages will be kind of hidden to user .both redundancy and imperceptibility conditions for data hiding are met , we use these to embed data in html text .if we do not tamper the html text data that is to be displayed by the browser as web page ( this html cover text is analogical to the cover image , when thought in terms of steganographic techniques in images ) , the user will not even suspect about hidden data in text .we shall only change the case of every character within these html tags ( elements ) in accordance with the secret message bits that we want to embed inside the html web page .if we think of the browser interpreter as a function , we see that it is non - injective , i.e. , not one to one , since whenever , and . the extraction process of the embedded message will also be very simple , one needs to just do ` view source ' and observe the case - patterns of the text within tags and can readily extract the secret message ( and see the unseen ) , while the others will not know anything .the length ( in bits ) of the secret message to be embedded will be upper - limited by the sum of size of text inside html tags ( here we do nt consider attribute values for data embedding . in casewe consider attribute values for data embedding , we need to be more careful , since for some tags we should think of case - sensitivity , e.g. href=``link.html'' , since link file name may be case - sensitive on some systems , whereas , attributes such as align=``center'' is safe ) .if less numbers of bits to be embedded , we can embed the information inside header tag specifying the length of embedded data ( e.g. ` ' if the length of secret data to be embedded is bits ) that will not be shown in the browser ( optionally we can encrypt this integer value with some private key ) . in order to guarantee robustness of this very simple algorithmone may use some simple encryption on the data to be embedded .the algorithm for embedding the secret message inside the html cover text is very simple and straight - forward .first , we need to separate out the characters from the cover text that will be candidates for embedding , these are the case - insensitive text characters inside html tags . 
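a minimal python sketch of this idea is given below; it is not the implementation described in this paper, and the bit-to-case convention used here (1 for upper case, 0 for lower case) is an assumption. quoted attribute values are skipped, following the remark above that such values may be case sensitive.

    def _carriers(html):
        """yield (index, char) for alphabetic characters inside tags,
        skipping quoted attribute values."""
        inside, quoted = False, False
        for i, ch in enumerate(html):
            if inside and ch == '"':
                quoted = not quoted
            elif ch == "<" and not quoted:
                inside = True
            elif ch == ">" and not quoted:
                inside = False
            elif inside and not quoted and ch.isalpha():
                yield i, ch

    def embed(cover_html, bits):
        """hide bits by setting the case of tag characters: 1 -> upper, 0 -> lower."""
        out = list(cover_html)
        carriers = list(_carriers(cover_html))
        if len(bits) > len(carriers):
            raise ValueError("cover text too small for the message")
        for bit, (i, ch) in zip(bits, carriers):
            out[i] = ch.upper() if bit == "1" else ch.lower()
        return "".join(out)

    def extract(stego_html, n_bits):
        """read the hidden bits back from the case pattern of the tag characters."""
        return "".join("1" if ch.isupper() else "0"
                       for _, ch in list(_carriers(stego_html))[:n_bits])

    cover = '<html><body><p align="center">hello world</p></body></html>'
    stego = embed(cover, "10110")
    assert extract(stego, 5) == "10110"

the rendered page is unchanged because the browser ignores tag case, which is exactly the redundancy exploited by the method described above.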
Figure 2 shows a very simplified automaton for this purpose. Before describing the algorithm we define two functions: toUpper(x), which maps a letter x to its uppercase form, and, similarly, toLower(x), which maps a letter x to its lowercase form; here the ASCII value of 'A' is 65 and that of 'a' is 97, a difference of 32. It is easy to see that, restricted to alphabetic characters, the two functions are inverses of each other up to case, and toUpper(x) = toUpper(y) implies that x and y can differ only in case. Now we want to embed the secret data bits inside the case-insensitive text within the HTML tags. Consider the sequence of characters inside the HTML tags of the cover text (the input HTML). A character is a candidate for hiding a secret message bit iff it is an alphabetic character. If we want to hide a secret message bit inside a cover-text character c, the corresponding stego-text character is obtained by forcing the case of c according to that bit; i.e., if isAlphabet(c) is true, c is written in one case for bit value 1 and in the other case for bit value 0, while non-alphabetic characters are copied unchanged and carry no data. The number of bits (n) of the secret message embedded into the HTML cover text must also be embedded inside the HTML (e.g., in a header element). Figure 1 and Algorithm [alg:embed] together explain the embedding algorithm: search for all the HTML tags present in the HTML cover text, extract all the characters from inside those tags using the DFA described in Figure 2, set the case of successive alphabetic characters according to the message bits, and embed the secret message length inside the HTML header of the stego text.

The algorithm for extracting the secret message bits is even simpler. As in the embedding process, we must first separate out the candidate text (the text within tags) that was used for embedding the secret message bits, and we must also extract the number of bits (n) embedded in this page (e.g., from the header element). One obtains the stego text by doing 'view source'. For each stego-text character, only an alphabetic character is a candidate for decoding, and the hidden bit is recovered from its case (uppercase yields one bit value, lowercase the other). Repeating this for all candidate characters extracts all the hidden bits. Figure 2 and Algorithm [alg:extract] together explain the extraction algorithm: search for all the HTML tags present in the HTML stego text, extract all the characters from inside those tags using the DFA described in Figure 2, extract the secret message length from the HTML header of the stego text, and decode the bits from the cases of the alphabetic characters.

Figures 3, 4 and 5 show an example of how our method works (the cover HTML source, the stego HTML source, and the rendered stego web page), while Figure 6 compares the histograms of the cover HTML and the stego HTML in terms of (ASCII) character frequencies. Classical image-hiding techniques like LSB data hiding always introduce some (visible) distortion in the stego image (which can be reduced using suitable techniques), but our data-hiding technique for HTML is novel in the sense that it introduces no visible distortion in the stego text at all.

In this paper we presented an algorithm for hiding data in HTML text. The technique can be extended to any case-insensitive language, where data can be embedded in a similar manner; e.g., we can embed secret message bits even in source code written in languages like BASIC or Pascal, or in the case-insensitive sections (e.g., comments) of case-sensitive languages like C. Data-hiding methods for images produce distorted stego images, but the HTML data-hiding technique does not create any sort of visible distortion in the stego HTML text.

F. Battisti, M. Carli, A. Neri, K.
Egiazarian, "A generalized Fibonacci LSB data hiding technique", 3rd International Conference on Computers and Devices for Communication (CODEC-06), TEA, Institute of Radio Physics and Electronics, University of Calcutta, December 18-20, 2006.
Sandipan Dey, Ajith Abraham and Sugata Sanyal, "An LSB data hiding technique using natural numbers", IEEE Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIHMSP 2007), Nov 26-28, 2007, Kaohsiung City, Taiwan, IEEE Computer Society Press, USA, ISBN 0-7695-2994-1, pp. 473-476, 2007.
Sandipan Dey, Ajith Abraham and Sugata Sanyal, "An LSB data hiding technique using prime numbers", Third International Symposium on Information Assurance and Security, August 29-31, 2007, Manchester, United Kingdom, IEEE Computer Society Press, USA, ISBN 0-7695-2876-7, pp. 101-106, 2007.
In this paper we suggest a novel data-hiding technique for an HTML web page. HTML tags are case insensitive, so a lowercase letter and an uppercase letter inside an HTML tag are interpreted in the same manner by the browser; i.e., a change of case inside a web page is imperceptible to the browser. We exploit this redundancy to embed secret data inside a web page, with no changes visible to the user of the web page, so that the user cannot even suspect the data hiding. The embedded data can be recovered by viewing the source of the HTML page. The technique can easily be extended to embed a secret message inside any piece of source code whose standard interpreter is case-insensitive.
the observed complexity of congested traffic flows has puzzled traffic modelers for a long time ( see for an overview ) .the most controversial open problems concern the issue of faster - than - vehicle characteristic propagation speeds and the question whether traffic models with or without a fundamental diagram ( i.e. with or without a unique equilibrium flow - density or speed - distance relationship ) would describe empirical observations best . while the first issue has been intensively debated recently ( see , and references therein ) , this paper addresses the second issue .the most prominent approach regarding models _ without _ a fundamental diagram is the three - phase traffic theory by .the three phases of this theory are `` free traffic '' , `` wide moving jams '' , and `` synchronized flow '' . while a characteristic feature of `` synchronized flow '' is the wide scattering of flow - density data , many microscopic and macroscopic traffic models neglect noise effects and the heterogeneity of driver - vehicle units for the sake of simplicity , and they possess a unique flow - density or speed - distance relationship under stationary and spatially homogeneous equilibrium conditions .therefore , appendix [ sca ] discusses some issues concerning the wide scattering of congested traffic flows and how it can be treated within the framework of such models . for models with a fundamental diagram ,a phase diagram approach has been developed to represent the conditions under which certain traffic states can exist .a favourable property of this approach is the possibility to semi - quantitatively derive the conditions for the occurence of the different traffic states from the instability properties of the model under consideration and the outflow from congested traffic .the phase diagram approach for models with a fundamental diagram has recently been backed up by empirical studies .nevertheless , the approach has been criticized , which applies to the alternative three - phase traffic theory as well . while both theories claim to be able to explain the empirical data , particularly the different traffic states and the transitions between them , the main dispute concerns the following points : * both approaches use an inconsistent terminology regarding the definition of traffic phases and the naming of the traffic states . *both modeling approaches make simplifications , but are confronted with empirical details they were not intended to reproduce ( e.g. effects of details of the freeway design , or the heterogeneity of driver - vehicle units ) .* three - phase traffic theory is criticized for being complex , inaccurate , and inconsistent , and related models are criticized to contain too many parameters to be meaningful . *it is claimed that the phase diagram of models with a fundamental diagram would not represent the empirical observed traffic states and transitions well . in particular , the `` general pattern '' ( gp ) and the `` widening synchronized pattern '' ( wsp ) would be missing .moreover , wide moving jams should always be part of a `` general pattern '' , and homogeneous traffic flows should not occur for extreme , but rather for small bottleneck strengths . in the following chapters, we will try to overcome these problems . 
in sec .[ sec : phenomen ] we will summarize the stylized empirical facts that are observed on freeways in many different countries and have to be explained by realistic traffic models .afterwards , we will discuss and clarify the concept of traffic phases in sec .[ sec : defphases ] . in sec .[ sec : phase ] , we show that the traffic patterns of three - phase traffic theory can be simulated by a variety of microscopic and macroscopic traffic models with a fundamental diagram , if the model parameters are suitably chosen . for these model parameters ,the resulting traffic patterns look surprisingly similar to simulation results for models representing three - phase traffic theory , which have a much higher degree of complexity . depending on the interest of the reader , he / she may jump directly to the section of interest .finally , in sec .[ sec : conclusions ] , we will summarize and discuss the alternative explanation mechanisms , pointing out possible ways of resolving the controversy .in this section , we will pursue a data - oriented approach .whenever possible , we describe the observed data without using technical terms used within the framework of three - phase traffic theory or models with a fundamental diagram . in order to show that the following observations are generally valid , we present data from several freeways in germany , not only from the german freeway a5 , which has been extensively studied before .our data from a variety of other countries confirm these observations as well . in order to eliminate confusion arising from different interpretations of the data and to facilitate a direct comparison between computer simulations and observations, one has to simulate the method of data acquisition and the subsequent processing or interpretation steps as well .we will restrict ourselves here to the consideration to aggregated stationary detector data which currently is the main data source of freeway traffic studies .when comparing empirical and simulation data , we will focus on the velocity ( and not the density ) , since it can be measured directly .in addition to the aggregation over one - minute time intervals , we will also aggregate over the freeway lanes .this is justified due to the typical synchronization of velocities among freeway lanes in all types of congested traffic . to simulate the measurement and interpretation process, we use `` virtual detectors '' recording the passage time and velocity of each vehicle . for each aggregation time interval( typically ) , we determine the traffic flow as the vehicle count divided by the aggregation time , and the velocity as the arithmetic mean value of the individual vehicles passing in this time period .notice that the arithmetic mean value leads to a systematic overestimation of velocities in congested situations and that there exist better averaging methods such as the harmonic mean .nevertheless , we will use the above procedure because this is the way in which empirical data are typically evaluated by detectors .since freeway detectors are positioned only at a number of discrete locations , interpolation techniques have to be applied to reconstruct the observed spatiotemporal dynamics at any point in a given spatiotemporal region .if the detector locations are not further apart than about , it is sufficient to apply a linear smoothing / interpolating filter , or even to plot the time series of the single detectors in a suitable way ( see , e.g. fig . 
1 in ) .this condition , however , severely restricts the selection of suitable freeway sections , which is one of the reasons why empirical traffic studies in germany have been concentrated on a long section of the autobahn a5 near frankfurt . for most other freeway sections showing recurrent congestion patterns ,two neighboring detectors are apart , which is of the same order of magnitude as typical wavelengths of non - homogeneous congestion patterns and therefore leads to ambiguities as demonstrated by .furthermore , the heterogeneity of traffic flows and measurement noise lead to fluctuations obscuring the underlying patterns. both problems can be overcome by post - processing the aggregated detector data .furthermore , have proposed a method called `` asda / foto '' for short - term traffic prediction .most of these methods , however , can not be applied for the present investigation since they do not provide continuous velocity estimates for all points of a certain spatiotemporal region , or because they are explicitly based on models . (the method asda / foto , for example , is based on three - phase traffic theory . )we will therefore use the adaptive smoothing method , which has recently been validated with empirical data of very high spatial resolution . in order to be consistent, we will apply this method to both , the real data and the virtual detector data of our computer simulations . in this section, we will summarize the _ stylized facts _ of the spatiotemporal evolution of congested traffic patterns , i.e. , typical empirical findings that are persistently observed on various freeways all over the world . in order to provide a comprehensive list as a _ testbed _ for traffic models and theories, we will summarize below all relevant findings , including already published ones : spatiotemporal dynamics of the average velocity on two different freeways .( a ) german freeway a9 in direction south , located in the area north of munich .horizontal lines indicate two intersections ( labelled `` ak '' ) , which cause bottlenecks , since they consume some of the freeway capacity .the traffic direction is shown by arrows .( b ) german freeway a8 in direction east , located about east of munich . 
here, the bottlenecks are caused by uphill and downhill gradients around `` irschenberg '' and by an accident at {km} ] , and an additional obstruction by an accident at {km} ] and {km / h} ] ( the location of the temporary bottleneck caused by an incident ) starts moving upstream at .such a `` detachment '' of the downstream congestion front occurs , for example , when an accident site has been cleared , and it is one of two ways in which the dissolution of traffic congestion starts ( see next item for the second one ) ._ the upstream front of spatially extended congestion patterns has no characteristic speed ._ depending on the traffic demand and the bottleneck capacity , it can propagate upstream ( if the demand exceeds the capacity ) or downstream ( if the demand is below capacity ) .this can be seen in all extended congestion patterns of fig .[ fig : empdata1 ] ( see also ) .the downstream movement of the congestion front towards the bottleneck is the second and most frequent way in which congestion patterns may dissolve ._ most extended traffic patterns show some `` internal structure '' propagating upstream approximately at the same characteristic speed ._ consequently , all spatiotemporal structures in figs .[ fig : empdata1 ] and [ fig : empdata2 ] ( sometimes termed `` oscillations '' , `` stop - and - go traffic '' , or `` small jams '' ) , move in parallel .the periods and wavelengths of internal structures in congested traffic states tend to decrease as the severity of congestion increases ._ this applies in particular to measurements of the average velocity .( see , for example , fig .[ fig : empdata1](a ) , where the greater of two bottlenecks , located at the intersection mnchen - nord , produces oscillations of a higher frequency .typical periods of the internal quasi - periodic oscillations vary between about and , corresponding to wavelengths between and ._ for bottlenecks of moderate strength , the amplitude of the internal structures tends to increase while propagating upstream_. this can be seen in _ all _ empirical traffic states shown in this contribution , and also in .it can also be seen in the corresponding velocity time series , such as the ones in fig .12 of , in , or in _ all _ relevant time series shown in chapters 9 - 13 of .the oscillations may already be visible at the downstream boundary ( fig .[ fig : empdata1](b ) ) , or emerge further upstream ( figs . [ fig : empdata1](a ) , [ fig : empdata2](a ) ) . during their growth, neighboring perturbations may merge ( fig . 1 in ) , or propagate unaffected ( fig .1 ) . at the upstream end of the congested area, the oscillations may eventually become isolated `` wide jams '' ( fig .[ fig : empdata2 ] ) or remain part of a compact congestion pattern ( fig .[ fig : empdata1 ] ) . 9 ._ light or very strong bottlenecks may cause extended traffic patterns , which appear homogeneous ( uniform in space ) , _ see , for example , figs .1(d ) and 1(f ) of .note however that , for strong bottlenecks ( typically caused by accidents ) , the empirical evidence has been controversially debated , in particular as the oscillation periods at high densities reach the same order of magnitude as the smoothing time window that has typically been used in previous studies ( cf .point 7 above ) .this makes oscillations hardly distinguishable from noise . ) in sec .[ sec : simidm ] below . ]see appendix [ app ] for a further discussion of this issue . 
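Before turning to the interpretation of these stylized facts, it may help to make the detector aggregation described earlier concrete: the flow is the vehicle count divided by the aggregation interval, and the detector reports the arithmetic (time-mean) speed, which systematically exceeds the harmonic (space-mean) speed in congested conditions. The short Python sketch below illustrates this with synthetic single-vehicle records; the function name and the synthetic data are ours and do not reproduce the processing chain used for the figures.

```python
import numpy as np


def aggregate_detector(passage_times, speeds, dt=60.0, t_end=3600.0):
    """Aggregate single-vehicle detector records into 1-minute flow and speed series.

    passage_times : times (s) at which vehicles cross the virtual detector
    speeds        : corresponding spot speeds (m/s)
    Returns flow (veh/h), arithmetic mean speed, and harmonic mean speed per interval.
    """
    edges = np.arange(0.0, t_end + dt, dt)
    flow, v_arith, v_harm = [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        v = speeds[(passage_times >= a) & (passage_times < b)]
        flow.append(len(v) * 3600.0 / dt)                               # vehicles per hour
        v_arith.append(v.mean() if len(v) else np.nan)                  # time-mean (detector value)
        v_harm.append(len(v) / np.sum(1.0 / v) if len(v) else np.nan)   # space-mean estimate
    return np.array(flow), np.array(v_arith), np.array(v_harm)


# synthetic congested-traffic records: slow vehicles are over-represented in space,
# so the arithmetic (time) mean exceeds the harmonic (space) mean
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 3600.0, 1500))
spot_speeds = rng.uniform(3.0, 20.0, 1500)
q, v_a, v_h = aggregate_detector(times, spot_speeds)
print(q[:3], v_a[:3], v_h[:3])   # the harmonic mean is systematically lower
```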
note that the above stylized facts have not only be observed in germany , but also in other countries , e.g. the usa , great britain , and the netherlands .furthermore , we find that many congestion patterns are composed of several of the elementary patterns listed above .for example , the congestion pattern observed in fig .[ fig : empdata2](b ) can be decomposed into moving and stationary localized patterns as well as extended patterns .the source of probably most controversies in traffic theory is an observed spatiotemporal structure called the _`` pinch effect '' _ or _ `` general pattern '' _ , see for details and fig . 1 of for a typical example of the spatiotemporal evolution .from the perspective of the above list , this pattern relates to _ stylized facts 6 and 8 _ ,i.e. , it has the following features : ( i ) relatively stationary congested traffic ( _ pinch region _ ) near the downstream front , ( ii ) small perturbations that grow to oscillatory structures as they travel further upstream , ( iii ) some of these structures grow to form `` wide jams '' , thereby suppressing other small jams , which either merge or dissolve .the question is whether this congestion pattern is composed of several elementary congestion patterns or a separate , elementary pattern , which is sometimes called `` general pattern '' .this will be addressed in sec .[ sec : pinch ] .the concept of `` phases '' has originally been used in areas such as thermodynamics , physics , and chemistry . in these systems , `` phases '' mean different aggregate states ( such as solid , fluid , or gaseous ; or different material compositions in metallurgy ; or different collective states in solid state physics ) .when certain `` control parameters '' such as the pressure or temperature in the system are changed , the aggregate state may change as well , i.e. a qualitatively different macroscopic organization of the system may result .if the transition is abrupt , one speaks of first - order ( or `` hysteretic '' , history - dependent ) phase transitions . otherwise , if the transition is continuous , one speaks of second - order phase transitions . in an abstract space ,whose axes are defined by the control parameters , it is useful to mark parameter combinations , for which a phase transition occurs , by lines or `` critical points '' .such illustrations are called phase diagrams , as they specify the conditions , under which certain phases occur .most of the time , the terms `` phase '' and `` phase diagram '' are applied to large ( quasi - infinite ) , spatially closed , and homogeneous systems in thermodynamic equilibrium , where the phase can be determined in any point of the system .when transferring these concepts to traffic flows , researchers have distinguished between one - phase , two - phase , and three - phase models . the number of phases is basically related to the ( in ) stability properties of the traffic flows ( i.e. the number of states that the instability diagram distinguishes ) .the equilibrium state of one - phase models is a spatially homogeneous traffic state ( assuming a long circular road without any bottleneck ) .an example would be the burgers equation , i.e. a lighthill whitham richard model with diffusion term .two - phase models would additionally produce oscillatory traffic states such as wide moving jams or stop - and - go waves , i.e. 
they require some instability mechanism .three - phase models introduce another traffic state , so - called `` synchronized flow '' , which is characterized by a self - generated scattering of the traffic variables .it is not clear , however , whether this state exists in reality in the absence of spatial inhomogeneities ( freeway bottlenecks ) . note , however , that the concept of phase transitions has also been transferred to non - equilibrium systems , i.e. driven , open systems with a permanent inflow or outflow of energy , inhomogeneities , etc .this use is common in systems theory .for example , one has introduced the concept of boundary - induced phase transitions . from this perspective, the burgers equation can show a boundary - induced phase transition from a free - flow state with _forwardly _ propagating congestion fronts to a congested state with _ upstream _ moving perturbations of the traffic flow .this implies that the burgers equation ( with one equilibrium phase ) has _ two non-_equilibrium phases .analogously , two - phase models ( in the previously discussed , thermodynamic sense ) can have more than two _non-_equilibrium phases .however , to avoid confusion , one often uses the terms `` ( spatiotemporal ) traffic patterns '' or `` ( elementary ) traffic states '' rather than `` non - equilibrium phases '' .for example , the gas - kinetic - based traffic model ( gkt model ) or the intelligent driver model ( idm ) , which are two - phase models according to the above classification , may display _ several _ congested traffic states besides free traffic flow .the phase diagram approach to traffic modeling proposed by was originally presented for an open traffic system with an on - ramp .it shows the qualitatively different , spatiotemporal traffic patterns as a function of the freeway flow and the bottleneck strength .note , however , that the resulting traffic state may depend on the history ( e.g. 
the size of perturbations in the traffic flow ) , if traffic flows have the property of metastability .the concept of the phase diagram has been taken up by many authors and applied to the spatiotemporal traffic patterns ( non - equilibrium phases ) produced in many models .besides on - ramp scenarios , one may study scenarios with flow - conserving bottlenecks ( such as lane closures or gradients ) or with combinations of several bottlenecks .it appears , however , that the traffic patterns for freeway designs with several bottlenecks can be understood , based on the _ combination _ of elementary traffic patterns appearing in a system with a _single _ bottleneck and interaction effects between these patterns the resulting traffic patterns as a function of the flow conditions and bottleneck strengths ( freeway design ) , and therefore the appearance of the phase diagram , depend on the traffic model and the parameters chosen .therefore , the phase diagram approach can be used to classify the large number of traffic models into a few classes .models with qualitatively similar phase diagrams would be considered equivalent , while models producing different kinds of traffic states would belong to different classes .the grand challenge of traffic theory is therefore to find a model and/or model parameters , for which the congestion patterns match the stylized facts ( see sec .[ sec:3ddata ] ) and for which the phase diagram agrees with the empirical one .this issue will be addressed in sec .[ sec : phase ] for the understanding of traffic dynamics one may ask which of the two competing phase definitions ( the thermodynamic or the non - equilibrium one ) would be more relevant for observable phenomena . considering the stylized facts ( see sec . [sec : phenomen ] ) , it is obvious that boundary conditions and inhomogeneities play an important role for the resulting traffic patterns .this clearly favours the dynamic - phase concept over the definition of thermodynamic equilibrium phases : traffic patterns are easily observable and also relevant for applications .( for calculating traveling times , one needs the spatiotemporal dynamics of the traffic pattern , and not the thermodynamic traffic phase . )moreover , thermodynamic phases are not _ observable _ in the strict sense , because real traffic systems are not quasi - infinite , homogeneous , closed systems .consequently , when assessing the quality of a given model , it is of little relevance whether it has two or three physical phases , as long as it correctly predicts the observed spatiotemporal patterns , including the correct conditions for their occurrence . nevertheless , the thermodynamic phase concept ( the instability diagram ) is relevant for _ explaining _ the mechanisms leading to the different patterns .in fact , for models with a fundamental diagram , it is possible to derive the phase diagram of traffic states from the instability diagram , if bottleneck effects and the outflow from congested traffic are additionally considered .in the following , we will show for specific traffic models that not only three - phase traffic theory , but also the conceptionally simpler two - phase models ( as introduced in sec .[ sec : defphases ] ) can display all stylized facts mentioned in sec .[ sec : phenomen ] , if the model parameters are suitably chosen .this is also true for patterns that were attributed exclusively to three - phase traffic theory such as the pinch effect or the widening synchronized pattern ( wsp ) . 
considering the dynamic - phase definition of sec .[ sec : defphases ] , the simplest system that allows to reproduce realistic congestion patterns is an open system with a bottleneck . when simulating an on - ramp bottleneck , the possible flow conditions can be characterized by the _ upstream _ freeway flow ( `` main inflow '' ) and the ramp flow , considering the number of lanes .the _ downstream _traffic flow under free and congested conditions can be determined from these quantities . when simulating a flow - conserving ( ramp - less ) bottleneck ,the ramp flow is replaced by the _ bottleneck strength _ quantifying the degree of local capacity reduction . since many models show hysteresis effects , i.e. discontinuous , history - dependent transitions , the time - dependent traffic conditions before the onset of congestion are relevant as well . in the simplest case ,the response of the system is tested ( i ) for minimum perturbations , e.g. slowly increasing inflows and ramp flows , and ( ii ) for a large perturbation .the second case is usually studied by generating a wide moving jam , which can be done by temporarily blocking the outflow .additionally , the model parameters characterizing the bottleneck situation have to be systematically varied and scanned through .this is , of course , a time - consuming task since producing a single point in this multi - dimensional space requires a complete simulation run ( or even to average over several simulation runs ) . classify models with a fundamental diagram that show dynamic traffic instabilities in a certain density range , as two - phase models .alternatively , these models are referred to as `` models within the fundamental diagram approach '' .note , however , that certain models with a unique fundamental diagram are _one_-phase models ( such as the burgers equation ) .moreover , some models such as the kk model can show one - phase , two - phase or three - phase behavior , depending on the choice of model parameters ( see sec .[ sec : three ] ) .a microscopic two - phase model necessarily has a dynamic acceleration equation or contains time delays such as a reaction time . for macroscopic models ,a necessary ( but not sufficient ) condition for two phases is that the model contains a dynamical equation for the macroscopic velocity .we start with results for the gas - kinetic - based traffic model . like other macroscopic traffic models ,the gkt model describes the dynamics of aggregate quantities , but besides the vehicle density and average velocity , it also considers the velocity variance as a function of velocity and density .the gkt model has five parameters , , , , and characterizing the driver - vehicle units , see table [ tab : gkt ] .in contrast to other popular second - order models , the gkt model distinguishes between the desired time gap when following other vehicles , and the much larger acceleration time to reach a certain desired velocity .furthermore , the drivers of the gkt model `` look ahead '' by a certain multiple of the distance to the next vehicle .the gkt model also contains a variance function reflecting statistical properties of the traffic data .its form can be empirically determined ( see table [ tab : gkt ] ) . 
for the gkt model equations , we refer to ..[tab : gkt]the two parameter sets for the gkt model used in this paper .the four last parameters specify the velocity variance prefactor + 1\}$ ] [ cols="<,<,<",options="header " , ] table [ tab : mech ] gives an overview of mechanisms producing the observed spatiotemporal phenomena listed in sec .[ sec:3ddata ] .so far , these have been either considered incompatible with three - phase models or with two - phase models having a fundamental diagram .it is remarkable that the main controversial observation the occurrence of the pinch effect or general pattern is not only compatible with three - phase models , but can also be produced with conventional two - phase models . for both model classes , this can be demonstrated with macroscopic , microscopic , and cellular automata models , if models and parameters are suitably chosen .it appears that some of the current controversy in the area of traffic modeling arises from the different definitions of what constitutes a traffic phase . in the context of three - phase traffic theory ,the definition of a phase is oriented at equilibrium physics , and in principle , it should be able to determine the phase based on _ local _ criteria and measurements at a _ single _ detector . within three - phase traffic theory , however , this goal is not completely reached : in order to distinguish between `` moving synchronized patterns '' and wide moving jams , which look alike , one needs the additional _ nonlocal _ criterium of whether the congestion pattern propagates through the _ next _ bottleneck area or not .in contrast , the alternative phase diagram approach is oriented at systems theory , where one tries to distinguish different kinds of elementary congestion patterns , which may be considered as non - equilibrium phases occurring in non - homogeneous systems ( containing bottlenecks ) .these traffic patterns are distinguished into localized or spatially extended , moving or stationary ( `` pinned '' ) , and spatially homogeneous or oscillatory patterns .these patterns can be derived from the stability properties of conventional traffic models exhibiting a unique fundamental diagram and unstable and/or metastable flows under certain conditions .models of this class , sometimes also called _ two - phase _ models , include macroscopic and car - following models as well as cellular automata . as key result of our paperwe have found that features , which are claimed to be consistent with three - phase traffic theory only , can also be explained and simulated with conventional models , if the model parameters are suitably specified .in particular , if the parameters are chosen such that traffic at maximum flow is ( meta-)stable and the density range for unstable traffic lies completely on the `` congested '' side of the fundamental diagram , we find the `` widening synchronized pattern '' ( wsp ) , which has not been discovered in two - phase models before . furthermore , the models can be tuned such that no homogeneous congested traffic ( hct ) exists for strong bottlenecks .conversely , we have shown that almost the same kinds of patterns , which are produced by two - phase models , are also found for models developed to reproduce three - phase traffic theory ( such as the kk micro - model ) . 
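To make the notion of a two-phase model with a fundamental diagram concrete, the snippet below sketches the acceleration law of the Intelligent Driver Model (IDM) mentioned above and numerically extracts its equilibrium flow-density relation, i.e. its fundamental diagram. The functional form is the standard published IDM; the parameter values are illustrative assumptions only, and it is precisely the choice of such parameters that decides which congestion patterns the model produces.

```python
import numpy as np


def idm_acceleration(v, dv, s, v0=33.3, T=1.2, a=1.0, b=1.5, s0=2.0, delta=4):
    """IDM acceleration (m/s^2) for speed v, approaching rate dv = v - v_leader,
    and net gap s to the leader; parameter values are illustrative."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))  # desired dynamical gap
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)


def equilibrium_flow(density, veh_length=5.0):
    """Equilibrium (fundamental-diagram) flow for a given density, found by solving
    idm_acceleration(v, 0, s) = 0 at the equilibrium gap s = 1/rho - l by bisection."""
    s = 1.0 / density - veh_length
    lo, hi = 0.0, 40.0
    for _ in range(60):                       # the acceleration is monotone in v
        mid = 0.5 * (lo + hi)
        if idm_acceleration(mid, 0.0, s) > 0.0:
            lo = mid
        else:
            hi = mid
    return density * lo * 3600.0              # veh/h per lane


for rho in [0.01, 0.03, 0.06, 0.1]:           # densities in veh/m
    print(rho, round(equilibrium_flow(rho)))
```

Whether simulated traffic at these equilibrium states is stable, metastable, or unstable depends on the chosen parameters, which is exactly the freedom exploited in this section.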
moreover , when the kk micro - model is simulated with parameters for which it turns into a model with a unique fundamental diagram , it still displays very similar results .therefore , the difference between so - called two - phase and three - phase models does not seem to be as big as the current scientific controversy suggests .for many empirical observations , we have found _ several _ plausible explanations ( compatible and incompatible ones ) , which makes it difficult to determine the underlying mechanism which is actually at work . in our opinion ,convective instability is a likely reason for the occurence of the pinch effect ( or the general pattern ) , but at intersections with large ramp flows , the effect of off- and on - ramp combinations seems to dominate . to explain the transition to wide moving jams, we favour the depletion effect , as the group velocities of structures within congested traffic patterns are essentially constant . forthe wide scattering of flow - density data , all three mechanisms of table [ tab : mech ] do probably play a role .clearly , further observations and experiments are necessary to confirm or reject these interpretations , and to exclude some of the alternative explanations .it seems to be an interesting challenge for the future to devise and perform suitable experiments in order to finally decide between the alternative explanation mechanisms . in our opinion , the different congestion patterns produced by three - phase traffic theory and the alternative phase diagram approach for models with a fundamental diagram share more commonalities than differences .moreover , according to our judgement , three - phase models do not explain _ more _ observations than the simpler two - phase models ( apart maybe from the fluctuations of `` synchronized flow '' , which can , for example , be explained by the heterogeneity of driver - vehicle units ) . the question is , therefore , which approach is superior over the other . to decide this ,the quality of models should be judged in a _ quantitative _ way , applying the following established standard procedure : * as a first step , mathematical quality functions must be defined .note that the proper selection of these functions ( and the relative weight that is given to them ) depends on the purpose of the model . *the crucial step is the statistical comparison of the competing models based on a new , but representative set of traffic measurements , using model parameters determined in a previous calibration step .note that , due to the problem of over - fitting ( i.e. the risk of fitting of noise in the data ) , a high goodness of fit in the calibration step does not necessarily imply a good fit of the new data set , i.e. a high predictive power . *the goodness of fit should be judged with established statistical methods , for example with the adjusted r - value or similar concepts considering the number of model parameters .given the same correlation with the data , a model containing a few parameters has a higher explanatory power than a model with many parameters . given a comparable predictive power of two models , one should select the simpler one according to einstein s principle that _ a model should be as simple as possible , but not simpler_. if one has to choose between two equally performing models with the same number of parameters , one should use the one which is easier to interpret , i.e. 
a model with meaningful and independently measurable parameters ( rather than just fit parameters ) .furthermore , the model should not be sensitive to variations of the model parameters within the bounds of their confidence intervals . applying this benchmarking process to traffic modelingwill hopefully lead to an eventual convergence of explanatory concepts in traffic theory .the authors would like to thank the _ hessisches landesamt fr straen- und verkehrswesen _ and the _ autobahndirektion sdbayern _ for providing the freeway data shown in figs . 1 and 2 .they are furthermore grateful to eddie wilson for sharing the data set shown in fig .[ fig : wilson ] , and to anders johansson for generating the plots from his data .the discussion around three - phase traffic theory is directly related with the wide scattering of flow - density data within synchronized traffic flows .however , it deserves to be mentioned that the discussion around traffic theories has largely neglected the fact that empirical measurements of wide moving jams show a considerable amount of scattering as well ( see , e.g. fig . 15 of ) , while theoretically , one expects to find a `` jam line '' .this suggests that wide scattering is actually _ not _ a specific feature of synchronized flow , but of congested traffic in general . while this questions the basis of three - phase traffic theory to a certain extent , particularly as it is claimed that wide scattering is a distinguishing feature of synchronized flows as compared to wide moving jams , the related car - following models , cellular automata , and macroscopic models build in dynamical mechanisms generating such scattering as one of their key features .in other models , particularly those with a fundamental diagram , this scattering is a simple add - on ( and partly a side effect of the measurement process , see sec .[ sec : measurement ] ) .it can be reproduced , for example , by considering heterogeneous driver - vehicle populations in macroscopic models or car - following models , by noise terms , or slowly changing driving styles .for strong bottlenecks ( typically caused by accidents ) , empirical evidence regarding the existence of homogeneous congested traffic has been somewhat ambiguous so far . on the one hand ,when applying the adaptive smoothing method to get rid of noise in the data , the spatiotemporal speed profile looks almost homogeneous , even when the same smoothing parameters are used as for the measurement of the other traffic patterns , e.g. oscillatory ones . on the other hand ,it was claimed that data of the flow measured at freeway cross sections show an oscillatory behavior .these oscillations typically have small wavelengths , which can have various origins : ( 1 ) they can result from the heterogeneity of driver - vehicle units , particularly their time gaps , which is known to cause a wide scattering of congested flow - density data .( 2 ) they could as well result from problems in maintaining low speeds , as the gas and break pedals are difficult to control .( 3 ) they may also be a consequence of perturbations , which can easily occur when traffic flows of several lanes have to merge in a single lane , as it is usually the case at strong bottlenecks . according to _ stylized fact 6 _ ,all these perturbations are expected to propagate upstream at the speed . 
in order to judgewhether the pattern is to be classified as oscillatory congested traffic or homogeneous congested traffic , one would have to determine the sign of the growth rate of perturbations , i.e. whether large perturbations grow bigger or smaller while travelling upstream .recent traffic data of high spatial and temporal resolution suggest that homogeneous congested traffic states _ do_ exist ( see fig .[ fig : wilson ] ) , but are very rare . for the conclusions of this paper and the applicability of the phase diagram approach , however , it does not matter whether homogeneous congested traffic actually exists or not .this is , because many models with a fundamental diagram can be calibrated in a way that either generates homogeneous patterns for high bottleneck strengths or not ( see sec . [sec : phase ] ) .belomestny , d. , jentsch , v. , schreckenberg , m. , 2003 . completion and continuation of nonlinear traffic timeseries : a probabilistic approach .journal of physics a : mathematical and general 36 ( 45 ) , 1136911383 .bertini , r. , lindgren , r. , helbing , d. , schnhof , m. , 2004 . empirical analysis of flow features on a german autobahn . in : transportation research board83rd annual meeting , washington dc .washington , d.c ., available at arxiv eprint cond - mat/0408138 .kesting , a. , treiber , m. , 2008 . calibrating car - following models by using trajectory data : methodological study .transportation research record : journal of the tranportation research board 2088 , 148156 .smilowitz , k. , daganzo , c. , cassidy , m. , bertini , r. , 1999 .some observations of highway traffic in long queues .transportation research record : journal of the transportation research board 1678 , 225233 .treiber , m. , helbing , d. , 2002 . reconstructing the spatio - temporal traffic dynamics from stationary detector data .cooperative transportation dynamics 1 , 3.13.24 , ( internet journal , www.trafficforum.org/journal ) .treiber , m. , hennecke , a. , helbing , d. , 2000 .microscopic simulation of congested traffic . in : helbing ,d. , herrmann , h. , schreckenberg , m. , wolf , d. ( eds . ) , traffic and granular flow 99 .springer , berlin , pp .365376 .treiber , m. , kesting , a. , wilson , r. e. , 2010 . reconstructing the traffic state by fusion of heterogenous data , computer - aided civil and infrastructure engineering , accepted .preprint physics/0900.4467 .
Despite the availability of large empirical data sets and the long history of traffic modeling, the theory of traffic congestion on freeways is still highly controversial. In this contribution, we compare Kerner's three-phase traffic theory with the phase diagram approach for traffic models with a fundamental diagram. We discuss the inconsistent use of the term "traffic phase" and show that patterns demanded by three-phase traffic theory can be reproduced with simple two-phase models, if the model parameters are suitably specified and factors characteristic of real traffic flows are considered, such as effects of noise or heterogeneity, or the actual freeway design (e.g. combinations of off- and on-ramps). Conversely, we demonstrate that models created to reproduce three-phase traffic theory produce similar spatiotemporal traffic states and associated phase diagrams, no matter whether the parameters imply a unique fundamental diagram in equilibrium or non-unique flow-density relationships. In conclusion, there are different ways of reproducing the empirical stylized facts of spatiotemporal congestion patterns summarized in this contribution, and it appears possible to overcome the controversy by a more precise definition of the scientific terms and a more careful comparison of models and data, considering effects of the measurement process and the right level of detail in the traffic model used.
scientific computation has become critically enabling in almost every field of scientific and engineering study , enabling simulations with modern computers that were thought to be out of reach even a decade ago and suggesting the possibility of exascale computing architectures .the continued scaling of memory , processor speed and parallelization enable studies of increasingly sophisticated multi - scale physical systems . despite these advances , significant challenges and computational bottlenecks still remain in efficiently computing dynamics of extremely high - dimensional systems , such as high reynolds turbulent flow . reduced order models ( roms ) are of growing importance as a critically enabling mathematical framework for reducing the dimension of such large systems .the core of the rom architecture relies on two key innovations : ( i ) the pod - galerkin method , which is used for projecting the high - dimensional nonlinear dynamics to a low - dimensional subspace in a principled way , and ( ii ) sparse sampling ( gappy pod ) of the state space for interpolating the nonlinear terms required for the subspace projection .the focus of this manuscript is on a sparse sampling innovation for roms .specifically , a method for optimizing sampling locations for both reconstruction and identification of parametrized systems .we propose an algorithm comprised of two components : ( i ) an _ offline _ stage that produces initial sparse sensor locations , and ( ii ) an _ online _ stage that uses a short , genetic search algorithm for producing nearly optimal sensor locations .the technique optimizes for both reconstruction error and classification efficacy , leading to an attractive _ online _ modification of commonly used gappy pod methods .the importance of sparse sampling of high - dimensional systems , especially those manifesting low - dimensional dynamics , was recognized early on in the roms community .thus sparse sampling has already been established as a critically enabling mathematical framework for model reduction through methods such as gappy pod and its variants .more generally , sparsity promoting methods are of growing importance in physical modeling and scientific computing .the seminal work of everson and sirovich first established how the gappy pod could play a transformative role in the mathematical sciences . 
in their sparse sampling scheme, random measurements were used to perform reconstruction tasks of inner products .principled selection of the interpolation points , through the gappy pod infrastructure or missing point ( best points ) estimation ( mpe ) , were quickly incorporated into roms to improve performance .more recently , the transformative empirical interpolation method ( eim ) and its most successful variant , the pod - tailored discrete empirical interpolation method ( deim ) , have provided a greedy , sparse samplimg algorithm that allows for nearly optimal reconstructions of nonlinear terms of the original high - dimensional system .the deim approach combines projection with interpolation .specifically , the deim uses selected interpolation indices to specify an interpolation - based projection for a nearly optimal subspace approximating the nonlinearity .it is well - known that the various sparse sampling techniques proposed are not optimal , but have been shown to be sufficiently robust to provide accurate reconstructions of the high - dimensional system .the deim algorithm has been particularly successful for nonlinear model reduction of time - dependent problems .interestingly , for parametrized systems , the deim algorithm needs to be executed in the various dynamical regimes considered , leading to a library learning mathematical framework .thus efficient sparse sampling locations for both classification and reconstruction can be computed in an _ offline _ manner across various dynamical regimes .again , they are not optimal , but they are robust for building roms .we build upon the deim library learning framework , showing that nearly - optimal sparse sampling can be achieved with a short _ online _ , genetic algorithm search from the learned deim libraries .this improves both the classification and reconstruction accuracy of the sparse sampling , making it an attractive performance enhancer for roms .the paper is outlined as follows : in sec .[ sec : rom ] , the basic rom architecture is outlined .section 3 reviews the various innovations of the sparse sampling architecture , including the library building procedure used here .section 4 develops the genetic search algorithm for _ online _ improvement of sparse sampling locations .the method advocated here is demonstrated in sec . 5 on two example problems : the complex cubic - quintic ginzburg - landau equation and fluid flow past a circular cylinder .concluding remarks are provided in sec .in our analysis , we consider a parametrized , high - dimensional system of nonlinear differential equations that arises , for example , from the finite - difference discretization of a partial differential equation . in the formulationproposed , the linear and nonlinear terms for the state vector are separated : where ^t\in \mathbb{r}^n ] .the nonlinear function is evaluated component - wise at the spatial grid points used for discretization .note that we have assumed , without loss of generality , that the parametric dependence is in the nonlinear term .typical discretization schemes for achieving a prescribed spatial resolution and accuracy require the number of discretization points to be very large , resulting in a high - dimensional state vector . 
for sufficiently complicated problems where significant spatial refinement is required and/or higher spatial dimension problems ( 2d or 3d computations , for instance ) can potentially lead to a computationally intractable problem where roms are necessary .the pod - galerkin method is a principled dimensionality - reduction scheme that approximates the function with rank--optimal basis functions where .these optimal basis functions are computed from a singular value decomposition of a time series of snapshots of the nonlinear dynamical system ( [ eq : complex ] ) .given the snapshots of the state variable at times , the snapshot matrix \in \mathbb{r}^{n\times p} ] .specifially , the following two relationships hold & & m_jk = ( _ j,_k ) = _ jk + & & m_jk = ( _ j,_k ) _ s [ ] 0 j , k [ eq : m ] where are the entries of the hermitian matrix and is the kroenecker delta function .the fact that the pod modes are not orthogonal on the support ] .the pseudo - inverse for determining is a least - square fit algorithm .note that in the event the measurement space is sufficiently dense , or as the support space is the entire space , then and , thus implying the eigenvalues of approach unity as the number of measurements become dense .once the vector is determined , then a reconstruction of the solution can be performed using it only remains to consider the efficacy of the measurement matrix .originally , random measurements were proposed .however , the roms community quickly developed principled techniques based upon , for example , minimization of the condition number of , selection of minima or maxima of pod modes , and/or greedy algorithms of eim / deim .thus measurement locations were judiciously chosen for the task of accurately interpolating the nonlinear terms in the rom .this type of sparsity has been commonly used throughout the roms community .the deim algorithm constructs two low - rank spaces through the svd : one for the full system using the snapshots matrix , and a second using the snapshot matrix composed of samples of the nonlinearity alone .thus a low - rank representation of the nonlinearity is given by = { \bf \xi \sigma}_n { \bf w}_n^*\ ] ] where the matrix contains the optimal ( in an sense ) basis set for spanning the nonlinearity .specifically , we consider the rank- basis set ] which is chosen so that is nonsingular. then is uniquely defined from and thus , the tremendous advantage of this result for nonlinear model reduction is that the term requires evaluation of nonlinearity only at indices , where .the deim further proposes a principled method for choosing the basis vectors and indices .the deim algorithm , which is based upon a greedy - like search , is detailed in and further demonstrated in table [ table : alg ] .+ construct snapshot matrix & ] + singular value decomposition of & + rank- approximating basis & ] + approximate by at indices & solve for : with ] + the deim algorithm is highly effective for determining sampling ( sensor ) locations .such sensors can be used with sparse representation and compressive sensing to ( i ) identify dynamical regimes , ( ii ) reconstruct the full state of the system , and ( iii ) provide an efficient nonlinear model reduction and pod - galerkin prediction for the future state . given the parametrized nature of the evolution equation ( [ eq : pod ] ) , we use the concept of library building which arises in machine learning from leveraging low - rank features " from data . 
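The greedy index selection summarized in Table [table:alg] can be written compactly in NumPy. The sketch below implements the standard DEIM pivoting loop together with a gappy least-squares reconstruction from the selected points; the toy snapshot data and the variable names are ours and are only meant to show the mechanics, not to reproduce the examples of this paper.

```python
import numpy as np


def deim_indices(U):
    """Greedy DEIM interpolation indices for the columns of U (n x m nonlinearity basis)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]          # first index: largest entry of first mode
    for l in range(1, m):
        P = np.array(idx)
        c = np.linalg.solve(U[P, :l], U[P, l])       # interpolate the l-th mode at current points
        r = U[:, l] - U[:, :l] @ c                   # residual of that interpolation
        idx.append(int(np.argmax(np.abs(r))))        # next index: largest residual entry
    return np.array(idx)


def gappy_reconstruct(U, idx, measurements):
    """Least-squares (gappy POD) reconstruction of a full state from point measurements."""
    coeffs, *_ = np.linalg.lstsq(U[idx, :], measurements, rcond=None)
    return U @ coeffs


# toy example: snapshots of a travelling pulse, POD basis via SVD, DEIM points, reconstruction
x = np.linspace(-5.0, 5.0, 400)
snapshots = np.array([np.exp(-(x - 0.05 * k) ** 2) for k in range(60)]).T   # n x p
Psi, S, _ = np.linalg.svd(snapshots, full_matrices=False)
U = Psi[:, :6]                                       # rank-6 basis
pts = deim_indices(U)
true_state = snapshots[:, 30]
approx = gappy_reconstruct(U, pts, true_state[pts])
print(pts, np.linalg.norm(true_state - approx) / np.linalg.norm(true_state))
```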
in the rom community, it has recently become an issue of intense investigation .indeed , a variety of recent works have produced libraries of rom models that can be selected and/or interpolated through measurement and classification .before these more formal techniques based upon machine learning were developed , it was already realized that parameter domains could be decomposed into subdomains and a local rom / pod computed in each subdomain .et al . _ used a partitioning based on a binary tree whereas amsallem _et al . _ used a voronoi tessellation of the domain .such methods were closely related to the work of du and gunzburger where the data snapshots were partitioned into subsets and multiple reduced bases computed .thus the concept of library building is well established and intuitively appealing for parametrized systems .we capitalize on these recent innovations and build optimal interpolation locations from multiple dynamics states . however , the focus of this work is on computing , in an online fashion , nearly optimal sparse sensor locations from interpolation points found to work across all the libraries in an offline stage .the offline stage uses the deim architecture as this method gives good starting points for the interpolation . the genetic algorithmwe propose then improves upon the interpolated points by a quick search of nearby interpolation points .it is the pre - computed library structure and interpolation points that allow the genetic algorithm to work with only a short search .ga_sense2 ( 20,62)(a ) ( 60,62)(b ) ( 20,12)(c ) ( 15,22 ) ( 15,53) ( 17,34) ( 83,12) ( 38,44 ) ( 29,42.5)measurement location the background secs . 2 and 3 provide the mathematical framework for the innovations of this paper .up to this point , the deim architecture for parametrized pdes provides good interpolation points for the rom method .our goal is to make the interpolation points optimal or nearly so .unfortunately , non - convex problems such as this are extremely difficult to optimize , leading to the consideration of genetic algorithms , which are a subset of evolutionary algorithms , for determining near optimal interpolation points .the genetic algorithm principal is quite simple : given a set of feasible trial solutions ( either constrained or unconstrained ) , an objective ( fitness ) function is evaluated .the idea is to keep those solutions that give the minimal value of the objective function and mutate them in order to try and do even better .mutation in our context involves randomly shifting the locations of the interpolation points .beneficial mutations that give a better minimization , such as good classification and minimal reconstruction error , are kept while those that perform poorly are discarded .the process is repeated through a number of iterations , or _, with the idea that better and better fitness function values are generated via the mutation process .more precisely , the genetic algorithm can be framed as the constrained optimization problem with the objective function where is a measurement matrix used for interpolation .suppose that mutations , as illustrated in fig .[ fig : ga ] , are given for the matrix so that thus solutions are evaluated and compared with each other in order to see which of the solutions generate the smallest objective function since our goal is to minimize it .we can order the guesses so that the first gives the smallest values of . 
arranging our data, we then have since the first solutions are the best , these are kept in the next generation . in addition , we now generate new trial solutions that are randomly mutated from the best solutions .this process is repeated through a finite number of iterations with the hope that convergence to the optimal , or near - optimal , solution is achieved .table [ table : alg2 ] shows the algorithm structure particular to our application . in our simulations , mutations are produced and are kept for further mutation . the number of generations is not fixed , but we find that even with , significant improvement in reconstruction error can be achieved .as we will show , the algorithm provides an efficient and highly effective method for optimizing the interpolation locations , even in a potentially online fashion .a disadvantage of the method is that there are no theorems guaranteeing that the iterations will converge to the optimal solution , and there are many reasons the genetic search can fail .however , we are using it here in a very specific fashion .specifically , our initial measurement matrix is already quite good for classification and reconstruction purposes .thus the algorithm starts close to the optimal solution .the goal is then to further refine the interpolation points so as to potentially cut down on the reconstruction and classification error .the limited scope of the algorithm mitigates many of the standard pitfalls of the genetic algorithm . construct initial measurement matrix & + perturb measurements and classify & + keep matrices with correct classification & + save ten best measurement matrices & + repeat steps for generations & + randomly choose one of ten best and repeat & +two models help illustrate the principles and success of the genetic search algorithm coupled with deim . in the first example , only three interpolation points are necessary for classification and reconstruction .moreover , for this problem , a brute force search optimization can be performed to rigorously identify the best possible interpolation points .this allows us to compare the method advocated to a ground truth model . in the second example , the classical problem of flow around a cylinder is considered .this model has been ubiquitous in the roms community for demonstrating new dimensionality - reduction techniques .the ginzburg - landau ( gl ) equation is on of the canonical models of applied mathematics and mathematical physics as it manifests a wide range dynamical behaviors .it is widely used in the study of pattern forming systems , bifurcation theory and dynamical systems .its appeal stems from its widespread use in the sciences : modeling phenomena as diverse as condensed matter physics to biological waves .the particular variant considered here is the cubic - quintic gl with fourth - order diffusion : where is a complex valued function of space and time . under discretization of the spatial variable, becomes a vector with components , i.e. with .sarg_sc_cqgle ( 16,29 ) ( 84,29 ) ( 16,9.5)0 0.5 1.0 1.5 2.0 2.5 ( 38,30)interpolation locations ( 5,70)(a ) ( 5,30)(b ) ( 8,65) ( 29,50) ( 7,58) ( 27,67) ( 56,67) ( 85,67) ( 2,44) ( 8.5,46.5) ( 8.5,39.5) ( 15,37)mode number ( 11,26)0.3 ( 11,12)0.0 ( 9,21) ( 78,8) an efficient and exponentially accurate numerical solution to ( [ eq : gl ] ) can be found using standard spectral methods . 
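One possible realization of such a spectral integration, spelled out in the next paragraph, is sketched below. The coefficients of the cubic-quintic Ginzburg-Landau-type equation used here are placeholders and do not correspond to the regimes of Table [table:vals], and a fixed-step integrating-factor Runge-Kutta scheme is used instead of the adaptive fourth-order method of the paper; the grid size and domain length are likewise illustrative assumptions.

```python
import numpy as np

# Illustrative coefficients for a cubic-quintic Ginzburg-Landau-type equation,
#   u_t = d2 u_xx + d4 u_xxxx + c3 |u|^2 u + c5 |u|^4 u + g u,
# with complex u(x, t); these values are placeholders, not the regimes of Table [table:vals].
d2, d4 = 0.5 + 0.5j, -0.02 + 0.0j
c3, c5 = 1.0 + 0.1j, -0.1 - 0.05j
g = -0.1

n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
lin = -d2 * k ** 2 + d4 * k ** 4 + g          # linear operator in Fourier space


def nonlinear(u_hat):
    """Nonlinear terms evaluated pointwise in physical space, returned in Fourier space."""
    u = np.fft.ifft(u_hat)
    return np.fft.fft(c3 * np.abs(u) ** 2 * u + c5 * np.abs(u) ** 4 * u)


def step(u_hat, dt):
    """One RK4 step with an integrating factor handling the stiff linear part exactly."""
    E = np.exp(lin * dt / 2.0)
    k1 = dt * nonlinear(u_hat)
    k2 = dt * nonlinear(E * (u_hat + k1 / 2.0))
    k3 = dt * nonlinear(E * u_hat + k2 / 2.0)
    k4 = dt * nonlinear(E * E * u_hat + E * k3)
    return E * E * u_hat + (E * E * k1 + 2.0 * E * (k2 + k3) + k4) / 6.0


u_hat = np.fft.fft(1.0 / np.cosh(x) + 0j)     # localized initial condition
dt = 0.01
snapshots = []
for i in range(500):                          # integrate to t = 5, storing snapshots for POD/DEIM
    u_hat = step(u_hat, dt)
    if i % 10 == 0:
        snapshots.append(np.fft.ifft(u_hat))
print(np.max(np.abs(snapshots[-1])))
```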
specifically , the equation is solved by fourier transforming in the spatial dimension and then time - stepping with an adaptive 4th - order runge - kutta method . the extent of the spatial domain is $ ] with discretized points . importantly , in what follows the interpolation indices are dictated by their position away from the center of the computational domain .the center of the domain is at which is the 513th point in the domain .the interpolation indices demonstrated are relative to this center point ..values of the parameters from equation ( [ eq : gl ] ) that lead to six distinct dynamical regimes . to exemplify our algorithm , the first, third and fifth regimes will be discussed in this paper . [ cols="^,^,^,^,^,^,^,^",options="header " , ] to generate a variety of dynamical regimes , the parameters of the cubic - quintic gl are tuned to a variety of unique dynamical regimes .the unique parameter regime considered are denoted by the parameter which indicates the specific values chosen .table [ table : vals ] shows six different parameter regimes that have unique low - dimensional attractors as described in the table .it has been shown in previous work that only three interpolation points are necessary for classification of the dynamical state , state reconstruction and future state prediction .this previous work also explored how to construct the sampling matrix from the deim algorithm and its multiple dynamical state .cqgle_bflocserror ( 55,74 ) ( 0.5,74 ) ( 50,74 ) ( 97,74 ) ( 75,74 ) ( 70,79)error ( 9,79)interpolation locations ( 0,79)(a ) ( 55,79)(b ) ( -4,-6) ( -2,-3 ) ( 13,-4)spatial grid point ( -5,15 ) we will execute the genetic algorithm outlined in table [ table : alg2 ] for improving the sampling matrix initially determined from the algorithm in . before doing so , we consider a brute force search of the best possible three measurement locations based upon their ability to classify the correct dynamical regime and minimize reconstruction error . although generally this is an -hard problem , the limited number of sensors and small number of potential locations for sensors allow us to do an exhaustive search for the best interpolation locations .the brute force search first selects all indices triplets ( selected from interpolation points 0 to 33 as suggested by ) that correctly classify the dynamical regimes in the absence of noisy measurements . from this subset, white noise is added to the measurements and 400 rounds of classification are performed . only the measurement triplets giving above 95% accuracy for the classification of each dynamical regime are retained .the retained triplets are then sorted by the reconstruction error .figure [ fig : gl1 ] shows the triplet interpolation points retained from the exhaustive search with the classification criteria specified and the position of the interpolation points along with the reconstruction error .the deim algorithm proposed in produces interpolation points with reconstruction errors nearly an order of magnitude larger than those produced from the exhaustive search .our objective is to use a genetic algorithm to reduce our error by this order of magnitude and produce interpolation points consistent with some of the best interpolation points displayed in fig .[ fig : gl1 ] . the brute for search produces a number of interpolation tripletswhose reconstruction accuracy are quite similar . 
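The brute-force triplet search just described might look roughly as follows. It assumes one local POD library per dynamical regime and labeled snapshots, classifies by the library whose sampled modes explain the three noisy measurements with the smallest least-squares residual (a stand-in for, not necessarily identical to, the classification scheme of the earlier work), and reuses the reconstruction_error helper sketched earlier.

```python
from itertools import combinations
import numpy as np

def classify(y, libraries, idx):
    """Assign sparse measurements y (taken at rows idx) to the library whose
    sampled POD modes fit them with the smallest residual."""
    resid = [np.linalg.norm(U[idx, :] @ np.linalg.lstsq(U[idx, :], y,
                                                        rcond=None)[0] - y)
             for U in libraries]
    return int(np.argmin(resid))

def brute_force_triplets(libraries, snapshots, labels, n_candidates=34,
                         noise=0.1, n_trials=400, acc_min=0.95, seed=1):
    """Exhaustive search over index triplets drawn from the first n_candidates
    grid points: keep triplets whose noisy classification accuracy is at least
    acc_min, then rank the survivors by reconstruction error."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    kept = []
    for triplet in combinations(range(n_candidates), 3):
        idx = np.array(triplet)
        hits = 0
        for _ in range(n_trials):
            j = rng.integers(snapshots.shape[1])
            y = snapshots[idx, j] + noise * rng.standard_normal(3)
            hits += classify(y, libraries, idx) == labels[j]
        if hits / n_trials >= acc_min:
            err = np.mean([reconstruction_error(U, snapshots[:, labels == k], idx)
                           for k, U in enumerate(libraries)])
            kept.append((float(err), triplet))
    return sorted(kept)          # lowest reconstruction error first
```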
clearly displayed in the graphis the clustering of the interpolation points around critical spatial regions .a histogram of the first ( blue ) , second ( magenta ) and third ( green ) interpolation points is shown in fig .[ fig : gl2](a ) .the first two interpolation points have a narrow distribution around the 4th-8th interpolation points and 12th-16th interpolation points respectively .the third interpolation point is more diffusely spread across a spatial region with improvements demonstrated for interpolation points further to the right in fig .[ fig : gl1](a ) .this histogram provides critical information about sensor and interpolation point locations . of noteis the fact that the deim algorithm always picks the maximum of the first pod mode as an interpolation location .this would correspond to a measurement at .however , none of the candidate triplets retained from a brute force search consider this interpolation point to be important .in fact , the interpolation points starting from the second iteration of the deim algorithm are what seem to be important according to the brute force search .this leads us to conjecture that we should initially use the triplet pair from the 2nd-4th deim points rather than the 1st-3rd deim points .we call these the deim+1 interpolation points as we shift our measurement indices to the start after the first iteration of deim. cqgle_bf_hist ( 17,59)(a ) ( 25,-1)interpolation locations ( 12,2)1 ( 87,2)33 ( 4,27 ) cqgle_ga_both ( 17,59)(b ) ( 5,50) ( 35,0)generation # the genetic algorithm search can now be enacted from both the deim locations computed in and the deim+1 locations suggested by the exhaustive search .figure [ fig : gl2](b ) shows the convergence of the genetic search optimization procedure starting from both these initial measurement matrices . it should be noted that the deim+1 initially begins with approximately half the error of the standard deim algorithm , suggesting it should be used as a default . in this specific scenario ,both initial measurement matrices are modified and converge to the near - optimal solution within only 3 - 5 generations of the search .this is a promising result since the mutations and generations are straightforward to compute and can potentially be done in an online fashion .the benefit from this approach is a reduction of the error by nearly an order of magnitude , making it an attractive scheme .compare ( 8,-3)interpolation locations ( 0,-7) ( 1,-4 ) ( 43.5,61 ) ( 0.5,61 ) ( 39.5,61 ) ( 65,36 ) ( 93,36 ) ( 51,65)(b ) error ( 71,65)(c ) misclassification ( 2,65)(a ) sparse measurement scheme to finish our analysis , we compare the deim architecture against some classic gappy pod and deim methods .figure [ fig : gl3 ] gives an algorithmic comparison of the interpolation point selection of various techniques against the proposed method and the ground truth optimal solution obtained by exhaustive search .both the reconstruction error and classification accuracy are important in selecting the interpolation indices , and both are represented in panels ( b ) and ( c ) of fig . [ fig : gl3 ] .importantly , the method proposed here , which starts with the deim+1 points and does a quick genetic algorithm search produces nearly results that are almost equivalent to the exhaustive search .this is an impressive result given the efficiency of the genetic search and online computation possibilities . 
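For reference, the greedy DEIM selection and the "DEIM+1" shift conjectured above can be sketched as follows; this is a standard DEIM implementation, and the only non-standard piece is discarding the first index.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for a POD basis U (n x r): the first index
    is the maximum of the leading mode, and each subsequent index is the
    location of the largest interpolation residual of the next mode."""
    n, r = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, r):
        c = np.linalg.solve(U[idx, :l], U[idx, l])   # interpolate the new mode
        residual = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(residual))))
    return np.array(idx)

def deim_plus_one(U, p):
    """'DEIM+1' heuristic: run DEIM for p + 1 modes and discard the first
    point (the maximum of the first POD mode), keeping points 2 .. p+1."""
    return deim_indices(U[:, :p + 1])[1:]
```

For the three-point case discussed here, deim_plus_one(U, 3) returns the 2nd-4th DEIM points that serve as the seed of the genetic search.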
andeven if one is not interested in executing the genetic search , the deim+1 points used for provide nearly double the performance ( in terms of accuracy ) versus deim .the previous example provides an excellent proof - of - concept given that we could compute a ground truth optimal solution .the results suggest that we should start with the deim measurement matrix and execute the genetic algorithm from there .we apply this method on the classic problem of flow around a cylinder .this problem is also well understood and has already been the subject of studies concerning sparse spatial measurements .specifically , it is known that for low to moderate reynolds numbers , the dynamics is spatially low - dimensional and pod approaches have been successful in quantifying the dynamics .the reynolds number , , plays the role of the bifurcation parameter in ( [ eq : complex ] ) , i.e. it is a parametrized dynamical system .cylinder_modes04 ( 2,85) ( 59,85) ( 2,39) ( 59,39) the data we consider comes from numerical simulations of the incompressible navier - stokes equation : & & + uu+p-^2u=0 + & & u=0 [ eq : incompresns ] where represents the 2d velocity , and the corresponding pressure field . the boundary condition are as follows : ( i ) constant flow of at , i.e. , the entry of the channel , ( ii ) constant pressure of at , i.e. , the end of the channel , and ( iii ) neumann boundary conditions , i.e. on the boundary of the channel and the cylinder ( centered at and of radius unity ) .we consider the fluid flow for reynolds number and perform an svd on the data matrix in order to extract pod modes .the rapid decay of singular values allows us to use a small number of pod modes to describe the fluid flow and build local roms .the pod modes retained for each reynolds number is shown in fig .[ fig : cyl1 ] .these modes are projected on cylindrical coordinates to better demonstrate the structure of the pressure field generated on the cylinder .the pod modes can be used to construct a deim interpolation matrix illustrated in fig .[ fig : ga ] .the deim interpolation points already provide a good set of interpolation points for classification and reconstruction of the solution .however , the genetic algorithm advocated in this work can be used to adjust the interpolation points and achieve both better classification performance and improved reconstructions . 
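A minimal sketch of the POD extraction step described above; the 0.999 energy threshold and the per-Reynolds-number library construction in the comment are illustrative assumptions.

```python
import numpy as np

def pod_basis(X, energy=0.999):
    """Truncated POD basis of a snapshot matrix X (n x m): keep the fewest
    left singular vectors whose cumulative squared singular values capture
    the requested fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), energy)) + 1
    return U[:, :r], s[:r]

# one local library per Reynolds number, as in the parametrized-cylinder setup
# libraries = [pod_basis(X_re)[0] for X_re in snapshot_sets]
```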
in the cubic - quintic gl equation ,the error was improved by nearly an order of magnitude over the standard deim approach .for the flow around the cylinder , the error is also improved from the deim algorithm , quickly reducing the error with generations and converging to the nearly optimal interpolation points within generations .given the limited number of interpolation points , the genetic search can be computed in an online fashion even for this two - dimensional fluid flow problem .cylinder_10sens ( 10,45) ( 55,45) ( 10,3) ( 55,3) ( 29,45) ( 28,23) ( 79,45) ( 77,23) ( 11,12)0 ( 20,20) ( 33,12) ( 20,3) ( 26,-1)time ( 22,0 ) ( 72,5 ) ( 73,-1)generation # ( 98,18 ) ( 98,5)-1 ( 98,40)0.5 figure [ fig : cyl3 ] is a composite figure showing the pressure field evolution in time along a the cylinder .the heat map shows the dominant , low - dimensional pattern of activity that is used for generating pod modes .overlaid on this are the best sensor / interpolation locations at each generation of the genetic algorithm scheme for 10 interpolation points over 7 generations of the search .note the placement of the interpolation points around the cylinder .specifically , as the number of generations increases , the interpolation points move to better sampling positions , reducing the error in the rom .the convergence of the error across 10 generations of the algorithm is shown in fig .[ fig : cyl4 ] along with the final placement of the interpolation points .the near optimal interpolation points are not trivially found .overall , the deim architecture with genetic algorithm search reduces the error by anywhere between a factor of two and an order of magnitude , making it attractive for online error reduction and rom performance enhancement .[ width=0.55]cylinder_ga10 ( 16,63) [ width=0.55]cylinder_sensors10 ( 16,63) are enabled by two critical steps : ( i ) the construction of a low - rank subspace where the dynamics can be accurately projected , and ( ii ) a sparse sampling method that allows for an interpolation - based projection to provide a nearly optimal subspace approximation to the nonlinear term without the expense of orthogonal projection .innovations to improve these two aims can improve the outlook of scientific computing methods for modern , high - dimensional simulations that are rapidly approaching exascale levels .these methods also hold promise for real - time control of complex systems , such as turbulence .this work has focused on improving the sparse sampling method commonly used in the literature . in partnership with the deim infrastructure, a genetic algorithm was demonstrated to determine nearly optimal sampling locations for producing a subspace approximation of the nonlinear term without the expense of orthogonal projection .the algorithm can be executed in a potentially online manner , improving the error by up to an order - of - magnitude in the examples demonstrated here . in our complex cubic - quintic ginzburg - landau equation example , for a fixed number of interpolation points , the first deim interpolation points are computed and the first point is discarded .this deim+1 sampling matrix alone can reduce the error by a factor of two before starting the genetic algorithm search . in general , genetic algorithmsare not ideal for optimization since they rarely have guarantees on convergence and have many potential pitfalls . 
in our case ,the deim starting point for the interpolation point selection algorithm is already close to the true optimum .thus the genetic algorithm is not searching blindly in a high - dimensional fitness space .rather , the algorithm aims to simply make small adjustments and refinements to the sampling matrix in order to maximize the performance of the nonlinear interpolation approximation . in this scenario ,many of the commonly observed genetic algorithm failures are of little concern .the method is shown to reduce the error by a substantial amount within only one or two generations , thus making it attractive for implementation in real large - scale simulations where accuracy of the solution may have significant impact on the total computational cost . in comparison to many other sparse sampling strategies used in the literature, it out performs them by a significant amount both in terms of accuracy and ability to classify the dynamical regime .indeed , the algorithm refines the sampling matrix to be nearly optimal .j. n. kutz would like to acknowledge support from the air force office of scientific research ( fa9550 - 15 - 1 - 0385 ) .z. bai , t. wimalajeewa , z. berger , g. wang , m. glauser , and p. k. varshney , low - dimensional approach for reconstruction of airfoil data via compressive sensing , " _ aiaa journal _ , * 53*(4):920933 , ( 2014 ) .s. l. brunton , j. l. proctor , and j. n. kutz , discovering governing equations from data by sparse identification of nonlinear dynamical systems , " _ proceedings of the national academy of sciences _ , * 113*(15):39323937 , ( 2016 ) .m. barrault , y. maday , n. c. nguyen , and a. t. patera , an empirical interpolation method : application to efficient reduced - basis discretization of partial differential equations , " c. r. math .paris , 339 ( 2004 ) , pp .667 - 672 .s. l. brunton , j. h. tu , i. bright , j. n. kutz , compressive sensing and low - rank libraries for classification of bifurcation regimes in nonlinear dynamical systems , " siam j. app .sys . , * 13*(4 ) : 17161732 , 2014 .e. kaiser , b. r. noack , l. cordier , a. spohn , m. segond , m. abel , g. daviller , j. osth , s. krajnovic and r. k. niven , cluster - based reduced - order modelling of a mixing layer , " j. fluid mech .* 754 * , 365414 , ( 2014 ) .d. amsallem , m. j. zahr and k. washabaugh , fast local reduced basis updates for the efficient reduction of nonlinear systems with hyper - reduction , " advances in computational mathematics , february 2015 , doi 10.1007/s10444 - 015 - 9409 - 0 a. e. deane , i. g. kevrekidis , g. e. karniadakis , and s. a. orszag , low - dimensional models for complex geometry flows : application to grooved channels and circular cylinders , " phys .fluids , * 3*:2337 ( 1991 ) .
a genetic algorithm procedure is demonstrated that refines the selection of interpolation points of the discrete empirical interpolation method ( deim ) when used for constructing reduced order models of time - dependent and/or parametrized nonlinear partial differential equations ( pdes ) with proper orthogonal decomposition . the method achieves nearly optimal interpolation points with only a few generations of the search , making it potentially useful for _ online _ refinement of the sparse sampling used to construct a projection of the nonlinear terms . with the genetic algorithm , the interpolation points are optimized to jointly minimize the reconstruction error and enable classification of the dynamical regime . the efficiency of the method is demonstrated on two canonical nonlinear pdes : the cubic - quintic ginzburg - landau equation and the navier - stokes equation for flow around a cylinder . for the former model , the procedure can be compared against the ground - truth optimal interpolation points , showing that the genetic algorithm quickly achieves nearly optimal performance and reduces the reconstruction error by nearly an order of magnitude .

keywords : reduced order modeling , dimensionality reduction , proper orthogonal decomposition , sparse sampling , genetic algorithm , discrete empirical interpolation method

ams subject classifications : 65l02 , 65m02 , 37m05 , 62h25
considerable effort has been expended recently to assess and compare different space time models for forecasting earthquakes in seismically active areas such as southern california .notable among these efforts were the development of the regional earthquake likelihood models ( relm ) project [ ] and its successor , the collaboratory for the study of earthquake predictability ( csep ) [ ] .the relm project was initiated to create a variety of earthquake forecast models for seismic hazard assessment in california . unlike previous projects that addressed the assessment of models for seismic hazard , the relm participants decided to adopt many competing forecasting models and to rigorously and prospectively test their performance in a dedicated testing center [ ] . with the end of the relm project , the forecast models became available and the development of the testing center was done within the scope of csep .many point process models , including multiple variants of the epidemic - type aftershock sequence ( etas ) models of have now been proposed and are part of relm and csep , though the problem of how to compare and evaluate the goodness of fit of such models remains quite open . in relm , a community consensus was reached that all entered models be tested with certain tests , including the number or n - test that compares the total forecasted rate with the observation , the likelihood or l - test that assesses the quality of a forecast in terms of overall likelihood , and the likelihood - ratio or r - test that assesses the relative performance of two forecast models compared with what is expected under one proposed model [ ] . however , over time several drawbacks of these tests were discovered [ ] and the need for more powerful tests became clear .the n - test and l - test simply compare the quantiles of the total numbers of events in each bin or likelihood within each bin to those expected under the given model , and the resulting low - power tests are typically unable to discern significant lack of fit unless the overall rate of the model fits extremely poorly .further , even when the tests do reject a model , they do not typically indicate where or when the model fits poorly , or how it could be improved .meanwhile , the number of proposed spatial temporal models for earthquake occurrences has grown , and the need for discriminating which models fit better than others has become increasingly important .techniques for assessing goodness of fit are needed to pinpoint where existing models may be improved , and residual plots , rather than numerical significance tests , seem preferable for these purposes .this paper proposes a new form of residual analysis for assessing the goodness of fit of spatial point process models .the proposed method compares the normalized observed and expected numbers of points over voronoi cells generated by the observed point pattern .the method is applied here in particular to the examination of a version of the etas model originally proposed by , and its goodness of fit to a sequence of 520 hector mine earthquakes occurring between october 1999 and december 2000 . 
in particular , the voronoi residuals indicate that assumption of a constant background rate in the etas model results in excessive smoothing of the seismicity and significant underprediction of seismicity close to the fault line .residual analysis for a spatial point process is typically performed by partitioning the space on which the process is observed into a regular grid and computing a residual for each pixel .that is , one typically examines aggregated values of a residual process over regular , rectangular grid cells .alternatively , residuals may be defined for every observed point in the pattern , using a metric such as deviance , as suggested in .various types of residual processes were proposed in and discussed in and .the general form of these aggregated residual measures is a standardized difference between the number of points occurring and the number expected according to the fitted model , where the standardization may be performed in various ways .for instance , for pearson residuals , one weights the residual by the reciprocal of the square root of the intensity , in analogy with pearson residuals in the context of linear models . smoothing the residual field using a kernel function instead of simply aggregating over pixels ; in practice , this residual field is typically displayed over a rectangular grid and is essentially equivalent to a kernel smoothing of aggregated pixel residuals . also propose scaling the residuals based on the contribution of each pixel to the total pseudo - loglikelihood of the model , in analogy with score statistics in generalized linear modeling .standardization is important for both residual plots and goodness - of - fit tests , since otherwise plots of the residuals will tend to overemphasize deviations in pixels where the rate is high . behind the term _ pearson residuals _ lies the implication [ see , e.g. , the error bounds in figure 7 of ] that these standardized residuals should be approximately standard normally distributed , so that the squared residuals , or their sum , are distributed approximately according to pearson s -distribution . the excellent treatment of pearson residuals and other scaled residuals by , the thorough discussion of their properties in , their use for formal inference using score and pseudo - score statistics as described in , and the fact that such residuals extend so readily to the case of spatial temporal point processes may suggest that the problem of residual analysis for such point processes is generally solved . in practice , however , such residuals , when examined over a fixed rectangular grid , tend to have two characteristics that can limit their effectiveness : when the integrated conditional intensity ( i.e. , the number of expected points ) in a pixel is very small , the distribution of the residual for the pixel becomes heavily skewed .positive and negative values of the residual process within a particular cell can cancel each other out . 
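A sketch of these grid-based residuals, with the integrals approximated by a midpoint rule; lam is a fitted intensity function assumed to broadcast over arrays, and the raw and Pearson forms follow the standardized-difference definitions above rather than any particular package's implementation.

```python
import numpy as np

def pixel_residuals(points, lam, x_edges, y_edges, kind="raw"):
    """Raw or Pearson residuals of a fitted intensity lam(x, y) over a
    rectangular pixel grid."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=[x_edges, y_edges])
    xc = 0.5 * (x_edges[:-1] + x_edges[1:])
    yc = 0.5 * (y_edges[:-1] + y_edges[1:])
    area = np.outer(np.diff(x_edges), np.diff(y_edges))
    lam_grid = lam(xc[:, None], yc[None, :])
    expected = lam_grid * area                      # midpoint-rule integral
    if kind == "raw":
        return counts - expected
    # Pearson: weight each point by 1/sqrt(lam) and subtract int sqrt(lam)
    w_counts = np.zeros_like(expected)
    for x, y in points:
        i = np.searchsorted(x_edges, x, side="right") - 1
        j = np.searchsorted(y_edges, y, side="right") - 1
        if 0 <= i < w_counts.shape[0] and 0 <= j < w_counts.shape[1]:
            w_counts[i, j] += 1.0 / np.sqrt(lam(x, y))
    return w_counts - np.sqrt(lam_grid) * area
```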
since pearson residuals are standardized to have mean zero and unit ( or approximately unit ) variance under the null hypothesis that the modeled conditional intensity is correct[ see ] , one may inquire whether the skew of these residuals is indeed problematic .consider , for instance , the case of a planar poisson process where the estimate of the intensity is exactly correct , that is , at all locations , and where one elects to use pearson residuals on pixels .suppose that there are several pixels where the integral of over the pixel is roughly .given many of these pixels , it is not unlikely that at least one of them will contain a point of the process . in such pixels ,the raw residual will be , and the standard deviation of the number of points in the pixel is , so the pearson residual is .this may yield the following effects : ( 1 ) such pearson residuals may overwhelm the others in a visual inspection , rendering a plot of the pearson residuals largely useless in terms of evaluating the quality of the fit of the model , and ( 2 ) conventional tests based on the normal approximation may have grossly incorrect -values , and will commonly reject the null model even when it is correct .even if one adjusts for the nonnormality of the residual and instead uses exact -values based on the poisson distribution , such a test applied to any such pixel containing a point will still reject the model at the significance level of .these situations arise in many applications , unfortunately .for example , in modeling earthquake occurrences , typically the modeled conditional intensity is close to zero far away from known faults or previous seismicity , and in the case of modeling wildfires , one may have a modeled conditional intensity close to zero in areas far from human use or frequent lightning , or with vegetation types that do not readily support much wildfire activity [ e.g. , ] .these challenges are a result of characteristic i above , and one straightforward solution would be to enlarge the pixel size such that the expected count in each cell is higher . 
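For concreteness, the arithmetic behind this skew effect, with an illustrative integrated intensity of 10^-4 in a pixel that happens to contain one point (the specific value is an assumption):

```latex
\[
\int_{B}\hat\lambda\,d\mu = 10^{-4}, \qquad
r_{\mathrm{raw}} = N(B) - \int_{B}\hat\lambda\,d\mu \approx 1, \qquad
\operatorname{sd}[N(B)] = \sqrt{10^{-4}} = 10^{-2},
\]
\[
r_{\mathrm{Pearson}} \approx \frac{1}{\sqrt{10^{-4}}} = 100,
\qquad \text{yet}\quad \Pr\{N(B)\ge 1\} = 1 - e^{-10^{-4}} \approx 10^{-4}.
\]
```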
while this would be effective in a homogeneous setting , in the case of an inhomogeneous process it is likely that this would induce a different problem : cells that are so large that even gross misspecification within a cell may be overlooked , and thus the residuals will have low power .this is the problem of characteristic ii .when a regular rectangular grid is used to compute residuals for a highly inhomogenous process , it is generally impossible to avoid either highly skewed residual distributions or residuals with very low power .these problems have been noted by previous authors , though the important question of how to determine appropriate pixel sizes remains open [ ] .note that , in addition to pearson residuals and their variants , there are many other goodness - of - fit assessment techniques for spatial and spatial temporal point processes [ ] .examples include rescaled residuals [ ] and superthinned residuals [ ] , which involve transforming the observed points to form a new point process that is homogeneous poisson under the null hypothesis that the proposed model used in the transformation is correct .there are also functional summaries , such as the weighted version [ ] of ripley s -function [ ] , where each point is weighted according to the inverse of its modeled conditional intensity so that the resulting summary has conveniently simple properties under the null hypothesis that the modeled conditional intensity is correct , as well as other similarly weighted numerical and functional summaries such as the weighted r / s statistic and weighted correlation integral [ ] . as noted in , all of these methods can have serious deficiencies compared to the easily interpretable residual diagrams , especially when it comes to indicating spatial or spatial temporal locations where a model may be improved .this paper proposes a new form of residual diagram based on the voronoi cells generated by tessellating the observed point pattern .the resulting partition obviates i and ii above by being adaptive to the inhomogeneity of the process and generating residuals that have an average expected count of 1 under the null hypothesis .for an to point processes and their intensity functions , the reader is directed to . 
throughout this paperwe are assuming that the point processes are simple and that the observation region is a complete separable metric space equipped with lebesgue measure , .note that we are not emphasizing the distinction between conditional and papangelou intensities , as the methods and results here are essentially equivalent for spatial and spatial temporal point processes .this paper is organized as follows .section [ sec2 ] describes voronoi residuals and discusses their properties .section [ sec3 ] demonstrates the utility of voronoi residual plots .the simulations shown in section [ sec4 ] demonstrate the advantages of voronoi residuals over conventional pixel - based residuals in terms of statistical power .in section [ sec5 ] we apply the proposed voronoi residuals to examine the fit of the etas model with uniform background rate to a sequence of hector mine earthquakes from october 1999 to december 2000 , and show that despite generally good agreement between the model and data , the etas model with uniform background rate appears to slightly but systematically underpredict seismicity along the fault line and to overpredict seismicity in certain locations along the periphery of the fault line , especially at the location 35 miles east of barstow , ca .[ secvoronoiresiduals ] a voronoi tessellation is a partition of the metric space on which a point process is defined into convex polygons , or _voronoi cells_. specifically , given a spatial or spatial temporal point pattern , one may define its corresponding _ voronoi tessellation _ as follows : for each point of the point process , its corresponding cell is the region consisting of all locations which are closer to than to any other point of .the voronoi tessellation is the collection of such cells .see , for example , for a thorough treatment of voronoi tessellations and their properties . given a model for the conditional intensity of a spatial or space time point process , one may construct residuals simply by evaluating the residual process over cells rather than over rectangular pixels , where the cells comprise the voronoi tessellation of the observed spatial or spatial temporal point pattern .we will refer to such residuals as _voronoi residuals_. an immediate advantage of voronoi residuals compared to conventional pixel - based methods is that the partition is entirely automatic and spatially adaptive .this leads to residuals with a distribution that tends to be far less skewed than pixel - based methods . indeed , since each voronoi cell has exactly one point inside it by construction , the raw voronoi residual for cell is given by \\[-8pt]\nonumber & = & 1 - { \vert}c_i{\vert}\bar\lambda,\end{aligned}\ ] ] where denotes the mean of the proposed model , , over .this raw residual can be scaled in various ways , as is well addressed by .note that when is a homogeneous poisson process , the sizes of the cells are approximately gamma distributed .indeed , for a homogeneous poisson process , the expected area of a voronoi cell is equal to the reciprocal of the intensity of the process [ ] , and simulation studies have shown that the area of a typical voronoi cell is approximately gamma distributed [ ] ; these properties continue to hold approximately in the inhomogeneous case provided that the conditional intensity is approximately constant near the location in question [ ] . 
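A minimal sketch of the Voronoi residual computation, using scipy's tessellation and the shoelace formula for the cell areas; evaluating the fitted intensity at the generating point is a cheap stand-in for the cell average (a Monte Carlo average over the cell can be substituted), and unbounded boundary cells are skipped, as in the plots discussed below.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_raw_residuals(points, lam_bar):
    """Raw Voronoi residuals 1 - |C_i| * lambda_bar_i for an observed pattern.

    points  : (n, 2) observed locations
    lam_bar : callable giving the fitted intensity, evaluated here at the
              generating point of each cell
    """
    vor = Voronoi(points)
    residuals = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:        # open cell on the boundary
            continue
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]               # shoelace formula for |C_i|
        area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        residuals[i] = 1.0 - area * lam_bar(points[i, 0], points[i, 1])
    return residuals
```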
.the middle panels show results at location where ; the right panels show results at location where .the distribution ( [ eq2 ] ) is overlaid for the top middle and top right plots . ]the raw voronoi residual in ( [ eq1 ] ) will therefore tend to be distributed approximately like a modified gamma random variable .more specifically , the second term , , referred to in the stochastic geometry literature as the reduced area , is well approximated by a two - parameter gamma distribution with a rate of 3.569 and a shape of 3.569 [ ] .the distribution of the raw residuals is therefore approximated by by contrast , for pixels over which the integrated conditional intensity is close to zero , the conventional raw residuals are approximately bernoulli distributed .the exact distributions of the voronoi residuals are generally quite intractable due to the fact that the cells themselves are random , but approximations can be made using simulation .consider the point process defined by the intensity function on the subset \times[-1,1] ] , along with its corresponding voronoi tessellation . in the voronoi residual plot in the top right panel of figure [ correctmodel ] , each tile is shaded according to the value of the residual under the distribution function of the modified gamma distribution ( [ eq2 ] ) .the resulting -values are then mapped to the color scale using an inverse normal transformation .thus , brightly shaded red areas indicate unusually low residuals , corresponding to areas where more points were expected than observed ( overprediction ) , and brightly shaded blue indicates unusually high residuals , corresponding to areas of underprediction of seismicity . the tiles in the voronoi residual plot in figure [ correctmodel ] range from light to moderate hues , representing residuals that are within the range expected under the reference distribution .similarly , the histogram and quantile plot of the voronoi residuals demonstrate that the distribution of the residuals is well approximated by distribution ( [ eq2 ] ) . in order to evaluate the ability of voronoi residuals to detect model misspecification, simulations were obtained using a generating model and then residuals were computed based on a different proposed model . the top left panel of figure [ overestimating ]displays a realization of a poisson process with intensity .the proposed model assumes a constant intensity across the space , . because of the lack of points near the origin , the tiles near the origin are larger than expected under the proposed model and , hence , for such a cell near the origin , the integral exceeds 1 , leading to negative residuals of large absolute value .these unusually large negative residuals are evident in the voronoi residual plot and clearly highlight the region where the proposed model overpredicts the intensity of the process .these residuals are also clear outliers in the left tail of the reference distribution of the residuals and , as a result , one sees deviations from the identity line in the quantile quantile plot in figure [ overestimating ] .[ secpower ] we now consider the manner in which the statistical power of residual analysis using a voronoi partition differs from that of a pixel partition . in the context of a residual plot , a procedure with low power would generate what appears to be a structureless residual plot even when the model is misspecified . 
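The residual-plot scoring described above can be sketched as follows: raw Voronoi residuals are mapped to lower-tail probabilities under the approximate reference distribution (one minus a Gamma variable with shape 3.569 and rate 3.569) and then to normal scores for the color scale.

```python
import numpy as np
from scipy.stats import gamma, norm

SHAPE = 3.569
RATE = 3.569                     # scipy parametrizes by scale = 1 / rate

def voronoi_residual_scores(raw_residuals):
    """Map raw residuals r = 1 - X, with X approximately Gamma(3.569, rate
    3.569) for the reduced cell area, to lower-tail p-values and then to
    normal scores for plotting."""
    r = np.asarray(raw_residuals, dtype=float)
    # P(R <= r) = P(X >= 1 - r): the survival function evaluated at 1 - r
    p = gamma.sf(1.0 - r, a=SHAPE, scale=1.0 / RATE)
    z = norm.ppf(np.clip(p, 1e-12, 1 - 1e-12))      # inverse-normal colour scale
    return p, z
```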
to allow for an unambiguous comparison ,here we focus on power in the formal testing setting : the probability that a misspecified model will be rejected at a given confidence level .[ secpit ] as was discussed in section [ secvoronoiresiduals ] , the distribution of voronoi residuals under the null hypothesis is well approximated by a modified gamma distribution , while the distribution of pixel residuals is that of a poisson distributed variable with intensity , for pixel , centered to have mean zero . to establish a basis to compare the consistency between proposed models and data for these two methods, we utilize the probability integral transform ( pit ) [ ] .the pit was proposed to evaluate how well a probabilistic forecast is calibrated by assessing the distribution of the values that the observations take under the cumulative distribution function of the proposed model .if the observations are a random draw from that model , a histogram of the pit values should appear to be standard uniform .one condition for the uniformity of the pit values is that the proposed model be continuous .this holds for voronoi residuals , which are approximately gamma distributed under the null hypothesis , but not for the poisson counts from pixel residuals . for such discrete random variables ,randomized versions of the pit have been proposed . using the formulation in ,if is the distribution function of the proposed discrete model , is an observed random count and is standard uniform and independent of , then is standard uniform , where the method can be thought of as transforming a discrete c.d.f . into a continuous c.d.f . by the addition of uniform random noise .the pit , both standard and randomized , provides a formal basis for testing two competing residual methods . for a given proposed model and a given realization of points , the histogram of pit values , , for each residual method should appear standard uniform if the proposed model is the same as the generating model .the sensitivity of the histogram to misspecifications in the proposed model reflects the statistical power of the procedure .there are many test statistics that could be used to evaluate the goodness of fit of the standard uniform distribution to the pit values . herewe choose to use the kolmogorov smirnov ( k s ) statistic [ ] , where is the empirical c.d.f . of the sample and is the c.d.f . of the standard uniform . since the voronoi residuals of a given realization are not independent of one another , we use critical values from a simulated reference distribution instead of the limiting distribution of the statistic .two models were considered for the simulation study .the first was a homogeneous poisson model on the unit square with intensity on .the second was an inhomogeneous poisson model with intensity on , where , and .the constant is a scaling constant chosen so that the parenthetical term integrates to one .the result is a function that is symmetric about and , reaches a maximum at , integrates to 300 regardless of the choice of , and is reasonably flat along the boundary box .this final characteristic should allow the alternative approach to the boundary problem , described below , to be relatively unbiased .additionally , it presents inhomogeneity similar to what might be expected in an earthquake setting .the procedure for the inhomogeneous simulation was as follows .a point pattern was sampled from the true generating model , ( [ modelbeta ] ) with . 
for a given proposed model , ( [ modelbeta ] ) with , and a fixed number of pixels on the unit square ^ 2 ] .for the homogeneous model , figure [ hpow ] shows the resulting estimated power curves for several pixel partitions , including .the power of each method was computed for a series of proposed models , .the best performance was by the method that used the voronoi partition , which shows high power throughout the range of misspecification .for the pixel partitions , had the highest power , but as the number of partitions increases , the k s test loses its power to detect misspecification .this trend can be attributed to characteristic i : when the space is divided into many small cells , the integrated conditional intensity is very small and the distribution of the residuals is highly skewed . as a consequence ,the majority of counts are zeros , so the majority of the pit values are being generating by [ equation ( [ eq4 ] ) ] and , thus , the resulting residuals have little power to detect model misspecification . the most powerful test in this homogeneous setting is in fact one with no partitioning , which is equivalent to the number - test from the earthquake forecasting literature [ ] .for the inhomogeneous case , power curves were computed for a series of proposed models of the form ( [ modelbeta ] ) , with .the results are shown in figure [ ih1pow ] .the power curve for the voronoi method presents good overall performance , particularly when the model is substantially misspecified .the voronoi residuals are not ideally powerful for detecting slight misspecification , however , perhaps because the partition itself is random , thus introducing some variation that is difficult to distinguish from a small change in . , where and .the generating model is . ]focusing only on the four pixel methods , the best performance is at pixels .the poor performance of in detecting the large positive misspecification is due to the fact that the model becomes more inhomogeneous as increases , but that inhomogeneity is averaged over cells that are too large ( the problem associated with characteristic ii in section [ sec1 ] ) .meanwhile , the poor overall performance of is due to the same problem that exists in the homogeneous setting , where the pit values are dominated by the random uniform noise . in applications such as earthquake modeling ,the use of pixel methods often results in situations with extremely low intensities in some pixels , similar to the case considered here with , but perhaps even more extreme .for instance , one of the most successful forecasts of california seismicity [ ] estimated rates in each of pixels in a model that estimated a total of only 35.4 earthquakes above m 4.95 over the course of a prediction experiment that lasted from 2006 to 2011 .estimated integrated rates were as low as 0.000007 in some pixels , and 58% of the pixels had integrated rates that were lower than 0.001 .an immediate improvement could be made by aggregating the pixels , but this in turn will average over the strong inhomogeneity along fault lines in the model , which will lower power .for this reason , the voronoi residual method may be better suited to the evaluation of seismicity models , as well as other processes that are thought to be highly inhomogeneous .in this section we apply voronoi residual analysis to the spatial temporal epidemic - type aftershock sequence ( etas ) model of , which has been widely used to describe earthquake catalogs . 
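The PIT and K-S machinery used in this power study can be sketched as follows; the randomized transform applies to the discrete pixel counts, and the same uniform-reference K-S distance applies to Voronoi-residual p-values.

```python
import numpy as np
from scipy.stats import poisson, kstest

def randomized_pit_poisson(counts, expected, seed=2):
    """Randomized PIT for pixel counts: with F the Poisson c.d.f. of the
    proposed model, u = F(x - 1) + v * (F(x) - F(x - 1)), v ~ Uniform(0, 1),
    is standard uniform when the proposed model is correct."""
    rng = np.random.default_rng(seed)
    x = np.asarray(counts)
    mu = np.asarray(expected)
    v = rng.uniform(size=x.shape)
    lo = poisson.cdf(x - 1, mu)                     # equals 0 when x = 0
    hi = poisson.cdf(x, mu)
    return lo + v * (hi - lo)

def ks_distance_from_uniform(pit_values):
    """K-S distance of the PIT values from Uniform(0, 1); critical values
    should come from a simulated reference distribution because residuals
    within one realization are not independent."""
    return kstest(np.ravel(pit_values), "uniform").statistic
```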
according to the etas model of , the conditional intensity be written where is a spatial density on the spatial observation region , and are temporal and spatial coordinates , respectively , is the magnitude of earthquake , and where the triggering function , , may be given by one of several different forms .one form for proposed in is where is the lower magnitude cutoff for the catalog .there is considerable debate in the seismological community about the best method to estimate the spatial background rate [ ] .when modeling larger , regional catalogs , is often estimated by smoothing the largest events in the historical catalog [ ] , and in such cases a very important open question is how ( and how much ) to smooth [ ] . for a single earthquake - aftershock sequence , however ,can one instead simply estimate as constant within a finite , local area , as in ?a prime catalog to investigate these questions is the catalog of california earthquakes including and just after the 1999 hector mine earthquake ( figure [ hector - pattern ] ) .this data set was analyzed previously in , and consists of the origin times , epicentral locations and magnitudes of the 520 earthquakes with magnitude at least , from latitude 34 to 35 , longitude to , from 10/16/1999 to 12/23/2000 , obtained from the southern california seismic network ( scsn ) . the parameters in the model were estimated by maximum likelihood estimation , using the progressive approximation technique described in .for the purpose of this analysis , we focused on the purely spatial aspects of the residuals , and thus integrated over the temporal domain to enable planar visualization of the residuals .the result is a voronoi tessellation of the spatial domain where for tile , for the integral in equation ( [ eq1 ] ) , the estimated conditional intensity function is numerically integrated over the spatial tile and over the entire time domain from 10/16/1999 to 12/23/2000 .we take two approaches to detecting inconsistencies between the etas model and the hector mine catalog : the inspection of plots for signs of spatial structure in the residuals and the evaluation of pit histograms as overall indicators of goodness of fit .figure [ hector - res - plots ] shows residual plots based on both the pixel partition and the voronoi tessellation . as in figures[ correctmodel ] and [ overestimating ] , the magnitude of each residual cell is represented by the value that the residual takes under the distribution function appropriate to that model ( poisson or gamma ) , which is the pit value discussed in section [ secpit ] .the pixel residual plot shows that the etas model estimates a much higher conditional intensity along the fault region running from ( .4 , 33.9 ) to ( .1 , 33.3 ) than was observed in the hector mine sequence .away from the fault , the residuals are less structured , with no indication of model misspecification .the voronoi residual plot shares the same color scale as the pixel plot , but excludes the boundary cells by shading them white .some strong overprediction is apparent in several large cells in the general area of the fault line , but the structure is more nuanced than that found in the pixel plot . 
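Two computational pieces of this application can be sketched: evaluating the ETAS conditional intensity and numerically integrating it over a spatial tile and the full time window. The triggering kernel below (modified-Omori decay in time, power-law decay in space, exponential productivity in magnitude) is one common parametrization and an assumption here, not necessarily the exact form fitted in the paper; the parameter names are illustrative.

```python
import numpy as np
from matplotlib.path import Path

def etas_intensity(t, x, y, history, par, nu):
    """lambda(t, x, y) = mu * nu(x, y) + sum_{t_i < t} g(t - t_i, x - x_i,
    y - y_i; m_i).  history has columns (t_i, x_i, y_i, m_i); nu is the
    spatial background density; par is a dict of ETAS parameters."""
    rate = par["mu"] * nu(x, y)
    for ti, xi, yi, mi in history[history[:, 0] < t]:
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        rate += (par["K0"] * np.exp(par["a"] * (mi - par["m0"]))
                 * (t - ti + par["c"]) ** (-par["p"])
                 * (r2 + par["d"]) ** (-par["q"]))
    return rate

def integrate_over_tile(lam, polygon, t0, t1, n_samples=2000, seed=3):
    """Monte Carlo estimate of the intensity integrated over an irregular
    Voronoi tile and the observation window [t0, t1], by rejection sampling
    in the tile's bounding box."""
    rng = np.random.default_rng(seed)
    polygon = np.asarray(polygon)
    lo, hi = polygon.min(axis=0), polygon.max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 2))
    inside = Path(polygon).contains_points(pts)
    if not inside.any():
        return 0.0
    ts = rng.uniform(t0, t1, size=int(inside.sum()))
    vals = [lam(t, xy[0], xy[1]) for t, xy in zip(ts, pts[inside])]
    tile_area = np.prod(hi - lo) * inside.mean()    # MC estimate of |C_i|
    return float(np.mean(vals)) * tile_area * (t1 - t0)
```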
figure [ hector - res - plot - zoom ] provides an enlarged version of the fault region , showing systematic underprediction along the fault and overprediction on the periphery of the fault .such structure in the residuals indicates that the etas model with uniform background rate may be oversmoothing .this suggests modeling the background rate in equation ( [ eq3 ] ) as inhomogeneous for southern california seismicity , in agreement with who came to a similar conclusion for japanese seismicity .that covers the fault line , which runs from approximately ( .4 , 34.85 ) to ( .25 , 34.4 ) .pit values are transformed to a color scale using the inverse normal transformation and tiles intersecting the boundary of the space are ignored , as the distribution of these tile areas may differ substantially from the gamma distribution . ]this structure is lost when the residuals are visualized using the pixel partition because the over- and underprediction are averaged over the larger fixed cells ( a case of characteristic ii ) .the true intensity in this region is likely highly spatially variable , which makes the spatially adaptive voronoi partition a more appropriate choice . as discussed in section [ secpit ] ,pit values will be uniformly distributed if the fitted model is correct , therefore , pit histograms can be used as a means to assess general goodness of fit [ ] .figure [ hector - pit - hists ] shows the distribution of the randomized pit values resulting from the pixel partition ( left panel ) alongside the pit values from the voronoi partition ( right panel ) .both histograms show deviations from uniformity , suggesting model misspecification .the histogram resulting from the voronoi parition suggests more deviation , however , which is consistent with the finding in section [ secpower ] that this partition is more sensitive to misspecification than the pixel partition .it also suggests that there are areas of strong underprediction as well as overprediction , while the pixel pit values primarily identify the overprediction .the pit histogram is a useful tool to visualize overall goodness of fit , while the voronoi residual plot seems to be more powerful for identifying areas of poor fit .applying voronoi residual analysis to the etas model and the hector mine earthquake sequence suggests model misspecification oversmoothing along the fault that is undetected by other methods .these voronoi residuals may of course be used in tandem with standard , pixel - based residuals , which may in turn be based on a judicious choice of pixel size , or perhaps using a different spatially adaptive grid than the one proposed here .the use of pit values , both in residual plots and histograms , relies upon a readily computable form for , the distribution of residuals under the fitted model . in the case of the voronoi partition , this requires monte carlo integration of the conditional intensity function over the irregular cells . this process can be time consuming if the intensity function is sufficiently inhomogenous or if the number of earthquakes in the catalog is very high .the pit values of the pixel partition are easier to compute and they benefit from a more straightforward interpretation in the residual plot simply because the fixed grid is a more familiar configuration . 
however , because of their improved statistical power , voronoi residuals are more informative and thus worth the additional computation and consideration .the importance of selecting the size of the cell on which to compute a residual is not unique to this pit k s statistic testing environment .the discrepancy measure proposed by is defined on a borel set of a given shape .the author emphasizes the importance of choosing an appropriate size for ( page 835 ) and points out that if the cell is too small or too large , the power will suffer .a related problem arises in the selection of the bandwidth of the kernel used to smooth a residual field [ , section 13 and discussion ] .although we have focused on formal testing at the level of the entire collection of residuals , testing could also be performed at the level of individual cells . for the voronoi partition ,this extension is straightforward and is essentially what is being done informally in the shaded residual plots . for any pixel partition ,such testing may be problematic , as any pixel with an integrated conditional intensity close to zero would contain zero points with more than probability , so any hypothesis test with using a rejection interval would necessarily have a type i error near 1 . generating the partition using a tessellation of the observed pattern has advantages and disadvantages .the advantage is that it is adaptive and requires no input from the user regarding tuning parameters .the disadvantages are that some sampling variability is induced by the random cell areas and that the residuals are dependent , so techniques relying upon an i.i.d .assumption must be used cautiously .a promising future direction is to consider residuals based on a model - based centroidal voronoi tessellation [ ] , which mitigates characteristics i and ii of the pixel method while providing a partition that creates residuals that are independent of one another if the underlying model is poisson .it should also be noted that the standardization methods proposed in may be used with voronoi residuals or instead one may elect to plot deviance residuals [ ] in each of the voronoi cells . in general , our experience suggests that the standardization chosen for the residuals seems far less critical than the choice of grid .the results seem roughly analogous to kernel density estimation , where the selection of a kernel function is far less critical than the choice of bandwidth governing its range .we thank the reviewer and associate editor for helpful comments that significantly improved this paper .
many point process models have been proposed for describing and forecasting earthquake occurrences in seismically active zones such as california , but the problem of how best to compare and evaluate the goodness of fit of such models remains open . existing techniques typically suffer from low power , especially when used for models with very volatile conditional intensities such as those used to describe earthquake clusters . this paper proposes a new residual analysis method for spatial or spatial temporal point processes based on inspecting the differences between the modeled conditional intensity and the observed number of points over the voronoi cells generated by the observations . the resulting residuals can be used to construct diagnostic methods of greater statistical power than residuals based on rectangular grids . following an evaluation of performance using simulated data , the suggested method is used to compare the epidemic - type aftershock sequence ( etas ) model to the hector mine earthquake catalog . the proposed residuals indicate that the etas model with uniform background rate appears to slightly but systematically underpredict seismicity along the fault and to overpredict seismicity along the periphery of the fault .
the simulation of spectral energy distributions ( seds ) , images , and polarization maps of young stellar objects has become a profound basis for the analysis and interpretation of observing results .many techniques and approximations for the solution of the radiative transfer ( rt ) problem in different model geometries ( 1d3d ) , considering more and more special physical processes , such as the stochastic and photo - electric heating of small grains ( see , e.g , draine & li 2001 ; bakes & tielens 1994 , siebenmorgen , krgel , & mathis 1992 ) , scattering by spheroidal grains ( see , e.g. , wolf , voshchinnikov , & henning 2002 ; gledhill & mccall 2000 ) , or the coupling of line and continuum rt ( see , e.g. , rybicki & hummer 1992 ) , have been developed .the simulation of the temperature structure in simple - structured circumstellar shells or disks ( see , e.g. , malbet , lachaume , & monin 2001 ; chiang et al .2001 ) , the estimation of the properties ( luminosity , temperature , mass ) of heavily embedded stars ( see , e.g. , kraemer et al .2001 ) , or the determination of the inclination of a circumstellar disk ( see , e.g. , chiang & goldreich 1999 ; menshchikov , henning , & fischer 1999 ; wood et al .1998 ) represented modest , first attempts of the application of the existing sophisticated numerical techniques .more recent efforts are directed to derive the dust grain size distribution in a circumstellar disk from its sed ( wood et al . 2002 ; dalessio , calvet & hartman 2001 ) .thus , it is clear that beside strong observational constrains , the model parameters and considered physical processes have to be questioned in depth in order to derive such detailed conclusions .however , looking behind the scenes , many of the rt models are based on simplifying assumptions of very basic processes such as isotropic instead of anisotropic scattering , mean dust parameters representing dust grain ensembles ( different radii and chemical compositions ) , or the flux - limited diffusion approximation - approximations which are well - suited for handling the energy transfer in hydrodynamic simulations or a rough data analysis but which may not necessarily guarantee the desired accuracy for a detailed sed and image / polarization data analysis . in the advent of ( space ) observatories such as sirtf , which will be able to obtain seds of evolved debris disks around young stars , there exists a strong need for adequate numerical rt techniques in order to allow to trace dust grain growth ( meyer et al .2001 ) and the influence of other physical effects and processes such as the poynting - robertson effect ( see , e.g. , srikanth 1999 ) and dust settling ( dubrulle , morfill , & sterzik 1995 , miyake & nakagawa 1995 ) on the dust grain size distribution in these disks .therefore , we clearly have to understand which influence the different approximations ( as long as they are required ) in the rt simulations may have on the resulting observables . based on two different grain size distributions consisting of astronomical silicate , the differences in the resulting dust grain temperature distributions and the resulting seds between simulations of the rt in ( a ) a `` real '' dust grain mixture and ( b ) under the assumption ( approximation ) of mean dust grain parameters will be discussed .this investigation is therefore mainly focused on two questions : ( 1 ) of what order of magnitude are the differences ( ? 
) and ( 2 ) how many grain sizes have to be considered to represent the properties of a real grain size distribution ?the spatial temperature distribution of the considered dust configurations is calculated on the basis of local thermal equilibrium .stochastic heating processes which are expected in case of very small grains consisting of tens to hundreds of atoms ( see , e.g. , draine & li 2000 and references therein ) are not subject of this investigation . in sect .[ rtmodel ] , the rt and the dust grain model are briefly introduced . in sect .[ mean ] , the definition of the mean dust grain parameters is given and the expected deviations of the rt results once based on the mean dust parameters and once on a real grain size distribution are outlined . in sect .[ rtem ] , the rt in a spherical shell with variable optical depth and density distribution ( see sect . [ spsh ] ) is considered , while the temperature structure in a model of the hh30 circumstellar disk is investigated in sect .the sed resulting from a models with a dust grain mixture is compared to the mean particle approximation in sect .the rt simulations presented in this article have been performed with the three - dimensional continuum rt code mc3d which has been described by wolf & henning 2000 ( see also wolf , henning , & stecklum 1999 ; wolf 2002 ) .it is based on the monte - carlo method and solves the rt problem self - consistently . instead of the iterative procedure of estimating the dust temperature distribution ( as described in wolf et al .1999 ) , the concept presented by bjorkman & wood ( 2001 ) , which is based on the immediate correction of the dust grain temperature after absorption of a photon package , has been used .furthermore , the method described by lucy ( 1999 ) , which takes into account the absorption not only at the points of interaction of the photons with the dust grains but also in - between , has been applied . while the first method allows to simulate the rt for optical depths refers to the optical depth based on the extinction cross section of the dust grains at a wavelength of 550 nm . ] , the second was used in order to increase the efficiency of the simulation for low optical depths .in addition to the test of mc3d described by wolf et al .( 1999 ) considering a single grain size , it has been successfully tested for the case of dust grain mixtures against the one - dimensional code of chini et al .( 1986 ) .the rt is simulated at separate wavelengths within the wavelength range [ , .for this reason , the radiation energy of the emitting source(s ) is partitioned into so - called weighted photons ( fischer , henning , & yorke 1999 ) each of which is characterized by its wavelength and stokes vector . 
in order to consider dust grain ensembles instead of a single grain species , the rt concept as described by wolf et al .( 1999 ) , lucy ( 1999 ) , and bjorkman & wood ( 2001 ) has to be extended in the following manner : 1 .the mean free path length between the point of emission and the point of the first photon - dust interaction ( and - subsequently - between two points of interaction ) is given by \ , \delta l_i , \hspace*{1 cm } l = \sum^{n_{\rm p}}_{i=1 } \delta l_i.\ ] ] here , is the optical depth along the path of the length , is the corresponding numerical accumulated optical depth ( ) , is the number of chemically different dust grain species , is the number of particle radii being considered , and is the spatial coordinate corresponding to the integration point along the path length .the quantity is the number density at the spatial coordinate , the quantity is the extinction cross section , and is the number of integration points along the path .furthermore , is a random number uniformly distributed in the interval $ ] .2 . according to the concept of immediate reemission described by bjorkman & wood ( 2001 ) , at each point of interaction of a photon with a dust grain either scattering or absorption occurs .the probability for a photon to undergo the one or the other interaction process with a grain with the chemical composition # and the particle radius # is given by } , \ ] ] where stands either for absorption or scattering . in case of absorption ,the immediate reemission occurs from the same dust grain ( species ) .3 . in lucys concept ( 1999 ) , which considers the absorption of the electromagnetic radiation field not only at the end points of the photon path ( points of interaction ) but also in - between , simply the absorption due to all dust grain species ( instead of only one ) has to be taken into account : the investigations presented in this paper are based on dust grain ensembles with different grain size distributions but the same chemical composition . since _ the formalism ( and therefore at least the qualitative conclusions ) are the same for ensembles consisting of grains with different chemical composition or size distribution or even both _ , this restriction is justified .the optical properties of astronomical silicate ( draine & lee 1984 ) for [ a ] a size distribution of small grains with radii m and [ b ] a size distribution of larger grains with radii m have been used in sect .[ mean ] , [ spsh ] , and [ sed ] .the grains are assumed to be spherical with a size distribution described by the widely applied power law ( mathis , rumpl , & nordsieck 1977 ) .the optical properties such as the extinction and scattering cross sections as well as the scattering distribution function have been derived on the basis of mie scattering , using the mie scattering algorithm published by bohren & huffman ( 1983 ) .the correct ( anisotropic ) scattering distribution function ( for each dust grain size ) has been considered in the rt process .in case of non - self - consistent rt in a dust grain mixture , the numerical effort ( finally , the computing time and ram requirement ) may be substantially decreased by assuming weighted mean values of those parameters which describe the interaction of the electromagnetic field with the dust grains : the extinction , absorption , and scattering cross section , the stokes parameters , and as as function of the extinction and scattering cross section the albedo . 
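before turning to the definition of these mean parameters, the extended monte carlo steps listed above can be summarized in a short sketch: the path length is accumulated until the sampled optical depth is reached (summing the extinction of all grain species along the way), and the interacting grain and process are then drawn with probability proportional to n times the corresponding cross section. this is a minimal sketch under assumed array shapes and names, not the mc3d code itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(n_jk, c_ext_jk, dl):
    """march a photon package along precomputed path cells until the sampled
    optical depth is reached.
    n_jk     : array (n_cells, n_species) of grain number densities
    c_ext_jk : array (n_species,) of extinction cross sections
    dl       : path length per cell
    returns the index of the cell where the interaction occurs (or None)."""
    tau_sampled = -np.log(1.0 - rng.random())        # tau = -ln(1 - z), z uniform in [0, 1)
    tau = 0.0
    for i in range(n_jk.shape[0]):
        tau += np.sum(n_jk[i] * c_ext_jk) * dl       # sum over all species / size bins
        if tau >= tau_sampled:
            return i
    return None                                      # package leaves the model space

def choose_grain_and_process(n_local, c_sca, c_abs):
    """pick the interacting grain (species / size bin) and the process
    (scattering or absorption) with probability proportional to n * c."""
    weights = np.concatenate([n_local * c_sca, n_local * c_abs])
    weights /= weights.sum()
    idx = rng.choice(weights.size, p=weights)
    species = idx % n_local.size
    process = "scattering" if idx < n_local.size else "absorption"
    return species, process

# illustrative call with 3 size bins in 100 cells (arbitrary units)
cell = propagate(np.full((100, 3), 1.0), np.array([1e-12, 5e-12, 2e-11]), dl=2e9)
print(cell, choose_grain_and_process(np.ones(3), np.array([5e-13, 3e-12, 1.2e-11]),
                                     np.array([5e-13, 2e-12, 8e-12])))
```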
the weight , which represents the contribution of the component of the dust grain mixture , results from the abundance of this component in respect of its dust grain number density ( assuming chemically different dust species ) and the size distribution of the respective material : where and are the minimum and maximum grain radius of the size distribution .the stokes parameters as well as the extinction , absorption , and scattering cross section ( , , ) are additive .therefore , the representative values in case of a dust grain mixture can be derived on the basis of their weighted contributions ( see , e.g. , martin 1978 , solc 1980 ) : and where is the mueller matrix which is used to describe the modification of the photon s stokes vector due to the interaction of a photon with the scattering / absorbing medium ( dust grains ; see bickel & bailey 1985 , bohren & huffman 1983 ) . for the albedo : since this formalism ( eq .[ weight]-[albmean ] ) covers both grain size and chemical distributions , the conclusions to be derived for a grain size distribution but a single chemical component is also valid for grain ensembles consisting of grains with different chemical compositions .the mean dust grain parameters used in the following sections have been derived on the assumption of grain radii equidistantly distributed in the grain size range of the small / large grain ensemble . in thermal equilibrium , the temperature of a dust grain ( grain size or composition ) can be estimated from the assumption of local energy conservation .if is the energy being absorbed by the dust grain component during the time interval , then the local energy conservation can be written as where is the total amount of energy being re - emitted during the time interval , and are the monochromatic absorbed / re - emitted luminosity at the wavelength , is the absorption efficiency and is the planck function of the grain with the temperature . for two particles of the same composition but different radii ( , )the temperature is consequently only the same if this is usually not the case .thus , mean dust grain parameters may be used only in case of the simulation of dust scattering but not for the estimation of the dust grain temperature distribution and resulting observables , such as the spectral energy distribution and images in the mid - infrared to millimeter wavelength range . in the following sections ( [ rtem ] and [ sed ] ) it will be investigated under which conditions the approximation of the mean dust grain parameters provides nevertheless a good estimate for these observables . while the most crucial information can be derived on the basis of a one - dimensional model ( [ rtem ] ) , the study presented in [ sed ] is aimed to reveal the influence of geometrical effects in case of two - dimensional disk - like structures . 
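the weighted mean optical properties defined above are straightforward to assemble once the cross sections of the individual size bins are available (the mie calculation itself is not shown here). the sketch below assumes a single-material power-law size distribution with equidistant radii, as used for the mean parameters in the text; the mock efficiencies and the radius range are purely illustrative.

```python
import numpy as np

def mean_dust_parameters(radii, c_ext, c_sca, exponent=-3.5):
    """weighted mean cross sections and albedo for a single-material grain size
    distribution n(a) ~ a**exponent; cross sections are additive, so the mean
    values are weighted sums."""
    w = radii**exponent
    w /= w.sum()                         # relative abundance of each size bin
    c_ext_mean = np.sum(w * c_ext)
    c_sca_mean = np.sum(w * c_sca)
    c_abs_mean = c_ext_mean - c_sca_mean
    albedo_mean = c_sca_mean / c_ext_mean
    return c_ext_mean, c_abs_mean, c_sca_mean, albedo_mean

# example: 16 equidistant radii with mock, size-independent efficiencies
a = np.linspace(0.005e-6, 0.25e-6, 16)               # illustrative radius range [m]
q_ext, q_sca = 1.5, 0.9                               # mock efficiencies
print(mean_dust_parameters(a, q_ext * np.pi * a**2, q_sca * np.pi * a**2))
```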
in each case , typical parameters for both the dust grain composition ( see [ rtmodel ] ) and the circumstellar shells / disks are considered .in order to reveal the differences between the results of the rt based on real dust grain size distributions and the approximation of mean dust grain parameters representing the weighted mean optical parameters of the grain size distribution , a simple - structured but in respect of the astrophysical applications very useful model has been used for the following investigations .it consists of a spherical shell ( outer radius : ; inner boundary : ) with a radial density profile described by a power law , , where is a negative , constant quantity .the dust grain size distribution is assumed to be constant throughout the whole shell .the only heating source is an embedded , isotropically radiating star in the center of the shell with an effective temperature and radius identical to that of the sun . in order to study the influence of multiple scattering and ( additional ) heating due to the dust reemission , the optical depth of the shell ( as seen from the star ) and the exponent , describing the relative density gradient in the shell ,have been considered as variable parameters of the model .four values ( -1.0 , -1.5 , -2.0 , and -2.5 ) for the exponent were chosen , covering a broad range of astrophysical objects ranging from circumstellar shells ( see , e.g. , chini et al .1986 , henning 1985 , yorke 1980 ) and disks ( see , e.g. , menshchikov & henning 1997 , sonnhalter et al . 1995 , lopez et al .1995 ) to the radial density profile being assumed for the dust density distribution in agn models ( see , e.g. , manske et al .1998 , efstathiou & rowan - robinson 1995 , stenholm 1994 ) .the optical depth was varied from the optical thin case ( =0.1 ) to the case of intermediate optical depth ( ) and finally to as the optically thick case . for optical depths below this intervalno remarkable differences to the case are expected since multiple scattering and reemission have a negligible influence on the dust grain temperature which is determined only by the distance from the star ( attenuation of the stellar radiation field ) and the absorption efficiency of the dust grains . for optical depths preparatory studies have shown that the temperature difference between the different dust grains becomes negligible , converging towards the temperature distribution obtained on the basis of the mean dust grain approximation .this finding is in agreement with results obtained by krgel & walmsley ( 1984 investigation of dust and gas temperatures in dense molecular cores ) , wolfire & churchwell ( 1987 study of circumstellar shells around low - mass , pre - main - sequence stars ) , and efstathiou & rowan - robinson ( 1994 multi - grain dust cloud models of compact hii regions ) .furthermore , only the inner region of the circumstellar shell shows a broad distribution of dust grain temperatures at a given distance from the star . 
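since the optical depth of the shell is one of the variable model parameters, the density normalization of the power-law shell has to be fixed accordingly. the sketch below shows one way to do this for a mean extinction cross section; the inner and outer radii, the cross section, and the target optical depth are illustrative assumptions, not the values used for the figures.

```python
import numpy as np

def shell_density(r, r_in, r_out, alpha, tau_target, c_ext_mean):
    """number density n(r) = n0 * (r / r_in)**alpha, with n0 chosen such that the
    radial optical depth tau = int_{r_in}^{r_out} n(r) * c_ext dr, as seen from
    the central star, equals tau_target."""
    if alpha == -1.0:
        integral = r_in * np.log(r_out / r_in)
    else:
        integral = (r_out**(alpha + 1) - r_in**(alpha + 1)) / ((alpha + 1) * r_in**alpha)
    n0 = tau_target / (c_ext_mean * integral)
    return n0 * (r / r_in)**alpha

# example: alpha = -1.5 shell between 0.1 au and 100 au, tau = 0.1
au = 1.496e11
r = np.linspace(0.1 * au, 100.0 * au, 1000)
n = shell_density(r, 0.1 * au, 100.0 * au, alpha=-1.5, tau_target=0.1, c_ext_mean=1e-13)
print(np.trapz(n * 1e-13, r))   # ~0.1, recovering the prescribed optical depth
```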
in fig. [ s6txax2 ] and [ s6ta - b2 ] the differences between the radial temperature distributions of the individual dust grains and that resulting from the approximation of mean dust grain parameters are shown from the inner boundary of the shell to a distance of 1 au from the star. the main results are: 1. the temperature of the different grains spans a range of ( % of the corresponding mean temperature) around the temperature of the mean grains at the inner boundary of the shell. 2. the temperature difference decreases (a) towards larger distances from the star, and (b) with increasing optical depth as soon as the shell becomes optically thick. 3. the relative density gradient, described by the exponent, is of minor importance. however, an increase of the absolute value of this exponent results in an increased redistribution of energy between the different dust grain components at the inner boundary of the shell ( ) and therefore in a decrease of the temperature dispersion. 4. the difference ( ) strongly depends on both the absorption efficiency, and thus on the dust grain size distribution, and the temperature of the heating source, since the combination of these parameters determines the amount of energy being absorbed. this is clearly illustrated by the different signs of the temperature difference between the large and the small grain size distribution and the temperature of the mean grains shown in fig. [ s6txax2 ] and [ s6ta - b2 ]. the complex, highly dispersed temperature structure found in a dust grain mixture requires a large number (here: ) of grain radii to be taken into account correctly and cannot be adequately represented by a mean particle. this finding is of special interest for the simulation of chemical networks, since the existence of ice layers, the possibility of a certain reaction on the grain surface, surface reaction rates, and the temperature of the surrounding gas phase all depend on the dust grain temperature (see, e.g., cazaux & tielens 2002, markwick et al. 2002, charnley 2001). the dust phase in the close stellar environment is of particular importance, since the chemical evolution takes place there on its smallest timescale. furthermore, the approximation of mean particle properties was found to be justified for dust configurations of high ( ) optical depth. although all basic characteristics of the temperature structure in a dust grain mixture can be studied on the basis of a one-dimensional model (sect. [ spsh ]), a disk-like structure will be considered in the following. the motivation for this is to investigate the possible influence of geometrical effects, which have been minimized in the case of the spherical shell. furthermore, two-dimensional models are of particular importance since they are widely applied in simulations of circumstellar disks, debris disks around evolved stars, active galactic nuclei, and galaxies (see, e.g., chiang & goldreich 1997, 1999; see also the references in sect. [ spsh ]). in the following, a model of the nearly edge-on disk around the classical t tauri star hh30 is considered (see, e.g., burrows et al. 1996 for observational details). following cotera et al. (2001) and wood et al.
( 2002 ) , a flared geometry as described by shakura & sunyaev ( 1973 ) is adapted : ^ 2 \right),\ ] ] where is the radial coordinate in the disk midplane , is the density at the stellar surface , and the scale height increases with radius : in order to reproduce the high density ( and therefore temperature ) gradient in the inner region of the disk , a very high resolution of the model both in radial and vertical direction has been applied .the smallest resolved structure at the inner radius of the disk has a linear extent of about 29% both in vertical and radial direction . as for the sed modelling performed by wood et al .( 2002 ) , the following values have been used : stellar radius , stellar effective temperature , , , , inner radius of the disk , and the outer radius of the disk amounts to 200au .in contrast to the simulations presented by cotera et al .( 2001 ) and wood et al .( 2002 ) , only one chemical dust grain component ( astronomical silicate as in sect .[ spsh ] ) has been chosen in order to simplify the simulation analysis .the size distribution is specified using a power law with exponential decay : ( see , e.g. , kim et al .following cotera et al .( 2001 ) , a maximum grain size has been chosen ( other parameter values : , ) .a total number of 32 grain sizes with radii equidistantly distributed on a logarithmic scale between 0.05 and 20 are considered .the visual extinction of the disk amounts to at an inclination of ( see wood et al .2002 , tabl . 2 ) .the rt simulations have been performed on the basis of the model geometry shown in fig .6[b ] in wolf , henning , & stecklum 1999 .while the star could be assumed to be point - like in the one - dimensional model ( [ spsh ] ) , its real extend has to be taken into account now .the radiation characteristic at each point of the stellar surface is described by a law ( star without limb - darkening ; see wolf 2002 for details ) . the resulting temperature distribution for two selected grain sizes ( and ) and their difference are shown in fig .[ disk - t ] . on the largest , scale the temperature of the small grains is up to 40k higher than that of the large grains in the optically thin atmosphere above / below the disk . in agreement with the results found in optically thick spherical shells, the temperature difference decreases towards the optically extremly thick midplane where it amounts to only a few kelvin .however , the vertical temperature structure shows a more complex behaviour in the inner region of the disk , on a size scale of a few au and smaller . here , an _ inversion _ of the temperature difference in the upper layer of the optically thick disk occurs ,i.e. 
, the large grains are substantially warmer than the small grains. this remarkable effect can be explained by the more efficient heating of large grains by the mid-infrared to far-infrared reemission of the hot inner disk. this explanation is also supported by the comparison of the vertical temperature structure obtained at different distances from the star (figs. [ disk - z ] and [ disk - g ]; note that, on a pc with 1 gbyte ram, the rt has been simulated only in the inner disk region with a diameter of 8.8 au, which is one to two orders of magnitude larger than the regions considered in the following discussions; the comparison with the results obtained on a temperature grid with a lower resolution, fig. [ disk - t ], showed that the resulting differences in the spatial temperature distribution are of the order of or smaller than the statistical noise of the results and therefore negligible): fig. [ disk - z ] shows the difference between the vertical temperature distributions of grains with increasing radii ( , 0.1, 1.1, 9.0 μm) and grains. furthermore, fig. [ disk - g ] shows the temperature difference as a function of the grain radius and radial distance from the star at different distances from the midplane. while at a radial distance of 0.05 au the temperature of the 20 μm grains is about 130 k higher than that of grains in the "inversion layer", the amount of this temperature difference drops to less than at a radial distance of 0.5 au from the star (see fig. [ disk - z ]). consequently, the vertical extent and location of the minimum of the temperature inversion region depend on the particular grain size (and the vertical density distribution in the disk). for instance, it was found that the minimum of the distribution shown in fig. [ disk - z ] is shifted towards smaller distances from the midplane in all three considered cases (= 0.05 au, 0.1 au, and 0.5 au). according to the explanation of the temperature inversion given above, a similar shift of the minima of the temperature difference distribution is expected for the other grain sizes (0.01 to 1.1 μm) as well. however, the spatial resolution of the simulated temperature distribution is too low to allow the verification of this assumption. based on the dust grain temperature distribution shown in fig. [ s6txax2 ], the spectral energy distributions (seds) of the different (spherical) shells have been calculated assuming ( ) different grain sizes. the relative difference is shown in fig. [ sed1 ] and [ sed2 ]. here, represents the sed of the real grain size distribution, and is the sed resulting from the approximation of mean dust grain parameters. as fig. [ sed1 ] illustrates, the results of the real grain size distribution converge towards the results based on the approximation of mean dust grain parameters. as it also shows, a minimum of to grain sizes has to be considered to achieve deviations of less than . if too few single grain sizes are considered, the sed does not represent the observable sed of the real grain size distribution sufficiently, but drastically overestimates 1. the absorption of the stellar radiation (and therefore underestimates the visual to near-infrared sed), 2. the depths of the absorption bands in general (as the silicate absorption feature at m demonstrates), and 3.
the dust reemission spectrum in the near - infrared to millimeter wavelength range .while the different absorption behaviour influences the temperature structure and overall sed in general , the influence of the different scattering behaviour depends on the optical depth . in the optically thin case , differences in the scattering behaviour result mainly in deviations of the short wavelength region of the stellar sed . at higher optical depth ,the scattering behaviour has direct influence on the energy transfer in the dust envelope and thus on the temperature structure and the near- to far - infrared wavelength range as well .however , this effect is of importance only at intermediate optical depths ( ) , since for even higher optical depths the temperature structure , at least in case of a one - dimensional dust configuration , tends not to depend on the description of the dust grain ensemble ( real dust grain size distribution / approximation of mean dust grain parameters ) .in simulations of the radiative transfer in the circumstellar environment of young stellar objects it has been widely established to use mean values for those parameters which describe the interaction of the electromagnetic field with the dust grains ( see , e.g. , modelling efforts by calvet et al .2002 : circumstellar disk around the young low - mass star tw hya ; cotera et al .2001 and wood et al . 2002 : circumstellar disk around the classical t tauri star hh 30 ; fischer , henning , & yorke 1994 , 1996 : polarization maps of young stellar objects ) . a main reason for this is given by the fact that the consideration of a large number of single grain sizes and chemically different grain species results in a nearly linearly increasing amount of required computer memory in order to store the separate temperature distributions .furthermore , the calculation of the spatial temperature distribution for all grains of different sizes and chemical composition requires significantly more computing power since ( a ) the heating by the primary sources ( e.g. , the star embedded in a circumstellar envelope ) has to be performed independently for each grain species , and ( b ) the number of computing steps required to model the subsequent mutual heating of the different dust grain species due to dust re - emission scales even as , where is the number of chemically different components and is the number of separate dust grain radii considered .while these simulations are feasible in case of one - dimensional models ( see , e.g. , chini , krgel , & kreysa 1986 and efstathiou & rowan - robinson 1994 : dust emission from star forming region ; krgel & siebenmorgen 1994 and siebenmorgen , krgel , & zota 1999 : radiative transfer in galactic nulei ) or simple - structured two - dimensional models ( menshchikov & henning 1997 : circumstellar disks ; efstathiou & rowan - robinson 1994 : disks in active galactic nuclei ) , it is hardly possible to handle two- and three - dimensional models with high density gradients and/or high optical depth which require high - resolution temperature grids . 
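to make this resource argument concrete, a rough back-of-the-envelope estimate is sketched below. the quadratic scaling assumed for the mutual-heating step stands in for the scaling expression that is not reproduced in the text, and the grid size and byte counts are purely illustrative.

```python
def rt_resource_estimate(n_cells, n_chem, n_radii, bytes_per_value=8):
    """rough estimate of the memory needed to store a separate temperature grid
    for every grain species, and of the relative cost of the mutual heating
    between species (here assumed, for illustration, to scale quadratically
    with the total number of grain species)."""
    n_species = n_chem * n_radii
    memory_gb = n_cells * n_species * bytes_per_value / 1e9
    relative_heating_cost = n_species**2    # assumed scaling of the mutual reheating steps
    return memory_gb, relative_heating_cost

# example: a grid with 10^6 cells, 1 chemical component, 64 grain radii
print(rt_resource_estimate(n_cells=10**6, n_chem=1, n_radii=64))
```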
in this study ,the difference between the results of rt simulations based ( a ) on a mean dust grain parameter approximation and ( b ) real dust grain size distributions have been investigated .based on a one - dimensional density distribution it was found that the temperature structure of a real grain size distribution shows a very complex behaviour in the inner , hot region of the shell depending on ( 1 ) the grain size distribution , ( 2 ) the effective temperature of the embedded star and the optical depth and therefore on the density distribution .however , the relative difference between the sed based on a real dust grain size distribution on the one hand and the approximation of mean dust grain parameters on the other hand was found to be smaller than about % if a minimum number of to grain sizes have been considered . as the temperature structure in a circumstellar disk based on the model for hh30 shows , the geometry of the density distribution is a significant parameter for the resulting temperature differences between grains of different size , too . in the inner region of the disk with a diameter of a few au a temperature inversion layerwas found where the sign of the temperature difference of the largest and smallest grains is reverted . as this and the results obtained on the basis of the spherical shell ( [ spsh ] ) show , the dust grain temperature structure is not sufficiently represented by dust grains with mean optical parameters . on the one hand , this is of tremendous importance for the simulation of chemical networks since the largest deviations from results based on the approximation of mean dust grain parameters have been found in the inner hot , dense region of the shell / disk where the chemical evolution takes place on its smallest timescale . on the other hand , the complex temperature structure may significantly change the hydrostatic properties of the considered gas / dust density distribution itself .however , these questions have to be investigated in future studies in order to find out the influence on observable quantities such as seds , images , polarization maps , and visibilities .furthermore , the influence on processes taking place in more evolved circumstellar disks , such as the dust settling and dust grain growth , have to be considered taking into account the temperature structure of real grain size distributions .this research was supported through the hst grant go9160 , and through the nasa grant nag5 - 11645 .i wish to thank the referee e. dwek who helped to improve the clarity of the presentation of the results .bakes , e.l.o . , tielens , a.g.g.m .1994 , , 427 , 822 bickel , w.s . , bailey , w.m . 1985 ,, 53 ( 5 ) , 468 bjorkman , j.e . , wood , k. 2001 , , 554 , 615 bohren , c.f . ,huffman , d.r .1983 , `` absorption and scattering of light by small particles '' .john wiley & sons , new york burrows , c.j . , et al . , 1996 , , 473 , 437 calvet , n. , dalessio , p. , hartmann , l. , et al .2002 , , 568 , 1008 cazaux , s. , tielens , a.g.g.m .2002 , , 575 , l29 charnley , s.b .2001 , , 562 , l99 chiang , e.i . , goldreich , p. 1997, , 490 , 368 chiang , e.i . ,goldreich , p. 1999, , 519 , 279 chiang , e.i . , joung , m.k . , creech - eakman , m.j . 2001 , et al ., , 547 , 1077 chini , r. , krgel , e. , kreysa , e. 1986 , , 167 , 315 cotera , a.s . ,whitney , b.a . ,young , e. , wolff , m.j . ,wood , k. , et al .2001 , , 556 , 958 dalessio , p. , calvet , n. , hartmann , l. 
2001 , , 553 , 321 draine , b.t ., lee , h.m .1984 , , 285 , 89 draine , b.t . , li , a. , 2000 , , 551 , 807 draine , b.t . , li , a. 2001 , , 551 , 807 dubrulle , b. , morfill , g. , sterzik , m. 1995 , icar . , 114 , 237 efstathiou , a. , rowan - robinson , m. 1994 , mnras , 266 , 212 efstathiou , a. , rowan - robinson , m. 1995 , mnras , 273 , 649 fischer , o. , henning , th . , yorke , h. 1994 , , 284 , 187 fischer , o. , henning , th . , yorke , h. 1996 , , 308 , 863 gledhill , t.m . ,mccall , a. 2000 , mnras , 314 , 123 henning , th .1985 , apss , 114 , 401 kim , s .- h . ,martin , p.g.,hendry , p.d .1994 , , 422 , 164 kraemer , k.e ., jackson , j.m . , deutsch , l.k ., et al .2001 , , 561 , 282 krgel , e. , siebenmorgen , r. 1994 , , 282 , 407 krgel , e. , walmsley , c.m .1984 , a&a , 130 , 5 lopez , b. , mekarnia , lef , j. 1995 , , 296 , 752 lucy , l.b . 1999 , , 344 , 282 malbet , f. , lachaume , r. , monin , j .- l .2001 , , 379 , 515 manske , v , henning , th . , menshchikov , a.b .1998 , , 331 , 52 martin , p.g . 1978 , `` cosmic dust .its impact on astronomy . '' , claderon press , oxford markwick , a.j . ,ilgner , m. , millar , t.j . ,henning , th .2002 , , 385 , 632 mathis , j.s . ,rumpl , w. , nordsieck , k.h .1977 , , 217 , 425 menshchikov , a.b . , henning ,1997 , , 318 , 879 menshchikov , a.b . , henning , th . , fischer , o. 1999 , , 519 , 257 meyer , m.r . , backman , d. , mamajek , e.e . , et al .2001 , aas , 199 , 7608 miyake , k. , nkagawa , y. 1995 , , 441 , 361 rybicki , g.b . ,hummer , d.g .1992 , , 262 , 209 shakura , n.i . , & sunyaev , r.a .1973 , , 24 , 337 siebenmorgen , r. , krgel e. , mathis , j.s .1992 , , 266 , 501 siebenmorgen , r. , krgel , e. , zota , v. 1999 , , 351 , 140 solc , m. , 1980 , acta universitatis carolinae - mathematica et physica , 21 sonnhalter , c. , preibisch , th ., yorke , h. 1995 , , 299 , 545 srikanth , r. 1999 , icar . , 140 , 231 stenholm , l. 1994 , , 290 , 393 wolf , s. , henning , th ., stecklum b. 1999 , , 349 , 839 wolf , s. , henning , th .2000 , comp .132 , 166 wolf , s. , voshchinnikov , n.v .henning , th ., 2002 , , 385 , 365 wolf , s. 2002 , comp ., in press wolfire , m.g . , churchwell , e. 1987 , , 315 , 315 wood , k. , wolff , m.j . , bjorkman , j.e . ,whitney , b. 2002 , , 564 , 887 wood , k. , kenyon , s.j . ,whitney , b. , turnbull , m. 1998 , , 497 , 404 yorke , h.w .1980 , , 86 , 268
the influence of a dust grain mixture consisting of spherical dust grains with different radii and/or chemical composition on the resulting temperature structure and spectral energy distribution of a circumstellar shell is investigated. the comparison with the results based on an approximation of dust grain parameters representing the mean optical properties of the corresponding dust grain mixture reveals that (1) the temperature dispersion of a real dust grain mixture decreases substantially with increasing optical depth, converging towards the temperature distribution resulting from the approximation of mean dust grain parameters, and (2) the resulting spectral energy distributions do not differ by more than 10% if grain sizes are considered, which justifies the mean parameter approximation and the many results obtained under its assumption so far. nevertheless, the dust grain temperature dispersion at the inner boundary of a dust shell may amount to and therefore has to be considered in the correct simulation of, e.g., chemical networks. in order to study the additional influence of geometrical effects, a two-dimensional configuration, the hh30 circumstellar disk, was considered, using model parameters from cotera et al. (2001) and wood et al. (2002). a drastic inversion of the large to small grain temperature distribution was found within the inner region of the disk.
neocortical circuits are highly connected : a typical neuron receives synaptic input from of the order of 10000 other neurons .this fact immediately suggests that mean field theory should be useful in describing cortical network dynamics .furthermore , a good fraction , perhaps half , of the synaptic connections are local , from neurons not more than half a millimeter away , and on this length scale ( i.e. , within a cortical column ) the connectivity appears to be highly random , with a connection probability of the order of 10% .this requires a level of mean field theory a step beyond the kind used for uniform systems in condensed matter physics like ferromagnets .it has to describe correctly the fluctuations in the inputs to a given network element as well as their mean values , as in spin glasses .the theory we use here is , in fact , adapted directly from that for spin glasses .a generic feature of mean field theory for spin glasses and other random systems is that the `` quenched disorder '' in the connections ( the connection strengths in the network do not vary in time ) leads to an effectively noisy input to a single unit that one studies : spatial disorder is converted to temporal .the presence of this noise offers a fundamental explanation for the strong irregularity of firing observed experimentally in cortical neurons . for high connectivity ,the noise is gaussian , and the correct solution of the problem requires its correlation function to be found self - consistently . in this paper we summarize how to do this for some simple models for cortical networks .we focus particularly on the neuronal firing statistics .there is a long history of experimental investigations of the apparent noisiness of cortical neurons , but very little in the way of theoretical work based on network models .our work begins to fill that gap in a natural way , since the full mean field theory of a random network is based on self - consistently calculating the correlation function .in particular , we are able to identify the features of the neurons and synapses in the network that control the firing correlations . the basic ideas developed here were introduced in a short paper , and these models are treated in greater detail in several other papers .here we just want to give a quick overview of the mean field approach and what we can learn from it .in all the work described here , our neurons are of the leaky integrate - and - fire kind , though it is straightforward to extend the method to other neuronal models , based , for example , on hodgkin - huxley equations . in our simplest model , we consider networks of excitatory and inhibitory neurons , each of which receives a synaptic connection from every other neuron with the same probability .each such connection has a strength , " the amount by which a presynaptic spike changes the postsynaptic potential . in this model , these strengths are independent of the postsynaptic membrane potential ( `` current - based synapses '' ) . all excitatory to - excitatory connections that are present are taken to have the same strength , and analogously for the three other classes of connections ( excitatory - to - inhibitory , etc . ) .however , the strengths for the different classes are not the same . in addition , excitatory and inhibitory neurons both receive excitation from an external population , representing `` the rest of the brain '' .( for primary sensory cortices , this excitation includes the sensory input from the thalamus . 
)this is probably the simplest generic model for a generic `` cortical column '' of spiking neurons .the network is taken to have excitatory and inhibitory neurons . a given neuron ( of either kind )receives synaptic input from every excitatory ( resp .inhibitory ) neuron with probability ( resp . , with independent of . in our calculations we take the connection density to be 10% , but the results are not very sensitive to its value as long as it is fairly small .each nonzero synapse from a neuron in population to one in population is taken to have the value .synapses from the external population are treated in the same way , with strengths . for simplicity, neurons in the external population are assumed to fire like stationary independent poisson processes .we consider the limit , , with fixed , where mean field theory is exact .the subthreshold dynamics of the membrane potential of neuron in population obey where is the spike train of neuron in population .the membrane time constant is taken to have the same value for all neurons .we give the firing thresholds a narrow distribution of values ( 10% of the mean value , 1 ) .we take the firing thresholds and the postfiring reset levels to be 0 .we ignore transmission delays .the essential point of mean field theory is that for such a large , homogeneously random network , as for an infinite - range spin glass , we can treat the net input to a neuron as a gaussian random process .this reduces the network problem to a single - neuron one , with the feature that the statistics of the input have to be determined self - consistently from the firing statistics of the single neurons .this reduction was proved formally for a network of spiking neurons by fulvi mari .explicitly , the effective input current to neuron in population can be written .{ \label}{eq : decrec}\ ] ] here is the average rate in population , is a unit - variance gaussian random number , and is a ( zero - mean ) gaussian noise with correlation function equal to , the average correlation function in population . for the contribution from a single population , labeled by , the first term in ( [ eq : decrec ] ) , which represents the mean input current , is larger than the other two , which represent fluctuations , by a factor of : averaging over many independent input neurons reduces fluctuations relative to those in a single neuron by the square root of the number of terms in the sum .( for our way of scaling the synapse strengths , the factor in the first term arises formally from adding terms , each of which is proportional to . 
) however , while the fluctuation terms are small in comparison to the mean for a given input population , small compared to the population - averaged input ( the first term in ( [ eq : decrec ] ) ) , we will see that when we sum over all populations the first term will vanish to leading order .what remains of it is only of the same order as the fluctuations .therefore fluctuations can not be neglected .the fact that the fluctuation terms are gaussian variables is just a consequence of the central limit theorem , since we consider the limit .note that one fluctuation term is static and the other dynamic .the origin of the static one is the fact that the network is inhomogeneous , so different neurons will have different number of synapses and therefore different strengths of net time - averaged inputs .it is perhaps not immediately obvious , but the formal derivation ( , see also for an analogous case ) shows that the dynamic noise also originates from the random inhomogeneity in the network .it would be absent if there were no randomness in the connections , as , for example , in a model like ours but with full connectivity .the presence of the factor in the third term in ( [ eq : decrec ] ) makes this point evident ; in the general case the noise variance is proportional to the variance of the connection strengths . in any mean field theory ,whether it is for a ferromagnet , a superconductor , electroweak interactions , or a neural network , one has to make an _ ansatz _ describing the state in question .this ansatz contains some parameters ( generally called order parameters " ) , the values of which are then determined self - consistently . here, our order parameters " are the mean rates , their mean square values ( which appear in ( [ eq : bb ] ) , and the correlation functions .we make an _ ansatz _ for the correlation functions that describes an asynchronous irregular firing state : we take to be time - independent and to have a delta - function peak ( of strength equal to ) at , plus a continuous part that falls off toward zero as .we could also look , for example , for solutions in which was time - dependent and/or had extra delta - function peaks ( these might describe oscillating population activity ) , but we have not done so .thus , we can not exclude the existence of such exotic states , but we can at least check whether our asynchronous , irregularly - firing states exist and are stable .we can find the mean rates , at least when they are low , independently of their fluctuations and the correlation functions : in an irregularly - firing state the membrane potential should fluctuate around a stationary value , with the noise occasionally and irregularly driving it up to threshold . in mean field theory , we have where is given by ( [ eq : decrec ] ) .( we have dropped the neuron index , since we are now doing a one - neuron problem . ) from ( [ eq : decrec ] ) , we see that the leading terms in are large ( ) , so if the membrane potential is to be stationary they must nearly cancel : that is , the mean excitatory ( ) and inhibitory ( ) currents must nearly balance . therefore we call ( [ eq : balance ] ) the balance condition . 
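before turning to the solution of the balance condition in the next paragraph, the square-root bookkeeping behind ( [ eq : decrec ] ) can be checked with a two-line numerical experiment. the sketch below assumes the usual scaling of individual synaptic weights as j divided by the square root of the number of inputs, which is consistent with the factors described above; the rates and weights are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def scaled_input(k, j=1.0, rate=10.0, window=0.1, n_samples=20000):
    """total input delivered in a time window by k independent poisson
    presynaptic neurons, each synaptic weight scaled as j / sqrt(k).
    (the sum of k independent poisson counts is itself poisson.)"""
    counts = rng.poisson(lam=k * rate * window, size=n_samples)
    summed = (j / np.sqrt(k)) * counts
    return summed.mean(), summed.std()

for k in (100, 1000, 10000):
    mean, sd = scaled_input(k)
    print(k, round(mean, 2), round(sd, 3))   # mean grows like sqrt(k); sd stays of order 1
```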
defining , we can also write it in the form the external rate is assumed known , so these two linear equations can be solved for , .we can write the solution as {ab } j_{b0}r_0 , { \label}{eq : balsoln}\ ] ] where by we mean the inverse of the matrix with elements , .this result was obtained some time ago by amit and brunel and , for a nonspiking neuron model , by van vreeswijk and sompolinsky .however , a complete mean field theory involves the rate fluctuations within the populations and the correlation functions , and it is clear that if we want to understand something quantitative about the degree of irregularity of the neuronal firing , it is necessary to do the full theory .this can not be done analytically , so we resort to numerical calculation .our method was inspired by the work of eisfeller and opper on spin glasses .they , too , had a mean field problem that could not be solved analytically , so they solved numerically the single - spin problem to which mean field theory reduced their system . in our case , we have to solve numerically , the problem of a single neuron driven by gaussian random noise , and the crucial part is to make the input noise statistics consistent with the output firing statistics .this requires an iterative procedure .we have to start with a guess about the mean rates , the rate fluctuations , and the correlation functions for the neurons in the two populations .we then generate noise according to ( [ eq : decrec ] ) and simulate many trials of neurons driven by realizations of this noise . in these trials ,the effective numbers of inputs are varied randomly from trial to trial , with a gaussian distribution of width , to capture the effects of the random connectivity in the network .we compute the firing statistics for these trials and use the result to improve our estimate of the input noise statistics .we then repeat the trials and iterate the loop until the input and output statistics agree .we can get a good initial estimate of the mean rates from the balance condition equation ( [ eq : balsoln ] ) , but this is harder to do for the rate fluctuations and correlation function .the method we have used is to do the initial trials with white noise input ( of a strength determined by the mean rates ) .there seems to be no problem converging to a solution with self consistent rates , rate fluctuations and firing correlations from this starting point .more details of the procedure can be found in . 
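a compressed sketch of the first pass of this procedure is given below: the mean rates are estimated from the balance condition, and a single leaky integrate-and-fire neuron is then driven by white noise whose mean and variance are set by those rates. the coupling values, the in-degree, and the white-noise variance are illustrative assumptions; in the full calculation the white noise is replaced iteratively by the measured correlation function.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) initial rate estimate from the balance condition, r = -j^{-1} j0 r0
j = np.array([[1.0, -2.0],        # j_ab: e->e, i->e
              [1.0, -1.8]])       #       e->i, i->i   (illustrative values)
j0 = np.array([1.0, 0.8])         # external couplings
r0, k = 10.0, 1000.0              # external rate and mean in-degree (illustrative)
rates = -np.linalg.solve(j, j0) * r0
print("balance-condition rates:", rates)

# 2) first pass of the single-neuron problem: white-noise input whose mean and
#    variance are set by the estimated rates
def lif_trial(i_mean, i_var, tau_m=0.01, dt=1e-4, t_max=10.0, theta=1.0, reset=0.0):
    u, n_spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        i_t = i_mean + np.sqrt(i_var / dt) * rng.standard_normal()
        u += dt * (-u / tau_m + i_t)
        if u >= theta:
            u, n_spikes = reset, n_spikes + 1
    return n_spikes / t_max

a = 0                                                 # excitatory population
i_mean = np.sqrt(k) * (j[a] @ rates + j0[a] * r0)     # nearly cancels by construction
i_var = (j[a]**2 @ rates) + j0[a]**2 * r0             # assumed shot-noise variance
print("first-pass firing rate:", lif_trial(i_mean, i_var), "hz")
```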
as a measure of the firing irregularity, we consider the fano factor f. it is defined as the ratio of the variance of the spike count to its average, where both statistics are computed over a large number of trials. it is easy to relate it to the correlation function, as follows. if $S(t)$ is a spike train as in ( [ eq : model1 ] ), the spike count in an interval from 0 to $T$ is $N(T)=\int_0^T dt\, S(t)$; its mean is just $\langle N\rangle = rT$, and its variance is $\langle N^2\rangle - \langle N\rangle^2 = \int_0^T dt \int_0^T dt'\, \langle [S(t)-r][S(t')-r]\rangle$ { \label}{eq : varcount}. the quantity in the averaging brackets in ( [ eq : varcount ] ) is just the correlation function. changing integration variables from $t'$ to $\tau = t'-t$ and taking $T\to\infty$ gives $F = 1 + (2/r)\int_0^\infty d\tau\, \tilde C(\tau)$, where $\tilde C$ is the continuous part of the correlation function. for a poisson process, $\tilde C(\tau)=0$, leading to $F=1$. thus, a fano factor greater than 1 is not really "more irregular than a poisson process", since any deviation of f from 1 comes from some kind of firing correlations. [ figure ( fig : js - ff ): f as a function of the overall synaptic strength, for 3 values of the relative inhibition parameter. ] for this model we have studied how the magnitude of the synaptic strengths affects the fano factor. we have used . in fig. [ fig : js - ff ] we plot f as a function of the overall scaling factor for three different values of the relative inhibition strength. evidently, increasing synaptic strength in either way increases f. how can we understand this result? let us think of the stochastic dynamics of the membrane potential after a spike and reset, as described, for example, by a fokker-planck equation. right after reset, the distribution of the membrane potential is a delta function at the reset level. then it spreads out diffusively and its center drifts toward a quasi-equilibrium level. the speed of the spread and the width of the quasi-equilibrium distribution reached after a time are both proportional to the synaptic strength. this distribution is only "quasi-equilibrium" because on the somewhat longer timescale of the typical interspike interval, significant weight will reach the absorbing boundary at threshold. nevertheless, we can regard it as nearly stationary if the rate is much less than . the center of the quasi-equilibrium distribution has to be at least a standard deviation or so below threshold if the neuron is going to fire at a low-to-moderate rate. thus, since this width is proportional to the synaptic strengths, if we fix the reset at zero and the threshold at 1, the drift of the distribution after reset will be _ upward _ for sufficiently weak strengths and _ downward _ for strong enough ones. hence, in the weak case, there is a reduced probability of spikes (relative to a poisson process) for times shorter than , leading to a refractory dip in the correlation function and a fano factor smaller than 1.
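before turning to the strong-synapse case, the fano factor estimate itself translates directly into a short estimator over repeated trials; the sketch below is a generic illustration, not the procedure used for the figures.

```python
import numpy as np

def fano_factor(spike_counts):
    """ratio of the variance of the spike count to its mean, over a set of trials."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var() / counts.mean()

# example: counts drawn from a poisson process give a fano factor close to 1
rng = np.random.default_rng(2)
poisson_counts = rng.poisson(lam=20.0, size=5000)
print(fano_factor(poisson_counts))       # ~1.0
```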
in the strong-synapse case, the rapid initial spread of the membrane potential distribution, before it has a chance to drift very far downward, leads to excess early spikes, a positive correlation function at short times, and a fano factor bigger than 1. the relevant ratio is the width of the quasi-equilibrium membrane potential distribution (for this model, roughly speaking, ) divided by the difference between reset and threshold. the above argument applies even for neurons with white noise input. but in the mean field description the firing correlations induced by this effect lead to correlations in the input current, which amplify the effects. in a second model, we add a touch of realism, replacing the current-based synapses by conductance-based ones. then the postsynaptic potential change produced by a presynaptic spike is equal to a strength parameter multiplied by the difference between the postsynaptic membrane potential and the reversal potential for the class of synapse in question. in addition, we include a simple model for synaptic dynamics: we need no longer assume that the postsynaptic potential changes instantaneously in response to the presynaptic spike. now the subthreshold membrane potential dynamics become . here, is a nonspecific leakage conductance (taken in units of inverse time); it corresponds to in ( [ eq : model1 ] ). the are the reversal potentials for the synapses from population ; they are above threshold for excitatory synapses and below 0 for inhibitory ones, so the synaptic currents are positive (i.e., inward) and negative (outward), respectively, in these cases. the time-dependent synaptic conductances reflect the firing of presynaptic neuron in population , filtered at its synapse to postsynaptic neuron in population : when a connection between these neurons is present; otherwise it is zero. (we assume the same random connectivity distribution as in the previous model.) we have taken the synaptic filter kernel to have the simple form , representing an average temporal conductance profile following a presynaptic spike, with characteristic opening and closing times and . this kernel is normalized so that ; thus, the total time integral of the conductance over the period after an isolated spike is equal to . hence, for very short synaptic filtering times, this model looks like ( [ eq : model1 ] ) with a membrane potential-dependent equal to . we take the (dimensionless) parameters , like the in the previous model, to be of order 1, so we anticipate a large ( ) mean current input from each population and, in the asynchronously-firing steady state, a near cancellation of these separately large currents. in mean field theory, we have the effective single-neuron equation of motion , in which the total effect of population on a neuron in population is a time-dependent conductance consisting of a population mean , static noise of variance , and dynamic noise with correlation function , where is the correlation function of the synaptically filtered spike trains of population . as for the model with current-based synapses, we can argue that in an irregularly, asynchronously-firing state the average should vanish.
from ( [ eq : dumf ] ) we obtain again , for large connectivity the leakage term can be ignored .in contrast to what we found in the current - based case , now the balance condition requires knowing the mean membrane potential .however , we will see that in the mean field limit the membrane potential has a narrow distribution centered just below threshold .since the fluctuations are very small , the factor in ( [ eq : dufull ] ) can be regarded as constant , and we are effectively back to the current - based model .thus , defining we can just apply the analysis from the current - based case .it is useful to measure the membrane potential relative to .so , writing and using the balance condition ( [ eq : condbal ] ) , we find where , { \label}{eq : gtot}\end{aligned}\ ] ] with the fluctuating parts of , the statistics of which are given by ( [ eq : fluctg ] ) and ( [ eq : covarg ] ) .this looks like a simple leaky integrator with current input and a time - dependent effective membrane time constant equal to .following shelley _ , ( [ eq : ddumf ] ) can be further rearranged into the form , { \label}{eq : chaseus}\ ] ] with the `` instantaneous reversal potential '' ( here measured relative to ) given by eq .( [ eq : chaseus ] ) says that at any instant of time , is approaching at a rate . follows the effective reversal potential closely , except when is above threshold . here, the threshold is 1 and the reset 0.94.,width=453,height=302 ] for large , is large ( ) , so the effective membrane time constant is very small and the membrane potential follows the fluctuating very closely . fig . [fig : vvs ] shows an example from one of our simulations .this is the main qualitative difference between mean field theory for the model with current - based synapses and the one with conductance - based ones .it is also the reason we introduced synaptic filtering into the present model . in the current - based one ,the membrane potential filtered the input current with a time constant which we could assume to be long compared with synaptic time constants , so we could safely ignore the latter .but here the effective membrane time constant becomes shorter than the synaptic filtering times , so we have to retain the kernel ( [ eq : synfilter ] ) . herewe have argued this solely from the fact that we are dealing with the mean field limit , but shelley _ et al . _argue that it actually applies to primary visual cortex ( see also ) .we also observe that in the mean field limit , both the leakage conductance and the fluctuation term are small in comparision with the mean , so we can approximate by a constant : furthermore , the fluctuations in the instantaneous reversal potential ( [ eq : instrevpot ] ) are then of order : membrane potential fluctuations can not go far from .but must go above threshold frequently enough to produce firing at the self - consistent rates .thus , must lie just a little below threshold , as promised above . hence , at fixed firing rates , the conductance - based problem effectively reduces to a current - based one with a very small effective membrane time constant and synaptic coupling parameters given by ( [ eq : jeff ] ) .of course , as we increase the firing rates of the external population and thereby increase the rates in the network , we will change , making both and the fluctuations correspondingly smaller .if we neglected synaptic filtering , the resulting dynamics would be rather trivial. 
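before examining the role of synaptic filtering further, the tracking behaviour shown in fig. [ fig : vvs ] can be reproduced with a minimal sketch: the membrane potential relaxes towards the instantaneous reversal potential at a rate given by the total conductance, so the larger the conductance, the smaller the lag. modelling the fluctuating reversal potential as an ornstein-uhlenbeck process is only a stand-in for the self-consistently generated conductance fluctuations, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def track_reversal(g_total, tau_s=0.005, dt=1e-5, t_max=0.2):
    """integrate dv/dt = -g_total * (v - v_s(t)), with the instantaneous reversal
    potential v_s(t) modelled as an ornstein-uhlenbeck process with correlation
    time tau_s (a stand-in for synaptically filtered input)."""
    n = int(t_max / dt)
    v, v_s, max_lag = 0.0, 0.0, 0.0
    for _ in range(n):
        v_s += dt * (-v_s / tau_s) + np.sqrt(2.0 * dt / tau_s) * 0.05 * rng.standard_normal()
        v += dt * (-g_total * (v - v_s))
        max_lag = max(max_lag, abs(v - v_s))
    return max_lag

# the larger the total conductance, the more tightly v follows v_s
for g in (100.0, 1000.0, 10000.0):
    print(g, track_reversal(g))
```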
it would be self-consistent to take the input current as essentially white noise, for then excursions of above threshold would be uncorrelated, and, since the membrane potential could react instantaneously to follow it up to threshold, so would the firing be. (simulations confirm this argument.) therefore, the synaptic filtering is essential. it imparts a positive correlation time to the fluctuations, so if it rises above threshold it can stay there for a while. during this time, the neuron will fire repeatedly, leading to a positive tail in the correlation function for times of the order of the synaptic time constants. the broader the kernel, the stronger this effect. in the self-consistent description, this effect feeds back on itself: acquires even longer correlations, and these lead to even stronger bursty firing correlations. thus, the mean field limit can be pathological in the conductance-based model with synaptic filtering. however, here we take the view that mean field theoretical calculations may still give a useful description of real cortical dynamics, despite the fact that real cortex is not described by the limit. for example, the true effective membrane time constant is not zero, but, according to experiment, it is significantly reduced from its _ in vitro _ value by synaptic input, probably to a value less than characteristic synaptic filtering times. doing mean field theory with moderately, but not extremely, large connectivities can describe such a state in a natural and transparent way. [ figure ( fig : tau - ff ): fano factor for 3 reset values. ] as in the current-based model, the fano factor grows with the ratio of the width of the membrane potential distribution to the threshold-reset difference. it also grows with increasing synaptic filtering time, as argued above. fig. [ fig : tau - ff ] shows plotted as a function of , with fixed at 1 ms, for a set of reset values. finally, we try to take a step beyond homogeneously random models to describe networks with systematic structure in their connections. we consider the example of a hypercolumn in primary visual cortex: a collection of orientation columns, within which each neuron responds most strongly to a stimulus of a particular orientation. the hypercolumn contains columns that cover the full range of possible stimulus orientations from 0 to . it is known that columns with similar orientation selectivities interact more strongly than those with dissimilar ones (because they tend to lie closer to each other on the cortical surface). we build this structure into an extended version of our model, which can be treated with essentially the same mean field methods as the simpler, homogeneously random one. in the version we present here, we revert to current-based synapses, but it is straightforward to construct a corresponding model with conductance-based ones. a study of a similar model, for non-spiking neurons, was reported by wolf _et al._ ; those authors have also simulated a network of spiking neurons like the one described here, complementing the mean-field analysis we give here. each neuron now acquires an extra index labeling the stimulus orientation to which it responds most strongly, so the equations of motion for the membrane potentials become . the term represents the external input current for a stimulus with orientation . (in this section we set all thresholds equal to 1, and refers only to orientation .
)we assume it comes through diluted connections from a population of poisson neurons which fire at a constant rate : as in the single - column model , we take the nonzero connections to have the value but now we take the connection probability to depend on the difference between and , according to . {\label}{eq : tunedin}\ ] ] this tuning is assumed to come from a hubel - wiesel feed - forward connectivity mechanism .the general form has to be periodic with period and so would have terms proportional to for all , but here , following ben - yishai _ et al . _ , we use the simplest form possible .we have also assumed that the degree of tuning , measured by the anisotropy parameter , is the same for inhibitory and excitatory postsynaptic neurons .assuming isotropy , we are free to measure all orientations relative to , so we set from now on . similarly , we take the nonzero intracortical interactions to be and take the probability of connection to be .{ \label}{eq : jtuning}\ ] ] analogously to ( [ eq : tunedin ] ) , is independent of both the population indices and , since we always take independent of . in real cortex ,cells are arranged so they generally lie closer to ones of similar than to ones of dissimilar orientation preference , and they are more likely to have connections with nearby cells than with distant ones .this is the anatomy that we model with ( [ eq : jtuning ] ) . in and a slightly different model was used , in which the connection probability was taken constant but the strength of the connections was varied like ( [ eq : jtuning ] ) .the equations below for the balance condition and the column population rates are the same for both models , but the fluctuations are a little different . as for the input tuning ,the form ( [ eq : jtuning ] ) is just the first two terms in a fourier series , but again we use the simplest possible form for simplicity . the balance condition for the hypercolumn model is simply that the net synaptic current should vanish for each column .going over to a continuum notation by writing the sum on columns as an integral , we get \sqrt{k_b } r_b(\theta ' ) = 0 .{ \label}{eq : meanorient}\ ] ] we have to distinguish two cases : broad and narrow tuning . in the broad case , the rates are positive for all .in the narrowly - tuned case ( the physiologically realistic one ) , is zero for greater than some , which we call the tuning width .( in general could be different for excitatory and inhibitory neurons , but with our -independent in ( [ eq : tunedin ] ) and - and -independent in ( [ eq : jtuning ] ) , it turns out not to . ) in the broad case the integral over can be done trivially with the help of the trigonometric identity and expanding .we find that the higher fourier components , , do not enter the result : if ( [ eq : broadbalance ] ) is to hold for every , the constant piece and the part proportional to both have to vanish : for each fourier component we have an equation like ( [ eq : balance ] ) .thus we get a pair of equations like ( [ eq : balsoln ] ) : {ab } j_{b0}r_0 & \;\;\;\ ; & r_{a,2 } = -\frac{2\epsilon}{\gamma}\sum_{b=1}^2 [ { \sf \hat j}^{-1}]_{ab } j_{b0}r_0 \ ; ( = \frac{2\epsilon}{\gamma}r_{a,0 } ) , { \label}{eq : broadsoln}\end{aligned}\ ] ] where , as in the simple model .this solution is acceptable only if , since otherwise will be negative for .therefore , for , we make the _ ansatz _ ( i.e. 
, we write as ) for and for .we put this into the balance condition ( [ eq : meanorient ] ) .now the integrals run from to , so they are as trivial as in the broadly - tuned case , but the _ ansatz _ works and we find where ( the algebra here is essentially the same as that in a different kind of model studied by ben - yishai _ et al ._ ; see also . ) eqns .( [ eq : balnarrow ] ) can be solved for and , .dividing one equation by the other leads to the following equation for : then one can use either of the pairs of equations ( [ eq : balnarrow ] ) to find the remaining unknowns : {ab } j_{b0}r_0 .{ \label}{eq : r2}\ ] ] the function takes the value 1 at and falls monotonically to at .thus , a solution can be found for . for , and we go back to the broad solution . for , : the tuning of the rates becomes infinitely narrow .note that stronger tuning of the cortical interactions ( bigger ) leads to broader orientation tuning of the cortical rates .this possibly surprising result can be understood if one remembers that the cortical interactions ( which are essentially inhibitory ) act divisively ( see , e.g. , ( [ eq : broadsoln ] ) and ( [ eq : r2 ] ) ) .another feature of the solution is that , from ( [ eq : findtc ] ) , the tuning width does not depend on the input rate , which we identify with the contrast of the stimulus .thus , in the narrowly - tuned case , the population rates in this model automatically exhibit contrast - invariant tuning , in agreement with experimental findings .we can see that this result is a direct consequence of the balance condition .however , we should note that individual neurons in a column will exhibit fluctuations around the mean tuning curves which are not negligible , even in the mean - field limit . these come from the static part of the fluctuations in the input current ( like the second term in ( [ eq : decrec ] ) for the single - column model ) , which originate from the random inhomogeneity of the connectivity in the network . as for the single - column model , the full solution , including the determination of the rate fluctuations and correlation functions , has to be done numerically .this only needs a straightforward extension of the iterative procedure described above for the simple model .we now consider the tuning of the dynamic input noise .using the continuum notation , we get input and recurrent contributions adding up to (\theta ' , t - t ' ) , { \label}{eq : noise}\end{aligned}\ ] ] where is the correlation function for population in column . 
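As a numerical aside on the balanced rates derived above, the zeroth- and second-harmonic components reduce to a small linear-algebra problem. The sketch below assumes illustrative values for the intracortical coupling matrix, the input weights, the external rate and the anisotropy parameters (none of these numbers come from the text); it solves the zeroth-harmonic balance equations, scales the second harmonic as indicated above, and checks whether the resulting rates stay positive, which is the validity condition for the broadly tuned solution.

```python
import numpy as np

# Hypothetical values; the paper gives no numbers at this point.
J = np.array([[1.0, -2.0],      # intracortical couplings J_ab for populations (E, I)
              [1.0, -1.8]])
J0 = np.array([1.0, 0.8])       # input weights J_a0 from the external population
r0 = 10.0                       # external rate
eps, gamma = 0.1, 0.3           # input anisotropy and cortical-interaction anisotropy

# Zeroth Fourier component of the balanced rates: r_{a,0} = -sum_b [J^{-1}]_{ab} J_{b0} r0
r_mean = -np.linalg.solve(J, J0) * r0
# Second harmonic in the broadly tuned regime: r_{a,2} = (2*eps/gamma) * r_{a,0}
r_second = (2.0 * eps / gamma) * r_mean

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
rates = r_mean[:, None] + r_second[:, None] * np.cos(2.0 * theta)[None, :]
broad_solution_valid = rates.min() > 0.0   # otherwise the narrowly tuned ansatz is needed
print(r_mean, r_second, broad_solution_valid)
```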
we can not proceed further analytically for , since this correlation function has to be determined numerically .but we know that for an irregularly firing state always has a piece proportional to .this , together with the external input noise , gives a flat contribution to the noise spectrum of (\theta ' ) .{ \label}{eq : white}\ ] ] the integrals on are of the same kind we did in the calculation of the rates above , so we get \nonumber \\ \ ; & = & r_0(1+\epsilon \cos 2\theta)(j_{a0}^2 - \sum_{bc } j_{ab}^2 [ { \sf \hat j}^{-1}]_{bc } j_{c0 } ) , { \label}{eq : finalnoise}\end{aligned}\ ] ] where we have used ( [ eq : findtc ] ) and ( [ eq : r2 ] ) to obtain the last line .thus , the recurrent synaptic noise has the same orientation tuning as that from the external input , unlike the population firing rates , which are narrowed by the cortical interactions .to study the tuning of the noise in the neuronal firing , we have to carry out the full numerical mean field computation .[ fig : fanotuning ] shows results for the tuning of the fano factor with , for three values of the overall synaptic strength factor . for small is a minimum in at the optimal orientation ( 0 ) , while for large there is a maximum .it seems that for any , is either less than 1 at all angles or greater than 1 at all angles ; we have not found any cases where the firing changes from subpoissonian to superpoissonian as the orientation is varied . for 3 values of .,width=453,height=302 ]these examples show the power of mean field theory in studying the dynamics of dense , randomly - connected cortical circuits , in particular , their firing statistics , described by the autocorrelation function and quantities derived from it , such as the fano factor .one should be careful to distinguish this kind of mean field theory from ones based on `` rate models '' , where a function giving the firing rate as a function of the input current is given by hand as part of the model . by construction, those models can not say anything about firing statistics . herewe are working at a more microscopic level , and both the equations for the firing rates and the firing statistics emerge from a self - consistent calculation .we think it is important to do a calculation that can tell us something about firing statistics and correlations , since the irregular firing of cortical neurons is a basic experimental fact that should be explained , preferably quantitatively , not just assumed .we were able to see in the simplest model described here how this irregularity emerges in mean field theory , provided cortical inhibition is strong enough .this confirms results of , but extends the description to a fully self - consistent one , including correlations .it became apparent how the strength of synapses and the post - spike reset level controlled the gross characteristics of the firing statistics , as measured by the fano factor . a high reset level and/or strong synapses result in an enhanced probability of a spike immediately after reset , leading to a tendency toward bursting .low reset and/or weak synapses have the opposite effect .visual cortical neurons seem to show typical fano factors generally somewhat above the poisson value of 1 .they have also been shown to have very high synaptic conductances under visual stimulation .our mean - field analysis of the model with conductance - based synapses shows how these two observed properties may be connected . 
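As a concrete reference for the quantity analyzed throughout this section, a Fano factor can be estimated directly from any spike train by counting spikes in non-overlapping windows. The sketch below is the generic empirical estimator, not the self-consistent mean-field computation used in the paper; the homogeneous Poisson train generated here, which should give a value near 1, is only a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(spike_times, t_max, window):
    """Fano factor of spike counts in non-overlapping windows: Var(N) / Mean(N)."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()

# Homogeneous Poisson placeholder train: F should come out close to 1.
t_max, mean_rate = 200.0, 5.0
n_spikes = rng.poisson(mean_rate * t_max)
train = np.sort(rng.uniform(0.0, t_max, n_spikes))
print(fano_factor(train, t_max, window=1.0))
```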
in view of the large variation of fano factors that there could be , it is perhaps remarkable that observed values do not vary more than they do .we like to speculate about this as a coding issue : any constrained firing correlations imply reduced freedom to encode input variations , so information transmission capacity is maximized when correlations are minimized .thus , plausibly , evolutionary pressure can be expected to keep synaptic strengths in the right range .finally , extending the description from a single `` cortical column '' to an array or orientation columns forming a hypercolumn provided a way of understanding the intracortical contribution to orientation tuning , consistent with the basic constraints of dominant inhibition , irregular firing and high synaptic conductance .these issues can also be addressed directly with large - scale simulations , as in .however mean field theory can give some clearer insight ( into the results of such simulations , as well as of experiments ) , since it reduces the network problem to a single - neuron one , which we have a better chance of understanding .so we think it is worth doing mean field theory even when it becomes as computationally demanding as direct network simulation ( as it does in the case of the orientation hypercolumn ) . at the least, comparison between the two approaches can allow one to identify clearly any true correlation effects , which are , by definition , not present in mean field theory .much more can be done with mean field theory than we have described here .first , as mentioned above , it can be done for any kind of neuron , even a multi - compartment one , not just a point integrate - and - fire one . at the level of the connectivity model ,the simple model described by ( [ eq : jtuning ] ) can also be extended to include more details of known functional architecture ( orientation pinwheels , layered structure , etc . ) .it is also fairly straightforward to add synaptic dynamics ( not just the average description of opening and closing of the channels on the postsynaptic side described by the kernel in ( [ eq : conductance ] ) ) .one just has to add a synaptic model which takes the spikes produced by our single effective neuron as input to a synaptic model .thus , the path toward including more potentially relevant biological detail is open , at non - prohibitive computational cost .
we review the use of mean field theory for describing the dynamics of dense , randomly connected cortical circuits . for a simple network of excitatory and inhibitory leaky integrate - and - fire neurons , we can show how the firing irregularity , as measured by the fano factor , increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike . generalizing the model to include conductance - based synapses gives insight into the connection between the firing statistics and the high - conductance state observed experimentally in visual cortex . finally , an extension of the model to describe an orientation hypercolumn provides understanding of how cortical interactions sharpen orientation tuning , in a way that is consistent with observed firing statistics .
optimal experiment design ( oed ) or shortly optimal design is a sub - field of optimal control theory which concentrates design of an optimal control law aiming at the maximization of the information content in the response of a dynamical system related to its parameters .the statistical advantage brought by information maximization helps the researchers to generate the best input to their target plant / system that can be used in a system identification experiment producing estimates with minimum variance . with the utilization of mathematical models in theoretical neuroscience research ,the application of optimal experiment design in adaptive stimuli generation should be beneficial as it is expected to have better evaluations of the model specific parameters from the collected stimulus - response data .though these benefits , the optimal experiment design have not found its place among theoretical or computational neuroscience research due to the nature of the models . as the stimulus - response relationship is naturally quite non - linear , computational complexity of the optimization algorithms utilized for an optimal experiment design will typically be very high and thus oed has not gained enough attraction during the past decades .however , thanks to the today s computational powers of new microprocessors , it will be much easier to talk about a real optimal experiment design in neuroscience research ( , ) . in the past decades, some researchers had stimulated their models by gaussian white noise stimuli , and performed an estimation of input - output relationships of their model ( and ) .this algorithmically simpler approach is theoretically proven to be efficient in the estimation of models based on linear filters and their cascades . however , in , it is suggested that white noise stimuli may not be successful as a stimuli in the parametric identification of non - linear response models due to high level of parameter confounding ( refer to for a detailed description of the confounding phenomenon in non - linear models ) . concerning the applications of optimal experiment design to biological neural network models ,there exist a limited amount of research .one such example is where a static non - linear input output mapping is utilized as a neural stimulus - response model .the optimal design of the stimuli is performed by the maximization of the d - optimal metric of the fisher information matrix ( fim ) which reflects a minimization of the variance of the total parametric error of the model network . in the last research ,the parameter estimation is based on the maximum a posteriori ( map ) estimation methodology which is linked to the maximum likelihood estimation ( ml ) approach .two other successful mainly experimental work on applications of optimal experiment design to adaptive data collection are and .the experimental works successfully proven the efficiency of optimal designs for certain models in theoretical neuroscience . however , none of those studies explore fully dynamical non - linear models explicitly . 
because of this deficiency , this research will concentrate on an application of the optimal experiment design to a fully dynamical non - linear model .the final goal is almost similar to that of .the proposed model is a continuous time dynamical recurrent neural network ( ctrnn ) in general and it also represents the excitatory and inhibitory behaviours of the realistic biological neurons .like in that of and its derivatives , the ctrnn describes the dynamics of the membrane potentials of the constituent neurons .however , the channel activation dynamics is not directly represented .instead it constitutes , a more generic model which can be applied to a network having any number of neurons .the dynamic properties of the neuron membrane is represented by time constants and the synaptic excitation and inhibition are represented as network weights ( scalar gains ) . though not the same , a similar excitatory - inhibitory structure is utilized in numerous studies such as .as there is nt sufficient amount of research on the application of oed to dynamical neural network models , it will be convenient to start with a basic network model having two neurons representing the average of excitatory and inhibitory populations respectively .the final goal is to estimate the time constants and weight parameters .the optimal experiment design will be performed by maximizing a certain metric of the fim .the fim is a function of stimulus input and network parameters .as the true network parameters are not known in the actual problem , the information matrix should depend on the estimated values of the parameters in the current step .an optimization on a time dependent variable like stimulus will not be easy and often its parametrization is required . in auditory neuroscience point of view , that can be done by representing the stimuli by a sum of phased cosine elements .if periodic stimulation is allowed , these can be formed as harmonics based on a base stimulation frequency .the optimally designed stimulus will be the driving force of a joint maximum likelihood estimation ( jmle ) process which involves all the recorded response data .unfortunately , the recorded response data will not be continuous .the reason for this is that , in vivo measurements of the membrane potentials are often very difficult and dangerous as the direct manipulation with the neuron in vivo may trigger the death of a neuron .thus , in the real experimental set - up , the peaks of the membrane potentials are collected as firing instants . as a result, one will only have a neural spike train with the exact neural spiking times ( timings of the membrane potential peaks ) but no other data .this outcome prevents one to apply traditional parameter estimation techniques such as minimum mean square estimation ( mmse ) as it will require continuous firing rate ( is based on the membrane potential ) data .researches like , suggests that the neural spiking profile of sensory neurons obey the famous inhomogeneous poisson distribution . 
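Since the recorded responses are taken to follow an inhomogeneous Poisson process, spike trains for a given rate function can be simulated with the standard thinning (Lewis-Shedler) algorithm. The sketch below is a generic implementation; the sinusoidal rate profile is an arbitrary placeholder, whereas in the present model the rate would be the firing rate of the excitatory unit obtained by simulating the network for a given stimulus.

```python
import numpy as np

rng = np.random.default_rng(1)

def inhomogeneous_poisson(rate_fn, t_max, rate_max):
    """Thinning (Lewis-Shedler): propose spikes at the constant rate rate_max and
       keep each proposal with probability rate_fn(t) / rate_max (requires rate <= rate_max)."""
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            break
        if rng.uniform() < rate_fn(t) / rate_max:
            spikes.append(t)
    return np.array(spikes)

# Placeholder rate profile in Hz; in the model this would be the excitatory unit's firing rate.
rate = lambda t: 20.0 + 15.0 * np.sin(2.0 * np.pi * t)
train = inhomogeneous_poisson(rate, t_max=3.0, rate_max=35.0)
print(len(train), "spikes")
```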
under this assumption, the fisher information matrix and likelihood functions can be derived based on poisson statistical distribution .the optimization of a certain measure of fisher information matrix and the likelihood can be performed by readily available packages such as matlab^^ optimization toolbox ( like well known _ fmincon _algorithm ) .there are certain challenges in this research .first of all , the limited availability of similar studies lead to the fact that this work is one of the first contributions on the applications of optimal experiment design to the dynamical neural network modelling .secondly , we will most probably not be able to have a reasonable estimate just from a single spiking response data set as we do not have a continuous response data .this is also demonstrated in the related kernel density estimation research such as . from these sources, one will easily note that repeated trials and superimposed spike sequences are required to obtain a meaningfully accurate firing rate information from the neural response data . in a real experiment environment , repeating the trials with the same stimulus profile will not be appropriate as the repeated responses of the same stimulus are found to be attenuated . because of this issue , a new stimulus should be designed each time based on the currently estimated parameters of the model and then it should be used in an updated estimation .these updated parameters are used in the next step to generate the new optimal stimulus . as a result onewill have a new stimulus in each step and thus the risk of response attenuation is largely reduced . in a maximum likelihood estimation ,the likelihood function will depend on the whole spiking data obtained throughout the experiment ( or simulation ) .the parallel processing capabilities of matlab^^ ( i.e. _ parfor _ ) on multiple processor / core computers will help in resolving of those issues .the continuous time recurrent neural networks have a similar structure to that of the discrete time counterparts that are often met in artificial intelligence studies . in * figure[ fig : generic - ctrnn ] * , one can see a general continuous time network that may have any number of neurons .the mathematical representation of this generic model can be written as shown below : where is the time constant , is the membrane potential of the neuron , is the synaptic connection weight between the and neurons is the connection weight from input to the neuron and is the input .the term is a membrane potential dependent function which acts as a variable gain on the synaptic inputs to from the neuron to the one .it can be shown by a logistic sigmoid function which can be shown as : where is the maximum rate at which the neuron can fire , is a soft threshold parameter of the neuron and is a slope constant .this is the only source of non - linearity in .in addition it also models the activation - inactivation behaviour in more specific models of the neuron ( like ) .the work by shows that gives a relationship between the firing rate and membrane potential of the neuron . insensory nervous system , some of neurons have excitatory synaptic connections while some have inhibitory ones .this fact is reflected to the model in by assigning negative values to the weight parameters which are originating from neurons with inhibitory synaptic connections . 
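A minimal forward simulation of the two-unit excitatory-inhibitory network can be written directly from the equations above. In the sketch below, the numerical parameter values, the gain-function constants and the use of a single logistic gain for both units are assumptions made only for illustration; beta is treated as the inverse membrane time constant (the time constant moved to the right-hand side), and the minus signs in front of w_ei and w_ii encode the inhibitory origin of those connections.

```python
import numpy as np

def g(v, r_max=100.0, v_thr=0.2, k=0.2):
    """Logistic firing-rate function: maximum rate r_max, soft threshold v_thr, slope 1/k."""
    return r_max / (1.0 + np.exp(-(v - v_thr) / k))

def simulate_ei(stim, dt, beta_e=100.0, beta_i=50.0,
                w_ee=0.015, w_ei=0.02, w_ie=0.02, w_ii=0.005, w_e=1.0, w_i=0.5):
    """Euler integration of the E-I network; all parameter values are placeholders."""
    v_e = np.zeros(len(stim))
    v_i = np.zeros(len(stim))
    for n in range(len(stim) - 1):
        dv_e = beta_e * (-v_e[n] + w_ee * g(v_e[n]) - w_ei * g(v_i[n]) + w_e * stim[n])
        dv_i = beta_i * (-v_i[n] + w_ie * g(v_e[n]) - w_ii * g(v_i[n]) + w_i * stim[n])
        v_e[n + 1] = v_e[n] + dt * dv_e
        v_i[n + 1] = v_i[n] + dt * dv_i
    return v_e, v_i, g(v_e)          # the observable is the excitatory firing rate g(v_e)

dt = 1e-3
t = np.arange(0.0, 3.0, dt)
pulse = ((t > 0.5) & (t < 1.5)).astype(float)   # square-wave stimulus, as in the pulse-response example
v_e, v_i, rate_e = simulate_ei(pulse, dt)
print(rate_e.min(), rate_e.max())
```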
in the introduction of this research, it is stated that it would be convenient to apply the theory to a basic network first of all due to the lack of related research and computational complexity .so a basic excitatory and inhibitory continuous time recurrent dynamical network can be written as shown in the following : where the subscripts and stands for excitatory and inhibitory neurons respectively .starting from now on , we will have a single stimulus and it will be represented by the term which will be generated by the optimal design algorithm .in addition in order to suit the model equations to the estimation theory formalism the time constant may be moved to the right hand side as shown below : =\left[\begin{array}{cc } \beta_{e } & 0\\ 0 & \beta_{i } \end{array}\right]\left\ { -\left[\begin{array}{c } v_{e}\\ v_{i } \end{array}\right]+\left[\begin{array}{cc } w_{ee } & -w_{ei}\\ w_{ie } & -w_{ii } \end{array}\right]\left[\begin{array}{c } g_{e}\left(v_{e}\right)\\ g_{i}\left(v_{i}\right ) \end{array}\right]+\left[\begin{array}{c } w_{e}\\ w_{i } \end{array}\right]i\right\ } \label{eq : our - model - matrix - form}\ ] ] note that this equation is written in matrix form to be conformed to the formal non - linear system forms . a descriptive illustration related tois presented in * figure [ fig : generic - ctrnn]b*. it should also be noted that , in and the weights are all assumed as positive coefficients and they have signs in the equation .so negative signs indicate that originating neuron is inhibitory ( tend to hyper - polarize the other neurons in the network ). the theoretical response of the network in will be the firing rate of the excitatory neuron as . in the actual environment ,the neural spiking due to the firing rate is available instead . while introducing this research ,it is stated that this spiking events conform to an inhomogeneous poisson process which is defined below : =\frac{e^{-\lambda } \lambda^{k}}{k ! } \label{eq : inhomogeneous - poisson}\ ] ] where is the mean number of spikes based on the firing rate which varies with time , and indicates the cumulative total number of spikes up to time , so that is the number of spikes within the time interval . in other words , the probability of having number of spikes in the interval given by the poisson distribution above .consider a spike train in the time interval . herethe spike train is described by a list of the time stamps for the spikes .the probability density function for a given spiking train can be derived from the inhomogeneous poisson process .the result reads : this probability density describes how likely a particular spike train is generated by the inhomogeneous poisson process with the rate function .of course , this rate function depends implicitly on the network parameters and the stimulus used .the network parameters to be estimated are listed below as a vector : = \left[\beta_e,\beta_i , w_e , w_i , w_{ee},w_{ei},w_{ie},w_{ii}\right]\label{eq : theta - ctrnn - param}\ ] ] which includes the time constants and all the connection weights in the e - i network .our maximum - likelihood estimation of the network parameters is based on the likelihood function given by , which takes the individual spike timings into account .it is well known from estimation theory is that maximum likelihood estimation is asymptotically efficient , i.e. , reaching the cramr - rao bound in the limit of large data size . 
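The logarithm of the spike-train density above takes the familiar point-process form: the sum of the log-rate at the spike times minus the rate integrated over the observation window. A minimal numerical version is sketched below; it assumes the rate is available on a time grid, which in this setting would come from simulating the network with the current parameter vector, and the placeholder rate and spike times are there only to make the snippet self-contained.

```python
import numpy as np

def poisson_log_likelihood(spike_times, rate, t_grid):
    """log p({t_k}) = sum_k log(lambda(t_k)) - integral_0^T lambda(t) dt
       for an inhomogeneous Poisson process with rate lambda(t) sampled on t_grid."""
    dt = t_grid[1] - t_grid[0]
    lam_at_spikes = np.interp(spike_times, t_grid, rate)
    return np.sum(np.log(lam_at_spikes)) - np.sum(rate) * dt

# Placeholder rate and spike times, only to exercise the function.
t_grid = np.arange(0.0, 3.0, 1e-3)
rate = 20.0 + 15.0 * np.sin(2.0 * np.pi * t_grid)
spikes = np.array([0.12, 0.35, 0.80, 1.10, 2.40])
print(poisson_log_likelihood(spikes, rate, t_grid))
```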
to extend the likelihood function in to the situation where there are multiple spike trains elicited by multiple stimuli , consider a sequence of stimuli .suppose the -th stimulus ( ) elicits a spike trains with a total of spikes in the time window ] , optimizing a single scalar function is sufficient to recover all the parameters .when a sequence of stimuli are generated by optimal design , the stimuli may sometimes alternate spontaneously as if the optimization is performed with respect to each of the parameters one by one . in our network model, the recorded spike train has an inhomogeneous poisson distribution with the rate function .we write this rate as to emphasize that it is a time - varying function that depends on both the stimulus and the network parameters . for a small time window of duration and centered at time , the fisher information matrix entry in is reduced to : since the poisson rate function varies with time , the a - optimal utility function in should be modified by including integration over time : here the time window is ignored because it is a constant coefficient that does not affect the result of the optimization . for convenience , we can also define the objective function with respect to a single parameter as follows : the objective function in is identical to .the optimization of the d - optimal criterion in is not affected by parameter rescaling , or changing the units of parameters .for example , changing the unit of parameter 1 ( say , from msec to sec ) is equivalent to rescaling the parameter by a constant coefficient : .the effect of this transformation is equivalent to a rescaling of the determinant of the fisher information matrix by a constant , namely , , which does not affect the location of the maximum of .by contrast , the criterion function in or are affected by parameter rescaling .a parameter with a smaller unit would tend to have larger derivative value and therefore contribute more to than a parameter with a large unit . to alleviate this problem, we use one by one to generate the stimuli .that is , stimulus 1 is generated by maximizing , and stimulus 2 is generated by maximizing , and so on .once the 8th stimulus is generated by maximizing , we go back and use to generate the next stimulus , and so on .finally , an alternative way to get rid of scale dependence is to introduce logarithm and use as the criterion , which , however , may become degenerate when approaches 0 .as just explained in the previous section about the computational issues in this research , the gradient computation decreases the computation durations considerably .the main issue with this fact is the lack of closed form expressions like in the case of static non - linear mappings as the model . in researches such as , and gradients are computed as a self contained differential equation which is formed by taking the derivatives of the model equations and from both sides .compiling all the information in this section one can write the gradient of the fisher information measure ( i.e. the fisher information matrix with a certain optimality criterion such as a - optimality ) . 
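Numerically, once the rate and its parameter derivatives are available on a time grid (here they come from integrating the sensitivity differential equations), the Poisson Fisher information matrix and the single-parameter design objective reduce to time integrals. The sketch below is a straightforward discretization under that assumption; the fake derivative arrays exist only so the snippet runs on its own, and the D-optimal determinant is included merely for comparison.

```python
import numpy as np

def fisher_information(rate, rate_grad, dt):
    """Poisson FIM: I_ij = integral over [0, T] of (dlam/dtheta_i)(dlam/dtheta_j) / lam dt.
       rate has shape (T,), rate_grad has shape (P, T)."""
    return dt * (rate_grad / rate) @ rate_grad.T       # shape (P, P)

def single_parameter_objective(rate, rate_grad_i, dt):
    """Diagonal entry maximized when a stimulus is designed for one parameter at a time."""
    return dt * np.sum(rate_grad_i ** 2 / rate)

def d_optimal(fim):
    """Determinant criterion, insensitive to rescaling of the parameters."""
    return np.linalg.det(fim)

# Placeholder rate and derivative arrays, only to exercise the functions.
dt = 1e-3
t = np.arange(0.0, 3.0, dt)
rate = 20.0 + 15.0 * np.sin(2.0 * np.pi * t)
rate_grad = np.vstack([np.cos(2.0 * np.pi * t), np.ones_like(t)])   # fake dlam/dtheta for 2 parameters
fim = fisher_information(rate, rate_grad, dt)
print(fim)
print(single_parameter_objective(rate, rate_grad[0], dt), d_optimal(fim))
```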
in the beginning of this section , it is stated that the sensitivity levels of the firing rate w.r.to different network parameters are different and thus it would be convenient to maximize the fisher information for a single parameter at a time .the optimization as expressed in the optimal design problem is converted into a parameter optimization problem to optimize the amplitudes s and s of the stimulus . for the sake of simplicity and modularity in programming, these equations can be written using their shorthand notations .let =[a_1,\ldots , a_{n } , \phi_1,\ldots , \phi_{n } ] \label{eq : param - stimulus}\ ] ] we write the state of the network as a vector : ^{\rm t } ] , } ] , and } ] .it follows from the original dynamical equation in that where ^{\rm t}} ] which have direct effect on the neural model behaviour .this research targeted the estimation of the network parameters only . because ofthat , the parameters of the gain functions are kept as fixed and they have the values .this set of parameters ( gain functions and * table [ tab : true - values - and - statistics ] * ) allows the network to have a unique equilibrium state for each stationary input . to demonstrate the excitatory and inhibitory characteristics of our model, we can stimulate the model with a square wave ( pulse ) stimulus as shown in * figure [ fig : pulse - response - ctrnn]a*. the resultant excitatory and inhibitory neural membrane potential responses ( and ) are shown in * figure [ fig : pulse - response - ctrnn]b * and * figure [ fig : pulse - response - ctrnn]c*. it can be said that , the network has shown both transient and sustained responses . in * figure [ fig : pulse - response - ctrnn]d * , the excitatory firing rate response which is related to excitatory potential as is shown . the response is slightly delayed which leads to the depolarization of excitatory unit until .this delay is also responsible from the subsequent re - polarization and plateau formation in the membrane potential of excitatory neuron .the firing rate is higher during excitation and lower in subsequent plateau and repolarization phases ( * figure [ fig : pulse - response - ctrnn]d * ) .b * in response to a square - wave stimulus .the states of the excitatory and inhibitory units , and , are shown , together with the continuous firing rate of the excitatory unit , .the firing rate of the excitatory unit ( bottom panel ) has a transient component with higher firing rates , followed by a plateau or sustained component with lower firing rates . ]the optimisation of the stimuli requires that the maximum power level in a single stimulus is bounded .this is a precaution to protect the model from potential instabilities due to over - stimulation .in addition , if applied in a real environment the experiment subject will also be protected from such over - stimulations . as the amplitude parameter is assumed positive , assigning an upper bound defined as should be enough .this is applied to all stimulus amplitudes ( i.e. ) . in this research ,a fixed setting of is chosen .the lower bound is obviously . 
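The phased-cosine parametrization of the stimulus is easy to make concrete. In the sketch below the number of components, the amplitude bound and the base frequency are illustrative choices rather than the values used in the paper; the amplitude and phase vectors are exactly the design variables that the optimal-design step would adjust.

```python
import numpy as np

def phased_cosine_stimulus(t, amplitudes, phases, f0):
    """I(t) = sum_n A_n cos(2*pi*n*f0*t + phi_n), over the harmonics n = 1..N of a base frequency f0."""
    n = np.arange(1, len(amplitudes) + 1)[:, None]       # harmonic index, shape (N, 1)
    return np.sum(amplitudes[:, None]
                  * np.cos(2.0 * np.pi * n * f0 * t[None, :] + phases[:, None]), axis=0)

rng = np.random.default_rng(2)
n_comp, a_max, f0 = 10, 1.0, 1.0 / 3.0     # illustrative: 10 harmonics, unit amplitude bound, 3 s period
t = np.arange(0.0, 3.0, 1e-3)
amplitudes = rng.uniform(0.0, a_max, n_comp)     # design variables, bounded in [0, a_max]
phases = rng.uniform(-np.pi, np.pi, n_comp)      # design variables
stimulus = phased_cosine_stimulus(t, amplitudes, phases, f0)
print(stimulus.shape, stimulus.max())
```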
for the phase , no lower or upper bounds are necessary as the cosine function itself has already been bounded as .the frequencies of the stimulus components are harmonics of a base frequency .since we have a simulation time of seconds , we have a reasonable choice of hz which is in fact equal to .so we have chosen an integer relationship between the stimulation frequency and simulation time .the number of stimulus components is chosen as which is found to be reasonable concerning speed and performance balance .it is well known that optimization algorithms such as _ fmincon _ requires an initial guess of the optimum solution .a suitable choice for the initial guesses can be their assignment from a set of initial conditions generated randomly between the optimization bounds . in the optimization of stimuli , the initial amplitudes can be uniformly distributed between ] .although we do not have any constraints on the phase parameter , we limit the initial phase values to a safe assumed range . we follow a similar strategy for the parameter estimation based on maximum likelihood method .the multiple initial guesses will be chosen from a set of values uniformly distributed between the lower and upper bounds defined in * table [ tab : true - values - and - statistics]*. in * section [ sub : jmle - theory ] * , one recall from that the likelihood estimation should produce better results when the number of samples ( i.e. in ) increases .because of this fact , the likelihood function is based on data having all spikes generated since the beginning of the simulation .the number of repeats determines .if simulation is repeated times ( iterations ) , one will have an value of due to the fact that each iteration has optimal designs sub - steps ( with respect to each parameter .read * section [ sub : oed - theory ] * ) .so , if one has 15 iterations , which means likelihood has samples .this also means that optimal design and subsequent parameter estimation will also be repeated times . having all necessary information from * section [ sub : example - details ] * , one can perform an optimal design and obtain a sample optimal stimulus and associated neural responses * figure [ fig : stimexamplederivatives]*. it is noted that the optimal stimulus in * figure [ fig : stimexamplederivatives ] * top panel has a periodic variation as it is modelled as a phased cosines form ( as it is equivalent to real valued fourier series ) . .driven by the same stimulus , the response of eight derivatives variables , namely , the derivatives of the firing rate with respect to all the network parameters , are shown as red curves .these derivatives were solved directly from differential equations . 
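The role played here by fmincon with multiple random initial guesses can be sketched with any bounded local optimizer; below, SciPy's L-BFGS-B is used as a stand-in and the toy objective exists only to make the snippet self-contained. In the actual procedure the objective would be the negative joint Poisson log-likelihood over all recorded spike trains, with bounds taken from the parameter table.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def fit_multistart(objective, lower, upper, n_starts=8):
    """Run a bounded local optimizer from several random initial guesses drawn
       uniformly between the bounds and keep the best local optimum."""
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)
        res = minimize(objective, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy objective standing in for the negative log-likelihood of the spiking data.
toy = lambda x: (x[0] - 2.0) ** 2 + 10.0 * (x[1] - 0.5) ** 2
lower, upper = np.array([0.0, 0.0]), np.array([5.0, 5.0])
print(fit_multistart(toy, lower, upper).x)
```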
] in addition to the fundamental responses , the second half of * figure [ fig : stimexamplederivatives ] * displays the variation of the parametric sensitivity derivatives which are generated by integrating and then substituting to .the variation of the sensitivity derivatives support the idea of optimization of the fisher information metric with respect to a single parameter ( see ) .it appears that , the weight parameters have a very high sensitivity which are more than times that of the parameters .also some of the parameters affecting the behaviour of inhibitory ( i ) unit and also the e - i interconnection weights have a reverse behaviour .this fact appears as a negative values variation of the sensitivity derivatives .self inhibition coefficient does not show this behaviour as it represent an inhibitory effect on the inhibitory neuron potential ( which favours excitation ) .one of the most distinguishing result related to optimal design is the statistics of the optimal stimuli ( i.e. the amplitudes and phases ) .this analysis can be performed by generating the histograms of and from available data as shown in * figure [ fig : stimulus - amplitude - histogram]*. the * figure [ fig : stimulus - amplitude - histogram]a * shows the flat uniform distribution of the random stimuli amplitudes and phases .this is expected as the stimuli is generated directly from a uniform distribution . concerning the optimal stimuli , the histograms shown in * figure [ fig : stimulus - amplitude - histogram]b * reveals that optimal design has a tendency to maximize the amplitude of the stimuli towards the upper bound .this reassures that optimal design tends to maximize the stimulus power which is expected increase the efficiency of the parameter estimation process and it distinguishes optimal stimuli from their randomly generated counterparts . and phases .( * a * ) random stimuli were generated by choosing their fourier amplitudes and phases randomly from uniform distributions .a total of 12,000 random stimuli were used in each plot .( * b * ) the optimally designed stimuli showed some structures in the distributions of their fourier amplitudes and phases , which differ radically from a uniform distribution .a total of 12,000 optimally designed stimuli were used in each plot . ]given a dataset consisting of stimulus - response pairs , we can always use maximum - likelihood estimation to fit a model to the data to recover the parameters .maximum - likelihood estimation is known to be asymptotically efficient in the limit of large data size , in the sense that the estimation is asymptotically unbiased ( i .. e , average of the estimates approaches the true value ) and has minimal variance ( i.e. , the variance of the estimates approaches the cramr - rao lower bound ) .we found that maximum likelihood obtained from the optimally design stimuli was always much better than that obtained from the random stimuli ( see * figure [ fig : likelihood - traces - box ] * ) .it also reveals that , the likelihood value increases as the number of stimuli increases . 
forany given number of stimuli , the optimally designed stimuli always yielded much greater likelihood value than the random stimuli .the minimum difference between the likelihood values ( the minimum from the optimal design and the maximum from the random stimuli based test ) was typically about two times greater than the standard deviation of either estimates except for the case with samples .even in this case , this violation appear only on one sample .in addition , it can be easily deduced from the box diagram in * figure [ fig : likelihood - traces - box ] * that the difference between ^th^ and ^th^ percentiles ( ^st^ and ^rd^ quartiles ) yield a value which is larger than three times the standard deviation of either estimate .the standard deviations of the maximum likelihood values are also larger in the random stimuli based tests .those results are certain evidences of the superiority of an optimal design over the random stimuli based tests .the difference between the maximum values of the two likelihoods ( from optimal and random stimuli ) becomes more significant as the number of samples increases .it would also be convenient to stress the fact that the greater the likelihood , the better the fitting of the data to the model . this fact is demonstrated by two regression lines imposed on the box diagram * figure [ fig : likelihood - traces - box]*. one of those lines correspond to the optimal design and the other correspond to the random stimuli based tests .both regression lines path through the origin point ( 0 , 0 ) approximately .this means that the ratio of the log likelihood values in the two cases is approximately a constant , regardless of the number of stimuli .the regression lines are represented by equations for optimal and for the random stimuli .so the two lines have a slope ratio of approximately .the significance of this number can be explained by a simple example . if one desires to attain the same level of likelihood with optimally designed stimuli , the required number of random stimuli to be generated is equal to a value about .another statistical comparison of the maximized likelihoods can be performed by wilcoxon rank - sum tests .when applied , one will be able to see the difference which is highly significant .regardless of the number of samples the p - values remained at least times smaller than the widely accepted probability significance threshold of or .the likelihood function provides an overall measure of how well a model fits the data .we have also tested the mean errors of individual parameters relative to their true values .the main finding is that , for each individual parameter , the error is typically smaller for the optimally designed stimuli than the error for the random stimuli .this result can be observed from the bar charts presented in * figure [ fig : mean - error - bars]*. the heights of the bars show the mean error levels for randomly and optimally generated stimuli respectively .one can get the benefits of the rank - sum test on the statistical properties of the parameter estimates .computation of rank - sum p - values for each individual parameter corresponding to the case of samples yields : the above result showed that for the differences are statistically significant for parameters . fordifferent values one can refer to * figure [ fig : mean - error - bars]*. 
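The Wilcoxon rank-sum comparison used above is a one-liner with SciPy; the two arrays below are synthetic placeholders standing in for the maximized log-likelihood values (or, equally, the per-parameter errors) collected from repeated runs with optimally designed and with random stimuli.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)

# Synthetic placeholders for repeated-run results.
loglik_optimal = rng.normal(loc=-950.0, scale=20.0, size=30)
loglik_random = rng.normal(loc=-1100.0, scale=35.0, size=30)

stat, p_value = ranksums(loglik_optimal, loglik_random)
print(f"rank-sum statistic = {stat:.2f}, p-value = {p_value:.2e}")
```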
in this illustration , the statistical significance of the difference between optimal and random stimuli based tests are indicated as an asterisk placed above the bars .any of the cases with asterisk means that , the associated sample size led to a result where optimal design is significantly better . ) at the top of an optimal design barindicates that the difference from the neighboring random bar is statistically significant at the level in the ranksum test . ] here an interesting result is the statistical non - significance of the third parameter regardless of the sample size .this might be associated with the parameter confounding phenomenon that is discussed in * section [ sub : parameter - confo]*. for convenience , one can see the mean values and standard deviations of the estimates obtained from optimal and random stimuli in * table [ tab : true - values - and - statistics]*. these are obtained with samples . in general ,mean values seem to be comparable for both stimuli however the standard deviations of estimates from optimal stimuli are smaller then that obtained from random stimuli .the mean values of parameters and are also closer to the true values when obtained from optimal stimuli .we need optimization in two places : maximum likelihood estimation of parameters , and optimal design of stimuli .due to speed and computational complexity considerations one needs to utilize gradient base local optimizers such as matlab^^ _ fmincon_.these algorithms often needs initial guesses and not all of the initial guesses will converge to a true value .this issue may especially appear in the cases where the objective function involves dynamical model ( differential or difference equations ) .in such cases , the problem of multiple local maxima might occur in the objective function which often requires multiple initial guesses to be provided to the solver .so an analysis on this issue may reveal useful information .suppose there are repeats or starts from different initial values .let be the probability of finding the `` correct '' solution in an individual run with random initial guess .then in repeated runs , the probably that at least one run will lead to the `` correct '' solution is the probability can be estimated from a pairwise test or directly from the values of the likelihood and fisher information metric . in the test of the likelihood function , one can achieve the goal by starting the optimization from different initial guesses and checking the number of solutions which stay in an error bound for each individual parameter with respect to the solution leading to the highest likelihood value .in other words , to pass the test the following criterion should be satisfied for each individual parameter in : where is the local optimum solution having the highest objective ( likelihood ) value and is the estimated value of .if the above is satisfied for all , this result is counted as one pass .so for maximum likelihood estimation with , the data suggests a probability of and to get a correct rate we will only need repeats .this is a result obtained from multiple initial guesses per different stimuli configurations ( total of occurrences ) . here, we have a high probability of obtaining a global maxima and thus we may get rid of multiple initial guesses requirement in the estimation of . for the optimal design part the problem is expected to be harder as the stimulus amplitudes tend to the upper bound . 
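Assuming independent restarts, the probability that at least one of n runs finds the correct solution is 1 - (1 - p)^n, which is presumably the formula intended above; inverting it gives the number of repeats needed for a target success probability. The single-run probability used below is hypothetical.

```python
import math

def repeats_needed(p_single, p_target=0.99):
    """Smallest n such that 1 - (1 - p_single)**n >= p_target."""
    return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single))

# Hypothetical single-run success probability of 0.6: a handful of restarts suffices.
print(repeats_needed(0.6))   # 6
```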
.it should also be remembered that , the fisher information measure is computed with respect to a single parameter as shown in .thus it will here be convenient to analyse the respective fisher information metric as defined w.r.to each parameter . like in the case of likelihood analysis, we do the analysis on samples ( i.e. stimuli ) separately for amplitudes and phases .the criterion for passing the test is similar to that of after replacing by from and by with being the stimulus parameter yielding the largest value of . note that , since we do not have any concept of `` true stimulus parameters '' we will use in the denominator of . after doing the analysis for amplitudes , one can see that highest probability value is obtained for whereas the smallest value is obtained for as .the second and third smallest values are and having and respectively .the indices ,, and correspond to the ,, and .this means that the lowest probability values occur at the parameters , and .this result might be interesting as those three parameters have strong correlations at least with one other network parameter ( see * section [ sub : parameter - confo ] * ) .the required number of repeats appears to be for the worst case ( for ) . for the phase parameters , different initial conditions lead to different values .this is an expected situation because of this outcome , the solution yielding the largest value of fisher information metric among all runs with different initial conditions should be preferred in the actual application .modern parallel computing facilities will ease the implementation of optimization with multiple random guesses .the errors of some parameters tend to be correlated ( see * figure [ fig : confounding - plots ] * ) .parameter confounding may explain some of the correlations .the idea is that different parameter may compensate each other such that the network behaves in similar ways , even though the parameter values are different .it is known that in individual neurons , different ion channels may be regulated such that diverse configurations may lead to neurons with similar neuronal behaviours in their electrical activation patterns .similar kind of effect also exists at the network level . here we will consider the original dynamical equations and demonstrate how parameter confounding might arise .we first emphasize that different parameters in our model are distinct and there is no strict confounding at all .the confounding is approximate in nature . from the correlation analysis on the optimal design data ( * figures [ fig : confounding - plots]a * and * [ fig : confounding - plots]b * ) , three pairs of parameters stand out with the strongest correlations .these pairs are , , and . 
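The correlations mentioned here can be quantified directly from the repeated estimates: compute the errors relative to the true values across runs and inspect their pairwise correlation matrix. The sketch below uses synthetic estimates in which two parameters are deliberately anti-correlated, mimicking a product-type confounding relation; it is not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)

def error_correlations(estimates, true_values):
    """Pairwise correlations of estimation errors across repeated fits;
       strongly (anti-)correlated pairs hint at parameter confounding."""
    errors = estimates - true_values             # shape (n_runs, n_params)
    return np.corrcoef(errors, rowvar=False)     # shape (n_params, n_params)

# Synthetic estimates: parameters 1 and 2 confounded, parameter 3 independent.
n_runs = 200
e1 = rng.normal(0.0, 0.10, n_runs)
estimates = np.column_stack([
    2.0 + e1,
    0.5 - 0.25 * e1 + rng.normal(0.0, 0.02, n_runs),
    1.0 + rng.normal(0.0, 0.10, n_runs),
])
true_values = np.array([2.0, 0.5, 1.0])
print(np.round(error_correlations(estimates, true_values), 2))
```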
we will offer an intuitive heuristic explanation based on the idea of parameter confounding .we here rewrite the dynamical equations for convenience as shown below : the external stimulus drives the first equation through the weight .if this product is the same , the drive would be the same , even though the individual parameters are different .for example , if is increased by 10% from its true value while is decreased by 10% from its true value , then the product stays the same , so that the external input provides the same drive to .of course , any deviation from the true parameter values also leads to other differences elsewhere in the system .therefore , the confounding relation is only approximate and not strict .this heuristic argument gives an empirical formula : where and refer to the true values of these parameters , whereas and refer to the estimated values .these two parameters appear separately in different equations , namely , appearing only in while appearing only in . to combine them, we need to consider the interaction of these two equations . to simplify the problem, we consider a linearised system around the equilibrium state : where and are the slopes of the gain functions , and and are extra terms that depend on the equilibrium state and other parameters .note that appears in only once , in the second term in the curly brackets .since also satisfies , we solve for in terms of from and find a solution of the form : where is a constant .substitution into shows that the parameter combination scales how strongly influences this equation .thus we have a heuristic confounding relation : these two parameters both appear in the curly brackets in .we have a heuristic confounding relation : where and are the equilibrium states .if this equation is satisfied , we expect that the term in the curly brackets in would be close to a constant ( the right - hand side of ) whenever the state and are close to the equilibrium values .when the state variables vary freely , we expect this relation to hold only as a very crude approximation .simulation results show that these three confounding relation can qualitatively account for the data scattering .the random data follow the same pattern ( see * * figure [ fig : confounding - plots]**c ) although they appear to have more scattering compared to the optimal design based data .although the confounding relations are not strictly valid , their offer useful approximate explanations that are based on intuitive argument and are supported by the data as shown in * figure [ fig : confounding - plots]*. the theoretical slopes are always smaller , suggesting that the heuristic theory only accounts for a portion of the correlation .it is likely that there are approximate confounding among more than those three pairs of parameters namely , , and .1 . we have implemented an optimal design algorithm for generating stimuli that can efficiently probe the e - i network , which describes the dynamic interaction between an excitatory neuronal population and an inhibitory neuronal population .the data has been used to model the auditory system etc .2 . 
the dynamical network allows both transient and sustained response components ( * figure [ fig : pulse - response - ctrnn ] * ) 3 .derivatives are computed directly by differential equations derived from the original system ( * figure [ fig : stimexamplederivatives ] * ) 4 .optimally designed stimuli have patterns .the amplitude tend to be saturated , which can lead to faster search because only boundary values need to be checked ( * figure [ fig : stimulus - amplitude - histogram ] * ) 5 .the optimally designed stimuli elicited responses that have more dynamic range ( more variance in the histogram of amplitude of the stimulus waveform ) 6 .the optimal design yield much better parameter estimation in term of likelihood ( * figure [ fig : likelihood - traces - box ] * ) , and in the errors of the individual parameters ( * figure [ fig : mean - error - bars ] * ) . 7 .we studied parameter confounding ( * figure [ fig : confounding - plots ] * ) .significance of the results can readily lead to practical applications .for example in the modelling of the auditory networks known works all used stationary sound stimuli .however , it is beneficial to include time dependency as realistic neurons and their networks have several dynamic features .the chosen model is an appropriate and simple model which has only two neurons with one being excitatory and one being inhibitory .this choice is reasonable in the context of this research as the availability of previous work targeting a similar goal is quite limited .in addition , the known ones mostly based on the static feed - forward network designs . in order to verify our optimal design strategy we have to start from a simpler model . the theoretical development and results of this work can very easily be adapted to alternative and/or more complicated models . herethe most critical parameter is the number of data generating neurons ( neurons where the spiking events are recorded by electrode implantations ) and the computational complexity .the former is a procedural aspect of the experiment whereas the latter is directly related to the instrumentation .in addition , the universal approximating nature of the ctrnn s is an advantage on this manner . in this work, we aim at investigation of the computational principles without trying the maximizing the computation speed . right now on a single pc , having a intel^^ corei7 processor with cores , the computational duration for optimization of a single stimulus is about minutes .surely , this is an average value as the number of steps required to converge to an optimum depends on certain conditions such as the value of objective value , gradient , constraint violation and step size .a similar situation exist for the optimization of the likelihood .however in this case , the optimization times of the likelihood will vary due to its increasing size as all the historical spiking is taken into account ( see ) leading to a function with gradually increasing complexity .although that is not the only fact contributing to the computational times , the average duration of optimization tend to increase with the size of likelihood .an average value for the observed duration of likelihood optimization is minutes . 
as a result optimization of one stimulus and subsequent likelihood estimation requires a duration of about minutes .this is approximately one hour .so one complete run with a sample size of is completed in a duration about hours .changing the value of the sample size will have a direct influence on the system computation time .for example , the duration of one complete run will reduce to a value about hours when .this is an expected situation as there will be a reduced number of the summation terms in the likelihood function .the reduction in the duration of the optimal design algorithm is only based on the reduction in the number of trials which is just equal to the sample size .so based on these findings , one will need a speed - up in order to adapt this work to an experiment .there are several ways to speed up the computation : 1 . using a large time step: in this work we have integrated the equations using a time step of -millisecond or -seconds .this value may be increased to levels as high as -seconds .this modification will have a little contribution to the speed of computation .the benefits will most likely be from the optimal design part due to the manipulation of a single interval ( no consideration of past / historical spike trains ) .however , the higher the time step the lower the accuracy of the estimates and the optimality of the stimuli .this main contributor to this fact is the spike generation algorithm where the accuracy of the locations of some spikes are lost when a coarse integration interval is applied .2 . using and/or developing streamlined optimization algorithms : this can be a subject of a new project on the same field .this development is expected to have a considerable contribution to the computation time without any trade - offs over performance .3 . generating the stimuli as a block rather than one by one : this is also a potential topic for a new project .this is expected to reduce the optimal design time without losing performance .4 . employment of larger cluster computing systems ( or high performance computing systems ( hpc ) ) having more than cpu cores : though the most sophisticated and expensive solution , it is the best approach to cure the overall computational burdens and transform the theoretical only study to an experiment adaptable one .5 . porting the algorithms to a lower level programming language such as c / c++ or fortran may help in speeding up the computation . if an efficient and stable numerical differentiation algorithm can be employed in this set - up , another optimality criterion such as _ d _ or _e _ optimality can be used in the computation of the fisher information metric which might help in reducing the number of steps ( i.e and/or values ) .knowing the fact that the optimal stimulus amplitudes tend to the upper boundary ( remember from * figure [ fig : stimulus - amplitude - histogram]b * ) .all the amplitudes can be set to same value as and only is computed from optimization .this setting can be helpful for speeding up the optimization time during an actual experiment .however , it should be verified by simulation whether this choice is meaningful for an actual experiment . in this case, one may not need many repeats as only the statistics of the stimuli is required .this occurrence is quite common in the simulation results thus we do not expect a performance degradation when this change is applied to the stimuli characterization . 
with the above adaptations , it is expected that we are within reach to reduce the computation time for each 3-second stimuli to less than 3 seconds . in addition one can has the following options which are related to tuning of the algorithms used in this research : 1 .the parameter related to the sample size might be reduced .that is an examined situation in this study .optimal design seems to yield a better estimation performance with a reduced compared to random stimuli case with same .however , the former will lead to a slightly increased computational time . however , the trade - off is not very direct .for example , a random stimuli based simulation with samples requires a longer run than a simulation based on an optimally designed stimulus with samples .this is fairly a good trade - off .2 . another option to increase the computational performance might be the reduction of the cut - off points in the optimization algorithm such as the first order optimality measure ( tolerance of the gradient ) and the step - size .this will result in a faster computation but this approach may bring out questions on the accurate detection of the local minimums among which the best one is chosen ( both in oed and likelihood optimization ) . for the ` fmincon ` algorithm in matlab the first order optimality tolerance and step - sizemight both be shifted from to .this tuning brings improvement in the computational duration about 10% without a considerable performance loss .however , if one has a hpc supported computational environment it is strongly recommended not to modify these settings .the above network is rather a simpler example to demonstrate the optimal design approach and its computational challenges .however more features can be brought to this research concerning efficiency and applicability to an actual experiment . 1 .speed up issues : the tasks related to speed of computation discussed in * section [ sub : computational - speed - issues ] * may be a separate project to be developed on top this research .large number of neurons may be considered together with multiple stimulus inputs and response data collection from multiple neurons ( both excitatory and inhibitory groups of neurons ) .more complex stimulus structures may be utilized .this can be achieved by increasing in or considering different stimulus representations other than phase cosines .4 . in this research ,the primary goal was the estimation of the network weights and time constants .however , it will be interesting to test the methodology for its performance in estimation of firing thresholds and slopes .( i.e. the parameters and in ) 5. some more realistic details like plasticity can be included to obtain a model describing the synaptic adaptation .although it is expected to be a harder problem , the method takes fewer stimuli and should be faster .[ it : optimality ] . however , it is stated in that , d - optimality brings an advantage that the optimization will be immune to the scales of the variables . 
on the contrary ,it is also stated in the same source that this mentioned fact is not true for a- and e - optimality criteria in general .the sensitivity to scaling of the variables lead to another issue that the confounding of the parameters brings certain problems about the bias and efficiency of the estimates .so it will be quite beneficial to see the results obtained from the same research with the optimal designs performed by d - optimal and other measures of fisher information metric such as e- and f - optimality .as these will require a new set of computations it will be better to include them in a future study .similar to the discussion in * article [ it : optimality ] * above , the methodology of optimization in optimal designs and likelihood optimization should be considered .current work involves evaluation of gradients for the sake of faster computation .however , this requires larger efforts in the preparation as one should develop a specific algorithm to compute the evolution of the gradients satisfactorily .this is also required for a speed - up . with the availability of a high performance computing system ,other optimization methods such as simulated - annealing , genetic algorithms and pattern search might be employed instead of the local minimizers such as _ fmincon _ of matlab .these algorithms may help in searching for a better optimal stimulus .after a sufficiently fast simulation is obtained an experiment can be performed involving a living experimental subject .the mapping of the actual sound heard by the animal during the course of experiment to the optimally designed stimulus is a critical issue here and will also be a part of the future related research .
we present a theoretical application of an optimal experiment design ( oed ) methodology to the development of mathematical models to describe the stimulus - response relationship of sensory neurons . although there are a few related studies in the computational neuroscience literature on this topic , most of them are either involving non - linear static maps or simple linear filters cascaded to a static non - linearity . although the linear filters might be appropriate to demonstrate some aspects of neural processes , the high level of non - linearity in the nature of the stimulus - response data may render them inadequate . in addition , modelling by a static non - linear input - output map may mask important dynamical ( time - dependent ) features in the response data . due to all those facts a non - linear continuous time dynamic recurrent neural network that models the excitatory and inhibitory membrane potential dynamics is preferred . the main goal of this research is to estimate the parametric details of this model from the available stimulus - response data . in order to design an efficient estimator an optimal experiment design scheme is proposed which computes a pre - shaped stimulus to maximize a certain measure of fisher information matrix . this measure depends on the estimated values of the parameters in the current step and the optimal stimuli are used in a maximum likelihood estimation procedure to find an estimate of the network parameters . this process works as a loop until a reasonable convergence occurs . the response data is discontinuous as it is composed of the neural spiking instants which is assumed to obey the poisson statistical distribution . thus the likelihood functions depend on the poisson statistics . the model considered in this research has universal approximation capability and thus can be used in the modelling of any non - linear processes . in order to validate the approach and evaluate its performance , a comparison with another approach on estimation based on randomly generated stimuli is also presented . optimal design , sensory neurons , recurrent neural network , excitatory neuron , inhibitory neuron , maximum likelihood estimation
approaches of hierarchical type lie behind the extensive use of models in theoretical physics , the more so when extending them into new `` applications '' of statistical physics ideas , e.g. in complex systems and phenomena , like in fluid mechanics mimicking agent diffusion . in several studies, researchers have detected the validity of power laws , for a number of characteristic quantities of complex systems .such studies , at the frontier of a wide set of scientific contexts , are sometimes tied to several issues of technical nature or rely only on the exploration of distribution functions . to go deeperis a fact of paramount relevance , along with the exploration of more grounding concepts .the literature dealing with the rank - size rule is rather wide : basically , papers in this field discuss why such a rule should work ( or does not work ) . under this perspective ,pareto distribution and power law , whose statement is that there exists a link of hyperbolic type between rank and size , seem to be suitable for this purpose . in particular , the so - called first zipf s law , which is the one associated to a unitary exponent of the power law , has a relevant informative content , since the exponent can be viewed as a proxy of the balance between outflow and inflow of agents .the theoretical explanation of the zipf s law has been the focus of a large number of important contributions .however , the reason why zipf s law is found to be a valid tool for describing rank - sizes rule is still a puzzle . in this respect, it seems that no theoretical ground is associated to such a statistical property of some sets of data .generally , zipf s law can not be viewed as a universal law , and several circumstances rely on data whose rank and size relationship is not of hyperbolic nature .such a statement is true even in the urban geography case , - the one of the original application of the zipf s law , for the peculiar case of cities ranking .remarkable breakdowns has been assessed e.g. in .a further example is given by the number of cities ( ) per provinces ( ) in italy ( 8092 ) , see the log - log plot of the data from 2011 in fig .[ fig : plotnogood9zmppwldexp ] : the ( 110 ) provinces are ranked by decreasing order of `` importance '' , i.e. their number of cities .fits by ( i ) a power law , ( ii ) an exponential and ( iii ) a zipf - mandelbrot ( zm ) function ^{\omega},\ ] ] being the rank .the fits are , the least to say , quite unsatisfactory in particular in the high rank tail , essentially because data usually often presents an inflection point . therefore , no need to elaborate further that more data analysis can bring some information on the matter .the paper is organized as follows . in section[ alternative ] , an alternative to a hyperbolic rank - size law and its above `` improvements '' are discussed : the data can be better represented by an ( other than zipf s law ) analytic empirical law , allowing for an inflection point . next , we introduce a universal form , allowing for a wider appeal , in sect .[ universal ] , based on a model thereafter presented in sect .[ modelbeta ] .such a general law can be turned into a frequency or probability distribution .thus , the method suggests to consider a criterion of possibly optimal organization through the notion of relative distance to full disorder , i.e. 
, a ranking criterion of entity distributions based on the entropy ( section [ sec : entropy ] ) .section [ conclusions ] allows us to conclude and to offer suggestions for further research lines .in the context of best - fit procedures , rank - size theory allows to explore the presence of regularities among data and their specified criterion - based ranking .such regularities are captured by a best - fit curve . however , as observed in fig .[ fig : plotnogood9zmppwldexp ] , the main problem strangely resides in missing the distribution high rank tail behavior .no doubt , that this partially arises because most fit algorithms take better care of the high values ( on the -axis ) than the small ones .more drastically , a cause stems in the large rank tail which is usually supposed to extend to infinity , see eq .( [ pwlwithcutoff ] ) , but each system is markedly always of finite size .therefore , more complicated laws containing a power factor , like the stretched exponential or exponential cut - off laws should be considered inadequate .we emphasize that we are in presence of data which often exhibits an inflection point .the presence of an inflection point means that there is a change in the concavity of the curve , even if the slope remains with the same ( negative ) sign for the whole range .thus , one could identify two regimes in the ranked data , meaning that the values are clustered in two families at a low and high ranks . in such cases ,the finite cardinality of the dataset leads to a collapse of the upper regime at rank .nevertheless , the yule - simon distribution , could be arranged in an appropriate way , according to a taylor expansion as in .( [ pwlwithcutoff ] ) can be then rewritten as as discussed by martinez - mekler et al . for rank - ordering distributions , - in the arts and sciences ; see also more recent work on the subject with references therein .( [ lavalette3a ] ) is a ( three - parameter ) generalization of the ( t - parameter ) function used when considering the distribution of impact factors in bibliometrics studies , i.e. , when , and recently applied to religious movement adhesion .notice that there is no fundamental reason why the decaying behavior at low rank should have the same exponent as the collapsing regime at high rank : one should _ a priori _ admit . of itcities ( 8092 ) per provinces ( 110 ) on a log - log scale .the ranking criterion is the one associated to the number of cities ( high rank when the number of cities is high ) .the reference year is 2011 .several fits are shown : power law , exponential and zipf - mandelbrot function , eq . 
( [ zmeq3 ] ) .the corresponding correlation coefficients are given ; different colors and symbols allow to distinguish cases.,width=461,height=597 ] ; the provinces are ranked by their decreasing `` order of importance '' , for various years ; the 2007 , 2008 - 2009 and 2010 - 2011 ; data are displaced by an obvious factor for better readability ; the best 3-parameter function , eq .( [ lavalette3a ] ) , fit is shown .parameter values are obtained by fits through levenberg - marquardt algorithms with a 0.01% precision.,width=461,height=597 ] in fact , such an alternative law is easily demonstrated to be an appropriate one for describing size - rank data plots .for example , reconsider the it case , shown in fig .[ fig : plotnogood9zmppwldexp ] , redrawn on a semi - log plot in fig .[ fig : plotliloncp3f ] .( all fits , in this communication , are based on the levenberg - marquardt algorithm with a 0.01% imposed precision and after testing various initial conditions for the regression process . )the rank - size relationship appears to follow a flipped noid function around some horizontal mirror or axis .notice that similar behaviors are observed for different years , although the number of yearly differs .incidently , note that , in this recent time , the official data claims a number of 103 provinces in 2007 , with an increase by 7 units ( bt , ci , fm , mb , og , ot , vs , in conventional notations ) thereafter , leading to 110 provinces .the number of municipalities has also been changing , between 2009 and 2010 , whence the rank of a given province is not constant over the studied years . in view of taking into account a better fit at low and high rank, one can generalize eq .( [ lavalette3a ] ) to a five parameter free equation where the parameter takes into account mandelbrot generalization of zipf s law at low rank , see eq .( [ zmeq3 ] ) , while allows some flexibility at the highest rank .in particular , the shape of the curve in eq .( [ lavalette5 ] ) is very sensitive to the variations of and . as the parameter increases, the relative level of the sizes at high ranks is also increased .this means that the presence of outliers at high ranks is associated to high values of .if one removes such outliers from the dataset and implements a new fit procedure , one obtains a lower level of the calibrated and a flattening of the curve at low ranks . in ,the authors have found something similar in a different context ; they have denoted the major ( upsurging ) outlier at rank by `` king '' and called the other outliers at ranks as `` viceroys '' .the removal of outliers necessarily leads to a more appealing fit , in terms of visualization and , when such a procedure is implemented through power laws . in this respect , the introduction of a further parameter , in this case serves as adjustment term at high ranks , and represents an improvement of the previous theory .indeed , the parameter acts analogously to , but at a low rank . 
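as a concrete illustration of the fitting procedure, the python sketch below fits a generalized lavalette-type rank-size function to ranked data with a levenberg-marquardt algorithm (scipy's curve_fit with method='lm'). the functional form written here, y(r) = kappa (N + 1 - r)^xi / r^gamma, is only an assumed three-parameter stand-in for eq. ([lavalette3a]), since the equations themselves are not reproduced above, and the data array is a made-up placeholder to be replaced by the actual ranked sizes.

import numpy as np
from scipy.optimize import curve_fit

# made-up ranked sizes (decreasing order of importance), e.g. number of
# cities per province; replace with the actual data
sizes = np.array([315., 250., 190., 160., 130., 110., 95., 80., 60., 40.,
                  28., 20., 14., 9., 5.])
rank = np.arange(1, len(sizes) + 1)
N = len(sizes)

def lavalette3(r, kappa, gamma, xi):
    # assumed 3-parameter generalized lavalette form: power-law decay at
    # low rank, collapse factor (N + 1 - r)^xi at high rank
    return kappa * (N + 1.0 - r)**xi / r**gamma

p0 = [sizes[0], 0.5, 0.5]                       # rough initial guess
popt, pcov = curve_fit(lavalette3, rank, sizes, p0=p0, method='lm')
kappa, gamma, xi = popt
residuals = sizes - lavalette3(rank, *popt)
r2 = 1.0 - residuals.var() / sizes.var()
print("kappa=%.3g gamma=%.3g xi=%.3g corr=%.5f" % (kappa, gamma, xi, np.sqrt(r2)))

the correlation coefficient printed this way is only indicative; the values quoted in table [table2nd] come from the authors' own levenberg-marquardt fits with a 0.01% precision requirement.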
in particular , an increase of associated to a flattening of the five parameter curve of eq .( [ lavalette5 ] ) at medium and low ranks .such a flattening is due to sizes at low ranks which are rather close to those at medium ranks .this phenomenon has been denoted in as `` queen '' and `` harem '' effect , - to have in mind the corresponding `` king '' and `` viceroys '' effects at low ranks .the queen and harem effect is responsible of the deviations of the power law from the empirical data at a low rank .thus , the parameter also constitutes an adjustment term at low ranks and is an effective improvement of the performance of the fitting procedure .substantially , the specific sense of should be also read in terms of `` generalization '' and `` in view of best fit '' .usually , one is not sure about the 0 at the origin of axes .our corresponds to the of mandelbrot ( see eq . ( [ zmeq3 ] ) ) , for which mandelbrot gives no interpretation : it is only a mathematical trick .thus , by `` symmetry '' , we introduce a at high rank .it allows some flexibility due to possible sharp decays , due to outliers at high ranks .this also allows to move away from strict integers , and open the functions to continuous space as done in sect .[ universal ] .we have compared the fits conceptualized in eq .( [ lavalette3a ] ) and eq .( [ lavalette5 ] ) for the specific it case ( compare fig .[ fig : plotliloncp3f ] and fig .[ 2nd-5 ] and table [ table2nd ] ) .even if both these laws are visually appealing and exhibit a high level of goodness of fit , the associated to eq .( [ lavalette3a ] ) is slightly lower than that of eq .( [ lavalette5 ] ) .thus , we can conclude that the five - parameters law , eq .( [ lavalette5 ] ) , performs better than the three - parameters one , eq .( [ lavalette3a ] ) . ; the provinces are ranked according to their decreasing `` order of importance '' , for various years ; the 2007 and 2010 - 2011 data are displaced by an obvious factor of 10 for better readability ; the best 5-parameter function , eq . ( [ lavalette5 ] ) , fit is shown .parameter values are obtained by fits through levenberg - marquardt algorithms with a 0.01% precision.,width=461,height=597 ] & 2007 & 2008/2009 & 2010/2011 + &103&110&110 + &2.049&18.177&182.265 + &0.301&0.316&0.316 + &0.597&0.615&0.614 + & 0.99240 & 0.99445 & 0.99441 + + & 2007 & 2008/2009 & 2010/2011 + &110&110&110 + &3.971&33.709&332.71 + &0.373&0.387&0.386 + &0.499&0.527&0.529 + &-7.441&0.608&0.640 + &0.945&0.926&0.906 + & 0.99402 & 0.99631 & 0.99623 + [ table2nd ] even though one could display many figures describing the usefulness of the above , let us consider two cases , e.g. in sport matter .* consider the ranking of countries at recent summer olympic games : beijing 2008 and london 2012 .the ranking of countries is performed trough the number of `` gold medals '' , but one can also consider the total number of medals , - thus considering a larger set of countries .a country rank is of course varying according to the chosen criterion .it is also true that due to subsequent analysis of athlete urine and other doping search tests , the attribution of medals may change with time .we downloaded the data available on aug .13 , 2012 , from + .+ interestingly , the number of gold medals has not changed between beijing and london , i.e. 
302 , but due to the `` equivalence of athletic scores '' , the total number of medals is slightly different : 958 962 .moreover , the number of countries having received at least a gold medal is the same ( 54 ) , but the total number of honored countries decreased from 86 to 85 .obviously , in contrast to the administrative data on it provinces ranking , there is much `` equality between countries '' in olympic games ; therefore a strict rank set contains many empty subsets .it is common to redefine a continuous ( discrete ) index in order to rank the countries .moreover , the rank distributions are much positively skewed ( skewness 3 ) with high kurtosis ( ) .therefore , the inflection points occur near and for a size close to the median value . on fig .[ fig : plot15goldbjgldnlav4 ] and fig .[ fig : plot15totalbjgldnlav4 ] , such a ranking for olympic games medals is displayed , both for the gold medal ranking and the overall ( `` total '' ) medal ranking . reasonably imposing , the parameters of eq( [ lavalette5 ] ) lead to remarkable fits , even though the collapsing behavior of the function occurs outside the finite range .we have tested that a finite does not lead to much regression coefficient improvement .* in other sport competitions , the `` quality '' of teams or / and countries is measured through quantities which are not discrete values .for example , in soccer , more than 200 federations ( called `` association members '' , countries ) are affiliated to the fifa + ( ) .the fifa country ranking system is based on results over the previous four years since july 2006 .it is described and discussed in to which we refer the reader for more information .note that a few countries have zero fifa coefficients .interestingly the skewness and kurtosis of the fifa coefficient distributions are rather `` well behaved '' ( close to or .0 ) , while the coefficient of dispersion is about 250 . from previous studies, it can be observed that the low rank ( `` best countries '' ) are well described by a mere power law , including the mandelbrot correction to the zipf s law .however , the high tail behavior is poorly described .we show in fig .[ fig : plot61fifa1213lav5lilon206 ] that the generalized equation is much better indeed . from a sport analysis point of view , one might wonder about some deviation in the ranking between 170 and 190 .; the best 4-parameter fitting function is displayed , eq .( [ lavalette5 ] ) , with ., width=461,height=597 ] ; the best 4-parameter fitting function is displayed , eq .( [ lavalette5 ] ) , with . ,width=461,height=597 ] ) , is shown ., width=461,height=597 ] these displays suggest to propose some universal vision as presented next .it is easily observed in eq .( [ lavalette3a ] ) that a change of variables , leads to \ ] ] however , in so doing , ] interval ,it is better to introduce the reduced variable , defined as , where is the maximum number of entities .moreover , in order to fully generalize the empirical law , in the spirit of zm , eq . ( [ zmeq3 ] ) , at low rank , a parameter can be introduced . in the same spirit ,we admit a fit parameter allowing for possibly better convergence at ; we expect , .thus , we propose the universal form ^{\chi},\ ] ] for which the two exponents and are the theoretically meaningful parameters .the amplitude represents a normalizing factor , and can be then estimated . 
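the normalizing amplitude mentioned here can be computed numerically. the sketch below assumes the simplest case in which the universal form reduces to y(x) proportional to (1 - x)^chi / x^zeta on the reduced variable x in (0, 1); under that assumption the normalizing integral is the euler beta function b(1 - zeta, 1 + chi), and the restriction to a sub-interval uses the regularized incomplete beta function available in scipy. the exponent values are arbitrary placeholders.

import numpy as np
from scipy.special import beta, betainc
from scipy.integrate import quad

zeta, chi = 0.35, 0.9          # placeholder exponents (zeta < 1 required)

def unnormalized(x):
    return (1.0 - x)**chi / x**zeta

# full integral over (0, 1): the euler beta function B(1 - zeta, 1 + chi)
norm_full = beta(1.0 - zeta, 1.0 + chi)
amplitude = 1.0 / norm_full     # normalizing factor for the full range

# the same integral over a sub-interval (t0, t1) via the regularized
# incomplete beta function I_x(a, b) = betainc(a, b, x)
t0, t1 = 0.05, 0.95
norm_partial = (betainc(1.0 - zeta, 1.0 + chi, t1)
                - betainc(1.0 - zeta, 1.0 + chi, t0)) * norm_full

# sanity check against direct quadrature
print(norm_full, quad(unnormalized, 0.0, 1.0)[0])
print(norm_partial, quad(unnormalized, t0, t1)[0])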
indeed , by referring to the case and and posing and , we can write ^{-1 } \equiv \\ \frac{1}{(1+\phi+\psi)^{1+\chi-\zeta}}\;\;\frac{1}{[b_{t}(1-\zeta,1+\chi)]_{t_0}^{t_1 } } \;\;,\end{aligned}\ ] ] with , and , and where is the incomplete euler beta function , itself easily written , when , in terms of the euler beta function , being the standard gamma function . and , ranked by decreasing order of `` importance '' -in the sense of `` number of cities''- of provinces ( in be , bg , and it ) or departments ( in fr ) ; the best function fit , eq .( [ lavalette5u ] ) , is shown ; parameter values are found in table [ tablecitprovreg].,width=461,height=597 ] the function in eq .( [ lavalette5u ] ) is shown on fig .[ fig : plot23begfrituncpdl ] to describe different cases , with various orders of magnitude , i.e. , a semi - log plot of the number of cities in a province , or in a department , , ranked by decreasing order of `` importance '' , for various countries ( be , bg , fr , it ) .the reference year is 2011 . in such cases , , obviously , thereby much simplifying eq .( [ etalavalette5ri ] ) , whence reducing the fit to a three free parameter search . for completeness, the main statistical indicators for the number of cities ( ) , in the provinces ( ) , regions ( ) or departments ( ) in these ( european ) countries , in 2011 is given in table [ tablecitprovreg ] .notice that the distributions differ : the median ( ) is sometimes larger ( or smaller ) than the mean ( ) , while the kurtosis and skewness can be positive or negative . yetthe fits with eq .( [ lavalette5u ] ) seem very fine .the large variety in these characteristics is an _ a posteriori _ argument in favor of having examined so many cases .the presented argument is of wide application as the reader can appreciate .however , the vocabulary in this modeling section can be adequately taken from the jargon of city evolution for better phrasing and for continuing with the analyzed data .a preferential attachment process can be defined as a settlement procedure in urn theory , where additional balls are added and distributed continuously to the urns ( areas , in this model ) composing the system .the rule of such an addition follows an increasing function of the number of the balls already contained in the urns .in general , such a process contemplates also the creation of new urns . in such a general framework , this model is associated to the yule - simon distribution , whose density function is being and real nonnegative numbers .the integral represents the probability of selecting real numbers such that the first one coincides with , from the second to the -th one numbers are less or equal to and the remaining numbers belong to $ ] . in practical words ,newly created urn starts out with balls and further balls are added to urns at a rate proportional to the number that they already have plus a constant . with these definitions ,the fraction of urns ( areas ) having balls ( cities ) in the limit of long time is given by for ( and zero otherwise ) .in such a limit , the preferential attachment process generates a `` long - tailed '' distribution following a hyperbolic ( pareto ) distribution , i.e. 
power law , in its tail .it is important to note that the hypothesis of continuously increasing urns is purely speculative , even if it is widely adopted in statistical physics .indeed , such an assumption contrasts with the availability of resources , and the growth of the number of settlements is then bounded .therefore , as in verhulst s modification of the keynesian expansion model of population , a `` capacity factor '' must be introduced in the original yule process , thereby leading to the term in eq .( [ lavalette5 ] ) and its subsequent interpretation .one can consider to have access to a sort of `` probability '' for finding a certain `` state '' ( size occurrence ) at a certain rank , through the denominator resulting from eq .( [ etalavalette5ri ] ) .thereafter , one can obtain something which looks like the shannon entropy : .it has to be compared to the maximum disorder number , i.e. .whence we define the relative distance to the maximum entropy as as a illustration , the only case of the ranking of cities in various countries is discussed .values are reported in table [ tablecitprovreg ] .it is observed that the fr and it -values are more extreme than those of bg and be .this corroborates the common knowledge that the former two countries have too many cities , in contrast to the latter two .thus , in this particular case , this distance concept based on the universal ranking function with the two exponents and shows its interest , e.g. within some management or control process .it can be conjectured without much debate that this concept can be applied in many other cases .it is relevant to note that the entropy argument can be extended in a natural way to the -tsallis statistics analysis .such an extension could add further elements to the thermodynamic interpretation of the proposed rank - size analysis .more in details , rank - size law might be associated to -tsallis distribution through a generalization of the central limit theorem for a class of non independent random variables ( see e.g. and ) .however , the tsallis approach is well - beyond the aim of the present study , and we leave this issue to future research .this paper provides a basically three parameter function for the rank - size rule , based on preferential attachment considerations and strict input of finite size sampling .the analysis of the distribution of municipalities in regions or departments has proven the function value after its mapping into `` dimensionless variables '' .it seems obvious that the approach is very general and not limited to this sort of data .other aspects suggest to work on theoretical improvements of the rank - size law connections , through ties with thermodynamics features , e.g. , entropy and time - dependent evolution equations ideas. 0.2 cm * acknowledgements * 0.2 cm this paper is part of scientific activities in cost action is1104 , `` the eu in the new complex geography of economic systems : models , tools and policy evaluation '' and in cost action td1210 analyzing the dynamics of information and knowledge landscapes. .statistical characteristics of the distribution of the number of cities , number of provinces or departments , ( in fr ) , in 2011 , in 4 european countries ; relevant fit exponents with eq . ( [ lavalette5 ] ) , and entropic distance . [ cols="<,<,<,<,<,<",options="header " , ] fujita m , thisse j - f . 
the formation of economic agglomerations: old problems and new perspectives. in: economics of cities: theoretical perspectives, eds. huriot jm, thisse j-f, cambridge univ. press, cambridge, uk; 2000.
a mere hyperbolic law, like the zipf's law power function, is often inadequate to describe rank-size relationships. an alternative theoretical distribution, based on theoretical physics arguments and starting from the yule-simon distribution, is proposed instead. a modeling approach leading to a universal form is presented. a theoretical suggestion for the ``best (or optimal) distribution'' is provided through an entropy argument. the ranking of areas through the number of cities in various countries, together with some sport competition rankings, serves as illustration.
with the advent of new application domains such as multilingual databases , computational biology , text retrieval , pattern recognition and function approximation , there is a need for proximity searching , that is , searching for elements similar to a given query element .similarity is modeled using a distance function ; this distance function along with a set of objects defines a metric space .computing distance function can be expensive , for example , one of the requirements in multilingual database systems is to find similar strings , where the distance(_edit distance _ ) between the strings is computed using an o(mn ) algorithm where m , n are the length of the strings compared .this necessitates the use of an efficient indexing technique which would result in fewer distance computations at query time .having an indexing structure serves the dual purpose of decreasing both cpu and i / o costs .existing index structures such as b+ trees used in exact matching proves inadequate for the above requirements .+ + various indexing structures have been proposed for similarity searching in metric spaces .we present the performance analysis of these structures in terms of the percentage of database scanned by varying edit distances from 10% to 100% .+ after providing a preliminary background in section 2 , we move on to the description of the existing index structures in section 3 . section 4 describes the experimental set up and the analysis is presented in section 5 .section 6 concludes the paper .a metric space comprises of a collection of objects and an assosciated distance function satisfying the following properties .* symmetry + * non - negativity + if and if * triangle inequaltiy + a , b , c are objects of the metric space . + + edit distance(_levenshtein distance _ ) satisfies the above mentioned properties .the edit distance between two strings is defined as the total number of simple edit operations such as additions , deletions and substitutions required to transform one string to another . for example , consider the strings _ paris _ and _ spire_. the edit distance between these two strings is 4 , as the transformation of _ paris _ to _ spire _ requires one addition , one deletion and two substitutions. edit distance computation is expensive since the alogorithmic complexity is o(mn ) where m , n are the length of the strings compared .+ one of the common queries in applications requiring similarity search is to find all elements within a given edit distance to a given query string .indexing structures for similarity search make use of the triangle inequality to prune the search space .consider an element p with an assosciated subset of elements x such that + + we want to find all strings within edit distance e from given query string q. that is reject all strings x such that from the triangle inequality , .hence which reduces to from equations ( [ cond1 ] ) and ( [ cond2 ] ) , the criterion reduces to if the inequality is satisfied , the entire subset x is eliminated from consideration .+ however , we need to compute the o(mn ) edit distance for all the elements in the subsets that do not satisfy the above criterion . proposes bag distance which is given as where is the set of the characters in x after dropping all common elements and gives the number of characters in ( x - y ) .the algorithmic complexity for this computation is o(m+n ) where = m , = n. 
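to make the filtering ideas concrete, a minimal python sketch of the two distances and of a triangle-inequality pruning test is given below. the o(mn) edit distance is the standard dynamic-programming recurrence, and the bag distance follows the max(|x - y|, |y - x|) definition above; since the bag distance is a cheap lower bound on the edit distance, it can discard candidate strings before the expensive computation. the function names are our own.

from collections import Counter

def edit_distance(a, b):
    # O(m n) dynamic programming (levenshtein distance)
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def bag_distance(a, b):
    # O(m + n) lower bound on the edit distance
    ca, cb = Counter(a), Counter(b)
    only_a = sum((ca - cb).values())
    only_b = sum((cb - ca).values())
    return max(only_a, only_b)

def may_contain_matches(d_pq, radius_p, e):
    # one common form of the triangle-inequality pruning test: a subset
    # holding all x with d(p, x) <= radius_p around pivot p cannot contain
    # any string within e of the query q when d(p, q) - radius_p > e
    return d_pq - radius_p <= e

print(edit_distance("paris", "spire"))   # 4, as in the example above
print(bag_distance("paris", "spire"))    # cheap lower bound, here 1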
since , bag distance can be used to filter out some of the candidate strings thereby reducing the search cost .in this section , we provide a brief description of the data structures used for similarity indexing . here , * u is the set of all strings .* n is the number of tuples in the dataset . *b is the bucket size , i.e. , the maximum number of tuples a leaf node can hold .* d(a , b ) is the edit distance between strings a and b. * q is the query string . *e is the _ search distance _ ,i.e. , all strings within an edit distance of e from q should be returned on a proximity search .the burkhard - keller tree(bk tree ) presented in is probably the first general solution to search in metric spaces .a pivot element p is selected from the data set u and the dataset is partitioned into subsets such that ( ) .each of the subsets is recursively partioned until there are no more than b elements in a subset .+ for a given query and search distance , the search starts at the root(pivot element p ) and traverses all subtrees at distance i such that holds and proceed recursively till a leaf node is reached . in the leaf node , the query string is compared with all the elements . fixed queries trees is a variation of bk trees .this tree is basically a bk tree where all the pivot elements at the same level are identical .the search algorithm is identical to that for bk trees .the benefit of fq trees over bk trees is that some of the comparisons between the query string and the internal node pivots are saved along the backtracking that occurs in the tree . in fixed height fq trees , all leaves are at the same height .this makes some leaves deeper than necessary , but no additional costs are incurred as the comparison between the query and intermediate level pivot may already have been performed .bisector tree(bs tree ) is a binary tree built recursively as follows : two routing objects and are chosen . while insertion , elements closer to are inserted in the left subtree and those closer to are inserted in the right subtree . for each routing object ,the maximum covering radius( ) , i.e. , the maximum distance of with any element in its subtree is stored . in our implementation , the distance of the element with its parent routing object is also stored .this helps in reducing some of the distance computations as shown in .+ for a given query and edit distance , search starts at the root and recursively traverses the left subtree if and the right subtree if a similar condition holds for .the bisector tree can be extended to m - ary tree by using m routing objects in the internal node instead of two .we select m routing objects for the first level . together with each routing object is stored a covering radius that is the maximum distance of any object in the subtree associated with the routing object .a new element is compared against the m routing objects and inserted into the _ best subtree _ defined as that causing the subtree covering radius to expand less and in the case of ties selecting the closest representative . 
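as a concrete illustration of the simplest of these structures, the python sketch below builds a burkhard-keller tree and answers range queries with the standard pruning rule (only subtrees at distance i from the pivot with |i - d(p, q)| <= e can contain matches). for brevity this is the classic un-bucketed variant with one element per node, written by us for illustration rather than the authors' c implementation; edit_distance is the function sketched earlier, but any metric distance can be passed in.

class BKTree:
    # classic (un-bucketed) burkhard-keller tree for a metric distance
    def __init__(self, dist):
        self.dist = dist
        self.root = None            # node = [element, {distance: child}]

    def add(self, item):
        if self.root is None:
            self.root = [item, {}]
            return
        node = self.root
        while True:
            d = self.dist(item, node[0])
            child = node[1].get(d)
            if child is None:
                node[1][d] = [item, {}]
                return
            node = child

    def range_query(self, q, e):
        # return all stored items within distance e of the query q
        results, stack = [], [self.root] if self.root else []
        while stack:
            item, children = stack.pop()
            d = self.dist(q, item)
            if d <= e:
                results.append(item)
            # triangle inequality: only branches keyed by i with
            # |i - d| <= e can contain matches
            for i, child in children.items():
                if abs(i - d) <= e:
                    stack.append(child)
        return results

tree = BKTree(edit_distance)
for w in ["paris", "spire", "spare", "pairs", "prism"]:
    tree.add(w)
print(tree.range_query("pares", 2))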
thus it can be viewed that associated with each routing object , is a region of the metric space reg( ) where is the covering radius .further , each subtree is partitioned recursively .+ in the internal node , and are stored together with a pointer to the associated subtree .further to reduce distance computations m tree also stored precomputed distances between each routing object and its parent .+ for a given query string and search distance , the search algorithm starts at the root node and recursively traverses all the paths for which the associated routing objects satisfy the following inequalities . and in equation ( [ mtree-1 ] ) , we take advantage of the precomputed distance between the routing object and its parent . vantage point tree(vp tree ) is basically a binary tree in which pivot elements called _ vantage points _ partition the data space into spherical cuts at each level to enable effective filtering in similarity search queries .it is built using a top down approach and proceeds as follows .a vantage point is chosen from the dataset and the distances between the vantage point and the elements in its subtree are computed .the elements are then grouped into the left and right subtrees based on the median of the distances , i.e. , those elements whose distance from the vantage point is less than or equal to the median is inserted in the left subtree and others are inserted in the right subtree .this partitioning continues till the elements in the subtree fit in a leaf .the median value m is retained at each internal node to aid in the insertion and search process .in addition , each element in both the internal and leaf node holds the distance entries for every ancestor , which helps in cutting down the number of distance computations at query time .an optimized tree can be obtained by using heuristics to select better vantage points .+ search for a given query string starts at the root node .the distance between q and the vantage point at the node( ) is computed and left subtree is recursively traversed if similarly , right subtree is traversed recursively if the following inequality holds . a leaf node is reached , the query string need to be compared with all the elements in the leaf node , but some of the distance computations can be saved using the ancestral distance information .vp tree can be easily generalized to a multiway tree structure called multiple vantage point tree .a notable feature of mvp tree is that multiple vantage points can be chosen at each internal node and each of them can partition the data space into m groups .hence it is required to store multiple cut off values instead of a single median value at each internal node .the various parameters that can be tuned to improve the efficiency of mvp tree are * the number of vantage points at each internal node ( v ) . * the number of partitions created by each vantage point ( m ) . * the number of ancestral distances associated with each element in the leaf ( p ) .the insertion procedure starts by selecting a vantage point from the dataset .the elements under the subtree of are ordered with respect to their distances from and partitioned into m groups . 
the m-1 cut off valuesare recorded at the internal node .the next vantage point is a data point in the rightmost ( m-1 ) partitions , which is farthest from and it divides each of the m partitions into m subgroups .it can be observed that the nth vantage point is selected from the rightmost ( m - n+1 ) partitions and the fan out at each internal node is .this is continued until all elements in the subgroup fit in a leaf node . at the leaf ,each element keeps information about its distance from its first p ancestors .+ given a query string q and an edit distance e , q is compared with the v vantage points at each internal node starting at the root .let the distance between the vantage point and q be d( , q ) and be the cut off value between subtrees and . is recursively traversed if the both the inequalities and hold . for traversingthe first subtree , only ( [ mvp_cond1 ] ) need to be satisfied .similarly , the inequality ( [ mvp_cond2 ] ) is used to traverse the last subtree .a detailed description of the search procedure can be found in .another technique used in similarity searching to reduce search cost is clustering . clustering partitions the collection of data elements into groups called clusters such that similar entities fall into the same group .similarity is measured using the distance function , which satisfies the triangle inequality .a representative called clusteroid is chosen from each cluster .while searching , the query string is compared against the clusteroid and the associated cluster can be eliminated from consideration in case criterion ( [ criterion ] ) does not hold , which helps in reducing the search cost .+ proposes bubble for clustering data sets in arbitrary metric spaces .the two distance measures used in the algorithm are given as + + * rowsum * let o = be a set of data elementsin metric space with distance function d. the rowsum of an object o o is defined as rowsum(o ) = .the clusteroid c is defined as the object c o such that .+ + * average inter - cluster distance * let and be two clusters with number of elements n1 and n2 respectively .the average inter - cluster distance is defined as .+ + insertion in bubble starts by creating a cf * tree , which is a height balanced tree .each non - leaf node has entries of the form ( , ) where is the _ cluster feature _ , i.e. , the summarized representation of the subtree pointed to by .the leaf node entries are of the form ( , ) where is the clusteroid and points to the associated cluster . when an element x is to be inserted , it is compared against all the cf * entries in the internal node using the average inter - cluster distance and the child pointer associated with the closest cf * entry is followed . on reaching a leaf node , the cluster closest to x is the one having minimum rowsum value .if the distance between x and the closest clusteroid is less than a threshold value t , it is inserted in that cluster , a new clusteroid is selected and the cf * entries in the path from root to this leaf node are updated . in case the difference is greater than t , a new cluster is formed . in our implementation , each element entry in the cluster contains its distance with the clusteroid to reduce the number of distance computations .+ for a given query string and search distance , the query is compared with all the clusteroids . if it does not satisfy the ( [ criterion ] ) , the cluster elements need to be searched for similar strings .the precomputed distances can be used to eliminate some distance computations . 
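the use of the stored clusteroid distances can be written out explicitly. in the sketch below (our own schematic layout, not the authors' implementation), each cluster member carries its precomputed distance to the clusteroid; at query time the triangle inequality |d(q, c) - d(x, c)| <= d(q, x) lets the search skip the expensive edit-distance computation for members that cannot lie within the search distance e.

def search_cluster(query, clusteroid, members, e, dist):
    # members: list of (element, precomputed distance to the clusteroid);
    # the clusteroid itself is assumed not to be repeated in the list
    hits = []
    d_qc = dist(query, clusteroid)
    if d_qc <= e:
        hits.append(clusteroid)
    for x, d_xc in members:
        if abs(d_qc - d_xc) > e:   # lower bound on d(q, x) already exceeds e
            continue               # no edit-distance computation needed
        if dist(query, x) <= e:
            hits.append(x)
    return hits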
in case of m tree ,a new element x is compared with the routing objects at the internal node and inserted into the best subtree .the best subtree is defined as the one for which the insertion of this element causes the least increase in the covering radius of the associated routing object . in the case of ties ,the closest representative is selected .this continues until we reach a leaf node .this may cause physically close elements to fall into different subtrees . along the path ,the covering radii of the selected routing objects are updated if x is farther from p than any other element in its subtree .thus there are no bounds on the covering radii associated with the routing objects .a possible optimzation is to bound the elements in the leaf nodes to be within a given threshold of the routing objects in its parent node .also , the new element is inserted into the subtree associated with the closest routing object , there by keeping the physically close elements together .these two optimizations result in a new indexing structure , which we call _ m tree with bounds_(mtb ) .thus , in the case of mtb the insertion of an element that causes the covering radius of the routing object of the lowest level internal to exceed the threshold results in a partition of the leaf node entries such that the threshold requirements are maintained .searching is similar to that of the basic tree implementation .we have performed an analysis of the various similarity indexing structures described in the previous section .the metric used for comparing the performance is the percentage of the database scanned for a given query and search distance , which is a measure of the cpu cost incurred .+ the experimental analysis were performed on a pentium iii(coppermine ) 768 mhz celeron machine running linux 2.4.18 - 14 with 512 mb ram .all the indexing structures were implemented in c. the o(mn ) dynamic programming algorithm to compute the edit distance between a pair of strings was used in the experiments .the dataset used for the analysis was an english dictionary dataset comprising of 100,000 words .the average word length of the dataset is around 9 characters .six query sets each of 500 entries were chosen at random from the data set for the experiments .the results presented are obtained by averaging over the results for these query sets .the page size is assumed to be 4k bytes .in this section , we provide the analysis and the experimental results on the performance of the various similarity indexing structures .the implementation details of the various index structures are presented in the next subsection followed by the results .assuming a page size of 4k bytes , the bucket size is taken to be 512 entries for bk tree , fq tree and fh tree as each entry is 8 bytes .the routing data elements are chosen at random from the dataset .+ the leaf node for vp tree as proposed in has a single entry .the routing element is selected using the best spread heuristic . for mvp trees ,we ran the experiments for different values of parameters m , v and p and the values 2 , 2 and 10 were shown to give better performance . for p= 10 , the number of leaf node entries is found to be 110 .the vantage point is selected at random for mvp tree .+ in the case of bisector tree and m tree , the two farthest elements are chosen as pivot elements at the time of split of a full node . 
for m tree , we ran the experiment with the number of entries in the internal node m taking values 5 and 254 .+ in clustering and indexing with bounds , the threshold value used in our runs was chosen to be 5 .in all the indexing structures , the criterion ( [ criterion ] ) , which is obtained from the triangle inequality is used to prune the search space . as the search distance is increased , the number of pivots(or routing objects ) that fail to satisfy the criterion ( [ criterion ] ) also increases resulting in an increase in the percentage of the database scanned .+ figure 1 shows the performance of the various similarity indexing structures with variation in the search distance .it can be seen that fq tree and fh tree perform better than bk tree .this can be attributed to two reasons : the number of pivot element comparisons is less in case of fq tree and fh tree as these trees have one fixed key per level .whereas , in case of bk tree , there are as many distinct pivot elements per level as the number of nodes at that level .further , fq tree and fh tree use a better splitting technique resulting in more partitions as compared to bk tree . hence , some of the partitions can be eliminated using ( [ criterion ] ) .consider the case when a subset as shown in figure 2 is to be split in bk tree .then the pivot element selected is some .thus the maximum number of partitions that can result is 2i .however , in case of fq tree , since a fixed pivot element is selected for each level , the chosen pivot is away from the subset , which may result in more partitions .it is shown in that this results in better performance . in fh treeall the leaves are at the same level .also , since we have already performed the comparison between the query and pivot of an intermediate level , we eliminate for free the need to consider a leaf .hence fh tree performs slightly better than fq tree .+ our implementation of vp tree uses the best spread heuristic for selecting the vantage points .in addition , each internal node maintains the lower and upper bounds of the distance of elements in left and right subtrees .this can be used to cut down the distance computations using the triangle inequality .because of these factors the performance is better as compared to bk tree .however , just like bk tree , as the vantage point is selected from the subset that is being partitioned and there are multiple distinct vantage points at any given level , fq tree and fh tree show better performance . +as can be seen from the plots in figure 1 , mvp tree outperforms vp tree . each leaf node entry in the mvp tree stores its distance to the first 10 ancestors .these precomputed distances help in reducing the search cost as compared to vp tree .in addition , mvp tree needs two vantage points to partition the data space into four regions whereas vp tree requires three vantage points for the same .this further reduces the number of distance computations at the internal nodes at search time .the left partition obtained using vantage point is partitioned again using the farthest point which is present in the right partition .also , for smaller values of the edit distances( ) the internal node comparisons dominate the results . 
in case of the mvp tree ,since there are multiple keys at each internal node , it results in more distance computations as compared to the fh and fq tree , which have one fixed key per level .this explains the crossover in the curves of the fq tree , fh tree and mvp tree at search distance 0.4 .+ the clustering technique partitions the dataset into a fixed number of clusters .this number varies inversely as the threshold i.e. the cluster radius . at search time , the query string is compared against each of the cluster representatives , the clusteroids .these comparisons are performed irrespective of the search distance . for a threshold of five , the clustering algorithm partitioned the dataset into 17,912 clusters .this explains the comparitively large number of searches for smaller values of search radii in figure 1 . for clusteroids that do not satisfy the test in ( [ criterion ] ) , the associated cluster elements are sequentially compared against the query string .+ in the case of bisector tree , the insertion of a new data element may result in an increase in the covering radius of the routing object .the covering radii values depend upon the order in which the elements are inserted and may have large values.due to this , at search time , a number of routing objects satisfy the test in equation ( [ mtree-1 ] ) .thus , the bisector tree shows poor performance as compared to the other indexing structures . with m trees , even though the new element is inserted into a subtree such that the resulting increase in the covering radius is the least , there are no bounds on covering radius value .so the performance is identical to that of bisector tree .the poor performance can also be attributed to the presence of more number of routing objects to partition the data space .+ it can be observed from the graph in figure 3 that mtb that combines the features of m tree and clustering shows better performance .this can be attributed to the two optimzations used , which result in well formed clusters . for lower values of the search distance , the overhead of the comparisons with large number of routing objects at the internal nodes results in poor performance .+ the graph in figure 4 shows comparison of the various indexing structures when bag distance computation is used to reduce some of the edit distance computations .the graph shows the edit distance computations needed with search distances varying from 10 to 100% .bk tree & 0.5789 + bk tree ( with bag distance ) & 0.4164 + fq tree & 0 .. 5825 + fq tree ( with bag distance ) & 0.4124 + fh tree & 0.5746 + fh tree ( with bag distance ) & 0.4090 + vp tree & 0.4951 + m tree ( with bag distance ) & 0.3041 + cluster & 0.6531 + mtb ( with bag distance ) & 0.1465 + table 1 lists the average search time(ms ) per query taken by various indexing structures .it can be observed that mtb tree takes comparatively lesser time .bag distance computation helps in reducing the time complexity .we have presented a performance study of the search efficiency of similarity indexing structures .mtb , which combines the features of clustering and m tree is found to perform better than all the other indexing structures for most search distances .bag distance computation , which is cheaper than edit distance computation , was used in the experiments .its use resulted in reduced time complexity .further , in applications where the required search distance is low and the string lengths are large , even better performance might result . 
+it can be observed that index structures like mvp tree , which make use of precomputed distances with ancestors to prune the search space perform better than others . in similaritysearching , since multiple paths are traversed , keeping a fixed key per level as in fq tree minimizes the search cost by reusing the precomputed distance at that level .thus , reusing the pre computed distances results in better performance .some indexed structures were shown to perform better with smaller values of edit distances( ) whereas some others perform better at higher values .it would be advantageous to maintain multiple index structures and depending upon the edit distance , select the appropriate one .using cheaper distance computation algorithms can also significantly reduce the cpu cost .the quality of partioning is largely dependent on the heuristic used for selecting the pivots . as a future work ,we propose to analyse the performance of various index structures with different heuristics .we thank a kumaran of database systems lab , iisc for his advice during the work .99 w. a. burkhard , r. m. keller , some approaches to best - match file searching , _ comm . of the acm , 16(4):230236 _ , 1973 .r. baeza - yates , w. cunto , u. manber , s. wu , proximity matching using fixed - queries trees , _ the 5th combinatorial pattern matching , volume 807 of lecture notes in computer science , pages 198 - 212 _ , 1994 .v. ganti , r. ramakrishnan , j. gehrke , a. powell , j. french , clustering large datasets in arbitrary metric spaces , _ in the proceedings of international conference on data engineering _ , 1999 .p. ciaccia , m. patella , p. zezula , m - tree : an efficient access method for similarity search in metric spaces , _ in proceedings of the 23rd vldb international conference , athens , greece _ , september 1997 .edgar chavez , gonzalo navarro , ricardo baeza - yates , jose luis marroquin , searching in metric spaces , _ acm computing surveys _ , 1999 .peter n. yianilos , data structures and algorithms for nearest neighbor search in general metric spaces , _ proceedings of the fourth annual acm - siam symposium on discrete algorithms , p.311 - 321 _ , january 25 - 27 , 1993 .tolga bozkaya , meral ozsoyoglu , indexing large metric spaces for similarity search querie , _ acm transactions on database systems , vol .3 , pages 361 - 404 _ ,september 1999 .ilaria bartolini , paolo ciaccia , marco patella , string matching with metric trees using an approximate distance , _spire 2002 : 271 - 283_. i. kalantari , g. mcdonald , a data structure and an algorithm for the nearest point problem , _ ieee transactions on software engineering , 9(5 ) _ , 1983 .
similarity searching finds application in a wide variety of domains, including multilingual databases, computational biology, pattern recognition and text retrieval. similarity is measured in terms of a distance function (_edit distance_) in general metric spaces, which is expensive to compute. indexing techniques can be used to reduce the number of distance computations. we present an analysis of various existing similarity indexing structures for this purpose. the performance obtained using the index structures studied was found to be unsatisfactory. we propose an indexing technique that combines the features of clustering with the m tree (mtb), and the results indicate that it gives better performance.
direct imaging is sensitive to substellar objects with large projected separations from their host objects ( .2 " ; e.g. , ) , corresponding to larger orbital semi - major axes and periods compared to those detected with radial velocity and transit methods .therefore , over timescales of months to years , direct imaging observations often probe only short fractions of these orbits . in these cases , constraints on orbital parameterscan be used to perform a preliminary characterization of the orbit ( e.g. , , , , , , ) .orbital parameter constraints can also lead to mass constraints on directly imaged substellar objects ( e.g. , ) , constraints on additional planets in the system , and information about the interactions between planets and circumstellar disks ( e.g. , , ) .in addition , orbit fitting can be used to constrain the future locations of exoplanets , notably to calculate the probability of a transit ( e.g. ) , or to determine an optimal cadence of observations to reduce uncertainty in orbital parameter distributions . for future direct imaging space missions such as the _ wide - field infrared survey telescope _ ( _ wfirst _ ; , ) , it is particularly important to quickly and accurately fit newly discovered exoplanet orbits in order to plan future observations efficiently .several orbital fitting methods are currently used in astronomy .the family of bayesian markov chain monte carlo methods ( mcmc ) was introduced to the field of exoplanet orbit fitting by ford ( 2004 , 2006 ) and has been widely used ( e.g. , , ) .mcmc is designed to quickly locate and explore the most probable areas of parameter space for a particular set of data , and takes longer to converge as a parameter space becomes less constrained by data , as in the case of astrometry from a fraction of a long - period orbit .in addition , many types of mcmc algorithms can be inefficient at exploring parameter spaces if the corresponding surface is complicated ( e.g. ) .another commonly used tool for fitting orbits is the family of least - squares monte carlo ( lsmc ) methods , which uses a levenberg - marquardt minimization algorithm to locate the orbital fit with minimum value for a set of astrometry .once the minimum orbit is discovered , this method then randomly varies the measured astrometry along gaussian distributions defined by the observational errors . in cases where the parameter space is very unconstrained ,this method often explores only the area closest to the minimum orbit , leading to biases against areas of parameter space with lower likelihoods .for example , found significantly different families of solutions when using lsmc than when using mcmc for the same orbital data for pic b. lsmc is therefore effective at finding the best - fit solution , but not well - suited to characterizing uncertainty by fully exploring the parameter space . 
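for concreteness, a minimal metropolis-hastings step of the kind referred to above is sketched below in python; it is a generic random-walk sampler with a gaussian proposal and a user-supplied log-likelihood, not any of the specific orbit-fitting codes discussed in this work.

import numpy as np

def metropolis_hastings(log_like, theta0, n_steps, step_sizes, rng=None):
    # generic random-walk metropolis sampler over a parameter vector
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    logp = log_like(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step_sizes * rng.standard_normal(theta.size)
        logp_new = log_like(proposal)
        # accept with probability min(1, exp(logp_new - logp))
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)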
in this work ,we present orbits for the impatient ( ofti ) , a bayesian monte carlo rejection sampling method based on that described in , and similar to the method described in .ofti is designed to quickly and accurately compute posterior probability distributions from astrometry covering a fraction of a long - period orbit .we describe how ofti works and demonstrate its accuracy by comparing ofti to two independent mcmc orbit - fitting methods .we then discuss situations where ofti is most optimally used , and apply ofti to several sets of astrometric measurements from the literature .ofti , like other bayesian methods , combines astrometric observations and uncertainties with prior probability density functions ( pdfs ) to produce posterior pdfs of orbital parameters .these orbital parameter posteriors allow us to better characterize systems , for example by predicting future motion or by directly comparing the orbital plane to the orbits of other objects in the system or the distribution of circumstellar material .the basic ofti algorithm consists of the following steps : 1 .monte carlo orbit generation from priors 2 . scale - and - rotate 3 .rejection sampling ofti uses a modified bayesian rejection sampling algorithm .rejection sampling consists of generating random sets of parameters , calculating a probability for each value , and preferentially rejecting values with lower probabilities . for ofti ,the generated parameters are the orbital elements semi - major axis , , period , , eccentricity , , inclination angle , , position angle of nodes , , argument of periastron , , and epoch of periastron passage , . for bayesian rejection sampling algorithms such as ofti , the candidate density functions used to generate these random parameters are prior probability distributions .ofti begins by generating an initial set of seven random orbital parameters drawn from prior probability distributions . in this work, we use a linearly descending eccentricity prior with a slope of -2.18 for exoplanets , derived from the observed distribution of exoplanets detected by the radial velocity method .the use of this prior assumes that long - period exoplanets follow the same eccentricity distribution as the planets detected by the radial velocity method.while the shape of the eccentricity prior directly affects the shape of the eccentricity posterior , as we would expect , the posteriors of other parameters are less affected when changing between a linearly descending and a uniform prior ( see section 3.1 ) .we assume a purely random orientation of the orbital plane , which translates into a sin( ) prior in inclination angle and uniform priors in the epoch of periastron passage and argument of periastron .that is , the inclination angle , position angle of nodes , and argument of periastron priors are purely geometric .ofti initially generates orbits with = 1au and = , but these values are altered in the following step .we note that ofti can be easily run using different priors , making it useful for non - planetary systems and statistical tests . 
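a minimal python sketch of this first step is given below. it draws the orbital elements from the priors described above: a linearly descending eccentricity prior with slope -2.18 on [0, 1), drawn here by rejection sampling with the intercept fixed by normalization and the density truncated at zero (our reading of the quoted slope); a sin(i) prior in inclination; uniform priors in the remaining angles and in the epoch of periastron passage; and provisional values a = 1 au and position angle of nodes = 0 deg, to be modified by the scale-and-rotate step. the variable names and normalization details are our own assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sample_eccentricity(n, slope=-2.18, rng=rng):
    # linearly descending prior p(e) = intercept + slope*e on [0, 1),
    # truncated at zero; the intercept follows from normalizing the slope
    intercept = 1.0 - slope / 2.0
    out = np.empty(n)
    filled = 0
    while filled < n:
        e = rng.uniform(0.0, 1.0, size=n)
        u = rng.uniform(0.0, intercept, size=n)
        keep = e[u < np.maximum(intercept + slope * e, 0.0)]
        take = min(n - filled, keep.size)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

def sample_orbit_priors(n, rng=rng):
    ecc = sample_eccentricity(n)
    inc = np.degrees(np.arccos(rng.uniform(-1.0, 1.0, n)))  # p(i) ~ sin(i)
    argp = rng.uniform(0.0, 360.0, n)   # argument of periastron
    tau = rng.uniform(0.0, 1.0, n)      # epoch of periastron (orbit fraction)
    sma = np.full(n, 1.0)               # provisional a = 1 au
    pan = np.zeros(n)                   # provisional position angle of nodes
    # the period would follow from kepler's third law given a total-mass draw
    return sma, ecc, inc, argp, pan, tau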
once ofti has generated an initial set of orbital parameters from the chosen priors , it performs a `` scale - and - rotate '' step in order to restrict the wide parameter space of all possible orbits . this increases the number of orbits accepted in the rejection sampling step . the generated semi - major axis and position angle of nodes are scaled and rotated , respectively , so that the new , modified set of parameters describes an orbit that intersects a single astrometric data point . ofti also takes the observational uncertainty of the data point used for the scale - and - rotate step into account . for each generated orbit , random offsets are introduced in separation ( ) and position angle ( ) from gaussian distributions with standard deviations equal to the astrometric errors at the scale - and - rotate epoch . these offsets are added to the measured astrometric values , and the generated orbit is then scaled - and - rotated to intersect the offset data point , rather than the measured data point . the scale - and - rotate step produces a uniform prior in , and imposes a prior in semi - major axis . the posterior distributions ofti produces are independent of the epoch chosen for this step , but the efficiency of the method is not . some choices of the scale - and - rotate epoch result in a much higher fraction of considered orbits being accepted , and so the orbit is fit significantly faster . in order to take advantage of this change in efficiency , our implementation of ofti performs an initial round of tests that picks out the scale - and - rotate epoch resulting in the largest acceptance rate of orbits , then uses this epoch every subsequent time this step is performed . the scale - and - rotate step differentiates ofti from a true rejection sampling algorithm . using the modified semi - major axis and position angle of nodes values , ofti generates predicted and values for all remaining epochs . ofti then calculates the probability of the predicted astrometry given the measured astrometry and uncertainties . assuming uncorrelated gaussian errors , this probability is given by p \propto \exp ( -\chi^{2}/2 ) , where \chi^{2} is the usual sum of squared , error - normalized residuals . finally , ofti performs the rejection sampling step ; it compares the generated probability to a number randomly chosen from a uniform distribution over the range ( 0,1 ) . if the generated probability is greater than this random number , the generated set of orbital parameters is accepted . this process is repeated until a desired number of generated orbits has been accepted ( see figure [ fig : oftiprocess ] ) . as with mcmc , histograms of the accepted orbital parameters correspond to posterior pdfs of the orbital elements . ( caption of figure [ fig : oftiprocess ] , top panel : an orbit with semi - major axis 1au , a fixed position angle of nodes , and other orbital elements randomly drawn from appropriate priors is generated . red dots along the orbit show the astrometric locations of 51 eri b at the times of the observational epochs ; the dotted line shows the line of nodes . panels two and three : the scale - and - rotate step is performed . bottom panel : orbits accepted ( light red ) and rejected ( gray ) by the rejection sampling step of the ofti algorithm , with an inset close - up . ) our implementation of ofti makes use of several computational and statistical techniques to speed up the basic algorithm described above : * our implementation uses vectorized array operations rather than iterative loops wherever possible .
for example , instead of generating one set of random orbital parameters at a time , our program generates arrays containing 10,000 sets of parameters , and performs all subsequent operations on these arrays .our program then iterates over this main loop , accepting and rejecting in batches of 10,000 generated orbits at a time .10,000 is the empirically determined optimal number for our implementation . * our implementation of oftiis written to run in parallel on multiple cpus ( our default is 10 ) , speeding up runtime by a constant factor .* our implementation of ofti is equipped with a statistical speedup that increases the fraction of orbits accepted per orbits tested . due to measurement errors ,the minimum orbit typically has a non - reduced value greater than 0 .ofti makes use of this fact by calculating the minimum value of all orbits tested during an initial run , then subtracting this minimum value from all future generated values , rendering them more likely to be accepted in the method s rejection step . in rejection sampling , having a random variable whose range is greater than the maximum probability does nt change the distribution of parameters , but does result in more rejected trials . * our implementation of ofti also restricts the ranges of the input and total mass priors based on initial results .after our implementation has accepted 100 orbits , it uses the maximum , minimum , and standard deviation of the array of accepted parameters to infer safe upper and lower limits to place on the relevant prior .this changes only the range of the relevant prior , not the shape of the prior .this speedup prevents our implementation of ofti from wasting time generating orbits that have a negligible chance of being accepted . to illustrate that ofti returns identical results to mcmc over short orbital arcs , we fit the same orbit and priors with ofti and two mcmc orbit - fitting routines : the metropolis hastings mcmc algorithm described in , and an affine invariant mcmc orbit fitter from . in figure[ fig : sdss ] , we plot the metropolis hastings mcmc and ofti posterior pdfs calculated from astrometry of the system sdss j105213.51 + 442255.7 ab ( hereafter sdss 1052 ; ) from 2005 - 2006 .sdss 1052 is a pair of brown dwarfs with period of approximately 9 years .we chose only a subset of the available astrometry of sdss 1052 to illustrate the effect of fitting a short orbital arc .in addition , we assume a fixed system mass , and we use astrometry provided in table 2 of . the posterior distributions produced by ofti and the metropolis - hastings mcmc are identical .ofti was also validated using the relative astrometry of 51 eri b , a directly imaged exoplanet discovered by the gpi exoplanet survey in 2015 ( , ) . in figure[ fig : goi2_compare ] , we plot the posterior distributions produced by all three methods , calculated from relative astrometry of 51 eri b taken between 2014 december and 2015 september . as in the previous case ,all three sets of posterior distributions produced by ofti and the two mcmc implementations are identical .an important difference between mcmc and ofti involves the types of errors on the posteriors produced by the two methods . because each step of ofti is independent of previous steps , deviations from analytical posteriors have the form of uncorrelated noise , i.e.if our implementation of ofti is run until orbits are accepted , the output posteriors will not be biased , but simply noisy . 
as our implementation of oftiis run until greater numbers of orbits are accepted , noise reduces by .mcmc steps , on the other hand , are not independent . because the next mcmc step depends on the current location in parameter space , an un - converged mcmc run will result in biased , rather than noisy , posteriors .this is especially important in cases where mcmc has not been run long enough to achieve a satisfactory gelman - rubin ( gr ) statistic ( a measure of convergence ; ) . in these cases, ofti produces an unbiased result , while mcmc does not .this situation is illustrated in figure [ fig : marta ] , showing the posteriors produced by a metropolis - hastings implementation of mcmc and our implementation of ofti for all known astrometry of roxs 42b b ( , ) .after running for 30 hours on 10 cpus , the mcmc posteriors are still un - converged , as can be seen by the lumpy shape of the posterior . as a result , we see biases in the other posteriors .the gr statistics for each parameter were between 1.1 and 1.5 ( an acceptable gr statistic is , see e.g. ) .our implementation of ofti produced this result in 134 minutes , more than an order of magnitude faster than mcmc . to demonstrate the differences in the random errors incurred by ofti and the systematic errors of mcmc , and to illustrate ofti s computational speed for short orbital arcs, we calculated how the semi - major axis distributions generated by ofti and mcmc changed as more sets of orbital elements were accepted for ofti , and generated for mcmc .for both ofti and mcmc , we calculate the median of the first semi - major axes tested as a function of , resulting in an array of medians for ofti and an array of medians for mcmc .we then take the ratio of each number in these arrays to the median of the complete distribution of tested semi - major axes . since mcmc and ofti converge on the same distributions , the medians of both complete semi - major axis distributions are the same . as approaches the total number of orbits tested , the partial distributions approach the complete distribution , and the ratios approach 1 .we repeat this procedure for the lower and upper 1 limits of the ofti and mcmc semi - major axis distributions .these results are shown in figure [ fig : changing_medians ] for the orbit of 51 eri b. note that this represents the number of orbits tested , rather than number of orbits accepted , and so is directly proportional to computation time .after approximately orbits are tested , the ofti semi - major axis distribution ( red line ) converges on the final median semi - major axis ( to within 5% ) , while the mcmc semi - major axis distribution ( black line ) suffers from systematic over- and under - estimates of the final semi - major axis value until more than 3 orders of magnitude more orbits have been tested .similarly , ofti converges on the appropriate 1 upper and lower limits for the output semi - major axis distribution ( to within 5% ) after approximately orbits are tested , while it takes mcmc correlated steps in order to do the same .ofti is most efficient for astrometry covering smaller fractions of orbits , while mcmc achieves convergence faster for larger fractions of orbits .figure [ fig : timeplot ] illustrates this difference by displaying the wallclock time per cpu needed for each method to achieve convergence using astrometry from pic b . 
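the running - median diagnostic described above is straightforward to reproduce ; the sketch below ( our own illustrative implementation , not the published one ) computes the ratio of the median of the first n tested semi - major axes to the median of the complete distribution , which should approach 1 as n grows .

```python
import numpy as np

def running_median_ratio(samples, n_points=50):
    """Ratio of the median of the first N samples to the final median,
    evaluated at n_points logarithmically spaced values of N."""
    samples = np.asarray(samples, dtype=float)
    final_median = np.median(samples)
    ns = np.unique(np.logspace(1, np.log10(len(samples)), n_points).astype(int))
    ratios = np.array([np.median(samples[:n]) for n in ns]) / final_median
    return ns, ratios

# Example with a synthetic, heavy-tailed stand-in for a semi-major-axis chain.
rng = np.random.default_rng(1)
fake_sma = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=1_000_000)
ns, ratios = running_median_ratio(fake_sma)
print(ns[-1], ratios[-1])   # the ratio tends to 1 as N approaches the full sample
```

the same procedure , applied to the lower and upper confidence limits instead of the median , gives the other two curves discussed in the text .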
in order to compare the time for convergence by mcmc and ofti , we define a proxy for convergence time : our distributions are said to be `` converged '' for a statistic of interest ( e.g. median , 68% confidence interval ) when the statistic is within 1 of the final value , where the final value is calculated from a distribution of accepted orbits . for astrometry covering small fractions of a total orbit , ofti can compute accurate future location predictions in much less time than mcmc . we illustrate this application in figure [ fig : goi2 ] , which shows probability distributions predicting the , of 51 eri b on 2015 september 15 from four earlier astrometric points taken over a timespan of less than 2 months . overplotted is the actual measured location of 51 eri b. the predicted and observed medians for are and , and for are and . this prediction analysis was made before the most recent astrometric data were obtained ; 51 eri a was unobservable between february and september of 2015 . it took ofti less than 5 minutes ( running in parallel on ten 2.3ghz amd opteron 6378 processors ) to produce these predictions . ( caption of figure [ fig : marta ] : ofti produced these permissible orbits in 134 minutes . the gr statistics for the mcmc chains plotted are all greater than 1.1 , and the gr statistic for is close to 1.5 . ) ( caption of figure [ fig : changing_medians ] : ratio to the final value of the lower 1 limit , median , and upper 1 limit for one metropolis - hastings mcmc chain ( black ) and one ofti run ( red ) , computed for all published astrometry of 51 eri b. ofti converges on the appropriate median solution after testing approximately sets of orbital elements , while mcmc continues to `` wander '' in a correlated fashion until accepting approximately orbits . red horizontal lines are located at ratios of 1.05 and 0.95 , and a black dashed line is located at a ratio of 1.00 . ) ( caption of figure [ fig : timeplot ] : wallclock time needed by each method to converge on ( to within 1 error ) the lower 1 limit , median , and upper 1 limit of the complete distribution , as a function of the orbital fraction covered by input astrometry of pic b. as orbital fraction decreases , ofti performance improves , while mcmc performance slightly improves . for orbital fractions greater than 15% , we extrapolated the depicted behavior from the time ofti took to accept 50 orbits . ) ofti is a rejection - sampling method , meaning that it works by randomly sampling the parameter space of interest , then rejecting the sampled areas that do not match the data .
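to make the accept / reject step concrete , the sketch below shows a minimal vectorised version of it , assuming uncorrelated gaussian errors so that the relative probability of a trial orbit is exp( -chi^2/2 ) , and assuming the predicted separations and position angles come from a separate kepler solver that is not shown ; the optional chi - squared offset implements the statistical speedup mentioned earlier . all names and the toy numbers are ours .

```python
import numpy as np

rng = np.random.default_rng(2)

def rejection_step(sep_pred, pa_pred, sep_obs, sep_err, pa_obs, pa_err, chi2_min=0.0):
    """Accept or reject a batch of trial orbits.

    sep_pred, pa_pred : (n_orbits, n_epochs) predicted separations / position angles
    sep_obs, pa_obs   : (n_epochs,) measured astrometry, with 1-sigma errors
    chi2_min          : offset subtracted from every chi^2 (statistical speedup)
    Returns a boolean mask of accepted orbits.  Position-angle wrap-around is
    ignored here for brevity.
    """
    chi2 = (((sep_pred - sep_obs) / sep_err) ** 2
            + ((pa_pred - pa_obs) / pa_err) ** 2).sum(axis=1)
    prob = np.exp(-0.5 * (chi2 - chi2_min))   # <= 1 when chi2_min is the true minimum
    return prob > rng.uniform(size=prob.size)

# Toy usage: two epochs and 10,000 trial orbits with fake predicted astrometry.
n_orbits, n_epochs = 10_000, 2
sep_obs = np.array([0.45, 0.45]); sep_err = np.array([0.01, 0.01])   # arcsec
pa_obs = np.array([170.0, 167.0]); pa_err = np.array([0.5, 0.5])     # degrees
sep_pred = rng.normal(0.45, 0.02, size=(n_orbits, n_epochs))
pa_pred = rng.normal(168.0, 2.0, size=(n_orbits, n_epochs))
accepted = rejection_step(sep_pred, pa_pred, sep_obs, sep_err, pa_obs, pa_err)
print(accepted.sum(), "of", n_orbits, "trial orbits accepted")
```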
as astrometry drawn from a larger fraction of the orbit becomes available , the orbit becomes more constrained , and the areas of parameter space that match the data shrink , so that ofti becomes less efficient .a useful analogy for this phenomenon is throwing darts at a dartboard : when astrometry from only a small fraction of an orbit is available , many diverse orbits might fit the data , and a large fraction of the dartboard is acceptable , which results in a high acceptance rate .however , when more astrometry becomes available , a much smaller set of orbits will fit the data , and a much smaller fraction of the dartboard is acceptable , resulting in a lower acceptance rate .accordingly , ofti is most efficient for astrometry covering a short fraction of an orbit , typically less than 15% of the full orbital period .because directly imaged exoplanets and brown dwarfs have large physical separations from their primary objects ( greater than several au ) , ofti is ideal for fitting the orbits of directly imaged systems , especially when the time spanned by direct imaging observations is short .ofti is also optimal when a quick estimate of the mean of a distribution is required. this will be particularly helpful in planning follow - up observations for space missions , as it allows to quickly estimate the optimal time for observations , also taking into account the possibility of the planet passing behind the star ( see e.g. ) .as figure [ fig : changing_medians ] shows , ofti can converge on an estimate for the median of the 51 eri b semi - major axis distribution within 5% of the true median after fewer than orbits are tested . while an implementation of mcmc would have to run to completion in order to avoid a biased estimate of the semi - major axis distribution ,the independence of successive ofti trials allows ofti to converge on an unbiased estimate much faster .we use ofti to fit orbits to 10 sets of astrometry from directly imaged exoplanets , brown dwarfs , and low - mass stars in the literature .each substellar object has at least two published epochs of astrometry .we chose mostly objects for which an orbital fit has not been calculated because the available astrometry covers a short fraction of the object s orbit . in performing these fits, we make the natural assumption that all objects are bound , and that all objects execute keplerian orbits , as the chance of catching a common proper motion companion in the process of ejection or during the closest approach of two unassociated objects is particularly small .we calculate fits using only the data available in the literature .random and systematic errors in the astrometry available in the literature can bias these results .in particular , systematic errors in the measurement of plate scale or true north of the various instruments used to compile a single astrometric data set can significantly change orbit fits .sharp apparent motion due to astrometric errors is likely to be fit as a higher eccentricity orbit ; more astrometric data are needed to identify outliers of this nature . 
for each substellar object , we compiled relative astrometry , distances , and individual object mass estimates from the literature ( see appendix ) .values of the companion mass with error bars were given for 2 m 1207 b , and b , cd-35 2722 b , and gj 504 b , and these were adopted as reported by the listed references .for hd 1160 b and c , tel b , and hr 3549 b , no central value with error bars were given , but instead a range of masses was provided ; in these cases we adopted the middle of the range .we note that for visual orbits mass of the system enters kepler s third law as the sum of the mass of both components , and so uncertainties in the orbit are dominated by uncertainties in the primary mass .we used gaussian mass priors centered at the sum of the appropriate primary and secondary masses , with fwhm conservatively chosen to be the sum of the two mass uncertainties . for companion massesless than of the primary mass , which was the case for all objects except hip 79797 b and 2 m 1207 , we neglected the uncertainty in the companion mass , and simply adopted the uncertainty of the primary .we used symmetric gaussian priors in parallax or distance ( we used distance priors only if no parallax was available in the literature ) , and symmetric or asymmetric gaussian priors ( asymmetric where asymmetric error bars were given ) in total mass .asymmetric gaussian distributions consist of two half - gaussians with individual values , pieced together at the median of the aggregate distribution . for each orbit , we provide : * a table listing the maximum probability ( maximum product of probability and prior probabilities ) , minimum , median , 68% confidence interval , and 95% confidence interval orbital elements * a triangle plot showing posterior distributions for each orbital element and 2-dimensional covariances for each pair of orbital elements * a 3-panel plot showing 100 orbits drawn from the posterior distributions gj 504 b is the coldest and bluest directly - imaged exoplanet to date , and one of the lowest mass . its discovery was reported by , who also perform a rejection - sampling orbit fit similar to ofti . masses and astrometry are provided in tables [ tab : starparams ] and [ tab : gj504 ] , and results are shown in table [ tab : gj504_outputs ] and figures [ fig : gj504_covariance ] and [ fig : gj504_orbit ] .our results are consistent with the posterior distributions they find ( noting that have used a flat prior in eccentricity ) . using a linearly descending prior in eccentricity , we find a median semi - major axis of 48au , with 68% confidence between 39 and 69au , and a corresponding period of 299 years , with 68% confidence between 218 and 523 years .we also note that find an posterior that decreases with eccentricity , as we do for ofti calculations performed assuming both a uniform eccentricity prior and a linearly descending eccentricity prior .we calculated fraction of orbital coverage by dividing the time spanned by observations by the 68% confidence limits of the posterior distribution in period produced by ofti .the calculated orbital fraction for the orbit of gj 504 b is % . to illustrate the impact of our choice of eccentricity prior on the results, we performed another fit to the astrometry of gj 504 b using a uniform , rather than a linearly descending , prior in eccentricity .the results are shown in figure [ fig : prior_effect ] . 
the use of a different eccentricity prior changes the eccentricity posterior pdf , but does not significantly affect the other posterior pdfs .for example , when a linearly descending eccentricity prior is used , the semi - major axis posterior is , and when a uniform eccentricity prior is used , the posterior shifts to .ccccccc & au & 44.48 & 67.24 & 48 & 39 - 69 & 31 - 129 + & yr & 268.56 & 508.23 & 299 & 218 - 523 & 155 - 1332 + & & 0.0151 & 0.1519 & 0.19 & 0.05 - 0.40 & 0.01 - 0.62 + & & 142.2 & 131.7 & 140 & 125 - 157 & 111 - 171 + & & 91.7 & 4.9 & 95 & 31 - 151 & 4 - 176 + & & 133.7 & 61.6 & 97 & 46 - 146 & 8 - 173 + & yr & 2228.11 & 2419.96 & 2145.10 & 2068.06 - 2310.13 & 2005.07 - 2825.03 + [ tab : gj504_outputs ] orbit ( red line ) .these orbits are the same as the orbits plotted in the left panel.,scaledwidth=45.0% ] au to au when changing the linearly descending eccentricity prior to a uniform one .the two relevant eccentricity priors are plotted in the eccentricity panel as thin red and black lines , but the black prior is not visible behind the black posterior.,scaledwidth=50.0% ] hd 1160 a hosts two known companions : hd 1160 b , a low - mass object at the stellar / brown dwarf boundary , and hd 1160 c , an m3.5 star at a wider projected separation than hd 1160 b .we compute fits to both the orbits of hd 1160 b and hd 1160 c with respect to hd 1160 a based on astrometry compiled in tables [ tab : hd1160b ] and [ tab : hd1160c ] , and mass and distance values provided in table [ tab : starparams ] in the appendix .preliminary orbital fits were provided by , but an updated fit including the latest astrometry published in has not been computed .the results of our fits are shown in tables [ tab : hd1160b_outputs ] and [ tab : hd1160c_outputs ] and figures [ fig : hd1160b_covariance ] - [ fig : hd1160c_orbit ] .because hd 1160 b and c are non - planetary companions , we used a uniform prior in , rather than the empirically derived linearly descending prior used for exoplanets .this choice is supported by evidence that the empirical eccentricity distribution of long - period stellar binaries is approximately uniform ( e.g. ) . for hd 1160 b ,the fraction of orbital coverage is % , while for hd 1160 c , the fraction of orbital coverage is % .ofti constrains the range of possible inclination angles for the orbit of hd 1160 b tightly , with a 68% confidence interval of 96 - 119 , indicating close to an edge - on orbit .we find a semi - major axis of , and a period of years .a high eccentricity is favored .as seen in figure [ fig : hd1160b_orbit ] , the minimum orbit passes through all 1 error bars in and , allowing ofti to converge on a set of permissible orbits in fewer iterations than , for example , the orbit of tel b , whose data set contains several clear outliers ( see section 3.6 ) . the fit favors a more face - on orbit for hd 1160 c than for hd 1160 b , returning an inclination angle of .the probability that the inclination angle of hd 1160 b is within 10 of the inclination angle of hd 1160 c is 8% .it is fairly typical for triple stellar systems to be non - coplanar ( e.g. , ) , so this result is plausible . in keeping with the larger projected separation of hd 1160 c ,we find a larger semi - major axis , , and period , years , for hd 1160 c compared to hd 1160 b. hip 79797 ba and hip 79797 bb are a close binary brown dwarf system orbiting the a star hip 79797 a , first detected as an unresolved companion by , and resolved into a binary by . 
perform a preliminary orbit fit using mcmc , but note that the mcmc fit was un - converged .masses and astrometry are provided in tables [ tab : starparams ] and [ tab : hip79797 ] .results are shown in table [ tab:79797_outputs ] and figures [ fig : hip79797_covplot ] and [ fig : hip79797_orbit ] . like ,our ofti fit favors an edge - on orbit ( ) , but we find a significantly smaller semi - major axis ( , where nielsen et al give a semi - major axis of ) and corresponding period ( years , where give a period of years ) .our fit favors high eccentricity orbits .again , we use a uniform prior in eccentricity .the calculated orbital fraction for the orbit of hip 79797 ba and hip 79797 bb is % .hr 3549 b is a brown dwarf orbiting the a0v star hr 3549 a , discovered by , and followed up by .masses and astrometry are provided in tables [ tab : starparams ] and [ tab : hr3549 ] . use a lsmc technique to constrain the orbit of hr 3549 , and provide orbital elements for three orbits with greatest likelihood ( least ) . while minimization algorithms like these are effective at finding local maxima in likelihood space , ofti produces full bayesian posteriors that show probability , rather than likelihood .our ofti fit , in contrast to that of , makes use of both astrometry points provide , rather than just one , and uses a gaussian prior in mass with full width at half maximum ( fwhm ) equal to 5% of the reported value in , rather than a fixed value .our results are provided in table [ tab : hr3549_outputs ] and figures [ fig : hr3549_covplot ] and [ fig : hr3549_orbit ] .we find a semi - major axis of , which contains one of the semi - major axis values reported by ( 133.2au ) , while the other two semi - major axis values report ( 299.7 and 441.2au ) are at 1.7 and 2.9 , respectively , of our semi - major axis distribution . like , we find that most values of eccentricity , and values of , are consistent with the astrometry , a reflection of the fact that the astrometry covers only a fraction of the full orbital arc ( % ) .2massw j1207334 - 393254 b ( hereafter 2 m1207 b ) is a planetary mass companion orbiting 2 m 1207 a , a brown dwarf with mass approximately three times that of 2 m 1207 b. 2 m1207 b was first discovered using the very large telescope with naco by .it was followed up first by , who confirmed the common proper motion of the pair , establishing 2 m 1207 b as a bound companion , and again by .masses and astrometry are provided in tables [ tab : starparams ] and [ tab:2m1207 ] , and orbital fits are shown in table [ tab:2m1207_outputs ] and figures [ fig:2m1207_covariance ] and [ fig:2m1207_orbit ] . using a linearly descending prior in eccentricity, we find a median semi - major axis of 46au , with 68% confidence between 31 and 84au .we only loosely constrain the range of possible inclination angles , and determine that high eccentricity ( ) orbits are dispreferred .the median orbital period is 1,782 years , with 95% confidence between 633 years and 20,046 years .the calculated orbital fraction for the orbit of 2 m 1207 b is % . and b is a substellar companion orbiting the late b star and .the discovery of and b was first reported by , and followed up by .masses and astrometry are provided in tables [ tab : starparams ] and [ tab : kand ] , and results are shown in table [ tab : kapand_outputs ] and figures [ fig : kapand_covariance ] and [ fig : kapand_orbit ] . 
using a linearly descending prior in eccentricity, we find a median semi - major axis of 77au , with 68% confidence between 54 and 123au .eccentricity remains mostly unconstrained after our analysis , but inclination is determined to be between 59 and 159 with 95% confidence .the median orbital period is 378 years , with 95% confidence between 144 years and 2,033 years .the calculated orbital fraction for the orbit of and b is % . tel b is a brown dwarf orbiting the a0v star tel a ( , ) .masses and astrometry are provided in tables [ tab : starparams ] and [ tab : etatel ] , and results are shown in table [ tab : etatel_outputs ] and figures [ fig : etatel_covariance ] and [ fig : etatel_orbit ] .we find a median semi - major axis of 192 au , with 68% confidence between 125 and 432au .the corresponding median period is 1,493 years , with 68% confidence between 781 and 5,028 years .ofti produced a well - constrained posterior in inclination angle , with 95% of orbits having an inclination angle between 40 and 120 .several data points are outliers , which results in a comparatively low orbital acceptance rate of 0.002% .we use a uniform prior in eccentricity . the calculated orbital fraction for the orbit of b is % .2mass j01033563 - 5515561 ( ab ) b ( hereafter 2 m 0103 - 55 ( ab ) b ) is a 12 - 14 jupiter - mass object in orbit with respect to 2 m 0103 - 55 ( ab ) , a binary consisting of two low mass stars . first reported the discovery of 2 m 0103 - 55 ( ab ) b , and provide two epochs of astrometry .masses and astrometry are provided in tables [ tab : starparams ] and [ tab:2m0103 ] , and results are shown in table [ tab:2m0103_outputs ] and figures [ fig:2m0103_covariance ] and [ fig:2m0103_orbit ] .two epochs of astrometry is generally too short of a baseline for the two implementations of mcmc discussed in this work to converge within the timescale of a few days , but ofti quickly returns the appropriate posteriors . using a linearly descending prior in eccentricity , we find a semi - major axis of 102au , with 68% confidence between 75 and 149au , and a corresponding period of 1,682 years , with 68% confidence between 1,054 and 2,990 years .ofti also returns a firm lower limit on inclination angle , with 95% of orbits having inclination angles greater than 112 .the calculated orbital fraction for the orbit of 2 m 0103 ( ab ) b is % .cd-35 2722 b is an l - dwarf companion to the m1 dwarf cd-35 2722 a , discovered by the gemini nici planet - finding campaign . report two epochs of relative astrometry and show that cd-35 2722 b is a bound companion on the basis of common proper motion .masses and astrometry are provided in tables [ tab : starparams ] and [ tab : cd35 ] , and results are shown in table [ tab : cd35_outputs ] and figures [ fig : cd35_covariance ] and [ fig:2m0103_orbit ] . with ofti, we find a median semi - major axis of 115au , with 68% confidence between 74 and 216au .the corresponding period is 1,853 years , with 68% confidence between 947 and 4,772 years .inclination angle and eccentricity remain mostly unconstrained .we use a uniform prior in eccentricity . 
the calculated orbital fraction for the orbit of cd-35 2722b is % .ofti is a novel orbit - fitting method that reproduces the outputs of mcmc in orders of magnitude less time when fitting astrometric data covering only small fractions of orbits .a key difference with mcmc is that each ofti orbit is independent of the others , whereas an mcmc chain produces a series of correlated values which only define the posterior pdf once the chains have fully converged .this difference makes ofti an ideal tool when parameter estimates are required quickly , as in the context of a space mission .for example , when planning future observations , ofti can quickly compute the expected decrease in errors on orbital parameters for different observing cadences without having to wait for multiple mcmc chains to converge . in this paper, we have demonstrated the accuracy of the ofti method by comparing the outputs of our implementation of ofti with those of mcmc , the speed of ofti by analyzing the outputs produced for varying input orbital fraction , and the utility of ofti by providing fits to ten sets of astrometric data covering very short orbital arcs ( orbital coverage for all but one ) from the literature .ofti is a useful tool for constraining the orbital parameters of directly imaged long - period exoplanets , brown dwarfs , and long - period stellar binaries .it has been applied to fitting the orbits of exoplanets imaged by the gpies campaign and extremely long - period brown dwarfs , and will be used to fit the orbits of future _ wfirst _ targets .ofti s efficiency will be critical for a space - based mission like _ wfirst _ , allowing future observations to be planned effectively and efficiently .we thank the anonymous referee for helpful comments that improved the quality of this work .. has been supported by the national science foundation research experiences for undergraduates program under grant no .ast-1359346 and the stanford summer research early - identification program .s.b . would also like to acknowledge and thank charles royce and the royce fellowship for their support , and jasmine garani for useful discussion .thanks to mike fitzgerald , anand sivaramakrishnan , alexandra greenbaum , max millar - blanchaer , and vanessa bailey for helpful comments . based on observations obtained at the gemini observatory , which is operated by the association of universities for research in astronomy , inc ., under a cooperative agreement with the national science foundation ( nsf ) on behalf of the gemini partnership : the nsf ( united states ) , the national research council ( canada ) , conicyt ( chile ) , the australian research council ( australia ) , ministrio da cincia , tecnologiae inovacao ( brazil ) and ministerio de ciencia , tecnologa e innovacin productiva ( argentina ) ., f.m . 
, and m.p .were supported by nasa grantnnx14aj80 g .r.j.d.r , d.r , j.j.w , j.r.g have been supported by nsf grant ast-1518332 , national aeronautics and space administration ( nasa ) origins grant nnx15ac89 g , and nasa nexss grant nnx15ad95 g .this work benefited from nasa s nexus for exoplanet system science ( nexss ) research coordination network sponsored by nasa s science mission directorate .ccccccc hd 1160 a , hd 1160 b & & & & & & % + & & & & + + hd 1160 a , hd 1160 c & & & & , & & % + & & & & + + hip 79797 ba , hip 79797 bb & & , & & & & % + + hr 3549 a , hr 3549 b & & & & & & % + + 2 m 1207 a , 2 m 1207 b & & & & & & % + + and a , and b & & & & & & % + + tel a , tel b & & & & , & & % + & & & & & + + 2 m 0103 - 55 ( ab ) , 2 m 0103 - 55 ( ab ) b & & & & & & % + + cd-35 2722 a , cd-35 2722 b & & & & & & % + + gj 504 a , gj 504 b & & & & & & % + [ tab : starparams ] cccc 2002.57 & & & + 2003.84 & & & + 2005.98 & & & + 2008.50 & & & + 2010.71 & & & + 2010.83 & & & + 2010.89 & & & + 2010.90 & & & + 2011.52 & & & + 2011.67 & & & + 2011.80 & & & + 2011.85 & & & + 2014.62 & & & + 2014.62 & & & + + & & total change in : & 2.31 [ tab : hd1160b ] cccc 2002.57 & & & + 2002.74 & & & + 2002.87 & & & + 2003.53 & & & + 2003.64 & & & + 2003.84 & & & + 2003.86 & & & + 2005.98 & & & + 2007.90 & & & + 2008.50 & & & + 2010.71 & & & + 2010.83 & & & + 2010.89 & & & + 2010.90 & & & + 2011.52 & & & + 2011.67 & & & + 2011.76 & & & + 2011.80 & & & + 2011.85 & & & + 2014.62 & & & + + & & total change in : & 1.83 [ tab : hd1160c ] cccc 2010.27 & & & + 2010.35 & & & + 2011.35 & & & + 2012.26 & & & + 2012.66 & & & + + & & total change in : & 7 [ tab : hip79797 ] cccc 2013.03 & & & + 2015.01 & & & + 2015.96 & & & + 2015.99 & & & + + & & total change in : & 1.49 [ tab : hr3549 ] cccc 2004.32 & & & + 2004.66 & & & + 2005.10 & & & + 2005.23 & & & + 2005.24 & & & + 2005.32 & & & + + & & total change in : & 0.21 [ tab:2m1207 ] cccc 1998.49 & & & ; + 2000.31 & & & ; + 2000.38 & & & + 2004.33 & & & + 2004.33 & & & + 2004.34 & & & + 2004.34 & & & + 2006.43 & & & + 2007.75 & & & + 2008.31 & & & + 2008.60 & & & + 2008.60 & & & + 2009.27 & & & + 2009.35 & & & + 2009.50 & & & + + & & total change in : & 1.2 [ tab : etatel ] cccc 2011.23 & & & + 2011.39 & & & + 2011.61 & & & + 2011.62 & & & + 2012.16 & & & + 2012.28 & & & + 2012.39 & & & + + & & total change in : & 1.8 [ tab : gj504 ] ccccccc a & au & 45.27 & 77.86 & 77 & 50 - 173 & 42 - 834 + p & yr&218.02 & 482.21 & 479 & 252 - 1627 & 194 - 17134 + e & & 0.8119 & 0.2550 & 0.77 & 0.35 - 0.94 & 0.05 - 0.98 + i & & 109.3 & 98.2 & 103 & 96 - 119 & 92 - 149 + & & 10.2 & 99.9 & 96 & 39 - 143 & 6 - 174 + & & 66.1 & 61.1 & 63 & 38 - 91 & 9 - 169 + & yr & 2137.84 & 2138.27 & 2220.06 & 2131.28 - 2796.96 & 2087.36 - 9871.18 + [ tab : hd1160b_outputs ] ccccccc a & au&1631.43 & 2512.52 & 651 & 491 - 1083 & 372 - 2454 + p & yr & 45119.44 & 80174.59 & 11260 & 7362 - 24190 & 4852 - 82443 + e & & 0.6286 & 0.8110 & 0.33 & 0.11 - 0.60 & 0.02 - 0.82 + i & & 136.1 & 170.9 & 146 & 128 - 163 & 110 - 174 + & &31.4 & 93.1 & 84 & 28 - 150 & 4 - 176 + & &25.2 & 56.4 & 62 & 20 - 148 & 3 - 177 + & yr & 46827.45 & 2374.57 & 3812.12 & 2915.07 - 8680.52 & 2195.37 - 31708.72 + [ tab : hd1160c_outputs ] ccccccc & au & 2.01 & 9.96 & 3 & 2 - 8 & 2 - 35 + & yr & 8.71 & 82.70 & 23 & 11 - 86 & 7 - 735 + & & 0.9133 & 0.8965 & 0.75 & 0.31 - 0.95 & 0.05 - 0.99 + & & 75.2 & 85.0 & 83 & 66 - 93 & 34 - 117 + & & 151.1 & 104.0 & 95 & 39 - 145 & 6 - 174 + & &168.4 & 8.3 & 157 & 51 - 166 & 4 - 177 + & yr & 1998.44 & 
2061.23 & 2005.47 & 1998.34 - 2034.01 & 1995.48 - 2338.70 + [ tab:79797_outputs ] ccccccc & au & 85.84 & 83.73 & 94 & 66 - 159 & 50 - 360 + & yr & 520.01 & 496.86 & 592 & 346 - 1303 &231 - 4461 + & & 0.5103 & 0.4797 & 0.47 & 0.16 - 0.75 & 0.02 - 0.92 + & & 138.5 & 138.1 & 130 & 111 - 152 & 96 - 170 + & & 136.8 & 167.3 & 87 & 26 - 152 & 4 - 176 + & & 172.6 & 14.0 & 77 & 13 - 168 & 2 - 178 + & yr & 2094.84 & 2105.86 & 2115.38 & 2078.47 - 2319.26 & 2030.61 - 3633.32 +[ tab : hr3549_outputs ] ccccccc & au & 35.26 & 91.88 & 46 & 31 - 84 & 24 - 231 + & yr & 1153.44 & 4648.27 & 1782 & 974 - 4413 &633 - 20046 + & & 0.2226 & 0.5226 & 0.49 & 0.15 - 0.83 & 0.02 - 0.98 + & & 41.6 & 41.5 & 69 & 36 - 109 & 13 - 150 + & & 141.4 & 161.6 & 90 & 29 - 151 & 4 - 176 + & & 129.6 & 1.9 & 119 & 52 - 146 & 7 - 174 + & yr & 2424.39 & 2172.58 & 2683.34 & 2285.03 - 4277.65 & 2107.69 - 12883.36 + [ tab:2m1207_outputs ] ccccccc & au & 184.96 & 184.96 & 77 & 54 - 123 & 40 - 236 + & yr & 1385.97 & 1385.97 & 378 & 223 - 768 & 144 - 2033 + & & 0.8908 & 0.8908 & 0.41 & 0.12 - 0.70 & 0.02 - 0.85 + & & 116.6 & 116.6 & 101 & 83 - 125 & 59 - 159 + & & 138.7 & 138.7 & 112 & 29 - 159 & 3 - 177 + & & 67.8 & 67.8 & 63 & 49 - 84 & 26 - 127 + & yr & 2040.10 & 2040.10 & 2065.27 & 2043.38 - 2188.31 & 2015.35 - 2858.45 + [ tab : kapand_outputs ] ccccccc & au & 206.39 & 207.86 & 192 & 125 - 432 & 106 - 2091 + & yr & 1600.73 & 1640.25 & 1493 & 781 - 5028 & 612 - 53621 + & & 0.9084 & 0.0354 & 0.77 & 0.34 - 0.96 & 0.05 - 1.00 + & & 88.4 & 87.4 & 86 & 72 - 96 & 40 - 120 + & & 121.8 & 165.6 & 98 & 37 - 146 & 5 - 175 + & & 169.8 & 167.4 & 165 & 42 - 170 & 3 - 178 + & yr & 2851.39 & 3623.16 & 2669.33 & 2391.73 - 4459.40 & 2263.35 - 26828.14 + [ tab : etatel_outputs ] ccccccc & au & 104.92 & 134.46 & 102 & 75 - 149 & 58 - 256 + & yr & 1746.65 & 2337.20 & 1682 & 1054 - 2990 & 716 - 6723 + & & 0.1233 & 0.2839 & 0.32 & 0.09 - 0.59 & 0.01 - 0.74 + & & 123.6 & 115.6 & 127 & 119 - 144 & 112 - 165 + & & 34.3 & 145.7 & 87 & 25 - 155 & 3 - 177 + & & 124.4 & 136.9 & 122 & 98 - 143 & 22 - 167 + & yr & 3355.07 & 2028.69 & 3081.80 & 2534.01 - 4267.75 & 2068.56 - 7551.37 + [ tab:2m0103_outputs ] ccccccc & au & 286.48 & 10048.78 & 115 & 74 - 216 & 53 - 520 + & yr & 6858.18 & 1454272.75 & 1853 & 947 - 4772 & 580 - 17796 + & & 0.9058 & 0.9973 & 0.82 & 0.57 - 0.95 & 0.18 - 0.99 + & & 160.4 & 154.2 & 126 & 95 - 156 & 49 - 172 + & & 110.4 & 136.7 & 128 & 32 - 163 & 3 - 177 + & & 72.1 & 98.2 & 75 & 58 - 104 & 20 - 158 + & yr & 2088.36 & 2087.50 & 2125.23 & 2099.91 - 2184.95 & 2082.76 - 2531.59 + [ tab : cd35_outputs ]
we describe a bayesian rejection sampling algorithm designed to efficiently compute posterior distributions of orbital elements for data covering short fractions of long - period exoplanet orbits . our implementation of this method , orbits for the impatient ( ofti ) , converges up to several orders of magnitude faster than two implementations of mcmc in this regime . we illustrate the efficiency of our approach by showing that ofti calculates accurate posteriors for all existing astrometry of the exoplanet 51 eri b up to 100 times faster than a metropolis - hastings mcmc . we demonstrate the accuracy of ofti by comparing our results for several orbiting systems with those of various mcmc implementations , finding the output posteriors to be identical within shot noise . we also describe how our algorithm was used to successfully predict the location of 51 eri b six months in the future based on less than three months of astrometry . finally , we apply ofti to ten long - period exoplanets and brown dwarfs , all but one of which have been monitored over less than 3% of their orbits , producing fits to their orbits from astrometric records in the literature .
the unprecedented adoption of mobile handhelds has created a host of new services that require accurate localization , even in gps - denied environments .how to accurately localize without satellite positioning has been an active research topic .various cooperative localization methods have been developed suitable to high noise scenarios , e.g. , indoors .the motivation behind this work is the assumption in the literature of distributed cooperative localization that requires a universally known _ global coordinate system ( gcs)_. even in anchor - free algorithms , such as those in , where nodes localize to a relative _ local coordinate system ( lcs ) _ only , the anchors with shared gcs knowledge are required to transform the local coordinates to global ones .otherwise , nodes would have no idea the whereabout of the global origin .in the case of distributed ad - hoc networks , where the anchors are simply nodes with a good estimate of their gps coordinates , achieving a shared gcs between them is non - trivial .presumably , in this case , the anchors would have to communicate with each other to agree on a gcs with a common origin and pass that information to the rest of the network nodes .in addition , up - to - date information between nodes should be maintained , inducing an increased communication overhead .the solution we suggest is to use directly a gcs for all calculations , therefore eliminating any need for lcs and all of the described issues .the first obvious choice of a gcs would be to use gps as a common gcs to all nodes .unfortunately using gps coordinates in message passing operations would easily make calculations underflow due to the small distances inherent in indoors localization , and would require scaling and normalization at each node .this means that every node would have to convert incoming messages to a lcs , suitable for message passing operations , and then convert them back to the gcs for transmission , increasing the computational cost of the algorithm .instead , we propose a grid - based gcs that can be used directly to conduct message passing operations and at the same time hugely decrease the computational cost . in this mannerwe both remove any requirement for the networks to consent on a gcs and also achieve very low complexity .this correspondence proposes a novel scheme that uses a gcs system suitable for message passing algorithms in cooperative localization . as a real - life example, we use the nato s military grid reference system ( mgrs ) , cf .the approach no longer requires gcs coordination between anchors .also , it has inspired the use of parametric representations using multinomial probability mass functions ( pmfs ) , allowing for a fast robust and accurate cooperative localization algorithm that elegantly resolves the gcs knowledge requirement . in summary , we have made the following contributions : * we propose a grid - based gcs solution , i.e. 
, map gps coordinates to unique grid identifiers , solving the common reference issue in all distributed localization techniques .* parametric approximations to the pmfs are proposed to overcome the computational bottleneck of non - parametric belief propagation ( bp ) used in cooperative localization .* simulation results illustrate that the proposed grid - based bp method , which is referred to as grid - bp , provides similar accuracy with low computational cost when compared to common techniques with _ideal _ reference .we consider a network of nodes in a 2d environment which consists of agents and anchors , where and .let the space be subdivided into a square grid where each square `` bucket '' has a unique identifier , namely an i d .then let ] .for a specific node , we have thus , can be evaluated using the bayes rule as in which the sign `` '' means `` is proportional to '' , and normalization should be done to obtain the pmf .cooperative localization can be viewed as running inference on a probabilistic graphical model ( pgm ) , where messages are representations or probability functions . here , the proposed grid - based localization algorithm will be presented .first , the use of a cluster graph and bp will be motivated , and the used pgm will be analysed. then efficient approximation for the marginalization and the product operation will be given .the network can be modelled as a cluster graph and loopy belief message passing algorithm can be used .we adopt a bethe cluster graph .the lower factors are composed of univariate potentials .the upper region is composed of factors equal to , e.g. , see fig .[ fig : clustergraph ] .the lower factors are set to the initial beliefs for the given time slot , and the upper factors to the corresponding cpmfs : messages are then passed between nodes for multiple iterations until the node beliefs have converged .the message from node to node , at bp iteration is calculated by where intuitively , a message is the belief that node has about the location of node and is the observed value of the distance between the nodes , at time slot . then the belief of node is updated as where is a dampening factor used to facilitate convergence .bp continues until convergence , or if convergence is not guaranteed , reaches a maximum number of iterations . then the beliefs , representing approximations to the true marginals , for each node are found by , i.e. , .the proposed grid - bp is given as algorithm [ alg : hsm ] .each node needs to perform a marginalization operation , and a product operation .approximations are required for both complex operations . in grid - bp , we take advantage of the multinomial parametric form which we discuss next . [alg : hsm ] initialize beliefs nodes broadcast current belief collect distance estimates initialize initialize receive calculate , using using gibbs sampling ( i.e. , algorithm [ alg : hsmgibbs ] ) calculate , using .check for convergence send update belief , using .the calculation of gives the belief node has about node . to understand this ,let us assume the case where all energy in is concentrated at a single i d , then node would believe that node is located in one of the ids that approximate a `` circle '' with centre and radius . hence , to get , first we draw particles from . then we draw samples from and samples from .the gibbs sampling algorithm is provided as algorithm [ alg : hsmgibbs ] .we repeat algorithm [ alg : hsmgibbs ] for all incoming messages and we will get . 
[ alg : hsmgibbs ] set to empty sample which is a multinomial pdf sample ] , where denotes the number of unique ids in the pmf , i.e. .firstly we use the samples from each incoming message as observations and get the map estimate ] and the number of particles being .furthermore , to showcase the increased complexity of using gps coordinates as a common gcs , a variant of heva that uses gps was also provided . in heva - gps , messages contain gps coordinates that every node converts to an lcs before calculating and .afterwards the updated beliefs are converted back to gps coordinates and transmitted . when the average node connectivity is in fig .[ fig : noise ] , the rms localization error of all the algorithms as the noise coefficient increases is shown .we see that heva - bp and heva - bp - gps have a better accuracy than both grib - bp and hybrid - bp .grid - bp has a slightly higher rms error . versus the amount of particles used .] in fig .[ fig : time ] , the average simulation time against the number of particles is shown .the improvement in computational cost by grid - bp is observed , especially as the number of particles increases .it should also be noted that for higher particle numbers the cost of gps scaling becomes almost insignificant compared to the cost of message passing equations .note that both algorithms , heva and hybrid - bp have the strong assumption of sharing knowledge of the gcs origin , while grid - bp and heva - gps do not ( the realistic scenario ) .even though heva - bp - gps performs as good as heva - bp , there is an increase in computational cost with heva - bp due to the mapping of the gps coordinates to a local cartesian reference frame ( as can be observed in the mean simulation time in fig .[ fig : time ] ) . asthe number of particles gets higher , the relative computational efficiency of grid - bp can also be seen .note that all the simulations shown were run on an intel i7 2.6ghz , using python for scientific computing .this correspondence has proposed a novel parametric bp algorithm for cooperative localization that uses a grid - based system .the resulting grid - bp algorithm combines a grid - based gcs that alleviates the hidden issue of requiring shared reference knowledge , and a parametric representation which allows quick and efficient inference .simulation results showed that grid - bp s performance is significantly better than other bp algorithms that rely on a known gcs .grid - bp is also easy to be extended for mobile applications and add the non - los mitigation filter proposed in , making it a versatile and reliable choice in both military and civilian applications .
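as a concrete illustration of the grid - based coordinate idea that underlies grid - bp , the sketch below maps planar positions to the integer ids of fixed - size square buckets and back to bucket centres , and builds a multinomial belief as a normalised histogram over ids . it uses a simple local easting / northing grid rather than the full mgrs encoding , and the cell size , column count and all names are illustrative assumptions .

```python
import numpy as np

CELL = 5.0  # grid resolution in metres (illustrative choice)

def coord_to_id(x, y, cell=CELL, cols=10_000):
    """Map non-negative planar easting/northing coordinates to a unique
    integer bucket id (row-major numbering with `cols` columns)."""
    col = np.floor(np.asarray(x) / cell).astype(int)
    row = np.floor(np.asarray(y) / cell).astype(int)
    return row * cols + col

def id_to_center(bucket, cell=CELL, cols=10_000):
    """Return the centre coordinates of a bucket id (used when forming beliefs)."""
    row, col = divmod(np.asarray(bucket), cols)
    return (col + 0.5) * cell, (row + 0.5) * cell

# A node's multinomial belief is then just a normalised histogram over bucket ids.
rng = np.random.default_rng(3)
samples_x = rng.normal(120.0, 4.0, size=1000)
samples_y = rng.normal(80.0, 4.0, size=1000)
ids, counts = np.unique(coord_to_id(samples_x, samples_y), return_counts=True)
belief = dict(zip(ids.tolist(), (counts / counts.sum()).tolist()))
```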
we present a novel parametric message representation for belief propagation ( bp ) that provides a grid - based way to address the cooperative localization problem in wireless networks . the proposed grid - bp approach allows faster calculations than non - parametric representations and works well with existing grid - based coordinate systems , e.g. , nato s military grid reference system ( mgrs ) . this overcomes the hidden challenge inherent in all distributed localization algorithms that require a universally known global coordinate system ( gcs ) , even though every node localizes using an arbitrary local coordinate system ( lcs ) as a reference . simulation results demonstrate that grid - bp achieves similar accuracy at much reduced complexity when compared to common techniques that assume an _ ideal _ reference . keywords : belief propagation , cooperative localization .
the intricate character of financial markets has been one of the main motives for the physicists interest in the study of their statistical and dynamical properties . besides the asymptotic power - law behaviour for probability density function for price fluctuations , _ the return _ , and the long - lasting correlations in the absolute return , another important statistical feature observed is the return multi - fractal nature .this propriety has been important in the establishment of analogies between price fluctuations and fluid turbulence and the development of multiplicative cascade models for return dynamics too .changes in the price of a certain equity are basically related transactions of that equity .so , the _ traded volume _ , which is defined as the number of stocks that change hands during some period of time , is an important observable in the dynamics of financial markets .this observation is confirmed by an old proverb at old street that `` it takes volume to make prices move '' . in previous works several proprieties of traded volume , , either statistical or dynamical have been studied . in this article , we present a study of the multi - fractal structure of -minute traded volume time series of the equities which are used to compose the dow jones industrial average , dj30 .our series run from the of july until the december of with a length of around elements each .the analysis is done using the multi - fractal detrended fluctuation analysis , mf - dfa - order polynomials . from this orderahead , we have a nearly polynomial - order - independent multi - fractal spectrum , contrarly to what happens with fitting polynomials of smaller order . ] . besides the multi - fractral analysis we weight the influence of correlation , asymptotic power - law distribution and non - linearities in the multi - fractality of traded volume .since we are dealing with intra - day series we have to be cautious with the well - known daily pattern which is often considered as a lacklustre propriety .to that , we have removed that intra - day pattern of the original time series and normalised each element of the series by its mean value defining the normalised traded volume , where , and is defined as the average over time ( represents the intra - day time and the day ) .a common signature of complexity in a system is the existence of ( asymptotic ) scale invariance in several typical quantities .this scale invariance , self - affinity for time series , can be associated to a single type of structure , characterised by a single exponent , ( the hurst exponent ) or by a composition of several sub - sets , each one with a certain local exponent , , and all supported onto a main structue .the former is defined as a _ mono - fractal _ and the latter as a _ multi - fractal_. in this case the statistical proprieties of the various sub - sets are characterised by the local exponents are related with a fractal dimension composing the _ multi - fractal spectrum_. to evaluate , numerically , this function we have applied the mf - dfa method . for this procedureit was proved that the -th order fluctuation function , , presents scale behaviour .the correspondence between mf - dfa and the standard formalism of multifractals is obtained by , where is the exponent of the generalised partition function . 
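as a concrete illustration of the mf - dfa procedure used here , the sketch below computes the q - th order fluctuation function of a one - dimensional series ( profile , detrending in non - overlapping windows , q - th order averaging ) ; the generalised hurst exponent h( q ) is then the slope of log f_q( s ) against log s. the detrending order , the scale range and the choice of q values in this sketch are illustrative , not those used in the analysis of the text .

```python
import numpy as np

def mfdfa_fq(signal, scales, qs, order=1):
    """q-th order fluctuation function F_q(s) of a 1-d series (minimal MF-DFA).
    h(q) is obtained afterwards as the slope of log F_q(s) versus log s."""
    profile = np.cumsum(signal - np.mean(signal))
    fq = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Local variance around a polynomial trend in each window.
        f2 = np.empty(n_seg)
        for k, seg in enumerate(segs):
            coeffs = np.polyfit(t, seg, order)
            f2[k] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        for i, q in enumerate(qs):
            if q == 0:
                fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return fq

scales = np.unique(np.logspace(1.2, 3.0, 15).astype(int))
qs = np.arange(-20, 21, 5)
fq = mfdfa_fq(np.random.default_rng(5).standard_normal(20_000), scales, qs)
```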
from the legendre transform , , we can relate with the hölder exponent , . thus , using the previous equation we get f ( \alpha ) = q \left[ \alpha - h ( q ) \right] + 1 . in fig . [ fig](left ) we display the spectrum ( full line ) obtained from averages for each over the values of the companies . in our analysis runs from to . we have verified that presents a wide range of exponents from up to , corresponding to a deep multi - fractal behaviour . for we have obtained which agrees with the strong persistence previously observed . ( caption of fig . [ fig ] , left panel : _ vs. _ ; right panel : scaling exponents _ vs. _ averaged over the companies . the legend on the right is also valid for the left . the `` original '' and shuffled time series present a strong multi - fractal character , while the shuffled plus phase randomised time series presents a narrow width in , related with the almost linear dependence of and also with the strong contribution of the non - gaussian pdf of traded volume to multi - fractality . ) from our time series we can define new , related ones that can help us to quantify which factors contribute the most to the multi - fractal character of this observable . among these factors we name : linear correlations , non - linear correlations and a power - law - like pdf . to that end , we have shuffled the elements ( within each time series ) and from these series we have computed . from these uncorrelated time series we have created another set by randomising the phases in fourier space . afterwards , we have applied the inverse fourier transform to come back to the time variable . these new series have a gaussian stationary distribution and scaling exponent . in fig . [ fig](left ) we see that these two series also present a multi - fractal spectrum , although the shuffled series has a wider spectrum than the shuffled plus phase randomised series . concerning the hurst exponent , , we have obtained for the shuffled and for the shuffled plus phase randomised series . considering error margins , these values are compatible with of brownian motion . furthermore , we have made a set of only phase randomised time series for which we have obtained . from these values we have concluded that correlations have a key role in the persistent character of traded volume time series . using the scaling relation for the three series and assuming that all the factors are independent , we can quantify the influence of correlations , , the influence of a non - gaussian pdf , , and the weight of non - linearities , . the multi - fractality of a time series can be analysed by means of the difference of values between and , hence it is a suitable way to characterise multi - fractality . for a mono - fractal we have , _ i.e. _ , linear dependence of with .
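the two surrogate series used in this comparison can be generated with a few lines of standard code , as sketched below : a random shuffle destroys all temporal dependences while keeping the empirical distribution , and phase randomisation in fourier space keeps only the power spectrum and ( asymptotically ) gaussianises the distribution . this is a generic sketch of the usual surrogate - data recipe , not the authors' code .

```python
import numpy as np

rng = np.random.default_rng(6)

def shuffled(series):
    """Destroy all temporal correlations but keep the empirical distribution."""
    return rng.permutation(series)

def phase_randomized(series):
    """Randomise Fourier phases while keeping the power spectrum; the result is
    (asymptotically) Gaussian, so applying this to a shuffled series removes
    both correlations and the heavy-tailed distribution."""
    spec = np.fft.rfft(series)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
    phases[0] = 0.0                       # keep the mean component real
    if series.size % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=series.size)

volume = rng.lognormal(0.0, 1.0, size=4096)        # stand-in for traded volume
surrogate = phase_randomized(shuffled(volume))     # shuffled + phase randomised
```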
in fig .[ fig](right ) we have depicted for several time series from which we have computed the various .the results obtained are the following : , , , and .as it can be easily concluded the influence of linear correlations in traded volume muli - fractal nature is minimal with corresponding to of .this value is substancially smaller than the influence of which corresponds to of .our result is in perfect accordance with another previous result of us where , using a non - extensive generalised mutual information measure , we were able to show that non - linear dependences are not only stronger but also more resilient than linear dependences ( correlations ) in traded volume time series .last but not least , from the values of we have verified that the main factor for the multi - fractality of traded volume time series is its non - gaussian , generalised -gamma , probability density function with a weight of in .moreover , we have verified that the behaviour of for is quite different from the , which is also visible in the strong asymmetry of .this could indicate that large and small fluctuations appear due to different dynamical mechanisms .such scenario is consistent with the super - statistical approach recently presented vol - smdq , vol - dumas and closely related with the current non - extensive framework based on _ tsallis entropy _ . within this context and bearing in mindthe relation , we conjecture that , for traded volume , the sensivity to inicial conditions may be described by ^{\frac{1}{1-q_{sens}}}$ ] with .we thank c. tsallis and e. m. f. curado for their continuous encouragement and valuable discussions with p. ch .ivanov about phase randomisation .the data used in this work was provided by olsen data services to which we also ackowledge .this work benefitted from infrastructural support from pronex ( brazilian agency ) and financial support of fct / mces ( portuguese agency ) .99 r.n . mantegna and h.e .stanley , _ an introduction to econopysics : correlations and complexity in finance _ , ( cambridge u. p. , cambrigde , 1999 ) ; j.p .bouchaud and m. potters , _ theory of financial risks : from statistical physics to risk management _ , ( cambridge u. p. , cambridge , 2000 ) ; j. voit , _ the statistical mechanics of financial markets _ , ( springer - verlag , berlin , 2003 ) .
in this article we explore the multi-fractal properties of the 1-minute traded volume of the equities that compose the dow jones 30. we also evaluate the weights of linear and non-linear dependences in the multi-fractal structure of the observable. our results show that the multi-fractal nature of traded volume comes essentially from the non-gaussian form of the probability density functions and from non-linear dependences.
in the context of distributed control , distributed consensus algorithms are employed as the building block for implementing other distributed algorithms which rely on the individual decision of agents and the local communication among them .the extension of this class of algorithms to the quantum domain has been addressed in where four different generalizations of classical consensus states have been proposed . in necessary and sufficient conditions for asymptotic convergence of the quantum consensus algorithm is studied .optimizing the convergence rate of the algorithm to the consensus state has been addressed in .the majority of the analysis regarding the convergence rate of the algorithm has been focused on quantum networks with an undirected underlying graph . in this paper, we aim to study the convergence rate of the distributed consensus algorithm over a network of qudit systems with general ( i.e. either directed or undirected ) underlying topologies .the convergence rates to two different states of the network of quantum systems have been studied .these states are consensus and synchronous states .consensus state is the symmetric state which is invariant to all permutations .synchronous state is the state where the reduced states of the quantum network are driven to a common trajectory . employing the intertwining relation between the eigenvalues of the laplacian matrices of the induced graphs , we have shown that the convergence rate to the consensus state is obtained from the spectrum of all induced graphs combined .on the contrary , the convergence rate to the synchronous state is dictated by only the spectrum of the underlying graph of the network and therefore it is independent of the dimension of the hilbert space . by establishing the relation between the convergence rates to consensus and synchronous states , we have shown that both convergence rates are equal and independent of if the aldous conjecture holds true for all induced graphs of the networks .furthermore , we have proved that the synchronous state is reachable for any permutation - invariant system hamiltonian , while for the algorithm to converge to the consensus state , either the system hamiltonian should be zero ( i.e. ) or the analysis should be limited to the interaction picture . for different network topologies , by plotting the pareto region of the convergence rates to the consensus and synchronous states , we have studied the pareto optimal points and the global optimal points regarding both convergence rates . the rest of the paper is organized as follows .preliminaries on graph theory are provided in section [ preliminaries ] .section [ sec : evolutionqnet ] explains the evolution of the quantum network . in section [ sec : simulations ] , optimization of the convergence rates of the algorithm have been addressed over different topologies and section [ sec : conclusion ] concludes the paper .in this section , we present the fundamental concepts on graph theory , cayley and schreier coset graphs . a directed graph ( digraph ) is defined as with as the set of vertices and as the set of edges .each edge is an ordered pair of distinct vertices , denoting an directed edge from vertex to vertex . 
throughout this paper ,we consider directed simple graphs with no self - loops and at most one edge between any two different vertices .a weighted graph is a graph where a weight is associated with every edge according to a proper map , such that if , then ; otherwise .the edge structure of the weighted graph is described through its weighted adjacency matrix .the weighted adjacency matrix is a matrix with -th entry defined as below the indegree ( outdegree ) of a vertex is the sum of the weights on the edges heading in to ( heading out of ) vertex .a directed path ( dipath ) in a digraph is a sequence of vertices with directed edges pointing from each vertex to its successor in the sequence .a simple dipath is the one with no repeated vertices in the sequence .a directed graph is called strongly connected if there is a dipath between any pair of vertices in the graph .the weighted laplacian matrix of graph is defined as below , where is the indegree of the -th vertex .this definition of the weighted laplacian matrix can be expressed in matrix form as , where and are the indegree and the adjacency matrices of the graph .the laplacian matrix of a directed graph is not necessarily symmetric and its eigenvalues can have imaginary parts . defining and as vectors of length with all elements equal to one and zero , respectively , for the laplacian matrixwe have . in directed graphs , the eigenvalues of the the associated laplacian can be arranged in non - decreasing order as below , denotes the real part of the -th eigenvalue of the weighted laplacian matrix of the digraph .the digraph is said to be strongly connected iff .let be a group and let .the cayley graph of generated by ( referred to as the generator set ) , denoted by , is the directed graph where and .if ( i.e. , is closed under inverse ) , then is an undirected graph .if acts transitively on a finite set , we may form a graph with vertex set and edge set . similarly ,if is a subgroup in , we may form a graph whose vertices are the right cosets of , denoted and whose edges are of the form .these two graphs are the same when is the coset space , or when is the stabilizer of a point of and is called the schreier coset graph .considering a quantum network as a composite ( or multipartite ) quantum system with qudits , and assuming as the d - dimensional hilbert space over , then the state space of the quantum network is within the hilbert space .the state of the quantum system is described by its density matrix ( a positive hermitian matrix with trace one ) .the network is associated with an underlying graph , where is the set of indices for the qudits , and each element in is an ordered pair of two distinct qudits , denoted as with .permutation group acts in a natural way on by mapping onto itself . for each permutation we associate unitary operator over , as below where is an operator in for all . 
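as a concrete illustration of the in-degree laplacian and of the eigenvalue ordering used above, the following sketch (python/numpy; the adjacency convention A[i, j] = weight of the edge j → i is our own choice, made so that L has zero row sums and hence L·1 = 0) builds L = D_in − A for a small digraph and returns the real part of the eigenvalue with the second smallest real part.

```python
import numpy as np

def in_degree_laplacian(A):
    """Weighted Laplacian L = D_in - A.  Convention: A[i, j] is the weight of
    the directed edge j -> i, so row sums of A are in-degrees and L @ 1 = 0."""
    return np.diag(A.sum(axis=1)) - A

def second_smallest_real(L):
    """Real part of the eigenvalue with the second smallest real part."""
    return np.sort(np.linalg.eigvals(L).real)[1]

# a directed 3-cycle 0 -> 1 -> 2 -> 0 with unit weights
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
L = in_degree_laplacian(A)
print(second_smallest_real(L))   # 1.5 > 0, consistent with strong connectivity
```

for this strongly connected example the zero eigenvalue is simple and the remaining eigenvalues form a complex-conjugate pair with positive real part, as described in the text.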
employing the quantum gossip interaction introduced in , the evolution of the quantum network can be described by the following master equation + \sum\nolimits _ { \pi \in \textit{b } } { w_{\pi } \left ( u_{\pi } \times \boldsymbol{\rho } \times u_{\pi}^{\dagger } - \boldsymbol{\rho } \right ) } \end{gathered } % \vspace{-4pt}\ ] ] where is a subset of the permutation group , is the ( time - independent ) system hamiltonian , is the imaginary unit , is the reduced planck constant and where is a positive time - invariant weight corresponding to the permutation .these weights form the distribution of limited amount of weight up to , among edges of the underlying graph , i.e. where is the sum of the cycle length of cycles appearing in except for trivial one - cycles .the generator set should be selected in a way that the underlying graph corresponding to ( the group generated by which is a subset of ) is connected . in , four different consensus states generalized to the quantum domain are exploited .based on these schemes , three different possible consensus states can be defined which are reachable by quantum consensus algorithm .observable - expectation consensus [ avgstatedef ] the observable - expectation consensus is defined as the state where for any observable the following holds , where .synchronous state [ synchdef ] the synchronous state is defined as the state where the following holds where is the reduced state of the subsystem for an overall system state i.e. . in ,equation is defined as the reduced state consensus .in it is shown that the observable - expectation consensus and the synchronous state are equivalent .symmetric state [ symstatedef ] the symmetric state is defined as the state where for each unitary permutation , with for the special case that the subset is able to generate the permutation set ( i.e. ) the symmetric state is referred to as the consensus state .for the case of permutation - invariant , i.e. = 0 \quad \text{for every}\quad \pi \in s_n , \end{gathered } % \vspace{-4pt}\ ] ] we can eliminate the first term in lindblad equation by writing it in interaction picture , i.e. and then substituting the result in lindblad equation which in turn results in the following , in , it is shown that equation ( [ eq : lindblad3 ] ) can asymptotically reach the symmetric state given below , substituting ( 8) in ( 7 ) and using the fact that is permutation - invariant i.e. it can be concluded that and .[ theorem1 ] for permutation - invariant , the equation ( [ eq : lindblad2 ] ) can reach the symmetric state ( [ eq : averagestate ] ) .the following can be written for the symmetric state ( [ eq : averagestate ] ) therefore , the equation ( [ eq : lindblad2 ] ) will reach the symmetric state ( [ eq : averagestate ] ) , as long as the equation ( [ eq : lindblad3 ] ) reaches the symmetric state ( [ eq : qcmefinalconsensus ] ) .[ theorem2 ] equation ( [ eq : lindblad2 ] ) can reach the synchronous state if the underlying graph corresponding to is strongly connected and is permutation - invariant .based on ( [ eq : formula396 ] ) we have where is the unitary representation of a permutation that maps to . from equation ( [ eq : formula404 ] ) , it can be concluded that the i.e. the reduced states of the -th and -th qudits in the equilibrium are the same , i.e synchronous state is obtained .also equation([eq : formula404 ] ) implies the following , where . 
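a minimal numerical sketch of the dissipative part of the master equation above (i.e. the interaction-picture dynamics with the hamiltonian term removed) is given below for three qubits and a generator set of two transpositions with equal weights; the forward-euler integration, the specific weights and the random initial state are illustrative choices of ours, not values taken from the paper.

```python
import numpy as np

def swap_unitary(n, d, i, j):
    """Permutation unitary on (C^d)^{tensor n} swapping subsystems i and j."""
    dim = d ** n
    U = np.zeros((dim, dim))
    for idx in range(dim):
        digits = list(np.unravel_index(idx, (d,) * n))
        digits[i], digits[j] = digits[j], digits[i]
        U[np.ravel_multi_index(tuple(digits), (d,) * n), idx] = 1.0
    return U

def gossip_rhs(rho, terms):
    """Right-hand side of the dissipative (interaction-picture) dynamics
    d rho/dt = sum_pi w_pi (U_pi rho U_pi^dag - rho)."""
    out = np.zeros_like(rho)
    for w, U in terms:
        out += w * (U @ rho @ U.conj().T - rho)
    return out

n, d = 3, 2
terms = [(0.5, swap_unitary(n, d, 0, 1)), (0.5, swap_unitary(n, d, 1, 2))]

rng = np.random.default_rng(1)
M = rng.normal(size=(d**n, d**n)) + 1j * rng.normal(size=(d**n, d**n))
rho = M @ M.conj().T
rho /= np.trace(rho)                       # random initial density matrix

dt = 0.01
for _ in range(5000):                      # crude forward-Euler integration
    rho = rho + dt * gossip_rhs(rho, terms)

# at the symmetric fixed point rho commutes with every generated permutation
print(np.linalg.norm(terms[0][1] @ rho - rho @ terms[0][1]))
```

since the two transpositions generate the full permutation group of three elements, the state converges to its symmetrization and the printed commutator norm is close to zero.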
in ,equation is defined as the -expectation consensus .based on theorem [ theorem2 ] and definition [ symstatedef ] , we can state that for permutation - invariant and a strongly connected underlying graph which can generate the permutation group ( i.e. ) , the consensus state is reachable for , and on the other hand the consensus state is reachable for if .thus to have both consensus and synchronous states as feasible states , for the rest of the analysis presented in this paper we have assumed that and . the density matrix can be expanded in terms of the generalized gell - mann matrices ( * ? ? ?* appendix a ) as below , where is the number of particles in the network and denotes the cartesian product and matrices are the generalized gell - mann matrices .note that due to hermity of density matrix , its coefficients of expansion are real numbers and because of unit trace of we have .substituting the density matrix from ( [ eq : decompositiondensitygeneral ] ) in lindblad master equation ( [ eq : lindblad3 ] ) and considering the independence of the matrices we can conclude the following for lindblad master equation ( [ eq : lindblad3 ] ) , for all , with the constraint ( [ eq : lindbladconstraint ] ) on the edge weights .the tensor component of the quantum consensus state ( [ eq : qcmefinalconsensus ] ) can be written as below and for the strongly connected underlying graph , the qcme reaches quantum consensus , componentwise as below comparing the set of equations in ( [ eq : densityequation1 ] ) with those of the classical continuous - time consensus ( ctc ) problem we can see that the quantum consensus master equation ( [ eq : lindblad3 ] ) is transformed into a classical ctc problem with tensor component as the agents states . defining as a column vector of length with components , the state update equation of the classical ctccan be written as below , with the constraint ( [ eq : lindbladconstraint ] ) on the edge weights . is the corresponding laplacian matrix as below , where is the swapping operator given in ( * ? ? ?* appendix a ) provided that is replaced with which in turn results in gell - mann matrices of size .it is obvious that the convergence rate of the obtained classical ctc problem ( [ eq : quantumstateupdate ] ) is dictated by , where is the eigenvalue of with second smallest real value .thus , the corresponding optimization problem can be written as below , this problem is known as the fastest continuous time quantum consensus ( fctqc ) problem .in it is shown that the underlying graph of the obtained classical ctc problem ( [ eq : quantumstateupdate ] ) is a cluster of connected components .similar result can be deduced for directed underlying graphs and it can be shown that each strongly connected graph component corresponds to a given partition of into integers , namely , where and for is the number of indices in with equal values . for a given partition and its associated young tabloids , more than one connected component can be obtained .therefore for each partition we consider only one of them , and we refer to this graph as the induced graph .these induced graphs are the same as those noted in .each young tabloid is equivalent to an agent in the induced graph of the ctc problem and its corresponding coefficient ( ) is equivalent to the state of that agent . 
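the expansion above uses the generalized gell-mann basis; a compact version of the standard construction of these matrices is sketched here (python/numpy), giving, for dimension d, the d(d−1)/2 symmetric, d(d−1)/2 antisymmetric and d−1 diagonal traceless hermitian matrices.

```python
import numpy as np

def gell_mann(d):
    """Generalized Gell-Mann matrices for dimension d, normalized so that
    Tr(L_a L_b) = 2 * delta_ab."""
    mats = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), dtype=complex)
            S[j, k] = S[k, j] = 1.0
            mats.append(S)                                  # symmetric
            A = np.zeros((d, d), dtype=complex)
            A[j, k], A[k, j] = -1j, 1j
            mats.append(A)                                  # antisymmetric
    for l in range(1, d):
        D = np.zeros((d, d), dtype=complex)
        D[:l, :l] = np.eye(l)
        D[l, l] = -l
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * D)       # diagonal
    return mats

lams = gell_mann(3)
print(len(lams))                                            # 8 for d = 3
print(all(abs(np.trace(L)) < 1e-12 for L in lams))          # all traceless
```

tensor products of these matrices (together with the identity) provide the product basis in which the density-matrix coefficients become the agent states of the classical continuous-time consensus problem described above.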
for more details on young tabloids and their association with partitions of an integer, we refer the reader to .based on the fact that the underlying graph of the ctc problem ( [ eq : densityequation1 ] ) is a cluster of connected components , in it is shown that the laplacian matrix is a block diagonal matrix where each block corresponds to one of the connected components , with state vector . the state update equation ( [ eq : quantumstateupdate ] ) for the state vector is as below , with as the laplacian matrix which is one of the blocks in . therefore , the convergence rate of ( [ eq : lindblad3 ] ) to the fixed point ( [ eq : qcmefinalconsensus ] ) is equivalent to the convergence rate of the obtained classical ctc problem , i.e. , which in turn is determined by the real part of the spectrum of the induced graphs laplacian matrices . in other wordswe have .let and be two given partitions of , then dominates if we have in , it is shown that the spectrum of the induced graph corresponding to the dominant partition is included in that of the less dominant partition .this is known as the intertwining relation .also , in , it is shown that if the underlying graph of the quantum network is a connected and undirected graph then the second smallest eigenvalues of all induced graphs are equal .this is known as the generalization of aldous conjecture . using this result , in , the problem of optimizing the convergence rate of quantum consensus algorithmis reduced to the problem of maximizing the second smallest eigenvalue of the underlying graph of the quantum network .thus , the convergence rate is independent of ( the dimension of the hilbert space ) . in general, aldous conjecture does not hold true for the case of directed underlying graphs .therefore for optimizing , all induced graphs should be considered .hence , the convergence rate will depend on ( the dimension of the hilbert space ) . in next section , for several examples , we have shown that the aldous conjecture is partially true . expanding in terms of gell - mann matrices ( similar to ( [ eq : decompositiondensitygeneral ] ) ) and substituting as in ( [ eq : formula413 ] ) we have since ( [ eq : formula413 ] ) can be concluded if the equation above holds true therefore ( [ eq : formula413 ] ) is the necessary and sufficient condition for reaching the synchronous state .in other words , reaching consensus in the underlying graph ( not necessarily in the induced graphs ) is the necessary and sufficient condition for reaching the synchronous state .therefore , it can be concluded that for analyzing the convergence rate to the synchronous state , it suffices to study the convergence rate of the classical consensus algorithm over only the underlying digraph of the network . as a result the convergence rate to the synchronous state wold be independent of ( the dimension of hilbert space ) . for optimizing the convergence rate to the synchronous state, the corresponding optimization problem can be written as below , where is the laplacian matrix of the underlying graph .note that as long as the aldous conjecture holds true , the convergence rate to both the synchronous and the consensus states are the same , otherwise the convergence rate to the synchronous state is faster than that of the consensus state . 
for the rest of this paper , we will study different underlying digraphs and we will investigate if the aldous conjecture holds true and also we will study the convergence rate to both the synchronous state and the consensus states .in this section , we optimize the convergence rates of the distributed quantum consensus algorithm to the consensus and the synchronous states , over different topologies with three and four qudits .topologies with qudits have two induced graphs of sizes and , where the smaller induced graph is identical to the underlying graph of the topology .we denote the laplacian matrices of the underlying graph and the induced graphs by , and , respectively . using the intertwining relation between laplacian matrices of the induced graphs, we can state that all eigenvalues of are amongst those of .therefore , the convergence rates to the synchronous and consensus states are dictated by the second smallest eigenvalues of and , respectively .the first topology that we consider is a digraph with three qudits with one cycle of length three and one transposition .the underlying and the induced graphs of this topology are depicted in figure [ fig : g3d1graph ] .the weight on edges of the cycle and the transposition are denoted by and , respectively .and ( b ) the resultant induced graph.,width=415 ] the laplacian matrices for the underlying and the induced graphs of this topology are as below , } , % } \end{aligned } % \vspace{-2pt}\ ] ] where .the nontrivial eigenvalues of are , where and .thus the second smallest eigenvalue of is . in the case of nontrivial eigenvalues are and and the second smallest eigenvalue of is .hence the convergence rate to the synchronous state and the consensus state is dictated by the following , in figure [ fig : g3d1pareto ] , we have plotted the parieto region for and , with the constraint . from figure[ fig : g3d1pareto ] , it is obvious that there is not any global optimal point for both and .the region is bounded by two lines , namely and and a convex curve , where all three of them are obtained for the constraint .the line is obtained for where and thus .the line is obtained for which results in .the convex curve between two lines is obtained for .note that the points along the line are the only set of convergence rates where the aldous conjecture holds true .regarding the consensus state , the optimal convergence rate is the point and in the pareto region which is obtained for and for synchronous state , the optimal convergence rate can be obtained for any of the points along the line . although the points on the convex curve in figure [ fig : g3d1pareto ] do not result in any of the optimal convergence rates but these points act as maximal points which can be used for trade - off between the convergence rates to the consensus states and the synchronous state . and of the graph depicted in figure [ fig : g3d1graph ] ( a).,width=566 ] the second topology that we consider is a digraph with three qudits , two cycles and one transposition .the underlying and the induced graphs of this topology are depicted in figure [ fig : g3d2graph ] .the weight on edges of the cycles and the transposition are denoted by , and , respectively . and ( b ) the resultant induced graph.,width=415 ]the laplacian matrices for the underlying and the induced graphs of this topology are as below , } , % } \end{aligned } % \vspace{-4pt}\ ] ] where .the nontrivial eigenvalues of are where and and the second smallest eigenvalue of is . 
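the pareto regions discussed in the next section can also be traced numerically. the sketch below (python/numpy) does this for the three-qudit cycle-plus-transposition topology: the size-3 induced graph is realised as the natural action of the generators on the three qudit labels (the underlying graph), and the size-6 induced graph as their action on the six orderings of the labels; this identification, the simple saturation of a unit weight budget w1 + w2 = 1, and all function names are our own assumptions, meant only to show how the second smallest eigenvalues of the two laplacians vary with the weights.

```python
import itertools
import numpy as np

def laplacian_of_action(weighted_gens, points, act):
    """L = sum_pi w_pi (I - P_pi), where P_pi is the permutation matrix of the
    action 'act' of generator pi on the given list of points."""
    index = {p: i for i, p in enumerate(points)}
    m = len(points)
    L = np.zeros((m, m))
    for w, pi in weighted_gens:
        P = np.zeros((m, m))
        for p in points:
            P[index[act(pi, p)], index[p]] = 1.0
        L += w * (np.eye(m) - P)
    return L

def lambda2(L):
    return np.sort(np.linalg.eigvals(L).real)[1]

act_on_point = lambda pi, x: pi[x]                     # size-3 induced graph
act_on_order = lambda pi, s: tuple(pi[i] for i in s)   # size-6 induced graph

cycle = (1, 2, 0)          # the 3-cycle 0 -> 1 -> 2 -> 0
swap = (1, 0, 2)           # the transposition of qudits 0 and 1

points3 = [0, 1, 2]
points6 = list(itertools.permutations(range(3)))

pareto = []
for w1 in np.linspace(0.0, 1.0, 101):
    w2 = 1.0 - w1                                      # saturate the budget
    gens = [(w1, cycle), (w2, swap)]
    rate_sync = lambda2(laplacian_of_action(gens, points3, act_on_point))
    rate_cons = lambda2(laplacian_of_action(gens, points6, act_on_order))
    pareto.append((w1, rate_sync, rate_cons))

print(max(pareto, key=lambda t: t[2]))   # weights giving the fastest consensus
```

the scan reproduces the qualitative picture described below: with only the transposition the group generated is not transitive and both rates vanish, with only the cycle the underlying graph is still strongly connected but the size-6 induced graph splits into two components, so the consensus rate vanishes while the synchronization rate does not.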
in the case of , the nontrivial eigenvalues are and and the second smallest eigenvalue of is . hence the convergence rate to the synchronous state and the consensus state is dictated by the following , with constraint , format of the pareto region for this topology is similar to that of , i.e. the pareto region is bounded by the vertical line , the line and a convex curve between these two lines .the boundaries of the pareto region are obtained for the case that either one of or is zero . in this case, this topology reduces to .in other words , the second cycle is redundant as it is slowing down the convergence rates to both the consensus state and the synchronous state .interestingly , for the case ( where the underlying graph is an undirected graph ) , the aldous s conjecture does not always hold true .as an example for , the convergence rates are .similar to topology , the points along the line are the only set of convergence rates where the aldous conjecture holds true .the third topology that we have analyzed is a digraph with three qudits and two transpositions .this topology is identical to an undirected path graph with three vertices .the weight on edges of the transpositions are denoted by and .the laplacian matrices for the underlying of this topology is as below , } , \end{gathered } % \vspace{-4pt}\ ] ] in it is shown that for topologies with undirected and connected underlying graphs , the convergence rate of the quantum consensus algorithm to the consensus state is obtained from the second smallest eigenvalue of the laplacian matrix of the underlying graph .thus it can be concluded that independent of the value of the weights , for topologies with undirected and connected underlying graphs , the convergence rates to the consensus and the synchronous states are the same .hence for topology , the convergence rates to consensus and the synchronous states are always equal and considering the constraint , the pareto region for this topology would be a direct line between points and .point is the global optimal point in terms of both convergence rates which is obtained for .the fourth topology that we have considered in this section is a digraph with four qudits , one cycle of length four and two transpositions .the laplacian matrix of this topology is as below , , % $ } \end{gathered } % \vspace{-4pt}\ ] ] the weight on edges of the cycle and the transposition are denoted by and , respectively .this topology has four induced graphs of sizes , , and .we denote the laplacian matrices of these induced graphs by , , , and the second smallest eigenvalues of each one of these matrices by , , and . is identical to the laplacian matrix of the underlying graph of the topology .the smallest eigenvalue of each one of these laplacian matrices is zero . using the intertwining relation between laplacian matrices of the induced graphs, we can state that all eigenvalues of , and are amongst those of , and , respectively .since includes all irreducible representations of ( except the one corresponding to the largest eigenvalue ) then based on , it can be concluded that all eigenvalues of ( except its largest eigenvalue ) are amongst the eigenvalues of .therefore , the convergence rates to the synchronous and consensus states are equal to and , respectively .as depicted in figure [ fig : g4d5pareto ] , the boundary of the pareto region for this topology is bounded by lines and and two concave curves . 
the line stretches between points and where the latter point is obtained for , , and it is denoted as point in figure [ fig : g4d5pareto ] .the vertical line stretches between points , and , where the latter point is obtained for , , and it is denoted as point in figure [ fig : g4d5pareto ] .note that this point is the global optimal point in regards to the convergence rates of the algorithm to both consensus and synchronous states .the point that two boundary curves meet each other is , which is obtained for , , and it is denoted as point in figure [ fig : g4d5pareto ] . and of the graph .,width=529 ]considering a network of qudits , we have studied the convergence rate of the distributed consensus algorithm with general ( i.e. either directed or undirected ) underlying topology towards consensus and synchronous states .we have established that the convergence rate to both states are equal iff the aldous conjecture holds true . in case of networks that their underlying graph contains cycles ,the aldous conjecture does not necessary hold true and it should be analyzed per case for each combination of weights . in our future work , we will study the relation between the convergence rates and the aldous conjecture by relaxing the consensus state to a symmetric state which is invariant to only a subset of all permutations .other future studies will focus on analysing the discrete - time model of the quantum consensus algorithm and the quantum gossip algorithm over quantum networks with general underlying topologies .l. mazzarella , a. sarlette and f. ticozzi , _ a new perspective on gossip iterations : from symmetrization to quantum consensus _ , 1em plus 0.5em minus 0.4emieee 52nd annual conference on decision and control ( cdc ) , pp .250 - 255 , 2013 .l. mazzarella , a. sarlette and f. ticozzi , _consensus for quantum networks : symmetry from gossip interactions _ ,1em plus 0.5em minus 0.4emieee trans .control , vol .158 - 172 , jan . 2015 .g. shi , d. dong , i. r. petersen and k. johansson , _ reaching a quantum consensus : master equations that generate symmetrization and synchronization _, 1em plus 0.5em minus 0.4emieee trans .control , pp . 1 - 14 , 2015. g. shi , s. fu and i. r. petersen , _ quantum network reduced - state synchronization part ii - the missing symmetry and switching interactions _, 1em plus 0.5em minus 0.4emamerican control conference ( acc ) , pp .92 - 97 , 2015 .
distributed consensus algorithms over networks of quantum systems have been the focus of recent studies in the context of quantum computing and distributed control. most of the progress in this area has concerned convergence conditions and optimizing the convergence rate of the algorithm for quantum networks with an undirected underlying topology. this paper addresses the extension of this problem to quantum networks with directed underlying graphs. in doing so, the convergence to two different stable states, namely the consensus and synchronous states, is studied. based on the intertwining relation between the eigenvalues, it is shown that for determining the convergence rate to the consensus state all induced graphs should be considered, while for the synchronous state the underlying graph alone suffices. furthermore, it is illustrated that for the range of weights over which the aldous conjecture holds true, the convergence rates to both states are equal. using the pareto region for the convergence rates of the algorithm, the global and pareto optimal points for several topologies are provided. keywords: quantum network synchronization, distributed consensus, aldous conjecture, optimal convergence rate.
the most challenging instances of computational problems are often found near a critical threshold in the problem s parameter space , where certain characteristics of the problem change dramatically .one such problem , already discussed in ref . , is the 3-coloring problem .consider a random graph having vertices and edges placed randomly among all possible pairs of vertices .the number of edges emanating from each vertex is then poisson - distributed around a mean degree . to 3-color the graph ,we need to assign one of three colors to each vertex so as to minimize the number of `` monochromatic '' edges , i.e. , those connecting vertices of the same color .in particular , we may want to decide whether it is possible to make an assignment without any monochromatic edges using , for instance , a backtracking assignment procedure .typically , if the mean degree is low ( for example , when each vertex most likely has fewer than 3 neighbors ) , one quickly finds a perfect coloring .if the mean degree is high , one soon determines that monochromatic edges are unavoidable after fixing just a small number of vertices . at an intermediate mean degree value , however , some graphs are perfectly colorable while others are not . in that case , for each instance one must inspect many almost - complete colorings , most of which fail only when trying to assign a few final vertices , before colorability can be decided . for increasing , the regime of mean degree values for which the decision problem is hard becomes narrowly focused , while the computational complexity of the backtracking algorithm within this regime grows faster than any power of , signs of the impending singularity associated with a phase transition .such findings have spawned considerable interest among computer scientists and statistical physicists alike .on one hand , there appear to be close links to the properties of spin glass systems . using replica symmetry breaking ,it was recently argued that 3-coloring undergoes a colorability transition at , heralded by the spontaneous emergence at of a sizable 3-core that becomes over - constrained at the transition .this analysis shows , furthermore , that the hardest instances to decide are located between a clustering transition at and . on the other hand, attempts have been made to relate the nature of the phase transition to complexity classifications developed by computer scientists for combinatorial problems : it has been suggested that np - complete problems , which are hard to solve , display a first - order phase transition while easier problems lead to a second - order transition .such a relation , while intriguing and suggestive , is bound to be questionable in light of the fact that these phase transitions are based on a notion of average - case complexity ( for instance , 3-coloring averaged over the ensemble of random graphs ) , distinct from the notion of worst - case complexity used by computer scientists to define np - completeness and other such complexity theoretic categories .in fact , it currently appears that a first - order transition is merely an indicator for the complexity of certain types of algorithms : local searches . a model problem with a discontinuous transition , -xorsat , can be solved by a fast global algorithm yet it is extremely hard for local search or backtracking assignment algorithms . 
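the backtracking assignment procedure sketched above can be written in a few lines; the python sketch below (our own minimal version, with no vertex-ordering heuristics, so it is far slower near the threshold than the solvers used in practice) builds a random graph of mean degree c and decides 3-colorability.

```python
import random

def random_graph(n, c, rng):
    """G(n, m) random graph with m = round(c*n/2) edges, i.e. mean degree c."""
    m = int(round(c * n / 2))
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def three_colorable(adj):
    """Backtracking decision procedure: color vertices one by one and retreat
    as soon as every color choice would create a monochromatic edge."""
    n = len(adj)
    color = [-1] * n

    def assign(v):
        if v == n:
            return True
        for c in range(3):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = -1
        return False

    return assign(0)

rng = random.Random(0)
print(three_colorable(random_graph(50, 4.7, rng)))
```

far below the threshold this procedure terminates almost immediately; near the threshold its running time fluctuates strongly from instance to instance, which is the behaviour described above.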
in this paperwe consider the 3-coloring problem mentioned above .the problem is among the hardest combinatorial optimization problems , making it difficult to study the asymptotic properties of its phase transition with exact methods .it is also of considerable interest in its own right as a model for many practical optimization problems , and it is of some physical relevance due to its close relation to potts anti - ferromagnets . some aspects of the 3-coloring phase transition have previously been explored . in particular , culberson and gent studied the phase transition for random graphs with _ exact _methods by `` growing '' random graphs of size , sequentially adding random edges to an existing graph to increase , and checking along the way whether the graph is still colorable .once a graph becomes uncolorable , it is discarded from the list of growing graphs , so the set of graphs becomes increasingly less representative of the ensemble when passing through the transition . in the process , these authors have evaluated the constrainedness of the variables in the graph , studying in detail many aspects of the approach to the transition . however , the main quantity they measure , called the `` spine '' , is in general an upper bound on the order parameter we consider here , since contributions from uncolorable graphs are neglected .we investigate the properties near the phase transition by applying an optimization heuristic called _ extremal optimization _( eo ) .eo was recently introduced as a general - purpose optimization method based on the dynamics of driven , dissipative systems .our study shows that eo is capable of determining many ground - state properties efficiently , even at the phase transition .eo performs a local neighborhood search that does not get stuck in local minima but proceeds to explore near - optimal configurations broadly .hence , it is particularly well suited to measure global properties of the configuration space . here, we use it to estimate the `` backbone '' , an overlap property between the highly degenerate ground state configurations that provides a more convenient order parameter than measuring mutual overlaps of all ground states . while eo is not exact , benchmark comparisons with exactly - solved , large instances justify our confidence in its results .our biggest errors originate from the lack of statistics at large .our results indicate that the transition in the backbone size is of first order , though with only a small discontinuity . in fact , the discontinuity does not arise uniformly for all graphs in the ensemble , but is due to a fraction of instances that have a strong backbone of a characteristic size while the rest have hardly any backbone at all . using the procedure of ref . to control the quality of finite size scaling for the ground state cost function , we estimate the location of the transition as , where the numbers in parentheses denote the statistical error bar in the final digits .this is consistent with the presumably correct value of given by replica symmetry breaking methods ( see also refs . for earlier estimates ) .we measure the size of the scaling window as with , close to the value of 1.5 estimated for 3-sat , although it may be that trivial -fluctuations from the variables not belonging to the 3-core dominate at much larger than considered here . in the following section, we introduce the problem of 3-coloring in more detail and discuss the relevant observables we measure in order to analyze the phase transition . 
in sec .[ eoalgo ] we discuss our eo implementation and its properties . in sec .[ numerics ] we present the results of our measurements , and we conclude with sec . [ conclusion ] .a random graph is constructed from a set of vertices by assigning an edge to of the pairs of vertices with equal probability , so that is the average vertex degree . here, we will only consider the regime of `` sparse '' random graphs where and .the goal of graph coloring is to label each vertex with a different color so as to avoid monochromatic edges .three different versions of the coloring problem are of interest .first , there is the classic problem of determining the `` chromatic number '' for a given graph , i.e. , the minimum number of colors needed to color the graph while avoiding monochromatic edges .it is very difficult to devise a heuristic for this problem . in the other two versions ,we are given a fixed number of colors to select from .the decision problem , -col , addresses the question of whether a given graph is colorable or not .finally , the optimization problem , max--col , tries to minimize the number of monochromatic edges ( or equivalently , maximize the number of non - monochromatic edges , hence its name ) . clearly , if we define the number of monochromatic edges as the `` cost '' or `` energy '' of a color assignment , determining whether the minimal cost is zero or non - zero corresponds to solving the decision problem -col , so finding the actual cost of the ground state is always at least as hard .much of the discussion regarding the complexity near phase transitions in the computer science literature is focused on the decision problem . from a physics perspective, it seems more intuitive to examine the behavior of the ground states as one passes the transition .accordingly , we will focus on the max--col problem in this paper .all these versions of coloring are np - hard , and thus computationally hard in the worst case .to determine exact answers would almost certainly require a computational time growing faster than any power of .thus , extracting results about asymptotic properties of the problems is a daunting task , calling for the use of accurate heuristic methods , as discussed in the following section .the control parameter describing our ensemble of instances is the average vertex degree of the random graphs .constructing an appropriate order parameter to classify the transition is less obvious .the analogy to spin - glass theory suggests the following reasoning . in a homogeneous medium possessing a single pure equilibrium state, the magnetization provides the conjugate field to analyze the ferromagnetic transition . for our 3-coloring problem, the disorder induced by the random graphs leads to a decomposition into many coexisting but unrelated pure states with a distribution of magnetizations . since the colors correspond to the spin orientations in the related potts model , we need in principle to determine , for each graph , the overlap between all pairs of ground state colorings .finally , this distribution has to be averaged over the ensemble . 
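for the max-3-col formulation used here, the cost of an assignment is simply the number of monochromatic edges; a short helper (python, with graphs given as adjacency lists as in the earlier sketch) makes this explicit.

```python
def cost(adj, color):
    """Number of monochromatic edges (each undirected edge counted once):
    the quantity minimised in MAX-3-COL."""
    return sum(1 for u in range(len(adj)) for v in adj[u]
               if u < v and color[u] == color[v])

triangle = [[1, 2], [0, 2], [0, 1]]        # a 3-colorable K_3
print(cost(triangle, [0, 1, 2]))           # 0: a perfect coloring
print(cost(triangle, [0, 0, 1]))           # 1: one monochromatic edge
```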
to simplify the task, one can instead extract directly the `` backbone '' , which is the set of variables that take on the same state in _ all _ possible ground state colorings of a given instance .but even determining the backbone is a formidable undertaking : it requires not only finding a lowest cost coloring but sampling a substantial number of those colorings for each graph , since the ground state entropy is extensive .another level of difficulty arises due to the invariance of the ground states under a global color permutation .thus , in the set of all ground states , each vertex can take on any color and the backbone as defined above is empty . to avoid this triviality, one may redefine the backbone in the following way . instead of considering individual vertices , consider all _pairs _ of vertices that are not connected by an edge . define the pair to be part of the _ frozen _ backbone if its vertices are of like color ( monochromatic ) in all ground state colorings , so that the presence of an edge there would necessarily incur a cost .define the pair to be part of the _ free _ backbone if its vertices are of unlike color ( non - monochromatic ) in all ground state colorings , so that the presence of an edge there would not incur any cost . since the fraction of pairs that belong to the frozen backbone measures the constrainedness of an instance , it is the relevant order parameter .we have also sampled the free backbone .as shown in sec .[ numerics ] , both seem to exhibit a first - order transition , though the jump for the frozen backbone is small . by definition, the location of the transition is determined through a ( second - order ) singularity in the cost function : the cost is asymptotically vanishing below the transition , it is continuous at the transition , and above it is always non - zero , we have therefore measured the ground state cost , averaged over many instances , for a range of mean degree values and sizes .to investigate the phase transition in 3-col , we employ the extremal optimization heuristic ( eo ) .the use of a heuristic method , while only approximate , allows us to measure observables for much larger system sizes and with better statistics then would be accessible with exact methods .we will argue below that we can obtain optimal results with sufficient probability that even systematic errors in the exploration of ground states will be small compared to the statistical sampling error .our eo implementation is as follows .assume we are given a graph with a ( however imperfect ) initial assignment of colors to its vertices .each vertex has edges to neighboring vertices , of which are `` good '' edges , i.e. , to neighbors of a different color ( not monochromatic ) .we define for each vertex a `` fitness '' ,\end{aligned}\ ] ] and determine a permutation ( not necessarily unique ) over the vertices such that at each update step , eo draws a number from a distribution with a bias toward small numbers .a vertex is selected from the ordered list in eq .( [ lambdaeq ] ) according to its `` rank '' , i.e. , .vertex is updated _ unconditionally _ , i.e. , it always receives a new color , selected at random from one of the other colors . as a consequence , vertex andall its neighbors change their fitnesses and a new ranking will have to be established .then , the update process starts over with selecting a new rank , and so on until some termination condition is reached . 
along the way, eo keeps track of the configuration with the best coloring it has visited so far , meaning the one that minimizes the number of monochromatic edges , .previous studies have found that eo obtains near - optimal solutions for a variety of hard optimization problems for a carefully selected value of . for 3-col ,initial trials have determined that best results are obtained for the system sizes at a ( fixed ) value of .this rather large value of helps explore many low - cost configurations efficiently ; if we merely wanted to determine low - cost solutions , larger values of could have been reached more efficiently at a smaller value of .it should be noted that our definition of fitness does not follow the generic choice that would give a total configuration cost of .while this formulation sounds appealing , and does produce results of the same quality , our choice above produces those same results somewhat faster ; there appears to be some advantage to treating all vertices , whose individual degrees are poisson - distributed around the mean , on an equal footing . furthermore , our implementation limits itself to partially sorting the fitnesses on a balanced heap , rather than ranking them perfectly as in eq .( [ lambdaeq ] ) . in this way, the computational cost is reduced by a factor of while performance is only minimally affected . the backbone , described in sec .[ 3col ] , is a collective property of degenerate ground states for a given graph .thus , in this study we are interested in determining not only the cost of the ground state , but also a good sampling of _ all _ possible ground state configurations . local search with eois ideally suited to probe for properties that are broadly distributed over the configuration space , since for small enough it does not get trapped in restricted regions . even after eo has found a locally minimal cost configuration, it proceeds to explore the configuration space widely to find new configurations of the same or lower cost , as long as the process is run . against these advantages, one must recognize that eo is merely a heuristic approximation to a problem of exponential complexity .thus , to safeguard the accuracy of our measurements , we devised the following adaptive procedure . for each graph , starting from random initial colorings , eo was run for update steps , using a minimum of 5 different restarts . for the lowest cost seen so far , eo keeps a buffer of up to most recently visited configurations with that cost .if it finds another configuration with the same cost , it quickly determines whether it is already in the buffer .if not , eo adds it on top of the buffer ( possibly `` forgetting '' an older configuration at the bottom of the buffer ) .thus , eo does not keep a memory of all minimal cost configurations seen so far , which for ground states can have degeneracies of even at . instead of enumerating all groundstates exhaustively , we proceed as follows .when eo finds a new , lowest cost configuration , it assumes initially that all pairs of equally colored vertices are part of the frozen backbone and all other pairs are part of the free backbone . if another configuration of the same cost is found and it is not already in the buffer , eo checks all of the pairs in it .if a pair has always been frozen ( free ) before and is so now , it remains part of the frozen ( free ) backbone . if a pair was always frozen ( free ) before and it is free ( frozen ) in this configuration , it is eliminated from both backbones . 
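the update rule just described translates almost literally into code. the sketch below is a deliberately simple python version: the fitness normalisation g_i/d_i, the value tau = 2.3 and the full re-sorting of the fitness list at every step are our own simplifications (the paper's exact fitness definition, its tau and its heap-based partial ordering are not reproduced here), but the tau-eo structure, power-law rank selection followed by an unconditional recolouring, is the one described above.

```python
import random

def extremal_optimization(adj, tau=2.3, steps=20_000, rng=None):
    """tau-EO sketch for MAX-3-COL.  Fitness lambda_i = (good edges)/(degree)
    is one plausible normalisation; tau = 2.3 is likewise illustrative."""
    rng = rng or random.Random(0)
    n = len(adj)
    color = [rng.randrange(3) for _ in range(n)]

    def fitness(v):
        if not adj[v]:
            return 1.0
        return sum(1 for u in adj[v] if color[u] != color[v]) / len(adj[v])

    def total_cost():
        # recomputed from scratch for clarity; a real implementation updates
        # the cost incrementally when a single vertex changes color
        return sum(1 for u in range(n) for v in adj[u]
                   if u < v and color[u] == color[v])

    best_cost, best_color = total_cost(), color[:]
    for _ in range(steps):
        ranked = sorted(range(n), key=fitness)            # worst fitness first
        u01 = 1.0 - rng.random()                          # uniform in (0, 1]
        k = min(int(u01 ** (-1.0 / (tau - 1.0))), n)      # P(k) ~ k^(-tau)
        v = ranked[k - 1]
        color[v] = (color[v] + rng.randrange(1, 3)) % 3   # unconditional move
        c = total_cost()
        if c < best_cost:
            best_cost, best_color = c, color[:]
            if best_cost == 0:
                break
    return best_cost, best_color
```

fed with the adjacency lists produced by the earlier random-graph sketch, this plain version typically finds zero-cost colorings quickly for mean degrees well below the threshold, though without the heap-based partial ordering its per-step cost is O(n log n) rather than O(log n).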
if a pair has already been eliminated previously , no action is taken . in this way , certain ground - state configurations may be missed or tested many times over , without affecting the backbones significantly. eventually , even if new and unrecognized configurations of the lowest cost are found , no further changes to either backbone are likely to occur .this fact motivates our adaptive stopping criterion for eo .assume the current backbone was last modified in the restart .then , for this graph eo restarts for a total of at least times , terminating only when there has been no updates to the backbone over the previous restarts .of course , every time a new , lower - cost configuration is found , the buffer and backbone arrays are reset . ultimately , this procedure leads to adaptive runtimes that depend on the peculiarities of each graph .the idea is that if the lowest cost state is found in the first start and the backbone does not change over 5 more restarts , one assumes that no further changes to it will ever be found by eo .however , if eo keeps updating the backbone through , say , the 20th restart , one had better continue for 20 more restarts to be confident of convergence .the typical number of restarts was about 10 , while for a few larger graphs , more than 50 restarts were required .a majority of our computational time is spent merely confirming that the backbones have converged , since during the final restarts nothing new is found .nevertheless , eo still saves vast amounts of computer time and memory in comparison with exact enumeration techniques .the trade - off lies in the risk of missing some lowest cost configurations , as well as in the risk of never finding the true ground state to begin with . to get an estimate of the systematic error resulting from these uncontrollable risks, we have benchmarked our eo implementation against a number of different exact results .first , we used a set of 700 explicitly 3-colorable graphs over 7 different sizes , ( 100 graphs per value of ) at , kindly provided by j. culberson , for which exact spine values were found as described in ref . . for colorable graphs such as these , the spine is identical to the backbone .our eo implementation correctly determined the 3-colorability of all but one graph , and reproduced nearly all backbones exactly , regardless of size .for all graph sizes , eo failed to locate colorable configurations on at most 5 graphs out of 100 , and in those cases overestimated either backbone fraction by less than 4% . only at eo miss the colorability of a single graph to find instead , thereby underestimating both backbones . 
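the backbone bookkeeping described in this paragraph amounts to intersecting, over the minimum-cost colorings encountered, the sets of monochromatic and non-monochromatic vertex pairs. a small python sketch (the names and data structures are ours):

```python
from itertools import combinations

def init_backbones(adj, color):
    """Seed the frozen/free pair sets from the first minimum-cost coloring:
    only pairs of vertices not joined by an edge are considered."""
    n = len(adj)
    edges = {(min(u, v), max(u, v)) for u in range(n) for v in adj[u]}
    frozen, free = set(), set()
    for u, v in combinations(range(n), 2):
        if (u, v) in edges:
            continue
        (frozen if color[u] == color[v] else free).add((u, v))
    return frozen, free

def update_backbones(frozen, free, color):
    """Drop every pair whose mono/non-monochromatic status changes in a newly
    found minimum-cost coloring; whatever survives all colorings is the backbone."""
    frozen -= {(u, v) for (u, v) in frozen if color[u] != color[v]}
    free -= {(u, v) for (u, v) in free if color[u] == color[v]}
```

in an eo run, init_backbones would be called (and the old sets discarded) whenever a new lowest cost is reached, and update_backbones every time a previously unseen configuration of the same cost passes the buffer check.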
in a different benchmark , containing colorable as well as uncolorable graphs, we generated 440 random graphs over 4 different sizes , and 11 different mean degree values ( 10 graphs per value of and ) .we found the exact minimum cost and exact fraction of pairs belonging to the backbone for these graphs , by removing edges until an exact branch - and - bound code due to m.a .trick determined 3-colorability .for example , finding that a graph had a ground state cost of involved considering all possible 2-edge removals until a remainder graph was found to be 3-colorable .we then added edges to vertex pairs in this remainder graph , checking whether the graph stayed colorable : if so , that pair was eliminated from the frozen backbone .likewise , we merged vertex pairs , checking whether the graph stayed colorable : if so , that pair was eliminated from the free backbone .finally , we would repeat the procedure on all colorable 2-edge - removed remainder graphs , potentially eliminating pairs from the frozen and free backbones each time . comparing with the exact testbed arising from this procedureshows that , for all graphs , eo found the correct ground state cost .moreover , eo overestimated the frozen backbone fraction on only 4 graphs out of 440 ( 2 at and 2 at , in both cases at ) , and by at most 0.004 .this leads to a predicted systematic error that is at least an order of magnitude smaller than the statistical error bars in the results we present later .eo s free backbone results were not of as good quality , overestimating the backbone fraction on 36 graphs out of 440 , by an average of 0.003 though in one case ( , ) by as much as 0.027 . the resulting systematic error , however ,is still small compared to the statistical error bars in our main results .it is also instructive to study the running times for eo , how they scale with increasing graph size , and how they compare with the exact algorithm we have used for benchmarking . in our eo implementation, we have measured the average number of update steps it took ( 1 ) to find the ground state cost for the first time , , and ( 2 ) to sample the backbone completely , .( note that , corresponding to in sec .[ backbone ] , is always less than half the total time spent to satisfy the stopping criterion for an eo run described above . )both and can fluctuate widely for graphs of a given and , especially when . however , since our numerical experiments involve a large number of graphs , the average times and are reasonably stable .furthermore , and show only a weak dependence on , varying by no more than a factor of 2 : increases slowly for increasing , and has a soft peak at for , leading more to a plateau than to a peak. it may also be the case that a search with eo is less influenced by the transition , as other studies have suggested , but the range around that we have studied here is too small to be conclusive on this question . ] .thus , for each we average these times over as well , leading only to a slight increase in the error bars ( on a logarithmic scale ) .we have plotted the average quantities in fig .[ 3coltiming ] on a log - log scale as a function of .it suggests that the time eo takes to find ground states increases exponentially but very weakly so .once they are found , eo manages to sample a sufficient number of them in about updates to measure the backbone accurately .2.4 in by contrast , one can not easily quantify the scaling behavior of running times for the exact branch - and - bound benchmarking method . 
as ground state cost increases ,the complexity of the method quickly becomes overwhelming , and rules out using it to measure average quantities with any statistical significance .clearly , branch - and - bound itself has exponential complexity for determining colorability . for the sizes studied here, however , the exponential growth in appears in fact sub - dominant to the complexity of evaluating the backbone for a graph with non - zero ground state costs .when or , the combinatorial effort is manageable , but at , graphs just at the transition ( ) reach and the algorithm takes weeks to test all remainder graphs . from this comparison , one can appreciate eo s speed in estimating the backbone fractions , however approximate !with the eo implementation as described above , we have sampled ground state approximations for a large number of graphs at each size . in particular we have considered , over a range of , 100000 random graphs of size , 10000 of size , 4000 of size , and 1000 of size . by averaging over the lowest energies found for these graphs , we obtain an approximation for average ground state costs as a function of and , as shown in fig . [ 3colplot ] .we have also sampled 160 instances of size , which provided enough statistics for the backbone though not for the ground state costs . with a finite size scaling ansatz , \label{scalingeq}\end{aligned}\ ] ] systematically applied , it is possible to extract precise estimates for the location of the transition and the scaling window exponent . in the scaling regime , one might assume that the cost for the fixed argument of the scaling function is independent of the size , i.e. , , indicated by the fact that for all values of the cost functions cross in virtually the same point .hence , in results we have previous reported , we obtained what appeared to be the best data collapse by fixing and choosing and , with the error bars in parentheses being estimates based on our own assessment of the data collapse . but a more careful automated fit to our data , provided to us by s.m .bhattacharjee , gives , , and with a tolerance level of ( see ref . ) . while these fits are consistent with our previous results , they are also consistent with and much closer to the presumably exact result of , and the error estimates are considerably more trustworthy .the scaling window is determined by two competing contributions : for the intermediate values of accessible in this study it is dominated by nontrivial contributions arising from the correlations amongst the variables , which yields , similar to satisfiability problems . however , for sufficiently large , wilson has shown that , due to intrinsic features of the ensemble of random graphs .the argument may be paraphrased as follows . since and vertex degrees are poisson - distributed with mean , a finite fraction of vertices in a random graph have degrees 0 , 1 , or 2 ( those not belonging to the 3-core ) and thus can not possibly cause monochromatic edges .but this finite fraction itself undergoes ( normal ) fluctuations , and these fluctuations limit the narrowing of the cost function s scaling window at large .such variables make up about 15% of the total near , so we estimate the crossover to occure at or , assuming all other constants to be unity . 
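the roughly 15% figure quoted above for the variables outside the 3-core follows from the poisson degree distribution alone; a one-line check (python, with mean degree 4.7 as in the text):

```python
import math

c = 4.7
p_low_degree = sum(math.exp(-c) * c**k / math.factorial(k) for k in range(3))
print(p_low_degree)   # ~0.15: fraction of vertices of degree 0, 1 or 2
```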
2.4 in our next main result is the estimate of the backbone near the phase transition , as described in sec .[ backbone ] .we have sampled the frozen and the free backbones separately .our results show the fraction of vertex pairs in each backbone , and are plotted in fig .[ backboneplot ] . for the free backbone ,consistent with our definition , we do not include any pairs that are already connected by an edge .although they make up only of the pairs , the inclusion of these would cause a significant finite size effect when the backbone is small , and only by omitting them does the free backbone vanish for . in principle , according to our definition one should also be sure to exempt from the frozen backbone any pairs that are connected by a monochromatic edge in all ground state configurations , but at their impact on the backbone is only .4.5 in as fig .[ backboneplot ] shows , both backbones appear to evolve toward a discontinuity for increasing .the backbone fraction comes increasingly close to vanishing below , followed by an increasingly steep jump and then a plateau that , to within statistical noise , appears stable at large .the height of the plateau at suggests that on average about 6% of all pairs are frozen and close to 20% are free , with both values rising further for increasing degree .the `` jump '' in the frozen backbone is somewhat smaller than that in the free backbone , adding a higher degree of uncertainty to that interpretation , although still well justified within the error bars .indeed , given the considerable ground state degeneracies , it would be surprising if the frozen backbone were large . 4.6 in a more detailed look at the data ( fig . [ bbprob ] )suggests that the distribution of frozen backbone fractions for individual instances is bimodal at the transition , i.e. , about half of the graphs have a backbone well over 10% while the other half have no backbone at all , leading to the average of 6% mentioned above .furthermore , there appears to be some interesting structure in the backbone discontinuity , which may be significant beyond the noise .note in fig .[ backboneplot ] that for larger , the increase of the frozen backbone stalls or even reverses right after the jump before rising further .this property coincides with the emergence of non - zero costs in the ground state colorings ( see fig . [ 3colplot ] ) .the sudden appearance of monochromatic edges seems initially to reduce the frozen backbone fraction : typically there are numerous ways of placing those few edges , often affecting the most constrained variables pairs and eliminating them from the frozen backbone .similar observations have been made by culberson . according to this argument, only the frozen backbone should exhibit such a stall ( or dip ) .indeed , fig .[ backboneplot ] indeed shows a less hindered increase in the free backbone , though the difference there may be purely due to statistical noise .we have considered the phase transition of the max-3-col problem for a large number of instances of random graphs , of sizes up to and over a range of mean degree values near the critical threshold . 
for each instance , we have determined the fraction of vertex pairs in the frozen and free backbones , using an optimization heuristic called _ extremal optimization _ ( eo ) .based on previous studies , eo is expected to yield an excellent approximation for the cost and the backbone .comparisons with a testbed of exactly - solved instances suggest that eo s systematic error is negligible compared to the statistical error . using a systematic procedure for optimizing the data collapse in finite size scaling ,we have argued that the transition occurs at , consistent with earlier results as well as with a recent replica symmetry breaking calculation yielding 4.69 .we have also studied both free and frozen backbone fractions around the critical region .a simple argument demonstrates that below the critical point the backbone fraction always vanishes for large . at and above the critical point , neither backbone appears to vanish , suggesting a first - order phase transition .this is in close resemblance to -sat for ; indeed , both are computationally hard at the threshold .even though the backbone is defined in terms of minimum - cost solutions , its behavior appears to correlate more closely with the complexity of finding a zero - cost solution ( solving the associated decision problem ) at the threshold .one possible explanation is that instances there have low cost , so finding the minimal cost is only polynomially more difficult than determining whether a zero - cost solution exists .interestingly , our 3-coloring backbone results mirror those found for the spine , an upper bound on the backbone that is defined purely with respect to zero - cost graphs .the authors of that study speculate that at the threshold , although the spine is discontinuous , the backbone itself might be _continuous_. our results contradict this speculation , instead providing support for a relation albeit restricted between backbone behavior and average - case complexity .we are greatly indebted to somen bhattacharjee for providing an automated evaluation of our finite size scaling fit .we wish to thank gabriel istrate , michelangelo grigni , and joe culberson for helpful discussions , and the los alamos national laboratory ldrd program for support .a. mehotra and m.a .trick , _ a column generation approach to graph coloring , _ technical report series , graduate school of industrial administration , carnegie mellon university , 1995 .graph coloring code available at _ http://mat.gsia.cmu.edu / color / solvers / trick.c_.
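for readers unfamiliar with the heuristic used throughout , a generic tau - eo update for 3-coloring can be sketched as follows . the local fitness definition , the value of tau , and the unconditional recoloring move follow the standard eo recipe and are assumptions for illustration ; they are not necessarily identical to the implementation benchmarked above .

```python
import random
from collections import defaultdict

def eo_3col(n, edges, steps, tau=1.4, seed=0):
    """
    Generic tau-EO sketch for 3-coloring: rank vertices by a local fitness
    (minus the number of monochromatic edges they touch), pick a rank k with
    probability proportional to k**(-tau), and recolor that vertex at random.
    Returns the lowest cost (number of monochromatic edges) seen.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [rng.randrange(3) for _ in range(n)]

    def local_violations(i):
        return sum(color[i] == color[j] for j in adj[i])

    def total_cost():
        return sum(color[u] == color[v] for u, v in edges)

    # rank-selection distribution p(k) ~ k^(-tau), k = 1 .. n
    weights = [k ** (-tau) for k in range(1, n + 1)]
    best = total_cost()
    for _ in range(steps):
        ranked = sorted(range(n), key=local_violations, reverse=True)  # worst first
        k = rng.choices(range(n), weights=weights)[0]
        v = ranked[k]
        color[v] = rng.randrange(3)          # unconditional move, as in EO
        best = min(best, total_cost())
    return best

if __name__ == "__main__":
    rng = random.Random(1)
    n, c = 64, 4.7
    m = int(c * n / 2)
    edges = set()
    while len(edges) < m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    print(eo_3col(n, list(edges), steps=20000))
```

in a study of the backbone one would additionally record every distinct coloring found at the best cost , since the backbone is defined over the set of ground states rather than over a single optimum .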
we investigate the phase transition of the 3-coloring problem on random graphs , using the extremal optimization heuristic . 3-coloring is among the hardest combinatorial optimization problems and is closely related to a 3-state anti - ferromagnetic potts model . like many other such optimization problems , it has been shown to exhibit a phase transition in its ground state behavior under variation of a system parameter : the graph s mean vertex degree . this phase transition is often associated with the instances of highest complexity . we use extremal optimization to measure the ground state cost and the `` backbone '' , an order parameter related to ground state overlap , averaged over a large number of instances near the transition for random graphs of size up to 512 . for graphs up to this size , benchmarks show that extremal optimization reaches ground states and explores a sufficient number of them to give the correct backbone value after about update steps . finite size scaling gives a critical mean degree value . furthermore , the exploration of the degenerate ground states indicates that the backbone order parameter , measuring the constrainedness of the problem , exhibits a first - order phase transition .
wireless communication has a broadcast nature , where security issues are captured by a basic wire - tap channel introduced by wyner in . in this model, a source node wishes to transmit confidential information to a destination node and wishes to keep a wire - tapper as ignorant of this information as possible .the performance measure of interest is the secrecy capacity , which is the largest reliable communication rate from the source node to the destination node with the wire - tapper obtaining no information . for the wire - tap channel where the channel from the source node to the destination and the wire - tapper is degraded ,the secrecy capacity was given in for the discrete memoryless channel and in for the gaussian channel .the general wire - tap channel without a degradedness assumption and with an additional common message for both the destination node and the wire - tapper was considered in , where the capacity - equivocation region and the secrecy capacity were given .the wire - tap channel was also considered recently for the fading and multiple antenna channels in .the secrecy capacity was addressed either for the case with a fixed fading state or from the outage probability viewpoint . in this paper , we study the ergodic secrecy capacity of the fading wire - tap channel , which is the maximum secrecy rate that can be achieved over multiple fading states . we assume the fading gain coefficients of the source - to - destination channel and the source - to - wire - tapper channel are stationary and ergodic over time .we also assume both the transmitter and the receiver know the channel state information ( csi ) .note that the csi of the source - to - wire - tapper channel at the source can be justified as follows . in wireless networks ,a node may be treated as a wire - tapper " by a source node because it is not the intended destination of particular confidential messages . in this case , the wire - tapper " is not a hostile node , and may also expect its own information from the same source node . hence it is reasonable to assume that this wire - tapper " feeds back the csi to the source node .the fading wire - tap channel can be viewed as a special case of the parallel wire - tap channel with independent subchannels in that the channel at each fading state realization corresponds to one subchannel .hence we first study a parallel wire - tap channel with independent subchannels .each subchannel is assumed to be a general broadcast channel and is not necessarily degraded , which is different from the model studied in .this channel model also differs from the model studied in in that the wire - tapper can receive outputs from all subchannels .the secrecy capacity of the parallel wire - tap channel is established .this result then specializes to the secrecy capacity of a parallel wire - tap channel with degraded subchannels , which is directly related to the fading wire - tap channel . for this model , we assume each of the subchannels satisfies the condition that the output at the wire - tapper is a degraded version of the output at the destination node , and each of the subchannels satisfies the condition that the output at the destination node is a degraded version of the output at the wire - tapper . we show that to achieve the secrecy capacity , it is optimal to keep the inputs to the subchannels null , i.e. 
, use only the subchannels , and choose the inputs to the subchannels independently .therefore , the secrecy capacity reduces to the sum of the secrecy capacities of the subchannels .we further apply our result to obtain the secrecy capacity of the fading wire - tap channel .the fading wire - tap channel we study differs from the parallel gaussian wire - tap channel studied in in that we assume the source node is subject to an average power constraint over all fading state realization instead of each subchannel ( channel corresponding to one fading state realization ) being subject to a power constraint as assumed in .since the source node knows the csi , it needs to optimize the power allocation among fading states to achieve the secrecy capacity .we obtain the optimal power allocation scheme , where the source node uses more power when the source - to - destination channel experiences a larger fading gain and the source - to - wire - tapper channel has a smaller fading gain .the secrecy capacity is not achieved by the water - filling allocation that achieves the capacity for the fading channel without the secrecy constraint . in this paper, we use } ] to indicate a group of vectors , where indicates the vector . throughout the paper ,the logarithmic function is to the base .the paper is organized as follows .we first introduce the parallel wire - tap channel with independent subchannels , and present the secrecy capacity for this channel .we next present the secrecy capacity for the fading wire - tap channel .we finally demonstrate our results by numerical examples .we consider a parallel wire - tap channel with independent subchannels ( see fig .[ fig : wiretap_para ] ) , which consists of finite input alphabets } ] and } ] ; one decoder at the destination node that maps a received sequence }^n ] and } ] and } ] is given by }^n , y_{[1,l]}^n , z_{[1,l]}^n ) \\ & { \hspace{5mm}}=p(w)p(x_{[1,l]}^n|w)\prod_{i=1}^n \;\prod_{l=1}^l \ ; p_l(y_{li},z_{li}|x_{li } ) \end{split}\ ] ] by fano s inequality ( * ? ? ?* sec . 
2.11 ), we have }^n ) \leq nr p_e+1 : = n\delta\ ] ] where if .we now bound the equivocation rate : }^n\big ) \\ & = h\big(w|z_{[1,l]}^n\big)-h(w)+h(w ) \\ & { \hspace{5mm}}-h\big(w|y_{[1,l]}^n\big)+h\big(w|y_{[1,l]}^n\big)\\ & \overset{(a)}{\leq } i(w;y_{[1,l]}^n)-i(w;z_{[1,l]}^n)+n\delta \\ & = \sum_{l=1}^l \big[i(w;y_l^n|y_{[1,l-1]}^n)-i(w;z_l^n|z_{[l+1,l]}^n)\big]+n\delta \\ & = \sum_{l=1}^l\sum_{i=1}^n \big[i(w;y_{li}|y_{[1,l-1]}^ny_l^{i-1})\\ & { \hspace{1cm}}{\hspace{5mm}}-i(w;z_{li}|z_{l[i+1]}^nz_{[l+1,l]}^n)\big]+n\delta \\ & \overset{(b)}{= } \sum_{l=1}^l\sum_{i=1}^n \big[i(wz_{l[i+1]}^nz_{[l+1,l]}^n;y_{li}|y_{[1,l-1]}^ny_l^{i-1})\\ & { \hspace{1cm}}{\hspace{5mm}}-i(z_{l[i+1]}^nz_{[l+1,l]}^n;y_{li}|wy_{[1,l-1]}^ny_l^{i-1})\\ & { \hspace{1cm}}{\hspace{5mm}}-i(wy_{[1,l-1]}^ny_l^{i-1};z_{li}|z_{l[i+1]}^nz_{[l+1,l]}^n ) \\ & { \hspace{1cm}}{\hspace{5mm}}+i(y_{[1,l-1]}^ny_l^{i-1};z_{li}|wz_{l[i+1]}^nz_{[l+1,l]}^n)\big]+n\delta \\ & \overset{(c)}{= } \sum_{l=1}^l\sum_{i=1}^n \big[i(wz_{l[i+1]}^nz_{[l+1,l]}^n;y_{li}|y_{[1,l-1]}^ny_l^{i-1})\\ & { \hspace{1cm}}{\hspace{5mm}}-i(wy_{[1,l-1]}^ny_l^{i-1};z_{li}|z_{l[i+1]}^nz_{[l+1,l]}^n)\big]+n\delta \\ & = \sum_{l=1}^l\sum_{i=1}^n \big[i(z_{l[i+1]}^nz_{[l+1,l]}^n;y_{li}|y_{[1,l-1]}^ny_l^{i-1})\\ & { \hspace{1cm}}{\hspace{5mm}}+i(w;y_{li}|y_{[1,l-1]}^ny_l^{i-1}z_{l[i+1]}^nz_{[l+1,l]}^n)\\ & { \hspace{1cm}}{\hspace{5mm}}-i(y_{[1,l-1]}^ny_l^{i-1};z_{li}|z_{l[i+1]}^nz_{[l+1,l]}^n ) \\ & { \hspace{1cm}}{\hspace{5mm}}-i(w;z_{li}|y_{[1,l-1]}^ny_l^{i-1}z_{l[i+1]}^nz_{[l+1,l]}^n)\big]+n\delta \\ & \overset{(d)}{= } \sum_{l=1}^l\sum_{i=1}^n \big[i(w;y_{li}|y_{[1,l-1]}^ny_l^{i-1}z_{l[i+1]}^nz_{[l+1,l]}^n)\\ & { \hspace{1cm}}{\hspace{5mm}}-i(w;z_{li}|y_{[1,l-1]}^ny_l^{i-1}z_{l[i+1]}^nz_{[l+1,l]}^n)\big]+n\delta \\ & \overset{(e)}{= } \sum_{l=1}^l\sum_{i=1}^n\big [ i(u_{li};y_{li}|q_{li})-i(u_{li};z_{li}|q_{li } ) \big]+n\delta\\ \end{split}\ ] ] where follows from fano s inequality , follows from the chain rule , and follow from lemma 7 in , and follows from the following definition : }^ny_l^{i-1}z_{l[i+1]}^nz_{[l+1,l]}^n ) , { \hspace{5mm}}u_{li}=(wq_{li}).\ ] ] we note that satisfy the following markov chain condition : we introduce a random variable that is independent of all other random variables , and is uniformly distributed over .define , , , , and .note that satisfy the following markov chain condition : using the above definitions , becomes + \delta\ ] ] therefore , an upper bound on is + \delta\ ] ] where the maximum is over the probability distributions },u_{[1,l]},x_{[1,l]},y_{[1,l]},z_{[1,l]}) ] .we assume the channel state information ( realization of ) is known at both the transmitter and the receiver instantaneously .as we mentioned in the introduction , the source node gets the csi of the channel to the wire - tapper when the wire - tapper is not an actual hostile node and is only not the intended destination node for a particular confidential message .we first introduce the following lemma that follows from ( * ? ? ?* lemma 1 ) .this lemma is useful to obtain the secrecy capacity of the fading wire - tap channel .[ lemma : marg ] the secrecy capacity of the wire - tap channel depends only on the marginal transition distributions of the source - to - destination channel and of the source - to - wire - tapper channel . the following generalization of the result in follows directly from lemma [ lemma : marg ] . the secrecy capacity of the gaussian wire - tap channel given in ( * ? ? 
?* theorem 1 ) holds for the case with general correlation between the noise variables at the destination node and the wire - tapper . based on lemma [ lemma : marg ] and corollary [ cor : dgcapa ], we obtain the secrecy capacity of the fading wire - tap channel .[ th : fdcapa ] the secrecy capacity of the fading wire - tap channel is \leq p } \ ; { \mathrm{e}}_a \bigg [ & \log\left(1+\frac{p({\underline{h}})|h_1|^2}{\mu^2}\right ) \\ & -\log\left(1+\frac{p({\underline{h}})|h_2|^2}{\nu^2}\right)\bigg ] .\end{split}\ ] ] where . the random vector has the same distribution as the marginal distribution of the process at one time instant .the optimal power allocation that achieves the secrecy capacity in is given by where is chosen to satisfy the power constraint =p ] .one can check that the following function of \ ] ] is concave .the optimal given in can be derived by the standard kuhn - tucker condition ( see e.g. , ) .we first consider the rayleigh fading wire - tap channel , where and are zero mean proper complex gaussian random variables with variances 1 .hence and are exponentially distributed with parameter . in fig .[ fig : powerdist ] ( a ) , we plot the optimal power allocation as a function of .it can be seen from the graph that most of the source power is allocated to the channel states with small .this behavior is shown more clearly in fig .[ fig : powerdist ] ( b ) , which plots as a function of for different values of , and in fig .[ fig : powerdist ] ( c ) , which plots as a function of for different values of .the source node allocates more power to the channel states with larger to forward more information to the destination node , and allocates less power for the channel states with larger to prevent the wire - tapper to obtain information .it can also be seen from fig .[ fig : powerdist ] ( b ) and fig .[ fig : powerdist ] ( c ) that the source node transmits only when the source - to - destination channel is better than the source - to - wire - tapper channel . fig .[ fig : crcs_sig1 ] plots the secrecy capacity achieved by the optimal power allocation , and compares it with the secrecy rate achieved by a uniform power allocation , i.e. , allocating the same power for all channel states .it can be seen that the uniform power allocation does not provide performance close to the secrecy capacity for the snrs of interest .this is in contrast to the rayleigh fading channel without the secrecy constraint , where the uniform power allocation can be close to optimum even for moderate snrs .this also demonstrates that the exact channel state information is important to achieve higher secrecy rate .we next consider a fading wire - tap channel , where and are uniformly distributed over finite mass points .it can be seen from fig .[ fig : crcs_unif ] that the secrecy rate achieved by the uniform power allocation approaches the secrecy capacity as snr increases .hence the uniform power allocation can be close to optimum for certain distributions of the fading gain coefficients .we have established the secrecy capacity for the parallel wire - tap channel with independent subchannels .we have further applied this result to obtain the secrecy capacity for the fading wire - tap channel , where the channel state information is assumed to be known at both the transmitter and the receiver . 
in particular , we have derived the optimal power allocation scheme to achieve the secrecy capacity .our numerical results demonstrate that the channel state information at the transmitter is useful to improve the secrecy capacity .j. k and k. marton , `` comparison of two noisy channels , '' in _ topics in information theory_.1em plus 0.5em minus 0.4emkeszthely ( hungary ) : colloquia math .j bolyai , amsterdam : north - holland publ . ,1977 , 1975 , pp . 411423 .
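as an illustration of the power allocation and the rayleigh - fading comparison discussed in the numerical examples above , the following sketch solves the per - state kuhn - tucker condition in closed form and estimates the ergodic secrecy rate by monte carlo . the closed - form expression below is our own solution of the stated optimality condition and the bisection on the multiplier is one standard way to meet the average power constraint ; it should be read as a sketch rather than as the paper's exact formula .

```python
import numpy as np

def optimal_power(a, b, lam):
    """
    Per-state power for gains a = |h1|^2/mu^2, b = |h2|^2/nu^2 and multiplier lam,
    from the Kuhn-Tucker condition of max_p log(1+a*p) - log(1+b*p) - lam*p, p >= 0.
    Power is positive only when a - b > lam, i.e. when the source-to-destination
    channel is sufficiently better than the source-to-wire-tapper channel.
    """
    p = np.zeros_like(a)
    active = a - b > lam
    aa, bb = a[active], b[active]
    disc = (aa - bb) ** 2 + 4.0 * aa * bb * (aa - bb) / lam
    p[active] = np.maximum(0.0, (np.sqrt(disc) - (aa + bb)) / (2.0 * aa * bb))
    return p

def ergodic_secrecy_rate(a, b, p):
    return np.mean(np.maximum(0.0, np.log2(1 + a * p) - np.log2(1 + b * p)))

def solve_lambda(a, b, pbar, lo=1e-6, hi=1e3, iters=60):
    """Bisection on lam so that the average power meets E[p] = pbar."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(optimal_power(a, b, mid)) > pbar:
            lo = mid      # average power too high -> need a larger multiplier
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200000
    a = rng.exponential(1.0, n)    # |h1|^2 exponential with unit mean (Rayleigh fading)
    b = rng.exponential(1.0, n)    # |h2|^2 likewise; unit noise variances assumed
    for pbar in (1.0, 10.0):
        lam = solve_lambda(a, b, pbar)
        p_opt = optimal_power(a, b, lam)
        print(pbar,
              ergodic_secrecy_rate(a, b, p_opt),             # optimal allocation
              ergodic_secrecy_rate(a, b, np.full(n, pbar)))  # uniform allocation
    # note: using log2 instead of ln in the rate only rescales lam, which is refit
    # to the power constraint, so the resulting allocation is unchanged.
```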
the fading wire - tap channel is investigated , where the source - to - destination channel and the source - to - wire - tapper channel are corrupted by multiplicative fading gain coefficients in addition to additive gaussian noise terms . the channel state information is assumed to be known at both the transmitter and the receiver . the parallel wire - tap channel with independent subchannels is first studied , which serves as an information - theoretic model for the fading wire - tap channel . each subchannel is assumed to be a general broadcast channel and is not necessarily degraded . the secrecy capacity of the parallel wire - tap channel is established , which is the maximum rate at which the destination node can decode the source information with small probability of error and the wire - tapper does not obtain any information . this result is then specialized to give the secrecy capacity of the fading wire - tap channel , which is achieved with the source node dynamically changing the power allocation according to the channel state realization . an optimal source power allocation is obtained to achieve the secrecy capacity . this power allocation is different from the water - filling allocation that achieves the capacity of fading channels without the secrecy constraint .
the problem of computing integrals with respect to gibbs measures occurs in chemistry , physics , statistics , engineering and elsewhere . in many situations ,there are no viable alternatives to methods based on monte carlo . given an energy potential ,there are standard methods to construct a markov process whose unique invariant distribution is the associated gibbs measure , and an approximation is given by the occupation or empirical measure of the process over some finite time interval . however , a weakness of these methods is that they may be slow to converge .this happens when the dynamics of the process do not allow all important parts of the state space to communicate easily with each other . in large scale applicationsthis occurs frequently , since the potential function often has complex structures involving multiple deep local minima .an interesting method called parallel tempering has been designed to overcome some of the difficulties associated with rare transitions . in this technique ,simulations are conducted in parallel at a series of temperatures .this method does not require detailed knowledge of or intricate constructions related to the energy surface and is a standard method for simulating complex systems . to illustrate the main idea, we first discuss the diffusion case with two temperatures .discrete time models will be considered later in the paper , and there are obvious analogues for discrete state systems .suppose that the probability measure of interest is , where is the temperature and is the potential function .the normalization constant of this distribution is typically unknown . under suitable conditions on , is the unique invariant distribution of the solution to the stochastic differential equation where is a -dimensional standard wiener process .a straightforward monte carlo approximation to is the empirical measure over a large time interval of length , namely , where is the dirac measure at and denotes a burn - in period .when has multiple deep local minima and the temperature is small , the diffusion can be trapped within these deep local minima for a long time before moving out to other parts of the state space . this is the main cause for the inefficiency .now consider a second , larger temperature .if and are independent wiener processes , then of course the empirical measure of the pair gives an approximation to the gibbs measure with density the idea of parallel tempering is to allow swaps between the components and . 
in other words , at random times the component is moved to the current location of the component , and vice versa .swapping is done according to a state dependent intensity , and so the resulting process is actually a markov jump diffusion .the form of the jump intensity can be selected so that the invariant distribution remains the same , and thus the empirical measure of can still be used to approximate .specifically , the jump intensity or swapping intensity is of the metropolis form , where and is a constant .note that the calculation of does not require the knowledge of the normalization constant .a straightforward calculation shows that is the stationary density for the resulting process for all values of [ see ( [ eqn : echeverria ] ) ] .we refer to as the swap rate , and note that as increases , the swaps become more frequent .the intuition behind parallel tempering is that the higher temperature component , being driven by a wiener process with greater volatility , will move more easily between the different parts of the state space .this ease - of - movement is transferred to the lower temperature component via the swapping mechanism so that it is less likely to be trapped in the deep local minima of the energy potential .this , in turn , is expected to lead to more rapid convergence of the empirical measure to the invariant distribution of the low temperature component .there is an obvious extension to more than two temperatures .although this procedure is remarkably simple and needs little detailed information for implementation , relatively little is known regarding theoretical properties .a number of papers discuss the efficiency and optimal design of parallel tempering .however , most of these discussions are based on heuristics and empirical evidence . in general ,some care is required to construct schemes that are effective .for example , it can happen that for a given energy potential function and swapping rate , the probability for swapping may be so low that it does not significantly improve performance .there are two aims to the current paper .the first is to introduce a performance criteria for monte carlo schemes of this general kind that differs in some interesting ways from traditional criteria , such as the magnitude of the sub - dominant eigenvalue of a related operator .more precisely , we use the theory of large deviations to define a rate of convergence for the empirical measure .the key observation here is that this rate , and hence the performance of parallel tempering , is monotonically increasing with respect to the swap rate .traditional wisdom in the application of parallel tempering has been that one should not attempt to swap too frequently .while an obvious reason is that the computational cost for swapping attempts might become a burden , it was also argued that frequent swapping would result in poor sampling . for a discussion on prior approaches to the question of how to set the swapping rate and an argument in favor of frequent swapping , see .the use of this large deviation criteria and the resulting monotonicity with respect to directly suggest the second aim , which is to study parallel tempering in the limit as .note that the computational cost due just to the swapping will increase without bound , even on bounded time intervals , when .however , we will construct an alternative scheme , which uses different process dynamics and a weighted empirical measure . 
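a minimal discrete - time sketch of the two - temperature scheme just described is given below , with euler - maruyama time stepping , a one - dimensional double - well potential , and a per - step swap - attempt probability proportional to the rate a ; all of these modeling choices are illustrative assumptions rather than the scheme used for the numerical results later in the paper . the swap is accepted with the metropolis - form probability min(1 , pi(x_2 , x_1)/pi(x_1 , x_2 ) ) , so the product gibbs measure is preserved for every a .

```python
import numpy as np

def v(x):
    """Illustrative double-well potential V(x) = (x^2 - 1)^2."""
    return (x * x - 1.0) ** 2

def grad_v(x):
    return 4.0 * x * (x * x - 1.0)

def parallel_tempering(tau1=0.15, tau2=1.0, dt=1e-3, swap_rate=1.0,
                       t_end=200.0, seed=0):
    """
    Two-temperature parallel tempering: Euler-Maruyama steps for both components
    plus Metropolis-form swap attempts with intensity swap_rate.  Returns the time
    average of x1, which should converge to 0 for the symmetric double well;
    trapping in the starting well shows up as a value near +1.
    """
    rng = np.random.default_rng(seed)
    x1, x2 = 1.0, -1.0
    n_steps = int(t_end / dt)
    p_attempt = swap_rate * dt        # probability of a swap attempt per step
    acc = 0.0
    for _ in range(n_steps):
        x1 += -grad_v(x1) * dt + np.sqrt(2.0 * tau1 * dt) * rng.normal()
        x2 += -grad_v(x2) * dt + np.sqrt(2.0 * tau2 * dt) * rng.normal()
        if rng.random() < p_attempt:
            # log of pi(x2, x1) / pi(x1, x2) for the product Gibbs density
            log_ratio = (v(x1) - v(x2)) * (1.0 / tau1 - 1.0 / tau2)
            if np.log(rng.random()) < log_ratio:
                x1, x2 = x2, x1
        acc += x1
    return acc / n_steps

if __name__ == "__main__":
    for a in (0.0, 1.0, 10.0):
        print(a, parallel_tempering(swap_rate=a))
```

increasing the attempt rate is expected to speed the convergence of the low - temperature average , which is the effect quantified by the large deviation rate below ; the alternative scheme mentioned above avoids these swap moves altogether .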
because this process no longer swaps particle positions, it and the weighted empirical measure have a well - defined limit as which we call infinite swapping . in effect, the swapping is achieved through the proper choice of weights and state dependent diffusion coefficients .this is done for the case of both continuous and discrete time processes with multiple temperatures .an outline of the paper is as follows . in the next section the swapping model in continuous timeis introduced and the rate of convergence , as measured by a large deviations rate function , is defined . the alternative scheme which is equivalent to swapping but which has a well defined limit is introduced , and its limit as is identified .the following section considers the analogous limit model for more than two temperatures , and discusses certain practical difficulties associated with direct implementation when the number of temperatures is not small .the continuous time model is used for illustration because both the large deviation rate and the weak limit of the appropriately redefined swapping model take a simpler form than those of discrete time models . however , the discrete time model is what is actually implemented in practice . to bridge the gap between continuous time diffusion models and discrete time models , in section 4 we discuss the idea of infinite swapping for continuous time markov jump processes and prove that the key properties demonstrated for diffusion models hold here as well .we also state a uniform ( in the swapping parameter ) large deviation principle .the discrete time form actually used in numerical implementation is presented in section 5 .section 6 returns to the issue of implementation when the number of temperatures is not small .in particular , we resolve the difficulty of direct implementation of the infinite swapping models via approximation by what we call partial infinite swapping models .section 7 gives numerical examples , and an appendix gives the proof of the uniform large deviation principle .although the implementation of parallel tempering uses a discrete time model , the motivation for the infinite swapping limit is best illustrated in the setting where the state process is a continuous time diffusion process .it is in this case that the large deviation rate function , as well as the construction of a process that is distributionally equivalent to the infinite swapping limit , is simplest . in order to minimize the notational overhead, we discuss in detail the two temperature case .the extension to models with multiple temperatures is obvious and will be stated in the next section .let denote the markov jump diffusion process of parallel tempering with swap rate .that is , between swaps ( or jumps ) , the process follows the diffusion dynamics ( [ eq : two_temp_diff ] ) .jumps occur according to the state dependent intensity function . at a jump time , the particles swap locations , that is , . hence for a smooth functions the infinitesimal generator of the process is given by+ \tau_{2}\text{tr}\left [ \nabla_{x_{2}x_{2}}^{2}f(x_{1},x_{2})\right ] \\ & \quad\quad+ag(x_{1},x_{2})\left [ f(x_{2},x_{1})-f(x_{1},x_{2})\right ] , \end{aligned}\ ] ] where and denote the gradient and the hessian matrix with respect to , respectively , and tr denotes trace . 
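for reference , the infinitesimal generator just described can be written out in one display . the drift terms below are those implied by langevin dynamics with invariant density proportional to exp(-v(x)/tau ) , so this is a reconstruction consistent with the surrounding text rather than a verbatim quotation :

```latex
\mathcal{L}^{a}f(x_{1},x_{2})
  = -\nabla V(x_{1})\cdot\nabla_{x_{1}}f(x_{1},x_{2})
    -\nabla V(x_{2})\cdot\nabla_{x_{2}}f(x_{1},x_{2})
    +\tau_{1}\,\mathrm{tr}\!\left[\nabla_{x_{1}x_{1}}^{2}f(x_{1},x_{2})\right]
    +\tau_{2}\,\mathrm{tr}\!\left[\nabla_{x_{2}x_{2}}^{2}f(x_{1},x_{2})\right]
    +a\,g(x_{1},x_{2})\left[f(x_{2},x_{1})-f(x_{1},x_{2})\right].
```

the first four terms generate the two independent diffusions and the last term generates the swaps at the state dependent rate a g(x_1 , x_2 ) .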
throughout the paperwe also assume the growth condition this condition not only ensures the existence and uniqueness of the invariant distribution , but also enforces the exponential tightness needed for the large deviation principle for the empirical measures .recall the definition of in ( [ eq : two_temp_dens ] ) and let be the corresponding gibbs probability distribution , that is , straightforward calculations show that for any smooth function which vanishes at infinity since the condition ( [ eq : growth_rate ] ) implies that as , by the echeverria s theorem ( * ? ? ?* theorem 4.9.17 ) , is the unique invariant probability distribution of the process .it follows from the previous discussion and the ergodic theorem that , for a fixed burn - in time , with probability one as . for notational simplicitywe assume without loss of generality that from now on .a basic question of interest is how rapid is this convergence , and how does it depend on the swap rate ?in particular , what is the rate of convergence of the lower temperature marginal ?we note that standard measures one might use for the rate of convergence , such as the second eigenvalue of the associated operator , are not necessarily appropriate here .they only provide indirect information on the convergence properties of the empirical measure , which is the quantity of interest in the monte carlo approximation .such measures properly characterize the convergence rate of the transition probability as .however , they neglect the time averaging effect of the empirical measure , an effect that is not present with the transition probability .in fact , it is easy to construct examples such as nearly periodic markov chains for which the second eigenvalue suggests a slow convergence when in fact the empirical measure converges quickly .another commonly used criterion for algorithm performance is the notion of asymptotic variance . for a given functional , one can establish a central limit theorem which asserts that as \rightarrow\sigma^{2}.\ ] ] the magnitude of is used to measure the statistical efficiency of the algorithm . the asymptotic variance is closely related to the spectral properties of the underlying probability transition kernel .however , as with the second eigenvalue the usefulness of this criterion for evaluating performance of the empirical measure is not clear . in this paper, we use the large deviation rate function to characterize the rate of convergence of a sequence of random probability measures . to be more precise , let be a polish space , that is , a complete and separable metric space. denote by the space of all probability measures on .we equip with the topology of weak convergence , though one can often use the stronger -topology . 
under the weak topology , is metrizable and itself a polish space .note that the empirical measure is a random probability measure , that is , a random variable taking values in the space .a sequence of random probability measures is said to satisfy a large deviation principle ( ldp ) with rate function ] .then can be expressed as where \nu(dx_{1}dx_{2})\\ j_{1}(\nu ) & = \int_{\mathbb{r}^{d}\times\mathbb{r}^{d}}g(x_{1},x_{2})\ell\left ( \sqrt{\frac{\theta(x_{2},x_{1})}{\theta(x_{1},x_{2})}}\right ) \nu(dx_{1}dx_{2}),\end{aligned}\ ] ] and where for is familiar from the large deviation theory for jump processes .the key observation is that the rate function is affine in the swapping rate , with the rate function in the case of no swapping .furthermore , with equality if and only if for -a.e .this form of the rate function , and in particular its monotonicity in , motivates the study of the _ infinite swapping limit _ as . [remark : limit - rate ] _ the limit of the rate function _ _ _ satisfies__{cl}j_{0}(\nu ) & \theta(x_{1},x_{2})=\theta(x_{2},x_{1})\text { } \nu\text{-a.s.,}\\ \infty & \text{otherwise.}\end{array } \right.\ ] ] _ hence for _ _ _ to be finite it is necessary that _ _ _ _ put exactly the same relative weight as _ _ _ on the points _ _ _ and _ _ . note that if a process could be constructed with __ _ as its rate function , then with the large deviation rate as our criteria such a process improves on parallel tempering with finite swapping rate in exactly those situations where parallel tempering improves on the process with no swapping at all ._ from a practical perspective , it may appear that there are limitations on how much benefit one obtains by letting . when implemented in discrete time, the overall jump intensity corresponds to the generation of roughly independent random variables that are uniform on ] such that as .we omit the proof here , since in theorem [ thm : ldp_mp ] the analogous result will be proved in the setting of continuous time jump markov processes .in practice parallel tempering uses swaps between more than two temperatures .a key reason is that if the gap between the temperatures is too large then the probability of a successful swap under the [ discrete time version of the ] metropolis rule ( [ eq : swap_rate ] ) is far too low for the exchange of information to be effective .a natural generalization is to introduce , to the degree that computational feasibility is maintained , a ladder of higher temperatures , and then attempt pairwise swaps between particles .there are a variety of schemes used to select which pair to attempt the swap , including deterministic and randomized rules for selecting only adjacent temperatures or arbitrary pair of temperatures .however , if one were to replace any of these particle swapped processes with its equivalent temperature swapped analogue and consider the infinite swapping limit , one would get the same system of process dynamics and weighted empirical measures which we now describe .suppose that besides the lowest temperature ( in many cases the temperature of principal interest ) , we introduce the collection of higher temperatures let be a generic point in the state space of the process and define a product gibbs distribution with the density the limit of the temperature swapped processes with temperatures takes the form to define these weights it is convenient to introduce some new notation .let be the collection of all bijective mappings from to itself . 
has elements , each of which corresponds to a unique permutation of the set , and forms a group with the group action defined by composition. let denote the inverse of .furthermore , for each and every , define . at the level of the prelimit particle swapped process , we interpret the permutation to correspond to event that the particles at location are swapped to the new location . under the temperature swapped process, this corresponds to the event that particles initially assigned temperatures in the order have now been assigned the temperatures .the identification of the infinite swapping limit of the temperature swapped processes is very similar to that of the two temperature model in the previous section . by exploiting the time - scale separation, one can assume that in a small time interval the only motion is due to temperature swapping and the motion due to diffusion is negligible .hence the fraction of time that the permutation is in effect should again be proportional to the relative weight assigned by the invariant distribution to , that is , thus if then the fraction of time that the permutation is in effect is .note that for any , going back to the definition of the weights , , it is clear that they represent the limit proportion of time that the -th particle is assigned the temperature and hence will satisfy likewise the replacement for the empirical measure , accounting as it does for mapping the temperature swapped process back to the particle swapped process , is given by where {\sigma}=(\bar{y}_{\sigma(1)}^{\infty}(t),\bar{y}_{\sigma(2)}^{\infty}(t),\ldots,\bar{y}_{\sigma(k)}^{\infty}(t)) ] and given a current position , the weighted empirical measure has contributions from all locations of the form , balanced exactly according to their relative contributions from the invariant density .the dynamics of are again symmetric , and the density of the invariant distribution at point is [ remark : partial ] _ the infinite swapping process described above allows the most effective communication between all temperatures , and is the best in the sense that it leads to the largest large deviation rate function and hence the fastest rate of convergence .however , computation of the coefficients becomes very demanding for even moderate values of __ , since one needs to evaluate _ _ _ terms from all possible permutations . in section[ sec : discrete ] we discuss a more tractable and easily implementable family of schemes which are essentially approximations to the infinite swapping model presented in the current section and have very similar performance .we call the current model the full infinite swapping model since it uses the whole permutation group_ _ _ _ , as opposed to the partial infinite swapping model in section [ sec : discrete ] where only subgroups of_ _ _ _ are used .the continuous time diffusion model is a convenient vehicle to convey the main idea of infinite swapping . in practice , however , algorithms are implemented in discrete time . in this sectionwe discuss continuous time pure jump markov processes and the associated infinite swapping limit .the purpose of this intermediate step is to serve as a bridge between the diffusion and discrete time markov chain models .these two types of processes have some subtle differences regarding the infinite swapping limit which is best illustrated through the continuous time jump markov model . 
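before turning to the jump - process setting , the following sketch illustrates the permutation weights defined in the preceding paragraphs : each of the k! temperature assignments receives a weight proportional to the invariant density evaluated at the correspondingly permuted state , and the weighted empirical measure places those weights on the permuted copies of the current point . the convention for sigma versus its inverse is treated loosely here ( only internal consistency matters for the illustration ) , and the factorial cost made explicit below is exactly the practical obstruction discussed in remark [ remark : partial ] .

```python
import itertools
import numpy as np

def infinite_swapping_weights(x, temps, potential):
    """
    For positions x = (x_1, ..., x_K) and temperatures temps = (tau_1 <= ... <= tau_K),
    return a dict mapping each permutation sigma to a weight proportional to
        exp( - sum_i potential(x[sigma[i]]) / temps[i] ),
    normalized over all K! permutations.  The cost grows like K!.
    """
    k = len(x)
    log_w = {}
    for sigma in itertools.permutations(range(k)):
        log_w[sigma] = -sum(potential(x[sigma[i]]) / temps[i] for i in range(k))
    m = max(log_w.values())
    unnorm = {s: np.exp(lw - m) for s, lw in log_w.items()}   # stabilized exponentials
    z = sum(unnorm.values())
    return {s: u / z for s, u in unnorm.items()}

def weighted_point_masses(x, weights):
    """Contribution of the current state to the weighted empirical measure:
    each permuted copy of x receives the corresponding weight."""
    return [(tuple(x[i] for i in sigma), w) for sigma, w in weights.items()]

if __name__ == "__main__":
    v = lambda y: (y * y - 1.0) ** 2
    x = (0.9, -0.2, 1.4)
    temps = (0.1, 0.5, 2.0)
    w = infinite_swapping_weights(x, temps, v)
    masses = weighted_point_masses(x, w)
    print(len(masses), "weighted copies of the current state")
    # fraction of "lowest-temperature time" assigned to each particle position
    for j in range(len(x)):
        rho_j = sum(wt for sigma, wt in w.items() if sigma[0] == j)
        print(j, x[j], round(rho_j, 4))
```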
in this sectionwe discuss the two - temperature model , and omit the completely analogous multiple - temperature counterpart .we will not refer to temperatures and to distinguish dynamics .instead , let and be two probability transition kernels on given .one can think of as the dynamics under temperature for .we assume that for each the stationary distribution associated with the transition kernel admits the density in order to be consistent with the diffusion models , and define we assume that the kernels are feller and have a density that is uniformly bounded with respect to lebesgue measure .these conditions would always be satisfied in practice .finally , we assume that the detailed balance or reversibility condition holds , that is , in the absence of swapping [ i.e. , swapping rate , the dynamics of the system are as follows . let denote a continuous time process taking values in .the probability transition kernel associated with the embedded markov chain , denoted by , is without loss of generality , we assume that the jump times occur according to a poisson process with rate one . in other words ,let be a sequence of independent and identically distributed ( iid ) exponential random variables with rate one that are independent of .then the infinitesimal generator of is such that for a given smooth function , \alpha_{1}(x_{1},dy_{1})\alpha_{2}(x_{2},dy_{2}).\ ] ] owing to the detailed balance condition ( [ eqn : detailed_balance ] ) , the operator is self - adjoint . using arguments similar to but simpler than those used to prove the uniform ldp in theorem [ thm : ldp_mp ] , the large deviations rate function associated with the occupation measure can be explicitly identified : for any probability measure on with and , and is extended to all of by lower semicontinuous regularization .denote by the state process of the finite swapping model with swapping rate , and let be the embedded markov chain .the probability transition kernel for is , \end{aligned}\ ] ] where is defined as in ( [ eq : swap_rate ] ) .furthermore , let be a sequence of iid exponential random variables with rate and define in other words , the jumps occur according to a poisson process with rate .note that there are two types of jumps . at any jump time , with probability it will be a jump according to the underlying probability transition kernels and . with probability it will be an attempted swap which will succeed with the probability determined by .as grows , the swap attempts become more and more frequent .however , the time between two consecutive jumps of the first type will have the same distribution as where is a geometric random variable with parameter .it is easy to argue that for any the distribution of is exponential with rate one. this observation will be useful when we derive the infinite swapping limit .the infinite swapping limit for as can be similarly obtained by considering the corresponding temperature swapped processes . since the times between jumps determined by and are always exponential with rate one , the infinite swapping limit is a pure jump markov process where jumps occur according to a poisson process with rate one . 
in other words , where is the embedded markov chain and a sequence of iid exponential random variables with rate one .furthermore , the probability transition kernel for is where the weight function is defined as in theorem [ thm : inf_swp_lim ] .it is not difficult to argue that the stationary distribution for is \,dy_{1}dy_{2},\ ] ] and the weighted occupation measure dt \label{eqn : etatinf}\ ] ] converges to as .it is obvious that the dynamics of the infinite swapping limit are symmetric and instantaneously equilibrate the contribution from and according to the invariant measure , owing to the weight function .we have the following uniform large deviation principle result , which justifies the superiority of infinite swapping model .its proof is deferred to the appendix .it should be noted that rate identification is not covered by the existing literature , even in the case of a fixed swapping rate , due to the pure jump nature of the process .[ thm : ldp_mp ] the occupation measure satisfies a large deviation principle with rate function . more generally , define the finite swapping model as in subsection [ sec : finite_swap ] .consider any sequence ] be its unique invariant probability distribution . if is absolutely continuous with respect to with (\boldsymbol{x}) ] .assume is such that {1}=[\nu]_{2} ] , is such that {1}=\kappa\bar{\mu} ] .then {1}\otimes\varphi)-a\int\log r(\boldsymbol{y})[\nu]_{1}(d\boldsymbol{y})+a\log a - a+1.\nonumber\end{aligned}\ ] ] let be the set where . then {1}(d\boldsymbol{y})<\infty ] , and we also have {1}(d\boldsymbol{y})=1 ] .it follows that {1}(c)=0 ] , which implies {1}\otimes \varphi)=\infty ] and since is the invariant distribution of it follows that {1}\vert\bar{\mu})<\infty ] , it follows that {1}(d\boldsymbol{x})>-\infty ] we conclude that \nu(d\boldsymbol{x},d\boldsymbol{y})>-\infty .\label{eqn : re_lb}\ ] ] by relative entropy duality ( ( * ? ? 
?* proposition 1.4.2 ) ) } \bar{\mu}(d\boldsymbol{x})\varphi ( \boldsymbol{x},d\boldsymbol{y})\\ & \leq r(\nu\vert\bar{\mu}\otimes\varphi)-\frac{1}{2}\int\left [ \log \kappa(\boldsymbol{x})+\log\kappa(\boldsymbol{y})\right ] \nu(d\boldsymbol{x},d\boldsymbol{y})\end{aligned}\ ] ] is valid as long as the right hand side is not of the form , which is true by ( [ eqn : re_lb ] ) .the chain rule then gives } \bar{\mu}(d\boldsymbol{x})\varphi ( \boldsymbol{x},d\boldsymbol{y})\\ & \quad\leq r(\nu\vert\bar{\mu}\otimes\varphi)-\int\log\kappa(\boldsymbol{x})[\nu]_{1}(d\boldsymbol{x})\\ & \quad = r(\nu\vert\lbrack\nu]_{1}\otimes\varphi)+r([\nu]_{1}\vert\bar{\mu } ) -\int\log\kappa(\boldsymbol{x})[\nu]_{1}(d\boldsymbol{x})\\ & \quad = r(\nu\vert\lbrack\nu]_{1}\otimes\varphi)-\int\log r(\boldsymbol{x})[\nu]_{1}(d\boldsymbol{x}),\end{aligned}\ ] ] and thus{1}\otimes\varphi)+\int\log r(\boldsymbol{x})[\nu ] _ { 1}(d\boldsymbol{x})}.\ ] ] then ( [ eqn : two_rates ] ) follows from the fact that if and then by taking {1}\otimes\varphi)-\int\log r(\boldsymbol{x})[\nu ] _ { 1}(d\boldsymbol{x}) ] , and left \rightarrow\mathbb{r} ] is uniformly bounded .using that inf , is follows that the weak limit of {2}=\delta_{\infty} ] {1}\otimes\varphi) ] and gives _ { 1}=e\bar{\psi}_{\infty}.\ ] ] observe that _ { 1}(d\boldsymbol{x}).\ ] ] because of the superlinearity of we have uniform integrability , and thus passing to the limit gives {1}(d\boldsymbol{x})[\xi^{\infty}]_{2|1}(dz|\boldsymbol{x})=a[\kappa^{\infty}]_{1}(d\boldsymbol{x}).\ ] ] hence with the definition {2|1}(dz|\boldsymbol{x}) ] .using fatou s lemma , jensen s inequality and the weak convergence , we get the following lower bound on the corresponding relative entropies ( the first sum in ( [ lb])):{2|1}(dz|\boldsymbol{x})\right ) [ \xi^{\infty}]_{1}(d\boldsymbol{x})\\ & = \int\ell\left ( b(\boldsymbol{x})\right ) [ \xi^{\infty}]_{1}(d\boldsymbol{x})\\ & = a\int h(1/b(\boldsymbol{x}))[\kappa^{\infty}]_{1}(d\boldsymbol{x}).\end{aligned}\ ] ] let . then we can write the combined lower bound on the relative entropies as {1,2}\vert\lbrack\kappa^{\infty}]_{1}\otimes \varphi)+ae\int h(1/b(\boldsymbol{x}))[\kappa^{\infty}]_{1}(d\boldsymbol{x})\nonumber\\ & = aer([\kappa^{\infty}]_{1,2}\vert\lbrack\kappa^{\infty}]_{1}\otimes \varphi)-ae\int\log r(\boldsymbol{y})[\kappa^{\infty}]_{1}(d\boldsymbol{y})+a\log a - a+1 .\label{eqn : final_lb}\ ] ] we next consider and .using the limiting properties of the , we have dt\\ & \rightarrow\int\left [ \rho(\boldsymbol{y})\delta_{\boldsymbol{y}}(c)+(1-\rho(\boldsymbol{y}))\delta_{\boldsymbol{y}^{r}}(c)\right ] \left [ \xi^{\infty}\right ] _ { 1}\\ & = \bar{\eta}_{\infty}(c)\end{aligned}\ ] ] and _ { 1}=\bar{\psi } _ { \infty}(c).\ ] ] note that this implies the relation .\label{eqn : eta_psi_relation}\ ] ] finally we consider the weighted empirical measure . 
by ( [ eqn : final_lb ] ) , lemma [ lem : rate_relation ] and jensen s inequality we have the lower bound ] .then and using that and symmetry \left ( \rho ( \boldsymbol{x})\alpha(\boldsymbol{x},d\boldsymbol{y})+\rho(\boldsymbol{x}^{r})\alpha(\boldsymbol{x}^{r},d\boldsymbol{y}^{r})\right ) \\ & = \frac{1}{2}\int\left ( \sqrt{a(\boldsymbol{x})a(\boldsymbol{y})}+\sqrt{a(\boldsymbol{x}^{r})a(\boldsymbol{y}^{r})}\right ) \mu(d\boldsymbol{x})\left ( \rho(\boldsymbol{x})\alpha(\boldsymbol{x},d\boldsymbol{y})+\rho(\boldsymbol{x}^{r})\alpha(\boldsymbol{x}^{r},d\boldsymbol{y}^{r})\right ) \\ & = \frac{1}{2}\int\left ( \sqrt{a(\boldsymbol{x})a(\boldsymbol{y})}+\sqrt{a(\boldsymbol{x}^{r})a(\boldsymbol{y}^{r})}\right ) \mu(d\boldsymbol{x})\alpha(\boldsymbol{x},d\boldsymbol{y}).\end{aligned}\ ] ] we now use to obtain .[ we note for later use that given that is absolutely continuous with respect to lebesgue measure and such that , can be shown for a that maps to by taking /2 ] , & \geq\left [ f(e\bar{\eta}_{\infty})+i^{\infty}(e\bar{\eta}_{\infty})\right ] \\ & \geq\inf_{\nu\in\mathcal{p}(s)}\left [ f(\nu)+i^{\infty}(\nu)\right ] , \end{aligned}\ ] ] and an argument by contradiction shows that the bound is uniform in .thus \right\ } \\ & \geq\liminf_{t\rightarrow\infty}-\frac{1}{t}\log\left\ { tc\cdot \bigvee_{r^{a}=\left\lceil \frac{t}{c}\right\rceil } ^{\left\lfloor tc\right\rfloor } e\left [ \exp\{-tf(\eta_{t}^{a})-\infty1_{\left\ { r^{a}/t\right\ } ^{c}}(r^{a}/t)\}\right ] \right\ } \\ & \geq\inf_{\nu\in\mathcal{p}(s)}\left [ f(\nu)+i^{\infty}(\nu)\right ] .\end{aligned}\ ] ] we now partition ] and (\boldsymbol{x}) ] , so that .let (\boldsymbol{x})\geq\tau/2. ] , and using the bound we can factor {1}(d\boldsymbol{x})\bar{\varphi}(\boldsymbol{x},d\boldsymbol{y}) ] , and not the desired distribution .let {1}](\boldsymbol{x}) ] is reshaped into .choose so that equality holds in ( [ eqn : two_rates ] ) , and set .then is chosen to be .consistent with the analysis of the upper bound , we do not perturb the distribution of the other variables , so that and .we then construct the controlled processes using these measures in exact analogy with the construction of the original process . with this choice and lemma [ lem : rep ]we obtain the bound \\ & \leq e\rule{0pt}{24pt}\left [ \rule{0pt}{24pt}f(\bar{\eta}_{t}^{a})+\frac{1}{t}\sum_{i=0}^{\bar{r}^{a}-1}\left [ r\left ( \bar{\alpha } ( \boldsymbol{\bar{u}}^{a}(i),\cdot|\bar{m}_{0}^{a}(i))\left\vert \alpha(\boldsymbol{\bar{u}}^{a}(i),\cdot|\bar{m}_{0}^{a}(i))\right .\right ) \right .\\ & \left .+ r\left ( \beta^{ab(\boldsymbol{\bar{u}}^{a}(i))}\left\vert \beta^{a}\right .\right ) \right ] \rule{0pt}{24pt}\right ] \rule{0pt}{24pt}.\end{aligned}\ ] ] now apply the ergodic theorem , and use that w.p.1 {1}(d\boldsymbol{x})$ ] , and also that asymptotically the conditional distribution of is given by . 
then right hand side of the last display converges to we have the limit{1}(d\boldsymbol{x})}\int\left [ \sum_{z=1}^{2}r\left ( \bar{\alpha}(\boldsymbol{x},d\boldsymbol{y}|z)\left\vert \alpha(\boldsymbol{x},d\boldsymbol{y}|z)\right .\right ) \rho_{z}(\boldsymbol{x})\right ] r(\boldsymbol{x})[\eta ] _ { 1}(d\boldsymbol{x})\\ & \quad+\frac{1}{\int b(\boldsymbol{x})[\eta]_{1}(d\boldsymbol{x})}\int\left ( -\log b(\boldsymbol{x})+b(\boldsymbol{x})-1\right ) b(\boldsymbol{x})[\eta]_{1}(d\boldsymbol{x})\\ & = f(\nu^{\tau})+a\int r\left ( \bar{\varphi}(\boldsymbol{x},d\boldsymbol{y})\left\vert \varphi(\boldsymbol{x},d\boldsymbol{y})\right .\right ) \varsigma(d\boldsymbol{x})-a\int\log r(\boldsymbol{x})\varsigma ( d\boldsymbol{x})+a\log a - a+1\\ & = f(\nu^{\tau})+k(\varsigma)\\ & = f(\nu^{\tau})+i^{\infty}(\nu^{\tau}).\end{aligned}\ ] ] this completes the proof of ( [ eqn : laplace_ub ] ) and also the proof of theorem [ thm : ldp_mp ] .markov chain monte carlo maximum likelihood . in _ computing science and statistics : proceedings of the 23rd symposium on the interface _ ,pages 156163 , new york , 1991 .american statistical association .
parallel tempering , also known as replica exchange sampling , is an important method for simulating complex systems . in this algorithm simulations are conducted in parallel at a series of temperatures , and the key feature of the algorithm is a swap mechanism that exchanges configurations between the parallel simulations at a given rate . the mechanism is designed to allow the low temperature system of interest to escape from deep local energy minima where it might otherwise be trapped , via those swaps with the higher temperature components . in this paper we introduce a performance criteria for such schemes based on large deviation theory , and argue that the rate of convergence is a monotone increasing function of the swap rate . this motivates the study of the limit process as the swap rate goes to infinity . we construct a scheme which is equivalent to this limit in a distributional sense , but which involves no swapping at all . instead , the effect of the swapping is captured by a collection of weights that influence both the dynamics and the empirical measure . while theoretically optimal , this limit is not computationally feasible when the number of temperatures is large , and so variations that are easy to implement and nearly optimal are also developed . markov processes , pure jump , large deviations , relative entropy , ergodic theory , martingale , random measure 60j25 , 60j75 , 60f10 , 28d20 , 60a10 , 60g42 , 60g57
the smithsonian / nasa astrophysics data system ( ads ) provides access to the astronomy and physics literature . as of september 2006it provides a search system for almost 4.9 million records , covering most of the astronomical literature ( including planetary sciences and solar physics ) and a large part of the physics literature .the ads has been described in detail in a series of articles in astronomy and astrophysics supplements . since 1994 ,the astrophysics data system ( ads ) has scanned the scholarly literature in astronomy .as of september 2006 , we have scanned over 3.3 million pages .these articles are available free of charge world - wide . in order to make this resource even more accessible ,we have used optical character recognition ( ocr ) software to obtain the text of all the scanned pages in ascii form .this allows us to index the full text of all articles and make it searchable .this search system covers the astronomical literature that was published only on paper . in order to search the literature published in electronic form, we developed a system that sends queries to the search systems of several publishers .the results of these queries are then combined with the results of the ads internal queries to seamlessly cover the majority of the literature .this article describes some of the features of the search system for the full text of the astronomical literature .we have so far scanned about 3.3 million pages from 43 journals , 15 conference series and 67 individual conferences , as well as a significant number of historical observatory publications .the scanned pages as of september 2006 use about 600 gb of disk space , the ocrd text uses 72 gb . the ocrd text is so - called `` dirty ocr '' because it has not been checked manually and it contains significant numbers of errors .this means that this text can not for instance be used to extract numerical data from tables , it would be inaccurate .however , for searching the text for specific words , this `` dirty ocr '' is good enough .significant words are usually used more than once , so even if the ocr software made a mistake in recognizing a word once , it will still show up correctly in other places of the same article .indexing of the ocrd text proved to be challenging .the number of unique words from this text is large .one reason for the large number of words is the fact that mistakes during the ocr process create new misspelled words . to reduce this problem, we remove words that have spurious characters in them that are ocr errors . but even after removing such words , as well as other unusable words like numbers , there are 14 million unique words in the index .the files produced during indexing are large , the largest being about 3.7 gb , close to the limit of 32 bit addressing .there is still some room for growth , but eventually we will have to move to 64 bit addressing for the full text search system in the ads .there are two search forms available .the basic search form allows you to enter the search term(s ) and select which search systems to query .the search terms are combined with and , meaning that all search words must be present on a page in order to be selected .the system supports phrase searching when multiple words are enclosed in double quotes . by default ,synonym replacement is enabled .this means that the system not only searches for a specified word but also for all other words that have the same meaning .synonym replacement can be turned off for individual words be pre - pending a =. 
this will search for the exact spelling of the word , which can be useful for words that have synonyms that are very common and would produce many matches . for instance `` galaxy '' is a synonym for `` extragalactic '' . a search for `` =extragalactic '' will remove `` galaxy '' from the matches . the advanced search form allows in addition the selection of a publication date range and a journal . it also allows the selection of several sort options . one important sort option is `` oldest first '' . this allows you to find the first occurrence of a word or phrase in the literature ( see [ ex ] example usage ) . the search returns a list of articles that contain the search terms . under each article it lists each page individually that contains the search terms . it includes a partial sentence around the search terms , with the search term highlighted in red . for pages that are not in articles ( cover pages , various other material , and pages from issues where we do not have the pagination information ) , the pages are listed individually . the article information links back to the regular ads abstract service , and the page information links directly to the scanned page . in order to include the more recent literature that was published in electronic form , the user can select to include one or more external search systems in the query . the external search systems are queried in parallel . as results are returned from the external systems , they are displayed to the user . once all results are available , a final combination of all the results is compiled and displayed . the search fan - out is still experimental . it is not yet very stable since none of the external systems provide a dedicated interface for such external queries . it was implemented by simulating regular user queries to these systems . this makes our fan - out system vulnerable to changes in the external search systems . if an api ( application programming interface ) becomes available for any of the external systems , we will implement it to build a more stable system . we currently query the systems listed in table [ ss ] .

table [ ss ] . search system and journals searched :
  google scholar : monthly notices of the royal astronomical society ; annual review in astronomy and astrophysics ; annual review of earth and planetary sciences ; applied optics ; journal of the optical society of america
  university of chicago press : astronomical journal ; astrophysical journal ; astrophysical journal letters ; astrophysical journal supplement ; publications of the astronomical society of the pacific
  edp sciences : astronomy and astrophysics
  nature : nature ; nature physics ; nature physical science
  national academy of science : proceedings of the national academy of science
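a minimal sketch of the parallel fan - out and incremental display described above is given below . since the article notes that no public api exists and that the fan - out simulates regular user queries , the endpoint urls , query parameters , and result handling here are placeholders rather than documented interfaces .

```python
import concurrent.futures
import urllib.parse
import urllib.request

# hypothetical endpoints standing in for the external systems of table [ss]
EXTERNAL_SYSTEMS = {
    "publisher_a": "https://example.org/search?q={query}",
    "publisher_b": "https://example.net/find?terms={query}",
}

def query_one(url_template, query, timeout=20):
    url = url_template.format(query=urllib.parse.quote_plus(query))
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()            # raw result page, parsed per system elsewhere
    except Exception as exc:              # a failed backend must not block the others
        return exc

def fan_out(query, local_search):
    """Run the local full text search and all external queries in parallel,
    displaying partial results as they arrive and combining them at the end."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(query_one, u, query): name
                   for name, u in EXTERNAL_SYSTEMS.items()}
        futures[pool.submit(local_search, query)] = "ads_fulltext"
        for fut in concurrent.futures.as_completed(futures):
            name = futures[fut]
            results[name] = fut.result()
            print("received:", name)      # partial display as results return
    return results

if __name__ == "__main__":
    print(sorted(fan_out('"critical mass"', lambda q: ("local hit list for", q))))
```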
since there are so many more words in the full text , there are usually many more matches .it is therefore generally advisable to use more unique words , more search terms , and/or phrases .for instance if you are trying to find out when the concept of a critical mass was first described , searching for the words `` critical mass '' without the double quotes would not produce anything useful , but a search for the phrase `` critical mass '' with double quotes from the advanced search form , with `` oldest first '' selected , quickly finds an article in pasp from 1919 that attributes this phrase to professor eddington .another interesting question is to find out when the name pluto was first suggested for a planet .if you enter : planet pluto in the search field and select `` oldest first '' under the sort options .one of the first matches will be of an article in `` the observatory '' from 1898 that suggests using pluto as the name for the recently discovered planet dq .incidentally , the name had to wait for another 30 years before it was actually used for a planet. this capability can be very useful for astronomy historians .the ads provides a search capability for the full text of a large part of the astronomical literature .this capability complements the regular abstract search system .it allows the in - depth analysis of the older literature and especially the historical observatory publications , a part of the astronomical literature that has not been accessible in any search system until now .accomazzi , a. , eichhorn , g. , kurtz , m. j. , grant , c. s. , & murray , s. s. 2000 , , 143 , 85 eichhorn , g. , kurtz , m. j. , accomazzi , a. , grant , c. s. , & murray , s. s. 2000 , , 143 , 61 grant , c. s. , accomazzi , a. , eichhorn , g. , kurtz , m. j. , & murray , s. s. 2000 , , 143 , 111 kurtz , m. j. , eichhorn , g. , accomazzi , a. , grant , c. s. , murray , s. s. , & watson , j. m. 2000 , , 143 , 41
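returning to the indexing issues described in the body of this article ( spurious ocr characters and numbers inflating the word list ) , the kind of token filtering involved might look as follows . the character classes and length cutoffs used by the ads indexer are not specified above , so the choices here are assumptions for illustration only .

```python
import re

WORD_RE = re.compile(r"^[a-z][a-z\-']{1,39}$")   # assumed shape of an indexable word

def indexable_tokens(ocr_text):
    """
    Filter 'dirty OCR' text down to tokens worth indexing: lower-case the text,
    drop pure numbers and tokens containing spurious characters, and drop very
    short or very long strings.  The thresholds are illustrative only.
    """
    tokens = re.split(r"\s+", ocr_text.lower())
    kept = []
    for tok in tokens:
        tok = tok.strip(".,;:()[]{}'\"")
        if not tok or tok.isdigit():
            continue
        if WORD_RE.match(tok):
            kept.append(tok)
    return kept

if __name__ == "__main__":
    sample = "The cr1tical mass of 3.2 Msun was d1scussed by Eddington in 1919 ..."
    print(indexable_tokens(sample))
```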
the smithsonian / nasa astrophysics data system ( ads ) provides a search system for the astronomy and physics scholarly literature . all major and many smaller astronomy journals that were published on paper have been scanned back to volume 1 and are available through the ads free of charge . all scanned pages have been converted to text and can be searched through the ads full text search system . in addition , searches can be fanned out to several external search systems to include the literature published in electronic form . results from the different search systems are combined into one results list . the ads full text search system is available at : http://adsabs.harvard.edu/fulltext_service.html
the classical capillary instability , the breakup of a cylindrical liquid thread into a series of droplets , is perhaps one of the most ubiquitous fluid instabilities and appears in a host of daily phenomena from glass - wine tearing , and faucet dripping to ink - jet printing .the study of capillary instability has a long history . in 1849 ,plateau attributed the mechanism to surface tension : the breakup process reduces the surface energy .lord rayleigh pioneered the application of linear stability analysis to quantitatively characterize the growth rate at the onset of instability , and found that a small disturbance is magnified exponentially with time .subsequently , tomotika investigated the effect of viscosity of the surrounding fluid , showing that it acts as a deterrent to slow down the instability growth rate .many additional phenomena have been investigated , such as the cascade structure in a drop falling from a faucet , steady capillary jets of sub - micrometer diameters , and double - cone neck shapes in nanojets .capillary instability also offers a means of controlling and synthesizing diverse morphological configurations .examples include : a long chain of nanospheres generated from tens - of - nanometer diameter wires at the melt state ; polymer nanorods formed by annealing polymer nanotubes above the glass transition point ; and nanoparticle chains encapsulated in nanotubes generated by reduction of nanowires at a sufficiently high temperature .moreover , instabilities of fluid jets have numerous chemical and biological applications .( an entirely different instability mechanism has attracted recent interest in elastic or visco - elastic media , in which thin sheets under tension form wrinkles driven by elastic instabilities . )fiber drawing of cylindrical shells and other microstructured geometries clearly opens up rich new areas for fluid instabilities and other phenomena , and it is necessary to understand these phenomena in order to determine what structures are attainable in drawn fibers . although it is a striking example , the observed azimuthal breakup process appears to be a complicated matter because surface tension does not produce azimuthal instability in cylinders or cylindrical shells , the azimuthal breakup must be driven by the fiber draw - down process and/or by additional physics such as visco - elastic effects or thermal gradients . 
simulatingthe entire draw - down process directly is very challenging , however , because the lengthscales vary by orders of magnitude from the preform ( ) to the drawn layers ( sub- ) .another potentially important breakup process , one that is amenable to study even in the simplified case of a cylindrical - shell geometry , is axial instability .not only does the possibility of axial breakup impose some limits on the practical materials for fiber drawing , but understanding such breakup is arguably a prerequisite to understanding the azimuthal breakup process , for two reasons .first , the azimuthal breakup process produces cylindrical filaments , and it is important to understand why these filaments do not exhibit further breakup into droplets ( or under what circumstances this should occur ) .second , it is possible that the draw - down process or other effects might couple fluctuations in the axial and azimuthal directions , so understanding the timescales of the axial breakup process is necessary as a first step in evaluating whether it plays any physical role in driving other instabilities .therefore , as a first step towards understanding the various instability phenomena in drawn microstructured ( cylindrical shell ) fibers , we investigate the impact of a single mechanism : cwhere there is not as yet a linear - theory analysis , between two limits that can be understood by dimensional analysis .this paper is organized as follows .we provide more background on microstructured fibers in section [ sec : feature - size - in ] , and review the governing equations and dimensionless groups pertaining to fiber thermal - drawing processing in section [ sec : governing - equations ] .section [ sec :- simulation - algorithm ] describes the numerical finite - element approach to solving the navier stokes equations , and section v presents our simulation results for cylindrical shell .most optical fibers are mainly made of a single material , silica glass .recent work , however , has generalized optical fiber manufacturing to include microstructured fibers that combine multiple distinct materials including metals , semiconductors , and insulators , which expand fiber - device functionalities while retaining the simplicity of the thermal - drawing fabrication approach .for example , a periodic cylindrical - shell multilayer structure has been incorporated into a fiber to guide light in a hollow core with significantly reduced loss for the fabrication process has four main steps to create this geometry .( i ) a glass film is thermally evaporated onto a polymer substrate .( ii ) the glass / polymer bi - layer film is tightly wrapped around a polymer core .( iii ) additional layers of protective polymer cladding are then rolled around the structure .( iv ) the resulting centimeter - diameter preform is fused into a single solid structure by heating under vacuum .the solid preform is then heated into a viscous state and stretched into an extended millimeter - diameter fiber by the application of axial tension , as shown in fig.[fig : fiberdrawing ] .optical - fiber thermal drawing .preform is heated at elevated temperature to viscous fluid , and stretched into extended fibers by applied tension .this preform is specially designed with a thin cylindrical shell in polymer matrix . 
]sem micrographs of cylindrical shells in fiber .( a ) photograph of fiber .( b ) sem of fiber cross - section .magnified view of multilayer structures reveals the thickness of micrometer ( c ) and tens of nanometers ( d ) , respectively . bright and dark color for glass and polymer in sem , respectively . ]have been successfully achieved in fibers by this method .during thermal drawing , the temperature is set above the softening points of all the materials , which consequently are in the viscous fluid state to enable polymer and glass codrawing . to describe this fluid flow , we consider the incompressible navier stokes ( ns ) equations : =-\nabla p+\nabla\cdot\left[\eta\frac{\nabla\vec{u}+\left(\nabla\vec{u}\right)^{t}}{2}\right]-\gamma\kappa\vec{n}\delta,\\ \nabla\cdot\vec{u}=0,\end{cases}\label{eq : ns}\ ] ] where is velocity , is pressure , is materials density , is viscosity , is interfacial tension between glass and polymer , is the delta function ( at the polymer and glass interface and otherwise ), is curvature of interface , and is a unit vector at interface . to identify the operating regime of the fiber drawing process , we consider the relevant dimensionless numbers .the reynolds number ( ) , froude number ( ) , and capillary number( ) are : , these dimensionless numbers in a typical fiber draw are , and . and the length of the neck - down region is , the ratio is much less than , and thus the complicated profile of neck - down cone is simplified into a cylindrical shape for the purpose of easier analysis .to develop a quantitative understanding of capillary instability in a cylindrical - shell geometry , direct numerical simulation is performed using the finite element method . in order to isolate the effect of radial fluctuations , we impose cylindrical symmetry, so the numerical simulation simplifies into a problem in the plane ( where the operations are replaced by their cylindrical forms ) .our simulation algorithm is briefly presented in fig .[ fig : simulation - algorithm ] ( a ) .the geometry of a cylindrical shell is defined by two interfaces and in fig .[ fig : simulation - algorithm ] ( b ) . the flow field ( )is calculated from navier stokes equations , as seen in fig .[ fig : simulation - algorithm ] ( c ) .consequently , this new flow field generates interface motion , resulting in an updated interface and an updated flow field . by these numerical iterations, the interface gradually evolves with time .a level - set is coupled with the ns equations to track the interface ( see appendix 1 ) , where the interface is located at the contour and the evolution is given by via : the local curvature ( ) at an interface is given in terms of by : for numerical stability reasons , we add an artificial diffusion term proportional to a small parameter in eq .[ eq : levelset ] ( see appendix 4 ) ; the convergence as is discussed in the next section .simulation algorithm .( a ) schematic of algorithm .( b ) interfaces _ _ and of cylindrical shell are defined by a level set function .( c ) flow field is determined by the ns equations ( color scale for pressure and arrows for fluid velocity ) . 
] [ [ section ] ] numerical convergence with respect to various approximations in the simulation .( a ) numerical convergence shown by instability amplitude at time t=300 seocnds vs number of mesh elements ; ( b ) established errors ( corresponding to higher resolution ) versus resolution ( solid line = expected asymptotic rate ) ; ( c ) the artificial diffusion term for numerical stability , and ( d ) the numerical errors represented by deviations from volume conservation . ] the time integration is performed by a fifth - order backward differentiation formula , characterized by specified relative and absolute error tolerances ( and ) with acceptable accuracy ( ) . fig . [ fig : convergence](a ) shows numerical convergence of the instabiltiy amplitude at time t=300 seocnds as a fucntion of the number of triangular mesh elements . from last four data points in ( a ), the convergence of with respect to the mesh resolution , compared with a very fine mesh of mesh elements , is presented in fig .[ fig : convergence](b ) . because of the pde s discontinuous coefficient , we expect 1st - order convergence : that is error 1/(mesh resolution ) , corresponding to slope ( solid line ) in fig . [fig : convergence](b ) : the data points appear to be approaching this slope , but we are not able to measure errors at high enough resolution to verify the convergence rate conclusively .however , we found mesh elements , corresponding to a typical element diameter , yield good accuracy ( error ) .the physical results are recovered in the limit , and so we must establish the convergence of the results as is reduced and identify a small enough to yield accurate results while still maintaining stability . ( a ) snapshot of the flow field and interfaces during instability evolution . color scale for pressure , arrows for fluid velocity .( b ) the instability amplitude grows exponentially with time at interfaces and . ][ fig : evolution](a ) ( i)(v ) presents snapshots of the ( i ) initially , the pressure of the inner fluid is higher than that of the outer fluid due to laplace pressure ( ) originating from azimuthal curvature of the cylindrical geometry at interfaces and .( ii)(iv ) the interfacial perturbations generate an axial pressure gradient , and hence a fluid flow occurs that moves from a smaller - radius to a larger - radius region for the inner fluid .gradually the amplitude of the perturbation is amplified .( v ) thesmaller - radius and _ expanded _ larger - radius regions of inner fluid further enhance the axial pressure gradient , resulting in a larger amplitude of the perturbation . as a result , the small perturbation is exponentially amplified by the axial pressure gradient .[ fig : evolution](b ) shows the instability amplitude at interfaces and growing with time exponentially on a semilog scale in the plot .[ [ section-1 ] ] growth of perturbation amplitude as a function of time . at small amplitudes below _ , the perturbation grows exponentially as expected from linear theory . at large amplitude above , significant deviations from linear theory occur .scale invariance , with various values of r superimposed , is expected in the stokes regime with low re number . ]time - dependent perturbation amplitude curves for various cladding viscosities ( ) , but with a fixed shell viscosity ( ) . 
]we now investigate the dependence of instability timescale on cladding viscosity ( ) with a fixed shell viscosity ( ) , in order to help us to identify suitable cladding materials for fiber fabrication .as viscosity slows down fluid motion , the low - viscosity cladding has a faster instability growth rate and a shorter time scale , while the high - viscosity cladding has a slower instability growth rate and a longer instability time scale .[ fig : viscositycontrast ] shows the time - dependent perturbation amplitude curves for various viscosity contrast by changing the cladding viscosity ( ) .instability time scale for the each given viscosity contrast is obtained by exponentially fitting the curves in fig .[ fig : viscositycontrast ] . instability time scale ( ) as a function of viscosity contrast is presented in fig .[ fig : viscositycladding ] the existing linear theory has only been solved in case of equal viscosity , and predicts that the instability time scale is proportional to the viscosity .we obtain a more general picture of the instability time scale for unequal viscosity by considering two limits . in the limit of , an instability time scale ( ) for unequal viscosity with a fixed shell viscosity ( ) . in the limit of , determined by and approaches to a constant . in the opposite limit of , be determined by and is linearly proportional to . ] [ [ section-2 ] ] instability time scale ( ) for different values of the radius ( ) and the viscosity ( ) is displayed in fig .[ fig : stabilitymap ] .ectional geometry in the calculation is shown in the inset : interface _ _ _ _ is located at radius , the cylindrical - shell thickness is , and interface _ _ is at radius .the interfacial tension in the calculations was set to , which was the measured interfacial tension .two cases are considered : one is equal viscosity , and the other is unequal viscosity . in the case of , the instability time scale is calculated exactly from stone and brenner s linear theory , where the fastest growth factor was found by searching numerically within a wide range of wavelengths for a certain value of ( eq . [ eq : growthrate ] in appendix 2 ) .[ fig : stabilitymap ] plots this time scale versus radius for corresponding to as , compared to the dwelling time . in the other case of ( in the regime as discussed in section [ sub : unequal - viscosities ] ) , the instability time scale can be roughly estimated from dimensional analysis . although dimensionless analysis does not give the constant factor , for specificity ,we choose the constant coefficient from the tomotika model , },\label{eq : unequtimescale}\ ] ] where the fastest growth factor ] . converting to for a position - dependent axial flow velocity during thermal drawing , we therefore obtain a total exponential growth factor : where $ ] is axial position in the neck - down region with length . corresponds to breakup , while corresponds to stability . in order to provide a conservative estimation of filament stability , the capillary instability timeis calculated from the fastest growth factor at each axial location ( this is a very conservative estimate ) , and the polymer viscosity is set to be the minimum value ( at the highest temperature ) during thermal drawing . 
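the numerical search for the fastest-growing mode used above can be sketched as follows. the dispersion relation in this sketch is a qualitative placeholder (unstable only above a critical wavelength), not the actual stone brenner or tomotika formula, and the parameter values are purely illustrative.

```python
# Sketch of finding the fastest-growing wavelength by scanning a wide range of
# wavelengths, and converting the maximum growth factor into a time scale
# tau = eta*R / (gamma * max growth factor).
# The dispersion relation below is a qualitative placeholder, not the
# Stone-Brenner/Tomotika result; all parameter values are illustrative.
import numpy as np

def placeholder_growth_factor(kR):
    """Dimensionless growth factor vs. dimensionless wavenumber kR (illustrative only)."""
    return np.maximum(0.0, 1.0 - kR**2) * kR**2 / (1.0 + kR**2)

def fastest_mode(R, eta, gamma, n=100000):
    """Scan dimensionless wavenumbers kR; return (fastest wavelength, time scale)."""
    kR = np.linspace(1e-4, 2.0, n)
    growth = placeholder_growth_factor(kR)
    i = np.argmax(growth)
    sigma_max = gamma / (eta * R) * growth[i]          # fastest growth rate, 1/s
    return 2.0 * np.pi * R / kR[i], 1.0 / sigma_max    # wavelength, time scale

lam, tau = fastest_mode(R=1e-6, eta=1e5, gamma=0.1)    # assumed illustrative values
print("fastest wavelength ~ %.2e m, time scale ~ %.1f s" % (lam, tau))
```

the same scan, repeated at each axial position with the local radius and viscosity, is what enters the conservative growth-factor estimate discussed next.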
the capillary instability time scale is calculated based on the tomotika model as follows , \right\ } } .\label{eq : tomotikacal}\ ] ] the complex shape of neck - down profile is fitted from experiment can be approximately described by following formula , ^{p}-1,\quad p=2.\label{eq : neckdownprofile}\ ] ] due to the incompressibility of the viscous fluid , the velocity of flow scales inversely with area : where is the preform velocity .again by incompressibility , the filament radius should scale as the fiber radius : the temperature distribution during thermal drawing , fit from experiment , is found to be approximately parabolic , in calculations , parameters for the typical fiber drawing are cm, , cm , , , , nm , pa .[ fig : calculated - relevant - parameters ] ( b)(d ) presents the corresponding position dependent variables including radius , velocity , temperature and viscosity .finally , we obtain this satisfies , but only barely if this were an accurate estimate of the growth factor , instability might still be observed . however , the assumptions we made above were so conservative that the true growth factor must be much less than this , indicating the instability should not be observable during the dwelling time of fiber drawing .so , the observed filaments are consistent with the tomotika model , although of course we can not yet exclude the possibility that there are also additional effects ( _ e.g. , _ elasticity ) that further enhance stability .relevant parameters in the neck - down region during thermal drawing .( a ) photograph of neck - down region from preform to fiber , ( b)(e ) for the calculated radius , velocity , temperature and viscosity . ] in experiments , we observed that thin films preferentially break up along the azimuthal direction rather than the axial direction .the discussion of the previous section [ sub : discussion - of - continuous ] suggests a simple geometrical explanation for such a preference , regardless of the details of the breakup mechanism .the key point is that any instability will have some characteristic wavelength of maximum growth rate for small perturbations , and this must be proportional to the characteristic feature size of the system , in this case the film thickness . 
as the fiber is drawn , however ,the thickness and hence decreases .now , we consider what happens to an unstable perturbation that begins to grow at some wavelength when the thickness is .if this is a perturbation along the _ axial _ direction , then the fiber - draw process will _ stretch _ this perturbation to a _ longer _ wavelength , that will no longer correspond to the maximum - growth ( which is shrinking ) , and hence the growth will be damped .that is , the axial stretching competes with the layer shrinking , and will tend to suppress _ any _ axial breakup process .in contrast , if is an _ azimuthal _perturbation , the draw - down process will _ shrink _ along with the fiber cross - section at exactly the same rate that and shrink .therefore , azimuthal instabilities are _ not _ suppressed by the draw process .this simple geometrical argument immediately predicts that the first observed instabilities will be azimuthal ( although axial instabilities may still occur if the draw is sufficiently slow ) .in this paper , motivated by recent development in microstructured optical fibers , we have explored capillary instability due to radial fluctuations in a new geometry of concentric cylindrical shell by 2d numerical simulation , and applied its theoretical guidance to feature size and materials selections in the microstructured fibers during thermal drawing processes .our results suggest several directions for future work .first , it would be desirable to extend the analytical theory of capillary instability in shells , which is currently available for equal viscosity only , to the more general case of unequal viscosities we have developed a very general theory in an appearing paper .second , we plan to extend our computation simulations to include 3d azimuthal fluctuations together with radial fluctuations ; as argued in section [ sub : azimuthal - preference ] , we anticipate a general geometrical preference for azimuthal breakup over axial breakup once the draw process is included .third , there are many additional possible experiments that would be interesting to explore different aspects of these phenomena in more detail , such as employing different geometries ( _ e.g. _ , non - cylindrical ) , temperature - time profiles , or materials ( _ e.g. _ , sn pei or se pe ) .finally , by drawing more slowly so that axial breakup occurs , we expect that experiments should be able to obtain more diverse structures ( _ e.g. 
_ axial breakup into rings or complete breakup into droplets ) that we hope to observe in the future .this work was supported by the center for materials science and engineering at mit through the mrsec program of the national science foundation under award dmr-0819762 , and by the u.s .army through the institute for soldier nanotechnologies under contract w911nf-07-d-0004 with the u.s .army research office .in the simulation , the level set function is defined by a smoothed step function reinitialized at each time step , where and arethe radius of interfaces i and ii of the coaxial cylinder , and is the half thickness of the discretized interface .the level set , 0 , and corresponds to the region of concentric cylindrical shell , of surrounding fluid , and interface , respectively .contour tracks the interface and .a smoothed delta function is defined to project surface tension at interface , a smoothed step function is introduced to create a smooth transition of the level - set function from to across the interface , ^{-1}-\left[1+e^{(r - r_{2})/\lambda}\right]^{-1}.\label{eq : stepfunction}\ ] ] the boundary condition for the level set equation at the edge of the computational cell is where and are the normal and tangential vectors at the boundary .in addition , boundary conditions for the ns equations are \vec{n } & = 0.\end{cases}\label{eq : nsboundary}\ ] ] in the simulation , time - stepping accuracy is controlled by an absolute and relative error tolerance ( and ) for each integration step .let be the solution vector at a given time step , be the solvers estimated local error in during this time step , and be the number of degrees of freedom in the simulation .then a time step is accepted if the following condition is satisfied , ^{^{1/2}}<1.\label{eq : numerical}\ ] ] a triangular finite - element mesh is generated , and second - order quadratic basis functions are used in the simulation .parameters for fig .[ fig : evolution ] in the simulation are , , , , .growth factor of instability as a function of perturbation wavelength .fast- and slow- modes occur at wavelengths above their respective critical wavelengths .inset is a sketch of coaxial cylinder with radius and equal viscosities . ]a linear theory of capillary instability for a co - axial cylinder with equal viscosities is provided in the literature by stone and brenner .the growth rate ( ) for a wave vector is a solution of the following quadratic equations , \lambda(r , r)\right\ } \\\times\left\ { \sigma-\frac{k^{2}\gamma_{2}}{r\eta}\left[1-(rk)^{2}\right]\lambda(r , r)\right\ } \\= \frac{k^{4}\gamma_{1}\gamma_{2}}{rr\eta^{2}}\left[1-(rk)^{2}\right]\left[1-(rk)^{2}\right]\lambda(r , r)^{2},\label{eq : analyticformula}\end{gathered}\ ] ] where and are the radii of the unperturbed interfaces i and ii , and are the interfacial tensions , and is viscosity . , where , is associated with the modified bessel function , \label{eq : besselassociated}\ ] ] for the case of , the growth rate has the following formula , where the growth factor of in eq .[ eq : growthrate ] is a complicated function of instability wavelength .the instability time scale is scaled with radius . 
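the smoothed level-set initialization of appendix 1 can be written down explicitly. in the sketch below the sign convention (phi close to 1 inside the shell and close to 0 outside) and the numerical values of the interface radii and half-thickness are assumptions for illustration; only the sigmoid construction follows the appendix.

```python
# Minimal sketch of the smoothed level-set field for the cylindrical shell,
# built from two sigmoid step functions as in appendix 1.
# Sign convention and parameter values are assumptions for illustration.
import numpy as np

def shell_level_set(r, r1, r2, lam):
    """Smooth indicator of the shell r1 < r < r2 with interface half-thickness lam."""
    step_inner = 1.0 / (1.0 + np.exp((r - r1) / lam))   # ~1 for r < r1, ~0 for r > r1
    step_outer = 1.0 / (1.0 + np.exp((r - r2) / lam))   # ~1 for r < r2, ~0 for r > r2
    return step_outer - step_inner                       # ~1 only inside the shell

r = np.linspace(0.0, 3.0, 3001)
phi = shell_level_set(r, r1=1.0, r2=1.2, lam=0.02)
# the phi = 0.5 contours mark the two interfaces i and ii
interfaces = r[np.where(np.diff(np.sign(phi - 0.5)) != 0)]
print("interface radii recovered from phi = 0.5:", np.round(interfaces, 3))
```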
for the case of , this growth factoris calculated in fig .[ fig : growthfactor ] .a positive growth factor indicates a positive growth rate ( ) , for which any perturbation is exponentially amplified with time .instability occurs at long wavelengths above a certain critical wavelength .two critical wavelengths exist for the co - axial cylinder shell .one is a short critical wavelength for a faster - growth mode ( red line ) .the other is a long critical wavelength for slower - growth mode ( blue line ) . in the numerical simulation, the wavelength is chosen between these two wavelengths ( ) , and fast modes dominate . from the simulation parameters and , together with the wavelength corresponding to a growth factor , the linear theory predicts an instability time scale ., the viscosity of chalcogenide glass - forming melts depends on temperature and is calculated from an empirical arrhenius formula , temperature - dependent viscosity for various chalcogenide glasses .typical temperature during fiber drawing for glass is around . ]to ensure numerical stability in the simulation , an artificial diffusion term proportional to a small parameter is added to level - set eq .[ eq : levelset ] as follows
recent experimental observations have demonstrated interesting instability phenomena during thermal drawing of microstructured glass/polymer fibers, and these observations motivate us to examine surface-tension-driven instabilities in concentric cylindrical shells of viscous fluids. in this paper, we focus on a single instability mechanism: classical capillary instabilities in the form of radial fluctuations, solving the full navier stokes equations numerically. in equal-viscosity cases where an analytical linear theory is available, we compare it to the full numerical solution and delineate the regime in which the linear theory is valid. we also consider unequal-viscosity situations (similar to experiments) in which there is no published linear theory, and explain the numerical results with a simple asymptotic analysis. these results are then applied to experimental thermal-drawing systems. we show that the observed instabilities are consistent with the radial-fluctuation analysis, but cannot be predicted by radial fluctuations alone; an additional mechanism is required. we show how radial fluctuations alone, however, can be used to analyze various candidate material systems for thermal drawing, clearly ruling out some possibilities while suggesting others that have not yet been considered in experiments.
tetracyclines ( tc ) are a set of compounds used in human therapy against a broad - spectrum of microbial agents since 1947 .they can be classified as low cost drugs with small side - effects being largely used by food industry and for therapy in animals and plants .tc present activity against gram - positive and gram - negative anaerobic and aerobic bacteria , mycrobacterium and several protozoan parasites .non - antibacterial effects as anti - inflammatory , immunosuppressive , antioxidant and anticancer are also attributed to them .the excessive use of tc during years helped resistant bacteria proliferation , limiting then their clinical use .more than 30 tc resistance determinants have been described and several different tc resistance determinants have been identified in e. coli .this emergence of resistant pathogens is becoming an increasingly important problem and it motivates the search for new tc variations and derivatives .tc can be divided into two classes , the bacteriostatic typical tc and atypical tc , which are bactericidal and they exert their effects by promoting bacterial autolysis . most probably due to their different modes of action , atypical tc have been shown to exert activity against some tc resistant bacteria .the chemistry of tc in solution is quite complicated due to their ability to adopt different conformations , protonation states , and tautomeric forms , depending on the conditions investigated .tc and its analogues undergo complex formation with a variety of the metal cations present in biological fluids and this affects their chemical as well as conformational equilibrium in solutions .theoretical and experimental investigations have contributed to the understanding of the interaction of tc with biological medium and their mode of action .theoretical calculations have also been applied to investigations of new applications for old , new and hypothetical tc derivatives . in this work we present an investigation of electronic and quantum features for a series of tc ( figure [ fig1 ] , table [ tab1 ] ) for which the biological activity against _e. coli _ is known and also to new compounds .our results showed that it is possible to directly correlate some quantum electronic descriptors to tc biological activity .this information is then used in the design of new compounds .basic structure of tetracycline.,width=302 ] [ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] finally , it is interesting to note the role played by substituents in the biological behavior of the new set of tetraciclines .according to our qualitative analysis the main substitution requirement for biological activity is the presence of nh or no or dialkylamino groups at c7 and/or c9 . even fornon - active derivative 13 ( table i ) the presence of these substituents classifies as active new compounds 81 - 84 and 87 - 90 ( table viii ) . despite the whole molecular skeleton composed of 4 fused rings and the integrity of the ring a, the rest of the molecule including position c5 and c6 just plays minor role on the biological activity of tetracyclines .in this work we applied the eim methodology in the classification of activity of 14 derivatives of the tetracicline and in the predictive classification of the activity of 90 hypothetical new derivatives .our results showed that the eim results are not method dependent indicating that the classification of activity of tetraciclines is reproducible for different semiempirical methods . 
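as an illustration of the classification workflow only (not of the eim rules themselves), the following sketch runs pca followed by a small neural network on a synthetic descriptor matrix; the descriptor values, activity labels and network size are placeholders, not the am1/pm3 quantities or rules used in this work.

```python
# Illustrative sketch of a descriptor-based activity classification workflow
# (PCA followed by a small neural network). The descriptors and labels are
# synthetic placeholders, not the actual EIM/AM1/PM3 values from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(14, 4))                       # 14 hypothetical compounds x 4 descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # placeholder activity labels

model = make_pipeline(StandardScaler(),
                      PCA(n_components=2),
                      MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000,
                                    random_state=0))
model.fit(X, y)
print("training accuracy: %.0f%%" % (100 * model.score(X, y)))
```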
in comparison with conventional physicochemical descriptors the importance of eim descriptorsis reinforced by pca and ann methodologies .the rules constructed with eim , pca and ann methods reproduced the experimental data with 100% , 96% and 100% of accuracy , respectivelly . for the new 90 derivatives proposed eim and ann are in good agreement with a percentage of 93% consistent classification . these resultsadd to increasing data of eim studies indicating the pure quantum electronic indices can be reliably used to classify the biological activity of large number of different classes of materials .we hope the present study estimulate further experimental studies for tetracyclines in order to try to produce new antibiotic compounds .this work has been supported by the brazilian agency fapesp and also thanks to capes , cnpq , fapemig , and immp .lertvorachon , j. ; kim , j. p. ; soldatov , d. v. ; boyd , j. ; roman , g. ; cho , s. j. ; popek , t. ; jung , y. s. ; laua , p. c. k. ; konishia .y. _ bioorganic & medicinal chemistry _ , 2005 , * 13 * , 4627 - 4637. 1,12-substituted tetracyclines as antioxidant agents .g. sengelov , b. halling - sorensen , and f. m. aarestrup .veterinary microbiology , 95:91 - 101 , 2003 .susceptibility of escherichia coli and enterococcus faecium isolated from pigs and broiler chickens to tetracycline degradation products and distribution of tetracycline resistance determinants in e. coli from food animals .levy , s. b. ; hedges , r. w. ; sullivan , f.;medeiros , a. a. ; sosroseputro .h. multiple antibiotic - resistance plasmids in enterobacteriaceae isolated from diarrheal specimens of hospitalized children in indonesia ._ j. antimicrob .chemother _ , * 1985 * , 16 , 7 - 16 .jones , c. s. ; osborne , d. j. ; stanley .j. enterobacterial tetracycline resistance in relation to plasmid incompatibility . _cell probes _ , * 1992 * , 6 , 313 - 317 .p. m. v. b. chartone - souza , e. ; loyola , t. l. ; bucclarelli - rodriguez , w. ; menezes , m. a. d. ; rey , n. a. ; pereira - mala .e. c. synthesis and characterization of a tetracycline - platinum ( ii ) complex active against resistant bacteria ._ journal of inorganic biochemistry _ , * 2005 * , 99 , 1001 - 1008 .oliva , b. ; chopra .i. tet determinants provide poor protection against some tetracyclines - further evidence for division of tetracyclines into 2 classes ._ antimicrob .agents chemother _ , _ 1992 _ , 36 , 876 - 878 .halling - sorensen , b. ; sengelov , g. ; tornelund . j. toxicity of tetracyclines and tetracycline degradation products to environmentally relevant bacteria , including selected tetracycline - resistant bacteria . _toxicol _ , * 2002 * , 42 , 263 - 271 .cosentino , u. ; var , m. r. ; saracino , a. a. g. ; pitea , d. ; moro , g. ; salmona , m. tetracycline and its analogues as inhibitors of amyloid fibrils : searching for ageometrical pharmacophore by theoretical investigation of their conformational behavior in aqueous solution ._ j. mol . model ._ , * 2005 * , 11 , 17 - 25 .lambs , l. ; decock - le reverend , b. ; kozlowski , h. ; berthon , g. metal ion - tetracycline interactions in biological - fluids .9 .circular - dichroism spectra of calcium and magnesium complexes with tetracycline , oxytetracycline , doxycycline , and chlortetracycline and discussion of their binding modes ._ inorg . chem ._ , _ 1988 _ , 27 , 3001 - 3012 .lambs , l. ; berthon , g. 
metal - ion tetracycline interactions in biological - fluids .7 .quantitative investigation of methacycline complexes with ca(ii ) , mg(ii ) , cu(ii ) and zn(ii ) ions and assessment of their biological significance .acta _ , * 1988 * , 151 , 33 - 43 .brion , m. ; lambs , l. ; berthon , g. metal iontetracycline interactions in biological - fluids .6 .formation of copper(ii ) complexes with tetracycline and some of its derivatives and appraisal of their biological significance .chim . acta _ ,* 1986 * , 123 , 61 - 68 .lambs , l. ; brion , m. ; berthon , g. metal - ion tetracycline interactions in biological - fluids .4 . potential influence of ca-2 + and mg-2 + ions on the bioavailability of chlortetracycline and demethylchlortetracycline , as expected from their computer - simulated distributions in blood - plasma .acta _ , * 1985 * , 106 , 151 - 158 .bi , s. y. ; song , d. q. ; tian , y. ; zhou , x. ; liu , x. y. ; zhang , h. q. molecular spectroscopic study on the interaction of tetracyclines with serum albumins ._ spectrochimica acta part a - molecular and biomolecular spectroscopy _ , _ 2005 _ , 61 , 629 - 636 .baptiste , d. c. ; hartwick , a. t. e. ; jollimore , c. a. b ; baldridge , w. h. ; seigel , g. m. ; kelly , m. e. m. an investigation of the neuroprotective effects of tetracycline derivatives in experimental models of retinal cell death ._ molecular pharmacology _ ,* 2004 * , 66 , 1113 - 1122 .nicolas , i. ; vilchis , m. ; aragon , n. ; miranda , r. ; hojer , g. ; castro , m. theoretical study of the structure and antimicrobial activity of horminone ._ international journal of quantum chemistry _ , * 2003 * , 93 , 411 - 421 .zerner , m. c. ; dos santos , h. f. ; de almeida , w. b. conformational analysis of the anhydrotetracycline molecule : a toxic decomposition product of tetracycline ._ journal of pharmaceutical sciences _ , * 1988 * , 87 , 190 - 195 .belletato , p. ; de almeida , w. b. ; dos santos , h. f. ; nascimento , c. s. the conformational and tautomeric equilibrium of 5a,6-anhydrotetracycline in aqueous solution at ph 7 .structure - theochem _ , * 2003 * , 626 , 305 - 319 .de miranda . c. f.; costa , l. a. s. ; de almeida , w. b. ; dos santos , h. f. ; macial , b. l. structure and properties of the 5a,6-anhydrotetracycline platinum(ii ) dichloride complex : a theoretical ab initio study , _ journal of inorganic biochemistry _ , * 2006 * , 100(10 ) , 1594 - 1605 .dewar , m. j. s. ; zoebish , e.g. ; healy , e.f . ; stewart , j. j. p. the development and use of quantum - mechanical molecular - models .76 .am1 - a new general - purpose quantum - mechanical molecular - model , _ journal american chememical society _ , * 1985 * , 107 , 3092 - 3909 .paniago , e. b. ; tosi , l. ; beraldo , h. ; de siqueira , j. m. ; carvalho , s. metal - complexes of anhydrotetracycline .1 . a spectrometric study of the cu(ii ) and ni(ii ) complexes , _ j. pharm ._ , * 1994 * , 83 , 291 - 295 .garniersuillerot , a. ; beraldo , h. ; machado , f. c. ; demicheli . c. metal - complexes of anhydrotetracycline .2 . absorption and circular - dichroism study of mg(ii ) , al(iii ) , and fe(iii ) complexes - possible influence of the mg(ii ) complex on the toxic side - effects of tetracycline , _ j. inorg ._ , * 1995 * , 60 , 163 - 173 .beraldo , h. ; matos , s. v. d. m. metal - complexes of anhydrotetracycline .3 .an absorption and circular - dichroism study of the ni(ii ) , cu(ii ) and zn(ii ) complexes in aqueous - solution , _ j. braz ._ , * 1995 * , 6(4 ) , 405 - 411 .braga , r. s. 
; vendrame , r. ; galvao , d. s. structure - activity relationship studies of substituted 17 alpha - acetoxyprogesterone hormones , _ journal of chemical information and computer sciences _ , * 2000 * , 40(6 ) , 1377 - 1385 .braga , s. f. ; galvao , d. s. benzo[c]quinolizin-3-ones theoretical investigation : sar analysis and application to nontested compounds , _ journal of chemical information and computer sciences _ , * 2004 * , 44(6 ) , 1987 - 1997 .troche , k. s. ; braga , s. f. ; coluci , v. r. ; galvao , d. s. carcinogenic classification of polycyclic aromatic hydrocarbons through theoretical descriptors , _ international journal of quantum chemistry _ , * 2005 * , 103(5 ) , 718 - 730 .braga , s. f. ; galvao , d. s. a structure - activity study of taxol , taxotere , and derivatives using the electronic indices methodology ( eim ) , _ journal of chemical information and computer sciences _ , * 2003 * , 43(2 ) , 699 - 706 .
tetracyclines are an old class of molecules that constitute broad-spectrum antibiotics. since the first member of the tetracycline family was isolated, the clinical importance of these compounds as therapeutic and prophylactic agents against a wide range of infections has stimulated efforts to define their mode of action as inhibitors of bacterial reproduction. we used three sar methodologies for the analysis of the biological activity of a set of 104 tetracycline compounds. our calculations were carried out using the semi-empirical austin method one (am1) and parametric method 3 (pm3). the electronic indices methodology (eim), principal component analysis (pca) and artificial neural networks (ann) were applied to the classification of 14 old and 90 newly proposed derivatives of tetracyclines. our results make evident the importance of eim descriptors in pattern recognition and also show that the eim can be effectively used to predict the biological activity of tetracyclines. de física gleb wataghin, universidade estadual de campinas - unicamp, campinas - sp cp6165, cep 13081-970, brazil. (núcleo de estudos de química computacional), departamento de química, instituto de ciências exatas, universidade federal de juiz de fora, campus universitário, martelos, juiz de fora, mg, 36.036-900, brazil.
the two - slit interference experiment with particles has become a cornerstone for studying wave - particle duality .so fundamental is the way in which the two - slit experiment captures the essence of quantum theory , that feynman ventured to state that it is a phenomenon which has in it the heart of quantum mechanics ; in reality it contains the _ only _ mystery " of the theory . that radiation and massive particles can exhibit both wave nature and particle nature in different experiment ,had become quite clear in the early days of quantum mechanics .however , niels bohr emphasized that wave - nature , characterized by two - slit interference , and the particle - nature , characterized by the knowledge of which slit the particle passed through , are mutually exclusive .in doing this he raised this concept to the level of a new fundamental principle . much later, this principle was made quantitatively precise by deriving a bound on the extent to which the two natures could be observed simultaneously , by greenberger and yasin and later by englert .greenberger and yasin characterised the particle nature by the ability to correctly predict which slit the particle passed through .this predictability was based only on the initial state of the particle , and not on any measurement on it .englert characterised the particle nature by the ability to distinguish between the two paths of the particle , by an actual measurement .he introduced a quantity for this purpose , which took values between 0 and 1 .the wave nature was characterised by the visibility of the interference , given by .the relation putting a bound on the path distinguishability and fringe visibility is given by thus one can see that and , which can take values between 0 and 1 , are dependent on each other .a full which - way information ( ) would surely wash out the interference ( ) .( [ egy ] ) can be thought to be a quantitative statement of bohr s complementarity principle .it smoothly interpolates between the two extreme scenarios discussed by bohr , namely , full which - way information and no which - way information . a dramatic manifestation of bohr scomplementarity principle has been demonstrated in the so - called quantum eraser . here , erasing " the which - way information after the particle has passed through the slits , allows one to recover the lost interference fringes .wave - particle duality has also been connected to various other phenomena .connection of wave - particle duality with uncertainty relations has been investigated .an interesting complementarity between entanglement and the visibility of interference has been demonstrated for two - path interference . that the entanglement between the particle and which - path detector is what affects the visibility of interference , is also the view we take in this investigation .various aspects of complementarity were also explored by jaeger , shimony and vaidman where they also explored the case where the particle can follow multiple path , and not just two , before interfering .bohr s principle of complementarity should surely apply to multi - slit experiments too .however , one might wonder if one can find a quantitatively precise statement of it for multi - slit experiments .various attempts have been made to formulate a quantitatively precise statement of complementarity , some kind of a duality relation , for the case of multibeam interferometers .however , the issue is still not satisfactorily resolved . 
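for the two-slit case, the bound of eq. ( [ egy ] ) is englert's relation d^2 + v^2 <= 1, and it can be checked numerically under standard simplifying assumptions: equal beam amplitudes and pure which-path detector states, for which the visibility equals the overlap of the detector states and the distinguishability is sqrt(1 - v^2), so that the bound is saturated. the sketch below only illustrates this textbook limit.

```python
# Small numerical illustration of Englert's bound D^2 + V^2 <= 1 (eq. [egy]),
# assuming equal beam amplitudes and pure which-path detector states, for which
# V = |<d1|d2>| and D = sqrt(1 - |<d1|d2>|^2), saturating the bound.
import numpy as np

for overlap in np.linspace(0.0, 1.0, 6):
    V = overlap
    D = np.sqrt(1.0 - overlap**2)
    print("overlap=%.1f  D=%.3f  V=%.3f  D^2+V^2=%.3f" % (overlap, D, V, D**2 + V**2))
```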
beyondthe well studied two - slit experiment , the simplest multi - beam case is the 3-slit interference experiment .englert s duality relation was derived only in the context of 2-slit experiments , and one would like an analogous relation for the case of 3-slit experiments .that is the focus of this paper .of late there has been a newly generated focus on the three - slit interference experiments , albeit for a different reason .three slit interference is somewhat more involved than its 2-slit counterpart simply due to the fact that while the two - slit interference is the result of interference between two parts coming from the two slits , in the 3-slit interference there are three parts which interfere in different ways . in the general case , if we assume that the separation between slits 1 and 2 is and that between slits 2 and 3 is , there are two interferences from slit 1 and 2 and from slit 2 and slit 3 .in addition there is an interference between parts from slit 1 and 3 , which involves a slit separation of . in the case where the two slit separations are the same , , there are two interferences with slit separation and one interference with slit separation .having more than two slits also allows , in principle , the possibility of having different geometrical arrangement of slits .however , we restrict ourselves to the case of three slits in a linear geometry , as shown in fig .[ trislit ] , as that is the geometry in which the experiment is usually done , and that is also the geometry which is used in previous investigations of multi - slit experiments .we expect additional complicacy in interpreting bohr s complementarity because if we know that the particle did not go through ( say ) slit 3 , it may not imply complete loss of interference as there is still ambiguity regarding which of the other two slits , 1 or 2 , the particle went through .first we would like to have a way of knowing which of the three slits the particle passed through .any which - path detector should have three states which should correlate with the particle passing through each slit .let these states be , which correspond to particle passing through slits 1 , 2 and 3 , respectively .without loss of generality we assume that the states are normalized , although they may not necessarily be mutually orthogonal .the combined state of the particle and the which - path detector can be written as where are the amplitudes of the particle passing through the slit 1 , 2 and 3 , respectively .particle passes through slits 1 , 2 , and 3 with probabilities , and , respectively .if are mutually orthogonal , we can find a hermitian operator ( and thus , a measurable quantity ) which will give us different eigenvalues corresponding to latexmath:[ ] .it represents the three wave - packets spreading and overlapping with each other .the smaller the width of the slits , the stronger is the overlap between the three wave - packets .the probability ( density ) of finding the particle at a position on the screen is given by \nonumber\\ & & + \sqrt{p_2p_3}e^{-{2x^2+\ell_2 ^ 2 + 2x\ell_2\over 4\sigma^2}}\left[\langle d_2|d_3\rangle e^{{ix\ell_2\hbar t / m + i\hbar t\ell_2^ 2/2 m \over 4\omega^2 } } + \langle d_3|d_2\rangle e^{-{ix\ell_2\hbar t / m + i\hbar t\ell_2 ^ 2/2m\over 4\omega^2}}\right ] \nonumber\\ & & + \sqrt{p_1p_3}e^{-{x^2+(\ell_1 ^2+\ell_2 ^ 2)/2+x(\ell_2-\ell_1)\over 2\sigma^2}}\left [ \langle d_1|d_3\rangle e^{{i[{x(\ell_1+\ell_2)}+{(\ell_2 ^ 2-\ell_1 ^ 2)/2}]{\hbar t/ m}\over 4\omega^2}}\right.\nonumber\\ & & 
\left.\left.+ \langle d_3|d_1\rangle e^{-{i[{x(\ell_1+\ell_2)}+{(\ell_2 ^ 2-\ell_1 ^ 2)/2}]{\hbart / m}\over 4\omega^2}}\right]\right),\end{aligned}\ ] ] where and .we write the overlaps of various detector states as : , , .the phases being arbitrary , any additional phases coming from the amplitudes of the initial state ( [ initial ] ) would be absorbed in them .the probability density then reduces to where , , and .visibility of the interference fringes is conventionally defined as where and represent the maximum and minimum intensity in neighbouring fringes , respectively .since all the factors multiplying the cosine terms in ( [ pattern ] ) are non - negative , the maxima of the fringe pattern will occur where the value of all the cosines is .the minima will occur where value of all the cosines is at the same time .in the usual three - slit interference experiments , the three slits are equally spaced . in that case , and and .provided we ignore , the cosines can indeed have values all or all for certain values of . ignoring amounts to ignoring in comparison to , which is justified if one looks at any fringe except the central one ( at ) , since fringe width on the screen is much larger than the slit separation .ideal _ visibility can then be written down as where ^{-{d^2\over 2\sigma^2}}\right)$ ] . in reality, fringe visibility will be reduced due many factors , including the width of the slits .for example , if the width of the slits is very large , the fringes may not be visible at all .it will also get reduced by the varying phases . in an ideal situation, the maximum visibility one can theoretically get will be in the case when , which amounts to assuming that the spread of a particular wave - packet , when it reaches the screen , is so large that the separation between two neighboring slits is negligible in comparison , and also when all the phases are either zero or their effect cancels out .actual fringe visibility will be less than or equal to that , and can be written as using ( [ d3 ] ) the above equation gives eqn .( [ duality ] ) is a new duality relation which puts a bound on how much which - way information we can obtain and how much fringe visibility we can get at the same time .it is straightforward to check that is possible only for , and implies .note that ( [ duality ] ) can also be expressed in another form when the three slits are equally spaced , maximum is achieved when the two terms and the term have values + 1 at the same time , which happens when , provided that are zero . 
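the visibility defined above can be evaluated directly from a sampled fringe pattern. the sketch below keeps only the structure of eq. ( [ pattern ] ) for equally spaced slits, ignoring the gaussian envelopes, the small beta term and the phases, and treats the detector-state overlaps as free parameters; it is an illustration, not the full wave-packet calculation.

```python
# Numerical sketch of the simplified three-slit fringe pattern and its visibility
# V = (Imax - Imin)/(Imax + Imin), for equally spaced slits, with the
# detector-state overlaps c12, c23, c13 as free parameters (envelopes neglected).
import numpy as np

def fringe_visibility(p, c12, c23, c13, n=20001):
    """Return the visibility of the simplified 3-slit pattern."""
    p1, p2, p3 = p
    phi = np.linspace(0.0, 4.0 * np.pi, n)          # phase along the screen
    I = (p1 + p2 + p3
         + 2.0 * np.sqrt(p1 * p2) * c12 * np.cos(phi)
         + 2.0 * np.sqrt(p2 * p3) * c23 * np.cos(phi)
         + 2.0 * np.sqrt(p1 * p3) * c13 * np.cos(2.0 * phi))
    return (I.max() - I.min()) / (I.max() + I.min())

p = (1/3, 1/3, 1/3)
print(fringe_visibility(p, 1.0, 1.0, 1.0))  # full coherence: V close to 1
print(fringe_visibility(p, 0.0, 0.0, 0.0))  # orthogonal detector states: V = 0
print(fringe_visibility(p, 0.0, 1.0, 0.0))  # detector resolves slit 1 only: V ~ 2/3
```

the same routine can be used to compare equally and unequally spaced slits, which is the comparison taken up next.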
when the three slits are unequal , the three terms , and can not all have values + 1 at the same time .thus the maximum intensity will be smaller than that in the equally spaced case , or .minimum intensity in the equally spaced case is attained when all the cosine terms are equal to , which happens when , provided that are zero .when the three slits are unequally spaced , the three terms , and can not all have values at the same time .thus the minimum intensity will be larger than that in the equally spaced case , or .these two observations lead to the straightforward conclusion that fringe visibility in the case of unequal slits will be strictly smaller than that in the case of equal slits , other things being the same , using the above in conjunction with ( [ v3 ] ) we can write for the fringe visibility , for the case of unequal slits using ( [ d3 ] ) the above equation gives note the strictly less than sign in the above , in contrast with ( [ duality ] ) .the above relation implies that if the slits are unequally spaced , even if the distinguishability is reduced to zero , the visibility of interference can never be 1 .the reason of this behaviour lies in the fact that a certain amount of loss of interference visibility is rooted not in the path distinguishability , but in the unequal spacing of the three slits .let us look at some special cases arising from the fact that there are not two , but three slit .suppose we have a which - path detector which can detect with certainty if the particle has passed through slit 1 or not .if the particle has not passed through slit 1 , the detector is unable to say which of the other two slits , 2 or 3 , has the particle taken .such a scenario can occur , for example , if we have a tiny camera in front of slit 1 which , for each particle , can say if the particle has gone through slit 1 or not . in this case is orthogonal to both and , and , are parallel .thus , in this case , and .assuming , for simplicity , , the distinguishability , in this case is physically what is happening is the following .particle going through slits 2 and 3 gives rise to a sharp interference pattern , however , particle going through slit 1 gives rise to a uniform background particle count , thus reducing the overall visibility of the fringes arising from slits 2 and 3 .let us consider another case where and are orthogonal to each other , but both have equal overlap with .such a case can be exemplified by , and , where form an orthonormal set . in this case( again , for simplicity , assuming ) the distinguishability is just for completeness , here we wish to derive a duality relation for a two - slit experiment in the case where one defines path distinguishability based on uqsd .we define the distinguishability of two paths as the upper limit of ( [ pn ] ) for n=2 .the path distinguishability then reads where and are the probabilities of the particle to go through the first and the second slit , respectively . by carrying out an analysis similar to the one earlier in this section ( essentially , putting in ( [ pattern ] ) and redefining some constants ), one can show that the distinguishability and fringe visibility , in the two - slit experiment , are bounded by the above relation is a simple wave - particle duality relation for a two - slit interference experiment where the beams may be unequal .it can be connected to englert s duality relation ( [ egy ] ) for the case . 
in englerts analysis , distinguishability is given by , which can be related to in ( [ d2e ] ) , for , by the relation if one plugs in the above form of in ( [ dnew ] ) , the latter reduces to , which is just englert s duality relation ( [ egy ] ) .the new relation ( [ dnew ] ) appears to be more versatile for two - slit experiments , because it also applies to certain modified two - slit experiments in which the which - path detector is replaced by a quantum device " .lastly we discuss a particular scenario in which the two states of the which - path detector are identical , namely .in such a situation , experimentally one can not tell which slit the particle went through .however , if the probabilities of the particle for going through the two slits are known to be different , one can _ predict _ which slit the particle is most likely to have gone through . in this situation our , given by ( [ d2e ] ) ,is reduced to interestingly the above reduced distinguishability is related to the _ predictability _ , defined by greenberger and yasin as , by the following relation so , for the case , our new duality relation ( [ dnew ] ) reduces to which is precisely the duality relation derived by greenberger and yasin .so , the versatility of the new two - slit duality relation can be seen from the fact that for it reduces to englert s duality relation dealing with _distinguishability _ , and for , it reduces to greenberger and yasin s duality relation dealing with _in the analysis carried out in this paper , we have introduced a new path distinguishability , based on uqsd , which is just the upper limit of the probability with which one can _ unambiguously _ distinguish between the quantum states of the which - path detector correlated with the paths of the particle .consequently , it is the maximum probability with which one can _ unambiguously _ tell which slit the particle went through .we carried out a wave - packet evolution of a particle through a triple - slit . calculating the fringe - visibility after a schrdinger evolution, we relate it to the path distinguishability and derive a new duality relation .the analysis is restricted to three slits of equal widths , in a linear geometry , as shown in fig .[ trislit ] . starting from the triple - slit , the time evolution , leading to the probability density of the particle on the screen ,various approximations , in the subsequent analysis , are made only to obtain the maximum possible visibility of interference , given a particular .because of the way in which the analysis is carried out , this should be the tightest possible bound on distinguishability and fringe visibility for the 3-slit experiment . for two - slit interference , we derive a new duality relation which reduces to englert s duality relation and greenberger and yasin s duality relation , in different limits .lastly , we feel that ( [ pn ] ) suggests a straightforward definition of distinguishability for n - slit interference experiments : where is the state of the path - detector correlated with the ith of the n possible paths .mohd asad siddiqui thanks the university grants commission , india for financial support .
the issue of interference and which - way information is addressed in the context of 3-slit interference experiments . a new path distinguishability is introduced , based on unambiguous quantum state discrimination ( uqsd ) . an inequality connecting the interference visibility and path distinguishability , , is derived which puts a bound on how much fringe visibility and which - way information can be simultaneously obtained . it is argued that this bound is tight . for 2-slit interference , we derive a new duality relation which reduces to englert s duality relation and greenberger - yasin s duality relation , in different limits .
the topology of the pattern of contacts between individuals plays a fundamental role in determining the spreading patterns of epidemic processes .the first predictions of classical epidemiology were based on the homogeneous mixing hypothesis , assuming that all individuals have the same chance to interact with other individuals in the population .this assumption and the corresponding results were challenged by the empirical discovery that the contacts within populations are better described in terms of networks with a non - trivial structure .subsequent studies were devoted to understanding the impact of network structure on the properties of the spreading process .the main result obtained concerned the large susceptibility to epidemic spread shown by networks with a strongly heterogeneous connectivity pattern , as measured by a heavy - tailed degree distribution ( defined as the probability distribution of observing one individual connected to others ) with a a diverging second moment .the original studies considered the interaction networks as static entities , in which connections are frozen or evolve at a time scale much longer than the one of the epidemic process .this static view of interaction networks hides however the fact that connections appear , disappear , or are rewired on various timescales , corresponding to the creation and termination of relations between pairs of individuals .longitudinal data has traditionally been scarce in social network analysis , but , thanks to recent technological advances , researchers are now in a position to gather data describing the contacts in groups of individuals at several temporal and spatial scales and resolutions .the analysis of empirical data on several types of human interactions ( corresponding in particular to phone communications or physical proximity ) has unveiled the presence of complex temporal patterns in these systems . in particular , the heterogeneity and burstiness of the contact patterns are revealed by the study of the distribution of the durations of contacts between pairs of agents , the distribution of the total time in contact of pairs of agents , and the distribution of gap times between two consecutive interactions involving a common individual .all these distributions are indeed heavy - tailed ( often compatible with power - law behaviors ) , which corresponds to the burstiness of human interactions .these findings have led to a large modeling effort and stimulated the study of the impact of a network s dynamics on the dynamical processes taking place on top of it .the processes studied in this context include synchronization , percolation , social consensus , or diffusion .epidemic - like processes have also been explored , both using realistic and toy models of propagation processes .the study of simple schematic spreading processes over temporal networks helps indeed expose several properties of their dynamical structure : dynamical processes can in this context be conceived as probing tools of the network s temporal structure .the study of spreading patterns on networks is naturally complemented by the formulation of vaccination strategies tailored to the specific topological ( and temporal ) properties of each network .optimal strategies shed light on how the role and importance of nodes depend on their properties , and can yield importance rankings of nodes . 
in the case of static networks ,this issue has been particularly stimulated by the fact that heterogeneous networks with a heavy - tailed degree distribution have a very large susceptibility to epidemic processes , as represented by a vanishingly small epidemic threshold .in such networks , the simplest strategy consisting in randomly immunizing a fraction of the nodes is ineffective .more complex strategies , in which nodes with the largest number of connections are immunized , turn out to be effective but rely on the global knowledge of the network s topology .this issue is solved by the so - called acquaintance immunization , which prescribes the immunization of randomly chosen neighbors of randomly chosen individuals .few works have addressed the issue of the design of immunization strategies and their respective efficiency in the case of dynamical networks . in particular , consider datasets describing the contacts occurring in a population during a time interval ] to decide which individuals should be immunized in order to limit the spread during the remaining time ] is immunized .in the recent strategy , the last contact before of each of the randomly chosen individuals is immunized .both strategies are defined in the spirit of the acquaintance immunization , insofar as they select nodes using only partial ( local ) information on the network . using a large , that these strategies perform better than random immunization and show that this is related to the temporal correlations of the dynamical networks . in this paper, we investigate several immunization strategies in temporal networks , including the ones considered by , and address in particular the issue of the length of the `` training window '' , which is highly relevant in the context of real - time , specific tailored strategies .the scenario we have in mind is indeed the possibility to implement a real - time immunization strategy for an ongoing social event , in which the set of individuals to the immunized is determined by strategies based on preliminary measurements up to a given time .the immunization problem takes thus a two - fold perspective : the specific rules ( strategy ) to implement , and the interval of time over which preliminary data are collected .obviously , a very large will lead to more complete information , and a more satisfactory performance for most targeting strategies , but it incurs in the cost of a lengthy data collection . on the other hand, a short will be cost effective , but yield a smaller amount of information about the observed social dynamics . in order to investigate the role of the training window length on the efficiency of several immunization strategies , we consider a simple snowball susceptible - infected ( si ) model of epidemic spreading or information diffusion . in this model, individuals can be either in the susceptible ( s ) state , indicating that they have not been reached by the `` infection '' ( or information ) , or they can be in the infectious ( i ) state , meaning that they have been infected by the disease ( or that they have received the information ) and can further propagate it to other individuals .infected individuals do not recover , i.e. , once they transition to the infectious state they remain indefinitely in that state . despite its simplicity ,this model has indeed proven to provide interesting insights into the temporal structure and properties of temporal networks . 
herewe focus on the dynamics of the si model over empirical time - varying social networks .the networks we consider describe time - resolved face - to - face contacts of individuals in different environments and were measured by the sociopatterns collaboration ( ` http://www.sociopatterns.org ` ) using wearable proximity sensors .we consider the effect on the spread of an si model of the immunization of a fraction of nodes , chosen according to different strategies based on different amounts of information on the contact sequence .we find a saturation effect in the increase of the efficiency of strategies based on nodes characteristics when the length of the training window is increased .the efficiency of strategies that include an element of randomness and are based on temporally local information do not perform as well but are largely independent on the amount of information available . the paper is organized as follows : we briefly describe the empirical data in sec .[ sec : empir - cont - sequ ] . in sec .[ sec : epid - models - numer ] we define the spreading model and some quantities of interest .the immunization strategies we consider are listed in sec .[ sec : immun - strat ] .[ sec : numerical - results ] contains the main numerical results , and we discuss in sec .[ sec : effects - temp - corr ] and sec .[ sec : effects - non - determ ] the respective effects of temporal correlations and of randomness effects in the spreading model . section [ sec : conclusions ] finally concludes with a discussion on our results .we consider temporal networks describing the face - to - face close proximity of individuals in different contexts , collected by the sociopatterns collaboration . we refer to ` http://www.sociopatterns.org ` and for details on the data collection strategy , which is based on wearable sensors worn by individuals .the datasets give access , for each pair of participating individuals , to the list of time intervals in which they were in face - to - face close proximity ( m ) , with a temporal resolution of seconds . in this paper, we use temporal social networks measured in several different social contexts : the 2010 european semantic web conference ( `` eswc '' ) , a geriatric ward of a hospital in lyon ( `` hosp '' ) , the 2009 acm hypertext conference ( `` ht '' ) , and the 2009 congress of the socit francaise dhygine hospitalire ( `` sfhh '' ) .these data correspond therefore to the fast dynamics of human contacts over the scale of a few days .a more detailed description of the corresponding contexts and analyses of these datasets can be found in . in table[ tab : summary ] we summarize some of properties of the considered datasets ..some properties of the sociopatterns datasets under consideration : number of different individuals engaged in interactions ( ) ; total duration of the contact sequence ( ) , measured in intervals of length sec . ; average degree ( number of different contacts ) and average strength ( total time spent in face - to - face interactions ) of the network of contacts aggregated over the whole sequence ; average number of interactions at each time step . [ cols="^,^,^,^,^,^",options="header " , ]we simulate numerically the susceptible - infected ( si ) spreading dynamics on the above describe datasets of human face - to - face proximity .the process is initiated by a single infected individual ( `` seed '' ) . 
at each time step, each infected individual infects with probability the susceptible individuals with whom is in contact during that time step .the process stops either when all nodes are infected or at the end of the temporal sequence of contacts .different individuals have different contact patterns and _ a priori _ contribute differently to the spreading process . in order to quantify the spreading efficiency of a given node , we proceed as follows : we consider as the seed of the si process , all other nodes being susceptible .we measure the half prevalence time , i.e. , the time needed to reach a fraction of infected nodes equal to of the population . since not all nodes appear simultaneously at of the contact sequence , we define the _ half - infection time _ of seed node as , where is the time at which node first appears in the contact sequence . the half - infection time can thus be seen as a measure of the spreading power of node : smaller values correspond to more efficient spreading patterns .we first focus on the deterministic case ( the effects of stochasticity , as given by , are explored in sec .[ sec : effects - non - determ ] ) . figure [ fig : rank_t ] shows rank plot of the rescaled half - infection times for various datasets , where is the duration of the contact sequence .we note that is quite heterogeneous , ranging from up to .as the time needed to reach a different fraction of the population , such as e.g. , leads to a similar heterogeneity . ]divided by the contact sequence duration for the various datasets.,width=321 ] some nodes are therefore much more efficient spreaders than others .this implies that the immunization of different nodes could have very different impacts on the spreading process . to estimate this impact ,we define for each node the infection delay ratio as where is the half - infection time obtained when node is the seed of the spreading process and node is immunized , and the ratio is averaged over all possible seeds and over different starting times for the si process ( the half - infection time being much smaller than the total duration of the contact sequence , ) . is not present during the time window in which the si process is simulated ; in this case , . ]the infection delay ratio quantifies therefore the average impact that the immunization of node has on si processes unfolding over the temporal network .figure [ fig : rank_tau ] displays a rank plot of for various datasets . as expected , the immunization of a single node does most often lead to a limited delay of the spreading dynamics .interestingly however , is broadly distributed and large values are also observed . 
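as a rough illustration of the deterministic si dynamics and of the half-infection time just defined, the following python sketch runs the process on a temporal contact sequence stored as a list of (t, i, j) events; the event format, the function name and the toy data are assumptions introduced here for illustration, not part of the original study.

```python
# minimal sketch of a deterministic si process (beta = 1) on a temporal
# contact sequence; the (t, i, j) event format and all names are assumptions.

def si_half_infection_time(events, nodes, seed, start_time=0):
    """Return the time at which half of the population is infected when the
    process is seeded at `seed`, or None if half prevalence is never reached.
    Subtracting the seed's first appearance time gives the half-infection
    time discussed in the text."""
    infected = {seed}
    half = len(nodes) / 2.0
    if len(infected) >= half:
        return start_time
    for t, i, j in sorted(events):               # events are (time, node_i, node_j)
        if t < start_time:
            continue
        if (i in infected) != (j in infected):   # an s-i contact transmits (beta = 1)
            infected.update((i, j))
            if len(infected) >= half:
                return t
    return None

# toy usage on a five-node contact sequence with 20-second resolution
events = [(0, 'a', 'b'), (20, 'b', 'c'), (40, 'c', 'd'), (60, 'd', 'e')]
print(si_half_infection_time(events, {'a', 'b', 'c', 'd', 'e'}, seed='a'))
```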
for various datasets.,width=321 ]the infection delay ratio of a single node , , can be generalized to the case of the immunization of any set of nodes , with .we measure the spreading slowing down obtained when immunizing the set through the infection delay ratio where is the half - infection times of node when all the nodes of set are immunized , and the average is performed over all possible seeds and different starting times for the si process .in addition to slowing down the propagation process , the immunization of certain individuals can also block the spreading paths towards other , non - immunized , individuals , limiting in this way the final number of infected individuals .we measure this effect through the _ average outbreak size ratio _ where and are the number of infected individuals ( outbreak size ) for an si process with seed , with and without immunization of the set , respectively .the ratio is averaged over all possible seeds and over different starting times of the si process .an immunization strategy is defined by the choice of the set of nodes to be immunized .we define here different strategies , and we compare their efficiencies in section [ sec : numerical - results ] by measuring and .more precisely , for each contact sequence of duration we consider an initial temporal window ] ; the aggregated degree of an individual corresponds to the number of different other individuals with whom has been in contact during ] ; * acquaintance protocol .we choose randomly an individual and immunize one of his contacts in ] , repeating for various elements until individuals are immunized ; * recent protocol .we choose randomly an individual and immunize his last contact in ] . the * rn * strategy uses no information about the contact sequence and we use it as a worst case performance baseline .the * t * strategy makes use , through the quantity , of the entire information about the contact sequence as well as complete information about the average effect of node immunization on si processes taking place over the contact sequence .it could thus be expected to yield the best performance among all strategies . the * a * , * w * ,* r * and * rn * strategies involve a random choice of individuals .in each of these cases , we average the results over independent runs ( each run corresponding to an independent choice of the individuals to immunize ) .we first study the role of the temporal window on the efficiency of the various immunization strategies . to this aim , we consider two values of the fraction of immunized individuals , and , and compute the infection delay ratio as a function of for each immunization protocol , and for each dataset .the results , displayed in figs . [ fig : delay_dt ] and [ fig : delay_dt_f02 ] , show that an increase in the amount of information available , as measured by an increase in , does not necessarily translate into an larger efficiency of the immunization , as quantified by the delay of the epidemic process .the , and protocols have in all cases lower efficiencies that remain almost independent on . moreover , and in contrast with the results of on a different dataset , and do not perform better than . on the other hand , the immunization efficiency of the , and protocols increases at small and reaches larger values for all the datasets. as expected , the * rn * protocol , which does not use any information , fares the worst . for ,all protocols yield an infection delay ratio that is largely independent from for large enough training windows . 
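to make the selection rules concrete, the sketch below implements three of the protocols listed above (aggregated degree k, acquaintance a, and recent r) on the training window [0, delta_t]; the (t, i, j) event format, the function names and the attempt cap are assumptions made here, and the w, b and t protocols are omitted for brevity.

```python
import random
from collections import defaultdict

def aggregated_neighbors(events, delta_t):
    """Aggregate the contacts observed during the training window [0, delta_t]."""
    neighbors = defaultdict(set)
    for t, i, j in events:
        if t <= delta_t:
            neighbors[i].add(j)
            neighbors[j].add(i)
    return neighbors

def select_degree(events, delta_t, n_immunize):
    """K protocol: the n_immunize nodes with the largest aggregated degree."""
    neighbors = aggregated_neighbors(events, delta_t)
    ranked = sorted(neighbors, key=lambda n: len(neighbors[n]), reverse=True)
    return set(ranked[:n_immunize])

def select_acquaintance(events, delta_t, n_immunize, max_tries=10000):
    """A protocol: immunize a random contact of a randomly chosen individual."""
    neighbors = aggregated_neighbors(events, delta_t)
    people, chosen = list(neighbors), set()
    for _ in range(max_tries):
        if len(chosen) >= n_immunize or not people:
            break
        ego = random.choice(people)
        chosen.add(random.choice(sorted(neighbors[ego])))
    return chosen

def select_recent(events, delta_t, n_immunize, max_tries=10000):
    """R protocol: immunize the last contact (before delta_t) of random individuals."""
    last_contact = {}
    for t, i, j in sorted(e for e in events if e[0] <= delta_t):
        last_contact[i], last_contact[j] = j, i
    people, chosen = list(last_contact), set()
    for _ in range(max_tries):
        if len(chosen) >= n_immunize or not people:
            break
        chosen.add(last_contact[random.choice(people)])
    return chosen
```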
for , the increase of more gradual but tends to saturate for as well . in all cases ,a limited knowledge of the contact time series is therefore sufficient to estimate which nodes have to be immunized in order to delay the spreading dynamics , especially for small , i.e. , in case of limited resources .interestingly , in some cases , the and protocols lead to a larger delay of the spread than the protocol , despite the fact that the latter is designed to explicitly identify the nodes which yield the maximal ( individual ) infection delay ratio .this could be ascribed to correlations between the activity patterns of nodes , leading to a non - linear dependence on of the immunization efficiency ( in particular , the list of nodes to immunize is built using the list of degrees , betweenness centralities , and values computed on the original network , without recomputing the rankings each time a node is removed ) .figure [ fig : infect_dt ] reports the outbreak ratio as a function of the temporal window for different vaccination protocols .results similar to the case of the infection delay ratio are recovered : the reduction in outbreak size , as quantified by the average outbreak size ratio defined in eq .( [ eq:3 ] ) , reaches larger values for the degree , betweenness centrality and protocols than for the , and protocols .we finally investigate the robustness of our results when the fraction of immunized individuals varies . to this aim , we use a fixed length for the training window and we plot the infection delay ratio and the average outbreak size ratio as a function of , respectively , in figs .[ fig : delay_f ] and [ fig : infect_f ] .the results show that the ranking of the strategies given by these two quantities is indeed robust with respect to variations in the fraction of immunized individuals .in particular , the and protocols perform much better than the and protocols for at least one of the efficiency indicators .real time - varying networks are characterized by the presence of bursty behavior and temporal correlations , which impact the unfolding of dynamical processes . for instance , if a contact between vertices and takes place only at the ( discrete ) times , it can not be used in the course of a dynamical processes at any time .a propagation process initiated at a given seed might therefore not be able to reach all the other nodes , but only those belonging to the seed s set of influence , i.e. , those that can be reached from the seed by a time respecting path . 
in order to investigate the role of temporal correlations , we consider a reshuffled version of the data in which correlations between consecutive interaction among individuals are removed .to this aim , we consider the list of events describing a contact between and at time and reshuffle at random their time stamps to build a synthetic uncorrelated new contact sequence .we then apply the same immunization protocols to this uncorrelated temporal network .figure [ fig : delay_sran ] displays the corresponding results for the infection delay ratio computed for si spreading simulations performed on a randomized dataset ( similar results are obtained for the average outbreak size ratio ) .we have checked that our results hold across different realizations of the randomization procedure .the efficiency of the protocol is then largely independent of the training window length .as the contact sequence is random and uncorrelated , all temporal windows are statistically equivalent , and no new information is added by increasing : in particular , as nodes appear in the randomly reshuffled sequence with a constant probability that depends on their empirical activity , the ranking of nodes according for instance to their aggregated degree remains very stable as changes , so that a very small is enough to reach a stable such ranking .nevertheless , the efficiency ranking of the different protocols is unchanged : the degree , betweenness centrality , and protocols outperform the other immunization strategies . moreover , the efficiency levels reached are higher than for the original contact sequence : the correlations present in the data limit the immunization efficiency in the case of the present datasets .note that studies of the role of temporal correlations on the speeding or slowing down of spreading processes have led to contrasting results , as discussed by , possibly because of the different models and dataset properties considered .we also verify the robustness of our results using a probabilistic si process with .we consider the same immunization strategies and we compute the same quantities as in the case . given the probabilistic nature of the spreading process , we now average the above observables over realizations of the si process .figure [ fig : delay02 ] shows that our results hold in the case of a probabilistic epidemics spreading , although in this case the infection delay ratio presents a noisy behavior , due to the stochastic fluctuations originated in the probabilistic spreading dynamics .the average outbreak ratio , not shown , behaves in a very similar way .thus , also in this more realistic case with , a limited knowledge of the contact sequence is enough to identify which individuals to immunize .within the growing body of work concerning temporal networks , few studies have yet considered the issue of immunization strategies and of their efficiency . in general terms , the amount of information that can be extracted from the data at hand about the characteristics of the nodes and links is a crucial ingredient for the design of optimal immunization strategies . 
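going back to the randomization procedure used at the beginning of this section, a minimal sketch of it is given below: keeping the (i, j) pairs and the number of events fixed while reshuffling the time stamps destroys the temporal correlations but preserves who interacts and how often. the function name and event format are assumptions.

```python
import random

def reshuffle_timestamps(events):
    """Return a copy of the (t, i, j) event list with time stamps reshuffled at random."""
    times = [t for t, _, _ in events]
    random.shuffle(times)
    return sorted((t, i, j) for t, (_, i, j) in zip(times, events))
```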
understanding how much informationis needed in order to design good ( and even optimal ) strategies , and how the efficiency of the strategies depend on the information used , remain largely an open questions whose answer might depend on the precise dataset under investigation .we have here leveraged several datasets describing contact patterns between individuals in varied contexts , and performed simulations in order to measure the effect of different immunization strategies on simple si spreading processes .we have considered immunization strategies designed according to different principles , different ways of using information about the data , and different levels of randomness .strategies range from the completely random to the , and strategies that include a random choice , to the fully deterministic , and that are based on various node s characteristics .moreover , uses only local information while and rely on the global knowledge of the connection patterns .the strategies that are most efficient , as measured by the change in the velocity of the spread and by the final number of nodes infected , are the deterministic protocols , namely and .strategies based on random choices , even when they are designed in order to try to immunize `` important '' nodes , are less efficient .we have moreover investigated how the performance of the various strategies depends on the time window on which the nodes characteristics are measured .a longer time window corresponds indeed a priori to an increase in the available information and hence to the possibility to better optimize the strategies .we have found , however , a clear saturation effect in the efficiency increase of the various strategies as the training window on which they are designed increases .this is particularly the case when the fraction of immunized nodes is small ( fig .[ fig : delay_dt ] ) , for which a small is enough to reach saturation , while the saturation is more gradual for larger fractions of immunized ( fig .[ fig : delay_dt_f02 ] ) .moreover , the strategies that involve a random component yield results that are largely independent on the amount of information considered . in order to understand these results in more details , we have considered the evolution with time of the nodes properties .in particular , we compare the largest degree nodes in the fully aggregated network with the set of nodes , chosen by following the strategy on the training window ] , as a function of , for several nodes .the fraction of nodes with the largest degree in the fully aggregated network are ranked among the most connected nodes already for small training windows . 
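as an aside, one simple way to quantify the stability of the degree ranking discussed here is the overlap between the top-ranked nodes of the network aggregated over [0, delta_t] and those of the fully aggregated network; the sketch below is an illustrative assumption of how such a curve could be computed, not the exact procedure used for fig. [fig:fhighestdegnode].

```python
from collections import defaultdict

def top_k_by_degree(events, k, t_max=None):
    """Nodes with the k largest aggregated degrees, optionally restricted to t <= t_max."""
    neighbors = defaultdict(set)
    for t, i, j in events:
        if t_max is None or t <= t_max:
            neighbors[i].add(j)
            neighbors[j].add(i)
    ranked = sorted(neighbors, key=lambda n: len(neighbors[n]), reverse=True)
    return set(ranked[:k])

def top_k_overlap(events, k, t_max):
    """Fraction of the final top-k nodes already identified after a training window t_max."""
    return len(top_k_by_degree(events, k) & top_k_by_degree(events, k, t_max)) / float(k)
```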
for ,the ranking fluctuates more and takes longer to stabilize .overall , while the precise ordering scheme of the nodes according to their degree is not entirely stable with respect to increasing values of , a coarse ordering is rather rapidly reached : the nodes that reach a large degree at the end of the dataset are rapidly ranked among the highest degree nodes , and the nodes that in the end have a low degree are as well rapidly categorized as such .this confirms the result of fig .[ fig : fhighestdegnode ] and explains why the strategy reaches its best efficiency even at short training windows for small , and with a more gradual saturation for larger fractions of immunized nodes .the fact that high degree nodes are identified early on in the information collection process comes here as a surprise : for a temporal network with poissonian events , all the information on the relative importance of links and nodes is present in the data as soon as the observation time is larger than the typical timescale of the dynamics ; this is however a priori not the case for the bursty dynamics observed in real - world temporal networks .various factors can explain the observed stability in the ranking of nodes .on the one hand , some nodes can possess some intrinsic properties giving them an important a priori position in the network ( for instance , nurses in a hospital , or senior scientists in a conference ) that ensure them a larger degree than other nodes even at short times .on the other hand , the stability of the ranking could in fact be only temporary , and due to the fact that nodes arriving earlier in the dataset have a larger probability to gather a large number of contacts . in this case , the observed stability of the ordering scheme could decrease for data collected on longer timescales .another reason for the saturation of the efficiency of the various strategies is shown in fig .[ fig : degotimeslice ] , which displays the evolution of the degree of some nodes in the network aggregated on a temporal window ] can have temporarily a small degree in a subsequent time window .the strong variation in the degree of nodes at different times clearly limits the efficiency not only of immunization strategies based on information that is local in time , but even of strategies based on aggregated information .the main conclusion of our study is therefore twofold . on the one hand ,a limited amount of information on the contact patterns is sufficient to design relevant immunization strategies ; on the other hand , the strong variation in the contact patterns significantly limits the efficiency of any strategy based on importance ranking of nodes , even if such deterministic strategies still perform much better than the `` recent '' or `` weight '' protocols that are generalizations of the `` acquaintance '' strategy .moreover , strategies based on simple quantities such as the aggregated degree perform as well as , or better , than strategies based on more involved measures such as the infection delay ratio defined in sec . 
[ sec : epid - models - numer ]. we also note that, contrary to the case investigated by , the ``recent'' and ``weight'' strategies, which try to exploit the temporal structure of the data, do not perform clearly better than the simpler ``acquaintance'' strategy. such an apparent discrepancy might have various causes. in particular, consider spreading processes starting exactly at , while we average over different possible starting times. the datasets used are moreover of a different nature ( indeed obtain contrasted results for different datasets ) and have different temporal resolutions. a more detailed comparison of the properties of the different datasets would be needed in order to fully understand this point, as discussed for instance by .

rps acknowledges financial support from the spanish micinn, under project no. fis2010-21781-c02-01, the junta de andalucía, under project no. p09-fqm4682, and additional support through icrea academia, funded by the generalitat de catalunya. ab, cc and rps are partly supported by fet project multiplex 317532.
spreading processes represent a very efficient tool to investigate the structural properties of networks and the relative importance of their constituents, and have been widely used to this aim in static networks. here we consider simple disease spreading processes on empirical time-varying networks of contacts between individuals, and compare the effect of several immunization strategies on these processes. an immunization strategy is defined as the choice of a set of nodes (individuals) who can neither catch nor transmit the disease. this choice is performed according to a certain ranking of the nodes of the contact network. we consider various ranking strategies, focusing in particular on the role of the training window during which the nodes' properties are measured in the time-varying network: longer training windows correspond to a larger amount of information collected and could be expected to result in better performance of the immunization strategies. we find instead an unexpected saturation in the efficiency of strategies based on node characteristics when the length of the training window is increased, showing that a limited amount of information on the contact patterns is sufficient to design efficient immunization strategies. this finding is balanced by the large variations of the contact patterns, which strongly alter the importance of nodes from one period to the next and therefore significantly limit the efficiency of any strategy based on an importance ranking of nodes. we also observe that strategies that include an element of randomness and are based on temporally local information do not perform as well, but their efficiency is largely independent of the amount of information available.

time-varying contact networks, epidemic spreading, immunization strategies
ordinary approach to quantum algorithm is based on quantum turing machine or quantum circuits .it is known that this approach is not powerful enough to solve np - complete problems . in have proposed a new approach to quantum algorithm which goes beyond the standard quantum computation paradigm .this new approach is a sort of combination of the ordinary quantum algorithm and a chaotic dynamics .this approach was based on the results obtained in the paper .there are important problems such as the knapsack problem , the traveling salesman problem , the integer programming problem , the subgraph isomorphism problem , the satisfiability problem that have been studied for decades and for which all known algorithms have a running time that is exponential in the length of the input .these five problems and many other problems belong to the set of * np*-complete problems .many * np*-complete problems have been identified , and it seems that such problems are very difficult and probably exponential .if so , solutions are still needed , and in this paper we consider an approach to these problems based on quantum computers and chaotic dynamics as mentioned above .as in the previous papers we again consider the satisfiability problem as an example of np - complete problems and argue that the problem , in principle , can be solved in polynomial time by using our new quantum algorithm .it is widely believed that quantum computers are more efficient than classical computers .in particular shor gave a remarkable quantum polynomial - time algorithm for the factoring problem .however , it is known that this problem is not * np*-complete but is np - intermidiate .since the quantum algorithm of the satisfiability problem ( sat for short ) has been considered in , accardi and sabbadini showed that this algorithm is combinatric one and they discussed its combinatric representation .it was shown in that the sat problem can be solved in polynomial time by using a quantum computer under the assumption that a special superposition of two orthogonal vectors can be physically detected .the problem one has to overcome here is that the output of computations could be a very small number and one needs to amplify it to a reasonable large quantity . in this paperwe construct a new model ( representation ) of computations which combine ordinary quantum algorithm with a chaotic dynamical system and prove that one can solve the sat problem in polynomial time . for a recent discussion of computational complexity in quantum computing see .mathematical features of quantum computing and quantum information theory are summarized in .let be a set .then and its negation are called literals and the set of all such literals is denoted by .the set of all subsets of is denoted by and an element is called a clause .we take a truth assignment to all variables if we can assign the truth value to at least one element of then is called satisfiable . 
when is satisfiable , the truth value of is regarded as true , otherwise , that of is false .take the truth values as true false then let be a boolean lattice with usual join and meet and be the truth value of a literal in then the truth value of a clause is written as further the set of all clauses is called satisfiable iff the meet of all truth values of is 1 ; thus the sat problem is written as follows : sat problem : given a set and a set of clauses , determine whether is satisfiable or not .that is , this problem is to ask whether there exsits a truth assignment to make satisfiable .it is known in usual algorithm that it is polynomial time to check the satisfiability only when a specific truth assignment is given , but we can not determine the satisfiability in polynomial time when an assignment is not specified .note that a formula made by the product ( and ) of the disjunction ( or ) of literals is said to be in the _ product of sums _( pos ) form .for example , the formula is in pos form .thus a formula in pos form is said to be _satisfiable _ if there is an assignment of values to variables so that the formula has value 1 .therefore the sat problem can be regarded as determining _ whether or not a formula in pos form is satisfiable_. the following analytical formulation of sat problem is useful .we define a family of boolean polynomials , indexed by the following data .one is a set where and is defined as we assume here the addition modulo 2 .the sat problem now is to determine whether or not there exists a value of such that the quantum algorithm of sat problem is needed to add the dust bits to the input bits , the number of dust bites has been shown the order of . therefore for simplicity we will work in the -tuple tensor product hilbert space **c** in this paper with the computational basis where or we denote the quantum version of the function given by the unitary operator we assume that the unitary matrix can be build in the polynomial time , see .now let us use the usual quantum algorithm : \(i ) by using the fourier transform produce from the superposition \(ii ) use the unitary matrix to calculate now if we measure the last qubit , i.e. , apply the projector to the state then we obtain that the probability to find the result is where is the number of roots of the equation if is suitably large to detect it , then the sat problem is solved in polynominal time .however , for small the probability is very small and this means we in fact do nt get an information about the existence of the solution of the equation so that in such a case we need further deliberation .let us simplify our notations .after the step ( ii ) the quantum computer will be in the state where and are normalized qubit states and effectively our problem is reduced to the following qubit problem .we have the state and we want to distinguish between the cases and (small positive number ) .it is argued in that quantum computer can speed up * np * problems quadratically but not exponentially .the no - go theorem states that if the inner product of two quantum states is close to 1 , then the probability that a measurement distinguishes which one of the two it is exponentially small .and one could claim that amplification of this distinguishability is not possible . 
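to make the number of roots r and the probability of reading 1 on the last qubit concrete, the following classical (and exponential-time) sketch counts the satisfying assignments of a small formula in product-of-sums form; the clause encoding is an assumption introduced here, and the snippet only illustrates the size of the quantity q = r / 2^n that needs to be amplified, not the quantum algorithm itself.

```python
from itertools import product

# each clause is a list of (variable_index, is_negated) literals -- an
# illustrative encoding, not the one used in the paper.

def count_satisfying(n, clauses):
    """Number r of assignments of n boolean variables that satisfy all clauses."""
    r = 0
    for assignment in product((0, 1), repeat=n):
        if all(any(assignment[v] != neg for v, neg in clause) for clause in clauses):
            r += 1
    return r

# toy formula: (x0 or x1) and (not x0 or x2)
clauses = [[(0, False), (1, False)], [(0, True), (2, False)]]
n = 3
r = count_satisfying(n, clauses)
print(r, r / 2 ** n)   # q = r / 2**n is the (possibly tiny) probability to amplify
```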
at this pointwe emphasize that we do not propose to make a measurement ( not read ) which will be overwhelmingly likely to fail .what we do it is a proposal to use the output of the quantum computer as an input for another device which uses chaotic dynamics in the sequel . the amplification would be not possible if we use the standard model of quantum computations with a unitary evolution .however the idea of our paper is different .we propose to combine quantum computer with a chaotic dynamics amplifier .such a quantum chaos computer is a new model of computations going beyond usual scheme of quantum computation and we demonstrate that the amplification is possible in the polynomial time. one could object that we dont suggest a practical realization of the new model of computations .but at the moment nobody knows of how to make a practically useful implementation of the standard model of quantum computing ever .quantum circuit or quantum turing machine is a mathematical model though convincing one .it seems to us that the quantum chaos computer considered in this paper deserves an investigation and has a potential to be realizable . in this paperwe propose a mathematical model of computations for solving sat problem by refining our previous paper .a possible specific physical implementation of quantum chaos computations with some error correction will be discussed in a separate paper , which is some how related to the recently proposed atomic quantum computer .various aspects of classical and quantum chaos have been the subject of numerious studies , see and ref s therein.the investigation of quantum chaos by using quantum computers has been proposed in . herewe will argue that chaos can play a constructive role in computations .chaotic behaviour in a classical system usually is considered as an exponential sensitivity to initial conditions .it is this sensitivity we would like to use to distinquish between the cases and from the previous section .consider the so called logistic map which is given by the equation .\ ] ] the properties of the map depend on the parameter if we take , for example , then the lyapunov exponent is positive , the trajectory is very sensitive to the initial value and one has the chaotic behaviour .it is important to notice that if the initial value then for all it is known that any classical algorithm can be implemented on quantum computer .our quantum chaos computer will be consisting from two blocks .one block is the ordinary quantum computer performing computations with the output .the second block is a computer performing computations of the _ classical _ logistic map .this two blocks should be connected in such a way that the state first be transformed into the density matrix of the form where and are projectors to the state vectors and this connection is in fact nontrivial and actually it should be considered as the third block .one has to notice that and generate an abelian algebra which can be considered as a classical system . in the second block the density matrix aboveis interpreted as the initial data , and we apply the logistic map as where is the identity matrix and is the z - component of pauli matrix on this expression is different from that of our first paper . 
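as a purely numerical illustration of the amplification idea, the sketch below iterates the logistic map x_{m+1} = a x_m (1 - x_m) with a = 3.71 from a small initial value q = 1/2^n and counts the iterations needed to exceed 1/2; the threshold, the values of n and the function name are assumptions chosen here for illustration.

```python
def steps_to_half(q, a=3.71, max_steps=10_000):
    """Iterate the logistic map from x0 = q and count the steps until x > 1/2."""
    x, steps = q, 0
    while x <= 0.5 and steps < max_steps:
        x = a * x * (1.0 - x)
        steps += 1
    return steps

for n in (5, 10, 20, 30):
    print(n, steps_to_half(1.0 / 2 ** n))   # grows roughly linearly in n
```

in this toy check the number of iterations grows roughly like n/2, well inside the 2n-step bound established by the propositions discussed below.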
to find a proper value we finally measure the value of in the state such that after simple computation we obtain thus the question is whether we can find such a in polynomial steps of the inequality for very small but non - zero here we have to remark that if one has then and we obtain for all if the stochastic dynamics leads to the amplification of the small magnitude in such a way that it can be detected as is explained below . the transition from to nonlinear and can be considered as a classical evolution because our algebra generated by and is abelian.the amplification can be done within atmost 2n steps due to the following propositions . since is of the logistic map with we use the notation in the logistic map for simplicity .for the logistic map with ] let and a set if is then there exists an integer in satisfying proof : suppose that there does not exist such in then for any the inequality implies thus we have from which we get according to the above inequality , we obtain since we have which is definitely less than and it is contradictory to the statement for any thus there exists in satisfying let and be the same in the above proposition .if there exists in such that then proof : since we have which reduces to for in satisfying , it holds it follows that from which implies according to these propositions , it is enough to check the value around the above when is for a large . more generally , when = with some integer it is easily checked that the above two propositions are held and the value becomes over around the above. one can think about various possible implementations of the idea of using chaotic dynamics for computations , which is open and very intersting problem . about this problem , realization of nonlinear quantum gateswill be essential , on which we will discuss in atomic quantum computer in .finally we show in fig.1 how we can easily amplify the small in several steps .the complexity of the quantum algprithm for the sat problem has been considered in where it was shown that one can build the unitary matrix in the polynomial time .we have also to consider the number of steps in the classical algorithm for the logistic map performed on quantum computer .it is the probabilistic part of the construction and one has to compute several times to be able to distingish the cases and thus it concludes that the quantum chaos algorithm can solve the sat problem in polynominal time according to the above propositions . in conclusion , in this paper the quantum chaos algorithm is proposed .it combines the ordinary quantum algorithm with quantum chaotic dynamics amplifier .we argued that such a algorithm can be powerful enough to solve the * np*-complete problems in the polynomial time .our proposal is to show existence of algoritm to solve np - complete problem .the physical implimentation of this algorithm is another question and it will be strongly desirable to be studied .m. ohya and i.v .volovich , quantum computing , np - complete problems and chaotic dynamics , in : t.hida and k.saito , eds ._ quantum information ii _ , ( world sci .2000 ) ; or quant - ph/9912100 . m.ohya and i.v.volovich,quantum computing and chaotic amplifier , to be published in j.opt.b .l.accardi , r.sabbadini : on the ohya masuda quantum sat algorithm , in : proceedings international conference unconventional models of computations , i. antoniou , c.s .calude , m. dinneen ( eds . ) springer 2001
the ordinary approach to quantum algorithms is based on the quantum turing machine or quantum circuits. it is known that this approach is not powerful enough to solve np-complete problems. in this paper we study a new approach to quantum algorithms which combines the ordinary quantum algorithm with a chaotic dynamical system. we consider the satisfiability problem as an example of np-complete problems and argue that the problem can, in principle, be solved in polynomial time by using our new quantum algorithm.

*keywords:* quantum algorithm, np-complete problem, chaotic dynamics
there has been a strong interest toward obtaining highly efficient deep neural network architectures that maintain strong modeling power for different applications such as self - driving cars and smartphone applications where the available computing resources are practically limited to a combination of low - power , embedded gpus and cpus with limited memory and computing power .the optimal brain damage method was one of the first approaches in this area , where synapses were pruned based on their strengths . proposed a network compression framework where vector quantization was leveraged to shrink the storage requirements of deep neural networks .et al . _ utilized pruning , quantization and huffman coding to further reduce the storage requirements of deep neural networks .hashing is another trick utilized by chen _et al . _ to compress the network into a smaller amount of storage space .low rank approximation and sparsity learning are other strategies used to sparsify deep neural networks . recently ,shafiee _ et al . _ tackled this problem in a very different manner by proposing a novel framework for synthesizing highly efficient deep neural networks via the idea of evolutionary synthesis .differing significantly from past attempts at leveraging evolutionary computing methods such as genetic algorithms for creating neural networks , which attempted to create neural networks with high modeling capabilities in a direct but highly computationally expensive manner , the proposed novel _ evolutionary deep intelligence _approach mimics biological evolution mechanisms such as random mutation , natural selection , and heredity to synthesize successive generations of deep neural networks with progressively more efficient network architectures .the architectural traits of ancestor deep neural networks are encoded via probabilistic ` dna ' sequences , with new offspring networks possessing diverse network architectures synthesized stochastically based on the ` dna ' from the ancestor networks and computational environmental factor models , thus mimicking random mutation , heredity , and natural selection .these offspring networks are then trained , much like one would train a newborn , and have more efficient , more diverse network architectures while achieving powerful modeling capabilities .an important aspect of evolutionary deep intelligence that is particular interesting and worth deeper investigation is the genetic encoding scheme used to mimic heredity , which can have a significant impact on the way architectural traits are passed down from generation to generation and thus impact the quality of descendant deep neural networks .a more effective genetic encoding scheme can facilitate for better transfer of important genetic information from ancestor networks to allow for the synthesis of even more efficient and powerful deep neural networks in the next generation .as such , a deeper investigation and exploration into the incorporation of synaptic clustering into the genetic encoding scheme can be potentially fruitful for synthesizing highly efficient deep neural networks that are more geared for improving not only memory and storage requirements , but also be tailored for devices designed for highly parallel computations such as embedded gpus . 
in this study, we introduce a new synaptic cluster - driven genetic encoding scheme for synthesizing highly efficient deep neural networks over successive generations .this is achieved through the introduction of a multi - factor synapse probability model where the synaptic probability is a product of both the probability of synthesis of a particular cluster of synapses and the probability of synthesis of a particular synapse within the synapse cluster .this genetic encoding scheme effectively promotes the formation of synaptic clusters over successive generations while also promoting the formation of highly efficient deep neural networks .the proposed genetic encoding scheme decomposes synaptic probability into a multi - factor probability model , where the architectural traits of a deep neural network are encoded probabilistically as a product of the probability of synthesis of a particular cluster of synapses and the probability of synthesis of a particular synapse within the synapse cluster . *cluster - driven genetic encoding .* let the network architecture of a deep neural network be expressed by , with denoting the set of possible neurons and denoting the set of possible synapses in the network .each neuron is connected via a set of synapses to neuron such that the synaptic connectivity is associated with a denoting its strength .the architectural traits of a deep neural network in generation can be encoded by a conditional probability given its architecture at the previous generation , denoted by , which can be treated as the probabilistic ` dna ' sequence of a deep neural network . without loss of generality , based on the assumption that synaptic connectivity characteristics in an ancestor network are desirable traits to be inherited by descendant networks, one can instead encode the genetic information of a deep neural network by synaptic probability , where encodes the synaptic strength of each synapse . in the proposed genetic encoding scheme ,we wish to take into consideration and incorporate the neurobiological phenomenon of synaptic clustering , where the probability of synaptic co - activation increases for correlated synapses encoding similar information that are close together on the same dendrite .to explore the idea of promoting the formation of synaptic clusters over successive generations while also promoting the formation of highly efficient deep neural networks , the following multi - factor synaptic probability model is introduced : \end{aligned}\ ] ] where the first factor ( first conditional probability ) models the probability of the synthesis of a particular cluster of synapses , , while the second factor models the probability of a particular synapse , , within synaptic cluster .more specifically , the probability represents the likelihood that a particular synaptic cluster , , be synthesized as a part of the network architecture in generation given the synaptic strength in generation . for example , in a deep convolutional neural network , the synaptic cluster can be any subset of synapses such as a kernel or a set of kernels within the deep neural network .the probability represents the likelihood of existence of synapse within the cluster in generation given its synaptic strength in generation . 
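a minimal sketch of how this two-factor model could be realized for a convolutional layer is given below, with each kernel treated as a synaptic cluster; the truncation threshold, the normalizations and the way the environmental factors enter are illustrative assumptions (a simplified stand-in for the realization described next), not the exact formulation used in the study.

```python
import numpy as np

def synthesize_offspring_layer(weights, cluster_env=0.8, synapse_env=0.8,
                               truncate=1e-3, rng=np.random.default_rng(0)):
    """Sample an offspring layer: first decide which kernels (clusters) are
    synthesized, then which synapses survive inside each surviving kernel.
    `weights` is assumed to have shape (out_channels, in_channels, kh, kw)."""
    w = np.abs(weights)
    w[w < truncate] = 0.0                         # truncate very weak synapses
    n_kernels = w.shape[0]
    cluster_strength = w.reshape(n_kernels, -1).sum(axis=1)
    p_cluster = cluster_env * cluster_strength / (cluster_strength.max() + 1e-12)
    mask = np.zeros_like(w)
    for k in range(n_kernels):
        if rng.random() < p_cluster[k]:           # cluster-level synthesis
            p_syn = synapse_env * w[k] / (w[k].max() + 1e-12)
            mask[k] = (rng.random(w[k].shape) < p_syn).astype(w.dtype)
    return weights * mask                          # offspring weights (zeros = pruned)

# toy usage: 8 kernels acting on 3 input channels
offspring = synthesize_offspring_layer(np.random.randn(8, 3, 5, 5))
print((offspring != 0).mean())                     # surviving fraction of synapses
```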
as such , the proposed synaptic probability model not only promotes the persistence of strong synaptic connectivity in offspring deep neural networks over successive generations , but also promotes the persistence of strong synaptic clusters in offspring deep neural networks over successive generations . *cluster - driven evolutionary synthesis .* in the seminal paper on evolutionary deep intelligence by shafiee _ et al . _ , the synthesis probability is composed of the synaptic probability , which mimic heredity , and environmental factor model which mimic natural selection by introducing quantitative environmental conditions that offspring networks must adapt to : in this study , is reformulated in a more general way to enable the incorporation of different quantitative environmental factors over both the synthesis of synaptic clusters as well as each synapse : \vspace{-0.45 cm}\end{aligned}\ ] ] where and represents environmental factors enforced at the cluster and synapse levels , respectively .* realization of cluster - driven genetic encoding . * in this study , a simple realization of the proposed cluster - driven genetic encoding scheme is presented to demonstrate the benefits of the proposed scheme . here , since we wish to promote the persistence of strong synaptic clusters in offspring deep neural networks over successive generations , the probability of the synthesis of a particular cluster of synapses , is modeled as where encodes the truncation of a synaptic weight and is a normalization factor to make a probability distribution , $ ] .the truncation of synaptic weights in the model reduces the influence of very weak synapses within a synaptic cluster on the genetic encoding process .the probability of a particular synapse , , within synaptic cluster , denoted by can be expressed as : where is a layer - wise normalization constant . by incorporating both of the aforementioned probabilities in the proposed scheme ,the relationships amongst synapses as well as their individual synaptic strengths are taken into consideration in the genetic encoding process .evolutionary synthesis of deep neural networks across several generations is performed using the proposed genetic encoding scheme , and their network architectures and accuracies are investigated using three benchmark datasets : mnist , stl-10 and cifar10 .the lenet-5 architecture is selected as the network architecture of the original , first generation ancestor network for mnist and stl-10 , while the alexnet architecture is utilized for the ancestor network for cifar10 , with the first layer modified to utilize kernels instead of kernels given the smaller image size in cifar10 .the environmental factor model being imposed at different generations in this study is designed to form deep neural networks with progressively more efficient network architectures than its ancestor networks while maintaining modeling accuracy .more specifically , and is formulated in this study such that an offspring deep neural network should not have more than 80% of the total number of synapses in its direct ancestor network .furthermore , in this study , each kernel in the deep neural network is considered as a synaptic cluster in the synapse probability model . in other words ,the probability of the synthesis of a particular synaptic cluster ( i.e , ) is modeled as the truncated summation of the weights within a kernel .* results & discussion*. 
in this study , offspring deep neural networks were synthesized in successive generations until the accuracy of the offspring network exceeded 3% , so that we can better study the changes in architectural efficiency in the descendant networks over multiple generations .table [ tab : mnist - stl10res ] shows the architectural efficiency ( defined in this study as the total number of synapses of the original , first - generation ancestor network divided by that of the current synthesized network ) versus the modeling accuracy at several generations for three datasets .as observed in table [ tab : mnist - stl10res ] , the descendant network at the 13th generation for mnist was a staggering -fold more efficient than the original , first - generation ancestor network without exhibiting a significant drop in the test accuracy ( .7% drop ) .this trend was consistent with that observed with the stl-10 results , where the descendant network at the 10th generation was -fold more efficient than the original , first - generation ancestor network without a significant drop in test accuracy ( .2% drop ) .it also worth noting that since the training dataset of the stl-10 dataset is relatively small , the descendant networks at generations 2 to 8 actually achieved higher test accuracies when compared to the original , first - generation ancestor network , which illustrates the generalizability of the descendant networks compared to the original ancestor network as the descendant networks had fewer parameters to train .finally , for the case of cifar10 where a different network architecture was used ( alexnet ) , the descendant network at the 6th generation network was .4-fold more efficient than the original ancestor network with % drop in test accuracy , thus demonstrating the applicability of the proposed scheme for different network architectures ..cluster efficiency of the convolutional layers ( layers 1 - 3 ) and fully connected layer ( layer 4 ) at first and the last reported generations of deep neural networks for mnist and stl-10 .columns ` e ' show overall cluster efficiency for synthesized deep neural networks . [cols="<,^,^,^,^,^,^,<,^,^,^,^,^,^ " , ] * embedded gpu ramifications*. table [ tab : numkernelres ] and [ tab : cifar ] shows the cluster efficiency per layer of the synthesized deep neural networks in the last generations , where cluster efficiency is defined in this study as the total number of kernels in a layer of the original , first - generation ancestor network divided by that of the current synthesized network .it can be observed that for mnist , the cluster efficiency of last - generation descendant network is .7x , which may result in a near 9.7-fold potential speed - up in running time on embedded gpus by reducing the number of arithmetic operations by .7-fold compared to the first - generation ancestor network , though computational overhead in other layers such as relu may lead to a reduction in actual speed - up . 
the potential speed - up from the last - generation descendant network for stl-10 is lower compared to mnist dataset , with the reported cluster efficiency in last - generation descendant network .finally , the cluster efficiency for the last generation descendant network for cifar10 is .8x , as shown in table [ tab : cifar ] .these results demonstrate that not only can the proposed genetic encoding scheme promotes the synthesis of deep neural networks that are highly efficient yet maintains modeling accuracy , but also promotes the formation of highly sparse synaptic clusters that make them highly tailored for devices designed for highly parallel computations such as embedded gpus . this research has been supported by canada research chairs programs , natural sciences and engineering research council of canada ( nserc ) , and the ministry of research and innovation of ontario .the authors also thank nvidia for the gpu hardware used in this study through the nvidia hardware grant program .d. white and p. ligomenides , `` gannet : a genetic algorithm for optimizing topology and weights in neural network design , '' in _ international workshop on artificial neural networks_.1em plus 0.5em minus 0.4emspringer , 1993 , pp . 322327 .o. welzel , c. h. tischbirek , j. jung , e. m. kohler , a. svetlitchny , a. w. henkel , j. kornhuber , and t. w. groemer , `` synapse clusters are preferentially formed by synapses with large recycling pool sizes , '' _ plos one _ , vol . 5 , no . 10 , p. e13514 , 2010 .g. kastellakis , d. j. cai , s. c. mednick , a. j. silva , and p. poirazi , `` synaptic clustering within dendrites : an emerging theory of memory formation , '' _ progress in neurobiology _ , vol .1935 , 2015 .
there has been significant recent interest towards achieving highly efficient deep neural network architectures . a promising paradigm for achieving this is the concept of _ evolutionary deep intelligence _ , which attempts to mimic biological evolution processes to synthesize highly - efficient deep neural networks over successive generations . an important aspect of evolutionary deep intelligence is the genetic encoding scheme used to mimic heredity , which can have a significant impact on the quality of offspring deep neural networks . motivated by the neurobiological phenomenon of synaptic clustering , we introduce a new genetic encoding scheme where synaptic probability is driven towards the formation of a highly sparse set of synaptic clusters . experimental results for the task of image classification demonstrated that the synthesized offspring networks using this synaptic cluster - driven genetic encoding scheme can achieve state - of - the - art performance while having network architectures that are not only significantly more efficient ( with a -fold decrease in synapses for mnist ) compared to the original ancestor network , but also tailored for gpu - accelerated machine learning applications .
4ward is the proposed architecture and design for the future internet. the network of information (netinf) is one of the components of the 4ward project. the cornerstone of this architecture is that information takes the prime position, superseding the node-centric approach of the current internet. the genesis of netinf was influenced by existing major technologies while going beyond them. the features which netinf exhibits are a melange of existing and innovative solutions. mobility is one of the scenarios considered in netinf. late locator construction (llc) is a proposal to handle mobility and multihoming issues in netinf. it implements the locator/identifier separation idea. whenever there is a mobility or rehoming event, there is no interruption in the ongoing tcp session between nodes. the notion behind llc is to use a global locator (gl) for routing between the core network (cn) and the edge networks (ens). ipv4/ipv6 can be considered as the cn, and nodes, mobile or stationary, forming a topology at the edge of the cn, can be considered as ens. the gl is built inside the locator construction system (lcs), which is embedded inside the core network. each edge network (en) and the hosts attached to these ens have attachment registers (ar) within the lcs. each ar has the ids of itself and of its neighbors. the overall goal of llc is to minimize the update signaling to the locator construction system (lcs) needed to update the gl, in order to deal with the scalability problem within the core network. it has also been proposed that the core network scales well, as core and edge networks are responsible for their own routing. the locator/identifier separation protocol (lisp) is a network-based approach for locator/identifier separation. it focuses on limiting the size of routing tables and improving scalability and the routing system. lisp mobile node (mn) is an extension of classic lisp for mobile nodes. it has multiple design goals, including a wide range of communication possibilities in different mobility cases along with multihoming in the mn, as well as allowing the mn to act as a server. the lisp mn architecture has some lisp features together with additional characteristics to support mobility and rehoming events. what has been envisaged here for netinf is to provide seamless connectivity between mobile nodes even if there is simultaneous roaming. embedding lisp mn features in a netinf mn can then improve mobility management in llc for the netinf architecture. although llc is a good proposal, it nevertheless has some shortcomings in terms of properly addressing both scalability and mobility issues. lisp mn inherits classic lisp features but lacks complete compatibility to work with llc. our proposal consists in a new approach to deal with these issues. the prime goal is mobility management in netinf. a netinf mn should bear the characteristics and features of the classic netinf node architecture together with features introduced by the lisp mn architecture. the design goal of such a node is to deal with mobility in a highly dynamic environment. fig. 1 presents a high-level overview of the netinf mn architecture. at the bottom, both physical and network layers provide services to the transport layer placed above. within the transport layer is located the transport control engine (tce). in netinf, the tce is responsible for the coordination of the protocols used for accessing netinf objects.
in this design, both the inner locator construction tunnel router (ilctr) and the outer locator construction tunnel router (olctr) include their functionalities in the tce. the goal of these two routers is to operate in situations where non-netinf sites are communicating. the operation of these routers will be explained in detail in future work. the virtual node layer within the tce is another feature introduced for the mobility management of information. at the top, the api provides services to users. this api gets services from the transport layer below. it should be noted that netinf and the nodes are interdependent. nodes depend on netinf for network management, while netinf depends on nodes in scenarios where it has to manage and propagate the resources inside the network. in order to understand the basic functionalities of our proposed architecture, consider two mobility scenarios. in fig. 2, mn1 roams into a new edge network en3 during an ongoing session with a stationary node mn2 in edge network en2. mn1 first contacts the local server ls in en3 to register its new id allocated by dhcp. it also queries the local cache mechanism to find any trace of mn2 activities in en3. if it is successful in finding one, it can use that cached locator and resume its session. otherwise, it contacts the core network. fig. 3 presents a simultaneous rehoming event. two mobile nodes mn1 and mn2 are communicating in their respective edge networks en1 and en2. a mobility event occurs when both mobile nodes move to a new edge network en3 at the same time. in this case, it would be more efficient if mobile nodes mn1 and mn2 could access a local server in en3 and register their ids to continue their session, instead of connecting to the core network. as ids are global and unique, no further name resolution is then required. however, a lookup request must be sent to the local server when both nodes are registered, and each node should set a ttl. if the nodes find each other again before their ttl expires, the session is re-established (in case of interruption) or continued. otherwise, both should contact the core network. the main problems faced in the network because of mobility are the unpredictable motion of nodes and the unpredictable availability of nodes, i.e., nodes continuously joining and leaving the network. since information takes the central position within the network, the mobility management of the information becomes a major concern. one solution for the management of information mobility is the use of virtual mobile nodes. these virtual nodes can be thought of as programs running on real nodes.
whenever there is a mobility or rehoming event, the node which departs from the local network hands over its connectivity to one of the virtual nodes so that the ongoing session is carried on smoothly. this is made possible in our proposed netinf mn architecture by the introduction of a new layer, the virtual node layer, which provides virtual mobile node services in coordination with the real node. any mobile node can act as a temporary virtual mobile node and provide services to the local mobile nodes. for example, in fig. 2, en1 hosts two nodes, namely mn1 and mn3. these two nodes have the ability to support the virtual mobile node features introduced in their respective layered architectures. let us assume that mn1 is in session with mn2 in en2. when mn1 moves from en1 to en3, it initiates the algorithm to make use of the embedded virtual layer. it contacts mn3 in en1 and establishes a local connection, informing it about its new destination (en3). mn3 now acts as a virtual mn1 and communicates with mn2. this continues until mn1 reaches en3. when this happens, mn1 gets all the updates from mn3 and reconnects to mn2. there are still some open issues to be addressed in the proposed architecture for mobile entities in netinf. so far, the work is midway. ilctr and olctr services are yet to be finalised. besides providing services for non-netinf sites, they can also work as storage spaces in edge networks in case of disconnection from the core network. different parameters have to be considered to evaluate the proposal. as the ilctr encapsulates packets before sending them to non-netinf sites, it remains to be determined how the maximum transmission unit (mtu) can be maximized. a new efficient protocol shall be proposed to deal with the mobility cases presented above. at present, we are in the process of evaluating our architecture through extensive simulations and real mobility traces. finally, we plan to address the multihoming issue along with the security issue during mobility.
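the virtual-node handover described above can be sketched as a small state machine; the class and method names in the python sketch below are purely illustrative and do not correspond to any actual netinf implementation or api.

# illustrative sketch of the virtual-node handover described above:
# a departing node delegates its session to a peer acting as a virtual node,
# then resynchronizes once it reaches the new edge network.
# all names here are hypothetical; this is not an actual netinf api.

class MobileNode:
    def __init__(self, name, edge_network):
        self.name = name
        self.edge_network = edge_network
        self.buffered_updates = []
        self.acting_for = None

    def delegate_to(self, peer, destination):
        # departing node informs a peer in the current edge network,
        # which then acts as its virtual counterpart until arrival at `destination`
        peer.acting_for = self.name
        peer.expected_destination = destination
        return peer

    def resync_from(self, virtual_peer):
        # on arrival, pull whatever the virtual node collected and resume the session
        self.buffered_updates.extend(virtual_peer.buffered_updates)
        virtual_peer.acting_for = None

mn1, mn3 = MobileNode("mn1", "en1"), MobileNode("mn3", "en1")
virtual = mn1.delegate_to(mn3, destination="en3")   # mn3 now acts as virtual mn1
virtual.buffered_updates.append("data from mn2")    # session with mn2 continues
mn1.edge_network = "en3"                            # mn1 arrives in en3
mn1.resync_from(virtual)                            # mn1 reconnects to mn2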
in this paper , we propose an architecture for network of information mobile node ( netinf mn ) . it bears characteristics and features of basic netinf node architecture with features introduced in the lisp mn architecture . we also introduce a virtual node layer for mobility management in the network of information . therefore , by adopting this architecture no major changes in the contemporary network topologies is required . thus , making our approach more practical .
in a dynamical system driven by thermal fluctuations the effective energy as a function of conformation is related to the probability that the conformation is observed by the boltzmann formula , where is the boltzmann constant and is the temperature .the conformation of a simple system may be specified by a small number of variables . however , in studies of the folding of bio - polymers the conformational space of the system has many degrees of freedom . in some cases ,such systems can be described in terms of a single reaction coordinate , , and the dynamics of the system can be modeled by diffusion in this 1d space under the influence of an effective energy . in numerical simulationsthe reaction coordinate may be the radius of gyration of the structure , the fraction of native contacts , or another measure of the level of compaction or organization of the molecule . in single molecule manipulationexperiments the end - to - end extension of the molecule is typically used as a reaction coordinate .the energy as a function of the reaction coordinate follows from eq .[ eq : boltzmann ] as the arbitrary constant is included because the energy of a system is only defined up to a additive constant .although this formula can be used , in principle , to determine the energy surface from the probability density function , this is only practical when the energy varies in a range which is narrow compared with .the exponential dependence of the probability density on means that states with relative energy that is large compared with will be impossible to sample in a finite time .one solution to this problem is to apply an external force field to the system which tends to bias it towards the regions of the reaction coordinate that would otherwise be poorly sampled .often , this takes the form of a harmonic constraint , which adds an additional term to the energy , where is the effective stiffness and is the origin of the constraint . by selecting an appropriate value of and varying ,the system can be forced to visit various regions of the reaction coordinate , allowing more uniform convergence of statistics .this technique , often referred to as umbrella sampling , is widely used in simulations , and has been applied to single molecule experiments .we can still apply eq .[ eq : energy ] to the system with a specific configuration of the harmonic constraint , but we will obtain a biased energy which is the sum of the intrinsic energy and the energy of the constraint . to find the unbiased energy , we subtract the known constraint energy , and obtain for each position of the constraint we obtain a measurement of the energy surface over the region visited by the system . each local energy surface contains an independent constant .if we wish to find the global energy surface , defined over the entire domain of , we need to choose the constants and combine the local energy landscapes in a self - consistent manner .if there is substantial overlap between the domains of the local landscapes , the constants can be determined by requiring that the energy surfaces corresponding to different constraints are consistent in the overlap regions .the weighted histogram analysis method ( wham ) has been formulated to reconstruct the energy surface from monte carlo or molecular dynamics simulations with arbitrary biasing potentials .the method provides an optimal estimate for the unbiased probability density , where is the probability density sampled in biased simulation and the summation is over all simulations . 
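before turning to the reweighting itself, the single-constraint unbiasing step described above (boltzmann inversion of a histogram followed by subtraction of the known constraint energy) can be made concrete with the following python sketch; the thermal energy, stiffness and histogram used here are placeholder values, not data from any simulation or experiment.

import numpy as np

kT = 4.11        # thermal energy at room temperature, pN*nm (placeholder units)
alpha = 0.3      # constraint stiffness, pN/nm (placeholder)

def local_energy(x_centers, counts, x_origin):
    # invert the boltzmann relation on one histogram, then remove the known bias
    x_centers = np.asarray(x_centers, dtype=float)
    counts = np.asarray(counts, dtype=float)
    e_unbiased = np.full_like(counts, np.nan)
    mask = counts > 0                               # empty bins carry no information
    e_biased = -kT * np.log(counts[mask])           # biased energy, up to a constant
    e_unbiased[mask] = e_biased - 0.5 * alpha * (x_centers[mask] - x_origin) ** 2
    return e_unbiased                               # still contains an arbitrary constant

# fake histogram for illustration: samples confined near the constraint origin
x = np.linspace(0.0, 50.0, 51)                      # reaction coordinate, nm
h = np.round(1e4 * np.exp(-0.5 * alpha * (x - 20.0) ** 2 / kT))
print(local_energy(x, h, x_origin=20.0)[15:26])     # roughly flat for this fake data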
in the case of a harmonic constraint centered at ,the weights are given by } , \label{eq : wham1}\ ] ] where is the total number of measurements in simulation .the constants are defined implicitly by a system of nonlinear equations , } , \label{eq : wham2}\end{aligned}\ ] ] where the histogram count is the number of measurements between and in system , and is related to the probability by . in the following section ,we describe another method of obtaining the global energy surface which we call differential energy surface analysis ( desa ) . in desa, we consider the slope of the energy landscape , rather than the energy itself .differentiating eq .[ eq : constrainedenergy ] with respect to we obtain - \alpha ( x - x_j ) .\label{eq : constrainedde}\ ] ] an important feature of this equation is that the constants are eliminated , so that it is not necessary to find a self - consistent solution to obtain the global function . at any given point along the landscape, can be obtained by averaging obtained from the system at various constraint origins .in order to define the method of differential energy landscape analysis , we assume a thermally driven system with one reaction coordinate which is characterized by an energy function .we assume that the dynamics of the system are measured in the presence of a harmonic constraint of stiffness for distinct constraint origins . for each ,the time series of is used to compile a histogram containing total samples .we assume that the histogram binning is consistent for all , and that the values of and are chosen so that there is significant overlap between the histograms .the slope of the energy landscape at position is given by where the summation is over the constraint origins , , and is defined by eq .[ eq : constrainedde ] . interpreting this formula ,the value of at position is a weighted average of found from the systems with constraint origins . using , we can express eq .[ eq : desadef ] entirely in terms of histogram counts , as (x_i)}{\sum_j h_j(x_i)}. \label{eq : desarewrite}\end{aligned}\ ] ] this formula has been used to reconstruct energy landscapes of molecular dynamics simulations , and experimental data .we will show below that eq .[ eq : desarewrite ] gives an optimal estimation of .when determining the mean value of a gaussian distributed variable from uncorrelated data points which have differing uncertainty , the maximum likelihood solution is where is the standard deviation of the statistical ensemble from which is taken , is the mean of and is the standard deviation of . in order to show that eq .[ eq : desadef ] is a maximum likelihood estimate of we must show that the choice meets the criteria set out in eq .[ eq : maxl ] . 
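before completing the maximum likelihood argument, it is worth noting that the histogram form of the desa estimator above reduces to a few lines of code; the sketch below implements the count-weighted average of centred finite differences, with placeholder inputs chosen so that the expected slope is zero.

import numpy as np

# sketch of the desa slope estimator described above: for each bin x_i, average
# the per-constraint estimates
#   dE/dx|_j(x_i) = -kT*[ln h_j(x_{i+1}) - ln h_j(x_{i-1})]/(2*dx) - alpha*(x_i - x_j)
# weighted by the bin counts h_j(x_i).  all inputs below are placeholders.

def desa_slope(x, histograms, origins, alpha, kT):
    dx = x[1] - x[0]
    num, den = np.zeros_like(x), np.zeros_like(x)
    for h, xj in zip(histograms, origins):
        h = np.asarray(h, dtype=float)
        for i in range(1, len(x) - 1):
            if h[i] == 0 or h[i - 1] == 0 or h[i + 1] == 0:
                continue                      # undefined finite difference, zero weight
            slope_j = -kT * (np.log(h[i + 1]) - np.log(h[i - 1])) / (2 * dx) \
                      - alpha * (x[i] - xj)
            num[i] += h[i] * slope_j
            den[i] += h[i]
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(den > 0, num / den, np.nan)

# toy usage: two overlapping histograms sampled from a flat landscape under bias
kT, alpha = 4.11, 0.3                        # pN*nm and pN/nm, placeholder values
x = np.linspace(0.0, 50.0, 101)
origins = [15.0, 25.0]
hists = [1e4 * np.exp(-0.5 * alpha * (x - xj) ** 2 / kT) for xj in origins]
print(desa_slope(x, hists, origins, alpha, kT)[40:45])   # ~0 for a flat landscape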
starting with eq .[ eq : constrainedde ] , the evaluation of will involve a finite difference of the natural logarithm of , , \label{eq : desauncertain}\end{aligned}\ ] ] where represents the statistical uncertainty in , and .note that any terms with are suppressed in eq .[ eq : desadef ] , but we also must suppress any terms where or is zero , since in this case the derivative is undefined .we can re - express eq .[ eq : desauncertain ] as \nonumber\\ & = & \frac{{\ensuremath{k_\mathrm{b}}}t}{\delta x}\left[\rule{0pt}{15pt}\ln \left(h_{j}(x_{i+1})\right ) - \ln\left(h_{j}(x_{i-1})\right ) \right.\nonumber \\ & & \ \ + \ln\left(1 \pm \frac{\delta h_{j}(x_{i+1})}{h_{j}(x_{i+1})}\right)\nonumber\\ & & - \left .\ln \left(1 \pm \frac{\delta h_{j}(x_{i-1})}{h_{j}(x_{i-1})}\right)\right ] , \label{eq : desauncertain2}\end{aligned}\ ] ] so that the uncertainties in and produce additive uncertainties in . assuming the uncertainty in is statistical , the uncertainty terms can be simplified using , so that where the last step is an expansion of the expression to first order .since and are uncorrelated , the errors arising from these terms add in quadrature . in the limit that is small compared with any important features of the energy landscape we can neglect the difference between and , and replace both by . using eq .[ eq : logsimplify ] we can then approximate the uncertainty in as the statistical weight required for maximum likelihood is therefore since an overall multiplicative factor will cancel out in eq .[ eq : maxl ] and not affect the calculation of the mean value , eq .[ eq : desadef ] is equivalent to the maximum likelihood estimation of and is an optimal estimation ., where energy is measured in pn m and distance is measured in m .( a ) system i with and wells centered at ( , ) and ( , ) with width ( .02 ) .contours are spaced by 0.0075 .( b ) system ii with .the first well has center ( , ) with width ( , ) and the second well has center ( , ) with width ( , ) .contours are spaced by 0.01 ., width=264 ] coordinate for system i. the coordinate of system i fluctuations around zero ( data not shown ) .( b ) coordinate for system ii .( c ) coordinate for system ii.,width=264 ] as a function of evaluated using eq .[ eq : chi2 ] for system i ( a ) and system ii ( b).,width=264 ] is plotted for system i ( a ) and system ii ( b ) . 
( figure caption: the energy for each division is compared with the energy of division . ) ( figure caption: the function is plotted using a short dashed line and the energy obtained by wham is plotted using a long dashed line; the solid line is the energy of the system as a function of , assuming that the system remains in thermal equilibrium with respect to . data for system i is shown in (a) and data for system ii is shown in (b). ) recent work has provided criteria for error estimation in free energy calculations based on the weighted histogram analysis method. the desa result is obtained by straightforward averaging of estimates obtained with different constraint origins. use of the maximum likelihood estimation assures that the optimal value of is produced, and straightforward error propagation can be used to obtain the uncertainty in the values of obtained. however, when employing umbrella sampling, it is necessary to assume that the histograms obtained for different constraint origins overlap and that the data acquired with different constraint origins are sampling the same energy surface. one potential pitfall of umbrella sampling, whether wham or desa is used for analysis of the data, is that we can obtain a smooth measured energy surface even if the energy is not a single-valued function of the reaction coordinate. here we introduce two criteria that can be applied in order to detect inconsistencies in values obtained from different constraint origins. we will later show that these criteria give a warning when the reconstructed landscape is not accurate. the first method involves comparison of the biased energy surfaces obtained from different constraint origins. subtracting two biased energies, we obtain e^{b}_{k}(x) - e^{b}_{j}(x) = \left[ e(x) + \frac{\alpha}{2}(x - x_{k})^{2} \right] - \left[ e(x) + \frac{\alpha}{2}(x - x_{j})^{2} \right] = x\,\alpha(x_{j} - x_{k}) + \frac{\alpha}{2}\left(x_{k}^{2} - x_{j}^{2}\right) (eq. [ eq : diag1 ]), where e^{b}_{j}(x) is the biased energy surface measured with constraint origin x_{j} and e(x) is the unbiased energy of the system. the cancellation of e(x) leaves terms which depend only on the biasing potential. the constant term is not of interest, since the energy itself is only defined up to an additive constant. however, we expect the energy difference to manifest a straight line with slope determined by the constraint strength \alpha and the relative constraint displacement x_{j} - x_{k}. if a different effective energy surface is in effect after the constraint origin has been moved, e(x) will fail to cancel in eq. [ eq : diag1 ] and anomalous features will appear in the difference curve. we can also test the self-consistency of the desa analysis by determining if values obtained from individual constraint origins deviate from the mean value in a manner that is consistent with their statistical uncertainty. for each histogram bin corresponding to position , we evaluate , where is the uncertainty in the value of obtained from the constraint position ( eq .
[ eq : sigmaj ] ) .if the deviation of the individual values of from the mean are consistent with the statistical uncertainty , the value of should be of order 1 .a value significantly larger than 1 indicates that systematic errors are present in the values .25 nm / s.,width=264 ] curves obtained from unfolding and folding of the hairpin are compared.,width=264 ] as a function of extension , for the data shown in fig .[ fig : timeseries](a ) .( b ) the difference in biased energy for adjacent intervals where the time series is divided into six intervals rather than twenty .the differences , and are shown.,width=264 ] ( a ) .( b ) comparison of obtained by desa and wham from the time series in fig .[ fig : timeseries](a).,width=264 ] wham produces an optimal estimation of which is closely related to the energy and desa produces an optimal estimation of . for a well - behaved system with good statistical convergencewe expect both methods to converge to the underlying energy surface .however , in simulations and in experiments it is often a challenge to obtain adequate statistics , or obtain a reaction coordinate which unambiguously specifies the state of the system . in section [ sec : experiment ] below we will apply desa and wham to an experimental system and compare the results . in this sectionwe will apply desa and wham to two simulated systems in order to evaluate the accuracy with which the known energy surface is obtained and illustrate the use of the diagnostic criteria that were introduced in section [ sec : diagnostics ] .both simulated systems involved diffusion on a 2d energy surface with two stable states and in both cases we assume that only one coordinate ( ) is measured and that the biasing potential is a function of only .contour maps for the potential functions for the two systems are shown in fig .[ fig : mcontour ] . for both cases , the landscape consists of two overlapping potential wells with 2-dimensional lorentzian profile .the form of the potential is where specifies the depth of each well , ( , ) specifies the center and ( , ) specifies the width of each well in the and direction . in systemi , illustrated by fig .[ fig : mcontour](a ) , there are two symmetrical potential wells lying on the axis ( parameters given in the fig .[ fig : mcontour ] caption ) . for this potentialthere is only one stable value of for each . in system ii , illustrated by fig .[ fig : mcontour](b ) the two potential wells have different width and are displaced in as well as . in systemii there is more than one stable value of for a given value of and the measurement of is not sufficient to determine the state of the system .the transition between the two stable states of the system involves a change in the unmeasured variable .both simulated systems could serve as a model for a single - molecule unfolding experiment ( such as the one described in section [ sec : experiment ] ) where a quantity such as the end - to - end distance of the structure is under experimental control but other undetectable degrees of freedom are present .in system i , the measured variable is a good reaction coordinate and in the system ii it is not .the energy surfaces are used as the basis of a strongly damped langevin simulation with thermal energy and drag coefficient . 
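a strongly damped langevin simulation of this type can be written compactly; the sketch below uses the euler-maruyama update for overdamped dynamics on a two-well lorentzian landscape with a harmonic bias on x, with parameter values that are illustrative placeholders rather than the exact values used in this work.

import numpy as np

kT, gamma, dt = 4.11, 1.0, 1.0e-2     # thermal energy, drag, time step (placeholder units)
alpha = 0.3                            # bias stiffness, applied to x only
wells = [(20.0, (-9.0, -11.0), (4.0, 4.0)),   # (depth, centre, widths) of each lorentzian well
         (20.0, ( 9.0,  11.0), (4.0, 4.0))]

def grad_U(x, y, x0):
    # U = sum_i -A_i/d_i + (alpha/2)(x - x0)^2, with d_i = 1 + ((x-cx)/wx)^2 + ((y-cy)/wy)^2
    gx, gy = alpha * (x - x0), 0.0
    for A, (cx, cy), (wx, wy) in wells:
        d = 1.0 + ((x - cx) / wx) ** 2 + ((y - cy) / wy) ** 2
        gx += 2.0 * A * (x - cx) / (wx ** 2 * d ** 2)
        gy += 2.0 * A * (y - cy) / (wy ** 2 * d ** 2)
    return gx, gy

rng = np.random.default_rng(0)
x, y = -9.0, -11.0                     # start in the first well
noise = np.sqrt(2.0 * kT * dt / gamma)
trajectory = []
n_steps = 200_000
for step in range(n_steps):
    x0 = -30.0 + 60.0 * step / n_steps            # sweep the constraint origin
    gx, gy = grad_U(x, y, x0)
    x += -gx * dt / gamma + noise * rng.standard_normal()
    y += -gy * dt / gamma + noise * rng.standard_normal()
    if step % 400 == 0:
        trajectory.append((x0, x))                # tabulate every 400 steps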
in the simulation 400 time steps of taken between each tabulated sample point .these parameters were chosen so that the energy and time scales of the simulated systems roughly correspond to those of the experimental system which we describe in section [ sec : experiment ] . as a result ,simulated and experimental runs of equal time result in comparable statistical sampling .both simulation systems are run with a harmonic biasing potential which is continuously swept from negative to positive to sample the transition . in systemi the constraint stiffness is 150 pn / nm and the constraint origin sweeps from -0.03 m to 0.03 m over 4 s and in system ii the stiffness is 100 pn / nm and the origin sweeps the same range of position .the trajectories obtained for the two versions of the simulation are shown in fig .[ fig : strajectory ] . in systemi , the biasing potential causes the system to be swept through the transition state with good sampling over the domain of the reaction coordinate .( in the course of the transition , fluctuates around zero , data not shown . ) in system ii , the biasing potential also produces relatively uniform sampling of , although makes several abrupt transition between the basins of attraction at ( -0.009 ,-0.011 m ) and ( 0.009 ,0.011 m ) .the potential well at positive is more extended in than the one at negative , resulting in larger fluctuations in when is positive .the record of vs. time of the trajectories is divided into 20 equal time intervals and the histogram of position is calculated for each interval .data for each division is analyzed using the constant constraint stiffness and the average position of the constraint origin . in this simulation, it would be more natural to move the constraint origin in discrete steps and hold it constant as each histogram is collected .we move it continuously to more closely model the experimental procedure used in the experiment described in section [ sec : experiment ] . in order to apply desa or wham the constraintmust be moved sufficiently slowly that the system remains in quasi - equilibrium with respect to as the constraint origin moves .we have chosen the simulation parameters to ensure that this condition is satisfied for both systems . in fig .[ fig : sdedx ] the reconstruction of from the simulated systems is shown . in fig .[ fig : sdedx](a)-(c ) eq . [ eq : constrainedde ] is used to obtain an estimation of from three representative divisions of the system i trajectory .the three curves cover overlapping ranges of the reaction coordinate .the curve obtained from each division exhibits good statistical convergence in center of its domain and poorer statistical convergence at the margins . within statistical uncertainty ,the curves are consistent with the potential used in the simulation and with each other . when the 20 curves obtained from the 20 divisions are combined using eq .[ eq : desarewrite ] good agreement is found between the reconstructed shown in fig .[ fig : sdedx](d ) and the energy surface used in the simulation . when the same analysis is applied to system ii the desa methodprovides a visual indication that the dynamics of the system are not described by an energy which can be expressed as a function of a single reaction coordinate . 
in fig .[ fig : strajectory](e)-(g ) estimates from three divisions of the trajectory are shown .they are compared with derivative of the system energy with respect to , assuming that the system remains in equilibrium with respect to ( solid curve ) , assuming that the system is confined to negative ( dashed curve ) and assuming that the system is confined to positive ( short dashed curve ) .the estimate of obtained from division 5 conforms to the potential for negative while the estimate from divisions 10 and 15 conform to the potential for positive . the reconstructed curve shown in fig . [fig : sdedx](h ) gives a smooth curve despite the fact that it is obtained from averaging of inconsistent functions .we next apply the desa diagnostics introduced in section [ sec : diagnostics ] . in fig .[ fig : schi2 ] the reduced test defined in eq .[ eq : chi2 ] is applied to both simulated systems .the purpose of this test is to determine if the values of obtained from the various divisions of the data are consistent with each other , taking into account the statistical uncertainties of the various estimates . fig .[ fig : schi2](a ) shows that for system i the reduced is of order 1 over the full range of .this indicates that the functions obtained for the different constraint origins are mutually consistent . however , fig .[ fig : schi2](b ) shows that for system ii the value of the reduced function increases to 15 in the vicinity of the apparent transition state .this confirms our observation that in fig .[ fig : sdedx](e)-(g ) the functions deviate from each other by an amount exceeding the statistical uncertainty .this alerts us that data obtained from different biasing potentials are not consistent and is not a well defined function of , despite the fact that the curve obtained is smooth and appears plausible .next we consider the diagnostic criteria defined in eq .[ eq : diag1 ] , in which we subtract the raw energy surfaces obtained from data taken with different biasing potentials . in fig .[ fig : sslope](a ) , eq .[ eq : diag1 ] is evaluated for representative divisions of the data from system i. linear curves are found with slopes that are consistent with the constraint stiffness used in the simulation . in fig .[ fig : sslope](b ) the same measure is applied to representative divisions from system ii .inconsistent slopes , or non - linear curves are observed .this indicates that the intrinsic energy surface of the system failed to cancel when the energy surfaces of different divisions were subtracted . as in the case of fig .[ fig : schi2 ] , it is evident that the data produced by the simulation of system ii are not self - consistent . the final question we can address is whether desa or wham are more accurate in determining the relative energy of the initial and final states for the two systems . in fig .[ fig : senergy ] we compare the energy surfaces obtained by direct integration of the desa curve , and using wham . in the case of the well - behaved system i ( fig .[ fig : senergy](a ) ) , both desa and wham produce energy curves which match the potential used in the simulation . in the case of systemii ( fig .[ fig : senergy](b ) ) we find that desa and wham produce energy curves which are effectively identical .both curves fail to agree with the actual energy difference between the initial and final state . in this example, the energy surface was known _ a priori _ making direct comparison possible. however , the diagnostic criteria illustrated in fig . 
[ fig : schi2 ] and [ fig : sslope ] alerted us to problems in the reconstruction of the energy surface and did not require knowledge of the correct energy surface .here we apply desa and wham to a single molecule experiment in which a dna hairpin is unfolded under a harmonic constraint applied by an optical trap .the hairpin has sequence ccgcgagttgattcgccatacacctgctaatcccggtcgcttttgcgaccgggattagcaggtgtatggcgaatcaactcgcgg , which folds into a 40 base - pair stem with a 4-t loop .the hairpin is connected to the boundary of the sample chamber on one side and to a polystyrene micro - sphere on the other via biotin and digoxigenin tagged double - stranded dna linkers .this creates a single - molecule tether which anchors the micro - sphere to the surface .when the optical trap is held at constant position and intensity , the combination of the restoring force imposed on the micro - sphere by the optical trap and the elasticity of the handles produce a harmonic constraint acting on the hairpin with .the position of the sample chamber relative to the trapping beam is controlled by a piezoelectric positioning stage with nanometer resolution , and the origin of the constraint is controlled by varying the position of the sample chamber with respect to the trap center . the optical trap measuresthe instantaneous position of the micro - sphere and the instantaneous force applied to the tether as the constraint origin is swept . by determining the distance between the micro - sphere and the sample chamber boundary and subtracting off the instantaneous extension of the double - stranded dna handles ( estimated using a worm - like chain model of dna elasticity )the extension of the hairpin itself is determined .the apparatus and experimental procedure has been described elsewhere .the measured time series comprised of approximately samples is shown for unfolding of the hairpin in fig .[ fig : timeseries](a ) , and for folding of the hairpin in fig .[ fig : timeseries](b ) . as in the case of the simulated data ,the record of extension vs. time of the experimental system is divided into 20 equal time intervals and the histogram of position is calculated for each division .calibration data is used to calculate the mean stiffness and origin of the constraint for each of the 20 divisions .since the constraint origin moves continuously as data is acquired , it is not a constant within each interval .however , the deviation of the constraint origin from the mean value does not exceed nm in the course of an interval , which implies an error in the constraint force of less than .2 pn .the resulting error in the reconstruction of is negligible . in fig .[ fig : dedxd ] the functions calculated from representative divisions of the data in fig .[ fig : timeseries](a ) are shown in panels ( b ) through ( f ) and the function obtained by averaging all 20 divisions is shown in panel ( a ) .just as in fig .[ fig : sdedx](a)-(c ) , the individual estimates in fig .[ fig : dedxd ] are consistent with each other and with the average function within statistical uncertainty .this justifies the assumption that the experimental system continues to explore the same energy landscape as the constraint origin moves . 
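the handle correction mentioned above can be illustrated with the marko-siggia worm-like chain interpolation formula; the persistence length, contour length and example measurement below are generic placeholders for double-stranded dna, not the calibrated values of this instrument.

import numpy as np

kT = 4.11            # pN*nm at room temperature
P, L = 50.0, 700.0   # persistence length and handle contour length, nm (placeholders)

def wlc_force(z):
    # marko-siggia interpolation formula for the worm-like chain
    s = z / L
    return (kT / P) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)

def handle_extension(force, tol=1e-6):
    # invert the monotonic force-extension relation by bisection on [0, L)
    lo, hi = 0.0, L * (1.0 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if wlc_force(mid) < force:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

total_extension, force = 720.0, 10.0        # example measurement (placeholders), nm and pN
hairpin_extension = total_extension - handle_extension(force)
print(round(handle_extension(force), 1), round(hairpin_extension, 1))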
as in the simulated system ,the umbrella sampling method requires us to assume that the biasing potential is time independent and that the system remains in thermodynamic equilibrium as data is collected .since the constraint origin moves continuously as data is collected this condition is not formally satisfied , and we must insure that the movement of the constraint is sufficiently slow that the system remains in equilibrium to good approximation . the most convincing evidence that this condition is satisfied is that identical energy landscapes are obtained for folding and unfolding of the hairpin , for which the constraint origin moves in opposite directions .the energy landscapes obtained from data in figs . [ fig : timeseries](a ) and [ fig : timeseries](b ) are compared in fig .[ fig : reverse ] . no significant difference is found between landscapes obtained for folding and unfolding of the hairpin .in contrast with the simulations , the effective energy of the hairpin is not known _ a priori _ so the diagnostic criteria introduced in section [ sec : diagnostics ] are of critical importance in establishing the validity of the energy landscape reconstruction . in fig .[ fig : ediagnostic ] we apply the two diagnostic criteria defined in section [ sec : diagnostics ] to the experimental data set . in fig .[ fig : ediagnostic](a ) the measure is plotted on a logarithmic scale .note that for the central region of the reaction coordinate , corresponding to the transition state , the value of is of order unity , which indicates that the curves obtained in the transition state region are consistent within statistical uncertainty .this confirms that a well - defined energy function is being measured . at the extremes ( near extension 0 m and 0.05 m ) the value of is larger , indicating that the curves are inconsistent at large and small extension .the larger values are found in regimes of extension where the hairpin is either fully open or fully folded. when the constraint is positioned to stabilize the hairpin in the fully open or fully closed conformation , the conformational dynamics of the hairpin itself are minimal and the fluctuations in the measured extension are mainly due to thermal fluctuation in the extension of the double - stranded dna handles . at small extensions ,the problem is exacerbated by the fact that the average force is low , resulting in lower effective stiffness of the handles and increased fluctuations .these measurement errors blur the sharp cutoff that would otherwise appear in the probability density of extension as the hairpin approaches the fully - open or fully - folded state and similarly blur the energy function .the function alerts us to the fact that the energy surface is accurately measured in the transition state region , but is affected by systematic errors near the fully folded or fully unfolded state .we also apply the criterion based on eq .[ eq : diag1 ] and show the results in fig . 
[fig : ediagnostic](b ) .the fact that linear curves are obtained when the unbiased energies are subtracted indicate that the intrinsic energy of the hairpin cancels , as expected , and that the effective biasing potential has the expected parabolic shape .this is confirmation that the optical trapping apparatus is applying an accurate biasing potential to the hairpin .based on fig .[ fig : ediagnostic ] we conclude that the energy of the hairpin is a well - defined function of extension and that desa has produced an accurate measurement of the transition state region . in order to verify the desa result , the data shown in fig .[ fig : timeseries ] was also analyzed using wham , as defined by eqs [ eq : wham1 ] and [ eq : wham2 ] . in fig .[ fig : desawham](a ) the energy surface obtained by wham is plotted along with the energy surface obtained from integration of the curve shown in fig .[ fig : dedxd ] . the desa and wham curves are indistinguishable .the overall slope of the energy landscape is reproduced , as well as the ripples that arise from the sequence dependence of the dna hybridization energy .the sequence dependence is more apparent in the plot of , which is shown in fig .[ fig : desawham](b ) . as in the case of the energy , results obtained from desa and whamare indistinguishable .there are systems , such as pseudoknots , g - quadruplex dna and others , which exhibit large irreversible steps when disrupted in single molecule experiments . in such cases, the techniques described here would not be suitable for reconstructing the global energy landscape .the main obstacle is that it is impossible to apply a biasing potential which will stabilize the system in the transition state or states .both wham and desa require that the histograms of the reaction coordinate obtained with different biasing potentials have substantial overlap .nonequilibrium analysis methods have been developed which can determine the energy surface from data taken far from equilibrium .these methods typically require a great deal of experimental data , since they involve measuring the dependence of the disruption force on force loading rate or averaging many trajectories with weights determined by the external work performed . in caseswhere biasing potentials can be used to stabilize a system along the reaction coordinate , desa is an alternative to wham .we have shown that desa and wham produce indistinguishable results when applied to simulated and experimental data . however , desa has the advantage of being computationally simple compared with wham , which requires the self - consistent solution of a system of nonlinear equations ( eq .[ eq : wham2 ] ) .another advantage of desa is that the construction of provides direct visual cues which can be used to confirm that the different biasing potentials are sampling the same energy surface ( see fig .[ fig : dedxd ] ) .in addition , the two diagnostic criteria defined in section [ sec : diagnostics ] provide quantitative measures of the quality of the energy surface measurement .the signatures of an ill - defined energy surface are demonstrated in the analysis of data generated by simulation system ii . 
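in code, the two diagnostic criteria amount to only a few lines; the sketch below evaluates the pairwise difference of biased energies (which should be a straight line in the reaction coordinate) and a reduced-chi-square-like consistency measure across constraint origins. the exact normalization used in this work may differ, so this is only an illustrative version with placeholder inputs.

import numpy as np

kT = 4.11    # pN*nm, placeholder

def biased_energy(counts):
    counts = np.asarray(counts, dtype=float)
    with np.errstate(divide="ignore"):
        return -kT * np.log(counts)             # defined up to an additive constant

def biased_energy_difference(counts_j, counts_k):
    # should be a straight line in x with slope set by alpha and the relative
    # constraint displacement when both constraints sample the same surface
    return biased_energy(counts_j) - biased_energy(counts_k)

def slope_consistency(slopes, sigmas):
    # slopes, sigmas: arrays of shape (n_origins, n_bins); bins not sampled by an
    # origin should be excluded beforehand.  values of order 1 indicate that the
    # per-origin estimates agree within their statistical uncertainty.
    slopes, sigmas = np.asarray(slopes, float), np.asarray(sigmas, float)
    w = 1.0 / sigmas ** 2
    mean = (w * slopes).sum(axis=0) / w.sum(axis=0)
    return ((slopes - mean) ** 2 / sigmas ** 2).sum(axis=0) / (slopes.shape[0] - 1)

# toy usage with two constraint origins and three bins
slopes = np.array([[1.0, 2.0, 3.0], [1.1, 1.9, 3.2]])
sigmas = np.array([[0.1, 0.1, 0.2], [0.1, 0.1, 0.2]])
print(slope_consistency(slopes, sigmas))         # entries of order 1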
finally , using the desa diagnostics, we show that it is possible to apply a precisely controlled biasing potential in an experimental system and obtain highly accurate information about the shape of the energy surface for folding and unfolding .
in experiments and in simulations , the free energy of a state of a system can be determined from the probability that the state is occupied . however , it is often necessary to impose a biasing potential on the system so that high energy states are sampled with sufficient frequency . the unbiased energy is typically obtained from the data using the weighted histogram analysis method ( wham ) . here we present differential energy surface analysis ( desa ) , in which the gradient of the energy surface , , is extracted from data taken with a series of harmonic biasing potentials . it is shown that desa produces a maximum likelihood estimate of the folding landscape gradient . desa is demonstrated by analyzing data from a simulated system as well as data from a single - molecule unfolding experiment in which the end - to - end distance of a dna hairpin is measured . it is shown that the energy surface obtained from desa is indistinguishable from the energy surface obtained when wham is applied to the same data . two criteria are defined which indicate whether the desa results are self - consistent . it is found that these criteria can detect a situation where the energy is not a single - valued function of the measured reaction coordinate . the criteria were found to be satisfied for the experimental data analyzed , confirming that end - to - end distance is a good reaction coordinate for the experimental system . the combination of desa and the optical trap assay in which a structure is disrupted under harmonic constraint facilitates an extremely accurate measurement of the folding energy surface .
correlation of interference over sequential periods of time is an important quantity to study because it affects the correlation of receiver outage , the end - to - end delay , the handoff rate etc .it arises due to correlations in the propagation channel and the scheme . for aloha type of mac ,the propagation conditions become correlated in time when there are correlations in the fading channel and the user mobility . keeping in mind the ongoing standardization activities for the deployment of commercial networks ,the impact of blockage and deployment area on the correlation of interference becomes an attractive topic to study .thus far , the performance analysis of static networks , e.g. neglects the correlation of links that share common obstacles and also assumes infinite deployment area . in , these assumptions are not adopted , however , only the short - term correlation of interference is studied using the model , and the user locations are discrete . in this paper, we consider a continuous and bounded deployment , and we study the temporal correlation of interference for static users , and users with locations over time .the former is useful for studying static networks .the latter can be used to calculate the correlation of interference in highly moving networks and/or the long - term interference correlation in networks with asymptotic independent mobility , e.g. , random walk , brownian motion , constrained mobility with wrap around or bouncing back , , etc . studying interference correlation with uncorrelated mobility will also highlight that a bounded domain and/or a domain with blockage can make the interference pattern correlated too . even though the analysis in the space seems to be an over - simplification , it allows getting useful insights about the correlation of interference at a low complexity .the scenario can also find practical applications , e.g. , in vehicular networks .next , we summarize the most important insights about the system behaviour which , to the best of our knowledge , are new : ( i ) with uncorrelated user mobility , the temporal correlation of interference becomes inversely proportional to the size of the deployment domain when there is no blockage .( ii ) with a finite density of blockage , the correlation coefficient stays positive , even if the deployment area is infinite .this is because blockage introduces correlation in the interference levels generated by different users .( iii ) in the static case , blockage increases the correlation of interference .( iv ) with uncorrelated mobility , there is a critical user - to - blockage density ratio that determines the correlation of interference as compared to the case without blockage . at a high density of blockage, the critical ratio can be expressed in a closed - form .we consider two independent poisson point processes ( ppps ) , one for the users and the other for the blockage , over the line segment ] .assuming common transmit power level for all users , the interference at time slot and location ] is a uniform modeling the location for the -th user .the distribution of is difficult to obtain in terms of simple functions , however the moments of the penetration loss at distance , i.e. 
, between the -th user and the location can be computed as .even though the users are distributed independently of each other , they may be blocked by some common obstacles .the first - order cross - moment of penetration loss for two users depends on the relative locations of w.r.t .when the two links and do not share any obstacles , the penetration losses are uncorrelated , . otherwise , . inwhat follows , we will make use of the to analyze the moments of interference .the of interference at time slots is where , , and are vectors of with elements , , and at time slots , stands for the poisson distribution , and the arguments in the probability distribution functions are omitted for brevity . in order to assess the correlation of interference at time slots we use the pearson correlation coefficient , i.e. , the ratio of the covariance of rvs divided by the product of their standard deviations .we consider static users , and users with infinite velocity . in the former ,the locations of users are fixed but unknown . in the latter , a new realization of usersis drawn in every time slot . in both cases ,the statistics of interference are independent of the time slots we take the measurements and the time - lag . therefore the pearson correlation coefficient can be written as for the static case , we denote the correlation coefficient by . for the mobile case ,we denote it by .the correlation coefficient is location - dependent but we omit the related index for brevity .we will show how to calculate the coefficients at the origin .the expressions at an arbitrary point $ ] can be obtained in a similar manner .the mean of interference is computed after evaluating the first derivative of the at . where ( a ) follows from the fact that the penetration loss depends on the user location , ( b ) uses that the users are indistinct and also averages over the poisson distribution , is the generalized exponential integral , and the transmit power level has been taken equal to .the second moment of interference is where it has been used that , and the term captures the correlation in the interference levels generated by different users the calculation of can be split into two terms , , depending on whether pairs of links share common obstacles or not .the uncorrelated part is equal to , and the correlated part can be written as . in order to calculate ,one has to take care of the piecewise nature of the pathloss model . for a positive , we finally get in equation , the integral , where and has the least contribution of the four terms .it can be computed in terms of the incomplete gamma function only if the constants are equal .this is not true unless , where the integral becomes trivial to solve and equals . for a positive ,the integral decays sharply with .one may avoid numerical integration , and use the laplace method to approximate it instead .due to the lack of space , we give only the second - order approximation for , , where , .even this has sufficient accuracy for our problem . for impenetrable blockage, one has to substitute in equations , . for , equationbecomes indefinite .one should use instead .w.r.t . the user density . minimum penetration loss , pathloss exponent , size of the deployment domain and continuous user activity .,width=336 ] the cross - correlation of interference can be computed from the first - order cross - derivative of the , at . for the static case ,the penetration losses of a single user at different time slots are fully correlated .hence , where is computed as in . 
with infinite velocity ,the locations of a user at different time slots are uncorrelated but the penetration losses may still be correlated . hence , using equation , the first term in equation can also be written as . the correlation coefficients are computed after substituting equations , in equation without blockage , the interference levels generated by different users become uncorrelated , i.e. , . after substituting in equation , and this back in , we get , and [ remark:1 ] a bounded domain introduces correlation in the distance - based propagation pathloss , and makes the interference level correlated in time , even if the user mobility is uncorrelated . for infinite line , . with a finite blockage density [ remark:2 ] for infinite line , .hence , from equation , .thus , the coefficient , unlike the coefficient , is positive even if the deployment area is infinite .[ remark:3 ] starting from equation and using that , one can show that in a static network , blockage increases the correlation of interference , i.e. , .[ remark:4 ] using that the pearson correlation coefficient is at most equal to one , one can show that the first derivative of in equation w.r.t . is positive .also , , and .therefore with uncorrelated mobility , blockage reduces the correlation of interference at low user densities , while the opposite is true at high user densities .there will be a critical user density where .let us denote by the ratio of user density to blockage density .if we expand the moments of interference around , we get , and . after substituting these approximations in equation, the correlation coefficients and around , keeping finite or , can be read as where in the expression of , the contribution of the terms has been omitted from the series expansion of the fraction .this would be a valid approximation for a large .[ remark:5 ] at a high density of blockage , the correlation coefficients increase with the user - to - blockage density ratio . in fig .[ fig : variablelamda ] , we have used equation to compute the correlation coefficients for various user and blockage densities . in the static case ,blockage makes the propagation pathloss of different users correlated resulting in higher correlation coefficients than in the case without blockage , see remark [ remark:3 ] . in the mobile case ,the impact of blockage on the interference correlation depends on the user density , see remark [ remark:4 ] : when the user density is low , the interference level is also low , and it would vary significantly with mobility because of the transitions in the propagation conditions , from to and vice versa .these transitions make the correlation of interference less than in the case without blockage . on the other hand , when the user density is high , the correlation of penetration losses among the user prevails , and mobility does not help much in reducing it .some users will transit from to but at the same time , some others with transit from to .overall , the interference level will not vary much .when , the approximations for a high density of blockage , see equation become valid . for the parameter settings used to generate fig .[ fig : variablelamda ] , , we get after neglecting the contribution of the term . from equationwe get for and after neglecting the contribution of the terms .therefore , for , see fig . [fig : variablelamda ] . 
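the qualitative trends discussed above can also be checked with a simple monte carlo experiment; the sketch below estimates the pearson correlation coefficient of the interference at the centre of a bounded one-dimensional domain, for static users and for users redrawn independently in every slot. the pathloss function min(1, r^-eta) and the per-obstacle penetration factor used here are assumptions of the sketch, not necessarily the exact model of this letter.

import numpy as np

rng = np.random.default_rng(1)
L, lam_u, mu_b = 20.0, 1.0, 0.5       # domain size, user density, blockage density
eta, K, xi = 3.0, 0.1, 0.5            # pathloss exponent, penetration loss, activity probability

def interference(users, obstacles, x0=0.0):
    total = 0.0
    for u in users:
        if rng.random() > xi:                    # user silent in this slot
            continue
        r = max(abs(u - x0), 1e-9)
        n_blk = np.sum((obstacles > min(u, x0)) & (obstacles < max(u, x0)))
        total += min(1.0, r ** -eta) * K ** n_blk
    return total

def temporal_correlation(static_users, runs=10_000):
    i1, i2 = [], []
    for _ in range(runs):
        obstacles = rng.uniform(-L / 2, L / 2, rng.poisson(mu_b * L))
        users = rng.uniform(-L / 2, L / 2, rng.poisson(lam_u * L))
        u2 = users if static_users else rng.uniform(-L / 2, L / 2, rng.poisson(lam_u * L))
        i1.append(interference(users, obstacles))
        i2.append(interference(u2, obstacles))
    return np.corrcoef(i1, i2)[0, 1]

print("static:", temporal_correlation(True), "mobile:", temporal_correlation(False))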
to sum up , for a high density of blockage, the critical user - to - blockage density ratio can be expressed in a closed - form in terms of the size of the deployment area , the channel model and the user activity .when the user density is fixed and finite and the blockage density keeps on increasing , the correlation of penetration losses from different users starts to reduce beyond a certain density of blockage . as a result , the correlation coefficients will reduce too , see fig .[ fig : variablemu ] and remark [ remark:5 ] . in fig .[ fig : variablemu ] , we also see that smaller domains are associated with higher correlation coefficients .this is because a smaller domain results in less randomness in the distance - based propagation pathloss of a user at different time slots . obviously , the impact of distance - based pathloss on the interference is more prominent at low blockage densities . in the static case ,the size of the deployment domain does not impact much the correlation of interference .the curves for different domains in fig .[ fig : variablemu ] practically overlap .w.r.t . the blockage density .the parameter settings are available in the caption of fig .[ fig : variablelamda ] , unless otherwise stated in the legend.,width=336 ] to get a glimpse on the location - dependent properties of interference correlation , we also study it at the boundary , . without blockage , the correlations coefficients are and .[ remark:6 ] the coefficient at the boundary is half the coefficient at the center because at the boundary there is more randomness in the distance - based pathloss . with blockage , the coefficient at the boundary will be marginally higher than the coefficient at the center , because the boundary sees more correlated penetration losses .on the other hand , the coefficient is smaller at the boundary than at the center , see fig .[ fig : border ] .this is because at the boundary , where the level of interference is also less , the randomness in the distance - based propagation pathloss is higher . for increasing density of blockage ,the generated interference is dominated from the users located close to the boundary .therefore the higher randomness of the link gains starts to vanish and the correlation becomes less sensitive to the location , see fig .[ fig : border ] .it can be shown that for a high density of blockage , the coefficient at the boundary can also be approximated by the expression in equation .in this letter , we showed that a bounded domain and/or a domain with blockage can induce temporal correlation of interference even if the user locations are uncorrelated over time . with blockage , the correlation coefficient increases with the density of users .therefore beamforming techniques , which essentially scale down the density of users generating interference , will scale down the temporal correlation of interference too . extending the results of this paper in two - dimensional areas with beamforming and nonuniform distribution of users , e.g. , due to mobilityis a topic for future work .s. krishnan and h.s.dhillon , `` spatio - temporal interference correlation and joint coverage in cellular networks '' , available at http://arxiv.org/pdf/1606.05332.pdf u. schilcher , c. bettstetter and g. brandner , `` temporal correlation of interference in wireless networks with rayleigh block fading '' , _ ieee trans .mobile comput ._ , vol . 11 , pp . 2109 - 2120 , dec . 2012
in this letter , we study the joint impact of user density , blockage density and deployment area on the temporal correlation of interference for static and highly mobile users . we show that even if the user locations become uncorrelated in the limit of , the interference level can still be correlated when the deployment area is bounded and/or there is blockage . in addition , we study how the correlation coefficients of interference scale at a high density of blockage . [ los]line - of - sight [ nlos]non - line - of - sight [ pdf]probability distribution function [ rv]random variable [ mgf]moment generating function [ mac]medium access control [ rwpm]random waypoint mobility [ i.i.d.]independent and identically distributed [ mmw]millimeter - wave wireless [ 1d]one - dimensional blockage , correlation , interference , mobility .
the concept of fractional differentiation with non - singular and non - local kernel has been suggested recently and is becoming a hot topic in the field of fractional calculus .the concept was tested in many fields including chaotic behaviour , epidemiology , thermal science , hydrology and mechanical engineering .the numerical approximation of this differentiation was also proposed in . in the recent decade, the integral equations were revealed to be great mathematical tools to model many real world problems in several fields of science , technology and engineering . in many research papers under some conditions ( see e.g. and references therein ) , it was proven that there is equivalence between a given differential equation and its integral equation associate .recently proposed a method based on a semi - discrete finite difference approximation in time and galerkin finite element method in space . in this workwe propose a new numerical approximation of atangana - baleanu integral which is the summation of the average of the given function and its fractional integral in riemann - liouville sense .this numerical scheme will be validate by solving the partial differential equation describing the subsurface water flowing within a confined aquifer model with the new derivative with fractional order in time component .this paper is organized as follows : in section [ sec:2 ] , for the convenience of the reader , we recall some definitions and properties of fractional calculus within the scope of atangana - baleanu . in section [ sec:3 ] a numerical approach of atangana - baleanu derivative with fractional order is introduced . in section [ sec:4 ] , as application of the new numerical approximation of atangana - baleanu fractional integral .we study analytically and numerically the model of groundwater flow within a leaky aquifer based upon the mittag - leffler function .finally , section [ sec:5 ] is dedicated to our perspectives and conclusions .we recall the definitions of the new derivative with non singular kernel and integral introduced by atangana and baleanu in the senses of caputo and riemann - liouville derivatives .let and let be a function of the hilbert space .we define by the derivative of as distribution on .the sobolev space of order in is defined as [ atangana - baleanu_caputo ] let and a function , .the atangana - baleanu fractional derivative in caputo sense of order of with a based point is defined as ,\ ] ] where has the same properties as in caputo and fabrizio case , and is defined as is the mittag - leffler function , defined in terms of a series as the following entire function .the mittag - leffler functions appear in the solution of linear and nonlinear fractional differential equations .the above definition is very helpful to discuss real world problems and will also have a great advantage when using the laplace transform with initial condition .now let us recall the definition of the atangana - baleanu fractional derivative in the riemann - liouville sense .[ atangana - baleanu_riemann ] let and a function .the atangana - baleanu fractional derivative in the riemann - liouville sense of order of is defined as .\ ] ] notice that , when the function is constant , we get zero .[ atangana - baleanu_integral : definition ] the atangana - baleanu fractional integral of order with base point is defined as this section , we will start by given the discretization for riemmann - liouville fractional integral .next , following the same idea as , we introduce a numerical 
scheme to discretize the temporal fractional integral , and give the corresponding error analysis which will be used to give the numerical solution of the atangana - baleanu fractional integral and also to obtain the solution of the modified groundwater flow within a leaky aquifer .let .then could be discretized as follows as on the other hand with the riemann - liouville fractional integral we have the following .we choose ] .the atangana - baleanu fractional integral is given by where , , and , .using the second approach of the discretization , we obtain the following approximation + \tilde{r}_{k,\alpha},\ ] ] where and if the function is differentiable such that its atangana - baleanu fractional integral exists then then for the atangana - baleanu integral can be decomposed as follows where .nevertheless where also let us now assume that is two times differentiable on ] , which represents the head , we seek such that the flow of water within the leaky aquifer is governed by where and , denotes the coefficient of storage , denotes the conductivity .the problem of groundwater flow within the leaky aquifer will be approached analytically and solved numerically using the implicit scheme . in the following ,we discuss the existence and uniqueness of solutions of the direct problem .\times \partial \omega,\\ \varphi(r_{c } , t ) = \varphi_{c}\qquad \qquad \text{in}~~ \{0\}~\times ~\omega,\\ \end{cases}\ ] ] for the initial datum , and where given , a solution of is a positive function ] we want to use the contraction mapping theorem , so for this purpose we need to build a closed set of ] is locally lipschitz .we consider two bounded functions and in ] , for all , and we shall follow the idea of . since the nonlinear operator is locally lipschitz , for there exists such that and let be a constant such that .set ;~~{\left\lvert\varphi\right\rvert}_{h^{1 } } \leq \bar{\varphi_{c } } , \text{for all}~ t \in [ 0,t_{1 } ] \big\},\ ] ] endowed with the norm then is a closed convex subset of ] , it follows that thus for all , this shows that is a contraction mapping in .thus has a fixed point which is a solution to .now let us show that the problem has an unique solution .let , be two solutions of and let . then by thanking the norm on both sides , by the gronwall inequality , the result follows . in the model here discussed for water flow in the leaky aquifer , the head , which appears in ,is assumed to be governed by the one - dimensional time - fractional differential equation involving the atangana - baleanu fractional derivative .applying laplace transform to , the fundamental solution , results to be : (p ) + \mathcal{l}_{t } \bigg [ \partial_{rr}\varphi \bigg](p ) + \mathcal{l}_{t } \bigg [ \frac{1}{r } \partial_{r}\varphi\bigg](p ) - \varpi^{-2 } \mathcal{l}_{t } \big[\varphi \big](p ) = 0,\ ] ] where denotes the laplace transform . 
replacing each term by its value, we get }{(1-\alpha)p^{\alpha } + \alpha } + \partial_{rr } \tilde{\varphi } + \frac{1}{r}\partial_{r}\tilde{\varphi } - \varpi^{-2 } \tilde{\varphi } = 0.\ ] ] hence the following differential equation in the form holds with , where since is positive , the exact solution of the differential equation is given in terms of bessel function of the first kind , and modified kind , as where and are the constants and , respectively given as using the boundary condition in , we obtain , then the solution is reduced to due to the difficulties to obtain the inverse laplace transform of , we therefore propose to obtain the approximate solution of by using the proposed numerical approximation of the atangana - baleanu integral .to achieve this , we revert the fractional differential equation to the fractional integral equation using the link between the atangana - baleanu derivative and the atangana - baleanu integral to obtain for some positive and large integers , the grid sizes in space and time is denoting respectively by and .the grid points in the space interval ] are the numbers , .the value of the function at the grid points are denoted by .using the implicit finite differences method , a discrete approximation of given by can be obtained as follows in order to use the numerical approximation proposed in this work , discrete solution of is then given as follows therefore the numerical approximation can be given as follows in order obtain the plots of the solution given by , we shall consider equidistant nodes in ] the better approximated solution is obtained in the whole interval . and and orange ( graph in the left ) .approximate solution of for time step and and orange ( graph in the right),title="fig:",scaledwidth=35.0% ] and and orange ( graph in the left ) .approximate solution of for time step and and orange ( graph in the right),title="fig:",scaledwidth=35.0% ] next in figure [ fig : fig_c ] we show the approximate solution of by using the numerical approximation of the atangana - baleanu method proposed for different time steps and for and in $ ] . , blue and ( graph in the left ) .approximate solution of for , blue and ( graph in the right).,title="fig:",scaledwidth=35.0% ] , blue and ( graph in the left ) .approximate solution of for , blue and ( graph in the right).,title="fig:",scaledwidth=35.0% ]the new scheme of the fractional integral in the sense atangana - baleanu has been proposed in this work .the error analysis of the novel scheme was successfully presented and the error obtained shows that the scheme is highly accurate .a new model of groundwater flowing within a leaky aquifer was suggested using the concept of fractional differentiation based on the generalised mittag - leffler function in order to fully introduce into mathematical formulation the complexities of the physical problem as the flow taking place in a very heterogeneous medium .the mittag - leffler operator provide more natural observed fact than the more used power law .the new model was analysed , as the uniqueness and the existence of the solution was investigated with care . to further access the accuracy of the proposed numerical scheme, we solved the new model numerically using this suggested scheme .some simulations have been presented for different values of fractional order .we strongly believe that this numerical scheme will be applied in many fields of science , technology and engineering for those problems based on the new fractional calculus .99 a. atangana . 
_on the new fractional derivative and application to nonlinear fisher's reaction-diffusion equation._ applied mathematics and computation, 1, 273, pp. 948-956, (2016).
i. area, j. d. djida, j. losada, and juan j. nieto. _on fractional orthonormal polynomials of a discrete variable._ discrete dynamics in nature and society, vol. 2015, article id 141325, 7 pages, 2015. doi:10.1155/2015/141325
a. atangana and j. j. nieto. _numerical solution for the model of rlc circuit via the fractional derivative without singular kernel._ advances in mechanical engineering, vol. 7, no. 10, pp. 1-7, (2015).
p. wang, c. huang, and l. zhao. _point-wise error estimate of a conservative difference scheme for the fractional schrödinger equation._ journal of computational and applied mathematics, 306, pp. 231-247, (2016).
many physical problems can be described using the integral equations known as volterra equations. there exist a number of analytical methods that can handle these equations in the linear case; for nonlinear cases, a numerical scheme is needed to obtain an approximation. recently, a new concept of fractional differentiation was introduced using the mittag-leffler function as kernel, and the associated fractional integral was also presented. up to now, no numerical approximation of this new integral has appeared in the literature. therefore, to accommodate researchers working in the field of numerical analysis, we propose in this paper a new numerical scheme for the new fractional integral. to test the accuracy of the new scheme, we first revisit the groundwater model for a leaky aquifer, replacing the classical time derivative with the atangana-baleanu fractional derivative in the caputo sense. the new model is then solved numerically using the proposed scheme.
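to make the two-term structure of the atangana-baleanu integral concrete, here is a minimal python sketch, not the authors' scheme: it assumes the common normalization b(alpha) = 1 - alpha + alpha/gamma(alpha), uses a simple product-rectangle rule for the riemann-liouville part, and checks the result against the closed form for f(t) = t. all function names and the step size are illustrative.

# minimal sketch of the atangana-baleanu (ab) fractional integral; the normalization
# b(alpha) = 1 - alpha + alpha/gamma(alpha) is an assumption made for this illustration.
import math

def mittag_leffler(alpha, z, terms=60):
    # truncated series e_alpha(z) = sum_k z^k / gamma(alpha*k + 1); adequate for modest |z|
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def ab_integral(f, alpha, t_grid):
    # ab i^alpha f(t) = (1-alpha)/b * f(t) + alpha/(b*gamma(alpha)) * int_0^t f(s)(t-s)^(alpha-1) ds,
    # with the integral evaluated by a product-rectangle rule on the grid.
    b = 1.0 - alpha + alpha / math.gamma(alpha)
    out = []
    for n, tn in enumerate(t_grid):
        rl = 0.0
        for j in range(n):
            left, right = t_grid[j], t_grid[j + 1]
            rl += f(left) * ((tn - left) ** alpha - (tn - right) ** alpha) / alpha
        out.append((1.0 - alpha) / b * f(tn) + alpha / (b * math.gamma(alpha)) * rl)
    return out

alpha, h = 0.9, 0.01
grid = [k * h for k in range(101)]
approx = ab_integral(lambda t: t, alpha, grid)
# for f(t) = t the riemann-liouville part is t^(alpha+1)/gamma(alpha+2), so we can check:
b = 1.0 - alpha + alpha / math.gamma(alpha)
exact = (1.0 - alpha) / b * grid[-1] + alpha / b * grid[-1] ** (alpha + 1) / math.gamma(alpha + 2)
print(approx[-1], exact, mittag_leffler(alpha, -1.0))

the first-order rectangle rule is enough to see the two-term structure and the convergence; a higher-order quadrature for the riemann-liouville part would improve the error constant.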
in this lecture , we present the most important aspects of the antenna and receiver components of synthesis telescopes . due to the increased breadth of material , we can not cover all of the topics contained in the previous version of the summer school chapter on antennas , in particular the section on antenna polarization properties .instead , we review the basics of antennas while adding new details of interest to astronomers on the atacama large millimeter / submillimeter array ( alma ) and the karl g. jansky very large array ( vla ) dish reflectors .we follow with an overview of the heterodyne receiver systems and the receiver calibration techniques in use at these telescopes .figure [ fig1 ] shows a simple block diagram of the major components required in an synthesis telescope .the role of the primary antenna elements of an interferometer is much the same as in any single element telescope : to track and capture radiation from a celestial object over a broad collecting area and focus and couple this signal into a receiver so that it can be detected , digitized and analyzed . at the output of the receiver feed ,the signal is at the radio , or sky , frequency , typically with a significant bandwidth .the signal undergoes frequency translations and filtering as it propagates through the electronics system . in synthesis telescopes of recent design ,the analog signal processing and digitization systems are located in the antennas , with the resulting digital data transmitted over fiber optic cables to the correlator building . in general , the receiver , intermediate frequency , transmission cables , lo , and baseband portions of the electronics system all have the requirement of good amplitude and phase stability .these requirements and others such as stable bandpass shape , low spurious signal generation , and good signal isolation are discussed in and in the alma and evla memo series .historically , a great variety of antenna types have been employed in synthesis telescopes ( see the list in * ? ? ?* table 1 ) . in all cases ,the diffraction beam of the primary antenna defines the solid angle over which an interferometer is sensitive , and is called the _ primary beam_. the details of this angular response pattern , including beam shape , sidelobe level , and polarization purity , as well as how accurately it can track the target are important , and will directly affect the observed data .the three major categories of antennas used in radio astronomy include : simple dipole antennas , horn antennas , and parabolic reflecting antennas .dipole antennas provide the widest field response but at low gain , meaning that large arrays of hundreds or thousands of them are necessary to form beams with any reasonable level of resolution and sensitivity .they are typically used at wavelengths longer than 1 m , such as in the long wavelength array ( lwa , * ? ? ?horn antennas provide the most well - controlled beam shape and uniformity of response vs. frequency . 
indeed , a hybrid of the horn and parabolic reflecting antenna types , the crawford hill horn - reflector , yielded the first detection of the cosmic microwave background .horn antennas have been combined into small interferometric arrays built on tracking platforms such as the degree angular scale interferometer .however , horn antennas are not practical when a large collecting area is required because large single elements would be long , heavy , and difficult to arrange into a compact configuration .the reflecting dish antenna provides both good sensitivity and beam performance in its single element form , while being amenable to arrangement into reconfigurable interferometric arrays . in order to access a wide frequency range ,many different receiver bands must be arranged with some mechanism to share the focal plane . however ,this issue has been solved by a variety of approaches .for example , at the green bank telescope ( gbt , * ? ? ?* ) , up to eight receivers are mounted on a circular carriage which can rotate the selected receiver onto the focal axis .because alma , vla and many other major interferometers employ symmetric dish antennas with circular apertures , we will concentrate the rest of this section on this style of antenna . since the mid-1960 s, reflector antennas have been designed using the principle of homology . rather than trying to build a structure to resist the deformation associated with changes in orientation , a homologous design responds to the changes by allowing the surface to perturb from one parabola to another .this change can then be compensated simply by applying a calibrated , concomitant motion ( i.e. focus ) of the subreflector .further discussion on homology can be found in , which is an excellent reference on performance measurements techniques for parabolic reflector antennas .structural engineering of antennas is discussed in . to summarize in a single sentence, the typical modern antenna presents a thin aluminum reflecting surface composed of dozens to hundreds of molded or machined segments supported by a space frame backup structure ( bus ) composed of carbon fiber reinforced ( cfr ) tubes and/or steel members and fasteners .these components promote high surface efficiency while offering some immunity to thermal deformation .the two major choices when designing a reflector antenna for use in a synthesis array are the choice of mount and the choice of optics . for radio astronomy dishes ,the alt - azimuth mount is the most prevalent in use today .its advantages are its simplicity and the fact that gravity always acts on the reflector in the same plane , easing the challenge of a homologous design .the major disadvantage of this mount is that , as the antenna tracks , the aperture ( and hence the primary beam ) rotates with respect to the source , around the primary optical axis .if the source size is significant compared to the beam size , and if the beam is not circularly symmetric , this rotation will cause the apparent brightness distribution to vary . 
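as a small numerical illustration of this beam rotation, the parallactic angle can be evaluated with the standard spherical-astronomy formula; the declination and site latitude below are example values only (the latitude is roughly that of the vla site).

# parallactic angle q of a tracked source on an alt-azimuth mount; the primary beam
# (and its sidelobe and polarization pattern) rotates on the sky by q during the track.
import math

def parallactic_angle_deg(hour_angle_hours, dec_deg, lat_deg):
    # standard formula: tan q = sin h / (tan(lat)*cos(dec) - sin(dec)*cos(h))
    h = math.radians(15.0 * hour_angle_hours)
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    return math.degrees(math.atan2(math.sin(h),
                                   math.tan(lat) * math.cos(dec) - math.sin(dec) * math.cos(h)))

lat_deg = 34.0                 # example value, roughly the vla latitude
for ha in (-4, -2, 0, 2, 4):   # hours from transit
    print("ha = %+d h : q = %+6.1f deg" % (ha, parallactic_angle_deg(ha, dec_deg=20.0, lat_deg=lat_deg)))

for this example the beam rotates by well over 90 degrees across the track, which is why any non-circular sidelobe or polarization structure of the beam matters for extended sources.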
since aperture blockage usually makes the beam sidelobe pattern non - circularly symmetric , andthe antenna instrumental polarization is not circularly symmetric , the dynamic range of total intensity images of very large sources and polarization images of extended sources will be limited by this effect .observers of extended sources need to consider this effect when judging the fidelity of subtle features in the images of these sources .a minor disadvantage of the alt - az mount is that sources passing close to the zenith can not be tracked well due to the high rates of azimuth rotation needed . technically , the sidereal azimuth rate exceeds the ( relatively slow ) slew rate of the vla and gbt ( /minute ) only for elevations .however , many antenna servos are not necessarily designed for smooth tracking at high rates , so errors may be larger ( [ tracking ] ) .often of greater concern is the typically reduced accuracy of the pointing model at elevations ( [ pointing ] ) .there are a variety of optical systems that can be used to feed a large radio reflector ( e.g. * ? ? ?figure [ fig2 ] shows the major types of feed systems that are in use on current radio telescope reflectors .the _ prime focus system _ , as in the westerbork synthesis radio telescope ( wsrt , * ? ? ? * ) and the giant meterwave radio telescope ( gmrt , * ? ? ? * ) , has the advantage that it can be used over the full frequency range of the reflector , including the lowest frequencies where secondary focus feeds become impractically large .the disadvantages of the prime focus are that space for , and access to , the feed and receiver is restricted and spillover noise from the ground decreases sensitivity .all of the _ multiple reflector systems _( figure [ fig2](b)(f ) ) have the advantage of more space , easier access to the feed and receiver , and reduced noise pickup from the ground .in addition , the primary and secondary reflectors can be shaped to provide more uniform illumination in the main reflector aperture , as described in [ shaped ] .the _ off - axis cassegrain _ ( e.g. , vla , vlba , alma ) is particularly suitable for synthesis telescopes needing frequency flexibility , because many feeds can be located in a circle around the main reflector axis .changing frequency simply requires either a rotation of the asymmetric subreflector around this axis , as in the vla and vlba , or by adjusting the pointing of the primary mirror as in alma .the disadvantage of this geometry is that the asymmetry degrades polarization performance .the _ nasmyth geometry _( e.g. , the 10.4 m leighton dishes of the combined array for millimeter astronomy ( carma ) , * ? ? ?* ) provides a receiver cabin external to the antenna structure , whilst the _ bent nasmyth geometry _ ( e.g. , submillimeter array ( sma ) ) minimizes disturbances to the receivers because they ( along with the final three mirrors ) do not tilt in elevation .the bent nasmyth geometry provides maximum convenience for service access , even during observations . finally , the _ dual offset gregorian _( e.g. , gbt ) has no blockage and thus delivers a circularly symmetric beam with low sidelobes which is particularly important for galactic h i observations .this characteristic makes it an attractive choice for wide field - of - view synthesis telescopes , but the increased complexity of reflector panel tooling and subreflector support structure leads to increased cost . 
as described in , there is a fourier transform relationship between the complex voltage distribution of the electric field , , in the aperture of an antenna and the corresponding complex far - field voltage radiation pattern , of the antenna ( see also * ? ? ?* ) . in both domains , the power pattern is the square of the absolute magnitude of the voltage pattern .the form of for an antenna is determined by the way in which the antenna feed illuminates the aperture .in general , the more that is tapered at the edge of the aperture , the lower will be the aperture efficiency and the sidelobe response , and the broader the main beam .calculations for a variety of , and their corresponding , can be found in antenna textbooks ( e.g. * ? ? ?* chapter 4 ) .figure [ fourierpair ] shows one - dimensional cuts through and for uniform and tapered illumination patterns . in order to maximize sensitivity ( at the expense of higher sidelobes ) , the vla antennas were designed to have a nearly uniform illumination ( ) over the whole aperture , except where the aperture is blocked by the subreflector and its support struts . because efficient receiver feedhorns have a tapered response , achieving this uniform illumination required mathematically perturbing the primary and secondary mirror surfaces into complementary `` shaped '' surfaces . in other words ,the vla antennas are * not * the classical cassegrain combination of paraboloidal primary with hyperboloidal secondary .while the primary differs from a paraboloid by only 1 cm rms , recent optical modeling employed a polynomial of order 13 to accurately represent the surface .the vlba antennas are similarly shaped to provide uniform illumination out to 95% of the dish radius , then tapered to -15 db at the edge . with uniform illumination , for a circularly symmetric aperture of diameter , the beam pattern takes the form , which has the following properties : first sidelobe level db ( i.e. 
compared to the peak ) , half power beam width hpbw , and the radius of the first null .these values are in good agreement with measured beam parameters for the vla 25 m diameter reflector , except for the first sidelobe level ( about -16 db ) , which is increased from theory by the aperture blockage .the vla beam patterns in the various bands are characterized by sixth - order polynomial functions in the aips software package , and analytically by an airy pattern ( truncated at the 10% level ) in the casa software package .more accurate patterns are being added to casa .some disadvantages of shaped cassegrain geometries , which do not usually preclude their use for synthesis telescopes , include compromised prime focus operation above a frequency of about 1 ghz because of the shaped main reflector ( the vlba 600 mhz system suffers 5% loss due to this effect ) , and very bad beam degradation if the feed is moved away from the secondary focal point .this latter problem can limit their use in synthesis arrays designed to obtain very wide fields of view by using focal plane arrays ( fpas ) .note that the apertif fpa system , which has a 8 field of view at 1.4 ghz , operates at prime focus on the wsrt whose 25 m antennas are paraboloidal in shape and thus do not suffer from this complication .in contrast , alma antennas were designed to have a tapered illumination pattern because it provides reduced sidelobes which promotes better single dish imaging performance , a required capability of alma .in addition , the classical cassegrain geometry provides good performance over a much larger area of the focal plane , which the stationary alma receivers must share . by specification ,the taper of the receiver feeds at the edge of the subreflector is db in the gaussian beam approximation , which equates to db in the physical optics analysis .a quadratic taper of db ( i.e. the power at the edge of the dish is 10% of the peak ) corresponds to a hpbw of ( * ? ? ?* eq 4.13 ) .the central obstruction of 0.75 m on the 12 m antennas produces a further % reduction in the beam pattern to a final hpbw of /d .the theoretical peak of the first sidelobe is db for an unblocked aperture .the effect of a central blockage is to increase the odd - numbered sidelobes by a few db while similarly decreasing the even - numbered sidelobes .currently in casa , the alma beam pattern is an airy pattern scaled to match the measured hpbw , i.e. the airy pattern for a 10.7 m antenna is used for the 12 m antennas .an improved representation of the beam patterns from celestial holography measurements is currently under test . when considering the effect of radio frequency interference ( rfi ) , it is important to know the response in the far sidelobes , which has been measured on the vla and vlba antennas at cm .the declining envelope of the sidelobe response is consistent with the reference radiation pattern for large diameter ( ) parabolic cassegrain antennas tabulated in recommendation sa.509 - 3 of the radiocommunication sector of the international telecommunications union . in general , the gain relative to the main beam will drop below -60 db somewhere between off axis . 
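the numbers quoted above for a uniformly illuminated circular aperture are easy to verify numerically. the sketch below evaluates the airy power pattern [2 j1(x)/x]^2 with x = pi (d/lambda) sin(theta) and reads off the half-power width, first null and first sidelobe; the grid resolution is arbitrary.

# beam of a uniformly illuminated circular aperture, p(u) = [2*j1(pi*u)/(pi*u)]^2 with
# u = (d/lambda)*sin(theta); reproduces hpbw ~ 1.02 lambda/d, first null ~ 1.22 lambda/d
# and a first sidelobe near -17.6 db.
import numpy as np
from scipy.special import j1

u = np.linspace(1e-6, 3.0, 300000)
x = np.pi * u
power = (2.0 * j1(x) / x) ** 2                   # normalized to 1 on axis

hpbw = 2.0 * u[np.argmin(np.abs(power - 0.5))]   # full width at half power
null = u[np.argmax(power < 1e-8)]                # first zero of j1
side = 10.0 * np.log10(power[u > null].max())    # first sidelobe level
print("hpbw ~ %.3f lambda/d, first null ~ %.3f lambda/d, first sidelobe ~ %.1f db" % (hpbw, null, side))

tapering the illumination, as alma does, pushes this first sidelobe down and broadens the main lobe, which is the trade-off described above.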
for antennas that have a well defined physical collecting area , such as reflector , lens or horn antennas ,the ratio of the effective area to the physical area of the aperture is called the aperture efficiency , a dimensionless quantity less than unity : the antenna aperture efficiency directly impacts the sensitivity of the synthesis telescope and is the product of a number of different loss factors , where reflector surface efficiency , reflector blockage efficiency ( feed legs and subreflector ) , feed spillover efficiency , illumination taper efficiency , panel reflection efficiency , and miscellaneous efficiency losses due to reflector diffraction , feed position errors , and polarization efficiency . as we will see , the term that is the most frequency dependent is .hence , it is often the case that observatory documentation will define the aperture efficiency as where is simply the product of all the other efficiencies .for example , the alma technical handbook quotes .it is important to realize that is typically elevation - dependent , with the best values occurring at moderate elevations ( usually between 45 - 60 ) .gain curves showing vs. elevation for the vla antennas are stored in casa ( figure [ gaincurves ] ) and can be used to correct for this varying amplitude response in the data . accounts for loss due to inaccuracies in the profile of the reflector .surface errors cause the electric field from different parts of the aperture to not add together perfectly in phase at the feed , leading to a decrease in received power . gives an expression for surface efficiency where is the rms surface error , with the errors assumed to be gaussian random and uncorrelated across the aperture . in a cassegrain ( or more complicated ) mirror system, is an appropriately defined composite rms error of the primary and secondary reflector surfaces , which should always dominate over subsequent ( more accurate ) smaller mirrors .if the errors are correlated over significant fractions of the aperture , then additional terms are required on the right hand side of eq .[ eq:10 ] , or more accurately , an integration of the surface profile map must be performed .[ eq:10 ] predicts that for an rms error of , , which is often taken to define the useful upper frequency limit ( ) for a reflector . for the vla , with m, corresponds to ghz . for alma , with m , ghz .most of the drop in occurs between 0.5 - 1.0 , as can be seen in table [ rxtable ] . as well as the loss of sensitivity resulting from a low value of , one must be concerned with the quality of the primary beam .the surface errors cause scattering which produces a broad pedestal surrounding the main lobe of the beam that can be higher than the usual diffraction - limited sidelobes . this pedestal can enhance image artifacts caused by sources near the primary beam . for a reflector of diameter ,if the reflector errors are correlated over distances then the scatter pattern will be times broader than the diffraction - limited main lobe , and often correspond to the panel segment size. measurements of this pattern can be made by scanning large objects like the moon .good performance requires careful structural design for wind , thermal and gravitational loading , together with precise reflector panels ( e.g. * ? ? ?* ; * ? ? 
?* ) and an accurate panel setting technique ( see [ holography ] ) .the feed or subreflector and its multi - legged support structure block the aperture of a reflector antenna .this typically results in a blockage efficiency in the range .a formula for is given by the effective blocked area is the blocked area weighted for the illumination taper in the aperture ( see also * ? ? ?similarly , the total area is weighted for the illumination taper in the aperture .equation [ eq:11 ] shows , for small blockage , that the loss in efficiency is twice the fractional blocked area . as well as the loss in aperture efficiency ,the increase in antenna beam sidelobe level due to blockage is important for synthesis telescopes . using the fouriertransform relationship , the form of the antenna voltage pattern with blockage can be calculated as the unblocked voltage pattern minus the voltage patterns of the blocked areas . as a practical example , the alma 12 m antennas are of three different designs ( vertex , aec and melco , corresponding to the three funding partners : north america , europe , east asia ) , and the effect of their different blockage can be seen in their respective beam patterns . the feedleg design of the aec antennas is significantly different from the other two designs in that the four struts are mounted entirely from the edge of the dish ( figure [ beams ] ) .in contrast , the feed struts of the vertex and melco antennas meet the dish in several places along the outer half of the dish , meaning that scattering occurs twice once on the way from the sky down to the primary mirror ( plane wave scattering ) and again on the way back up to the subreflector ( spherical wave scattering ) .as shown in figure [ beams ] , the first sidelobe is lower and more azimuthally uniform on the aec antennas compared to the vertex antennas .these two efficiency terms are related to one another , and their product is sometimes ( confusingly ) referred to as the illumination efficiency .the spillover efficiency can most easily be understood by considering the antenna in transmission , rather than reception mode .the spillover efficiency is the fraction of the power radiated by the feed that is intercepted by the subreflector for a cassegrain feed , or by the main reflector for a prime focus system .clearly , power that does not intercept the reflector is lost , and we can be confident that a similar loss occurs in reception mode by invoking the reciprocity principle .simultaneously , the illumination taper efficiency arises whenever the outer parts of the antenna are illuminated at a lower power level than the central portion , and hence contribute lower `` weight '' in the aggregate signal ( similar to applying a uv - taper in synthesis imaging ) .the spillover and taper efficiencies can be computed using the integral formulas given in ; but in a qualitative sense , it should be obvious that adjusting the taper in one direction will generally improve one term at the expense of the other . for unshaped , classical cassegrain systems ( like alma antennas ) , the illumination taper that gives the best trade - off ( i.e., -10 db , * ? ? ?* ) will produce a spillover efficiency of , a taper efficiency of , and consequently , a net product of . by comparison , for the vla antennas ,whose illumination pattern is much closer to uniform , the net product is . 
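the efficiency budget above can be sketched in a few lines. the ruze factor follows eq. [ eq:10 ], while the surface rms and the lumped value for the remaining factors are placeholders chosen for illustration, not official vla or alma numbers.

# illustrative aperture-efficiency budget: eta_a = eta_ruze * (product of the other terms).
import math

def ruze(surface_rms_m, freq_hz):
    lam = 2.998e8 / freq_hz
    return math.exp(-(4.0 * math.pi * surface_rms_m / lam) ** 2)

eps = 250e-6            # assumed composite surface rms in metres (placeholder)
eta_other = 0.87        # assumed lumped blockage * spillover * taper * ... (placeholder)
for f_ghz in (10, 20, 40, 80):
    eta_s = ruze(eps, f_ghz * 1e9)
    print("%3d ghz : eta_ruze = %.2f , eta_a ~ %.2f" % (f_ghz, eta_s, eta_s * eta_other))

# one common rule of thumb puts the useful upper frequency at lambda = 16*eps, where
print("eta_ruze at lambda = 16*eps : %.2f" % math.exp(-(math.pi / 4.0) ** 2))

the rapid drop of the ruze term with frequency is what makes it the dominant frequency-dependent factor in the budget, as noted above.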
aside from surface errors encompassed by the term , smooth unpainted aluminum surfaces generally have a very high reflectivity at centimeter through submillimeter wavelengths ( typically per mirror , * ? ? ?addition of paint , which provides long - term protection , adds a small amount of loss at centimeter frequencies due to additional scattering ( up to a few percent , * ? ? ?however , above 100 ghz the dissipative loss of the paint s dielectric material becomes significant , which is why the panels of most ( sub)millimeter telescopes are left unpainted .although unpainted , alma panels are slightly roughened in order to scatter infrared radiation to enable safe observations of the sun .not included in the previous efficiency terms is the effect of diffraction at each aperture . whenever the focusing mirror diameters are large compared to the wavelength of observation , diffraction losses are low ( a few percent or less ). however , these losses become significant at the long wavelength end of many telescopes . for example , at 1 ghz , the diffraction efficiency of the vla antennas is 0.85 .[ misceff ] the ideal amplitude response of the primary beam is circularly symmetric with respect to the optical axis , with constant phase out to the first null , and alternating by in successive sidelobes . in reality , small errors in alignment of the subreflector with respect to the primary surface ( i.e. focus errors ) can produce a non - circular beam , which is accompanied by a reduction in efficiency .furthermore , any small errors in the alignment of the receiver feed with respect to the optical axis ( termed an `` illumination offset '' ) produce non - uniform phase response in the outer portion of the main beam .in addition to loss of efficiency , this effect can produce problems when imaging extended objects and may require special calibration if the misalignment is significant .precision multi - panel reflectors are composed of individual panel segments typically with four or five screw adjustment points per panel . the initial setting of the surface segments is generally performed with a mechanical alignment device or theodolite - assisted technique , which can achieve accuracies of part in of the total aperture diameter .further refinement is often done using photogrammetry .this technique entails placing reflective tape targets on the corners of each panel , imaging the entire surface with a high resolution digital camera from various angles , and solving for the best fit surface profile . applying manual surface adjustments based on this measured profile can typically reach accuracies of a few parts in of .the ultimate surface performance is usually achieved using microwave holography , a technique developed during the 1970 s and in use at nearly every radio through submillimeter observatory ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . in this context, the term holography refers to the process of mapping the complex ( amplitude and phase ) beam pattern of an antenna and fourier transforming the data to the aperture plane .the angular extent of the map ( typically ) determines the linear resolution on the dish .the phase map provides the surface deviations in units of the observed wavelength ; thus , the wavelength sets the ultimate accuracy , but it is constrained by the available sources of radiation .centimeter - wave telescopes typically use geosynchronous broadcast satellite signals in band ( cm ) , either their analog continuous wave beacons ( i.e. 
spectral line observed with a narrowband filter ) or their broadband digital transmissions .continuum from bright quasars can also be used .millimeter - wave telescopes need higher precision ( hence higher frequency sources ) and typically use ground - based 3 mm transmitters mounted on towers , though wavelengths as short as 0.4 mm have been used . with a well - tuned holography system ,the measurement errors are typically about 0.5 - 1 part in of .in addition to measuring panel misalignment , holography can be used to measure illumination offsets ( [ misceff ] ) , systematic antenna panel mold error as well as large - scale deformations due to thermal effects .large - scale features can also be measured by celestial holography , in which bright maser lines or quasar continuum emissions are used as the radiation source .a sidereal source provides the advantage that dish deformations can be measured at different elevations .all of these forms of traditional holography require a second stationary antenna as a reference signal .an alternative technique called `` phase retrieval '' or `` out - of - focus '' holography can be performed on a single antenna by mapping the amplitude beam and fitting for the associated phase error .each antenna in a synthesis array employs a pointing model which continuously converts the requested topocentric coordinates into the actual encoder coordinates that will put the antenna on - source .pointing models account for basic effects such as encoder zero offsets , collimation error , non - perpendicularity of the axes , and pad tilt , as well as higher - order terms that account for gravitational flexure , encoder eccentricity and other mechanical asymmetries ( e.g. * ? ? ?* ; * ? ? ?some antennas employ additional metrology , such as thermometers ( gbt , * ? ? ?* ) and tilt meters and linear displacement sensors ( alma , * ? ? ?* ) , in order to input real - time dynamic terms .the pointing model must be fit to `` all - sky '' pointing datasets , typically consisting of continuum scans on dozens of quasars scattered all about the sky , using fitting software such as tpoint .when an antenna is relocated to a different pad , at least several terms of the pointing model must be remeasured and updated to counteract the small but inevitable changes in geometry . with the best pointing model in place , the so - called `` blind pointing '' performance refers to the typical pointing error after slewing to a particular direction . for the vla ,the blind pointing error is typically rms in calm nighttime conditions , but can exceed during the day . to improve the pointing accuracy during higher frequency observations , the technique of offset pointing is employed , in which local pointing corrections are periodically measured toward a bright quasar within of the science target .this technique can reduce the rms error to for vla antennas , and for alma antennas ( see fig . [ tracking ] ) . [ pointing ] in addition to static pointing error ,antenna servo tracking errors are always present at some level and can cause time variation in the visibility amplitudes , particularly at high frequencies where the primary beam is smallest . 
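a simple gaussian model of the primary beam makes the sensitivity to tracking errors concrete; the 10 per cent excursion used below is only an example and the model ignores the true sidelobe structure.

# gaussian primary-beam model a(t) = exp(-4*ln2*t^2), with t the offset in units of the hpbw.
# a given pointing excursion changes the apparent gain far more for a source sitting at the
# half-power point than for one at the beam centre.
import math

def beam(t):
    return math.exp(-4.0 * math.log(2.0) * t * t)

delta = 0.10   # excursion as a fraction of the hpbw (example value)
centre_drop = 1.0 - beam(delta) / beam(0.0)
edge_swing = 0.5 * (beam(0.5 - delta) - beam(0.5 + delta)) / beam(0.5)
print("at the beam centre : %.1f %% gain change" % (100.0 * centre_drop))
print("at the half-power point : about +/- %.0f %% swing" % (100.0 * edge_swing))

this simple model already shows why extended sources, whose emission fills the outer parts of the beam, are far more affected by tracking errors than compact sources at the beam centre.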
at the vla , the rms tracking error is about in low wind , which yields typical peak excursions of , or 10% of the primary beam at 43 ghz .while this sounds relatively harmless , it becomes critical when imaging extended objects .for example , at the phase center of a gaussian beam , an rms tracking error of 10% of the beamwidth yields an amplitude variation of only 2.7% ; however , at the half - power point of the beam , it yields a variation of % ( figure [ trackingfigure ] ) .tracking accuracy is generally the worst in high or gusty winds . in benign weather conditions ,the tracking accuracy of an antenna is ultimately set by the quality of its servo control system .a description of a modern digital antenna servo control system for the 6 m diameter sma antennas is given in .good servo systems are designed with safety as the highest priority , with interlocking emergency - stop and hardware limit switches connected directly to the power source .additional software safety measures include : 1 ) software motion limits , 2 ) consistency checks between the position feedback devices ( encoders ) and integrals of the velocity feedback devices ( tachometers ) , 3 ) monitoring of motor currents and temperatures , 4 ) routine servicing of a watchdog timer which will trigger a system shutdown in case of a processing hangup , and 5 ) re - engaging of mechanical brakes whenever power is lost .while all of the safety logic is running relentlessly in the background , the system must also compute the instantaneous torque to apply to track a celestial source at the sidereal rate , or perhaps perform faster on - the - fly imaging . a typical servo design implements nested loops in which the calculations and adjustments operate at the appropriate rate .for example , the azimuth / elevation position loop runs at hz and computes velocity commands to send to the azimuth / elevation velocity loop running at hz , which in turn commands the motor current ( torque ) loop at khz . in the past , the gains of these loops ( traditionally consisting of proportional , integral and derivative terms ) were set in hardware by fixed values of resistors and capacitors .similarly , the time constants of any filters on the feedback signals had to be set in this manner . in modern systems ,these gains and filters are configurable in software , and can be adjusted ( if necessary ) when operating in different modes or under different conditions .also , command - shaping can be implemented to smoothly transition between slewing and tracking modes to prevent overshoot and more rapidly acquire the target .finally , more complicated algorithms can be attempted to try to achieve faster response times while avoiding the excitation of structural resonances .both the gbt and vla are in the midst of an upgrade of their original servo system hardware and software .the role of the receivers in a synthesis telescope is to linearly amplify weak radio frequency signals while adding minimal noise , and down - convert them into room - temperature analog output signals on coaxial cables at intermediate frequencies ( ifs ) suitable for digitization .receivers are traditionally comprised of a _ front - end _ ( fe ) and a _ back - end _ ( be ) .the fe includes the components that _ must _ be attached to the antenna , in contrast to the be which includes the electronics ( sometimes called the _ if chain _ ) that can be mounted in a separate rack and which process the if signals . 
in this chapter, we will cover the fe polarization splitting device , the fe detector , and the be electronics preceding the digitizers .three current technological limitations can essentially explain the configuration of receivers in radio and ( sub)millimeter astronomy .first , we can build broadband low - noise amplifiers ( lnas ) with optimal noise performance up to about 120 ghz ( see the review of * ? ? ?second , we can digitize signals of instantaneous bandwidth up to about 2 ghz , which require a nyquist sampling rate of 4 gigasamples / second ( gs / s ) . from these two facts, we can see that to observe at ghz , the first device in the front - end must be a device to downconvert the signal ( called a mixer ) instead of an amplifier . also , to observe with an aggregate bandwidth ghz , a mixer must be present in the if chain , regardless of the observing frequency .the third limitation is that when used as the first component in the front - end , both mixers and amplifiers must be cooled to cryogenic temperatures to yield competitive performance400 mhz ) where the galactic background increases from tens to thousands of kelvin . in this regime ,the room - temperature performance of lnas is adequate without introducing extra noise . ] .lnas reach their optimal performance at about 15 k , which requires only a two - stage cryostat .in contrast , the best submillimeter mixers are superconducting tunnel junctions which require a more complicated three - stage cryostat to reach their optimal operating temperature of k , a practical disadvantage compared to lnas at ghz . putting together these facts, we can surmise that most alma fes must begin with a cold mixer followed by a cold lna , while vla fes begin with a cold lna .following these components , both alma and vla receivers require room temperature mixers and amplifiers in their bes , prior to digitization . for optimal sensitivity , we want to build dual - polarization receivers that can accept both polarizations from astronomical targets .this ability requires either dual linear or dual circular feeds .because fe amplifiers and mixers operate on individual polarization signals , a polarization splitting device is needed .there are two broad categories of devices that can provide db of polarization purity with low loss : waveguide and quasioptical .the most common waveguide devices are called ortho - mode transducers ( omts ) and are placed between the feed horn and the fe device .an omt is a four - port microwave device with three physical ports .it accepts a dual polarization signal into its one common input port ( typically a square or circular waveguide ) and splits the two polarizations into separate physical output ports ( either rectangler waveguide or coax ) .omts can be designed numerically using software that solves maxwell s equations .although it can be challenging to achieve octave - wide ( operating over factor of 2 in frequency ) designs with good isolation and low loss , they are fairly straightforward to machine when the dimensions are large , i.e. at wavelengths longward of 1 mm ( e.g. * ? ? ?* ; * ? ? ?* ) , but good results have been obtained at 0.6 mm ( alma band 8 ; * ? ? ?the most common quasioptical splitter is a wire grid which reflects one polarization and transmits the other .wire grids must be placed at a beam waist preceding the feedhorn , and the required wire separation is for good performance . 
at high frequencies ,wire grids are easier to construct than omts .the disadvantage of wire grids is that you need two feedhorns ( one per polarization ) instead of one and each has an accompanying mirror to refocus the beam after the grid .thus , optical alignment can be tricky and often leads to a significant _ beam squint _ , a condition in which the primary beams of the two polarizations differ in pointing direction by .1 beam ( the alma specification ) or worse .the vla receiver system consists of 10 bands ( see table [ rxtable ] and figure [ rxphoto ] ) .the two lowest frequency bands employ crossed dipole feeds in front of the subreflector followed by room temperature amplifiers .the rest of the bands employ offset - cassegrain corrugated feed horns followed by cryogenic lnas housed in individual cryostats .the polarization splitters in between the feedhorns and the lnas consist of quadruple - ridge omts ( ghz ) , waveguide ( bifot junction ) omts ( ghz ) , or sloped septums ( q - band ) . at the input of the lnas ,most of the bands ( those above 4 ghz ) also employ isolators , which are passive , non - reciprocal two - port devices which , like a subway turnstile , prevent the propagation of signal in the reverse direction . in this case , the isolators serve to reduce leakage between the polarizations and reduce standing waves in the optics .ccccccc & frequency range & & & sideband + central & code & ( ghz ) & & type & splitter & type + + 4 m & 4 & 0.0580.084 & 1.0 & dual linear & dipole & ssb + 90 cm & p & 0.230.47 & 1.0 & dual linear & dipole & ssb + 20 cm & l & 12 & 1.0 & dual circular & q .- r .omt & ssb + 13 cm & s & 24 & 1.0 & dual circular & q .- r .omt & ssb + 6 cm & c & 48 & 0.99 & dual circular & q .- r .omt & ssb + 3 cm & x & 812 & 0.97 & dual circular & q .- r .omt & ssb + 2 cm & k & 1218 & 0.94 & dual circular & wg .omt & ssb + 1.3 cm & k & 1826.5 & 0.87 & dual circular & wg .omt & ssb + 1 cm & k & 26.540 & 0.74 & dual circular & wg .omt & ssb + 7 mm & q & 4050 & 0.57 & dual circular & septum & ssb + + 7 mm & 1 ( q ) & 3551 & 1.0 & dual linear & omt & ssb + 4 mm & 2 ( e ) & 6790 & 1.0 & dual linear & omt & ssb + 3 mm & 3 ( w ) & 84116 & 0.99 & dual linear & omt & 2sb + 2 mm & 4 & 125163 & 0.99 & dual linear & omt & 2sb + & wvr & 175.3191.3 & 0.98 & single linear & & dsb + 1.6 mm & 5 & 163211 & 0.98 & dual linear & omt & 2sb + 1.3 mm & 6 & 211275 & 0.96 & dual linear & omt & 2sb + 0.9 mm & 7 & 275373 & 0.93 & dual linear & wire grid & 2sb + 0.7 mm & 8 & 373500 & 0.87 & dual linear & omt & 2sb + 0.45 mm & 9 & 600720 & 0.74 & dual linear & wire grid & dsb + 0.35 mm & 10 & 787950 & 0.59 & dual linear & wire grid & dsb + + + + + + + an amplifier is an active , two - port device , meaning that it requires a voltage supply , and has one rf input and one rf output .the input signal emerges at the output with greater power ( typically 1030 db , i.e. 10x1000x ) and is unchanged in frequency .the current generation of nrao cryogenic lnas on the vla , vlba , gbt , and other telescopes employ heterostructure field effect transistors ( hfet ) which operate at 15 k. they deliver a noise temperature performance of k at low frequency and about 5 times the quantum noise limit at high frequency ( >12 ghz , * ? ? ?* ) : these indium - phosphide ( inp ) hfets come from the `` cryo3 '' series of wafers manufactured by northrup grumman space technology in 1999 . 
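since the quantum noise limit is t_q = h*nu/k_b, the "about 5 times the quantum limit" statement translates into only a few kelvin at these frequencies; the short calculation below just gives the scale and is not a restatement of measured receiver temperatures.

# quantum noise limit t_q = h*nu/k_b and 5*t_q, evaluated at a few representative frequencies.
h, kb = 6.626e-34, 1.381e-23   # planck and boltzmann constants (si units)
for f_ghz in (12, 22, 33, 45):
    tq = h * (f_ghz * 1e9) / kb
    print("%2d ghz : t_q = %.2f k , 5*t_q ~ %4.1f k" % (f_ghz, tq, 5.0 * tq))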
at each frequency range, there exists an optimal size ( gate periphery ) for a transistor that facilitates the design of a broadband amplifier .the gate peripheries range from 200 m at ku - band ( and below ) down to 30 m at w - band .the hfet devices are housed inside a block of gold - plated brass and the input is connected by either coaxial cable ( low frequency receivers ) or waveguide ( high frequency receivers ) to the corresponding omt output .the lna package has a total gain of db . to avoid feedback ,the hfets are mounted in a channel small enough to block all waveguide modes in the band of operation .electrical connections are made via microstrip and bond wires .it is important to note that the lna is not the only contributor to the overall receiver temperature ( ) .while the details and measurements can be found in the evla memo series , a rough model for the mid - band for vla bands above 12 ghz is k. mixers were invented around the time of world war i for radio direction finding ( see the historical review of * ? ? ?a mixer is a three - port device which accepts two inputs : a broad rf signal and a narrow lo continuous wave ( cw ) tone , and produces a broadband if output signal .this frequency conversion occurs in a diode with a strongly nonlinear current ( )-voltage ( ) characteristic , . as an example , consider a simple square - law diode for which when the two input fields are superposed : the non - linear nature of the mixer will effectively multiply the lo and rf signals together .this effect can be seen by substituting equation eq . [added ] into eq .[ nonlinear ] , which produces three terms for the resulting current , including the cross term which can be expanded by a trigonometric identity into : .\ ] ] thus , current will flow at the difference frequency of the lo and if .it is important to notice that the multiplication serves to transfer the phase from the rf signal to the if signal , a process called heterodyning . because they can transfer phase in this manner , mixers are key components for interferometers .mixers range from low - cost , off - the - shelf packages using si schottky diodes that operate at room temperature ( in cell phones etc . ) to the expensive , delicate , research - grade superconductor - insulator - superconductor ( sis ) tunnel junctions that were developed in the late 1970s and early 1980s and today serve as the cryogenic fe detector for alma bands 3 through 10 . at terahertz frequencies , the performance of sis mixers declines andhot - electron bolometer mixers are used instead ( e.g. * ? ? ?room temperature mixers are also employed in alma and vla if circuitry after the received signal has been sufficiently amplified so that their conversion loss is of little consequence . typically ,the if signal output by a mixer will contain overlapping signals from two frequency ranges termed _ sidebands _ ( see figure [ mixer ] ) , resulting in double sideband ( dsb ) performance .one sideband represents a piece of the rf spectrum above the lo frequency and the other is a piece below the lo frequency . 
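the heterodyne algebra above, and the sideband ambiguity it implies, can be seen in a toy numerical experiment: two rf tones placed symmetrically above and below the lo land on exactly the same if frequency after square-law detection. the sample rate and tone frequencies are arbitrary.

# square-law mixing demo: rf tones at f_lo +/- f_if both appear at f_if after detection (dsb).
import numpy as np

fs = 1.0e6
t = np.arange(20000) / fs
f_lo, f_if = 200e3, 30e3
rf = np.cos(2 * np.pi * (f_lo + f_if) * t) + np.cos(2 * np.pi * (f_lo - f_if) * t)
lo = np.cos(2 * np.pi * f_lo * t)
current = (rf + lo) ** 2                       # i ~ a*v^2 for the square-law diode

spec = np.abs(np.fft.rfft(current))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
band = (freqs > 1e3) & (freqs < 100e3)         # look only at the if band, away from dc
print("strongest if product at %.0f hz" % freqs[band][np.argmax(spec[band])])

both input tones contribute to the same 30 khz product, which is exactly the confusion that ssb and 2sb designs remove.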
the dsb confusion can be avoided either by pre - filtering the rf signal ( termed an ssb mixer ) , or by designing a mixer to separate the sidebands into separate if outputs ( termed a 2sb mixer ) .if only one of the sidebands from a 2sb mixer is desired , the if output corresponding to the unwanted sideband can be terminated into a load ( such as a waveguide absorber ) , which is a configuration termed a 1sb mixer .vla receivers employ ssb mixers , as is likely for alma bands 1 and 2 ( still under development ) .alma bands 3 through 8 are 2sb , while bands 9 and 10 are dsb , as are all receivers on the sma .the relative sensitivities of these mixer types has been evaluated for the alma site , but in general 2sb is superior to dsb .a development project to upgrade the band 9 mixers into 2sb format is underway .the alma receiver system consists of ten frequency bands ( table [ rxtable ] ) , all housed in the same 0.97 m diameter , 450 kg cryostat ( figure [ rxphoto ] ) which is mounted inside the receiver cabin at the cassegrain focus .currently , a receiver band is brought into focus by adjusting only the pointing of the primary mirror to orient the desired sky direction toward the desired receiver window .there is some improvement to be gained from also adjusting the subreflector to tilt approximately half - way toward the selected receiver , but this added complication has not yet been introduced into the system since the pointing and focus models will need to account for it .a low - loss polymer membrane , initially made of gore - tex but replaced by teflon fluorinated ethylene propylene ( fep ) , protects the cabin from the outdoor environment . the alma fe optics have three generic layouts . bands 1 and 2 use room - temperature ( external to the cryostat ) polymer lenses as the focusing element . bands 3 and 4are in the outermost position in the cryostat and use a pair of external room - temperature mirrors ( band 3 includes a teflon lens on the corrugated feedhorn , * ? ? ?the focusing optics for the higher bands are off - axis ellipsoidal mirrors mounted on the cold cartridge assemblies ( ccas ) inside the cryostat .the diameters of the cryostat windows are set large enough to avoid significant truncation losses . for polarization separation , alma bands 7 , 9 and 10 use wire grids while the other bands use omts ( table [ rxtable ] ) .the sis mixers and their enclosing ccas in the alma receivers were constructed by different international parties ( see the references in table [ rxtable ] ) . 
in all cases ,the receiver noise performance meets specification , beginning with 41 k in band 3 , and essentially following the function in the higher bands .finally , the room - temperature dicke - switched water vapor radiometer ( wvr ) in each antenna has proven effective and essential to removing atmospheric phase fluctuations on short timescales , down to the adopted integration time of 1.152 second ( ) , and on all baseline lengths .being required to drive mixers ( [ mixers ] ) , an lo signal must be a clean tone with high signal to noise ratio ( snr ) in order to obtain accurate astronomical spectra .los are constructed from a base oscillator with a high- electro - mechanical feature , such as a tunable cavity or a resonant sphere , which is ultimately synchronized to an atomic frequency standard .for alma , the fundamental oscillators located in each warm cartridge assembly ( wca ) are commercially - produced yttrium iron garnet ( yig ) spheres embedded in a magnetic field generated by the sum of a coarse tuning coil and a fine tuning ( fm ) coil .these compact yig packages produce clean tones in the 2 - 40 ghz range and are also used in the vla . for the higher frequency bands of alma , the yig tonemust be multiplied by one or more integer multiplication stages , many of which include power amplifiers to boost the multiplied signal .all of the los for alma and vla were built by nrao .the lo supplying the first mixer in the fe is often called lo1 in order to distinguish it from the los in the be , which are numbered starting from lo2 .although a free - running yig tone is typically very clean , it is subject to drift in frequency with time , and its close - in phase noise ( i.e. the line - broadening of the tone ) is not negligible . in contrast , a radio telescope requires a precisely stable lo frequency in order to observe spectral line features .an interferometer has a further requirement that the receivers in all antennas be phase - locked so that celestial signals will correlate . a circuit to stabilize and lock the lois called a phase - lock loop ( pll ) .a good description of a modern digital pll used on the sma interferometer is given in .similar in concept to an antenna servo ( [ tracking ] ) , the pll circuit continuously analyzes the phase difference with respect to an accurate low - frequency reference signal ( of order 20100 mhz ) , which is produced by a device called the first lo offset generator ( floog ) in nrao terminology .the pll computes and applies a correction to the fm tuning magnetic coil of the yig in order to maintain lock .the bandwidth of the correction circuit is typically a 0.5 - 1 mhz , which enables rapid re - locking after a walsh cycle phase change ( see [ walsh ] ) .initial lock is achieved by starting from a computed ( or a look - up table ) tuning value and sweeping the coarse coil until the tone is at the prescribed location in the if of the pll ( see , e.g. * ? ? ?a pll typically relies on an external mixer to downconvert the signal from the lo being controlled to a value close to the frequency of the low - frequency reference , and this external mixer in turn relies on an accurate high - frequency reference signal for its lo . 
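the loop just described can be caricatured in a few lines: a toy discrete-time pll in which a numerically controlled oscillator with a small frequency offset is pulled onto the reference by a proportional-integral correction, standing in for the drive applied to the yig fm coil. the update rate, gains and offset are invented for illustration and are not alma or vla parameters.

# toy phase-locked loop: the pi correction plays the role of the fm-coil drive.
import math

dt = 1e-7                        # 10 mhz update rate for the toy loop (arbitrary)
f_ref, f_nco = 1.0e6, 1.002e6    # reference tone and an nco starting 2 khz high
kp, ki = 2.0e3, 5.0e7            # illustrative loop-filter gains
phi_ref = phi_nco = integ = corr = 0.0
for n in range(200001):
    phi_ref += 2.0 * math.pi * f_ref * dt
    phi_nco += 2.0 * math.pi * (f_nco + corr) * dt
    err = math.atan2(math.sin(phi_ref - phi_nco), math.cos(phi_ref - phi_nco))  # wrapped phase error
    integ += err * dt
    corr = kp * err + ki * integ
    if n % 50000 == 0:
        print("t = %4.1f ms : phase error = %+.4f rad" % (1e3 * n * dt, err))

within roughly a millisecond the error settles near zero in this toy, qualitatively the kind of fast re-lock needed after each walsh phase step.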
for alma , this reference signal is delivered by a photomixer located in each wca which converts the photonic frequency reference distributed via fiber optic cable and originating from the laser synthesizer in the central lo .in fact , the fundamental references for lo2 in the be and for the floog in the wca are also distributed on the same fiber using wavelength division multiplexing . in an interferometer , aside from enabling the mixers to operate , the los are also crucial for implementing many additional features that ensure data quality .very fine control of the frequency and phase of lo1 is inserted via the pll reference generated by a direct digital synthesizer ( dds ) , which is part of the floog in nrao systems .for example , suppression of spurious tones and dc offset is achieved by modulating lo1 with phase switching using a walsh function sequence and removing it after digitization by adjusting the digital signal datastream in a supplementary fashion ( i.e. via a sign change ) .this technique serves to wash out any signals in the digital datastream that did not enter at the fe mixer ( i.e. those that did not originate from the sky ) . in high spectral resolution modes, phase switching does not work well , so a complementary technique is used by alma called lo offsetting . in this case , each antenna s lo1 is shifted by a different small amount ( in integer steps of 30.5 khz = 15625/512 ) , then this shift is removed downstream in lo2 or a combination of lo2 and the tunable filter bank ( tfb ) lo ( sometimes called lo4 ) in the baseline correlator . in the vla , the lo offsetting techniqueis called `` f - shift '' and uses prime number frequency steps , which are removed by the fringe rotators in the wideband interferometer digital architecture ( widar ) correlator . a spurious signal that enters the system between the insertion and removal points receives a residual fringe frequency equal to the spacing of the shifts between antennas andis suppressed upon normal integration of the data . in particular , lo offsetting can supply more than 20 db of additional rejection of the unwanted image sideband in 2sb receivers .this extra suppression is crucial to eliminate strong spectral lines whose remnants may otherwise survive the moderate ( 10 - 20 db ) suppression supplied by a 2sb receiver alone . in practice on alma ,lo offsetting also reduces residual closure errors .finally , the two sidebands in a dsb receiver ( [ mixers ] ) can be quite effectively separated ( db ) by applying walsh phase switching . in this manner, both sidebands can be recovered simultaneously in the correlator , thus doubling the effective bandwidth , as is done on the sma . however, walsh switching will not remove image signals which are not common to all antennas , including the atmospheric noise from the image sideband in a dsb receiver .the required distribution of the lo reference signals in large arrays like alma and vla occurs via many kilometers of fiber optic cables .optical fibers offer many advantages to radio interferometers , including low loss and wide bandwidths .however , the thermal expansion coefficient of these fibers combined with the diurnal temperature change of the environment leads to temporal changes in the optical path length of the fiber that vary as a function of antenna pad location . 
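a minimal illustration of the 180-degree phase switching described above: a +/-1 switching pattern applied at the first mixer and removed again after digitization leaves the sky signal intact while averaging away an offset picked up downstream. the pattern, segment length and spur level are arbitrary, and a real array assigns mutually orthogonal walsh sequences to the antennas.

# sign switching applied at the mixer and removed digitally suppresses downstream spurs.
import numpy as np

rng = np.random.default_rng(1)
seg_len, n_seg = 256, 128
pattern = np.tile(np.repeat([1.0, -1.0], seg_len), n_seg // 2)   # balanced +/-1 sequence

sky = rng.normal(size=pattern.size)      # signal entering at the feed / first mixer
spur = 0.5                               # dc-like spur injected after the mixer
data = pattern * sky + spur              # what gets digitized
demod = pattern * data                   # sign change re-applied in the digital system

print("mean before demod : %.3f , after demod : %.4f" % (data.mean(), demod.mean()))
print("corr(sky, data) = %+.3f , corr(sky, demod) = %+.3f"
      % (np.corrcoef(sky, data)[0, 1], np.corrcoef(sky, demod)[0, 1]))

the spur dominates the raw mean but cancels exactly under the balanced pattern, while the sky stream is recovered; the residual mean is just thermal noise.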
although the cables are buried , the thermal effect is still significant and causes delay changes to the lo references carried by the fiber .mechanical stresses on the above - ground portions of the fiber are also significant . to compensate for these problems, alma employs a line length correction ( llc ) system , which uses a piezoelectric fiber stretcher driven by a pll to maintain a constant optical path length on each fiber during an observation . in principle, the stretcher requires only enough range to compensate drift between visits to the phase calibrator , provided that the stretchers are only ever reset to the center of their range in between consecutive integrations on the phase calibrator _ and _ that the offline calibration software ( casa ) is aware of the resets . in practice, this synchronization has not been implemented in the alma system ( nor in casa ) , mainly because the stretchers have been found to have sufficient range to handle drifts over a few hours ( even on 10 km baselines ) before slipping a fringe , which is longer than current ( cycle 3 ) standard alma observing blocks ( minutes ) .the evla project also built a round trip phase ( rtp ) measuring system to mitigate project risk when it was not clear what the stability performance of the fiber would be .the rtp was designed to be able to measure and send corrections to the evla widar correlator .although the vla has longer baselines , it operates at lower frequencies , and the amount of phase change in the fiber observed by the rtp was small and slow compared to atmospheric phase variations ( and thus are effectively removed by the normal phase calibration sequence ) .however , the rtp was very useful in diagnosing phase vs. elevation changes due to antenna electronics and temperature changes _ not _ in the fiber , thus allowing those problems to be identified and fixed .although the rtp system was deactivated in 2010 , it can be re - activated if needed , e.g. for a future pie town link .although it is not implemented as a round - trip measurement , the vlba has a pulse - cal system to measure the relative instrumental phase between baseband channels . a train of 1 mhz or 5 mhz pulses from a tunnel - diode are injected into a directional coupler , the same one used to inject the signal from the noise diode ( see [ tcal ] ) .the pulses pass through the lna and all downstream processing and the resulting phases are detected for selectable tones in the back - end .these so - called data are supplied to the user and can be used to calibrate the phase characteristics ( offset and frequency slope ) of the antenna s components independent of the atmosphere or whether a calibrator is being observed .two unglamorous but essential parts of the receiver be are the ability to : 1 ) measure the total power of signals using square law detectors ( sqlds ) which convert the power in a continuum signal into voltage ; and 2 ) adjust signal levels in order to optimize the inputs into successive devices along the if chain .for example , when the input signal to an amplifier exceeds its specified input range , it no longer functions in a linear fashion .in other words , the effective gain factor is less than what it would be with a smaller input .this effect is termed `` gain compression '' , and is often accompanied by other bad characteristics including spurious oscillations and change in the bandpass shape . 
to avoid these pitfalls ,all observations begin with the insertion of a known load ( possibly just blank sky ) into the beam followed by a sequential adjustment of the programmable attenuators placed strategically along the if chain in order to achieve optimal levels , which are implemented as target values at each set of sqlds .an attenuator is a wideband two - port device that dissipates a fraction of the input power into a resistor .programmable attenuators typically have a range of 0 - 15 or 0 - 31 db with steps of 0.5 or 1 db .after the final mixing stage , the signal to be digitized has reached its lowest frequency , which is traditionally termed a _baseband _ even if the low end of the band is not at 0 hz . in alma and vla ,each of the 2 ghz - wide basebands covers 2 - 4 ghz . after passing through an analog anti - aliasing filter ,the alma baseband signals are sampled in the second nyquist zone of the 3-bit 4 gs / s sampler .adjusting the power level in each baseband is also essential as it represents the input level to the digitizers , which often have a narrow range of input power over which the snr is optimal , and is particularly true for alma .in fact , alma observations must set and remember two different sets of attenuation levels one set used for the system temperature scans ( to avoid saturation on the hot load ) and one set used for normal observing .the vla includes an additional device called a gain slope equalizer , which allows the signal power to be adjusted in a frequency - dependent manner providing a more uniform level across the baseband fed to the digitizers .the alma band 10 receiver also includes an equalizer in the if section of its wca .finally , sqlds can also be used independently of the autocorrelation portion of the correlator to perform continuum pointing or focus scans , and to measure the baseband - averaged system temperature t . in all alma observations , the and in each basebandis measured periodically to be able to properly calibrate the data and account for atmospheric absorption . by design, the measurement is a spectral measurement in order to capture the inherent variation of the receiver sensitivity across the observing band , and to capture the spectral variation in atmospheric opacity due to molecular absorption features ( * ? ? ? *the measurement is achieved using three autocorrelation spectra measured sequentially on : the sky , the ambient temperature load , and the heated load .the loads are temperature - controlled blackbodies with nearly perfect emissivity ( 0.999 , * ? ? ?* ) mounted in the mechanically - driven alma calibration device ( figure [ acd ] ) .the online software ( telcal , * ? ? ?* ) takes the two load spectra to compute via the -factor method ( see eq . 12 in the chapter on basics of radio astronomy ) .telcal then uses this result along with the sky spectrum and the atmospheric model to compute using the chopper wheel method ( e.g. * ? ? 
?it accounts for the relative sideband gain , currently using a single value per baseband , but it could eventually be a spectrum .these temperature spectra are stored in the data and applied offline in casa to place the visibilities onto an absolute temperature scale and to set the relative weights of the data .an example of the three measurements and the resulting and spectra are shown in figure [ tsys ] .although the power level varies by a factor of three across the baseband , the overall receiver sensitivity is fairly constant .however , the fact that the spectra show upward spikes at the frequencies of atmospheric molecular absorption lines ( in this case from ozone ) reflects the fact that those channels are less sensitive in the data , and hence can be down - weighted ( using the spectral weights option available in casa version ) .this capability is important to obtain the best performance when combining all the channels of all the spectral windows to obtain a single multi - frequency synthesis ( mfs ) continuum image .currently , all spectra are obtained in the low resolution mode of the baseline correlator , time division mode ( tdm ) , which yields 15.625 mhz channels in dual - polarization mode . for higher resolution spectral windows obtained in frequency division mode ( fdm ), the correction must be interpolated to narrower channels when applied to the data in casa .a future software upgrade should allow spectra to be obtained directly in fdm spectral windows , which will provide an additional benefit of improved removal of atmospheric lines .this capability was introduced on the aca correlator starting in alma cycle 3 .an alternative to a mechanized load system is a broadband calibrated noise diode that is switched on and off at a high rate ( hz ) . 
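before continuing, the two steps just described, the y-factor estimate of the receiver temperature from the two loads and the chopper-wheel style estimate of the system temperature, can be sketched in a few lines. the load temperatures, gain and opacity profile below are invented placeholders, and the single-load chopper-wheel formula shown assumes the atmosphere radiates at roughly the ambient temperature; it is only a simplified stand-in for the full atmospheric-model calculation performed by telcal.

```python
import numpy as np

def receiver_temperature(p_hot, p_amb, t_hot=358.0, t_amb=288.0):
    """Y-factor estimate of Trx per channel (placeholder load temperatures in K)."""
    y = p_hot / p_amb
    return (t_hot - y * t_amb) / (y - 1.0)

def system_temperature(p_sky, p_amb, t_amb=288.0):
    """Single-load chopper-wheel shortcut: assumes the atmosphere is near the
    ambient temperature, so Tsys comes out referenced to above the atmosphere.
    (ALMA's telcal instead combines both loads with an atmospheric model.)"""
    return t_amb * p_sky / (p_amb - p_sky)

# toy 128-channel spectra: Trx ~ 50 K plus a narrow "ozone" opacity feature
n_chan = 128
t_rx_true = 50.0 + 5.0 * np.sin(np.linspace(0.0, 3.0, n_chan))
tau = 0.10 + 0.05 * np.exp(-((np.arange(n_chan) - 64) / 3.0) ** 2)
t_sky = 288.0 * (1.0 - np.exp(-tau))
gain = 1e-3                               # arbitrary power per kelvin
p_sky, p_amb, p_hot = (gain * (t_rx_true + t) for t in (t_sky, 288.0, 358.0))

print("median Trx  = %.1f K" % np.median(receiver_temperature(p_hot, p_amb)))
print("median Tsys = %.1f K" % np.median(system_temperature(p_sky, p_amb)))
```

returning to the switched noise diode used at the vla: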
at the expense of a slight increase in system noise, it provides a stable modulated reference signal .each vla receiver contains such a diode with a noise temperature ( ) measured in the laboratory on a 25 - 100 mhz grid ( depending on the band ) .a synchronous detector located in the widar station boards calculates the sum ( ) and difference ( ) powers along with the ratio and every second : these values are stored in the switched power table of the astronomical data .application of in aips or casa places the visibilities on an absolute temperature scale .it also removes any gain variations in the electronics between the diode and the correlator down to 1 second timescales .only a single value per subband is computed , using an interpolated value from the grid ; thus , the spectral resolution of the correction is typically coarser than alma s channelized .also , in contrast to the discussed in [ trxtsys ] , the definition of is with respect to the receiver input , so it does not account for the effect of the atmosphere .the resulting elevation - dependence of the gain can be compensated for in the offline software using a measurement or estimate of the zenith opacity .it is worth noting that widar s digital requantization of the 3-bit or 8-bit signal into a 4-bit signal introduces an additional gain change after the synchronous detector .this gain change is automatically applied to the 3-bit data online and is also stored in the same table for reference .the radiometer equation is fundamental to radio astronomy as it predicts the expected standard deviation of repeated measurements of the antenna noise temperature based on finite time samples .it is based on a few concepts in physics and statistics which can be easily simulated on a computer . herewe present a simulation which illustrates the fundamental derivation of this equation .the derivation can be found in more detail in . shown in figure [ gaussian ]is the time series and corresponding histogram of a wide - sense stationary random rf signal ( often termed `` white noise '' ) , whose amplitude ( ) follows gaussian statistics and has a mean value of zero and a variance equal to the mean power .we have set the mean power to 4 in arbitrary units . shown in figure [ gaussiantwo ]is the square of the amplitude which is the instantaneous noise power , and its histogram , which follows the expected gamma distribution . for any chosen sample of the noise power ,statistics tells us that the standard error of the mean will have a distribution variance of where is the number of samples .the number of statistically independent samples in time of a signal with bandwidth ( ) is .thus , the expected standard deviation of the power is . in the limit of large ,the mean of the gamma distribution is the expectation mean of and is equivalent to , thereby yielding the traditional radiometer equation : since in the rayleigh - jean limit , it can also be written in terms of , with in place of ( as is done in eq .14 of the chapter on basics of radio astronomy ; see also * ? ? ?* ) . in figure[ integrate ] , we simulate observing the rf signal with a single - dish telescope for increasing values of the sample size . 
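a self-contained version of this simulation is sketched below; the random seed and sample counts are arbitrary, so the panel-by-panel numbers quoted next (from the authors' own realization) will differ slightly from what this prints. each estimate averages the squared gaussian voltages over n samples, and the scatter of the estimates is compared with the square-law prediction (mean power) * sqrt(2/n).

```python
import numpy as np

rng = np.random.default_rng(1)
mean_power = 4.0      # variance of the zero-mean Gaussian voltage stream (arb. units)

def power_estimates(n_samples):
    """Independent estimates of the noise power, each from n_samples points."""
    n_obs = max(200, 2_000_000 // n_samples)
    v = rng.normal(0.0, np.sqrt(mean_power), size=(n_obs, n_samples))
    return (v ** 2).mean(axis=1)

for n in (100, 1000, 10000):
    est = power_estimates(n)
    predicted = mean_power * np.sqrt(2.0 / n)   # radiometer prediction for n samples
    print("n=%6d  measured sigma=%.3f  predicted sigma=%.3f  snr=%.1f"
          % (n, est.std(), predicted, est.mean() / est.std()))
```

the three panels of figure [ integrate ] discussed next show the authors' realization of the same experiment.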
in the left panel ,we break the datastream of samples into observations each containing samples .each observation provides an estimate of the noise power and is placed into the corresponding bin of the histogram .the distribution peaks near the value of 4.0 but with a broad uncertainty of , yielding an snr of 7.07 .the expected uncertainty of the variance , , is 0.566 .thus , the prediction of the radiometer equation matches the simulated result to . in the second panel ,we increase the size of each observation to 1000 samples , and place each of the resulting noise power estimates into the corresponding power bin .the peak of the distribution remains close to 4 while the width of the distribution becomes narrower .the uncertainty is now and the snr is now 22.6 . in this case , the radiometer equation predicts an uncertainty of . finally , in the third panel , the size of each observation is increased to 10000 samples .the resulting uncertainty in the noise power is now down to , again matching the prediction of the radiometer equation .the snr is now 70.2 . to summarize ,we have increased the observation time by a factor of 100 and the snr has improved by a factor of 10 .the radiometer equation applies equally well to an interferometer .however , because cross - correlation represents a multiplication of two independent , zero - mean rf signals ( in contrast to auto - correlation which effectively squares a single signal ) , the resulting product is not positive definite like it is in figures [ gaussiantwo ] and [ integrate ] .instead , the mean of the distribution gives the correlated power rather than the total power .thus , it can be zero if there is no source in the beam , although the variance will not be zero .a similar numerical simulation shows that the uncertainty of the variance of this cross - correlation product is : .thus , the noise on any given baseline is lower than the single - dish case by is , which is consistent with the general formula that the snr scales as ( e.g. * ? ? ? * ) .this result implicitly assumes that the antennas have equal collecting area and efficiency , and that we are in the limit that the correlated signal is small compared to the noise .of course , the cross - correlation data product is more commonly examined in terms of amplitude and phase rather than power .the expected probability distributions of these component quantities as a function of the snr are given in .we conclude with a note of caution .all receivers ( sis mixers and lnas ) exhibit gain fluctuation at some level ( e.g. hemts , * ? ? ?* ; * ? ? ?* ) , which can lead to a performance that is worse than predicted by the radiometer equation if the statistics of the noise is non - stationary .non - stationary noise typically exhibits a power law spectrum , i.e. noise , and its variance diverges with time .another source of gain fluctuation is cryostat temperature fluctuations that reach the cold mixer , which can be compensated by adjusting the voltage bias of the subsequent lna in real time .regardless of the origin of the instability , the receiver total power output stability vs. integration time is often characterized by the allan variance ( see * ? ? ?* and references therein ). 
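the cross-correlation statement above is also easy to verify numerically: for two independent, equal-power gaussian noise streams, the scatter of the averaged product x*y is smaller than the scatter of the averaged x^2 by a factor of sqrt(2), since var(x*y) = p^2 while var(x^2) = 2 p^2 for noise of power p. the sketch below (arbitrary seed, no correlated source present) checks this.

```python
import numpy as np

rng = np.random.default_rng(2)
power, n, n_obs = 4.0, 1000, 4000

x = rng.normal(0.0, np.sqrt(power), size=(n_obs, n))   # antenna 1, noise only
y = rng.normal(0.0, np.sqrt(power), size=(n_obs, n))   # antenna 2, independent noise

auto = (x * x).mean(axis=1)      # single-dish total-power estimates
cross = (x * y).mean(axis=1)     # baseline correlation estimates (no source)

print("auto  : mean=%.3f  sigma=%.4f" % (auto.mean(), auto.std()))
print("cross : mean=%.3f  sigma=%.4f" % (cross.mean(), cross.std()))
print("sigma ratio = %.3f  (sqrt(2) = 1.414)" % (auto.std() / cross.std()))
```

we now come back to the allan-variance characterization of receiver stability.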
at short integrations , the sensitivity follows the expected curve for white noise but eventually levels off and will begin to increase at longer integrations .thus , in general , the integration time must be kept short to avoid losing sensitivity .the national radio astronomy observatory is a facility of the national science foundation operated under agreement by the associated universities , inc .alma is a partnership of eso ( representing its member states ) , nsf ( usa ) and nins ( japan ) , together with nrc ( canada ) and nsc and asiaa ( taiwan ) and kasi ( republic of korea ) , in cooperation with the republic of chile .the joint alma observatory is operated by eso , aui / nrao and naoj .this research made use of nasa s astrophysics data system bibliographic services and the spie and ieee xplore digital libraries .the authors thank vivek dhawan , anthony r. kerr , and marian w. pospieszalski of nrao for comments , corrections and improvements , and robert kimberk of the harvard - smithsonian center for astrophysics for discussions and research on the radiometer equation .
the primary antenna elements and receivers are two of the most important components in a synthesis telescope . together they are responsible for locking onto an astronomical source in both direction and frequency , capturing its radiation , and converting it into signals suitable for digitization and correlation . the properties and performance of antennas and receivers can affect the quality of the synthesized images in a number of fundamental ways . in this lecture , their most relevant design and performance parameters are reviewed , with emphasis on the current alma and vla systems . we discuss in detail the shape of the primary beam and the components of aperture efficiency , and we present the basics of holography , pointing , and servo control . on receivers , we outline the use of amplifiers and mixers both in the cryogenic front - end and in the room temperature back - end signal path . the essential properties of precision local oscillators ( los ) , phase lock loops ( plls ) , and lo modulation techniques are also described . we provide a demonstration of the method used during alma observations to measure the receiver and system sensitivity as a function of frequency . finally , we offer a brief derivation and numerical simulation of the radiometer equation .
the stewart platform is a parallel manipulator with six degrees of freedom .we will use the ( standard ) variables and , where and are the coordinates of the centre of the top platform , and and denote the euler angles defining the inclination of this platform with respect to the bottom platform , see figure [ fig1 ] . the aim of this paper is to study the singular manifold which is defined by the physical configurations for which it will not be possible to determine the position of the platform uniquely by fixing the lengths of the legs .this is a well - known problem in parallel manipulators .the solution to the forward kinematics problem naturally divides into two cases , namely , a singular and a non - singular . in the non - singular case we recall the work of ji and wu andshow that there are possible isolated singular solutions that correspond to the same legs lengths . in the singular casewe extend the previous analysis and show how to obtain , for a given set of length legs , a set of singular solutions all of them parameterized by a scalar parameter .these solutions are a continuous curves in position space and in rotation space in which the platform moves without changing the values of the leg lengths .this fully characterize the singular manifold and shows that the platform is , in this case , completely singular .spatial rotations in three dimensions can be parameterized using both euler angles and unit quaternions , .a unit quaternion may be described as a vector in the rotation matrix is given by consider the stewart platform shown in figure [ fig1 ] .as shown there , the two coordinate systems and are fixed to the base and the mobile platforms .the platform geometry can be described by vectors , , defined by , , where and are the base and top vertices coordinates , respectively , and is the center point of the top plate .we assume that these points are related by where is a orthogonal matrix ( , where is the identity matrix ) and ,1[$ ] is called the rescaling factor .the coordinates of the base vertices are given by given the position and the transformation matrix between the two coordinate systems , the leg vectors may be written as so the length for each -leg is given by given , and the leg lengths are given by the forward kinematics the six leg lengths , , are given , while and are unknown .let , , and expand ( [ eq : l ] ) , then one gets , or +(1+\mu^2)(x_i^2+y_i^2 ) .\label{eq : l_f_k}\end{aligned}\ ] ] define as and , where then relation ( [ eq : l_f_k ] ) can be written as a linear system with the form where the matrix is given by note that if the base points are all different and belong to a conic section then .the matrix given by ( [ eq : maux ] ) corresponds to the well known braikenridge - maclaurin construction . in the next sections we will show that one can obtain the rotation matrix and the position in terms of the solution of the linear system given by ( [ eq : ls ] ) .the solution to the forward kinematics problem naturally divides into two cases , namely , a non - singular case where and a singular case where . in the singular case ,we obtain for a given set of length legs , , a singular solution parameterized by a scalar parameter .these solutions are curves in position space and in rotation space in which the platform moves without changing the values of the leg lengths . 
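the displayed equations of this section (the quaternion rotation matrix, the leg vectors and the leg lengths) did not survive extraction, so the sketch below uses the standard rotation matrix of a unit quaternion and the usual inverse-kinematics convention, with the top-plate vertices taken as the base vertices rescaled by mu as suggested by the description above; the base geometry and the example pose are arbitrary, not the paper's.

```python
import numpy as np

def rotation_from_quaternion(q):
    """Standard rotation matrix of a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def leg_lengths(p, q, base, mu=0.5):
    """Inverse kinematics |p + R (mu b_i) - b_i|, assuming the top vertices are
    the base vertices rescaled by mu (mu in ]0,1[) in the moving frame."""
    R = rotation_from_quaternion(q)
    legs = p[None, :] + (mu * base) @ R.T - base
    return np.linalg.norm(legs, axis=1)

# six base vertices on the unit circle (the geometry studied later in the paper)
angles = np.deg2rad([5.0, 55.0, 125.0, 175.0, 245.0, 295.0])
base = np.c_[np.cos(angles), np.sin(angles), np.zeros(6)]

p = np.array([0.1, -0.05, 1.2])              # example platform position
q = np.array([0.98, 0.05, -0.08, 0.12])      # example orientation quaternion
print(leg_lengths(p, q, base))
```

with the leg-length relation in hand, the non-singular and singular cases are treated in turn.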
in the case where the six base vertices are not on a conic section, one gets , and so the solution of ( [ eq : ls ] ) , , can be obtained from the first three equations ( [ eq : w1 ] ) , ( [ eq : w2 ] ) and ( [ eq : w3 ] ) determines the rotation parameters , namely , , and the last three ( [ eq : w4 ] ) , ( [ eq : w5 ] ) and ( [ eq : w6 ] ) the position . to determine the rotation parameters consider the equations which are obtained from ( [ eq : w4 ] ) , ( [ eq : w5 ] ) and ( [ eq : w6 ] ) , respectively . eliminating ,one gets , let then the above equations can be written as so , where substituting yields assuming and that ( [ eq : q3sol ] ) and ( [ eq : q2sol ] ) have two roots each , then , is determined by ( [ eq : q1q2 ] ) .consequently , we have a total of four different quaternions . these are where are the roots . to determine the position , consider the equations thus obviously ( [ eq : plane1 ] ) and ( [ eq : plane2 ] ) represent two planes and their intersection is a line with equation given by where is the parameter of the line . the vectors and are given by the line ( [ eq : line ] ) intersects the sphere ( [ eq : sphere ] ) at two points given by where note that in order to exist one should have so , both and are found , and totally they have eight possible different solutions for a given set of leg lengths .in this case , we assume that all points belong to a circle ( we can assume without loss of generality ) , . in this casethe matrix is singular , that is , and in fact , if all points are different and belong to a conic section the rank of is five ( corresponding to the braikenridge - maclaurin construction ). this will be the case if , , and for , .this fact enables us to explicitly compute the factorization of the matrix in terms of the coordinate of the vertices of the base , .these expressions are to big to be shown here but a script for the maxima computer algebra system is available upon request to the author .so the linear system can be put into the for where and is a matrix with rank .the solution of ( [ eq : lnlu ] ) is given in terms of a solution which depends on the value of , which we take to be a free parameter .notice that any other quantity could be used for this purpose , although expression ( [ eq : sphere ] ) suggests that is the good choice .so the expressions given by ( [ eq : q1sol ] ) , ( [ eq : q2sol ] ) , ( [ eq : q3sol ] ) and ( [ eq : q0sol ] ) can be used to determine the the values of the quaternion , the rotation matrix , and the point as a function of the free parameter .the singular manifold of a stewart platform is define by the physical configurations for which it will not be possible to determine the position of the platform uniquely by fixing the lengths of the legs . by considering a simple stewart platform , for which the base vertices are in a circle ( although the result naturally holds for any conic section ) andthe bottom and top plates are related by a rotation and a contraction , it was shown that the platform is always in a singular configuration .it was also shown how to characterize the singular manifold in this case and how it can be parameterize by a scalar parameter .
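as a quick numerical illustration of the rank deficiency exploited in the singular case: the explicit matrix of eq. ([eq:maux]) is not recoverable from the text, so the check below uses the generic conic-incidence matrix with rows (x^2, xy, y^2, x, y, 1), which has the same property, dropping to rank 5 when the six base points lie on a circle (the braikenridge-maclaurin situation) and returning to full rank when one point is moved off it.

```python
import numpy as np

def conic_matrix(points):
    x, y = points[:, 0], points[:, 1]
    return np.c_[x**2, x*y, y**2, x, y, np.ones_like(x)]

theta = np.deg2rad([10.0, 70.0, 130.0, 200.0, 260.0, 320.0])
on_circle = np.c_[np.cos(theta), np.sin(theta)]          # six points on a conic
print("rank (on circle) :", np.linalg.matrix_rank(conic_matrix(on_circle)))

off_circle = on_circle.copy()
off_circle[0] *= 1.1                                     # move one point off the conic
print("rank (perturbed) :", np.linalg.matrix_rank(conic_matrix(off_circle)))
```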
this paper presents a study of the characterization of the singular manifold of the six - degree - of - freedom parallel manipulator commonly known as the stewart platform . we consider a platform with base vertices in a circle and for which the bottom and top plates are related by a rotation and a contraction . it is shown that in this case the platform is always in a singular configuration and that the singular manifold can be parameterized by a scalar parameter .
optical wide - field surveys with a high cadence has recently emerged as a potential powerful tool to search for optical transient phenomena , and hence to advance our understanding of astrophysics of compact and high - energy objects .the palomar transient factory ( ptf ) survey with the palomar 1.2-m schmidt telescope and the kiso supernova survey ( kiss ) with the kiso wide field camera ( kwfc ) on the kiso 1.0-m schmidt telescope have discovered new species of transients with timescales of hours to days from supernovae , novae , to active galactic nuclei .high - cadence and wide - field monitoring surveys with the hyper suprime - cam on the subaru 8.2-m telescope and the kepler space telescope have successfully obtained transient light - curves originating in shock breakouts of core - collapse supernovae .these results demonstrate importance of high - cadence wide - field survey observations to find rare and short - duration transients . unsurprisingly, some next - generation survey projects , such as 8.2-m large synoptic survey telescope ( lsst ) and zwicky transient facility ( ztf ) on the palomar schmidt telescope , will be also high - cadence and cover wide fields for galactic , extra - galactic , and solar - system objects .the tomo - e gozen is a wide - field cmos camera on the kiso 1.0-m schmidt telescope under development at the time of writing , and is expected to be completed in 2018 .it is optimized for movie observations with sub - second to seconds time - resolutions , and once completed , will be capable of taking consecutive frames with a field - of - view of 20 deg at 2 frames per second ( fps ) by 84 chips of 2k 1k cmos sensors .the primary objective of observations with the tomo - e gozen is to catch rare and fast transient phenomena with a time duration shorter than 10 seconds , such as optical counterparts of fast radio bursts , giant pulses in millisecond pulsars , gamma - ray bursts , and binary neutron - star mergers .the wide - field movie observation with the tomo - e gozen is also ideal for exploring fast moving objects , including near - earth objects and space debris .such phenomena can not be explored by the projects using ccds like ptf , ztf and lsst , whose time resolutions are 60 , 30 and 15 seconds , respectively . a prototype model of the tomo - e gozen with 8 cmos chips on the kiso schmidt telescope( hereafter , the tomo - e pm ; * ? ? ?* ; * ? ? ?* ) has been developed , which has achieved the frame rate of 2-fps so far .we have obtained movie data with a format of fits cube for each cmos sensor . a data filetypically consists of 400 frames of the same field images with pixels .the data rate of the tomo - e pm is mb s , corresponding to tb per night .then , an observation with the complete model of the tomo - e gozen will produce a huge amount of data of tb per night .it is therefore practically indispensable to compress the data .the data - compression methods have two types : lossless and lossy compressions . since random noise pervades all the pixels in our case ,the former is not effective .for example , the popular lossless tool gzip reduced our movie data by only % .some lossless methods were proposed for astronomical data sets . in this paper , we discuss the latter , lossy compression .various lossy compression algorithms have been proposed for application to astronomical data , such as jpeg2000 , the method using singular value decomposition ( svd ; * ? ? ?* ) and wavelet . 
none of them , however , are so far optimized for astronomical `` movie '' data .commercial compression tools for movie data like mpeg4 are readily available .however , we think that at least mpeg method is not appropriate for our data. this method divides images into box regions , and for each box region it compresses the data based on the discrete cosine transform .the level of compression is not uniform , then the fluctuation of pixel intensities becomes different in a source and a background region used for aperture photometry .it results in uncertainty for the photometric intensity of point sources . here, we propose an efficient method to compress the movie data for both the space and time domains simultaneously in the lossy - compression manner but without losing signals of transient phenomena , with a calculation speed comparable or faster than the production rate to avoid accumulating yet - unprocessed data .in order to compress the movie data obtained by the tomo - e gozen , we choose to use low - rank matrix approximation to reduce the data size without losing transient events . before applying it, we re - arrange the pixel values contained in a movie data set ( frames of images with pixels ) into a matrix with rows and columns ( figure [ fig : matrix_decomp_merge ] ) . here , , , and are , , and , respectively .the concept of low - rank matrix approximation with sparse matrix decomposition was originally proposed by . improved the algorithm and the computational speed , and developed godec .since the computational speed is critical in the tomo - e gozen application , we develop our method based on godec . in our case, an original data matrix is decomposed into , where , , and are a low - rank , sparse , and noise matrix , respectively ( see figure [ fig : matrix_decomp_merge ] ) , so that the following function is minimized , subject to and , where denotes the frobenius norm of a matrix .the parameters and control the rank and cardinarity of the low - rank and sparse matrices , respectively . for the low - rank matrix , we check the distribution of singular values , applying singular value decomposition ( svd ) , as shown in the main panel of figure [ fig : sigval ] .the matrix is expressed as , where and are orthogonal matrices , and is a diagonal one .then , we set the rank of the low - rank matrix by setting zeros for the singular values at indices larger than the rank . within the sparse matrix , the transient events can be easily extracted , because these events are innately sparse .the time variation of the sky background can be monitored by checking the noise matrix .after the data matrix is decomposed into three matrices , , , and , further data processing is necessary , because otherwise the data size would remain three times larger than that of the original data .the low - rank matrix is easily compressed to three small matrices , as shown in figure [ fig : matrix_decomp_merge ] . for the sparse matrix , the frames that contain a transient event(s )must be preserved , and the others should be discarded . for this reduction ,we use machine - learning methods to select point sources , which has been already established ( e.g. , * ? ? ? * and references therein ) . 
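a minimal version of the decomposition x = l + s + g can be written with a truncated svd for the low-rank step and a hard-thresholding (quick-select style) step for the sparse part; the randomized projections used by the full godec algorithm for speed are omitted, and the toy movie below (steady stars plus one single-frame transient) is invented purely for illustration.

```python
import numpy as np

def lowrank_sparse(X, rank, card, n_iter=20):
    """Alternating decomposition X ~ L + S + G: L is the best rank-`rank` fit
    to X - S (truncated SVD here, instead of the randomized projections used
    by the full GoDec), and S keeps the `card` largest-magnitude entries of
    X - L (a quick-select style hard threshold)."""
    L = np.zeros_like(X, dtype=float)
    S = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = X - L
        thresh = np.partition(np.abs(R), R.size - card, axis=None)[R.size - card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S, X - L - S

# toy "movie": 400 frames of a 32x32 field, reshaped to a 400 x 1024 matrix
rng = np.random.default_rng(3)
frames, npix = 400, 32 * 32
stars = rng.random(npix) < 0.02                            # a few steady point sources
sky = 1.0 + 0.05 * np.sin(np.linspace(0.0, 6.0, frames))   # slow background variation
movie = np.outer(sky, 50.0 * stars + 10.0) + rng.normal(0.0, 1.0, (frames, npix))
movie[200, 500] += 80.0                                    # a single-frame transient

L, S, G = lowrank_sparse(movie, rank=3, card=50)
print("transient recovered in S :", round(S[200, 500], 1))
print("residual noise rms       :", round(G.std(), 2))
```

returning to the remaining component of the decomposition: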
for the noise matrix , we obtain some statistics to summarize the distribution of the pixel values and remove the matrix .then , the number of pixels can be reduced from the original number of to where is a rank of the low - rank matrix , and is the number of frames of the sparse matrix , which contains transient events .typically , is about and is smaller than 10 .therefore , the reduction factor of the data is .the original code of godec was implemented with matlab .however , we find the original code to be impractical to apply to our movie data due to the speed of computation and memory consumption . we have rewritten the godec code with c++ , utilizing openblas and lapack libraries .we use quick select , instead of full sorting , to select non - zero elements for a sparse matrix in the godec algorithm . made by the decomposition .the central panel shows the distribution of singular values of the matrix as a function of the index of the diagonal matrix , obtained by svd .the five images in the left - hand side show stable point sources , where each image is the decomposed component in the matrix corresponding to the top five singular values .the upper panel shows the time variation of intensity of these components in the matrix in gray scale ., width=302 ]we used a movie dataset of a cmos sensor for 400 frames obtained with the tomo - e pm in 2015 december , which contains some transient events lasting for a short duration .panels ( a ) and ( e ) of figure [ fig : ols_img ] show sub - array images with 300 300 pixels in two different time - frames , which contained a transient point source and a meteor , respectively .we applied the decomposition to the data by setting and .panels ( b , c , d ) and ( f , g , h ) of figure [ fig : ols_img ] show the result . a transient was extracted in the sparse matrix ( panel c ) and a line generated by a meteor was also detected ( panel g ) .in contrast , the low - rank image ( panels b , f ) did not contain any transients .these results confirm that the decomposition was successful .figure [ fig : sigval ] shows the singular values obtained by svd of the low - rank matrix , suggesting that the rank of 10 is sufficient to preserve the information of stable point sources .figure [ fig : noise ] shows the pixel values of the noise images .the histograms have a symmetrical shape , no bias and no anomaly structure . in figure[ fig : noise_g_m ] , pixel intensities of noise images were plotted against those of original images , which shows that the intensities of are independent of the original intensities .we found that the photometric intensity of point sources in the low - rank matrix is reduced from the original value , as shown in figure [ fig : intensity_ratio ] .however , it is possible to recover the original intensity as follows .the photometric intensity of a point source is measured by the standard aperture photometry . to model the distribution of the pixel intensity of a point source , namely point spread function ( psf ) , we assume a two - dimensional gaussian function with symmetrical widths for simplicity. 
that is , the pixel value at a distance from the center of the point source is given by where and are the fwhm and the total intensity of a point source , respectively .then , the total intensity of the pixels within an aperture radius is .\ ] ] for the residual from the low rank approximation , the pixels lower than a threshold ( ) are separated into the sparse matrix or the noise matrix .this applies for the psf , which results in reducing the aperture size of a point source to , where .the corresponds to the lower threshold of pixel value selected for the sparse matrix .then , the fraction of the photometric intensity in the low - rank matrix ( ) to the original one ( ) is approximated by figure [ fig : intensity_ratio ] shows the model curve of for , obtained from the sparse matrix ( figure [ fig : sparse ] ) , and it is in good agreement with the actual data .this implies that the original intensities are recovered well from the low - rank matrix .next , we examined the sparse matrix , using the transient point source that appeared in the frame of the upper row of figure [ fig : ols_img ] .we extracted the source from the original ( panel a ) and sparse ( panel c ) matrices , and found that the intensity of the latter was 74% of the former .this would not be a problem , because we can simply preserve the original image data in any time - frame that is found to contain a transient source(s ) .finally , the processing time of the decomposition was 320 s for the movie data with pixels and frames , by setting 10 for the rank of a low - rank matrix , for which we used a budget computer , equipped with a cpu of intel xeon processor e5 - 1630 ( 3.70 ghz ) consisting of eight cores .the processing time is 1.6 times longer than the duration of observation of 200 s ( s frames ) , which is sufficiently fast for daily observations , as long as one cpu is used for each sensor .we have proposed to use the low - rank matrix approximation with sparse matrix decomposition in order to compress the movie astronomical data without losing transient events . compared with the conventional low - rank matrix approximation with principal component analysis and svd, our method has an advantage of preserving the transients in the sparse matrices , which is essential for the transient - search project with the tomo - e gozen .although the value of was chosen by hand in the current study , it should be chosen so as to maximize the number of the transients which would be selected by the following machine - learning method .if the optimal is small , further data compression is possible for the sparse matrix by storing numbers for the indices and values of non - zero elements of . in the machine - learning step for the transient detection , either of and can be used .the latter data may be better , but we have not yet confirmed .we will present these studies in the future .our method may miss transient sources which have low significance in each frame but are high significance by stacking multiple frames , if they have time durations of more than a few seconds . 
in our policy, we discard this type of sources , because the primary objective of the tomo - e gozen project is discovering transient sources with a short time scale .in addition , it is possible to keep these transients by stacking raw frames in various time durations like 5 , 10 , 50 seconds and so on , before applying our compression method .now , we are developing both the hardware and software systems of the tomo - e gozen survey , including the cmos camera , data analysis pipeline , and scheduling .our proposed method has successfully overcome one of the challenges that the project faces .the tomo - e gozen will pioneer a new field of astronomy , `` movie astronomy . ''this research is supported by crest and presto , japan science and technology agency ( jst ) , and in part , by jsps grants - in - aid for scientific research ( kakenhi ) grant number jp25120008 , jp25103502 , jp26247074 , jp24103001 , jp16h02158 and jp16h06341 .this work was achieved using the grant of joint development research by the research coordination committee , national astronomical observatory of japan ( naoj ) .the tomo - e pm was developed in collaboration with the advanced technology center of naoj and is equipped with the full - hd cmos sensors developed in collaboration with canon inc .
optical wide - field surveys with a high cadence are expected to create a new field of astronomy , so - called `` movie astronomy , '' in the near future . the amount of data from such observations will be huge , and hence efficient data compression will be indispensable . here we propose low - rank matrix approximation with sparse matrix decomposition as a promising solution that reduces the data size effectively while preserving sufficient scientific information . we apply one of these methods to the movie data obtained with the prototype model of the tomo - e gozen mounted on the 1.0-m schmidt telescope of kiso observatory . once full - scale observations with the tomo - e gozen commence , it will generate tb of data per night . we demonstrate that the data are compressed by a factor of about 10 in size without losing transient events such as short optical transient point sources and meteors . the intensity of point sources can be recovered from the compressed data . the processing runs sufficiently fast compared with the expected data - acquisition rate in the actual observing runs .
in a web application we are provided with a rich view of each user .for example in a video streaming application , like netflix , we can observe not only their preference for different types of content but also how those preferences change with respect context , such as time of day , day of week , device , and so on .an important contextual variable that influences a customer s preferences is their geographical location .it is reasonable to assume that customers who live in close proximity may have similar viewing preferences .hence , a model is required that can capture not only a customer s latent viewing preferences , but also the relationship between those and their location . to capture both these aspects we seek to model them in a unified model so that both location and viewing behavior can take advantage of information in each modality .for this task , we employ a nonparametric latent factor model to jointly model a customer s viewing history and their geographical location .nonparametric mixed membership style techniques have shown great promise in modeling large collections of documents [ 1 ] .given that there is more information available for a document ( author , date of publishing , metadata etc . )than just its content , it seems natural to extend these approaches to model all these modalities in a unified approach .hence there have been many attempts in applying nonparametric latent factor modeling for such data sets [ 2 ] .our approach uses a similar model structure as [ 2 ] which attempts to model document - level features along with the content of documents . for the problem under consideration, we view a customer s viewing history as an unordered collection of discrete view events from netflix s video catalog .geographical locations of customers are expressed in longitudes and latitudes .the geographical locations can be viewed as points on a 2-sphere .therefore we use an approach similar to [ 3 ] using von mises - fisher distribution to describe geographical data .the full model combines these sub - components ( viewing history and geographical location ) and is able to learn embeddings for customers viewing history data , geographical location data , and the interactions between the two .the following sections detail how we model these components , how we infer that model ( in a way that scales to large - scale data sets ) , and finally results from an internal netflix data set that illustrates that the model is indeed able to capture interactions between geographical information and viewing preferences .the component of our model which describes customers streaming history data is a nonparametric mixed membership model that uses a hierarchical dirichlet process to learn latent video factors ; each of which are multinomial distributions over content catalog .similarly , the component of our model that models geographical locations uses hierarchical dirichlet process to learn latent factors for geographical locations ; each of which are von mises - fisher distributions over a 2-sphere .finally the relationship between the two latent spaces is expressed through a dirichlet process over the interaction of video and location latent factor spaces .we summarize our modeling assumptions as follows and then comment on different components of the model .+ . 
for geographical location data of customers ,we need a distribution which can express the spherical nature of the data .we make use of von mises - fisher distribution for modeling locations .we use the following parameterization of von mises - fisher distribution : where ; and are the parameters of the distributions ; is modified bessel function of first kind with order computed at .this parameterization requires locations to be expressed in euclidean coordinates .hence , we convert geo - spherical coordinates to euclidean system .the prior distributions for and are : the prior distribution of is chosen to be a von mises - fisher distribution itself which is conjugate to von mises - fisher likelihood .the concentration parameter does not have a conjugate prior .we use a log normal prior for similar to [ 3 ] .as mentioned above , we view customers videos streaming history as unordered collections of videos watched from the netflix s catalog .we use a dirichlet - multinomial conjugate model for representing video streaming history of customers : represents a single draw from the multinomial distribution on a video catalog of size * v*. the interaction of video and geographical latent spaces is modeled by a dirichlet process with a product base measure i - e the base measure is on atoms which are pairs of dirichlet processes drawn from the dirichlet process on location and video latent factors respectively .this construction allows the model to flexibly learn as many interactions between video preferences and geo - locations as needed to best express the data .we use a sampling based approach for posterior inference . due to dirichlet - multinomial conjugacy in the video component of the model , we collapse out for each latent video factor . for the location component of the model , prior distribution of ( von mises - fisher )is conjugate to von mises - fisher likelihood , hence we collapse out as well for each latent location factor .the prior distribution of ( log - normal ) is not conjuage to von mises - fisher likelihood , hence we use metropolis - hasting algorithm to sample for each latent location factor . for the nonparametric components , we make use of the direct assignment scheme described in [ 1 ] .hence , instead of sampling atoms , we sample indicators to those atoms .specifically , ( taking values in t = 1, ... , ) is the indicator to the atom , ( taking values in s = 1, ... , ) is the indicator to the atom , and ( taking values in z = 1, ... , ) is the indicator to the atom .additionally , we sample the global dirichlet processes and according to the direct assignment scheme in [ 1 ] .the sampling distributions for these latent variables are as follow : above , represents the complete conditional distribution of the variable .notations like represent conditional counts ; count of variables and ignoring customer for example .notations like and represent marginal counts ; marginal counts of variable , marginalizing over and respectively for all customers except ( subscripts and are used to differentiate the two marginals involving ) .in order to scale our sampling based posterior inference , we use an approximate parallel gibbs sampling approach as described in [ 4 ] . 
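before turning to the experiment, the geographical component above can be made concrete with a few lines: the snippet converts longitude/latitude pairs to unit vectors on the 2-sphere and evaluates the log-density of a 3-dimensional von mises-fisher distribution, whose normalizer has the closed form kappa / (4 pi sinh kappa). the latent-factor centre, concentration and customer locations are made-up examples, not values learned from the netflix data.

```python
import numpy as np

def lonlat_to_unit_vector(lon_deg, lat_deg):
    """Longitude/latitude in degrees -> points on the unit 2-sphere."""
    lon, lat = np.deg2rad(lon_deg), np.deg2rad(lat_deg)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def vmf_log_density(x, mu, kappa):
    """log p(x | mu, kappa) for the p=3 von Mises-Fisher distribution."""
    log_norm = np.log(kappa) - np.log(4.0 * np.pi * np.sinh(kappa))
    return log_norm + kappa * (x @ mu)

# a hypothetical latent location factor concentrated around Los Angeles
mu = lonlat_to_unit_vector(-118.24, 34.05)
kappa = 500.0

customers = lonlat_to_unit_vector(np.array([-118.4, -122.4, -73.9]),
                                  np.array([33.9, 37.8, 40.7]))
# log-density falls off with angular distance from the factor's mean direction
print(vmf_log_density(customers, mu, kappa))
```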
for our experiment we use an internal data set that contains video viewing history for one million netflix customers along with their geographical locations . we show some examples of the latent video and geographical factors learned by our model , as well as the top three video topics for the two geographical latent factors found in the united states of america . we use bayesian non - parametric machinery to combine the geographical and viewing - behavior information of netflix customers for location - aware video recommendations . the approach presented can also be helpful in situations where the viewing history data are sparse or in cold - start scenarios . [ 1 ] teh , y.w . , jordan , m.i . , beal , m.j . , & blei , d.m . ( 2006 ) hierarchical dirichlet processes . _ journal of the american statistical association , _ 101 , 1566 - 1581 .
we are interested in learning customers ' video preferences from their historic viewing patterns and geographical locations . we consider a bayesian latent factor modeling approach for this task . in order to tune the complexity of the model to best represent the data , we make use of bayesian nonparametric techniques . we describe an inference technique that can scale to large real - world data sets . finally , we show results obtained by applying the model to a large internal netflix data set , which illustrate that the model is able to capture interesting relationships between viewing patterns and geographical location .
recently , quantum portfolios of quantum algorithms encoded on two - level systems ( qbits ) have been reported .it has been shown that maximizing the efficiency of quantum calculations can be accomplished via the formation of portfolios of the algorithms and by minimizing both the running time and its uncertainty .furthermore , these portfolios have been shown to outperform single algorithms when applied to certain types of algorithms with variable running time such as the `` las vegas '' algorithms and np - complete problems such as combinatorial search algorithms . in this paper a discussion of quantum portfolios of algorithms encoded on qbits and their continuous variables ( cv ) version ( here , harmonic oscillators )is presented .continuous variables quantum computation performed with harmonic oscillators is discussed in .a quantum portfolio of algorithms is formed and a risk neutral model , analogous to the traditional black - scholes and merton options valuation model , is obtained from the underlying stochastic equations .the quantum algorithms are here encoded on simple harmonic oscillator ( sho ) states , and a fokker - planck equation for the glauber p - representation is obtained as a starting point for the analysis .a portfolio of qbits is also formed and the resultant fokker - planck equation of the qbit polarization is used to obtain the risk neutral valuation of the portfolio and measurement option .the results should prove useful in quantum computation and decoherence .a simplified model for quantum computation is proposed wherein the algorithms are encoded on simple harmonic oscillator basis states and are in the presence of a thermal bath , the temperature decoherence effects needing to be taken into account as is usually the case in real - world applications . the quantum computation utilizing harmonic oscillator basis sets representations has been discussed elsewhere .representations of computation operators or effective operators can be made and the basis sets of these operators can be expressed as harmonic oscillator basis sets . for this case , we will also consider damping , this could be from other noise sources , external or system specific .the harmonic oscillator can be described by a master equation such as {eqn1.png } \label{eqn1}\ ] ] here , and are destruction and creation operators as usual , and is the density matrix operator .the mean thermal excitation due to the thermal bath is , and is the damping rate . to obtain a fokker - planck equation for the sho ,we first suppose that has a glauber p - respresentation {eqn2.png } \label{eqn2}\ ] ] substituting eq.([eqn2 ] ) into eq.([eqn1 ] ) we obtain the fokker - planck equation with mixed derivatives {eqn3.png } \label{eqn3}\ ] ] the fokker - planck equation is put into a regular form if we change variables using quadratures , and eq.([eqn3 ] ) becomes {eqn4.png } \label{eqn4}\ ] ]the fokker - planck equation for the sho implies that there is an underlying stochastic differential equation(s ) of the ito form {eqn5.png } \label{eqn5}\ ] ] {eqn6.png } \label{eqn6}\ ] ] here , the wiener processes ( noise ) are taken to be a gaussian white noise , and are delta correlated .these stochastic processes can now be included in a portfolio analysis . to construct a portfolio ,we first look at the case where the algorithm is encoded on two harmonic oscillators . 
it can be seen that a generalization to the multi - asset case proceeds straightforwardly .we write the legendre transform for the n - asset portfolio {eqn7.png } \label{eqn7}\ ] ] and here , with , the function is the analogue to the ( call ) option in finance . in our case, it can represent the probability distribution for measurement of the observables of the quantum algorithm(s ) .these observables are legendre transformed such that the state function , the portfolio , evolves at the known or _ wanted _rate such that .the n algorithms evolve independently , regardless of initial phase , and will have differing instantaneous values in their stochastic sample paths .they all have in common the same value for their moments ( means and variances ) , as from the point of view of their statistics they have the same drift and diffusion coefficients and are equivalent on the average .this is different from the financial portfolio case where the portfolio of shares of a stock(s ) assets have the same instantaneous value for all the shares ( say , of a particular stock ) and would resemble a portfolio of different traded stocks of one share per stock that are different in prices at any instantaneous period in time yet that equivalently have the same values for their 1st and second moments of means and variances .next we expand in a taylor series to first order in time , and to 2nd order in the variables , since {eqn8.png } \label{eqn8}\ ] ] setting for the two asset case , we take the differentials for .we have {eqn9.png } \label{eqn9}\ ] ] and we substitute the expressions for and for and .thus we are able to make the choice for the multipliers directly as the terms .this immediately eliminates the drift terms in the original process , in favor of the known rate drift terms .this is a general feature in the sense that much more complex ( nonlinear ) process drifts can be eliminated in favor of the known rate linear drifts . upon making the substitutions, we obtain the backwards fokker - planck equation that corresponds to the black - scholes options pricing model .this pde for the two - asset ( and similarly for n algorithms ) portfolio is {eqn10_fromtex.png } \label{eqn10 } \label{10}\ ] ] the two - point version of this equation can be solved with the following choice for the function {eqn11.png } \label{eqn11 } \label{11}\ ] ] with this choice , the pde is transformed to a standard form backwards fokker - planck pde .this backwards pde is solved by . 
herethe two - point function defined in eq.([11 ] ) also solves the forward fokker - planck equation {eqn13.png } \label{eqn13 } \label{13}\ ] ] thus the underlying stochastic processes are of the form {eqn14.png } \label{eqn14 } \label{14}\ ] ] {eqn15.png } \label{eqn15 } \label{15}\ ] ] the solution of the one - point function is a gaussian .we obtain the solution by transforming the stochastic differential equations to a drift - free form .the resultant one - point function is then {eqn16.png } \label{eqn16 } \label{16}\ ] ] and similarly for the other variable .here the diffusion constant is related to the damping rate as .the observables statistics 2-point function is formally solved by {eqn17.png } \label{eqn17}\ ] ] and we can make the boundary value choices for the observables at the later time {eqn19a.png } \label{eqn19a}\ ] ] where these boundary values correspond to the terminal values of the evolution of the algorithms .these boundary values allow us to immediately integrate and obtain the solution for {eqn19.png } \label{eqn19}\ ] ] the interpretation of this formal result for the delta function boundary values is that the value of the distribution of the observables of the quantum computation algorithm is evaluated at the later time and then evolved backwards to the present time similarly to the evaluation of the price of a financial instrument such as the call or put option in portfolio theory .other terminal boundary values can be made , such as step functions in direct analogy to insurance instruments . closed form solutions are also possible ,as the step function limits the integration and can be solved by several methods , and will not be discussed further here ... note that due to risk management strategies in portfolio theory having a direct correspondence to observation strategies for portfolios of quantum computation algorithms , alternative boundary values are likely to become important for different applications .also , as there are generalizations of the portfolio theory to beyond the risk neutral valuation model , the simple delta function boundary values and formal solution can be considered as a simplest observation strategy .an example of the implementation of the strategy of observations of the quantum computations is the analog of the delta hedging strategy .the deltas and should be kept as close to zero as possible to ensure the evolution of the portfolio as .this can be done by observing algorithms ( selling ) or adding algorithms ( buying ) to the portfolio at each computation time step .in a paper that describes realistic models of interferometric continuous measurement of qbit readout systems relevant to quantum computing , the author was able to describe the polarization of the qbit in terms of a fokker - planck pde that accounted for the randomness inherent in the measurement process .the result due to the interferometric measurement markov process was given by ( a similar analysis can be performed for the qbits in the presence of thermal noise ) {eqn20.png } \label{eqn20}\ ] ] here is the diffusion coefficient and is the drift velocity .the microscopic qbit polarization yields the time dependent polarization since here {eqn21.png } \label{eqn21}\ ] ] is the polarization from eq.([eqn21 ] ) .the other terms in eq.([eqn20 ] ) are , the photon flux in photons per second , and , the phase shift due to perturbation of the qbit by a photon . 
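the ito equations and the gaussian one-point function in this section appear only as image includes in the source and cannot be reproduced here; as a generic illustration of a linear-drift, constant-diffusion quadrature process of the type described, the euler-maruyama sketch below integrates dx = -(gamma/2) x dt + sqrt(D) dW over many sample paths and checks that the one-point distribution relaxes to a gaussian of variance D/gamma. the parameter values and the notation are ours, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

gamma, D = 0.5, 0.2            # illustrative damping rate and diffusion constant
dt, n_steps, n_paths = 1e-3, 20000, 5000

x = np.ones(n_paths)           # all sample paths start from the same quadrature value
for _ in range(n_steps):
    x += -0.5 * gamma * x * dt + np.sqrt(D * dt) * rng.normal(size=n_paths)

print("sample mean        : %.4f" % x.mean())   # decays as exp(-gamma t / 2)
print("sample variance    : %.4f" % x.var())
print("stationary D/gamma : %.4f" % (D / gamma))
```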
the underlying ito stochastic differential equation can be readily obtained and is {eqn2351.png } \label{eqn2351}\ ] ] and can be seen to contain complicated drift terms as well as the diffusion coefficients. this however will be the replacement by a linear drift term mentioned in the previous derivation .the portfolio analysis proceeds as before and we obtain for a two - qbit measurement distribution the following black - scholes pde {eqn24.png } \label{24}\ ] ] defining the function as and substituting , we obtain the pde {eqn25.png } \label{25}\ ] ] this equation implies that the underlying stochastic process , say for , is now of the form {eqn26.png } \label{26}\ ] ] the solution of the one - point function is ( for one asset ) again a gaussian form .the drift - free transformation and allows us to obtain the solution for the one - point function {eqn27.png } \label{27}\ ] ] the choice of the boundary terms is again the simplest possible delta functions corresponding to the instantaneous measurement .performing the integration we obtain the formal solution {eqn28.png } \label{28}\ ] ]in this article , we have derived a risk neutral valuation model analogous to the black - scholes and merton derivatives valuation model .this model is derived for portfolios of assets of quantum algorithms in continuous variables .the fokker - planck equations for the qbits are the starting points for the derivation of the formal solutions for the derivatives which in this interpretation are the distributions for the measurement of the observables of the quantum computation algorithms of qbits encoded on harmonic oscillators and optics. risk neutral rate of evolution of the value of the portfolio and for which the statistics of the computation are derived for future terminal computation time formal solution brought back to the current time at the known or desired rate of return .this can be interpreted in several ways , including the probability of successful computation per computation time step , the number of successful computations algorithms per unit time step , etc . and is an analogy to the rate of return on investment in finance .a connection to financial risk analysis and quantum computation is thus established , as the traditional trading strategies of risk minimization by portfolio optimization hedging schemes such as delta neutral , etc ., have a direct correspondence to observation strategies . 99 s.m .maurer , t. hogg , b.a .quant - ph/0105071 .url : aps.ariv.org .b.a.huberman , r.m.lukose , t.hogg , science(275 ) , 51 ( 1997 ) .plenio , j.hartley , and j .eisert .arxiv : quant - ph/0402004v218 mar 2004 .+ adam brazier and martin b. plenio .arxiv : quant - ph/0304017 v227 dec 2004braunstein , h.j .physical review letters ( 80 ) , 869 ( 1998 ) . + f. black , m.j .scholes , journal of political economy ( 81 ) , 637 ( 1973 ) .merton , the journal of finance ( 29 ) , 449 ( 1974 ) .cox , s.a .ross , journal of financial economics ( 3 ) , 145 ( 1976 ) .handbook of stochastic methods , 2nd ed . ,springer , 1997 .gardiner , quantum noise , springer , 1991 .sidles , quant - ph/9612001 , url : aps.arxiv.org .j.a.sidles , applied physics letters 55 ( 25 ) p.2588 ( 1989 ) .sidles , physical review letters , 68 ( 8) p.1124 ( 1992 ) .d. stariolo , phys .lett . a 185 , 262 ( 1994 )
quantum portfolios of quantum algorithms encoded on qbits have recently been reported . in this paper the continuous - variables version of quantum portfolios is discussed . a risk neutral valuation model for options dependent on the measured values of the observables , analogous to the traditional black - scholes valuation model , is obtained from the underlying stochastic equations . the quantum algorithms are here encoded on simple harmonic oscillator ( sho ) states , and a fokker - planck equation for the glauber p - representation is obtained as a starting point for the analysis . the observation of the polarization of a portfolio of qbits is also discussed , and the resultant fokker - planck equation is used to obtain the risk neutral valuation of the qbit polarization portfolio .
econophysics is the study of economic systems by employing methods and tools developed in physics . up to now , many economists have been worrying that econophysicists are just reinventing the wheel ; while many physicists are studying properties of toy economic models that are not directly relevant to economics . in this paper , we investigate the inequality of wealth in a simple - minded econophysical model known as the minority game ( mg ) using the so - called replica trick . by doing so, we hope to make a small step forward in the application of physical methods when studying real economic systems .the mg is a simple - minded model of a complex adaptive system which captures the cooperative behavior of selfish players in a real market . in this game , players have to choose one of the two possible alternatives in each turn based only on the minority sides in the previous turns .the wealth of those who end up in the minority side will be increased by one while the wealth of the others will be reduced by one . to aid the players in making their choice ,each of them is randomly and independently assigned deterministic strategies once and for all when the game begins .each deterministic strategy is nothing but a map from the set of all possible histories ( a string of the minority side of the previous turns ) to the set of the two possible alternatives .all players make their choices according to their current best strategies . in the mg , the complexity of the systemis usually indicated by the control parameter which is the ratio of the size of the strategy space to the size of strategies at play . clearly , the mean attendance of either choice is as the game is symmetrical for both choices .in contrast , the variance of this probability , which is conventionally denoted by , is highly non - trivial .it attains a very small value when , indicating that the players are cooperating . that is why previous studies of the mg and its variants focus mainly on the study of .since the strategies are assigned once and for all to each player , it is possible that some poorly - performing players are somehow forced to cooperate with some well - performing peers .therefore , it makes sense to study the inequality of wealth in mg in detail . in section [ gini_formula ] , we introduce a common method that measures wealth inequality in economics known as the modified gini index .we then study the gini index in the mg numerically in section [ num_re ] .our numerical simulation shows that both the maximal cooperation point and the point of maximum wealth inequality occur around .this confirms our suspicion that the apparent cooperation of players shown in the does not tell us the complete story .in fact , we are able to explain the trend of a modified gini index qualitatively using the crowd - anticrowd theory . in particular ,we find that the cooperation comes along with wealth inequality partially because poorly - performing players can not change their strategies in the mg . in this way, we show that the crowd - anticrowd theory is not only able to explain , but also explains other features of other quantities in the mg . 
in section [ replica ] , we try to reproduce our numerically simulated gini index in the so - called asymmetric phase using the replica method .we recall that one has to average over the disorder variables in the conventional replica method ; the direct application of the replica trick can not provide the wealth distribution of players and thus the gini index of the mg .fortunately , a careful semi - analytic application of the replica method can be used to reproduce the gini index qualitatively as a function of .finally , we wrap up by giving a brief summary of our work in section [ con ] .in order to measure the inequality of wealth among players in the mg qualitatively , we follow our economics colleagues employing the so - called gini index . in the original definition ,the gini index in a population is the mean of the absolute differences between the wealth all possible pairs of players .that is to say , g_j.\ ] ] in the above equation , is the number of players in the population , is the wealth earned by a player divided by the total wealth in the population . andthe s are ranked in ascending order , i.e. , . clearly , ranges from to .the larger , the more serious the wealth inequality . if , the players wealth is uniformly distributed .if , one of the players possesses the total wealth of the population and the wealth inequality is served .however , eq . ( [ ogini_eq ] ) is only applicable in two cases : ( 1 ) all players have positive wealth ; or ( 2 ) all players have negative wealth .since players in the mg may have positive or negative wealth , we can not use , in general , to measure wealth inequality .we employ an extension of , introduced by chen _et al . _ , known as the modified gini index , is given by } , \ ] ] where is defined in such a way that and . for simplicity , we refer to the modified gini index as the gini index from now on . just like the original gini index ,the modified gini index measures the normalized wealth inequality of players .again , ranges from to .the larger the value of , the more serious the wealth inequality . when all players are equally wealthy , i.e. , , the term vanishes and becomes zero .in contrast , if the total wealth of the system is owned by a single player , i.e. , and the term , then attains a value of one as .also , is reduced to the original gini index when all players have positive wealth or all players have negative wealth .moreover , is unchanged if the wealth of each player is multiplied by a non - zero constant .in this section , we investigate the wealth inequality of the players in the mg . since we are only interested in studying the generic properties of the gini index , we average the gini index over independent runs .because measures the normalized wealth distribution of players rather than simply the first and second moments of this distribution , the time of convergence of gini index is much longer than that of the variance of attendance and it differs for different initial configurations of the system .so we employ an adaptive scheme to check for system equilibration before taking any measurement .specifically , in each run , we record the time series of until the absolute difference of between successive steps is less than .then , we obtain the equilibrated value of by using finite size scaling . finally , we take to be the average over measurements each separated by steps .we recall from section [ gini_formula ] that is a measure of the normalized wealth distribution . 
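As a reference implementation of the inequality measure just described, here is a minimal sketch of the original pairwise-difference Gini index. The modified index of Chen et al. adds a correction so that populations with mixed-sign wealth are handled consistently; its exact expression is not recoverable from this copy, so only the plain index (valid when all wealths share the same sign) is shown, with assumed variable names.

```python
import numpy as np

def gini_pairwise(wealth):
    """Original Gini index: mean absolute difference over all pairs,
    normalised by twice the mean wealth.  Valid only when all wealths
    share the same sign (the case covered by the original definition)."""
    w = np.asarray(wealth, dtype=float)
    n = w.size
    diff_sum = np.abs(w[:, None] - w[None, :]).sum()
    return diff_sum / (2.0 * n * n * w.mean())

# toy usage
rng = np.random.default_rng(1)
equal  = np.ones(100)                 # perfectly equal wealth -> G = 0
skewed = rng.pareto(1.5, 100) + 1.0   # heavy-tailed wealth -> noticeably larger G
print(gini_pairwise(equal), gini_pairwise(skewed))
```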
from our numerical simulation, equilibrates logarithmically and slowly although the wealth of players is decreasing in each turn .we will explain the reason for convergence of in detail at the end of this section .we have performed numerical simulations for the cases where players draw their strategies from full strategy space and reduced strategy space respectively .the gini indices obtained in these two cases are very similar . since the analytical investigation performed in section [ replica ] is simpler if we focus on reduced strategy space , we present the numerical results based on reduced strategy space here for consistency .let us study the gini index averaged over the initial conditions versus the control parameter as shown in fig .[ fig : f1 ] .( note that we use to denote the average over the initial configuration of the system . )our numerical results show that the curves of for different coincide . which means that the gini index , just like the variance of attendance , depends only on the control parameter in the mg .we now move on to discuss the properties of the gini index as a function of in detail .[ fig : f1 ] shows that the gini index is small when . in other words ,the wealth of all players is roughly the same in such a case .in fact , the small value of can be explained by the crowd - anticrowd theory as follows . in the small regime , players are likely to have at least one high ranking strategy at each instance , as each player possesses a relatively large portion of strategies of the reduced strategy space .thus , most of the players are using the crowd of high ranking strategies , i.e. , those high ranking strategies are overcrowded . due to the overcrowding of strategies , each strategy alternatively wins and loses one virtual score repeatedly when the same history appears , under the period - two dynamics .that is to say , each strategy has approximately the same probability to win for any history .therefore , all players have roughly the same amount of wealth and this leads to a small gini index .as the control parameter increases , the gini index rises rapidly and subsequently attains its maximum value when the number of strategies at play is approximately equal to the reduced strategy space size . to explain this, we recall that the aim of each player in the mg is to maximize one s own wealth , which is achieved under the maximization of the global profit .subsequently , the attendance of each choice always tends to upon equilibration for all values of since the two alternatives are symmetric in the mg .that is to say , the system always `` distributes '' approximately the same amount of wealth to the population in each turn regardless of the value of . moreover , whenever , unlike in the cases of symmetric and asymmetric phases of the mg , it is not uncommon for a player to hold only low ranking strategies since the number of strategies at play and the reduced strategy space size are of the same order .consequently , a significant number of players are forced to use the crowd of low ranking strategies and keep on losing . on the other hand ,those players picking the crowd of high ranking strategies have a higher winning probability and keep on using those strategies .note that the ranking of the strategies is almost unchanged when . 
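A minimal minority-game run that records a wealth-inequality measure is sketched below. It is only an illustration of the dynamics described above, not the authors' code: strategies are drawn from the full strategy space rather than the reduced one, ties between virtual scores are broken by argmax instead of at random, equilibration checks, ensemble averaging and finite-size scaling are omitted, and the plain pairwise Gini of shifted wealth stands in for the modified index. All parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

N, m, T = 101, 5, 5000                 # players, memory, time steps (placeholders)
S = 2                                  # strategies per player
P = 2 ** m                             # number of possible histories

# each strategy maps a history index to an action in {-1, +1}
strategies = rng.choice([-1, 1], size=(N, S, P))
virtual    = np.zeros((N, S))          # virtual scores of the strategies
wealth     = np.zeros(N)
history    = rng.integers(P)           # current history index

def gini_shifted(w):
    # plain pairwise Gini of wealth shifted to be positive; only a rough
    # inequality indicator (the paper uses the modified Gini index instead)
    w = w - w.min() + 1e-9
    n = w.size
    return np.abs(w[:, None] - w[None, :]).sum() / (2 * n * n * w.mean())

for t in range(T):
    best     = virtual.argmax(axis=1)                # current best strategy per player
    actions  = strategies[np.arange(N), best, history]
    minority = -np.sign(actions.sum())               # side chosen by fewer players wins
    wealth  += np.where(actions == minority, 1.0, -1.0)
    virtual += strategies[:, :, history] * minority  # reward strategies predicting the minority
    history  = ((history << 1) | (1 if minority == 1 else 0)) % P

# conventional control parameter 2**m / N; the paper's precise definition
# (strategy-space size over strategies at play) may differ by a constant factor
print("alpha ~", P / N, "  gini of shifted wealth:", round(gini_shifted(wealth), 3))
```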
as a result ,the wealth distribution of players would become relatively diverse and the gini index of the population attains its maximum value when .actually , the increase in the gini index when can be justified by the frozen probability of the mg . we recall that in the mg a player employs the virtual score system to determine which strategy to use in the next time step . in the asymmetric phase , the probability that a strategy assigned to a player has a virtual score asymptotically higher than all the other strategies assigned to the same player increases as decreases .some players end up using only one strategy after the system equilibrates , they are regard as frozen players .the frozen probability indicates the number of frozen players . a small frozen probability ,i.e. most players in the game keep changing their best strategies , implies that only a few player will keep on winning or keep on losing all the time and the gini index should be low . on the other hand ,a high frozen probability may indicate that while some frozen players are using strategies that win most of time , the best performing strategies for the other frozen players are losing badly .thus , there is a wide spread in wealth distribution of players .the gini index should be high in this case .the frozen probability follows the same trend of gini index as , which further supports the validity of the result of the gini index .moreover , when , it is likely that those frozen players which form the majority of crowds and anti - crowds in the game use anti - correlated strategy pairs , resulting in effective crowd - anticrowd cancellation between frozen players . also , those frozen players who picked the anti - correlated strategy pairs keep winning or keep losing throughout the game . after attaining the maximum value ,the gini index decreases and gradually tends to zero when the control parameter further increases . according to crowd - anticrowd theory ,it is because most of the strategies at play are uncorrelated to each other when the strategy space size becomes much larger than the number of strategies at play .therefore , it is as if each player is making random choices in the game when is large .hence , the winning probability of all strategies is roughly the same . as a result ,the gini index of the population is small in this regime . as we have reasoned above, the winning probability of each individual player is steady after equilibration of the system .since the wealth distribution depends solely on the winning probabilities of individual players , the s , and hence the gini index , converge over a sufficiently long time .moreover , it is easy to check that the s converge logarithmically . therefore , the equilibration time for is much longer than that of .in the previous section , we have explained the wealth inequality of the players in the mg qualitatively .in fact , the system of the mg can be described as a disorder spin system since the dynamics of the mg indeed minimizes a global function related to market predictability . in this section ,we calculate the gini index of the population in mg semi - analytically by mapping the mg to a spin glass .as we shall see , this approach works well whenever .let us start to link the mg , a repeated game with players , to the spin glass . 
in this formalism, every player has to choose one out of two actions corresponding to the two alternatives at each time step .we denote the action of the player at time by .after all players have chosen their actions , those players choosing the minority action win and gain one unit of wealth while all the others lose one . in the mg ,the only public information available to the players is the so - called history , which is the string of the minority action of the last time steps .namely , the history is a string , where denotes the minority action at time . for convenience, we label the history by an index as follows : at the beginning of the game , each player picks once and for all strategies randomly from the strategy space .in fact , a strategy specifies an action taken by the player for all possible histories . in the mg ,agents make use of the virtual score , i.e. , the hypothetical profit for using a strategy throughout the game , to evaluate the performance of a strategy .to guess the next global minority action , each player uses their own current best strategy which is the strategy with the highest virtual score at that moment .assuming each player has strategies which are labeled by and , we define the disorder variables \{ , } as : here we use the spin variable to denote the strategy used by the player at time .thus , the action of this player is given by with the above formalism , we can employ a statistical tool called the replica trick to study the stationary state properties of the mg by solving the ground state of the hamiltonian : where and .note that denotes the average over history and denotes the average over time .in other words , our aim is to find the minimum of defined by : ^n } h\{\vec{m}\ } = - \lim_{\beta \rightarrow \infty } \frac{1}{\beta } \langle \ln z ( \beta ) \rangle_\xi,\ ] ] where the partition function here , denotes the integral of on ^n ]. then we set if and otherwise .therefore , for the history , the action of the player at time can be written as note that the history is generated randomly at each time step in our simulation .in addition , the difference in the numbers of players choosing the two alternatives at time is given by so we obtain the minority side at time after determining the minority side , the wealth of the players , , is updated by we repeat the above algorithm times for the system to equilibrate . 
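The measurement stage just described can be sketched as follows. Two pieces are necessarily approximations here: the magnetisations m_i, which the paper characterises analytically with the replica trick, are obtained below by projected gradient descent on the predictability H(m) = <(Omega^mu + sum_i xi_i^mu m_i)^2>_mu over the box [-1,1]^N (a stand-in, not the authors' method), and the spin-sampling rule P(s_i = +1) = (1 + m_i)/2 is assumed, since the corresponding inequality was lost from this copy. The Gini shown is again the plain pairwise index on shifted wealth.

```python
import numpy as np

rng = np.random.default_rng(3)
N, mem = 101, 5
P = 2 ** mem

# two random strategies per player and their symmetric / antisymmetric parts
a1 = rng.choice([-1, 1], size=(N, P))
a2 = rng.choice([-1, 1], size=(N, P))
omega, xi = 0.5 * (a1 + a2), 0.5 * (a1 - a2)
Omega = omega.sum(axis=0)                      # aggregate symmetric term per history

# stand-in for the replica / ground-state step: H(m) is a convex quadratic on
# [-1,1]^N, so projected gradient descent reaches its minimum
m = np.zeros(N)
for _ in range(4000):
    r = Omega + m @ xi                         # residual, shape (P,)
    m = np.clip(m - 0.02 * (2.0 / P) * (xi @ r), -1.0, 1.0)

# stochastic measurement stage with the history drawn at random each step
wealth = np.zeros(N)
for _ in range(20000):
    mu = rng.integers(P)
    s  = np.where(rng.random(N) < 0.5 * (1.0 + m), 1, -1)   # assumed sampling rule
    actions  = omega[:, mu] + s * xi[:, mu]
    minority = -np.sign(actions.sum())
    wealth  += np.where(actions == minority, 1.0, -1.0)

w = wealth - wealth.min() + 1e-9
gini = np.abs(w[:, None] - w[None, :]).sum() / (2 * N * N * w.mean())
print("gini from the quenched-disorder run:", round(gini, 3))
```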
after the equilibration , we measure on the gini index of the population for the quenched disorder using eq .( [ gini_eq ] ) .then we calculate the average gini index for independent runs .we denote the gini index calculated by this algorithm with averaging over the quenched disorder by .in fact , we find that the average gini index converges after iterations , where is the number of possible histories .[ fig : f2 ] gives the gini index obtained from semi - analytical calculation of versus the control parameter for mg with .we find that the trend of the curves of agrees with the numerical findings .this implies that we have successfully reproduced the numerical results of the gini index in the asymmetric phase of the mg by using the replica method .however , the curves of are systematically lower than those from numerical simulation .this is because the coupling between the actions of players and the dynamics of the system is completely ignored in our stochastic simulation as the actions of the players depend only on the spin variable .consequently , the global cooperation among the players is suppressed in our semi - analytical calculation .hence , the wealth distribution of players is less diverse which results in under - estimation of the gini index in the mg . to make our semi - analytical calculation more `` realistic '', we allow the history to be updated sequentially by \bmod p , \ ] ] and we denote the gini index averaged over the quenched disorder calculated in this approach by .note that and are calculated using the same algorithm except that the history is updated in a different way . fig .[ fig : f3 ] shows the gini index versus the control parameter in the mg .we observe that the values of agree well with the numerical results when is large .according to crowd - anticrowd theory , if is large , most strategies of the players are uncorrelated to each other due to the under - sampling of the strategy space .moreover , most of the strategies are used by either one or none of the players in the mg whenever .therefore , the cooperation between the players can be neglected for .in addition , the probability of the occurrence of different histories is not the same in the mg when .indeed , these two conditions are satisfied in our stochastic simulation using the sequential history .so , the values of match the numerical estimates when is large . on the other hand ,when approaches , the values of become larger than the numerical results .this discrepancy can be explained as follows .as mentioned in section [ num_re ] , since there is effective crowd - anticrowd cancellation , the history in the mg becomes more uniform as approaches .in contrast , although players still have the same chance to pick anti - correlated pairs separately at the beginning of the game in our sequential simulation , the strategy actually used by each player at each turn is not determined by its virtual score , but a randomly assigned disorder spin variable instead . 
consequently , two players are less likely to be frozen on an anti - correlated strategy pair .this makes the crowd - anticrowd cancellation less effective among frozen players in our sequential simulation .so , the actions among these frozen players may give a strong bias in the output , especially for , where frozen probability is highest .in turn , the history becomes much more non - uniform .this greatly increases the gini index as some players have more chance to stay at the winning ( or losing ) side .finally , we remark that both and calculated by the stochastic simulation are independent of .this is expected , since the results of the replica calculation do not depend explicitly on .in summary , we have investigated the inequality of wealth among players in the mg using the well - known measure in economics called the gini index . in particular ,our numerical findings show that the wealth inequality of players is very severe near the point of maximum global cooperation .that is to say , in the minority game , global cooperation comes hand in hand with uneven distribution of players wealth . specifically , a significant number of players are forced to use the low ranking strategies and cooperate with those players using the high ranking strategies since the number of strategies at play and the reduced strategy space size are of the same order whenever . in this respect, we have showed that the crowd - anticrowd theory offers a simple and effective platform to study the wealth inequality in the mg .in addition , we have studied the gini index semi - analytically by mapping the system of the mg to a spin glass . with this formalism, we semi - analytically reproduce our numerically simulated gini index in the asymmetric phase of mg by investigating the stationary state properties of mg using the replica trick .we would like to thank the computer center of hku for their helpful support in providing the use of the high performance computing cluster for the simulation reported in this paper .useful conversations with w. c. man are also gratefully acknowledged .99 j. feigenbaum , rep .prog . phys . *66 * , 1611 ( 2003 ) .d. challet and m. marsili , phys .e * 60 * , r6271 ( 1999 ) .d. challet , m. marsili and r. zecchina , phys .lett . * 84 * , 1824 ( 2000 ) .m. marsili , d. challet and r. zecchina , physica a * 280 * , 522 ( 2000 ) .d. challet and y. c. zhang , physica a * 246 * 407 ( 1997 ) .y. c. zhang , europhys .news * 29 * , 51 ( 1998 ) .r. savit , r. manuca and r. riolo , phys .lett . * 82 * , 2203 ( 1999 ) .d. challet and y. c. zhang , physica a * 256 * , 514 ( 1998 ) . n. f. johnson , p. m. hui , r. jonson and t. s. lo , phys . rev .lett . * 82 * , 3360 ( 1999 ) .d. challet , m. marsili and y. c. zhang , physica a * 276 * , 284 ( 2000 ) .d. challet , m. marsili and y. c. zhang , physica a * 299 * , 228 ( 2001 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , physica a * 298 * , 537 ( 2001 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , eur .j. b * 20 * , 547 ( 2001 ) .m. l. hart and n. f. johnson , cond - mat/0212088 .z. m. berrebi and j. silber , the quarterly journal of economics , * 100 * , 807 ( 1985 ) . c. n. chen , t. w. tsaur and t. s. rhai , oxford econ .papers , * 34 * , 473 ( 1982 ) .z. m. berrebi and j. silber , oxford econ .papers , * 37 * , 525 ( 1985 ) . c. n. chen , t. w. tsaur and t. s. rhai , oxford econ. papers , * 37 * , 527 ( 1985 ) .r. manuca , y. li , r. riolo and r. savit , physica a , * 282 * , 559 ( 2000 ) .d. 
challet , private communication ( 2004 ) . d. challet and m. marsili , phys . rev . e * 62 * , 1862 ( 2000 ) .
to demonstrate the usefulness of physical approaches for the study of realistic economic systems , we investigate the inequality of players wealth in one of the most extensively studied econophysical models , namely , the minority game ( mg ) . we gauge the wealth inequality of players in the mg by a well - known measure in economics known as the modified gini index . from our numerical results , we conclude that the wealth inequality in the mg is very severe near the point of maximum cooperation among players , where the diversity of the strategy space is approximately equal to the number of strategies at play . in other words , the optimal cooperation between players comes hand in hand with severe wealth inequality . we also show that our numerical results in the asymmetric phase of the mg can be reproduced semi - analytically using a replica method .
measurements on separated subsystems in a joint entangled state may display correlations that can not be mimicked by local hidden variable models .these correlations are known as nonlocal , and they are detected by violating the so - called bell inequalities . in recent years , however , it has been become clear that non - locality is interesting not only for fundamental reasons , but also as a resource for many device - independent ( di ) quantum information tasks , such as quantum key distribution or random number generation . from this new point of view , the violations of bell inequalities are not merely indicators of non - locality , but can be used to infer qualitative and quantitative statements about different operationally relevant quantum properties .traditionally , the construction of bell inequalities has been addressed from the point of view of deriving constraints satisfied by local models . following this standard approach ,the inequalities are constructed using well - known techniques in convex geometry .indeed , the set of correlations admitting a local hidden variable model corresponds to a polytope , that is , a convex set with a finite number of extreme points or vertices .these vertices are known and correspond to local deterministic assignments , while the ( in general ) unknown facets are the desired bell inequalities .such facet bell inequalities " form a complete set of bell inequalities , in the sense that they provide necessary and sufficient criteria to detect the non - locality of given correlations .clauser - horne - shimony - holt ( chsh ) and collins - gisin - linden - massar - popescu ( cglmp ) inequalities are examples of tight inequalities .if such facet bell inequalities are optimal detectors of non - locality , they are , however , not necessarily optimal for inferring specific quantum properties in the device - independent setting .for instance , in a scenario where two binary measurements are performed on two entangled subsystems , it is well known that the violation of the chsh inequality is a necessary and sufficient condition for non - locality .but certain non - facet " bell inequalities are better certificates of quantum randomness than the chsh inequality when the two quantum systems are partially entangled . in the present work ,we consider the problem of constructing bell inequalities whose maximal quantum violation , usually referred to as the _ tsirelson bound _ , is attained for maximally entangled states of two qudits .this is a desirable property since such states have particular features , such as perfect correlations between outcomes of local measurements in the same bases , and therefore many quantum information protocols rely on them .the main aim of this work is to introduce a family of bell inequalities with an arbitrary number of measurements and outcomes which are maximally violated by the maximally entangled pair of two qudits .crucially , their maximal quantum violation can be computed _analytically_. 
in the case where only two measurements are made on each subsystem , all facet bell inequalities are known for a small number of outputs and they are all of the cglmp form .however , the cglmp inequalities are not maximally violated by the maximally entangled states of two qudits ( except in the case corresponding to the chsh inequality ) .we should therefore not expect a priori our inequalities to be facet inequalities , and indeed they are not .the fact that our inequalities will not necessarily be facet inequalities also implies that we can not use standard tools based on convex geometry and polytopes to construct them .in fact , no quantum property is used for the construction of tight bell inequalities like cglmp and , in this sense , it may not be that surprising that their maximal violation does not require maximal entanglement .our approach is completely different : it starts instead from quantum theory and exploits the symmetries and perfect correlations of maximally entangled states to derive a bell inequality .it is also closely linked to sum of squares decompositions of the bell operator , which can be used to determine their tsirelson bound .thus , quantum theory becomes a key ingredient of our method for generating new bell inequalities .our results have the potential to be used in di quantum information protocols .the inequalities are good candidates for improved di random number generation or quantum key distribution protocols or to self - test maximally entangled states of high dimension .interestingly , they also give further insight into the structure of the set of quantum correlations .the paper is organized as follows : in section [ secprelim ] , we review the necessary framework to state our results . in section [ secclass ]we introduce our bell inequalities and their derivation , while in section [ secproperties ] we study their properties and give their tsirelson bound . in sections [ secappli ] and [ secdiscuss ] , we briefly discuss their possible application to di protocols and the interest of our findings .throughout this work , we consider a bipartite bell scenario in which two distant parties and ( often taking the placeholder names _ alice _ and _ bob _ ) perform measurements on their share of some physical system .we suppose that they have possible measurement choices ( or inputs ) at their disposal and that each measurement has possible outcomes ( or outputs ) .we denote this scenario .we label their inputs and outputs as and .the correlations that can be obtained in such a bell experiment are described by a set of joint probabilities that alice and bob obtain and upon performing the and measurement , respectively . 
these probabilities can be given a geometric representation by ordering them into a vector importantly , the set of allowed can vary , depending on the physical principles the probabilities obey .thus , to every physical theory , one can assign a set of correlations in .if the measurements correspond to spacelike separated events , the observed correlations should obey the _ no - signalling principle _ , which prevents any faster - than - light communication among the parties .these correlations form a convex set that is a polytope , which we denote by .contained in this set is the set of quantum correlations , denoted , which corresponds to those whose components can be written as where is some state in a tensor product hilbert space whose dimension is unconstrained , and and are projection operators defining , respectively , the measurement on alice s system and the measurement on bob s system .finally , the set of correlations admitting local hidden variable models , termed also _ local _ or _ classical _ , corresponds to those that can be written as a convex sum of product deterministic correlations of the form where and denote alice s and bob s predetermined outputs for inputs and , respectively .bell was the first to prove that not all quantum correlations admit a local hidden variable model .to this end , he used the concept of a bell inequality with being the so - called bell expression that , most generally , is a linear combination of the joint probabilities of the form and is the local ( or classical ) bound of the bell inequality and it is the maximum value that can achieve on product deterministic correlations .the quantum or tsirelson bound of the bell expression is the maximum value that it can achieve for quantum correlations .such a bell expression corresponds to a proper bell inequality one that can be violated by quantum theory if .let us finally define to be the maximal value of over all no - signalling correlations .it turns out that for most bell inequalities the chain of inequalities holds true .the set of local correlations is a polytope which is defined through bell inequalities .hence , if violates a bell inequality , the correlations described by are nonlocal .the set , on the other hand , is not a polytope , yet it is convex .there have been several attempts to characterize it from an operational point of view , but an operational characterization remains to be found .the main obstacle is the current lack of mathematical understanding of the structure of the set of quantum correlations .this makes the derivation of tsirelson bounds a difficult problem .indeed , given an arbitrary bell inequality , there is no procedure that guarantees finding its tsirelson bound , and it was achieved analytically only in a handful of cases .there is however a practical approximation scheme introduced in based on a semidefinite programming , which consists in a hierarchy of sets converging to as .the sets are the feasible regions of semi - definite programs , which are efficiently solvable .although this method yields in practice very good upper numerical bounds ( often tight ones ) on the maximal violations of bell inequalities for small bell scenarios , it is limited by the fact that it becomes computationally expensive for larger scenarios and for high .as stated in the introduction , our aim is to introduce a family of bell expressions , whose maximal quantum value is attained by the maximally entangled states of two qudits . 
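Before moving to the construction, the classical bound defined above (the maximum of a Bell expression over product deterministic strategies) can be computed by brute force for small scenarios. The sketch below is a generic illustration of that definition, not the method used later in the paper; the coefficient layout c[a, b, x, y] is an assumption, and as a sanity check it recovers the CHSH local bound of 2.

```python
import itertools
import numpy as np

def classical_bound(c):
    """Brute-force local (deterministic) bound of a Bell expression
    I = sum_{a,b,x,y} c[a,b,x,y] p(a,b|x,y): enumerate all deterministic
    assignments a(x), b(y).  Exponential in the number of inputs, so this
    is only meant for small scenarios."""
    d, _, m, _ = c.shape
    best = -np.inf
    for a_of_x in itertools.product(range(d), repeat=m):
        for b_of_y in itertools.product(range(d), repeat=m):
            val = sum(c[a_of_x[x], b_of_y[y], x, y]
                      for x in range(m) for y in range(m))
            best = max(best, val)
    return best

# sanity check with CHSH (m = d = 2) written in terms of probabilities:
# E_xy = sum_ab (-1)^(a+b) p(ab|xy),  I = E00 + E01 + E10 - E11  ->  bound 2
sign = np.array([[1, 1], [1, -1]])
c = np.zeros((2, 2, 2, 2))
for a, b, x, y in itertools.product(range(2), repeat=4):
    c[a, b, x, y] = (-1) ** (a + b) * sign[x, y]
print(classical_bound(c))   # prints 2.0
```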
to derive these bell expressions, we start from the premise that their maximal quantum values are obtained when alice and bob perform the optimal cglmp measurements introduced in for the case and generalized to more inputs in .the reason for this choice is that these measurements simply generalize the chsh measurements ( ) to the case and that they lead to non - local correlations that are the most robust to noise or give a stronger statistical test ( at least in the case ) .these measurements are presented in detail in appendix [ appendixmeasurements ] . note that this choice of measurements is arbitrary and only used as a starting point to determine the bell expressions that we are looking for .but once we have determined them , we will no longer make any assumptions on the particular measurements that alice and bob perform , in particular , when we derive formally their quantum bounds .the probabilities obtained when using the optimal cglmp measurements on have several symmetries , detailed in appendix [ appendixmeasurements ] .for instance , they only depend on the difference .if we impose that our bell expressions respect this particular symmetry , the probabilities should be treated equally for all . in other words , the bell expressionsshould be written as linear combinations of .taking into account the other symmetries ( see appendix [ appendixmeasurements ] ) , a generic form for our bell expressions is where \ ] ] and ,\ ] ] and where we define that .the parameters and are the only degrees of freedom left and we fix them such that the resulting bell inequalities are indeed maximally violated by the state .note that taking for , one recovers the cglmp bell inequalities . to exploit the symmetries inherent in bell inequalities ,we often write them in terms of correlators instead of probabilities . fortwo - output measurements one can switch from correlators to probabilities by means of an invertible transformation , but for it becomes necessary to appeal to the notion of generalized correlators .these are in general complex numbers that are defined through the two - dimensional discrete fourier transform of the probabilities , that is , where and , and and can be thought of as measurements whose outcomes are labelled by roots of unity .for quantum correlations these numbers can be expressed in terms of the born s rule . indeed , assuming correlations to be quantum and given by eq .( [ quantum ] ) , we can interpret as the average value of the tensor product of the following operators in the state .thus , in what follows , whenever we work with quantum correlations we have the above representation in mind .note the operators in eq .( [ operators ] ) are unitary , their eigenvalues are the roots of unity , and they enjoy properties such as and for any .exploiting now transformation ( [ correlators ] ) , expression ( [ bellproba ] ) can be rewritten as where , for clarity , the change of variables with was introduced on bob s side .note that due to the convention , the term is defined in a slightly different manner as . for simplicity , we ignored an irrelevant scalar term in ( [ bellcorr ] ) and rescaled the expression . to recover exactly from , one has to add that scalar term corresponding to , and divide the expression by .each choice for the free parameters and now corresponds to a choice for the variables .as explained above , our aim is to fix their value according to the quantum property we need : maximal violation by the maximally entangled state . 
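The generalized correlators just introduced are, in the usual convention, the two-dimensional discrete Fourier transform E[k,l|x,y] = sum_{a,b} omega^(k*a + l*b) p(a,b|x,y) with omega = exp(2*pi*i/d); the exact normalisation and sign conventions were lost from this copy, so the sketch below assumes that standard form.

```python
import numpy as np

def generalized_correlators(p, d):
    """Assumed convention: E[k, l, x, y] = sum_{a,b} w^(k*a + l*b) p(a,b|x,y)
    with w = exp(2*pi*i/d), applied to a probability table p[a, b, x, y]."""
    w = np.exp(2j * np.pi / d)
    k = np.arange(d)
    # exponent[k, a, l, b] = k*a + l*b
    exponent = (k[:, None, None, None] * k[None, :, None, None]
                + k[None, None, :, None] * k[None, None, None, :])
    return np.einsum('kalb,abxy->klxy', w ** exponent, p)

# quick check: perfectly correlated uniform outcomes give |E[k, d-k]| = 1
d, m = 3, 2
p = np.zeros((d, d, m, m))
for a_out in range(d):
    p[a_out, a_out, :, :] = 1.0 / d          # p(a,b|x,y) = delta_ab / d
E = generalized_correlators(p, d)
print(np.round(np.abs(E[1, d - 1, 0, 0]), 6))   # -> 1.0
```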
at this point , it is instructive to look at the specific example of the chsh bell expression ( , ) . in the notation ( [ bellcorr ] ) , we write the chsh bell expression as where and .notice now that for the optimal measurements leading to the quantum bound of the chsh inequality , we have that and .our intuition is to fix this condition generically for any and : we choose the parameters and such that the conditions hold for and , in the case that the initial operators and are the optimal cglmp operators .further intuition for imposing these exact conditions will be provided in the next section , where we prove the tsirelson bound of the expressions . conditions ( [ conditions ] ) give rise to a set of linear equations for the variables and which is solved in detail in appendix [ appendixcoefficients ] , giving \ ] ] \ ] ] with . to sum up, our class of bell expressions is given by ( [ bellproba ] ) or equivalently by ( [ bellcorr ] ) , with coefficients ( [ alpha ] ) and ( [ beta ] ) .we arrived at this class of bell expressions by first writing the most general bell expression satisfying the symmetry of cglmp correlations , then re - writing these bell expressions in the simple form ( [ bellcorr ] ) through a change of variable on bob s side , and then imposing the conditions ( [ conditions ] ) that generalize a property observed in the chsh case .so far we can not guarantee that these bell expressions lead to proper bell inequalities violated by quantum theory , nor that their quantum bound is attained by the maximally entangled state , but we show in the next section that this is indeed the case .we now present our main results for the class of bell expressions ( [ bellcorr ] ) .all the values for the bounds of ( [ bellproba ] ) can be obtained directly from those of ( [ bellcorr ] ) as mentioned in appendix [ appendixcoefficients ] .[ theoclass ] the classical bound of is given by \right\ } - m.\ ] ] we start with the probability version of the bell expression , and since we can restrict the problem to local deterministic strategies , finding the classical bound becomes a question of distributing and over all the terms .it turns out that the maximizing strategy is to have terms equal to 1 multiplied by and term equal to 1 multiplied by .all other terms must be equal to 0 .more details can be found in appendix [ appendixclassical ] .importantly , the resulting bell inequality is violated by quantum mechanics .indeed , we can reach the value for by applying the cglmp measurements on the maximally entangled state .this is easily seen using eq .( [ conditions ] ) , the unitarity of , and the following property of the maximally entangled states : for and operators .one can see how all the correlators in ( [ bellcorr ] ) are then equal to 1 , yielding the quantum violation of after summing over and .crucially , as we prove below , the value turns out to be the maximal quantum violation of our bell inequalities .[ theoquantum ] the tsirelson bound of is given by .here we present a sketch of the proof , while its more detailed version is deferred to appendix [ appendixtsirelson ] .the idea of the proof is to construct a sum - of - squares ( sos ) decomposition of the shifted bell operator , where is the identity operator and the bell operator corresponding to expression ( [ bellcorr ] ) , as was done for instance in . 
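As a quick aside before the sum-of-squares argument below: the maximally-entangled-state property invoked above did not survive in this copy, but the standard identity used in such derivations is <phi_d| A (x) B |phi_d> = Tr(A B^T)/d for |phi_d> = (1/sqrt(d)) sum_j |jj>, and that is what is assumed here. The sketch checks it numerically for random operators.

```python
import numpy as np

d = 4
rng = np.random.default_rng(4)

# maximally entangled state |phi_d> = (1/sqrt(d)) sum_j |jj>
phi = np.eye(d).reshape(d * d) / np.sqrt(d)

# arbitrary operators A and B on the two subsystems
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

lhs = phi.conj() @ (np.kron(A, B) @ phi)   # <phi| A (x) B |phi>
rhs = np.trace(A @ B.T) / d                # assumed transpose convention
print(np.allclose(lhs, rhs))               # -> True
```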
for any positive semidefinite operator ,an sos decomposition is a finite collection of operators such that it is clear that if the shifted bell operator can be written as ( [ sostheo ] ) it must be semidefinite positive , which proves that is an upper bound to our bell expression .indeed , for any quantum state , it then holds that , which implies for the bell operator that .this approach is in principle valid for any shifted bell operator , thus for any bell expression . as we expect the s to be polynomials of the measurement operators of alice and bob, we can define the order of the sos decomposition as the largest degree of these polynomials . in our case, we show that is indeed the maximal quantum violation of our class of bell inequalities as the shifted bell operator can be decomposed as where , and with , , .our bell operator is , and the decomposition is independent of the choice of and .the second sum of terms with coefficients , and was added to compensate some non - vanishing terms in the first sum .the exact values of the coefficients along with details on the sos decomposition can be found in appendix [ appendixtsirelson ] .let us elaborate on how the sos works in the case , which justifies _ a posteriori _ the imposition of conditions ( [ conditions ] ) . for , only the first part of the sos decomposition remains .at the point of maximal violation , both sides of ( [ sos ] ) applied on must yield . since the measurements are now the optimal cglmp ones , eq .( [ conditions ] ) holds , and can be used as above with the property to see easily how the first sum of the decomposition cancels .one can now grasp the intuition behind conditions ( [ conditions ] ) : imposing them leads to having an sos of the form ( [ sos ] ) , more precisely an sos of order one in the operators and . in the chsh case, one can observe the same effect , as these same properties of the optimal state and measurements allow the bell operator to have the following sos decomposition , which is also of order one : with , and .thus , our construction generalizes this quantum aspect of chsh . in the case , the sos does not generalize as directly , and one has to add `` by hand '' the extra terms .however , the order of the sos remains one .[ theons ] the no - signalling bound of is given by in appendix [ appendixns ] , we provide a no - signalling behaviour and show that it attains the algebraic bound of our bell expressions .it corresponds to having all the probabilities which are multiplied by in equal to 1 , and all the others equal to 0 . for our expressions to form a non - trivial bell inequalities ,the classical bound must be smaller than the quantum one for all .we show this in appendix [ appendixscalings ] , and we also study the scaling of the classical , quantum , and no - signalling bounds . finally , note that for , our bell expressions coincide with those introduced in refs . and then rederived in using a different approach .moreover , the maximal quantum violations of these bell inequalities was computed in refs . exploiting alternative techniques . on the other hand , for and any , our class recoversthe well - known chained bell inequalities .a natural application for our expressions is self - testing , a di protocol in which a state and measurements performed on it are certified up to local isometries , based on the nonlocal correlations they produce here , on the violation of a bell inequality . 
to perform self - testing ,the point of maximal violation must be unique , which is a property that we have not proven for our inequalities .there exists a numerical method for self - testing called the swap method , and we applied it to the simplest case and .the results of the program are plotted in figure [ swap3 ] .it shows that , in this scenario , the maximal violation is unique and self - testing the maximally entangled state of two qutrits is possible , with robustness .an open question consists in generalizing these self - testing results to any dimension , which must be done analytically .our inequalities could then find a direct application in di random number generation protocols . indeed ,if the point of maximal violation is proven to be unique , one can successfully apply the method of and use the symmetries of the bell expressions to guarantee a dit of randomness when observing the maximal violation .ultimately , by increasing the dimension , one would achieve unbounded randomness expansion .our inequalities could also find applications in di quantum key distribution .indeed , it was shown in through the example of the cglmp inequalities that exploiting high dimensional systems can be beneficial in noisy scenarios .an advantage that our inequalities have over cglmp in that scenario is that the maximally entangled state can produce perfect correlations between the users , which should lead to higher key generation rates than using the cglmp inequalities . .at the maximal violation , the fidelity is equal to , meaning that the quantum state used in the bell experiment must be equal to the reference state . for lower violations ,the fidelity decreases .[ swap3],scaledwidth=65.0% ]in summary , we have introduced a new technique allowing to construct bell inequalities with an arbitrary number of measurements and outcomes that are maximally violated by the maximally entangled states .it exploits the concept of sos decompositions of bell operators and , crucially , allows one to compute analytically their tsirelson bounds .we also provide the classical and no - signalling bounds of the resulting bell inequalities .our results are very general as , unlike previous works , we do not consider a particular bell scenario , but allow the number of inputs and outputs to be arbitrary .our inequalities can be seen as the `` quantum '' or the di - oriented generalization of chsh bell inequality in the same spirit as the cglmp inequality generalizes the chsh one classically . indeed ,while the cglmp inequalities preserve the property of being facets of the local polytope , our inequalities possess the same sum - of - squares structure as chsh at the maximal quantum violation , which leads to the important property for di protocols of being maximally violated by the maximal entangled state .moreover , let us note that deriving tsirelson bounds allows us to gain insight about the quantum set of correlations more specifically its boundary and has thus fundamental implications .in particular , a feature of our class of inequalities worth highlighting is that their tsirelson bound corresponds to the bound obtained using the npa hierarchy at the first level , i.e. , within the set .this is a rare property , which to our knowledge has been previously observed only for xor games . 
that the tsirelson bounds of our inequalities are attained in follows from our sos decomposition ( see eq .( [ sos ] ) ) .indeed the degree of an optimal sos decomposition for a bell operator is directly linked to the level of the npa hierarchy at which the quantum bound is obtained .an sos of degree 1 , as in our case , corresponds to the first level .this means that the boundaries of the sets and intersect on the point of maximal violation of our inequalities .this observation , in conjunction with the results of ref . , raises a question about .indeed , the boundaries of and seems to intersect at points that correspond to the maximal violation of bell inequalities attained by maximally entangled states .one should confirm whether this trend is a general property , and could perhaps use it as a way to characterize .we wish to thank m. navascus and t. vrtesi for fruitful discussions , and especially j .- d .bancal for sharing with us his code .this work was supported erc cog qitbox and adg osyris , the axa chair in quantum information science , spanish mineco ( foqus fis2013 - 46768-p and sev-2015 - 0522 ) , fundaci privada cellex , the generalitat de catalunya ( sgr 874 and sgr 875 ) , the eu projects qalgo and siqs , and the john templeton foundation .s. p. is a research associate of the fonds de la recherche scientifique f.r.s .- fnrs ( belgium ) .r. a. acknowledges funding from the european union s horizon 2020 research and innovation programme under the marie skodowska - curie grant agreement no 705109. 99 j.s .bell , physics * 1 * , 195 ( 1964 ) .n. brunner , d. cavalcanti , s. pironio , v. scarani , and s. wehner , rev .phys . * 86 * , 419 ( 2014 ) .d. mayers , and a. yao , proc .39th ann . symp . on foundations of computer science ( focs ) , 503 ( 1998 ) .a. acn , n. brunner , n. gisin , s. massar , s. pironio , and v. scarani , phys . rev .lett * 98 * , 230501 ( 2007 ) .r. colbeck , phd thesis , university of cambridge ( 2006 ) ; r. colbeck , and a. kent , j. phys .a : math . theor . * 44 * , 095305 ( 2011 ) .s. pironio __ , nature * 464 * , 1021 ( 2010 ). j. f. clauser , m. a. horne , a. shimony , and r. a. holt , phys .* 23 * , 880 ( 1969 ) .d. collins , n. gisin , n. linden , s. massar and s. popescu , phys .lett . * 88 * , 040404 ( 2002 ) .a. acn , s. massar , and s. pironio , phys .* 108 * , 100402 ( 2012 ) .b. s. cirelson , lett . mat .phys . * 4 * , 93 ( 1980 ) .a. acn , t. durt , n. gisin , and j. i. latorre , phys .a * 65 * , 052325 ( 2002 ) .s. zohren , and r. gill , phys .lett . * 100 * , 120406 ( 2008 ) .m. navascus , s. pironio , and a. acn , phys .lett . * 98 * , 010401 ( 2007 ) ; new j. phys .* 10 * , 073013 ( 2008 ) .m. mckague , t. h. yang , and v. scarani , j. phys .a * 45 * , 455304 ( 2012 ) .a. fine , phys .. lett . * 48 * , 291 ( 1982 ) .s. popescu and d. rohrlich , found .* 24 * , 379 ( 1994 ) .r. ramanathan , j. tuziemski , m. horodecki and p. horodecki , arxiv:1410.0947 .m. pawowski , t. paterek , d. kaszlikowski , v. scarani , a. winter , and m. ukowski , nature * 461 * , 1101 ( 2009 ) .m. navascus , and h. wunderlich , proc .lond a * 466 * , 881 ( 2009 ) .t. fritz , a. b. sainz , r. augusiak , j. b. brask , r. chaves , a. leverrier , a. acn , nature communications * 4 * , 2263 ( 2013 ) .m. navascus , y. guryanova , m. j. hoban , and a. acn , nature communications * 6 * , 6288 ( 2014 ) . d. kaszlikowski , p. gnaciski , m. ukowski , w. miklaszewski , and a. zeilinger , phys . rev . lett . * 85 * , 4418 ( 2000 ) . j. barrett , a. kent , and s. 
pironio , phys .. lett . * 97 * , 170409 ( 2006 ) .a. acn , r. gill and n. gisin , phys .lett . * 95 * , 210402 ( 2005 ) . c. bamps , and s. pironio , phys .rev . a * 91 * , 052111 ( 2015 ) .w. son , j. lee , and m. s. kim , phys . rev .lett . * 96 * , 060406 ( 2006 ) .j. de vicente , phys .a * 92 * , 032103 ( 2015 ) .lee , y. w. cheong , and j. lee , phys .a * 76 * , 032108 ( 2007 ) .a. pearle , phys .d * 2 * , 1418 ( 1970 ) ; s. l. braunstein and c. caves , ann .( n.y . ) * 202 * , 22 ( 1990 ) .yang , and t. vrtesi , j - d .bancal , v. scarani , and m. navascus , phys .lett . * 113 * , 040401 ( 2014 ) .s. pironio and s. massar , phys .a * 87 * , 012336 ( 2013 ) . c. dhara , g. prettico , and a. acn , phys .a * 88 * , 052116 ( 2013 ) .m. huber , and m. pawowski , phys .a * 88 * , 032309 ( 2013 ) .s. pironio , m. navascus , a. acn , siam j. optim . * 20 * , 2157 ( 2010 ) .i. upi , r. augusiak , a. salavrakos , and a. acn , new j. phys .* 18 * , 035013 ( 2016 ) .we present here the `` optimal cglmp measurements '' first introduced in and generalized to an arbitrary number of inputs in .we use them throughout our work .they are defined as follows where ] , so that in the correlator form our bell expression for becomes ,\ ] ] and theorems [ theoclass ] , [ theoquantum ] , and [ theons ] give ] , respectively .this is the well - known chained bell inequality , which was recently used in ref . to self - test the maximally entangled state of two qubits and the corresponding measurements . in the second case ,i.e. , that of and any , the bell expression in the probability form is given by eq . with the expressions and simplifying to and where we have exploited the convention that . then , the coefficients and are given by ,\qquad \beta_k=\frac{1}{2d}\left[g\left(k + 1/2 \right)-(-1)^d\tan\left(\frac{\pi}{4d}\right)\right],\qquad\ ] ] with ] .it should be noticed that this bell inequality previously studied in refs . and , and , in particular in ref . and the maximal quantum violation was found using two different methods .let us start with expression ( [ bellproba ] ) and note that we can rewrite it as : ,\ ] ] with .this is possible because of the form ( [ alpha ] ) and ( [ beta ] ) of coefficients and . indeed , since , the terms of the sum which were attached to the coefficients can be shifted to indices and now associated to an . in the odd case , we should in principle impose that the term disappears , but it happens naturally since . as stated in the main text ,finding the classical bound of expression ( [ bellproba22 ] ) reduces to computing the optimal deterministic strategy .thus , to describe the difference between the outcomes associated to and , we can assign one value such that . as depends on inputs and butnot all pairs of and appear in the bell expression , we thus define variables such that : due to the chained character of these equations , must obey a superselection rule involving the other s , which is where the sum is modulo . due to the fact that the dependence of the coefficients on is only through the cotangent function ,proving theorem [ theoclass ] boils down to the following maximization problem .theoclass let , \nonumber\ ] ] and let then , notice that to recover the exact expression from the main text , one needs to reintroduce the constant factors appearing in the definition of and use eq .( [ relation ] ) . to prove the theorem ,we first demonstrate two lemmas . 
note thatthroughout this appendix , we assume that and .although these are not tight conditions to prove our results , they are in any case satisfied by the definition of a bell test . [ lemma2 ] let ] .therefore , the remaining terms on the right - hand side of eq .( [ gilgamesh ] ) can be wrapped up as \nonumber\\ & & -\sum_{k=1}^{d-1}\left[\omega^k ( a_k^*)^2(b_1^{d - k})^{\dagger}(b_{m}^{d - k})+\omega^{-k}a_k^2(b_{m}^{d - k})^{\dagger}(b_1^{d - k})\right].\end{aligned}\ ] ] by substituting eqs .( [ druid ] ) and ( [ gilgamesh2 ] ) into eq .( [ sos_app ] ) and exploiting the explicit form of the operators , one obtains \mathbbm{1}.\nonumber\\\end{aligned}\ ] ] it is easy to finally realize that the last two terms in the above formula amount to , which completes the proof .as for the proof of the classical bound we start from the bell expression written as : ,\ ] ] with .following considerations from appendix [ appendixclassical ] , the coefficient is the biggest of the sum .clearly , the algebraic bound of is then . to complete the proof , we provide a no - signalling behaviour that reaches this bound .let us recall the no - signalling conditions for a probability distribution : which express that the marginals on alice s side do not depend on bob s input , and conversely .the behaviour that we present is the following . for inputs and such that or : there is a special case for and : where the addition is modulo . for all the other input combinations ( i.e. the ones not appearing in the inequalities ) , we have : one can easily verify that this distribution satisfies conditions ( [ nosigncond ] ) . to obtain the expression from theorem [ theons ] , it suffices to write explicitly and to use relation ( [ relation ] ) .here , we study the asymptotic behaviour of the bounds of our bell expressions for large numbers of inputs and outputs . we also show that for any values of and , the classical bound is strictly smaller than the quantum bound , which is strictly smaller than the no - signalling bound .this ensures in particular that the bell inequality is never trivial .let us start with the quantity : - m}\ ] ] which is the ratio between the quantum and classical bounds .we also consider the ratio between the no - signalling and quantum bounds , which is : to observe the behaviour of these quantities for high number of inputs and outputs , we can use the taylor series expansion in two variables , and , and keep the dominant terms .we obtain : thus , when the parameters and are of the same order and both very large , i.e. , both ratios tend to .it is interesting to consider how fast the bounds tend towards each other : since the ratio between the no - signalling and quantum bounds lacks a term in , it is clear that the quantum bound approaches the no - signalling bound faster than the classical bound approaches the quantum bound .if we fix the number of outputs and consider the limit of a large number of inputs , the ratios still tend to . however , if we fix and considers the limit of large , both ratios tend to constants which are a bit bigger than .they are : it is worth mentioning that both functions of appearing on the right - hand sides of the above formulas attain their maxima for which are and , respectively . to give the reader more insight, we present in tables [ table1 ] and [ table2 ] the numerical values of these ratios for low values of and . 
we prove that , which is equivalent to ( [ lemmaclasseq ] ) since both bounds are larger than .this inequality can be written as : if we define and , it becomes : for and . since the first term is positive for these intervals , it suffices to show that clearly , .this minimum corresponds to the limit , since the derivative of with respect to is strictly positive on the considered intervals of and .indeed , it holds that which can be rewritten as .\ ] ] now , due to the fact that for , one has that for and , and therefore the right - hand side of eq .( [ cidra ] ) is strictly positive within the above intervals .now , computing the limit of when , one obtains it can be verified straightforwardly that this expression is strictly positive in the interval , by comparing the two functions and , and noticing that the former upper bounds the latter in the interval . indeed , at , we have that , and in this interval , both their derivatives are negative , with the derivative of the first function smaller than the derivative of the second one .thus , . to this end , we show that for any and any integer .we notice that for , , and that '\geq [ a\tan(x)]'\geq 0 $ ] , meaning that both and are monotonically increasing functions and that the former grows faster than the latter .the inequality for the derivatives holds true because is a monotonically decreasing function for which implies that .
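The closing inequality of the proof is garbled in this copy; it appears to be the statement tan(kx) >= k tan(x) for integer k >= 1 and x in [0, pi/(2k)), which is what the derivative comparison [tan(kx)]' = k sec^2(kx) >= k sec^2(x) = [k tan(x)]' suggests. Under that reading (an assumption on our part), a quick numerical spot-check, illustrative only and not a substitute for the argument above:

```python
# Numerical spot-check of the (assumed) inequality tan(k*x) >= k*tan(x)
# for integer k >= 1 and 0 <= x < pi/(2k).  This only samples a grid; the
# derivative argument in the text is the actual proof.
import numpy as np

for k in range(1, 8):
    x = np.linspace(0.0, 0.999 * np.pi / (2 * k), 10_000)
    assert np.all(np.tan(k * x) >= k * np.tan(x) - 1e-9), f"violated for k = {k}"
print("tan(k*x) >= k*tan(x) holds on all sampled grids")
```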
Bell inequalities have traditionally been used to demonstrate that quantum theory is nonlocal, in the sense that there exist correlations generated from composite quantum states that cannot be explained by means of local hidden variables. With the advent of device-independent quantum information processing, Bell inequalities have gained an additional role as certificates of relevant quantum properties. In this work we consider the problem of designing Bell inequalities that are tailored to detect the presence of maximally entangled states. We introduce a class of Bell inequalities valid for an arbitrary number of measurements and results, derive analytically their maximal violation, and prove that it is attained by maximally entangled states. Our inequalities can therefore find an application in device-independent protocols requiring maximally entangled states.
solar physics increasingly relies on high - spatial - resolution , high - spectral - purity observations in order to determine the three - dimensional structure of the solar atmosphere . in such observations multiple spectral linesare often used because they sample different heights or are sensitive to different parameters in the solar atmosphere . in particular , the comparison of polarization signals from different lines , often using sophisticated inversion techniques , allows a more accurate determination of the magnetic field and physical conditions at the source .however , in order for the comparison or correlations among different spectral lines to be useful , it is often necessary that the lines be observed not only cospatially , but also within the evolutionary timescales of the resolved atmospheric element . to assure that these conditions are met , it is important to take into consideration the effects of atmospheric dispersion , in particular for slit - based spectrographs .the magnitude of the atmospheric refraction varies with zenith distance , an effect called `` spatial differential refraction '' .the variation with wavelength of the index of refraction of air causes an additional variation , called `` spectral differential refraction '' or `` atmospheric dispersion '' .this produces an offset in the direction perpendicular to the horizon of the apparent position of the same object viewed at different wavelengths .this is of particular importance for spectrographs because if the slit is not oriented along the direction of this dispersion , different positions on the sky will be sampled at different wavelengths . for nighttime observations the effects of spatial differential refraction and atmospheric dispersionhave been studied in the past , in particular by , which led to greater care being taken in the slit orientation by nighttime observers .new multi - object spectroscopic techniques brought a slightly different set of problems as discussed by , , and most recently by .these authors discuss the atmospheric effects on various types of multi - object spectrographs in different observing regimes and how to optimize the observing setup in the presence of these effects . however , solar observations present some characteristics that are significantly different from nighttime observations , making it difficult to apply the results of these previous works to the solar case .firstly , because the insolation by the sun produces local heating that results in atmospheric turbulence close to the telescope itself , high - resolution solar observations are often best performed when the sun is at a relatively low elevation .indeed , at many sites , the most stable local atmospheric conditions are obtained in the few hours after sunrise when the sun is at elevations as low as 10 .this constraint on atmospheric stability means that the observations can not necessarily be optimized , for example by choosing the optimal hour angle at which to observe a given object , in order to reduce the effects of the atmospheric refraction .further , for nighttime observations of a `` bright '' object against a darker background , the principal effect of atmospheric dispersion is a reduced system efficiency at wavelengths where the image of the object is shifted off of the slit ( or other entrance aperture ) . 
in solar observations, such shifts will result instead in illumination from different elements of the solar atmosphere at different wavelengths , which can lead to difficulties in the physical interpretation of the data .in addition , solar telescopes are generally not fitted with atmospheric dispersion correctors .one example to the contrary is the swedish one - meter solar telescope , which was designed with the possibility to compensate for the atmospheric dispersion as a side benefit of the schupmann system used to correct the chromatic aberration of the telescope s singlet objective .current solar telescopes are able to resolve features as small as 0.1 on the solar surface .while future telescopes , such as gregor and the advanced technology solar telescope ( atst ; * ? ? ?* ) , will have even greater resolutions , the latter with a diffraction limit as small as 0.025 . advances in multi - conjugate adaptive optics and the maturing of image - reconstruction techniques are making it increasingly routine to achieve diffraction - limited images over large fields of view .it should be noted that correlation tracking or adaptive optics generally provide no remedy to the problem of spectral differential refraction .these systems correctly stabilize the solar image at their operating wavelength but do nothing to correct the relative offsets due to the atmospheric dispersion of images at other wavelengths . as the diffraction limit decreases ,the atmospheric dispersion , which is independent of aperture , becomes increasingly significant with respect to the resolution element and deserves accurate consideration . in section [ sec : refraction ] we calculate the annual and diurnal variation of the atmospheric dispersion for the sun for a specific location . in section 3we examine the effects of atmospheric dispersion on scanning - slit spectroscopy and describe an observing procedure that minimizes the effects of this dispersion while also maintaining regular spatial sampling .finally , in section 4 we discuss the effects of atmospheric dispersion for filter - based images .the formula for the index of refraction of moist air ( _ n _ ) has been most recently described by , who presents updated versions of edlen s equations for calculating the index of refraction based on the temperature , humidity , pressure , and co concentration of the atmosphere .atmospheric refraction will produce a significant change in the apparent zenith angle that varies with index of refraction and the altitude of the observed object .the magnitude of the refraction can be approximated by where is the refractivity at the observation site , is the true zenith angle , and is the ratio of the height of the equivalent homogeneous atmosphere at the observatory to the geocentric distance of the observatory .the value of is approximately 8 km and can be approximated as , where is the temperature in kelvin at the observatory .this formula for the refraction is accurate to better than approximately 1for zenith angles less than 75 .more accurate determination of the absolute refraction for the correction of spatial differential refraction requires a tropospheric lapse - rate term or a full integration over the atmospheric path . 
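The refraction formula referenced above did not survive extraction; the sketch below assumes the standard two-term expansion R = rho (1 - beta) tan z - rho (beta - rho/2) tan^3 z, with rho = n - 1 the site refractivity and beta the ratio of the equivalent-atmosphere height to the geocentric distance, which matches the description in the text. The numerical constants (H ~ 29.3 T metres, geocentric distance 6378 km plus site altitude) are illustrative values, not taken from the paper.

```python
# Two-term approximation to the atmospheric refraction described above
# (assumed form; accurate to ~1 arcsec for zenith angles below ~75 deg).
import numpy as np

ARCSEC = 180.0 * 3600.0 / np.pi     # radians -> arcseconds

def refraction_arcsec(zenith_deg, refractivity, temperature_K=284.0, altitude_m=3055.0):
    """Approximate refraction R (arcsec) for a site refractivity n - 1."""
    z = np.radians(zenith_deg)
    H = 29.3 * temperature_K                      # equivalent homogeneous atmosphere [m]
    beta = H / (6.378e6 + altitude_m)             # ratio to the geocentric distance
    rho = refractivity
    R = rho * (1.0 - beta) * np.tan(z) - rho * (beta - rho / 2.0) * np.tan(z) ** 3
    return R * ARCSEC

# Illustrative site refractivity ~2.0e-4 (circa 710 hPa), sun 15 deg above the horizon:
print(f"R(z = 75 deg) ~ {refraction_arcsec(75.0, 2.0e-4):.0f} arcsec")
```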
since the fields of view utilized in high - resolution solar physics are generally only a few arcminutes across , the distortions in the field produced by spatial differential refraction are usually less than a few arcseconds .this will cause time - dependent image distortions in observations covering a large range of zenith angles , but absolute positions are generally not required for solar data analysis .we are instead concerned with the dependence of the index of refraction on wavelength which causes proportional variations in , called atmospheric dispersion . for zenith angles up to approximately 70 , the dispersion can be approximated using only the first term in equation ( [ eqn : r ] ) . for slightly larger zenith angles , considering that 1 - is very close to unity andthat in most situations is at least an order of magnitude greater than , the dispersion can be approximated using where and are the observation and reference wavelengths .we have also calculated the magnitude of the dispersion using numerical integration through a model atmosphere following the technique outlined by and observe that the above equation reproduces the atmospheric dispersion to better than 0.05 up to zenith angles of 80 .closer to the horizon this linear approximation rapidly breaks down .examined the dependence of the index of refraction on the input parameters and points out that the magnitude of the atmospheric dispersion is most sensitive to variations in the atmospheric temperature and pressure , indicating the importance in knowing these values for the accurate calculation of the atmospheric dispersion for a given observation .the direction of the shift between images at different wavelengths always remains along the vertical circle , that is perpendicular to the horizon .however , this direction rotates continuously during the day with respect to the axes of the celestial coordinate system .the angle ( ) between the vertical circle and the hour circle passing through the celestial poles and the observed object is called the _parallactic angle_. away from the earth s poles , the parallactic angle will be zero only when the observed object is on the meridian . gives the following equation for parallactic angle : where is the object s hour angle ( positive west of meridian ) and is observer s latitude . [ cols="^,^,^",options="header " , ] we use the relationships defined above to estimate the magnitude and direction of the atmospheric dispersion expected for observations from haleakal , hawaii .we chose this site since it is the proposed site of the new four - meter solar telescope atst and high - resolution , multi - wavelength observations have been indicated as important goals for this new facility .similar calculations for la palma in the canary islands , another prime location for high - resolution solar observations , are shown in appendix [ appendixa ] .first , the right ascension and declination of the sun are calculated with a one - minute time step for each day throughout the year ( the calculations were specifically done for 2006 , but are applicable to any year ) . then using the coordinates of haleakal ( latitude : 20.71 ; longitude : -156.25 ; altitude : 3055 m ) , the azimuth and altitude of the solar disk center was calculated for each minute . 
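A minimal sketch of the geometric quantities just introduced: the first-term dispersion approximation delta_R ~ [n(lambda) - n(lambda_ref)] tan z, the parallactic angle from tan q = sin h / (tan phi cos delta - sin delta cos h), and the solar altitude from the usual spherical-triangle relation. The authors use a full ephemeris; the hour angle and declination passed in here are assumed to come from such a calculation, and the refractivities from a Ciddor/Edlén-type routine such as the approximate one sketched in the next section.

```python
# Helpers for the dispersion magnitude, parallactic angle and solar altitude.
import numpy as np

ARCSEC = 180.0 * 3600.0 / np.pi

def dispersion_arcsec(n_lambda, n_ref, zenith_deg):
    """Chromatic offset along the vertical circle (first-term approximation)."""
    return (n_lambda - n_ref) * np.tan(np.radians(zenith_deg)) * ARCSEC

def parallactic_angle_deg(hour_angle_deg, declination_deg, latitude_deg):
    """Angle between the vertical circle and the hour circle (0 on the meridian)."""
    h, dec, lat = np.radians([hour_angle_deg, declination_deg, latitude_deg])
    return np.degrees(np.arctan2(np.sin(h),
                                 np.tan(lat) * np.cos(dec) - np.sin(dec) * np.cos(h)))

def solar_altitude_deg(hour_angle_deg, declination_deg, latitude_deg):
    """Altitude of the Sun for a given hour angle, declination and latitude."""
    h, dec, lat = np.radians([hour_angle_deg, declination_deg, latitude_deg])
    return np.degrees(np.arcsin(np.sin(lat) * np.sin(dec)
                                + np.cos(lat) * np.cos(dec) * np.cos(h)))

# Morning sun from Haleakala's latitude in mid winter (illustrative inputs):
alt = solar_altitude_deg(-60.0, -21.0, 20.71)
q = parallactic_angle_deg(-60.0, -21.0, 20.71)
dr = dispersion_arcsec(2.009e-4, 1.953e-4, 90.0 - alt)   # 400 vs 850 nm (assumed values)
print(f"altitude {alt:.1f} deg, parallactic angle {q:.1f} deg, dispersion {dr:.2f} arcsec")
```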
using the equations given by ciddor ( 1996 )we then calculate the atmospheric index of refraction at several different typical wavelengths of importance in solar observations , shown in table 1 .we use typical conditions for haleakal ( temp : 11 ; pressure : 71000 pa ; humidity : 30% ; co mixing ratio : 380 ppm ) as measured at the mees solar observatory , except for the co fraction , which is the value measured on mauna loa .we combine the indices of refraction for two different wavelengths with the zenith distance of the sun to calculate the magnitude of atmospheric dispersion during the course of the year as shown in equation ( [ eqn : r ] ) . the programs for calculating the refractivity and refraction are available in solarsoft ( http://www.lmsal.com/solarsoft/ ) and at http://www.arcetri.astro.it/science/solar/ , as well as solar physics electronic supplementary material .we show the calculations for the magnitude of the atmospheric dispersion between 400 and 850 nm in figure [ figure:1 ] .we choose these two sample wavelengths since they represent a fairly typical observing combination and cover approximately the full visible spectral range .since equation ( [ eqn : deltar ] ) shows that the atmospheric dispersion scales almost linearly with the difference of the indices of refraction for zenith distances less than approximately 80 , it should be straightforward to apply the following discussions to other wavelength combinations .the figure shows the calculated value only for those times when the sun is more than 10 above the horizon .figure [ figure:1 ] shows that even at moderate spatial resolutions ( ) the atmospheric dispersion remains significant for the first few hours after sunrise or before sunset throughout the course of the year . at higher resolutions, the dispersion will need to be taken into account at almost all times .the shift between images obtained at different wavelengths due to atmospheric dispersion could be removed by aligning on common solar structures , such as the granulation pattern , taking care not to be biased by variations in the structures observed at different wavelengths .an alternate method would be to apply a correction based on the calculated magnitude and direction of the atmospheric dispersion given the local meteorological conditions .it may be desirable to calculate the atmospheric refraction using more accurate models or through numerical integration of a standard atmosphere .a combined approach could also be developed using the measured offset between images at two suitably selected wavelengths ( _ i.e. _ well separated and observing similar structures ) and interpolating the offset to other wavelengths based of the relative variation of the index of refraction with wavelength .since the magnitude of the atmospheric dispersion varies with the zenith angle and with variations in the local atmospheric conditions , any alignment of multi - wavelength observations obtained over an extended period of time should take into account the temporal variations of the atmospheric dispersion .classical long - slit spectrograph observations are widely used in solar physics to record an approximately one - dimensional slice of the solar atmosphere often in multiple spectral lines covering a significant wavelength range . for many scientific questionsit is necessary to obtain spectral information over a fully filled 2-d field with reasonably high time resolution . 
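As a lighter-weight stand-in for the Ciddor (1996) equations used by the authors, the following sketch evaluates an Edlén-type dispersion formula for standard dry air and rescales it to site pressure and temperature with a simple density factor; humidity and the CO2 fraction are ignored, so the result is only approximate (a few parts in 10^7) and is meant as an illustration of the wavelength dependence, not a reproduction of Table 1.

```python
# Approximate air refractivity versus wavelength: Edlen/Birch-Downs standard-air
# dispersion formula plus a crude P/T density scaling (humidity and CO2 ignored).
def refractivity(wavelength_nm, pressure_pa=71000.0, temperature_c=11.0):
    """Approximate n - 1 of air at the given wavelength and site conditions."""
    sigma2 = (1.0e3 / wavelength_nm) ** 2                      # (1 / lambda[um])^2
    n_std = 1.0e-8 * (8342.54 + 2406147.0 / (130.0 - sigma2)
                      + 15998.0 / (38.9 - sigma2))             # standard dry air, 15 C
    return n_std * (pressure_pa / 101325.0) * (288.15 / (273.15 + temperature_c))

for wl in (400.0, 500.0, 656.3, 850.0, 1083.0):
    print(f"{wl:7.1f} nm :  n - 1 = {refractivity(wl):.6e}")
```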
in order to achieve this ,the field of view is stepped across the spectrograph slit in a direction perpendicular to the orientation of the slit , recording separate spectra at each position .there is a new group of multi - wavelength ( various ranges from 390 1600 m ) , high - resolution ( 0.3 /pixel ) , scanning spectrographs currently being used in solar physics , including spinor , polis , and trippel , or being constructed , such as the visp for the atst . due to the limitations given by the photon flux, it is often necessary with these instruments to integrate for several seconds or more at each slit position .a single scan of a sizable area of the solar surface may require tens of minutes or up to an hour .in addition , since the temporal evolution of the solar structures or the accurate measurement of oscillatory behavior is important , it is often necessary to track and repeatedly scan the same region in the solar atmosphere continuously for periods of several hours or more .the spectrograph slit is placed in a focal plane , extracting a portion of the image formed on the slit . however , the atmospheric dispersion will cause images at multiple wavelengths to be formed at different positions on the focal plane with respect to the slit .if a significant component of the separation between the images at different wavelengths lies perpendicular to the length of the slit , then the spectrograph will sample different regions on the solar atmosphere at each wavelength . by orienting the slit along the direction of the chromatic separation , the same region will be observed ( with a slight shift along the length of the slit ) at all wavelengths .thus the spectra at different wavelengths can subsequently be aligned using the techniques described above .this is the approach taken , for example , by , , and more recently by and .using a scanning spectrograph with the slit held fixed along the direction of the atmospheric dispersion thus allows observations for which at each slit step the same portion of the solar surface is observed at all wavelengths .however , this approach has generally been avoided because in this case the slit undergoes a constant rotation with respect to the celestial object being observed .this can be seen in figure [ figure:2 ] where the parallactic angle has been calculated from equation ( [ eqn : parang ] ) using the same ephemeris employed for figure [ figure:1 ] .it can be seen that during the winter the parallactic angle undergoes a continuous variation during the day , while in the summer when the sun passes nearly overhead at the latitude of haleakal , the parallactic angle remains nearly constant except for a rapid variation near local noon corresponding to the large changes in azimuth as the sun passes near the zenith .this apparent rotation of the slit orientation will produce complications in the geometry of the spatial scan , as illustrated in figure [ figure:3 ] .the extent of this distortion will depend on the rate of change of the parallactic angle and the amount of time it takes complete the scan .subsequent scans of the same area , taken with the sun at the different altitude and hour angle , will have differing amounts of distortion and will all need to be mapped to a common grid in order to be compared .it should also be noted that this rotation of the observed object with respect to the direction of the atmospheric dispersion implies that multi - wavelength fixed - position ( _ i.e. 
_ with no spatial scanning ) spectrographic observations are inherently plagued by the fact that there is no means to keep a one - dimensional slice of the solar surface observed at multiple wavelengths fixed with respect to a spectrograph slit over time .generally then , when building up a two - dimensional field with a scanning spectrograph the slit is held fixed with respect to the celestial coordinate system and thus provides a regular sampling of the solar surface . in this casethere will usually be some component of the atmospheric dispersion which is perpendicular to the slit . since a full map of the solar surface is observed , it is possible to correct for an arbitrary direction of the atmospheric dispersion by applying the appropriate shifts to the data cubes in directions both parallel and perpendicular to the slit . in most casesthe changes in the magnitude and the direction of the atmospheric dispersion during a single scan are not significant and the mean value can be applied , although at higher resolutions the variations during a scan may become important .however , even though the shifts between maps at different wavelengths can be removed , there remains a problem in that the spectra of the same portion of the solar surface may be obtained at different times in different wavelengths .the time delay between sampling the same position at two different wavelengths can be given by where is the magnitude of the chromatic separation perpendicular to the slit , is the size of the scan step , and is the time required for each step .since the temporal evolution can be rapid with respect to the scanning speed , the correlations between measurements at multiple wavelengths could be compromised by changes in the solar structures or differing phases of solar oscillations .consider , for example , an observation using a scan step of 0.2 ( corresponding to 140 km on the solar surface ) and a four - second exposure time per scan position . the evolutionary time scale of a 140 km element in the solar photosphere is on the order of 20 seconds or more ( chromospheric timescales will be even shorter ) .observations requiring direct comparison between structures or spectral profiles measured at different wavelengths should all be acquired within this time span . 
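The time-lag estimate above is simple enough to write down directly; for the worked example in the text (0.2 arcsec scan step, four seconds per position), a perpendicular chromatic separation of 1 arcsec already gives a 20-second lag, comparable to the quoted photospheric evolution timescale.

```python
# Time lag between sampling the same solar position at two wavelengths,
# Delta_t = (d_perp / s) * t_step, for the example quoted in the text.
def wavelength_time_lag_s(perp_separation_arcsec, step_arcsec=0.2, t_step_s=4.0):
    return perp_separation_arcsec / step_arcsec * t_step_s

for d_perp in (0.5, 1.0, 2.0):
    print(f"perpendicular separation {d_perp:.1f} arcsec"
          f" -> lag {wavelength_time_lag_s(d_perp):.0f} s")
```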
if the magnitude of the chromatic separation perpendicular to the slit is greater than approximately 1 , this condition will not be met and comparisons among multiple wavelengths , even after spatial alignment to remove the offsets produced by the dispersion , will remain problematic .examination of figure [ figure:1 ] shows that the magnitude of the dispersion is greater than 1during the first three hours after sunrise or before sunset .as even higher resolutions are achieved , the relative importance of the atmospheric dispersion will increase with respect to the slit width and scan step .while the bulk rotation of the celestial coordinate frame with respect to the direction of the atmospheric dispersion can not be eliminated , it is possible to devise a method that would at least allow for the regular sampling of a 2-d area of the solar surface while ensuring that the same elements on the solar surface are observed simultaneously at multiple wavelengths .this approach takes advantage of the fact that the rotation between the image plane and the slit operates over the full slit length , while for the shift between images at different wavelengths the lever arm is only as long as the chromatic separation between the images .this gives a greater tolerance in the orientation of the slit so that it can be allowed to rotate away from the true vertical direction within certain limits , allowing the slit to be maintained at a fixed orientation in celestial coordinates .the basic approach is that prior to performing a spatial scan with the spectrograph , the slit is oriented at an angle corresponding to the mean parallactic angle for the full scan to be performed . during the scan ,the solar image is rotated such that its orientation is held fixed with respect to the spectrograph slit , resulting in rectilinear sampling of the observed field .eventually the parallactic angle will change such that the images at the wavelengths being observed will be significantly offset perpendicularly to the slit .this will require that the slit be oriented along the new mean parallactic angle before resuming the image rotation necessary to maintain the slit at a fixed direction on the solar surface .when the parallactic angle is stepped all subsequent scans will have a different overall orientation in celestial coordinates , but this can be dealt with through a simple bulk rotation of the datacube to a common orientation .the metric of interest in this case is the amount of time that the slit can be held at a fixed orientation in celestial coordinates before the change of the parallactic angle causes the shifts perpendicular to the slit for images at different wavelengths to be significant .a perpendicular shift of half of the slit width can be considered significant since it will cause the primary contribution to the flux at different wavelengths will come from different spatial positions .since the half of a slit width shift is acceptable in either of the two directions perpendicular to the slit , we take one slit width as the cutoff for the maximum allowable shift .the magnitude of the perpendicular shift at any moment depends on the difference between the starting ( and actual ( parallactic angle and the magnitude of the atmospheric dispersion at that moment . 
taking each one - minute time step as the starting point ,we calculate the value of for all subsequent times during that day .we then determine the first moment when this function exceeds the defined cutoff of one slit width .the result is a measure of how long , starting from the slit oriented along the direction of the chromatic separation , the image can be rotated to maintain the slit at a fixed orientation in celestial coordinates without introducing significant offsets transverse to the slit at different observed wavelengths .figure [ figure:5 ] shows the allowable durations for all times during the year , calculated for an example observation with the atst spanning the wavelengths 400 and 850 nm , and with a slit width of 0.05 .observations at this resolution will require excellent seeing and adaptive optics stabilization , but obtaining multiwavelength spectral information on solar structures at this scale is an important science driver for atst .it can be seen that the calculated duration can vary strongly during the day and shows different behaviors at different times of the year . in the winter months from october through march , the allowable time is often 15 minutes or less . considering a typical exposure time of approximately five seconds ,this only allows for 180 step positions , which may allow a scan of less than 10on the solar surface .there are also two one - month periods , starting in april and august , when the slit can be held at a fixed orientation with respect to the celestial coordinates for an hour or more .these periods might be best employed for certain observations requiring spatially and temporally extended observations at multiple wavelengths .a further optimization in the calculation of the allowable durations is possible by not forcing the slit to be aligned strictly in the direction of the atmospheric dispersion at the start of the observation , but rather to permit it to be set to any angle , while still maintaining the perpendicular displacement to be less than the cutoff value . in some casesthis can allow for longer periods of observations without the need to adjust the orientation of the slit with respect to the vertical direction .the gain is seen mostly at midday when the sun passes near the zenith and the parallactic angle rotates rapidly but is very small .we have been primarily concerned with the effect of atmospheric dispersion on spectrographic observations of the sun , but obviously the same effect will present in filter observations as well . while in this latter casethe acquisition of instantaneous two - dimensional maps at each wavelength allow for the application of arbitrary shifts to align the images , it should be remembered that the appropriate shifts to coalign different wavelengths will vary in magnitude and direction during the course of the day .one concern with imaging through a filter is the smearing caused by the separation between the solar scene at the different wavelengths transmitted by the filter .the significance of this smearing depends on the magnitude of the atmospheric dispersion , the width of the filter passband and the resolution achieved in the final image . in order to quantify the effects of this smearing, we estimate the change in the diffraction limited psf for a telescope with a given aperture .the transmission profile for a filter provides the percentage of the incident flux transmitted at each wavelength . 
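A sketch of the allowable-duration calculation described above. The transverse offset at time t is taken here as |delta_R(t) sin(q(t) - q0)|, which is the natural geometric reading of the (garbled) expression in the text; inputs are per-minute series of the dispersion magnitude and the parallactic angle, and the cutoff is one slit width.

```python
# How long the slit can stay fixed in celestial coordinates before the
# transverse chromatic shift exceeds one slit width (assumed offset formula).
import numpy as np

def allowable_duration_min(dispersion_arcsec, parallactic_deg, slit_width_arcsec=0.05):
    """Minutes until |delta_R(t) * sin(q(t) - q0)| first exceeds the slit width."""
    q0 = parallactic_deg[0]
    shift = np.abs(dispersion_arcsec * np.sin(np.radians(parallactic_deg - q0)))
    over = np.nonzero(shift > slit_width_arcsec)[0]
    return int(over[0]) if over.size else len(shift)   # one sample per minute

# Toy series: dispersion slowly decreasing, parallactic angle rotating 0.2 deg/min
t = np.arange(240.0)
print(allowable_duration_min(2.0 - 0.005 * t, 10.0 + 0.2 * t), "minutes")
```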
for each transmitted wavelength, we can calculate , for a specific set of atmospheric conditions and observing circumstances , the associated atmospheric dispersion . for each spectral position in the profilethen we can calculate the offsets relative to the central wavelength by setting in equation ( [ eqn : deltar ] ) to the central filter wavelength . by combining the calculated offsets with the transmission profile , we construct a weighted distribution of shifts in the resulting image .this spatial smearing profile can be convolved with the diffraction limited psf for a telescope of a given aperture to determine how the telescope psf is broadened by the effects of the atmospheric dispersion .this smearing will be produced only along the direction of the atmospheric dispersion .we perform this calculation for a range of wavelengths and telescope apertures .we calculate the refractivity for the atmospheric conditions given in section 2.2 and for observations at an elevation of 15 above the horizon .we calculate the transmission profile for a ideal two - cavity filter centered on each wavelength in a range from 350 to 900 nm .we then convert the transmission profile into a spatial smearing profile that is convolved with the airy function for telescopes with apertures ranging from 0.5 to 5 meters . for each combination of wavelength and aperture, we find the fwhm of the filter profile that results in a reduction of the peak transmission of airy profile by a factor of 0.8 .this value for the strehl ratio implies a possibly tolerable but not altogether negligible degradation in the image quality .this smearing should be included during the design process in the overall error budget in determining the final image quality for a given instrument the results of this calculation are shown in figure [ figure:6 ] , where the contours indicate the fwhm of the two - cavity filter that achieves the defined strehl ratio . even at the current apertures of one meter or less , typical filters with a full width of ten nm can result in a notable image degradation at shorter wavelengths . with a four - meter class telescopethis constraint becomes more limiting , reducing the usable filter passbands and offsetting the gain in photon flux with the larger aperture .for example , g - band observations with a four - meter telescope may be limited to a 0.5 nm passband , down from the one nm filters currently in common use .we have evaluated the magnitude and direction of the atmospheric dispersion for the sun for all times during the year .the relative offsets between images at different wavelengths are significant , especially in the low - elevation observations typical for high - resolution solar observations .the magnitude of this effect changes during the course of the day , which means that the alignment between images at different wavelengths will be a function of the time of observations .image alignment can not rely solely on reference points within the telescope ( grids , crosshairs , _ etc . 
_ ) , but must be either measured from the solar structures or calculated using the known functions for atmospheric refraction ( or some combination of the two ) .spectroscopic observations covering a broad range of wavelengths and requiring a direct comparison among different spectral lines , should be made by aligning the slit along the parallactic angle , although this will cause a rotation of the slit in celestial coordinates and a distortion of the scanning geometry .this will require a separate remapping of all of the data to a common grid , but it has the advantage for telescopes on an alt - azimuth mount that no optical image derotator is required for some instrument positions and the slit is held at a fixed position with respect to the telescope , possibly simplifying the polarization calibration .also , it is simpler to maintain the slit at a fixed orientation with respect to the horizon , since the spatial differential refraction will produce an small but non - negligible extraneous rotation of the observed celestial object , in addition to the rotation caused by the changing parallactic angle , that will need to be calculated and corrected .we have described a method allowing a regular sampling in celestial coordinates that allows the slit to deviate from the parallactic angle as long as the perpendicular shifts do not exceed a defined limit . using this limit we can calculate the acceptable amount of time that the solar image can be rotated to maintain a fixed orientation to the celestial coordinates without causing an unacceptable shift of images at different wavelengths perpendicular to the slit direction .the calculated period obviously depends on the slit width being used and the wavelength separation , but it still provides an observational constraint even for existing telescopes and resolutions , as can be seen in appendix a. the proper consideration of the atmospheric dispersion places constraints on the observational configuration .if the orientation of the slit is dictated by the parallactic angle then it can not be oriented based on the solar structure to be observed .for example , the slit can only be placed parallel or perpendicular to the limb of the sun at specific position angles that vary during the day . especially in the winter , when the sun passes lower in the sky , the celestial object will undergo a significant rotation with respect to a slit held at or near the parallactic angle . since different types of observations may be more or less constrained by these considerations , observation scheduling may have to take into account the different periods of the year when the parallactic angle changes more or less rapidly .we thank the referee for useful comments that helped to improve the paper .we are grateful for discussions and careful readings of the manuscript by gianna cauzzi , fabio cavallini , and alexandra tritschler .the mees meteorological data were kindly provided by don mickey .this work was supported by the italian research ministry , prin - miur 2004 . 33 , l.h . andstandish , e.m . : 2000 , * 119 * , 2472 . , c. , schmidt , w. , kentischer , t. , and elmore , d. : 2005 , * 437 * , 1159 . , j.m ., mauter , h.a . , mann , g.r . , andbrown , d.r .: 1972 , * 25 * , 81 ., l.r . , langhans , k. , and schlichenmaier , r. : 2005 , * 443 * , l7 .ruiz cobo , b. , and collados , m. : 2000 , * 535 * , 475 ., k. , vial , j .- c . , koutchmy , s. , and zirker , j.b . : 1994 , in k.s .balasubramaniam and g.w .simon ( eds . ) , _ asp conf . 
ser .68 : solar active region evolution : comparing models with observations _ , astron .pacific , san francisco , p. 389 . , d. , bellot rubio , l.r ., and del toro iniesta , j.c .: 2005 , * 439 * , 687 .chambers , k.c .: 2005 , in p.k .seidelmann and a.k.b .monet ( eds . ) , _ asp conf . ser . 338 : astrometry in the age of the next generation of large telescopes _ , astron .pacific , san francisco , p. 134 ., p.e . : 1996 , * 35 * , 1566 ., p.e . : 1999 , * 38 * , 1663 . , j.g . and cromer , j. : 1988 , * 100 * , 1582 . ,l.e . , robinson , r.d . ,mauter , h.a . ,mann , g.r . , and phillis , g.l .: 1981 , * 71 * , 237 .brodie , j.p ., bixler , j.v . , and hailey , c.j .: 1989 , * 101 * , 1046 ., b. : 1966 , _ metrologia _ * 2 * , 71 .elmore , d.f ., socas - navarro , h. , card , g.l . , and streander , k.v .: 2005 , in s. fineschi and viereck , r.a .( eds . ) , _ solar physics and space weather instrumentation _ , _ proc .spie _ * 5901 * , 60 .: 1982 , * 94 * , 715 . , f. , beckers , j. , brandt , p. _ et al . _ : 2004 , in j.m .oschmann ( ed . ) , _ ground - based telescopes _ , _ proc .spie _ * 5489 * , 122 . , s. , oschmann , j.m . , rimmele , t.r .et al . _ : 2004 , in j.m .oschmann ( ed . ) , _ ground - based telescopes _ , _ proc .spie _ * 5489 * , 625 ., d. : 2006 , _ the trippel spectrograph : a user guide _ , http://www.solarphysics.kva.se/lapalma/spectrograph/spectrograph.html , b.w . ,rutten , r.j . , and berger , t.e . : 1999 , * 517 * , 1013 . , n. and kosovichev , a. : 2003 , * 412 * , 541 . , d. 2006 , mso weather measurements , http://www.solar.ifa.hawaii.edu/weather/measurements.html , l.h.m . , hansteen , v.h . , carlsson , m. _ et al ._ : 2005 , * 435 * , 327 . , j. , whl , h. , kuera , a. , hanslmeier , a. , and steiner , o. : 2004 , * 420 * , 1141 . , g. , owner - petersen , m. , korhonen , t. , and title , a. : 1999 , in t.r .rimmele , k.s .balasubramaniam , and r.r .radick ( eds . ) , _ asp conf .183 : high resolution solar physics : theory , observations , and techniques _ , astron .pacific , san francisco , p. 157 .seidelmann , p.k . : 1992 , _ explanatory supplement to the astronomical almanac _ , university science books , new york city ., g.w . : 1966 , * 71 * , 190 . , h. , beckers , j. , brandt , p. _ et al . _ : 2005 , * 117 * , 1296 . , h. , elmore , d.f . ,pietarila , a. _ et al . _: 2006 , * 235 * , 55 . , d.c.h . ,thomas , j.h . , and lites , b.w .: 1997 , * 477 * , 485 . , r.c . : 1996 , * 108*,1051 . ,g.p . : 2005 , * 443 * , 703 . , r. , von der lhe , o. , kneer , f. _ et al ._ : 2005 , in s. fineschi and viereck , r.a .( eds . ) , _ solar physics and space weather instrumentation _ , _ proc .spie _ * 5901 * , 75 .woolard , e.w . andclemence , g.m . : 1966 , _ spherical astronomy _ , academic press , new york city .young , a.t . : 2004 , _ astron . j. _* 127 * , 3622 .we present here plots , similar to those shown in the main paper , calculated for roque de los muchachos , la palma , spain ( latitude : 28.76 ; longitude : -17.88 ; altitude : 2350 m ) .this is an alternate site for the atst and the location of other high - resolution solar telescopes such as the swedish one - meter solar telescope ( sst ) .these calculations will also apply to gregor which will be located on the island of tenerife . for simplicity, we use the same meteorlogical conditions as for halekal . due to the sites higher latitude , the behavior of the parallactic angle , for example , is different from that of a tropical site .
We investigate the effects of atmospheric dispersion on observations of the Sun at the ever-higher spatial resolutions afforded by increased apertures and improved techniques. The problems induced by atmospheric refraction are particularly significant for solar physics because the Sun is often best observed at low elevations, and the effect of the image displacement is not merely a loss of efficiency, but the mixing of information originating from different points on the solar surface. We calculate the magnitude of the atmospheric dispersion for the Sun during the year and examine the problems produced by this dispersion in both spectrographic and filter observations. We describe an observing technique for scanning spectrograph observations that minimizes the effects of the atmospheric dispersion while maintaining a regular scanning geometry. Such an approach could be useful for the new class of high-resolution solar spectrographs, such as SPINOR, POLIS, TRIPPEL, and ViSP.
since its introduction during world war ii , radar ( _ radio detection and ranging _ ) has been the standard method for the detection of moving objects .the future efficacy of radar for military applications is being called into question , as many countries have become interested in developing weapons systems that employ various technologies to evade radar detection . currently , only the united states has radar - evading , or `` stealth , '' aircraft in service , namely the f-22 raptor and the b-2 spirit ( the first stealth - capable fighter - bomber , the f-117 nighthawk , is no longer in service , while the f-35 joint strike fighter is still under development ) .however , other countries are currently working on developing stealth - capable aircraft .notably , russia is currently developing the t-50 fighter jet as a rival to the f-22 , with a planned operational deployment sometime around 2015 .the acquisition and development of stealth technologies by rivals of the united states presents a potential threat to the united states and her interests .thus , in the near future , it will be necessary to develop new , stealth - resistant , methods for detecting aircraft and other moving objects , such as uavs , cruise and ballistic missiles .such detection methods will both provide warning time to prepare for an impending attack , as well as to identify the location of a threat , which may then be destroyed with appropriate counter - measures . a variety of detection methods other than traditional radar - based methods are possible .first of all , some stealth technology works by reflecting the incoming radar beam away from the source . while the source radar can not detect the incoming object , using multiple station radar allows for the reception of this diverted radar signal , which may then be used to determine the location of the aircraft via triangulation .the problem with this approach is that it will not work for stealth aircraft that achieve their radar - evading capability by absorbing the incoming radar beam , using so - called _ radar - absorbing materials _ , or ram .another approach relies on detecting the heat signature of an aircraft using infrared sensors .once again , the problem with this approach is that modern stealth aircraft are generally designed to minimize their heat signatures , make this approach problematic . a third approach involves using high resolution optical imaging to directly observe the moving object in the visible spectrum .while this approach may be viable for the time being , at least for detection of aircraft during daytime , there are currently research efforts underway to design cloaking " devices that can bend light around an object and make the object appear transparent .such devices are based on meta - materials , " whose optical properties can be controlled by appropriate design of their internal structure .finally , a fourth approach involves detecting the sound made by an approaching aircraft , uav , or missile .if the object is subsonic , then this approach is feasible , as the sound wave generated by the object arrives at the detector before the object itself .however , for supersonic objects , the sound wave will arrive at the detector after the object , rendering this approach useless . 
here, we propose an alternative method for the detection of moving objects .this method exploits the fact that all massive objects generate a gravitational field , and that a moving object will lead to a time - varying gravitational field that can be measured at various points . by measuring this time - varying field at a sufficient number of points , it is possible to obtain the mass , position , and velocity of the object by solving a system of nonlinear algebraic equations .this approach has an advantage over other detection methods , in that , because it is impossible to hide or shield a gravitational field , this method should be much more difficult , if not impossible , to counter , than other methods .there are two main drawbacks and one limitation with this method .the first drawback is that it requires the ability to detect gravitational fields that are four to five orders of magnitude weaker than what is possible with current gravimetric devices .the second drawback is that , even with a perfect detector , the gravitational signal generated by the moving object of interest may be masked by effects such as atmospheric disturbances and clutter due to the random motion of various other objects ( e.g. cars , animals , etc . ) .a potential limitation of this method is that it may not work well for ships and submarines that displace a mass of water equal to their own mass .the reason for this is that a variation in the gravitational field is generated by variations in the mass density distribution .if a moving ship or submarine simply displaces an equal mass of water , then the mass distribution may not change sufficiently to lead to a detectable signal .thus , this mass detection method , if feasible , is likely only to be applicable to massive objects that travel on the ground or in the air .this paper contains the basic theory underlying a gravity - based approach for mass detection .we will discuss various ways to deal with anticipated drawbacks of this method .in particular , we will propose an initial set of studies to determine whether the method is at all feasible .therefore , the present work has the nature of an entire research program .a full journal version with details of the relevant physical background may be found in .according to newton s universal law of gravitation , two point - objects of mass and interact through a gravitational force of magnitude , where is the distance between the objects , and is the _ gravitational constant _ , which in si units is ] and [mm / s ] .these parameters are within readily measurable limits , so that , if we can design a gravity - based interferometer that produces an interference effect characterized by these parameters , then it should be possible to design a gravimetric device with the necessary sensitivity .one approach for obtaining the desired mass to velocity ratio is to take an object with a mass of [amu ] moving at [mm / sec ] .such an object has a mass comparable to that of a virus .while it may seem counter - intuitive or difficult to exploit quantum interference effects with such massive objects , it should be noted that the double - slit diffraction experiment has been pushed considerably beyond electrons to small molecules .there is currently active research on creating quantum superpositions of even more massive objects such as viruses , and perhaps even as large as 1 millimeter .of course , it is necessary to cool such objects to near absolute zero in order to ensure that they are in their ground quantum state , 
thereby preventing a phenomenon known as decoherence from destroying quantum superposition effects .furthermore , in order to exploit these effects to create a usable device , it will be necessary to create a steady particle beam out of these comparatively massive objects . clearly then , while the design of ultra - sensitive gravimeters based on gravity - induced quantum interference is not necessarily an idea based in science fiction , it is nevertheless at the outer edge of our current technological capabilities .a second possibility would be to use what is known as a coherent matter wave , or `` matter laser , '' generated from a bose - einstein condensate .a bose - einstein condensate , or bec , is a phase of matter whereby all of the particles are in the ground energy state .because the particles of a bec are in the same quantum state , the bec exhibits strong quantum superposition effects at macroscopic scales . a bec fluid can not be treated using classical fluid mechanics , rather , a purely quantum - mechanical approach must be adopted . in analogy with coherent light that is used to create a laser ,researchers are interested in using becs to create particle beams that are essentially coherent matter waves , i.e. , a `` matter laser . ''such a matter laser could form the `` working fluid '' of our proposed gravimetric device . in particular , in analogy to laser physics , the interference of two individual bose - einstein condensate wavefunctions demonstrates multiparticle interference with matter waves ; see and the references therein .this body of work demonstrates that bose condensed molecules are `` laser like '' : they are coherent and show long - range correlations .indeed , the first bec was achieved with rubidium atoms in 1995 , cooled to 170[nk ] .rubidium has an atomic weight of approximately 85[au ] , so that , in order to achieve the desired mass to velocity ratio for our device as described above , we require a particle velocity of [m / s ] . using the de broglie formula, this corresponds to a particle wavelength of [cm ] .since this corresponds to the ground state of a single - particle wavefunction , the condensate would have to be created in a box with a length on the order of 110[cm ] .in si units , the critical temperature for condensate formation is given by , where is the particle density and is the mass per particle .a condensate temperature of 1[ k ] then requires a particle density of [ particles/ = [particles/ . 
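The condensation threshold quoted above can be rearranged to give the density required for a chosen critical temperature: from T_c = (2 pi hbar^2 / m k_B)(n / zeta(3/2))^(2/3) one gets n = zeta(3/2) (m k_B T_c / 2 pi hbar^2)^(3/2). The sketch below evaluates this for rubidium-87 at the two temperatures discussed; the specific numbers are illustrative rather than taken from the paper.

```python
# Ideal-gas BEC threshold: particle density needed to condense at a given T_c.
import numpy as np
from scipy.constants import hbar, k as k_B, u
from scipy.special import zeta

def density_for_Tc(T_c_kelvin, mass_kg):
    """Particle density [m^-3] at which an ideal Bose gas condenses at T_c."""
    return zeta(1.5) * (mass_kg * k_B * T_c_kelvin / (2.0 * np.pi * hbar ** 2)) ** 1.5

m_rb = 87 * u                            # rubidium-87 (the text quotes ~85 amu)
for T_c in (1e-6, 1e-9):                 # 1 microkelvin and 1 nanokelvin
    n = density_for_Tc(T_c, m_rb)
    print(f"T_c = {T_c:.0e} K  ->  n ~ {n:.2e} m^-3  ({n * 1e-6:.2e} cm^-3)")
```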
for a condensate temperature of 1[n k ] , which is on the order of the lowest achievable temperature to date , a particle density on the order of [particles/ required .given the required dimensions of the container in which our bec is to be created , this means that it will be necessary to create a bec with on the order of a minimum of particles .given that the largest becs to date have been achieved using on the order of particles [ 6 ] , it is clear that atom cooling and bec technologies will have to be developed some more before our proposed gravimetric device is feasible .further , it has recently been experimentally demonstrated that it is possible to split gaseous bose - einstein condensate into two coherent condensates by deforming an optical- well where the condensate was trapped .experiments analogous to what is envisioned for the proposed device have been performed and the two condensates were brought together at which point a matter - wave interference pattern was observed .the coherent condensates were separated for a duration of 5[ms ] by 13[ m ] and by 80[ m ] in .the spatial scales in these experiments have been several orders of magnitude smaller than the scales envisioned for the proposed gravimeter. however , cryogenic technology exists , and is developing fast , that is expected to make the necessary space scales feasible .the temperatures that are needed to be maintained along the path of the beam are of the order of 1[ ] while the present - day record for achievable low temperatures is below 1[nk ] , i.e. , _ the record is more than two orders of magnitude lower than the temperature that a gravitometer will require ._ this is very encouraging although the needed conditions will have to be maintained for the substantial length that the matter wave would need to traverse .finally , it should be emphasized that each gravimetric device will actually consist of up to three interferometers , which will separately measure the gravitational field in the and directions , respectively .the one potential issue with using the direction is that the gravitational field in this direction is already relatively strong , since it is the direction of the earth s gravitational field .a bec or virus - based detector aligned along this field may be too sensitive , and could essentially be overwhelmed by the strength of the signal ( i.e. , the interference pattern will be characterized by parameters that will make it difficult to measure ) .using an alternative working fluid , such as a neutron beam , for this direction , will allow one to measure the gravitational field in the z direction , however , it will not have the required sensitivity to measure the extremely weak fluctuations generated by a moving object . as a result , it may be preferable to only measure the time - variation in the local gravitational fields in the x and y directions . 
then , instead of requiring two detectors per moving object , it will be necessary to use three detectors .clearly , it is essential to determine whether or not our method for detecting moving objects will work in the presence of various disturbances that can introduce noise into the gravitational signal at the detectors .there are two major sources of noise that will need to be filtered out from the gravimetric signal .the first major source of noise is due to moving objects that are not of interest , such as civilian traffic and the movement of local wildlife .the second is noise due to atmospheric disturbances .this includes weather phenomena such as wind , precipitation , clouds , and even simple convection patterns .there are several complementary approaches to dealing with these sources of noise , all of which will need to be considered in any feasibility study .first of all , a moving object with a flight profile that would make it of interest for detection will likely generate a gravimetric signal that has distinct characteristics from the sources of noise described above .determining the defining features of this signal , along with the defining features of the various sources of noise described above , will allow us to design appropriate digital filters for extracting the gravimetric signal generated by the moving object itself .in particular , for certain types of moving objects , such as civilian traffic , it may be possible to detect individual cars and airplanes using optical or radar trackers .this could be done within a certain radius of the detectors where their signal could be expected to significantly distort the gravimetric signal generated by the object to be detected . by accounting for the gravitational field generated by these objects, it will be possible to filter out the contribution that these objects make to the gravimetric signal at the detectors .we will also explore general distributed models of civilian traffic and wildlife , which is not directly detectable by other means , that could affect the gravimetric signal , especially if the detectors are placed next to major cities . such civilian traffic should generate a relatively constant signal with low frequency noise .in fact , one should expect relative constancy of motion and slow speeds for many typical non - military sources .filtering out the effect of such additional background sources from a typical military target should be achievable quite effectively using standard bandpass filtering techniques in the frequency domain .next , it should be noted that , for the case of general atmospheric disturbances , the overall effect on the gravimetric signal is expected to be relatively weak .the reason for this is that the gravitational field is determined entirely by the mass distribution , and not by the velocity field associated with the distribution .therefore , even if there are moving masses of air and precipitation , the overall change in the mass distribution may be sufficiently small as to have a comparatively small effect on the gravitational field .the precise contribution of disturbances such as weather fronts and wind gusts , albeit small , is expected to be more challenging . in order to model such disturbances , we will need to work with fluid dynamic models that describe atmospheric phenomena . 
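As an illustration of the band-pass filtering idea mentioned above, the toy example below suppresses a slow "civilian background" drift and broadband noise while keeping a faster transient that stands in for the signature of a fast mover. The sampling rate, frequencies, amplitudes, and pass band are all invented for the illustration and are not derived from the paper.

```python
# Toy band-pass separation of a fast transient from a slow background drift.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0                                        # samples per second (assumed)
t = np.arange(0.0, 600.0, 1.0 / fs)              # ten minutes of data
background = 1e-12 * np.sin(2 * np.pi * 0.001 * t)                     # slow drift
target = 5e-14 * np.exp(-((t - 300.0) / 20.0) ** 2) * np.sin(2 * np.pi * 0.05 * t)
noise = 1e-14 * np.random.default_rng(0).standard_normal(t.size)
signal = background + target + noise

b, a = butter(4, [0.02, 0.2], btype="bandpass", fs=fs)                 # pass 0.02-0.2 Hz
filtered = filtfilt(b, a, signal)
print("background leakage (rms):", np.std(filtfilt(b, a, background)))
print("recovered target peak    :", np.abs(filtered[2800:3200]).max())
```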
for our purposes ,such models do not need to be overly sophisticated , as our goal is not to predict specific weather patterns , but rather to characterize the gravimetric signal generated by such phenomena , in order to develop an appropriate filter to detect and remove them .an appropriate class of such models has been proposed in the meteorology literature based on the movements of weather fronts and using the theory of mass transport .these models are very easy to implement on computer , and could allow us to make the necessary differentiation of the signature of a moving ( compressible ) mass of air as opposed to that of a rigid object such as an aircraft .we anticipate the spectral content of these two signals to be significantly different , thereby allowing statistical analysis and the relevant filtering techniques to be brought to bear on the problem .we review some of the numerical methods that were employed in our simulations of section [ sec : simulations ] .the goal behind newton - raphson iteration , or simply newton s method , is to solve a nonlinear system of equations .the idea behind the method is to choose an initial guess , denoted that is reasonably close to the actual solution , which we denote by if this is the case , then we may make a first - order approximation and write , and so we may solve for by solving the linear system in reality , however , the value obtained for using the above procedure will not give the actual value for , but some other value , denoted the reason for this is that the first - order taylor expansion is not exact .nevertheless , if is close enough to so that the first - order taylor expansion is sufficiently accurate , then at the very least will be closer to the true value of than this means that may be used as an improved initial guess for eq .( [ a1 ] ) , which should then generate an improved estimate for , denoted continuing in this way , we generate a sequence of points , converging to that are related to each other by the recursion relation , very often , the difficulty with obtaining convergence using newton s method stems from an inability to pick a good initial guess .we illustrate one approach for dealing with this problem : suppose we wish to solve the nonlinear system of equations and we are given some initial guess this initial guess will not generate a sequence that converges to the desired solution , nevertheless , this initial guess is essentially as good as any other , since the system of equations is such that it is difficult to choose a good initial guess .the idea is to therefore start with the given initial guess and see if it is possible to reach the desired solution to the system of equations .we begin by defining so that is the value of the function evaluated at the initial guess .we then define a continuous curve with the properties that and thus , as ranges from 0 to 1 , goes from to we now choose some positive integer and for we define note that this generates a sequence by continuity , the distance between successive values of decreases as increases .so suppose we are able to obtain the solution , denoted , to the nonlinear system of equations for large the difference between and should be fairly small , so that should be fairly close to the solution to the nonlinear system of equations therefore , if we use as the initial guess to the solution for , then newton s method is fairly likely to converge in such a case .in this way , starting from the initial guess which is the solution to , we obtain from newton s 
method a sequence of points where at each step , where newton s method is used to generate from convergence is likely , because and are sufficiently close that and are close enough as well for newton s method to converge using as the initial guess . note that this approach does not attempt to solve the original system , it starts with a solution vector for which is the solution to and then continuously deforms into as a result , we term this method newton - raphson iteration with solution deformation . finally , although many deformations of into are possible , here we employ a linear deformation , given by where ,$ ] in our simulations .this paper describes a possible scheme for the construction of a gravimetric radar . the goal is the detection of large , fast moving objects , within a reasonable range , through the extraction of signals from gravimetric datathis will have direct applications to stealth technology . at present ,feasibility of such a device hinges on the development of sensors that are four orders of magnitude better than existing technology .it is envisioned that such a device will be based on gravity - induced quantum interferometry and the use of bose - einstein condensates in the form of particle beams with relatively massive , ultra - cold particles .the functionality and reliability of the gravimetric radar will rely critically on a substantial signal processing component to account and mediate the effects of known disturbances produced by other large moving objects or weather fronts .this research was partially funded by boeing .y. shin , m. saba , t. a. pasquini , w. ketterle , d. e. pritchard , and a. e. leanhardt , `` atom interferometry with bose - einstein condensates in a double - well potential , '' _ phys .lett . _ * 92 * , ( 2004 ) , 50405 .t. schumm , s. hofferberth , l. m. andersson , s. wildermuth , s. groth , i. bar - joseph , j. schmiedmayer , and p. krger , `` matter - wave interferometry in a double well on an atom chip , '' _ nature phys . _* 1 * , 5762 ( 2005 ) . m. zwierlein , c. stan , c. schunck , s. raupach , s. gupta , z. hadzibabic , and w. ketterle , `` observation of bose - einstein condensation of molecules , '' _ physical review letters _ * 91 * , no .25 ( 2003 ) , 250401 .
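the newton - raphson iteration with solution deformation described above can be summarized by the following minimal python sketch . the test system , the number of deformation steps and the finite - difference jacobian are placeholders ( an analytic jacobian would normally be used ) ; the deformation is the linear one mentioned in the text .

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-7):
    """Finite-difference Jacobian of f at x (an analytic Jacobian would normally be preferred)."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def newton(f, x0, tol=1e-10, max_iter=50):
    """Plain Newton-Raphson iteration for f(x) = 0, starting from the initial guess x0."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x -= np.linalg.solve(numerical_jacobian(f, x), fx)
    return x

def newton_with_deformation(f, x0, n_steps=20):
    """Newton-Raphson with solution deformation: x0 solves f(x) = b0 exactly (b0 = f(x0)),
    and the right-hand side is deformed linearly from b0 to 0; each stage is solved by
    Newton's method using the previous stage's solution as the initial guess."""
    x = np.asarray(x0, dtype=float).copy()
    b0 = f(x)
    for k in range(1, n_steps + 1):
        bk = (1.0 - k / n_steps) * b0              # linear deformation of the target
        x = newton(lambda y: f(y) - bk, x)         # warm-started Newton solve at this stage
    return x

# Toy placeholder system with a unique root (its Jacobian is diagonally dominant everywhere).
f = lambda x: np.array([2.0 * x[0] + 0.5 * np.tanh(x[1]) - 1.0,
                        0.5 * np.tanh(x[0]) + 2.0 * x[1] + 1.0])
print(newton_with_deformation(f, np.array([10.0, -10.0])))
```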
this paper discusses a novel approach for detecting moving massive objects based on the time variation that these objects produce in the local gravitational field measured by several detectors . such an approach may provide a viable method for detecting stealth aircraft , uavs , cruise missiles , and ballistic missiles . by inverting a set of nonlinear algebraic equations , it is possible to use the time variation in the gravitational fields to compute the mass , position , and velocity of one or more moving objects . the approach is essentially a gravity - based form of triangulation . based on order - of - magnitude calculations , we estimate that under realistic scenarios this approach will be feasible if it is possible to design gravimetric devices that are four to five orders of magnitude more sensitive than current devices . to achieve such a level of sensitivity , we suggest designing detectors that exploit a quantum - mechanical effect known as _ gravity - induced quantum interference_. furthermore , even with a perfect detector , it will be necessary to determine the magnitude of various atmospheric disturbances and other sources of noise .
a great interest was shown lately towards problems concerning optimal partitions related to some spectral quantities ( see ) . among them , we distinguish two problems which interest us .let be a bounded and connected domain and the set of partitions of in disjoint and open subdomains .we look for partitions of which minimize * problem 1 . * : : the largest first eigenvalue of the dirichlet laplace operator in : * problem 2 . * : : the sum of the first eigenvalues of the dirichlet laplace operator in : for simplicity we refer to these two problems in the sequel as minimizing the sum or the max .theoretical results concerning the existence and regularity of the optimal partitions regarding these problems can be found in ( and references therein ) . despite the increasing interest in these problemsthere are just a few cases where optimal partitions are known explicitly , and exclusively in the case of the max .[ [ structure - of - the - paper ] ] structure of the paper + + + + + + + + + + + + + + + + + + + + + + in the following section , we apply known results to obtain estimates of the energy of optimal partitions in the case of a disk , a square or an equilateral triangle and recall informations about the structure of an optimal partitions .then , we focus on the minimization problem for the max : we give explicit results for and in other cases , we present the best candidates we obtained by using several numerical methods . in section [ sec.compar ] , we recall and apply a criterion which shows when a partition optimal for the max can not be optimal for the sum . in cases where the previous criterion shows that candidates for the max are not optimal for the sum , we propose better candidates in section [ sec.sum ] .these candidates are either obtained with iterative methods already used in or are constructed explicitly .by monotonocity of the -norm , we easily compare the two optimal energies : more quantitative bounds can be obtained with the eigenmodes of the dirichlet - laplacian on . for , we denote by the -th eigenvalue of the dirichlet - laplacian on ( arranged in increasing order and repeated with multiplicity ) and by the smallest eigenvalue ( if any ) for which there exists an eigenfunction with nodal domains ( _ i.e. _ the components of the nonzero set of the eigenfunction ) . in that case, the eigenfunction gives us a -partition such that the first eigenvalue of the dirichlet - laplacian on each subdomain equals .we set if there is no eigenfunction with nodal domains .it is standard to prove ( see for example ) : let us consider . since the first eigenvalue of the dirichlet - laplacian is simple and connected , then necessarily any eigenfunction associated with has one or two nodal sets whether or .consequently for and furthermore , any nodal partition associated with is optimal for the max when .we say that an eigenfunction associated with is _ courant - sharp _ if it has nodal domains with .+ in this paper , we focus on three geometries : a square of sidelength , a disk of radius and an equilateral triangle of sidelength .the eigenvalues are explicit and given in table [ tab.vp ] where is the -th positive zero of the bessel function of the first kind and is the -th element of the set . in table [ tab.estim ] , we explicit the lower and upper bounds in . 
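for reference , the explicit dirichlet eigenvalues referred to in table [ tab.vp ] are the classical ones . the latex snippet below restates them , writing a for the sidelength of the square and of the equilateral triangle and r for the radius of the disk ( symbols chosen for this note , which may differ from the normalization used in the table ) ; j_{m,k} denotes the k - th positive zero of the bessel function of the first kind j_m .

```latex
\begin{align*}
  \lambda^{\mathrm{square}}_{m,n}   &= \frac{\pi^{2}}{a^{2}}\,\bigl(m^{2}+n^{2}\bigr), & m,n\ge 1,\\[2pt]
  \lambda^{\mathrm{disk}}_{m,k}     &= \Bigl(\frac{j_{m,k}}{r}\Bigr)^{2},              & m\ge 0,\ k\ge 1,\\[2pt]
  \lambda^{\mathrm{triangle}}_{m,n} &= \frac{16\pi^{2}}{9a^{2}}\,\bigl(m^{2}+mn+n^{2}\bigr), & m,n\ge 1.
\end{align*}
```

in particular , the eigenvalues of the equilateral triangle are , up to the factor 16\pi^2/(9a^2) , the elements of the set \{ m^2+mn+n^2 : m,n\ge 1 \} , which is the set alluded to in the text .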
this work is supported by a public grant overseen by the french national research agency ( anr ) as part of the `` investissements d'avenir '' program ( labex _ sciences mathématiques de paris _ anr-10-labx-0098 and anr _ optiform _ anr-12-bs01-0007-02 ) .
dirichlet eigenfunctions of the square membrane : courant's property , and a. stern's and å. pleijel's analyses . in _ analysis and geometry _ , vol . 127 of _ springer proc . _ springer , cham , 2015 , pp . 69 - 114 . available from : http://dx.doi.org/10.1007/978-3-319-17443-3_6 .
courant - sharp eigenvalues for the equilateral torus , and for the equilateral triangle . 12 ( 2016 ) , 1729 - 1789 . available from : http://dx.doi.org/10.1007/s11005-016-0819-9 .
beniamin bogosel and virginie bonnaillie - noël + département de mathématiques et applications ( dma - umr 8553 ) + psl research university , ens paris , cnrs + 45 rue d'ulm , f-75230 paris cedex 05 , france + ` beniamin.bogosel.fr ` and ` bonnaillie.cnrs.fr `
in this paper we compare candidates for spectral minimal partitions with respect to two criteria : the maximum and the average of the first eigenvalue over the subdomains of the partition . we analyze in detail the square , the disk and the equilateral triangle . using numerical simulations , we propose candidates for the max , prove that most of them cannot be optimal for the sum , and then exhibit better candidates for the sum .
during the last years , the interest in the utilization of porous media composed of fibers has been considerably increased , especially for energy conversion applications .for instance , carbon papers and carbon felt are by now widely used as gas diffusion layers of fuel cells .but the rapid rise of decarbonized green energy demand does not limit the application of such materials to fuel cells .flow batteries have recently been perceived as one of the most promising technologies for electrochemical energy storage . even though flow batteries are known since the late 1980s , it is only during recent years that the scientific community has focused on improving their performance . a cell of a flow batteryis composed by two porous media fibrous electrodes .the inner surfaces of the porous media act as active site where electrochemical reduction and oxidation reactions of the electrolytes occur .both half - cells are supplied with the electrolyte solutions which are stored in external tanks and circulated by pumps to keep on the reactions .one limitation to the peak performance of flow batteries consists of the too slow electrolyte transport in the electrodes .the fluid dynamic optimization of the porous medium which provides both the electrochemical active surfaces and the mixing volume of the chemical species is one of the main technological issues to be dealt with .in fact , the slow dispersion process of species in water represents a bottleneck for the peak performance of flow batteries . specifically , the mass diffusion coefficients of the species in water , , are about 10000 smaller then the water kinematic viscosity , , indicating that the mass diffusion is 10000 times slower than the momentum transport .enhancing this diffusivity can produce a dramatic increase in the cell performance .a proper designed geometry of a non - isotropic porous medium can enhance this effective mass transport while minimizing the drag , thus improving and optimizing the batteries performances. .the present study deals with such analyses , by means of a lattice boltzmann model and a lagrangian particle tracking algorithm . even if the influence of medium porosity on the flow drag has been largely studied , the impact of its microscopic design on the combined mixing / transport mechanisms and drag is still not well assessed .in fact , even though the anomalous ( i.e. 
non - fickian ) behavior of dispersion in porous media has been widely investigated , it is not clear to what extent the micro - structure of the medium can impact macroscopic dispersion phenomena .local heterogeneities at various scales have been considered capable of generating such anomalous behavior .berkowitz and sher claimed that a wide distribution of delay times limiting the transport in porous media results in non - fickian dispersion which can not be represented by an equation including a time - dependent dispersion coefficient .instead , the authors highlighted that all the time evolution of motion must be taken into account , and that the macroscopic advection - dispersion equation ( ade ) must be non - local in time .whitaker identified different fluid - dynamic variables responsible for the dispersion by means of the volume averaging technique .this analysis revealed the presence of different terms in the averaged ade which act as sources of dispersion and convection .nevertheless , the volume averaging technique is not sufficient to predict the dispersion behavior in a general way , since the evaluation of the effective dispersion tensor is limited by some constraints .in fact , in practical applications the value of the effective dispersion tensor may be significantly different than expected , since it depends on the unconditioned statistics of hydraulic permeabilities of the porous medium .several authors agree that dispersion should tend to the standard fickian dispersion at a certain temporal or length scale for which all the hypothesis of the central limit theorem are satisfied , i.e. when the particle motion is no more correlated .such transient anomalous behavior has been recently recognized in a variety of physical - chemical and socio - economical systems , which can also present non - gaussian yet fickian dispersion behaviors . however , the aforementioned time or length scales strongly depend on the medium structure and , thus , they are not easy to determine a priori .more recently , other causes have been identified as responsible of the anomalous dispersion , such as the presence of three - dimensional vortices , particle jumps and different mechanisms of dispersion acting on subgroups of particles .castiglione et al . suggested that two mechanisms of dispersion ( i.e. a weak anomalous dispersion and a strong anomalous dispersion associated to ballistic motion ) can give rise to transient anomalous dispersion in several systems .the authors underlined that even though it is not particularly difficult to build up probabilistic models exhibiting anomalous dispersion , understanding anomalous dispersion in nontrivial systems , such as porous media , is much more difficult . a review of the literature about anomalous dispersion revealed that this behavior is really difficult to predict .furthermore , to the best of these authors knowledge , a good understanding of how porous medium micro - structure can enhance macroscopic transport is still lacking , especially for fibrous porous media . many works on such media have been focused on the geometrical properties which can possibly affect standard fickian dispersion and reaction , rather than on the intrinsic behavior of dispersion phenomena . in order to clarify this issue, this study presents results of several simulations at different preferential orientation of fibers , porosity and reynolds number . 
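as a rough illustration of how the fickian or anomalous character of dispersion is typically diagnosed in this type of study , the following python sketch estimates the mean square displacement of lagrangian tracers about their center of mass and its local scaling exponent ( exponent 1 for fickian dispersion , 2 for ballistic motion ) . the trajectories and all parameters below are synthetic placeholders , not data from the simulations described in this paper .

```python
import numpy as np

def msd_about_mean(traj):
    """traj: array of shape (n_steps, n_particles, n_dims) with tracer positions.
    Returns the mean square displacement about the instantaneous center of mass,
    per direction, as a function of time."""
    disp = traj - traj[0]
    disp = disp - disp.mean(axis=1, keepdims=True)   # remove the mean (advective) displacement
    return (disp ** 2).mean(axis=1)                  # shape (n_steps, n_dims)

def local_exponent(t, msd):
    """Local scaling exponent gamma(t) = d log(msd) / d log(t)."""
    return np.gradient(np.log(msd), np.log(t))

# Synthetic placeholder trajectories: independent Gaussian steps (purely Fickian by construction).
rng = np.random.default_rng(1)
n_steps, n_particles, dt, D = 2000, 500, 1e-3, 1e-3
steps = rng.normal(scale=np.sqrt(2.0 * D * dt), size=(n_steps, n_particles, 3))
traj = np.cumsum(steps, axis=0)

t = dt * np.arange(1, n_steps)
msd_x = msd_about_mean(traj)[1:, 0]
gamma = local_exponent(t, msd_x)
D_eff = msd_x[-1] / (2.0 * t[-1])        # long-time estimate of the dispersion coefficient
print(f"late-time exponent ~ {gamma[-200:].mean():.2f}, effective D ~ {D_eff:.2e}")
```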
a lattice - boltzmann - based model coupled with a lagrangian particle tracking algorithm has been used .the aim of the present paper is to clarify how the nematic properties of the porous medium affects the mass and momentum transport mechanisms in order to design optimal porous media with low drag and high effective mass diffusion .the minimization of drag reduces the pump power demand , while the maximization of the mixing improves the homogeneity of reacting species all along the porous medium , both effects enhancing the performance of flow batteries .it will be shown , that porous media constituted by fibers preferentially oriented along the flow direction exhibit smaller drag and higher effective diffusion .during recent years the lattice - boltzmann method ( lbm ) has gained much attention as alternative solution to the navier - stokes equations . due to its numerical efficiency , easy parallelization and capability of handling complex geometries , the lbm is a promising tool to simulate complex flow fields at low reynolds number , such as flows through fibrous electrodes for flow batteries . in the present study a three - dimensional d3q19 lattice - boltzmann multi - relaxation time( mrt ) model has been implemented .the mrt scheme allows to overcome some drawbacks of the bhatnagar - gross - krook ( bgk ) formulation , which is the simplest and most common lattice - boltzmann equation , such as the viscosity - dependent numerical errors , especially in the case of very complex geometries . in order to simulate a pressure gradient in the flow, an equivalent body force has been implemented .the lattice - boltzmann mrt equation reads as follows : where is the distribution function along the -th lattice direction at the position and time , is the so - called discrete velocity along the -th direction , is the transformation matrix , the collision matrix , the identity matrix , and , are the moment and the equilibrium moment along the -th lattice direction , respectively .eq.([lbmmrt ] ) is a discrete formulation of the boltzmann transport equation and states the relation between the collision step ( right - hand of the equation ) and the streaming step ( left - hand of the equation ) .the set of moments consists of the hydrodynamic moments , which are conserved during collision , e.g. mass and momentum , and the non - conserved moments . in order to recover the correct navier - stokes equation and avoid discrete lattice effects ,the body force has been added during the collision step as follows : where is the weight of the lbm scheme along the -th lattice direction and , and are the eulerian component of the discrete speed , velocity , and pressure gradient , along the directions .the macroscopic density and velocity are accurately recovered from the distribution functions : left panel : dimensionless permeability values for different cases of porosity and orientation .results gather around the predicted value of the blake - kozeny equation , as expected .it should be pointed out that the permeability values diminish as the porosity increases , a trend already observed in whitaker .right panel : drag coefficients from numerical simulations of flows through single packed - bed of spheres of radius compared with results of zick and homsby . 
]the present model is a further development of the lattice - boltzmann model already validated and used in maggiolo et al .the model has been further validated by evaluating the permeability values obtained with different values of porosity , fiber orientation and reynolds number and the drag exerted by the flow on single packed - beds of spheres .the permeability values have been obtained by means of the darcy equation which relates the velocity with the pressure gradient : where the pressure gradient corresponds to the applied body force , \ , \int_v u_x dv ] , evaluated from the calculated dimensionless values , overwhelm the molecular diffusion coefficients when real electrodes are considered . with regards to all - vanadium redox flow batteries ,the kinematic viscosity and the typical fiber diameter are ] , respectively , whereas the value of typical displacement length is of the order of centimeters along the streamwise directions , so that .the dispersion coefficients can be thus evaluated as . along the flow directionthe dispersion coefficients are ] , for the isotropic and the preferentially streamwise - oriented medium , whereas along the transverse direction they result ] , respectively .it should be noted that the typical molecular diffusion coefficient of vanadium ions in water is of the order of ] , so the provides a useful estimate of the effective diffusion being larger than the pore scale induced effective diffusion . ]these considerations highlight the major role of the fibrous medium in the enhancement of mixing in liquids .finally , the present findings on the effective diffusion can be directly applied to numerically solve advection - dispersion - reaction macroscopic equations for the species flowing in real fibrous media , in order to design optimal electrodes .in order to investigate the possible effects of inertia on dispersion dynamics at finite reynolds number , simulations of flows through isotropic fibrous media have been performed at higher number .interestingly , no significant differences have been found by increasing the reynolds number up to which can be considered an upper bound for redox flow battery applications .figure [ fig12 ] shows a comparison between the mean square displacements at and at , with the two curves overlapping almost perfectly .since the reynolds number range for the present application is , it can be concluded that the dispersion dynamics on flow batteries does not depend on the reynolds number and consequently inertial effects are negligible .this work was supported as part of the maestra project ( from materials for membrane - electrode assemblies to electric energy conversion and storage devices , 2014 - 2016 ) funded by the university of padua .34ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] link:\doibase 10.1038/nmat1368 [ * * , ( ) ] link:\doibase 10.1002/adma.201100984 [ * * , ( ) ] link:\doibase 10.1016/j.rser.2013.08.001 [ * * , ( ) ] link:\doibase 10.1007/s10800 - 011 - 0348 - 2 [ * * , ( ) ] in link:\doibase 10.1088/1742 - 6596/655/1/012049 [ _ _ ] , vol .( , ) p. 
doi:10.1007/bf00141261 , doi:10.1002/cjce.5450640302 , doi:10.1002/2014gl061475 , doi:10.1103/physreve.90.013032 , doi:10.1029/wr026i008p01749 , doi:10.1007/978-94-017-3389-2 ,
doi:10.1029/2000wr900362 , doi:10.1016/j.advwatres.2008.08.005 , doi:10.1029/95wr00483 , doi:10.1063/1.866716 , doi:10.1029/2002wr001723 , doi:10.1029/2000wr900364 ,
doi:10.1038/nmat3308 , doi:10.1029/2008gl035343 , doi:10.1103/physreve.63.021112 , doi:10.1016/s0167-2789(99)00031-7 , doi:10.1063/1.473941 , doi:10.1063/1.1582431 ,
doi:10.1098/rsta.2001.0955 , doi:10.1016/j.camwa.2009.02.008 , doi:10.1103/physreve.65.046308 , doi:10.1017/s0022112082000627 , doi:10.1063/1.4818453 ,
doi:10.1103/physreve.92.022148 , doi:10.1016/s0169-7722(00)00170-4 , doi:10.1016/j.jpowsour.2013.09.071
given their capability of spreading active chemical species and collecting electricity , porous media made of carbon fibers are extensively used as diffusion layers in energy storage systems , such as redox flow batteries . in spite of this , the dispersion dynamics of species inside porous media is still not well understood and often lends itself to different interpretations . indeed , the microscopic design of efficient porous media , which could potentially and effectively improve the performance of flow batteries , is still an open challenge . the present study aims to investigate the effect of the fibrous media micro - structure on dispersion , in particular the effect of fiber orientation on drag and dispersion dynamics . several lattice - boltzmann simulations of flows through differently - oriented fibrous media , coupled with lagrangian simulations of particle tracers , have been performed . results show that orienting fibers preferentially along the streamwise direction minimizes the drag and maximizes the dispersion , which is the most desirable condition for diffusion layers in flow battery applications . _ this article is currently in press in the journal physics of fluids . _
many of the signal and image processing problems can be posed as problems of learning a low dimensional linear or multi - linear model .algorithms for learning linear models can be seen as a special case of subspace fitting .many of these algorithms are based on least squares estimation techniques , such as principal component analysis ( pca ) , linear discriminant analysis ( lda ) , and locality preserving projection .but in general , training data may contain undesirable artifacts due to occlusion , illumination changes , overlaying component ( such as foreground texts and graphics on top of smooth background image ) .these artifacts can be seen as outliers for the desired signal .as it is known from statistical analysis , algorithms based on least square fitting fail to find the underlying representation of the signal in the presence of outliers .different algorithms have been proposed for robust subspace learning to handle outliers in the past , such as the work by torre , where he suggested an algorithm based on robust m - estimator for subspace learning .robust principal component analysis is another approach to handle the outliers . in ,lerman et al proposed an approach for robust linear model fitting by parameterizing linear subspace using orthogonal projectors .there have also been many works for online subspace learning / tracking for video background subtraction , such as grasta , which uses a robust -norm cost function in order to estimate and track non - stationary subspaces when the streaming data vectors are corrupted with outliers , and t - grasta , which simultaneously estimate a decomposition of a collection of images into a low - rank subspace , and sparse part , and a transformation such as rotation or translation of the image . in this work, we present an algorithm for subspace learning from a set of images , in the presence of structured outliers and noise .we assume some structure on outliers that suits many of the image processing applications , which is connectivity and sparsity . as a simple example we can think of smooth images overlaid with texts and graphics foreground , or face images with occlusion ( as outliers ) . 
to promote the connectivity of the outlier component, the group - sparsity of outlier pixels is added to the cost function ( it is worth mentioning that total - variation can also be used to promote connectivity ) .we also impose the smoothness prior on the learned subspace representation , by penalizing the gradient of the representation .we then propose an algorithm based on the sparse decomposition framework for subspace learning .this algorithm jointly detect the outlier pixels and learn the low - dimensional subspace for underlying image representation .after learning the subspace , we present its application for background - foreground segmentation in still images , and show that it achieves better performance than previous algorithms .we compare our algorithm with some of the prior approaches , including k - means clustering in djvu , shape primitive extraction and coding ( spec ) , least absolute deviation fitting ( lad ) .the proposed algorithm has applications in text extraction , medical image analysis , and image decomposition - .one problem with previous clustering - based segmentation techniques is that if the intensity of background pixels has a large dynamic range , some part of the background could be segmented as foreground , but our proposed model can correctly segment the image .one such example is shown in fig .1 , where the foreground mask ( a binary mask showing the location of foreground pixels ) for a sample image by clustering and our algorithm are shown .the structure of the rest of this paper is as follows : section ii presents the proposed framework for subspace learning .the detail of alternating optimization problem is presented in section ii .a , and the application of this framework for image segmentation is presented in ii .b. section iii provides the experimental results for the proposed algorithm and its application for image segmentation . andfinally the paper is concluded in section iv .despite the high - dimensionality of images ( and other kind of signals ) , many of them have a low - dimensional representation .for some category of images , this low - dimensional representation may be a very complex manifold which is not simple to find , but for many of the smooth images this low - dimensional representation can be assumed to be a subspace .therefore each signal can be efficiently represented as : where where , and denotes the representation coefficient in the subspace .+ there have been many approaches in the past to learn efficiently , such as pca and robust - pca .but in many scenarios , the desired signal can be heavily distorted with outliers and noise , and those distorted pixels should not be taken into account in subspace learning process , since they are assumed to not lie on the desired signal subspace .therefore a more realistic model for the distorted signals should be : where and denote the outlier and noise components respectively .here we propose an algorithm to learn a subspace , , from a training set of samples , by minimizing the noise energy ( ) , and some regualrization term on each component , as : where and denote suitable regularization terms on the first and second components , promoting our prior knowledge about them .here we assume the underlying image component is smooth , therefore it should have a small gradient . 
and for the outlier , we assume it is sparse and also connected , therefore we need to promote the sparsity and connectivity .hence , and , where shows the m - th group in the outlier ( the pixels within each group are supposed to be connected ) .+ putting all these together , we will get the following optimization problem : here by we mean all elements of the vector should be non - negative .note that denotes the spatial gradient , which can be written as : where and denote the horizontal and vertical derivative matrix operators , and ^t$ ] .the optimization problem in eq ( 4 ) can be solved using alternating optimization over , and . in the following part, we present the update rule for each variable by setting the gradient of cost function w.r.t that variable to zero .+ the update step for would be : the update step for the -th group of the variable is as follows : note that , because of the constraint , we can approximate , and then project the from soft - thresholding result onto , by setting its negative elements to 0 . the block - soft ( . ) is defined as : + for the subspace update , we first ignore the orthonormality constraint ( ) , and update the subspace column by column , and then use gram - schmidt algorithm to orthonormalize the columns .if we denote the j - th column of by , its update can be derived as : where , and .after updating all columns of , we apply gram - schmidt algorithm to project the learnt subspace onto .note that orthonormalization should be done at each step of alternating optimization .it is worth to mention that for some applications the non - negativity assumption for the structured outlier may not be valid , so in those cases we will not have the constraint . in that case, the problem can be solved in a similar manner , but we need to introduce an auxiliary variable , to be able to get a simple update for each variable . after learning the subspace , it can be used for different applications , such as segmentation and classification of signals .here we use this framework for background - foreground segmentation in still images .suppose we want to separate the foreground texts and graphics from background regions .we can think of foreground as the outliers overlaid on top of background , and use the learned subspace along with the following sparse decomposition framework to separate them : in our image segmentation problem , the m - th column of each block is chosen as the m - th group , .the reason being there are more vertical connectivity in english texts than horizontal .we could also impose both column - wise and row - wise connectivity , but it would require introducing auxiliary variables in the optimization framework . the problem in ( 6 ) can be easily solved using admm , and proximal optimization . after solving this problem , the component will be thresholded to find the foreground position .to evaluate the performance of our algorithm , we trained the proposed framework on image patches extracted from some of the images of the screen content image segmentation dataset provided in . 
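a minimal python sketch of the alternating scheme described above is given below . it is not the exact algorithm of the paper : the closed - form updates are simplified , the gradient - smoothness penalty on the reconstruction is omitted for brevity , the groups are taken to be the columns of each patch as in the segmentation model , and the orthonormalization is done with a qr factorization in place of explicit gram - schmidt ( both yield an orthonormal basis of the same column space ) . all parameter values are placeholders .

```python
import numpy as np

def column_group_soft_threshold(R, side, tau):
    """Group soft-thresholding of the residual R (one vectorized side x side patch per column),
    where each group is one image column of a patch, to promote vertically connected outliers."""
    P = R.T.reshape(-1, side, side)                       # (n_patches, rows, cols)
    norms = np.linalg.norm(P, axis=1, keepdims=True)      # l2 norm of every image column
    P = P * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return P.reshape(-1, side * side).T

def learn_subspace(X, k=20, side=32, tau=0.1, n_iter=30, seed=0):
    """Alternating scheme (sketch): X is (side*side, n) with one vectorized patch per column.
    Returns an orthonormal basis U, coefficients A and the structured outlier component S."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))      # random orthonormal initialization
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # 1) representation update: least squares for X - S ~ U A (U has orthonormal columns).
        A = U.T @ (X - S)
        # 2) outlier update: group soft-thresholding of the residual, then projection onto
        #    the nonnegative orthant (the nonnegativity assumption of the model).
        S = np.maximum(column_group_soft_threshold(X - U @ A, side, tau), 0.0)
        # 3) subspace update: least-squares fit to X - S followed by re-orthonormalization.
        U, _ = np.linalg.qr((X - S) @ A.T)
    return U, A, S

# Usage on random placeholder data standing in for 32x32 training patches.
X = np.random.default_rng(1).random((32 * 32, 500))
U, A, S = learn_subspace(X, k=20)
print(U.shape, np.allclose(U.T @ U, np.eye(20)))
```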
before showing the results , we report the weight parameters used in our optimization . the regularization weights were tuned by testing on a validation set . we provide the results for subspace learning and image segmentation in the following sections . we extracted around 8,000 overlapping patches of size 32x32 , with a stride of 5 , from a subset of these images and used them for learning the subspace , and learned a 64 dimensional subspace ( i.e. 64 basis images of size 32x32 ) . the learned atoms of this subspace are shown in figure 2 . as we can see , the learned atoms contain different edge and texture patterns , which is reasonable for image representation . the right value of the subspace dimension depends strongly on the application . for the image segmentation problem studied in this paper , we found that using only the first 20 atoms performs well on image patches of 32x32 . the experiments were performed using matlab 2015 on a laptop with a core i5 cpu running at 2.2ghz . it takes around 78 seconds to learn the 64 dimensional subspace . after learning the subspace , we use this representation for background - foreground segmentation in still images , as explained in section ii.b . the segmentation results in this section are derived by using a 20 dimensional subspace for background modeling . we use the same model as in eq ( 6 ) for the decomposition of an image into background and foreground , and the regularization weights are set to the same values as mentioned before . we then evaluate the performance of this algorithm on the remaining images from the screen content image segmentation dataset , and some other images , and compare the results with four previous algorithms : hierarchical k - means clustering in djvu , spec , sparse and low - rank decomposition , and lad . for sparse and low - rank decomposition , we apply the fast - rpca algorithm on the image blocks and threshold the sparse component to find the foreground location . for low - rank decomposition , we have used the matlab implementation provided by stephen becker at . to provide a numerical comparison , we report the average precision , recall and f1 score achieved by the different algorithms over this dataset . the average precision , recall and f1 score of the different algorithms are given in table 1 .

table 1 . segmentation accuracy of different algorithms .

algorithm | precision | recall | f1 score
spec | 50% | 64% | 56.1%
hierarchical clustering | 64% | 69% | 66.4%
low - rank decomposition | 78% | 86.5% | 82.1%
least absolute deviation | 91.4% | 87% | 89.1%
the proposed algorithm | 93% | 86% | 89.3%

the precision and recall are defined as in eq . ( 7 ) , where tp , fp and fn denote true positive , false positive and false negative respectively : precision = tp / ( tp + fp ) and recall = tp / ( tp + fn ) . in our evaluation , we treat a foreground pixel as positive . the balanced f1 score is defined as the harmonic mean of precision and recall , as shown in eq . ( 8 ) : f1 = 2 ( precision x recall ) / ( precision + recall ) .
as it can be seen , the proposed scheme achieves much higher precision and recall than hierarchical k - means clustering and spec algorithms .compared to the least absolute deviation fitting , the proposed formulation has slightly better performance .0.18 0.18 0.18 + 0.18 0.18 0.18 + 0.18 0.18 0.18 + 0.18 0.18 0.18 + 0.18 0.18 0.18 + 0.18 0.18 0.18 to see the visual quality of the segmentation , the results for 3 test images ( each consisting of multiple 64 blocks ) are shown in fig .it can be seen that the proposed algorithm gives superior performance over djvu and spec in all cases .there are also noticeable improvement over least absolute deviation ( lad ) fitting and low - rank decomposition in one of the images .we would like to note that , this dataset mainly consists of challenging images where the background and foreground have overlapping color ranges . for simpler cases where the background has a narrow color range that is quite different from the foreground , djvu , lad and low - rank decomposition will work wellthis paper proposed a subspace learning algorithm for a set of smooth signals in the presence of structured outliers and noise .the outliers are assumed to be sparse and connected , and suitable regularization terms are added to the optimization framework to promote this properties .we then solve the optimization problem by alternatively updating the model parameters , and the subspace .we also show the application of this framework for background - foreground segmentation in still images , where the foreground can be thought as the outliers in our model , and achieve better results than the previous algorithms for background / foreground separation .the authors would like to thank ivan selesnick , arian maleki , and carlos fernandez - granda for their valuable comments and feedback .we would also like to thanks stephen becker for providing the matlab implementation of fastrpca algorithm .h abdi , lj williams , `` principal component analysis '' , wiley interdisciplinary reviews : computational statistics , 2010 .aj izenman , `` linear discriminant analysis '' , modern multivariate statistical techniques , springer , pp .237 - 280 , 2013 .x niyogi , `` locality preserving projections '' , in neural information processing systems , vol . 16 , 2004 .pj huber , robust statistics , springer berlin heidelberg , 2011 .f de la torre , mj black , `` a framework for robust subspace learning '' , international journal of computer vision , pp.117 - 142 , 2003 .j wright , a ganesh , s rao , y peng , y ma , `` robust principal component analysis : exact recovery of corrupted low - rank matrices via convex optimization '' , in advances in neural information processing systems , 2009 .g lerman , mb mccoy , ja tropp , t zhang , `` robust computation of linear models by convex relaxation '' , foundations of computational mathematics 15 , no . 2 : 363 - 410 , 2015 .j he , l balzano , a szlam , `` incremental gradient on the grassmannian for online foreground and background separation in subsampled video '' , in computer vision and pattern recognition , pp . 
1568 - 1575 , ieee , 2012 .j he , d zhang , l balzano , t tao , `` iterative online subspace learning for robust image alignment '' , international conference and workshops on automatic face and gesture recognition , ieee , 2013 .r chartrand , b wohlberg , `` a nonconvex admm algorithm for group sparsity with sparse groups '' , international conference on acoustics , speech and signal processing .ieee , 2013 .jf cai , b dong , s osher , z shen , `` image restoration : total variation , wavelet frames , and beyond '' , journal of the american mathematical society 25.4 : 1033 - 1089 , 2012 .p. haffner , p.g .howard , p. simard , y. bengio and y. lecun , `` high quality document image compression with djvu '' , journal of electronic imaging , 7(3 ) , 410 - 425 , 1998 .t. lin and p. hao , `` compound image compression for real - time computer screen image transmission '' , ieee transactions on image processing , 14(8 ) , 993 - 1005 , 2005 .s. minaee and y. wang , `` screen content image segmentation using least absolute deviation fitting '' , ieee international conference on image processing , pp.3295 - 3299 , sept .j. zhang and r. kasturi , `` extraction of text objects in video documents : recent progress '' , document analysis systems , 2008 .s minaee and y wang , `` screen content image segmentation using sparse decomposition and total variation minimization '' , international conference on image processing , ieee , 2016 .m. soltani and c. hegde , `` a fast iterative algorithm for demixing sparse signals from nonlinear observations '' , ieee global conference on signal and information processing , 2016 .s zhang , y zhan , m dewan , j huang , dn metaxas , s zhou , `` towards robust and effective shape modeling : sparse shape composition '' , medical image analysis , 2012 .a taalimi , h qi , r khorsandi , `` online multi - modal task - driven dictionary learning and robust joint sparse representation for visual tracking '' , advanced video and signal based surveillance ( avss ) , ieee , 2015 .s minaee and y wang , `` screen content image segmentation using robust regression and sparse decomposition '' , ieee journal on emerging and selected topics in circuits and systems , 2016 .m soltani , c hegde , `` iterative thresholding for demixing structured superpositions in high dimensions '' , nips workshop , 2016 .s minaee , a abdolrashidi and y wang , `` screen content image segmentation using sparse - smooth decomposition '' , asilomar conference on signals , systems , and computers , ieee , 2015 .a aravkin , s becker , v cevher , p olsen , `` a variational approach to stable principal component pursuit '' , conference on uncertainty in artificial intelligence , 2014. f bach , r jenatton , j mairal , g obozinski , `` optimization with sparsity - inducing penalties '' , " foundations and trends in machine learning 4.1 : 1 - 106 , 2012 . s. boyd , n. parikh , e. chu , b. peleato and j. eckstein , `` distributed optimization and statistical learning via the alternating direction method of multipliers '' , foundations and trends in machine learning , 3(1 ) , 1 - 122 , 2011 . pl combettes , jc pesquet , `` proximal splitting methods in signal processing '' , fixed - point algorithms for inverse problems in science and engineering .springer , 185 - 212 , 2011 .sj leon , a bjorck , w gander , `` gramschmidt orthogonalization : 100 years and more '' , numerical linear algebra with applications 20.3 : 492 - 532 , 2013 . 
https://sites.google.com/site/shervinminaee/research/image-segmentation
https://github.com/stephenbeckr/fastrpca
d. m. powers , `` evaluation : from precision , recall and f - measure to roc , informedness , markedness and correlation '' , 2011 .
subspace learning is an important problem , which has many applications in image and video processing . it can be used to find a low - dimensional representation of signals and images . but in many applications , the desired signal is heavily distorted by outliers and noise , which negatively affect the learned subspace . in this work , we present a novel algorithm for learning a subspace for signal representation , in the presence of structured outliers and noise . the proposed algorithm tries to jointly detect the outliers and learn the subspace for images . we present an alternating optimization algorithm for solving this problem , which iterates between learning the subspace and finding the outliers . this algorithm has been trained on a large number of image patches , and the learned subspace is used for image segmentation , and is shown to achieve better segmentation results than prior methods , including least absolute deviation fitting , k - means clustering based segmentation in djvu , and shape primitive extraction and coding algorithm .
our study is carried out for the prisoner s dilemma game ( pd ) .this has often been used to model selfish behavior of individuals in situations , where it is risky to cooperate and tempting to defect , but where the outcome of mutual defection is inferior to cooperation on both sides .formally , the so - called `` reward '' represents the payoff for mutual cooperation , while the payoff for defection on both sides is the `` punishment '' . represents the `` temptation '' to unilaterally defect , which results in the `` sucker s payoff '' for the cooperating individual . given the inequalities and , which define the classical prisoner s dilemma , it is more profitable to defect , no matter what strategy the other individual selects .therefore , rationally behaving individuals would be expected to defect when they meet _once_. however , defection by everyone is implied as well by the game - dynamical replicator equation , which takes into account imitation of superior strategies , or payoff - driven birth - and - death processes .in contrast , a coexistence of cooperators and defectors is predicted for the snowdrift game ( sd ) . while it is also used to study social cooperation ,its payoffs are characterized by ( i.e. rather than ) . as is well - known , cooperation can , for example , be supported by repeated interactions , by intergroup competition with or without altruistic punishment , and by network reciprocity based on the clustering of cooperators . in the latter case ,the level of cooperation in two - dimensional spatial games is further enhanced by `` disordered environments '' ( approximately 10% unaccessible empty locations ) , and by diffusive mobility , provided that the mobility parameter is in a suitable range .however , strategy mutations , random relocations , and other sources of stochasticity ( `` noise '' ) can significantly challenge the formation and survival of cooperative clusters . when no mobility or undirected , random mobility are considered , the level of cooperation in the spatial games studied by us is sensitive to noise ( see figs . 1d and 3c ) , as favorable correlations between cooperative neighbors are destroyed . _success - driven _ migration , in contrast , is a robust mechanism : by leaving unfavorable neighborhoods , seeking more favorable ones , and remaining in cooperative neighborhoods , it supports cooperative clusters very efficiently against the destructive effects of noise , thus preventing defector invasion in a large area of payoff parameters .we assume individuals on a square lattice with periodic boundary conditions and sites , which are either empty or occupied by one individual .individuals are updated asynchronously , in a random sequential order .the randomly selected individual performs simultaneous interactions with the direct neighbors and compares the overall payoff with that of the neighbors .afterwards , the strategy of the best performing neighbor is copied with probability ( `` imitation '' ) , if the own payoff was lower . with probability ,however , the strategy is randomly `` reset '' : * noise 1 * assumes that an individual spontaneously chooses to cooperate with probability or to defect with probability until the next strategy change . the resulting strategy mutations may be considered to reflect deficient imitation attempts or trial - and - error behavior . 
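for reference , the payoff structure described above can be written as the usual bimatrix ( row player's payoffs ) ; the classical prisoner's dilemma ordering is t > r > p > s ( often supplemented by 2r > t + s ) , while the snowdrift game assumes t > r > s > p , i.e. s > p instead of p > s :

```latex
\begin{array}{c|cc}
                 & \text{cooperate} & \text{defect} \\ \hline
\text{cooperate} & R & S \\
\text{defect}    & T & P
\end{array}
\qquad
T > R > P > S \ \ \text{(prisoner's dilemma)}, \qquad
T > R > S > P \ \ \text{(snowdrift game)} .
```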
as a side effect, such noise leads to an independence of the finally resulting level of cooperation from the initial one at , and a _ qualitatively different _ pattern formation dynamics for the same payoff values , update rules , and initial conditions ( see fig .[ s1 ] ) . using the alternative fermi update rule would have been possible as well .however , resetting strategies rather than inverting them , combined with values much smaller than 1/2 , has here the advantage of creating particularly adverse conditions for cooperation , independently of what strategy prevails .below , we want to learn , if predominant cooperation can survive or even emerge under such adverse conditions .`` success - driven migration '' has been implemented as follows : before the imitation step , an individual explores the expected payoffs for the empty sites in the migration neighborhood of size ( the moore neighborhood of range ) .if the fictitious payoff is higher than in the current location , the individual is assumed to move to the site with the highest payoff and , in case of several sites with the same payoff , to the closest one ( or one of them ) ; otherwise it stays put .computer simulations of the above model show that , in the _ imitation - only _ case of classical spatial games with noise 1 , but _ without _ a migration step , the resulting fraction of cooperators in the pd tends to be very low .it basically reflects the fraction of cooperators due to strategy mutations . for , we find almost frozen configurations , in which only a small number of cooperators survive ( see fig .[ s1]d ) . in the _ migration - only _ case without an imitation step ,the fraction of cooperators changes only by strategy mutations .even when the initial strategy distribution is uniform , one observes the formation of spatio - temporal patterns , but the patterns get almost frozen after some time ( see fig .[ s1]e ) .it is interesting that , although for the connectivity structure of our pd model neither imitation only ( fig . [ s1]d ) nor migration only ( fig . [ s1]e ) can promote cooperation under noisy conditions , their _ combination _ does : computer simulations show the formation of cooperative clusters with a few defectors at their boundaries ( see fig .once cooperators are organized in clusters , they tend to have more neighbors and to reach higher payoffs on average , which allows them to survive . it will now have to be revealed , how success - driven migration causes the _ formation _ of clusters at all , considering the opposing noise effects .in particular , we will study , why defectors fail to invade cooperative clusters and to erode them from within , although a cooperative environment is most attractive to them . to address these questions , figure 2 studies a `` defector s paradise '' with a single defector in the center of a cooperative cluster . in the noisy _ imitation - only _spatial prisoner s dilemma , defection tends to spread up to the boundaries of the cluster , as cooperators imitate more successful defectors ( see figs .but if imitation is combined with _ success - driven migration _, the results are in sharp contrast : although defectors still spread initially , cooperative neighbors who are steps away from the boundary of the cluster can now evade them . 
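the following python sketch implements the lattice model described above : asynchronous random sequential updates , imitation of the best - performing neighbour , strategy resetting ( noise 1 ) , and success - driven migration to the empty site with the highest fictitious payoff within a moore neighbourhood of range m . the payoff values , the noise parameters , the lattice size , the occupation density and the choice of the four nearest neighbours as interaction partners are assumptions of this sketch , not values taken from the paper ( ties in the fictitious payoff are also broken arbitrarily rather than by distance ) .

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters -- the actual values used in the paper are not reproduced here.
L = 49                              # lattice size
density = 0.6                       # fraction of occupied sites
T, R, P, S = 1.3, 1.0, 0.1, 0.0     # payoffs with T > R > P > S
r, q = 0.05, 0.05                   # reset probability and cooperation probability (noise 1)
M = 2                               # migration range (Moore neighbourhood of range M)
EMPTY, D, C = -1, 0, 1

grid = np.full((L, L), EMPTY, dtype=int)
occ = rng.random((L, L)) < density
grid[occ] = rng.integers(0, 2, size=occ.sum())   # random initial strategies

def interaction_neighbors(i, j):
    """Four direct neighbours with periodic boundaries -- an assumption of this sketch."""
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (i + di) % L, (j + dj) % L

def payoff(strategy, i, j):
    """Accumulated payoff of a (possibly fictitious) player with `strategy` placed at (i, j)."""
    total = 0.0
    for x, y in interaction_neighbors(i, j):
        other = grid[x, y]
        if other == C:
            total += R if strategy == C else T
        elif other == D:
            total += S if strategy == C else P
    return total

def migration_candidates(i, j):
    """Empty sites in the Moore neighbourhood of range M (periodic boundaries)."""
    for di in range(-M, M + 1):
        for dj in range(-M, M + 1):
            x, y = (i + di) % L, (j + dj) % L
            if grid[x, y] == EMPTY:
                yield x, y

def update(i, j):
    s = grid[i, j]
    grid[i, j] = EMPTY                       # temporarily vacate the site ("neighbourhood testing")
    best_site, best_pay = (i, j), payoff(s, i, j)
    for x, y in migration_candidates(i, j):  # success-driven migration step
        p = payoff(s, x, y)
        if p > best_pay:
            best_site, best_pay = (x, y), p
    i, j = best_site
    grid[i, j] = s                           # move (or stay put) and settle at the chosen site
    if rng.random() < r:                     # noise 1: random strategy reset
        grid[i, j] = C if rng.random() < q else D
        return
    best_s, best_p = s, payoff(s, i, j)      # imitation of the best-performing neighbour
    for x, y in interaction_neighbors(i, j):
        if grid[x, y] != EMPTY:
            p = payoff(grid[x, y], x, y)
            if p > best_p:
                best_s, best_p = grid[x, y], p
    grid[i, j] = best_s

for t in range(201):                         # asynchronous updates in random sequential order
    sites = np.argwhere(grid != EMPTY)
    for i, j in sites[rng.permutation(len(sites))]:
        update(i, j)
    if t % 50 == 0:
        print(f"t={t:3d}  cooperators={(grid == C).sum():4d}  defectors={(grid == D).sum():4d}")
```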
due to this defector - triggered migration, the neighborhood reconfigures itself adaptively .for example , a large cooperative cluster may split up into several smaller ones ( see figs .eventually , the defectors end up at the boundaries of these cooperative clusters , where they often turn into cooperators by imitation of more successful cooperators in the cluster , who tend to have more neighbors .this promotes the spreading of cooperation .since evasion takes time , cooperative clusters could still be destroyed when continuously challenged by defectors , as it happens under noisy conditions .therefore , let us now study the effect of different kinds of randomness .* noise 1 * ( defined above ) assumes _ strategy mutations _ , but leaves the spatial distribution of individuals unchanged ( see fig .* noise 2 * , in contrast , assumes that individuals , who are selected with probability , move to a randomly chosen free site without considering the expected success _( random relocations ) ._ such random moves may potentially be of long distance and preserve the number of cooperators , but have the potential of destroying spatial patterns ( see fig .* noise 3 * combines noise 1 and noise 2 , assuming that individuals randomly relocate with probability and additionally reset their strategy as in noise 1 ( see fig .3c ) . while cooperation in the imitation - only case is quite sensitive to noise ( see figs .3a - c ) , the combination of imitation with success - driven motion is not ( see fig .3d - f ) : whenever an empty site inside a cluster of cooperators occurs , it is more likely that the free site is entered by a cooperator than by a defector , as long as cooperators prevail within the migration range .in fact , the formation of small cooperative clusters was observed for _ all _ kinds of noise .that is , the combination of imitation with success - driven migration is a robust mechanism to maintain and even spread cooperation under various conditions , given there are enough cooperators in the beginning .it is interesting , whether this mechanism is also able to facilitate a spontaneous _ outbreak _ of predominant cooperation in a noisy world dominated by selfishness , without a `` shadow of the future '' .our simulation scenario assumes defectors only in the beginning ( see fig .4a ) , strategy mutations in favor of defection , and short - term payoff - maximizing behavior in the vast majority of cases . in order to study conditions under which a significant fraction of cooperators is unlikely ,our simulations are performed with noise 3 and , as it tends to destroy spatial clusters and cooperation ( see fig .3c ) : by relocating 5% randomly chosen individuals in each time step , noise 3 dissolves clusters into more or less separate individuals in the imitation - only case ( see figs .in the case with success - driven migration , random relocations break up large clusters into many smaller ones , which are distributed all over the space ( see figs . 3b+c and 4b ) . therefore, even the clustering tendency by success - driven migration can only partially compensate for the dispersal tendency by random relocations .furthermore , the strategy mutations involved in noise 3 tend to destroy cooperation ( see figs .3a+c , where the strategies of 5% randomly chosen individuals were replaced by defection in 95% of the cases and by cooperation otherwise , to create conditions favoring defection , i.e. 
the dominant strategy in the prisoner s dilemma ) .overall , as a result of strategy mutations ( i.e. without the consideration of imitation processes ) , only a fraction of all defectors turn into cooperators in each time step , while a fraction of all cooperators turn into defectors ( i.e. 5% in each time step ) .this setting is extremely unfavorable for the spreading of cooperators .in fact , defection prevails for an extremely long time ( see figs. 4b and [ s2]a ) . butsuddenly , when a small , supercritical cluster of cooperators has occurred by coincidence ( see fig .4c ) , the fraction of cooperators spreads quickly ( see fig . [ s2]a ) , and soon cooperators prevail ( see figs . 4d and [ s2]b ) . note that this spontaneous birth of predominant cooperation in a world of defectors does not occur in the noisy imitation - only case and demonstrates that success - driven migration can overcome the dispersive tendency of noises 2 and 3 , if is moderate and has a finite value .that is , success - driven migration generates spatial correlations between cooperators more quickly than these noises can destroy them .this changes the outcome of spatial games essentially , as a comparison of figs .2a - d with 4a - d shows .the conditions for the spreading of cooperators from a supercritical cluster ( `` nucleus '' ) can be understood by configurational analysis ( see fig . *s1 * ) , but the underlying argument can be both , simplified and extended : according to fig .6a , the level of cooperation changes when certain lines ( or , more generally , certain hyperplanes ) in the payoff - parameter space are crossed .these hyperplanes are all of the linear form where . the left - hand side of eq .[ 1 ] represents the payoff of the most successful cooperative neighbor of a focal individual , assuming that this has cooperating and defecting neighbors , which implies .the right - hand side reflects the payoff of the most successful defecting neighbor , assuming that is the number of his / her cooperating neighbors and the number of defecting neighbors , which implies . under these conditions ,the best - performing cooperative neighbor earns a payoff of , and the best - performing defecting neighbor earns a payoff of .therefore , the focal individual will imitate the cooperator , if , but copy the strategy of the defector if .equation [ 1 ] is the line separating the area where cooperators spread ( above the line ) from the area of defector invasion ( below it ) for a certain spatial configuration of cooperators and defectors ( see fig .every spatial configuration is characterized by a set of -parameters . 
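written out , the separating hyperplanes of eq . [ 1 ] described above take the form below , where n_c and n_d denote the numbers of cooperating and defecting neighbours of the best - performing cooperative neighbour , and m_c , m_d those of the best - performing defecting neighbour ( symbol names chosen for this note ) :

```latex
n_{c}\,R + n_{d}\,S \;=\; m_{c}\,T + m_{d}\,P .
```

cooperators spread when the left - hand side exceeds the right - hand side , and defectors invade otherwise , which reproduces the criterion stated in the text .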
as expected, the relative occurence frequency of each configuration depends on the migration range ( see fig .6b ) : higher values of naturally create better conditions for the spreading of cooperation , as there is a larger choice of potentially more favorable neighborhoods .figure 6b also shows that success - driven migration extends the parameter range , in which cooperators prevail , from the parameter range of the snowdrift game with to a considerable parameter range of the prisoner s dilemma .for this to happen , it is important that the attraction of cooperators is mutual , while the attraction of defectors to cooperators is not .more specifically , the attraction of cooperators is proportional to , while the attraction between defectors and cooperators is proportional to .the attraction between cooperators is stronger , because the prisoner s dilemma usually assumes the inequality .besides the speed of finding neighbors to interact with , the time scales of configurational changes and correlations matter as well : by entering a cooperative cluster , a defector triggers an avalanche of strategy changes and relocations , which quickly destroys the cooperative neighborhood . during this process, individuals may alter their strategy many times , as they realize opportunities by cooperation or defection immediately .in contrast , if a cooperator joins a cooperative cluster , this will stabilize the cooperative neighborhood .although cooperative clusters continuously adjust their size and shape , the average time period of their existence is longer than the average time period after which individuals change their strategy or location .this coevolution of social interactions and strategic behavior reflects features of many social environments : while the latter come about by individual actions , a suitable social context can make the average behavior of individuals more predictable , which establishes a reinforcement process . for example , due to the clustering tendency of cooperators , the likelihood of finding another cooperator in the neighborhood of a cooperator is greater than 1/2 , and also the likelihood that a cooperator will cooperate in the next iterationit is noteworthy that all the above features the survival of cooperation in a large parameter area of the pd , spatio - temporal pattern formation , noise - resistance , and the outbreak of predominant cooperation can be captured by considering a mechanism as simple as success - driven migration : success - driven migration _ destabilizes _ a homogeneous strategy distribution ( compare fig .[ s1]c with [ s1]a and fig .[ s1]f with [ s1]d ) .this triggers the spontaneous formation of agglomeration and segregation patterns , where noise or diffusion would cause dispersal in the imitation - only case .the self - organized patterns create self - reinforcing social environments characterized by behavioral correlations , and imitation promotes the further growth of supercritical cooperation clusters .while each mechanism by itself tends to produce frozen spatial structures , the combination of imitation and migration supports adaptive patterns ( see fig .this facilitates , for example , the regrouping of a cluster of cooperators upon invasion by a defector , which is crucial for the survival and success of cooperators ( see fig .2e - h ) . 
by further simulationswe have checked that our conclusions are robust with respect to using different update rules , adding birth and death processes , or introducing a small fraction of individuals defecting unconditionally .the same applies to various kinds of `` noise '' .noise can even trigger cooperation in a world full of defectors , when the probability of defectors to turn spontaneously into cooperators is 20 times smaller than the probability of cooperators to turn into defectors .compared to the implications of the game - dynamical replicator equation , this is remarkable : while the replicator equation predicts that the stationary solution with a majority of cooperators is _un_stable with respect to perturbations and the stationary solution with a majority of defectors is stable , success - driven migration _inverts _ the situation : the state of 100% defectors becomes _ unstable _ to noise , while a majority of cooperators is stabilized in a considerable area of the payoff parameter space .our results help to explain why cooperation can be frequent even if individuals would behave selfishly in the vast majority of interactions .although one may think that migration would weaken social ties and cooperation , there is another side of it which helps to establish cooperation in the first place , without the need to modify the payoff structure .we suggest that , besides the ability for strategic interactions and learning , the ability to _ move _ has played a crucial role for the evolution of large - scale cooperation and social behavior .success - driven migration can reduce unbalanced social interactions , where cooperation is unilateral , and support local agglomeration .in fact , it has been pointed out that local agglomeration is an important precondition for the evolution of more sophisticated kinds of cooperation .for example , the level of cooperation could be further improved by combining imitation and success - driven migration with other mechanisms such as costly punishment , volunteering , or reputation .flache a , hegselmann r ( 2001 ) do irregular grids make a difference ? relaxing the spatial regularity assumption in cellular models of social dynamics ._ journal of artificial societies and social simulation _ 4 , no .
according to thomas hobbes _ leviathan _ ( 1651 , english ed . : touchstone , new york , 2008 ) , `` the life of man [ is ] solitary , poor , nasty , brutish , and short '' , and it would need powerful social institutions to establish social order . in reality , however , social cooperation can also arise spontaneously , based on local interactions rather than centralized control . the self - organization of cooperative behavior is particularly puzzling for social dilemmas related to sharing natural resources or creating common goods . such situations are often described by the prisoner s dilemma . here , we report the sudden outbreak of predominant cooperation in a noisy world dominated by selfishness and defection , when individuals imitate superior strategies and show success - driven migration . in our model , individuals are unrelated , and do not inherit behavioral traits . they defect or cooperate selfishly when the opportunity arises , and they do not know how often they will interact or have interacted with someone else . moreover , our individuals have no reputation mechanism to form friendship networks , nor do they have the option of voluntary interaction or costly punishment . therefore , the outbreak of prevailing cooperation , when directed motion is integrated in a game - theoretical model , is remarkable , particularly when random strategy mutations and random relocations challenge the formation and survival of cooperative clusters . our results suggest that mobility is significant for the evolution of social order , and essential for its stabilization and maintenance . = 1 hile the availability of new data of human mobility has revealed relations with social communication patterns and epidemic spreading , its significance for the cooperation among individuals is still widely unknown . this is surprising , as migration is a driving force of population dynamics as well as urban and interregional dynamics . below , we model cooperation in a game - theoretical way , and integrate a model of stylized relocations . this is motivated by the observation that individuals prefer better neighborhoods , e.g. a nicer urban quarter or a better work environment . to improve their situation , individuals are often willing to migrate . in our model of success - driven migration , individuals consider different alternative locations within a certain migration range , reflecting the effort they are willing or able to spend on identifying better neighborhoods . how favorable a new neighborhood is expected to be is determined by test interactions with individuals in that area ( `` neighborhood testing '' ) . the related investments are often small compared to the potential gains or losses after relocating , i.e. exploring new neighborhoods is treated as `` fictitious play '' . finally , individuals are assumed to move to the tested neighborhood that promises to be the best . so far , the role of migration has received relatively little attention in game theory , probably because it has been found that mobility can undermine cooperation by supporting defector invasion . however , this primarily applies to cases , where individuals choose their new location in a random ( e.g. diffusive ) way . in contrast , extending spatial games by the specific mechanism of _ success - driven _ migration can support the survival and spreading of cooperation . 
as we will show, it even promotes the spontaneous _outbreak_ of prevalent cooperation in a world of selfish individuals subject to various sources of randomness (``noise''), starting with defectors only.
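to make the migration rule concrete, the following sketch implements one ``neighborhood testing'' step on a periodic square lattice: an individual evaluates the fictitious payoff it would earn at every empty site within its migration range M and relocates to the most promising one, staying put if no tested site is better. the lattice size, payoff values, nearest-neighbor interaction convention and range M are assumptions made for illustration, not the settings used in the simulations reported here.

```python
import numpy as np

# success-driven migration, sketched under simplifying assumptions: a square
# lattice with states 0 (empty), 1 (cooperator), 2 (defector); payoffs are
# accumulated over the four nearest neighbors; the migration range M and the
# payoff values are illustrative, not the exact settings of the paper.

R, S, T, P = 1.0, -0.2, 1.3, 0.1          # prisoner's dilemma ordering T > R > P > S

def test_payoff(grid, i, j, strategy, exclude=None):
    """Fictitious payoff an individual with the given strategy would earn at site (i, j)."""
    L = grid.shape[0]
    total = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = (i + di) % L, (j + dj) % L
        if (ni, nj) == exclude or grid[ni, nj] == 0:
            continue
        if strategy == 1:                 # cooperator: R against C, S against D
            total += R if grid[ni, nj] == 1 else S
        else:                             # defector: T against C, P against D
            total += T if grid[ni, nj] == 1 else P
    return total

def migrate(grid, i, j, M=2):
    """'Neighborhood testing': move the individual at (i, j) to the most favorable
    empty site within migration range M, or stay put if none is better."""
    L = grid.shape[0]
    me = grid[i, j]
    best_site = (i, j)
    best_pay = test_payoff(grid, i, j, me)
    for di in range(-M, M + 1):
        for dj in range(-M, M + 1):
            ti, tj = (i + di) % L, (j + dj) % L
            if grid[ti, tj] != 0:
                continue
            pay = test_payoff(grid, ti, tj, me, exclude=(i, j))
            if pay > best_pay:
                best_site, best_pay = (ti, tj), pay
    if best_site != (i, j):
        grid[best_site] = me
        grid[i, j] = 0
    return best_site

rng = np.random.default_rng(0)
grid = rng.choice([0, 1, 2], size=(20, 20), p=[0.5, 0.25, 0.25])
i, j = map(int, np.argwhere(grid > 0)[0])
print("individual at (%d, %d) moved to" % (i, j), migrate(grid, i, j))
```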
beam diffusion can lead to emittance growth , halo formation and particle loss .a standard method currently used to measure transverse diffusion requires scraping the beam with collimator jaws moved close to the beam , then retracting the jaws and waiting for the beam to diffuse to the outer position of the jaws .this procedure is time consuming and the method is only applicable to storage rings where the beam circulates for times long enough to enable the measurement .beam echoes were introduced into accelerator physics more than two decades ago as a novel method to measure diffusion .a single echo observation can be done typically within a thousand turns with nonlinear tune spreads in the range 0.001 - 0.01 .hence diffusion measurements with echoes would be considerably faster than the standard method and could also enable diffusion to be measured in synchrotrons where beams circulate for relatively short times . shortly after the introduction of the beam echo concept ,longitudinal unbunched beam echoes were observed at the fermilab antiproton accumulator and at the cern sps the original motivation however had been to measure transverse diffusion from transverse echoes .more than a decade ago , transverse bunched beam echoes were observed in rhic during dedicated experiments .however the existing model as applied to the data did not yield consistent values for the diffusion coefficients .the next generation of intensity frontier hadron synchrotrons will require tight control of particle amplitude growth . at fermilab the integrable optics test accelerator ( iota ) ring is under construction where the novel concept of nonlinearly integrable lattices will be tested and could serve as a model for future synchrotrons .this ring offers the opportunity of testing a fast diffusion measurement technique which could help determine the degree of integrability ( or stable motion ) among different lattice models . with this motivation ,we revisit the earlier rhic measurements with an updated theoretical model to enable extraction of self - consistent diffusion coefficients . in sectionii we describe the updated model , in section iii we apply this model to the rhic data , in section iv we consider beam related time scales and we summarize in section v with lessons to be applied to future echo measurements .here we discuss the model to calculate the echo amplitude with diffusion using the same method and notation as in .the phase space coordinates used and action angle coordinates are related as x = , & p = x + x = - + j = ( x^2 + p^2 ) , & = - the initial distribution is taken to be exponential in the action _ 0(j ) = where , the initial rms emittance .we first consider the dipole moment after a dipole kick and the general case where the dipole kicker is at a phase advance from the bpm location where the centroid is measured . following the procedure in , the dipole moment after the dipole kick by an angle is x^amp(t )= where are the beta functions at the kicker and bpm respectively , with the nonlinear betatron frequency dependence on the action .this moment is independent of the phase advance from the kicker to the bpm .it differs from the expression in only by the replacement of by the geometric mean and in the exponent replaced by . following the dipole kick , the beam decoheres with the centroid amplitude decaying over a characteristic time , the decoherence time . 
at time after the dipole kick , a single turn quadrupole kick is applied to generate the echoes , the first of which occurs around time 2 .the echo amplitude and pulse shape is affected by the diffusive beam motion .we consider the density distribution to evolve according to the conventional form of the diffusion equation = [ d(j)]here the diffusion coefficient has the usual units of [ action/time ] or [ m/s ] and it differs from the definition of used in .again following the same method as in , we find that the echo amplitude near time is x ( t ) & = & -_g q dj j^2 _ 0((t - 2 ) ) where is the dimensionless quadrupole kick strength defined as , the ratio of the beta function at the quadrupole to its focal length and we defined .we consider the action dependent transverse angular frequency to be of the form where is the angular betatron frequency and we consider the diffusion coefficient to be of the form d(j ) = _n=0 d_n ( ) ^n where all coefficients have the same dimensions .the average dipole moment is given by x(t ) & = & _ g q _ rev [ e^[i _ 0 ] _0^ z^2 e^[i_1 j_0 z ] dz + [ eq : dipmoment_n ] where . using where is the tune shift at an action equal to the emittance , it is convenient to define scaled diffusion coefficients as d_n = d_n ( ) ^2 [ eq : dn_dn ] these coefficients have the dimension of . in the followingwe will consider specific cases of the above general form of .different physical processes contribute to the diffusion coefficients .it is likely that space charge effects , beam - beam interactions ( not present in the rhic measurements discussed below ) and intra - beam scattering all contribute to and higher order coefficients .early studies at the tevatron at injection energy with additional sextupoles as the driving nonlinearity had measured a constant term which varied with the proximity to a fifth order resonance . measurements at the lhc at top energy during collisions showed that diffusion at the smallest amplitude measurable was finite , implying a non - zero .a numerical simulation showed that modulation diffusion leads to a constant diffusion term .beam - gas scattering and noise in dipoles lead to a term while noise in quadrupoles leads to a term .there are likely other sources for these coefficients .given that the beam is subject to multiple effects , the complete action dependence of the diffusion may be complex .here we focus on the three simplest models with two diffusion coefficients that can be compared to measurements . 
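before specializing to these diffusion models, the basic echo mechanism can be illustrated with a toy single-particle monte carlo: a dipole kick, decoherence driven by an amplitude-dependent tune, a single-turn quadrupole kick at delay tau, and a centroid echo near 2 tau. the linear detuning convention Q(J) = Q0 + mu J/eps0 and all numerical values below are illustrative assumptions rather than the rhic parameters, and diffusion is omitted (it would enter as a random walk in action between turns).

```python
import numpy as np

# Toy echo simulation: dipole kick -> decoherence -> single-turn quadrupole
# kick at turn tau -> centroid echo near 2*tau.  The linear detuning
# Q(J) = Q0 + mu * J / eps0 and all numbers below are illustrative
# assumptions, not measured RHIC parameters; diffusion is omitted.

rng = np.random.default_rng(1)
n_part = 100_000
eps0   = 1.0                      # initial rms emittance (arbitrary units)
Q0, mu = 0.245, 0.002             # base tune and detuning at J = eps0
theta  = 3.0 * np.sqrt(eps0)      # dipole kick amplitude (~3 sigma)
q      = 0.1                      # quadrupole kick strength
tau    = 400                      # delay between kicks (turns)
n_turn = 3 * tau

# Gaussian beam in normalized coordinates (action J exponential with mean eps0).
x = rng.normal(0.0, np.sqrt(eps0), n_part)
p = rng.normal(0.0, np.sqrt(eps0), n_part)

p += theta                        # dipole kick
centroid = np.empty(n_turn)
for turn in range(n_turn):
    J = 0.5 * (x**2 + p**2)
    phi = 2.0 * np.pi * (Q0 + mu * J / eps0)
    x, p = x * np.cos(phi) + p * np.sin(phi), -x * np.sin(phi) + p * np.cos(phi)
    if turn == tau:
        p += q * x                # single-turn quadrupole kick
    centroid[turn] = x.mean()

# Upper envelope of |<x>|, coarse-grained over 20-turn windows.
env = np.abs(centroid).reshape(-1, 20).max(axis=1)
print("decoherence: |<x>| drops from %.2f to %.3f" % (env[0], env[tau // 20]))
print("echo near turn %d: |<x>| ~ %.3f" % (2 * tau, env[(2 * tau) // 20]))
```

the centroid first decoheres, and the quadrupole kick then rephases part of the beam so that a transient revival of the centroid appears near twice the delay; diffusion reduces the size and changes the shape of this revival, which is the effect exploited in the models below.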
in the first case , we assume that the diffusion is of the form d(j ) = d_0 + d_1 ( ) in this case , the dipole moment is given by x ( t ) & = & _ g q j_0 [ eq : amp_0_1 ] + t_1 ^ 3 & = & ( t-)^3 + ^3 , _ 0 = _( t - 2 ) , = 1 + d_1 ^2 t_1 ^ 3 , = _ rev(t - 2 ) the second case is the quadratic dependence model where d(j ) = d_0 + d_2()^2 the general time dependent form of the echo at time where can have either sign is x(t ) ^amp & = & _ g q _ rev[e^i _ 0 h_02 ] [ eq : amp_0_2 ] + h_02(t ) & & _ 0^z^2 dz + & = & ( ) ^5/2\ { ( ) erfc ( ) - a_0 } + a_0 & = & 1 - i = 1 - i_revt , b_2 = d_2 ^2 t_1 ^ 3 = d_2 ^2 [ ( + t)^3 + ^3 ] + here erfc is the complementary error function .the last case we consider is the linear and quadratic dependence d(j ) = d_1 ( ) + d_2()^2 in this case , the time dependent form of the echo at time is x(t ) ^amp & = & _ g q _ rev [ e^i _ 0 h_12(t ) ] [ eq : amp_d1_d2 ] + h_12(t ) & & _ 0^ z^2 dz + & = & ( ) ^5/2\ { ( ) erfc ( ) - a_1 } + a_1 & = & ( 1 + b_1 ) - i , b_1 = d_1 ^2 t_1 ^ 3 = d_1 ^2 [ ( + t)^3 + ^3 ] + the left plot in fig .[ fig : echopulse ] shows the relative echo amplitude as a function of the diffusion coefficient for three values of . in each case , only the single was non - zero .for the same value of , the amplitude decreases faster as increases .the right plot in this figure shows the form of the echo pulse with the model for a particular choice of and other machine parameters are taken from the rhic values .the red curve shows the upper envelope of the pulse which is used to obtain the full width at half maximum .analytical results for the optimum values of the detuning and delay parameters that maximize the echo amplitude can be obtained for model 1 with diffusion coefficients . as a function of the time delay, this amplitude has a maximum at a delay , such that the two coefficients can be related as d_1 = [ eq : d1_1 ] it is understood that is held fixed at while finding the optimum delay . defining and substituting this into the equation for the relative amplitude, we have for the maximum amplitude obtained at the delay = _ revq _ m[]^3 [ eq : amp_d0_d1_tau ] this equation can be solved for and subsequently can be found .positivity of requires that the solution for obey .similarly , as a function of the detuning , the amplitude has a maximum at such that d_1 = [ eq : d1_2 ] here is held fixed at while finding the optimum in . defining and again , substituting for , we can write the maximum relative amplitude at as = _ revq _ m _ f[]^3 [ eq : amp_d0_d1_mu ] here requires that the solution for obey .if both and are measured , then the diffusion coefficient can be found from equating the two expressions for which results in a quadratic equation for with the roots d_0 = [ eq : d0_mum_taum ] once is determined , can be determined from either of equations ( [ eq : d1_1 ] ) or ( [ eq : d1_2 ] ) .positivity of requires that the above solution obey and .this solution for both diffusion coefficients is obtained without necessarily using the value of echo amplitude except for recording where it has a maximum .it uses just the optimum detuning and the delay and could be useful when the bpm resolution is low .however this would require that all other beam conditions such as the dipole kick , quadrupole kick , bunch intensity etc are kept exactly the same during the detuning and delay scans . if this is not met , the solution given by eq .( [ eq : d0_mum_taum ] ) can not be used . 
for the or models discussed here, the optimum values of the detuning and delay parameters must be found numerically .in addition to the amplitude , the echo can also be characterized by the echo pulse width , e.g the full width at half maximum ( fwhm ) can be chosen as a width measure . for the model , the fwhm can be found analytically from eq .( [ eq : amp_0_1 ] ) . keeping terms to first order in , we find for the fwhm t_fwhm = 2 ( ) + 3()^2 [ eq : fwhm_0_1 ] the fwhm increases with increasing but very slowly with as seen in fig .[ fig : fwhm_d0_d1_d2 ] . when there is no diffusion , we have for the minimum fwhm t_fwhm^min = [ eq : fwhm_min ] in units of turns , this theoretical minimum fwhm depends only on the detuning coefficient .this value when compared with measured fwhm values can set limits on the detuning parameter , as will be seen later . for the other models with either or , the time dependent pulse shape and hence the fwhm must be found numerically . from this pulse shape ,the upper envelope is found numerically as an interpolating function and the fwhm then calculated from this envelope function .[ fig : fwhm_d0_d1_d2 ] shows the dependence of the fwhm on the coefficients scaled by a parameter m/s .the fwhm increases linearly with both and but with increases by only 3% over this range .the fwhm with increases the fastest and covers the range of values obtained from the rhic data .we briefly discuss the experimental procedure here , more details can be found in .the echo experiments were first done with au ions , later with cu ions and also with protons , all at injection energy .a special purpose quadrupole kicker was built with a rise time of 12.8 , about one revolution time in rhic .the nonlinear detuning was provided by a set of octupoles which are normally set to zero at injection , in order to observe the echoes .the initial dipole kick was delivered only in the horizontal plane by injection under a varying angle .echoes were generated with different conditions including variable dipole and quadrupole kicks , beam intensities , tunes , different delays between the dipole kick and the quadrupole kick and different octupole strengths .the emittance delivered to rhic for each species was nearly constant .while echoes were observed with each species , the most consistent echoes were obtained with the au ions and we will consider only those results in this article .table[table : rhic ] shows some of the relevant parameters for the au ions .parameter & nominal value + beam relativistic & 10.52 + revolution time & 12.8 + initial emittance , un - normalized & 1.6 m + delay & 450 turns + initial detuning parameter & 0.0014 + quadruple strength & 0.025 + quadrupole rise time & 12.8 + in evaluating the detuning parameter for calculating echo amplitudes , it is important to use the emittance following the dipole kick .the rms emittance is given by & = & [ x^2 p^2 - ( x p ) ^2 ] ^1/2 + & = & 2 [ j ^2j ^2 - j^ 2]^1/2 the ensemble averages are calculated using the distribution function at time after the kick which can be written in the notation of as _ 2(j , , t ) = _ 0(j+(- ( j)t ) + _ k ^2 ) and the averages are found from e.g. etc . 
it can be shown this leads to an rms emittance given by ( t ) & = & [ ( j_0 + _ k ^2)^2 - a_2(t)^2 ] ^1/2 + a_2(t ) & = & , _ 2 = 2 j_0 t at times , the term and we can approximate = j_0 + _ k ^2 = _ 0 [ 1 + ( ) ^2 ] [ eq : emit_kicked ] where is the initial emittance , is the change in beam position at the bpm and is the initial beam size at the bpm .the last expression in eq.([eq : emit_kicked ] ) has the same form as in .thus a kick to a 3 amplitude results in an emittance which is 5.5 times larger than the initial emittance .we will take this as an average estimate for the emittance following the dipole kick . by definition ,the detuning parameter increases linearly with emittance and hence increases from its nominal value of 0.0014 to 0.0077 following the dipole kick . without this rescaling, the model can not agree with the experimental results , as seen in the earlier analysis .we discuss first the analysis of the nonlinear detuning scan done on march 11 , 2004 . during this scan, the quadrupole kick and delay between the dipole kick and quadrupole kick were kept constant .octupole strengths were set to values .the nominal value was corresponding to a nominal detuning parameter before the dipole kick .echoes were observed for all .the largest echoes were observed at . at this ,the nominal detuning parameter is 0.001 while the rescaled detuning value is . for the model ,the starting solutions were obtained by solving eqs.([eq : d1_2 ] ) and ( [ eq : amp_d0_d1_mu ] ) .these yielded , , which lead to /s and /s .these found values for yield a maximum at by design but the amplitude values decrease more slowly with than the data . to improve the fit with the data ,a numerical fitting was done ( using mathematica ) to the data with the model shown in eq.([eq : amp_0_1 ] ) . these yielded /s and /s and led to a better fit with all the data .these values for were labeled as respectively and subsequent values were scaled by these values for convenience . with both the and the models ,a least square minimization was done to fit the data against the respective models for the amplitude .the fit for from the model was similarly labeled as .the resulting fits and the data are shown in fig.[fig : muscan_d0_d1_d2 ] .the values of the coefficients are shown in table [ table : au_d0_d1_d2_scans ] .compared to the previous comparison to theory cf .fig . 4 in ,these fits show significant improvement .of the three models , the best fit with the lowest chi squared is seen with the model with the next best being the model .however the models are fairly close and no model can be ruled out based on this data . on a later day( march 17 , 2004 ) , the delay between the dipole kick and the quadrupole kick was varied with values ( 450 , 500 , 550 , 600 , 900 ) turns .echoes were only observed at the first three values of the delay . in all six echoeswere observed with the largest amplitudes at 450 turns .the quadrupole kick strength , the octupole strengths and the tunes were kept constant .we will use this limited data set to obtain the diffusion coefficients from the delay scan . 
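as a quick numerical check of the rescaling argument used above, the emittance growth factor for a 3 sigma kick and the corresponding rescaled detuning follow directly from the quoted numbers. the explicit growth factor 1 + (dx/sigma)^2/2 used below is inferred from the quoted factor of 5.5 and should be treated as an assumption.

```python
# Check of the emittance rescaling after the dipole kick.  The factor
# 1 + (dx/sigma)**2 / 2 is inferred from the quoted result that a 3-sigma
# kick gives a 5.5-fold emittance growth; treat it as an assumption.
def kicked_emittance(eps0, kick_in_sigma):
    return eps0 * (1.0 + 0.5 * kick_in_sigma**2)

eps0 = 1.6e-6          # initial un-normalized rms emittance [m], from the table
mu0  = 0.0014          # nominal detuning parameter before the kick
eps_kicked = kicked_emittance(eps0, 3.0)
print("emittance growth factor:", eps_kicked / eps0)          # -> 5.5
print("rescaled detuning:      ", mu0 * eps_kicked / eps0)    # -> ~0.0077
```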
for the model , we start by solving eq .( [ eq : d1_1 ] ) and eq .( [ eq : amp_d0_d1_tau ] ) for the coefficients from the echo amplitude and the value of the optimum delay .again , better fits to the data are obtained by a least square minimization which is also the procedure for the other two models .table [ table : au_d0_d1_d2_scans ] shows the best fit values with this delay scan .compared to the values from the detuning scan , the coefficients for the same model are within a factor of two from this delay scan .some of the variation in the values between the scans can be due to different beam conditions on the two days such as bunch intensities and machine tunes .however the uncertainties associated with these values are large since there were too few data points .[ fig : tauscan_d0_d1_d2 ] shows the comparison of the fitted models with the data . againall three models show similar goodness of fits with the best fit ( minimum chi squared ) obtained with the model but all chi squared values are close .all models show that the relative echo amplitude reaches a maximum at around 390 turns which is less than the minimum delay of 450 turns used in the experiment .model & detuning scan & delay scan + / & 1.6 / 1.3 & 0.65 / 1.3 + / & 1.9 / 0.025 & 3.7 / 0.015 + / & 2.3 / 0.025 & 1.9 / 0.013 + [ table : au_d0_d1_d2_scans ] the above analysis has shown that all three models are viable candidates in describing the data dependence on either the detuning or the delay .we now use turn by turn ( tbt ) data to fit both the echo amplitude and the echo pulse width with each model .ten such data sets could be retrieved from the 2004 measurements . in this tbt set ,the bunch intensity varied but the other parameters including the quadrupole kick strength , tunes , delay and detuning were kept constant .[ fig : echopulse_fwhm ] shows two examples from this set , one with a clean echo pulse and the other where the beam centroid takes a longer time to decohere after the initial kick and the echo pulse is also much wider. some of the more distorted signal could be due to oscillations from off - axis injection and could partly be due to the higher bunch intensity . for each data set , an interpolating function was found to fit the upper envelope of the echo pulse and the fwhm was extracted from this interpolating function . using the value of the rescaled detuning parameter , the minimum theoretical value of the fwhm without diffusion , using eq .( [ eq : fwhm_min ] ) , is 32 turns .this is consistent with the minimum fwhm with diffusion from the data set which is 37 turns .the bare detuning parameter of would have predicted a minimum fwhm of 160 turns , much larger than any fwhm value measured .[ fig : fwhm_intensity ] shows the fwhm plotted as a function of the bunch intensity .this figure shows that the fwhm fell into three distinct clusters because the bunch intensity could be varied around three values . 
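the envelope-based fwhm extraction described above can be sketched as follows: locate the local maxima of |x(t)|, interpolate an upper envelope through them, and read off the half-maximum crossings. the synthetic pulse below stands in for the measured turn-by-turn bpm data.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

# Extraction of the echo FWHM from turn-by-turn data: interpolate an upper
# envelope through the local maxima of |x(t)| and locate the half-maximum
# crossings.  The synthetic pulse stands in for the measured BPM signal.

def echo_fwhm(turns, signal):
    peaks, _ = find_peaks(np.abs(signal))
    env = interp1d(turns[peaks], np.abs(signal)[peaks], kind="cubic",
                   bounds_error=False, fill_value=0.0)
    fine_t = np.linspace(turns[peaks[0]], turns[peaks[-1]], 20000)
    fine_e = env(fine_t)
    half = 0.5 * fine_e.max()
    above = np.where(fine_e >= half)[0]
    return fine_t[above[-1]] - fine_t[above[0]]

turns = np.arange(2000)
envelope = np.exp(-0.5 * ((turns - 1000) / 30.0) ** 2)     # true FWHM ~ 71 turns
signal = envelope * np.cos(2 * np.pi * 0.23 * turns)
print("estimated fwhm: %.1f turns" % echo_fwhm(turns, signal))
```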
except for the two outlier points labeled as ( 1 , 2 ) ,all other points show that the fwhm increases with intensity .these other points are fit to a power law curve fwhm(n ) = t_fwhm^min + a n^p where is the minimum fwhm , from eq.([eq : fwhm_min ] ) , is the bunch intensity and are the fit parameters .the fit shows that the exponent is , so the fwhm increases quadratically with the intensity .since the detuning , delay , and tune were kept constant during these measurements , the outlier points show that the fwhm values may depend on other parameters , such as the initial dipole kick amplitude .we now solve for two diffusion coefficients using the relative echo amplitude and the fwhm .for the model , the fwhm can be found analytically , as shown in eq.([eq : fwhm_0_1 ] ) .the coefficient can be written as a function of the echo amplitude and using the echo amplitude equation in [ eq : amp_0_1 ] as d_0 = - [ eq : d0_from_d1 ] where is the relative echo amplitude in terms of the dipole kick .the positivity of implies an upper limit to as d_1^max = [ ( ) ^1/3 - 1 ] the value of can be found by numerically solving the equation eq.([eq : fwhm_0_1 ] ) for the fwhm with substituted from eq .( [ eq : d0_from_d1 ] ) .we find that this model yields positive coefficients in only four of the ten cases .we conclude therefore that the model is not well suited for this data . with the model, the coefficient can again be found analytically as a function of the echo amplitude and using d_0 = - [ [ eq : d0_from_d2 ] where is defined in eq.([eq : amp_0_2 ] ) .we find again that no solutions with positive can be found in all cases with fwhm 70 turns .even in other cases where the solutions can be found , the values of are significantly larger than the values found in the previous sections , hence appear to be in a disconnected region of the parameter space . since has little impact on the fwhm ( see fig . [fig : fwhm_d0_d1_d2 ] ) , in both the and models , large values of the fwhm can make or large which then require a negative to satisfy the amplitude condition . thus fitting the models to both the amplitude and fwhm rules out the models with . in the case of the model ,neither coefficient can be found analytically from the amplitude equation . instead the amplitude and the fwhm equations must be solved numerically .figure [ fig : amp_fwhm_3d ] shows the forms of the function and . also shownare the intersections of these surfaces with the plane of constant amplitude or fwhm value respectively . 
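returning to the fwhm-versus-intensity fit introduced above, the power law fwhm(N) = t_fwhm^min + a N^p can be fitted with a standard nonlinear least-squares routine. the data points below are synthetic stand-ins generated with p = 2, not the measured values listed later.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the power law fwhm(N) = t_min + a * N**p with curve_fit.  t_min is
# the theoretical minimum FWHM without diffusion (32 turns for the rescaled
# detuning); the data points below are synthetic stand-ins generated with
# p = 2, not the measured values.

T_MIN = 32.0                                   # turns, from eq. (fwhm_min)

def fwhm_model(N, a, p):
    return T_MIN + a * N**p

rng = np.random.default_rng(2)
N_data = np.array([0.25, 0.3, 0.45, 0.55, 0.6, 0.75, 0.8, 0.85])   # 1e9 ions/bunch
fwhm_data = T_MIN + 60.0 * N_data**2 + rng.normal(0.0, 3.0, N_data.size)

(a_fit, p_fit), cov = curve_fit(fwhm_model, N_data, fwhm_data, p0=(50.0, 1.5))
print("a = %.1f, p = %.2f +/- %.2f" % (a_fit, p_fit, np.sqrt(cov[1, 1])))
```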
in each case, the intersection of the surface with the plane determines a curve of solutions for that equation .the intersection of the two curves in the plane would determine the required solution for given values of the amplitude and fwhm .in this figure the values of are scaled by which are obtained from using eq .( [ eq : dn_dn ] ) .these plots demonstrate that for the range of measured values of the echo amplitude and the fwhm , solutions for the diffusion coefficients exist in the range and .it turns out to be easier to do a least squared minimization to find the solution .here we define the function as ^2 = ( ) ^2 + ( ) ^2 where ampl and fwhm are the amplitude function ( from eq .( [ eq : amp_d1_d2 ] ) ) and the fwhm function defined numerically and and are the estimated uncertainties in the two data variables .this least squares method turns out to be efficient and leads to positive solutions for in all cases .table [ table : d1_d2_amp_fwhm ] shows the values of the diffusion coefficients in these cases .we observe that these values are close to the values of found from the optimal detuning and delay measurements shown in table [ table : au_d0_d1_d2_scans ] .the values differ by an order of magnitude in the two tables but considering that the delay and detuning scan methods for the amplitude are less sensitive to and also from the larger number of data points in the fwhm analysis , we expect the values in table [ table : d1_d2_amp_fwhm ] to be more accurate . in most cases ,the coefficient is an order of magnitude greater than .the single exception ( row 2 of this table ) corresponds to the outlier point labeled 1 in fig .[ fig : fwhm_intensity ] . as a function of intensity , increases while appears to be independent of the intensity .bunch intensity & rel .& fwhm & & + [ & [ ] & [ turns ] & /s & /s + 0.25 & 0.245 & 39.8 & 1.28 & 0.0030 + 0.27 & 0.225 & 54.6 & 0.13 & 0.51 + 0.32 & 0.160 & 40.6 & 1.49 & 0.32 + 0.54 & 0.127 & 47.5 & 2.00 & 0.28 + 0.6 & 0.142 & 52.1 & 1.98 & 0.21 + 0.63 & 0.125 & 37.0 & 1.98 & 0.30 + 0.76 & 0.114 & 75.0 & 2.53 & 0.24 + 0.77 & 0.122 & 81.0 & 2.18 & 0.24 + 0.81 & 0.110 & 78.3 & 2.53 & 0.24 + 0.84 & 0.0998 & 73.6 & 2.53 & 0.24 + we focus now on the model which is the only one that can describe both the amplitude and pulse width of the echo . during the measurements on march 17 , 2004an intensity scan was done with all other parameters kept constant . while the turn by turn data from that scan is not easily accessible , the echo amplitudes are available with 27 data points .this data can be used to measure the diffusion coefficients as a function of bunch intensity .both coefficients can be found by a least square minimization of the fit to the amplitude .this process allows a determination of as a function of intensity , the left plot in fig .[ fig : au_fit_intensity ] shows the values found and a linear fit to the values .this confirms the behavior seen in the previous section but now with a larger data set .similarly as earlier , the values are nearly independent of the intensity. we can parameterize the echo amplitude s dependence on intensity via these fits for and the amplitude equation ( [ eq : amp_d1_d2 ] ) .the linear fit yields where is the intensity in units of 10 while for we take the mean value over this set , the right plot in fig .[ fig : au_fit_intensity ] shows the measured echo amplitudes ( in red ) as a function of the intensity and also the calculated amplitude ( in blue ) from these fits for . 
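the joint determination of (d1, d2) from one echo amplitude and one fwhm is a two-observable least-squares problem; a sketch of the chi-square minimization is given below. the amplitude and fwhm model functions are smooth placeholders standing in for eq. ([eq:amp_d1_d2]) and the numerically defined fwhm, and the uncertainties are assumed values, so only the fitting structure is meant literally.

```python
import numpy as np
from scipy.optimize import minimize

# Least-squares determination of (d1, d2) from one measured echo amplitude
# and FWHM, following the chi^2 defined in the text.  The two model functions
# are smooth placeholders standing in for eq. (amp_d1_d2) and the numerically
# computed FWHM; only the minimization structure is meant literally.

def amp_model(d1, d2):
    return 0.30 * np.exp(-(0.3 * d1 + 0.5 * d2))       # placeholder model

def fwhm_model(d1, d2):
    return 32.0 + 5.0 * d1 + 40.0 * d2                 # placeholder model

amp_obs, fwhm_obs = 0.127, 47.5          # one row of the turn-by-turn data set
sigma_amp, sigma_fwhm = 0.01, 3.0        # assumed measurement uncertainties

def chi2(params):
    d1, d2 = params
    if d1 < 0.0 or d2 < 0.0:             # enforce positive diffusion coefficients
        return 1.0e12
    return ((amp_model(d1, d2) - amp_obs) / sigma_amp) ** 2 + \
           ((fwhm_model(d1, d2) - fwhm_obs) / sigma_fwhm) ** 2

res = minimize(chi2, x0=np.array([1.0, 0.1]), method="Nelder-Mead")
print("d1 = %.2f, d2 = %.3f, chi2 = %.2e" % (res.x[0], res.x[1], res.fun))
```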
the measured echo amplitude decreases with increasing intensity , and this trend is well reproduced by the theoretical amplitude function .this is a consistency check and is to be expected , since the linear fit for and constant for were obtained from the data set .the comparison in fig . [ fig : au_fit_intensity ] shows that we can parameterize the diffusion coefficients as d(j ) = [ a_10 + a_11n ] ( ) + a_20()^2 where are functions of machine and beam parameters such as the nonlinearity , tunes , emittance etc .but independent of the intensity .space charge effects and intra - beam scattering ( ibs ) are the dominant source of particle diffusion for heavy ions such as au in rhic , at injection energy .the incoherent space charge and ibs induced diffusion and emittance growth depends linearly on the intensity and our analysis confirms that the leading diffusion coefficient increases linearly with intensity .the coefficient is likely to be determined by diffusion from single particle nonlinear dynamics processes .one useful time scale that can be extracted from the diffusion coefficients is the mean escape time associated with probabilistic processes .it was shown in that in the case that , the time dependent density distribution solution to the diffusion equation leads to a beam lifetime which is close to the escape time estimate . defining where is the particle number , it was shown that t_l = 0.7 , t_esc = where is the action at the absorbing boundary .we will assume that the mean escape time is also a useful beam relevant time scale when .the mean escape time from an action to an absorbing boundary at action is given by t_esc(j ) & = & _ j^j_a dj = _j^j_a dj + & = & averaging this over the initial beam distribution , we have t_esc & = & _ 0^ dj + & = & a_f [ eq : escape ] where is the incomplete gamma function and we have assumed .the dimensionless amplifying factor , defined by the terms in square brackets , depends only the ratios , figure [ fig : escape_d1_d2 ] shows the dependence of the dimensionless terms on for three values of corresponding to apertures at ( 6,10 , 12) respectively . for , is of order unity .hence the mean escape time is determined primarily by . in the case that , the time scale would be determined by . with m , andtaking a representative value /s from table [ table : d1_d2_amp_fwhm ] , we have . 
while this time is extremely short , it corresponds to the lifetime of a beam at large amplitudes and not to a beam circulating on the nominal closed orbit .observations in rhic did show that lifetimes of kicked beams were significantly smaller compared to that for beams not kicked .however the early losses of the kicked beams were dominated by scraping at aperture restrictions , so there is no straightforward way to determine the contribution of diffusion to those lifetimes .nevertheless , the diffusion coefficients and the associated time scales should be useful for relative measures of beam growth and particle loss .as an example , it could be useful in iota to quickly distinguish between lattices with different degrees of integrability .if echoes can be generated by small amplitude kicks , then the calculated diffusion coefficients and the time scales would be more representative of beam behavior under nominal conditions .determining the diffusion coefficients may require different parameterizations of at small and large amplitudes , as seen for example in .in this article , we revisited earlier observations of transverse beam echoes in rhic to extract diffusion coefficients from those measurements .we considered three models for the action dependence of the diffusion coefficients : , , and .all three models were found to adequately describe the echo amplitudes measured during scans of the nonlinear detuning and the delay between the dipole and quadrupole kicks .next , turn by turn data was used to extract both the amplitude and the fwhm of the pulse width . here both models with do not describe the data with larger pulse widths , so the only model that successfully describes both the amplitude and the fwhm data is the model .we find that is an order of magnitude larger than in most cases , it increases linearly with the intensity while is nearly independent of the intensity . using these intensity dependencies, the model also adequately describes another set of data where the echo amplitudes were measured as a function of intensity .these results show that transverse echoes can indeed be used to measure transverse beam diffusion in existing and future hadron synchrotrons , we make some observations on requirements for future measurements .the diffusion measurements require good control of several machine and beam parameters such as the initial dipole kick , the quadrupole kick , machine nonlinearity , tunes and beam emittance , to name the most important .injection oscillations can strongly influence the echo amplitude and pulse shape , so these need to be controlled to the extent possible . alternatively if available , a fast dipole kicker in the ring would be preferable to initiate the echo .in such a case , a transverse damper can damp initial oscillations and then be turned off before the dipole kicker is used .while the echo amplitude variation with scans of the detuning and time delay are useful , detailed analysis of the turn by turn data yields more information . as an example of this , we found that the fwhm scales quadratically with the intensity and therefore is more sensitive to intensity changes than the echo amplitude .the proximity of resonances can also spoil echoes so the tunes and the dipole kick amplitudes need to be chosen carefully as well .k. mess and m. seidel , _ nucl .instr . & meth ._ a * 351 * , 279 ( 1994 ) w. fischer , m. giovannozzi and f. schmidt , phys . rev. e , * 55 * , 3507 ( 1997 ) .fliller iii et al , pac 2003 , 2904 ( 2003 ) .g. 
valentino et al , prst - ab , * 16 * , 021003 ( 2013 ) g. stancari , proceedings of hb2014 , 294 ( 2014 ) .stupakov and s.k .kaufmann , preprint sscl-587 ( 1992 ) g.v .stupakov and a.w .chao , proceedings of pac97 , 1834 ( 1997 ) l.k .spentzouris et al , _ phys ._ , * 76 * , 620 ( 1996 ) o. bruning et al , proceedings of pac97 , 1816 ( 1997 ) w. fischer et al .proceedings of pac2005 , 1955 ( 2005 ) s. sorge , o. boine - frankenheim , and w. fischer , proceedings of icap 06 , 234 ( 2006 ) . s. nagaitsev et al . , proceedings of ipac12 , 16 ( 2012 ) a.w .chao , lecture notes at www.slac.stanford.edu//lecturenotes.html t. chen et al , _ phys ._ , * 68 * , 33 ( 1992 ) f. zimmermann , _ part ._ , * 49 * , 57 ( 1995 ) d. edwards and m.j .syphers , _ introduction to the physics of high energy accelerators _ , wiley , new york ( 1993 ) wolfram research inc . , mathematica , version 10.4 , champaign , il ( 2016 ) c.w .gardiner , _ handbook of stochastic methods _ , springer ( 1985 ) t. sen , _ jinst _ , * 6 * , 10017 ( 2011 )
we study the measurement of transverse diffusion through beam echoes. we revisit earlier observations of echoes in rhic and apply an updated theoretical model to those measurements. we consider three possible models for the action dependence of the diffusion coefficient and show that only one is consistent with both the measured echo amplitudes and the measured pulse widths. this model allows us to parameterize the diffusion coefficients as functions of beam intensity. this study shows that echoes can be used to measure diffusion much more quickly than present methods and could be useful for a variety of hadron synchrotrons.
there are various experimental methods to determine the boltzmann constant including acoustic gas thermometry ,, ; dielectric constant gas thermometry , , , , johnson noise thermometry ( jnt ) , , , and doppler broadening , .codata ( committee on data for science and technology ) will determine the boltzmann constant as a weighted average of estimates determined with these methods .here , we focus on jnt experiments which utilize a quantum - accurate voltage - noise source ( qvns ) . in jnt ,one infers true thermodynamic temperature based on measurements of the fluctuating voltage and current noise caused by the random thermal motion of electrons in all conductors . according to the nyquist law , the mean square of the fluctuating voltage noise for frequencies below 1 ghz and temperature near 300 k can be approximated to better than 1 part in billion as , where is the boltzmann constant, is the thermodynamic temperature , is the resistance of the sensor , and is the bandwidth over which the noise is measured .since jnt is a pure electronic measurement that is immune from chemical and mechanical changes in the material properties of the sensor , it is an appealing alternative to other forms of primary gas thermometry that are limited by the non - ideal properties of real gases .recently , interest in jnt has dramatically increased because of its potential contribution to the new si " ( new international system ) , in which the unit of thermodynamic temperature , the kelvin , will be redefined in 2018 by fixing the numerical value of .although almost certainly the value of will be primarily determined by the values obtained by acoustic gas thermometry , there remains the possibility of unknown systematic effects that might bias the results , and therefore an alternative determination using a different physical technique and different principles is necessary to ensure that any systematic effects must be small . to redefine the kelvin , the consultative committee on thermometry ( cct ) of the international committee for weights and measures ( cipm ) has required thatbesides the acoustic gas thermometry method , there must be another method that can determine with a relative uncertainty below 3 . as of now, jnt is the most likely method to meet this requirement . in actual experiments , transmission lines that connect the resistor and the qvns to preamplifiersproduce a ratio spectrum that varies with frequency . duesolely to impedance mismatch effects , for the frequencies of interest , one expects the ratio spectrum to be an even polynomial function of frequency where the constant term ( offset ) in the polynomial is the value provided that dielectric losses are negligible .the theoretical justification for this polynomial model is based on low - frequency filter theory where measurements are modeled by a lumped - parameter approximation . "in particular , one models the networks for the resistor and the qvns as combination of series and parallel complex impedances where the impedance coupling in the resistor network is somewhat different from that in the qvns network .for the ideal case where all shunt capacitive impedances are real , there are no dielectric losses . as a caveat , in actual experiments , other effects including electromagnetic interference and filtering aliasing also affect the ratio spectrum . as discussed in , for the the recent experiment of interest , dielectric losses and other effects are small compared to impedance mismatch effects . 
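for orientation, the size of the johnson noise signal follows directly from the nyquist law <V^2> = 4 k_B T R df. the resistance and bandwidth below are representative values, not necessarily those of the experiment.

```python
import math

# Size of the Johnson noise signal from the Nyquist law <V^2> = 4 k_B T R df.
# R and df are representative values, not necessarily those of the experiment.
k_B = 1.380649e-23        # J/K (exact in the revised SI)
T   = 273.16              # K, triple point of water
R   = 100.0               # ohm, sensing resistor (representative)
df  = 1.0e6               # Hz, measurement bandwidth (representative)

S_V = 4.0 * k_B * T * R                    # power spectral density, V^2/Hz
V_rms = math.sqrt(S_V * df)
print("S_V  = %.3g V^2/Hz" % S_V)          # ~1.5e-18 V^2/Hz
print("Vrms = %.3g V over 1 MHz" % V_rms)  # ~1.2 microvolt
```

the smallness of this signal (about a nanovolt per root hertz) is what makes the transmission-line and impedance-mismatch effects discussed above so important.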
as an aside , impedance mismatch effects also influence results in jnt experiments that do not utilize a qvns , , . based on a fit of the ratio spectrum model to the observed ratio spectrum , one determines the offset parameter .given this estimate of and values of other terms on the right hand sided of eq .3 ( which are known or measured ) , one determines the boltzmann constant .however , the choice of the order ( complexity ) of the polynomial ratio spectrum model and the upper frequency cutoff for analysis ( fitting bandwidth ) significantly affects both the estimate and its associated uncertainty . in jnt, researchers typically select the model complexity and fitting bandwidth based on scientific judgement informed by graphical analysis of results .a common approach is to restrict attention to sufficiently low fitting bandwidths where curvature in the ratio spectrum is not too dramatic and assume that a quadratic spectrum model is valid ( see for an example of this approach ) .in contrast to a practitioner - dependent subjective approach , we present a data - driven objective method to select the complexity of the ratio spectrum model and the fitting bandwidth .in particular , we select the ratio spectrum model based on cross - validation ,,, .we note that in addition to cross - validation , there are other model selection methods including those based on the akaike information criterion ( aic ) , the bayesian information criterion ( bic ) , statistics .however , cross - validation is more data - driven and flexible than these other approaches because it relies on fewer modeling assumptions . since the selected model determined by any method is a function of random data , none perform perfectly . hence , uncertainty due to imperfect model selection performance should be accounted for in the uncertainty budget for any parameter of interest , .failure to account for uncertainty in the selected model generally leads to underestimation of uncertainty . in cross - validation ,one splits observed data into training and validation subsets .one fits candidate models to training data , and selects the model that is most consistent with validation data that are independent of the training data . here( and in most studies ) consistency is measured with a cross - validation statistic equal to the mean square deviation between predicted and observed values in validation data .we stress that , in general , this mean square deviation depends on both random and systematic effects . for each candidate model, practitioners sometimes average cross - validation statistics from many random splits of the data into training and validation data set , , .here , from many random splits , we instead determine model selection fractions determined from a five - fold cross - validation analysis .based on these model selection fractions , we determine the uncertainty of the offset parameter for each fitting bandwidth of interest .we select by minimizing this uncertainty .as far as we know , our resampling approach for quantification of uncertainty due to random effects and imperfect performance of model selected by five - fold cross - validation is new . as an aside , for the case where models are selected based on statistics , efron determined model selection fractions with a bootstrap resampling scheme . in , the complexity of the ratio spectrum and the fitting bandwidthwere selected with an earlier version of the method described here for the case where was no greater than 600 khz . 
here ,we re - analyze the data from but allow to be as large as 1400 khz . in this work, we also quantify an additional component of uncertainty that accounts for we stress that this work focuses only on the uncertainty of the offset parameter in ratio spectrum model . for a discussion of other sources of uncertainty that affect the estimate of the boltzmann constant , see . in a simulation study, we show that our methods correctly selects the correct ratio spectrum for simulated data with additive noise similar to observed data .finally , for experimental data , we quantify evidence for a possible linear temporal trend in estimates for the offset parameter .following , to account for impedance mismatch effects , we model the ratio of the power spectral densities , , as a order even polynomial function of frequency as follows where , and is a reference frequency ( 1 mhz in this work ) . throughout this work , as shorthand , we refer to this model as a model if , a model if , and so on . the constant term , in the eq .4 model corresponds to the time to acquire data varied from 15 h to 20 h. for each run , fourier transforms of time series corresponding to the resistor at the triple point of water temperature and the qvns were determined at a resolution of 1 hz for frequencies up to 2 mhz .were formed for frequency blocks of width 1.8 khz .for the frequency block with midpoint , for the run , we denote the estimate for the resistor and qvns for the run as and respectively where 1,2 , 45 .following , we define a reference value for the offset term in our eq .4 model as where is the codata2010 recommended value of the boltzmann constant , is the measured resistance of the resistor with traceability to the quantum hall resistance , is the triple point water temperature and is the calculated power spectral density of .the difference between the maximum and minimum of the estimates determined from all 45 runs is 2.36 . for the run ,we denote the values and as and respectively .even though and vary from run - to - run , we assume that temporal variation of their difference , , is negligible .later in this work , we check the validity of this key modeling assumption .the component of uncertainty of the estimate of for any run due to imperfect knowledge of is approximately 2 .the estimated weighted mean value of our estimates , , is 1.000100961 .the weights are determined from relative data acquisition times for the runs .following , for each frequency , we pool data from all 45 runs to form a numerator term and a denominator term . from these two terms , we estimate one ratio for each frequency as ( see figure 1 ) . from the eq . 6 ratio spectrum, we estimate one residual offset term where is the weighted mean of values from all runs . in our cross - validation approach ,we select the model that produces the prediction ( determined from the training data ) that is most consistent with the validation data .since varies from run - to - run , we correct spectra so that our cross - validation statistic is not artificially inflated by in .the corrected ratio spectrum for the run is where in effect , given that our calibration experiment measurement of has negligible systematic error , the above correction returns produces a spectrum where the values of should be nearly the same for all runs .we stress that after selecting the model , we estimate from the uncorrected eq .6 spectrum . 
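since the eq. 4 model is linear in its coefficients, the offset a0 can be extracted by ordinary least squares on the even powers of f/f0. a minimal sketch with a synthetic ratio spectrum (illustrative frequency grid and coefficients) is shown below.

```python
import numpy as np

# Least-squares fit of the even-polynomial ratio-spectrum model of eq. 4,
# ratio = a0 + a2 (f/f0)^2 + ... + a_d (f/f0)^d, and extraction of the
# offset a0.  The synthetic spectrum below is illustrative only.

f0 = 1.0e6                                   # reference frequency, 1 MHz
f  = np.arange(900.0, 1.25e6, 1800.0)        # 1.8 kHz frequency blocks
true_ratio = 1.0001 - 4.0e-3 * (f / f0) ** 2 + 6.0e-4 * (f / f0) ** 4
rng = np.random.default_rng(3)
ratio = true_ratio + rng.normal(0.0, 2.0e-5, f.size)

def fit_even_poly(f, ratio, order, f0=1.0e6):
    """Return the coefficients (a0, a2, ..., a_order) of the even-polynomial fit."""
    x = (f / f0) ** 2
    X = np.vander(x, N=order // 2 + 1, increasing=True)   # columns (f/f0)^0, ^2, ^4, ...
    coeffs, *_ = np.linalg.lstsq(X, ratio, rcond=None)
    return coeffs

for order in (2, 4, 6):
    a = fit_even_poly(f, ratio, order)
    print("order %d : offset a0 = %.7f" % (order, a[0]))
```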
in our cross - validation method , we generate 20 000 random five - way splits of the data .in each five - way split , we assign the pair of spectra , ( and , from any particular run to one of the five subsets by a resampling method .each of the 45 spectral pairs appears in one and only one of the five subsets .we resample spectra according to run to retain possible correlation structure within the spectrum for any particular run .each simulated five - way split is determined by a random permutation of the positive integers from 1 to 45 .the first nine permuted integers corresponds to the runs assigned to the first subset .the second nine correspond to the runs assigned to the second subset , and so on . from each random split , four of the subsets are aggregated to form training data , and the other subset forms the validation data . within the training data , we pool all spectrum and all spectrum and form one ratio spectrum . similarly , for the validation data , we pool all spectrum and spectrum and form one ratio spectrum .we fit each candidate polynomial ratio spectrum model to the training data , and predict the observed ratio spectrum in the validation data based on this fit .we then compute the ( empirical ) mean squared deviation ( msd ) between predicted and observed ratios for the validation data .for any random five - way split , there are five ways of defining the validation .hence , we compute five msd values for each random split . the cross - validation statistic for each ,cv( ) , is the average of these five msd values .for each random five - way split , we select the model that yields the smallest value of cv( ) . based on 20 000 random splits of the 45 spectra , we estimate a probability mass function where the possible values of are : 2,4,6,8,10,12 or 14 . for any choice of , suppose that is known exactly .for this ideal case , based on a fit of the ratio spectrum model to the eq .6 ratio spectrum , we could construct a coverage interval for with standard asymptotic methods or with a parametric bootstrap method .for our application , we approximate the parametric bootstrap distribution of our estimate of as a gaussian distribution with mean and variance , where is predicted by asymptotic theory . to account for the effect of uncertainty in on our estimate of , we form a mixture of bootstrap distributions as follows where is the probability density function ( pdf ) for a gaussian random variable with expected value and variance .for any , we select the that yields the largest value of . given that the probability density function ( pdf ) of a random variable is , andthe mean and variance of a random variable with pdf are and , the mean and variance of are and hence , the mean and variance of a random variable sampled from the eq .9 pdf are and respectively , where and where and for each value , we estimate the uncertainty of our estimate of as .we select by minimizing on grid in frequency space . for any fitting bandwidth , is the weighted - mean - square deviation of the estimates of from the candidate models about their weighted mean value where the weights are the empirically determined selection fractions .the term is the weighted variance of the parametric bootstrap sampling distributions for the candidate models where the weights are again the empirically determined selection fractions .we stress that both and are affected by imperfect knowledge of the ratio spectrum model .we fit candidate ratio spectrum models to the eq . 
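the selection-fraction part of this procedure can be sketched compactly. the example below uses far fewer runs and splits than the actual analysis, a gaussian noise model, and pools runs by simply averaging their ratio spectra (a simplification of the pooling of numerator and denominator spectra described above); all of these are assumptions for illustration.

```python
import numpy as np

# Sketch of the five-fold cross-validation used to pick the polynomial order d:
# runs are randomly split five ways, training runs are pooled and fit with each
# candidate model, and the mean squared deviation (MSD) between the predicted
# and observed validation ratio gives CV(d).  Runs are pooled here by simple
# averaging of their ratio spectra, and the run count, split count and noise
# level are scaled-down assumptions for illustration.

rng = np.random.default_rng(4)
f0 = 1.0e6
f  = np.arange(900.0, 1.25e6, 1800.0)
true_ratio = 1.0001 - 4.0e-3 * (f / f0) ** 2 + 6.0e-4 * (f / f0) ** 4   # a d=4 truth

n_runs, n_splits = 20, 200
runs = [true_ratio + rng.normal(0.0, 5.0e-5, f.size) for _ in range(n_runs)]
orders = (2, 4, 6, 8)

def fit_predict(f, ratio, order):
    x = (f / f0) ** 2
    X = np.vander(x, N=order // 2 + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(X, ratio, rcond=None)
    return X @ coeffs

selection_counts = {d: 0 for d in orders}
for _ in range(n_splits):
    perm = rng.permutation(n_runs)
    folds = np.array_split(perm, 5)
    cv = {d: 0.0 for d in orders}
    for k in range(5):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        train_ratio = np.mean([runs[i] for i in train_idx], axis=0)
        val_ratio = np.mean([runs[i] for i in val_idx], axis=0)
        for d in orders:
            pred = fit_predict(f, train_ratio, d)
            cv[d] += np.mean((pred - val_ratio) ** 2) / 5.0
    best = min(cv, key=cv.get)
    selection_counts[best] += 1

fractions = {d: selection_counts[d] / n_splits for d in orders}
print("model selection fractions p(d):", fractions)
```

the resulting selection fractions play the role of the weights in the mixture of bootstrap distributions used above to form the combined uncertainty of the offset parameter.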
6 observed ratio spectrum by the method of least squares ( ls ) .we determine model selection fractions ( table 1 ) and ( table 2 ) for values on an evenly space grid ( with spacing of 25 khz ) between 200 khz and 1400 khz . in figure 2, we show how selected , and vary with .our method selects = ( 1250 khz , 8) , 3.25 , and 2.36 . in figure 3 , we show results for the values of ( that yield the five lowest values of . for these five fitting bandwidths ( 900 khz , 1150 khz , 1175 khz , 1225 khz and 1250 khz ) values appear to follow no pattern as a function of frequency , however visual inspection suggests that the may follow a pattern .it is not clear if the variations in figure 2 and 3 are due to random or systematic measurement effects .to get some insight into this issue , we study the performance of our method for idealized simulated data that are free of systematic measurement error .we simulate three realizations of data based on the estimated values of and in table 3 . in our simulation , for each run , we set = 0 , = 1 and equals the sum of the predicted ratio spectrum and gaussian white noise . for each run, the variance of the noise is determined from fitting the ratio spectrum model to the experimental ratio spectrum for that run . for simulated data ,the , and spectra exhibit fluctuations similar to those in the experimental data ( figures 4,5,6 ) .since variability in simulated data ( figures 4,5,6 ) is due solely to random effects , we can not rule out the possibility that random effects may have produced the fluctuations in the experimental spectra ( figures 2 and 3 ) . for the third realization of simulated data ( figure 6 ) , the estimated values of corresponding to the values that yield the five lowest values of form two clusters in space which are separated by approximately 300 khz ( figure 7 ) .this pattern is similar to that observed for the experimental data ( figures 3 ) . as a caveat , for the experimental data, we can not rule out the possibility that systematic measurement error could cause or enhance observed fluctuations .our method correctly selects the =8 model for each of three independent realizations of simulation data ( table 4 ) . in a second study, we simulate three realizations of data according to a =6 polynomial model based on the fit to experimental data for 900 khz .our methods correctly selects the =6 for each of the three realizations .in , our method selected 575 khz and when was constrained to be less than 600 khz .for these selected values , our current analysis yields 1.81 and 3.58 . 
in this study ,when is allowed to be as large as 1400 khz , our method selects = 1250 hz and , and 2.36 and 3.25 .the difference between the two results , 0.55 , is small compared to the uncertainty of either result .we expect imperfections in our fitting bandwidth selection method for various reasons .first , we conduct a grid search with a resolution of 25 khz rather than a search over a continuum of frequencies .second , there are surely fluctuations in due to random effects that vary with fitting bandwidth .third , different values of can yield very similar values of but somewhat different values of .therefore , it is reasonable to determine an additional component of uncertainty , , that accounts for uncertainty due to imperfect here we equate to the estimated standard deviation of estimates of corresponding to the five values that yielded the lowest values ( figures 3 and 7 ) .for the three realizations of simulated spectra , the corresponding values are 0.39 , 0.45 , and 1.83 . for the experimental data 0.56 .for both simulated and experimental data , our update for the total uncertainty of estimated is where for the simulated data , is 2.69 , 3.08 , and 3.42 . for the experimental data , 3.29 . as a caveat, the choice to quantify as the standard deviation of the estimates corresponding to values that yield the five lowest values is based on scientific judgement .for instance , if we determine based on the the values that yield the ten lowest rather the five lowest values , for the three simulation cases and the observed data we get 0.55 , 1.08 , 1.42 , and 0.73 respectively . from the corrected eq.7 spectra, we estimate for each of the 45 runs by fitting our selected ratio spectrum model ( 8 and 1250 khz ) to the data from each run by the method of ls ( figure 8) .ideally , on average , these estimates should not from run - to - run assuming that our eq . 8 correction model based on calibration data is valid .from these estimates , we determine the slope and intercept parameters for a linear trend model by the method of weighted least squares ( wls ) where we minimize above , for the run , and are the estimated and predicted ( according to the trend model ) values of , and is the inverse of the squared asymptotic uncertainty of associated with our estimate determined by the ls fit to data from the run .we determine the uncertainty of the trend model parameters with a nonparametric bootstrap method ( see appendix 1 ) following ( table 5 ) .we repeat the bootstrap procedure but set to a constant .this analysis yields an estimate of the null distribution of the slope estimate corresponding to the hypothesis that there is no trend .the fraction of bootstrap slope estimates with magnitude greater than or equal to the magnitude of the estimated slope determined from the observed data is the bootstrap -value corresponding to a two - tailed test of the null hypothesis .for the values that yield the five lowest values of , our bootstrap analysis provides strong evidence that varies with time . at the value of 575 khz, there is a moderate amount of evidence for a trend . 
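the weighted least-squares trend fit described above amounts to solving the usual wls normal equations with weights 1/u_i^2; a minimal sketch with synthetic per-run offsets, acquisition times and uncertainties follows.

```python
import numpy as np

# Weighted least-squares fit of the linear trend model g_i = b0 + b1 * t_i,
# with weights w_i = 1 / u_i^2 (inverse squared asymptotic uncertainties).
# The per-run offsets, acquisition times and uncertainties are synthetic.

rng = np.random.default_rng(5)
t = np.cumsum(rng.uniform(15.0, 20.0, 45)) / 24.0     # run times in days
u = rng.uniform(2.0, 4.0, 45) * 1e-6                  # per-run uncertainties of g_i
g = 1.0001010 - 0.3e-6 * t + rng.normal(0.0, u)       # offsets with a weak trend

w = 1.0 / u**2
X = np.column_stack([np.ones_like(t), t])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)      # WLS normal equations
cov  = np.linalg.inv(X.T @ W @ X)                     # asymptotic covariance
print("slope b1 = %.3g +/- %.3g per day" % (beta[1], np.sqrt(cov[1, 1])))
```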
for each choice ,we test the hypothesis that the linear trend is consistent with observations based on the value of .if the observed data are consistent with the trend model , a resulting -value from this test is realization of a random variable with a distribution that is approximately uniform between 0 and 1 .hence , the large -values reported column 7 of table 5 suggest that the asymptotic uncertainties determined by the ls method for each run may be inflated . as a check, we estimate the slope uncertainty with a parametric bootstrap method where we simulate bootstrap replications of the observed data by adding gaussian noise to the estimated trend with standard deviations equal to the asymptotic uncertainties determined by the ls method .in contrast to the method from , the parametric bootstrap method yields larger slope uncertainties .for instance , for the 1250 khz and 575 khz cases , the parametric bootstrap slope uncertainty estimates are larger than the corresponding table 5 estimates by 30 and 23 respectively. could result due to heteroscedasticity ( frequency dependent additive measurement error variances ) .this follows from the well - known fact that when models are fit to heteroscedastic data , the variance of parameter estimates determined by the ls method are larger than the variance of parameter estimates determined by the ideal wls method .based on fits of selected models to data pooled from all runs , we test the hypothesis that the variance of the additive noise in the ratio spectrum is independent of frequency . based on the breush - pagan method , the -values corresponding to the test of this hypothesis are 0.723 , 0.064 , 0.006 , 0.001 , 0.001 and 0.001 for fitting bandwidths of 575 khz , 900 khz , 1150 khz , 1175 khz , 1225 khz , and 1250 khz respectively .hence , the evidence for heteroscedasticity is very strong for the larger fitting bandwidths . for a fitting bandwidth of 1250 khz ,the variation of the estimated trend over the duration of the experiment is .in contrast , the uncertainty of our estimate of determined under the assumption that there is no trend is only 3.29 ( table 4 ) .the evidence for a linear trend at particular fitting bandwidths above 900 khz is strong ( p - values are 0.012 or less ) ( column 5 in table 5 ) . in contrast , the evidence ( from hypothesis testing ) for a linear trend at 575 khz ( the selected fitting bandwidth in ) is not as strong since the p - value is 0.049 .however , the larger p - value at 575 khz may be due to more random variability in the estimated offset parameters rather than lack of a deterministic trend in the offset parameters .this hypothesis is supported by the observation that the uncertainty of the estimated slope parameter due to random effects generally increases as the fitting bandwidth is reduced , and that fact that all of the slope parameter estimates for cases shown in table 5 are negative and vary from -0.440 / day to -0.164 / day . together , these observations strongly suggest a deterministic trend in measured offset parameters at a fitting bandwidth of 575 khz . currently , experimental efforts are underway to understand the physical source of this ( possible ) temporal trend .if there is a linear temporal trend in the offset parameters at a fitting bandwidth of 575 khz , with slope similar to what we estimate from data , the reported uncertainty for the boltzmann constant reported in is optimistic because the trend was not accounted for in the uncertainty budget . 
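Stepping back to the heteroscedasticity check used above, a hedged sketch of the Breusch-Pagan statistic is shown below. It implements the studentized (n R²) form of the test by regressing squared residuals of the pooled ratio-spectrum fit on frequency; whether the published analysis used this exact variant or an existing library routine is not stated, so these details are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def breusch_pagan(resid, freq):
    """Test whether the residual variance depends on frequency.
    resid: residuals of the pooled ratio-spectrum fit; freq: frequencies."""
    z = np.column_stack([np.ones_like(freq), freq])   # auxiliary regressors
    y = resid ** 2
    beta, *_ = np.linalg.lstsq(z, y, rcond=None)
    fitted = z @ beta
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    lm = len(y) * r2                                  # Lagrange multiplier statistic
    return lm, chi2.sf(lm, df=z.shape[1] - 1)         # statistic and p-value
```

Small p-values, as reported here for the larger fitting bandwidths, indicate frequency-dependent noise variance.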
to account for the effect of a linear trend on results, one must estimate the associated systematic error due to the trend at some particular time .unfortunately , we have no empirical method to do this . as a further complication ,if there is a trend , it may not be exactly linear .for electronic measurements of the boltzmann constant by jnt , we presented a data - driven method to select the complexity of a ratio spectrum model with a cross - validation method . for each candidate fitting bandwidth , we quantified the uncertainty of the offset parameter in the spectral ratio model in a way that accounts for random effects as well as systematic effects associated with imperfect knowledge of model complexity .we selected the fitting bandwidth that minimizes this uncertainty .we also quantified an additional component of uncertainty that accounts for imperfect knowledge of the selected fitting bandwidth . with our method , we re - analyzed data from a recent experiment by considering a broader range of fitting bandwidths and found evidence for a temporal linear trend in offset paramaters . for idealized simulated data free of systematic error with additive noise similar to experimental data ,our method correctly selected the true complexity of the ratio spectrum model for all cases considered . in the future, we plan to study how well our methods work for other experimental and simulated jnt data sets with variable signal - to - noise ratios and investigate how robust our method is to systematic measurement errors .we expect our method to find broad metrological applications including selection of optimal virial equation models in gas thermometry experiments , and selection of line - width models in doppler broadening thermometry experiments. * acknowledgements .* contributions of staff from nist , an agency of the us government , are not subject to copyright in the us .jnt research at nim is supported by grants nsfc ( 61372041 and 61001034 ) .jifeng qu acknowledges samuel p. benz , alessio pollarolo , horst rogalla , weston l. tew of the nist and rod white of msl , new zealand for their help with the jnt measurements analyzed in this work .we also thank tim hesterberg of google for helpful discussions .we denote the estimate of from the run as , and the data acquisition time for the run as .our linear trend model is where the observed data is , is an additive noise term , and the matrix is and where the is the intercept parameter and is the slope parameter .above , we model the component of as a realization of a random variable with expected value 0 and variance where is known but is unknown . following ,the predicted value in a linear regression model is where the estimated model parameters are where is a diagonal weight matrix . here, we set the component of to where is the estimated asymptotic variance of determined by the method of ls .following , we form a modified residual where is the i diagonal element of the hat matrix .this transformation ensures that the modified residuals are realizations of random variables with nearly the same variance .the component of a bootstrap replication of the observed data is where and is sampled with replacement from where is the mean modified residual . 
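The construction of a bootstrap replication just described is partly garbled (the exact treatment of the resampled modified residuals is lost), so the following sketch follows a standard weighted-regression resampling recipe consistent with the surrounding text: fit the trend by weighted least squares, form leverage-corrected residuals, centre them, and resample them onto the fitted trend. The arrays t (run times in days), a0 (per-run offset estimates) and u (their asymptotic least-squares uncertainties) are placeholders.

```python
import numpy as np

def wls_trend(t, y, u):
    """Weighted least-squares fit of y = intercept + slope * t with weights 1/u^2."""
    X = np.column_stack([np.ones_like(t), t])
    w = 1.0 / u**2
    cov = np.linalg.inv((X.T * w) @ X)
    beta = cov @ ((X.T * w) @ y)
    fitted = X @ beta
    h = np.einsum('ij,jk,ik->i', X, cov, X) * w       # leverages of the weighted fit
    return beta, fitted, h

def bootstrap_slope_uncertainty(t, y, u, n_boot=50_000, seed=0):
    rng = np.random.default_rng(seed)
    beta, fitted, h = wls_trend(t, y, u)
    r = (y - fitted) / np.sqrt(1.0 - h)               # modified residuals
    r -= r.mean()                                     # remove the mean modified residual
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        y_star = fitted + rng.choice(r, size=len(y), replace=True)
        slopes[b] = wls_trend(t, y_star, u)[0][1]
    return beta[1], slopes.std(ddof=1)                # slope estimate and its uncertainty
```

Replacing `fitted` by a constant in the resampling loop gives the null distribution of the slope used for the trend test.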
from each of 50 000bootstrap replications of the observed data , we determine a slope and intercept parameter .the bootstrap estimate of the uncertainty of the slope and intercept parameters are the estimated standard deviations of the slope and intercept parameters determined from these estimates .\1 . based on a calibration data estimate of for each run , correct spectra per eq .+ 2 . set fitting bandwidth to = 200 khz .+ 3 . set resampling counter to 1 .+ 4 . randomly split and spectral pairs from each of 45 runs into five subsets .select by five - fold cross - validation and update appropriate model selection fraction estimate .increase by 1 .if 20 000 , go to 4 .if 20 000 , select the polynomial model with largest associated selection fraction . calculate estimate of residual offset and its uncertainty ( eq .16 ) from pooled uncorrected spectrum ( see eq .increase by 25 khz and go to 3 if 1400 khz .select by minimizing .denote the minimum value as .assign component of uncertainty associated with imperfect performance of method to select as to estimated standard deviation of offset estimates corresponding to values that yield the five lowest values of .our final estimate of the uncertainty of the offset is .+ for completeness , we note that simulations and analyses were done with software scripts developed in the public domain r language and environment .please contact kevin coakley regarding any software - related questions .9 pitre l , sparasci f , truong d , guillou a , risegari l , and himbert m e ( 2011 ) .determination of the boltzmann constant using a quasi - spherical acoustic resonator , _ int .j. thermophys .1825 - 1886 .gavioso r m , benedetto g , albo p a g , ripa d m , merlone a , guianvarch c , moro f and cuccaro r ( 2010 ) . a determination of the boltzmann constant from speed of sound measurements in helium at a single thermodynamic state , _ metrologia _ 47 387 - 409 lin h , feng x j , gillis k a , moldover m r , zhang j t , sun j p , duan y y ( 2013 ) . improved determination of the boltzmann constant using a single , fixed - length cylindrical cavity , _ metrologia _ 50 417 - 32 .gaiser c , zandt t , fellmuth b , fischer j , gaiser c , jusko o , priruenrom t , and sabuga w and zandt t ( 2011 ) . improved determination of the boltzmann constant by dielectric - constant gas thermometry , _ metrologia _ vol .48 , pp . 382 - 390l7-l11 .jifeng , q , s. benz , a. pollarolo , h. rogalla , w. tew , d. white and k. zhou , ( 2015 ) .improved electronic measurement of the boltzmann constant by johnson noise thermometry , _ metrologia _ , vol .s242-s256 .fasci , e , m. d. de vizia , a. merlone , l. moretti , a. castrillo and l. gianfrani ( 2015 ) , the boltzmann constant from the h vibration rotation spectrum : complementary tests and revised uncertainty budget _ metrologia _ _ 52 _ s233s241 .blalock t.v . and r.l .shepard ( 1982 ) , a decade of progress in high temperature johnson noise therometry , _ temperature , its measurement and control in science and industry , 5 , instrument society of america , _ pp .1219 - 1223 .white d.r . , r. galleano , a. actis , h. brixy , m. de groot , j. dubbeldam , a reesink , f. edler , h. sakurai , r.l .shepard , j.c .gallop ( 1996 ) , the status of johnson noise thermometry , _ metrologia _ _ 33 _ pp .325 - 335 .akaike , h. ( 1973 ) , information theory and an extension of the maximum likelihood principle in _ second international symposium on information theory,_(eds .b.n . petrov and f. 
czaki ) akademiai kiado , budapest , 267 - 81 .2010 c|ccccccc & selection fractions : & & & + ( khz ) & =2 & =4 & =6 & =8 & =10 & =12 & =14 + + 200 & 0.7789 & 0.1269 & 0.0698 & 0.0046 & 0.0159 & 0.0039 & 0.0000 + 300 & 0.3271 & 0.2510 & 0.0000 & 0.4160 & 0.0047 & 0.0012 & 0.0000 + 400 & 0.0304 & 0.2234 & 0.6427 & 0.0017 & 0.0240 & 0.0716 & 0.0061 + 500 & 0.0249 & 0.6043 & 0.0010 & 0.2648 & 0.0701 & 0.0001 & 0.0349 + 525 & 0.1347 & 0.0960 & 0.0150 & 0.6758 & 0.0747 & 0.0037 & 0.0000 + 550 & 0.0290 & 0.7016 & 0.0000 & 0.0000 & 0.2668 & 0.0027 & 0.0000 + 575 & 0.0000 & 0.9446 & 0.0000 & 0.0000 & 0.0012 & 0.0513 & 0.0029 + 600 & 0.0000 & 0.9264 & 0.0003 & 0.0000 & 0.0002 & 0.0730 & 0.0000 + 625 & 0.0027 & 0.2192 & 0.1273 & 0.0746 & 0.0002 & 0.5289 & 0.0472 + 650 & 0.1072 & 0.0022 & 0.2342 & 0.1557 & 0.0006 & 0.0000 & 0.5001 + 675 & 0.0320 & 0.0287 & 0.6831 & 0.0077 & 0.0201 & 0.0000 & 0.2285 + 700 & 0.2659 & 0.0106 & 0.5514 & 0.0002 & 0.1718 & 0.0000 & 0.0002 + 725 & 0.0326 & 0.0181 & 0.8166 & 0.0159 & 0.1067 & 0.0101 & 0.0000 + 750 & 0.0700 & 0.0103 & 0.5985 & 0.1475 & 0.0242 & 0.1495 & 0.0001 + 775 & 0.0000 & 0.0863 & 0.2889 & 0.3968 & 0.1279 & 0.0980 & 0.0021 + 800 & 0.0000 & 0.0216 & 0.4607 & 0.4439 & 0.0017 & 0.0600 & 0.0120 + 825 & 0.0009 & 0.0667 & 0.8472 & 0.0320 & 0.0004 & 0.0168 & 0.0362 + 850 & 0.0000 & 0.0000 & 0.9294 & 0.0071 & 0.0034 & 0.0059 & 0.0541 + 875 & 0.0000 & 0.0000 & 0.8301 & 0.0344 & 0.0001 & 0.0000 & 0.1354 + 900 & 0.0278 & 0.0012 & 0.9357 & 0.0314 & 0.0030 & 0.0000 & 0.0009 + 925 & 0.0178 & 0.0002 & 0.6384 & 0.3411 & 0.0003 & 0.0000 & 0.0022 + 950 & 0.1015 & 0.0000 & 0.0135 & 0.8824 & 0.0022 & 0.0006 & 0.0000 + 975 & 0.0421 & 0.0212 & 0.0782 & 0.8560 & 0.0026 & 0.0000 & 0.0000 + 1000 & 0.0000 & 0.1072 & 0.1866 & 0.6441 & 0.0622 & 0.0000 & 0.0000 + 1025 & 0.0000 & 0.1704 & 0.0000 & 0.8158 & 0.0074 & 0.0029 & 0.0035 + 1050 & 0.0010 & 0.0000 & 0.0000 & 0.9751 & 0.0232 & 0.0007 & 0.0000 + 1075 & 0.0117 & 0.0001 & 0.0000 & 0.9646 & 0.0235 & 0.0000 & 0.0000 + 1100 & 0.0000 & 0.0021 & 0.0000 & 0.9976 & 0.0003 & 0.0000 & 0.0000 + 1125 & 0.0000 & 0.0000 & 0.0000 & 0.9519 & 0.0481 & 0.0000 & 0.0000 + 1150 & 0.0000 & 0.0000 & 0.0000 & 0.9962 & 0.0012 & 0.0016 & 0.0010 + 1175 & 0.0000 & 0.0000 & 0.0000 & 0.9996 & 0.0000 & 0.0003 & 0.0000 + 1200 & 0.0000 & 0.0002 & 0.0000 & 0.7125 & 0.1578 & 0.1269 & 0.0027 + 1225 & 0.0000 & 0.0000 & 0.0000 & 0.9998 & 0.0002 & 0.0000 & 0.0000 + 1250 & 0.0000 & 0.0000 & 0.0000 & 0.9427 & 0.0571 & 0.0003 & 0.0000 + 1275 & 0.0000 & 0.0000 & 0.0000 & 0.4784 & 0.5181 & 0.0034 & 0.0000 + 1300 & 0.0000 & 0.0000 & 0.0000 & 0.1782 & 0.5986 & 0.2231 & 0.0000 + 1325 & 0.0000 & 0.0000 & 0.0000 & 0.0010 & 0.1050 & 0.8885 & 0.0055 + 1350 & 0.1879 & 0.2538 & 0.0000 & 0.0000 & 0.0000 & 0.2315 & 0.3268 + 1375 & 0.0362 & 0.0322 & 0.0000 & 0.0000 & 0.1788 & 0.7038 & 0.0491 + 1400 & 0.2812 & 0.0593 & 0.0000 & 0.0000 & 0.4527 & 0.0060 & 0.2007 + cccccccc & & & & & & & + + ( khz ) & & & & & & & + 200 & 2 & -0.57 & 4.15 & -1.97 & 4.57 & 3.173 & 5.564 + 300 & 8 & -7.58 & 5.62 & -2.71 & 4.67 & 4.348 & 6.380 + 400 & 6 & -2.09 & 4.40 & -1.35 & 4.41 & 3.041 & 5.357 + 500 & 4 & 1.88 & 3.47 & 0.39 & 4.00 & 2.133 & 4.535 + 525 & 8 & -0.42 & 4.38 & -0.75 & 4.13 & 1.255 & 4.319 + 550 & 4 & 1.90 & 3.26 & 0.51 & 3.69 & 2.179 & 4.290 + 575 & 4 & 1.81 & 3.18 & 1.48 & 3.31 & 1.360 & 3.579 + 600 & 4 & 2.21 & 3.14 & 1.79 & 3.30 & 1.473 & 3.618 + 625 & 12 & -2.65 & 4.83 & -1.04 & 4.31 & 2.295 & 4.886 + 650 & 14 & -5.09 & 5.05 & -2.35 & 4.34 & 4.384 & 6.167 + 675 & 6 & 3.55 & 3.49 & 1.36 
& 3.88 & 3.455 & 5.197 + 700 & 6 & 2.88 & 3.40 & -0.95 & 3.32 & 5.318 & 6.271 + 725 & 6 & 2.63 & 3.37 & 1.92 & 3.46 & 2.345 & 4.177 + 750 & 6 & 1.99 & 3.39 & 0.97 & 3.61 & 3.270 & 4.874 + 775 & 8 & 3.88 & 3.72 & 1.78 & 3.68 & 2.422 & 4.404 + 800 & 6 & 1.15 & 3.29 & 1.96 & 3.57 & 1.699 & 3.953 + 825 & 6 & 1.64 & 3.31 & 0.92 & 3.39 & 2.454 & 4.182 + 850 & 6 & 1.61 & 3.25 & 1.50 & 3.36 & 0.545 & 3.403 + 875 & 6 & 1.29 & 3.21 & 1.07 & 3.45 & 0.735 & 3.530 + 900 & 6 & 1.44 & 3.17 & 1.36 & 3.17 & 0.769 & 3.266 + 925 & 6 & 1.03 & 3.13 & 1.49 & 3.27 & 0.656 & 3.334 + 950 & 8 & 2.85 & 3.51 & 3.12 & 3.44 & 0.994 & 3.577 + 975 & 8 & 2.35 & 3.44 & 2.04 & 3.39 & 4.020 & 5.261 + 1000 & 8 & 1.97 & 3.41 & -1.29 & 3.33 & 8.390 & 9.028 + 1025 & 8 & 2.73 & 3.44 & -2.49 & 3.41 & 11.510 & 12.010 + 1050 & 8 & 2.90 & 3.40 & 2.91 & 3.41 & 0.968 & 3.549 + 1075 & 8 & 2.94 & 3.37 & 3.38 & 3.40 & 4.185 & 5.394 + 1100 & 8 & 2.78 & 3.37 & 2.69 & 3.37 & 1.842 & 3.837 + 1125 & 8 & 3.13 & 3.34 & 3.09 & 3.36 & 0.343 & 3.372 + 1150 & 8 & 2.69 & 3.30 & 2.69 & 3.30 & 0.329 & 3.315 + 1175 & 8 & 2.82 & 3.30 & 2.82 & 3.30 & 0.012 & 3.301 + 1200 & 8 & 2.18 & 3.28 & 2.34 & 3.42 & 0.780 & 3.511 + 1225 & 8 & 2.62 & 3.25 & 2.62 & 3.25 & 0.002 & 3.253 + 1250 & 8 & 2.36 & 3.22 & 2.40 & 3.24 & 0.145 & 3.246 + 1275 & 10 & 3.13 & 3.53 & 2.56 & 3.38 & 0.592 & 3.431 + 1300 & 10 & 3.52 & 3.48 & 2.88 & 3.49 & 0.863 & 3.597 + 1325 & 12 & 2.23 & 3.76 & 2.42 & 3.73 & 0.553 &3.774 + 1350 & 14 & 3.38 & 4.05 & 25.47 & 6.62 & 90.27 & 90.52 + 1375 & 12 & 2.82 & 3.81 & 9.22 & 4.54 & 43.23 & 43.47 + 1400 & 10 & 4.32 & 3.57 & 68.13 & 8.41&112.2&112.5 + cc parameter & estimate + + & 2.36 ( 3.22 ) + & -4.33 ( 0.41 ) + & 1.66 ( 0.13 ) + & -2.25 ( 0.13 ) + & 6.26 ( 0.46 ) + cccccccccccc & & & & & & & & & & + & ( khz ) & & & & & & & & & + + + + experimental & 1250 & 8 & 2.36 & 3.22 & 2.40 & 3.24 & 0.14 & 3.25 & 0.56 & 3.29 + + + realization 1 & 1400 & 8 & 4.25 & 2.64 & 4.20 & 2.66 & 0.18 & 2.67 & 0.39 & 2.69 + realization 2 & 1150 & 8 & -0.43 & 2.98 & -0.39 & 3.00 & 0.59 & 3.05 & 0.45 & 3.08 + realization 3 & 1325 & 8 & -2.59 & 2.76 & -2.88 & 2.82 & 0.60 & 2.89 & 1.83 & 3.42 + + cccccccc ( khz ) & & intercept & slope & -value & & -value & + & & & day & ( trend test ) & & ( model consistency test ) & + + 200 & 2 & 11.98(6.50 ) & -0.243(0.125 ) & 0.050 & 33.0 & 0.864 & -0.57(5.56 ) + 300 & 8 & 11.74(10.54 ) & -0.440(0.199 ) & 0.027 & 47.0 & 0.311 & -7.58(6.38 ) + 400 & 6 & 10.31(6.64 ) & -0.247(0.126 ) & 0.062 & 35.8 & 0.775 & -2.09(5.36 ) + 500 & 4 & 10.69(5.12 ) & -0.175(0.097 ) & 0.072 & 32.6 & 0.877 & 1.88(4.54 ) + 575 & 4 & 10.13 ( 4.42 ) & -0.164 ( 0.084 ) & 0.049 & 28.2 & 0.960 & 1.81(3.58 ) + 700 & 6 & 13.56(4.95 ) & -0.213(0.093 ) & 0.021 &31.0 & 0.914 & 2.88(6.27 ) + 800 & 6 & 10.49(3.94 ) & -0.175(0.074 ) & 0.018 & 22.5 & 0.996 & 1.15(3.95 ) + + + 900 & 6 & 9.15 ( 3.49 ) & -0.164 ( 0.066 ) & 0.012 & 19.9 & 0.999 & 1.44(3.27 ) + 1150 & 8 & 11.39 ( 3.79 ) & -0.179 ( 0.071 ) & 0.011 & 23.2 & 0.994 & 2.69(3.32 ) + 1175 & 8 & 11.57 ( 3.81 ) & -0.185 ( 0.072 ) & 0.010 & 24.0 & 0.991 & 2.82(3.30 ) + 1225 & 8 & 12.13 ( 3.79 ) & -0.204 ( 0.071 ) & 0.004 & 25.0 & 0.987 & 2.62(3.25 ) + 1250 & 8 & 12.02 ( 3.78 ) & -0.202 ( 0.071 ) & 0.004 & 25.2 & 0.986 & 2.36(3.25 ) + for each run by fitting a ratio spectrum model by the method of least squares .the half - widths of the intervals are asymptotic uncertainties determined by the least squares method .the fitting bandwidth is = 1250 khz.,height=720 ]
in the electronic measurement of the boltzmann constant based on johnson noise thermometry , the ratio of the power spectral densities of thermal noise across a resistor at the triple point of water , and pseudo - random noise synthetically generated by a quantum - accurate voltage - noise source is constant to within 1 part in a billion for frequencies up to 1 ghz . given knowledge of this ratio , and the values of other parameters that are known or measured , one can determine the boltzmann constant . due , in part , to mismatch between transmission lines , the experimental ratio spectrum varies with frequency . we model this spectrum as an even polynomial function of frequency where the constant term in the polynomial determines the boltzmann constant . when determining this constant ( offset ) from experimental data , the assumed complexity of the ratio spectrum model and the maximum frequency analyzed ( fitting bandwidth ) dramatically affects results . here , we select the complexity of the model by cross - validation a data - driven statistical learning method . for each of many fitting bandwidths , we determine the component of uncertainty of the offset term that accounts for random and systematic effects associated with imperfect knowledge of model complexity . we select the fitting bandwidth that minimizes this uncertainty . in the most recent measurement of the boltzmann constant , results were determined , in part , by application of an earlier version of the method described here . here , we extend the earlier analysis by considering a broader range of fitting bandwidths and quantify an additional component of uncertainty that accounts for imperfect performance of our fitting bandwidth selection method . for idealized simulated data with additive noise similar to experimental data , our method correctly selects the true complexity of the ratio spectrum model for all cases considered . a new analysis of data from the recent experiment yields evidence for a temporal trend in the offset parameters . + keywords : boltzmann constant , cross - validation , johnson noise thermometry , model selection , resampling methods , impedance mismatch
new technologies are producing an ever - increasing volume of sequence data , but the inadequacy of current tools puts restrictions on large - scale analysis . at over 3 billion base pairs ( bp ) long , the human genome naturally falls into the category of big data , and techniques need to be developed to efficiently analyze and uncover novel features .applications include sequencing individual genomes , cancer genomes , inherited diseases , infectious diseases , metagenomics , and zoonotic diseases . in 2003 ,the cost of sequencing the first human genome was 100 in several years .the dramatic decrease in the cost of obtaining genetic sequences has resulted in an explosion of data with a range of applications , including early identification and detection of infectious organisms from human samples .dna sequences are highly redundant among organisms . for example , homo sapiens share about 70% of genes with the zebrafish . despite the similarities ,the discrepancies create unique sequences that act as fingerprints for organisms .the magnitude of sequence data makes correctly identifying organisms based on segments of genetic code a complex computational problem. given one segment of dna , current technologies can quickly determine what organism it likely belongs to .however , the speed rapidly diminishes as the number of segments increases . the time to identify all organisms present in a human sample ( blood or oral swab ) , can be up to 45 days ( figure [ ecolib ] ) with major bottlenecks being ( 1 ) sample shipment to sequencing centers and ( 2 ) sequence analysis .consider the specific case of an _e. coli _ outbreak that took place in germany in the summer of 2011 ( figure [ ecolia ] ) . in this instance , while genetic sequencing resolved how the virulence of this isolate was different with the insertion of a bacterial virus carrying a toxic gene from previously characterized isolates , it arrived too late to significantly impact the number of deaths .figure [ ecolib ] shows that there are a number of places in the `` current '' time frames for the sequencing process where improved algorithms and computation could have a significant impact .ultimately , with the appropriate combination of technologies , it should be possible to shorten the timeline for identification from weeks to less than one day .a shortened timeline could significantly reduce the number of deaths from such an outbreak . first developed in 1990 , the basic local alignment search tool ( blast )is the current gold - standard method used by biologists for genetic sequence searches .many versions are available on the national center for biotechnology information ( ncbi ) website to match nucleotide or protein sequences . 
in general , blast is a modification of the smith waterman algorithm and works by locating short `` seeds '' where the reference and unknown samples have identical seed sequences .seeds are expanded outwards , and with each added base pair , probabilities and scores are updated to signify how likely it is the two sequences are a match .base pairs are added until the probabilities and scores reach threshold values .the score most highly used by biologists to distinguish true matches is the expect value , or e - value .the e - value uses distributions of random sequences and sequence lengths to measure the expected number of matches that will occur by chance .similar to a statistical p - value , low e - values represent matches .e - value thresholds depend on the dataset , with typical values ranging from 1e-6 to 1e-30 . despite the relation to the p - value ,e - value calculations are complex and do not have a straightforward meaning .additionally , the underlying distributions are known to break down in alignments with shorter sequences or numerous repeats ( low - complexity sequences ) . direct use of blast to compare a 600 gb collection with a comparably sized reference set requires months on a 10,000 core system .the dynamic distributed dimensional data model ( d4 m ) developed at lincoln laboratory , has been used to accelerate dna sequence comparison .d4 m is an innovation in computer programming that combines the advantages of five processing technologies : triple store databases , associative arrays , distributed arrays , sparse linear algebra , and fuzzy algebra .triple store databases are a key enabling technology for handling massive amounts of data and are used by many large internet companies ( e.g. , google big table ) .triple stores are highly scalable and run on commodity computers , but lack interfaces to support rapid development of the mathematical algorithms used by signal processing experts .d4 m provides a parallel linear algebraic interface to triple stores . using d4 m, developers can create composable analytics with significantly less effort than if they used traditional approaches .the central mathematical concept of d4 m is the associative array that combines spreadsheets , triple stores , and sparse linear algebra .associative arrays are group theoretic constructs that use fuzzy algebra to extend linear algebra to words and strings .associative arrays provide an intuitive mechanism for representing and manipulating triples of data and are a natural way to interface with the new class of high performance nosql triple store databases ( e.g. , google big table , apache accumulo , apache hbase , netflix cassandra , amazon dynamo ) . 
because the results of all queries and d4 m functions are associative arrays , all d4 m expressions are composable and can be directly used in linear algebraic calculations .the composability of associative arrays stems from the ability to define fundamental mathematical operations whose results are also associative arrays .given two associative arrays a and b , the results of all the following operations will also be associative arrays : a + b a - b a & b a a*b d4 m provides tools that enable the algorithm developer to implement a sequence alignment algorithm on par with blast in just a few lines of code .the direct interface to high performance triple store databases allows new database sequence alignment techniques to be explored quickly .the presented algorithm is designed to couple linear algebra approaches implemented in d4 m with well - known statistical properties to simplify and accelerate current methods of genetic sequence analysis .the analysis pipeline can be broken into four key steps : collection , ingestion , comparison , and correlation ( figure [ steps ] ) . in _ collection_ , unknown sample data is received from a sequencer in fasta format , and parsed into a suitable form for d4 m .dna sequences are split into k - mers of 10 bases in length ( words ) .uniquely identifiable metadata is attached to the words and positions are stored for later use .low complexity words ( composed of only 1 or 2 dna bases ) are dropped .very long sequences are segmented into groups of 1,000 k - mers to reduce non - specific k - mer matches . the _ ingestion _ step loads the sequence identifiers , words , and positions into d4 m associative arrays by creating unique rows for every identifier and columns for each word .the triple store architecture of associative arrays effortlessly handles the ingestion and organization of the data . during the process ,redundant k - mers are removed from each 1,000 bp segment , and only the first occurrence of words are saved . the length and the four possible bases gives a total of 4 ( just over one - million ) possible words , naturally leading to sparse matrices and operations of sparse linear algebra . additionally , the segmentation into groups of 1,000 ensures sparse vectors / matrices with less than 1 in 1,000 values used for each sequence segment .a similar procedure is followed for all known reference data , and is also ingested into a matrix . sequence similarity is computed in the _ comparison _ step . using the k - mers as vector indices allows a vector cross - product value of two sequences to approximate a pair - wise alignment of the sequences .likewise , a matrix multiplication allows the comparison of multiple sequences to multiple sequences in a single mathematical operation . for each unknown sequence , only strong matches ( those with greater than 20 words in common ) are stored for further analysis .computations are accelerated by the sparseness of the matrices .the redundant nature of dna allows two unrelated sequences to have numerous words in common .noise is removed by assuring the words of two matching sequences fall in the same order .the _ correlation _step makes use of the 10-mer positions to check the alignments .when the positions in reference and unknown sequences are plotted against each other , true alignments are linearly correlated with an absolute correlation coefficient ( r - value ) close to one . 
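The following is not the D4M/MATLAB implementation, but a Python/scipy.sparse sketch of the same linear-algebra idea described in the ingestion and comparison steps: each sequence becomes a sparse indicator row over the 4^10 possible 10-mers, and one sparse matrix product yields the hit counts between every unknown/reference pair. Segmentation into 1,000-mer groups is omitted for brevity, sequences are assumed to be upper-case A/C/G/T strings, and all names are illustrative.

```python
import numpy as np
from scipy import sparse

BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
K = 10

def kmer_index(word):
    """Map a 10-mer to an integer column index in [0, 4**10)."""
    idx = 0
    for b in word:
        idx = idx * 4 + BASES[b]
    return idx

def kmer_matrix(seqs):
    """Sparse indicator matrix: one row per sequence, one column per 10-mer."""
    rows, cols = [], []
    for i, s in enumerate(seqs):
        seen = set()
        for j in range(len(s) - K + 1):
            w = s[j:j + K]
            if len(set(w)) <= 2:          # drop low-complexity words (1 or 2 bases)
                continue
            seen.add(kmer_index(w))       # keep only the first occurrence
        rows.extend([i] * len(seen))
        cols.extend(seen)
    data = np.ones(len(rows), dtype=np.int32)
    return sparse.csr_matrix((data, (rows, cols)), shape=(len(seqs), 4**K))

def hit_counts(unknown_seqs, reference_seqs):
    """hit_counts[i, j] = number of unique 10-mers shared by unknown i and reference j."""
    return kmer_matrix(unknown_seqs) @ kmer_matrix(reference_seqs).T
```

Only entries with more than 20 shared words are kept for the correlation step.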
matches with over 20 words in common and absolute r - values greater than 0.9are considered strong .after these initial constraints are applied , additional tests may be used for organism identification and with large datasets .the algorithm was tested using two generated datasets ( datasets 1 and 2 ) .the smaller dataset 1 ( 72,877 genetic sequences ) was first compared to a fungal dataset and used to examine the selection criteria .results were compared with those found by running blast .dataset 2 ( 323,028 sequences ) was formed from a human sample and spiked with _ in silico _ bacteria organisms using fastqsim .bacterial results from dataset 2 were again compared to blast and used to test for correct organism identification . in both comparisons , the reference sets were compiled from rna present in genbank .it is important to note that the reference sequences are unique to the taxonomic gene level .therefore , each gene of an organism is represented by a unique sequence and metadata .dataset 1 was compared to the fungal rna dataset and , blast was run using blastn and a mit lincoln laboratory developed java blastparser program .d4 m found 6,924 total matches , while blast discovered 8,842 .examination demonstrates the quality of the matches varies based on the hit counts and correlation coefficients . to separate the minute differences in linear correlation ,the r - values were modified to log( ) . because of this choice , the strong r - values with absolute value close to one have a modified value near zero .figure [ figure : logr_hc ] displays the modified r - values plotted versus the hit counts .the dashed lines show the threshold values of 20 words in common and a r - value greater than .strong matches lie in the bottom right of the graph .the unusually large void between 10 and 10 on the vertical axis ( r - values of 0.9 to 0.99 ) serves as a clear distinction between regions of signal and noise for the d4 m and blast data .together , the hit count and r - value thresholds greatly reduce the background noise .figure [ figure : statistics ] shows a full distribution of the number of matches satisfying the numerical requirements .almost 63% of the total blast finds have hit counts less than 20 , and about 18% fall below both d4 m thresholds . before the correlation selection ,d4 m identifies 5,160 background matches , of which , blast finds 1,576 . after all restrictions , d4 m and blast both identify 1,717 matches .representative correlations from each region demonstrate the selection quality of the algorithm ( figure [ figure : e7examples ] ) .also shown are the blast alignments , in which the top strand is the unknown sequence and the bottom is the matching reference .vertical lines indicate an exact base pair alignment .dashes in the sequences represent a gap that was added by blast to improve the arrangement .the alignments include the initial seeds and the expansion until threshold values were reached .the results demonstrate how hit counts below twenty are a result of poor alignments and low complexity repeats ; these matches have few regions with at least a 10 bp overlap .strong alignments as seen in the bottom - right of figure [ figure : e7examples ] are comprised of long , well aligned stretches. 
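A minimal sketch of the correlation filter follows. The exact transformation plotted as the modified r-value is garbled above; a reading consistent with r-values of 0.9 to 0.99 spanning one decade of the vertical axis is 1 - |r| shown on a logarithmic scale, and that is what the second helper assumes.

```python
import numpy as np

def is_strong_match(sample_pos, ref_pos, min_hits=20, min_abs_r=0.9):
    """Accept a match only if enough 10-mers are shared and their positions
    in the two sequences are nearly linearly related."""
    sample_pos = np.asarray(sample_pos, dtype=float)
    ref_pos = np.asarray(ref_pos, dtype=float)
    if len(sample_pos) < min_hits:
        return False
    r = np.corrcoef(sample_pos, ref_pos)[0, 1]   # true alignments are nearly linear
    return abs(r) >= min_abs_r

def modified_r(r):
    """Assumed form of the modified r-value used in the figures."""
    return 1.0 - np.abs(r)
```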
segments of poor alignment still exist , but they are proportionally less , and offset the sample and reference sequences by equal amounts .it is worth noting that the blast e - values of the four examples coincide with the d4 m results .the results of dataset 1 present comparable findings to blast run with default parameters .additional filters were used in the analysis of the larger dataset 2 . as in dataset 1 , alignments were first required to have at least 20 words in common .additionally , for each unknown sequence , the r - value was only computed for matches within 10% of the maximum hit count .for example , unknown sequence a might have 22 words in common with reference b , 46 with reference c , and 50 with reference d. r - values were calculated for the matches with references c and d since the hit counts are within 10% of 50 , the maximum value .similar to dataset 1 , absolute r - values were thresholded at 0.9 , but were also required to be within 1% of the maximum for each unknown sequence .the additional percentage threshold values were chosen to identify matches of similar strength and reduce the number of computations .again , results were compared with blast , this time run with default parameters .comparisons of blast and d4 m findings before r - value filters are displayed in figure [ logr_hct7 ] .the stricter blast conditions eliminate many of the false positives with low hit counts and r - values as seen in dataset 1 . before the r - value restrictions ,d4 m finds significantly more alignments than blast ( figure [ statisticst7 ] ) .these numbers are reduced with r - value filters , but during organism identification steps , the majority of the additional points mapped to the correct species ( discussed below ) .notice the majority of blast findings missed by d4 m lie in the lower hit count regime , all of which emerge from the second hit count filter ( within 10% of maximum ) .after applying all filters , each sample matched either to one or multiple references .unique matches were labeled as that reference . in the case of multiple alignments ,the taxonomies were compared and the sample was classified as the lowest common taxonomic level .for example , if unknown sequence a maps equally to references b and c , both of which are different species within the same genus , sequence a is classified as the common genus .the number of sequences matching to each family , genus , and species is tallied to give the final results .in figure [ table : nomatches ] , d4 m and blast results are numerically compared with the spiked organisms .the numbers indicate how many sequences were classified as species .both d4 m and blast correctly identified the species _f. philomiragia _ and _ f. tularensis _, with numbers close to the truth data .it is important to note that _f. philomiragia _ and _ f. tularensis _ are very closely related .studies show _ f. philomiragia _ and _ f. tularensis _ to have between 98.5% and 99.9% identity which accounts for the slight difference in numbers of matching sequences .d4 m and blast identified nearly the same amount of _e. coli _ presence .interestingly , _e. 
coli _ was not an _ in silica _ spiked organism , and is instead a background organism ( present in the human sample ) detected by both .as previously noted , the numbers in figure [ statisticst7 ] show d4m identified significantly more matches than blast .the d4 m data in figure [ logr_hct7 ] was color - coded based on the taxonomy of matches .results are presented in figure [ sub : genusmatches ] and reveal the majority of points with high hit counts and r - values are matching to the truth data .again , at this stage , no r - value filters have been applied , but the results clearly indicate how hit counts and r - values are appropriate selection parameters to correctly and efficiently identify organisms present in a sample . in the analysis described ,parallel processing using pmatlab was heavily relied upon to increase computation speeds .the implementation of an apache accumulo database additionally accelerated the already rapid sparse linear algebra computations and comparison processes , but was not used to the full advantage .as shown in , the triple store database can be used to identify the most common 10-mers .the least popular words are then selected and used in comparisons , as these hold the most power to uniquely identify the sequences .preliminary results show subsampling greatly reduces the number of direct comparisons , and increases the speed 100x . figure [ speed ] shows the relative performance and software size of sequence alignment implemented using blast , d4 m alone , and d4 m with triple store accumulo .future developments will merge the results discussed here with the subsampling acceleration techniques of the database .with matlab and d4 m techniques , the described algorithm is implemented in less than 1,000 lines of code .that gives a 100x improvement over blast , and on comparable hardware the performance level is within a factor of 2 .the precise code allows for straight forward debugging and comprehension .results shown here with datasets 1 and 2 demonstrate that d4 m findings are comparable to blast and possibly more accurate .the next steps are to integrate the apache accumulo capabilities and optimize the selection parameters over several known datasets .additionally , the capabilities will be ported to the scidb database .the benefit of using d4 m in this application is that it can significantly reduce programming time , increase performance , and simplify the current complex sequence matching algorithms .the authors are indebted to the following individuals for their technical contributions to this work : chansup byun , ashley conard , dylan hutchinson , and anna shcherbina .f. chang , j. dean , s. ghernawat , w. c. hsieh , d. a. wallach , m. burrows , t. chandra , a. fikes , and r. e. gruber , `` bigtable : a distributed storage system for structured data , '' , _ acm transactions of computer systems ( tocs ) , _ vol ., 2008 .j. kepner , w. arcand , w. bergeron , n. bliss , r. bond , c. byun , g. condon , k. gregson , m. hubbell , j. kurz _ et al ._ , `` dynamic distributed dimensional data model ( d4 m ) database and computation system , '' in _ acoustics , speech and signal processing ( icassp ) , 2012 ieee international conference on . _ ieee , 2012 , p. 5349 - 5352 . j. kepner , c. anderson , w. arcand , d. bestor , b. bergeron , c. byun , m. hubbell , p. michaleas , j. mullen , d. gwynn , a. prout , a. reuther , a. 
rosa , and c yee , `` d4 m 2.0 schema : a general purpose high performance schema for the accumulo database'',_ieee high performance extreme computing ( hpec ) conference _, 2013 .b. huber , r. escudero , h. j. busse , e. seibold , h. c. scholz , p. anda , p. kmpfer , w. d. splettstoesser , `` description of francisella hispaniensis sp ., isolated from human blood , reclassification of francisella novicida ( larson et al . 1955 ) olsufiev et al .1959 as francisella tularensis subsp .novicida comb . nov . and emended description of the genus francisella '' , _ int. j. syst .microbiol . _ , 2010 .m. forsman , g. sandstrom , and a. sjostedt , `` analysis of 16s ribosomal sequences of _ francisella _ strains and utilization for determination of the phylogeny of the genus and for identification of strains by pcr '' , _ int . j. syst. micorbiol . _ , 1994 .d. g. hollis , r. e. weaver , a. g. steigerwalt , j. d. wenger , c. w. moss , and d. j. brenner , `` francisella philomiragia comb .( formerly yersinia philomiragia ) and francisella tularensis biogroup novicida ( formerly francisella novicida ) associated with human disease '' ,_ j. clin .
recent technological advances in next generation sequencing tools have led to increasing speeds of dna sample collection, preparation, and sequencing. one instrument can produce over 600 gb of genetic sequence data in a single run. this creates new opportunities to efficiently handle the increasing workload. we propose a new method of fast genetic sequence analysis using the dynamic distributed dimensional data model (d4 m), an associative array environment for matlab developed at mit lincoln laboratory. based on mathematical and statistical properties, the method leverages big data techniques and the implementation of an apache accumulo database to accelerate computations one-hundredfold over other methods. comparisons of the d4 m method with the current gold standard for sequence analysis, blast, show the two are comparable in the alignments they find. this paper presents an overview of the d4 m genetic sequence algorithm and statistical comparisons with blast.
chaotic motion accounts on the one hand for the well - known phenomenon of sensitive dependence on initial conditions , that is , exponentially fast divergence of nearby orbits , and on the other hand for the phenomenon of decay of correlations or mixing .both properties are intimately related with the observation that even low - dimensional chaotic systems share common features with random processes .this intuitive picture has been used as a basis to address some of the fundamental questions arising in nonequilibrium statistical mechanics .in fact , it is a simple exercise to show that topological mixing implies sensitive dependence on initial conditions . at the measure - theoretical level , however , relating lyapunov exponents , the quantitative measures for sensitive dependence on initial conditions , to decay rates of correlation functions is a more involved task .for instance , it is easy to construct simple maps with finite lyapunov exponent and arbitrarily small correlation decay ( see , for example , a markov model described in ) .thus , at a quantitative level it is tempting to explore in some detail in which way the rate of correlation decay is linked with lyapunov exponents , as both quantities are supposed to have a common origin . in more general terms and from a wider perspectivethis topic can be viewed as belonging to the realm of fluctuation dissipation relations for nonequilibrium dynamics , where one cause , the underlying detailed dynamical structure , is responsible for the approach to the stationary state , that is , for the decay of correlations , but , at the same time , is responsible for fluctuation properties at a microscopic level , or in our context for the sensitive dependence of initial conditions and a positive lyapunov exponent .furthermore , any relation between decay of correlations and lyapunov exponents is of great practical interest , as the measurement of lyapunov exponents , unlike the rate of correlation decay , is notoriously difficult to determine in real world experiments . in was even suggested to take correlation decay rates as a meaningful approximation for lyapunov exponents .the problem we want to address can be illustrated by a very basic textbook example , probably considered for the first time more than two decades ago .consider a linear full branch map ( see figure [ fig:1 ] ) on the unit interval ] we restrict the parameter to , in order to guarantee expansivity .figure [ fig : counterexample](a ) depicts the map for .the leading part of the spectrum of the corresponding perron - frobenius operator considered on the space of analytic observables can be approximated using a spectral approximation method .the basic idea of this method is to approximate by an square matrix , where denotes the projector that sends a function to its lagrange - chebyshev interpolating polynomial of degree .this method is easily implemented and , moreover , it is possible to show ( see ) that the eigenvalues of converge exponentially fast to the eigenvalues of . 
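The spectral approximation just described can be sketched in a few lines of Python. The parametrised map studied in this section is garbled in the text above, so the code uses an illustrative analytic full-branch circle map f(x) = 2x + c sin(2 pi x) mod 1 (expanding for |c| < 1/pi) as a stand-in; the discretisation follows the Lagrange-Chebyshev collocation idea, with the working interval and node placement chosen for convenience.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import BarycentricInterpolator

def fprime(y, c):
    return 2.0 + 2.0*np.pi*c*np.cos(2.0*np.pi*y)

def inverse_branches(x, c):
    """The two preimages of x under f(y) = 2y + c*sin(2*pi*y) mod 1."""
    g = lambda y, target: 2.0*y + c*np.sin(2.0*np.pi*y) - target
    return (brentq(g, 0.0, 0.5, args=(x,)),        # branch mapping [0, 1/2] onto [0, 1]
            brentq(g, 0.5, 1.0, args=(x + 1.0,)))  # branch mapping [1/2, 1] onto [0, 1]

def transfer_matrix(n, c=0.1):
    """Collocation matrix of the Perron-Frobenius operator acting on
    degree-(n-1) polynomial interpolants through Chebyshev nodes on [0, 1]."""
    nodes = 0.5*(1.0 - np.cos(np.pi*(2*np.arange(n) + 1)/(2*n)))
    cards = [BarycentricInterpolator(nodes, np.eye(n)[:, k]) for k in range(n)]
    L = np.zeros((n, n))
    for m, x in enumerate(nodes):
        for y in inverse_branches(x, c):
            for k in range(n):
                L[m, k] += cards[k](y) / fprime(y, c)
    return L

eigs = np.sort(np.abs(np.linalg.eigvals(transfer_matrix(24))))[::-1]
print(eigs[:3])   # leading eigenvalue ~1; the next modulus estimates the mixing rate
```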
using this method the leading eigenvalues of the perron - frobenius operator and their dependence on easily obtained ( see figure [ fig : counterexample](b ) ) .a minimum for the subleading eigenvalue occurs at about .the corresponding numerical value reads resulting in a mixing rate .the corresponding lyapunov exponent ( which hardly depends on the parameter ) is computed using the numerical approximation of the invariant density .the numerical value is so that the inequality is clearly violated .{fig3a.eps } & \hspace{-.12\paperwidth } \includegraphics[width=.65\textwidth]{fig3b.eps } \end{array} ] of the corresponding eigenvalue is a quadratic polynomial on each element of the partition , but it develops an increasing number of discontinuities between different intervals of the increasingly finer partition .these eigenfunctions do not seem to converge to a smooth limit ( see figure [ fig : evs_efunctions](b ) ) .in fact , unlike the invariant density there is no reason why the limit should be smooth .the numerical experiment suggests that we end up with a function of bounded variation . and of the corresponding matrix blocks and for a piecewise linear approximation of the map with , as a function of the level of approximation .the subleading eigenvalue of , , is displayed as well . for comparison , and depicted as well .the broken lines are a guide for the eye .right : ( b ) eigenfunction corresponding to for with normalisation . for clarity , successive approximationsare shifted by .the open symbols indicate the discontinuity set of the eigenfunction , i.e. , the increasingly finer markov partition of the piecewise linear approximaton .[ fig : evs_efunctions],scaledwidth=100.0% ] it is indeed possible to show and perhaps well known that an estimate like holds for discontinuous observables .for that purpose let us consider the perron - frobenius operator on the space of functions of bounded variation . recall that a function \to \mathbb{r}$ ] is of bounded variation if it has finite total variation . in this setup, the spectrum of the perron - frobenius operator associated with expanding maps has been studied in detail ( see ) .in particular , there is an explicit formula for the essential spectral radius given by \})^{-1/k } \ , .\ ] ] thus , we have an upper bound for the mixing rate \ } \,,\ ] ] which yields the following estimate for the lyapunov exponent : \ } \int_i d \mu\nonumber \\ & = & \frac{1}{k } \ln \inf \{|(f_c^k)'(x)| : x \in[-1,1 ] \ } \ , .\end{aligned}\ ] ] thus , for observables of bounded variation we have the following result .[ bvcor ] let be a piecewise monotonic smooth expanding interval map which is mixing with respect to its unique absolutely continuous invariant measure .then the rate of decay of correlations for functions of bounded variation is bounded by the lyapunov exponent in fact , almost identical statements can be found in , for example , corollary 9.2 .there is no simple , straightforward answer to the question about the relation between lyapunov exponents and mixing rates . on formal groundsone may argue that both quantities probe entirely different and independent aspects of a dynamical system , and that no particular relation should be expected .lyapunov exponents are determined by properties related to the largest eigenvalue of the perron - frobenius operator . 
by contrast, correlation decay depends crucially on properties of the observables , with mixing rates being related to the subleading part of the spectrum .thus , abstract operator theory on its own does not seem to provide further insight into the relation between both quantities .witness , for example , the doubling map viewed as an analytic map on the unit circle . while its lyapunov exponent is finite , the perron - frobenius operator has no nontrivial eigenvalue when considered on the space of analytic functions , that is , correlations of analytic observables decay faster than any exponential ( see , e.g. , ) .the argument outlined above , however , is a bit too simplistic .in fact , our results on piecewise linear expanding markov maps observed via piecewise analytic functions or general piecewise smooth expanding maps observed via functions of bounded variation suggest that bounds on the mixing rate in terms of lyapunov exponents can be derived provided that specific properties of the underlying dynamical system are taken into account. estimates of this type rely on nontrivial lower bounds for spectra . as such, they are complementary to estimates which are available for proving the existence of spectral gaps , and will thus require completely different approaches .it turns out that the spirit of the result contained in proposition [ bvcor ] can be understood by considering an observable with a single discontinuity . in order to substantiate this claimwe have performed numerical simulations on the map for .we have computed the autocorrelation function for the observable with if and if , having a discontinuity of stepsize at ( see figure [ fig:4 ] ) .choosing , the corresponding observable is analytic and the correlation decay is seen to follow the subleading eigenvalue of the perron - frobenius operator defined on the space of analytic functions ( see figure [ fig : counterexample](b ) ) . in the discontinuous case corresponding to the short time initial decay of the correlations still follows the pattern of the analytic observable , but the correlation function now develops an exponential tail which obeys .the tail becomes more pronounced if the stepsize increases .in fact , the mixing rate seems to be very close to the lyapunov exponent .revisiting the considerations leading to proposition [ bvcor ] , it is tempting to surmise that this coincidence is a consequence of large deviation properties of finite time lyapunov exponents , since the expression for the essential spectral radius involves an extreme value of a finite time lyapunov exponent . thus , the relation between lyapunov exponents and mixing rates for observables of bounded variation could be viewed to arise from the same mechanism already exploited in for analysing the simple case mentioned in the introduction . 
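The correlation measurement described in this section can be sketched as follows. The map parameters and the precise observable are garbled above, so the stand-in map from the earlier sketch and a generic observable with a step of size a at x = 1/2 are used for illustration; the ensemble and series lengths are reduced relative to those quoted for the figure.

```python
import numpy as np

def f(x, c=0.1):
    return (2.0*x + c*np.sin(2.0*np.pi*x)) % 1.0

def observable(x, a):
    return x + a*(x > 0.5)          # smooth part plus a discontinuity of size a

def autocorrelation(a, n_max=25, n_steps=10_000, n_ensemble=100,
                    transient=1_000, c=0.1, seed=0):
    """C(n) estimated by time averages over an ensemble of initial conditions."""
    rng = np.random.default_rng(seed)
    acf = np.zeros(n_max + 1)
    for x0 in rng.random(n_ensemble):
        x = x0
        for _ in range(transient):   # discard the transient
            x = f(x, c)
        orbit = np.empty(n_steps)
        for t in range(n_steps):
            orbit[t] = x
            x = f(x, c)
        phi = observable(orbit, a)
        phi -= phi.mean()
        for n in range(n_max + 1):
            acf[n] += np.mean(phi[:n_steps - n] * phi[n:])
    return acf / n_ensemble
```

For a nonzero step size the tail of the estimated correlation function is expected to develop the exponential decay discussed above.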
with different stepsizes for the map with .the straight lines represent an exponential decay with rate and .ergodic averages have been computed as time averages of a series of length for uniformly distributed initial conditions , skipping a transient of length .the horizontal dotted line indicates the order of magnitude of statistical errors induced by the finite ensemble size.[fig:4 ] ] this simple demonstration gives support to the folklore that correlation decay is linked with lyapunov exponents .even if a real world phenomenon is sufficiently well - modelled by a smooth dynamical system for which to date no link between correlation decay and lyapunov exponents can be established , one should keep in mind that modern data processing inevitably involves digital devices , which correspond to discontinuous observations .therefore , in formal terms observables of bounded variation could be the relevant class for applications and in these cases proposition [ bvcor ] applies . at an intuitive level it is easy to understand why discontinuous observations result in correlation decay related to lyapunov exponents .a discontinuous observable is able to distinguish between different `` microstates '' at a `` macroscopic '' level , that is , a discontinuous observation is able to distinguish two states at infinitesimal distance .as the distance between two nearby phase space points separated by a discontinuity grows according to the lyapunov exponent , the sensitivity at the microscale may be transported to the macroscale , that is , it may filter through to the correlation function by a discontinuous observation . in our contextthe mathematical challenge is to establish a relation between mixing rates and lyapunov exponents for natural classes of observables , for example , full branch analytic interval maps observed via analytic functions . besides the need for developing tools to obtain lower bounds for spectra , establishing the relation alluded to above also requires a deeper understanding of which dynamical feature causes the point spectrum of the perron - frobenius operator , that is , which signature of the microscopic dynamics survives if viewed via analytic observables .this is reminiscent of coarse - graining approaches in statistical mechanics , for example , the introduction of collective coordinates and quasi - particles .thus , tackling the mathematical problem above may well shed some light on some of the most fundamental problems in contemporary nonequilibrium statistical physics of complex systems .w.j . gratefully acknowledges support by epsrc ( grant no .ep / h04812x/1 ) and dfg ( through sfb910 ) , as well as the kind hospitality by eckehard schll and his group during the stay at tu - berlin .in this section we shall provide a short account of the technical details to make the results of section [ sec:2 ] rigorous .the main thrust of the argument is to define a suitable function space on which the generalised perron - frobenius operator is compact .results of this type for general analytic markov maps are not new ( see , for example , or ) . the special case of piecewise linear markov maps discussed below , where a complete determination of the spectrum is possible , is probably well known to specialists in the field .unfortunately , we are at loss to provide a reference for the results in proposition [ app : prop2 ] below , so we will outline a proof for the convenience of the reader . 
to set the scene we define what is meant by a piecewise linear markov map .before doing so we note that by a _ partition _ of a closed interval we mean a finite collection of closed intervals with disjoint interiors , that is , for , such that .[ defn : markovmap ] an interval map is said to be a _markov map _ if there exists a finite partition of such that for any pair either or .if this is the case , the corresponding partition will be referred to as a _ markov partition _ and the matrix given by will be called the _ topological transition matrix _ of the markov map .a markov map with markov partition is said to be _ expanding _ if for all .it is said to be _ piecewise linear _ if is constant on each element of the markov partition , that is , for all .finally , we call an expanding markov map with topological transition matrix _ topologically mixing _ ) . ]if there is a positive integer such that each entry of the matrix is strictly positive . in what follows we shall concentrate on topologically mixing piecewise linear expanding markov maps .our aim is to define suitable spaces of observables on which the associated _ generalised perron - frobenius operator _ or _ transfer operator _ ( see ) is well defined and has nice spectral properties .it turns out that these spaces can be chosen from spaces of functions which are piecewise analytic . * we write to denote the space of bounded holomorphic functions on .this is a banach space when equipped with the norm .* we use to denote the space of -tuples of bounded holomorphic functions on .this is a banach space when equipped with the norm .the desired space of observables will now be defined by linking the disk occurring in the definition above to the dynamical system as follows .given a piecewise linear expanding markov map with markov partition , let us denote by the inverse branch of the markov map from partition element into the partition element as well as its obvious analytic continuation to the complex plane .observe now that , since the map is expanding , all inverse branches are contractions .we can thus choose two concentric disks and of radius and , respectively , such that it turns out that is a suitable space of observables for the map , in the sense that the associated transfer operator ( [ app : eq : rpfapp ] ) is a well defined bounded operator on .this is the content of the following result .[ app : prop1 ] given a piecewise linear expanding markov map with topological transition matrix and inverse branches , suppose that the disk is chosen as above .then , for any real , the transfer operator is a well defined bounded operator from into itself and is given by the representation ( [ app : eq1 ] ) follows from a short calculation using the definition ( [ app : eq : rpfapp ] ) of . since is constant and the disk satisfies ( [ app : eq : adapted ] ) , the operator maps to . in order to seethat is bounded observe that if with , then the space is not the only suitable space of observables .restricting to one and the same disk of analyticity for each branch , however , simplifies notation .more general spaces are discussed in and . in order to explain the factorisation of the transfer operator we observe that in ( [ app : eq1 ] ) the argument of , that is , is contained in the smaller disk because of ( [ app : eq : adapted ] ). 
we can thus use ( [ app : eq1 ] ) to define the operator on a larger function space , namely .note that the space is ` larger ' as analyticity is only required on a smaller disk .we shall write in order to distinguish this lifted operator from the one occurring in proposition [ app : prop1 ] .note that the arguments in this proposition can be adapted to show that is a bounded operator .it is tempting to think of and as being essentially the same , since they are given by the same functional expression .however , as operators the two are different as the latter is defined on a larger domain . yet , both operators are related by restriction . in order to give a precise formulation of this fact we introduce a bounded embedding operator which maps the smaller space injectively into the larger space . to be precise given by , where in turn is given by for .note that looks superficially like the identity .this , however , is misleading as argument and image are considered in different spaces .the relation between and can now be written as note that the factorisation above disentangles the intricacies of the map contained in from its general expansiveness contained in .we now turn to the approximation argument . for piecewise linear markov mapsthe transfer operator is easily seen to map piecewise polynomial functions of degree at most into piecewise polynomial functions of degree at most .this follows from a straightforward calculation using the fact that the inverse branches are affine functions . in order to exploit this property of the transfer operatorfurther we shall introduce a projection operator defined as follows : given an analytic function in and an integer we use to denote the truncated taylor series expansion where denotes the centre of the disk . clearly , is a projection operator .it turns out that the projections approximate the embedding for large in a strong sense . in order to make this statement , the heart of the approximation argument alluded to above , more precise , we observe that , by cauchy s integral theorem , we have for any and any where the contour is the positively oriented boundary of a disk centred at with radius lying strictly between and .it follows that the norm of viewed as an operator from to satisfies in particular , we have in order to extend this result to the space of piecewise analytic functions we introduce the projection operator by setting .the analogue of ( [ app : japprox ] ) now reads we are now able to combine the factorisation ( [ app : lfactor ] ) with the approximation result above to prove the main result of this appendix .[ app : prop2 ] suppose we are given a piecewise linear expanding markov map with inverse branches and disks satisfying ( [ app : eq : adapted ] ) .then , for any real , the transfer operator viewed as an operator on is compact and its non - zero eigenvalues ( with multiplicities ) are precisely the non - zero eigenvalues of the matrices with given in ( [ melements ] ) .we start by recalling that for every the transfer operator leaves the space invariant , that is , .thus where denotes the identity on .using the above equation and the factorisation ( [ app : lfactor ] ) we see that which , using ( [ app : jcalapprox ] ) , implies since is a finite - rank operator for every , the limit above implies that is compact . clearly , the non - zero eigenvalues of each are exactly the non - zero eigenvalues of the block matrices ( [ blockmatrix ] ) . 
the remaining assertion , namely that the non - zero spectrum of the transfer operator is captured by the non - zero spectra of the finite dimensional matrix representations follows from ( [ app : lapprox ] ) together with an abstract spectral approximation result ( see ( * ? ? ? * xi.9.5 ) ) .[ app : cor ] suppose that the hypotheses of the previous proposition hold .if the markov map is also topologically mixing , then has a simple positive leading eigenvalue .moreover , this leading eigenvalue is the perron eigenvalue of the matrix .this follows from the previous proposition together with the observation that for the spectral radius of the matrix is strictly smaller than the perron eigenvalue of . in order to see this note that for all we have where a short calculation shows that for each and each we have where denotes the frobenius norm .the spectral radius formula now implies that
chaotic dynamics with sensitive dependence on initial conditions may result in exponential decay of correlation functions . we show that for one - dimensional interval maps the corresponding quantities , that is , lyapunov exponents and exponential decay rates , are related . for piecewise linear expanding markov maps observed via piecewise analytic functions we provide explicit bounds of the decay rate in terms of the lyapunov exponent . in addition , we comment on similar relations for general piecewise smooth expanding maps .
the physics community has in recent years devoted considerable attention to the study of networks , including social networks , biological networks , information networks , and others .many of these networks also have long histories of study in other fields .citation networks , which are the principal focus of this paper , have been studied quantitatively almost from the moment citation databases first became available , perhaps most famously by the physicist - turned - science - historian derek de solla price , who authored two celebrated papers in the 1960s and 1970s highlighting the power - law degree distributions in networks of scientific papers and developing models to explain their origin .a citation network is an information network in which the vertices represent documents of some kind and the edges between them represent citation of one document by another .citation networks differ from other networks in a number of important ways .first , they are directed : citations go from one document to another and hence constitute an inherently asymmetric relationship between the vertices involved .mathematically , the network can be represented by an adjacency matrix , with elements in a directed network the adjacency matrix is , in general , asymmetric .a second feature of citation networks is that they evolve over time as new documents are created .the time evolution of the network takes a special form , in that vertices and edges are added to the network at a specific time and can not be removed later .this permanence of vertices and edges means that the structure of the network is mostly static : it changes only at the `` leading edge '' of the network , the current time at which new documents are being added .citation networks differ in this respect from other information networks such as the world wide web , in which vertices and edges can be removed as well as added and edges can be repositioned after they are added . the limited form of time evolution found in citation networks makes them , in some ways , a simpler and cleaner laboratory for the study of network growth than the web. the combination of the two features of citation networks described above leads to a third : citation networks are acyclic , meaning there are no closed loops of citations of the form a cites b cites c cites a , or longer .when a new vertex is added to a citation network it can cite any of the previously existing vertices , but it can not cite vertices that have not yet been created .this gives the network a clear `` arrow of time , '' with all edges pointing backwards in time as shown in fig .[ fig : citationnetworkcartoon ] . as a resultit is typically possible , starting from a given vertex , to find a path of citations that takes us back in time through the network , but it is not possible to find one that takes us forward again , so that no closed loops exist .( real citation networks are often not perfectly acyclic .for example , a scientific paper can sometimes cite work that is forthcoming but not yet published , resulting in a closed loop in the network .however , such loops are rare and necessarily short , being limited by the narrow span of time over which it is possible to predict future publications . in practice , therefore , it is usually a good approximation to assume the network to be acyclic . 
) citation networks arise in a variety of different areas .we have mentioned networks of scientific citations , which have been studied by many authors since the classic work of price mentioned above .( see , for instance , the book by egghe and rousseau or any volume of the journal _ scientometrics _ , which is entirely devoted to the quantitative analysis of scholarly authorship and citation patterns . )citation networks of patents have , to a lesser extent , also been studied .patents cite other patents for a variety of reasons , but most often to establish their originality and distinction from previous work .extensive data on patent citations have become available in recent years , allowing the construction of very large citation networks .very recently , there has also been interest in legal citation networks , networks of legal opinions written by judges and others , which cite one another to establish precedent .we make extensive use of one particular legal citation network , the network of opinions of the united states supreme court , as an example in this paper , although the techniques we will be considering are certainly applicable to other networks as well .given the wide interest in and unique structure of citation networks , it is instructive to investigate what can be learned from an analysis of the statistical patterns present in these networks .a variety of studies have been presented in the past focusing on relatively standard network measures such as degree distributions . to investigate the time - dependent structure that is the special property of citation networks ,however , other methods are needed . in this paper we present several techniques that , as we will show , are both individually and collectively capable of revealing interesting new structure in these networks .the first analysis we describe makes use of a stochastic mixture model of the citation process , which is fitted to the observed network data using the likelihood optimization technique known as the expectation - maximization algorithm .a crucial property affecting the structure of citation networks is the pattern over time of the citation of documents following their publication .it is interesting for instance to ask if there are typical patterns that documents follow .are there more citations immediately after publication than later , or do they grow in frequency over time ?are documents more likely to cite recent precedents or older better - established ones ?do documents tend to cite others published during a particular time period ?there could also be more than one common pattern with different documents following different patterns .if so , how can we determine those patterns , and how can we tell which pattern particular documents follow , given that citation data are inherently noisy ?as an example , we consider the network of legal citations between cases handed down by the supreme court of the united states , from its inception in 1789 until the present day . 
we will use this example throughout this paper ; it is well documented , shows clear and interesting structural signatures , and has been studied much less than other types of citation networks in the past , so that , although we use the network primarily as an illustrative device , the results we derive are in many cases of interest in their own right and not just as a demonstration of our methods .consider the following table , which gives the dates of the citations received so far by a single example opinion handed down by the supreme court in the year 1900 : [ cols=">,>,>,>,>,>",options="header " , ] we will take citation profiles such as this as the basic inputs in our analysis .one interesting question ( there are many ) is whether there are distinct eras of citation in the history of this ( or any ) citation network .are there , for instance , eras in which a certain set of documents are well cited , followed perhaps by another era or eras in which that set falls out of favor to be replaced by a different one ?many readers can probably think of anecdotal cases of behavior like this in scientific citation networks . herewe place these observations on a firm analytic foundation . we will attempt to divide the vertices in a citation network into groups by identifying similarities in their citation profiles .our method will be to define a set of citation profiles and then self - consistently assign each case to the profile it best fits while at the same time adjusting the shape of the profiles to best fit the cases assigned to them .the means by which we accomplish this task is the expectation maximization ( em ) algorithm .the em algorithm is an established tool of statistics , but one that is relatively new to network analysis . in a previous paper we described an application of the method to the classification of vertices in static networks ,both directed and undirected . herewe describe a different application to the analysis of the temporal profiles of citations .in essence the em algorithm is a method for fitting a model to observed data by likelihood maximization , but differs from the maximum likelihood methods most often encountered in the physics literature in that it does not rely upon markov chain monte carlo sampling of model parameters .instead , by judicious use of `` hidden '' variables , the maximization is performed analytically , resulting in a self - consistent solution for the best - fit parameters that can be evaluated using a relatively simple iteration scheme .suppose we have a network of vertices representing our documents and we believe that they can be divided into groups , each of which is characterized by a particular probability distribution of citations over time .( ultimately , we will vary to find the best description for our data , but for the moment let us assume it to be fixed . )our approach to finding the groups will be to fit the network to a model consisting of two parts : ( 1 ) a set of time profiles , one for each group , such that is the probability that a particular citation received by a document in group is made during year ; ( 2 ) a set of probabilities , such that is the probability that a randomly chosen document belongs to group ( i.e. 
, is the expected fraction of documents belonging to group ) .we fit this model to the observed data by maximizing the probability of the observed set of citations given the model the so - called likelihood function .suppose that document belongs to group and let be the number of citations that the document receives in year .then the probability that document received the particular citations it did and is in group , given the model parameters , is where for convenience we use to denote the entire set . assuming random and uncorrelated citations drawn from the time profile , the terms on the right - hand side are given by ^{z_i(t ) } \over z_i(t)!},\\ \label{eq : pgi } \pr(g_i|\pi,\theta ) & = \pi_{g_i},\end{aligned}\ ] ] where is the in - degree of document , i.e. , the total number of citations it receives , and and are the first and last years of data in our dataset .now taking the product over all vertices , the likelihood of the entire data set is .in fact , we will work with the logarithm of the likelihood , which has its maximum in the same place : .\label{eq : ll1}\ ] ] unfortunately , depends on the group memberships , which we do nt know .given the observed citation patterns , however , we can make a good guess about the group memberships , or more precisely we can compute the probability distribution of their values , which in bayesian fashion we regard as a statement about our knowledge of the world , rather than a statement about the actual values of the group memberships , which are in theory perfectly well - defined quantities .writing the probability of a particular assignment of vertices to groups as , we can then calculate the expected value of the log - likelihood as the average of eq . over all possible assignments thus : \nonumber\\ & = \sum_{i=1}^n \sum_{r=1}^c\pr(g_i = r|z_i,\pi,\theta ) \nonumber\\ & \qquad{}\times \bigl [ \ln \pr(g_i = r|\pi,\theta ) + \ln \pr(z_i|g_i = r,\pi,\theta ) \bigr ] \nonumber\\ & = \sum_{i=1}^n \sum_{r=1}^c q_{ir } \bigl\lbrace \ln \pi_r + \ln k_i ! + { } \nonumber\\ & \hspace{6em } \sum_{t = t_1}^{t_2 } \bigl [ z_i(t ) \ln \theta_r(t ) - \ln z_i(t ) ! \bigr ] \bigl\rbrace , \label{eq : loglikelihood}\end{aligned}\ ] ] where we have introduced the shorthand notation for the probability that vertex belongs to group , given the model and the observed citation pattern .this expected log - likelihood represents our best estimate of the value of the log - likelihood given what we know about the system .by maximizing it , we can now calculate a best estimate of the most likely values of the model parameters , a process that involves two steps : first , we estimate the group membership probabilities ; second , we use those probabilities in the maximization of .we take these steps in turn . to calculate the we observe that two factors on the right can be determined by summing eq . over the appropriate sets of variables and making use of eqs . and to give ^{z_i(t)}\over\sum_k \pi_k \prod_t \left [ \theta_k(t ) \right]^{z_i(t)}}. \label{eq : estep}\ ] ] once we have this expression , we can use it to evaluate the log - likelihood , eq . , and hence to find the values of the model parameters that maximize the likelihood , which is our ultimate goal .the maximization is helped by the fact that and enter eq . 
in independent terms .considering first and noting that it must satisfy the normalization condition , we introduce a lagrange multiplier and then differentiate , holding constant , to get \biggr\rbrace \nonumber\\ & = & { 1\over\pi_r } \sum_{i=1}^n q_{ir } - \alpha.\end{aligned}\ ] ] rearranging this expression gives the lagrange multiplier is then fixed by the condition thus : where we have made use of .thus is given by in other words , the prior probability of a vertex belonging to group is just the average over all vertices of the conditional probability of belonging to group .similarly , the satisfy the normalization condition for all , so we introduce a set of lagrange multipliers and write \biggr\rbrace = 0.\end{aligned}\ ] ] again holding constant and employing eq ., we find or where we have evaluated using the normalization condition and the fact that by definition .to calculate the optimal values of the model parameters , as well as the group membership variables , we now need to solve eq . simultaneously with eqs . and .the simplest way to do this is numerical iteration .starting from an initial guess about the values of , we evaluate eq . and then use the results to make an improved estimate of the model parameters from eqs . and . under reasonable conditionsthis process is known to converge upon iteration to a self - consistent solution . as a demonstration of the em methodwe have applied it to the citation network of supreme court cases described in section [ sec : em ] .applied to this network , the algorithm will divide the network into any requested number of groups , such that each group is characterized by a distinctive pattern of citations to cases in that group .we have performed the analysis for a variety of different values of .we begin with the simplest case , , of division into two groups . starting with random initial values for and applying the em iteration , eqs . , , and , the parameters rapidly converge to a clear split of the network into two groups .figure [ fig : emsplit2 ] shows the fraction of cases assigned by the algorithm to each of the groups as a function of time .cases are assigned in proportion to their probability of membership in each of the groups so that , for instance , a case belonging to group 1 with probability 0.7 and to group 2 with probability 0.3 contributes 0.7 of a case to the first group and 0.3 of a case to the second .to the network of citations between supreme court opinions .the two curves show the fraction of cases assigned to each of the two groups found , as a function of time.,width=302 ] figure [ fig : emsplit2 ] reveals a dramatic split between the two groups : the best fit , in the maximum likelihood sense , of the mixture model with two groups to these data produces one group containing practically all cases before 1937 and another containing practically all cases after .this breakpoint coincides with a significant constitutional crisis for the supreme court . 
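the iteration just described alternates between computing the responsibilities q_ir from the current parameters and recomputing pi_r and theta_r(t) from the responsibilities , and is compact enough to sketch directly . the python code below is not the authors' implementation ; it assumes the data are given as an n x t count matrix z with z[i, t] the number of citations document i receives in year t , works with logarithms to avoid numerical underflow for highly cited documents , and drops the z_i(t)! and k_i! terms , which do not depend on the parameters .

```python
import numpy as np

def em_citation_profiles(z, c, n_iter=200, seed=0):
    """fit a c-group mixture of citation time profiles to z (n_docs x n_years).

    returns (pi, theta, q): group weights pi[r], time profiles theta[r, t]
    (each row sums to 1), and responsibilities q[i, r].
    """
    rng = np.random.default_rng(seed)
    n, T = z.shape
    pi = np.full(c, 1.0 / c)
    theta = rng.random((c, T))
    theta /= theta.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # e-step: q[i, r] proportional to pi[r] * prod_t theta[r, t]**z[i, t]
        log_resp = np.log(pi + 1e-300)[None, :] + z @ np.log(theta.T + 1e-300)
        log_resp -= log_resp.max(axis=1, keepdims=True)
        q = np.exp(log_resp)
        q /= q.sum(axis=1, keepdims=True)

        # m-step: pi[r] = mean_i q[i, r];  theta[r, t] propto sum_i q[i, r] z[i, t]
        pi = q.mean(axis=0)
        theta = q.T @ z
        theta /= theta.sum(axis=1, keepdims=True) + 1e-300
    return pi, theta, q
```

in practice one would run the iteration from several random initial profiles , keep the solution with the highest log - likelihood , and repeat the whole fit for a few small values of c , as discussed below .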
for the interested readerwe give some further analysis in section [ sec : discussion ] .the em algorithm tells us in this case that the supreme court s rulings split quite cleanly into groups with distinct citation profiles .that is , the opinions of the court can be distinguished sharply by the cases that later cited them .the citation profiles themselves , meaning the temporal citation patterns represented by the parameters in the model , are shown in fig .[ fig : emprofiles ] .as we can see , they also divide into two time periods , which correspond closely to those of the group memberships depicted in fig .[ fig : emsplit2 ] .this implies that the opinions that cite cases in each of our groups were handed down during roughly the same eras as the cited cases .this is not surprising if one assumes that the group divisions reflect different legal ideologies , but it is important to bear in mind that our analysis does not require it : it would be perfectly possible to detect groups that were distinguished by citations received during some entirely different era of the court arbitrarily later in its history , or even in no era at all but scattered widely over time . generated by the em algorithm with for the supreme court citation network.,width=302 ]we can also ask about best fits to the model for numbers of groups greater than two .it is always the case that larger values of will give better fits to the data , since larger values give us more parameters to fit with , but we must be wary of overfitting . in practice ,we have been able to extract useful formation about networks by comparing the results for a variety of small values of .rigorous methods for deciding optimal values of , such as minimum description length , methods based on approximations to the marginal likelihood , or information theoretic measures have been developed for other applications of the em algorithm and we discuss these approaches elsewhere . for the moment we simply describe the results for various values of . to the network of citations between supreme court opinions.,width=302 ] figure [ fig : emsplit3 ] shows results for the supreme court network with .the method again finds clear groups of cases , and as in the case they are strongly delineated according to the dates of the opinions and thus appear to offer evidence for the presence of distinct eras in the court s history . in particular , the analysis finds a clear grouping of cases between 1897 and 1937 , corresponding approximately to the so - called _ lochner _ era of supreme court jurisprudence , the significance of which is described in section [ sec : discussion ] . in these analyseswe have characterized our documents by the pattern of citations they receive .however , one can equally well look at the pattern of citations that documents _ make _ and this also , at least in some cases , can be a useful cue for detecting patterns in the network .the em algorithm can be applied to this analysis as well .the developments are identical and the same computer code can be used one simply takes the transpose of the adjacency matrix .figure [ fig : emciting3 ] , for example , shows the results of the application of the method to citations made by the opinions in our supreme court dataset , with .as the figure shows , the results are remarkably similar to those for citations received : it appears that , in this case at least , there is a high degree of agreement about how cases should be classified into eras. 
this could indicate agreement between the opinions writers and those that came after them , about the position staked out by individual opinions within the larger body of literature represented in our data set .to data for citations _ made _ ( rather than received ) by opinions in our supreme court dataset .the groups found are quite similar to those for the analysis based on citations received.,width=302 ]the general problem of the division of networks into groups of related vertices has been extensively studied in the past .the classic problem of `` clustering '' or `` community detection '' is to find groups of vertices within networks that have a higher than average density of internal edges and relatively few connections to the rest of the network .the second analysis technique we investigate for citation networks is a clustering method of this kind . as we will see, it is instructive to compare the results with those of our em analysis in the previous section .the two methods do not do the same thing : the em analysis groups together vertices that have similar time profiles to their citations , while the community analysis groups together vertices that are specifically linked to one another by edges . nonetheless , as we will show, the two approaches can produce similar outcomes , for instance in the example of the supreme court data set .considerable effort has been devoted to the development of methods to find community structure within networks .the authors are aware of dozens of different methods ( at least ) published within the last few years . herewe make use a method recently proposed by newman based on the maximization of the benefit function known as `` modularity . ''although many competing methods appear to give excellent results , we focus on this particular method for two reasons : first , it is based on firm statistical principles that make its operation transparent to the user ; second , it has been shown in recent head - to - head comparisons to give better results on standardized tests than competing methods .briefly the method works as follows .given a network and a particular division of the vertices of that network into nonoverlapping groups or communities , the modularity is defined as the number of edges that lie within those groups minus the expected number of such edges if edges are placed at random between the vertices ( but respecting vertex degree ) .in essence , the modularity measures whether a larger than expected number of edges fall within the groups defined . in principle , the task of finding the best division of the network into groups is then one of maximizing the modularity over all possible divisions . in practice , this maximization problem is known to be np - complete , so approximate solution methods must be used for all but the smallest networks .newman s method works by rewriting the modularity in the language of linear algebra as a quadratic form involving an index vector and a characteristic matrix dubbed the `` modularity matrix . ''it can then be shown that the signs of the elements of the leading eigenvector of this modularity matrix give an approximation to the division of the network into two parts that maximizes the modularity .this approximate maximum can optionally be further refined by , for instance , applying a greedy algorithm that moves vertices between groups as described in . 
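as a rough illustration of the spectral step just described ( a sketch , not the reference implementation ) , the code below computes the modularity matrix b = a - k k^t / 2m of an undirected network and splits the vertices according to the signs of the leading eigenvector of b ; the greedy refinement pass and the repeated subdivision mentioned in the text are omitted , and for a citation network the adjacency matrix would first be symmetrised , e.g. a = max(a, a^t) , since edge directions are ignored .

```python
import numpy as np

def modularity_bisection(A):
    """split an undirected network (symmetric 0/1 adjacency matrix A) into two
    groups using the signs of the leading eigenvector of the modularity matrix."""
    k = A.sum(axis=1)                 # vertex degrees
    two_m = k.sum()                   # 2 * number of edges
    B = A - np.outer(k, k) / two_m    # modularity matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    leading = eigvecs[:, np.argmax(eigvals)]
    s = np.where(leading >= 0, 1, -1)             # +/-1 group index vector
    Q = (s @ B @ s) / (2.0 * two_m)               # modularity of this split
    return s, Q
```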
by repeatedly dividing the network in two in this way, a network can be divided into any number of communities , although typically one stops dividing when no divisions exist that will increase the modularity any further .this repeated subdivision of the network into smaller and smaller groups is particularly attractive for the purposes of our present analysis , because it allows us to observe the major divisions in the network first , followed by more minor ones , and to stop the process at any point to compare with our other analyses .a limitation of the method is that it is designed for use with undirected rather than directed networks .this however is not a great hindrance .it seems reasonable to consider edges in a citation network to be a sign of connection between documents , and that connection exists regardless of the direction the edge runs in .so we simply ignore the directions in our analysis and apply the eigenvector calculation to the undirected network .this approach has been taken before by other authors and appears to work well see , for example , ref . .we can visualize the results of our clustering analysis in a manner similar to our visualizations of the output of the em algorithm , as a histogram over time .the results for the leading split of the supreme court network into two clusters are depicted in this way in fig .[ fig : clusteringhistograms1 ] .the results are similar to those for the em algorithm , with a significant break around 1937 .this appears to bolster the conclusions of our em analysis , that there have been separate periods in the court s history that leave identifiable signatures in the citation record .there are some differences between the two sets of results , particularly the early `` tail '' to the second group in the clustering analysis and an overall difference in the number of cases assigned to each group .a possible explanation for these differences is that the em analysis makes use only of citations received by cases , whereas the clustering analysis , which ignores edge direction , takes into account both citations received and citations made .this allows the classification into groups of some vertices that were unclassifiable with the em algorithm by virtue of never receiving any citations .( about 10% of cases were never cited . )it could also be responsible for the tail in the second group because citations made , which are necessarily to cases in the past , connect vertices to earlier times , perhaps pulling them from the second group into the first in the clustering analysis .as with the em analysis , we can go further and look at splits into larger numbers of groups .for instance , fig .[ fig : clusteringhistograms2 ] shows the best split into four groups according to the modularity - based approach .again the split is similar in overall form to the split found by the em algorithm with , although the results are not as clean as those for the em algorithm .as before , a new split point appears around 1900 , which could be associated with the start of the _ lochner _ era .for our third analysis , we turn away from studies of groups or clusters and focus on another class of network measures : centrality scores , which quantify the importance or influence of individual vertices in a network . as we will see, the pattern of centrality scores as a function of time in our evolving citation networks can reveal interesting patterns .the simplest of centrality scores is the degree of a vertex . 
in a directed network such as a citation network , there are two degrees , the in - degree and the out - degree .it is reasonable , for instance , to imagine that important or influential vertices in a citation network will receive many citations and therefore have high in - degree .a more sophisticated versions of the same idea is eigenvector centrality , in which , rather than merely counting the number of citations a vertex gets , we award a higher score when the citing vertices are themselves influential . the simplest way to dothis is to define the centrality to be proportional to the sum of the centralities of the citing vertices , which makes the centralities proportional to the elements of the leading eigenvector of the adjacency matrix .unfortunately , this method does not work for acyclic directed networks , such as citation networks , for which all such centralities turn out to be zero .an interesting variant of eigenvector centrality has been proposed by kleinberg that works well for acyclic networks . in this varianteach vertex has two centralities , known as the authority score and the hub score , the first derived from the incoming links and the second from the outgoing links . in this viewa `` hub '' is a vertex that points to many important authorities a review paper in a citation network , for instance while an authority is a vertex pointed to by many important hubs such as an important or authoritative research article on a particular subject . in the simplest version of the method the authority score of vertex is simply proportional to the sum of the hub scores of the vertices citing it : for some constant , while the hub score is proportional to the sum of the authority scores of the vertices it cites : in matrix form , these equations can be written or , eliminating either or , thus and are eigenvectors of the symmetric matrices and ( also known as the cocitation and bibliographic coupling matrices respectively ) . in kleinberg s formulation of the problem one focuses on the leading eigenvector of each of the matrices , although in principle there could be useful information to be gleaned from other eigenvectors too . taking the supreme court network as an example again ,we have applied this method to the calculation of authority scores for cases in the network .it proves particularly revealing to look at the scores as a function of time .that is , we take the network as it existed at some time ( discarding all cases published after that time ) and calculate a complete set of authority scores for all vertices .we concern ourselves primarily with the most central cases , those with the highest scores .figure [ fig : authorityscores ] shows one particularly revealing statistic , the average age of the ten highest - ranked cases for each year in our data set as a function of year . 
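a minimal sketch of this calculation follows , under the assumption that the network is stored as a dense 0/1 adjacency matrix with a[i, j] = 1 when i cites j and that a publication year is known for every vertex ; the authority vector is obtained by power iteration on a^t a rather than an explicit eigendecomposition , and recomputing the scores from scratch for every year is affordable at the size of this network . the function and variable names are ours .

```python
import numpy as np

def authority_scores(A, n_iter=100):
    """kleinberg authority scores: leading eigenvector of A^T A via power iteration."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(n_iter):
        x = A.T @ (A @ x)          # one application of the cocitation matrix A^T A
        nrm = np.linalg.norm(x)
        if nrm == 0.0:             # no citations yet in this sub-network
            return x
        x /= nrm
    return x

def mean_age_of_top_authorities(A, years, top=10):
    """average age of the `top` highest-authority vertices in the network as it
    existed at the end of each year (vertices published later are discarded)."""
    out = {}
    for y in range(int(years.min()), int(years.max()) + 1):
        keep = np.where(years <= y)[0]
        if len(keep) < top:
            continue
        a = authority_scores(A[np.ix_(keep, keep)])
        leaders = keep[np.argsort(a)[-top:]]
        out[y] = y - years[leaders].mean()
    return out
```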
as the plot shows , there is a marked trend for the average age to increase in step with the passage of time .this is precisely the behavior one would expect if the top authorities in the network are remaining the same as time goes by .every once in a while , however , the plot shows a sudden and precipitous drop in the average age , indicating that a much younger set of vertices have , in a short space of time , taken over as the new leaders in the authority score rankings .thus the plot indicates a repeated pattern in the evolution of the network in which a certain set of vertices certain cases considered by the supreme court remain the top authorities for substantial periods of time before being swiftly replaced by a different set .one example of such a turnover can be seen in fig .[ fig : authorityscores ] around 1900 and a smaller one around 1940 , dates that , as we have seen , correspond roughly to the beginning and end of the _ lochner _ era .another very large dip in the curve occurs around 1970 .( our four - group em analysis also found a group division at approximately the same time see fig .[ fig : emsplit3 ] . )the large size of this dip may be due in part to the much larger number of cases decided per year by the supreme court in more recent decades than in its earlier history , which makes it easier for newly appearing cases to quickly become top authorities .the results of the centrality analysis are thus compatible with but different from those of previous sections .such variations are one reason why a variety of different analytic techniques are useful in studies of network structure .the behavior described is clearest in the age of the top ten vertices , but persists if a different number is used .figure [ fig : authorityscores ] shows the results of the same calculation for the top 50 , 100 , and 500 authorities , and in each case a similar pattern of maturation followed by swift renewal is visible .although the purpose of this paper is primarily to highlight new methods for the analysis of network data , the ultimate goal of these methods is of course to give researchers insight into the structure and meaning of their data .thus it is interesting to ask whether the analyses described here do in fact shed light on the system studied in this case , the network of citations between supreme court cases .in fact the results do appear to shed interesting new light on the workings of the supreme court ; we give a short explanation of our arguments in this section .the united states underwent a transition from an agricultural economy to an industrial economy in the latter part of the nineteenth century .federal and state legislators adapted to the new economic environment by passing laws that regulated emerging industries . these regulations , however , were not without opposition from those who preferred a _laissez - faire _ or hands - off approach . among those outspoken in opposition were several members of the supreme court and , beginning in 1897 , the court began invalidating a number of cases that imposed regulations on industry and business , starting with _ allgeyer v. louisiana_. the legal doctrines of substantive due process and freedom of contract were merged together into a significant limitation on the police power of the state . 
after _, any statute , ordinance , or administrative act that imposed any kind of limitation upon the right of private property or freedom of contract became suspect , even if the regulation was intended to promote safety and general welfare .the most famous ( or infamous ) of the cases to use substantive due process to invalidate state regulation was _lochner v. new york _ in 1905 , a case that became so notorious that this entire era of jurisprudence , between 1897 and 1937 , came to be known as the _ lochner _ era . during the _ lochner _ era the supreme court struck down nearly 200 regulations .the _ lochner _era is clearly visible , for example , in our em analysis with ( fig .[ fig : emsplit3])the analysis picks out one group of cases with start and end dates that correspond closely to the accepted dates of the era .ultimately , the supreme court s hostility to state and federal regulation began to interfere with the `` new deal '' programs instituted by us president franklin roosevelt to combat the great depression . between 1934 and 1936 ,the court invalidated more federal statutes than during any other two - year period in its history and by 1936 nearly all of the statutes passed as part of the new deal had been struck down . in response ,roosevelt launched in early 1937 a counteroffensive against the supreme court in which he proposed to appoint to the court up to six additional justices more receptive to the new deal .this `` court packing '' plan was , to say the least , highly controversial , but roosevelt had the support of significant majorities in both houses of congress , and the nation as a whole , still in the throes of the depression , was eager for something new .following roosevelt s proposal , the court abruptly reversed course and , beginning in march of 1937 , validated a series of state and federal measures .contemporary commentators have humorously dubbed this change the `` switch in time that saved nine , '' but whether the switch was substantive or illusory has been the subject of much debate .some scholars believe that the court responded to political pressure , while others have suggested that the court already contained a majority of justices who would have been inclined to sustain the new deal if legislation had been drafted better or if certain unanswered questions had been appropriately posed to the court .our em analysis shows a clear break around 1937 , corresponding closely to the end of the _ lochner _ era .it is important to appreciate that the analysis takes into account only citations received by cases , and thus that the opinions of the supreme court appear to have taken a substantial change of direction not merely in impact but also in their arguments : later cases cited the new opinions rather than those coming before them because , presumably , their arguments better supported the decisions of the post-1937 court . thus our analysis appears to indicate not merely a change in case outcomes that was a natural , if novel , result of positions long held by the sitting justices , but a more fundamental change in legal thinking itself or at least its expression in the written opinions of the court and the later citation of those opinions .in this paper we have described several methods for the analysis of citation networks , which are acyclic directed graphs of citations between documents . 
using the network of citations between opinions handed down by the us supreme court as an example , we have described and demonstrated three analysis techniques .the first makes use of a probabilistic mixture model fitted to the observed network structure using an expectation maximization algorithm .the second is a network clustering method making use of the recently introduced method of modularity maximization .the third is an analysis of the patterns of time variation in eigenvector centrality scores , particularly the `` authority '' score introduced by kleinberg .when applied to the supreme court network , each of these analyses reveals interesting structure , particularly highlighting qualitative changes in citation patterns that may be associated with specific eras of legal thought in the supreme court .however , it is in combination that the methods become most effective .features that appear clearly in analyses performed using several different techniques possess correspondingly greater persuasive force . in the case of the supreme court , there emerges quite a clear picture of the eras of the court as marked by shifts in citation patterns , particularly around the time of the so - called _ lochner _ era in the early 20th century .
in this paper we examine a number of methods for probing and understanding the large - scale structure of networks that evolve over time . we focus in particular on citation networks , networks of references between documents such as papers , patents , or court cases . we describe three different methods of analysis , one based on an expectation - maximization algorithm , one based on modularity optimization , and one based on eigenvector centrality . using the network of citations between opinions of the united states supreme court as an example , we demonstrate how each of these methods can reveal significant structural divisions in the network , and how , ultimately , the combination of all three can help us develop a coherent overall picture of the network s shape .
three out of the four detector concepts for the ilc feature a time projection chamber ( tpc ) as their main trackers .the lc - tpc is required to have more than 100 sampling points with a spatial resolution of 150 m or better and a two - track separability down to 2 mm .beam tests of a small prototype have shown that the required performance is achievable if we adopt a pad width of as narrow as mm or a resistive anode to spread the signal charge on a pad plane .basic properties of a tpc with a micro pattern gas detector ( mpgd ) readout plane are also understood through these small prototype tests .our next target is to construct a large prototype having multi - mpgd - panels with small readout pads , and to demonstrate the required performance under more realistic experimental conditions with panel boundaries and distortions , thereby allowing a smooth extrapolation to the real lc - tpc .the lctpc collaboration plans to have a beam test with a large prototype at desy in the summer of 2008 . for the beam test, we will prepare gas electron multiplier ( gem ) panels .our basic design goals for the pre - prototype gem end panel includes the following : to allow the required smallness and density of readout pads , to minimize dead spaces due to phi - boundaries of adjacent panels , and to allow easy replacement of gem foils when necessary . considering these requirements, we decided to use as large gem foils currently available as possible , and to stretch the gem foil in the radial directions so as to avoid thick mullions introducing dead regions for high momentum tracks .we also designed a gem mounting system though which we can supply necessary high voltages to the gem foils . prior to the production of gem panels for the beam test, we constructed a pre - protoype to test its basic design philosophy and some of its engineering details including fabrication methods for the gem end panels .our gem foils are supplied by scienergy co. and consist of 5 m thick copper electrodes sandwiching a 100 m thick liquid crystal polymer insulator .the thick gem foils allow stable operation with higher gain than popular 50 m thick gem foils . a double gem configuration is hence enough to give a gain of more than .the hole diameter and pitch are 70 m and 140 m , respectively .the hole shape is cylindrical due to dry etching unlike that of cern gems , which is biconial .the active area is fan - shaped spanning in the direction with inner and outer radii of and cm ( ) , which is about 2 times larger than the gem foils we used for our small prototype . to keep the stored energy small enough, the gem electrodes are divided into two in the radial direction with a boundary of 100 m .a set of g10 frames should be glued to each gem foil in order to be mounted on a readout pc board . for the frame gluing we developed a gem stretcher , which consists of two parts .one is an acrylic gem stretcher frame and the other is a set of a middle frame and a gem adjuster made of aluminum . the lower part of the acrylic stretcher frame has a groove with a depth of 2 mm and the upper part has screw holes aligned to it .the middle aluminum frame has the same size as the groove . 
by sandwiching a gem foil with the lower and the upper parts of the acrylic stretcher frame together with the aluminum frame , and by screwing bolts into the holes of the upper piece and pressing down the aluminum frame into the groove, we could stretch the gem foil .the gem foil , the g10 frames , and the adjuster have through holes aligned to each other .we stacked them up together , put pins into the holes of the adjuster to align them , and glued the g10 frames with epoxy adhesive to the gem foil .notice that the frames covered only the inner and outer edges of the gem foil to reduce the dead space pointing to the interaction point .this fabrication method has been established and allowed us to produce 3 panels per day .the electrons multiplied by the gem foils are read out by a pc board .the pc board is thick and has a size enough to cover the gem foils , spanning in and having inner and outer radii of and cm , which are about cm extended in both inner and outer directions to facilitate the gem mounting .it carries 20 pad rows on its front side of which the inner 10 have 176 pads each and the outer 10 have 192 pads each , with every two rows staggered by half a pad width to minimize the so - called hodoscope effect .a typical pad size is about mm which is small enough for the intrinsic charge spread of m .the pads are wired to readout connectors on the back side of the pc board through a five - layer fr4 .the pc board , being of six layers , has no through - holes due to wiring , thereby assuring the gas - tightness required for the tpc operation .we used connectors supplied by jae .each connector has 40 channels , of which 32 are used for signal readout and the remaining 8 for ground .its size is about , which is one of the smallest connectors commercially available .pre - amplifiers are connected to the pc board with flexible cables . for the large prototype, pre - amplifiers and flash adcs will be mounted on small pc boards and are directly connected to the readout pc board .we will not use the connectors for the real ilc tpc , since pre - amplifiers and flash adcs will be mounted on the surface of the pc board .high voltage ( hv ) electrodes are also wired through the pc board .to apply the hvs to the gem electrodes , we adopt a bolt - and - nut method .a brass nut is adhered to an electrode on the front side of the pc board .we tried two methods to fix the nut .one is soldering , the other is gluing with a conductive paste ( dotite by fujikura kasei co. ) .it turned out that soldering is much stronger than the conductive paste , besides the dotite produces threads .we will hence use soldering for the large prototype construction .the lower and the upper gem foils are stacked on the pc board , and bolted through their g10 frames to bite the gem electrodes so as to supply required hvs through the bolts .the spacings between the gem foils and the readout pad plane are determined by the thicknesses of the g10 frames , resulting in a transfer gap of 4 mm and an induction gap of 2 mm .an aluminum flange with a groove for an o - ring is glued to the back side of the pc board with epoxy adhesive to avoid mechanical distortion and to be mounted to a gas container . 
when we glued the aluminum flange , we aligned the flange and the pc board by hand since there are no alignment holes or posts in the pc board and the flange .the positions of the readout pads are determined by this flange , so some alignment posts should be prepared for the large prototype .the gem end panel was mounted on a gas container with 16 m3 bolts .the pre - prototype chamber is filled with a 90:10 mixture of ar and iso - butane gases .we applied 410 to each of the two gem foils , and electric fields of 100 , 2050 , and 3075 / cm to the 25 mm long drift , the 4 mm long transfer , and the 2 mm long induction regions , respectively . under these conditions , the gas gain was about and the signal spread was about m . we irradiated the pre - prototype panel with x - rays from through the windows of the test chamber to measure the gain uniformity over the panel .first , we checked the charge distribution over the readout pads .since the signals sometimes spread over 2 pad rows , we required the signal charge be shared by 2 pad rows and summed the signals over 5 contiguous pads on each of these two rows to avoid mis - collection .figure[fig : adc ] shows the charge sum distributions for 10 pads .both a 5.9 main peak and a 2.9 kev escape peak can be clearly seen .r0.5 second , we checked the charge spread .a center of gravity was calculated from the measured pad signal charges and their positions as well as the charge fraction on each pad . by plotting the charge fraction as a function of the charge center measured from the middle of the central pad, we could get an image of the charge distribution over the pad plane .gaussian fit to the distribution resulted in a ( 1- ) width of about 550 m corresponding to half a pad width as expected ( fig .[ fig : signal ] ) from the diffusion .r0.5 we then measured the charge sum distributions at 28 positions over the panel for the uniformity test , usually requiring the charge sharing .we found , however , that charge sharing never happened near the boundary of the inner and the outer gem electrodes .the exact reason is still unknown but it could be attributed to the charge - up of the insulator of the gem boundary affecting the charge collection .the charge sharing was hence not required for the positions in the boundary region .after normalizing the charge sums to that of some reference position , we found that the normalized charge sums range from 0.49 to 1.08 .the observed gain non - uniformity is 2.5 times larger than the expected 20% or less from the mechanical tolerance of the panel .we found a large field distortion near the panel edges , which partly explains the non - uniformity but not all of it .further investigations for possible causes are needed including variations of operation conditions such as gas concentration , etc ..we have constructed and tested a pre - prototype of the gem tpc end panels to verify basic design philosophy and some of engineering details including fabrication methods for the large prototype of the real ilc tpc .we have basically established a gem framing scheme with some minor problems to be improved for the large prototype construction .we have also measured the gain uniformity over the pre - prototype panel and observed a 50% non - uniformity at maximum .the non - uniformity could partly be attributed to the field distortion due to the test chamber setup , but requires further studies to fully validate our basic design philosophy .this study is supported in part by the creative scientific research grant no 
. 18gs0202 of the japan society for promotion of science .
d. c. arogancia _ et al . _ , arxiv:0705.2210 [ physics.ins-det ] .
m. s. dixit , j. dubeau , j. p. martin and k. sachs , nucl . instrum . meth . a * 518 * , 721 ( 2004 ) .
a gem tpc end panel pre - prototype was constructed for a large lc - tpc prototype to test its basic design philosophy and some of its engineering details . its interim test results are presented .
the fair allocation of indivisible items is a central problem in economics , computer science , and operations research .we focus on the setting in which we have a set of agents and a set of items with each agent expressing utilities over the items .the goal is to allocate the items among the agents in a fair manner without allowing transfer of money .if all agents have positive utilities for the items , we view the items as goods .on the other hand , if all agents have negative utilities for the items , we can view the items as chores . throughout, we assume that all agents utilities over items are additive . in order to identify fair allocations, one needs to formalize what fairness means . a compelling fairness concept called _ max - min share ( mms ) _ was recently introduced which is weaker than traditional fairness concepts such as envy - freeness and proportionality .an agent s mms is the `` most preferred bundle he could guarantee himself as a divider in divide - and - choose against adversarial opponents '' .the main idea is that an agent partitions the items into sets in a way that maximizes the utility of the least preferred set in the partition .the utility of the least preferred set is called the _ mms guarantee _ of the agent .an allocation satisfies _ mms fairness _ if each agent gets at least as much utility as her mms guarantee .we refer to such an allocation as _mms allocation_. although mms is a highly attractive fairness concept and a natural weakening of proportionality and envy - freeness , showed that an mms allocation of goods does not exist in general .this fact initiated research on approximate mms allocations of goods in which each agents gets some fraction of her mms guarantee . on the positive side , not only do mms allocations of goods exist for most instances , but there also exists a polynomial - time algorithm that returns a 2/3-approximate mms allocation .algorithms for computing mms allocations of goods have been deployed and are being used for fair division . in this paper, we turn to mms allocations for chores , a subject which has not been studied previously . even in the more general domain of fair allocation, there is a paucity of research on chore allocation compared to goods despite there being many settings where we have chores not goods . in general , the problem of chore allocation can not be transformed into a problem for goods allocation .[ [ contributions ] ] contributions + + + + + + + + + + + + + we consider mms allocation of chores for the first time and present some fundamental connections between mms allocation of goods and chores especially when the positive utilities of the agents in the case of goods are negated to obtain a chores setting .we also show that there are differences between the two settings with no known reductions between the settings . in particular , reductions such as negating the utility values and applying an algorithm for one setting does not give an algorithm for other setting .we show that an mms allocation does not need to exist for chores . in view of the non - existence results ,we introduce a new concept called _ optimal mms _ for both goods and chores .an allocation is an _ optimal mms allocation _ if it represents the best possible approximation of the mms guarantee .an optimal mms allocation has two desirable properties : ( 1 ) it always exists and ( 2 ) it satisfies mms fairness whenever an mms allocation exists ( see figure [ fig : fairness - relations ] ) . 
consequently ,optimal mms is a compelling fairness concept and a conceptual contribution of the paper .we present bounds to quantify the gap between optimal mms fairness and mms fairness . for chores , we present a linear - time round - robin algorithm for this purpose that provides a 2-approximation for mms .we show that the bound proved is _ tight _ for the round robin algorithm .we also show that , as in the case of goods , the computation of an mms allocation for chores is strongly np - hard and so is the computation of an optimal mms allocation . in view of the computational hardness results ,we develop approximation algorithms for optimal mms fairness . for both goods and chores, we use connections to parallel machine scheduling and some well established scheduling algorithms to derive an exponential - time exact algorithm and a ptas ( polynomial - time approximation scheme ) when the number of agents is fixed .these are the first ptas results related to mms . as long as an mms allocation exists ( that does exist in most instances as shown analytically by and experimentally by ), our algorithm for goods also provides a ptas for standard mms which in terms of approximation factor is a significant improvement over previous constant - factor approximation results .in addition to the literature on mms allocations for goods discussed in the introduction , our work is based on parallel machine scheduling theory .there is a natural connection between mms allocations and parallel machine scheduling , which we outline later .this connection turns out to be very fruitful for both exact and approximate computations of optimal mms allocations .we briefly introduce the concept of parallel machine scheduling in the following .we have a set of jobs and a set ] and then machines are considered identical .the goal of each machine scheduling problem is to find a schedule ( i.e. , an ordered allocation ) that optimizes a certain objective function .the problems that we focus on in this paper either minimize the time where the latest machine finishes ( this is also called the makespan of a schedule ) or maximize the time where the earliest machine finishes .we show that the former objective is related to mms allocation of chores whereas the latter is related to mms allocation of goods .an extensive overview on all important machine scheduling problems is provided by . established a notation for machine scheduling problems where stands for identical machines , for unrelated machines , for minimizing the latest machine s finishing time , and for maximizing the earliest machine s finishing time .according to this notation , we will use the problems , , , and in this paper .the latter problem is also equivalent to maximizing egalitarian welfare under additive utilities .all of these problems are np - hard in the strong sense but they are well investigated and plenty of research has been conducted on approximation algorithms which we will take advantage of .we introduce the basic notation and definitions for our approach in this section . for a set of items and a number , let be the set of all -partitions of ( i.e. , item allocations ) and let denote the power set of . 
1 .a * non - negative instance * is a tuple ,(u_i)_{i\in [ n]}) ] of agents , and a family of additive utility functions } ] consisting of a set of items , a set ] .the set of all non - positive instances is denoted by .an * instance * is a tuple ,(v_i)_{i\in[n]}) ], we define the * corresponding instance * by ,(-v_i)_{i\in [ n]})\in\mathcal{i}.\ ] ] let ,(v_i)_{i\in[n]})\in\mathcal{i} ] be an agent . 1. agent s * max - min share ( mms ) guarantee * for is defined as } v_i(s_j).\ ] ] 2 .agent s * min - max share ( mms ) guarantee * for is defined as } v_i(s_j).\ ] ] let ,(v_i)_{i\in[n]})\in\mathcal{i} ] . is called a * perverse mms allocation * for iff for all agents ] and a constant . 1 .the * -max - min problem * for is about finding an allocation with for all ] .first , we present a fundamental connection between the allocation of chores ( non - positive utilities ) and goods ( non - negative utilities ) . later in this section , we discuss non - existence examples for mms allocations and show that existence and non - existence examples do not transfer straightforwardly from goods to chores and vice - versa . finally , we give a complexity result for the computation of mms allocations for both goods and chores .the following result shows an interesting connection between mms and mms when changing signs in all utility functions .[ mms =- mms ] let ,(v_i)_{i\in[n]})\in\mathcal{i} ] .this leads us to the following result discussing the equivalence of mms allocations for an instance and perverse mms allocations for its corresponding instance .[ transfer_goods_chores ] let ,(v_i)_{i\in[n]})\in\mathcal{i} ] , we have . which proves both claims .this fundamental connection shows also a difference between the allocation of chores and goods since finding mms allocations and finding perverse mms allocations involve different objectives .a similar statement can be made for the approximations .[ transfer_lambda ] let ,(v_i)_{i\in[n]})\in\mathcal{i} ] by a subtle modification of their example to obtain an analogous result for chores .consider a set =\{1,2,3\} ] , we define her utility function by we obtain the following result by a careful adaption of the argument presented by .[ non_existence_chores ] there is no mms allocation for . in particular , an mms allocation for chores does not need to exist . another interesting difference between mms for goods and chores is the fact that existence and non - existence examples for mms allocations can not be simply converted into each other by just changing the signs of the utility functions . the only difference from to the instance of the example presented by are changed signs in the matrices .let ,(w_i)_{i\in[3]})\in\mathcal{i}^+ ] be a non - negative instance .we define for all ] .the result is trivial for .if , i.e. , always chooses before , then the result is also obvious because will choose her lowest valued item in every round and has to pick at most one item more than in total ( which is compensated by ) .therefore , we can assume that , i.e. , picks before in every round .the protocol has exactly rounds .the pick of agent ( resp . ) in round is denoted by ( resp . ) .we have we separate two cases . in the case that agent has to pick an item in the last round , we have for all ( picking rule ) and therefore in the other case where agent does not have to pick anymore in the last round , agent also does not have to pick since she picks after .it follows that since for all ( picking rule ) . 
combining both cases, this gives us with and finally [ greedyupperbound ] let be the allocation obtained by the round - robin greedy protocol .then we have for each agent ] division by yields the result .[ 2approxgoods ] the round - robin greedy allocation protocol gives an allocation with for all ]. then we have for each agent ] be a non - positive instance and denote the round - robin greedy allocation for by .then we have for all ] for which the -max - min - problem for has a solution .2 . for a non - positive instance , the * optimal mms ratio * is defined as the minimal for which the -max - min - problem for has a solution . note that both the maximum and the minimum exist in this definition since for a fixed instance , there is only a finite number of possible allocations .we have the following initial bounds for the optimal mms ratio . [ optimal_mms_bounds ] 1 .let ,(u_i)_{i\in[n]})\in\mathcal{i}^+ ] .2 . let ,(d_i)_{i\in[n]})\in\mathcal{i}^- ] . 1 .the inequality is a result of the approximation algorithm presented by while the inequality is trivial .the equality holds if and only if the -max - min problem for has a solution for all .this is equivalent to for all ] with this implies and therefore allocating all items to agent gives a solution of the -max - min problem for .we do not claim that the introduced bounds of and are tight .the proof of the lemma shows another difference between mms for goods and mms for chores . if in the case of chores , the mms guarantee of an agent is , then the utility function of this agent is equal to .the same result does not hold true for goods . based on the previous notations ,we define a new fairness concept called _ optimal mms _ , which is a natural variant of mms fairness . for an instance ,(v_i)_{i\in[n]})\in\mathcal{i} ] .there are two main advantages to the introduced concept .first , for each specific instance , we can guarantee the existence of an optimal mms allocation . according to lemma [ optimal_mms_bounds ] .] second , an optimal mms allocation is always an mms allocation if the latter exists .both observations follow immediately from the definitions .we will give an introductory example for optimal mms allocations both for goods and chores in the following .define an instance ,(u_i)_{i\in[2]})\in\mathcal{i}^+ ] of two agents and a set of two items .we define , , , and for some . 1. we have which means that and is an mms allocation for where each agent gets a total utility of . the optimal mms allocation for would be and giving each agent a total utility of .in particular , .we have which means that and is an mms allocation for where each agent gets a total utility of .the optimal mms allocation for would be and giving each agent a total utility of .in particular , . these examples also show that each agent s ratio of the utility in an optimal mms allocation to the utility in an mms allocation can be arbitrarily large ( for goods ) or small ( for chores ) as can be any real number .another natural question is the worst case for the optimal mms allocation in comparison to the mms guarantee .this is addressed by the following definition . 1 .the * universal mms ratio for goods * is defined as 2 .the * universal mms ratio for chores * is defined as we can give bounds for and connections between the instance - dependent _ optimal _ and the instance - independent _ universal _ mms ratios . clearly , we have by definition for all and for all .furthermore , we have : 1 . 
by lemma [ optimal_mms_bounds ] and the non - existence example presented by .2 . by prop .[ non_existence_chores ] and lemma [ optimal_mms_bounds ] .we presented some properties of optimal mms allocations .but since the complexity of computing an mms allocation for both goods and chores - if it exists - is strongly np - hard ( prop . [ complexity_goods_chores ] ) , the same holds true for the computation of an optimal mms allocation . however , we will show in the next sections that there is a ptas for the computation of such an allocation as long as the number of agents is fixed .in this section , we develop a ptas for finding an optimal mms allocation for chores when the number of agents is fixed .the ptas is based on the following exact algorithm .[ alg : makespan_chores ] given a non - negative instance ,(u_i)_{i\in[n]})\in\mathcal{i}^+ ] .2 . define new additive utility functions for all ] and .denote the optimal objective function value by and the corresponding allocation by .given a non - positive instance ,(d_i)_{i\in[n]})\in\mathcal{i}^- ] .then we have and is an optimal mms allocation for .we will show that is the minimal for which a solution to the perverse -min - max problem for exists and is a corresponding solution .the claim follows then with prop .[ transfer_lambda ] . if ( and therefore ) for an agent ] . is by definition the minimal for which an allocation with for all ] . to sum up, is the minimal for which a solution of the perverse -min - max problem for exists and is a corresponding solution .there are two steps in algorithm [ alg : makespan_chores ] that are exponential in time .first , each computation of may require exponential time and second , finding an optimal solution to may require exponential time .the computation of for an agent ] and an , we state an algorithm consisting of the following steps . 1 .select and with .2 . compute with for each agent ] by if and if .4 . consider the corresponding problem where the processing times are defined as for all ] .[ ptas_maxmin_chores ] let a non - positive instance ,(d_i)_{i\in[n]})\in\mathcal{i}^- ] .since a solution of the -max - min problem for exists , we can conclude by [ transfer_lambda ] that a solution of the perverse -min - max problem for exists .this implies the existence of with for all ] ( note that implies ) and we can conclude . define {>0}:=\{i\in[n]|c_i>0\} ] and the same is true for all \backslash [ n]_{>0} ] , we state an algorithm consisting of the following steps . 1 .compute for each agent ] , set and choose an arbitrary allocation .terminate the algorithm .3 . define new additive utility functions for all {>0} ] machines where each machine represents one agent {>0} ] and .denote the optimal objective function value by and the corresponding allocation by {>0}}\in\pi_{|[n]_{>0}|}(\mathcal{m}) ] and .note that this algorithm aims at finding an optimal mms allocation by maximizing egalitarian welfare according to the new utility functions .[ optimal_mms_goods ] execute algorithm [ alg : makespan_goods ] for a non - negative instance ,(u_i)_{i\in[n]})\in\mathcal{i}^+ ] is equivalent to the computation of a job partition that maximizes the minimum finishing time on identical parallel machines ( ) where the processing time of a job is defined as . present a ptas for and present a ptas for ( which means that the number of agents is fixed to ) . agents .] 
this implies that we can run the following algorithm in polynomial time when the number of agents is fixed .[ alg : makespan_goods_approx ] given a non - negative instance ,(u_i)_{i\in[n]})\in\mathcal{i}^+ ] .define a set {>0}:=\{i\in[n]|c_i>0\} ] , set and choose an arbitrary allocation . terminate the algorithm .4 . define new additive utility functions for all {>0} ] and the processing times are defined as for all {>0} ] with for all {>0} ] and .[ ptas_maxmin_goods ] let a non - negative instance ,(u_i)_{i\in[n]})\in\mathcal{i}^+ ] with .if there is an agent ] which receives items with a sum of more than in the o - matrix , we have .consequently , we can focus on the case where all agents get a bundle with common labels ( i.e. , with a sum of exactly 55 in the -matrix ) .if the items are divided along the labels then agent 2 or agent 3 receive items labeled by 2 or 3 giving them a utility of .if the items are divided along the labels then agent 1 or agent 3 receive items labeled by or giving them a utility of at least .if the items are divided along the labels then agent 1 or agent 2 receive items labeled by or giving them a utility of at least .we showed that for an arbitrary allocation there is always an agent ] .as pointed out in appendix [ sec : non_existence_chores_proof ] , we can achieve a perfectly balanced partition with for all ] and therefore is an mms allocation for .the non - existence of an mms allocation for was shown in [ non_existence_chores ] .the utility functions for each ] , which means that is a perverse mms allocation for and therefore an mms allocation for by [ transfer_goods_chores ] .the non - existence of an mms allocation for was shown by . we show the strong np - hardness by a reduction from 3-partition .we consider a setting with numbers , a set of elements , and an additive valuation function such that for all and .question : can can be partitioned into disjoint subsets where the total valuation of the elements in each subset is ?this decision problem is a strongly np - complete restricted version of the -partition problem .the computation of an mms ( resp .perverse mms ) allocation for the corresponding instance ,(u)_{i\in[n]}) ] where both agents have the same utility function is equivalent to the computation of an mms ( resp .perverse mms ) partition for any single agent . butthis would answer the np - complete integer partition decision problem and therefore the computation of an mms allocation for both goods and chores ( by [ transfer_goods_chores ] ) - if it even exists - is np - hard for . if for all ] . by construction, is the maximal for which an allocation {>0}}\in\pi_{|n_{>0}|}(\mathcal{m}) ] exists .if for an agent ] with for all ] , there is nothing to show .so let us assume {>0}\neq\emptyset ] . from this , we have for all {>0}$ ] , which implies .g. amanatidis , e. markakis , a. nikzad , and a. saberi .approximation algorithms for computing maximin share allocations . in _ proceedings of the 35th international colloquium on automata , languages , and programming ( icalp ) _ , pages 3951 , 2015 .s. bouveret and m. lematre . characterizing conflicts in fair division of indivisible goods using a scale of criteria . in _ proceedings of the 13th international conference on autonomous agents and multi - agent systems ( aamas )_ , pages 13211328 .ifaamas , 2014 .s. bouveret , y. chevaleyre , and n. maudet .fair allocation of indivisible goods . in f.brandt , v. conitzer , u. endriss , j. lang , and a. d. 
procaccia, editors, _handbook of computational social choice_, chapter 12. cambridge university press, 2015. r. j. lipton, e. markakis, e. mossel, and a. saberi. on approximately fair allocations of indivisible goods. in _proceedings of the 5th acm conference on electronic commerce (acm-ec)_, pages 125-131. acm press, 2004.
we consider max-min share (mms) allocations of items both in the case where the items are goods (positive utility) and in the case where they are chores (negative utility). we show that fair allocation of goods and of chores share some fundamental connections but also exhibit differences. we prove that, as in the case of goods, an mms allocation need not exist for chores, and that computing an mms allocation - if it exists - is strongly np-hard. in view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for mms fairness for chores. we then introduce a new fairness concept called optimal mms that represents the best possible allocation in terms of mms and that is guaranteed to exist. for both goods and chores, we use connections to parallel machine scheduling to give (1) an exponential-time exact algorithm and (2) a polynomial-time approximation scheme for computing an optimal mms allocation when the number of agents is fixed.
much research has been done into the computational possibilities of neural networks . yetthe engineering and industrial applications of these models have often eclipsed their use in trying to come to an understanding of naturally occurring neural systems .whereas in engineering we often use single neural networks to attack a single problem , in nature we see neural systems in competition .humans , for example , invest in the stock market , attempt to beat their business rivals , or , in extreme examples , plan wars against each other .we are , as darwin identified a century and a half ago , in competition for natural resources ; our neural systems i.e . our brains are among the main tools we have to help us succeed in that competition . in collaboration with chialvo ,one of the authors of this paper has developed a neural network model that provides a biologically plausible learning system , based essentially around ` darwinian selection ' of successful behavioral patterns .this simple ` minibrain'as we will refer to it from now on has been shown to be an effective learning system , being able to solve such problems as the exclusive - or ( xor ) problem and the parity problem .crucially , it has also been shown to be easily able to _ un - learn _ patterns of behavior once they become unproductive an extremely important aspect of animal learning while still being able to remember previously successful responses , in case they should prove useful in the future . these capabilities , combined with the simplicity of the model ,provide a powerful case for biological feasibility . in choosing a competitive framework for this neural network, we follow the example of metzler , kinzel and kanter , using the delightfully simple model of competition within a population provided by the minority model of challet and zhang ( itself based on the ` el farol ' bar problem created by arthur ) . in this game , a population of agents has to decide , independently of each other , which of two groups they wish to join .whoever is on the minority side ` wins ' and is given a point . by combining these two models replacing the fixed strategies of agents in challet and zhang s model with agents controlled by the minibrain neural system we have a model of neural systems in competition in the real world .this is not the first model of coevolution of strategies in a competitive game a particularly interesting example is lindgren and nordahl s investigation of the prisoner s dilemma , where players on a cellular grid evolve and mutate strategies according to a genetic algorithm .however , we believe that the biological inspiration for the minibrain model , and its demonstrated capacity for fast adaption , makes our model of special interest . the structure of this paper is as follows :we begin with a discussion of what we mean when we talk about ` intelligence ' , noting how historical influences have shaped our instinctive ideas on this subject in potentially misleading ways ; in particular , we take issue with the suggestion that a creature s intelligence can be thought of as separate from its physical nature . 
we suggest that intelligence can only be measured in the context of the surrounding environment of the organism being studied : we must always consider the _ embodiment _ of any intelligent system .this is followed by the account of the computer experiments we have conducted , in which we investigate the behavioral patterns produced in the minibrain / minority model combination , and the ways in which they are affected by changing agent characteristics .we show how significant crowding behavior occurs within groups of agents with the same memory value , and demonstrate how this can allow a minority of high - memory agents to take advantage of the majority population and ` win ' on a regular basis and , by the same token , condemn a population of largely similar agents to continually losing .indeed , perhaps the most startling implication of this model is that , in a competitive situation , having a ` strategy ' might well prove worse than simply making random decisions .these results are in strong contrast with those of metzler , kinzel and kanter , whose paper inspired these experiments . in their simulations ,a homogeneous population of perceptron agents relaxes to a stable state where all agents have an average 50% success rate , and overall population performance is better than random .the perceptrons learn , in effect , to produce an efficient market system , and do not suffer from the crowding effect produced by minibrain agents . by the same token , however , it seems unlikely that a superior perceptron could win on a similar scale to a superior minibrain .we conclude with further discussion of the nature of intelligence , suggesting a conceptual approach that we believe will enable easier investigation of both natural and artificially created intelligent systems .having already suggested that we must consider ` embodied ' intelligences , we provide criteria for cataloguing that embodiment , consisting of hardwired parts the input and output systems of the organism , the feedback mechanism that judges the success or failure of behavioral patterns alongside a dynamic decision - making system that maps input to output and updates its methodology according to the signals received from the feedback system .the _ e. coli _ bacterium has a curious mode of behavior .if it senses glucose in its immediate surroundings , it will move in the direction of this sweet nourishment .if it does not , it will flip over and move a certain distance in a random direction , before taking stock again , and so on and so on until it finds food .bacteria are generally not considered to be ` intelligent ' . yetthis is a systematic response to environmental stimuli , not necessarily the _ best _ response but nevertheless a _ working _ response , a ` satisficing ' response .the _ e. 
coli _ bacterium is responding in an intelligent way to the problem of how to find food .how do we square this with our instinctive feeling that bacteria are _ not _ intelligent ?are our instincts mistaken ?how , instinctively , do we define intelligence ?historically , philosophers have often proposed the idea of a separation between ` body ' and ` mind ' .the human mind , from this point of view , is something special , something distinct , something not bound up in the messy business of the real world .it s this , we are told , that separates us from the animals : we have this magical ability to _ understand _ , to _ think _ , to _ comprehend_the ability to view the world in a rational , abstract way and thus arrive at some fundamental _ truth _ about how the universe works .the idea of separate compartments of reality for body and mind has lost its stranglehold over our way of thinking , but its influence lingers on in our concept of intelligence .our minds , our consciousness , may be the result of physical processes , but we still cling to the idea that we have the ability to discover an abstract reality , and it s this idea that informs our notion of ` intelligence ' .an intelligent being is one that can see beyond its own personal circumstances , one that is capable of looking at the world around it in an objective fashion . given enough time , it can ( theoretically ) solve any problem you care to put before it .it is capable of rising above the environment in which it exists , and comprehending the nature of true reality .naturally , this has informed our ideas about artificial intelligence .an artificially intelligent machine will be one that works in this same environmentally uninhibited manner .if we tell it to drive a car , it will be able ( given time to teach itself ) to drive a car . if we tell it to cook a meal , it will be able to cook a meal . if we tell it to prove fermat s last theorem all of these , of course , assume that we have given it some kind of hardware with which to gather input and make output to the relevant system , whether car , kitchen or math textbook assume , indeed , that we have these systems present at all and it s this necessity that causes us to realize that in fact , _ the mind and its surrounding environment ( including the physical body of the individual ) are inseparable . _our brains are the product of evolution ; they are not an abstract , infinite system for solving abstract , infinite problems , but rather a very particular system for solving the very particular problems involved in coping with the environmental pressures about us . in this respect , we re no different from the _ e. coli _ bacterium we discussed earlier : the environments we inhabit are different , and consequently so are our behavioral patterns , but on a conceptual level there is nothing to choose between us ._ intelligence only exists in the context of its surrounding environment ._ so , if we are to attempt to create an artificial intelligence system , we must necessarily also define a world in which it will operate . 
and the question of _ how _ intelligent that system is can only be answered by examining how good it is at coping with the problems this world throws up , by its ability to utilize the data available to it to find working solutions to these problems .the ` minibrain ' neural system , developed by one of the authors in collaboration with chialvo , is an extremal dynamics - based decision - making system that responds to input by choosing from a finite set of outputs , the choice being determined by darwinian selection of good ( i.e. successful ) responses to previous inputs ( negative feedback ) .we use the simple layered version of this model , consisting of a layer of input neurons , a single intermediary layer of neurons , and a layer of output neurons ; each input neuron is connected to every intermediary neuron by a synapse , and similarly each intermediary neuron is connected to every output neuron .every synapse is assigned a ` strength ' , initially a random number between and .competing against each other in the minority model , each agent receives data about the past , and gives as output which of the two the groups we label them and it wishes to join .we follow the convention of challet and zhang s version of the game , that this knowledge is limited to knowing which side was the minority ( i.e. winning ) group at each turn in a finite number of past turns , so that agent input can be represented by a binary number of bits , where is the agent s memory .so , for example , if in the last three turns group lost , then won , then won again , this would be represented by the binary number , where the left - most bit represents the most recent turn , and each bit is determined by the number of the _ losing _ ( majority ) group that turn ( we choose these settings in order to match the way our computer code is set up ) . in order to preserve symmetry of choice between the two groups , an agent with a memory of will have input neurons , with the first of the pair of neurons firing if the bit representing the result turns ago is , the second neuron of the pair firing if the result was .for example , if an agent with a memory of ( and hence with input neurons ) is given the past as we discussed above , then the second , fourth and fifth input neurons will fire .[ fig:1 ] gives a picture of this architecture ( to avoid over - complicating the diagram , not all connections are shown ) .architecture of minibrain agents .every input neuron is connected to every intermediary neuron , and every intermediary neuron is connected to every output neuron . for our setup , we have two outputs , and inputs , where is the agent s memory.,width=317,height=167 ] to determine the intermediary neuron that fires , we take for each the sum of the strengths of the synapses connecting it to the firing input neurons . the intermediary neuron with the greatest such sumis the one that fires .then , the output neuron that fires ( or ) is the one connected to the firing intermediary neuron by the strongest synapse .each turn , the synapses used are ` tagged ' with a chemical trace .if the output produced that turn is satisfactory ( in this setup , if the agent joins the minority group ) , no further action is taken .if the output is not satisfactory , however , a global feedback signal ( e.g. the release of some hormone ) is sent to the system , and the tagged synapses are ` punished ' for their involvement in this bad decision by having their strengths reduced ( in our model , by a random number between and ) . 
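the architecture and update rule just described map almost directly onto a few dozen lines of code. the sketch below is a simplified reconstruction written for illustration, not the authors' source code (which is linked at the end of the paper); the class and function names, the population sizes, the number of rounds, and the [0,1) ranges used for the initial strengths and the punishment amounts are our assumptions, since the exact values are not preserved in this text.

```python
import random

class MiniBrainAgent:
    """Simplified reconstruction of the minibrain agent described above.
    The [0, 1) ranges for initial strengths and punishments are assumptions."""

    def __init__(self, memory, n_mid):
        self.memory = memory                  # m: how many past outcomes the agent sees
        n_in = 2 * memory                     # one pair of input neurons per remembered turn
        self.w_in = [[random.random() for _ in range(n_mid)] for _ in range(n_in)]
        self.w_out = [[random.random() for _ in range(2)] for _ in range(n_mid)]
        self.last_path = None                 # synapses 'tagged' on the last decision

    def decide(self, history):
        # history: the losing (majority) group of the last `memory` turns, most recent
        # first; the convention that pair k fires neuron 2k or 2k+1 is assumed.
        firing = [2 * k + history[k] for k in range(self.memory)]
        n_mid = len(self.w_out)
        sums = [sum(self.w_in[i][j] for i in firing) for j in range(n_mid)]
        mid = max(range(n_mid), key=sums.__getitem__)     # winner-take-all intermediary
        out = 0 if self.w_out[mid][0] >= self.w_out[mid][1] else 1
        self.last_path = (firing, mid, out)
        return out

    def feedback(self, success):
        # Negative feedback only: punish the tagged synapses after a losing turn.
        if success or self.last_path is None:
            return
        firing, mid, out = self.last_path
        for i in firing:
            self.w_in[i][mid] -= random.random()
        self.w_out[mid][out] -= random.random()

def play(agents, rounds, max_memory):
    # Minimal minority game: whoever is in the smaller group wins that turn.
    history = [random.randint(0, 1) for _ in range(max_memory)]
    wins = [0] * len(agents)
    for _ in range(rounds):
        choices = [a.decide(history[:a.memory]) for a in agents]
        minority = 0 if choices.count(0) < choices.count(1) else 1
        for k, a in enumerate(agents):
            won = (choices[k] == minority)
            wins[k] += won
            a.feedback(won)
        history = [1 - minority] + history[:-1]   # record the losing (majority) group
    return [w / rounds for w in wins]

random.seed(0)
crowd = [MiniBrainAgent(memory=1, n_mid=10) for _ in range(100)]
rogue = MiniBrainAgent(memory=6, n_mid=10)
rates = play(crowd + [rogue], rounds=5000, max_memory=6)
print("mean crowd success rate:", sum(rates[:-1]) / len(crowd))
print("rogue success rate     :", rates[-1])
```

the short game loop at the end pits one high-memory agent against a crowd of memory-1 agents, which is the kind of setting examined in the figures that follow.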
as we noted in the introduction , this darwinian selection of successful behavioral patterns has already been shown to be an effective learning system when ` going solo ' ; how will it cope when placed in competition ?[ fig:2 ] shows the success rates of agents of different memory values .a group of agents has an even spread of memory values between and ; each agent has intermediary neurons .the figure shows their success rates over a period of turns .success rates of a mixed population of minibrain agents against their memory .agents have intermediary neurons.,width=284,height=196 ] to a certain extent , these results reflect those found by challet and zhang when they explore the behavior of a mixed population of fixed - strategy agents , inasmuch as performance improves with higher memory but tends to saturate eventually .standard deviation within each memory group is much lower for minibrain agents , however , suggesting crowding behaviour within memory groups , and we will later show that this does indeed occur . disappointingly , we see that not one agent achieves as much as a 50% success rate they would all be better off tossing coins to make their decisions .the even spread of memory values throughout the population means that agents with higher memory values can not take full advantage of their extra knowledge : the crowding behavior between agents with the same memory cancels out most of the positive effects .it s no good having lots of data on which to base your decision if lots of other people have that same data everyone will come to the same conclusion and , in the minority model , that means losing . necessarily , then , one of the conditions for an agent to succeed i.e .to beat the coin - tossing strategy is that there must be few other agents with the same amount of memory .this is demonstrated starkly in fig .[ fig:3 ] , displaying the results for a population of agents of whom _ one _ has a memory of , the rest only .success rate of a single agent of memory , versus a -strong population of memory .agents have intermediary neurons.,width=284,height=226 ] the astonishing success of this ` rogue ' agent ( it makes the right decision approximately 99.8% of the time ) shows clearly just how important a factor this crowding behavior is in the success ( or failure ) of agents .the fact that this agent is the only one receiving the extra data means that he can use it to his advantage .contrast this with the other agents who , for all their careful thinking , fail miserably because _almost all of them think alike entirely independently almost all of the time . _this example leads us to ask the more general question : given a population of agents who all have memory , can we always find such a ` rogue ' , an agent capable of understanding and thus beating the system ? that it is not merely a matter of agent memory is amply demonstrated in fig .[ fig:4 ] , where we see a population of memory pitted against a rogue with a memory of .success rate of a single agent of memory , versus a -strong population of memory .agents have intermediary neurons.,width=284,height=193 ] despite its high memory value ( twice that of the majority population ) the rogue agent is unable to beat the coin - tossing strategy .why is this ?a higher memory value should , by our earlier results , always be an advantage .certainly , since we have respected symmetry of choice between agent outputs , it should not be a _disadvantage_. 
what factor is it that prevents this agent from making full use of the memory available to it , memory which surely has within it useful data patterns that predict the behavior of the agents with memory , and thus should allow the rogue agent the success we expect it to achieve ? the answer becomes clear when we examine the nature of the input that each agent receives a binary number of length , where is the value of the agent s memory .so , it follows that the total possible number of inputs will be . for an agent with memory , this means possible inputs . for an agent with memory ,the total number of possible inputs is .compare this to the number of _ intermediary _ neurons possessed by each agent ( , in all the simulations we ve run so far ) and we realize that , while this is an adequate number for an agent receiving different possible inputs , it is wholly inadequate for an agent having to deal with some possible inputs .the number of intermediary neurons restricts the maximum performance of an agent by placing a limit on the amount of memory that can be effectively used . bearing this condition in mind, we run a new set of games , again with a majority population of memory , but this time with a rogue of memory , and with the number of intermediary neurons given to each agent varying in each of these games . fig .[ fig:5 ] shows the results of games where agents have intermediary layers of , respectively , , , and neurons .success rates of single agents of memory , versus a -strong population of memory , in simulations with , , and intermediary neurons per agent.,width=284,height=245 ] the implications are clear it is the number of intermediary neurons , as well as the amount of memory , that control whether or not a rogue agent can succeed , and , if it does , by how much .a higher memory value will always be an advantage , but the degree to which it is advantageous will be determined by the number of intermediary neurons possessed .memory , obviously , determines how much information an agent can receive ; the intermediary neurons are what provide agents analytic capability .our computer simulations suggest that in situations such as the ones already discussed , with a majority population of memory , it is the intermediary neurons , rather than the amount of memory possessed , that control the ability of a rogue agent to succeed .a memory of is all that is required , _ provided _ the rogue has enough intermediary neurons to be able to use it effectively .we can muddy the waters , so to speak , by giving the majority population an evenly distributed _spread _ of memory values ( perhaps from to ) rather than a single value . where a single memory value is used , the crowding behavior observed within memory groups will easily allow rogue agents to predict the minority group . with a series of different , smaller groups in competition, it becomes significantly less easy to make accurate predictions , and rogue agent success rates fall significantly .herding sheep is fairly easy ; jumping into the middle of a brawl is dangerous for anyone , even a world champion martial artist .all things considered , it seems as though this may be the key point in determining agent success .an agent can only be truly successful if it has plenty of ` prey ' whose weaknesses it can exploit . 
if the behavior of the prey is highly unpredictable , or the prey are capable of biting back , the agent s chances of success are vastly reduced .we have on several occasions referred to crowding behavior of minibrain agents within the same memory group . in this section ,we give a brief mathematical analysis of what causes this to arise .we begin with a simple case , assuming that all agents have the same memory value . obviously , because of the nature of the game, a majority of these agents will behave in the same way each turn . what we show, however , is that this majority is significantly more than would be found if the agents were deciding randomly as to which group to join .were agents to employ this strategy , the mean size of the ( losing ) majority group would be only a little over 50% .we define by the proportion of agents in the minority group given input , where the subscript is the number of times input has occurred before .if an input has not been seen before by agents , it follows that they will decide randomly which group to join , and so we have for all possible inputs . if an input _ has _ been seen before , it follows that those agents in the minority group on that occasion i.e. those who were successful will make the same decision as last time .those who were unsuccessful last time will make a random decision as to which group to join : we can expect , on average , half of them to change their minds , half to stay with their previous choice .the effect of this , ironically , is that this last group the unsuccessful agents who keep with their previous choice will probably ( in fact , almost certainly ) form the minority group this time round .and so we can define a recurrence relation , determining the expected proportion of agents joining the minority group for each occurrence of input .this allows us to develop a more general equation , where observe that this holds for , as a little calculation reveals .now , assume the equation holds for , with any positive integer , so . by the recurrence relation , \ ]] \ ] ] \ ] ] hence , , and so by the induction hypothesis for all .it follows , then , that as , so , and so , with repeated exposure to the input , we will find that on average of the agents will produce the same output . as a result, the average majority size per turn ( regardless of input given ) will also tend to as the agents become saturated by all the possible inputs .this can be observed in fig .[ fig:6 ] , which shows the average proportion of agents joining the majority group each turn in eight different games involving single memory value populations , the first involving agents of memory , the second with agents of memory , and so on up to the final game , with agents of memory .each game takes place over a time period of some turns .mean size of majority each turn in games with uniform agent memory , against different choices for this memory value .agent population per game is , but majority size is given proportionally .agents have intermediary neurons.,width=284,height=194 ] as memory increases , so the number of possible inputs also increases , meaning there is less repeated exposure to individual inputs , and hence less crowding for a given time period . within a time - scale of , the behavior of agents with longer memories is random more often than not , and so the mean size of majority is similar to that of agents making random decisions . 
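the recurrence itself is not preserved legibly in this extraction, so the short check below uses the reading implied by the verbal argument above: successful agents repeat their choice, half of the unsuccessful agents switch sides, and the unsuccessful agents who keep their previous choice form the new minority, which gives f_{k+1} = (1 - f_k)/2 with f_0 = 1/2. iterating this shows the limiting minority and majority fractions under that reading; the printed numbers are those implied by the reconstructed recurrence, not quoted from the original text.

```python
def minority_fractions(k_max=40):
    # f_k: expected fraction of agents in the minority on the (k+1)-th occurrence
    # of a given input, under the reading f_{k+1} = (1 - f_k) / 2, f_0 = 1/2.
    f = 0.5
    traj = [f]
    for _ in range(k_max):
        f = (1.0 - f) / 2.0   # the unsuccessful agents who stay put become the minority
        traj.append(f)
    return traj

traj = minority_fractions()
print("first occurrences:", [round(x, 4) for x in traj[:5]])   # 0.5, 0.25, 0.375, ...
print("limiting minority fraction:", round(traj[-1], 6))       # -> 1/3 under this reading
print("limiting majority fraction:", round(1 - traj[-1], 6))   # -> 2/3 under this reading
```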
as the number of turns increases , so we can expect the mean size of the majority to tend to for all memory values , not just the lowest .what implications does this have for games involving a mixed population of agents , such as that displayed in fig .[ fig:2 ] ? overall, the same principles will apply .repeated exposure to the same input will produce the same crowding effect .but we note that the inputs given to this system eight - digit binary numbers are interpreted differently by different agents . for agents with lower memories ,many of these ` different ' inputs are interpreted as being the same !for example , the inputs and are the same to an agent with a memory of or less .so as is demonstrated by fig .[ fig:6]the crowding effect surfaces earlier in agents with lower memory values , and hence they are adversely affected to a greater degree .the agents with higher memory values fail to beat the 50% success rate , however , because there are too many of them any insights they might have into the crowding behavior of the lower memory groups are obscured by the actions of their fellow high - memory agents .thus , the kind of behavior we see in fig . [ fig:2 ] : the lower memory agents perform the worst , with the success rate increasing towards some ` glass ceiling ' as agent memory increases .it s only unique ` rogue ' agents , who do nt have a large group of fellows , who can see the crowding effect and thus beat the system .even such rogue agents can not succeed by any great margin in the case where they are pitted against a spread of memory values .the crowding behavior of the individual groups is obscured by the large number of them and predictions become difficult ; the rogue has to work out , not just in which direction the crowding within each group will go , but how much crowding will be taking place in each group a difficult task indeed !if we increase the crowding , we also increase the rogue s chances of success .[ fig:7 ] shows the results from two different games involving agents .five of them are ` rogue ' agents with memory values of , respectively , , , , and .the rest have an even spread of memory values from to . in order to allow the higher memory values to be useful ,we give agents intermediary neurons .the difference between the games is that in the first , when punishing unsuccessful synapses , we employ the principle that has been used throughout this paper synapses are punished _once_. 
in the second game , the punishment does not stop until the agent has learned what would have been the correct output .the result is that , when an input has been seen before , we will have 100% agreement within memory groups .success rates of rogue agents of memory , versus a majority population with memory , in games involving single punishment and ` infinite ' punishment of unsuccessful synapses .total agent population is .agents have intermediary neurons.,width=284,height=245 ] we can see here how the increased crowding caused by ` infinite ' punishment allows the rogues to take advantage and be successful .a higher memory value is required for substantial success , but substantial success is possible at the expense of the lower memory groups , whose success rates are substantially decreased by the extra crowding behavior they are forced to produce .the rogue agents in the game with single punishment , by contrast , are barely able to do better than a 50% success rate though they can evidently glean _ some _ data from the crowding behavior displayed by the lower memory groups , it s not sufficient for any great success and they are only barely able to beat the expected success rate were they to make purely random decisions .this is a striking result , to say the least ._ the inevitable consequence of an analytic strategy is a predisposition to failure ._ challet and zhang and w. b. arthur have already shown that fixed strategies can prove to be a disadvantage compared to random decisions ; this occurs when the number of available strategies is small compared to the number of agents .the crowding behavior that results from minibrain agents imperfect analysis will inevitably reduce the number of strategies in use , thus dooming themselves to worse - than - random results .we can see this at work in the real world , every day . many strategies whether for investments , business strategies , forming a relationship , or any of the myriad problems we have to solve fail , because they are based on common knowledge , and as such , will be similar to most other people s strategies .we are often told , ` everybody knows that ' , but few people realize the negative side of following such advice : since _ everybody _ knows it , _ everybody _ will come to the same conclusions as you , and so your strategy will be unlikely to succeed . perhaps the best recent example is the internet boom - and - bust : so many people thought the internet was the place to invest , the market overheated , and many companies went belly - up .as this paper was being prepared , a report was broadcast on uk television about an experiment in which a four - year - old child , picking a share portfolio , was able to outdo highly experienced city traders on the stock market . in such systems , with everyone s imperfect analysis competing against everyone else s , it seems highly likely that random decisions sometimes really are the best ; the minibrain / minority model combination would appear to confirm this .another interesting conclusion to be drawn from the computer experiments here described is that , given some particular minibrain agent , there is no way of deciding if it will be successful or not unless we know about the other agents it will be competing with . 
in a sense this is not surprising .we know , for example , that to be a high - flier at an ivy league university requires considerably more academic ability than most other educational institutions .the athlete coming last in the final of the olympic 100 m can still run faster than almost anyone else in the world .we know that _ in these contexts _ the conditions to be the ` best ' are different , but there is surely still an overall comparison to be made between the whole of humanity . or is there ?recall our suggestion in the introduction to this paper that the question of how intelligent a system is can only be answered by examining how good it is at coping with the problems its surrounding environment throws up . to return to minibrain agents : by the concepts we discussed earlier , it is agents _ intelligence _ , and not just their success rate , that is dependent on their fellows , as well as their own , characteristics ; indeed , the two measures success and intelligence cannot be separated .contrast this with how we have identified a whole range of factors memory , the number of intermediary neurons , the amount of punishment inflicted on unsuccessful synapses that affect the manner in which an agent performs .there are objective comparisons that can be made between agents . while we might accept that any measure of ` intelligence ' we can conceive of will only hold in the context of the minority model , surely it is not fair to suggest that the only valid measure of intelligence is success rate in the context of the population of agents we place within that world ? before we rush off to define our abstract ` agent iq ' , however , it s worth noting that all the measures of _ human _ , as well as minibrain , intelligence that we have put in place are in fact measures of success in particular contexts .when a teacher calls a pupil a ` stupid boy ' , he is not commenting on the child s intelligence in some abstract sense , but rather the child s ability to succeed at the tasks he is set in the school environment .( einstein was considered stupid in the context of a school environment where dyslexia and asperger s syndrome were unknown . ) when we say that human beings are more intelligent than other animals what we in fact mean is that human beings are more successful at manipulating their environment to their own benefit .high - fliers at ivy league universities are considered intelligent because of their academic success .olympic athletes are considered intelligent in the context of their sport because they are capable of winning consistently .even human iq tests , long thought to provide an abstract and objective measure of intelligence , work in this fashion , being a measure of an individual s success in solving logical problems .more recently these tests have been shown to discriminate against certain individuals based on their cultural background a further indication of their non - abstract , non - objective nature and in addition to this , psychologists are now proposing that there are other forms of intelligence , for example _ emotional _ intelligence or ` eq ' , which are just as important to individual success as intellectual ability .were abstract measures of intelligence possible , it would be reasonable to ask : ` who was more intelligent , albert einstein or ernest shackleton ? 
' as it turns out , this question is impossible to answer .shackleton probably lacked einstein s capacity for scientific imagination , einstein probably did nt know a great deal about arctic survival , but both were highly successful and thus by implication intelligent_in the context of their own chosen way of life ._ the same is true of our hypothetical ivy league student and olympic runner .we suggest that no other possible measure of intelligence is truly satisfactory .it is not an entirely easy concept to take on board .in particular , it conflicts with our instinctive sense of what it means to be ` intelligent ' . casually and not so casually we talk about people s intelligence in the context of their _ understanding _ , their _ conceiving _ , their _awareness_. in other words , we talk about it in the context of their _ consciousness_. in their paper ` consciousness and neuroscience ' , francis crick and cristoph koch refer to the philosophical concept of a ` zombie ' , a creature that looks and acts just like a human being but lacks conscious awareness .using the concepts of intelligence we have been discussing , this creature is just as intelligent as a real human . yet , on closer examination , this is not such an unreasonable idea .such a ` zombie ' is probably scientifically untenable , but it should be noted that our measures of ` intelligence ' do not measure consciousness , at least not explicitly .a digital computer can solve logical problems , for example , and it seems very unlikely that such computers are conscious .the ` emotional intelligence ' we referred to earlier almost certainly has some unconscious elements to it : our ability to respond to a situation in an appropriate emotional manner tends to be an instinctive , more than a conscious , response .lizards , it is thought , lack a conscious sense of vision but they can still catch prey , find a mate and so on , using their visual sense to do so .in fact , most of the organisms that exist on earth are probably not conscious .consciousness , most likely , is a product of brain activity that is a useful survival aid , a useful aid for success .aid _ for success , and thus for intelligence , rather than a requirement .how , then , should we approach the question of what is an intelligent system ?in their description of the construction of the minibrain neural system , bak and chialvo note : ` biology has to provide a set of more or less randomly connected neurons , and a mechanism by which an output is deemed unsatisfactory .it is absurd to speak of meaningful brain processes if the purpose is not defined in advance .the brain can not learn to define what is good and what is bad .this must be given at the outset .from there on , the brain is on its own ' .these concepts provide us with a way of thinking about intelligent systems in general , whether naturally occurring biological systems or man - made artificial intelligence systems .( ii)_a decision - making system . _ given an input , a systematic process is applied to decide what output to make .this can range from the purely deterministic ( e.g. a truth - table of required output for each given input ) to the completely random .the e coli bacterium s behavior in response to the presence or otherwise of glucose either moving in the direction of the food or , if none is to be found , in a random direction is a perfect example .( iii)_a hardwired system for determining whether a given output has been successful , and sending appropriate feedback to the system . 
_again , the nature of this can vary . in our computer experiments ,success is defined as being in the minority group . for the _e. coli _ bacterium , success is finding food .possible types of feedback range from the positive reinforcement of successful behavior practiced by many neural network systems , to the negative feedback of the minibrain model .the _ e. coli _ bacterium provides perhaps the most extreme example : if it does nt find food within a certain time period , it will die !the last of these is perhaps the most difficult to come to terms with , simply because as human beings , we instinctively feel that it is a _ conscious _choice on our part as to whether many of our actions have been successful or not .nevertheless , the ultimate determination of success or failure must rest with hardwired processes over which the decision - making system has no control . if nothing else , we are all subject to the same consideration as _ e. coli _ : if our actions do not provide our physical bodies with what they need to survive , they , and our brains and minds with them , will perish .we should , perhaps , include an extra criterion that for a system to be _ truly _ intelligent , the feedback mechanism must in some way affect the operation of the decision - making system , whether it is punishing ` bad ' synapses in the minibrain neural network , changing the entries in a truth - table , or killing a bacterium .a system that keeps making the same decision regardless of how consistently successful that decision is , is nt being intelligent . with this in mind, we might consider systems such as _e. coli _( i.e. systems which employ one single strategy , and when it becomes unsuccessful simply _ stop _ ) to be _ minimally intelligent _ systems .they re nowhere near as smart as other systems , natural and artificial , but at least they know when to quit .intelligence , we suggest , is not an abstract concept . the question of what is intelligent behavior can only be answered in the context of a problem to be solved .so in the search for artificial intelligence , we must necessarily start with the world in which we want that intelligence to operate ; we can not begin by creating some ` consciousness - in - a - box ' to which we then give a purpose , but must first establish what we want that intelligence to _ do _ , before building the systems input - output , decision - making , feedback that will best achieve that aim .computer programmers already have an instinctive sense of this when they talk about , for example , the ` ai ' of a computer game .( purpose : to beat the human player . no longer the deterministic strategies of space invaders many modern computer games display a great subtlety and complexity of behavior . 
)this is not to denigrate attempts to build conscious machines .such machines would almost certainly provide the most powerful forms of artificial intelligence .but we are still a long way from understanding what consciousness is , let alone being able to replicate it , and as we have noted here , consciousness is not necessarily needed for intelligent behavior .the experiments discussed in this paper involve ` toy ' models .comparing the minibrain neural system to the real human brain is like comparing a paper airplane to a jumbo jet .but paper airplanes can still fly , and there are still lessons to be learned .these ` toy ' experiments provide us with a context to begin identifying what it means to be intelligent .we have been able to suggest criteria for identifying intelligent systems that avoid the controversial issues of consciousness and understanding , and a method of determining how intelligent such systems are that rests on one simple , useful and practical question : how good is this system at doing what it s meant to do ?in other words , we and others have begun to demystify the subject of intelligence and maneuver it into a position where we can begin to ask precise and well - defined questions .paper airplanes can fly for miles if they re launched from the right places .
we investigate the behavioral patterns of a population of agents, each controlled by a simple, biologically motivated neural network model, when they are set in competition against each other in the minority model of challet and zhang. we explore the effects of changing agent characteristics, demonstrating that crowding behavior takes place among agents of similar memory, and show how this allows unique `rogue' agents with higher memory values to take advantage of a majority population. we also show that an agent's analytic capability is largely determined by the size of its intermediary layer of neurons. in the context of these results, we discuss the general nature of natural and artificial intelligence systems, and suggest that intelligence exists only in the context of the surrounding environment (_embodiment_). source code for the programs used can be found at *http://neuro.webdrake.net/*.
the central aim of this paper is to establish , in the context of hypothesis testing with incomplete data , a general framework for quantifying the amount of information in the observed data for a specific test being performed , relative to the full amount of information we would have had the data been complete .we do not address the issue of what is the best testing procedure , with or without the complete data , nor the issue of whether a full modeling / estimation strategy should or can be used instead .rather , we address an increasingly common practical problem where the investigator has chosen the testing procedure , but needs to know the impact of the missing data on the test in terms of the relative loss of information .such is the case in the genetic studies we briefly review in sections [ sec : genet ] and [ sec : haplo ] . besides the specific challenges listed in section [ subsec : conf ] ,there are a number of general theoretical and methodological difficulties for establishing this general framework .first , unlike the similar task for estimation , where the notion of `` fraction of missing information '' is well studied and documented ( e.g. , ; ) , for hypothesis testing , there are two sets of measures to be contemplated , depending on whether the null hypothesis or the posited alternative model can be regarded as approximately adequate .indeed , this is the very question the hypothesis test aims to provide partial evidence to discriminate .second , hypothesis testing procedures , especially those of nonparametric or semiparametric nature , are often constructed without reference to a specific ( parametric ) model . however , without an explicit model to link the unobserved quantities with the observed data , the very task of measuring how much information we have missed is neither possible in general nor meaningful .it is known , though not widely ( e.g. , ; ) , that certain robust statistical procedures for estimation or testing can produce more efficient or powerful results with less data .consequently , without assuming that our testing procedure is optimal under a specified optimality criterion , we may end up with the seemingly paradoxical situation that additional data may make our procedure less efficient or powerful .that is , we may declare that more information is available with less data .third , in the context of small samples , quantifying information requires going beyond convenient and standard measures such as fisher information , which is essentially a large - sample measure .small - sample problems are rather frequent with incomplete data , as missing data reduce effective sample sizes . for the genetic studies we investigate in this paper ,the small - sample problems arise even when there appear to be ample amounts of data .for example , we are often interested in measuring information content in individual components ( e.g. , an individual family in a large linkage study ) . in haplotype association studies, we often test haplotypes individually data size may be large enough for testing a common haplotype , but very small for a rare one . 
in addition , an individual person can be fully informative for one haplotype because we know s / he can not carry it , but much less so for another when we are uncertain whether s / he carries it or not .all these problems remind us that , in general scientific studies , small - sample problems appear more often than meets the eyes , namely , the numerical value of the sample size , because they sometimes appear in disguised forms . given the complex nature of small - sample problems requiring information measures , we literally have spent several years in our quest of finding a general workable approach .not surprisingly , our conclusion is that robust bayesian methods hold more promise .as we propose in section [ sec : small ] , after establishing a likelihood - based large - sample framework in section [ sec : large ] , this problem can be dealt with by considering posterior measures of the flatness of the entire likelihood surfaces .however , the problem of specifying an appropriate `` default '' prior is challenging .we report both our promising findings and open problems , hoping to stimulate further development on this practically important and theoretically fascinating topic .we also discuss various interesting theoretical connections ( section [ sec : theor ] ) , as well as further methodological work and applications ( section [ sec : future ] ) .the central applied problem that motivated our work was the task to sensibly measure and efficiently compute the amount of information available in _ a particular genetic data set for a particular hypothesis tested by a particular statistical procedure_. all genome - wide linkage screens carried out on qualitative and quantitative traits as well as most of the association studies extract only part of the underlying information .missing information can be the result of different sources , such as absence of dna samples , missing genotypes , spacing between markers , noninformativeness of the markers , or unknown haplotype phase .investigators want to know how much information is available in the observed data for the purpose of the study _ relative _ to the amount of information that would have been available if the data were complete .the notion of complete data is problem specific and , in parametric inference , depends on the sufficient statistics ; for example , in linkage studies where the ibd ( identical by descent ) process is sufficient for inference , complete data can be achieved even if genotypes and/or individual samples are missing .measures of relative information are needed for designing follow - up strategies in linkage studies , for example , using more genetic markers with existing dna samples versus collecting dna samples from additional families . even for situations where we do not intend to recover the missing data , including situations where theycan not possibly be recovered ( e.g. , dna samples from deceased ancestors ) , such measures can still be useful for the interpretation of the data and of the results , and for understanding the behavior of some of the inference tools ( e.g. 
, see section [ s4.5 ] ) .the key methodological challenge is to find a measure that ( 1 ) is a reliable index of the relative information specific to a study purpose , ( 2 ) conditions on particular data sets , ( 3 ) is robust in the sense of general applicability , including to small data sets , ( 4 ) is easy to compute and ( 5 ) is subject to meaningful combination axioms .the reliability criterion ( 1 ) is obvious , and the criterion ( 2 ) is necessary because typically an investigator is interested in measuring the relative information in the data set at hand , not with respect to some `` average '' data set .criterion ( 3 ) is desirable because in a typical genetic linkage study one needs to deal with a large amount of data with a variety of different complex structures ( e.g. , from a nuclear family to a very complex pedigree ) , often under time constraints , and thus it is not feasible to design separate measures to suit particular data structures .criterion ( 4 ) is needed for similar reasons any method without suitable computational efficiency , regardless of its theoretical superiority ,will typically be ignored in routine genetic studies given the practical constraints .criterion ( 5 ) ensures certain desirable coherence to prevent paradoxical measure properties ( e.g. , more informative studies receive less weight in the combined index ) when combining studies . to deal with all these criteria simultaneouslyrequires a careful combination of bayesian and frequentist perspectives .some of the criteria [ e.g. , ( 1 ) and ( 2 ) ] are most easily handled from the bayesian perspective , and some [ e.g. , ( 5 ) ] are easier to satisfy with a frequentist criterion . with large samples , as it is typical , likelihood theory provides a rather satisfactory solution , as we demonstrate in section [ sec : large ] . for small samples ,we have not been able to find a better alternative than to follow a robust bayesian perspective , which takes full advantage of the bayesian formulation in deriving information measures with desirable coherent properties , and at the same time it seeks measures that are robust to various misspecifications and are thus more generally applicable .we emphasize , however , that the computational burden associated to these bayesian measures should not be overlooked , even in this age of the mcmc revolution , for the reasons underlying criterion ( 4 ) above . nevertheless , it is more principled and fruitful to seek ways to increase computational efficiency _after _ we establish theoretically sound measures .this is the route we follow . for those who have no ( direct ) interest in genetic studies ,the following simple example may provide a stimulus to follow the methods developed in our paper .the example also provides some insights into a somewhat `` perplexing '' practical question when dealing with hypothesis testing in the presence of missing data : shall we impute under the null or not ?we emphasize that the purpose of this example is _ not _ to illustrate imputation methods .indeed , neither method discussed below can be recommended in general .rather , it shows how we can quantify relative information by measuring _how inaccurate is _ to erroneously treat imputations as if they were observed data .specifically , suppose are i.i.d .realizations of bernoulli , but only of them are actually observed .assuming that the missing data are missing completely at random ( ) , we can denote the observed data by . 
evidently , a simple large - sample test ( assuming is adequately large ) for is to refer the test statistic ( where the subscript `` ob '' stands for `` observed data '' ) to the null distribution , where is the average of the observed data . let us assume that the missing s were imputed using two mean - imputation methods . the first method is to impute each missing by its mean , estimated by . the second procedure is to impute each missing by its mean assuming is true , that is , by . clearly , with either imputation , if we treat the imputed data as if they were observed and apply the test ( [ toytest ] ) with , we will not reach a valid conclusion unless we adjust the null distribution . for the first method , the average of all data , observed and imputed , is . therefore , if we erroneously treat the imputed values as real observations , we would compute our test statistic as where . in contrast , the second method would lead to because the average of all data , observed and imputed , is . two aspects of the above calculations are important . first , in both cases , the resulting `` completed - data '' test statistic is proportional to the benchmark given in ( [ toytest ] ) . consequently , imputing under the null or not leads to the same answer , as long as we adjust the corresponding null distribution accordingly ( the generality of this equivalence result obviously needs qualification , but the validity of a test is automatic when its null reference distribution is correctly specified ) . second , identities ( [ talte ] ) and ( [ tnull ] ) yield , respectively , the relations displayed in ( [ rform ] ) . the results in ( [ rform ] ) are important because measures the relative sample sizes , and hence the `` relative information '' in an i.i.d . setting . these results suggest that we consider measuring the relative information by how liberal the first imputation - based test is , when the imputations under the alternative are treated as real data , or how conservative the second test is , when the imputations under the null are treated as real observations . our general large - sample results given in section [ sec : large ] show that these ideas are in fact general , once we replace the statistics in ( [ rform ] ) by their appropriate log - likelihood ratio counterparts ( recall the large - sample equivalence between log - likelihood ratio statistics and the wald statistics in a form similar to ) . readers who are not interested in genetic applications can go directly to section [ sec : large ] , as sections [ sec : genet ] and [ sec : haplo ] provide the necessary background on the genetic problems to which our methods will be applied . linkage refers to the co - inheritance of two markers or genes because they are located closely on the same chromosome . allele - sharing methods are part of linkage techniques for locating regions on the genome that are very likely to contain disease susceptibility genes ( e.g. , ) . the data usually consist of genotypes from a large number of markers ( polymorphic locations ) spread along the genome for individuals from pedigrees . the allele - sharing methods focus on affected individuals , but genetic data on unaffected relatives are used to infer the inheritance patterns . alleles at the same locus in two individuals are said to be identical by descent ( ibd ) if they originate from the same chromosome , and are called identical by state ( ibs ) if they appear to be the same .
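the calculation above is easy to check numerically . the sketch below assumes a reconstruction of the stripped inline formulas : n i.i.d. bernoulli draws with only the first m observed , null value theta0 , observed - data statistic z_ob = sqrt(m)(xbar_ob - theta0)/sqrt(theta0(1 - theta0)) , and lambda = m/n ; these symbols are our assumption , consistent with the surrounding description but not quoted from the original .

```python
# A minimal numerical sketch of the two mean-imputation strategies above.
# Assumed setup (the inline formulas are not preserved in this copy):
# x_1,...,x_n i.i.d. Bernoulli(theta), only the first m are observed (MCAR),
# null H0: theta = theta0, and the observed-data z-statistic is
# z_ob = sqrt(m) * (xbar_ob - theta0) / sqrt(theta0 * (1 - theta0)).
import numpy as np

rng = np.random.default_rng(7)
n, m, theta, theta0 = 1000, 400, 0.55, 0.5
lam = m / n                       # relative sample size, lambda = m / n

x = rng.binomial(1, theta, size=n)
x_ob = x[:m]                      # observed part
xbar_ob = x_ob.mean()
s0 = np.sqrt(theta0 * (1 - theta0))

z_ob = np.sqrt(m) * (xbar_ob - theta0) / s0          # benchmark test

# Method 1: impute each missing value by xbar_ob (estimated mean).
x_alt = np.concatenate([x_ob, np.full(n - m, xbar_ob)])
t_alt = np.sqrt(n) * (x_alt.mean() - theta0) / s0    # treated as n real obs.

# Method 2: impute each missing value by theta0 (mean under the null).
x_null = np.concatenate([x_ob, np.full(n - m, theta0)])
t_null = np.sqrt(n) * (x_null.mean() - theta0) / s0

print(f"z_ob = {z_ob:.3f},  t_alt = {t_alt:.3f},  t_null = {t_null:.3f}")
print(f"(z_ob / t_alt)^2  = {(z_ob / t_alt) ** 2:.3f}  vs  lambda = {lam:.3f}")
print(f"(t_null / z_ob)^2 = {(t_null / z_ob) ** 2:.3f}  vs  lambda = {lam:.3f}")
```

the printed ratios illustrate the two cases : imputing under the ( estimated ) alternative inflates the statistic by a factor 1/sqrt(lambda) , giving a liberal test if the unadjusted null distribution is used , while imputing under the null shrinks it by sqrt(lambda) , giving a conservative test .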
for a given location on the genome , the evidence for a disease - susceptibility locus linked to it is given by the sharing of alleles ibd among affected relatives in excess of what is expected if the marker is not linked to a genetic risk factor .the simplest example of a data structure is the affected sib pair , as shown in figure [ fig : sibpair ] , where the left diagram shows a family with two affected brothers in which the parental information at a fixed locus is denoted by `` a1 '' and `` a2 '' for the father , and `` a3 '' and `` a4 '' for the mother .the siblings have one allele ibd ( a2 ) which they inherited from their father , and different alleles inherited from their mother . in general , siblings share either two , one or no alleles ibd .unconditionally , each allele has probability to be transmitted ; this leads to a probability of , , for sharing zero , one , two alleles , respectively , identical by descent .conditioned on the affection status of the sibs , in the neighborhood of a disease gene , there is an expected increase in the number of alleles ibd across a collection of sib pairs ; statistical testing methods are often used to measure the strength of the evidence . in general , the data are not as simple as in the above example .the pedigree structures can contain far more complicated relations than sib pairs and more than two affected individuals .most of the data sets extract only part of the underlying ibd information . in general ,the information is incomplete at locations between markers . even at marker locations ,a variety of factors can lead to missing information , including any genotype data on deceased or unavailable family members , missing genotypes in the typed individuals , or noninformativeness of the markers .the right diagram of [ fig : sibpair ] illustrates a family where the parental allele information is missing , so even though the allele sharing among the sib pair appears to be identical in pattern with that of the left diagram , it is not known if the sibs share one or zero alleles ibd as the two `` a2 '' alleles might originate on different parental chromosomes . in general ,the marker information of all the loci on the chromosome is used to calculate a probability distribution on the space of inheritance vectors . for locus and pedigree , an inheritance vector , , is a binary vector that specifies , for all the nonfounding members of the pedigree , which grand - parental alleles are inherited . under the assumption of no linkage ,all inheritance vectors are equally likely , which leads to a uniform prior distribution on their space . for a sib pair , the inheritance vector has four elements , one for each parent - child combination .for example , the first element specifies whether the allele inherited by the first sib from his father originates from the grandfather or grandmother . assuming no interference ( ) , a hidden markov model can be used to calculate the inheritance distribution conditional on the genotypes at all marker loci ( ) .the distribution of the inheritance vectors conditional on the observed data is the basis of the statistical inference , and it is used to determine the conditional distribution of the number of alleles ibd at a given location . in order to summarize the evidence for linkage in a pedigree ,we can use a score ( ; ) , a measure of ibd sharing among the affected individuals at locus . 
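the unconditional ibd probabilities quoted above ( 1/4 , 1/2 , 1/4 for sharing zero , one or two alleles ) can be recovered by brute - force enumeration of the inheritance vectors of a sib pair ; the bit encoding in the sketch below is an illustrative choice , not the notation of the original .

```python
# A small sketch: enumerate the 16 equally likely inheritance vectors of a
# sib pair and recover the null IBD-sharing distribution (1/4, 1/2, 1/4).
# Encoding (an illustrative choice): each sib inherits bit f from the father
# (0 = grandfather's allele, 1 = grandmother's) and bit m from the mother.
from itertools import product
from collections import Counter

counts = Counter()
for f1, m1, f2, m2 in product((0, 1), repeat=4):   # inheritance vector (f1, m1, f2, m2)
    ibd = (f1 == f2) + (m1 == m2)                  # alleles shared identical by descent
    counts[ibd] += 1

total = sum(counts.values())                        # 16 equally likely vectors under H0
for k in (0, 1, 2):
    print(f"P(share {k} alleles IBD | H0) = {counts[k]}/{total} = {counts[k]/total:.2f}")
```

conditioning on the affection status of the sibs near a disease gene shifts mass toward higher sharing , which is what the scoring functions introduced next are designed to detect .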
in general , is chosen such that it has a higher expected value under linkage than under no linkage . the standardized form of is , where and . the test is typically in the form of a linear combination over the pedigrees , where are weights assigned to the individual families . the weights can be chosen according to the number of affecteds and the relationship among them and/or other covariate information . under the null hypothesis , has mean 0 and variance 1 . deviations from the null hypothesis can be tested using a approximation or the exact distribution of . in general , s are not directly observable due to missing information . a common practice is to impute / replace by to construct a test statistic ( ) . the main problem with this test statistic is the difficulty of directly evaluating its statistical significance . a standard approximation can be very inaccurate when there is a large amount of missing information , as can be seen from the following variance decomposition : $\operatorname{var}(z\vert h_0 ) = \operatorname{var}(\mathrm{e}(z\vert\hbox{data } , h_0)\vert h_0 ) + \mathrm{e}(\operatorname{var}(z\vert\hbox{data } , h_0)\vert h_0 )$ , which implies in many cases can be substantially less than 1 , leading to a conservative test when the approximation is used . a more accurate approach is described in section [ sec : exp ] . it is important to emphasize that , in allele - sharing studies , the amount of missing information can be made arbitrarily low , at least in theory , by increasing the number of markers in the region . that is why , in regions with evidence for linkage , it is important to predict whether by genotyping additional markers one will obtain a more significant deviation from the null . a different strategy for increasing the amount of information is to increase the sample size , that is , to collect dna samples from additional families . therefore knowing how much information is missing from the data is important for designing efficient follow - up strategies ( see also ) . the linkage methods we described are based on a chosen test statistic . in order to measure the relative information for a test statistic , we need to associate it with a model which specifies the stochastic relationship between the observed data and the missing data beyond the null . otherwise the question of relative information is not well defined , as is emphasized in [ subsec : intro ] . it has been shown by that for every test statistic of the form of ( [ npl ] ) , a class of one - parameter models can be constructed such that the efficient score ( ) from each of the models gives asymptotically equivalent results to the given statistic . the inference procedures based on these models can be applied to any pedigree structure and missing data patterns . as an illustration , we briefly describe the _ exponential tilting _ model of applied to the one - locus allele - sharing statistic . a key assumption underlying this model ( and other models for associating tests ) is that the distribution of the inheritance vectors satisfies where is an inheritance vector for pedigree that leads to a standardized scoring function equal to , and denotes the alternative hypothesis . note that any time an investigator employs a test solely based on the s , as far as measuring information is concerned , s / he is effectively assuming ( [ reduc ] ) regardless of whether or not s / he is aware of it . under assumption ( [ reduc ] ) , it is sufficient to define the alternative models for the s . the exponential tilting model has the form where is specified by the null ( i.e.
, no linkage ) and ^{-1}s^*(y_{\mathrm{ob}})= \mathrm{e}[s(y_{\mathrm{co } } ) measures the anti - conservativeness of the completed - data test by pretending that the actual value of the unobserved is the same as its imputation under the ( estimated ) alternative .therefore , is the general version of the first case in ( [ rform ] ) .this measure also has the following property when combining data sets .suppose are mutually independent and we define for each as in ( [ fracoa ] ) but using instead of individual ( i.e. , an mle based on ; then the overall is a weighted harmonic mean of s weighted by the individual lod score , , namely , however , the individual lod score , , is not necessarily always positive in practice , a problem that is closely related to the problem of defining relative measures for small data sets ( e.g. , for individual family ) , as discussed in section [ sec : small ] . note that can also be expressed as weighted arithmetic mean of if we choose the weights to be proportional to the expected individual complete - data lod score ] , which can not exceed by ( [ revi ] ) , as our best estimate of the complete - data lod score ; the use of a single point estimate of the complete - data lod score without considering its uncertainty can be justified under the large - sample assumption .consequently , we can define \over\mathrm{lod}(\theta _ { \mathrm{ob } } , \theta _ 0|y_{\mathrm{ob } } ) } \nonumber\\[-8pt ] \\[-8pt ] & = & \frac{\max_{\theta } [ q(\theta |\theta _ 0 ) - q(\theta _0| \theta_0)]}{\ell_{\mathrm{ob } } ( \theta _ { \mathrm{ob}})-\ell_{\mathrm{ob}}(\theta _ 0)}. \nonumber\end{aligned}\ ] ] the last expression shows again the computational efficiency of this measure because is the same as carrying out the e - step and m - step of an em algorithm , by pretending the previous iterated value is . however , we emphasize that the use of ] is not because this computation is easy , but rather because of the nature of the fundamental identity ( [ key ] ) , which requires we maximize the expected complete - data lod score . like , . unlike , however , the investigation of when approaches one or zero is a more complicated matter , especially when the difference between and is large .this is a partial reflection of the fact that is defined under the assumption that the null hypothesis is ( approximately ) valid , which would be contradicted by a large value of , especially under our large - sample assumption .therefore , it is more sensible to investigate its theoretical properties when is small , in which case it is essentially equivalent to , as we will establish in section [ sec : theor ] .nevertheless , it is useful to remark here that under the additional assumption that is the unique stationary point of , the numerator of is zero if and only if its denominator is zero , that is , if and only if .[ the `` if '' part of this result is a trivial consequence of ( [ revi ] ) . the `` only if '' part follows from the fact that if the numerator is zero , then is a maximizer of , which means that must also be a stationary point of by ( [ eq : meng ] ) in appendix [ sa.2 ] . ]this demonstrates that in order for to be very small , the observed - data likelihood must suffer a diminishing ability to distinguish between and , just as with . 
also as with , when is linear in , can be computed simply as where , that is , the mean imputation of the missing under the null .therefore , is the general version of the second case in ( [ rform ] ) , and it measures the conservativeness of our test when we impute under the null .its main disadvantage , as previously mentioned , is that it can provide very misleading information when the true is far away from the null . on the other hand , because it is computed at the null , it is less sensitive , compared to , to possible misspecification of the alternative model .we will illustrate this in section [ subsec : finite ] , where we will discuss further the pros and cons of . and ; the bottom curve ( dot - dashed ) corresponds to the entropy - based measure ( ).[fig : niddm ] ] in the context of allele - sharing methods , the measures we introduced in the previous sections are implemented in the software allegro ( ) , and are discussed in detail in . in figure [ fig : niddm ] , and are plotted for various locations along chromosome 22 ( the unit for the x - axis is centimorgans ) in a data set consisting of 127 pedigrees used in an inflammatory bowel disease study ( ) .it can be seen that , in this case , the two measures are very close across the entire chromosome .this happens because the sample size is large and the distribution of the family sharing scores is fairly symmetric. also plotted is an inheritance - vector - based information measure calculated by the software genehunter ( ) .this measure takes advantage of the fact that the inheritance vectors are equally likely under and that , for the fixed support of the space of the inheritance vectors , the shannon entropy ( ) is maximal for the uniform distribution on the support .for the pedigree in the study and a given position , it is defined as where was defined in section [ sec : gene ] .the definition of the overall information content of a data set is based on the global entropy , which , summed over all pedigrees , satisfies while has several desired properties ( e.g. , it is always between zero and one , and it is one when there is perfect data on the inheritance vectors ) , it has some deficiencies that make it unsuitable for the linkage application .the most fundamental problem is that it measures the relative information in the whole inheritance vector space , which could be very different from what is available for a particular test statistic that is a function of the inheritance vectors .for example , in the right diagram of figure [ fig : sibpair ] , we may be nearly certain , and hence suffer very little missing information , that the ibs sharing is actually ibd if we have the knowledge that the allele `` a2 '' has very low population frequency , even though the parental alleles are unknown and therefore is low ( see , for more details ) .it is also possible that is higher than the measures described in this paper ( e.g. 
, ) , for example in situations where there is a lot of data on unaffected individuals in a family , but little or no data on affected individuals .in these cases , will capture available information that is not directly of interest when we are performing affecteds - only analyses .the serious overestimation or underestimation of relative information can have a great impact on the design of follow - up studies .one can decide on increasing the marker density if the relative information is low , as opposed to increasing the sample size .both strategies are expensive , and therefore deciding what is the most efficient design is of great importance in practice . for example , at the global mode in figure [ fig : niddm ] , our measures indicate that we have about relative information , implying that potentially we can increase the lod score by only about ( ) if we add markers to make the ibd process approximately known ( assuming the value of remains approximately the same with the additional data ) .on the other hand , the entropy - based measure from genehunter indicates that we have about 70% information , suggesting that a more substantial gain ( over ) is possible by increasing the density of the markers .therefore these two approaches of measuring information are likely to lead to different strategies in allocating the resources , but evidently , in this example , it is unlikely the test results will change significantly by adding more markers near the location at the global mode . in , the gene _tcf7l2 _ was found to be associated with type-2 diabetes . in particular , allelet of _ rs7903146 _ ( snp402 ) and allele x of a microsatellite marker dg10s478 are both associated with elevated risk of type-2 diabetes ( -value ) .allele t and allele x are substantially correlated ( ) and their effects could not be clearly distinguished from each other in the original study . however , with additional data ( ) , it became clear that allele t is more strongly associated with diabetes than allele x. snp402 has alleles t and c , and dg10s478 has alleles x and 0 .jointly there are four haplotypes : tx , cx , t0 and c0 .figure [ fig : stroke ] presents pairwise comparisons of these four haplotypes .data are from 1021 patients ( chromosomes ) and 4273 controls ( chromosomes ) .consistent with the single marker associations , haplotype tx is found to have elevated risk relative to c0 . to distinguish between the effects of alleles t and x ,haplotype t0 is found to confer risk that is similar to that of tx and has significantly higher risk than c0 .by contrast , haplotype cx is found to have risk similar to that of c0 and significantly lower risk than tx . 
in other words ,given snp402 , dg10s478 does not appear to provide extra information about diabetes , which keeps snp402 as a strong candidate for being the functional variant .the yield of the genotypes is not perfect .each subject has genotypes for at least one of the two markers , but about 3.5% of the genotypes are missing .this together with uncertainty in phase leads to the incomplete information summarized in figure [ fig : stroke ] .interestingly , while the same data are used for the six pairwise comparisons , the fraction of missing information can be quite different .most striking is that the relative information for the test of tx versus c0 is very close to 100% , while the other tests all have more substantial missing information .we explore some of the reasons below .notice that t is highly correlated with x and c highly correlated with 0 .as a consequence , tx and c0 are much more common than t0 and cx .consider a subject whose genotype for d10gs478 is missing .here we can think of his two alleles for snp402 one at a time .given an observed allele t , it is clear that the haplotype is not c0 and quite likely to be tx .hence , even though incomplete , there is still substantial information provided for the test of tx versus c0 .by contrast , we know that this chromosome is useful for the test of tx against t0 , but with the allele of dg10s478 missing , that information is completely lost .even more interesting is that , if the observed allele is c instead , then this haplotype is completely uninformative for the test of tx versus t0 , that is , there is actually no information here whether or not we know the corresponding dg10s478 allele . in effect , the genotype of snp402 is an ancillary statistic for the test of tx against t0 ( or cx against c0 ) .it tells us how much information we can get from this individual assuming that we have no missing data , but by itself does not provide any information for the test . moreover , if the test of tx versus t0 is of key interest , then effort to fill up missing genotypes for dg10s478 should be focused on those individuals that are t / t homozygous for snp402 .when genotypes of both markers are observed , uncertainty in phase only exists for those individuals that are doubly heterozygous , that is , having genotypes c / t and 0/x .such an individual either has haplotypes c0/tx ( scenario i ) or cx / t0 ( scenario ii ) .scenario ii provides no information for the test of tx versus c0 .scenario i does contribute something to the test , but by providing a count of 1 to both tx and c0 , its impact on the test of tx versus c0 is rather limited .by contrast , for the test of tx versus t0 , scenario i adds a count of 1 to tx while scenario ii adds a count to t0 .hence , uncertainty in phase has a much bigger impact on the test of tx versus t0 than the test of tx versus c0 .this example , therefore , illustrates clearly the importance of measuring _ test - specific _ relative information .the measures defined in previous sections do not necessarily work with small samples ( e.g. , data for one family ) because they rely on the ability of the mle to summarize the whole likelihood function .the bayesian approach becomes a valuable tool in such settings even if we do not necessarily have a reliable prior ; we can first construct a coherent measure and then investigate the choice of prior . 
since a likelihood quantifies the information in the data through its ability of distinguishing different values of the parameter , it is natural to consider measuring the relative information by comparing how the observed - data likelihood deviates from `` flatness '' relative to the same deviation in the complete - data likelihood . the bayesian method is ideal here because we need to assess the change in this deviation due to the joint variability in the missing data and in the parameter .a reasonable measure of this deviation , conditioning on , is the posterior variance of the likelihood ratio ( lr ) .this measure is appealing because it is naturally scaled via the equality } { \operatorname{var } [ \mathrm{lr}(\theta _ 0 , \theta| y_{\mathrm{co}})|y_{\mathrm{ob } } ] } \le1,\ ] ] where indexes the underlying prior on used by ( [ ripi ] ) , and stands for `` bayes information . ''we assume here that the complete - data likelihood surface is not flat , as otherwise the model is of little interest .the denominator in ( [ ripi ] ) is therefore positive .we also need to assume that the posterior variances of the two likelihood ratios are finite .this second assumption can be violated in practice , but a second measure we will propose below essentially circumvents this problem . in the presence of nuisance parameters ( under the null ) , there is also a subtle issue regarding the nuisance part of , in the definition of . for a full bayesian calculation ,one should leave it unspecified and average it over in the posterior calculation , just as with the in . on the other hand , to be consistent with the large - sample measures as defined in section [ sec : large ] , we can fix the nuisance parameter part in by its observed - data mle under the null . identity ( [ ratio ] )still holds with such a `` fix , '' because the calculation there conditions on the observed data .this `` fix '' may seem to be rather ad hoc from a pure bayesian point of view .however , it can be viewed as an attempt in capturing the dependence ( if any ) between the parameter of interest and the nuisance parameter under the null , a dependence that is ignored by a single prior on the nuisance parameter regardless of the null .this subtle issue is related to the difference between `` estimation prior '' and `` hypothesis testing prior , '' an issue we will explore in subsequent work . here we just note that all the bayesian measures defined in this section can be constructed with either approach for handling the nuisance parameter under the null , although those under shrinking prior toward the null ( see section [ subsec : shrink ] ) are most easily obtained when the nuisance parameter under the null is fixed at its mle ( or some other known values ) . with either approach , |y_{\mathrm{ob } } \ } = 0,\end{aligned}\ ] ] which occursif and only if for almost all the in the support of the posterior , the complete - data likelihood is ( almost surely ) a constant as a function of the missing data , and thus the missing data would offer no additional help in distinguishing from . on the other hand , if and only if the observed - data likelihood ratio is a constant , and thus there is no information in the observed data for testing using .other characteristics of this measure depend on the choice of the prior , and they will be discussed in the following sections . 
one potential drawback of is that it can be greatly affected by the large variability in the likelihood ratios , as functions of the parameters , for example , when very unlikely parameter values were given nontrivial prior mass .this problem can be circumvented to a large extent by using the posterior variance of the _ log - likelihood ratio _ , \ ] ] is equal to zero if and only if there is no additional information in the missing data for testing .these suggest that we can also measure the relative information by \nonumber \\ & & { } \cdot \biggl(\operatorname{var } [ \mathrm{lod}(\theta , \theta _0| y_{\mathrm{ob}})|y_{\mathrm{ob } } ] \\ & & \hspace*{13pt } { } + \operatorname{var } \biggl[\log{\frac{p(y_{\mathrm{co}}| y_{\mathrm{ob}},\theta ) } { p(y_{\mathrm{co}}|y_{\mathrm{ob } } , \theta_0)}}\big|y_{\mathrm{ob } } \biggr]\biggr)^{-1},\hspace*{-4pt } \nonumber\end{aligned}\ ] ] where , as for , indexes the underlying prior on .although the use of lod is more natural in view of the large - sample measures given in section [ sec : large ] , it does not admit the nice `` coherence '' identity for the likelihood ratio as given in ( [ ratio ] ) .indeed , we had to remove ad hoc a cross term in the denominator of ( [ bayes - ri ] ) in order to keep the resulting ratio always inside the unit interval. furthermore , as we show in section [ sec : theor ] , the use of the ratio scale , instead of log ratio , leads to a number of interesting identities between likelihood ratios and bayes factors , and it is more connected with some finite - sample measure of information in the literature . whereas such trade - offs need to be explored , our general results in the next section imply that in the neighborhood of , the differences between these two measures should be small .given their definitions , the immediate question is how to choose and how to compute and efficiently since , in general , their calculations require integrations that can not be performed analytically . when the truth is believed to be in a neighborhood of the null value , a -neighbor approximation to and be obtained by choosing to be with small .it is proved in appendix [ sa.1 ] that the two bayesian measures have the same limit as , denoted by , \\[-8pt ] & = & \frac { s^2(\theta _ 0|y_{\mathrm{ob } } ) } { s^2(\theta _ 0|y_{\mathrm{ob } } ) + i_{\mathrm{mi } } ( \theta_0|y_{\mathrm{ob } } ) } , \nonumber\end{aligned}\ ] ] where and are respectively the observed - data and complete - data score function , and is the expected ( missing ) fisher information from .note that although this result obviously assumes is univariate , it can also be applied when only the parameter of interest is univariate , if we fix the nuisance parameter part in at its observed - data mle under the null . for the exponential tilting linkage model, one can verify that \\[-8pt ] & = & 1-\frac{\operatorname{var}(z|\hbox{data } , h_0 ) } { w^2 + \operatorname{var}(z|\hbox{data } , h_0 ) } , \nonumber\end{aligned}\ ] ] where , and is given in ( [ npl ] ) .therefore its computation is straightforward because it only depends on the test statistic and the null hypothesis .note also that the expectation of the denominator in ( [ eq : bisp ] ) under the null is simply .therefore , if we replace the denominator in ( [ eq : bisp ] ) by its expected value under the null , we obtain an even simpler approximation .however , measures only the relative information in the neighborhood of . 
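to make the definitions concrete , the sketch below computes both posterior - variance measures by exact enumeration for a toy missing - data problem ; the bernoulli model , the discrete uniform prior on a small grid ( chosen so that all the required posterior variances exist ) , and all numerical values are our illustrative assumptions , not part of the original analysis .

```python
# A minimal sketch of the two Bayesian measures above for a toy model:
# y_1,...,y_n i.i.d. Bernoulli(theta), m observed (MCAR), null theta0 = 0.5,
# and -- purely for illustration -- a discrete uniform prior on a small grid
# of theta values, so that all posterior variances exist and everything can
# be computed by exact enumeration (no MCMC needed).
import numpy as np
from scipy.stats import binom

n, m, s, theta0 = 10, 6, 4, 0.5          # s = successes among the m observed
grid = np.array([0.2, 0.35, 0.5, 0.65, 0.8])

# posterior over theta given the observed data (uniform prior on the grid)
lik_ob = grid**s * (1 - grid)**(m - s)
post = lik_ob / lik_ob.sum()

# observed-data likelihood ratio LR(theta0, theta | y_ob) and its log
lr_ob = (theta0**s * (1 - theta0)**(m - s)) / lik_ob
lod_ob = -np.log(lr_ob)                   # lod(theta, theta0 | y_ob)

def post_var(values, weights):
    mu = np.sum(weights * values)
    return np.sum(weights * (values - mu) ** 2)

# joint distribution of (theta, k) given y_ob, where k = missing successes
k = np.arange(n - m + 1)
w_joint = post[:, None] * binom.pmf(k[None, :], n - m, grid[:, None])
lr_co = (theta0**(s + k[None, :]) * (1 - theta0)**(n - s - k[None, :])) / \
        (grid[:, None]**(s + k[None, :]) * (1 - grid[:, None])**(n - s - k[None, :]))
log_mis = k[None, :] * np.log(grid[:, None] / theta0) + \
          (n - m - k[None, :]) * np.log((1 - grid[:, None]) / (1 - theta0))

bi1 = post_var(lr_ob, post) / post_var(lr_co.ravel(), w_joint.ravel())
bi2 = post_var(lod_ob, post) / (post_var(lod_ob, post) +
                                post_var(log_mis.ravel(), w_joint.ravel()))
print(f"BI_1 ~ {bi1:.3f}   BI_2 ~ {bi2:.3f}   (both in [0, 1])")
```

as noted above , the two measures share the same shrinking - prior limit , so with a prior tightly concentrated around the null they give essentially the same answer ; their divergence under more diffuse priors is exactly the prior sensitivity discussed below .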
for example , suppose the data consist of one affected sib - pair like in figure [ fig : sibpair ] such that both parents and the sibs are heterozygous with the same pair of alleles at a specific locus ( i.e. , all individuals have the alleles `` a1 '' and `` a2 '' ) . in this case, the observed - data likelihood from the exponential tilting model is very informative away from ( see figure [ fig : dh - loglik ] ) , but because the null value turns out to be the _ minimizer _ of the observed - data likelihood . ] in general , whenever is a stationary point of , , even if there is almost perfect information .for example , if the data consist of sib - pairs such that there is complete information on sib - pairs , sharing 0 alleles ibd and sharing 2 alleles ibd , and one sib - pair has no information , then and thus .this is clearly a misleading measure . in the next sectionwe propose a remedy for this problem .the measures defined in section [ subsec : bayes ] are inherently small - sample quantities , for the variance terms used in these measures do not naturally admit additivity even for i.i.d .data structures . whether one can find a satisfying small - sample measure that would automatically admit such additivity is a topic of both theoretical and practical interest , but for our current purposes we can impose such additivity by defining global measures via appropriate combining rules , such as ( [ eq : comb ] ) .we adopt such rules mainly to maintain the continuity of moving from small - sample to large - sample measures as proposed in section [ sec : large ] .whether these are the most sensible rules is a topic that requires further research .specifically , suppose our data consist of independent `` small units '' ( e.g. , individual families ) , .we apply ( [ ripi ] ) to each unit and then combine them via the harmonic rule ( [ eq : comb ] ) but with weights proportional to and then taking the ratio .that is , } { \sum_{i=1}^n\operatorname{var } [ \mathrm{lr}(\theta _ 0 , \theta|y_{\mathrm{co}}^{(i)})|y_{\mathrm{ob}}^{(i ) } ] } \nonumber\\[-8pt ] \\[-8pt ] & = & \biggl\{\frac{\sum_{i=1}^n v_i[\mathcal{b}i^\pi _ { 1,i}]^{-1 } } { \sum_{i=1}^nv_i } \biggr\}^{-1}. \nonumber\end{aligned}\ ] ] similarly , we can define the combined version for from individual , and we can also use the arithmetic combining rule ( [ eq : combl ] ) .in addition , its limit under the shrinking prior , in analogy to ( [ eq : bisg ] ) , can be expressed as \\[-8pt ] & = & \frac{\sum_{i=1}^n s^2(\theta _ 0|y^{(i)}_{\mathrm{ob } } ) } { \sum_{i=1}^n s^2(\theta _ 0|y^{(i)}_{\mathrm{ob } } ) + i_{\mathrm{mi}}(\theta_0|y_{\mathrm{ob } } ) } , \nonumber\end{aligned}\ ] ] where is the expected fisher information matrix from , with .we have changed the notation from to to signify the fact that the latter measure is defined by _ summing _ up the numerators and denominators of the individual s _ separately _ before forming the combined ratio .the second equation in ( [ eq : combss ] ) holds because of the additivity of fisher information for independent data structures . for the exponential tilting linkage model ,this averaging for a shrinking prior leads to where and .this is equal to zero only if all the s are equal to zero , as opposed to using a global posterior , that is , by applying ( [ eq : bisg ] ) directly to the whole data set , where is sufficient to cause .this difference is an important advantage for , as we will demonstrate in section [ subsec : finite ] . 
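the contrast between the combined measure and the one based on the global score can be seen with a few hypothetical numbers ; the per - family scores and missing - information terms below are invented solely to exhibit the cancellation problem described above , and the two formulas follow the surviving expressions for the shrinking - prior limits .

```python
# A small sketch contrasting the combined measure (sum of squared per-family
# scores) with the one based on the global score.  The per-family scores s_i
# and missing-information terms i_mi_i are hypothetical numbers chosen only
# to exhibit the cancellation problem.
import numpy as np

s_i    = np.array([1.25, -0.75, -0.5, 1.0, -1.0])   # per-family scores at theta0
i_mi_i = np.array([0.5,   0.5,   0.75, 0.5, 0.75])  # per-family expected missing info
i_mi   = i_mi_i.sum()                                # additive over independent families

bi_global   = s_i.sum() ** 2 / (s_i.sum() ** 2 + i_mi)      # uses the total score
bi_combined = (s_i ** 2).sum() / ((s_i ** 2).sum() + i_mi)  # sums squared scores first

print(f"total score      = {s_i.sum():.2f}")
print(f"global measure   = {bi_global:.3f}")
print(f"combined measure = {bi_combined:.3f}")
# With these numbers the per-family scores cancel, so the global measure is
# zero even though several families are individually informative; the
# combined version is zero only if every s_i is zero.
```

summing the squared scores before forming the ratio prevents informative families with opposite - signed evidence from masking each other , which is the advantage referred to above .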
[ figure [ fig : buc ] : the dashed , dot - dashed and dotted curves correspond to the bayesian measure calculated using three different uniform priors . ] to illustrate the proposed bayesian measures of information , we calculated them for various priors in a data set containing 21 ulcerative colitis ( uc ) families ( ) . the choices of priors here were made for investigating the sensitivity to prior specification , so they may not reflect our real knowledge about the problem ( e.g. , we generally expect to be nonnegative in such problems ) . in figure [ fig : buc ] the measure of information is plotted in comparison with , which is calculated as described in the previous section for three different priors . similar results are obtained using . in this example and are almost identical ; is therefore not shown . note that the value of the parameter under the null hypothesis of no linkage is equal to zero , and , for this data set , the maximum likelihood estimates for the linkage parameter across the chromosome vary between and 0.07 . we note that the measure calculated using a uniform prior is very close to , which demonstrates the possibility of having very different priors that result in very similar measures . the bayesian measure calculated with a prior having a narrower support , that is , uniform on the interval , follows the same patterns but is uniformly smaller . using a prior centered around the maximum likelihood estimate , uniform on the interval , turns out to be very misleading because it gives values that are considerably too small ( i.e. , in comparison with the large - sample estimates given in figure [ fig : niddm ] ) . we emphasize that symmetric uniform priors were used in figure [ fig : buc ] simply to demonstrate potential substantial sensitivity to prior specification , as one often expects less erratic behavior from such symmetric and smooth prior specifications . the issue of sensitivity to the choice of prior is further discussed in section [ sec : future ] . as we discussed previously , a central difficulty in measuring the relative amount of information is that its value will generally depend on the true value of the unknown parameter . one way to explore this dependence is to replace in the definition of or by in a suitably defined neighborhood , and to plot it against in such a range to check its variability . the use of this type of _ relative information function _ was proposed in for the purpose of measuring the rate of convergence of em - type algorithms , where the function was termed _ relative augmentation function _ .
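the em connection just mentioned can be made concrete with a toy calculation ; the bernoulli missing - data setup , the counts and the starting value below are illustrative assumptions . for this toy problem the em map is linear , its per - iteration contraction factor equals the fraction of missing information , and one minus that factor recovers the fraction of observed information , which is the limiting value of the relative augmentation function discussed next .

```python
# A minimal sketch of the EM rate-of-convergence connection, for an assumed
# toy problem: y_1,...,y_n i.i.d. Bernoulli(theta) with only m values
# observed (MCAR).  The EM map is theta' = (s + (n - m)*theta) / n, whose
# rate of convergence is (n - m)/n, i.e. the fraction of missing information.
n, m, s = 50, 30, 18           # s = number of successes among the m observed
mle = s / m                    # observed-data MLE (the EM fixed point)

theta = 0.9                    # arbitrary starting value
for t in range(6):
    new = (s + (n - m) * theta) / n        # E-step and M-step combined
    rate = (new - mle) / (theta - mle)     # per-iteration contraction factor
    print(f"iter {t}: theta = {new:.5f}, rate = {rate:.3f}")
    theta = new

print(f"fraction of missing information  (n-m)/n = {(n - m) / n:.3f}")
print(f"fraction of observed information     m/n = {m / n:.3f}")
```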
note that is simply the value of this function at .for simplicity of presentation , we will assume in the following and section [ subsec : fixed - size ] that is univariate , though all the results are generalizable to multivariate by employing appropriate matrix notation and operations .we also assume all the regularity conditions as in dempster , laird and rubin ( ) to guarantee the validity of taking differentiation under integration and for taylor expansions .it was shown in that as , approaches the so - called _ fraction of observed information _ for the purpose of estimation : where the observed , complete and missing fisher information are defined , as in dempster , laird and rubin ( ) , \label{eq : imis } \\[-8pt ] & = & \mathrm{e } \biggl[-\frac{\partial^2 \log f(y_{\mathrm{co}}|y_{\mathrm{ob } } ; \theta ) } { \partial\theta ^2 } \bigg|y_{\mathrm{ob } } ; \theta\biggr ] \big\vert_{\theta = \theta_{\mathrm{ob } } } \nonumber\end{aligned}\ ] ] and \bigg\vert_{\theta = \theta _ { \mathrm{ob } } } \\ & = & i_{\mathrm{ob}}+i_{\mathrm{mi } } , \nonumber\end{aligned}\ ] ] where the last identity is known as the `` missing - data principle , '' and is a directed consequence of ( [ eq : emex ] ) .the measure plays a key role in determining the rate of convergence of the em algorithm and its various extensions ( e.g. , ; , ; ; ) . the above limiting resultsuggests that , when is small , we can study the behavior of via its connection to , as we demonstrate in the next section .however , among all the measures we proposed , the measure of ( [ eq : combss ] ) most closely resembles of ( [ eq : dlr ] ) .the main differences are the use of in place of , and the fact that the fisher information terms in are evaluated at , whereas for they are evaluated at .it is well known that , under regularity conditions , will converge to the expected fisher information under the null .consequently , under the null , and are asymptotically equivalent .this equivalence may suggest to directly define in terms of the `` observed fisher information at . '' however , although is guaranteed to be nonnegative ( definite ) when is in the interior of the parameter space , this is not necessarily true for .therefore , for small - sample problems for which the use of is inadequate ( e.g. , when the mle is on the boundary of ) , the direct substitution of by will not lead , in general , to a nonnegative measure .the measure this problem by using the sum of individual squared scores instead of , which guarantees that the resulting measure is inside the unit interval , and that it is consistent with for large samples .therefore can be viewed as a small - sample extension of in the neighborhood of the null . for both and , their equivalence to in the neighborhood of can be established for finite - sample sizes .( therefore , can also be defined as the value of either or when . ) specifically , denote the derivative of at , and it is proved in appendix [ sa.2 ] that in deriving this result , we have utilized the following well - known identities in the literature of the em algorithm ( e.g. 
, dempster , laird and rubin , ; ) : under the assumption that has a unique maximizer as a function of , an assumption that is easily satisfied in most applications where em is useful , we also prove in appendix [ sa.2 ] an analogous expansion , whose coefficient of $\delta$ involves the third derivatives $\ell^{(3)}_{\mathrm{ob}}$ and $q^{(3,0)}_{\mathrm{ob}}$ together with the factor $( 3i_{\mathrm{co}})^{-1}$ , up to an $o(\delta ^2 )$ remainder . these expansions are useful for comparing the first - order ( in ) behavior of and . for example , we suspect that , for many applications , is a conservative estimate of the actual relative information , where is a more accurate measure . one way to validate this , or to identify situations where this conjecture is true , is to compare the two coefficients of and to determine the appropriate conditions for to the first order in the neighborhood of ( away from the neighborhood the comparison is not very meaningful because can be seriously biased ) . due to the complex nature of these two coefficients , we only present in the next section a simple example to illustrate the conservatism of , and leave the general theoretical investigation to subsequent work . we also remark here that when the true is believed to be close to , a measure like can be used to construct reasonable bounds . for example , we can expect to be a reasonable lower bound and an upper bound for the relative information , or we can use as a compromise . in future work , we intend to investigate the reliability and applicability of such bounds and compromises . here we simply note a computational advantage of that follows from an identity expressing it as the square root of a ratio whose denominator is $q(\theta _ { \mathrm{ob}}|\theta _ { \mathrm{ob } } ) - q(\theta _ 0|\theta _ { \mathrm{ob } } )$ , which avoids entirely the calculation of the observed - data log - likelihood function , which is often harder to compute than the expected complete - data log - likelihood . furthermore , whenever and are close to each other , as in our real - data examples , will be practically the same as either or . let be i.i.d . samples from , where both and are unknown , and the null hypothesis is . suppose our observed data are a size- random sample of , where . then it should be clear that the relative information is by any reasonable argument . indeed , straightforward calculation shows regardless of the actual value of . however , $ = r -\frac{r(1-r^2)}{2}\frac{t_0 ^ 2}{m } + o ( ( t^2_0/m )^2 )$ , where , which differs from the usual -statistic ( under the null ) only due to the use of the mle for , , instead of the sample variance . from ( [ normal ] ) , it is clear that approaches whenever is small , which implies that will recover ( reasonably ) the correct information when the null hypothesis is ( approximately ) correct . in contrast , for a fixed sample size , approaches zero if because for large , behaves like . the reason is that the larger is , the stronger is the evidence that the null is false , and thus the more conservative we become when we impute under the null . the above discussion indicates a potential problem with any bayesian measure , as it is inevitable that some prior information will `` leak '' into our measure of relative information in the data alone ( for a specified test ) . when we have reliable prior information , it is a very interesting issue to investigate / debate whether our relative information should include the prior information ( e.g.
, in the extreme case when we know the null is true for certain , the data become irrelevant , and one can always consider we have 100% information ) .nevertheless , in cases where the prior is introduced for convenience , as largely the case for our setting , it is desirable to reduce any unintended influence as much as possible . in this regard , it was a pleasant surprise to see that the defined in ( [ eq : combss ] ) is able to recover the correct answer in this example .specifically , letting , ( [ eq : combss ] ) becomes \\[-8pt ] & = & \frac{m}{m + ( n - m)}=r . \nonumber\end{aligned}\ ] ] it is curious that has this ability of `` removing '' the impact of prior information that affected in this finite - sample setting ; how generally this result holds ( even approximately ) is a topic for future research .our large - sample measures have interesting connections with classic measures based on fisher information , as shown in section [ subsec : asym ] .are there similar connections for the small - sample bayesian measures ?the bayesian measures are based on posterior variances of likelihood ratios or their logarithms .it turns out that there are several interesting connections , or at least analogies , in both frequentist and bayesian literature . in a frequentistsetting , just as the well - known cramr rao lower bound provides a finite - sample information bound that is determined by the fisher information , there is a more general chapman robbins information bound ( ) that is based on sampling variance of the _ likelihood ratio_. specifically , let have a multivariate pdf / pmf with taking values in some parameter space . for each ,let be the support of .suppose is an unbiased estimator of a real - valued function .let then ^ 2\over\operatorname{var } ( lr(\phi,\theta|x)|\theta ) } \biggr],\ ] ] where denotes the likelihood ratio function .this `` second cr '' bound is more general than the first one because it requires neither differentiability of nor the existence of fisher information ( e.g. , as in the case of discrete parameters ) .it provides an interesting analogy to the proposed bayesian measures because it is based also on the variability of the likelihood ratio , where and can be arbitrarily apart .the central connection here is that while our large - sample measures have close ties with fisher information ( as detailed in section [ subsec : asym ] ) , which is also intimately connected with the `` first cr '' bound ( i.e. , cramr rao bound ) , our small - sample measures are based on variances of likelihood ratio , which is connected with the `` second cr '' bound . 
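as a quick numerical illustration of the `` second cr '' bound , the sketch below evaluates it for an assumed i.i.d. bernoulli sample with the sample mean as the unbiased estimator ; the closed form used for the variance of the likelihood ratio is specific to this toy model and is our own derivation , not taken from the text .

```python
# A numerical sketch of the "second CR" (Chapman-Robbins) bound for an assumed
# i.i.d. Bernoulli(theta) sample of size n, with T = sample mean (unbiased for
# g(theta) = theta).  The bound at a comparison point phi is
# (phi - theta)^2 / Var_theta(LR(phi, theta | X)), and for this model
# Var_theta(LR) = (1 + (phi - theta)^2 / (theta*(1 - theta)))^n - 1.
# As phi -> theta it recovers the Cramer-Rao bound theta*(1 - theta)/n.
n, theta = 20, 0.3
crlb = theta * (1 - theta) / n          # first CR (Cramer-Rao) bound
var_T = crlb                            # Var(sample mean) attains it here

print(f"Var(T) = {var_T:.5f},  Cramer-Rao bound = {crlb:.5f}")
for phi in (0.6, 0.45, 0.35, 0.31, 0.301):
    d2 = (phi - theta) ** 2
    var_lr = (1 + d2 / (theta * (1 - theta))) ** n - 1
    bound = d2 / var_lr                 # Chapman-Robbins bound at this phi
    print(f"phi = {phi:5.3f}: bound = {bound:.5f}  (bound <= Var(T): {bound <= var_T + 1e-12})")
```

as the comparison point moves toward the true value , the bound increases toward the cramér rao bound , which the sample mean attains in this model ; this mirrors the connection drawn above between the two cr bounds .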
the fact that the second cr bound is more general than the first cr bound is also consistent with our expectation that our bayesian measures ultimately should be more general than the likelihood - based large - sample measures , though currently this is still just an expectation , not a realization .the variances in our bayesian measures are more general than the one used by the second cr bound because we average over not only the missing data but also the posterior distribution of .examining the posterior distribution of the entire likelihood ratio might seem a case of `` using data twice , '' but the following several identities suggest that such a practice is natural from the bayesian point of view ( indeed , the use of posterior distribution of the likelihood ratio has been previously advocated by ) .first , suppose we have a _ proper _ prior ; then it is easy to verify that \nonumber \\ & & \quad = \int\frac{f(y_{\mathrm{ob}}|\theta _ 0)}{f(y_{\mathrm{ob}}| \theta)}\frac{f(y_{\mathrm{ob}}|\theta ) \pi(\theta ) } { f_\pi(y_{\mathrm{ob}})}\,d\theta \\ & & \quad = \frac{f(y_{\mathrm{ob}}|\theta _ 0)}{f_\pi(y_{\mathrm{ob } } ) } \equiv\mathrm{bf}_{\mathrm{ob } } , \nonumber\end{aligned}\ ] ] where .( note that here we assume is fixed at a known value . ) in other words , the posterior mean of our likelihood ratio is simply the well - known bayes factor for assessing the probability of the model under relative to the model under .this shows that the bayes factor is a very natural generalization of likelihood ratio by taking into account our uncertainty in while accessing the evidence in the data against the hypothesized null value .it also shows that it is quite natural to consider posterior quantification of the likelihood ratio itself .incidentally , applying identity ( [ bafac ] ) first with and then averaging the resulting identity over the posterior predictive distribution , we also obtain the following intriguing result : & = & \mathrm{e}[\mathrm{lr}(\theta _ 0 , \theta \nonumber\\[-8pt ] \\[-8pt ] & = & \mathrm{e}[\mathrm{lr}(\theta _ 0 , \theta |y_{\mathrm{ob}})| y_{\mathrm{ob } } ] = \mathrm{bf}_{\mathrm{ob}}. \nonumber\end{aligned}\ ] ] in other words , the observed - data bayes factor is the posterior average of any of these three quantities : the observed - data likelihood ratio , the complete - data likelihood ratio , or the complete - data bayes factor. identities ( [ ratio ] ) , ( [ bafac ] ) and ( [ bayes ] ) together demonstrate the `` coherence '' of likelihood ratio and bayes factor as well as between them . identity ( [ bayes ] ) also suggests an easy way of computing via monte carlo averaging of complete - data or observed - data likelihood ratios .we note , however , that theposterior distributions of , and are generally different .in particular , because of ( [ ratio ] ) and ( [ bafac ] ) , we have that , \operatorname{var}[\mathrm{lr}(\theta _ 0 , \theta |y_{\mathrm{ob}})|y_{\mathrm{ob } } ] \ } \nonumber\\[-8pt ] \\[-8pt ] & & \quad \leq \operatorname{var } [ \mathrm{lr}(\theta _ 0 , \theta |y_{\mathrm{co}})|y_{\mathrm{ob } } ] . 
\nonumber\end{aligned}\ ] ] given the clear interpretation and utility of the posterior mean of the likelihood ratio , we would naturally consider the posterior variance of the likelihood ratio .that is , we can measure the posterior uncertainty in our likelihood ratio evidence .these are exactly the quantities used in defining in ( [ ripi ] ) , where the numerator and denominator are respectively the posterior variances of the observed - data and complete - data likelihood ratios .the following equivalent expression of further demonstrates how measures relative `` flatness '' in the likelihood ratio surfaces : } { \operatorname{cov}_{\pi,\theta _ 0 } [ \mathrm{lr}(\theta _ 0,\theta|y_{\mathrm{co } } ) , \mathrm{lr}(\theta , \theta _ 0|y_{\mathrm{co } } ) ] } , \hspace*{-20pt}\ ] ] where is the covariance operator with respect to the prior , and is with respect to . in other words ,the flatness of the likelihood ratio surfaces is measured by the covariance of the likelihood ratio and its reciprocal .although this expression itself is intuitive because a positive function is flat if and only if it is proportional to its reciprocal , the equivalence between ( [ ripi ] ) and ( [ bicov ] ) is a bit curious because ( [ ripi ] ) is based on _posterior variance _ whereas ( [ bicov ] ) is based on _prior covariance_. it would be a serious oversight if we do not emphasize the connections of the information measures we discuss in this paper to the vast literature on entropy . indeed , essentially all measures we presented have an entropy flavor , from the large - sample ones based on kullback leibler information to the small - sample ones involving second - order entropy in the form of ( see ) .this is very natural given that the entropy is a fundamental type of information measure ( e.g. , ) . indeed ,much of the classic results on information measure in optimal sequential designs , which our genetic applications resemble ( i.e. , as one needs to decide the next step given what has been observed ) , are based on entropy - like quantities and their generalizations .this includes both kullback leibler information and chernoff information ( ) .a central difference between that literature and our current proposals is that the existing literature focuses on quantifying the _ absolute _ amount of information in an experiment / design , whereas our main objective here is to quantify the _ relative _ amount of information compared to the absolute amount of information that we would have if there were no missing data ( e.g. , known ibd sharing in linkage studies ) .furthermore , we investigate two sets of relative information , depending on whether we can assume the true parameter is in a neighborhood of the null or not . to the best of our knowledge ,our study is the first serious investigation of the roles of null and alternative hypotheses in measuring relative information . because our bayesian measures and are defined as ratios of variances , it is also important to emphasize their connections to the regression and to other measures of association / correlation such as the linkage disequilibrium measure ( e.g. , ) .these measures are related to fisher information and can also be used to estimate relative information .the main differences are that ours are defined via the _ posterior variability _ of the _ whole ratio or log - likelihood ratio _ , instead of _ sampling variances _ of _ individual statistics or variables_. 
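identity ( [ bafac ] ) is easy to check numerically for a conjugate model ; in the sketch below the beta - bernoulli setup , the prior parameters and the counts are illustrative choices , and the prior is made moderately concentrated so that the monte carlo average of the likelihood ratio is stable .

```python
# A small Monte Carlo check of identity ([bafac]) for a conjugate toy model:
# y_ob = s successes in m Bernoulli(theta) trials, a Beta(a, b) prior on theta
# (all values below are illustrative choices), and null value theta0.
# The posterior mean of LR(theta0, theta | y_ob) should equal the Bayes
# factor f(y_ob | theta0) / f_pi(y_ob).
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(1)
m, s, theta0, a, b = 6, 4, 0.5, 8.0, 8.0

# left-hand side: posterior average of the observed-data likelihood ratio
theta = rng.beta(a + s, b + m - s, size=200_000)          # posterior draws
lr = (theta0**s * (1 - theta0)**(m - s)) / (theta**s * (1 - theta)**(m - s))
lhs = lr.mean()

# right-hand side: Bayes factor, using the closed-form Beta-Bernoulli marginal
log_marginal = betaln(a + s, b + m - s) - betaln(a, b)    # binomial term cancels
rhs = np.exp(s * np.log(theta0) + (m - s) * np.log(1 - theta0) - log_marginal)

print(f"posterior mean of LR = {lhs:.4f}")
print(f"Bayes factor         = {rhs:.4f}")
```

the two printed numbers agree up to monte carlo error , illustrating that the observed - data bayes factor is the posterior mean of the observed - data likelihood ratio .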
more details on measures of association / correlation used to quantifyrelative information can be found elsewhere ( ) .clearly much remains to be done , especially for the small - sample problems . with large samples, we believe the measures we proposed , especially , satisfy essentially all five criteria as discussed in [ subsec : conf ] . for small samples , the various bayesian measures we proposed ,while all satisfy the second criterion , have pros and cons regarding the rest of the criteria .the most pronounced problem , of course , is the choice of a general - purpose `` default prior . '' here we emphasize that the desire for `` general purpose '' is motivated by the observation that in many applications the investigators need to compute the information measures for many data sets ( e.g. , different families or pedigrees and different loci in linkage analysis ; different tests for different haplotype models in the association studies ) under time constraints .therefore it is typically not feasible to construct specific priors for each data set at hand , nor is it desirable given that the purpose of hypothesis testing , in the genetic applications we are interested in , has more of a screening nature .a requirement for constructing problem - specific priors would be typically viewed as too much of a burden to be practically appealing . on the other hand ,standard recipes for constructing `` default '' priors do not seem to be generally applicable either .for example , the use of jeffreys prior is typically out of the question because the calculation of the expected fisher information requires us to specify a reliable distribution over the state space of for arbitrary value of , which is typically very hard , if not impossible , to do .furthermore , the properties of jeffreys prior are not clear when we try to avoid the use of fisher information in the first place .second , whereas provides a nice connection between small - sample and large - sample measures in the neighborhood of , we currently do not have such a measure when the null is far from the truth .this is of great theoretical and practical concern , at least in the context of genetic studies , because the regions where there is strong evidence against the null are precisely the regions we try to identify .one possible strategy is to start by estimating based on the aggregated data ( e.g. , using data from the other families ) , and then use a prior that shrinks toward this estimated when computing information measure for individual components ( e.g. , families ) . in future workwe plan to evaluate this strategy , as a part of the general investigation of the sensitivity of our bayesian measures to prior specifications once we move out the neighborhood of the null .third , even for large samples , our measures and can be sensitive to the posited linkage or association model , which may or may not capture the real biological process that leads to the linkage or association. this would be particularly true for , which relies more heavily on the model associated with the test than .although such sensitivity is inevitable because without a specific alternative model the very notion of relative information may not even be defined , as we emphasized previously , it is important to understand to what degree our information measures can change with our working model .both theoretical and empirical investigations are needed , especially for classes of problems that are common in practice . 
also neededare investigations of the impact of nuisance parameters on these measures .the haplotype association examples involve nuisance parameters , for example , population genotype risks or population haplotype frequencies , and seems to work adequately in practice .nevertheless , it would be interesting to see if further refinements are possible .the illustrative example of section [ subsec : finite ] strongly suggests that further research is necessary to investigate the possible complications caused by the nuisance parameters , especially for .the genetic applications presented in this paper focus on the allele - sharing linkage methods and the haplotype - based association studies , but there are many other areas in genetics where measuring relative information is important . for example , in the past years the markers used in genome - wide searches for susceptibility loci were mostly microsatellites .these are markers that have many alleles , and are generally very informative , but are not very common across the genome .because the applications focused on small regions of the genome , this lack of abundance of the microsatellites has led to the still increasing popularity of the snps as genetic markers .the snps are not as informative as the microsatellites , but they are highly abundant . also new technology platforms such as the affymetrix genechip mapping 10k , 100k and 500k arrays ( ) are available for snp genotyping , and they come with a substantial reduction in cost . given that both the microsatellites and the snps are currently used in gene - mapping studies , a fundamental and practical question is how many snps we need in order to obtain the same amount of information as obtained by using microsatellites .differences between snps and microsatellites have been investigated for linkage ( e.g. , ; ; ; ; ) , and measures of relative information extracted have been proposed ( ) , but the answers to similar questions will be different for different applications .we plan to further explore the use of the proposed measures of information to other problems of this sort .the comparisons between the relative information of sets of snps to that of sets of microsatellites ( relative to the underlying complete information ) will allow us to make sensible comparisons of the maps for a particular study purpose .the gene - mapping research has focused recently on genome - wide association studies that are thought to have better power to localize genes contributing more modestly to disease susceptibility . in these studies ,new measures are needed for quantifying the loss in information due to untyped snps , or even snps that have not been discovered .also , novel tools for measuring information are necessary in choosing a subset of `` tagging '' snps to type for a disease project based on the data from the hapmap project ( ) .other possible applications are in testing for gene - environment interaction .this can be done in both linkage and association studies , and can increase the power of detecting risk factors . in most of these studies , the environmental and the clinical data are also incomplete .a natural question then arises : `` what is the most efficient way to allocate the resources : what percentage should be devoted to collect more genetic information and what percentage should be used to collect more covariate information ? 
''the answer depends again on the specific study , and the problem is more complicated because the environmental and clinical information can be subject to much more complicated missing - data patterns , often due to unknown reasons .research is clearly needed in this direction to explore to what extent it is possible to sensibly measure the relative information for guiding the allocation of resources , and we hope the general framework we set up in this paper provides a starting point , if not a solution .in order to prove the shrinking prior limit results in section [ subsec : shrink ] , we need the following lemma . [ lemma : lhopital]let be a fixed real number , and let and , , be real continuous functions defined on an open interval containing , such that and are three times differentiable in a neighborhood of .let , and similarly for , where .if \\[-8pt ] b_1(t)b_2(t)&=&b_3(t)b_4(t ) , \nonumber\end{aligned}\ ] ] but \\[-8pt ] & & \quad { } -b_3''(t)b_4(t)-b_3(t)b_4''(t)\neq0 , \nonumber\end{aligned}\ ] ] then the proof follows from the simple taylor expansion and conditions ( [ eq : zero ] ) and ( [ eq : noze ] ) .[ prop : bi1]let be . then \\[-8pt ] \eqntext{\quad k=1,2.}\end{aligned}\ ] ] let ] and .then , as in ( [ bicov ] ) , it is straightforward to verify that we can then apply lemma [ lemma : lhopital ] with .the result for in ( [ eq : blim ] ) then follows because and \\ & = & 2 i_{\mathrm{mi}}(\theta _ 0|y_{\mathrm{ob } } ) - \ell''(\theta_0|y_{\mathrm{ob } } ) + s^2(\theta _ 0|y_{\mathrm{ob}}).\end{aligned}\ ] ] note that condition ( [ eq : zero ] ) holds because for all . for , the limit can be calculated by observing that \\ & & \hspace*{40pt}\big/ \operatorname{var } [ \mathrm{lod}(\theta , \theta _ 0 calculating the limit of the ratio in the denominator .a little algebra shows that this ratio can be expressed as ^ 2\biggr ) \nonumber\\[-8pt ] \\[-8pt ] & & \quad { } \cdot \biggl(\int b_1(\theta ) \pi(\theta ) \,d\theta \int b_2(\theta ) \pi(\theta ) \,d\theta \nonumber \\ & & \quad \hspace*{61pt } { } - \biggl[\int b_3(\theta ) \pi(\theta ) \,d\theta \biggr]^2\biggr)^{-1 } , \nonumber\end{aligned}\ ] ] where are the same as in ( [ eq : var - lr - obs ] ) , but , \\b_2(\theta ) & = & \mathrm{lod}^2(\theta , \theta _ 0|y_{\mathrm{ob}})a_1(\theta ) \quad\hbox{and}\quad \\ b_3(\theta ) & = & \mathrm{lod}(\theta , \theta _ 0| y_{\mathrm{ob}})a_1(\theta ) .\end{aligned}\ ] ] to apply lemma [ lemma : lhopital ] , we let and .noting that for all [ and hence condition ( [ eq : zero ] ) holds ] , we only need to compute and in order to obtain the limit .this calculation is facilitated by the formula \\ & & \quad = 2g'^2\exp(f)+2gg''\exp(f)+4gg'f'\exp(f ) \\ & & \qquad { } + g^2f''\exp ( f)+g^2f'^2\exp(f).\end{aligned}\ ] ] the result then follows because and \\ &\equiv & 2i_{\mathrm{mi } } ( \theta_0|y_{\mathrm{ob}}).\end{aligned}\ ] ] the derivations are based on the following lemma , which is trivial to verify using the taylor expansion .[ lemma : epsilons]let and be continuous functions defined on an open interval containing zero , such that and as . then as in section [ sec : small ] , we let . for , we need to expand both and , as functions of . using the notation given in section [ subsec : fixed - size ] and ( [ eq : emid ] ) , we have and \\[-8pt ] & & \quad = - { i_{\mathrm{co}}\over2 } \delta^2 + { q^{(3,0)}_{\mathrm{ob}}\over6 } \delta^3 + o(\delta^4 ) . \nonumber\end{aligned}\ ] ] expansion ( [ eq : rione ] ) then follows directly from lemma [ lemma : epsilons ] . 
to establish a similar expansion for , let be the maximizer of ; recall we assume that is unique .then however , even when is small , it is not immediate that would be close to as well . we now show that when is small enough , and have opposite signs .consequently , , the unique solution of , must be between and , and hence .to see this , we first expand \\[-8pt ] & = & \bigl[q^{(2,0)}_{\mathrm{ob}}+ q^{(1,1)}_{\mathrm{ob}}\bigr ] \delta+ o(\delta^2 ) .\nonumber\end{aligned}\ ] ] but the following general result , proved in : \\[-8pt ] \eqntext{\quad\hbox{for any } k\ge 0,}\end{aligned}\ ] ] implies that and .consequently , for , using the notation in ( [ eq : emex ] ) and ( [ eq : dq ] ) , we have \\[-8pt ] & & \quad = h^{(2,0)}(\theta _ 0|\theta _ 0)(\theta _ { \mathrm{ob } } -\theta_0 ) + o(\delta^2 ) \nonumber \\ & & \quad = i_{\mathrm{mi}}(\theta _ 0 ) \delta+o(\delta^2 ) , \nonumber\end{aligned}\ ] ] where is as defined in ( [ eq : imis ] ) .since both and are positive , we conclude from ( [ eq : apb ] ) and ( [ eq : apb-2 ] ) that and have opposite signs when is small enough .therefore we have established that , and consequently we can express where and are as and are to be determined . to determine and , we first note that \\[-8pt ] & = & - { i_{\mathrm{ob } } } \delta + { \ell^{(3)}_{\mathrm{ob}}\over2 } \delta^2 + o(\delta^3 ) \nonumber\end{aligned}\ ] ] and where . substituting ( [ eq : delta ] ) and ( [ eq : ellf ] ) into ( [ eq : qzero ] ) and solving for and , we obtain \\[-8pt ] c & = & - \frac{\ell^{(3)}_{\mathrm{ob}}+ b^2 g^{(3)}(\theta _ 0)}{2g^{(2)}(\theta _ 0)}. \nonumber\end{aligned}\ ] ] noting that and ( [ eq : ellf ] ) , we then obtain \delta^2 \\ & & \qquad { } + \biggl[\frac{1}{2}b\ell^{(3)}_{\mathrm{ob}}- ci_{\mathrm{ob } } \\ & & \hspace*{41pt } { } + bcg^{(2)}(\theta _ 0)+ \frac{1}{6}b^3g^{(3)}(\theta _ 0 ) \biggr ] \delta^3 + o(\delta^4 ) \\ & & \quad = - \frac{i^2_{\mathrm{ob}}}{2g^{(2)}(\theta _0)}\delta^2 + \biggl[\frac{\ell_{\mathrm{ob}}^{(3)}i_{\mathrm{ob}}}{2g^{(2)}(\theta _ 0 ) } + \frac{g^{(3)}(\theta _ 0)i^3_{\mathrm{ob}}}{6 [ g^{(2)}(\theta _ 0)]^3 } \biggr ] \delta^3 \\ & & \qquad { } + o(\delta^4).\end{aligned}\ ] ] combining this expansion with \delta + o(\delta^2 ) , \\ g^{(3)}(\theta _ 0 ) & = & q^{(3,0)}_{\mathrm{ob } } + \bigl[q^{(4,0)}_{\mathrm{ob}}+q^{(3,1)}_{\mathrm{ob}}\bigr ] \delta + o(\delta^2)\end{aligned}\ ] ] and applying lemma [ lemma : epsilons ] , we obtain \\ & & \qquad\hspace*{99pt } { } - \frac{q^{(3,0)}_{\mathrm{ob}}}{3}(\mathcal{r}i_e)^3 \biggr ] \delta ^3 \\ & & \qquad { } + o(\delta^4).\end{aligned}\ ] ] by lemma [ lemma : epsilons ] , the above equation and ( [ eq : lodex ] ) together imply that of ( [ fracmc ] ) has the expansion ( [ eq : rizer ] ) .we thank daniel gudbjartsson for many helpful discussions and suggestions , and judy h. cho for providing the inflammatory bowel disease data . for the diabetes example illustrated in figure [ fig : stroke ] , we thank daniel gudbjartsson for providing the software that performed likelihood and information calculations , gubmar thorleifsson for constructing the figure , and the diabetes research group at decode genetics for generating and providing the data .we also want to thank a number of reviewers for very constructive comments and suggestions .this research was supported in part by several national science foundation grants ( nicolae and meng ) .grant , s , f. , thorleifsson , g. , reynisdottir , i. , benediktsson , r. , manolescu , a. and sainz , j. 
et al .variant of transcription factor 7-like 2 ( tcf7l2 ) gene confers risk of type 2 diabetes ._ nature genetics _ * 38 * 320323 .lange , c. and laird , n. m. ( 2002b ) . on a general class of conditional tests for family - based association studies in genetics : the asymptotic distribution , the conditional power and optimality considerations ._ genetic epidemiology _ * 23 * 165180 .meng , x .-( 2001 ) . a congenial overview and investigation of multiple imputation inference under uncongeniality . in _survey nonresponse _( r. groves , d. dillman , j. eltinge and r. little , eds . ) 343356 .wiley , new york .middleton , f. a. et al .genomewide linkage analysis of bipolar disorder by use of a high - density single - nucleotidepolymorphism ( snp ) genotyping assay : a comparison with microsatellite marker assays and finding of significant linkage to chromosome 6q22 .j. human genetics _ * 74 * 886897 .peer , i. , de bakker , p. i. , maller , j. , yelensky , r. , altshuler , d. and daly , m. j. ( 2006 ) . evaluating and improving power in whole - genome association studies using fixed marker sets ._ nature genetics _ * 38 * 663667 .schaid , d. j. , guenther , j. c. , christensen , g. b. , hebbring , s. , rosenow , c. , hilker , c. a. , mcdonnell , s. k. , cunningham , j. m. , slager , s. , blute , m. l. and thibodeau , s. n. ( 2004 ) .comparison of microsatellites versus single - nucleotide polymorphisms in a genome linkage screen for prostate cancersusceptibility loci .j. human genetics _ * 75 * 948965 .thalamuthu , a. , mukhopadhyay , i. , ray , a. and weeks , d. e. ( 2005 ) . a comparison between microsatellite and single - nucleotide polymorphism markers with respect to two measures of information content. _ bmc genetics _ * 6 ( suppl 1 ) * s27 .
many practical studies rely on hypothesis testing procedures applied to data sets with missing information . an important part of the analysis is to determine the impact of the missing data on the performance of the test , and this can be done by properly quantifying the relative ( to complete data ) amount of available information . the problem is directly motivated by applications to studies such as linkage analyses and haplotype - based association projects , designed to identify genetic contributions to complex diseases . in such genetic studies the relative information measures are needed for experimental design , technology comparison , interpretation of the data , and for understanding the behavior of some of the inference tools . the central difficulties in constructing such information measures arise from the multiple , and sometimes conflicting , aims in practice . for large samples , we show that a satisfactory , likelihood - based general solution exists by using appropriate forms of the relative kullback leibler information , and that the proposed measures are computationally inexpensive given the maximized likelihoods with the observed data . two measures are introduced , under the null and alternative hypotheses respectively . we exemplify the measures on data from mapping studies of inflammatory bowel disease and diabetes . for small - sample problems , which appear rather frequently in practice and sometimes in disguised forms ( e.g. , measuring individual contributions to a large study ) , the robust bayesian approach holds great promise , though the choice of a general - purpose `` default prior '' is a very challenging problem . we also report several intriguing connections encountered in our investigation , such as the connection with the fundamental identity for the em algorithm , the connection with the second cr ( chapman robbins ) lower information bound , the connection with entropy , and connections between likelihood ratios and bayes factors . we hope that these seemingly unrelated connections , as well as our specific proposals , will stimulate a general discussion and research in this theoretically fascinating and practically needed area .
we have been treated to an outstanding set of review talks here , and so it is a real pleasure to summarise the main points from them .what have we learnt from the invited reviews about the big questions in solar physics , and where should we go next ? but first an advert and a look back . for the past 10 years , i have been writing a replacement for the book _solar mhd_. three days ago , i finally finished the page proofs , and so it will hopefully appear next spring , published by cambridge university press ( priest , 2014 ) .the baby " is a completely new rewrite , not just a new edition , so i had to decide on a new name for the new baby . in the end , i came up with _ magnetohydrodynamics of the sun _, so as to indicate that the subject matter is the same as before , but the book is very different .my first visit to japan was over 30 years ago in 1982 to the hinotori symposium in tokyo , and many key figures in our field can be seen in the conference photograph as rather younger people ( fig .[ fig1 ] ) . near the centre of the photographthere is uchida - san , whom i admired greatly as a highly creative mhd theorist , as well as watanabe - san and two young graduate students , sakurai - san ( hiding on the back row ) and shibata - san ( behind my shoulder ) , who at the time thought that reconnection has no role in solar flares ! how one s ideas can change over time ! on the left side of the photograph on the second or third row stands ichimoto - san between suematsu - san and a serious bespectacled tsuneta - san . finally , on the right, you can see hirayama - san and hiei - san on the front row , with doschek , acton , svestka and tandberg - hanssen a little further back .a few days ago , ichimoto - san and shibata - san took me on october 28th to kwasan observatory , where they and mai kamobe kindly laid on a special treat my first x - class flare seen alive in real time .hinode of course continues to make major contributions to fundamental understanding , thanks to teams of selfless scientists working under the brilliant pi s with the instruments and the data , both on sot , eis and xrt .we have over the past few days been treated to an excellent set of talks , so what have we learnt about the big questions and where should we go next ?the individual talks referred to here can be found in these proceedings , and related work that has been published elsewhere is listed at the end of this article .hideyuki hotta , a research student with a bright future , described how helioseismology has shown us several features : \(i ) the equator is accelerated due to the transport of angular momentum by the reynolds stress ; \(ii ) the internal velocity in the convection zone is constant on cones , due to a subtle balance including the effects of an entropy gradient and of meridional flow ; \(iii ) a strong shear layer ( the tachocline ) is located at the base of the convection zone ; \(iv ) and a near - surface shear layer , which is not understood at all but may be due to small - scale ( granular ) convection . 
and( b ) in hotta s model ( 2014).,title="fig:",width=264 ] and ( b ) in hotta s model ( 2014).,title="fig:",width=264 ] hotta ( 2014 ) has come up with a brilliant new idea for global computations of the convection zone , namely , to replace the usual anelastic approximation by a _ reduced sound - speed _ technique , in which the continuity equation is written as an example of one of his numerical experiments is shown in fig .[ fig3 ] .we also heard interesting talks about oscillatory dynamos without an omega - effect from masada - san ( 2013 ) , the effect of turbulent pumping on the solar cycle from dibyendu nandy ( 2013 ) , and stellar dynamos in which buoyant loops are generated by a spot dynamo from sacha brun ( 2013 ) .hiroko watanabe ( 2012 ) gave an interesting review of the properties of umbral dots .as the magnetic field increases , their size and rise speed decreases but their lifetime remains the same .they tend to cluster at the edges of the strongest umbral field , and it is possible in future that comparison with models may indicate properties of the subsurface field .shin toriumi , another rising star , reviewed flux emergence ( toriumi 2013 , 2014 ) .he first described observations and simulations of emergence from the deep interior , and then suggested that resistive processes are important in the birth of active regions .finally , he gave a recent example of the formation of a flaring active region in which he inferred from sunspot motions that there may well have been a single flux tube below the photosphere which split into two parts ( fig.[fig4 ] ) .in future , since simulations have shown the difficulty of encouraging flux to emerge completely through the photosphere , it would be good to try and estimate from observations and theory just how much flux is likely to pile up below the photosphere , both in the quiet sun and in active regions . the presence of such flux may affect and interact with the near - surface shear layer , and it may also provide a background seed on which convection can operate and generate granular magnetic loops .alan title ( 2013 ) presented some stunning uv slit jaw images and spectra from the new mission iris ( fig.[fig5 ] ) .these show that the chromosphere is incredibly dynamic , with rapid fine - scale brightnings and motions everywhere . 
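for reference, the reduced sound-speed form of the continuity equation mentioned at the start of this paragraph did not survive extraction. in the technique described by hotta, rempel and yokoyama ( 2014 ) it is usually written, up to notation, as
\[
\frac{\partial \rho}{\partial t} \;=\; -\,\frac{1}{\xi^{2}}\,\nabla \cdot \left( \rho\, \mathbf{v} \right),
\]
where the factor $\xi \ge 1$ reduces the effective sound speed to $c_{\rm s}/\xi$, so that the explicit time step in the deep convection zone is no longer limited by the true sound speed. this expression is reconstructed from the cited reference rather than recovered from the original text, so it should be checked against hotta ( 2014 ) before being quoted.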
ted tarbell ( 2013 ) described coordinated observations from iris , sst and hinode , while tiago pereira ( 2013 ) focussed on spicules , viggo hansteen compared the presence of a multitude of cool loops with models , and mark cheung showed examples of recurrent helical jets .clearly in future it is important to try and determine the causes of such fine - scale dynamics .we also heard a variety of other talks about the photosphere .for example , luis bellot rubio ( 2012 ) described the latest results about the ubiquitous horizontal magnetic fields discovered with hinode with typical fields of 140 gauss .daiko shiota ( 2012 ) presented details of the reversal of polar magnetic fields , while ada ortiz carbonell ( 2014 ) described an example of granular flux emergence in the form of a magnetic bubble .andreas lagg showed us the properties of granules in a light bridge , while yukio katsukawa ( 2012 ) presented details of photospheric power spectra .finally , two more future stars to watch out for in our field , sanja danilovic ( 2013 ) and david buehler ( 2013 ) , discussed the complex properties of 2d magnetic inversions for internetwork hinode / sp data and for plage flux tubes .etienne pariat ( 2012 ) gave a masterly review of coronal jets , splitting them into _ standard jets _ and _ blowout jets _, which are more complex , arise from multipolar magnetic fields and often have a cool part. often ( anemone ) jets occur at 3d null points by spine - fan reconnection , and helical structure is common .he described the basic 2d mechanism first suggested by heyvaerts , priest and rust ( 1977 ) and subsequently developed by forbes and priest ( 1984 ) as well as shibata ( 1992 ) , yokoyama ( 1996 ) and their colleagues ( fig.[fig6 ] ) .these suggested a hot fast jet produced by reconnection together with a cooler jet produced by evaporation . in three dimensions ,the process is more complex and has new features , such as untwisting jets , according to experiments by pariat ( 2010 ) and archontis ( 2013a , b ) , and it is still uncertain whether the cooler chromospheric jets are produced by slow - mode shocks or pressure build - up . in future , we need more observations and numerical experiments on the 3d aspects of jets produced by reconnection , which could shed light on several fronts . for example , what is the role of time - dependent jets in generating waves ? what is the role of magnetic helicity ? how much of the twist is releasing stored up twist and how much is due to the conversion of mutual magnetic helicity into self helicity by the reconnection process ?furthermore , what is the effect of the jets on both coronal heating ( converting kinetic energy into heat and spreading out the energy of hot jets ) and on accelerating the solar wind ?harry warren ( 2013 ) gave an innovative talk about active - region coronal heating , stressing a promising technique ( sparse bayesian inference ) to balance uncertainty and complexity in models .he also showed a comparison of an observed sdo active region with a nonlinear force - free model , in which the temperature and density on 1000 field lines were calculated .these suggest that the heating ( ) has the following scaling with magnetic field ( ) and loop length ( ) .they also imply that the heating events occur on time - scales more rapid than the cooling time . 
in future ,there is room for much more comparison between theory and observation in order to determine the likely heating mechanisms at work .ineke de moortel reviewed wave heating of the corona ( see de moortel , 2012 ; parnell and de moortel , 2012 ) , first of all describing observations of _ alfvnic waves _ in spicules and in the corona ( with comp and sdo ) , which imply that 100 w m is required in the quiet sun and 2000 w m in active regions .she then pointed out that they could be generated directly from photospheric vortices or by mode coupling from kink modes , in which the waves become localised in a flux tube boundary ( fig.[fig8 ] ) .their observational signature is at present unclear , since they could produce an impulsive or turbulent emission .waves are likely to be important in heating part of the corona , but there is a need to develop the basic theory beyond the current paradigm of simple flux tubes and to deduce observational signatures in more realistic geometry .we also heard excellent talks on a variety of other topics .marc de rosa described the effect of spatial resolution on nonlinear force - free extrapolation . on prominences ,two more future stars are david orozco suarez ( 2014 ) , who discussed the inferred magnetic field of prominence threads , and andrew hillier ( 2014 ) , who showed how observations of rising plumes can be used to infer the plasma beta in a prominence .in addition , elena dzifcakova ( 2014 ) gave interesting insights on the nature of the prominence transition region .regarding jets and flux emergence , irina kitiashvili ( 2013 ) described ejection by a photospheric vortex tube , while vasyl yurchyshyn ( 2013 ) suggested that spicules are accelerated by reconnection ( uchida - san would be pleased ) , and len culhane ( 2012 ) talked about the properties of solar wind outflow from active regions using eis .shinsuke takasao ( 2013 ) discussed jets accelerated by reconnection and shocks , and peter young ( 2014 ) introduced the idea of dark jets " in coronal holes .finally , there were several talks on coronal heating in general .three other future stars were : philippe bourdin ( 2013 ) , who presented a 3d model of an active region being heated in response to photospheric motions ; jiansen he , who talked about slow - mode waves and outflows from reconnection ; and hwanhee lee , who discussed a variety of different magnetic configurations .in addition , observations from the hi - c rocket flight were shown by sabrina savage of active - region dynamics and by paola testa ( 2013 ) of moss variations due to nanoflares .helen mason gave a masterly review of evaporation in small flares using imaging and spectroscopy from hinode / eis .in the 90 s , doschek had observed blue shifts during flares with bcs , but at the time he had no idea about their location .now with eis , del zanna and mason ( 2013 , 2014 ) have shown how eis blue shifts are located in kernels at the ends of hot coronal loops ( fig .[ fig9 ] ) .they occur only in lines at 23 mk and represent evaporation from the chromosphere .the upflowing plasma is located at a height of 200 km with a density of 10 and its properties agree with those from a conduction - driven 1d simulation .sdo / eve has produced more examples of upflows at 100200 km s . 
in future, it would be interesting to study such evaporation processes in large more - complex flares and to try and determine the heating and particle acceleration mechanisms .the subject of reconnection and particle acceleration in eruptive flares was reviewed by naoto nishizuka ( 2013 ) ( also a rising star ) .he showed how brightnings start below an erupting prominence , and suggested that the eruption drives impulsive reconnection in a current sheet .the fragmentation and ejection of plasmoids in the sheet has been demonstrated in 3d simulations ( fig.[fig10 ] ) , and test - particle orbits have found fermi acceleration at a fast - mode shock as well as stochastic acceleration at multiple separators . in future , it would be good to compare with observations of sad s ( supra - arcade downflows ) and also to develop self - consistent plasma physics of the process .then jun lin discussed large - scale current sheets in cme s , showing how they have been observed in lasco images , with plasmoids and typical thicknesses of 10 km , lengths of 10 km and alfvn mach numbers of 0.01 .numerical experiments reveal fragmentation and the properties of plasmoids with upflow and downflow velocities of 150250 km s and 90160 km s , respectively . at a magnetic reynolds number of 10 , there are typically 15 plasmoids and the turbulence enhances reconnection . in future , determining the nature and properties of the turbulence will be helpful , as well as determining the formation mechanism for the the blobs ( such as perhaps secondary tearing ) .other interesting talks that we heard were about : the location of non - thermal velocities from eis ( by louise harra , 2013 ) ; an mhd eruption ( by ed de luca , 2013 ) ; flare ribbons and current ribbons by ( miho janvier , 2013 , 2014 , another rising star ) ; shear flow with sot along a polarity inversion line ( by toshi shimizu , 2013 ) ; a flare observed with fiss / nst ( by hyungmin park , 2013 ) ; the way in which sad s indicate fragmentation of a turbulent flare current sheet ( by david mckenzie , 2013 , and kathy reeves , 2013 ) ; reconnection outflows in an x - flare ( by hirohisa hara , 2011 ) ; supersonic outflows in a prominence eruption ( by david williams , 2013 ) ; the magnetic field deduced from eit waves ( by david long , 2013 ) ; and hard x - rays from foxsi by shin ishikawa ( see krucker et al , 2011 ) .karel schrijver ( 2012 ) and hiroyuki maehara ( 2012 ) reviewed the concept of superflares .on the sun , the flare energy is mostly in white light and so is hard to measure . from kepler ,the frequency of stellar flares fits a power law and increases with rotation .the flare energy is independent of rotation but depends on spot area and so a superflare needs superspots . 
on the sun, one flare at 10 erg is expected every 800 years and one at 10 erg every 5000 years , so space weather can become much worse .we also heard how hinode helps understand stellar flares ( from petr heinzel ) , details of stellar winds from young solar - type stars ( from takeru suzuki , 2013 ) , how a magnetic storm twice as large as the carrington event is possible ( from bruce tsurutani , 2014 ) and about rapid events in tree rings ( from fusa miyake , 2013 ) .brian welsh ( 2014 ) raised three questions about the nature of photospheric magnetic fields and flows .first of all , do photospheric flows drive alfvnic turbulence to heat the atmosphere ?he does find flows that are faster and shorter - lived at smaller scales , but the observed flows do not agree with the van ballegooijen ( 2014 ) model .secondly , is flux emergence ideal or is reconnection necessary ?he often finds that flux loss , cancellations and the deduced electric fields suggest reconnection is indeed important .thirdly , what is the cause of the changes in photospheric magnetic field during flares ?perhaps a change in magnetic tension or a relation to sunquakes is involved . in order to solve these questions properly , higher - resolution observations from solar c and atstare needed , together with new ideas and computational experiments .valentin martinez pillet ( 2014 ) raised the question : do you need continuous magnetogram observations for days with very high resolution and a small field of view ?he suggested that the answer is yes , but that you also need full - disc observations to provide the context .he discussed an example of the puzzling nature of orphan penumbrae when sunspots have an unusual appearance ( fig .[ fig12]a ) .he suggested that large field of view observations enable an understanding of such localised behaviour .thus , orphan penumbra occur when a strong horizontal magnetic field in the photosphere inhibits convection .this may occur either during the emergence of active regions or when the axis of a twisted flux rope dips down to the photoshere ( fig . [fig12]b ) .we also heard about coronal loop strands from eis , aia and hi - c ( from david brooks , 2013 ) , about the plans for clasp ( from ryohko ishikawa , 2014 ) , and about nst observations ( from sasha kosovichev , 2013 ) .several amusing or memorable comments were made during the conference .george doschek asked should we think out of the box or drink out of the box ? 
" but thankfully helen beat sense into him ( fig .[ fig13 ] ) .shibata - san impressed us with dramatic kitaro music accompanying flares .watanabe - san told us about kyoto s galileo , called mr suzuki .ada ortiz carbonell talked about the most awaited romance " .andreas lagg daringly put the word naked " into his talk title .andrew hillier had his father as a co - author .louise harra admitted to stopping us from going for our beer .miho janvier said if you do nt follow my talk , blame the conference dinner alcohol " .we all agreed this had been a memorable conference in a beautiful location and were most grateful to the members of the soc for their hard work , led by shibata - san ( fig .[ fig14 ] ) , and also to the loc led by ichimoto - san and especially to shinichi nagata .but , as we parted we remembered to enjoy the beauty of the solar corona ( fig .[ fig15 ] ) .i am extremely grateful to hirohisa hara and kazunari shibata for hosting my visits to tokyo and kyoto , respectively , and for looking after me so well .it was a real delight to meet old friends and make new ones .i am also grateful to the university of tokyo and to the leverhulme trust for financial support . , su , y. , kliem , b. , and van ballegooijen , a. a. ( 2013 ) . model of a solar eruption starting from nlfff initial conditions . in _ aas solar physics division meeting_. aas solar physics division meeting , vol .103.01 . , matthews , s. , culhane , j. l. , cheung , m. c. m. , kontar , e. p. , and hara , h. ( 2013 ) .the location of non - thermal velocity in the early phases of large flares revealing pre - eruption flux ropes . * 774 * , 122 . , rempel , m. , and yokoyama , t. ( 2014 ) .high - resolution calculations of the solar global convection with the reduced speed of sound technique . i. the structure of the convection and the magnetic field without the rotation . *786 * , 24 . ,christe , s. , glesener , l. , ishikawa , s .- n . , mcbride , s. , glaser , d. , turin , p. , lin , r. p. , gubarev , m. , ramsey , b. , saito , s. , tanaka , y. , takahashi , t. , watanabe , s. , tanaka , t. , tajima , h. , and masuda , s. ( 2011 ) . .in _ society of photo - optical instrumentation engineers ( spie ) conference series_. society of photo - optical instrumentation engineers ( spie ) conference series , vol .8147 ) . ,bellot rubio , l. r. , hansteen , v. h. , de la cruz rodrguez , j. , and rouppe van der voort , l. ( 2014 ) .emergence of granular - sized magnetic bubbles through the solar atmosphere .i. spectropolarimetric observations and simulations . *781 * , 126 . ,ishido , y. , acton , l. w. , strong , k. , hirayama , t. , uchida , y. , mcallister , a. , matsumoto , r. , tsuneta , s. , shimizu , t. , hara , h. , sakurai , t. , ichimoto , k. , nishino , y. , and ogawara , y. ( 1992 ) .observations of x - ray jets with the yohkoh soft x - ray telescope . *44 * , l173l179 . ,title , a. m. , de pontieu , b. , lemen , j. r. , wuelser , j. , wolfson , c. j. , hurlburt , n. e. , schrijver , c. j. , golub , l. , deluca , e. e. , kankelborg , c. c. , hansteen , v. h. , carlsson , m. , bush , r. i. , sainz dalda , a. , and kleint , l. ( 2013 ) . ., de pontieu , b. , martnez - sykora , j. , deluca , e. , hansteen , v. , cirtain , j. , winebarger , a. , golub , l. , kobayashi , k. , korreck , k. , kuzin , s. , walsh , r. , deforest , c. , title , a. , and weber , m. ( 2013 ) . observing coronal nanoflares in active region moss . *770 * , l1 .
this conclusion to the meeting attempts to summarise what we have learnt during the conference ( mainly from the review talks ) about new observations from hinode and about the theories they have stimulated . suggestions for future study are also offered .
one of the fastest evolving field among teaching and learning research is students performance evaluation .web - based educational systems with integrated computer based testing are the easiest way of performance evaluation , so they are increasingly adopted by universities . with the rapid growth of computer communications technologies , online testing is becoming more and more common .moreover , limitless opportunities of computers will cause the disappearance of paper and pencil ( pp ) tests .computer administered tests present multiple advantages compared to pp tests .first of all , various multimedia can be attached to test items , which is almost impossible in pp tests .secondly , test evaluation is instantaneous .moreover , computerized self - assessment systems can offer various hints , which help students exam preparation .this paper is structured in more sections .section 2 presents item response theory ( irt ) and discusses the advantages and disadvantages of adaptive test systems .section 3 is dedicated to the implementation issues .the presentation of the item bank is followed by simulations for item selection strategies in order to overcome the item exposure drawback .then the architecture of our web - based cat system is presented , which is followed by a proposal for item difficulty estimation .finally , we present further research directions and give our conclusions .computerized test systems reveal new testing opportunities .one of them is the adaptive item selection tailored to the examinee s ability level , which is estimated iteratively through the answered test items .adaptive test administration consists in the following steps : ( i ) start from an initial ability level , ( ii ) selection of the most appropriate test item and ( iii ) based on the examinee s answer re - estimation of their ability level .the last two steps are repeated until some ending conditions are satisfied .adaptive testing research started in 1952 when lord made an important observation : ability scores are test independent whereas observed scores are test dependent .the next milestone was in 1960 when george rasch described a few item response models in his book .one of the described models , the one - parameter logistic model , became known as the rasch model .the next decades brought many new applications based on item response theory . in the followingwe present the three - parameter logistic model .the basic component of this model is the item characteristic function : where stands for the examinee s ability , whose theoretical range is from to , but practically the range to is used .the three parameters are : , discrimination ; , difficulty ; , guessing .discrimination determines how well an item differentiates students near an ability level .difficulty shows the place of an item along the ability scale , and guessing represents the probability of guessing the correct answer of the item .therefore guessing for a true / false item is always 0.5 . is the probability of a correct response to the item as a function of ability level . is a scaling factor and typically the value is used .figure [ irf ] shows item response function for an item having parameters . 
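the item characteristic function of the three-parameter logistic model, referred to above as equation ( [ eq3pl ] ), did not survive extraction, so the following sketch restates the standard 3pl form and evaluates it in python. the parameter values and the scaling constant d = 1.7 are conventional illustrative choices, not values recovered from the paper or its figures.
\begin{verbatim}
import numpy as np

D = 1.7  # conventional scaling constant of the logistic model (assumed; the
         # paper's exact value did not survive extraction)

def p_correct(theta, a, b, c):
    """three-parameter logistic item characteristic function: probability of a
    correct response at ability theta for an item with discrimination a,
    difficulty b and guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

# illustrative item: average discrimination, medium difficulty,
# five options with one correct answer -> guessing 0.2
theta = np.linspace(-3.0, 3.0, 7)
print(np.round(p_correct(theta, a=1.0, b=0.0, c=0.2), 3))
\end{verbatim}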
for a deeper understanding of the discrimination parameter ,see figure [ ability3 ] , which illustrates three different items with the same difficulty ( ) and guessing ( ) but different discrimination parameters .the steepest curve corresponds to the highest discrimination ( ) , and in the middle of the curve the probability of correct answer changes very rapidly as ability increases .a three - parameter logistic model item characteristic function ] item characteristic functions ] the one- and two - parameter logistic models can be obtained from equation ( [ eq3pl ] ) , for example setting results in the two - parameter model , while setting and gives us the one - parameter model .compared to the classical test theory , it is easy to realize the benefits of the former , which is able to propose the most appropriate item , based on item statistics reported on the same scale as ability .another component of the irt model is the item information function , which shows the contribution of a particular item to the assessment of ability .item information functions are usually bell shaped functions , and in this paper we used the following ( recommended in ) : where is the probability of a correct response to item computed by equation ( [ eq3pl ] ) , is the first derivative of , and is the item information function for item .item information functions ] high discriminating power items are the best choice as shown in figure [ iteminfo3 ] , which illustrates the item information functions for the three items shown in figure [ ability3 ] .all three functions are centered around the ability , which is the same as the item difficulty .test information function is defined as the sum of item information functions .two such functions are shown for a 20-item test selected by our adaptive test system : one for a high ability student ( figure [ fig : smart ] ) and another for a low ability student ( figure [ fig : dull ] ) .the test shown in figure [ fig : smart ] estimates students ability near , while the test in figure [ fig : dull ] at .test information function for a 20-item test generated for high ability students ] test information function for a 20-item test generated for low ability students ] test information function is also used for ability estimation error computation as shown in the following equation : this error is associated with maximum likelihood ability estimation and is usually used for the stopping condition of adaptive testing .test information function for all test items ] for learner proficiency estimation lord proposes an iterative approach , which is a modified version of the newton - raphson iterative method for solving equations .this approach starts with an initial ability estimate ( usually a random value ) .after each item the ability is adjusted based on the response given by the examinee .for example , after questions the estimation is made according to the following equation : where is computed using the following equation : in equation ( [ eqs ] ) represents the correctness of the answer , which is for incorrect and for correct answer . is the probability of correct answer for the item having the ability ( equation ( [ eqi ] ) ) , and is its first derivative . 
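the item information, test information and estimation-error quantities described here can be sketched directly, since the text indicates the general form [ p'(theta) ]^2 / ( p ( 1 - p ) ) for the item information, the sum over administered items for the test information, and the reciprocal square root of the test information for the ability estimation error. the item parameters and the constant d = 1.7 in the sketch below are again illustrative assumptions rather than values taken from the paper.
\begin{verbatim}
import numpy as np

D = 1.7  # assumed logistic scaling constant, as in the previous sketch

def p_correct(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def item_information(theta, a, b, c):
    """item information of the 3pl model, [p'(theta)]^2 / (p (1 - p)),
    with the derivative of p taken analytically."""
    p = p_correct(theta, a, b, c)
    s = 1.0 / (1.0 + np.exp(-D * a * (theta - b)))   # logistic part
    dp = D * a * (1.0 - c) * s * (1.0 - s)           # dP/dtheta
    return dp ** 2 / (p * (1.0 - p))

def test_information(theta, items):
    """test information = sum of item informations of the administered items."""
    return sum(item_information(theta, *it) for it in items)

def standard_error(theta, items):
    """standard error of the maximum-likelihood ability estimate."""
    return 1.0 / np.sqrt(test_information(theta, items))

# three illustrative items (a, b, c) sharing b = 0 and c = 0.2 but differing
# in discrimination, as in the discussion above
items = [(0.5, 0.0, 0.2), (1.0, 0.0, 0.2), (2.0, 0.0, 0.2)]
print([round(item_information(0.0, *it), 3) for it in items])
print(round(standard_error(0.0, items), 3))
\end{verbatim}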
in adaptive testing the best test itemis selected at each step : the item having maximum information at the current estimate of the examinee s proficiency .the most important advantage of this method is that high ability level test - takers are not bored with easy test items , while low ability ones are not faced with difficult test items .a consequence of adapting the test to the examinee s ability level is that the same measurement precision can be realized with fewer test items .along with the advantages offered by irt , there are some drawbacks as well .the first drawback is the impossibility to estimate the ability in case of all correct or zero correct responses .these are the cases of either very high or very low ability students . in such cases the test item administration must be stopped after administering a minimum number of questions .the second drawback is that the basic irt algorithm is not aware of the test content , the question selection strategy does not take into consideration to which topic a question belongs .however , sometimes this may be a requirement for generating tests assessing certain topics in a given curriculum .huang proposed a content - balanced adaptive testing algorithm .another solution to the content balancing problem is the testlet approach proposed by wainer and kiely .a testlet is a group of items from a single curriculum topic , which is developed as a unit .if an adaptive algorithm selects a testlet , then all the items belonging to that testlet will be presented to the examinee .the third drawback , which is also the major one , is that irt algorithms require serious item calibration . despite the fact that the first calibration method was proposed by alan birnbaum in 1968 andhas been implemented in computer programs such as bical ( wright and mead , 1976 ) and logist ( wingersky , barton and lord , 1982 ) , the technique needs real measurement data in order to accurately estimate the parameters of the items. however , real measurement data are not always available for small educational institutions .the fourth drawback is that several items from the item bank will be overexposed , while other test items will not be used at all .this requires item exposure control strategies .a good review of these strategies can be found in , discussing the strengths and weaknesses of each strategy .stocking made one of the first overviews of item exposure control strategies and classified them in two groups : ( i ) methods using a random component along the item selection method and ( ii ) methods using a parameter for each item to control its exposure .randomization strategies control the frequency of item administration by selecting the next item from a group of items ( e.g. out of the 5 best items ) .the second item exposure control strategy uses an exposure control parameter . 
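a minimal adaptive-testing loop tying the above together is sketched below: at each step the not-yet-administered item with maximum information at the current ability estimate is selected, the ( simulated ) response is recorded, and the ability is re-estimated. the update step uses the conventional fisher-scoring ( newton-raphson ) form attributed to lord in the text, because the exact equations did not survive extraction, and the estimate is clipped to [ -3 , 3 ] as a crude stand-in for the safeguards against all-correct or all-incorrect response patterns discussed above. the item bank, the stopping rule and the simulated examinee are illustrative assumptions.
\begin{verbatim}
import numpy as np

D = 1.7  # assumed scaling constant, as before
rng = np.random.default_rng(7)

def p_correct(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def dp_dtheta(theta, a, b, c):
    s = 1.0 / (1.0 + np.exp(-D * a * (theta - b)))
    return D * a * (1.0 - c) * s * (1.0 - s)

def info(theta, a, b, c):
    p = p_correct(theta, a, b, c)
    return dp_dtheta(theta, a, b, c) ** 2 / (p * (1.0 - p))

def update_theta(theta, resp, admin):
    """one fisher-scoring step of the maximum-likelihood ability estimate;
    resp[i] = 1 for a correct and 0 for an incorrect answer to item admin[i]."""
    num, den = 0.0, 0.0
    for u, (a, b, c) in zip(resp, admin):
        p = p_correct(theta, a, b, c)
        num += (u - p) * dp_dtheta(theta, a, b, c) / (p * (1.0 - p))
        den += info(theta, a, b, c)
    return float(np.clip(theta + num / den, -3.0, 3.0))  # keep the toy estimate bounded

# illustrative random item bank and a simulated examinee of true ability 1.0
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-3, 3), 0.2) for _ in range(200)]
true_theta, theta, used, resp, admin = 1.0, 0.0, set(), [], []

for _ in range(20):                                      # at most 20 items
    k = max((i for i in range(len(bank)) if i not in used),
            key=lambda i: info(theta, *bank[i]))         # most informative item
    used.add(k)
    admin.append(bank[k])
    resp.append(int(rng.random() < p_correct(true_theta, *bank[k])))
    theta = update_theta(theta, resp, admin)
    if 1.0 / np.sqrt(sum(info(theta, *it) for it in admin)) < 0.33:
        break                                            # target precision reached

print(f"estimated ability after {len(admin)} items: {theta:.2f}")
\end{verbatim}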
in case of an item selection due to its maximum information for the examinee s ability level , the item will be administered only if its exposure control parameter allows it .we have used our own item bank from our traditional computer based test system `` intelligent '' .the item bank parameters ( - discrimination , - difficulty , - pseudo guessing ) were initialized by the tutor .we used 5 levels of difficulty from very easy to very difficult , which were scaled to the [ -3,3 ] interval .the guessing parameter of an item was initialized by the ratio of the number of possible correct answers to the total number of possible answers .for example , it is 0.1 for an item having two correct answers out of five possible answers .discrimination is difficult to set even for a tutor , therefore we used for each item . item information clusters and their size ] in our implementation we have tried to overcome the disadvantages of irt .we started to administer items adaptively only after the first five items .ability ( ) was initialized based on the number of correct answers given to these five items , which had been selected to include all levels of difficulty .we used randomization strategies to overcome item exposure .two randomization strategies were simulated . in the first one we selected the top ten items , i.e. the ten items having the highest item information .however , this is better than choosing the single best item , thus one must pay attention to the selection of the top ten items . there may be more items having the same item information for a given ability , therefore it is not always the best strategy choosing the first best item from a set of items with the same item information . to overcome this problem , in the second randomization strategy we computed the item information for all items that were not presented to the examinee and clustered the items having the same item information .the top ten items were selected using the items from these clusters . 
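the clustering-based randomization strategy described here ( and completed in the next paragraph ) can be sketched as follows: items whose information at the current ability estimate ties are grouped into one cluster, the list of ten candidates is filled cluster by cluster in decreasing order of information, an oversized cluster is subsampled at random, and the administered item is finally drawn at random from the candidates. the rounding tolerance used to detect ties and the toy item bank are implementation choices of this sketch, not details taken from the paper.
\begin{verbatim}
import numpy as np

D = 1.7  # assumed scaling constant, as before
rng = np.random.default_rng(3)

def p_correct(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def info(theta, a, b, c):
    s = 1.0 / (1.0 + np.exp(-D * a * (theta - b)))
    dp = D * a * (1.0 - c) * s * (1.0 - s)
    p = p_correct(theta, a, b, c)
    return dp ** 2 / (p * (1.0 - p))

def select_item_clustered(theta, bank, used, top=10, ndigits=6):
    """randomized selection from the ten best items, treating items with the
    same (rounded) information as one cluster so that ties do not always
    favour the item that happens to come first in the bank."""
    clusters = {}
    for i, it in enumerate(bank):
        if i in used:
            continue
        clusters.setdefault(round(info(theta, *it), ndigits), []).append(i)
    candidates = []
    for _, cluster in sorted(clusters.items(), reverse=True):
        need = top - len(candidates)
        if need <= 0:
            break
        if len(cluster) > need:                  # subsample an oversized cluster
            cluster = list(rng.choice(cluster, size=need, replace=False))
        candidates.extend(cluster)
    return int(rng.choice(candidates))

# illustrative bank with deliberately duplicated parameters to create ties
bank = [(1.0, b, 0.2) for b in np.repeat(np.linspace(-3, 3, 20), 5)]
print(select_item_clustered(theta=0.5, bank=bank, used=set()))
\end{verbatim}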
if the best cluster had less than ten items , the remainder items were selected from the next best cluster .if the best cluster had more than ten items , the ten items were selected randomly from the best cluster .for example , figure [ fig : ii_clusters ] shows the 26 clusters of item information values constructed from 171 items for the ability of .the best 10 items were selected by taking the items from the first two clusters ( each having exactly 1 item ) and selecting randomly another 8 items out of 13 from the third cluster .figure [ fig : plotitemexposure ] shows the results from a simulation where we used an item bank with 171 items ( test information function is shown in figure [ fig : all ] for all the 171 items ) , and we simulated 100 examinees using three item selection strategies : ( i ) best item ( ii ) random selection from the 10 best items ( iii ) random selection from the 10 best items and clustering .the three series in figure [ fig : plotitemexposure ] are the frequencies of items obtained from the 100 simulated adaptive tests .tests were terminated either when the number of administered items had exceeded 30 or the ability estimate had fallen outside the ability range .the latter were necessary for very high and very low ability students , where adaptive selection could not be used .the examinee s answers were simulated by using a uniform random number generator , where the probability of correct answer was set to be equal to the probability of incorrect answer .item exposure with or without randomization control strategies ] in order to be able to compare these item exposure control strategies , we computed the standard deviance of the frequency series shown in figure [ fig : plotitemexposure ] .the standard deviance is for the first series not using any item exposure control , it is for the second one , whereas for the third one is .it is obvious that the third series is the best from the viewpoint of item exposure .consequently , we will adopt this strategy in our distributed cat implementation .after the matlab simulations we implemented our cat system as a distributed application , using java technologies on the server side and adobe flex on the client side .the general architecture of our system is shown in figure [ fig : cat_architecture ] .the administrative part is responsible for item bank maintenance , test scheduling , test results statistics and test termination criteria settings . in the near future we are planning to add an item calibration module .cat - architecture ] the test part is used by examinees , where questions are administered according to settings .after having finished the test , the examinee may view both their test results and knowledge report .due to the lack of measurement data necessary for item calibration , we were not able to calibrate our item bank .however , 165 out of 171 items of our item bank were used in our self - assessment test system `` intelligent '' in the previous term .based on the data collected from this system , we propose a method for difficulty parameter estimation .although there were no restrictions in using the self - assessment system , i.e. 
users could have answered an item several times , we consider that the first answer of each user could be relevant to the difficulty of the item .item difficulty calibration ] figure [ fig : itemcalibration1 ] shows the original item difficulty ( set by the tutor ) and the difficulty estimated by the first answer of each user .the original data series uses 5 difficulty levels scaled to the $ ] interval .the elements of the `` first answers '' series were computed by the equation : .we computed the mean difficulty for both series , and we obtained 0.60 for the original one and 0.62 for the estimated one .conspicuous differences were found at the _ very easy _ and _ very difficult _ item difficulties .at present we are working on the parameter estimation part of our cat system .although there are several item parameter calibration programs , this task must be taken very seriously because it influences measurement precision directly .item parameter estimation error is an active research topic , especially for fixed computer tests . for adaptive testing , this problem has been addressed by paper .researchers have empirically observed that examinees suitable for item difficulty estimations are almost useless when estimating item discrimination .stocking analytically derived the relationship between the examinee s ability and the accuracy of maximum likelihood item parameter estimation .she concluded that high ability examinees contribute more to difficulty estimation of difficult and very difficult items and less on easy and very easy items .she also concluded that only low ability examinees contribute to the estimation of guessing parameter and examinees , who are informative regarding item difficulty estimation , are not good for item discrimination estimation .consequently , her results seem to be useful in our item calibration module .in this paper we have described a computer adaptive test system based on item response theory along its implementation issues .our cat system was implemented after one year experience with a computer based self - assessment system , which proved useful in configuring the parameters of the items .we started with the presentation of the exact formulas used by our working cat system , followed by some simulations for item selection strategies in order to control item overexposure .we also presented the exact way of item parameter configuration based on the data taken from the self - assessment system .although we do not present measurements on a working cat system , the implementation details presented in this paper could be useful for small institutions planning to introduce such a system for educational measurements on a small scale .m. barla , m. bielikova , a. b. ezzeddinne , t. kramar , m. simko , o. vozar , on the impact of adaptive test question selection for learning efficiency , http://www.sciencedirect.com/science/journal/03601315[_computers_ & _ educations _ ] , * 55 , * 2 ( 2010 ) 846857 .a. baylari , g. a. montazer , design a personalized e - learning system based on item response theory and artificial neural network approach , _ http://www.sciencedirect.com/science/journal/09574174[expert systems ] with applications _ ,* 36 , * 4 ( 2009 ) 80138021 .e. georgiadou , e. triantafillou , a. 
http://conta.uom.gr/conta/cv/economidescv-en.pdf[economides ] , a review of item exposure control strategies for computerized adaptive testing developed from 1983 to 2005 , _ journal of http://escholarship.bc.edu/jtla/[technology , learning , and assessment ] _ , * 5 , * 8 ( 2007 ) 33 p. r. k. http://www.labmeeting.com/papers/author/hambleton-rk[hambleton ] , r. w. jones , http://www.ncme.org/pubs/items/24.pdf[comparison of classical ] test theory and item response theory and their applications to test development , _ items instructional topics in educational measurement _ , 253262 .s. huang , a content - balanced adaptive testing algorithm for computer - based training systems , in : _ intelligent tutoring systems _( eds . c. frasson , g. gauthier , a. lesgold ) , http://www.springer.com/?sgwid=0-102-0-0-0[springer ] , berlin , 1996 , pp .. w. j. http://www.utwente.nl/gw/omd/afdeling/vanderlinden/[linden ] , c. a. w. glas , _ capitalization on item calibration error in computer adaptive testing _ ,http://lsac.biz/lsacresources/research/ct/ct-98-04.pdf[lsac research report 98 - 04 ] , 2006 , 14 p. m. lilley , t. barker , c. britton , the development and evaluation of a software prototype for computer - adaptive testing , http://www.sciencedirect.com/science/journal/03601315[_computers_ & _ education _ ] , * 43 , * 12 ( 2004 ) 109123. m. l. http://apm.sagepub.com/content/31/3/167.extract[stocking ] , _ controlling item exposure rates in a realistic adaptive testing paradigm _, http://www.eric.ed.gov/pdfs/ed384663.pdf[technical report rr 3 - 2 ] , educational testing service , princeton , nj , 1993 .m. l. http://apm.sagepub.com/content/31/3/167.extract[stocking ] , specifying optimum examinees for item parameter estimation in item response theory , _ ttp://www.psychometrika.org/ journal / psychometrika.html[psychometrika ] _ , * 55 , * 3 ( 1990 ) 461475 .h. wainer , g. l. kiely , item clusters and computerized adaptive testing : a case for testlets , _ journal of http://www.blackwellpublishing.com/journal.asp?ref=0022-0655[educational measurement ] _ , * 24 , * 3 ( 1987 ) 185201 .
one of the fastest evolving fields in teaching and learning research is student performance evaluation . computer based testing systems are increasingly adopted by universities . however , the implementation and maintenance of such a system and its underlying item bank is a challenge for an inexperienced tutor . therefore , this paper discusses the advantages and disadvantages of computer adaptive test ( cat ) systems compared to conventional computer based test systems . furthermore , a few item selection strategies are compared in order to overcome the item exposure drawback of such systems . the paper also presents our cat system along with its development steps . in addition , an item difficulty estimation technique is presented , based on data taken from our self - assessment system .
with the proliferation of smart mobile devices , computationally intensive applications , such as online gaming , video conferencing and 3d modeling , are becoming prevalent .however , the mobile devices normally possess limited resources , e.g. , limited battery energy and computation capability of local cpus , and thus may suffer from unsatisfactory computation experience .mobile - edge computing ( mec ) emerges as a promising remedy . by offloading the computation tasks to the physically proximal mec servers ,the quality of computation experience , e.g. , the device energy consumption and the execution delay , could be greatly improved .computation offloading policies play critical roles in mec , and determine the efficiency and achievable computation performance . specifically , as computation offloading requires wireless data transmission , optimal computation offloading policies should take the time - varying wireless channel into consideration . in ,a stochastic control algorithm adapted to the wireless channel condition was proposed to decide the offloaded software components .a game - theoretic computation offloading approach for multi - user mec systems was proposed in , and this study was extended to multi - cell settings in . besides , the energy - delay tradeoff in cloud computing systems with heterogeneous types of computation tasks and multi - core mobile devices was investigated using lyapunov optimization techniques in and , respectively .in addition , a dynamic computation offloading policy for mec systems with mobile devices powered by renewable energy was developed in . for most mobile applications ,the execution time is in the range of tens of milliseconds , which is much longer than the time duration of a channel block , whose typical value is a few milliseconds . in other words ,the execution process may experience multiple channel blocks , which makes the computation offloading policy design a highly challenging two - timescale stochastic optimization problem .in particular , in a larger timescale , whether to offload a task to the mec server or not needs to be decided , while in a smaller timescale , the transmission policy for offloading the input data of an application should adapt to the instantaneous wireless channel condition . to handle this issue , an initial investigation for two - timescale computationoffloading policy design was conducted in , which , however , only considered to minimize the energy consumption of executing a single computation task and the queueing delay incurred by multiple tasks was ignored . moreover , with mec , the potential of executing multiple tasks concurrently should be fully exploited in order to utilize the local and cloud computation resources efficiently and improve the quality of computation experience to the greatest extent . 
in this paper , we will investigate an mec system that allows parallel computation task execution at the mobile device and at the mec server .the execution and computation offloading processes of the computation tasks running at the mobile device may be across multiple channel blocks , and the generated but not yet processed tasks are waiting in a task buffer .the average delay of each task and the average power consumption at the mobile device under a given computation task scheduling policy are first analyzed using markov chain theory .we then formulate the power - constrained delay minimization problem .an efficient one - dimensional search algorithm is developed to find the optimal stochastic computation offloading policy .simulation results show that the proposed stochastic computation task scheduling policy achieves substantial reduction in the execution delay compared to the baseline schemes .the rest of this paper is organized as follows .we introduce the system model in section ii . the average execution delay and the power consumption of the mobile device under a given stochastic computation task scheduling policyare analyzed in section iii . in section iv ,a power - constrained delay minimization problem is formulated and the associated optimal task scheduling policy is obtained .simulation results are shown in section v and conclusions are drawn in section vi ., scaledwidth=48.0% ] we consider a mobile - edge computing ( mec ) system as shown in fig .[ fig : system - model ] , where a mobile device is running computation - intensive and delay - sensitive applications with the aid of an mec server .this mec server could be a small data center installed at the wireless access point . by constructing a virtual machine associated with the mobile device, the mec server can execute the computation tasks on behalf of the mobile device .the cpu and the transmission unit ( tu ) at the mobile device are of particular interests , which can execute the computation tasks locally and transmit the input data of the computation tasks to the mec server for cloud computing , respectively . besides , due to the limited battery size and in order to prolong the device lifetime , we assume that the average power consumption at the mobile device is constrained by .we assume that time is divided into equal - length time slots and the time slot length is denoted as . at the beginning of each time slot , with probability , a new task is generated .the computation tasks can either be executed at the mobile device by the local cpu or be offloaded to the mec server for cloud computing .the arrived but not yet executed tasks will be queued in a task buffer with a sufficiently large capacity .denote ,v_{c}\left[t\right]\in\{0,1\} ] ( =1 ] ( =0 ] . in each time slot, the decision is made by the mobile device , and the dynamic of the task buffer can be expressed as =\min\{\left(q\left[t\right]-v_{l}\left[t\right]-v_{c}\left[t\right]\right)^{+}+a\left[t\right],q\ } , t=1,\cdots , \label{qdynamic}\ ] ] where , ] is the task arrival indicator , i.e. 
, if a task arrives at the time slot , we have =1 ] .* i ) local computation model : * we assume that the cpu at the mobile device is operating at frequency ( in hz ) if a task is being executed , and its power consumption is given by ( in w ) ; otherwise , the local cpu is idle and consumes no power .the number of required cpu cycles for executing a task successfully is denoted as , which depends on the types of mobile applications .in other words , time slots are needed to complete a task .we use \in \{0,1,\cdots , n-1\} ] means the local cpu is idle , while = n ] indicates that the task will be completed at the end of time slot and the local cpu will be available for a new task starting from the time slot . *ii ) cloud computation model : * in order to offload a computation task to the mec server , all the input data of the task should be successfully delivered to the mec server over the wireless channel . without loss of generality , we assume the input data of each task consists of equal - size data packets and each packet contains bits . for simplicity , on - off power control is adopted .we assume the channel side information is available at the mobile device , and thus a packet can be successfully transmitted to the mec server if the achievable throughput in the time slot , ,p_{\rm{tx}}\right) ] , where ] to represent the state of the tu , where =0 ] means that the packet of one task is scheduled to transmit in time slot .when all the input bits are successfully received , the mec server begins to execute the task .assume that the mec server is equipped with a multi - core cpu so that concurrent execution of multiple tasks is feasible .similar to local computation , time slots are required for completing the task at the mec server , where denotes the cpu - cycle frequency at the mec server .besides , we denote the delay for feeding back the computation results as , which is viewed as a constant .in the considered system , the system state ] .thus , the state space can be expressed as , where `` '' denotes the cartesian product . in the following, we will introduce the stochastic computation task scheduling policy , and analyze the average delay of each task and the average power consumption at the mobile device using markov chain theory . in order to minimize the average delay of each task and to meet the average power constraint ,the mobile device should make the computation task scheduling decision at each time slot , i.e. , whether to schedule a task for local computing or to offload it to the mec server . to characterize the computation task scheduling policy , we introduce a set of probabilistic parameters where ,\forall \bm{\tau}\in\mathcal{s},k=1,2,3,4 ] , there is no task to be scheduled , i.e. , for and . in the following , we consider the cases with >0 ] . in this case , both the local cpu and the transmitter are idle .thus , at most two computation tasks can start to be processed , i.e. , one for local computing and the other for computation offloading . given the system state =(q[t],c_{t}[t],c_{l}[t])=(i,0 , 0) ] and > 0 ] and =0 ] and >0 ] .it is worthwhile to note that the performance of the mec system depends on the adopted computation offloading policy , which can be characterized by the set of parameters and the optimal computation offloading policy will be developed in section iv . 
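to make the task buffer dynamic concrete, the following minimal sketch simulates the queue evolution q[t+1] = min{(q[t] - v_l[t] - v_c[t])^+ + a[t], Q} under a naive placeholder rule (always execute locally when the cpu is idle, never offload). the parameter values and the scheduling rule are illustrative assumptions only; they are not the policy developed later in the paper.

import random

# illustrative parameters (assumptions, not taken from the paper)
alpha = 0.15     # task arrival probability per time slot
Q = 10           # task buffer capacity
N = 5            # time slots needed to complete a task at the local cpu
T = 10000        # number of simulated slots

q = 0            # tasks waiting in the buffer
cpu_left = 0     # remaining slots of the task currently held by the local cpu
samples = []

for t in range(T):
    if cpu_left > 0:                 # local cpu works on its current task
        cpu_left -= 1
    # naive placeholder policy: start a local execution whenever possible and
    # never offload (v_c = 0); the probabilistic policy of section iii would
    # replace this if-condition
    v_l = 1 if (cpu_left == 0 and q > 0) else 0
    if v_l == 1:
        cpu_left = N
    a = 1 if random.random() < alpha else 0          # task arrival indicator
    q = min(max(q - v_l, 0) + a, Q)                  # buffer dynamic
    samples.append(q)

print("average queue length:", sum(samples) / T)

replacing the naive rule by probabilistic scheduling decisions turns this simulation into the markov chain analysed in the next subsection.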
in this subsection, we will analyze the average delay of each task and the average power consumption at the mobile device by modeling the mec system as a markov chain .let denote the one - step state transition probability from state to .it can be checked under a given computation task scheduling policy .thus , the steady - state distribution can be obtained by solving the following linear equation set : * average delay : * as each computation task experiences a waiting stage and a processing stage ( either local or cloud computing ) after its arrival , according to the little s theorem , the average queueing delay can be expressed as =i\}=\frac{1}{\alpha}\sum\limits _{ i=0}^{q}i\sum\limits _ { m=0}^{m}\sum\limits _ { n=0}^{n-1}\pi_{(i , m , n)},\label{eq : queuing_time}\ ] ] where denotes the task arrival rate and =i\}=\sum_{m=0}^{m+1}\sum_{n=0}^{n}\pi_{(i , m , n)} ] denotes the probability that the channel in not in outage .consequently , the average processing time of each task can be expressed as where denotes the proportion of computation tasks that are executed locally at the mobile device in the long run and can be computed according to the following equation : where the state sets are defined as , and , respectively .therefore , the average delay of each computation task is the sum of the queueing delay and the processing latency , which can be written as * average power consumption : * let and denote the probabilities of local computations and successful packet transmissions with power consumptions and , respectively , given the system state .thus , the average power consumption at the mobile device is given by where the power coefficients and for each state can be expressed as and respectively .the derivation of the power coefficients and is deferred to appendix [ sec : power_cof_state ] .therefore , by averaging over all the state , we have with the average power coefficients given by and , respectively .in this section , we will formulate an optimization problem to minimize the average delay of each computation task subject to the average power constraint at the mobile device .an optimal algorithm will then be developed for the formulated optimization problem .based on the delay and power analysis in section iii - b , the power - constrained delay minimization problem can be formulated in : {cl } \mathcal{p}_{1}:\min\limits_{\{g_{\bm{\tau}}^{k}\ } } & \bar{t}=\frac{1}{\alpha}\sum\limits _{ i=0}^{q}i\sum\limits _ { m=0}^{m}\sum\limits _ { n=0}^{n-1}\pi_{(i , m , n)}+\eta n+(1-\eta)t_{c}\\ \ \ \ \ { \rm{s.t . } } & \begin{cases } \bar{p}\leq\bar{p}_{max } , & \ \ \ \ \ \ \ \ \ \ \ ( \mathrm{a})\\ \sum_{\bm{\tau}{'}\in\mathcal{s}}\chi_{\bm{\tau}{'},\bm{\tau}}\pi_{\bm{\tau}{'}}=\pi_{\bm{\tau}},\,\bm{\tau}\in\mathcal{s } , & \ \ \ \ \ \ \ \ \ \ \ ( \mathrm{b})\\ \sum\limits _ { i=0}^{q}\sum\limits _ { m=0}^{m}\sum\limits _ { n=0}^{n-1}\pi_{(i , n , m)}=1 , & \ \ \ \ \ \ \ \ \ \ \ ( \mathrm{c})\\ \sum_{k=1}^{4}g_{(i , m , n)}^{k}=1,\,\forall i , m , n , & \ \ \ \ \ \ \ \ \ \ \ ( \mathrm{d})\\ g_{(i , m , n)}^{k}\geq0,\,\forall i , m , n , k , & \ \ \ \ \ \ \ \ \ \ \ ( \mathrm{e } ) \end{cases } \end{array}\label{eq : opt_problem}\ ] ] where ( [ eq : opt_problem].a ) is the average power constraint , ( [ eq : opt_problem].b ) and ( [ eq : opt_problem].c ) denote the balance equation set , and is given by ( [ eq : prob_eta ] ) . it is worthwhile to note that once is determined , can be obtained according to ( [ eq : steady_state ] ) . 
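as a small numerical illustration of the analysis above, the sketch below computes the stationary distribution of a finite markov chain from its transition matrix and applies little's theorem to obtain the average queueing delay. the 3-state chain and the arrival rate are toy assumptions chosen for illustration, not the actual mec state space.

import numpy as np

# toy transition matrix over queue lengths {0, 1, 2} (an assumption for illustration)
P = np.array([[0.7, 0.3, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.5, 0.5]])
alpha = 0.3  # assumed task arrival rate (tasks per slot)

# stationary distribution: solve pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

avg_queue = sum(i * pi[i] for i in range(3))   # expected number of waiting tasks
avg_delay = avg_queue / alpha                  # little's theorem: delay = E[q] / alpha
print("stationary distribution:", pi)
print("average queueing delay (slots):", avg_delay)

the average processing time (local or cloud execution) would then be added to this queueing delay, as in the expression for the total delay above.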
however , as is non - convex , the optimal solution is not readily available . in the following, we will reformulate into a series of linear programming ( lp ) problems in order to obtain its optimal solution .first , we define the occupation measure as , which is the probability that the system is in state while decision is made . by definition , , and thus . by replacing with in ,we obtain an equivalent formulation of as follows : {cl } \mathcal{p}_{2}:\min\limits _ { \bm{x},\eta } & \bar{t}=\frac{1}{\alpha}\sum\limits_{\bm{\tau}\in\mathcal{s}}\sum\limits _ { k=1}^{4}i\cdot x_{\bm{\tau}}^{k}+\eta n+(1-\eta)t_{c}\\ \ \ \ \ { \rm{s.t . } } & \begin{cases } \nu_{loc}(\bm{x})p_{loc}+\beta\nu_{tx}(\bm{x})p_{tx}\leq\bar{p}_{max } , & \ \ \ \ \ ( \mathrm{a})\\ \gamma(\bm{x},\eta)=0 , & \ \ \ \ \ ( \mathrm{b})\\ f_{\bm{\tau}}(\bm{x})=0,\,\forall\bm{\tau}=(i , m , n)\in\mathcal{s } , & \ \ \ \ \ ( \mathrm{c})\\ \sum\limits _ { i=0}^{q}\sum\limits _ { m=0}^{m}\sum\limits _ { n=0}^{n-1}\sum\limits _ { k=1}^{4}x_{(i , m , n)}^{k}=1 , & \ \ \ \ \ ( \mathrm{d})\\ x_{(i , m , n)}^{k}\geq0,\forall i , m , n , k,\,\eta\in[0,1 ] , & \ \ \ \ \ ( \mathrm{e } ) \end{cases } \end{array}\label{eq : lp_problem}\ ] ] where and are linear functions of the variables given by and respectively , and and can be expressed as and respectively .denotes the probability that the current system state is and decision is made , while the system state in the next time slot is , which is independent with in contrast to .note that . ]the optimal solution and the optimal value of are denoted as and , respectively .once is known , the optimal computation task scheduling policy can be obtained as due to the product form of and in ( [ eq : lp_problem].b ) , is still a non - convex problem .fortunately , we observe that for a given value of , reduces to an lp problem in terms of variables .therefore , we can first obtain the optimal solution for arbitrary ] . in this case , both the local cpu and the tu are idle and each of them is available for processing a new task . when at least two computation tasks are waiting in the task buffer , one of fourcomputation task scheduling decisions can be chosen with probability , as presented in ( [ casei1 ] ) . by jointly considering all possible task arrival states and channel states , for any given ( ), the state transition probabilities can be expressed as where the state mapping function is defined as for example , state will transfer to state with probability , when one new task arrives at the task buffer and one waiting task is sent to the local cpu .when there is just one task in the task buffer , the mobile device has three possible decisions : local execution , cloud execution and remaining idle , as given by ( [ casei2 ] ) . accordingly , for , the state transition probability can be written as when the task buffer is empty , neither local execution nor cloud execution is needed . in this case , the system state transits due to one new task arrival , and accordingly the state transition probability can be simplified as * case ii : * >0 ] . in this case , the local cpu is available to execute a new task while the task offloading is in process .when there is at least one packet in the task buffer , i.e. 
, ( ) , the computation task scheduling policy is given by ( [ caseii ] ) .accordingly , the state transition probabilities can be written as when the task buffer is empty , there exist four possible state transitions with their transition probabilities given by depending on whether there are one new task arrival and one successful packet delivery .* case iii : * =0 ] .when the local cpu is busy in task execution while the tu is idle , the decision on task offloading is made with probability when the task buffer is non - empty , as shown in ( [ caseiii ] ) .similarly , the state transition probabilities can be obtained as * case iv : * >0 ] . in this case, both of the local cpu and the tu are busy in processing . for ( , , ) ,there also exist four possible state transitions with their transition probabilities given by [ [ the - power - coefficients - mu_bmtauloc - and - mu_bmtautx - secpower_cof_state ] ] the power coefficients and [ sec : power_cof_state ] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * case i : * =c_{l}[t]=0 ] and =0 ] and >0 ] and >0 $ ] .the power coefficients are and , since the local cpu is busy and one packet of a task will be successfully delivered with probability .s. barbarossa , s. sardellitti , and p. d. lorenzo , `` communicating while computing : distributed mobile cloud computing over 5 g heterogeneous networks , '' _ ieee signal process . mag .31 , no . 6 , pp .45 - 55 , nov . 2014 .s. sardellitti , g. scutari , and s. barbarossa , `` joint optimization of radio and computational resources for multicell mobile - edge computing , '' _ ieee trans .signal inf . process . over netw ._ , vol . 1 , no89 - 103 , jun .j. kwak , y. kim , j. lee , and s. chong , `` dream : dynamic resource and task allocation for energy minimization in mobile cloud systems , '' _ ieee j. sel .areas commun . _ ,2510 - 2523 , dec . 2015 .w. zhang , y. wen , k. guan , d. kilper , h. luo , and d. wu , `` energy - optimal mobile cloud computing under stochastic wireless channel , '' _ ieee trans .wireless commun ._ , vol.12 , no .9 , pp . 4569 - 4581 , sep .
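the reformulation into a family of linear programs and the one-dimensional search over the local-execution fraction can be sketched as follows. the helper build_lp is hypothetical: it is assumed to return, for a fixed value of the fraction, the objective vector and the constraint matrices encoding the power, balance and normalization constraints in the occupation measure; a plain grid search stands in for the one-dimensional search, so this is a structural sketch of the solution procedure rather than the concrete formulation.

import numpy as np
from scipy.optimize import linprog

def solve_lp(c, A_ub, b_ub, A_eq, b_eq):
    # for a fixed eta the problem is linear in the occupation measure x
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun if res.success else np.inf

def one_dimensional_search(build_lp, grid=np.linspace(0.0, 1.0, 101)):
    # build_lp(eta) is a hypothetical helper returning (c, A_ub, b_ub, A_eq, b_eq)
    # for the lp obtained by fixing eta; each grid point costs one lp solve
    best_eta, best_val = None, np.inf
    for eta in grid:
        val = solve_lp(*build_lp(eta))
        if val < best_val:
            best_eta, best_val = eta, val
    return best_eta, best_val

once the best fraction and the corresponding occupation measure are found, the stochastic scheduling probabilities are recovered by the normalization described above.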
mobile - edge computing ( mec ) emerges as a promising paradigm to improve the quality of computation experience for mobile devices . nevertheless , the design of computation task scheduling policies for mec systems inevitably encounters a challenging two - timescale stochastic optimization problem . specifically , in the larger timescale , whether to execute a task locally at the mobile device or to offload a task to the mec server for cloud computing should be decided , while in the smaller timescale , the transmission policy for the task input data should adapt to the channel side information . in this paper , we adopt a markov decision process approach to handle this problem , where the computation tasks are scheduled based on the queueing state of the task buffer , the execution state of the local processing unit , as well as the state of the transmission unit . by analyzing the average delay of each task and the average power consumption at the mobile device , we formulate a power - constrained delay minimization problem , and propose an efficient one - dimensional search algorithm to find the optimal task scheduling policy . simulation results are provided to demonstrate the capability of the proposed optimal stochastic task scheduling policy in achieving a shorter average execution delay compared to the baseline policies . mobile - edge computing , task scheduling , computation offloading , execution delay , qoe , markov decision process .
identifying mesoscopic structures and their relation to the function of a system in biological , social and infrastructural networks is one of the main challenges for complex networks analysis . until recently , most approaches focused on static network representations , although in truth most systems of interests are inherently dynamical .recent theoretical progresses and the availability of data inspired a few innovative methods , which mostly revolve around unfolded static representations of a temporal dynamics , constraints on the community structure of consecutive graph snapshots or on global approaches .+ in this article , we take a different route and tackle the problem of finding and characterising the relevance of community structures at different time - scales by directly incorporating the time - dependence in the method .+ inspired by the notion of _ stability _ , we propose a related measure , _ temporal stability _ , which naturally embeds the time - dependence and order of interactions between the constituents of the system .temporal stability allows not only to compare the goodness of partitions over specific time scales , as its static counterpart , but also to _ find _ the best partition at any time and over any time scale . in the following , we briefly review the main ingredients of static stability and introduce their natural extensions to temporal networks .we then present a benchmark model as a proof of principle , and then analyse two real - world datasets , finding pertinent mesoscopic structures at different time - scales .like the map equation , stability exploits the properties of the stationary distribution random walkers exploring a static network and of long persistent flows on a network .while the map equation relies on finding the most compressed description of a random walker trajectory in terms of its asymptotic distribution , the intuition behind stability is that walkers exploring the network will tend to stay longer in a well defined cluster before escaping to explore the rest of the network .the object of interest is thus the auto - covariance matrix of an unbiased random walk on a network for a given partition , i.e. 
the higher the autocorrelation , the better the description of a system in terms of modules by .after markov time - steps of exploration of the network by the random walkers , it can be compactly written as : where is the partition matrix assigning nodes to communities , the transition matrix of the random walk on , its stationary distribution and .the term can be interpreted as a null model that represents the asymptotic modular structure against which the structure unveiled by the random walkers exploration of the network is tested .the stability of partition at markov time is then defined by : the magnitude of the trace of the autocovariance matrix represents the extent to which walkers are confined within the clusters defined by .the minimum over the markov time during which the walkers were allowed to move ensures that the measure conservative .the value of at which a given partition becomes optimal conveys information about which topological scales of the network are best described by the partition considered .moreover , the interval over which a partition is optimal is related to the importance of that specific scale across the hierarchy of scales present in the network .+ extending this measure to temporal networks requires generalizing its ingredients : the partition , the transition matrix and the asymptotic walker distribution .[ [ temporal - partition . ] ] temporal partition .+ + + + + + + + + + + + + + + + + + + let us define a discrete temporal network as a time - ordered collection of graph snaphots , , represented by their adjacency matrices .the static partition matrix is naturally extended to the temporal case by allowing it to be time - dependent , , with being the number of slices in the temporal dataset .at this point it is worth noting that , like , does not need to change at every time step . + [ [ transition - matrix . ] ] transition matrix .+ + + + + + + + + + + + + + + + + + the transition matrix will in general change between time steps and therefore one does not simply iterate it .we define the single - snapshot transition matrix relative to as , with , where is the constant vector with unit components .the -limit is equivalent to including a self - loop of vanishing weight and is required to ensure that the transition matrix is well - defined for nodes that are disconnected at time . using the matrices we can define a time - ordered product which represents the transition matrix for the evolution of the random walker across the changing network between and : where we impose right multiplication to respect the arrow of time .+ [ [ stationary - walker - distribution . ] ] stationary walker distribution .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the last element we need is the stationary walker distribution on a time - varying network .we will note this distribution with .different types of random walks can be devised , depending on the model for the dynamics of the network , therefore , is not unique ; different dynamics preserving different statistical features of the system . herewe consider the case where the time - evolving connectivity of each node is known .the activity - driven model , introduced by perra et al . 
is particularly adapted to such systems .it provides a null model akin to a temporal configuration model where the nodes temporal activities play the role of the nodes degrees .we only introduce here the main concepts needed for our purpose , but a full description of the model is given in the appendix .importantly , the stationary distribution for walkers coevolving with the network is analytically amenable and provides a natural null model for temporal stability . the stationary walker distribution for a node with activity is given by : where is the average density of walkers on a node , a scalar that can be obtained numerically in closed form and the average activity .we used and the activities were computed such that the temporally averaged degree is conserved . without loss of generality , we can set , being the number of nodes in the network , and use eq .( [ activity_walker_distribution ] ) for the stationary walker distribution .+ we are now in a position where we can define the _ temporal stability _ for a partition of at time - scale : \right \rangle_{t'},\ ] ] with = \mathbf{h}^t(t ' ) \left [ \mathbf{\omega}\mathcal{m}_g(t',\tau ) - \omega^t \omega \right ] \mathbf{h}(t'),\ ] ] where and the average over in eq .( [ temporal_stability ] ) is taken over ] with the highest stability for every pair ( ) . by linearity, the average over can be unfolded , leaving the expression : .\ ] ] the partition ] and ] . more interestingly for our purposes , the activity driven model allows to describe the asymptotic distribution of a random walker coevolving with the network .it is in fact possible to write the probability of a random walker to be in node at time as : + \sum_{j\neq i } p_j(t ) \pi^{\delta t}_{j\to i } \ ] ] where is the propagator from to over time . in the the propagator can be written as and by grouping nodes in activity classes , we can write the equation for the probability of finding the walker in a node of activity at time as : where is the density of walkers in the network . looking for the stationary state of the previous equation , one obtains equation [ activity_walker_distribution ] .in this appendix , we describe the workflow to follow to obtain the temporal partition with the optimal temporal stability : 1 .extract the adjacency time - series of the system .2 . calculate the activity for each node .3 . calculate for each class of activity .4 . go through each possible pair ] . 6 .find the partition that optimises using any modularity optimisation algorithm ( for example the louvain method ) .finally average over to find the temporal stability \right \rangle_{t'}$ ] . the code to perform this is available at https://github.com/lordgrilo/temporalstability .* two blocks model . *the model consists of two sets of nodes of cardinality and .we assign to each node a value depending on whether they belong to the first or the second block .we create a temporal network with cycles by realizing graph snapshots as follows : a. for every snapshot we start with the nodes alone , without any edges ; b. for a number of steps we introduce an edge between nodes and with probability ; c. after iterations , for steps , we introduce an edge between and with probability .d. 
repeat times the steps above .in particular we are interested in setting the parameters of the block model is such a way as to obtain a time - aggregated network that does not display any obvious community structure .this can be done by setting the inter - block linking probability to .for very small values of , fluctuations in the link patterns can prevent the aggregated network from being uniform . however , already for this is not the case anymore .the simulations used in the paper were obtained for .+ in figure [ fig::two_block_stability ] , we plot extended versions of the plots of and shown in the main text . additionally , we show also for the two blocks model , which as expected has only trivial signal due to linking fluctuations within the blocks. + .8 .8 * face - to - face contact network . *the dataset refers to time - resolved face - to - face interactions of 10 teachers and 232 children between 6 and 12 years old .every child was asked to was a small wearable rfid device scanning for other neighboring devices every 20s , corresponding to the resolution of the resulting temporal network .although the original dataset covers two school days from morning to evening , we only analysed on the first day as it is enough to uncover the temporal patterns we are interested in .more details on the exact experimental design can be found in .the aggregated network is available from the sociopatterns website at http://www.sociopatterns.org/. + as per the two blocks model , in figure [ fig::sociopatterns_stability ] we plot extended versions of the plots of and shown in the main text .additionally , we show also for the sociopatterns data . in this casethe instantaneous variation of information shows much more structure than the two blocks model in fig .[ fig::two_block_stability ] , representing the presence of additional social structure beyond classes .this is especially clear in the time period corresponding to the lunch break , where children are allowed to mingle more freely in the school s cafeteria. + .8 0.8 * international trade network . 
*the dataset consists of trade flows between states for the years 1870 - 2009 ( v3.0 ) and is publicly available from the correlates of war website at http://www.correlatesofwar.org/ .we focused on the dyadic trade network , where each link represents the trade flow between pairs of states in current u.s .the full networks is therefore weighted and directed .the activity driven model that we are using as null model however produces binary undirected networks .therefore , in order to make the comparison meaningful , we extracted the backbone of the network using the disparity filter with a significance threshold of , then we symmetrised the network and from there calculated the activity values needed to define the activity driven model .in this section , we illustrate the difference between temporal stability and other temporal partition methods , and focus on multislice modularity optimised with the genlouvain algorithm .the main difference between these two methods is how time is treated .the approach in is a modularity one .they follow the exposition of construct their generalised modularity on an expansion of the exponential in the laplacian dynamics for small time .the dynamics they consider on the network is thus effectively a `` one - step '' dynamics .the time dependence of the system is encoded in the multislice structure as a coupling parameter between the slices which mimics temporal ( cor)relation between snapshots .time is therefore introduced as a topological construct . in contrast , in our approach , we consider the complete dynamics seen by a random walk co - evolving with the network itself , thus embedding the time - evolution of the system in its the description , making it also parameter free .the two approaches are therefore not only conceptually but also practically different .one has a one time - scale constraint , while the other has access to and uses all time - scales .+ as an example , we applied the genlouvain algorithm on the three datasets considered in this paper .we used the code available at http://netwiki.amath.unc.edu/genlouvain/genlouvain .we calculated the best partitions according to the multislice modularity for various values of the parameter ( which encodes the interlayer connection strength ) .the multislice modularity in general finds a small number of different clusters which change and evolve over time .the closest analog for the temporal stability case is the partition obtained for , as this describes the temporal dynamics when considering only successive layers .we find marked differences between the stability values obtained by the multislice modularity partitions and the optimal stability ones ( fig .[ fig::two_block_comparison_genlouvain_stability ] , [ fig::sociopattern_comparison_genlouvain_stability ] and [ fig::cow_comparison_genlouvain_stability ] ) .in addition to this , it is easy to see from figures [ fig::two_blockcomparison ] , [ fig::sociopattern_comparison ] and [ fig::cow_genlouvain_partition_zoomin ] that the results of the two methods differ significantly .multislice modularity produces larger and persistent communities over time due to the extra structure that it introduces in the network by the interlayer couplings , which effectively mixes time and space in the network in an unfolded static topology .the coupling parameter therefore stiffens the community structure by imposing ( arbitrarily ) stronger connection between time slices , resulting in a smaller number of communities with increasing the results for optimal 
stability at highlight on the contrary changes in the network as seen from a walker co - evolving with the network .this can be seen very clearly in the case of the two - block model and of the international trade network . in the former ,multislice modularity yields few ( 3 - 4 ) communities which do not change over time significantly , while the temporal stability at yields a temporal partition that picks up with remarkable accuracy the cycles of the model . in the latter, the sparsity of links during the early years of the international trade network yields partitions mostly constituted by single - node or very small communities . while the progressive integration of the trade network is evident in both methods ( a decreasing number of communities with time ( fig .[ fig::cow_genlouvain_partition_0_2_trade ] , [ fig::cow_genlouvain_partition_0_5_trade ] and [ fig::cow_genlouvain_partition_0_8_trade ] ) , optimal temporal stability for small is able to pick up distinctive , rapid changes in the network structure ( e.g. the two world wars ) , as opposed to multislice modularity .so even when only considering as a naive heuristic as the number of communities , the advantage in uncovering rapid changes in the temporal structure that temporal stability has over the multislice modularity is evident .+ in the light of these observations , we can conclude that the two methods provide two different and complementary lenses to study a given system ; one strongly focused on the static persistent topological structure of the network , the other on dynamics .it is also worse reminded the reader that by changing , one changes the time - scale at which one is probing the community structure evolution .for example , micro - communities best describe the two - block model for as the model is rather sparse over short time - scales .then , by increasing , one sees the existence of the two - blocks , together with the shocks of the mixing ; this effect is shown in the main text in the variation of information graphs .
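for concreteness, a minimal sketch of the temporal stability computation is given below. it forms the time-ordered product of single-snapshot transition matrices, uses a supplied stationary walker distribution (for instance the activity-driven one) and evaluates the trace term for a fixed partition indicator matrix. the epsilon self-loop regularization follows the description above; the averaging over starting times and the optimisation over partitions are left outside this function, and the input conventions (a list of numpy adjacency matrices, an n-by-c indicator matrix) are assumptions for this sketch.

import numpy as np

def snapshot_transition(A, eps=1e-8):
    # single-snapshot transition matrix with a vanishing self-loop so that
    # nodes that are isolated in this snapshot still have a well-defined row
    A = A + eps * np.eye(A.shape[0])
    return A / A.sum(axis=1, keepdims=True)

def temporal_stability_term(adjacency_list, H, pi, t_start, tau):
    # H: n x c partition indicator matrix, pi: stationary walker distribution
    n = H.shape[0]
    M = np.eye(n)
    for A in adjacency_list[t_start:t_start + tau]:
        M = M @ snapshot_transition(A)        # time-ordered product (right multiplication)
    R = H.T @ (np.diag(pi) @ M - np.outer(pi, pi)) @ H
    return np.trace(R)

averaging this trace over the starting times and maximising over candidate partitions (for example with a louvain-style optimiser) reproduces the workflow listed in the appendix.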
we present a method to find the best temporal partition at any time-scale and to rank the relevance of partitions found at different time-scales. this method is based on random walkers coevolving with the network and as such constitutes a generalization of partition stability to the case of temporal networks. we show that, when applied to a toy model and to real datasets, temporal stability uncovers structures that are persistent over meaningful time-scales as well as important isolated events, making it an effective tool for studying both abrupt changes and the gradual evolution of a network's mesoscopic structures.
while network virtualization enables a flexible resource sharing , opening the infrastructure for automated virtual network ( vnet ) embeddings or service deployments may introduce new kinds of security threats .for example , by virtualizing its network infrastructure ( e.g. , the links in the aggregation or backbone network , or the computational or storage resources at the points - of - presence ) , an internet service provider ( isp ) may lose control over how its network is used .even if the isp manages the allocation and migration of vnet slices and services itself and only provides a very rudimentary interface to interact with customers ( e.g. , service or content providers ) , an attacker may infer information about the network topology ( and state ) by generating vnet requests .this paper builds upon the model introduced in and studies complexity of the _ topology extraction problem _ : how many vnet requests are required to infer the full topology of the infrastructure network ? while algorithms for trees and cactus graphs with request complexity and a lower bound for general graphs of have been shown in , graph classes between these extremes have not been studied . ** this paper presents a general framework to solve the topology extraction problem .we first describe necessary and sufficient conditions which facilitate the `` greedy '' exploration of the substrate topology ( the _ host graph _ ) by iteratively extending the requested vnet graph ( the _ guest graph _ ) .our framework then exploits these conditions to construct an ordered ( request ) _ dictionary _ defined over so - called _ graph motifs_. we show how to apply the framework to different graph families , discuss the implications on the request complexity , and also report on a small simulation study on realistic topologies .these empirical results show that many scenarios can indeed be captured with a small dictionary , and small motifs are sufficient to infer if not the entire , then at least a significant fraction of the topology .this section presents our model and discusses how it compares to related work . * model . *the vnet embedding based topology extraction problem has been introduced in .the formal setting consists of two entities : a _ customer _ ( the `` adversary '' ) that issues virtual network ( vnet ) requests and a _ provider _ that performs the access control and the embedding of vnets .we model the virtual network requests as simple , undirected graphs ( the _ guest graph _ ) where denotes the virtual nodes and denotes the virtual edges connecting nodes in .similarly , the infrastructure network is given as an undirected graph ( the so - called _ host graph _ or _ substrate _ ) as well , where denotes the set of substrate nodes , is the set of substrate links , and is a capacity function describing the available resources on a given node or edge . without loss of generality , we assume that is connected and that there are no parallel edges or self - loops neither in vnet requests nor in the substrate . in this paperwe assume that besides the resource demands , the vnet requests do not impose any mapping restrictions , i.e. 
, a virtual node can be mapped to _ any _ substrate node , and we assume that a virtual link connecting two substrate nodes can be mapped to an entire ( but single ) _ path _ on the substrate as long as the demanded capacity is available .these assumptions are typical for virtual networks .a virtual link which is mapped to more than one substrate link however can entail certain costs at the _ relay nodes _ , the substrate nodes which do not constitute endpoints of the virtual link and merely serve for forwarding .we model these costs with a parameter ( per link ) .moreover , we also allow multiple virtual nodes to be mapped to the same substrate node if the node capacity allows it ; we assume that if two virtual nodes are mapped to the same substrate node , the cost of a virtual link between them is zero .[ def : embedding ] an _ embedding _ of a graph to a graph is a mapping where every node of is mapped to exactly one node of , and every edge of is mapped to a path of .that is , consists of a node and an edge mapping , where denotes the set of paths .we will refer to the set of virtual nodes embedded on a node by ; similarly , describes the set of virtual links passing through and describes the virtual links passing through with serving as a relay node .to be valid , the embedding has to fulfill the following properties : ( ) each node is mapped to exactly one node ( but given sufficient capacities , can host multiple nodes from ) .( ) links are mapped consistently , i.e. , for two nodes , if then is mapped to a single ( possibly empty and undirected ) path in connecting nodes and . a link can not be split into multiple paths .( ) the capacities of substrate nodes are not exceeded : : .( ) the capacities in are respected as well , i.e. , : .if there exists such a valid embedding mapping , we say that graph can be embedded in , denoted by .hence , denotes the vnet _embedding relation_. the provider has a flexible choice where to embed a vnet as long as a valid mapping is chosen .in order to design topology discovery algorithms , we exploit the following property of the embedding relation .[ lemma : poset ] the embedding relation applied to any family of undirected graphs ( short : ) , forms a partially ordered set ( a _ poset _ ) . [ proof in appendix ] we are interested in algorithms that `` guess '' the target topology ( the host graph ) among the set of possible substrate topologies . concretely , we assume that given a vnet request ( a guest graph ) , the substrate provider always responds with _ an honest ( binary ) reply _ informing the customer whether the requested vnet is embeddedable on the substrate . based on this reply , the attacker may then decide to ask the provider to embed the corresponding vnet on , or it may not embed it and continue asking for other vnets .let be an algorithm asking a series of requests to reveal .the _ request complexity _ to infer the topology is measured in the number of requests ( in the worst case ) until issues a request which is isomorphic to and _ terminates _( i.e. , knows that and does not issue further requests ) .+ * related work .* embedding vnets is an intensively studied problem and there exists a large body of literature ( e.g. , ) , also on distributed computing approaches and online algorithms .our work is orthogonal to this line of literature in the sense that we assume that an ( arbitrary and not necessarily resource - optimal ) embedding algorithm is _given_. 
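as an illustration of the embedding definition above, the following sketch checks whether a given node and edge mapping is a valid embedding of a guest graph into a host graph with unit link capacities. it is a naive validity check written with networkx, not an embedding algorithm; the "cap" node attribute and the dictionary keys of edge_map are assumed annotations, and the epsilon relay cost at intermediate nodes is ignored for simplicity.

import networkx as nx

def is_valid_embedding(G, H, node_map, edge_map):
    # node_map: virtual node -> substrate node
    # edge_map: virtual edge (keyed as returned by G.edges) -> list of substrate
    #           nodes forming a path, empty if both endpoints share a substrate node
    # (1) node capacities: count virtual nodes packed onto each substrate node
    load = {v: 0 for v in H.nodes}
    for u in G.nodes:
        load[node_map[u]] += 1
    if any(load[v] > H.nodes[v].get("cap", 1) for v in H.nodes):
        return False
    used_links = []
    for (a, b) in G.edges:
        path = edge_map[(a, b)]
        if not path:
            continue                                   # co-located endpoints, zero cost
        if path[0] != node_map[a] or path[-1] != node_map[b]:
            return False                               # inconsistent endpoints
        if not all(H.has_edge(x, y) for x, y in zip(path, path[1:])):
            return False                               # not a path in the substrate
        used_links += [frozenset(e) for e in zip(path, path[1:])]
    # (2) unit link capacities: each substrate link carries at most one virtual link
    return len(used_links) == len(set(used_links))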
instead , we focus on the question of how the feedback obtained through these algorithms can be exploited , and we study the implications on the information which can be obtained about a provider s infrastructure .our work studies a new kind of topology inference problem .traditionally , much graph discovery research has been conducted in the context of today s complex networks such as the internet which have fascinated scientists for many years , and there exists a wealth of results on the topic . the classic instrument to discover internet topologies is _ traceroute _ , but the tool has several problems which makes the problem challenging .one complication of traceroute stems from the fact that routers may appear as stars ( i.e. , anonymous nodes ) , which renders the accurate characterization of internet topologies difficult ._ network tomography _ is another important field of topology discovery . in network tomography, topologies are explored using pairwise end - to - end measurements , without the cooperation of nodes along these paths .this approach is quite flexible and applicable in various contexts , e.g. , in social networks .for a good discussion of this approach as well as results for a routing model along shortest and second shortest paths see . for example, shows that for sparse random graphs , a relatively small number of cooperating participants is sufficient to discover a network fairly well .both the traceroute and the network tomography problems differ from our virtual network topology discovery problem in that the exploration there is inherently _ path - based _ while we can ask for entire virtual graphs .the paper closest to ours is .it introduces the topology extraction model studied in this paper , and presents an asymptotically optimal algorithm for the cactus graph family ( request complexity ) , as well as a general algorithm ( based on spanning trees ) with request complexity .the algorithms for tree and cactus graphs presented in can be extended to a framework for the discovery of more general graph classes .it is based on the idea of growing sequences of subgraphs from nodes discovered so far .intuitively , in order to describe the `` knitting '' of a given part of a graph , it is often sufficient to use a small set of graph _ motifs _ , without specifying all the details of how many substrate nodes are required to realize the motif .we start this section with the introduction of motifs and their composition and expansion. then we present the dictionary concept , which structures motif sequences in a way that enables the efficient host graph discovery with algorithm .subsequently , we give some examples and finally provide the formal analysis of the request complexity . in order to define the motif set of a graph family , we need the concept of _ chain ( graph ) _ : is just a graph consisting of two nodes and a single link .as its edge represents a virtual link that may be embedded along entire path in the substrate network , it is called a _chain_. given a graph family , the set of motifs of is defined constructively : if any member of has an edge cut of size one , the _ chain _ is a motif for .all remaining motifs are at least 2-connected ( i.e. 
, any pair of nodes in a motif is connected by at least two vertex - disjoint paths ) .these motifs can be derived by the at least 2-connected components of any by repeatedly removing all nodes with degree smaller or equal than two from ( such nodes do not contribute to the knitting ) and merging the incident edges , as long as all remaining cycles do not contain parallel edges .only one instance of isomorphic motifs is kept .note that the set of motifs of can also be computed by iteratively by removing all low - degree nodes and subsequently determine the graphs connecting nodes constituting a vertex - cut of size one for each member .in other words , the motif set of a graph family is a set of non - isomorphic minimal ( in terms of number of nodes ) graphs that are required to construct each member by taking a motif and either replacing edges with two edges connected by a node or gluing together components several times .more formally , a graph family containing all elements of can be constructed by applying the following rules repeatedly .[ def : motif_rules ] ( 1 ) create a new graph consisting of a motif ( _ new motif rule _ ) . ( 2 ) given a graph created by these rules , replace an edge of by a new node and two new edges connecting the incident nodes of to the new node ( _ insert node rule _ ) .( 3 ) given two graphs created by these rules , attach them to each other such that they share exactly one node ( _ merge rule _ ) .being the inverse operations of the ones to determine the motif set , these rules are sufficient to compose all graphs in : if includes all motifs of , it also includes all 2-connected components of , according to definition [ def : motif - app ] .these motifs can be glued together using the _ merge rule _ , and eventually the low - degree nodes can be added using the _ insert node rule_. therefore , we have the following lemma .[ lemma : equiv ] given the motifs of a graph family , the repeated application of the rules in definition [ def : motif_rules ] allows us to construct each member .however , note that it may also be possible to use these rules to construct graphs that are _ not _ part of the family .the following lemma shows that when degree - two nodes are added to a motif to form a graph , all network elements ( substrate nodes and links ) are _ used _ when embedding in ( i.e. , ) .[ lem : nodesload - app ] let be an arbitrary two - connected motif , and let be a graph obtained by applying the _ insert node rule _( rule of definition [ def : motif_rules ] ) to motif .then , an embedding involves all nodes and edges in : at least resources are used on all nodes and edges .let . clearly , if there exists such that , then s capacity is used fullyotherwise , was added by rule .let be the two nodes of between which rule was applied , and hence must be a motif edge .observe that for these nodes degrees it holds that and since rule never modifies the degree of the old nodes in the host graph .since links are of unit capacity , each substrate link can only be used once : at at most edge - disjoint paths can originate , which yields a contradiction to the degree bound , and the relaying node has a load of . lemma [ lem : nodesload - app ] implies that no additional nodes can be inserted to an existing embedding .in other words , a motif constitutes a `` minimal reservation pattern '' .as we will see , our algorithm will exploit this invariant that motifs cover the entire graph knitting , and adds simple nodes ( of degree 2 ) only in a later phase . 
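the construction of the motif set described above can be approximated in a few lines: iteratively remove nodes of degree at most two (merging their incident edges) and keep the biconnected pieces that remain, one representative per isomorphism class. the sketch below uses networkx and simplifies the full definition: the chain motif is handled separately, and the guard against creating parallel edges when contracting degree-two nodes (e.g. inside triangles) is omitted.

import networkx as nx

def motif_set(G):
    H = nx.Graph(G)
    # repeatedly contract nodes of degree <= 2 (they do not contribute to the knitting)
    changed = True
    while changed:
        changed = False
        for v in list(H.nodes):
            if v not in H:
                continue
            if H.degree(v) <= 1:
                H.remove_node(v)
                changed = True
            elif H.degree(v) == 2:
                a, b = list(H.neighbors(v))
                H.remove_node(v)
                H.add_edge(a, b)      # merge the two incident edges
                changed = True
    # the at least 2-connected pieces that survive are motif candidates
    motifs = [H.subgraph(c).copy() for c in nx.biconnected_components(H)]
    unique = []
    for m in motifs:                  # keep one instance of each isomorphism class
        if not any(nx.is_isomorphic(m, u) for u in unique):
            unique.append(m)
    return unique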
[ cor : embeditself - app ] let and let be a graph obtained by applying rule of definition [ def : motif_rules ] to motif .then , no additional node can be embedded on after embedding .next , we want to _ combine _ motifs explore larger `` knittings '' of graphs. each motif pair is glued together at a single node _ or edge _ ( `` attachment point '' ) : we need to be able to conceptually join to motifs at edges as well because the corresponding edge of the motif can be expanded by the _ insert node rule _ to create a node where the motifs can be joined .a _ motif sequence _ is a list where and where is glued together at exactly one node with ( i.e. , is `` attached '' to a node of motif ) : the notation specifies the selected attachment points and .if the attachment points are irrelevant , we use the notation and denotes an arbitrary sequence consisting of instances of .if can be decomposed into , where and are ( possibly empty ) motif sequences as well , then and are called _ subsequences _ of , denoted by . in the following, we will sometimes use the _ kleene star _notation to denote a sequence of ( zero or more ) elements of attached to each other .one has to be careful when arguing about the embedding of motif sequences , as illustrated in figure [ fig : counterexab - app ] which shows a counter example for .this means that we typically can not just incrementally add motif occurrences to discover a certain substructure .this is the motivation for introducing the concept of a _ dictionary _ which imposes an order on motif sequences and their attachment points . in a nutshell ,a dictionary is a _ directed acyclic graph ( dag ) _ defined over all possible motifs . and imposes an order ( poset relationship ) on problematic motif sequences which need to be embedded one before the other ( e.g. , the composition depicted in figure [ fig : counterexab - app ] ) . to distinguish them from sequences , dictionary entries are called _words_. a _ dictionary _ is a directed acyclic graph ( dag ) over a set of motif sequences together with their attachment points . in the context of the dictionary , we will call a motif sequence _word_. the links represent the poset embedding relationship .concretely , the dag has a single root , namely the chain graph ( with two attachment points ) . in general , the _ attachment points _ of each vertex describing a word define how can be connected to other words . the directed edges represent the transitively reduced embedding poset relation with the chain context : is embeddable in and there is no other word such that , and holds .( the chains before and after the words are added to ensure that attachment points are `` used '' : there is no edge between two isomorphic words with different attachment point pairs . )we require that the dictionary be _ robust to composition _ : for any node , let denote the `` reachable '' set of words in the graph and all other words .we require that , where the transitive closure operator denotes an arbitrary sequence ( including the empty sequence ) of elements in ( according to their attachment points ) .see figure [ fig : dictionary ] for an example .informally , the robustness requirement means that the word represented by can not be embedded in any sequence of `` smaller '' words , unless a subsequence of this sequence is in the dictionary as well . as an example , in a dictionary containing motifs and from figure [ fig : counterexab - app ] would contain vertices , and also , and a path from to . 
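a dictionary can be represented directly as a small dag whose vertices carry a word (a motif sequence with two attachment points) and whose edges carry local port labels. the sketch below only fixes this data structure and a helper listing the port-labelled paths from the root; the vertex contents are placeholders, and the embedding tests that actually define the dictionary edges are not implemented here.

import networkx as nx

def make_dictionary():
    D = nx.DiGraph()
    # vertices: word identifier -> (motif sequence, attachment points); placeholder content
    D.add_node("chain", word=["C"], attach=(0, 1))
    D.add_node("cycle", word=["Y"], attach=(0, 1))
    D.add_node("diamond", word=["D"], attach=(0, 1))
    # edges: transitively reduced embedding relation, labelled with local port numbers
    D.add_edge("chain", "cycle", port=1)
    D.add_edge("cycle", "diamond", port=1)
    return D

def port_paths(D, root="chain"):
    # sequences of port labels of the paths from the root to each vertex; these
    # sequences induce the lexicographic order in which words are requested
    paths = {root: [[]]}
    for v in nx.topological_sort(D):
        for w in D.successors(v):
            paths.setdefault(w, [])
            for p in paths.get(v, []):
                paths[w].append(p + [D[v][w]["port"]])
    return paths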
a ) , cycle , diamond , complete bipartite graph and complete graph .the attachment point pair of each word is black , the other nodes and edges of the words are grey .the edges of the dictionary are locally labeled , which is used in later .b ) a graph that can be constructed from the dictionary words.,title="fig : " ] b ) , cycle , diamond , complete bipartite graph and complete graph .the attachment point pair of each word is black , the other nodes and edges of the words are grey .the edges of the dictionary are locally labeled , which is used in later .b ) a graph that can be constructed from the dictionary words.,title="fig : " ] in the following , we use the notation to denote the set of `` maximal '' vertices with respect to their embeddability into : , where denotes the set of outgoing neighbors of .furthermore , we say that a dictionary _ covers _ a motif sequence iff can be formed by concatenating dictionary words ( henceforth denoted by ) at the specified attachment points .more generally , a dictionary covers a graph , if it can be formed by merging sequences of .let us now derive some properties of the dictionary which are crucial for a proper substrate topology discovery .first we consider maximal dictionary words which can serve as embedding `` anchors '' in our algorithm .[ lemma : dico - property - app ] let be a dictionary covering a sequence of motifs , and let .then constitutes a subsequence of , i.e. , can be decomposed to , and contains no words of order at most , i.e. , . by contradictionassume and is not a subsequence of ( written ) . since _ covers _ we have by definition . since is a dictionary and we know that .thus , : has a subsequence of at least one word in .thus there exists such that . if this implies which contradicts our assumption .otherwise it means that such that , which contradicts the definition of and thus it must hold that . the following corollary is a direct consequence of the definition of and lemma [ lemma : dico - property - app ] : since for a motif sequence with , all the subsequences of that contain no are in . as we will see , the corollary is useful to identify the motif words composing a graph sequence , from the most complex words to the least complex ones .[ cor : recursive - app ] let be a dictionary covering a motif sequence , and let . then can be decomposed as a sequence with .this corollary can be applied recursively to describe a motif sequence as a sequence of dictionary entries .note that a dictionary always exists .[ lem : dicoexistence ] there exists a dictionary that covers all member graphs of a motif graph family with vertices . [ proof in appendix ] with these concepts in mind , we are ready to describe our generalized graph discovery algorithm called ( cf algorithm [ alg : motifrec - app ] ) .basically , always grows a request graph until it is isomorphic to ( the graph to be discovered ) .this graph growing is performed according to the dictionary , i.e. , we try to embed new motifs in the order imposed by the dictionary dag . is based on the observation that it is very costly to discover additional edges between nodes in a 2-connected component : essentially , finding a single such edge requires testing all possibilities , which is quadratic in the component size .thus , it is crucial to first explore the basic `` knitting '' of the topology , i.e. , the minors which are at least 2-connected ( the _ motifs _ ) . 
in other words ,we maintain the invariant that there are never two nodes which are not -connected in the currently requested graph while they are -connected in ; no path relevant for the connectivity is overlooked and needs to be found later .nodes and edges which are not contributing to the connectivity need not be explored at this stage yet , as they can be efficiently added later .concretely , these additional nodes can then be discovered by ( 1 ) using an _ edge expansion _ ( where additional degree two nodes are added along a motif edge ) , and by ( 2 ) adding `` chains '' to the nodes ( a virtual link constitutes an edge cut of size one and can again be expanded to entire chain of nodes using _ edge expansion _ ) .let us specify the _topological order _ in which algorithm discovers the dictionary words .first , for each node in , we define an order on its outgoing edges .this order is sometimes referred to as a `` port labeling '' , and each path from the dictionary root ( the chain ) to a node in can be represented as the sequence of port labels at each traversed node , where corresponds to a port number in .we can simply use the lexicographic order on integers , : , to associate each vertex with its minimal sequence , and sort vertices of according to their embedding order .let be the _function associating each vertex with its position in this sorting : ( i.e. , is the topological ordering of ) .the fact that subsequences can be defined recursively using a dictionary ( lemma [ lemma : dico - property - app ] and corollary [ cor : recursive - app ] ) is exploited by algorithm . concretely, we apply corollary [ cor : recursive - app ] to gradually identify the words composing a graph sequence , from the most complex words to the least complex ones .this is achieved by traversing the dictionary depth - first , starting from the root up to a maximal node : algorithm tests the nodes of in increasing port order as defined above . as a shorthand , the word with written as ] holds if )<r(d[j]) ] will be detected before ] : denotes the maximal word in . distinguishes whether the subsequences next to a word are empty ( ) or chains ( ) , and we will refer to the subsequence before by and to the subsequence after by . concretely , while recursively exploring a sequence between two already discovered parts and we check whether the maximal word is directly next to ( i.e. , ) or or both ( ) , or whether is somewhere in the middle . in the latter case , we add a chain ( ) to be able to find the greatest possible word in a next step . uses tuples of the form where and , i.e. , ] .these tuples are lexicographically ordered by the total order relation on the set of possible tuples defined as follows : let and two such tuples. then iff or or or . with these definitionwe can prove that algorithm is correct .[ thm : main - app ] given a dictionary for , algorithm correctly discovers any .we first prove that the claim is true if forms a motif sequence ( without edge expansion ) .subsequently , we study the case where the motif sequence is expanded by rule 2 , and finally tackle the general composition case .* discovery of motif sequences : * due to lemma [ lemma : dico - property - app ] it holds that for chosen when line 1 of is executed for the first time , is partitioned into three subsequences , and .subsequently is executed on each of the subsequences recursively if , i.e. , if the subsequences are not empty . 
thus computes a decomposition as described in corollary [ cor : recursive - app ] recursively .as each of the words used in the decomposition is a subsequence of and does not stop until no more words can be added to any subsequence , it holds that all nodes of will be discovered eventually . in other words , is defined for all . as a next stepwe assume to be the sequence of words obtained by to derive a contradiction . since is the output of algorithm andis hence embeddable in : , there exists a valid embedding mapping . given , we denote by the set of pairs for which . now assume that and do not lead to the same resource reservations `` '' .hence there are some inconsistencies between the substrate and the output of algorithm : . with each of these `` conflict '' edges , one can associate the corresponding word ( resp . ) in ( resp . ) .if a given conflict edge spans multiple words , we only consider the words with the highest index as defined by .we also define ( resp . ) ) . since and are by definition not isomorphic , .let be the index of the greatest word embeddable on the substrate containing an inconsistency , and be the index of the corresponding word detected by .( ) assume : a lower order motif was erroneously detected .let ( and ) be the set of dictionary entries that are detected before ( after ) ] , and the corresponding sequence in .observe that since the words cut the sequences of and into subsequences that are embeddable .observe that \mapsto t ] was detected the higher indexed words had been detected correctly by in previous executions of this subroutine .hence , and can not contain any words leading to edges in .thus which contradicts line of .( ) now assume : a higher order motif was erroneously detected . using the same decomposition as step ( ) ,we define as the set of words perfectly detected , and therefore decompose and as sequences and with and the property that each .let be the sequence among that contains our misdetected word ] , \mapsto t' ] which is a contradiction with and corollary [ cor : recursive - app ] .the same arguments can be applied recursively to show that conflicts in of smaller indices can not exist either .* expanded motif sequences . * as a next step , we consider graphs that have been extended by applying node insertions ( rule 2 ) to motif sequences , so called _ expanded _ motif sequences : we prove that if is an expanded motif sequence , then algorithm correctly discovers . given an expanded motif sequence , replacing all two degree nodes with an edge connecting their neighbors unless a cycle of length three would be destroyed , leads to a unique pure motif sequence , . for the corresponding embedding mapping it holds that is exactly the set of removed nodes .applying to an expanded motif sequence discovers this pure motif sequence by using the nodes in as relay nodes . all nodes in then discovered in where the reverse operation node insertion is carried out as often as possible .it follows that each node in is either discovered in if it occurs in a motif or in otherwise . * combining expanded sequences . *finally , it remains to combine the expanded sequences .clearly , since motifs describe all parts of the graph which are at least 2-connected , the graph remaining after collapsing motifs can not contain any cycles : it is a tree .however , on this graph behaves like , but instead of attaching chains , entire sequences are attached to different nodes . 
along the unique sequence paths between two nodes , fixes the largest words first , and the claim follows by the same arguments as used in the proofs for tree and cactus graphs . /*current request graph*/ , /*set of unexplored nodes*/ choose , * if * ( ) * then * , add all nodes of to , * for all * * do * _ edgeexpansion_( ) * else * remove from _ find_motif_sequence_( ) find maximal s.t .)^j~{\textsc{af}}~(t_>) ] * if * ( ) * then * )^j , t_>) ] _ edge_expansion_( ) let be the endpoints of edge , remove from find maximal s.t . / * issue requests * / , add newly discovered nodes to the focus of is on generality rather than performance , and indeed , the resulting request complexities can often be high .however , as we will see , there are interesting graph classes which can be solved efficiently .let us start with a general complexity analysis .the requests issued by are constructed in line 1 of and in line 2 of .we will show that the request complexity of the latter is linear in the number of edges of the host graph while the request complexity of depends on the structure of the dictionary .essentially , an efficient implementation of line 1 of in can be seen as the depth - first exploration of the dictionary starting from the chain .more precisely , at a dictionary word requests are issued to see if one of the outgoing neighbors of could be embedded at the position of .as soon as one of the replies is positive , we follow the corresponding edge and continue recursively from there , until no outgoing neighbors can be embedded .thus , the number of requests issued before we reach a vertex can be determined easily .recall that tests vertices of a dictionary according to a fixed port labeling scheme . for any ,let be the set of paths from to ( each path including and ) . in the worst case , discovering costs .[ lem : weight ] the request complexity of line 1 of to find the maximal such that )^j~{\textsc{af}}~(t _ > ) \mapsto h ] in with depth - first traversal there is exactly one path between the chain and . a request for at most all the outgoing neighbors of the nodes this path .after has been found , the highest where has to be determined . to this end , another requests are necessary .thus the maximum of over all word determines the request complexity .when additional nodes are discovered by a positive reply to an embedding request , then the request complexity between this and the last previous positive reply can be amortized among the newly discovered nodes .let denote the number of nodes in the motif sequence of the node in the dictionary .[ thm : runtime ] the request complexity of algorithm is at most , where denotes the number of edges of the inferred graph , and is the maximal ratio between the cost of discovering a word in and , i.e. , . each time line [ line : potenz ] of is called , either at least one new node is found or no other node can be embedded between the current sequences ( one request is necessary for the latter result ) . if one or more new nodes are discovered , the request complexity can be amortized by the number of nodes found : if is the maximal word found in line 1 of then it is responsible for at most requests due to lemma [ lem : weight ] .if it occurs more than once at this position , only one additional request is necessary to discover even more nodes ( plus one superfluous request if no more occurrences of can be embedded there ) . 
amortizing the request number over the number of discovered nodes results in requests .all other requests are due to where additional nodes are placed along edges .clearly , these costs can be amortized by the number of edges in : for each edge , at most two embedding requests are performed ( including a `` superfluous '' request which is needed for termination when no additional nodes can be added ) . let us consider concrete examples to provide some intuition for theorem [ thm : main - app ] and theorem [ thm : runtime ] .the execution of for the graph in figure [ fig : dictionary].b ) , is illustrated in figure [ fig : motiftree ] .the squares and the edges between them depict the motif composition , the shaded squares belong to the motif sequence discovered in the first execution of ( chains , cycles , diamonds , and the complete bipartite graph over two times three nodes are denoted by , , and respectively ) .subsequently , the found edges are expanded before calling another four times to find and three times . ] a fundamental graph class are _trees_. since , the tree does not contain any 2-connected structures , it can be described by a single motif : the chain . indeed , if is executed with a dictionary consisting in the singleton motif set , it is equivalent to a recursive version of from and seeks to compute maximal paths . for the cactus graph, we have two motifs , the request complexity is the same as for the algorithm described in .trees can be described by one motif ( the chain ) , and cactus graphs by two motifs ( the chain and the cycle ) .the request complexity of on trees and cactus graphs is .we present the arguments for cactus graphs only , as trees constitute a subset of the cactus family .the absence of diamond graph minors implies that a cactus graph does not contain two closed faces which share a link .thus , there can exist at most two different ( not even disjoint ) paths between any node pair , and the corresponding motif subgraph forms a _ cycle _ ( or a triangle ) .since the cycle has only one attachment point pair , of is constant .consequently , a linear request complexity follows directly from theorem [ thm : runtime ] due to the planarity of cactus graphs ( i.e. , ) . an example where the dictionary is efficient although the connectivity of the topology can be high are_ block graphs_. a block graph is an undirected graph in which every bi - connected component ( a _ block _ ) is a clique .a _ generalized block graph _ is a block graph where the edges of the cliques can contain additional nodes . in other words , in the terminology of our framework , the motifs of generalized block graphs are _cliques_. for instance , cactus graphs are generalized block graphs where the maximal clique size is three .generalized block graphs can be described by the motif set of cliques .the request complexity of on generalized block graphs is , where denotes the number of edges in the host graph .the framework dictionary for generalized block graphs consists of the set of cliques , as a clique with nodes can not be embedded on sequences of cliques with less than nodes .as there are three attachment point pairs for each complete graph with four or more nodes , can be applied using a dictionary that contains three entries for each motif with more than three nodes ( ) .thus , the dictionary entry has nodes for and ) < 3(i+2)$ ] and of is hence in .consequently the complexity for generalized block graphs is due to theorem [ thm : runtime ] . 
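the central role played by 2-connected pieces also suggests a simple offline way to dissect a known topology into tree fringe , chain ( degree-2 ) nodes and motif candidates , in the spirit of the statistics reported for real topologies below . the following python sketch is our own illustration ( it relies on the networkx library ; the function name and the toy graph are not part of the paper's framework , and the framework's exception for nodes whose removal would destroy a triangle is ignored ) :

```python
import networkx as nx

def motif_skeleton_stats(g):
    """rough classification of nodes into (i) tree fringe, (ii) degree-2
    chain nodes inside 2-connected pieces, and (iii) the largest
    2-connected component -- an offline analysis of a known graph,
    not the discovery algorithm itself."""
    g = g.copy()
    # (i) iteratively strip the tree fringe (leaves and chains hanging off them)
    tree_nodes = set()
    changed = True
    while changed:
        leaves = [v for v in g if g.degree(v) <= 1]
        changed = bool(leaves)
        tree_nodes.update(leaves)
        g.remove_nodes_from(leaves)
    # (ii) non-trivial biconnected components of the remainder are motif candidates
    blocks = [set(b) for b in nx.biconnected_components(g) if len(b) > 2]
    largest = max(blocks, key=len) if blocks else set()
    # (iii) degree-2 nodes inside blocks correspond to edge-expansion nodes
    #       (the triangle exception of the framework is not handled here)
    chain_nodes = {v for b in blocks for v in b if g.degree(v) == 2}
    return {"tree": len(tree_nodes),
            "chain": len(chain_nodes),
            "largest_motif": len(largest)}

if __name__ == "__main__":
    # toy example: a triangle with a pendant path and one subdivided extra edge
    g = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (0, 5), (5, 1)])
    print(motif_skeleton_stats(g))
```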
on the other hand , theorem [ thm : runtime ] also states that highly connected graphs may require requests , even if the dictionary is small . in the next section, we will study whether this happens in `` real world graphs '' .a ) when run on different internet and power grid topologies . a ) number of nodes in different autonomous systems ( as ) .we computed the set of motifs of these graphs as described in definition [ def : motif - app ] and counted the number of nodes that : ( i ) belong to a tree structure at the fringe of the network , ( ii ) have degree 2 and belong to two - connected motifs , and finally ( iii ) are part of the largest motif .b ) the fraction of nodes that can be discovered with 12-motif dictionary represented in figure c ) .d ) an example network where tree nodes are colored yellow , line - nodes are green , attachment point nodes are red and the remaining nodes blue.,title="fig:",scaledwidth=42.0% ] b ) c ) when run on different internet and power grid topologies . a ) number of nodes in different autonomous systems ( as ) .we computed the set of motifs of these graphs as described in definition [ def : motif - app ] and counted the number of nodes that : ( i ) belong to a tree structure at the fringe of the network , ( ii ) have degree 2 and belong to two - connected motifs , and finally ( iii ) are part of the largest motif .b ) the fraction of nodes that can be discovered with 12-motif dictionary represented in figure c ) .d ) an example network where tree nodes are colored yellow , line - nodes are green , attachment point nodes are red and the remaining nodes blue.,title="fig:",scaledwidth=40.0% ] d )to complement our theoretical results and to validate our framework on realistic graphs , we dissected the _isp topologies _ provided by the rocketfuel mapping engine . in addition , we also dissected the topology of a european electricity distribution grid ( ` grid ` on the legends ) . figure [ fig : exp ]a ) provides some statistics about the aforementioned topologies .since discovers both tree and degree 2 nodes in linear time , this figure shows that most of each topology can be discovered quickly .the inspected topologies are composed of a large bi - connected component ( the largest motif ) , and some other small and simple motifs .figure [ fig : exp ] b ) represents the fraction of each topology that can be discovered by using only a 12-motifs dictionary ( see figure [ fig : exp ] c ) ) . interestingly , this small dictionary is efficient on different topologies , and contains motifs that are mostly symmetrical .this might stem from the man - engineered origin of the targeted topologies .finally , figure [ fig : exp ] d ) provides an example of such a topology .10 h. acharya and m. gouda . on the hardness of topology inference . in _ proc .icdcn _ , pages 251262 , 2011 .a. anandkumar , a. hassidim , and j. kelner .topology discovery of sparse random graphs with few participants . in _ proc. sigmetrics _ , 2011 .n. bansal , k .- w .lee , v. nagarajan , and m. zafer .minimum congestion mapping in a cloud . in _ proc .30th podc _ , pages 267276 , 2011 .b. cheswick , h. burch , and s. branigan .mapping and visualizing the internet . in _ proc .usenix annual technical conference ( atec ) _ , 2000 .m. k. chowdhury and r. boutaba .a survey of network virtualization . , 54(5 ) , 2010 .g. even , m. medina , g. schaffrath , and s. schmid .competitive and deterministic embeddings of virtual networks . in _ proc .icdcn _ , 2012. j. fan and m. h. 
ammar .dynamic topology configuration in service overlay networks : a study of reconfiguration policies . in _ proc .ieee infocom _ , 2006 .i. houidi , w. louati , and d. zeghlache . a distributed virtual network mapping algorithm . in _ proc ., 2008 .j. lischka and h. karl . a virtual network mapping algorithm based on subgraph isomorphism detection . in _ proc .acm sigcomm visa _ ,y. a. pignolet , g. tredan , and s. schmid . in _ proc .pignolet , g. tredan , and s. schmid . in _ieee infocom _ , 2013 .g. schaffrath , s. schmid , and a. feldmann . optimizing long - lived cloudnets with migrations . in _ proc .ieee / acm ucc _, 2012 .b. yao , r. viswanathan , f. chang , and d. waddington .topology inference in the presence of anonymous routers . in _ proc .ieee infocom _ ,pages 353363 , 2003 .s. zhang , z. qian , j. wu , and s. lu . an opportunistic resource sharing and topology - aware mapping framework for virtual networks . in _ proc .ieee infocom _ , 2012 .* lemma [ lemma : poset ] . * _ the embedding relation applied to any family of undirected graphs ( short : ) , forms a partially ordered set ( a _ poset _ ) . _a poset structure over a set requires that is a ( _ reflexive _ , _ transitive _ , and _ antisymmetric _ ) order which may or may not be partial . to show that , the embedding order defined over a given set of graphs , is a poset , we examine the three properties in turn . _ transitive _ and implies : let denote the embedding function for and let denote the embedding function for , which must exist by our assumptions .we will show that then also a valid embedding function exists to map to . regarding the node mapping ,we define as , i.e. , the result of first mapping the nodes according to and subsequently according to .we first show that is a valid mapping from to as well .first , , maps to a single node in , fulfilling the first condition of the embedding ( see definition [ def : embedding ] ) . ignoring relay capacities ( which is studied together with the conditions on the links below ) , condition ( ) of definition [ def : embedding ]is also fulfilled since the mapping ensures that no node in exceeds its capacity , and can hence safely be mapped to .let us now turn our attention to the links .we use the following mapping for the edges . note that maps a single link to an entire ( but possibly empty ) path in and then maps the corresponding links in to a walk in .we can transform any of these walks into paths by removing cycles ; this can only lower the resource costs . since maps to a subset of only and since can embed all edges of , all link capacities are respected up to relay costs .however , note also that by the mapping and for relay costs , each node can either not be used at all , be fully used as a single endpoint of a link , or serve as a relay for one or more links . since both end - nodes and relay nodes are mapped to separate nodes in , capacities are respected as well. conditions ( ) and ( ) hold as well ._ antisymmetric _ and implies , i.e. , and are isomorphic and have the same weights : first observe that if the two networks differ in size , i.e. 
, or , then they can not be embedded to each other : w.l.o.g ., assume , then since nodes of of can not be split into multiple nodes of ( cf definition [ def : embedding ] ) , there exists a node to which no node from is mapped .this however implies that node must have available capacities to host also , contradicting our assumption that nodes can not be split in the embedding .similarly , if , we can obtain a contradiction with the single path argument .thus , not only the total number of nodes and links in and must be equivalent but also the total amount of node and link resources .so consider a valid embedding for and a valid embedding for , and assume and .it holds that and define an isomorphism between and : clearly , since , and define a permutation on the vertices .w.l.o.g . , consider any link . then, also : would violate the node capacity constraints in , and requires . we present a procedure to construct such a dictionary . let be the set of all motifs with nodes of the graph family . for each motif with possible attachment point pairs ( up to isomorphisms ) , we add dictionary words to , one for each attachment point pair .the resulting set is denoted by . for each sequence of with at most nodes , we add another word to ( with the un - used attachment points of the first and the last subword ) .there is an edge if the transitive reduction of the embedding relation with context includes an edge between two words .we now prove that is a dictionary , i.e. , it is robust to composition. let .observe that contains all compositions of words with at most nodes in which can be embedded .consequently , no matter which sequences are in it holds that can not be embedded in a sequences in the robustness condition is satisfied .since has vertices , and since contains all possible motifs of at most vertices , covers . note that the proof of lemma [ lem : dicoexistence ] only addresses the composition robustness for sequences of up to nodes .however , it is clear that .( note that it is always possible to determine the number of nodes by binary search using requests . )
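as an aside to the parenthetical remark on binary search , a minimal python sketch of the idea is given below ; the oracle can_embed is a hypothetical stand-in for issuing a chain ( path ) embedding request of a given length and is not part of the framework itself :

```python
def substrate_size(can_embed, upper_bound=2**20):
    """estimate the number of substrate nodes with o(log n) requests,
    assuming an oracle can_embed(k) that answers whether a chain request
    of k unit-capacity nodes is embeddable."""
    # exponential search for an upper bound first (also o(log n) requests)
    hi = 1
    while hi < upper_bound and can_embed(hi):
        hi *= 2
    lo = hi // 2                      # largest power of two that still fits
    # binary search: find the largest k with can_embed(k) == True
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if can_embed(mid):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    n = 37                                     # pretend substrate size
    print(substrate_size(lambda k: k <= n))    # -> 37
```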
the network virtualization paradigm envisions an internet where arbitrary virtual networks ( vnets ) can be specified and embedded over a shared substrate ( e.g. , the physical infrastructure ) . as vnets can be requested at short notice and for a desired time period only , the paradigm enables a flexible service deployment and an efficient resource utilization . this paper investigates the security implications of such an architecture . we consider a simple model where an attacker seeks to extract secret information about the substrate topology , by issuing repeated vnet embedding requests . we present a general framework that exploits basic properties of the vnet embedding relation to infer the entire topology . our framework is based on a graph motif dictionary applicable for various graph classes . we provide upper bounds on the _ request complexity _ , the number of requests needed by the attacker to succeed . moreover , we present some experiments on existing networks to evaluate this dictionary - based approach .
complex networks have been studied extensively in recent years in the fields of mathematics , physics , computer science , biology , sociology , etc .various networks are used to model and analyze real world objects and their interactions with each other .for example , in sociology , airports and airflights that connect them can be represented by a network ; in biology , yeast reactions is also modeled by network ; etc . the mathematical terminology for a networkis conveniently described in the language of graph theory .a common encoding of graphs uses an adjacency matrix , or an edge list , when the adjacency matrix is sparse .however , even for a large network the edge list contains a large information storage . in the casethat some important network is transfered frequently between computers , it will save time and cost if there is a scheme to efficiently encode , and therefore compress the network first . fundamentally we find it a relevant issue to ask how much information is necessary to present a given network , and how symmetry can be exploited to this end . in this paperwe will demonstrate one way to reduce the information storage of a network by using the idea that habitually graphs have many nodes that share many common neighbors .so instead of recording all the links we could rather just store some of them and the difference between neighbors .the ideal compression ratio using this scheme will be where is the average degree of the network , compared to the standard compression using yale sparse matrix format which gives . in practicethis ratio is not attainable but the real compression ratio is still better than using ysmf as shown by our results .a graph is a set of vertices ( or nodes ) together with edges ( or links ) which are the connected pairs .graphs are often used to model networks .it is sometimes convenient to call the vertices that connect to a vertex in a graph to be the neighbors of . we will only consider undirected and unweighted graph in this paper .a drawing as in figure [ fignetwork ] allows us to directly visualize the graph ( i.e. the nodes and the connections between them ) , but a truism that anyone who works with real world graphs from real data knows is that commonly those graphs are so large that even a drawing will not give any insight . visualizing structure in graphs of such sizes ( to ) begs for some computer assistance .an _ adjacency matrix _ is a common , although inefficient data representation of a graph .the adjacency matrix of a graph is a square matrix where is the number of vertices of the graph and the entries of are defined by : for example , the adjacency matrix for the graph in figure [ fignetwork ] is .\ ] ] however , in the case that the number of edges in a graph are so few that the corresponding adjacency matrix is sparse , the _ edge list _ will be used instead .the edge list is a list of all the pairs of nodes that form edges in a graph .it is essentially the same as the edge set for a graph . 
using edge list to represent the same graph as above we will have : note here that in the edge list we actually record the label of nodes for each edge in the graph , so for undirected graph, we can exchange the order for each pair of nodes .we will only consider sparse simple graphs , whose adjacency matrices will thus be binary sparse matrices , and the standard information storage for such graphs or matrices will be the information units that are needed for the corresponding edge list ( or two dimensional arrays ) .we now sharpen the definition for the _ unit of information _ in our context . from the perspective of information theory , a message which contains different symbols will require bits for each symbol , without any further coding scheme .the edge list representation is one example of a text file which contains different symbols ( often represented by natural numbers from to ) for a graph containing vertices .note that the unit of information depends only on the number of symbols that appear in the message , i.e. the number of vertices in a graph , so for any given graph this will be a fixed number .thus , when we restrict the disscussion to any particular graph , it is convenient to assume that each pair of labels in the edge list requires one information unit without making explicit what is the size of that unit .for example , the above graph requires information units . in this paperwe will focus on how to represent the same graph using fewer information units than its original representation .as a motivating example , let us consider the following graph and its edge list .note that here the neighbors of node are almost the same as those of node .the edge list for this graph will be : this requires information units for the edge list .however , if we look back to the graph , we note that in this graph there are many common neighbors between node and node , so there is a great deal of information redundancy . considering the subgraphs ,the neighbors of node are almost the same as the neighbors of node , except that node links to , but not , while node links to , but not .taking the redundancy into account , we generate a new way to describe the same graph , exploiting the graphs . in the graph of figure [ figp ], we see that the subgraph including vertices is very similar to the subgraph including vertices , see figure [ subgraphs ] .we exploit this redundancy in our coding .we store the subgraph which only consists of node 1 , and all its neighbors .then , we add just two more parameters , and that allows us to reconstruct the original graph . here the ordered pair tells us that in order to reconstruct the original graph we need to first copy node to node . by copy , we mean the addition of a new node into the exsiting graph with label , and then linking all the neighbors of node to the new node . .copy from node to node .,width=405,height=183 ] the set tells us that we should then delete the link that connects the new node 2 and 3 and add a new link between 2 and 10 ..,width=480,height=192 ] after all these operations we see that we successfully reconstruct the graph with fewer information units , in this case , nearly half as many as the original edge list .so instead of equation ( [ exedges ] ) , we may use the edge list of the subgraph as well as two sets to represent the same graph . 
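the copy-and-correct idea of this example can be written down compactly . the following python sketch uses our own notation ( an adjacency dict of neighbor sets and the functions copy_delta / reconstruct ) and assumes for simplicity that the two matched nodes are not themselves adjacent ; the paper handles that case with a sign convention on the correction set :

```python
def copy_delta(adj, i, j):
    """encode node j's neighborhood as 'copy node i, then apply beta',
    where beta is the symmetric difference of the two neighbor sets.
    adj maps each node to the set of its neighbors."""
    alpha = (i, j)                              # copy i -> j
    beta = (adj[i] - {j}) ^ (adj[j] - {i})      # links to toggle afterwards
    return alpha, beta

def reconstruct(adj_without_j, alpha, beta):
    """rebuild node j from the reduced graph and the (alpha, beta) record."""
    i, j = alpha
    adj = {v: set(nbrs) for v, nbrs in adj_without_j.items()}
    adj[j] = set(adj[i])                        # copy step: j gets i's neighbors
    for v in adj[j]:
        adj[v].add(j)
    for v in beta:                              # toggle the differing links
        if v in adj[j]:
            adj[j].remove(v); adj[v].discard(j)
        else:
            adj[j].add(v); adj[v].add(j)
    return adj
```

in this encoding alpha is a single pair of labels and beta a list of single labels , which is exactly the bookkeeping used in the redundancy analysis of the next section .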
and.,width=403,height=211 ] the above example suggests that by exploiting symmetry of the graph , we might be able to reduce the information storage for certain graphs by using a small subgraph as well as and as defined above .however , there remains the question of how to choose the pair of vertices so that we actually reduce the information , and which is the best possible pair ?it is important to answer these questions since most of the graphs are so large that we never will be able to see the symmetry just by inspection as we did for the above toy example . in the followingwe answer the first question , and partly the second , by using a greedy algorithm . in section 3we will define information redundancy for a binary sparse matrix and show that it reveals the neighbor similarity between vertices in a graph which is represented by its corresponding adjacency matrix . then in section 4 we will give a detailed description of our algorithm which allows us to implement our main idea .then in section 5 we will show some examples of these applications followed by discussion in section 6 .the graphs we seek to compress are typically represented by large sparse adjacency matrices .an edge - list is a specific data structure for representing such matrices , to reduce information storage .we will consider the edge - list form to be the standard way of storing sparse matrices , which requires units of information for a graph with edges .there are approaches of compressing sparse matrices , among which the most general is the yale sparse matrix format , which does not make any assumption on the structure of the matrix and only requires units of information .there are other approaches , such as which emphasize not only the storage but also the cost for data access time .we will focus on the data storage , so the yale format will be considered as a basic benchmark approach for compression of a sparse matrix , to which we will compare our results .the yale format yields the compression ratio : where is the average degree of the graph .we will show our approach of compressing the sparse matrices by first illustrating how the redundancy of a binary sparse matrix will be defined regarding to our specific operation on the matrix .generally , the adjacency matrix is a binary sparse matrix , where equals or indicating the connectivity between node and . for a simple graph consisting of edges this matrix has nonzero entries , but since it is symmetric only half of them are necessary to represent the graph , which yields units of information for the edge - list .now , if two nodes and in the graph share a lot of similar neighbors , in the adjacency matrix row and row will have a lot of common column entries , and likewise for column and column ( due to the symmetry of the matrix ) .suppose that we apply the operation to the graph , mentioned in the last section , by choosing and the corresponding , we will not need row and column in the matrix , to represent the graph .the number of nonzero entries in row and column is where is the degree of node in the graph . by doing that , the number of nonzero entries in the new adjacency matrix becomes , which requires units of information .however , the extra information we have to record is encoded in and . always has two entries , which requires unit of information , and the units of information for depend on the number of different neighbors between node and node .if and have different neighbors , the size of will be latexmath:[\[\label{sizebeta } will thus be . 
taking both the reduction of the matrix and the extra information into account ,the actual information it requires after the operation is this is true for different from .we could extend the operation to allow meaning a self - match , then we will put all the neighbors of into the corresponding set , and then delete these links associated with . then by a similar argument we find that after this operation we need units of information using the new format . note that here we need to clarify exactly the meaning of different neighbors since in the case that and are connected is a neighbor of but is not , and likewise for .however , this extra information can be simply encoded in by making the following rule : means when we reconstruct we do not connect and and means we connect and when we reconstruct. then we can write . from the above discussionwe see that if we define then by choosing , measures exactly the amount of information it reduces .we call the information redundancy between node and . note here that in general this redundancy is not symmetric in and , since for any pair of nodes is symmetric but the degree of these two nodes can be different , and deleting the node with higher degree will always reduce more units of information compared to deleting the lower degree node .we form the redundancy matrix by setting the entry in row and columnn to be .we perform the shrinking operation for the pair with maximum , thus saving the maximum amount of information .for example , again using the graph from section , the adjacency matrix is : ,\ ] ] and the corresponding redundancy matrix is : ,\ ] ] the maximum entry in is , indicating that either choice of or will give the maximum information reduction , and the corresponding can be obtained by recording the column entries in row and row according to our rule . in the above discussionwe only consider a one step shrinking operation on the graph and find out the direct relationship between the maximum information reduction and the redanduncy matrix .but we know that after deleting one node the resulting graph is still sparse and so could be compressed further by our scheme .the question is then how to successively choose and to obtain the best overall compression .let denote the operation at step , ( here the sign for would not affect our analysis so by convience we just write ) .in order to analyze the multi - step effect , we first consider how the adjacency matrix is affected by the orbit .let be the original adjacency matrix .let be the corresponding adjacency matrix after applying and the entries in it be .on deleting node we actually set row and column to be zero in and all the other entries are unchanged , to obtain the new matrix , i.e. so by induction we see that then we analyze how the redundancy matrix changes .use to represent the redundancy matrix , the degree of node , and the number of different neighbors of node and , associated with the graph of .since our goal is to achieve compression , once a node is deleted in the graph it is useless for future operations .so we will set if or has been deleted before , i.e. now for those and that have not been deleted , i.e. 
, by equation [ reddef ] we see that for and .since is obtained by deleting row and columnn in , the degree of each node changes according to : and changes according to thus , we conclude that for \nonumber\\ & = & k_{t-1}(j)-1-\frac{1}{2}\delta_{t-1}(i , j)-a_{t-1}(j , j_{t})+\frac{1}{2}|a_{t-1}(i , j_{t})-a_{t-1}(j , j_{t})|\nonumber\\ & = & r_{t-1}(i , j)+[\frac{1}{2}|a_{t-1}(i , j_{t})-a_{t-1}(j , j_{t})|-a_{t-1}(j , j_{t})]\end{aligned}\ ] ] and for by induction , we obtain that for : \end{aligned}\ ] ] and for : .\end{aligned}\ ] ] by use oft the fact that , by equation [ updatea ] , we can simplify the above two expressions to yield , \mbox { } \mbox { } \mbox { } \mbox { } \mbox { } \mbox { if } i \neq j\nonumber\\ r_{t}(i , i ) & = & r_{0}(i , i)+\sum_{\tau=1}^{t}[-\frac{1}{2}a_{0}(i , j_{\tau})].\end{aligned}\ ] ] note that if we choose a pair at step , the information we save is measured by .thus , for any orbit satisfying for ( we call such an orbit a _ natural orbit _ ) , the total information reduction ( or information saving ) will be : \end{aligned}\ ] ] where is defined by : \mbox { } \mbox { } \mbox { } \mbox { } \mbox { } \mbox { if } i \neq j\nonumber\\ c(i , i , t ) & = & \sum_{\tau=1}^{t}[-\frac{1}{2}a_{0}(i , j_{\tau})].\end{aligned}\ ] ] so the compression problem can be stated as : one more thing to mention is that the length of the orbit , , is also a variable , which could not be larger than since there are only nodes in the graph and it is meaningless to delete an ` empty ' node which does not even exist .from the previous section we see that for a given adjacency matrix , the final compression ratio depends on the orbit we choose , and the compression problem becomes an optimization problem . however , to find the maximum of and the corresponding best orbit is not trivial .one reason is that the number of natural orbits is of order , which makes it impractical to test and try for all possible orbits .another reason which is crucial here is that for any given orbit of length , evaluating costs operations , making it hard to find an appropriate scheme to search for the true maximum or even the approximate maximum .instead , we use a greedy algorithm to find an orbit which gives a reasonable compression ratio , and which is easy to apply .the idea of the greedy algorithm is that at each iteration step we choose the pair of nodes and which maximizes over all possible pairs , and we stop if the maximum value is non - positive . also we need to record and according to the graph .here we summarize the greedy algorithm as pseudocode : given the adjacency matrix of a graph ( nodes and edges ) .begin : set ; calculate for all .this forms the redundancy matrix .set t=1 .1 . .+ + + + + 2 . + ; + ; + .3 . set and go to step 1 . the compressed version of the matrix will consist of : the final matrix , the orbit and the vectors , which will allow us to reconstruct and any intermediate matrix during the compression process .in this section we will show some examples of our compression scheme on several networks .we begin with the lattice graph , which is expected to be readily compressible due to the high degree of overlapping between neighbors of nodes . as a secondary example, we add some random alterations , and apply our method to the corresponding watts - strogatz network . 
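before turning to these examples , the greedy loop summarized in the pseudocode above can be rendered as a short python sketch . this is our own illustration rather than the authors' code : the redundancy formula follows the derivation above , while self-matching and the sign rule for an edge between the chosen pair are omitted for brevity :

```python
import numpy as np

def greedy_compress(a, max_steps=None):
    """greedy shrinking of a symmetric 0/1 adjacency matrix `a`:
    at each step pick the pair (i, j) with the largest redundancy
    r(i, j) = k(j) - 1 - delta(i, j) / 2, record (alpha, beta),
    delete node j, and stop when no positive redundancy remains."""
    a = a.copy().astype(np.int8)
    alive = np.ones(len(a), dtype=bool)
    orbit = []
    while max_steps is None or len(orbit) < max_steps:
        k = a.sum(axis=1)                       # current degrees
        idx = np.flatnonzero(alive)
        best, best_pair = 0.0, None
        for i in idx:
            for j in idx:
                if i == j:
                    continue
                # number of differing neighbors, excluding the pair itself
                delta = np.count_nonzero(a[i] != a[j]) - a[i, j] - a[j, i]
                r = k[j] - 1 - delta / 2.0
                if r > best:
                    best, best_pair = r, (i, j)
        if best_pair is None:                   # no positive redundancy left
            break
        i, j = best_pair
        beta = [v for v in idx if v not in (i, j) and a[i, v] != a[j, v]]
        orbit.append(((i, j), beta))
        alive[j] = False                        # delete node j
        a[j, :] = 0
        a[:, j] = 0
    return a, orbit
```

the returned shrunken matrix , together with the orbit and the recorded beta sets , is the data needed to reconstruct the original graph .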
finally we show some results for real - world networks .one of the most symmetric graphs is the lattice graph , a one - dimensional chain where each site is connected to nearest neighbors to its right and left . in this case represents the degree of each vertex in the lattice graph .the total number of nodes is , the corresponding adjacency matrix is sparse .we implement our algorithm for lattice graph with different .the results are shown in figure [ latticeresults ] . herewe take . to .the compression limit is indicated by the bottom curve given by , and we find that for large the compression ratio is close to the empirical formula : ( upper curve ) . for comparison , we plot the result using ysmf ( broken line ) : . for , our algorithm always achieves a better result than the ysmf and the advantage increases with increasing .,width=384,height=288 ] it is not surprising that the lattice graphs are easy to compress since these graphs are highly symmetric and nodes have lots of overlaps in their neighbors .however , in the case that we do nt have such perfect symmetry , we still hope to achieve compression . herewe apply our algorithm to the ws graphs .the ws graph comes from the famous watts - strogatz model for real - world networks by showing the so called small - world phenomenon .the ws graph is generated from a lattice graph by the usual rewiring of each edge with some given probability from the uniform distribution .we apply our algorithm to ws graphs with different to explore how affects the compression behavior .results are shown in figure [ wsresults ] . and .the stars show the compression results by our algorithm .the lower line is the compression ratio for the lattice and and the upper line is the ratio from the ysmf . we see that as increases there is less and less overlapping between neighbors in the network and the compression ratio increases . for ,we obtain worse result than ysmf.,width=384,height=288 ] in the following we show the compression results for some real world graphs : a c.elegans metabolic network ( figure [ metabolic ] ) , a yeast network constructed from yeast reactions , an email network , and an airline network of flight connections . in the table 1we summarize the compression results for these real world graphs . during each step ( left ) , and information redundancy each step ( right).,title="fig:",width=268,height=201 ] during each step ( left ) , and information redundancy each step ( right).,title="fig:",width=268,height=201 ] .compression results for some networks . [ cols="<,<,<,<,<,<",options="header " , ]from the previous section we see that our algorithm works for various kinds of graphs and gives a reasonable result .the ideal limit of our method for a graph with nodes , edges and average degree , which is relative large , is .this is obtained when each during the compression process is empty , meaning that most of the nodes share common neighbors , in which case we only need to record all the , requiring units of information and yields notice that trees do not compress , since for trees , so on average the overlap in neighbors will be even smaller ( likely to be ) , and a possible way to achieve compression is by self - matching for large degree nodes , for example , the hubs in a star graph . 
for comparison, the ysmf always gives the compression ratio which does not compress trees , and has a lower bound , while our method in principle approaches as .actually the compression ratio using ysmf can be achieved by choosing a special orbit in our approach which only contains self - matches , i.e. in this case the neighbors of each node will be put into corresponding sets and since any contains the same pair of numbers we can just use one to represent the pair , resulting in a total information units .so our approach can be considered as a generalization of the ysmf . however , as we observed in our compression results , the compression ratio given by [ idealcpr ] is in general not attainable since it is only achieved for the ideal case that nearly every node in the graph shares the same neighbors , and yet the graph needs to be sparse ! however , for lattices we observe that the actual compression ratio achieved by our algorithm is about , which is of the same order as the ideal compression ratio . for ws graphs ,when the noise is small , our algorithm achieves better compression ratio than ysmf , and the compression ratio is nearly linearly dependent on for . for graph resembles erdos - renyi random graphs , there is no symmetry between nodes to be used and thus our approach does not give good result , as compared to the ysmf . for real world graphs ,the results by our algorithm are better than using ysmf , but not as good as we observed for lattice graphs .this suggests that in real world graphs nodes , in general , share certain amount of common neighbors even when the total number of links is small .this kind of overlap in neighbors is certainly not as common as we see in lattice graphs since real world graphs in general have more complicated structures .e.m.b have been supported for this work by the army research office grant 51950-ma .e.m.b . has been further supported by the national science foundation under dms-0708083 and dms-0404778 , and d.b.a is supported by the national science foundation under phy-0555312 .we thank joseph d. skufca and james p. bagrow for discussion .s. c. eisenstat and h. c. elman and m. h. schultz and a. h. sherman , _ the ( new ) yale sparse matrix package _ , in elliptic problem solvers ii , g. birkho and a. schoenstadt , editors , academic press , new york , 1984 , pp .
in this paper we raise the question of how to compress sparse graphs . by introducing the idea of redundancy , we find a way to measure the overlap of neighbors between nodes in networks . we exploit this symmetry by making use of the overlap in neighbors and by analyzing how much information is saved when the network is shrunk into the specific data structure we created . we then generalize the compression problem as an optimization problem over the possible choices of orbits . to find a reasonably good solution to this problem we use a greedy algorithm to determine the orbit of symmetry identifications and thereby achieve compression . some example implementations of our algorithm are illustrated and analyzed .
phase i of the jaeri - kek joint project for high - intensity proton accelerators was recently approved for construction .the detailed design of the control system is given elsewhere .because of the recent success of epics in the kekb ring controls and the feasibility to share software resources of accelerator controls with other facilities , the epics control software environment has been selected for use in the system .internet protocol ( ip ) network controllers have been chosen for the controls of the linac portion of the project .the ability to use standard ip network software and infrastructure for both controls and its management influenced our decision .if these controllers meet the performance requirements , as expected , their use may be extended to the entire project .this article describes the usage plan and the software implementation of such network - based controllers under epics .a network - based controller and an epics ioc may be connected with an ip network , as shown in fig .[ fig : net ] . in this example , a plc is used , but other network - based controllers act the same way .five components in this scheme and their tasks are listed below : the plc controls local equipment and carries local processing .an equipment expert usually designs the plc logic , and it may include a simple local - operation panel .it can be tested by a management station without an epics environment .this type of autonomous controllers is useful when a robust control is required .the ioc processes logic between several plcs , and keeps their current status in memory .iocs can be designed by an equipment expert or an operator .the opi has no knowledge of network - based controllers , and therefore interacts with the ioc as usual .the management station is utilized to develop ladder software which may be downloaded to a plc , and to diagnose it .the ability to test from outside of the epics environment is useful in order to test for errors in the epics database or applications themselves .network hubs between them should be based on switch technology which does not suffer from message collisions .the design of the network topology is relatively flexible compared with other field networks because of the ip network .a connection to opis may be isolated by a network router to limit the communication to plcs locally .figure [ fig : net ] is symmetric between them , since it shows a physical view .logically , plcs are located on a local network while the opis are on a global network .they communicate in three ways : because a plc can not use the epics channel access ( ca ) protocol , a plc communicates with an ioc with its own protocol .while this protocol is based on polling , an important plc can send urgent information to an ioc without being requested .the ioc communicates with opis through the normal ca protocol .the management station maintains plcs during maintenance periods . due to the potential number of plcs that will be in use, it is important to manage them over the ip network .there are three types of network - based controllers being considered for the project : programmable logic controllers ( plc ) for simple and medium - speed controls .measurement stations ( yokogawa s we7000 ) for medium - speed waveform acquisition . 
plug - in network controller boards for magnets with relatively large power supplies .vme modules installed in epics iocs are used for other purposes , but new network equipment may be added .measurement equipment , such as network - based oscilloscopes , may be especially useful . at the electron linac in kek , central control computers manage approximately 150 plcs for rf , magnet , and vacuum controls .the plc , fa - m3 ( factory ace ) from yokogawa co. , was selected there because of the network software reliability , and the ability to manage the plcs over an ip network .even the ladder software is able to be downloaded into a plc over the network .we have decided to use the same type of plcs at the joint project as well .the communication and control routines for plcs were originally developed for use in a unix environment . while they were designed to access the shared - memory registers on the plcs , they can also directly access i / o modules over the network .since the routines were written with a generalized ip communication package , they were easily ported onto the vxworks and windows operating systems .the routines on windows machines are often useful for developers of ladder software of plcs even without the epics environment .epics device support software has been written utilizing those routines on vxworks . it provides standard epics access methods which can be called from any channel access ( ca ) clients .registers are read by and written to the plcs .each of registers is specified by an inp / out epics record field , which uses an ip address , or a host name , and a register address .the medm panel shown in fig .[ fig : medm ] provides an example of a channel access client .the panel displays current high voltage values and their strip charts for the ion source being conditioned in the linac .this type of application can be handled by the current software version without problems , but the current implementation of the device support software is not yet optimal .a plan to upgrade the software with a conditional write function , which is described later , may be necessary .a waveform acquisition is essential to beam instrumentation and microwave measurements .the measurement station , we7000 from yokogawa co. has been well adopted to beam instrumentation at kek .in addition , their cost performance and electro - magnetic noise elimination are promising .three types of waveform digitizers ( 100ks / s , 100ms / s and 1gs / s ) are currently considered .we decided to out - source the epics device support software , because we thought that it would be a good example of out - sourcing .although it took some time for the company to understand the epics software environment , waveform records have successfully built using disclosed information from yokogawa . 
at this time, we are evaluating the performance of the software .while designing the magnet power supplies for the drift - tube linac ( dtl ) and the separated - dtl , it was realized that a special type of controller was needed , since power supplies were intelligent and had many functions .we thus designed a plug - in - type network controller board , which transfers information and commands over the ip network and a local processor located inside a power supply .there are approximately 50 registers , half of which are utilized for network communication and for diagnostic purposes , such as the last ip address accessed .the controller boards are being built with the power supplies and will be evaluated soon .the software will be very similar and compatible with the plcs.since network - based controllers may reside on a global network , we should be very careful about programming and configuring them .although the number of persons who access such controllers were limited in the previous project , this may not be the case in the present new project .therefore , we decided to make several rules for use of the controllers : we will put an unique identification number ( i d ) to each plc and plug - in network controller .since it will be written in hardware or ladder software , a mistake in the configuration of the ip - address may be found from a management station .a clock counter of the controller should be consulted routinely from a management station to ensure that it works properly .while read functions are not restricted , write functions should be limited to some range of register addresses .for important controllers , a value should be always written indirectly with a value and an address .the combination of the network - based controllers with the epics toolkits will enhance the manageability of the control system .the software for epics toolkits has been developed and is currently being tested .they will soon be used in commissioning the first part of the linac .the authors would like to thank the kekb ring control group people and the joint project staff for valuable discussions .j. chiba _ et al ._ , `` a control system of the joint - project accelerator complex '' , in these proceedings . l.r .dalesio _ et al ._ , `` the experimental physics and industrial control system architecture '' , _ proc . of icalepcs93_ , berlin , nucl .instr . and meth .* a352 * ( 1994 ) 179 .n. kamikubota _ et al ._ , `` introduction of modern subsystems at the kek injector- linac '' , in these proceedings .k. furukawa __ , `` network based epics drivers for plc s and measurement stations '' , _ proc .of icalepcs99 _ , trieste , italy , 1999 , p.409 .
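to illustrate the management rules listed above , a diagnostic that a management station might run over the ip network could look like the sketch below . the register addresses and the read_register helper are hypothetical placeholders , since the actual fa-m3 register map and protocol are not reproduced here :

```python
import time

# hypothetical register layout: placeholders for the site-specific driver
ID_REGISTER = 0x0000      # unique id written into hardware / ladder software
CLOCK_REGISTER = 0x0001   # free-running clock counter

def read_register(host, address):
    raise NotImplementedError("site-specific plc protocol goes here")

def check_plc(host, expected_id, interval=1.0):
    """sanity checks a management station could run routinely:
    (1) the unique id stored in the plc must match the id expected for the
    configured ip address, and (2) the plc clock counter must advance."""
    if read_register(host, ID_REGISTER) != expected_id:
        return False                  # mis-configured ip address or plc
    c0 = read_register(host, CLOCK_REGISTER)
    time.sleep(interval)
    c1 = read_register(host, CLOCK_REGISTER)
    return c1 != c0                   # counter should have advanced
```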
network - based controllers such as plcs and measurement stations will be used in the control system of the jaeri - kek joint project ( high intensity proton accelerator facility ) . the ability to standardize the network hardware and software has led to this decision . epics software support for those controllers has been designed with special attention paid to robustness . this software has been implemented and applied to the accelerator test stand , where the basic functionality has been confirmed . miscellaneous functions , such as software diagnostics , will be added at a later date . to make the controllers more manageable , network - based equipment such as oscilloscopes is also being considered .
solar system evolution hinges on the outcome of collisions . in order for planetary accretion to proceed , for example, collisions must result in larger , not smaller pieces .asteroid dynamical families are in most cases thought to be the outcome of catastrophically disruptive collisions between parent bodies ; smaller asteroids and interplanetary dust may be the result of collisional cascades ; asteroid and planetary binaries ( ida and dactyl , pluto and charon , earth and moon ) may also be an expression of impact .impacts or collisions can be grouped in three different categories depending on outcome : cratering , shattering and dispersing .the first category is defined by events leading to the formation of topographical signatures ( craters ) accompanied by ejection of material but without affecting the physical integrity of the main body .shattering impacts , on the other hand , are events that break the parent body into smaller pieces . dispersing events are those which not only break the body into pieces , but also manage to impart velocities to those fragments in excess of escape velocity .much observational evidence testifies to these most energetic events : the dynamical asteroid families for example ( such as koronis ) , and the iron meteorites which are fragments excavated by impact from the cores of differentiated bodies .it has become customary in the literature to characterize impacts in terms of a specific energy threshold ( the kinetic energy in the collision divided by target mass ) .the threshold for a shattering event is defined by , the specific energy required to break a body into a spectrum of intact fragments , the largest one having exactly half the mass of the original target . dispersing events , on the other hand ,are defined by , the specific energy required to disperse the targets into a spectrum of individual but possibly reaccumulated objects , the largest one having exactly half the mass of the original target . in the strength regime , where gravity does not matter (fragments do not reaccumulate ) , obviously equal to . in the gravity regimehowever , always greater than , since the target must be fragmented and also _ dispersed _ by the event .laboratory experiments can be designed to determine this threshold for small targets , i.e. targets in the strength dominated regime ( see for example fujiwara _et al . _ 1989 ; davis & ryan 1990 ; ryan _ et al ._ 1991 ) . 
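for concreteness , the specific energy just defined is simply the projectile kinetic energy divided by the target mass ; a minimal python sketch follows ( the default basalt-like density is an assumption for the example , not a value taken from the paper ) :

```python
import math

def specific_impact_energy(r_imp, v, r_target, rho_imp=2700.0, rho_target=2700.0):
    """specific energy q of an impact: projectile kinetic energy divided by
    target mass.  radii in m, velocity in m/s, densities in kg/m^3."""
    m = 4.0 / 3.0 * math.pi * r_imp ** 3 * rho_imp        # projectile mass
    M = 4.0 / 3.0 * math.pi * r_target ** 3 * rho_target  # target mass
    return 0.5 * m * v ** 2 / M                           # j / kg

# example: a 1 m diameter projectile at 3 km/s into a 1 km diameter target
print(specific_impact_energy(0.5, 3.0e3, 500.0))
```

whether such a value exceeds the shattering or dispersal thresholds is precisely what the simulations described below are designed to determine .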
by using up to meter - sized targets , housen and holsapple ( 1999 ) were able to confirm previous theoretical prediction of strength weakening with size .impacts in pressurized targets ( housen 1991 ) designed to simulate self - gravitating bodies indicate that strength increases again in these `` larger '' targets .however , these artificially pressurized targets are uniformly compressed while in truly self - gravitating bodies the overburden is a function of position .the interpretation of these experiments is therefore not completely straightforward .the scales of planetary impacts are far different from what can be studied directly in the laboratory , and extrapolations by more than a dozen orders of magnitude in mass are required before reaching a range relevant to asteroids and/or planetesimals .detailed theoretical models of disruptive impacts ( holsapple & housen 1986 ; housen & holsapple 1990 ; holsapple 1994 ) try to bridge this gap by establishing relations among non - dimensional ratios involving impactor size , impact velocity , target strength , density , etc .such relations , deriving from dimensional analysis , assume uniformity of process , structural continuity and other idealizations , and can not predict detailed outcomes .recent exponential increases in computational power have enabled numerical simulations to become the method of choice to investigate these issues in greater detail .laboratory impact experiments are used to validate the numerical models on small scales before extrapolations to sizes relevant to solar system bodies are undertaken . however, numerical attempts to determine for the moment been limited to idealized geometry ( two - dimensional axisymmetry ) ( ryan & melosh 1998 ) or restricted to a subset of target sizes ( gravity regime ) ( melosh & ryan 1997 ; love & ahrens 1996 ) and low resolution ( love & ahrens 1996 ) . in this paper , we use a smooth particle hydrodynamics method ( sph ) to simulate colliding basalt and icy bodies from cm - scale to hundreds of km in diameter , in an effort to define _ self - consistently _ the threshold for catastrophic disruption .unlike previous efforts , this analysis incorporates the combined effects of material strength ( using a brittle fragmentation model ) and self - gravitation , thereby providing results in the `` strength regime '' and the `` gravity regime '' and in between .we begin with a short presentation of the physical model ( section [ sec : physics ] ) followed by a discussion of various numerical issues pertaining to our study ( section [ sec : numerics ] ) .the results obtained from 480 different simulations are presented and discussed in section [ sec : results ] .our approach of dynamical fracture modeling has been described in detail elsewhere ( benz and asphaug 1994 , 1995 ) and will only be summarized here .the equations describing an elastic solid are the usual conservation equations ( mass , momentum and energy ) in which the stress tensor has a non - diagonal part , the so - called deviatoric stress tensor . with the assumption that the stress deviator rate is proportional to the strain rate , i.e. 
hooke s law , and a suitable equation of state ( see section [ subsec : eos ] ) it is possible to numerically solve this set of coupled differential equations .plasticity is introduced by suitably modifying the stresses beyond the elastic limit using a von mises yielding relation .not counting the equation of state , this approach requires 3 material - dependent constants : the shear modulus , elastic limit and the melt energy used to decrease the elastic limit with increasing temperature .young s modulus can be computed from the knowledge of the bulk modulus ( parameter in section [ subsec : eos ] ) and the shear modulus according to .for the lower tensile stresses associated with brittle failure , we use a fracture model based on the nucleation of incipient flaws whose number density is given by a weibull distribution ( weibull 1939 ; jaeger & cook 1969 ) where is the number density of flaws having failure strains lower than .the weibull parameters and are material constants which can be determined from laboratory experiments comparing failure stress to strain rate , although data are scarce .table [ tab : matcon ] gives the numerical values of these material - dependent constants used in this study .the weibull parameters for basalt are a previously - determined ( benz & asphaug 1995 ) best match to the laboratory impact experiments of nakamura & fujiwara ( 1991 ) ; parameters for ice are derived by fitting a line to the measurements of rate - dependent fracture stress published in lange & ahrens ( 1983 ) . once flaws are activated , fractures grow at constant velocity , about half the sound speed . the extent to which fracture affects the local properties of matteris described by a scalar state variable called _damage_. the possible values for range between ( undamaged ) to ( totally damaged ) . the ability to sustain shear or tension by individual particles is reduced linearly with and vanishes for . because damage accrues according to the entire stress history of a parcel of matter , only lagrangian solutions to the hydrodynamics equations are applicable . our model of dynamic fragmentationexplicitly reproduces the growth of cracks in a brittle elastic solid by rupturing bonds and forming new free surfaces .cracks grow when local failure strains are exceeded , and stresses are relieved across the crack boundaries .the release of stress along fracture walls increases differential stress at the crack tips , driving cracks forward in the manner of an actual brittle solid .tensile and shear stresses are unsupported across disconnected regions , leading to reduced average strength and sound speed ( i.e. damage ) in the body . by producing actual cracks and fragments , our method at sufficiently high resolution automatically takes into account friction between fragments and bulking , effects which are included as recipes in statistical damage models .we used the so - called tillotson equation of state ( tillotson 1962 ) which was specifically derived for high - velocity impact computations .this equation has the advantage of being computationally expedient while sophisticated enough to allow its application over a wide regime of physical conditions . for compressed regions and for cold expanded states where the energy density ( ) is less than the energy of incipient vaporization ( )the tillotson eos has the form : \ ; \rho \ , e + a \ , \mu + b \ ,\mu^{2 } \label{eq : tillc}\ ] ] where and , such that is the compressed density and is the zero - pressure density . 
the coefficients a, b, A and B appearing here are the material-dependent tillotson parameters. for expanded states, when the internal energy E is greater than the energy of complete vaporization, the pressure has the form

p = a \rho E + \left[ \frac{b \rho E}{E/(E_0 \eta^2) + 1} + A \mu \, \exp(-\beta (\rho_0/\rho - 1)) \right] \exp(-\alpha (\rho_0/\rho - 1)^2)

(with \eta = \rho/\rho_0 and \mu = \eta - 1), where \alpha and \beta are constants that control the convergence rate of the equation to the perfect gas law. for intermediate states, pressure is simply interpolated linearly between the expanded and compressed states. tillotson parameters for a variety of geologic materials have been compiled by melosh (1989). however, the most fundamental coefficients (especially density and bulk modulus) for candidate materials are typically different from those reported for specimens used in laboratory impact fragmentation experiments, from which our weibull coefficients (table 1) are derived. we therefore make the following alteration to the equations of state, in an effort to best characterize the material in the fragmentation experiments: we substitute the measured density for the published tillotson reference density of the most similar material, and the measured bulk modulus for the tillotson parameter A. because the tillotson parameter B is not known for the specific target material, we make use of the fact that A and B are comparable for most geologic materials. this is probably a good assumption for basalt but may underestimate the second-order pressure terms in ice, making it more " rock-like ". we summarize in table [ tab : tillotson ] the tillotson parameters used in this paper. this situation is not ideal. however, given the rudimentary understanding of asteroidal and cometary compositions (they are surely neither pure basalt nor pure water ice!), we feel that there is little to be gained until comprehensive experimental work, combining dynamic strength and equation of state, can be performed on more representative candidate materials at the appropriate temperature. in this section we discuss various issues related to the numerical methods used to either simulate the impacts or analyze the results of the simulations. fracture depends on the entire stress history of a given piece of material. a lagrangian approach, in which the frame of reference is attached to the material, is therefore the natural framework for solving the equations briefly described in section [ sec : physics ] ; eulerian codes to date can not accurately follow stress history and the development of cracks. conventional lagrangian codes, however, are unable to handle large material deformations, as tangling and deformation of the grid severely affect the accuracy of derivatives. smooth particle hydrodynamics (see reviews by benz 1990 ; monaghan 1992) does not suffer from this problem. we have developed a three-dimensional sph code capable of simulating dynamical fracture of brittle material (benz & asphaug 1994, 1995). our sph code being an explicit code, the size of the time step is limited by the courant condition. in practice, this means that the time step can not exceed a fraction of the time needed by an elastic wave to cross a resolution element. if this element is of size h, numerical stability requires a time step smaller than roughly h/c, where c is the wave speed. with 42,000 particles, h is of order a few percent of the target radius. as an estimate for the wave speed we take c = sqrt(A/\rho), where A is the bulk modulus.
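a back-of-the-envelope sketch of the two timescales discussed here and in the next paragraph, assuming round basalt values (bulk modulus ~2.7e10 pa, density 2700 kg/m^3) and a resolution element of order r / n^(1/3) for n particles; these inputs are illustrative assumptions rather than the exact code parameters.

    import math

    G = 6.674e-11      # gravitational constant (SI)
    A = 2.7e10         # assumed bulk modulus of basalt (Pa)
    rho = 2700.0       # assumed basalt density (kg/m^3)
    N = 42000          # number of sph particles in the target

    def courant_dt(R):
        """rough upper bound on the explicit time step: h / c_wave."""
        c = math.sqrt(A / rho)        # elastic wave speed, about 3 km/s for basalt
        h = R / N ** (1.0 / 3.0)      # size of a resolution element
        return h / c

    def clumping_time():
        """dynamical (gravitational reaccumulation) timescale, ~ 1/sqrt(G*rho)."""
        return 1.0 / math.sqrt(G * rho)

    R = 100.0  # target radius in metres
    print("dt  ~ %.1e s" % courant_dt(R))
    print("t_g ~ %.1e s" % clumping_time())
    print("steps to reach reaccumulation ~ %.1e" % (clumping_time() / courant_dt(R)))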
taking the values appropriate for basalt, we get a wave speed of roughly 3 km/s, which leads to an upper bound for the time step that scales linearly with the target radius. gravitational clumping of fragments and the formation of rubble piles, on the other hand, take place on a dynamical timescale of order 1/sqrt(G \rho). for basalt, this clumping timescale is of order 10^3 s. the number of timesteps required to follow a collision up to the time where gravitational reaccumulation occurs is therefore the ratio of these two timescales. for targets of radius 100 m, this requires a number of timesteps well in excess of 10^6, which is currently prohibitive. while these two vastly different timescales prevent us from simulating the entire process directly, they also mean that self-gravitation is entirely decoupled from the impact phase for intermediate targets (see asphaug 1997). clumping can therefore be studied independently as part of the postprocessing analysis of the impact simulations proper (see section [ subsec : frag ]). this also has the additional advantage of not having to solve poisson's equation (self-gravity) during the impact phase. we therefore use a faster linked-list-in-rank-space algorithm to find the neighbor particles rather than a hierarchical tree. while the dynamical effects of self-gravity are not included in the simulations, the fracture shielding effect due to compression is included in the code (see section [ subsec : flaw ]). the code's behavior in the elastic regime was extensively tested against simple analytical solutions, while the fracture modeling was tested by simulating laboratory impact experiments (nakamura and fujiwara 1991 ; ahrens and rubin 1993) leading to core-type and radial fragmentation and extensive spallation. the code was able to reproduce the laboratory experiments to a level of detailed accuracy never achieved before, including shape, ejection speed and rotation of fragments, trajectories of far-field fracture, and post-fragmentation sound speed relationships. we use the explicit flaw assignment procedure described in benz & asphaug (1995) with the material-dependent weibull parameters listed in table [ tab : matcon ]. our explicit flaw method improves upon the mixed explicit-implicit method of benz & asphaug (1994). these methods are wholly independent of numerical resolution (in terms of the assigned flaw statistics) and lead to rate- and size-dependent fracture thresholds. each particle is assigned a number of discrete fracture thresholds distributed at random from the underlying weibull distribution (eq. 1). when local stresses exceed such a threshold, damage is allowed to grow until the local stresses decrease again, or until the damage reaches its maximum authorized value. this maximum authorized value for a given particle is the ratio of its currently active number of flaws to the total number of fracture thresholds assigned to it. this particular algorithm of damage accrual and authorization ensures that flaw distribution and activation are not linked to numerical resolution. for fracture to proceed, stresses must first overcome the local gravitational overburden. to include what amounts to an effective _ gravitational strengthening _ without having to solve the full poisson equation during the simulations, we use the technique (asphaug and melosh 1993) of adding to the local stress tensor (determined from solving the elasto-dynamical equations) an isotropic lithostatically equilibrated stress. the sum, the " total fracture stress ", is converted into a weibull strain for use
in eq . 1 by dividing the largest tensile principal stress by the modified young s modulus , as pioneered by melosh_ et al . _note that the difference between the rock fracture stress and the total fracture stress is negligible for targets smaller than a few 10 s of km radius , and that for larger targets , strengthening towards the interior may be due as much to thermodynamical processes ( annealing ) than to gravity per se .as outlined above , late time gravitational evolution can for intermediate size targets be decoupled from the impact physics itself .thus , we perform the characterization of the collisional outcome as a postprocessing step .our approach to find the final collisional outcome proceeds in two steps .first , and regardless of target size , we begin by searching for intact ( undamaged ) fragments , i.e. fragments that are held together by material strength alone .our method makes no assumptions regarding their number , geometry , or location , as they are a natural outcome of the fracture trajectories resulting from a given simulation .contrary to statistical fragment approaches , our fragments are explicitly defined by the network of cracks resulting from the impact .we use a friends - of - friends algorithm to identify individual monolithic fragments defined as contiguous regions of particles held together by strength and surrounded by strengthless or empty regions . at the end of this procedure , we obtain for each fragment its mass , position , velocity , angular momentum and moment of inertia .this is the shattering " spectrum of fragment sizes discussed in the introduction .as soon as the target size exceeds 50 - 100 m , searching for intact fragments is no longer sufficient , as some of them may be able to reaccumulate due to gravity , leading to remnants incorporating multiple fragments .we search for these gravitationally - bound aggregates by applying a well known iterative procedure adapted from the techniques used in simulations of galaxy formation .the procedure starts by computing the binding energy of all particles and/or fragments with respect to the largest fragment , or if too small , the particle closest to the potential minimum .this serves as the seed for nucleating the total bound mass .unbound particles are discarded , and the center of mass position and velocity of the aggregate is computed .the binding energy of all the remaining fragments and particles with respect to this new position and velocity is again computed .unbound particles are again discarded and the procedure iterated until no particles are discarded .typically , convergence is achieved after a few iterations ( ) with only very few particles being lost after the first 2 - 3 steps .finally , we check that particle / fragment members of this gravitationally bound aggregate are also close spatially , using again a friends - of - friends algorithm .mass , position , velocity , angular momentum and moment of inertia are also determined for this gravitationally bound aggregate , which can be made of a collection of smaller fragments and/or individual particles . because of the limited number of particles used , we limit our search to the single largest aggregate and do not attempt to search for smaller ones .the algorithm has been tested extensively in both the strength and gravity dominated regime by comparing end results of simulations with predictions made at early times by the cluster finding algorithm . 
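the iterative search for the gravitationally bound aggregate can be sketched as follows; this stand-alone version treats the current aggregate as a point mass and omits the final friends-of-friends spatial check, so it only illustrates the seed-and-discard iteration, not the actual post-processing code.

    import numpy as np

    G = 6.674e-11  # gravitational constant (SI units)

    def largest_bound_aggregate(m, pos, vel, seed):
        """iterative unbinding: start from all particles, compute each particle's
        energy with respect to the current aggregate (seeded by the largest intact
        fragment or the particle closest to the potential minimum), discard unbound
        particles, recompute the aggregate's mass and centre of mass, and iterate
        until nothing more is discarded.  m: (n,), pos, vel: (n, 3) arrays, SI."""
        keep = np.ones(len(m), dtype=bool)
        m_agg, r_agg, v_agg = m[seed], pos[seed].copy(), vel[seed].copy()
        while True:
            dr = np.linalg.norm(pos - r_agg, axis=1)
            dv = np.linalg.norm(vel - v_agg, axis=1)
            # specific energy with respect to the aggregate, treated as a point mass
            e = 0.5 * dv**2 - G * m_agg / np.maximum(dr, 1e-3)
            new_keep = keep & (e < 0.0)
            new_keep[seed] = True
            if new_keep.sum() == keep.sum():   # converged: no particle discarded
                return keep
            keep = new_keep
            m_agg = m[keep].sum()
            r_agg = (m[keep, None] * pos[keep]).sum(axis=0) / m_agg
            v_agg = (m[keep, None] * vel[keep]).sum(axis=0) / m_agg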
to overcome the time step problem in the gravity dominated regime ,purely hydrodynamical simulations were used to allow the code to compute to sufficiently late times . for each simulation ,we therefore identify and characterize the largest object present at the end of the collision .this object may either be an intact fragment ( in the strength dominated regime ) or a gravitationally bound aggregate of fragments ( gravity dominated regime ) . in the latter case, we also search for the largest intact fragment belonging to the aggregate ( see fig .[ fig : impcart ] ) , e.g. the largest component of the resulting `` rubble - pile '' .we have considered 8 different target radii : 3 cm , 3 m , 300 m , 1 km , 3 km , 10 km , 30 km , and 100 km , and two different material types : basalt and ice . for each targetwe have computed impacts for five angles of incidence measured from the surface normal : 0 , 30 , 45 , 60 , and 75 degrees . for each materialwe have considered two impact velocities : for icy targets 0.5 km / s and 3 km / s ; for basalt targets 3 km / s and 5 km / s . in each case , computed from a parabolic fit to three different simulations in which only the impactor mass was modified .the entire study therefore represents a total of 480 different simulations which were automatically handled by special driver " software .as each simulation can take up to a few days of high performance workstation time , this represents a significant body of computational effort .because of the statistical nature of this study , we had to limit ourselves to a relatively small number of particles . in all cases , we used particles to model the target .this number was found in convergence - test comparisons between numerical simulations and laboratory impact experiments ( benz & asphaug 1994 , 1995 ) to be sufficient to determine reliably the characteristics ( size , velocity and rotation ) of the largest fragment .thus in all what follows , we shall concentrate only on the characteristics of the largest fragment .the projectile was modeled using particles for the km / s and km / s impacts and for km / s impacts since in this case the impactor was much larger .( in fact , in the case of km large icy targets and large incidence angles the required projectile for a given value of was sometimes bigger than the target ! )the simulations were carried out in time until no further significant changes occurred in the extent of damage endured by the target and in the velocity of the ejected particles .for a given impact geometry , velocity and target ( parent body ) material , determined by interpolation between three different simulations spanning a range of incident kinetic energy per unit target mass ( ) chosen to yield largest remnant masses ( ) generally in the range . in the expression above , represents the mass of the largest remnant ( including gravitational reaccumulation if applicable ) and the mass of the parent body ( or target ) . a parabolic fit ( ) to these results is computed and by solving for () .the results of these calculations are shown in figs .[ fig : qbas3kms ] , [ fig : qbas5kms ] , [ fig : qice3kms ] and [ fig : qice05kms ] for the various velocities and material type . 
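the bracketing-and-interpolation step just described can be illustrated with a short numpy sketch: three simulations at different specific energies give three largest-remnant mass fractions, a parabola is fitted through them, and the threshold is the energy at which the fitted curve crosses one half; the three data points below are invented for illustration only.

    import numpy as np

    # hypothetical (q, m_lr/m_pb) pairs bracketing the 0.5 level
    q = np.array([0.8e3, 1.5e3, 2.5e3])       # specific impact energies
    frac = np.array([0.72, 0.55, 0.31])       # largest remnant mass fractions

    a, b, c = np.polyfit(q, frac, 2)          # parabolic fit frac = a*q^2 + b*q + c

    # solve a*q^2 + b*q + (c - 0.5) = 0 and keep the root inside the bracket
    roots = np.roots([a, b, c - 0.5])
    q_star = [r.real for r in roots if q.min() <= r.real <= q.max()][0]
    print("estimated dispersal threshold ~ %.3g" % q_star)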
in these figures ,each dot represents the value of from the parabolic fit .the dots corresponding to the same angle of incidence of the projectile are connected by a solid line .we recover in these figures the well known functional dependency of target size , namely that with size for small targets while it increases with size for larger targets .these two behaviors correspond to collisions occurring in a strength or gravity dominated regime .the transition between the two regimes occurs in the range for both ice and basalt .we note that for a given target size , a strong function of the projectile s angle of incidence .differences in a head - on and a degree impact reach about a factor 10 .the increase of in the gravitational regime is due to two factors .the most important is the fact that even though bodies exceeding 1 km in radius are almost entirely shattered ( see section [ subsec : largestf ] ) in a , the pieces do not all disperse because their relative velocities are smaller than the escape velocity of the aggregate .hence , for catastrophically disrupted and dispersed bodies larger than km , all the largest remnants are found to be gravitationally bound aggregates of smaller fragments .the second effect is due to gravitational shielding of the central region of the target ( see section [ subsec : flaw ] ) but as we shall see in section [ subsec : largestf ] this affects only the largest targets in our mass range . in many studies ,one is not interested in the outcome of a particular collision but rather in the collisional evolution of an entire population .for these statistical studies , we can compute an effective threshold which is independent of angle of incidence , namely averaged over all possible impact geometries but at fixed relative velocity .for an isotropic distribution of incoming projectiles ( at infinity ) , shoemaker ( 1962 ) showed that the probability distribution for impacting with an angle between and is given by regardless of a planet s mass . using this probability distribution ,we define a mean catastrophic disruption threshold by this integration is carried out numerically with a simple trapezoidal method and using the determined from the simulations .the results are displayed in fig .[ fig : qbasmean ] for basalt targets and in fig .[ fig : qicemean ] for icy targets . in order to compare with disruption thresholds published in the literature and to allow these results to be used by others, we fitted ( by eye ) an analytical curve to of the following functional form where is the radius of the parent body ( or target ) , the density of the parent body ( in g/)and constants to be determined .this functional form is often encountered in scaling law approaches with the two terms representing the two distinct physical regimes dominating the dynamics : 1 ) material strength ( first term on the right , with ) and 2 ) self - gravity ( second term on the right , with ) .the values obtained for the coefficients are listed in table [ tab : qbarcon ] and the fits are represented as lines on figs . [ fig : qbasmean ] and [ fig : qicemean ] . note that the slopes in the gravity regime ( ) are somewhat different between basalt and ice , and even for ice the two slopes corresponding to the two velocities differ slightly .this could be related to the fact that for equal mass targets , ice material is lifted from an initially higher potential ( less negative potential energy ) corresponding to the larger equal - mass target diameter . 
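returning to the angle average defined above, it can be sketched with shoemaker's probability density sin(2*theta) and a trapezoidal rule over the five simulated incidence angles; the per-angle thresholds below are placeholders, and normalizing over the sampled range only (rather than the full 0 to 90 degrees) is a simplification.

    import numpy as np

    def trapz(y, x):
        """simple trapezoidal rule."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

    # incidence angles of the study (degrees) and placeholder per-angle thresholds
    theta = np.radians([0.0, 30.0, 45.0, 60.0, 75.0])
    q_star = np.array([1.0e3, 1.3e3, 1.9e3, 3.5e3, 9.0e3])

    p = np.sin(2.0 * theta)   # shoemaker's impact-angle probability density
    mean_q = trapz(q_star * p, theta) / trapz(p, theta)
    print("angle-averaged dispersal threshold ~ %.3g" % mean_q)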
alternatively , because the shock imparts different velocities to fragments according to different material characteristics ( the velocity field determines the subsequent gravitational reaccumulation ) , equation of state distinctions sensitive to both material type and impact speed may be the culprit . also note that in the gravity regime , faster impacts are less disruptive than slower impacts of the same energy , for both rock and ice .this is due to the greater efficiency of momentum coupling for slower impacts .the governing factor for gravity regime disruption is not shattering , but motion towards gravitational separation .for basalt , this trend continues into the strength regime , although that is coincidental , since the strength regime depends on entirely different aspects of collisional physics ( flaw activation ) . for ice ,the opposite is true : slower impacts ( of the same energy ) result in less disruption .evidently ice is easier to fracture at high strain rates than at low strain rates , relative to basalt .this is either because it has more flaws available at low activation energies ( consistent with its weibull distribution ) , or because vaporization at high impact speed may contribute to fragmentation , which is not the case for the subsonic ( 500 m / s ) collisions .now we compare our averaged dispersion threshold with other published values in fig .[ fig : compq ] . in this figure , we reproduced the _ disruption _ thresholds obtained either from scaling laws or numerical simulations in the strength or gravitational dominated regime . for small targets ,our estimate of the threshold agrees well with the determination of holsapple ( 1994 ) .the largest differences occur for large targets for which we predict that they are significantly stronger ( more difficult to disrupt and disperse ) than previously estimated .this does not arise due to a significant change of slope ( notice our slope is close to the one predicted by holsapple 1994 or melosh & ryan 1997 ) but because the turn over from strength to gravity dominated targets occurs at smaller sizes .we also display in fig .[ fig : compq ] the recent determination of durda _ et al . _their curve is determined by requiring that numerical models of the collisional evolution of the main belt asteroids fit the observed size distribution of these objects .interestingly , they obtain that objects of order 100 - 200 m in diameter are the weakest objects , a conclusion confirmed by our simulations .however , besides the agreement on the size of the weakest objects , our results ( as well as all other determinations of ) , differ from the results of durda _ et al ._ by an order of magnitude or more .the origin of this discrepancy is not clear . on one hand, we note that the values of by durda _et al . _are not obtained from a simple fit to observed sizes but assume an underlying collisional model which might not be predicting accurately collisional outcomes . 
on the other hand , it may also be possible that asteroidal material has very different mechanical properties than the material tested in the laboratory .as already noticed by many ( most recently by ryan and melosh 1998 ) , the efficiency at which the kinetic energy is transmitted from the impactor to individual fragments is extremely low .since the largest remnants in collisions involving targets larger than 1 km are gravitationally bound rubble piles ( see section [ subsec : largestf ] ) their mass is determined ultimately by the velocity distribution imparted to the fragments during the impact .a fragment will remain bound if its velocity remains below the escape velocity of the aggregate of all slow moving fragments .this explains why targets as small as 1 km radius are already significantly strengthened by gravity against dispersal . for a given material type, the radius of the weakest object , is obtained by finding the radius for which . from equation [ eq : qfr ] the value of the radius of the weakest object is given by table [ tab : rweak ] gives the values derived for using the values of the parameters listed in table [ tab : qbarcon ] . for both materials and impact speeds , 100200 m.these values are smaller than those derived in other studies .for example , holsapple ( 1994 ) based on scaling laws gives 3 km as the transition point between the two regimes .melosh & ryan ( 1997 ) as well as love & ahrens ( 1996 ) from numerical simulations give numbers in the range 200 - 400 m .while a collision occurring at by definition a largest remnant with mass equal to half the target mass , collisions occurring with different leave remnants of different masses .we can therefore use all our simulations ( whose intent was to bracket ) to investigate how the mass of the largest remnant depends upon the incident kinetic energy per gram of target material ( ) .[ fig : icemfq ] and [ fig : basmfq ] show the dependency of the mass of the largest fragment on the impact energy obtained in our simulations . to facilitate the interpretation of these results ,the impact kinetic energy per gram of target material has been normalized to each target size and projectile angle of incidence .all simulations ( 120 ) involving a given material type and impact velocity have been plotted on the same plot .the different symbols correspond to targets of different sizes .these figures clearly show that when normalized to relative mass of the largest remnant is a well - defined , simple function of and is independent of target size and/or angle of incidence .the dependence upon these parameters enters only in !the increased scatter in the points for small mass fragments is probably due to the inherent numerical difficulties in resolving these smaller objects ( at fixed resolution ) .this relationship is remarkable since the mass of the largest remnant is not determined by a single process , but either by material strength or gravity depending on target size ! in this regard it is interesting to note that in the case of low velocity collisions on icy targets the correlation is significantly worse , especially for large targets ( km ) .it is unclear why this is the case ; however , in these cases it is worth pointing out that , due to the low velocity , the projectile is sometimes as big as the target and that the relative velocity is significantly smaller than sound speed , indicating that there might be a different disruption regime at low velocity . 
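returning to the two-term fit for the dispersal threshold, setting its derivative with respect to radius to zero gives a closed form for the weakest radius; the coefficients below (strength and gravity terms, exponents a < 0 and b > 0, density in g/cc) are invented purely to show the algebra and are not the fitted constants of table [ tab : qbarcon ].

    def q_star(r_cm, q0, a, b_coef, b, rho):
        """two-term fit: strength term q0*r^a (a < 0) plus gravity term
        b_coef*rho*r^b (b > 0), with r the parent-body radius in cm and
        rho its density in g/cc."""
        return q0 * r_cm**a + b_coef * rho * r_cm**b

    def r_weakest(q0, a, b_coef, b, rho):
        """radius (cm) minimizing q_star, from d(q_star)/dr = 0."""
        return (-a * q0 / (b * b_coef * rho)) ** (1.0 / (b - a))

    # purely illustrative coefficients
    q0, a, b_coef, b, rho = 9.0e7, -0.45, 0.5, 1.2, 2.7
    r_w = r_weakest(q0, a, b_coef, b, rho)
    print("weakest radius ~ %.3g cm, threshold there ~ %.3g erg/g"
          % (r_w, q_star(r_w, q0, a, b_coef, b, rho)))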
except for the case discussed above , we find that the relative mass of the largest remnant can be well represented by the following expression and that this single expression holds for targets ranging from 3 cm to 100 km and angles of incidence between 0 and 75 degrees . these lines are drawn on the various figures and their slopes are remarkably similar : for basalt for / s , for / s and for ice for / s . the case for ice at km / s for some still unknown reason does not yield such a tight relation .we have not attempted a real fit to the data and only plot the line derived for the km / s .we now determine the largest _ intact _ ( unshattered )fragment , , in collisions occurring at . here by undamaged we mean a fragment for which material strength still plays an important role in the cohesive properties of the object even though the fragment may no longer have its original strength .we are looking for the largest boulder in the final rubble - pile .for each target radius and projectile angle of incidence , we obtain from the three simulations performed a parabolic relation .we determine by setting where been determined using the parabolic fit described in section [ subsec : qcrit ] .the values obtained for the mass of these largest intact fragments are displayed in figs .[ fig : fraginice ] and [ fig : fraginbas ] for both material types and impact velocities .not surprisingly , for small targets ( m ) the largest undamaged fragment is equal to half the original target mass regardless of impact parameter .this is simply because in the strength dominated regime the mass of the largest intact fragment is equal to half the target mass by definition of . however , for targets larger than m and regardless of material type and/or impact velocity , the mass of the largest intact fragment drops rapidly even though the collisions occurred at .this reflects the fact that collisions involving parent bodies this size and larger take place in the gravity dominated regime .this regime is therefore characterized by the fact that the largest remnant is not an intact fragment but a gravitationally bound aggregate of fragments . due to the precipitous nature of these curves for targetradii larger than a few 100 m , we do nt expect any monolithic object of this size .on the other hand , we expect a wide variety of internal structures for objects in the 30 m - 300 m range . for the largest targets considered in this study we notice a marginal trend for to rise again .this effect is due to gravitational strengthening of the target discussed in section [ subsec : flaw ] .however , we stress that this effect is only of moderate importance : the main role of gravity in this size range is to allow for the formation of gravitationally bound rubble piles .gravitational strengthening appears also to be strongly dependent on initial impact parameter .for example , in the case of an icy target of 100 km radius a grazing impact occurring at with an incidence angle of leaves behind a largest undamaged fragment of while at the same fragment is smaller than .final objects in the range are found to be essentially gravitationally bound aggregates of smaller fragments . whether or not observed asteroids and comets in this size range are indeed rubble piles depends upon whether they have suffered a during their history .further complications exist ; for example , asphaug _ et al . 
_ ( 1998 ) have shown that target geometry ( shape ) and internal structure ( pre - fracture ) can significantly influence collisional outcome .the spherical homogeneous intact solids considered here may be idealized . a contact binary " asteroid may for instance suffer catastrophic disruption of its impacted lobe , with little or no disruption occurring on its unimpacted lobe .a target which is _ already _ a rubble - pile may similarly be more difficult to disperse by impact , due to the inefficient coupling of impact energy . for each target size , material type and impact parameter we determined the ejection velocity of the largest mass remnant in each of the three simulations bracketing .we note that this velocity is not related in a straightforward manner to the usual energy partitioning coefficient , , namely the fraction of kinetic energy going into fragment kinetic energy .given that the kinetic energy is _ not _ distributed uniformly over all fragments , but rather carried away by a small amount of mass moving very fast , we believe the coefficient to be of little use to address the dynamics of the largest fragments .we therefore focus our attention on determining the actual velocity of the largest fragment or aggregate in each simulated collision , numerical resolution preventing us from studying the smaller ones .these ejection velocities as a function of normalized fragment mass are displayed in figs.([fig : vbas5kms ] , [ fig : vbas3kms ] , [ fig : vice3kms ] , [ fig : vice05kms ] ) for both material types and collision velocities .velocity is measured relative to the center of mass of the original target . in each figure, the upper panel shows the actual ejection velocity .each different symbol corresponds to a different initial target size regardless of angle of incidence .the lower panel shows the ejection velocity normalized to the target s escape velocity for initial targets greater that 1 km .we note that for a given parent body size , larger remnants have lower ejection velocities regardless of angle of incidence .in fact , it is remarkable how little influence the impact parameter seems to have on the ejection velocity of the largest remnant . for each target size , the ejection velocity of the largest remnant is to a good approximation a simple decreasing linear function of its fractional mass ( in the domain ) .in addition , we note that for the largest fragments , i.e. the one for which gravity is the dominant cohesion force , we can almost remove the target size dependence by normalizing the ejection velocity by the parent body s escape velocity . in other words ,the outcome velocity of the largest fragment normalized to initial target escape velocity is ( within some considerable scatter ) independent of target size and impact parameter .the fact that the velocity of the largest remnant is a relatively constant fraction of the target s escape velocity is probably due to the a priori requirement that in the gravitational regime about half of the initial mass must escape . in order to gain insight regarding which collisions lead to the fastest moving largest remnants , we have analyzed how the ejection velocities depend upon the ratio of impactor to target size , .we stress again that because of numerical resolution , we are able to analyze only the largest remnants and not the entire ejecta distribution .thus , it is unrealistic from our data to determine the actual kinetic energy transfer efficiency . 
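the escape-velocity normalization used for the larger targets amounts to the following one-line estimate; the density and the example numbers are placeholders, not values read off the figures.

    import math

    G = 6.674e-11

    def v_escape(radius_m, rho=2700.0):
        """escape velocity (m/s) of a homogeneous sphere of given radius and density."""
        mass = 4.0 / 3.0 * math.pi * rho * radius_m**3
        return math.sqrt(2.0 * G * mass / radius_m)

    # e.g. normalize a hypothetical 8 m/s largest-remnant ejection velocity
    # measured for a 10 km radius basalt target
    r_pb, v_ej = 10.0e3, 8.0
    print("v_esc = %.1f m/s, v_ej/v_esc = %.2f" % (v_escape(r_pb), v_ej / v_escape(r_pb)))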
figs .[ fig : basvr ] and [ fig : icevr ] display the ejection velocity of the largest remnants as a function of regardless of angle of incidence .the two different symbols correspond in each cases to the two different impact velocities .apparent from these figures is the fact that the ejection velocity rises with in a monotonic fashion .the `` width '' of the curve is mainly determined not by scatter but by the relation between ejection velocity and remnant mass ( see section([subsec : vejec ] ) .thus , regardless of angle of incidence , collisions will give rise to fast moving remnants if the size of the impactor becomes comparable to the size of the parent body .in regard to the collisional origin of asteroid families , we note that velocities of order 100 m / s are easily obtained for basaltic targets provided the impactor size is at least about half the parent body size .we have presented a self - consistent three - dimensional treatment of impact disruption leading from the laboratory and the strength regime , where our sph code has been exhaustively calibrated and tested , all the way out to the gravity regime collisions responsible for the formation of asteroid families and planetary accumulation .while some parameters ( such as shape and pre - fragmentation and rotation ) have yet to be fully explored ( see for instance asphaug _et al . _ 1998 ) the 480 runs summarized here provide a robust constraint on the outcome of catastrophic collisions .in particular , we have demonstrated that bodies 100 - 200 m radius ( depending on impact speed and composition ; see table 4 ) are the weakest objects in the solar system , with all bodies this size and larger being dominated by self - gravitational forces , rather than material strength , with regard to impact disruption .this enhanced role of gravity is not due as usually assumed to fracture prevention by gravitational compression .it is due to the difficulty of the fragments to escape their mutual gravitational attraction .owing to the generally low efficiency of momentum transfer in collisions , the velocity of larger fragments tends to be small , and more energetic collisions are needed to disperse them .remarkably , the efficiency of momentum transfer ( while still small ) is found to be larger for larger projectiles .thus , at a fixed collisional energy , a low velocity high mass projectile will lead to a higher fragment velocity that a small mass high speed projectile .this increased role of gravity implies that the threshold for disruption is actually significantly larger than previously assumed .the upshot of this is that any catastrophic collisions leading to disruption _ must _ occur at an energy far exceeding the threshold for shattering the parent body .these necessarily high strain rate collisions imply by the nature of the fracturing process that many cracks must grow to release the stresses , preventing any sizeable fragment from surviving .thus , catastrophic collisions of this nature can only result in the formation of gravitationally bound aggregates of smaller fragments .we shall continue to examine the outcome of these simulations for information regarding angular momentum transfer during impact , and for anticipated cumulate structures for large fragments from large parent bodies , motivated by the possibility of re - creating the events which led to the formation of known asteroid families ( benz and michel , in preparation ) . 
as available computing power permits , a more detailed parameteric exploration ( varying weibull coefficients , shape and internal structure ) is someday hoped for .as it stands , our chosen materials ( basalt and ice ) represent broad - based choices , and the fact that both give similar results implies that catastrophic disruption is perhaps not very material dependent .if that is the case , then the simple and robust relations presented here , for the mass of the largest remnant in an impact event , and for the ejection velocity of the largest remnant , are appropriate for a new generation of calculations modeling the accretion of planets in ours and other solar systems .this work was supported in part by the swiss national science foundation and by nasa grant nag5 - 7245 from the planetary geology and geophysics program .= 0.5truein=1ahrens , t.j . , and rubin , a.m. , 1993 , impact - induced tensional failure in rock , _ j. geophys ._ , * 98 * , 11851203 .= 0.5truein=1asphaug , e. and h. j. melosh ( 1993 ) . the stickney impact of phobos : a dynamical model . _ icarus _ * 101 * , 144164 = 0.5truein=1asphaug , e. , 1997 , impact origin of the vesta family , _ meteor . and plan ._ , * 32 * , 965980 .= 0.5truein=1asphaug , e. , s.j .ostro , r.s .hudson , d.j .scheeres and w. benz , 1998 , disruption of kilometre - sized asteroids by energetic collisions , _ nature _ , * 393 * , 437440 .= 0.5truein=1benz , w. , 1989 , smooth particle hydrodynamics : a review , in _ numerical modeling of nonlinear stellar pulsations .problems and prospects _ , ed .buchler , dordrecht : kluwer academic press , p. 269288 .= 0.5truein=1benz , w. , and e. asphaug , 1994 , impact simulations with fracture .i. method and tests , _ icarus _ , * 107 * , 98116 .= 0.5truein=1benz , w. , and e. asphaug , 1995 , simulations of brittle solids using smooth particle hydrodynamics _ comput ._ , * 87 * , 253265 .= 0.5truein=1davis , d.r . ,& ryan , e.v . , 1990 , on collisional disruption - experimental results and scaling laws , _ icarus _ * 83 * , 156 .= 0.5truein=1davis , d.r . ,ryan , e.v . , and p. farinella , 1997 , on how to scale disruptive collisions , _ lun .plan . science _ , * xxvi * , 319320 .= 0.5truein=1durda , d.d . ,greenberg , r. , and r. jedicke , 1998 , collisional models and scaling laws : a new interpretation of teh shape of the main - belt asteroid size distribution , _ icarus _ , * 135 * , 431440 .= 0.5truein=1fujiwara , a. , cerroni , p , davis , d.r . ,ryan , e.v . ,dimartino , m. , holsapple , k.a . , and housen , k.r . , 1989 , in _ asteroids ii _ , eds .binzel , t. gehrels , m.s .matthews , university of arizona press , tucson .= 0.5truein=1holsapple , k.a . , 1994 , catastrophic disruptions and cratering of solar system bodies : a review and new results , _ plan .spac . science _ , * 42 * , 10671078 .= 0.5truein=1holsapple , k. a. , and k. r. housen , 1986 , scaling laws for the catastrophic collisions of asteroids , _ mem .s.a.it._ * 57 * , 65 - 85 = 0.5truein=1housen , k.r . , and k.a .holsapple , 1990 , on the fragmentation of asteroids and planetary satellites , _icarus _ , * 84 * , 226253 .= 0.5truein=1housen , k.r. , 1991 , laboratory simulations of large - scale fragmentation events , _ icarus _ * 94 * , 180190 .= 0.5truein=1housen , k.r . & k.a .holsapple , 1999 , scale effects in strength - dominated collisions of rocky asteroids , _ icarus _ , this issue .= 0.5truein=1jaeger , j.c . , and n.g.w . 
cook , 1969 , fundamentals of rock mechanics , ( london : chapman and hall ) .= 0.5truein=1lange , m.a . and t.j .ahrens , 1983 , the dynamic tensile strength of ice and ice - silicate mixtures , _ j. geophys .res _ * 88 * , 11971208 .= 0.5truein=1love , s.g . , and ahrens , t.j . ,1996 , catastrophic impacts on gravity dominated asteroids , _ icarus _ , * 124 * , 141155 .= 0.5truein=1melosh , h.j . , 1989 , _ impact cratering : a geologic process _ ,( new york : oxford university press ) .= 0.5truein=1melosh , h.j ., e. ryan , and e. asphaug ( 1992 ) .dynamic fragmentation in impacts ._ j. geophys .res . _ * 97 * : 14,73514,759 . = 0.5truein=1melosh , h.j . , and ryan , e.v . , 1997 , asteroids : shattered not dispersed , _ icarus _ , * 129 * , 562564 .= 0.5truein=1monaghan , j.j . , 1992 , smooth particle hydrodynamics ._ ann . rev ._ , * 30 * , 543574 .= 0.5truein=1nakamura , a. , and a. fujiwara , 1991 , velocity distribution of fragments formed in a simulated collisional disruption , _ icarus _ , * 92 * , 132146 .= 0.5truein=1okeefe , j.d . and t.j .ahrens , 1982a , the interaction of the cretaceous / tertiary extinction bolide with the atmosphere , ocean and solid earth ._ special papers * 190 * , 103120 .= 0.5truein=1okeefe , j.d . and t.j .ahrens , 1982b , cometary and meteorite swarm impact on planetary surfaces , _ j. geophys .res . _ * 87 * , 66686680 .= 0.5truein=1ryan , e.v . , hartmann , w.k ., & davis , d.r . , impact experiments iii - catastrophic fragmentation of aggregate targets and relation to asteroids , 1991 , _ icarus _ * 94 * , 283298 = 0.5truein=1ryan , e.v . , and melosh , h.j. , 1998 , impact fragmentation : from the laboratory to asteroids , icarus , 133 , 124 = 0.5truein=1shoemaker , e. m. , 1962 , interpenetration of lunar craters . in z.kopal ( ed . ) , _ physics and astronomy of the moon _ , academic press , new york and london , pp . 283359 .= 0.5truein=1tillotson , j. h. , 1962 , metallic equations of state for hypervelocity impact , _ rep ., july 18 , gen . at ., san diego , california = 0.5truein=1weibull , w. a. , 1939 , a statistical theory of the strength of materials ( transl . ) , _ ingvetensk ._ , * 151 * ( stockholm ) , 545 .llllll & & y & & k & m + & erg / cc & erg / g & erg / g & & + basalt & & & & & 9.0 + ice & & & & & 9.6 + + + + + + lllllllllll & & a & b & & e & e & a & b & & + & g / cc & erg / cc & erg / cc & erg / g & erg / g & erg / g & & & & + basalt & & & & & & & & & & + ice & & & & & & & & & & + + + + + + + + .fit constants for [ cols="^,^,^,^,^,^ " , ]
we use a smooth particle hydrodynamics method ( sph ) to simulate colliding rocky and icy bodies from cm - scale to hundreds of km in diameter , in an effort to define self - consistently the threshold for catastrophic disruption . unlike previous efforts , this analysis incorporates the combined effects of material strength ( using a brittle fragmentation model ) and self - gravitation , thereby providing results in the `` strength regime '' and the `` gravity regime '' , and in between . in each case , the structural properties of the largest remnant are examined . our main result is that gravity plays a dominant role in determining the outcome of collisions even involving relatively small targets . in the size range considered here , the enhanced role of gravity is not due to fracture prevention by gravitational compression , but rather to the difficulty of the fragments to escape their mutual gravitational attraction . owing to the low efficiency of momentum transfer in collisions , the velocity of larger fragments tends to be small , and more energetic collisions are needed to disperse them . we find that the weakest bodies in the solar system , as far as impact disruption is concerned , are about 300 m in diameter . beyond this size , objects become more difficult to disperse even though they are still easily shattered . thus , larger remnants of collisions involving targets larger than about 1 km in radius should essentially be self - gravitating aggregates of smaller fragments .
increasing throughput and connectivity within scarce resources has been the main motivation for modern wireless communications. among the various proposed techniques, the concept of _ relaying _ has attracted much attention as a cost-effective enabler to extend network coverage and capacity. in recent 5g discussions, relaying became one of the core parts of the future cellular architecture, including techniques for small-cell management and device-to-device communication between users. in network information theory, many intelligent and cooperative relaying strategies have been devised, such as decode-and-forward / compress-and-forward for relay networks, network coding for noiseless networks, and general noisy network coding for discrete memoryless networks. among them, network coding has emerged as a promising technique for a practical wireless networking solution, which models the underlying wireless channels by a simple but non-trivial random packet erasure network. that is, each node is associated with its own broadcast packet erasure channel (pec). namely, each node can choose a symbol from a finite field, transmit it, and a random subset of receivers will receive the packet. in this setting, it was proved that _ linear network coding _ (lnc), operating only by " linear " packet mixings, suffices to achieve the single-multicast capacity. moreover, recent wireless testbeds have also demonstrated substantial lnc throughput gain for multiple unicasts over the traditional store-and-forward 802.11 routing protocols.

[ figure [ fig : srp : model ] : (a) the 4-node 2-hop relay network ; (b) the pec network model ; (c) the 2-receiver broadcast pec ; (d) a 2-flow wireless butterfly . ]

motivated by these results, we are interested in finding an optimal or near-optimal lnc strategy for wireless relaying networks. to simplify the analysis, we consider a 4-node 2-hop network with one source, two destinations, and a common relay, inter-connected by two broadcast pecs. see fig. [ fig : srp : model ] (a-b) for details. we assume time-sharing between the source and the relay so that interference is fully avoided, and assume causal packet acknowledgment feedback. in this way, we can concentrate on how the relay and source can jointly exploit the broadcast channel diversity within the network.
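as a toy illustration of the broadcast diversity exploited in what follows, the monte-carlo sketch here (with arbitrary, independently drawn reception probabilities) estimates how often a single coded transmission [x + y] can replace two uncoded retransmissions at a 2-receiver broadcast pec, i.e. how often a packet intended for one destination is missed by it but overheard by the other, and vice versa.

    import random

    def trial(p1, p2):
        """one round: send x (for d1) then y (for d2) over the broadcast pec.
        returns True if the classic xor opportunity arises, i.e. x was missed
        by d1 but overheard by d2 while y was missed by d2 but overheard by d1."""
        x_d1, x_d2 = random.random() < p1, random.random() < p2
        y_d1, y_d2 = random.random() < p1, random.random() < p2
        return (not x_d1) and x_d2 and (not y_d2) and y_d1

    def xor_opportunity_rate(p1=0.6, p2=0.6, n=100000):
        return sum(trial(p1, p2) for _ in range(n)) / n

    # with these arbitrary success probabilities, roughly (1-p1)*p2*(1-p2)*p1 of the
    # rounds allow one coded packet [x + y] to replace two uncoded retransmissions
    print(xor_opportunity_rate())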
when relay is not present , fig .[ fig : srp : model](b ) collapses to fig .[ fig : srp : model](c ) , the -receiver broadcast pec .it was shown in that a simple lnc scheme is capacity - achieving .the idea is to exploit the wireless diversity created by random packet erasures , i.e. , overhearing packets of other flows . whenever a packet intended for is received only by and a packet intended for received only by , can transmit their linear mixture ] with size . to that end, we denote as an -dimensional row vector of all the packets , and define the linear space )^{n\rsum} ] ; and only when , node ( one of ) will receive . in all other cases ,node receives an erasure .the reception of relay s transmission is defined similarly .assuming that the -bit vector is broadcast to both and after each packet transmission through a separate control channel , a _linear network code _ contains scheduling functions {1}^{t-1}),\ ] ] where we use brackets {1}^{\tau} ] , and we assume that is known causally to the entire network . can either be appended in the header or be computed by the network - wide causal csi feedback ^{t\!-\!1} ] .a rate vector is achievable by lnc if for any there exists a joint scheduling and lnc scheme with sufficiently large such that for all .the lnc capacity region is the closure of all lnc - achievable . in our network model , there are two broadcast pecs associated with and . for shorthand ,we call those pecs the -pec and the -pec , respectively . the distribution of the network - wide channel status vector can be described by the probabilities for all , and for all . in total, there are channel parameters . to be correlated ( i.e. , spatially correlated as it is between coordinates , not over the time axis ) ,our setting can also model the scenario in which and are situated in the same physical node and thus have perfectly correlated channel success events . ] for notational simplicity , we also define the following two probability functions and , one for each pec . the input argument of is a collection of the elements in .the function outputs the probability that the reception event is compatible to the specified collection of . for example, is the probability that the input of the source - pec is successfully received by but not by .herein , is a dont - care receiver and thus sums two joint probabilities together ( receives it or not ) as described in ( [ eq : srp : chex ] ) .another example is , which is the marginal success probability that a packet sent by is heard by .to slightly abuse the notation , we further allow and to take multiple input arguments separated by commas . with this new notation , they can represent the probability that the reception event is compatible to at least one of the input arguments .for example , that is , represents the probability that equals one of the following vectors , , , , and .note that these vectors are compatible to either or or both .another example of this notation is , which represents the probability that a packet sent by is received by at least one of the three nodes , , and .the indicator function and taking expectation is denoted by and , respectively .since the coding vector has number of coordinates , there are exponentially many ways of jointly designing the scheduling and the coding vector choices over time when sufficiently large and ] ; ( rdx1**dec ) at ( [ shift=(-0.7,-0.6)]rdx1 * * ) ; ( rdx1 * * ) .. controls([shift=(0,-0.3)]rdx1 * * ) .. 
( rdx1**dec ) ; ( rdx2 * * ) at ( [ shift=(+5,0)]l8 ) }{} ] , and a cross - mixture ] received , and the third row vector represents the mixture ] in the reception list means that the list contains a -dimensional row vector of exactly .we then say that a pure packet is _ not flagged _ in the reception list , if the column of the corresponding entry contains all zeros . from the previous example , the pure session- packet is not flagged in , meaning that has neither received nor any mixture involving this .note that not flagged " is a stronger definition than unknown " . from the previous example, the pure session- packet is unknown to but still flagged in as has received the mixture ] to extract .we sometimes abuse the reception list notation to denote the collective reception list by for some non - empty subset .for example , implies the vertical concatenation of all , , and .we now describe the properties of the queues .every packet in this queue is _ of a pure session- _ and _ unknown _ to any of , even _ not flagged _ in .initially , this queue contains all the session- packets , and will be empty in the end .every packet in this queue is _ of a pure session- _ and _ known _ to .initially , this queue is empty but will contain all the session- packets in the end .every packet in this queue is _ of a pure session- _ and _ known _ by but _ unknown _ to any of , even _ not flagged _ in .every packet in this queue is _ of a pure session- _ and _ known _ by but _ unknown _ to any of , even _ not flagged _ in .every packet in this queue is _ of a linear sum ] is in ; is _ unknown _ to ; and is _ known _ by but _ unknown _ to . * ] when a session- packet is not yet delivered to but overheard by and a session- packet is not yet delivered to but overheard by .namely , the source can perform such classic butterfly - style operation of sending the linear mixture ] and it is received by only . and assume that already knows the individual and but is unknown to , see fig .[ fig : srp : queue_scenario](a ) .this example scenario falls into the second condition of above .then sending from the relay simultaneously enables to receive the desired and to decode the desired by subtracting the received from the known ] that has been received by either or or both . in the same scenario ,however , notice that can not benefit both destinations simultaneously , if sends , instead of .as a result , we use the notation \!:\!w ] .every packet in this queue is _ of a linear sum ] is in .* is _ unknown _ to any of , even _ not flagged _ in .* is _ known _ by but _ unknown _ to any of , even _ not flagged _ in .the scenario is the same as in fig .[ fig : srp : queue_scenario](a ) when not having . in this scenario, we have observed that can not benefit both destinations by sending the known . 
collects such unpromising ] ; at ( [ shift=(1.5,0)]r ) ( a ) example scenario for ; ( r ) at ( 4,0 ) ; ( [ shift=(-60:0.3)]r ) arc ( -60:60:0.3 ) ; ( [ shift=(-60:0.4)]r ) arc ( -60:60:0.4 ) ; at ( [ shift=(0,0.45)]r ) ; ( d1 ) at ( [ shift=(25:1.25)]r) ; at ( [ shift=(0.9,0)]d1 ) ] ; ( d2 ) at ( [ shift=(-25:1.25)]r) ; at ( [ shift=(0.5,0)]d2 ) ; at ( [ shift=(1.9,0)]r ) ( d ) case 2 : ; at ( [ shift=(1.9,0)]r ) it must be ; + \(r ) at ( 0,0 ) ; ( [ shift=(-60:0.3)]r ) arc ( -60:60:0.3 ) ; ( [ shift=(-60:0.4)]r ) arc ( -60:60:0.4 ) ; at ( [ shift=(0,0.45)]r ) ] ; at ( [ shift=(1.5,0)]r ) ( e ) case 3 : \!\in\!\srpqx{1}{2} ] ; ( d1 ) at ( [ shift=(25:1.25)]r) ; at ( [ shift=(0.5,0)]d1 ) ; ( d2 ) at ( [ shift=(-25:1.25)]r) ; at ( [ shift=(0.5,0)]d2 ) ; at ( [ shift=(1.9,0)]r ) ( f ) scenario for ; at ( [ shift=(1.9,0)]r ) \!\in\!\srpqstar ] is in .* is _ known _ by but _ unknown _ to any of .* is _ known _ by ( i.e. already in ) but _ unknown _ to any of , even not flagged in .the concrete explanations are as follows .the main purpose of this queue is basically the same as , i.e. , to store session- packet overheard by , so as to be used by the source for the classic xor operation with the session- counterparts ( e.g. , any packet in ) .notice that any is unknown to and thus can not generate the corresponding linear mixture with the counterpart .however , because is unknown to the relay , can not even naively deliver to the desired destination . on the other hand, the queue here not only allows to perform the classic xor operation but also admits naive delivery from . to that end , consider the scenario in fig .[ fig : srp : queue_scenario](b ) . here, has received a linear sum ] to decode the desired .this is also known by ( i.e. , already in ) , meaning that is no more different than a session- packet overheard by but not yet delivered to .namely , such can be treated as _ information equivalent to . that is , using this session- packet for the sake of session- does not incur any information duplicity because is already received by the desired destination .does not require any more , and thus or can freely use this in the network to represent not - yet - decoded instead .] for shorthand , we denote such as . as a result , the source can use this as for session- when performing the classic xor operation with a session- counterpart .moreover , also knows the pure and thus relay can perform naive delivery for as well .every packet in this queue is _ of either a pure or a mixed _packet satisfying the following conditions simultaneously .* is _ known _ by both and but _ unknown _ to .* can extract a desired session- packet when is further received .specifically , there are three possible cases based on how the packet is constituted : * is a pure session- packet .that is , is known by both and but unknown to as in fig .[ fig : srp : queue_scenario](c ) . obviously , acquires this new when it is further delivered to .* is a pure session- packet .that is , is already received by and known by as well but unknown to . for such , as similar to the discussions of , there exists a session- packet still unknown to where , and their mixture ] to decode the desired .* is a mixed packet of the form ] is known by both and but unknown to . in this case , is still unknown to but is already received by so that whenever ] is delivered to . * * is a session- packet . 
in this subcase, there exists a session- packet ( other than in the above case 3 discussions ) still unknown to where .moreover , ] is delivered to .the concrete explanations are as follows .the main purpose of this queue is basically the same as but the queue here allows not only the source but also the relay to perform the classic xor operation . as elaborated above , we have three possible cases depending on the form of the packet . specifically , either a pure session- packet ( case 1 ) or a pure session- packet ( case 2 ) or a mixture ] is already in . for this session- counterpart ,consider any packet in .obviously , the relay knows both and by assumption . as a result, either or can send their linear sum ] , when received by , can be used to decode and further decode a desired session- packet as discussed above . moreover , if receives ] is already in by assumption .every packet in this queue is _ of a linear sum _ ] is in .* is _ known _ by but _ unknown _ to any of .* is _ known _ by but _ unknown _ to any of . where and are pure but generic that can be either a session- or a session- packet .specifically , there are four possible cases based on the types of and packets : * is a pure session- packet and is a pure session- packet . * is a pure session- packet and is a pure session- packet . for the latter packet , as similar to the discussions of , there also exists a pure session- packet still unknown to where and their mixture ] to decode the desired .* is a pure session- packet and is a pure session- packet . for the former packet, there also exists a pure session- packet still unknown to where and ] to decode the desired .* is a pure session- packet and is a pure session- packet . forthe former and the latter packets , the discussions follow the case 3 and case 2 above , respectively .the concrete explanations are as follows .this queue represents the all - happy " scenario as similar to the butterfly - style operation by the relay , i.e. , sending a linear mixture ] only .this possibility only applies to the relay as all the messages including both individual packets are originated from the source . as a result ,this queue represents such scenario that the relay only knows the linear sum instead of individuals , as in fig .[ fig : srp : queue_scenario](f ) .more precisely , cases 1 to 4 happen when the source performed one of four classic xor operations to , respectively , and the corresponding linear sum is received only by , see appendix [ app : srp : queue_invariance ] for details . based on the properties of queues, we now describe the correctness of proposition [ prop : srp : sim - inner ] , our lnc inner bound . to that end, we first investigate all the lnc operations involved in proposition [ prop : srp : sim - inner ] and prove the queue invariance " , i.e. , the queue properties explained above _ remains invariant regardless of an lnc operation chosen_. such long and tedious investigations are relegated to appendix [ app : srp : queue_invariance ] . 
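since each reception list is a set of gf(2) coefficient row vectors over the pure packets , the notions of `` flagged '' and `` known '' used throughout this section can be checked mechanically : a pure packet is flagged if its column contains a nonzero entry , and it is known ( decodable ) exactly when the corresponding unit vector lies in the row space of the list . the sketch below uses illustrative packet orderings and list contents rather than the paper's notation , and shows one way such bookkeeping could be implemented .

```python
# illustrative gf(2) bookkeeping for a reception list: rows are coefficient
# vectors of received (possibly coded) packets over the pure packets.
import numpy as np

def rref_gf2(rows):
    """gaussian elimination over gf(2); returns the nonzero reduced rows."""
    m = [np.array(r, dtype=int) % 2 for r in rows]
    if not m:
        return []
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                m[i] = (m[i] + m[r]) % 2
        r += 1
    return m[:r]

def is_flagged(reception_list, j):
    """column j has a nonzero entry somewhere in the list."""
    return any(row[j] for row in reception_list)

def is_known(reception_list, j):
    """pure packet j is decodable iff its unit vector is in the row space."""
    basis = rref_gf2(reception_list)
    unit = [0] * len(reception_list[0])
    unit[j] = 1
    return len(rref_gf2(basis + [unit])) == len(basis)

# pure packets ordered as [x1, x2, y1]; the list holds y1 and the sum x1+y1
L = [[0, 0, 1], [1, 0, 1]]
print(is_flagged(L, 0), is_known(L, 0))   # True True   -> x1 is decodable
print(is_flagged(L, 1), is_known(L, 1))   # False False -> x2 not even flagged
```

informally , the same rank test expresses the decodability condition : a destination has finished its session once every desired pure packet passes this check .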
then , the decodability condition ( [ prop : srp : sim - inner : d ] ) , jointly with the queue invariance , imply that and will contain at least and number of pure session- and pure session- packets , respectively , in the end .this further means that , given a rate vector , any - , - , and -variables that satisfy the inequalities ( [ prop : srp : sim - inner : e ] ) to ( [ prop : srp : sim - inner : d ] ) in proposition [ prop : srp : sim - inner ] will be achievable .the correctness proof of proposition [ prop : srp : sim - inner ] is thus complete . for readability , we also describe for each queue , the associated lnc operations that moves packet into and takes packets out of , see table [ tab : srp : queue_in - out ] .the lnc inner bound in proposition [ prop : srp : sim - inner ] has focused on the strong - relaying scenario and has considered mostly on cross - packets - mixing operations ( i.e. , mixing packets from different sessions when benefiting both destinations simultaneously ) .we now describe the general lnc inner bound that works in arbitrary -pec and -pec distributions , and also introduces self - packets - mixing operations ( i.e. , mixing packets from the same session for further benefits ) .[ prop : srp : sim - innerv2 ] a rate vector is lnc - achievable if there exist non - negative variables and , non - negative -variables : and non - negative -variables : for all , } { } \;:\ ; \text{for all}\;k\in\{1,2\ } \big\ } , \\ & \big\ { \wnumrc{h},\ ; \wnumox{h}{},\ ; \wnumcx{h } { } \big\},\end{aligned}\ ] ] such that jointly they satisfy the following five groups of linear conditions : group 1 , termed the _ time - sharing condition _ ,has inequalities : }{}\right ) \nn \\ & \qquad + \wnumrc{s } + \wnumox{s } { } + \wnumcx{s } { } , \label{prop : srp : sim - innerv2:ts2 } \\ t_r & \geq \!\sum_{k\in\{1,2\ } } \!\!\ !\left ( \wnumuc{r}{k } \!+\ ! \wnumdx{r}{(\!k\ ! ) } { } \!+\ !\wnumdx{r}{[k]}{}\right ) + \wnumrc{r } + \wnumox{r } { } + \wnumcx{r}{}. \label{prop : srp : sim - innerv2:ts3}\end{aligned}\ ] ] group 2 , termed the _ packets - originating condition _ ,has inequalities : consider any satisfying . for each pair ( out of the two choices and ) , ( [ prop : srp : sim - innerv2:e ] ) is the same to ( [ prop : srp : sim - inner : e ] ) in proposition [ prop : srp : sim - inner ] . group 3 , termed the _ packets - mixing condition _ , has inequalities : for each pair , and the following one inequality : where ( [ prop : srp : sim - innerv2:b ] ) is the same to ( [ prop : srp : sim - inner : b ] ) in proposition [ prop : srp : sim - inner ] . group 4 , termed the _ classic xor condition by source only _ , has inequalities : group 5 , termed the _ xor condition _ , has inequalities : and for each pair , } { } \right)\cdot\psrpsimt{h}{d_i}. \label{prop : srp : sim - innerv2:x}\end{aligned}\ ] ] group 6 , termed the _ decodability condition _ , has inequalities : for each pair , }{}\right)\cdot\psrpsimt{h}{d_i } \geq \rb{i } , \label{prop : srp : sim - innerv2:d}\end{aligned}\ ] ] the main difference to proposition [ prop : srp : sim - inner ] ( for the strong - relaying scenario ) can be summarized as follows . recall that all the messages are originated from the source and the knowledge space of the relay at time , i.e. , always satisfies . 
as a result, can always mimic any lnc encoding operation that can perform regardless of any time .therefore , we allow to mimic the same encoding operations that does and thus the -variables in proposition [ prop : srp : sim - inner ] is now replaced by the -variables associated with both and , where the performer is distinguished by the superscript , .for that , the conditions ( [ prop : srp : sim - inner : a ] ) , ( [ prop : srp : sim - inner : t ] ) , ( [ prop : srp : sim - inner : x0 ] ) , and ( [ prop : srp : sim - inner : x ] ) that are associated with -variables has changed to ( [ prop : srp : sim - innerv2:a ] ) , ( [ prop : srp : sim - innerv2:t ] ) , ( [ prop : srp : sim - innerv2:x ] ) , and ( [ prop : srp : sim - innerv2:x ] ) , respectively , by replacing -variables into -variables with the superscript , .the -pec probabilities are also replaced by a generic notation , . on the other hand ,the other conditions that are associated only with -variables , i.e. , ( [ prop : srp : sim - inner : e ] ) , ( [ prop : srp : sim - inner : b ] ) , ( [ prop : srp : sim - inner : m ] ) , and ( [ prop : srp : sim - inner : s ] ) remain the same as before by ( [ prop : srp : sim - innerv2:e ] ) , ( [ prop : srp : sim - innerv2:b ] ) , ( [ prop : srp : sim - innerv2:m ] ) , and ( [ prop : srp : sim - innerv2:s ] ) , respectively .in addition to the above systematic changes , we also consider the more advanced lnc encoding operations that the source can do , i.e. , self - packets - mixing operations . by these newly added -variables , ( [ prop : srp : sim - innerv2:a ] ) , ( [ prop :srp : sim - innerv2:s ] ) to ( [ prop : srp : sim - innerv2:t ] ) , and ( [ prop : srp : sim - innerv2:x ] ) to ( [ prop : srp : sim - innerv2:d ] ) are updated accordingly .the queueing network described in section [ sec : srp : queue - property ] remains the same as before , but we have additional self - packets - mixing operations for the general lnc inner bound . the lnc encoding operations and the packet movement process of the newly added -variables can also be found in appendix [ app : srp : queue_invariance : v2 ] ., width=340,height=170 ] consider a smart repeater network with marginal channel success probabilities : ( a ) -pec : , , and ; and ( b ) -pec : and . andwe assume that all the erasure events are independent .we will use the results in propositions [ prop : srp : lp - outer ] and [ prop : srp : sim - inner ] to find the largest value for this example scenario .[ fig : srp : comp ] compares the lnc capacity outer bound ( proposition [ prop : srp : lp - outer ] ) and the lnc inner bound ( proposition [ prop : srp : sim - inner ] ) with different achievability schemes .the smallest rate region is achieved by simply performing uncoded direct transmission without using the relay .the second achievability scheme is the -receiver broadcast channel lnc from the source in while still not exploiting at all .the third and fourth schemes always use for any packet delivery .namely , both schemes do not allow -hop delivery from .then in the third scheme uses pure routing while performs the -user broadcast channel lnc in the fourth scheme .the fifth scheme performs the time - shared transmission between and , while allowing only intra - flow network coding .the sixth scheme is derived from using only the classic butterfly - style lncs corresponding to , , and .that is , we do not allow to perform fancy operations such as , , , and .one can see that the result is strictly suboptimal . 
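the comparison above is obtained by evaluating each bound with a linear - programming solver . as a hedged sketch of that workflow — the constraints below are deliberately simplified toy stand - ins rather than the actual inequality groups of the propositions , and the channel success probabilities are hypothetical — one could maximize the sum rate as follows .

```python
# toy lp in the spirit of the bound evaluation: maximize R1 + R2 over
# non-negative time shares, subject to a time-sharing constraint and
# simplified per-destination delivery constraints (placeholder coefficients).
from scipy.optimize import linprog

p_s_d1, p_s_d2, p_r_d1, p_r_d2 = 0.5, 0.4, 0.9, 0.8   # hypothetical values

# decision variables: [R1, R2, t_s, t_r]
c = [-1.0, -1.0, 0.0, 0.0]                 # minimize -(R1 + R2)
A_ub = [
    [0.0, 0.0, 1.0, 1.0],                  # t_s + t_r <= 1
    [1.0, 0.0, -p_s_d1, -p_r_d1],          # R1 <= t_s*p(s->d1) + t_r*p(r->d1)
    [0.0, 1.0, -p_s_d2, -p_r_d2],          # R2 <= t_s*p(s->d2) + t_r*p(r->d2)
]
b_ub = [1.0, 0.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("toy sum rate:", -res.fun, "at (R1, R2, t_s, t_r) =", res.x)
```

the actual propositions involve many more variables ( one per lnc operation ) and several inequality groups , but the shape of the computation — an optimisation over non - negative rate and time - share variables under linear constraints — is the same .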
in summary, one can see that our proposed lnc inner bound closely approaches to the lnc capacity outer bound in all angles .this shows that the newly - identified lnc operations other than the classic butterfly - style lncs are critical in approaching the lnc capacity .the detailed rate region description of each sub - optimal achievability scheme can be found in appendix [ app : srp : schemes ] . , andthe inner bounds are described in propositions [ prop : srp : sim - inner ] and [ prop : srp : sim - innerv2 ] , respectively.,width=340,height=170 ] fig .[ fig : srp : cdf1 ] examines the relative gaps between the outer bound and two inner bounds by choosing the channel parameters and uniformly randomly while obeying ( a ) the strong - relaying condition in definition [ def : srp : strong - relaying ] when using proposition [ prop : srp : sim - inner ] ; and ( b ) the arbitrary -pec and -pec distributions when using proposition [ prop : srp : sim - innerv2 ] . for any chosen parameter instance, we use a linear programming solver to find the largest sum rate of the lnc outer bound in proposition [ prop : srp : lp - outer ] , which is denoted by .similarly , we find the largest sum rate that satisfies the lnc inner bound in proposition [ prop : srp : sim - inner ] ( resp . proposition [ prop : srp : sim - innerv2 ] ) and denote it by .we then compute the relative gap per each experiment , , and then repeat the experiment times , and plot the cumulative distribution function ( cdf ) in unit of percentage .we can see that with more than of the experiments , the relative gap between the outer and inner bound is smaller than for case ( a ) and for case ( b ) .this work studies the lnc capacity of the smart repeater packet erasure network for two unicast flows .the capacity region has been effectively characterized by the proposed linear - subspace - based outer bound , and the capacity - approaching lnc scheme with newly identified lnc operations other than the previously well - known classic butterfly - style operations .we enumerate the _ feasible types _ ( fts ) defined in ( [ def : srp : type ] ) that the source can transmit in the following way : where each -digit index represent a -bitstring of which is a hexadecimal of first four bits , is a octal of the next three bits , is a hexadecimal of the next four bits , is a octal of the next three bits , and is binary of the last bit .the subset of that the relay can transmit , i.e. , are listed separately in the following : recall that the of a -bitstring represents whether the coding subset belongs to or not , and by definition ( [ def : srp : asetr2 ] ) . as a result ,any coding type with implies that it lies in the knowledge space of the relay .the enumerated in the above is thus a collection of such coding subsets in with .in the following , we will describe all the lnc encoding operations and the corresponding packet movement process of proposition [ prop : srp : sim - inner ] one by one , and then prove that the queue invariance explained in section [ sec : srp : queue - property ] always holds . to simplify the analysis, we will ignore the null reception , i.e. , none of receives a transmitted packet , because nothing will happen in the queueing network .moreover , we exploit the following symmetry : for those variables whose superscript indicates the session information ( either session- or session- ) , here we describe session- ( ) only . 
those variables with in the superscript will be symmetrically explained by simultaneously swapping ( a ) session- and session- in the superscript ; ( b ) and ; ( c ) and ; and ( d ) and , if applicable .the source transmits .depending on the reception status , the packet movement process following the inequalities in proposition [ prop : srp : sim - inner ] is summarized as follows . * * departure * : one property for is that must be unknown to any of . as a result ,whenever is received by any of them , must be removed from for the queue invariance . * * insertion * : one can easily verify that the queue properties for , , , and hold for the corresponding insertions . transmits .the movement process is symmetric to . transmits a mixture ] is received by any of , must be removed from .similarly , the property for is that must be unknown to any of , even not flagged in .therefore , whenever the mixture is received by any of , must be removed from . * * insertion * : when only receives the mixture , can use the known and the received ] only . as a result, we can insert ] is in while is still known by only .this corresponds to the first condition of .one can easily verify that other cases satisfy either one of or both properties of .following the packet format for , we insert :w ] into as sending the known from simultaneously enables to receive the desired and to decode the desired by subtracting from the received ] from and .the movement process is symmetric to . transmits a mixture ] is received by any of , must be removed from .similarly , the property for is that must be unknown to any of , even not flagged in .therefore , whenever the mixture is received by any of , must be removed from . * * insertion * : whenever receives the mixture , can use the known and the received ] is in and is known by only .namely , can benefit both destinations simultaneously by sending the known .for those two reception status and , we can thus insert this mixture into as \!:\!x_i ] to extract the pure .now is known by both and but still unknown to even if receives this mixture ] is received by .namely , is still unknown to but when it is delivered , can use and the received ] from and .the movement process is symmetric to . transmits of the mixture ] is that is unknown to any of . as a result , whenever is received by any of , the mixture ] and the received to subtract the pure .we can thus insert into .the case when only receives falls into the first condition of as ] . the remaining reception status to consider are , , , and .the first when only receives falls into the property of as is known only by and not flagged in .thus we can insert into .obviously , can decode from the previous discussion .for the second when only receives , we first have while is unknown to any of .moreover , is known by only and ] .that is , is already in and known by as well but still unknown to .moreover , can decode the desired session- packet when it receives further . as a result, can be used as an information - equivalent packet for and can be moved into as the case 2 insertion . transmits of \in\srpqb{2}{1} ] is in . as a result ,whenever receives , can use the received and the known ] is in .thus when receives , can further decode the desired .moreover , is already in . as a result, we can move into as the case 2 insertion . transmits .the movement process is symmetric to . 
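the departure / insertion rules listed in this appendix translate naturally into a simulation - style update : every transmission draws a random reception status and packets migrate between queues accordingly . the fragment below is a deliberately stripped - down sketch of that pattern for a single uncoded session-1 transmission ; the queue names , the movement rule and the success probabilities are placeholders and do not reproduce the paper's full movement table .

```python
import random

# hypothetical per-node success probabilities and toy queue names
p_success = {"r": 0.9, "d1": 0.5, "d2": 0.4}
queues = {
    "Q1_unsent": ["x1", "x2", "x3"],   # pure session-1 packets, unknown to all
    "Q1_overheard_by_d2": [],          # future xor material with session-2
    "Q1_delivered": [],                # received by the intended destination d1
}

def transmit_uncoded_session1():
    """one slot: source sends the head-of-line pure session-1 packet."""
    if not queues["Q1_unsent"]:
        return
    pkt = queues["Q1_unsent"][0]
    rx = {node for node, p in p_success.items() if random.random() < p}
    if "d1" in rx:                     # desired destination received it
        queues["Q1_unsent"].pop(0)
        queues["Q1_delivered"].append(pkt)
    elif "d2" in rx:                   # only overheard by the other destination
        queues["Q1_unsent"].pop(0)
        queues["Q1_overheard_by_d2"].append(pkt)
    # if only the relay (or nobody) received it, this toy rule leaves it in place

random.seed(1)
for _ in range(30):
    transmit_uncoded_session1()
print(queues)
```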
transmits ] , can use the known and the received ] to extract the desired and thus we can insert into .the remaining reception status are , , and .the first when only receives the mixture exactly falls into the first - case scenario of as ] to benefit both destinations .the second case when both and receive the mixture , jointly with the assumption , falls exactly into the third - case scenario of where is a pure session- packet . as a result, we can move ] into as the case 3 insertion . transmits ] is received by any of , must be removed from . from the property for , we know that is unknown to any of , even not flagged in . as a result ,whenever receives the mixture ] to decode and thus must be removed from . ** insertion * : from the properties of and , we know that contains ( still unknown to and ) ; contains ; and contains \} ] , can use the known and the received ] and the received ] ; contained before ; and contains ,[x_i+x_j]\} ] into as the case 3 insertion .( and obviously , can decode the desired from the previous discussion . ) for the third case when both and receive the mixture , now contains \} ] ; and contained \} ] will enable to further decode the desired .thus we can move ] from and .the movement process is as follows . * * departure * : from the property for , we know that is unknown to any of , even not flagged in . as a result ,whenever receives the mixture ] to decode and thus must be removed from .one condition for is that must be unknown to any of , even not flagged in . as a result , whenever the mixture ] ; and contains already .therefore , whenever receives the mixture ] and the received ] to extract the desired , and thus we can insert into . the remaining reception status are , , and .one can see that the first case when only receives the mixture exactly falls into the case 3 scenario of . for the second casewhen both and receive the mixture , now contains \} ] before ; and contains \} ] will enable to further decode the desired .thus we can move ] ; contains ,[y_i+y_j]\} ] into as the case 3 insertion .( and obviously , can decode the desired from the previous discussion . ) transmits ] , must be removed from .moreover , is known by . as a result ,whenever receives the mixture , can use the known and the received ] ; and contains \} ] , can use the known ,x_j\} ] to extract the desired and thus we can insert into . similarly , whenever receives this mixture , can use the known \} ] to extract the desired , and thus we can insert into . the remaining reception status are , , and .one can see that the first case when only receives the mixture exactly falls into the case 4 scenario of .for the second case when both and receive the mixture , now contains \} ] before ; and contains ,[y_i+x_j]\} ] will enable to further decode the desired .thus we can move ] ; contains ,x_j,[y_i+x_j]\} ] before , where we now have from the previous discussion .this falls exactly into the third - case scenario of where is a pure session- packet .note that delivering ] into as the case 3 insertion . transmits ] is received by any of , must be removed from .similarly , one condition for is that must be unknown to .however when receives the mixture , can use the known and the received ] to extract the desired and thus we can insert into . similarly , whenever receives this mixture , can use the known and the received ] . 
as a result, can further decode and thus we can insert into .if it was the case 3 insertion , then is a mixed form of ] .note that in the case 3 insertion \!\in\!\srpqx{2}{1} ] . as a result, can further use the known ] , a session- packet that was unknown to can be newly decoded .the remaining reception status are and . for both caseswhen receives the mixture but does not , can use the known and the received ] from and .the movement process is symmetric to . transmits ] , must be removed from .moreover , is known by . as a result ,whenever receives the mixture , can use the known and the received ] to decode .thus must be removed from whenever receives . * * insertion * : from the properties of and , we know that contains ; contains ,\overline{w}_j\} ] and the received ] to extract .we now need to consider case by case when was inserted into . if it was the case 1 insertion, then is a pure session- packet and thus we can simply insert into . if it was the case 2 insertion ,then is a pure session- packet and there exists a session- packet still unknown to where .moreover , has received ] where is already known by but is not . as a result, can decode upon receiving ] comes from either or .if was coming from , then is a session- packet and we can simply insert into . if was coming from , then is a session- packet and there also exists a session- packet still unknown to where .moreover , has received ] and the extracted to decode and thus we can insert into . in a nutshell ,whenever receives the mixture ] to extract . since is now known by both and but ] from and .the movement process is symmetric to . transmits from .the movement process is as follows . * * departure * : one condition for is that must be unknown to any of . as a result ,whenever is received by any of , must be removed from . * * insertion * : from the above discussion , we know that is unknown to . as a result , whenever is received by , we can insert to .if is received by but not by , then is now known by both and but still unknown to .this exactly falls into the first - case scenario of and thus we can move into as the case 1 insertion . transmits from .the movement process is symmetric to . transmits that is known by only and information equivalent from .the movement process is as follows . * * departure * : from the property for , we know that there exists an information - equivalent session- packet that is known by but unknown to any of . as a result , whenever is received by any of , must be removed from . ** insertion * : from the above discussion , we know that is unknown to and thus we can insert to whenever is received by . if is received by but not by , then is now known by both and but still unknown to .this exactly falls into the first - case scenario of and thus we can move into as the case 1 insertion . transmits that is known by only and information equivalent from .the movement process is symmetric to . transmits known by for the packet of the form :w\!\in\!\srpqm ] , we know that is designed to benefit both destinations simultaneously when transmits .that is , whenever ( resp . ) receives , ( resp . ) can decode the desired ( resp . ) , regardless whether the packet is of a session- or of a session- .however from the conditions of , we know that is unknown to and is unknown to .therefore , whenever is received by any of , :w ] is already in .this exactly falls into the second - case scenario of and thus we can move into as the case 2 insertion . 
the second reception case will follow the the previous arguments symmetrically . transmits \!\in\!\srpqstar ] , we know that is known only by and that is only known by . as a result ,whenever receives this mixture , can use the known and the received ] to extract and thus the mixture must be removed from .* * insertion * : from the above discussions , we have observed that whenever ( resp . ) receives the mixture , ( resp . ) can extract ( resp . ) . from the four cases study of , we know that ( resp . ) can decode a desired session- packet ( resp .session- packet ) whenever ( resp . ) receives the mixture , and thus we can insert ( resp . ) into ( resp . ) .we now consider the reception status and .if receives the mixture but does not , then contained and now contains ] was transmitted from .this falls exactly into the third - case scenario of . as a result, we can move ] into as the case 3 insertion . transmits .the movement process is as follows . * * departure * : one condition for is that is known by unknown to . as a result ,whenever receives , must be removed from .since is already known by , nothing happens if it is received by .* * insertion * : from the previous observation , we only need to consider the reception status when receives . for those and , we need to consider case by case when was inserted into . if it was the case 1 insertion, then is a pure session- packet and thus we can simply insert into . if it was the case 2 insertion ,then is a pure session- packet and there exists a session- packet still unknown to where .moreover , has received ] where is already known by but is not . as a result, can decode upon receiving ] comes from either or .if was coming from , then is a session- packet and we can simply insert into . if was coming from , then is a session- packet and there also exists a session- packet still unknown to where .moreover , has received ] and the extracted to decode and thus we can insert into . in a nutshell , whenever receives , a session- packet that was unknown to can be newly decoded . transmits .the movement process is symmetric to }{} ] from and .the movement process is as follows . * * departure * : from the property for , we know that is known by but unknown to .symmetrically , is known by but unknown to . as result , whenever ( resp . ) receives the mixture , ( resp . ) can use the known ( resp . ) and the received ] . following these case studies, one can see that a session- packet that was unknown to can be newly decoded whenever receives .the reception status when receives the mixture can be followed symmetrically such that can always decode a new session- packet that was unknown before .in the following , we will describe the newly added self - packets - xor operations and the corresponding packet movement process of proposition [ prop : srp : sim - innerv2 ] one by one , and then prove that the queue invariance explained in section [ sec : srp : queue - property ] always holds .again , to simplify the analysis , we will ignore the null reception and we will exploit the following symmetry : for those variables whose superscript indicates the session information ( either session- or session- ) , here we describe session- ( ) only . 
those variables with in the superscript will be symmetrically explained by simultaneously swapping ( a ) session- and session- in the superscript ; ( b ) and ; ( c ) and ; and ( d ) and , if applicable .the source transmits ] is received by any of , must be removed from .similarly , the property for is that must be unknown to any of , even not flagged in . as a result , whenever the mixture is received by any of , must be removed from . * * insertion * : whenever receives the mixture , can use the known and the received ] to extract . from the above observations , we describe one by one for each reception status .when the reception status is , now is known by both and but still unknown to while is still at .this falls exactly into the first - case scenario of and thus we move into as the case 1 insertion . when the reception status is , now is known by both and but still unknown to while is still at . as a result, we can move into as the case 1 insertion .when the reception status is , we now have at ; ] at ; and at . following the discussion when , we can treat either or as already decoded .but here we choose to treat as already decoded since is known by both and .such falls exactly into the second - case scenario of when we substitute by . as a result, we move into , and also into as the case 2 insertion .when the reception status is , we now have at ; ] at ; and at . following the similar discussion of when , we know that we can treat either or as already decoded because both and are known by and . as a result ,if we treat as already decoded by , then we move into as the case 2 insertion . on the other hand , if we treat as already decoded , then we move into as the case 2 insertion . transmits ] from and .the movement process is as follows . * * departure * : the property for is that must be unknown to any of , even not flagged in . as a result , whenever the mixture ] at ; and at , where is the information - equivalent pure session- packet corresponding to from the property of .now assume that the mixture is received only by both and .we then have at ; ,[x\!+\!\overline{w}_i]\} ] at .then can now use the known and the received ] by manipulating its received mixtures ,[x\!+\!\overline{w}_i]\} ] where . in a nutshell , when the reception status is , we can treat as if .therefore , must be removed from . * * insertion * : whenever receives the mixture , can use the known and the received ] to extract . from these observations , we describe one by one for each reception status .when the reception status is , now is known by both where is still at .since was coming from , also knows ] at ; and at . in this case, whenever or is further delivered to , it can decode both and simultaneously .but since is overheard by , we chose to treat as already decoded and insert into , while still keeping as not - yet decoded .since is kept intact and the mixture is received by only , in order for to further decode , needs to have either in or in .namely , the original scenario of is still kept intact . as a result, we just insert into . when the reception status is , we now have that both and are known by both and thus both and falls into case 1 and case 2 of , respectively .we thus move both and into as the case 1 and case 2 insertion , respectively .when the reception status is , we now have at ; ,[x\!+\!\overline{w}_i]\} ] .namely , by treating , we can switch the -associated pure session- packet from to since now knows ] where . 
as a result , we can further move into as the case 2 insertion .when the reception status is , we now have at ; ,[x\!+\!\overline{w}_i]\} ] where .as a result , we can further move into as the case 2 insertion .finally when the reception status is , we now have at ; ,[x\!+\!\overline{w}_i]\} ] from and .the movement process is symmetric to . transmits ] is received by , it can use the known and the received ] at ; and at , where is the information - equivalent pure session- packet corresponding to from the property of .now assume that the mixture is received only by both and .we then have \} ] at ; and still at .then can now use the known and the received ] .moreover , is known by both while is known by only . as a result, we chose to use further and thus treat as already decoded .the reason is because , for such , this exactly falls into case 2 of where is known by both and has \!=\![x^\ast_i\!+\!x_i] ] to extract . also , whenever receives the mixture , can use the known and the received ] at ; and at . in this case, whenever or is further delivered to , it can decode both and simultaneously .we then chose to treat as already decoded and insert into , while still keeping as not - yet decoded .since is kept intact and the mixture ] at ; and at . following the * departure * discussion when , we can choose to treat as already decoded and use as for case 2 of where is known by both and has \!=\![x_i\!+\!x^\ast_i] ] at ; and at . following the discussionwhen , we can treat either or as already decoded .but here we chose to treat as already decoded since is now overheard by both and contains ] at ; and at . from the previous discussions ,we know that we can treat either or as already decoded where both and are known by both .if we treat as already decoded , we can simply move into as the case 1 insertion . similarly ,if we treat as already decoded , then since is known by both , we can thus move into as the case 1 insertion . transmits $ ] from and .the movement process is symmetric to . in the following table [ tab :srp : queue_in - out_v2 ] , we also described for each queue , the associated lnc operations that moves packet into and takes packets out of in the general lnc inner bound of proposition [ prop : srp : sim - innerv2 ] .note that -variables are the same as -variables where the superscript is by , and thus they represent -variables accordingly .in the following , we describe rate regions of each suboptimal achievability scheme used for the numerical evaluation in section [ sec : srp : eval ] . * :* the rate regions can be described by proposition [ prop : srp : sim - inner ] , if the variables , , are all hardwired to .namely , we completely shut down all the variables dealing with cross - packet - mixtures .after such hardwirings , proposition [ prop : srp : sim - inner ] is further reduced to the following form : }{}\right),\end{aligned}\ ] ] and consider any satisfying . 
for each pair ( out of the two choices and ) , }{}\cdot\psrpsimt{r}{d_i } , \\ \big ( \snumuc{i } + \snumdx{i } { } \big)\cdot\psrpsimt{s}{d_i } + \big(\rnumuc{i } & + \rnumdx{[i]}{}\big)\cdot\psrpsimt{r}{d_i } \geq \rb{i}.\end{aligned}\ ] ] * :* this scheme requires that all the packets go through , and then performs -user broadcast channel nc .the corresponding rate regions can be described as follows : * :* this scheme requires that all the packets go through as well , but performs uncoded routing for the final delivery .the corresponding rate regions can be described as follows : * :* this scheme completely ignores the relay in the middle , and just performs -user broadcast channel lnc of .the corresponding rate regions can be described as follows : d. koutsonikolas , c .- c .wang , and y. hu , `` efficient network coding based opportunistic routing through cumulative coded acknowledgment , '' _ ieee / acm trans .19 , no . 5 , pp .13681381 , 2011 .l. georgiadis and l. tassiulas , `` broadcast erasure channel with feedback - capacity and algorithms , '' in _ proc .5th workshop on network coding , theory , and applications ( netcod ) _ , lausanne , switzerland , june 2009 , pp .kuo and c .- c .wang , `` on the capacity of 2-user 1-hop relay erasure networks - the union of feedback , scheduling , opportunistic routing , and network coding , '' in _ proc .ieee intl symp .inform . theory ._ , st . peterburg , russia , july 2011 , pp . 13371341 .wang and d. j. love , `` linear network coding capacity region of -receiver mimo broadcast packet erasure channels with feedback , '' in _ proc .ieee intl symp .inform . theory ._ , boston , ma , usa , july 2012 , pp .20622066 .m. gatzianas , l. georgiadis , and l. tassiulas , `` multiuser broadcast erasure channel with feedback - capacity and algorithms , '' _ ieee trans .inform . theory _59 , no . 9 , pp .57795804 , september 2013 .wang , `` linear network coding capacity for broadcast erasure channels with feedback , receiver coordination , and arbitrary security requirement , '' in _ proc .ieee intl symp .inform . theory ._ , istanbul , turkey , july 2013 , pp .29002904 .wang and j. han , `` the capacity region of 2-receiver multiple - input broadcast packet erasure channels with channel output feedback , '' _ ieee trans .inform . theory _60 , no . 9 ,pp . 55975626 , sept .kuo and c .- c .wang , `` robust and optimal opportunistic scheduling for downlink 2-flow inter - session network coding , '' in _ proc .33rd ieee conf .( infocom ) _ , toronto , canada , may 2014 , pp .655663 .a. papadopoulos and l. georgiadis , `` multiuser broadcast erasure channel with feedback and side information and related index coding results , '' in _ proc .52st annual allerton conf . on comm ., contr . , and computing . _ , monticello , illinois , usa , october 2014 .
this work considers the smart repeater network where a single source wants to send two independent packet streams to destinations with the help of relay . the transmission from or is modeled by packet erasure channels : for each time slot , a packet transmitted by may be received , with some probabilities , by a random subset of ; and those transmitted by will be received by a random subset of . interference is avoided by allowing at most one of to transmit in each time slot . one example of this model is any cellular network that supports two cell - edge users when a relay in the middle uses the same downlink resources for throughput / safety enhancement . in this setting , we study the capacity region of when allowing linear network coding ( lnc ) . the proposed lnc inner bound introduces more advanced packets - mixing operations other than the previously well - known butterfly - style xor operation on overheard packets of two co - existing flows . a new lnc outer bound is derived by exploring the inherent algebraic structure of the lnc problem . numerical results show that , in more than 85% of the experiments , the relative sum - rate gap between the proposed outer and inner bounds is smaller than 0.08% under the strong - relaying setting and 0.04% under arbitrary distributions , thus effectively bracketing the lnc capacity of the smart repeater problem . packet erasure networks , channel capacity , network coding
turbulence is a state of spatio - temporal chaotic flow generically attainable for fluids with access to a sufficient source of free energy .a result of turbulence is enhanced mixing of the fluid which is directed towards a reduction of the free energy .mixing typically occurs by formation of vortex structures on a large range of spatial and temporal scales , that span between system , energy injection and dissipation scales .fluids comprise the states of matter of liquids , gases and plasmas .a common free energy source that can drive turbulence in neutral ( or more precisely : non - conducting ) fluids is a strong enough gradient ( or `` shear '' ) in flow velocity , which can lead to vortex formation by kelvin - helmholtz instability .examples for turbulence occuring from this type of instability are forced pipe flows , where a velocity shear layer is developing at the wall boundary , or a fast jet streaming into a stationary fluid .another source of free energy is a thermal gradient in connection with an aligned restoring force ( as in liquids heated from below in a gravity field ) that leads to rayleigh - benard convection .several routes for the transition from laminar flow to turbulence in fluids have been proposed .for example , in some specific cases the ruelle - takens scenario occurs , where by linear instability through a series of a few period doubling bifurcations a final nonlinear transition to flow chaos is observed when a control parameter ( like the gradient of velocity or temperature ) is increased . for other scenarios , like in pipe flow , a sudden direct transition by subcritical instability to fully developedturbulence or an intermittent transition are possible .the complexity of the flow dynamics is considerably enhanced in a plasma compared to a non - conducting fluid .a plasma is a macroscopically neutral gas composed of many electrically charged particles that is essentially determined by collective degrees of freedom .space and laboratory plasmas are usually composed of positively charged ions and negatively charged electrons that are dynamically coupled by electromagnetic forces .thermodynamic properties are governed by collisional equilibration and conservation laws like in non - conducting fluids .the additional long - range collective interaction by spatially and temporally varying electric and magnetic fields allows for rich dynamical behaviour of plasmas with the possibility for complex flows and structure formation in the presence of various additional free energy sources . the basic physics of plasmas in space , laboratory and fusion experiments is introduced in detail in a variety of textbooks ( e.g. in refs . ) . 
although the dynamical equations for fluid and plasma flows can be conceptually simple , they are highly nonlinear and involve an infinite number of degrees of freedom .analytical solutions are therefore in general impossible .the description of fluid and plasma dynamics mainly relies on statistical and numerical methods ._ computation of decaying two - dimensional fluid turbulence , showing contours + of vorticity ._ , width=264 ]computational models for fluid and plasma dynamics may be broadly classified into three major categories : * \(1 ) microscopic models : many body dynamical description by ordinary differential equations and laws of motion ; * \(2 ) mesoscopic models : statistical physics description ( usually by integro - differential equations ) based on probability theory and stochastic processes ; * \(3 ) macroscopic models : continuum description by partial differential equations based on conservation laws for the distribution function or its fluid moments .examples of microscopic models are molecular dynamics ( md ) methods for neutral fluids that model motion of many particles connected by short range interactions , or particle - in - cell ( pic ) methods for plasmas including electromagnetic forces .such methods become important when relevant microscopic effects are not covered by the averaging procedures used to obtain meso- or macroscopic models , but they usually are intrinsical computationally expensive .complications result from multi - particle or multi - scale interactions .mesoscopic modelling treats such effects on the dynamical evolution of particles ( or modes ) by statistical assumptions on the interactions .these may be implemented either on the macroscale as spectral fluid closure schemes like , for example , in the direct interaction approximation ( dia ) , or on the microscale as advanced collision operators like in fokker - planck models .an example of a mesoscopic computational model for fluid flow is the lattice boltzmann method ( lbm ) that combines free streaming particle motion by a minimalistic discretisation in velocity space with suitable models for collision operators in such a way that fluid motion is recovered on macroscopic scales .macroscopic models are based on the continuum description of the kinetic distribution function of particles in a fluid , or of its hydrodynamic moments .the continuum modelling of fluids and plasmas is introduced in more detail below .computational methods for turbulence simulations have been developed within the framework of all particle , mesoscopic or continuum models .each of the models has both advantages and disadvantages in their practical numerical application .the continuum approach can be used in situations where discrete particle effects on turbulent convection processes are negligible .this is to some approximation also the case for many situations and regimes of interest in fusion plasma experiments that are dominated by turbulent convective transport , in particular at the ( more collisional ) plasma edge . within the field of computational fluid dynamics the longest experience and broadest applicationshave been obtained with continuum methods .many numerical continuum methods that were originally developed for neutral fluid simulation have been straightforwardly applied to plasma physics problems . in continuum kinetics ,the time evolution of the single - particle probability distribution function for particles of each species ( e.g. 
electrons and ions in a plasma ) in the presence of a mean force field and within the binary collision approximation ( modelled by an operator ) is described by the boltzmann equation in a plasma the force field has to be self - consistently determined by solution of the maxwell equations .usually , kinetic theory and computation for gas and plasma dynamics make use of further simplifying approximations that considerably reduce the complexity : in the vlasov equation binary collisions are neglected ( ) , and in the drift - kinetic or gyro - kinetic plasma equations further reducing assumptions are taken about the time and space scales under consideration .the continuum description is further simplified when the fluid can be assumed to be in local thermodynamic equilibrium .then a hierarchical set of hydrodynamic conservation equations is obtained by construction of moments over velocity space . in lowest orders of the infinite hierarchy ,the conservation equations for mass density , momentum and energy density are obtained .any truncation of the hierarchy of moments requires the use of a closure scheme that relates quantities depending on higher order moments by a constitutive relation to the lower order field quantities .an example of a continuum model for neutral fluid flow are the navier - stokes equations . in theirmost widely used form ( in particular for technical and engineering applications ) the assumptions of incompressible divergence free flow ( i.e. , is constant on particle paths ) and of an isothermal equation of state are taken .then the description of fluid flow can be reduced to the solution of the ( momentum ) navier - stokes equation under the constraints given by and by boundary conditions .most numerical schemes for the navier - stokes equation require solution of a poisson type equation for the normalised scalar pressure in order to guarantee divergence free flow .the character of solutions for the navier - stokes equation intrinsically depends on the ratio between the dissipation time scale ( determined by the kinematic viscosity ) and the mean flow time scale ( determined by the system size and mean velocity ) , specified by the reynolds number for small values of the viscosity will dominate the time evolution of in the navier - stokes equation , and the flow is laminar . for higher advective nonlinearity is dominant and the flow can become turbulent .the rayleigh number has a similar role for the onset of thermal convective turbulence .flow instabilities as a cause for turbulence , like those driven by flow shear or thermal convection , do in principle also exist in plasmas similar to neutral fluids , but are in general found to be less dominant in strongly magnetised plasmas . the most important mechanism which results in turbulent transport and enhanced mixing relevant to confinement in magnetised plasmas is an unstable growth of coupled wave - like perturbations in plasma pressure and electric fields .the electric field forces a flow with the exb ( `` e - cross - b '' ) drift velocity of the plasma perpendicular to the magnetic field * b*. 
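the display equations referenced above ( the navier - stokes equation ( [ e : nse ] ) , the reynolds number and the exb drift velocity ) did not survive extraction ; their standard forms , consistent with the surrounding description but a reconstruction rather than the article's own rendering , are

```latex
\begin{align}
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\nabla p + \nu\,\Delta\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,\\
  \mathrm{Re} &= \frac{U L}{\nu},\\
  \mathbf{v}_E &= \frac{\mathbf{E}\times\mathbf{B}}{B^2}
    \;=\; \frac{\mathbf{B}\times\nabla\phi}{B^2},
  \qquad \mathbf{E} = -\nabla\phi .
\end{align}
```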
a phase shift , caused by any inhibition of a fast parallel boltzmann response of the electrons , between pressure and electric field perturbation in the presence of a pressure gradient can lead to an effective transport of plasma across the magnetic field and to an unstable growth of the perturbation amplitude .nonlinear self - advection of the exb flow and coupling between perturbation modes ( `` drift waves '' ) can finally lead to a fully developed turbulent state with strongly enhanced mixing .a generic source of free energy for magnetised laboratory plasma turbulence resides in the pressure gradient : in the core of a magnetic confinement region both plasma density and temperature are usually much larger than near the bounding material wall , resulting in a pressure gradient directed inwards to the plasma center .instabilities which tap this free energy tend to lead to enhanced mixing and transport of energy and particles down the gradient . for magnetically confined fusion plasmas , this turbulent convection by exb drift waves often dominates collisional diffusive transport mechanisms by orders of magnitude , and essentially determines energy and particle confinement properties .the drift wave turbulence is regulated by formation of mesoscopic streamers and zonal structures out of the turbulent flows .continuum models for drift wave turbulence have to capture the different dynamics of electrons and ions parallel and perpendicular to the magnetic field and the coupling between both species by electric and magnetic interactions .therefore , a single - fluid magneto - hydrodynamic ( mhd ) model can not appropriately describe drift wave dynamics : one has to refer to a set of two - fluid equations , treating electrons and ions as separate species , although the plasma on macroscopic scales remains quasi - neutral with nearly identical ion and electron density , .the two - fluid equations require quantities like collisional momentum exchange rate , pressure tensor and heat flux to be expressed by hydrodynamic moments based on solution of a kinetic ( fokker - planck ) model .the most widely used set of such fluid equations has been derived by braginskii and is e.g. presented in brief in ref . . the most general continuum descriptions for the plasma species , based either on the kinetic boltzmann equation or on the hydrodynamic moment approach like in the braginskii equations ,are covering all time and space scales , including detailed gyro - motion of particles around the magnetic field lines , and the fast plasma frequency oscillations . 
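the `` adiabatic '' electron response and its destabilising `` non - adiabatic '' modification described above are commonly written in the schematic i - delta form ; the expressions below are the standard textbook parameterisation , quoted for orientation rather than as the article's own equations :

```latex
\begin{align}
  \frac{\tilde n}{n_0} &= \frac{e\tilde\phi}{T_e}
    &&\text{adiabatic (boltzmann) response: stable drift wave,}\\
  \frac{\tilde n}{n_0} &= (1 - i\delta)\,\frac{e\tilde\phi}{T_e},
    \quad 0 < \delta \ll 1
    &&\text{non-adiabatic response: phase shift between } \tilde n \text{ and } \tilde\phi,\\
  \omega &\simeq \omega_\ast = \frac{k_\theta\,\rho_s\,c_s}{L_n},
    \qquad \gamma \sim \delta\,\omega_\ast
    &&\text{drift frequency and resulting linear growth rate.}
\end{align}
```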
from experimental observationit is on the other hand evident , that the dominant contributions to turbulence and transport in magnetised plasmas originate from time and space scales that are associated with frequencies in the order the drift frequency , that are much lower than the ion gyro - frequency by the ratio between drift scale to gradient length : under these assumptions one can apply a `` drift ordering '' based on the smallness of the order parameter .this can be introduced either on the kinetic level , resulting in the drift - kinetic model , or on the level of two - fluid moment equations for the plasma , resulting in the `` drift - reduced two - fluid equations '' , or simply called `` drift wave equations '' : neglect of terms scaling with in higher powers than 2 considerably simplifies both the numerical and analytical treatment of the dynamics , while retaining all effects of the perpendicular drift motion of guiding centers and nonlinear couplings that are necessary to describe drift wave turbulence . for finite ion temperature , the ion gyro - radius can be of the same magnitude as typical fluctuation scales , with wave numbers found around in the order of the drift scale although the gyro - motion is still fast compared to turbulent time scales , the ion orbit then is of similar size as spatial variations of the fluctuating electrostatic potential. finite gyro - radius ( or `` finite larmor radius '' , flr ) effects are captured by appropriate averaging procedures over the gyrating particle trajectory and modification of the polarisation equation , resulting in `` gyrokinetic '' or `` gyrofluid models '' for the plasma .the prevalent picture of drift wave turbulence is that of small - scale , low - frequency exb vortices in the size of several gyro - radii , that determine mixing and transport of the plasma perpendicular to the magnetic field across these scales . beyond that , turbulence in magnetisedplasmas exhibits large - scale structure formation that is linked to this small - scale eddy motion : the genesis of mean zonal flow structures out of an initially nearly homogeneous isotropic vortex field and the resulting shear - flow suppression of the driving turbulence is a particular example of a self - organising regulation process in a dynamical system .the scale of these macroscopic turbulent zonal flows is that of the system size , setting up a radially localised differential exb rotation of the whole plasma on an entire toroidal flux surface . 
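for orientation , the drift ordering invoked earlier in this section can be summarised by the standard small parameter ( again a standard form rather than a verbatim reconstruction of the article's expressions ) :

```latex
\begin{equation}
  \delta \;=\; \frac{\rho_s}{L_\perp} \;\sim\; \frac{\omega}{\Omega_i} \;\ll\; 1,
  \qquad
  \rho_s = \frac{c_s}{\Omega_i}, \quad c_s = \sqrt{T_e/m_i},
  \qquad
  k_\perp \rho_s \sim \mathcal{O}(1).
\end{equation}
```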
_generation of mean sheared flows from drift wave turbulence within the poloidal cross - section of a magnetised plasma torus._,width=566 ] moreover , the process of self - organisation to zonal flow structures is thought to be a main ingredient in the still unresolved nature of the l - h transition in magnetically confined toroidal plasmas for fusion research .the l - h transition is experimentally found to be a sudden improvement in the global energy content of the fusion plasma from a low to high ( l - h ) confinement state when the central heat source power is increased above a certain threshold .the prospect of operation in a high confinement h - mode is one of the main requirements for successful performance of fusion experiments like iter .the mechanism for spin - up of zonal flows in drift wave turbulence is a result of the quasi two - dimensional nature of the nonlinear exb dynamics , in connection with the double periodicity and parallel coupling in a toroidal magnetised plasma .basic concepts and terminology for the interaction between vortex turbulence and mean flows have been first developed in the context of neutral fluids .it is therefore instructive to briefly review these relations in the framework of the navier - stokes eq .( [ e : nse ] ) before applying them to plasma dynamics .small ( space and/or time ) scale vortices and mean flows may be separated formally by an ansatz known as reynolds decomposition , splitting the flow velocity into a mean part , averaged over the separation scale , and small - scale fluctuations with . while the averaging procedure , , is mathematically most unambiguous for the ensemble average , the physical interpretation in fluid dynamics makes a time or space decomposition more appropriate . applying this averaging on the navier - stokes eq .( [ e : nse ] ) , one obtains the reynolds equation ( or : reynolds averaged navier - stokes equation , rans ) : this mean flow equation has the same structure as the original navier - stokes equation with one additional term including the reynolds stress tensor . momentum transport between turbulence andmean flows can thus be caused by a mean pressure gradient , viscous forces , and reynolds stress .a practical application of the rans is in large eddy simulation ( les ) of fluid turbulence , which efficiently reduces the time and space scales necessary for computation by _ modelling _ the reynolds stress tensor for the smaller scales as a local function of the large scale flow .les is however not applicable for drift wave turbulence computations , as here in any case all scales down to the effective gyro - radius ( or drift scale ) have to be resolved in direct numerical simulation ( dns ) .turbulent flows are generically three - dimensional . in some particular situationsthe dependence of the convective flow dynamics on one of the cartesian directions can be negligible compared to the others and the turbulence becomes quasi two - dimensional .examples for such 2d fluid systems are thin films ( e.g. soap films ) , rapidly rotating stratified fluids , or geophysical flows of ocean currents and the ( thin ) planetary atmosphere . in particular ,also the perpendicular exb dynamics in magnetised plasmas behaves similar to a 2d fluid .the two - dimensional approximation of fluid dynamics not only simplifies the treatment , but moreover introduces distinctly different behaviour . 
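the reynolds decomposition and averaged momentum balance described above take the following standard form , reproduced here because the corresponding display equations were lost in extraction ( the notation is the usual one and may differ in detail from the article's ) :

```latex
\begin{align}
  \mathbf{u} &= \bar{\mathbf{u}} + \tilde{\mathbf{u}},
  \qquad \bar{\mathbf{u}} \equiv \langle\mathbf{u}\rangle,
  \qquad \langle\tilde{\mathbf{u}}\rangle = 0,\\
  \partial_t \bar u_i + \bar u_j \partial_j \bar u_i
    &= -\partial_i \bar p + \nu\,\Delta\bar u_i
       - \partial_j \langle \tilde u_i \tilde u_j \rangle ,
\end{align}
```

with the last term containing the reynolds stress tensor that mediates momentum exchange between the fluctuations and the mean flow .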
the major difference can be discerned by introducing the vorticity and taking the curl of the navier - stokes eq .( [ e : nse ] ) to get the vorticity equation in a two - dimensional fluid with and the vorticity reduces to with . the vortex stretching andtwisting term is zero in 2d , thus eliminating a characteristic feature of 3d turbulence . for unforced inviscid 2d fluids then due to vorticity is constant in flows along the fluid element .this implies conservation of total enstrophy in addition to the conservation of kinetic flow energy .the 2d vorticity equation can be further rewritten in terms of a scalar stream function that is defined by so that , to obtain = \nu \delta w. \label{e : vor2d}\ ] ] here the poisson bracket = \partial_y a \ ;\partial_x b - \partial_x a \ ; \partial_y b ] , the electron drift wave frequency is found to be = { \rho_s \over l_n } \omega_i [ \rho_s k_{\theta}].\ ] ] here the density gradient length and the drift scale , representing an ion radius at electron temperature , have been introduced . _basic drift waves mechanism : ( 1 ) an initial pressure perturbation leads to an ambipolar loss of electrons along the magnetic field , whereas ions remain more immobile .( 2 ) the resulting electric field convects the whole plasma with around the perturbation in the plane perpendicular to .( 3 ) in the presence of a pressure gradient , propagates in electron diamagnetic drift direction with .this `` drift wave '' is stable if the electrons establish according to the boltzmann relation without delay ( `` adiabatic response '' ) .a non - adiabatic response due to collisions , magnetic flutter or wave - kinetic effects causes a phase shift between and .the exb velocity is then effectively transporting plasma down the gradient , enhances the principal perturbation and leads to an unstable growth of the wave amplitude _ , width=623 ] the motion of the perturbed structure perpendicular to magnetic field and pressure gradient in the electron diamagnetic drift direction is in this approximation still stable and does so far not cause any transport down the gradient .the drift wave is destabilized only when a phase shift between potential and density perturbation is introduced by `` non - adiabatic '' electron dynamics the imaginary term in general is an anti - hermitian operator and describes dissipation of the electrons , that causes the density perturbations to proceed the potential perturbations in by slowing down the parallel equilibration .this leads to an exponential growth of the perturbation amplitude by with linear growth rate .parallel electron motion also couples drift waves to shear alfvn waves , which are parallel propagating perpendicular magnetic field perturbations . 
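as a concrete illustration of the 2d vorticity equation ( [ e : vor2d ] ) — and only an illustration : resolution , the forward - euler step and the absence of de - aliasing are chosen for brevity , and the sign conventions follow the poisson - bracket definition quoted above — a minimal pseudo - spectral integration on a doubly periodic box could look like this :

```python
# minimal pseudo-spectral sketch of  d_t w + [phi, w] = nu * Laplacian(w),
# with w = Laplacian(phi) and [a, b] = d_y a d_x b - d_x a d_y b,
# on a doubly periodic box (illustrative parameters, no de-aliasing).
import numpy as np

N, L, nu, dt, steps = 128, 2 * np.pi, 1e-3, 1e-3, 200
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_inv = np.where(k2 == 0.0, 0.0, 1.0 / np.where(k2 == 0.0, 1.0, k2))

rng = np.random.default_rng(0)
w_hat = np.fft.fft2(rng.standard_normal((N, N)))      # random initial vorticity

def bracket_hat(w_hat):
    """spectral evaluation of the poisson bracket [phi, w]."""
    phi_hat = -w_hat * k2_inv                          # w = Laplacian(phi)
    dphidx = np.real(np.fft.ifft2(1j * kx * phi_hat))
    dphidy = np.real(np.fft.ifft2(1j * ky * phi_hat))
    dwdx = np.real(np.fft.ifft2(1j * kx * w_hat))
    dwdy = np.real(np.fft.ifft2(1j * ky * w_hat))
    return np.fft.fft2(dphidy * dwdx - dphidx * dwdy)  # [phi, w]

for _ in range(steps):
    w_hat = w_hat + dt * (-bracket_hat(w_hat) - nu * k2 * w_hat)

w = np.real(np.fft.ifft2(w_hat))
print("vorticity range after", steps, "steps:", float(w.min()), float(w.max()))
```

in the inviscid limit such a scheme should , up to discretisation error , conserve both kinetic energy and enstrophy , the two quadratic invariants mentioned above .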
with the vector potential as a further dynamic variable ,the parallel electric field , parallel electron motion , and nonlinearly the parallel gradient are modified .the resulting nonlinear drift - alfvn equations are discussed in the following section .the stability and characteristics of drift waves and resulting plasma turbulence are further influenced by inhomogeneities in the magnetic field , in particular by field line curvature and shear .the normal and geodesic components of field line curvature have different roles for drift wave turbulence instabilities and saturation .the field gradient force associated with the normal curvature , if aligned with the plasma pressure gradient , can either act to restore or amplify pressure gradient driven instabilities by compression of the fluid drifts , depending on the sign of alignment .the geodesic curvature describes the compression of the field strength in perpendicular direction on a flux surface and is consequently related to the compression of large - scale ( zonal ) exb flows .transition from stable drift waves to turbulence has been studied experimentally in linear and simple toroidal magnetic field configurations , and by direct numerical simulation .experimental investigations in a magnetized low - beta plasma with clindrical geometry by klinger _et al . _ have demonstrated that the spatiotemporal dynamics of interacting destabilised travelling drift wave follows a bifurcation sequence towards weakly developed turbulence according to the ruelle - takens - newhouse scenario .the relationship between observations made in linear magnetic geometry , purely toroidal geometry and magnetic confinement is discussed in ref . , where the role of large - scale fluctuation structures has been highlighted .the role of parallel electron dynamics and alfvn waves for coherent drift modes and drift wave turbulence have been studied in a collisionality dominated high - density helicon plasma .measurements of the phase coupling between spectral components of interchange unstable drift waves at different frequencies in a basic toroidal magnetic field configuration have indicated that the transition from a coherent to a turbulent spectrum is mainly due to three - wave interaction processes .the competition between drift wave and interchange physics in exb drift turbulence has been studied computationaly in tokamak geometry with respect to the linear and nonlinear mode structure by scott .a quite remarkable aspect of fully developed drift wave turbulence in a sheared magnetic field lying in closed surfaces is its strong nonlinear character , which can be self - sustaining even in the absence of linear instabilities .this situation of self - sustained plasma turbulence does not have any analogy in neutral fluid dynamics and , as shown in numerical simulations by scott , is mostly applicable to tokamak edge turbulence , where linear forcing is low enough so that the nonlinear physics can efficiently operate .the model dalf3 by scott , in the cold ion approximation without gyrofluid flr corrections , represents the four field version of the dissipative drift - alfvn equations , with disturbances ( denoted by the tilde ) in the exb vorticity , electron pressure , parallel current , and parallel ion velocity as dependent variables .the equations are derived under gyro / drift ordering , in a three dimensional globally consistent flux tube geometry , and appear ( in cgs units as used in the references ) as with the parallel magnetic potential given by through 
ampere s law , and the vorticity here , is the braginskii parallel resistivity , and are the electron and ion masses , is the electron ( and ion ) density , and is the electron temperature with pressure .the dynamical character of the system is further determined by a set of parameters characterising the relative role of dissipative , inertial and electromagnetic effects in addition to the driving by gradients of density and temperature .the flux surface geometry of a tokamak enters into the fluid and gyrofluid equations via the curvilinear generalisation of differentiation operators and via inhomogeneity of the magnetic field strength .the different scales of equilibrium and fluctuations parallel and perpendicular to the magnetic field motivate the use of field aligned flux coordinates .the differential operators in the field aligned frame are the parallel gradient with magnetic field disturbances as additional nonlinearities , the perpendicular laplacian ,\ ] ] and the curvature operator .\ ] ] the dalf equations constitute the most basic model containing the principal interactions of dissipative drift wave physics in a general closed magnetic flux surface geometry .the drift wave coupling effect is described by acting upon and , while interchange forcing is described by acting upon and . in the case of tokamak edge turbulence ,the drift wave effect is qualitatively more important , while the most important role for is to regulate the zonal flows .detailed accounts on the role of magnetic field geometry shape in tokamaks and stellarators on plasma edge turbulence can be found in refs . and , in particular with respect to effects of magnetic field line shear and curvature .an example for typical experimental parameters are those of the asdex upgrade ( aug ) edge pedestal plasmas in l mode near to the l - h transition for deuterium ions with : electron density , temperatures ev , magnetic field strength t , major radius cm , perpendicular gradient length cm , and safety factor . the dynamical character of the dalf / gem system is determined by a set of parameters characterising the relative role of dissipative , inertial and electromagnetic effects in addition to the driving by gradients of density and temperature .in particular , for the above experimental values , these are collisionality , magnetic induction , electron inertia and ion inertia .the normalised values are similar in edge plasmas of other large tokamaks like jet .the parameters can be partially obtained even by smaller devices like the torsatron tj - k at university of stuttgart , which therefore provides ideal test situations for comparison between simulations and the experiment ._ computation of plasma edge turbulence in the magnetic field of a divertor tokamak fusion experiment , using the dalf3 model as described in the text [ background figure : asdex upgrade , max - planck - institute for plasma physics ] ._ , width=434 ] a review and introduction on drift wave theory in inhomogeneous magnetised plasmas has been presented by horton in ref . , although its main emphasis is placed on linear dynamics .an excellent introduction and overview on turbulence in magnetised plasma and its nonlinear properties by scott can be found in ref . , and a very detailed survey on drift wave theory with emphasis on the plasma edge is given by scott in refs . .however , no tokamak edge turbulence simulation has yet reproduced the important threshold transition to the high confinement mode known from experimental fusion plasma operation . 
the possibility to obtain a confinement transition within first - principles computations of edge turbulence will have to be studied with models that at least include full temperature dynamics , realistic flux surface geometry , global profile evolution including the equilibrium , and a coupling of edge and sol regions with realistic sheath boundary conditions . in addition , such models still have to maintain sufficient grid resolution , grid deformation mitigation , and energy plus enstrophy conservation in the vortex / flow system . such "integrated" fusion plasma turbulence simulation codes are currently under development . the necessary computing power to simulate the extended physics models and computation domains is going to be available within the next years . this may facilitate international activities ( for example within the european task force on integrated tokamak modelling ) towards a "computational tokamak plasma" with a first - principles treatment of both transport and equilibrium across the whole cross section . the objective of this extensive project in computational plasma physics is to provide the means for a direct comparison between our theoretical understanding and the emerging burning - plasma physics of the next large international fusion experiment iter . this work was supported by the austrian science fund fwf under contract p18760-n16 , and by the european communities under the contract of associations between euratom and the austrian academy of sciences , and was carried out within the framework of the european fusion development agreement . the views and opinions expressed herein do not necessarily reflect those of the european commission . u. frisch . _ turbulence : the legacy of kolmogorov _ . cambridge university press , 1995 .
in an inhomogeneous magnetised plasma the transport of energy and particles perpendicular to the magnetic field is in general mainly caused by quasi two - dimensional turbulent fluid mixing . the physics of turbulence and structure formation is of ubiquitous importance to every magnetically confined laboratory plasma for experimental or industrial application . specifically , high temperature plasmas for fusion energy research are also dominated by the properties of this turbulent transport . self - organisation of turbulent vortices to mesoscopic structures like zonal flows is related to the formation of transport barriers that can significantly enhance the confinement of a fusion plasma . this subject of great importance in research is rarely touched on in introductory plasma physics or continuum dynamics courses . here a brief tutorial on 2d fluid and plasma turbulence is presented as an introduction to the field , appropriate for inclusion in undergraduate and graduate courses . _ this is an author - created , un - copyedited version of an article published in european journal of physics 29 , 911 - 926 ( 2008 ) . iop publishing ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it . the definitive publisher authenticated version is available online at doi : 10.1088/0143 - 0807/29/5/005 . _
figure 1 illustrates a general modelling conceptualization where the interactions among natural hazard behaviour , related transdisciplinary impacts , risk management and control strategies are taken into account .the special focus on the many sources of uncertainty leads to a robust semantically - enhanced modelling architecture based on the paradigm of semantic array programming ( semap ) , with an emphasis on the array of input , intermediate and output data / parameters and the array of data - transformation modules ( d - tm ) dealing with them .arrays of hazard models , dynamic information forecasts ( i.e. meteorology ) and static parametrisation ( i.e. spatial distribution of land cover ) are considered .their multiplicity derives from the many sources on uncertainty which affect their estimation ( or implementation , for the d - tm software modules which are the building blocks of the hazard models ) .furthermore , during emergency modelling support the lack of timely and accurate monitoring systems over large spatial extents ( e.g. at the continental scale ) may imply a noticeable level of uncertainty to affect possibly even the location of natural hazards ( _ geoparsing _ uncertainty ) .this peculiar information gap may be mitigated by integrating remote sensing ( e.g. satellite imagery ) with a distributed array of social contributors ( citizen sensor ) , exploiting mobile applications ( apps ) and online social networks .remote sensing and the citizen sensor are here designed to cooperate by complementing accurate ( but often less timely ) geospatial information with distributed alert notifications from citizens , which might be timely but not necessarily accurate .their safe integration implies the supervision of human expertise , even if the task may be supported by automatic tools .assessing the evolution in the timespan $ ] of a certain hazard event for the associated array of impacts may be also complex ( e.g. ) .in particular , the array of impacts is often irreducible to a unidimensional quantity ( e.g. monetary cost ) .* figure 1 * - modular architecture for environmental risk modelling . based on urgent hpc, it follows the semantic array programming paradigm ( image adapted from ) integrating as inputs remote sensing , meteo data and the citizen sensor .the analysis of non - trivial systems subject to environmental risk and natural resources management may naturally lead to multi - objective ( multi criteria ) control problems , which might benefit from advanced machine learning techniques for mitigating the involved huge computational costs . indeed , the multiplicity of modelling dimensions ( states ; controls ; uncertainty - driven arrays of parameters and scenarios ; arrays of d - tm modules to account for software uncertainty )may easily lead to an exponential increase of the required computational processes ( the so called `` curse of dimensionality '' ) . 
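the growth implied by this curse of dimensionality is easy to quantify with a back - of - the - envelope count of model evaluations : with a full factorial exploration the number of required d - tm runs is the product of the levels explored per modelling dimension , whereas a fixed sampling budget keeps the cost bounded at the price of a sampling error . the dimensions and level counts below are arbitrary illustration values , not taken from the architecture itself :

# arbitrary illustration: number of levels explored along each modelling dimension
levels = {
    "hazard_models": 4,       # array of alternative model implementations (d-tm versions)
    "meteo_scenarios": 10,    # dynamic forecast ensemble members
    "parameter_sets": 20,     # static parametrisation samples
    "control_options": 8,     # candidate management / control strategies
    "impact_criteria": 5,     # criteria for the multi-criteria impact assessment
}

full_factorial = 1
for n in levels.values():
    full_factorial *= n
print("full factorial model evaluations:", full_factorial)   # 32000 here, growing exponentially with new dimensions

monte_carlo_budget = 2000    # fixed sampling budget, e.g. distributed over an hpc cluster
print("sampled evaluations:", monte_carlo_budget)

keeping the cost bounded in this way is only useful if the sample is placed well over the modelling space , which is where the hpc - based sampling discussed next comes in .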
a viable mitigation strategy might be offered by hpc tools ( such as urgent hpc ) in order to sample high - dimensional modelling space with a proper method .context : : demands on the eu s resilience in preparedness and disaster response capacity are likely to increase , as the impacts of disasters continue to grow .+ classical disciplinary and domain - specific approaches which might be perfectly suitable at local - scale may result in unacceptable simplifications in a broader context .pitfalls : : mathematization of systems in this contex as a formal control problem should be able to establish an _ effective _ science - policy interface . academic _ silo thinking _ should stop advertising solution - driven oversimplification to fit control theory `` hot topics '' .+ although in this family of problems `` humans will always be part of the computational process '' ( despite any academic potential illusion of fashionable _ full automation _ ) , + evolvability ( for adapting models to new emerging needs and knowledge ) and robustness ( for supporting uncertainty - aware decision processes ) would still need + a higher level of semantic awareness in computational models and + a self - adapting ability to scale up to the multiple dimensions of the arrays of + data / parameters .multiplicity : uncertainty and complexity : : in this context , the boundary between classical control - theory management strategies for natural resources and hazards and scenario modelling under deep - uncertainty is fuzzy ( inrmm ) .+ a key aspect of soundness relies on explicitly considering the multiple dimensions of the problem and the array of uncertainties involved .+ as no silver bullet seems to be available for reliably attacking this amount of uncertainty and complexity , an integration of methods is proposed . mitigating with an integrated approach: : array programming is well - suited for easily managing a multiplicity of arrays of hazard models , dynamic input information , static parametrisation and the distribute array of social contributions ( citizen sensor ) .+ _ array - based abstract _ thus better scalable _ modularisation _ of the data - transformations ( d - tm ) , and a _ semantically - enhanced _ design of the d - tm structure and interactions ( semantic array programming ) is proposed to consider also the array of uncertainties ( data , modelling , geoparsing , software uncertainty ) and the array of criteria to assess the potential impacts associated with the hazard scenarios .+ the unevenly available information during an emergency event may be efficiently exploited by means of a polfc schema .+ its demanding computations may become affordable during an emergency event with an appropriate array - based parallelisation strategy within urgent - hpc .semap can simplify wstme modelling of nontrivial static and dynamic geospatial quantities . 
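as a concrete , if deliberately simplified , illustration of what such semantically - enhanced d - tm modules can look like in array code , the following toy sketch checks array - level preconditions on the inputs and postconditions on the outputs of a single transformation . it does not reproduce the semap / mastrave interfaces ; the rainfall - to - runoff transformation , the constraint set and all values are invented for illustration :

import numpy as np

def check(condition, message):
    """minimal semantic check: stop the d-tm chain if an array-level constraint is violated."""
    if not condition:
        raise ValueError("semantic check failed: " + message)

def dtm_rainfall_to_runoff(rainfall_mm, runoff_coeff):
    """toy d-tm: transform a rainfall field into a runoff field (purely illustrative)."""
    # preconditions on the input arrays
    check(np.all(np.isfinite(rainfall_mm)), "rainfall must be finite")
    check(np.all(rainfall_mm >= 0), "rainfall must be non-negative")
    check(0.0 <= runoff_coeff <= 1.0, "runoff coefficient must lie in [0, 1]")

    runoff_mm = runoff_coeff * rainfall_mm

    # postconditions / invariants on the output array
    check(runoff_mm.shape == rainfall_mm.shape, "output grid must match input grid")
    check(np.all(runoff_mm <= rainfall_mm), "runoff can not exceed rainfall")
    return runoff_mm

# array of input scenarios (e.g. ensemble members reflecting meteorological uncertainty)
scenarios = [np.random.default_rng(s).gamma(2.0, 5.0, size=(4, 4)) for s in range(3)]
runoff_ensemble = [dtm_rainfall_to_runoff(r, runoff_coeff=0.35) for r in scenarios]
print(len(runoff_ensemble), runoff_ensemble[0].shape)

chaining such modules propagates the checks through the whole d - tm composition , so that a violated constraint stops the computation at the module where the semantic assumption breaks .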
under the semap paradigm ,the generic -th d - tm module is subject to the semantic checks as pre- , post - conditions and invariants on the inputs , outputs and the d - tm itself .the control problem is associated with the unevenly available dynamic updates of field measurements and other data related to an on - going hazard emergency .an _ emergency manager _ may thus be interested in assessing the best control strategy given a set of impacts and their associated costs as they can be approximately estimated ( rapid assessment ) with the currently available data .this data - driven approach can be implemented as partial open loop feedback control ( polfc ) approach for minimizing the overall costs associated with the natural hazard event , from the time onwards : where the -th cost is linked to the corresponding impact assessment criterion .this polfc schema within the semap paradigm may be considered a semantically - enhanced dynamic data - driven application system ( dddas ) .finally , the emergency manager may communicate the updated scenarios of the emergency evolution ( by means of geospatial maps and other executive summary information ) in order for decision - makers and stakeholders to be able to assess the updated multi - criteria pattern of costs and the preferred control options .this critical communication constitutes the science - policy interface and must be as supportive as possible .it is designed to exploit web map services ( wms ) ( on top of the underpinning free software for wstme , e.g. ) which may be accessed in a normal browser or with specific apps for smart - phones .nsf cyberinfrastructure council report reads : _ while hardware performance has been growing exponentially - with gate density doubling every 18 months , storage capacity every 12 months , and network capability every 9 months - it has become clear that increasingly capable hardware is not the only requirement for computation - enabled discovery .sophisticated software , visualization tools , middleware and scientific applications created and used by interdisciplinary teams are critical to turning flops , bytes and bits into scientific breakthroughs _transdisciplinary environmental problems such as the ones dealing with complexity and deep - uncertainty in supporting natural - hazard emergency might appear as seemingly intractable . nevertheless , approximate rapid - assessment based on computationally intensive modelling may offer a new perspective at least able to support emergency operations and decision - making with qualitative or semi - quantitative scenarios . even a partial approximate but timely investigation on the potential interactions of the many sources of uncertainty might help emergency managers and decision - makers to base control strategies on the best available although typically incomplete sound scientific information . in this context ,a key aspect of soundness relies on explicitly considering the multiple dimensions of the problem and the array of uncertainties involved . 
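the polfc schema described above can be sketched as a generic re - planning loop : whenever new field data arrive , the open - loop problem over the remaining horizon is re - solved with the updated rapid - assessment costs , only the first control action is applied , and the procedure repeats at the next update . the cost model , the hazard - evolution rule and the candidate controls below are toy placeholders for illustration , not the actual d - tm chain of the architecture :

import numpy as np

def rapid_assessment_cost(state, control, rng):
    """placeholder multi-criteria cost: weighted sum of per-criterion impacts (toy model)."""
    weights = np.array([1.0, 0.5, 2.0])          # one weight per impact criterion (assumed)
    impacts = state * np.array([1.0, 0.8, 1.2]) * (1.0 - control) + 0.05 * rng.random(3)
    return float(weights @ impacts)

def polfc_step(state, candidate_controls, horizon, rng):
    """re-solve the open-loop problem over the remaining horizon and return the best first action."""
    best_control, best_cost = None, np.inf
    for u in candidate_controls:
        # open-loop rollout under the currently available information only
        s, total = state.copy(), 0.0
        for _ in range(horizon):
            total += rapid_assessment_cost(s, u, rng)
            s = 1.05 * s * (1.0 - 0.5 * u)       # toy hazard-evolution model
        if total < best_cost:
            best_control, best_cost = u, total
    return best_control, best_cost

rng = np.random.default_rng(0)
state = np.array([1.0, 0.6, 0.3])                # current per-criterion impact estimates
for t in range(5):                                # each iteration mimics a new data update
    u, cost = polfc_step(state, candidate_controls=[0.0, 0.25, 0.5], horizon=6 - t, rng=rng)
    state = 1.05 * state * (1.0 - 0.5 * u) + 0.02 * rng.random(3)   # "true" evolution plus new data
    print(f"update {t}: apply control {u}, predicted remaining cost {cost:.2f}")

the point of the schema is that each re - optimisation uses all the information available at that time while remaining an approximate open - loop computation , which keeps the per - update cost compatible with the time constraints of urgent - hpc .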
as no silver bullet seems to be available for reliably attacking this amount of uncertainty and complexity ,an integration of methods is proposed , inspired by their promising synergy .array programming is perfectly suited for easily managing a multiplicity of arrays of hazard models , dynamic input information , static parametrisation and the distribute array of social contributions ( citizen sensor ) .the transdisciplinary nature of complex natural hazards their need for an unpredictably broad and multifaceted readiness to robust scalability may benefit ( 1 ) from a disciplined _ abstract modularisation _ of the data - transformations which compose the models ( d - tm ) , and ( 2 ) from a _ semantically - enhanced _ design of the d - tm structure and interactions .these two aspects define the semantic array programming ( semap , ) paradigm whose application extended to geospatial aspects is proposed to consider also the array of uncertainties ( data , modelling , geoparsing , software uncertainty ) and the array of criteria to assess the potential impacts associated with the hazard scenarios .the unevenly available information during an emergency event may be efficiently exploited by means of a partial open loop feedback control ( polfc , ) schema , already successfully tested in this integrated approach as a promising evolution of adaptive data - driven strategies .its demanding computations may become affordable during an emergency event with an appropriate array - based parallelisation strategy within urgent - hpc .sippel , s. , otto , f.e.l .climatological extremes - assessing how the odds of hydrometeorological extreme events in south - east europe change in a warming climate ] .climatic change : 1 - 18 .cirella , g.t . , et al , 2014 .http://dx.doi.org/10.1007/978-94-007-7161-1_16[natural hazard risk assessment and management methodologies review : europe ] . in : linkov , i. ( ed . ) , sustainable cities and military installations .nato science for peace and security series c : environmental security .springer netherlands , pp .329 - 358 .ciscar , j. c. , et al , 2013 .http://www.climate-impacts-2013.org/files/cwi_ciscar.pdf[climate impacts in europe : an integrated economic assessment ] . in : impactsworld 2013 - international conference on climate change effects .potsdam institute for climate impact research ( pik ) e. v. , pp .87 - 96 .ciscar , j. c. , et al , 2014 .http://dx.doi.org/10.2791/7409[climate impacts in europe - the jrc peseta ii project ] .26586 of eur - scientific and technical research .publ . off .union , 155 pp .dankers , r. , feyen , l. , 2008 .change impact on flood hazard in europe : an assessment based on high - resolution climate simulations ] .j. geophys .113(d19 ) : d19105 + .gaume , e. , et al , 2009 .http://dx.doi.org/10.1016/j.jhydrol.2008.12.028[a compilation of data on european flash floods ] .journal of hydrology 367(1 - 2 ) : 70 - 78 .marchi , l. , et al , 2010 .http://dx.doi.org/10.1016/j.jhydrol.2010.07.017[characterisation of selected extreme flash floods in europe and implications for flood risk management ] .journal of hydrology 394(1 - 2 ) : 118 - 133 .feser , f. , et al , 2014 .http://dx.doi.org/10.1002/qj.2364[storminess over the north atlantic and northwestern europe - a review ] .quarterly journal of the royal meteorological society .jongman , b. , et al , 2014 . http://dx.doi.org/10.1038/nclimate2124[increasing stress on disaster - risk finance due to large floods ] .nature climate change 4 ( 4 ) : 264 - 268. self , s. 
, 2006 .http://dx.doi.org/10.1098/rsta.2006.1814[the effects and consequences of very large explosive volcanic eruptions ] .philosophical transactions of the royal society a : mathematical , physical and engineering sciences 364(1845 ) : 2073 - 2097 .swindles , g. t. , et al , 2011 .yr perspective on volcanic ash clouds affecting northern europe ] .geology 39(9 ) : 887 - 890 . gramling ,c. , 2014 .volcano rumbles , scientists plan for aviation alerts ] .science 345(6200 ) : 990 .allard , g. , et al , 2013 .http://www.fao.org/docrep/017/i3226e/i3226e.pdf[state of mediterranean forests 2013 ] .fao , 177 pp .schmuck , g. , et al , 2014 .http://dx.doi.org/10.2788/99870[forest fires in europe , middle east and north africa 2013 ] .publications office of the european union .nijhuis , m. , 2012 .http://dx.doi.org/10.1038/489352a[forest fires : burn out ] . nature 489 ( 7416 ) , 352 - 354 .boyd , i.l ., et al , 2013 .http://dx.doi.org/10.1126/science.1235773[the consequence of tree pests and diseases for ecosystem services ] .science 342 ( 6160 ) : 1235773 + .venette , r.c . , et al , 2012 .http://www.webcitation.org/6bocob2ez[summary of the international pest risk mapping workgroup meeting sponsored by the cooperative research program on biological resource management for sustainable agricultural systems ] . in : 6th international pest risk mapping workgroup meeting : `` advancing risk assessment models for invasive alien species in the food chain : contending with climate change , economics and uncertainty '' .organisation for economic co - operation and development ( oecd ) , pp . 1 - 2 .maes , j. , et al , 2013 .http://dx.doi.org/10.2779/12398[mapping and assessment of ecosystems and their services - an analytical framework for ecosystem assessments under action 5 of the eu biodiversity strategy to 2020 ] .publications office of the european union , 57 pp .european commission , 2013 .http://eur-lex.europa.eu/lexuriserv/lexuriserv.do?uri=com:2013:0659:fin:en:pdf[communication from the commission to the european parliament , the council , the european economic and social committee and the committee of the regions - a new eu forest strategy : for forests and the forest - based sector ] .com(2013 ) 659 final .communication from the commission to the council and the european parliament .european commission , 2013 .staff working document accompanying the document : communication from the commission to the european parliament , the council , the european economic and social committee and the committee of the regions - a new eu forest strategy : for forests and the forest - based sector ] .commission staff working document 2013 ( swd/2013/0342 final ) , 98pp .barredo , j. i. , 2007 .http://dx.doi.org/10.1007/s11069-006-9065-2[major flood disasters in europe : 1950 - 2005 ] .natural hazards 42(1 ) : 125 - 148 .barredo , j. i. , 2010 .upward trend in normalised windstorm losses in europe : 1970 - 2008 ] .natural hazards and earth system science 10(1 ) : 97 - 104 .evans , m. r. , et al , 2012 .http://dx.doi.org/10.1098/rstb.2011.0191[predictive ecology : systems approaches ] .philosophical transactions of the royal society of london .series b , biological sciences 367(1586 ) : 163 - 169 .phillis , y. a. , kouikoglou , v. s. , 2012 .http://dx.doi.org/10.1016/j.ecolmodel.2012.03.032[system-of-systems hierarchy of biodiversity conservation problems ] .ecological modelling 235 - 236 : 36 - 48 .langmann , b. 
, 2014 .the role of climate forcing by volcanic sulphate and volcanic ash ] .advances in meteorology 2014 : 1 - 17 .gottret , m. v. , white , d. , 2001 . http://www.ecologyandsociety.org/vol5/iss2/art17/[assessing the impact of integrated natural resource management : challenges and experiences ] . ecology and society 5( 2 ) : 17 + hagmann , j. , et al , 2001 .http://www.ecologyandsociety.org/vol5/iss2/art29/[success factors in integrated natural resource management r&d : lessons from practice ] .ecology and society 5 ( 2 ) , 29 + .zhang , x. , et al , 2004 .http://www.citeulike.org/group/15400/article/11477904[scaling issues in environmental modelling ] . in : wainwright , j. , mulligan , m. ( eds . ) , environmental modelling : finding simplicity in complexity .estreguil , c. , et al , 2013 .http://dx.doi.org/10.2788/77842[forest landscape in europe : pattern , fragmentation and connectivity ] .eur - scientific and technical research 25717 ( jrc 77295 ) , 18 pp .turner , r. , 2010 , http://dx.doi.org/10.1890/10-0097.1[disturbance and landscape dynamics in a changing world ] .ecology , 91 10 : 2833 - 2849 .van westen , c.j . , 2013 .http://dx.doi.org/10.1016/b978-0-12-374739-6.00051-8[remote sensing and gis for natural hazards assessment and disaster risk management ] .in : bishop , m. p. ( ed . ) , remote sensing and giscience in geomorphology .vol . 3 of treatise on geomorphology .elsevier , pp .259 - 298 .urban , m.c ., et al , 2012 .crucial step toward realism : responses to climate change from an evolving metacommunity perspective ] .evolutionary applications 5 ( 2 ) : 154 - 167 .baklanov , a. , 2007 .risk and assessment modelling - scientific needs and expected advancements ] . in : ebel ,a. , davitashvili , t. ( eds . ) , air , water and soil quality modelling for risk and impact assessment .nato security through science series .springer netherlands , pp .steffen , w. , et al , 2011 .anthropocene : from global change to planetary stewardship ] .ambio 40 ( 7 ) : 739 - 761 .white , c. , et al , 2012 .http://dx.doi.org/10.1111/j.1461-0248.2012.01773.x[the value of coordinated management of interacting ecosystem services ] .ecology letters 15 ( 6 ) : 509 - 519. de rigo , d. , 2013 , http://arxiv.org/abs/1311.4762[software uncertainty in integrated environmental modelling : the role of semantics and open science ] .abstr . 15 : 13292 + .de rigo , d. , ( exp . )2014 . behind the horizon of reproducible integrated environmental modelling at european scale : ethics and practice of scientific knowledge freedom .f1000 research , submitted .lempert , r. j. , may 2002 .http://dx.doi.org/10.1073/pnas.082081699[a new decision sciences for complex systems ] .proceedings of the national academy of sciences 99 ( suppl 3 ) : 7309 - 7313 .rammel , c. , et al , 2007 . http://dx.doi.org/10.1016/j.ecolecon.2006.12.014[managing complex adaptive systems - a co - evolutionary perspective on natural resource management ] .ecological economics 63 ( 1 ) : 9 - 21 .van der sluijs 2012 , j. p. , 2012 .http://dx.doi.org/10.3167/nc.2012.070204[uncertainty and dissent in climate risk assessment : a post - normal perspective ] .nature and culture 7 ( 2 ) : 174 - 195. de rigo , d. , et al , 2013 .http://dx.doi.org/10.1007/978-3-642-41151-9_35[an architecture for adaptive robust modelling of wildfire behaviour under deep uncertainty ] .technol . 413 : 367 - 380 .guariso , g. , et al , 2000 . a map - based web server for the collection and distribution of environmental data . in : kosmatin fras , m. , mussio , l. 
, crosilla , f. , podobnikar , t. ( eds . ) , bridging the gap : isprs wg vi/3 and iv/3 workshop , ljubljana , february 2 - 5 , 2000 : collection of abstracts .ljubljana .guariso , g. , et al , 1985 .http://dx.doi.org/10.1016/0377-2217(85)90150-x[decision support systems for water management : the lake como case study ] .european journal of operational research 21(3 ) : 295 - 306 .soncini - sessa , r. , et al , 2007 . integrated and participatory water resources theory .elsevier .guariso , g. , page , b. ( eds ) , 1994 .computer support for environmental impact : proceedings of the ifip tc5/wg5.11 working conference on computer for environmental impact assessment , cseia 93 , como , italy , 6 - 8 october , 1993 .north - holland .casagrandi , r. , guariso , g. , 2009 .http://dx.doi.org/10.1016/j.envsoft.2008.11.013[impact of ict in environmental sciences : a citation analysis 1990 - 2007 ] .environmental modelling & software 24 ( 7 ) : 865 - 871 .cole , j. , 2010 .http://www.rusi.org/publications/whitehallreports/ref:o4c2cc38d725ee/[interoperability in a crisis 2 : human factors and organisational processes ] .tech . rep ., royal united services institute .sterman , j.d . , 2002 .http://dx.doi.org/10.1002/sdr.261[all models are wrong : reflections on becoming a systems scientist ] .system dynamics review 18 ( 4 ) : 501 - 531 .weichselgartner , j. , kasperson , r. , 2010 .http://dx.doi.org/10.1016/j.gloenvcha.2009.11.006[barriers in the science - policy - practice interface : toward a knowledge - action - system in global environmental change research ] .global environmental change 20 ( 2 ) : 266 - 277 .bainbridge , l. , 1983 .http://dx.doi.org/10.1016/0005-1098(83)90046-8[ironies of automation ] .automatica 19(6 ) : 775 - 779. de rigo , d. , et al , 2013 .http://dx.doi.org/10.6084/m9.figshare.155703[toward open science at the european scale : array programming for integrated environmental modelling ] .geophysical 15 : 13245 + .stensson , p. , jansson , a. , 2013 .http://dx.doi.org/10.1080/00140139.2013.858777[autonomous technology - sources of confusion : a model for explanation and prediction of conceptual shifts ] .ergonomics 57 ( 3 ) : 455 - 470 .russell , d. m. , et al , 2003 .http://dx.doi.org/10.1147/sj.421.0177[dealing with ghosts : managing the user experience of autonomic computing ] .ibm systems journal 42 ( 1 ) : 177 - 188 .anderson , s. , et al , 2003 .http://dx.doi.org/10.1109/dexa.2003.1232106[making autonomic computing systems accountable : the problem of human computer interaction ] . in : database and expert systems applications , 2003 .14th international workshop on .ieee , pp .718 - 724 .kephart , j. o. , chess , d. m. 2003 .http://dx.doi.org/10.1109/mc.2003.1160055[the vision of autonomic computing ] .computer 36 ( 1 ) : 41 - 50 .van der sluijs , j.p . , 2005 .http://www.iwaponline.com/wst/05206/wst052060087.htm[uncertainty as a monster in the science - policy interface : four coping strategies ] . water science & technology 52 ( 6 ) : 87 - 92 . frame , b. , 2008 .http://dx.doi.org/10.1068/c0790s[wicked , messy , and clumsy : long - term frameworks for sustainability ] .environment and planning c : government and policy 26 ( 6 ) : 1113 - 1128 .mcguire , m. , silvia , c. , 2010 .http://dx.doi.org/10.1111/j.1540-6210.2010.02134.x[the effect of problem severity , managerial and organizational capacity , and agency structure on intergovernmental collaboration : evidence from local emergency management ] .public administration review 70(2 ) : 279 - 288 .bea , r. 
, et al , 2009 .new approach to risk : the implications of e3 ] .risk management 11 ( 1 ) : 30 - 43. adams , k.m . , hester , p.t . , 2012 .http://dx.doi.org/10.1504/ijsse.2012.052683[errors in systems approaches ] .international journal of system of systems engineering 3 ( 3/4 ) : 233 + .larsson , a. , et al , 2010 .http://dx.doi.org/10.2202/1547-7355.1646[decision evaluation of response strategies in emergency management using imprecise assessments ] .journal of homeland security and emergency management 7 ( 1 ) .innocenti , d. , albrito , p. , 2011 . http://dx.doi.org/10.1016/j.envsci.2010.12.010[reducing the risks posed by natural hazards and climate change : the need for a participatory dialogue between the scientific community and policy makers ] . environmental science & policy 14 ( 7 ) : 730 - 733 .ravetz , j. , 2004 . http://dx.doi.org/10.1016/s0016-3287(03)00160-5[the post - normal science of precaution ] .futures 36 ( 3 ) : 347 - 357. de rigo , d. , 2012 .integrated natural resources modelling and management : minimal redefinition of a known challenge for environmental modelling .excerpt from the call for a shared research agenda toward scientific knowledge freedom , maieutike research initiative de rigo , d. , 2012 , http://www.iemss.org/iemss2012/proceedings/d3_1_0715_derigo.pdf[semantic array programming for environmental modelling : of the mastrave library ] .congress on environmental modelling and .managing resources of a limited plant , 1167 - 1176 .de rigo , d. , 2012 .http://mastrave.org/doc/mtv-1.012-1/[semantic array programming with mastrave - introduction to semantic computational modelling ] .corti , p. , et al , 2012 .news management in the context of the european forest fire information system ( effis ) ] . in : proceedings of `` quinta conferenza italiana sul software geografico e sui dati geografici liberi '' ( gfoss day 2012 ) .sheth , a. , 2009 .http://dx.doi.org/10.1109/mic.2009.77[citizen sensing , social signals , and enriching human experience ] .internet computing , ieee 13 ( 4 ) : 87 - 92 .zhang , d. , et al , 2011 .emergence of social and community intelligence ] .computer 44 ( 7 ) : 21 - 28 .adam , n.r ., et al , 2012 .http://dx.doi.org/10.1109/mis.2012.113[spatial computing and social media in the context of disaster management ] .intelligent systems , ieee 27 ( 6 ) : 90 - 96 .fraternali , p. , et al , 2012 .http://dx.doi.org/10.1016/j.envsoft.2012.03.002[putting humans in the loop : social computing for water resources management ] . environmental modelling & software 37 : 68 - 77 .rodriguez - aseretto , d. , et al , ( exp . ) 2014 . image geometry correction of daily forest fire progression map using modis active fire observation and citizens sensor .ieee earthzine 7(2 ) . submitted .bosco , c. , et al , 2013 .robust modelling of landslide susceptibility : rapid assessment and catchment robust fuzzy ensemble ] .commun . . 413 : 321 - 335 .di leo , m. , et al , 2013 .http://dx.doi.org/10.1007/978-3-642-41151-9_2[dynamic data driven ensemble for wildfire behaviour assessment : a case study ] .technol . 413 : 11 - 22 .ackerman , f. , heinzerling , l. , 2002 .http://dx.doi.org/10.2307/3312947[pricing the priceless : cost - benefit analysis of environmental protection ] .university of pennsylvania law review 150 ( 5 ) : 1553 - 1584 .gasparatos , a. 
, 2010 .http://dx.doi.org/10.1016/j.jenvman.2010.03.014[embedded value systems in sustainability assessment tools and their implications ] .journal of environmental management 91 ( 8) : 1613 - 1622 .de rigo , d. , et al , 2001 .http://dx.doi.org/10.5281/zenodo.7481[neuro-dynamic programming for the efficient management of reservoir networks ] . in : proceedings of modsim 2001 , international congress on modelling and simulation .4 . model .australia and new zealand , pp . 1949 - 1954 .rodriguez - aseretto , d. , et al , 2009 . http://dx.doi.org/10.1007/978-3-642-01973-9_55[injecting dynamic real - time data into a dddas for forest fire behavior prediction ] .lecture notes in computer science 5545 : 489 - 499 .cencerrado , a. , et al , 2009 .http://dx.doi.org/10.1007/978-3-642-01970-8_23[support for urgent computing based on resource virtualization ] .lecture notes in computer science 5544 : 227 - 236 .joshimoto , k , k , et al , 2012 . link:{}{h}[t]tp://dx.doi.org/10.1016%2fj.procs.2012.04.186implementations of urgent computing on production hpc systems .procedia computer science 9 : 1687 - 1693 .de rigo , d. , bosco , c. , 2011 .http://dx.doi.org/10.1007/978-3-642-22285-6_34[architecture of a pan - european framework for soil water erosion assessment ] .commun . . 359 : 310 - 318. de rigo , d. , et al , 2013 .http://dx.doi.org/10.1007/978-3-642-41151-9_26[continental-scale living forest biomass and carbon stock : a robust fuzzy ensemble of ipcc tier 1 maps for europe ] .commun . . 359 : 271 - 284rodriguez - aseretto , d. , et al , 2013 .http://dx.doi.org/10.1016/j.procs.2013.05.355[a data - driven model for large wildfire prediction in europe ] .procedia computer science 18 : 1861 - 1870 .castelletti , a. , et al , http://www.nt.ntnu.no/users/skoge/prost/proceedings/ifac2008/data/papers/1685.pdf[on-line design of water reservoir policies based on ] .ifac - papersonline 17 : 14540 - 14545 .mcinerney , d. , et al , 2012 .http://dx.doi.org/10.1109/jstars.2012.2194136[developing a forest data portal to support multi - scale decision making ] .ieee j. sel .earth obs .remote sens .5(6 ) : 1692 - 1699 .bastin , et al , 2012 .http://www.earthzine.org/?p=389531[web services for forest data , analysis and monitoring : developments from eurogeoss ] .ieee earthzine 5(2 ) : 389531 + .rodriguez - aseretto , d. , et al , 2013 . http://dx.doi.org/10.6084/m9.figshare.155700[free and open source software underpinning the european forest data centre ] .geophysical research abstracts 15 : 12101 + .national science foundation , cyberinfrastructure council , 2007 .http://www.nsf.gov/pubs/2007/nsf0728/index.jsp?org=ef[cyberinfrastructure vision for 21st century discovery ] .tech . rep .nsf 07 - 28 , national science foundation .altay , n. , green , w.g .http://dx.doi.org/10.1016/j.ejor.2005.05.016[or/ms research in disaster operations management ] .european journal of operational research 175 ( 1 ) : 475 - 493 .rodriguez - aseretto , d. , et al , 2008 .adaptive system for forest fire behavior prediction ] . in : computational science and engineering , 2008 .11th ieee international conference on .ieee , pp .275 - 282 .
demands on the disaster response capacity of the european union are likely to increase , as the impacts of disasters continue to grow both in size and frequency . this has resulted in intensive research on issues concerning spatially - explicit information and modelling and their multiple sources of uncertainty . geospatial support is one of the forms of assistance frequently required by emergency response centres along with hazard forecast and event management assessment . robust modelling of natural hazards requires dynamic simulations under an array of multiple inputs from different sources . uncertainty is associated with meteorological forecast and calibration of the model parameters . software uncertainty also derives from the data transformation models ( d - tm ) needed for predicting hazard behaviour and its consequences . on the other hand , social contributions have recently been recognized as valuable in raw - data collection and mapping efforts traditionally dominated by professional organizations . here an architecture overview is proposed for adaptive and robust modelling of natural hazards , following the _ semantic array programming _ paradigm to also include the distributed array of social contributors called _ citizen sensor _ in a semantically - enhanced strategy for d - tm modelling . the modelling architecture proposes a multi - criteria approach for assessing the array of potential impacts with qualitative rapid assessment methods based on a _ partial open loop feedback control _ ( polfc ) schema and complementing more traditional and accurate a - posteriori assessment . we discuss the computational aspect of environmental risk modelling using array - based parallel paradigms on _ high performance computing _ ( hpc ) platforms , in order for the implications of urgency to be introduced into the systems ( urgent - hpc ) . * keywords * : geospatial , integrated natural resources modelling and management , semantic array programming , warning system , remote sensing , parallel application , high performance computing , partial open loop feedback control dario rodriguez - aseretto , christian schaerer , daniele de rigo european commission , joint research centre , institute for environment and sustainability , ispra ( va ) , italy . + polytechnic school , national university of asuncion , san lorenzo , central , paraguay . + politecnico di milano , dipartimento di elettronica , informazione e bioingegneria , milano , italy . europe experienced a series of particularly severe disasters in the recent years , with worrying potential impacts of similar disasters under future projected scenarios of economy , society and climate change . they range from flash floods and severe storms in western europe with an expected increasing intensity trend , large - scale floods in central europe , volcanic ash clouds ( e.g. after the eyjafjallajkull eruption ) , large forest fires in portugal and mediterranean countries . biological invasions such as emerging plant pests and diseases have the potential to further interact e.g. with wildfires and to impact on ecosystem services and economy with substantial uncertainties . it should be underlined that these recent highlights are set in the context of systemic changes in key sectors which overall may be expected to at least persist in the next decades . 
as a general trend , demands on the eu s resilience in preparedness and disaster response capacity are likely to increase , as the impacts of disasters continue to grow both in size and frequency , even considering only the growing exposure ( societal factors ) . the aforementioned examples of disturbances are often characterised by non - local system feedbacks and off - site impacts which may connect multiple natural resources ( system of systems ) . in this particular multifaceted context , landscape and ecosystem dynamics show intense interactions with disturbances . as a consequence , classical disciplinary and domain - specific approaches which might be perfectly suitable at local - scale may easily result in unacceptable simplifications within a broader context . a broad perspective is also vital for investigating future natural - hazard patterns at regional / continental scale and adapting preparedness planning . the complexity and uncertainty associated with these interactions along with the severity and variety of the involved impacts urge robust , holistic coordinated and transparent approaches . at the same time , the very complexity itself of the control - system problems involved may force the analysis to enter into the region of deep - uncertainty . the mathematization of systems in this context as a formal control problem should be able to establish an effective science - policy interface , which is _ not _ a trivial aspect . this is easily recognised even just considering the peculiarities which have been well known for a long time of geospatially - aware environmental data and decision support systems , their entanglement with growingly complex ict aspects and their not infrequent cross - sectoral characterisation . several pitfalls may degrade the real - world usefulness of the mathematization / implementation process . while it is relatively intuitive how a poor mathematization with a too simplistic approach might result in a failure , subtle pitfalls may lie even where an `` appropriately advanced '' theoretical approach is proposed . mathematization should resist _ silo thinking _ temptations such as academic solution - driven pressures to force the problem into fashionable `` hot topics '' of control theory : robust approximations of the real - world broad complexity may serve egregiously instead of state - of - art solutions of oversimplified problems . other long - lasting academic claims are `` towards '' fully automated scientific workflows in computational science , maybe including self - healing and self - adapting capabilities of the computational models implementing the mathematization . these kinds of claims might easily prompt some irony among experienced practitioners in wide - scale transdisciplinary modelling for environment ( wstme , ) as a never - ending research pandora s box with doubtful net advantages . complex , highly uncertain and sensitive problems for policy and society , as wstme problems typically are , will possibly never be suitable for full automation : even in this family of problems , `` humans will always be part of the computational process '' also for vital accountability aspects . 
while a certain level of autonomic computing capabilities might be essential for the evolvability and robustness of wstme ( in particular , perhaps , a higher level of semantic awareness in computational models and a self - adapting ability to scale up to the multiple dimensions of the arrays of data / parameters ; see next section ) , here the potential pitfall is the illusion of _ fully automating _ wstme . the domain of applicability of this puristic academic _ silo _ although promising for relatively simple , well - defined ( and not too policy - sensitive ) case studies might be intrinsically too narrow for climbing up to deal with the _ wicked problems _ typical of complex environmental systems . the discussed pitfalls might deserve a brief summary . first , perhaps , is the risk of `` solving the wrong problem precisely '' by neglecting key sources of uncertainty e.g. unsuitable to be modelled within the `` warmly supported '' solution of a given research group . during emergency operations , the risks of providing a `` myopic decision support '' should be emphasised ; i.e. suggesting inappropriate actions e.g. inaction or missing precaution due to the potential overwhelming lack of information or the oversimplification / underestimation of potential chains of impacts due to the lack of computational resources for a decent ( perhaps even qualitative and approximate ) rapid assessment of them . overcoming these pitfalls is still an open issue . here , we would like to contribute to the debate by proposing the integrated use of some mitigation approaches . we focus on some general aspects of the _ modelling architecture _ for the computational science support , in order for emergency - operators , decision - makers , stakeholders and citizens to be involved in a participatory information and decision support system which assimilates uncertainty and precaution . since no silver bullet seems to be available for mitigating the intrinsic wide - extent of complexity and uncertainty in environmental risk modelling , an array of approaches is integrated and the computational aspects are explicitly connected with the supervision and distributed interaction of human expertise . this follows the idea that the boundary between classical control - theory management strategies for natural resources and hazards ( driven by automatic control problem formulations `` minimize the risk score function '' ) and scenario modelling under deep - uncertainty ( by e.g. merely supporting emergency - operators , decision - makers and risk - assessors with understandable information `` sorry , no such thing as a risk score function can be precisely defined '' ) is fuzzy . both modelling and management aspects may be computationally intensive and their integration is a transdisciplinary problem ( _ integrated natural resources modelling and management _ , inrmm ) .
multipair multiple - input multiple - output ( mimo ) relaying networks have recently attracted considerable attention since they can provide a cost - effective way of achieving performance gains in wireless systems via coverage extension and maintaining a uniform quality of service .in such a system , multiple sources simultaneously exchange information with multiple destinations via a shared multiple - antenna relay in the same time - frequency resource .hence , multi - user interference is the primary system bottleneck .the deployment of massive antenna arrays at the relay has been proposed to address this issue due to their ability to suppress interference , provide large array and spatial multiplexing gains , and in turn to yield large improvements in spectral and energy efficiency .there has recently been considerable research interest in multipair massive mimo relaying systems . for example, derived the ergodic rate of the system when maximum ratio combining / maximum ratio transmission ( mrc / mrt ) beamforming is employed and showed that the energy efficiency gain scales with the number of relay antennas in rayleigh fading channels .then , extended the analysis to the ricean fading case and obtained similar power scaling behavior . for full - duplex systems , analytically compared the performance of mrc / mrt and zero - forcing reception / transmission and characterized the impact of the number of user pairs on the spectral efficiency .all the aforementioned works are based on the assumption of perfect hardware .however , a large number of antennas at the relay implies a very large deployment cost and significant energy consumption if a separate rf chain is implemented for each antenna in order to maintain full beamforming flexibility . in particular ,the fabrication cost , chip area and power consumption of the analog - to - digital converters ( adcs ) and the digital - to - analog converters ( dacs ) grow roughly exponentially with the number of quantization bits .the cumulative cost and power required to implement a relay with a very large array can be prohibitive , and thus it is desirable to investigate the use of cheaper and more energy - efficient components , such as low - resolution ( e.g. , one bit ) adcs and dacs .fortunately , it has been shown in that large arrays exhibit a certain resilience to rf hardware impairments that could be caused by such low - cost components .several recent contributions have investigated the impact of low - resolution adcs on the massive mimo uplink .for example , optimized the training pilot length to maximize the spectral efficiency , while revealed that in terms of overall energy efficiency , the optimal level of quantization is 4 - 5 bits . in ,the bussgang decomposition was used to reformulate the nonlinear quantization using a second - order statistically equivalent linear operator , and to derive a linear minimum mean - squared error ( lmmse ) channel estimator for one - bit adcs . 
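the bussgang decomposition mentioned above can be verified numerically in a few lines : for a gaussian input , the one - bit output sign(x) can be written as a scaled copy of x plus a distortion term that is uncorrelated with x , with scaling factor sqrt(2/pi) for a unit - variance input . the sketch below treats a real - valued signal for simplicity ( the real and imaginary parts of a complex signal , quantized separately , behave in the same way ) :

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)              # unit-variance gaussian input to the one-bit adc

y = np.sign(x)                          # one-bit quantization (real-valued case for simplicity)

# bussgang gain: a = e[x y] / e[x^2]; for a unit-variance gaussian input this equals sqrt(2/pi)
a_hat = np.mean(x * y) / np.mean(x * x)
print(a_hat, np.sqrt(2 / np.pi))        # the two values should be close

# the residual q = y - a*x is the quantization distortion; it is uncorrelated with x
q = y - a_hat * x
print(np.mean(q * x))                   # ~ 0 (uncorrelated, though not independent)
print(np.mean(q * q))                   # distortion power, ~ 1 - 2/pi for a unit-variance input

this linear - signal - plus - uncorrelated - distortion picture is also the device used below to handle the doubly quantized relay signals .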
in ,a near - optimal low complexity bit allocation scheme was presented for millimeter wave channels exhibiting sparsity .the work of examined the impact of one - bit adcs on wideband channels with frequency - selective fading .other work has focused on balancing the spectral and energy efficiency , either through the combined use of hybrid architectures with a small number of rf chains and low resolution adcs , or using mixed adcs architectures with high and low resolution .in contrast to the uplink case , there are relatively fewer contributions that consider the massive mimo downlink with low - resolution dacs . in , it was shown that performance approaching the unquantized case can be achieved using dacs with only 3 - 4 bits of resolution .the nearly optimal quantized wiener precoder with low - resolution dacs was studied in , and the resulting solution was shown to outperform the conventional wiener precoder with 4 - 6 bits of resolution at high signal - to - noise ratio ( snr ) . for the case of one - bit dacs , showed that even simple mrt precoding can achieve reasonable results .in , an lmmse precoder was proposed by taking the quantization non - linearities into account , and different precoding schemes were compared in terms of uncoded bit error rate .all these prior works are for single - hop systems rather than dual - hop connections via a relay .recently , considered a relay - based system that uses mixed - resolution adcs at the base station .unlike , we consider a multipair amplify - and - forward ( af ) relaying system where the relay uses both one - bit adcs and one - bit dacs .the one - bit adcs cause errors in the channel estimation stage and subsequently in the reception of the uplink data ; then , after a linear transformation , the one - bit dacs produce distortion when the downlink signal is coarsely quantized . in this paper , we present a detailed performance investigation of the achievable rate of such doubly quantized systems . in particular , the main contributions are summarized as follows : * we investigate a multipair af relaying system that employs one - bit adcs and dacs at the relay and uses mrc / mrt beamforming to process the signals .we take the correlation of the quantization noise into account , and present an exact achievable rate by using the arcsine law .then , we use asymptotic arguments to provide an approximate closed - form expression for the achievable rate .numerical results demonstrate that the approximate rate is accurate in typical massive mimo scenarios , even with only a moderate number of users .* we show that the channel estimation accuracy of the quantized system depends on the specific orthogonal pilot matrix that is used , which is in contrast to unquantized systems where any orthogonal pilot sequence yields the same result .we consider the specific case of identity and hadamard pilot matrices , and we show that the identity training scheme provides better channel estimation performance for users with weaker than average channels , while the hadamard training sequence is better for users with stronger channels .* we compare the achievable rate of different adc and dac configurations , and show that a system with one - bit dacs and perfect adcs outperforms a system with one - bit adcs and perfect dacs . 
focusing on the low transmit power regime , we show that the sum rate of the relay system with one - bit adcs and dacs is times that achievable with perfect adcs and dacs .also , it is shown that the transmit power of each source or the relay can be reduced inversely proportional to the number of relay antennas , while maintaining a given quality - of - service .* we formulate a power allocation problem to allocate power to each source and the relay , subject to a sum power budget .locally optimum solutions are obtained by solving a sequence of geometric programming ( gp ) problems .our numerical results suggest that the power allocation strategy can efficiently compensate for the rate degradation caused by the coarse quantization .the remainder of the paper is organized as follows .section [ sec : system_model ] introduces the multipair af relaying system model under consideration .section [ sec : rate_analysis ] presents an approximate closed - form expression for the sum rate , and compares the rate achieved with different adc and dac configurations .section [ sec : power_allocation ] formulates a power allocation problem to compensate for the rate loss caused by the coarse quantization .numerical results are provided in section [ sec : numerical_results ] .finally , section [ sec : conclusions ] summarizes the key findings ._ notation _ : we use bold upper case letters to denote matrices , bold lower case letters to denote vectors and lower case letters to denote scalars .the notation , , , and respectively represent the conjugate transpose operator , the conjugate operator , the transpose operator , and the matrix inverse .the euclidian norm is denoted by , the absolute value by , and {mn}} ] , whose elements are assumed to be gaussian distributed with zero mean and unit power . is a diagonal matrix that denotes the transmit power of the sources with {kk } = p_{{\text s},k} ] and ] for . for one - bit quantization , by invoking the results in (* chapter 10 ) and applying the arcsine law , we have where substituting into , and after some simple mathematical manipulations , we have since is uncorrelated with , we have substituting into yields based on the observation and the training pilots , we use the lmmse technique to estimate .hence , the estimated channel is given by as a result , the covariance matrix of the estimated channel is expressed as where . from , we can see that is a non - trivial function of , which indicates that the quality of the channel estimates depends on the specific realization of the pilot sequence , which is contrary to unquantized systems where any set of orthogonal pilot sequences gives the same result .although our conclusion in _ remark 1 _ is obtained based on the lmmse estimator , it also holds for the maximum likelihood estimator . in the following ,we study the performance of two specific pilot sequences to show how the pilot matrix affects the channel estimation . here , we choose , which is the minimum possible length of the pilot sequence .\a ) _ _ . in this case , , and hence we have consequently , where is a diagonal matrix with {kk } = \alpha_{\text p , k}= \sqrt{\frac{2}{\pi } \frac{1}{k p_\text p \beta_{\text{sr},k } + 1}} ] and ] and ], the problem is expressed as where and and are respectively given by and , shown on the top of the next page . 
' '' '' ' '' '' since is an increasing function , problem can be reformulated as which can be identified as a complementary geometric program ( cgp ) .note that the equality constraints of have been replaced with inequality constraints . since the objective function of decreases with , we can guarantee that the inequality constraints must be active at any optimal solution of , which means that problem is equivalent to .cgp problems are in general nonconvex .fortunately , we can first approximate the cgp by solving a sequence of gp problems . then, each gp can be solved very efficiently with standard convex optimization tools such as cvx .the key idea is to use a monomial function to approximate near an arbitrary point . to make the approximation accurate , we need to ensure that these results will hold if the parameters and are chosen as and . at each iteration ,the gp is obtained by replacing the posynomial objective function with its best local monomial approximation near the solution obtained at the previous iteration .the following algorithm shows the steps to solving .initialization_. define a tolerance and parameter .set , and set the initial value of according to the signal - to - interference - plus - noise ratio ( sinr ) in theorem [ theorm : rk_tilde ] with and .\2 ) _ iteration . compute . then , solve the following gp problem : denote the optimal solutions by , for .\3 ) _ stopping criterion_. if , stop ; otherwise , go to step 4 ) .\4 ) _ update initial values_. set , and .go to step 2 ) .we have neglected in the objective function of since they are constants and do not affect the problem solution . also , some trust region constraints are added , i.e. , , which limits how much the variables are allowed to differ from the current guess .the parameter controls the desired accuracy .more precisely , when is close to 1 , it provides good accuracy for the momomial approximation but with slower convergence speed , and vice versa if is large . as discussed in , offers a good tradeoff between accuracy and convergence speed .in this section , we present numerical results to validate previous analytical results and demonstrate the benefits of the power allocation algorithm . in this section, we evaluate the channel estimation accuracy of the identity and hadamard pilot matrices .we choose , and the large scale fading coefficients ] , and $ ] . as a benchmark scheme for comparison, we also plot the sum rate with uniform power allocation , i.e. , and .for uniform power allocation , we can see that the rate of case i is the highest , case iv is the lowest , while case ii outperforms case iii .these results are in agreement with proposition [ prop : rate : compare ] . in addition , we observe that the optimal power allocation strategy significantly boosts the sum rate . although the rate achieved by the optimal power allocation with one - bit adcs and dacs is inferior to the case of perfect adcs and dacs with uniform power allocation , it outperforms the other three one - bit adc / dac configurations .this demonstrates the great importance of power allocation in quantized systems . 
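the monomial ( condensation ) step at the heart of the successive gp procedure described above is easy to illustrate numerically . the sketch below is a minimal python illustration and not the paper's implementation : the toy two-variable posynomial stands in for the actual sinr-based objective , and in practice each resulting gp subproblem would be handed to a solver such as cvx ( as in the algorithm above ) or cvxpy .

```python
import numpy as np

def posynomial(x, c, A):
    """f(x) = sum_j c_j * prod_i x_i**A[j, i], with c_j > 0 and x_i > 0."""
    return float(np.sum(c * np.prod(x ** A, axis=1)))

def monomial_approx(x0, c, A):
    """best local monomial approximation of f at x0 (matches value and gradient)."""
    f0 = posynomial(x0, c, A)
    terms = c * np.prod(x0 ** A, axis=1)            # value of each monomial term at x0
    grad = (A * terms[:, None]).sum(axis=0) / x0    # df/dx_i at x0
    a = x0 * grad / f0                              # exponents a_i of the approximation
    coef = f0 / np.prod(x0 ** a)                    # multiplicative constant
    return coef, a

# toy posynomial in two variables, standing in for the sinr-based objective
c = np.array([1.0, 2.0, 0.4])
A = np.array([[1.0, 0.0],
              [0.0, 1.5],
              [-1.0, 1.0]])
x0 = np.array([1.0, 2.0])
coef, a = monomial_approx(x0, c, A)

# inside a trust region around x0 the monomial tracks the posynomial closely,
# which is why the parameter theta in the algorithm trades accuracy for speed
for scale in (1.0, 1.05, 1.2):
    x = scale * x0
    print(scale, round(posynomial(x, c, A), 4), round(coef * np.prod(x ** a), 4))
```

the approximation agrees with the posynomial exactly at the expansion point and stays close inside a small trust region , which is the property the iterative gp algorithm relies on .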
for , db , and db .we have analyzed the achievable rate of a multipair half - duplex massive antenna relaying system assuming that one - bit adcs and dacs are deployed at the relay .an approximate closed - form expression for the achievable rate was derived , based on which the impact of key system parameters was characterized .it was shown that the sum rate with one - bit adcs and dacs is times less than that achieved by an unquantized system in the low power regime . despite the rate loss due to the use of one - bit adcs and dacs , employing massive antenna arrays still enables significant power savings ; i.e. , the transmit power of each source or the relay can be reduced proportional to to maintain a constant rate , as in the unquantized case .finally , we show that a good power allocation strategy can substantially compensate for the rate loss caused by the coarse quantization .the end - to - end sinr given in consists of six expectation terms : 1 ) desired signal power ; 2 ) estimation error ; 3 ) interpair interference ; 4 ) noise at the relay ; 5 ) quantization noise of adcs ; 6 ) quantization noise of dacs . besides these terms, we also need to calculate an approximation of . in the following , we compute them one by one .\1 ) approximate : by using the fact that , we have then , by substituting and into , we directly obtain \2 ) : since we have \3 ) : which can be decomposed into three different cases : \a ) for , \b ) for , \c ) for , combining a ) , b ) , and c ) , and by utilizing the fact of and , we have thus , 4 ) : which can be decomposed as six cases : \a ) for , \b ) for , \c ) for ( ) , \d ) for ( ) , \e ) for , .\f ) for , . combining a ) , b ) , c ) , d ) , e ) , and f ) , we have \5 ) : following the same approach as with the derivations of , we obtain \6 ) : by using the fact that , we obtain the result for .\7 ) : .\8 ) : combining , , and , we can find the value of .we can readily observe that and .thus , we only focus on comparing and . due to the fact that and ( c.f . , , and corollary [ coro : perfect : adc ] ) , and by neglecting the low order terms as , the ratio between the sinr of and that of can be expressed as where since , we conclude that .this completes the proof .c. kong , c. zhong , a. papazafeiropoulos , m. matthaiou , and z. zhang , `` sum - rate and powr scaling of massive mimo systems with channel aging , '' _ ieee trans .48794893 , dec . 2015 .m. cheng , s. yang , and x. fang , `` adaptive antenna - activation based beamforming for large - scale mimo communication systems of high speed railway , '' _ china commun ._ , vol . 13 , no .9 , pp . 1223 , sep .2016 .h. q. ngo , h. a. suraweera , m. matthaiou , and e. g. larsson , `` multipair full - duplex relaying with massive arrays and linear processing , '' _ ieee j. sel .areas commun .9 , pp . 17211737 , oct . 2014 .z. zhang z. chen , m. shen , and b. xia , `` spectral and energy efficiency of multipair two - way full - duplex relay systems with massive mimo , '' _ ieee j. sel .areas commun .4 , pp . 848863 , apr . 2016 .j. yoo , k. choi , and d. lee , `` comparator generation and selection for highly linear cmos flash analog - to - digital converter , '' _ j. analog integr. circuits signal process .23 , pp . 179187 , 2003 .y. li , c. tao , a. mezghani , a. l. swindlehurst , g. seco - granados , and l. liu , `` optimal design of energy and spectral efficiency tradeoff in one - bit massive mimo systems , '' [ online ] available : https://arxiv.org/pdf/1609.07427.pdf y. li , c. tao , g. 
seco - granados , a. mezghani , a. l. swindlehurst , and l. liu , `` channel estimation and performance analysis of one - bit massive mimo systems , '' [ online ] available : https://arxiv.org/pdf/1612.03271.pdf j. mo , a. alkhateeb , s. abu - surra , and r. w. heath jr ., `` hybrid architectures with few - bit adc receivers : achievable rates and energy - rate tradeoffs , '' [ online ] available : http://arxiv.org/pdf/1605.00668.pdf g. jacovitti and a. neri , `` estimation of the autocorrelation function of complex gaussian stationary processed by amplitude clipped signals , '' _ ieee trans .inf . theory _ ,1 , pp . 249245 , jan .
this paper considers a multipair amplify-and-forward massive mimo relaying system with one-bit adcs and one-bit dacs at the relay. the channel state information is estimated via pilot training and then utilized by the relay to perform simple maximum-ratio combining / maximum-ratio transmission processing. leveraging the bussgang decomposition, an exact achievable rate is derived for the system with correlated quantization noise. based on this, a closed-form asymptotic approximation of the achievable rate is presented, thereby enabling efficient evaluation of the impact of key parameters on the system performance. furthermore, power scaling laws are characterized to study the potential energy efficiency associated with deploying massive one-bit antenna arrays at the relay. in addition, a power allocation strategy is designed to compensate for the rate degradation caused by the coarse quantization. our results suggest that the quality of the channel estimates depends on the specific orthogonal pilot sequences that are used, contrary to unquantized systems where any set of orthogonal pilot sequences gives the same result. moreover, the sum rate gap between the double-quantized relay system and an ideal non-quantized system is a moderate factor of in the low power regime.
massive mimo, relays, one-bit quantization, power allocation
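the arcsine law used above to obtain the exact achievable rate under one-bit quantization is the classical van vleck result for jointly gaussian inputs , and it is easy to check by monte carlo simulation . the short python sketch below only illustrates that law ( the correlation value and the sample size are arbitrary choices ) ; it is not a reproduction of the paper's rate analysis .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.6 + 0.2j        # assumed correlation value, any |rho| < 1 works

# two unit-power, correlated, circularly symmetric gaussian sequences
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = rho * x + np.sqrt(1 - abs(rho) ** 2) * w

def one_bit(z):
    """one-bit quantizer applied to real and imaginary parts, scaled to unit power."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

r = np.mean(x * np.conj(y))                         # empirical input correlation
q = np.mean(one_bit(x) * np.conj(one_bit(y)))       # empirical output correlation
arcsine = (2 / np.pi) * (np.arcsin(r.real) + 1j * np.arcsin(r.imag))
print(np.round(q, 3), np.round(arcsine, 3))         # agree up to monte carlo error
```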
the cloud radio access network ( c - ran ) provides a new architecture for 5 g cellular systems . in c - rans , the baseband processing of base stations is carried out in the cloud , i.e. , a centralized base band unit ( bbu ) , which launches joint signal processing with coordinated multi - point transmission ( comp ) and makes it possible to mitigate inter - cell interference .the separation of the bbu and the radio units ( rus ) brings a new segment , i.e. , fronthaul links , to connect both parts .the limited capacities of fronthaul links have a significant influence on the system performance of c - rans .there are several existing works on fronthaul links in c - rans .efficient signal quantization / compression for fronthaul links is designed to maximize the network throughput for the uplink and downlink in and , respectively . in , fronthaul quantization andtransmit power control are optimized jointly . in , energy - efficient comp is designed for downlink transmission considering fronthaul capacity . in ,the capacities of fronthaul links are allocated under a sum capacity constraint to maximize the total throughput . in ,the fronthaul links are reconfigured to apply appropriate transmission strategies in different parts according to both heterogeneous user profiles and dynamic traffic load patterns .however , these existing works have all focused on the physical layer performance without consideration of bursty data arrivals at the transmitters or of the delay requirement of the information flows . since real - life applications ( such as video streaming , web browsing or voip ) are delay - sensitive , it is important to optimize the delay performance of c - rans .to take the queueing delay into consideration , the fronthaul allocation policy should be a function of both the channel state information ( csi ) and the queue state information ( qsi ) .this is because the csi reveals the instantaneous transmission opportunities at the physical layer and the qsi reveals the urgency of the data flows .however , the associated optimization problem is very challenging . a systematic approach tothe delay - aware optimization problem is through a markov decision process ( mdp ) . in general , the optimal control policy can be obtained by solving the well - known _bellman equation_. conventional solutions to the bellman equation , such as brute - force value iteration or policy iteration , have huge complexity ( i.e. , the curse of dimensionality ) , because solving the bellman equation involves solving an exponentially large system of non - linear equations . in this paper , we focus on minimizing the average delay by fronthaul allocation .there are two technical challenges associated with the fronthaul allocation optimization problem : * * challenges due to the average delay consideration * : unlike other works which optimize the physical layer throughput , the optimization involving average delay performance is fundamentally challenging .this is because the associated problem belongs to the class of _ stochastic optimization _ , which embraces both _ information theory _( to model the physical layer dynamics ) and _ queueing theory _ ( to model the queue dynamics ) . a key obstacle to solvingthe associated bellman equation is to obtain the priority function , and there is no easy and systematic solution in general . 
** challenges due to the coupled queue dynamics : * the queues of data flows are coupled together due to the mutual interference .the associated stochastic optimization problem is a -dimensional mdp , where is the number of data flows .this -dimensional mdp leads to the curse of dimensionality with complexity exponential to for solving the associated bellman equation .it is highly nontrivial to obtain a low complexity solution for dynamic fronthaul allocation in c - rans . in this paper , we model the fronthaul allocation problem as an infinite horizon average cost mdp and propose a low - complexity delay - aware fronthaul allocation algorithm . to overcome the aforementioned technical challenges, we exploit the specific problem structure that the cross link path gain is usually weaker than the home cell path gain . utilizing the _ perturbation analysis _technique , we obtain a closed - form approximate priority function and the associated error bound . based on that , we obtain a low - complexity delay - aware fronthaul allocation algorithm .the solution is shown to be asymptotically optimal for sufficiently small cross link path gains .furthermore , the simulation results show that the proposed fronthaul allocation achieves significant delay performance gain over various baseline schemes .the rest of this paper is organized as follows . in sectionii , we establish the wireless access link , fronthaul link and cloud baseband processing models as well as the queue dynamics . in section iii, we formulate the fronthaul allocation problem and derive the associated optimality conditions . in section iv , we propose a low - complexity fronthaul allocation solution . following this ,the delay performance of the proposed algorithm is evaluated by simulation in section v. finally , conclusions are drawn in section vi .in this section , we introduce the c - ran topology and the associated models of the access link , the fronthaul link and the cloud baseband processing . based on the models , we obtain the throughput and the dynamics of packet queues .we consider a c - ran with cells , each of which has an ru with a single antenna . in each cell , the data are transmitted from a single - antenna user equipment ( ue ) to the ru via wireless access links and then to the bbu via the fronthaul link over fiber / microwave , as shown in fig . [ fig : topology ] .the time is slotted and the duration of each time slot is .the bbu collects necessary information and makes the resource allocation decisions periodically at the beginning of each time slot .the wireless access links are modeled as an interference channel . in the uplink ,the ues transmit signals to their corresponding rus respectively , and in the meantime , cause interference to other rus in the network .the signals received by the rus are where is a -dimensional vector of the transmitted signals , in which is transmitted by the -th uewith power , is a -dimensional vector of the signals received by the rus , in which is the signal received by the ru in the -th cell , , in which is the complex channel fading coefficient of the uplink transmission from the -th ue to the ru in the -th cell , and is the white gaussian thermal noise with power .define as the _ global csi _ for uplink access links at the -th slot .we have the following assumption on .[ ass_csi]the csi remains constant within a time slot and is i.i.d . over time slots . is independent over the indices and . is composed of two parts , i.e. 
, , where is the short - term fading coefficient which follows a complex gaussian distribution with mean 0 and unit variance , and is the corresponding large - scale path gain , which is constant over the duration of the communication session .denote as the capacity allocated to the fronthaul link between the ru in the -th cell and the bbu at the -th slot .let be the uplink fronthaul allocation . with limited - capacity fronthaul links ,the signals transmitted between the rus and the bbu have to be quantized . in the uplink ,the ru in each cell underconverts its received signal and sends the quantized signal to the bbu .define , where is the quantized signal at the ru in the -th cell .the signals are assumed to be quantized for each fronthaul link separately .the quantization leads to the distortion of signal , which can be treated as the quantization noise , denoted as , where is the quantization noise over the -th fronthaul link .the signals received by the bbu are expressed as the relationship between and depends on the fronthaul capacity according to the rate - distortion theory , which is given by , where is the mutual information between and .let , where is the power of the quantization noise at the -th slot .the quantization noise power induced by the transmission over the -th uplink fronthaul link at the -th slot is given by where is the euclidean norm .the bbu performs joint decoding for the received uplink signals , which benefits the system performance by joint cloud processing of the signals for different cells .the cloud baseband processing for uplink signals at the bbu is introduced in the following assumption .[ ass_zf]assume that zf joint detection is adopted for the uplink in the cloud baseband processing to eliminate the inter - cell interference .the linear zf receiver at the bbu can be represented by a matrix at the -th slot , where is the inverse , the elements of are independent .thus , and the inverse of exists .] of the channel matrix , i.e. , . the uplink transmission model is described in fig . [ fig : model ] . with the zf joint detection at the bbu ,the post - processing signal is considering both the thermal noise power and the quantization noise power , we obtain the uplink data rate for the -th ue as where is a function of and , and is a function of .note that there is an implicit coupling among the uplink data flows in the sense that depends not only on the fronthaul capacity allocation but also on .there is a bursty data source for each ue .let be the random arrivals ( number of bits ) from the application layers at the end of the -th time slot .we have the following assumption on .assume that is i.i.d .over slots according to a general distribution ] . is independent w.r.t .furthermore , the arrival rates lie within the stability region of the system with the given uplink resource .each ue has a data queue for the bursty traffic flows towards the associated ru .let be the queue length ( number of bits ) at the -th ue at the beginning of the -th slot .let be the _global qsi_. the queue dynamics for the -th ue can be written as in the uplink , the queue dynamics are coupled together due to the zf processing in the bbu . 
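the exact rate expression is not reproduced in this extraction , but the structure of the coupling can be illustrated with a minimal python sketch . it assumes the standard gaussian test-channel ( rate-distortion ) model for the fronthaul quantization noise and the post-zf per-flow sinr of the form p_k / sum_m |w_km|^2 ( n_0 + q_m ) ; all parameter values are illustrative .

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                    # number of ue-ru pairs (illustrative)
p = np.full(K, 1.0)                      # ue transmit powers (assumed)
N0 = 0.1                                 # thermal noise power (assumed)
C = np.array([4.0, 4.0, 2.0, 2.0])       # fronthaul capacities, bits/symbol (assumed)

# path gains: strong home-cell links, weak cross links
L = 0.05 * np.ones((K, K)) + 0.95 * np.eye(K)
H = np.sqrt(L) * (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)

# received power at each ru; quantization noise power from the gaussian
# test-channel (rate-distortion) model  C_k = log2(1 + S_k / q_k)
S = (np.abs(H) ** 2 * p).sum(axis=1) + N0
q = S / (2.0 ** C - 1.0)

# zf joint detection at the bbu and per-flow rate with thermal plus
# fronthaul quantization noise
W = np.linalg.inv(H)
noise = (np.abs(W) ** 2 * (N0 + q)).sum(axis=1)
rate = np.log2(1.0 + p / noise)
print(np.round(rate, 3))   # lowering any C_m raises q_m and reduces all rates through W
```

lowering one fronthaul capacity inflates that link's quantization noise , which the zf combiner then spreads across every flow .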
specifically , according to ( [ eq : rate ] ) , the queue departure for the -th ue depends on not only the allocated capacity for the -th fronthaul link , but also all the other elements of .in this section , we formulate the delay - aware control framework of uplink fronthaul allocation .we first define the control policy and the optimization objective .we then formulate the design as a markov decision process ( mdp ) and derive the optimality conditions for solving the problem . for delay - sensitive applications ,it is important to dynamically adapt the fronthaul capacities based on the instantaneous realizations of the csi ( captures the instantaneous transmission opportunities ) and the qsi ( captures the urgency of data flows ) .let denote the global system state .we define the stationary fronthaul allocation policy below : a stationary control policy for the -th ue is a mapping from the system state to the fronthaul allocation action of the -th ue . specifically , .let denote the aggregation of the control policies for all the ues .the csi is i.i.d . over time slots based on the block fading channel model in assumption [ ass_csi ] .furthermore , from the queue evolution equation in ( [ eq : queue ] ) , depends only on and the data rate . given a control policy , the data rate at the -th slot depends on and . hence , the global system state is a controlled markov chain with the following transition probability : = & \pr[\mathbf{h}(t+1)]\pr[\mathbf{q}(t+1)|\boldsymbol{\chi}(t),\boldsymbol{\omega}(\boldsymbol{\chi}(t))]\end{aligned } , \ ] ] where the queue transition probability is given by = & \begin{cases } \prod_{k}\pr\big[a_{k}\left(t\right)\big ] & \text{if } q_{k}\left(t+1\right)\text{is given by ( \ref{eq : queue})},\forall k\\ 0 & \text{otherwise } , \end{cases}\end{aligned}\ ] ] where the equality is due to the i.i.d .assumption of in assumption [ ass_csi ] . for technical reasons, we consider the _admissible control policy _ defined below .[ def_adm]a policy is admissible if the following requirements are satisfied : * is a unichain policy , i.e. , the controlled markov chain under has a single recurrent class ( and possibly some transient states ) .* the queueing system under is second - order stable in the sense that <\infty ] because the action taken will affect the future evolution of .one key obstacle in deriving the optimal fronthaul policy is to obtain the priority function of the bellman equation in ( [ eq : bellman1 ] ) .conventional brute force value iteration or policy iteration algorithms can only give numerical solutions and have exponential complexity in , which is highly undesirable .in this section , we shall exploit the characteristics of the topology of c - rans . specifically , we define be the worst - case path gain among all the cross links , which is usually weaker than the home cell path gain due to the c - ran network architecture .we adopt perturbation theory w.r.t . to obtain a closed - form approximation of the priority function and derive the associated error bound . 
based on that, we obtain a low complexity delay - aware fronthaul allocation algorithm .we adopt a calculus approach to obtain a closed - form approximate priority function .we first have the following theorem for solving the bellman equation in ( [ eq : bellman1 ] ) .[ the_hjb1]assume there exist and of class that satisfy * the following partial differential equation ( pde ) : \bigg|\mathbf{q}\bigg]=0,\\ & \hspace{5cm}\hspace{5cm}\forall\mathbf{q}\in\mathbb{r}_{+}^{k } \end{aligned } \label{eq : bellman3}\ ] ] with boundary condition ; * for all , is an increasing function of all ; * .then , we have where the error term asymptotically goes to zero for sufficiently small . please refer to appendix b. theorem [ the_hjb1 ] suggests that if we can solve for the pde in ( [ eq : bellman3 ] ) , then the solution is only away from the solution of the bellman equation . the queues of the uplink data flows are coupled due to the coupling of in ( [ eq : rate ] ) .the following lemma establishes the intensity of the queue coupling .[ lem_weak]the coupling intensity of uplink data queues induced by in ( [ eq : rate ] ) is given by .please refer to appendix c. as a result , the solution of ( [ eq : bellman3 ] ) depends on the worst - case cross link interference path gain and , hence , the -dimensional pde in ( [ eq : bellman3 ] ) can be regarded as a perturbation of a _ base system _ , as defined below .[ def_base]a base system is characterized by the pde in ( [ eq : bellman3 ] ) with . according to lemma [ lem_weak ], we have in the base system .we first study the base system and use to obtain a closed - form approximation of .we have the following lemma summarizing the priority function of the base system .[ decomposable structure of ][lem_base]the solution for the base system has the following decomposable structure : where is the _ per - flow priority function _ for the -th data flow given by where ; , where satisfies ; ; is chosen to satisfy , firstly solve using one - dimensional search techniques ( e.g. , bisection method ) . then is chosen such that . ] the boundary condition .please refer to appendix d. note that when , we have for all and , hence , there is no coupling between the ue - ru pairs . as a result , the queues are totally decoupled and the system is equivalent to a decoupled system with independent queues .that is why the priority function in the base system has the decomposable structure in lemma [ lem_base ] .when , can be considered as a perturbation of the solution of the base system . using perturbation analysis on the pde ( [ eq : bellman3 ] ) , we establish the following theorem on the approximation of : [ first order perturbation of [the_first] is given by where .please refer to appendix e. the priority function is decomposed into the following three terms : 1 ) the base term obtained by solving a base system without coupling , 2 ) the perturbation term accounting for the first order coupling due to the joint processing in the bbu , and 3 ) the residual error term which goes to zero in the order of . as a result , we adopt the following closed - form approximation of : in this section , we use the closed - form approximate priority function in ( [ eq : finalapprox ] ) to capture the urgency information of the data flows and obtain a low complexity delay - aware fronthaul allocation algorithm . using the approximate priority function in ( [ eq : finalapprox ] ) ,the per - stage control problem ( for each state realization ) is given by , where satisfies .] 
where can be calculated from ( [ eq : finalapprox ] ) , which is given by the per - stage problem in ( [ eq : utility ] ) is similar to the weighted sum - rate ( wsr ) optimization , which can be considered as a special case of network utility maximization .however , unlike conventional wsr problems , where the weights are static , the weights here in ( [ eq : utility ] ) are dynamic and are determined by the qsi via the priority function . as such, the role of the qsi is to dynamically adjust the weight ( priority ) of the individual flows , whereas the role of the csi is to adjust the priority of the flow based on the transmission opportunity in the rate function .one approach to solve the wsr problem is solving the local optimization problem for each flow iteratively . in each local optimization problem for the -th flow , the total wsr objective is maximized , assuming that the capacities of other links do not change . the local optimization problem is formulated as the above local optimization problem is still difficult to solve directly . an alternative method is simplifying the effect of on the other links as a linear function . define as the marginal increase in the utility of the -th flow per unit increase in , i.e. , where and . adopting the linear simplification for the effect of on the -th flow in the per - stage local optimization problem ( [ eq : local ] ), we have the karush - kuhn - tucker ( kkt ) condition as by solving ( [ eq : local2 ] ) , we obtain the optimal fronthaul capacity for the local optimization problem as where and . based on the above analysis , we propose a low - complexity fronthaul allocation algorithm launched at the beginning of each slot , which is described using pseudo codes as algorithm 1 .we denote as the allocated fronthaul capacities in the -th iteration .initialize and calculate based on according to ( [ eq : local3 ] ) although the per - stage problem ( [ eq : utility ] ) is not convex in general , the following lemma states that it is a convex problem for sufficiently small . [ the : convex]when is sufficiently small , the objective in ( [ eq : utility ] ) is a concave function of , and the problem ( [ eq : utility ] ) is a convex problem. please refer to appendix f. according to lemma [ the : convex ] , we provide the convergence property and asymptotic optimality of algorithm 1 in the following theorem : [ the : convergence]when is sufficiently small , starting from any feasible initial point , algorithm 1 converges to the optimal solution of the original problem [ pro_mdp ]. please refer to appendix g.in this section , we evaluate the performance of the proposed low - complexity delay - aware fronthaul allocation algorithm for c - rans . for performance comparison , we adopt the following two baseline schemes . ** baseline 1 [ throughput - optimal fronthaul allocation ] : * the throughput - optimal fronthaul allocation algorithm determines the fronthaul capacities for maximizing the total data rate without considering the queueing information , which is similar to that in but with zf processing . * * baseline 2 [ queue - weighted fronthaul allocation ] : * the queue - weighted fronthaul allocation algorithm exploits both csi and qsi for queue stability by lyapunov drift and solves the per - stage problem ( ) replacing with . 
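a compact sketch of the per-stage weighted-rate allocation described above is given below . it is not the paper's algorithm 1 : the closed-form kkt update is replaced by a generic numerical solver , the sum-capacity budget is an assumed constraint , and the weight 1 + q_k is only a stand-in for the derivative of the approximate priority function . the point is to show how the qsi re-weights the flows while the csi enters through the rate function .

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
K, N0, C_tot = 4, 0.1, 12.0              # sum fronthaul budget is an assumed constraint
p = np.full(K, 1.0)
Q = np.array([8.0, 1.0, 3.0, 0.5])       # current queue lengths (illustrative qsi)
L = 0.05 * np.ones((K, K)) + 0.95 * np.eye(K)
H = np.sqrt(L) * (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
W = np.linalg.inv(H)                     # zf joint detection (csi)
S = (np.abs(H) ** 2 * p).sum(axis=1) + N0

def rates(C):
    q = S / (2.0 ** np.maximum(C, 1e-9) - 1.0 + 1e-12)   # gaussian test-channel model
    noise = (np.abs(W) ** 2 * (N0 + q)).sum(axis=1)
    return np.log2(1.0 + p / noise)

# qsi-dependent weights standing in for the derivative of the approximate
# priority function; here simply increasing in the queue length
wgt = 1.0 + Q

obj = lambda C: -np.dot(wgt, rates(C))
cons = ({'type': 'ineq', 'fun': lambda C: C_tot - C.sum()},)
res = minimize(obj, np.full(K, C_tot / K), bounds=[(0.0, C_tot)] * K,
               constraints=cons, method='SLSQP')
print(np.round(res.x, 2), np.round(rates(res.x), 2))   # long queues attract more capacity
```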
in the simulation ,the performance of the proposed fronthaul allocation algorithm is evaluated in a c - ran cluster with seven cells .a single channel is considered , and one user over the channel is located randomly in each cell , with radius 500 m .poisson data arrival is considered , with an average arrival rate for the -th ue , which is uniformly distributed between ] .similarly , if satisfies the approximate bellman equation in ( [ eq : bellman2 ] ) , we have = \mathbf{0 } , \quad \forall \mathbf{q } \in \boldsymbol{\mathcal{q } } , \end{aligned}\ ] ] where and .we then establish the following lemma . for any , we have \geq \min _ { \boldsymbol{\omega}\left ( \boldsymbol{\chi } \right ) } f^\dagger_{\boldsymbol{\chi}}(\theta , v , \boldsymbol{\omega}(\boldsymbol{\chi}))+ \nu \min _ { \boldsymbol{\omega}\left ( \boldsymbol{\chi } \right ) } g_{\boldsymbol{\chi}}(v,\boldsymbol{\omega}(\boldsymbol{\chi } ) ) ] according to ( [ defapp ] ) , and and are all smooth and bounded functions , we have \big| = o(1) ] for all together with the transversality condition in ( [ eq : trans1 ] ) has a unique solution .if satisfies the approximate bellman equation in ( [ eq : bellman2 ] ) and the transversality condition in ( [ eq : trans1 ] ) , then , for all , where asymptotically goes to zero as goes to zero .[ proof of lemma [ lemma3 ] ] suppose for some , ( w.r.t . ) . from lemma [ lemma2 ], we have \big| = o(1) ] for all and the transversality condition in ( [ eq : trans1 ] ) .however , due to . this contradicts the condition that is a unique solution of for all and the transversality condition in ( [ eq : trans1 ] ) .hence , we must have for all .similarly , we can establish .for notation convenience , we write in place of .it can be observed that if satisfies ( [ eq : bellman3 ] ) , it also satisfies ( [ eq : bellman2 ] ) .furthermore , since , then <\infty ] and the _ conditional lyapunov drift _ as ] be the _ semi - invariant moment generating function _ of .then , will have a unique positive root ( ) .let , where . using the kingman bound result that \leq e^{-r_k^\ast x } ] . similarly ,if , the expected data rate is = & \int_{\frac{n_{0}\gamma_{k}}{pl_{kk}\left(j_{k}'(q_{k})-\gamma_{k}\right)}}^{\infty}\log_{2}\left(\frac{1+pl_{kk}x / n_{0}}{1+\frac{1}{\left(j_{k}'(q_{k})/\gamma_{k}-1\right)}}\right)e^{-x}dx\\ = & \frac{e^{\frac{n_{0}}{pl_{kk}}}}{\ln2}e_{1}\left(\frac{n_{0}j_{k}'(q_{k})}{pl_{kk}\left(j_{k}'(q_{k})-\gamma_{k}\right)}\right ) . \end{aligned }\label{eq : app4exp2}\ ] ] otherwise , =0 $ ] .we then calculate .since ( [ eq : app4perflow ] ) should hold when , we have \label{eq : app4exp3}\ ] ] =\lambda_{k}.\label{eq : app4exp4}\ ] ] using ( [ eq : app4exp1 ] ) and ( [ eq : app4exp2 ] ) , we can calculate as shown in lemma [ lem_base ] . substituting ( [ eq : app4exp1 ] ) , ( [ eq : app4exp2 ] ) , and into ( [ eq : app4perflow ] ) and letting , we have the following ode : we adopt the following argument to prove the convexity : given two feasible points and , define , , then is a convex function of if and only if is a convex function of , which is equivalent to for . to use this argument ,we rewrite problem ( [ eq : utility ] ) as consider the convex combination of two feasible solutions , and , as and . when is sufficiently small , the second order derivative of is calculated as where and .when is sufficiently small , the terms in the order of can be ignored and we simplify as
in cloud radio access networks (c-rans), the baseband units and radio units of base stations are separated, which requires high-capacity fronthaul links connecting the two parts. in this paper, we consider the delay-aware fronthaul allocation problem for c-rans. the stochastic optimization problem is formulated as an infinite-horizon average-cost markov decision process. to deal with the curse of dimensionality, we derive a closed-form approximate priority function and the associated error bound using perturbation analysis. based on the closed-form approximate priority function, we propose a low-complexity delay-aware fronthaul allocation algorithm that solves the per-stage optimization problem. the proposed solution is further shown to be asymptotically optimal for sufficiently small cross-link path gains. finally, the proposed fronthaul allocation algorithm is compared with various baselines through simulations, and significant performance gains are demonstrated.
cloud radio access networks, fronthaul link, delay optimization, perturbation analysis, markov decision process
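the curse of dimensionality mentioned above is easy to appreciate on a toy example : brute-force relative value iteration is trivial for a single queue , but the same table would have (q_max + 1)^k entries for k coupled flows , which is exactly what the closed-form approximate priority function avoids . the sketch below is a generic illustration with arbitrary arrival rate , capacity levels and cost ; it is not the paper's model .

```python
import numpy as np

# relative value iteration for an average-cost mdp on a toy single-queue model:
# state = queue length, action = fronthaul capacity level (packets served per slot),
# per-stage cost = backlog plus a price on capacity.  arrival probability, capacity
# levels and price are arbitrary illustration values.
Qmax, lam, price = 20, 0.45, 0.3
actions = (0, 1, 2)

def step_cost(qlen, a):
    return qlen + price * a

def transitions(qlen, a):
    base = max(qlen - a, 0)
    return ((min(base, Qmax), 1.0 - lam), (min(base + 1, Qmax), lam))

def q_value(qlen, a, h):
    return step_cost(qlen, a) + sum(pr * h[nxt] for nxt, pr in transitions(qlen, a))

h = np.zeros(Qmax + 1)
for _ in range(2000):
    Th = np.array([min(q_value(qlen, a, h) for a in actions) for qlen in range(Qmax + 1)])
    g, h = Th[0], Th - Th[0]             # g: average cost; h: relative (priority) function

policy = [min(actions, key=lambda a: q_value(qlen, a, h)) for qlen in range(Qmax + 1)]
print(round(float(g), 3), policy)        # h(q) plays the role of the priority function
```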
over the last half century , significant advances have been made in understanding the dynamics of coupled oscillators . since the pioneering work of winfree and kuramoto , the nonlinear dynamics community has been able to use both analytical and numerical techniques to study the onset of synchronization and to explore other types of dynamics , including varying degrees of coherence and incoherence , in a broad class of oscillator networks . before 2002, divergent behaviors such as incoherence and coherence were thought to result from heterogeneities .however , kuramoto and battogtokh showed that even networks of identical oscillators could split into regions of coherence and incoherence . since this surprising discovery, these `` chimera states '' have been reported in a vast array of network topologies including spatially embedded networks like a ring of oscillators , a torus , and a plane . herewe study the dynamics of kuramoto oscillators on the surface a unit sphere . in this system , the phases governed by where is a continuous coupling kernel .we choose to study this system for several reasons .there are homeomorphisms ( continuous deformations ) from the sphere to many common closed two - dimensional surfaces embedded in three dimensions ; spheres are topologically equivalent to all kinds of different surfaces with physical and biological relevance .our results suggest that chimera states are likely to occur for oscillators on any orientable closed surface .furthermore , the sphere is a geometry in which both spot and spiral chimera states appear in very simple forms .these two unique dynamical patterns have yet to be connected from an analytical perspective .spiral chimeras on the sphere show an intriguing similarity to patterns of activity displayed by the human heart during ventricular fibrillation .we analyze the dynamics of this system with near - global coupling and demonstrate the existence of spot and spiral chimera states in the perturbative limit .then , using a variety of numerical and analytical techniques , we explore the role of the coupling length and phase lag in determining the existence and stability of these unusual patterns ( see fig .[ fig : spot_and_spiral_plot ] ) .in two - dimensional spatially embedded networks of identical oscillators , two different classes of chimera states have been reported : spots and spirals . for spot chimeras ,oscillators form spots of incoherence and coherence . in the coherent region ,all oscillators share the same phase . for spiral chimeras ,a region of incoherence is surrounded by a coherent region . in the coherent region ,the phases of the oscillators make a full cycle along any path around the incoherent spot .thus the lines of constant phase resemble spiral arms around an implied phase singularity at the center of the incoherent region .spot chimeras were discovered first .their bifurcations and stability have been studied extensively with near - global coupling .when is near , unstable and stable spot chimeras bifurcate off of the fully synchronized and drifting states respectively and then disappear due to a saddle node bifurcation .thus they only exist near the hamiltonian limit .spiral chimeras are not as well understood .they were reported by shima and kuramoto on an infinite plane .their existence was confirmed analytically by martens et al . 
, but their bifurcations and stability have not yet been studied from an analytical perspective .numerical experiments have suggested that they are only stable when is near 0 ( a dissipative limit ) and when the coupling is more localized .we consider the special case where the coupling kernel is defined as this is known as the von - mises - fisher distribution and represents the analog of a normal distribution on a sphere .the variance ( coupling length ) of this distribution is inversely related to the concentration parameter . as , representing purely local coupling .when , representing global coupling .we are interested in the role this concentration plays in the dynamics . following the approach of kuramoto and battogtokh , we shift into a rotating frame with angular frequency and define a complex order parameter , equation can be rewritten in terms of the order parameter and the frequency difference revealing two types of stationary solutions : where , oscillators become phase - locked with a stationary phase , and where , they can not become phase - locked but instead drift with a stationary phase distribution . inboth the locked and drifting regions , these stationary solutions must satisfy a self - consistency equation where and . to reduce the dimensionality of this system , we parametrize the surface of the sphere using the mathematical convention for spherical coordinates where represents the azimuthal angle and represents the polar angle and restrict our search to solutions of the form [ self_c_ans ] where is an integer .these solutions correspond to rotationally symmetric spots ( ) , simple spirals ( ) , and higher order spirals ( ) . under this restriction ,. becomes where where is the order modified bessel function of the first kind .note that is independent of .equation is a complex nonlinear integral eigenvalue problem .solving explicitly for , , and is not possible in general , and solutions may not exist for all . with near - global coupling ( ) , the coupling kernel can be approximated to leading order in by \}.\end{gathered}\ ] ] substitution of this coupling kernel into eq .reveals that the order parameter must have the following form where and [ sc_alg ] note that the derivation of eq .does not rely on eq . .we now focus on the case where . and [ see eq . for the definitions ] . the green ( dotted ) curve corresponds to solutions with spatially modulated drift .the blue ( solid ) curve corresponds to stable spot chimeras .the magenta ( dashed ) curve corresponds to unstable spot chimeras .the red ( solid ) curve along the vertical axis represents uniform solutions .panel ( b ) displays the fraction of oscillators in the drifting region as a function of ( where ) .the solid curve represents stable chimera states and the dashed curve represents unstable chimera states . ]when , and ( and therefore ) depend only on , so integration with respect to is possible giving .thus the order parameter takes the form where .this yields two equations : where denotes complex conjugation .motivated by the results from we look for solutions that scale like [ scaling ] expanding in and defining we obtain the following conditions at leading order : [ self_c_10 ] +i\beta_1~ , \label{self_c_10a}\\ a_2 + ib_2 & = \frac{\sqrt{2}}{a_2 ^ 2}\bigg[\frac{2\delta}{3}\left(\left(\delta - a_2\right)^{3/2}-\left(\delta+a_2\right)^{3/2}\right)\nonumber\\&-\frac{2}{5}\left(\left(\delta - a_2\right)^{5/2}-\left(\delta+a_2\right)^{5/2}\right)\bigg]~. 
\label{self_c_10b}\end{aligned}\ ] ] note that was required to satisfy the equations at .the real part of eq .depends only on and .thus we can fix , solve for , and then compute the other unknowns directly .the parameters and determine the variation in the order parameter and the size of the drifting region . using matcont , a numerical continuation software package for matlab , we find the solutions to eq . and display them in fig .[ fig : spot_res](a ) .figure [ fig : spot_res](b ) shows the fraction drifting as a function of .these solutions resemble the spot solutions observed in refs . in thatunstable spot chimeras bifurcate off of a phase - locked state and stable spot chimeras bifurcate off of a modulated drift state . at a critical value of ,the chimera states disappear due to a saddle node bifurcation ( see appendix [ app_snb ] ) .this explains the change in stability observed in panel ( b ) .the stability of these solutions was confirmed via numerical integration of eq . .no higher order spirals ( ) occur to lowest order in because all terms in eq .integrate to 0 . for , on the other hand , and ,so eq . yields an order parameter of the form without loss of generality , we can define the argument to be 0 along the half plane and making real .this implies that and . defining , substituting this result into eq . , andintegrating with respect to yields note that chimera states only appear if .[ for , eq . can only be satisfied for . ] integrating with respect to , this simplifies to .\end{gathered}\ ] ] where refers to the principal branch of the natural logarithm .solving eq .reveals that like spot chimeras , spiral chimeras represent a link between coherence and incoherence ( see fig .[ fig : smallk_spiral_sc ] ) .we find that when , and .this leads to a fully locked solution in which the phases depend only on the azimuthal variable ( a `` beachball '' pattern ) . as increases from 0 ,incoherent spiral cores are born at the poles of the sphere and grow until .when , , and the sphere is fully incoherent .although these spirals resemble the spirals reported in refs . , numerical experiments suggest that these states are unstable .they only seem to gain stability when coupling is more localized . to search for stable spiral chimera states , we now explore the dynamics when is not small .with highly localized coupling , the effects of curvature are negligible , and the sphere can be approximated locally as a plane .martens et al. showed that , on an infinite plane , spiral chimera states appear when is small .motivated by these findings , we consider the limit where and assume the following scalings [ scalings ] expanding eq . in to leading order yields which can be integrated numerically to find .to we find that thus and satisfy an inhomogeneous fredholm equation of the second kind which can be solved numerically .this asymptotic approach decouples the magnitude of the order parameter from its argument making it possible to solve for each separately .it also allows a nonlinear equation to be approximated by a series of linear equations and can be used to find higher order approximation to and as well .these results can be used to estimate the size of the incoherent region at the center of the spiral . to see this ,let represent the boundary between the locked and drifting regions . to order , the boundary satisfies there are two solutions to this equation that are symmetric about the equator , one with and one with . 
to find the size of the incoherent region with , we expand about , substituting eq . into eq . , integrating with respect to , and solving for yields thus , near the birth of the chimera state the size of the incoherent region grows with .since is dependent but its scaling with is unknown , the dependence of the size of the incoherent region on can not be determined using eq . .however , given a numerical solution for and , the value of is readily apparent and the size of the drifting region can be easily calculated ( see fig . [fig : fracs ] in appendix [ app_ns ] ) .the asymptotic approximations discussed in secs . [ sec : global ] and [ sec : local ] are only valid when or . to explore other regions of parameters space, we used matcont for numerical continuation .this software package uses newton s method to allow the user to continue equilibria of systems of ordinary differential equations ( odes ) and to detect bifurcations . for numerical continuation, we defined and rewrote eq . as the known spiral solutions for are nearly sinusoidal , therefore it is natural to represent them by a fourier sine series where in fourier space , it is straightforward to show that the fixed points of [ matcont_f2 ] where and denote real and imaginary parts and represents solutions to eq . .equation was imposed to eliminate the extra degree of freedom in eq .( invariance under rotations ) .the nonlinearity of equation made it unlikely that numerical methods would converge to the correct solution without an accurate initial guess .so , we began with the solutions for , , and derived for and and then implemented a variety of algorithms in order to numerically continue spiral chimera states over the parameter space and .the simplest approach we implemented was a naive iterative method .eq . has the form . given an initial guess for the solution to eq . ( in practice we used a discrete set of values ) , we updated our solution as follows : + + _ step 1 : _+ define ._ step 2 : _ + choose to minimize ._ step 3 : _ + update .+ + to carry out numerical continuation using this heuristic scheme , we selected a known solution as a starting point made a small change to one of the parameters , and then iterated until the residual was small .this method is not guaranteed to converge , but for a starting point sufficiently close to the true solution and steps that were sufficiently small , it did allow us to identify solutions for new ranges of and . in order to improve upon the above , we implemented a second similar method .we again started with a known solution and made a small change to either or .then , we utilized a quasi - newton method ( as implemented in the matlab function fminunc ) to find the values of and that minimized the error : this method seemed to be more stable than the iterative approach , but it was more computationally intensive . both of the above methods used zeroth order extrapolation to generate a starting point for the next set of parameter values . to improve upon this method we also used first order extrapolation to generate the next starting point .for example , to continue in , given and their associated solutions for and , we used a linear approximation to generate a guess at .this guess was then used as a starting point for the above methods . although the above approaches did yield marginal gains in exploring vs. 
space ,ultimately the time and memory demands were far too large to adequately explore the domain of interest due to the small step sizes required for convergence .we found that matcont was the most effective method for numerically continuing spiral chimera states .our first attempt at writing eq . as a system of odes ,the input format required by matcont , used a discretized version of eq . on a uniform grid of 101 points between .unfortunately , the algorithm was unable to identify an appropriate search direction for continuation .instead we represented as a fourier sine series as described in the main text .for , is sinusoidal , thus only the first fourier coefficient is nonzero .as increases subsequent terms become more important . for , , we truncated the series after 16 terms ( inclusion of higher frequency modes does not significantly change the results ) .we then verified the accuracy of these solutions by substituting them directly into equation .the integrals in eqs . andwere evaluated using simpson s rule with 101 grid points .we terminated continuation when the change in over 10 steps was less than .the endpoints ( ) of the curves obtained from matcont were somewhat irregular . to address this , we compiled all of the solutions from matcont and used spline interpolation and extrapolation to generate guesses for missing solutions .then , using the optimization method described above we refined the guesses until the optimization scheme terminated and computed the error for these new points using eq . .if the norm of the error was less than , we accepted the result as a solution to eq . .this allowed us to extend our results to higher values of .the results from numerical continuation of spiral chimeras are displayed in figs .[ fig : op_sols ] and [ fig : spiral_stability ] .we find that spiral chimera states satisfying eqs . and continue to exist for .near and , we were able to continue these solutions indefinitely in .for intermediate values of the numerical continuation fails prematurely at .attempts at continuing beyond this point by increasing the number of fourier coefficients retained , refining the grid for numerical integration , and continuing using alternative methods ( see appendix [ app_nc ] ) yielded incremental increases in .this suggests that the failure of convergence is due to narrowing of the basin of attraction for the numerical solution and increasingly sharp transitions in the shape of the solutions , however we can not rule out a failure of existence due to a bifurcation .qualitatively these spirals resemble the ones observed for .they are symmetric with respect to reflections about the equator . however , instead of straight spiral arms ( lines of constant phase ) where ( see fig . [fig : op_sols ] , bottom panels ) , these solutions have curved spiral arms with ( see fig . [fig : op_sols ] , top panels ) .the fraction of oscillators drifting is zero for and increases with until the entire sphere is incoherent when .see fig .[ fig : spot_and_spiral_plot ] for an example .spot chimeras appear to exist for arbitrary and seem qualitatively similar to solutions for ( see appendix [ app_scwlc ] ) . in order to test the stability of these solutions , we approximated eq . 
by selecting 5000 points uniformly distributed on the surface of a sphere , generating initial conditions consistent with the order parameters obtained through numerical continuation ( see appendix [ app_gic ] ) , and integrating for 5000 units of time ( 10 - 1000 cycles of the locked oscillators , depending on the values of and ) .after this interval , if the final state possessed a phase distribution that was nearly identical to the initial state ( except for possible drifting of the incoherent region ) we classified the chimera state as stable .we observed a narrow strip with stable chimeras extending down to and .we believe that , to conform with the planar case explored in ref . , this strip is likely to originate from and ( in this limit the coupling is so localized that the curvature of the sphere becomes irrelevant ) .near the boundaries of this strip , solutions remained close to the initial condition for most of the integration time before evolving toward a fully coherent state or spiral pattern without an incoherent region , suggesting that spiral chimera states outside of the strip were unstable . at the momentwe have no analytical explanation for the observed changes in stability .one possibility is that the states we refer to as stable are actually just very long - lived transients .however , that raises the question of why this particular strip would have dramatically longer transient times than neighboring regions of parameter space .another possibility is that stability changes due to some as of yet unidentified bifurcation .this bifurcation can not be due to the presence of a spot chimera because of the topological differences and the fact that spot chimeras do not exist near the boundaries of this region , but it could be due to other equilibrium spiral patterns that only satisfy ansatz at the bifurcation point .this work demonstrates the existence of both spot and spiral chimera states on the surface of a sphere .we find that both spirals and spots represent links between coherence and incoherence . in agreement with previous results ,when coupling is nearly global , spot chimeras only exist near the hamiltonian limit ( ) whereas spiral chimeras exist for all values of . for more localized coupling, numerical results suggest that both types of chimera states continue to exist , but that they have disjoint regions of stability .a puzzling apparent failure of existence of chimera states for localized coupling and intermediate phase lags ( ) remains to be explained .more broadly , we have demonstrated that the surface of a sphere provides an interesting testbed for assessing the properties of chimera states one in which analogs of many previously reported chimera states exist .although the underlying cause is the same , this topology leads to visually distinct patterns from other two dimensional systems on a plane , single spirals appear , whereas , on a torus , spirals only appear in multiples of 4 and on a sphere spirals appear in pairs . the result that both spiral and spot chimera states occur over a wide range of parameter values in these systems suggests that chimera states may be possible in any network of non - locally coupled oscillators on a closed , orientable surface .in particular , the topological resemblance of a sphere to real - world systems makes this geometry potentially valuable for applications to naturally occurring biological oscillatory networks ( e.g. 
the human heart and brain , where chimera states could be associated with dangerous ventricular fibrillation or epileptic seizure states ) . the authors would like to thank c. laing for useful conversations .the saddle - node bifurcation with respect to ( or , equivalently , ) visible in fig .2(b ) is straightforward to compute numerically from eqs .( 11 ) . to compute an analytical form for the critical ,however , we proceeded as follows : * isolate in the imaginary part of ( 11a ) , then eliminate by plugging into the real part of ( 11b ) to get a function . *find the maximum for which a solution exists by differentiating with respect to and imposing , then solving for .* plug in the result to get and solve for to get . * plug into to get .note that in each step itemized here significant simplification may be required to obtain a suitably concise result .the spot chimera solutions for , , and derived for can also be continued for larger values of . however , unlike the spiral solutions , these solutions do not resemble sine functions , and as a result , they can not be accurately represented using a truncated fourier sine series . instead, a cosine series or full fourier series may be appropriate .numerical continuation results from matcont ( see fig . [fig : cont_spot ] ) suggest that solutions exist for higher values of that are qualitatively similar to the solutions with . as the asymptotic analysis in section [ sec : global ] indicates , the critical value of corresponding to the saddle node bifurcation grows with . in our numerical exploration of the stability of spiral chimeras , while integrating eq .we observed that some of the unstable chimeras with near and evolved into spot chimeras .this suggests that spot chimeras remain stable for larger values of . as a result, we believe spot chimeras with localized coupling warrant further exploration .in many systems , the basin of attraction for a chimera state is small compared with the basins of attraction of the uniform coherent and incoherent states . since chimera states can only be observed in simulations when the initial condition is inside these basins , completely random initial conditions are unlikely to converge to a chimera .it is also difficult to determine the structure of the basins of attraction of equilibrium points in high dimensional systems .so , the most effective method for observing a chimera states in simulation is to select an initial condition very close to the chimera .such an initial condition can be found given a solution for the order parameter and the natural frequency in the rotating frame by using the method outlined below . after making the transformation to shift into a rotating frame of reference , eq .can be written as this equation admits two types of stationary solutions .wherever , oscillators have a stationary phase satisfying wherever oscillators can not become phase - locked .however , if the phase at each point is interpreted as a probability distribution , then there are stationary phase distributions satisfying the continuity equation where represents the phase velocity .it is straightforward to check that the following distribution is stationary : therefore , given , and at the position of each oscillator , an appropriate initial phase can be computed as follows : * _ case 1 . _ if set by solving eq . .* _ case 2 ._ if choose randomly using the probability distribution in eq . 
.( a ) compute the cumulative distribution .( b ) choose to be a uniformly distributed random number between 0 and 1 .( c ) set .to complement our numerical results , we would have liked to perform a rigorous stability analysis on our system , as omelchenko was able to do for a ring of coupled oscillators .unfortunately the nonlinear eigenvalue problem in our system results in a complex nonlinear integral equation with a nonseparable kernel which we were unable to solve analytically .there are various limitations to using numerical integration to ascertain information about stability in this system .first of all , numerical integration itself introduces error . to address this , we used matlab s built in adaptive runge - kutta method ode45 for integration and verified that the results were consistent with those obtained using other ode solvers . to accelerate computations ,large matrix operations were carried out on a nvidia geforce gtx 570 gpu with 480 cores using the parallel computing toolbox , a matlab implementation of nvidia s cuda platform .second , choosing uniformly distributed points on the surface of a sphere is a non - trivial problem .there are various heuristic methods for generating points that are approximately uniformly distributed .we used the method described in ref . which is a modification of the generalized spiral points method proposed by rakhmanov et al . . by selecting evenly spaced points along a spiral from the north pole to the south ,one obtains nearly `` uniformly '' distributed points .the slight nonuniformity means that some oscillators may be weighted slightly more heavily than others in the network .third , the lifetime of chimera states can depend on the number of grid points .previous work with spot chimeras has suggested that they are stable states with an infinite number of oscillators and long - lived transients with a finite number .the lifetime of these metastable chimera states grows with the number of oscillators , but so does the computation time .for the figures displayed in this paper , we chose to use 5000 oscillators as a compromise allowing for full exploration of parameter space in a reasonable amount of time .our results were robust to variations in the number of oscillators ( we also tried 2500 , 8000 , and 10000 for selected cases of interest ) .however , there is no guarantee that the stability with a finite number of oscillators will agree with that when .note that even when the number of grid points is large , for some parameter values ( e.g. , small for spiral chimeras ) the number of points within the incoherent region may still be small ( see fig . [ fig : fracs ] ) , leading to inaccurate numerical assessment of stability ( this is the origin of the yellow region in fig . [fig : spiral_stability ] ) .furthermore , the effective coupling length given by is proportional to , so the number of grid points must grow with if a minimal number are to be included within the `` coupling zone '' where coupling strength is significant .finally , numerical integration can not truly determine stability .we integrated eq . for 5000 units of time .this termination criteria is somewhat arbitrary and could lead to unstable but long - lived transient states being classified as stable . 
in our analysis , we found that the boundaries of the `` stable '' region did change slightly depending on the termination criteria .however , we also tested a subset of points in the domain and verified that they appeared stable after 10000 units of time .both 5000 and 10000 time units are orders of magnitude longer than the transient lifetime of a typical chimera state that we identify as numerically unstable .figure [ fig : vels ] shows the typical pattern of phase velocities distributed on the sphere for both stable spot and stable spiral chimera states .the final pattern of phase velocities was used in conjunction with the final pattern of phases to distinguish between stable and unstable chimera states .
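as a concrete illustration of two of the numerical ingredients described in the appendices above , the sketch below ( python ) builds a nearly uniform grid of points on the unit sphere with a generalized - spiral construction and draws initial phases for the incoherent region by inverse - transform sampling from a tabulated stationary density . the spiral prescription shown ( with the usual 3.6 constant ) is one common rakhmanov / saff - style variant and may differ in detail from the reference used in the text , and the density is passed in as a numerical array since its closed form is not reproduced here ; this is an illustrative sketch , not the authors' implementation .

```python
import numpy as np

def spiral_points(n):
    """Nearly uniform points on the unit sphere from a generalized-spiral
    construction (one common Rakhmanov/Saff-style variant)."""
    k = np.arange(1, n + 1)
    h = -1.0 + 2.0 * (k - 1) / (n - 1)        # heights from one pole to the other
    theta = np.arccos(h)                       # polar angles
    phi = np.zeros(n)                          # azimuths; both poles keep phi = 0
    for i in range(1, n - 1):
        phi[i] = (phi[i - 1] + 3.6 / np.sqrt(n * (1.0 - h[i] ** 2))) % (2 * np.pi)
    return theta, phi

def sample_phases(density, phase_grid, n_samples, rng=None):
    """Inverse-transform sampling of phases from a tabulated stationary
    density rho(phi) given on phase_grid (both 1-D arrays of equal length)."""
    rng = np.random.default_rng() if rng is None else rng
    cdf = np.cumsum(density)
    cdf = cdf / cdf[-1]                        # step (a): cumulative distribution
    u = rng.random(n_samples)                  # step (b): uniform numbers in (0, 1)
    return np.interp(u, cdf, phase_grid)       # step (c): phi = CDF^{-1}(u)

# example: 5000 oscillator positions and phases from a placeholder flat density
theta, phi = spiral_points(5000)
grid = np.linspace(-np.pi, np.pi, 512)
phases = sample_phases(np.ones_like(grid), grid, n_samples=5000)
```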
a chimera state is a spatiotemporal pattern in which a network of identical coupled oscillators exhibits coexisting regions of asynchronous and synchronous oscillation . two distinct classes of chimera states have been shown to exist : `` spots '' and `` spirals . '' here we study coupled oscillators on the surface of a sphere , a single system in which both spot and spiral chimera states appear . we present an analysis of the birth and death of spiral chimera states and show that although they coexist with spot chimeras , they are stable in disjoint regions of parameter space .
for the testing of newly developed detector systems , testbeam facilities are suitable and frequently used .they create experimental conditions which are closer to a high energy physics experiment than the conditions in the laboratory while permitting access to important experimental parameters . in order to measure properties like efficiency and spatial resolution of a device under test ( dut ) , a precise reference measurement of the incident particle tracks is required .this is the task for a _ beam telescope _ measuring intercept and angle for incident particles on an event by event basis . in order to achieve position resolutions in the and scale silicon microstrip detectorsare commonly used for such telescopes , providing a number of space points for track interpolation .such microstrip based telescope systems suffer from limited event rate due to their large number of readout channels and their system architecture .additionally , to synchronize such a system is difficult , and merging a given dut readout into the system s data acquisition ( daq ) is a major task . as beam timeis often limited , speed is also an important requirement for a telescope system , especially when semiconductor detector devices with a structure size in the scale , for instance atlas pixel devices , are to be tested .the time needed to collect a significant number of events for every sensor element strongly depends on the readout speed of the telescope , as the dut readout is very fast . in this paper ,the concept of a fully pc - based beam telescope system , henceforth referred to as bat , is presented , which combines good track measurement accuracy with high event rate and easy dut integration .figure [ telsetup ] shows a typical bat setup consisting of four detector modules , a trigger logic unit ( tlu ) , the data acquisition pc and a dut .all components are connected via the purely digital `` blueboard bus '' ( bb ) .furthermore , the `` timing bus '' connects bat modules , dut and tlu.a raw trigger signal indicating an event is provided by the coincidence of two scintillation counters .the coincidence signal is fed into the tlu , which then decides if a trigger is to be given according to the module s status information accessible on the timing bus .if so , the tlu generates the trigger signal and distributes it to the modules.after receiving a trigger , each module acquires , digitizes and preprocesses event data autonomously and independent from an external sequencer logic .the event data is stored in a module - internal memory . when a certain amount of data is accumulated in a module s memory ,the corresponding module alerts the data acquisition pc to read the entire data memory content of this module.the daq processes running on the pc collect all data from the different modules , assemble the data which belong to one event and store it on the hard disk .part of the data is processed , the results are made available to the user for monitoring purposes.several ways of integrating a dut are feasible .the dut can be connected directly to the bb , as shown in figure [ telsetup ] . for this purpose, a flexible bb interface is available . 
for integration of a given vme based dut and supplementary measurement equipment, a vme crate can be attached to the daq pc using a commercially available pc to vme interface .and in case an embedded pc or a vme cpu is to be used for daq , a bb to vme interface has been developed for fully vme based operation of the entire telescope .a bat module consists of a sensor assembly , an analog telescope card ( atc ) and a digital telescope card ( dtc ) . an overview over a module s constituents and their interconnection is given in figure [ modsetup ] .a photograph of a fully assembled module is shown in figure [ modphot ] .the sensor assembly consists of the sensor and 2 5 front end ics .the sensor is a commercially available double sided , ac - coupled silicon strip detector type s6934 with integrated polysilicon bias resistors manufactured by hamamatsu photonics .the n - side strips are isolated by -stop implantations .implant and readout strip pitch are 50 on both sides , the nominal stereo angle is 90 .the sensitive area is corresponding to 640 strips on each side.the front end ic used is the va2 manufactured by ide as , oslo .the va2 is a 128 channel charge sensitive preamplifier - shaper circuit with simultaneous sample and hold , serial analog readout and calibration facilities .five va2 ics are needed to provide readout for one detector side .they are mounted on a so - called belle hybrid , a ceramic carrier with an attached pcb providing support for the vas and distributing supply , bias and digital control to them .as vas on the hybrid are operated in a _ daisy chain _ , a hybrid is read out like one large 640 channel va .sensor and hybrids are fixed to a ceramic support structure , which is attached to a solid aluminum frame for handling.the atc is divided into two identical compartments supporting one belle hybrid each .a hybrid s compartment provides the supply voltages , bias voltages and bias currents required by the hybrid .a fast adc circuit is used for digitization of the va2 output data , and an additional multi - channel adc allows observation of the most important hybrid parameters during operation.only digital signals are transferred between atc and dtc via digital couplers .the central functional building block of the dtc is the _ readout sequencer_. implemented into the _ main fpga _ , this circuit generates the control sequence needed to acquire and digitize the analog fe data .both hybrids are read out simultaneously .the readout sequencer also controls the _ data preprocessing logic_. furthermore , the dtc holds a large fifo for on - module data buffering and a ram for storing data preprocessing information . 
a second fpga circuit controls access to the bb and the timing bus .it is also capable of sending interrupt requests ( irqs ) on the bb to the pc .each module has its own power supply , providing three independent voltage sources needed to operate the atc compartments and the dtc .the power supply also generates the detector bias voltage .the powering and grounding scheme of a telescope module is shown in figure [ batpotentials ] .the data acquisition of the bat is implemented as a two - level process .the primary level daq ( daq i ) , controlled by the readout sequencers in every module , is simultaneously performed inside each module directly after receiving a trigger signal .the secondary level daq ( daq ii ) is pc controlled and common for all modules .both daq levels running independently reduces the effective system dead time to the daq i runtime ( see also section [ tc ] ) .the telescope daq structure is shown in figure [ daqstruct ] .an example for daq i and daq ii interaction is shown in figure [ tlutrig ] .when receiving a trigger , a module s readout sequencer acquires and digitizes the data residing in the front end ics and operates the data preprocessing logic , which performs pedestal correction , hit detection and zero suppression .pedestal correction is done by subtracting an individual pedestal value for each channel .hits are detected by applying an individual threshold to the pedestal corrected data .pedestal and threshold values have to be determined and stored beforehand in the dtc ram .zero suppression is done by storing only _ clusters _ consisting of the information of the 5 neighboring channels around the hit channel in the dtc fifo .enlarged clusters are stored for two or more hit channels in close proximity .multiple clusters per event are possible .the data volume for an event with one hit cluster is 32 byte in total .compared to a typical event size of 2.5 kbyte for common telescope systems , the amount of data is reduced by a factor 1/80.after finishing preprocessing an event , end of event data ( eod ) is written to the fifo , which transmits a module internal trigger number count and the so - called common mode count ( cmc ) value .the cmc value is used to calculate and correct the common mode fluctuation amplitude for this event in on- or offline analysis .daq i has finished processing an event as soon as a complete module event data block ( med ) including cluster data and eod has completely been written to the fifo . 
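the preprocessing chain described above ( per - channel pedestal subtraction , per - channel threshold hit detection , and zero suppression into 5 - strip clusters ) is performed in the module hardware ; the following python sketch reproduces the same logic in software , e.g. for offline cross - checks . it is an illustration only : the array names , the rule for merging nearby hits into enlarged clusters , and the omission of the common - mode handling are assumptions , not the bat firmware .

```python
import numpy as np

def preprocess_event(raw, pedestals, thresholds, half_width=2):
    """Software analogue of the on-module preprocessing: per-channel pedestal
    subtraction, per-channel threshold hit detection, and zero suppression
    keeping only windows of (2*half_width + 1) = 5 channels around each hit.
    Hits whose windows touch are merged into one enlarged cluster."""
    corrected = np.asarray(raw, dtype=float) - pedestals     # pedestal correction
    hit_channels = np.flatnonzero(corrected > thresholds)    # hit detection

    windows = []
    for ch in hit_channels:
        lo = max(ch - half_width, 0)
        hi = min(ch + half_width, len(corrected) - 1)
        if windows and lo <= windows[-1][1] + 1:             # adjacent/overlapping
            windows[-1] = (windows[-1][0], hi)               # -> enlarged cluster
        else:
            windows.append((lo, hi))

    # zero suppression: only the clustered samples would be written to the FIFO
    return [(lo, corrected[lo:hi + 1].copy()) for lo, hi in windows]

# toy usage with 640 channels (one detector side)
rng = np.random.default_rng(0)
raw = rng.normal(100.0, 2.0, 640)
raw[321] += 60.0                                             # injected "hit"
clusters = preprocess_event(raw, pedestals=np.full(640, 100.0),
                            thresholds=np.full(640, 10.0))
print(clusters[0][0], clusters[0][1].round(1))
```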
while daq i is active , meds keep accumulating in the modules fifos until a certain threshold fill level is exceeded .a module internal interrupt generator generates an irq , forcing daq ii to become active.daq ii , responsible for data transfer to the pc , is controlled by the _ producer _task , which runs on the data acquisition pc .it controls one _ shared buffer _ , a fifo like structure in pc ram , for every module .when detecting an irq from a certain module , the producer transfers the data from this module s buffer fifo to the corresponding shared buffer .daq i operation is not affected by this data transfer and continues to process events .the _ writer _ software process collects and assembles meds belonging to the same event from the different shared buffers and stores them on the hard disk.the modules threshold fill level can be adjusted with respect to the beam intensity .a single event operation mode for low beam intensities is also available .as each module takes data autonomously , trigger control is necessary to prevent the trigger synchronization from getting lost .every device is therefore connected to the trigger logic implemented in the tlu , receives its trigger signal from the tlu and has a dedicated busy line on the timing bus , which indicates daq i activity .the tlu generates a gate signal for the raw trigger from the coincidence of all devices busy signals , which only sends triggers if all devices are not busy .the system s dead time is therefore determined by the busy signal from the slowest device .the timing of gate and busy signals is shown in figure [ tlutrig ] .the daq pc is a commercial pc equipped with a dual pentium ii processor running the windows nt 4.0 operating system and the daq software package written in c++ .it is connected to the bb via a bb to pci interface card .in addition to the daq processes mentioned , online monitoring processes allow an overview about the device performance during operation .an overview over the different processes and their tasks is given in figure [ batsoft ] .the mean event rate of the telescope system is determined by the dead time of the slowest device , being the bat modules in most applications due to their serial readout . a bat module s dead time dominated by the daq i runtime . ] , which is 132 .the event rate actually observed also depends on the trigger coincidence rate , and is given by : assuming poisson statistics .the dependence of event rate and trigger coincidence rate is shown in figure [ trrate ] . 
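the paper's exact rate formula is not reproduced in this excerpt ; a minimal sketch , assuming poisson - distributed triggers and a fixed , non - paralyzable dead time per accepted event equal to the daq i runtime quoted above ( 132 , presumably microseconds given the saturation near 7.6 khz quoted later ) , is the standard dead - time relation below . it should be read as a generic model consistent with those numbers , not as a quotation of the paper .

```python
def event_rate(trigger_rate_hz, dead_time_s=132e-6):
    """Mean accepted-event rate for Poisson-distributed triggers and a fixed,
    non-paralyzable dead time per accepted event (generic model, see note above)."""
    return trigger_rate_hz / (1.0 + trigger_rate_hz * dead_time_s)

for f in (1e3, 5e3, 1e4, 5e4, 1e5):
    print(f"trigger {f:9.0f} Hz -> accepted {event_rate(f):7.1f} Hz")
# the accepted rate saturates at 1/dead_time, i.e. roughly 7.6 kHz for 132 us
```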
at the h8 testbeam at cern ,a system consisting of 4 bat modules and one bb based dut has been operated with an effective event rate of .this is an event rate larger than the event rate of conventional vme - based systems by factors of 40 to 75 .figure [ hitmap ] shows a hit map and source profiles of a source scan using a pin diode as trigger device .only one dead channel on the n - side and a few noisy channels on the p - side are observed .the system operates stably .no pedestal drift was observed during a 32-hour run .thus taking pedestals only once at the beginning of each run is sufficient .common mode noise is also tolerable .figure [ perf1 ] shows a typical pulse height distribution together with the noise histogram of n- and p - side of the same module .one adc count corresponds to an enc of 20 on p and 24 on n - side , as can be calculated from the position of the peak in the respective pulse height distribution .thus , the mean enc value for all channels is 706 for the n - side and 340 for the p - side . comparing these values with the most probable charge deposition for a minimal ionizing particle in 300 m thick silicon , which is 23300 , yields signal to noise ratios of 33 for the n- and 69 for the p - side , which is comparable to the results obtained with other telescope systems .figure [ plscor ] shows the correlation between the pulse heights observed on n- and p- side of the detector for an event .the pulse height correlation can be used to solve strip data ambiguities , which can occur at high beam intensities .charge sharing between detector strips can be used for a more exact reconstruction of the position of a hit on a module , as the bat provides analog cluster readout .the normalized pulse heights of the three central strips of a cluster can conveniently be displayed in the form of a _ triangle plot _( figure [ dalitz ] ) . using the different normalized amplitudes of the three central strips of a cluster as distances from the sides of an equilateral triangle ,the triangle plot is a way to display the distribution of the signal charge among the three central channels .events in which most of the charge is deposited in the central cluster channel lie at the top of the triangle , events in which the charge is divided between two cluster channels lie on the sides of the triangle .events with significant amounts of charge on all three central channels lie in the central area of the triangle .entries outside the triangle area are due to `` negative '' signal amplitudes after pedestal subtraction caused by noise . in most casesthe charge is deposited only in the central cluster strip or in two strips .charge distribution over three or more channels , mostly due to -electrons , are rare ; thus an algorithm using only two cluster charges for reconstruction is appropriate .the commonly used -algorithm uses the pulse heights of the two central cluster channels which carry the largest signals within the cluster : with being the amplitude of the left and the right central cluster channel . 
assuming a uniform distribution of hits and charge sharing independent from the total pulse height , the integral of the -distribution can be used to calculate a position correction value by with being the strip pitch and the total number of entries in the distribution histogram .the correction value is then added to a reference position to obtain the absolute position of the hit .typical distributions for a single module are displayed in figure [ etadistr ] .the differences in shape of the distribution between n- and p- side are mostly due to different interstrip capacitances on the detector sides .the asymmetry of the distribution for one detector side is due to parasitic capacitances in the analog readout of the strips .they can be corrected by applying a deconvolution algorithm .their influence on the spatial resolution of the detectors is , however , small .the telescope tracking performance has been studied using test beam data taken with a 180 gev/ pion beam at the cern h8 testbeam at the sps .the raw event data is processed by a program developed by the milano atlas group which performs event reconstruction and alignment of the telescope planes.a straight line fit is applied to the strip hits , and the residuals between the hits and the fitted track are computed for the strip planes .then , an analytical alignment algorithm is applied to the strip planes , which minimizes the residuals and their dependence on position and angle of the tracks .the alignment and the tilt angle are calculated for all strip planes using the first strip plane as reference plane .examples of the resulting residual distributions for the strip planes in one direction after alignment are presented in figure [ stripresiduals ] , showing the quality of the alignment algorithm .the distributions are properly centered around zero , which indicates the absence of systematic errors .their widths , which are determined by the intrinsic resolution of the strip planes , multiple scattering and the alignment algorithm , lie between 6.3 and 4.2 .as the data from the strip planes , however , is used in the track fit , the width of the strip plane residuals can not be taken to determine the spatial resolution of the telescope .for this purpose , the residual distributions in the dut planes have to be considered .the telescope setup included two duts , which were hybrid pixel detectors .sensor and front end electronics were developed by the atlas pixel collaboration .the sensor has no inefficient area , the pixel cell size was m m corresponding to the pixel pitch , and the front end electronics provides for zero - suppressed readout , reporting both pixel position and charge deposition for those pixels only , for which the charge deposition exceeds a certain threshold.the spatial resolution of the telescope system was measured using the residuals between the position determined from the dut data and the extrapolation to the dut plane of the tracks fit to the strip data . for this purpose , the relative alignment of the dut to the strip planes is calculated .events with a -probability of the track fit greater than 0.02 were selected from data taken with the beam along the normal to the pixel plane . 
in figure [ pixelresiduals ] , the residuals along the short ( m ) pixel cell direction are shown for events , for which only one pixel reported a hit ( upper histogram ) and events , for which two neighboring pixels reported a hit ( lower histogram ) .the reconstructed position on the dut of the single pixel hits is the centre of the hit pixel cell , while for two pixel hits an interpolation algorithm is used to determine the hit position using the charge deposition information .the latter distribution can be used to give an estimation of the telescope resolution .residual distributions between the strip hits and the fitted track.,width=350 ] residual distributions between the pixel hits and the extrapolation of the track to the pixel detector plane along the short direction of the pixel cell .the upper histogram is for single pixel clusters , the lower histogram for two pixel clusters with a gaussian fit superimposed.,width=350 ] a gaussian fit to the two pixel residual distribution yields m .this is the convolution of the telescope resolution and the pixel detector intrinsic resolution .the latter can be estimated as follows . as tracksare uniformly distributed , the width of the region in which charge division between two pixels occurs can be estimated using the ratio between the number of two pixel and one - pixel hits .this yields m .the expected r.m.s . of the residual distribution for these tracks is m .thus , the width of the actual residual distribution is dominated by the telescope resolution , which can be estimated conservatively to be better than m in the dut plane .a high speed modular pc based beam telescope using double sided silicon microstrip detectors with on module data preprocessing has been built and successfully taken into operation .telescope hard- and software are capable of stand - alone operation and easy to handle ; integration of an additional `` device under test '' is straightforward .pedestal subtraction , hit detection and zero suppression are done inside every module , reducing the data volume by a factor of 1/80 . with its two level data acquisition scheme ,the system can process event rates up to 7.6 khz .the telescope is a factor of 75 ( 40 ) ( ) faster than conventional vme based systems while providing comparable performance .signal to noise ratios of up to 70 were achieved .the spatial resolution in the dut plane has been determined to be better than m .we gratefully acknowledge the help obtained from walter ockenfels and ogmundur runolfsson when encountering problems concerning mechanics , case design and handling and bonding of silicon detectors .we would also like to thank the members of the atlas pixel collaboration , in particular john richardson from lbnl , berkeley , and attilio andreazza , francesco ragusa and clara troncon from the milano atlas group , for providing help and know - how in testbeam data taking and data analysis .9 c. eklund et al .instr . and meth . , a 430 , ( 1999 ) 321p. fischer et al . , nucl .instr . and meth . , a 364 , ( 1995 ) 224l.celano et al ., cern - ppe/95 - 106 , cern , geneva ( 1995 ) .catalog _ si photodiodes and charge sensitive amplifiers for scintillation counting and high energy physics _ , published by hamamatsu , catalog number koth00020e05 ( 1997 ) . .specifications & manual .version 1.4 . published by ide as , oslo , norway ( 1997 ) .version 2.3 . 
published by ide as , oslo , norway .
björn magne sundal , technical design report for belle svd readout hybrid , published by ide as , oslo , norway ( 1997 ) .
pci interface - karte für das blueboard asic testsystem , documentation , published by silicon solutions , bonn ( 1999 ) .
e. belau et al . , nucl . instr . and meth . a 214 ( 1983 ) 253 .
t. lari , nucl . instr . and meth . a 465 ( 2001 ) 112 - 114 .
t. lari , study of silicon pixel sensors for the atlas detector , cern - thesis-2001 - 028 , cern , geneva ( 2001 ) .
n. wermes for the atlas pixel collaboration , designs and prototype performance of the atlas pixel detector , bonn - he-99 - 07 , bonn ( 1999 ) .
et al . , nucl . instr . and meth . a 456 ( 2001 ) 217 .
a pc based high speed silicon microstrip beam telescope consisting of several independent modules is presented . every module contains an ac - coupled double sided silicon microstrip sensor and a complete set of analog and digital signal processing electronics . a digital bus connects the modules with the daq pc . a trigger logic unit coordinates the operation of all modules of the telescope . the system architecture allows easy integration of any kind of device under test into the data acquisition chain.signal digitization , pedestal correction , hit detection and zero suppression are done by hardware inside the modules , so that the amount of data per event is reduced by a factor of 80 compared to conventional readout systems . in combination with a two level data acquisition scheme , this allows event rates up to 7.6 . this is a factor of 40 faster than conventional vme based beam telescopes while comparable analog performance is maintained achieving signal to noise ratios of up to 70:1 . the telescope has been tested in the sps testbeam at cern . it has been adopted as the reference instrument for testbeam studies for the atlas pixel detector development . , , , , ,
in 1999 , d. molodtsov , introduced the notion of a soft set as a collection of approximate descriptions of an object .this initial description of the object has an approximate nature , and we do not need to introduce the notion of exact solution .the absence of any restrictions on the approximate description in soft sets make this theory very convenient and easily applicable in practice .applications of soft sets in areas ranging from decision problems to texture classification , have surged in recent years .similarity measures quantify the extent to which different patterns , signals , images or sets are alike .such measures are used extensively in the application of fuzzy sets , intuitionistic fuzzy set and vague sets to the problems of pattern recognition , signal detection , medical diagnosis and security verification systems .that is why several researchers have studied the problem of similarity measurement between fuzzy sets , intuitionistic fuzzy sets ( ifss ) and vague sets .ground breaking work for introducing similarity measure of soft sets was presented by majumdar and samanta in .their work uses matrix representation based distances of soft sets to introduce similarity measures . in this paper , we propose new similarity measures using set theoretic operations , besides showing how the earlier similarity measures of majumdar and samanta are inappropriate .we also present an application of the proposed measures of similarity in the area of automated financial analysis .this paper is organized as follows : in section 2 , requisite preliminary notions from soft set theory have been presented .section 3 comprises some counterexamples to show that some claims in are not correct . at the end of this sectionwe also improve and generalize lemma 4.4 of . in section 4we give our motivation and rationale to introduce set operations based distance and similarity measures .section 5 introduces the notion of set operation based distance between soft sets and some of its weaker forms . in section 6similarity measures have been defined .finally section 7 is the application of new similarity measures to the problem of financial diagnosis of firms .a pair is called a soft set over , where is a mapping given by in other words , a soft set over is a parametrized family of subsets of the universe for may be considered as the set of -approximate elements of the soft set .clearly a soft set is not a set in ordinary sense . let be a universe and a set of attributes .then the pair called a soft space , is the collection of all soft sets on with attributes from .[st - subset ] for two soft sets and over , we say that is a soft subset of if and .we write . is said to be a soft super set of , if is a soft subset of .we denote it by .[st - union ] union of two soft sets and over a common universe is the soft set where and write [ intersection_def_ours] let and be two soft sets over with .restricted intersection of two soft sets and is a soft set where and .we write the complement of a soft set is denoted by and is defined by where is a mapping given by in the sequel we shall denote the absolute null and absolute whole soft sets in a soft space as and , respectively .these have been defined in sfst08ali as: majumdar and samanta have written the ground breaking paper on similarity measures of soft sets . 
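before turning to the counterexamples of the next section , it may help to fix a concrete computational rendering of the definitions recalled above . in the sketch below ( python ) a soft set ( f , a ) over a universe u is stored as a dictionary mapping each parameter in a to the corresponding subset of u , and the soft subset test , the union , the restricted intersection and the pointwise complement are implemented along the lines of those standard definitions . the example universe and attribute names are invented for illustration ; this is not code from any of the cited papers .

```python
# a soft set (F, A) over a universe U is stored as a dict mapping each
# parameter e in A to the subset F(e) of U (a Python set).

def is_soft_subset(F, G):
    """(F, A) is a soft subset of (G, B) iff A <= B and F(e) <= G(e) for all e in A."""
    return all(e in G and F[e] <= G[e] for e in F)

def soft_union(F, G):
    """Union: F(e) on A - B, G(e) on B - A, and F(e) | G(e) on the overlap."""
    return {e: F.get(e, set()) | G.get(e, set()) for e in F.keys() | G.keys()}

def restricted_intersection(F, G):
    """Defined on the common parameters only: F(e) & G(e) for e in A & B."""
    return {e: F[e] & G[e] for e in F.keys() & G.keys()}

def soft_complement(F, universe):
    """Pointwise complement of the value sets with respect to the universe U."""
    return {e: set(universe) - F[e] for e in F}

# small invented example over U = {h1, ..., h4}
U = {"h1", "h2", "h3", "h4"}
F = {"cheap": {"h1", "h2"}, "modern": {"h3"}}
G = {"cheap": {"h1", "h2", "h4"}, "modern": {"h3"}, "wooden": {"h2"}}
print(is_soft_subset(F, G))                      # True
print(sorted(soft_union(F, G)["cheap"]))         # ['h1', 'h2', 'h4']
print(sorted(restricted_intersection(F, G)))     # ['cheap', 'modern']
```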
in this sectionwe first give counterexamples to show that their definition 2.7 and lemma 3.5(3 ) contain some errors .we , then , improve lemma 4.4 and make it a corollary of our result .a matching function based similarity measure has been defined in sfst08maj as : if then similarity between and is defined by } .\tag{i } \label{mtchfnsim}\]]if and then we first define for and for then is defined by formula ( [ mtchfnsim ] ) .[ lm3.5,sfst08maj](lemma 3.5 ) let and be two soft sets over the same finite universe then the following hold : our next example shows that claim of lemma lm3.5,sfst08maj is incorrect : let and we choose soft set as: using ( [ mtchfnsim ] ) we get majumdar and samanta have defined following distances between soft sets as four distinct notions : for two soft sets and we define the mean hamming distance between soft sets as: normalized hamming distance as: euclidean distance as: normalized euclidean distance as: [ mainremark]majumdar and samanta s observation ( also used in their proof of lemma 4.4 ) that inaccurate .the quantity is either or only .consequently , the term used for defining distances and , comes out to be identical with renders and as mere square roots of and respectively . symbolically we write: the four distances of majumdar and samanta are not distinct , rather they are only two distances . in the sequel the cardinality of a set is denoted as .we now present the main result of this section as : [ mainresult]let .then for any two soft sets and we have the smallest and the largest distances are given as , suppose the arrangement of entries in matrix representation of two arbitrary soft sets and is such that the term evaluates to times .then , we can re - arrange the terms in expansion of to get ( [ ii]),([iii ] ) and ( [ iv ] ) we have note that the result now follows immediately from follow immediately by remark mainremark and and ( lemma 4.4 ) let .then for any two soft sets and we have .we first define the notion of soft space : let be a universe and a set of attributes . then the pair called a soft space ,is the collection of all soft sets on with attributes from .the work of majumdar and samanta depends solely upon the tacit assumption that matrix representation of soft sets is a suitable representation .we now discuss the validity of this assumption .tabular representation of a soft set was first proposed by maji , biswas and roy in .this representation readily lends itself to become majumdar and samanta s matrix representation as given in .hence , in the sequal , we shall use the words ` table representation ' and ` matrix representation ' interchangeably .furthermore , we shall term a soft set in a soft space as ` total soft set ' if the soft set , which is a mapping , is defined on each point of the universe of attributes hence is a total soft set in the soft space but , with is not .it is noteworthy that the matrix representation compels one to write every soft set as a total soft set .consequently neither matrix representation is unique , nor it returns the original soft set .this is shown by the following example : let be a soft space with and choose its matrix representation is given as we try to retrieve , the soft set from we get is clearly a different soft set in as but and hence moreover , it is evident by the very definition of soft union as given by maji _ et.al . 
_that total soft sets are not meant by either molodstov or maji , biswas and roy .had this been the case , the soft union should not have been defined in three pieces .furthermore , it is important to note that in , while calculating the similarity only the value sets of a soft set have been paid attention to . whereas , ideally a similarity measure for soft sets must reflect similarity between both the value sets and the attributes , due to the peculiar dependance of the notion of soft set upon these two sets .both the above given points viz .non - suitability of matrix representation and partial nature of similarity measures of majumdar and samanta , provide us motivation to introduce more suitable distance and similarity measures of soft sets .we introduce these measures in the following sections .recall that symmetric difference between two sets and is denoted and defined as: first define : let and be soft sets in a soft space and a mapping .then is said to be quasi - metric if it satisfies a quasi - metric is said to be semi - metric if a semi - metric is said to be pseudo metric if a pseudo metric is said to be metric if some quasi - metrics and semi - metrics for soft sets may readily be defined as follows : for two soft sets and in a soft space where and are not identically void we define hamming quasi - metric as: normalized hamming quasi - metric as: for two soft sets and in a soft space , we define cardinality semi - metric as : normalized cardinality semi - metric as: following example shows that and are quasi - metrics and and are semi - metrics , only : let be a soft space with and we choose following soft sets in calculations give hence choose , soft sets : we get and fail to satisfy this choose [ my - distances]for two soft sets and in a soft space where and are not identically void we define euclidean distance as: euclidean distance as: all the radicals yield non - negative values only .the mappings as defined above , are metrics .[ lm - mydistances]for the soft sets , and an arbitrary soft set in a soft space , we have : mapping $ ] is said to be similarity measure if its value for arbitrary soft sets and in the soft space , satisfies following axioms : : : : : if then : : : : if and then and for two soft sets and in a soft space we define a set theoretic matching function similarity measure as: for the soft sets , and an arbitrary soft set in a soft space , we have : based upon distances , defined in last section ( definition [ my - distances ] ) , two similarity measure may be introduced , following koczy , as: the definition of williams and steele we may define another pair of similarity measures as: is a positive real number ( parameter ) called the steepness measure . two soft sets and in a soft space are said to be -similar , denoted as , if is a similarity measure . 
the relation of -similarity is reflexive and symmetric . majumdar and samanta have defined the notion of significant similarity as follows : two soft sets and in a soft space are said to be significantly similar with respect to the similarity measure if . in the following example we show that two clearly non - similar soft sets come out to be significantly similar using a majumdar - samanta similarity measure , but the same soft sets are rightly discerned as non - significantly similar by a similarity measure proposed in this work : [ ex_showing_superiority ] let and and . it is intuitively clear that and are not similar but and appear to be considerably similar . we calculate the similarity of both pairs of soft sets using a majumdar - samanta similarity measure as follows : according to , both the soft sets and are significantly similar to , though this conclusion is counter - intuitive . on the other hand , using ( proposed in this work ) we calculate the similarities as : has rightly been discerned as non - significantly similar and as significantly similar . we now give some interesting properties of the newly introduced similarity measures in the form of the following two propositions . proofs of these propositions are straightforward in view of lemma [ lm - mydistances ] : for an arbitrary soft set in a soft space , we have : for the soft sets , in a soft space , we have : we now present a financial diagnosis problem where similarity measures can be applied . the notion of similarity measure of two soft sets can be applied to detect whether a firm is suffering from a certain economic syndrome or not . in the following example , we estimate whether two firms with observed profiles of financial indicators are suffering from a serious liquidity problem . suppose the firm profiles are given as : profile 1 : : the firm abc maintains a bearish future outlook as well as the same behaviour in the trading of its share prices . during the last fiscal year the profit - earning ratio continued to rise . inflation is increasing continuously . abc has a low amount of paid - up capital and a similar situation is seen in foreign direct investment flowing into abc . profile 2 : : the firm xyz showed a fluctuating share price and hence a varying future outlook . like abc , the profit - earning ratio remained bearish . as both firms are in the same economy , inflation is also rising for xyz and may even be considered high in view of xyz . competition in the business area of xyz is increasing . the debt level went high but the paid - up capital lowered . for this , we first construct a model soft set for the liquidity problem and the soft sets for the firm profiles . next we find the similarity measure of these soft sets . if they are significantly similar , then we conclude that the firm is possibly suffering from a liquidity problem . let , profit - earning ratio , share price , paid - up capital , competitiveness , business diversification , future outlook , debt level , foreign direct investment , fixed income be the collection of financial indicators which are given in both profiles . further let be the universe of parameters , which are basically linguistic labels commonly used to describe the state of financial indicators . the profile of a firm , obtained by observing its financial indicators , may easily be coded into a soft set using appropriate linguistic labels . let and be soft sets coding the profiles of firms abc and xyz , respectively , given as : a model soft set for a firm suffering from a liquidity problem can easily be prepared in a similar manner with the help of a financial expert .
in our casewe take it to be as follows: thus we have , and the soft sets of firm profiles become: the calculations give: we conclude that the firm with profile i.e. xyz is suffering from a liquidity problem as its soft set profile is significantly similar to the standard liquidity problem profile . whereas the firm abc is very less likely to be suffering from the same problem .majumdar and samanta use matrix representation based distances of soft sets to introduce matching function and distance based similarity measures .we first give counterexamples to show that majumdar and samanta s definition 2.7 and lemma 3.5(3 ) contain errors , then prove some properties of the distances introduced by them , thus making their lemma 4.4 , a corllary of our result .the tacit assumption of that matrix representation is a suitable representation for mathematical manipulation of soft sets , has been shown to be flawed , in section 4 .this raises a natural question as to what approach be considered suitable for similarity measures of soft sets ?in one possible reply to this we introduce set operations based measures .our example [ ex_showing_superiority ] presents a case where majumdar - samanta similarity measure produces an erroneous result but the measure proposed herein decides correctly .the new similarity measures have been applied to the problem of financial diagnosis of firms .a technique of using linguistic labels as parameters for soft sets has been used to model natural - language descriptions in terms of soft sets .this exhibits the rich prospects held by soft set theory as a tool for problems in social , biological and economic systems .the author wishes to express his sincere gratitude to anonymous referee(s ) for valuable comments which have improved the presentation .the author is also thankful to the area editor prof .john mordeson , of this journal , for his kindness and prompt response .z. xiao , l. chen , b. zhong , s. ye , _ recognition for soft information based on the theory of soft sets , _intl . conf .services systems and services management 2005 , proc of vol2 issue 13 - 15 , ( june 2005)1104 - 1106 .
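to make the workflow of the preceding financial - diagnosis example concrete , the sketch below ( python ) encodes firm profiles as soft sets whose parameters are linguistic labels and whose values are the sets of financial indicators carrying that label , computes a set - operations - based similarity , and compares it against a significance threshold . the particular measure shown ( a jaccard - style ratio built from symmetric differences ) , the threshold of 0.5 and the toy label assignments are illustrative stand - ins , since the exact formulas , labels and numerical values of the paper are not reproduced in this excerpt ; only the qualitative outcome ( xyz significantly similar to the liquidity model , abc not ) mirrors the conclusion drawn above .

```python
def soft_similarity(F, G):
    """Set-operations-based similarity: one minus the total symmetric difference
    of the value sets normalized by the total union, accumulated over all
    parameters (a Jaccard-style stand-in, see note above)."""
    params = F.keys() | G.keys()
    sym_diff = sum(len(F.get(e, set()) ^ G.get(e, set())) for e in params)
    union = sum(len(F.get(e, set()) | G.get(e, set())) for e in params)
    return 1.0 if union == 0 else 1.0 - sym_diff / union

def significantly_similar(F, G, threshold=0.5):   # threshold chosen for illustration
    return soft_similarity(F, G) >= threshold

# toy encoding: linguistic labels -> sets of financial indicators (invented)
model_liquidity = {"high": {"debt level", "inflation"},
                   "low": {"paid-up capital"},
                   "bearish": {"profit-earning ratio"},
                   "fluctuating": {"share price"}}
firm_abc = {"rising": {"profit-earning ratio", "inflation"},
            "low": {"paid-up capital", "foreign direct investment"},
            "bearish": {"future outlook", "share price"}}
firm_xyz = {"high": {"debt level", "inflation", "competitiveness"},
            "low": {"paid-up capital"},
            "bearish": {"profit-earning ratio"},
            "fluctuating": {"share price", "future outlook"}}

for name, firm in (("abc", firm_abc), ("xyz", firm_xyz)):
    s = soft_similarity(model_liquidity, firm)
    print(name, round(s, 2), significantly_similar(model_liquidity, firm))
# with this toy encoding only xyz exceeds the threshold, mirroring the
# qualitative conclusion of the example (the numbers themselves are illustrative)
```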
in [ p. majumdar , s. k. samanta , _ similarity measure of soft sets _ , new mathematics and natural computation 4(1 ) ( 2008 ) 1 - 12 ] , the authors use matrix representation based distances of soft sets to introduce matching function and distance based similarity measures . we first give counterexamples to show that their definition 2.7 and lemma 3.5(3 ) contain errors , then improve their lemma 4.4 , making it a corollary of our result . the fundamental assumption of that work has been shown to be flawed . this motivates us to introduce set operations based measures . we present a case ( example [ ex_showing_superiority ] ) where the majumdar - samanta similarity measure produces an erroneous result but the measure proposed herein decides correctly . several properties of the new measures have been presented and finally the new similarity measures have been applied to the problem of financial diagnosis of firms . * keywords : * applied soft sets ; similarity measure ; distance measure ; financial diagnosis ; similarity based decision making ;
the scientific disciplines ( e.g. biology , chemistry , physics ) once stood well separated from each other , with practitioners from each approaching different questions in different ways .these divisions are beginning to blur , however , as answers to questions from one field increasingly require techniques and knowledge built up in another .there is evidence of this effect in the increasing need for interdisciplinary collaborations to solve problems arising in distinct fields .a particularly poignant example of this blurring of lines is the field of molecular biology , where researchers try to build an understanding of biological systems starting at the molecular level .concepts from chemistry and physics arise naturally in such endeavors and this has bred a symbiotic relationship between biologists , chemists , and physicists , who now seek to answer similar questions. some of the most important questions arising in this arena relate to the structure and function of biological macromolecules. for example , for rational drug design to be viable , a detailed knowledge of the interactions between a target protein and a potential drug molecule is necessary to understand whether the drug will bind to the protein at the right location and in the right way. from there , an atomic - level understanding of the protein itself is necessary to understand how allosteric effects turn a drug binding event into a change in the behavior of the protein. these details can not come from a _ top down _ investigation of the molecules , nor can they come from simply observing the changing behavior as a function of drug binding .part of the physics lies in the statistical mechanics of protein conformations , and part resides in the communication networks within the protein , the elucidation of which hinges on the detailed physics of the binding event and the transmission of information from the binding site to a possibly distant effector site .the same can be said of the `` holy grail '' of molecular biology understanding protein folding and how the structure of a protein relates to its function .coarse - grained models provide some insight into the process of protein folding , but a true understanding of the process and the ability to reliably predict how a protein will fold requires an atomic - level understanding of the interactions within a particular protein .there are many other examples of the need for atomistic detail in molecular biology .ultimately , all properties of biological macromolecules such as dna , rna , proteins are governed by minute details involving the atomic and electronic structure of their constituent parts as well as the interactions between neighboring pieces of the molecule .even dynamic conformational changes that may be essential to a particular process are ultimately governed by these interactions and similar interactions with the surrounding environment .developing an atomic - level understanding of large molecular systems is not an easy task and , until recently , the application of accurate quantum mechanical methods to such systems was infeasible .this review highlights recent advances made in the fields of computational physics and physical chemistry that can aid in building such an understanding . 
after discussing classical simulations and common quantum - chemistry approaches , we focus specifically on advances within density functional theory ( dft ) , a framework used successfully for decades in the field of condensed matter physics , which affords unprecedented accuracy and utility in treating large , weakly bound molecular systems . these new methods will be discussed and paired with a survey of their use on biologically - relevant molecular systems . possible future applications of these methods will also be addressed . for many purposes , the best present - day methods to study biologically - relevant systems are classical force field models . such methods allow one to study the large - scale dynamics of systems with perhaps millions of atoms over biologically - relevant timescales . this is by far the most common computational method of study for macromolecules , and has provided indispensable insight into numerous biological systems . the main goal of a force field is simple : to represent the energy and forces of a collection of atoms using a physically - motivated , yet relatively straightforward , algebraic expression . this simplicity is what allows the simulation of large systems over significant timescales . generally , the physical motivation for terms in the energy hamiltonian comes from macroscopic physics . for example , many force fields treat bond stretches and angle flexes as classical harmonic oscillators obeying hooke's law . this is , in fact , what is meant by the phrase _ classical _ force field . in its most basic form , a typical force field can be written as a sum of separate contributions to the total energy , i.e. \begin{aligned} E_{\mathrm{FF}} = & \sum_{b}\frac{1}{2}k_{b}(d_{b}-d_{0})^{2} + \sum_{a}\frac{1}{2}k_{a}(\theta_{a}-\theta_{0})^{2} \\ & + \sum_{d}\frac{1}{2}k_{d}\big[1+\cos(n\phi-\delta)\big] + \sum_{nb}\bigg[\frac{q_{i}q_{j}}{r_{ij}} + \bigg(\frac{C_{12}}{r_{ij}^{12}} - \frac{C_{6}}{r_{ij}^{6}}\bigg)\bigg]\;. \label{e_ff} \end{aligned} the first term on the right hand side of eq . ( [ e_ff ] ) represents a harmonic oscillator ( with spring constant $k_{b}$ ) in bond length between each pair of covalently - bonded atoms within the system . the second does the same for the three - atom angle term . dihedral angles are treated with a fairly shallow periodic potential , represented by the third term . the last line of eq . ( [ e_ff ] ) represents non - bonded interactions and includes a coulomb term for charge - charge interactions and a lennard - jones ( 6 - 12 ) potential to account for van der waals type interactions . some force fields add additional terms for out - of - plane motions ( improper dihedrals ) or higher - order terms . variations on the functional form are also sometimes applied . for example , a morse potential can be used in place of the harmonic bond term to allow for bond breaking during a simulation .
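as a concrete illustration of eq . ( [ e_ff ] ) , the sketch below ( python ) evaluates such a class - i force - field energy for a toy four - atom chain from explicit lists of bonds , angles , dihedrals and non - bonded pairs . all parameter values and the toy geometry are invented for the example ; in practice a real force field ( charmm , amber , etc . ) supplies them from its parameter files , and production codes treat units , exclusions and cutoffs with far more care .

```python
import numpy as np

def ff_energy(coords, bonds, angles, dihedrals, nonbonded):
    """Class-I force-field energy in the spirit of eq. (e_ff): harmonic bond and
    angle terms, a periodic dihedral term, and Coulomb plus Lennard-Jones
    non-bonded terms. Parameters are passed in explicitly; units are the caller's."""
    def dist(i, j):
        return np.linalg.norm(coords[i] - coords[j])

    e = 0.0
    for i, j, k_b, d0 in bonds:                     # 1/2 k_b (d - d0)^2
        e += 0.5 * k_b * (dist(i, j) - d0) ** 2

    for i, j, k, k_a, th0 in angles:                # 1/2 k_a (theta - theta0)^2
        u, v = coords[i] - coords[j], coords[k] - coords[j]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        e += 0.5 * k_a * (np.arccos(np.clip(c, -1.0, 1.0)) - th0) ** 2

    for i, j, k, l, k_d, n, delta in dihedrals:     # 1/2 k_d [1 + cos(n phi - delta)]
        b1, b2, b3 = coords[j] - coords[i], coords[k] - coords[j], coords[l] - coords[k]
        n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
        phi = np.arctan2(np.dot(np.cross(n1, n2), b2 / np.linalg.norm(b2)),
                         np.dot(n1, n2))            # one common sign convention
        e += 0.5 * k_d * (1.0 + np.cos(n * phi - delta))

    for i, j, qi, qj, c12, c6 in nonbonded:         # q_i q_j / r + C12/r^12 - C6/r^6
        r = dist(i, j)
        e += qi * qj / r + c12 / r ** 12 - c6 / r ** 6

    return e

# toy 4-atom chain with invented parameters (illustration only)
xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.5, 0.9, 0.0], [2.5, 1.0, 0.3]])
print(ff_energy(xyz,
                bonds=[(0, 1, 300.0, 1.0), (1, 2, 300.0, 1.0), (2, 3, 300.0, 1.0)],
                angles=[(0, 1, 2, 60.0, 1.9), (1, 2, 3, 60.0, 1.9)],
                dihedrals=[(0, 1, 2, 3, 2.0, 3, 0.0)],
                nonbonded=[(0, 3, 0.2, -0.2, 1e-3, 1e-2)]))
```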
for all their usefulness , so called `` class i '' force field approaches suffer from some drawbacks .first , treating microscopic phenomena using macroscopic theory is , in essence , a mean - field approach .the quantum - mechanical interactions between electron clouds are averaged over .this , along with the assumed form for all physical interactions , does not allow new physics to be uncovered .the only physics present in the simulation is what was explicitly included , meaning one can not gain any true atomic - level insight into the underpinnings of interesting phenomena .second , the simplicity of the mean - field approach used in force field simulations means that they are generally incapable of transferably achieving chemical accuracy. while bulk motions and general trends can often be gleaned from such simulations , the precise movements and behavior of atoms are probably not accurate .this poses a significant problem for applications such as drug design , where one seeks to find a small molecule ( an enzyme inhibitor perhaps ) that binds with a certain affinity to a site in the protein .biophysicists and biochemists have already made substantial headway against this problem. originally , force fields included the partial charge on an atom as a fitting parameter .the charge was assumed fixed during the simulation so effects of polarization could not be treated .the next generation of force fields incorporates the ability of charge to rearrange during a simulation .polarizable _ force fields incorporate some of the quantum effects necessary to accurately model molecular systems .one example of this new type of force field is the amoeba force field , which includes both static and dynamic polarizabilities and represents a significant step towards accurate energetics from a force field. in addition , newer force fields often include cross - terms that account for how changes in one internal coordinate affect other energy terms .these help improve accuracy and transferability but can not correct for the lack of an explicit quantum mechanical treatment .the obvious solution to the shortcomings of the classical force field methods is to directly include quantum mechanics in calculations .therefore , the solution to the problem is straight - forward ; one simply has to solve the time - independent schrdinger equation where the hamiltonian in atomic units is given by here , lower - case letters represent electronic degrees - of - freedom , upper - case letters represent nuclear degrees - of - freedom ( including charge z ) , is the energy of the system , and the explicit representation of the hamiltonian follows from specialization to an isolated system of atoms under the born - oppenheimer approximation .the unknown function is the wave function for the electrons and from it ( along with knowledge of the nuclear positions ) one can calculate all accessible properties of the system . unfortunately, depends on the coordinates of all electrons within the system and , as a result , direct solution of eq .( [ schrodinger ] ) for the full many - body wave function is difficult or impossible for all but the most trivial systems .for example , a single neutral water molecule has 10 electrons , so its wave function is a function of variables ( i.e. 10 electron positions in three dimensions ) . 
while an analytical solution in this simple case is already not possible , it is conceivable that the schrdinger equation could be solved numerically .however , to store the wave function on a numerical grid consisting of 10 points in each dimension ( a laughably coarse grid ) using single precision numerics would take bytes ( approximately tb ) of storage .this is `` the curse of dimensionality '' on a grand scale and renders full solution of the schrdinger equation for most systems utterly intractable .surmounting this fundamental problem in a physical way is not easy and has consumed the efforts of chemists and physicists alike for decades . from those efforts , however ,have sprung a number of useful approaches .these can be split into two categories , wave function theories and density functional theory , both of which we will discuss in detail below .all the methods described in what follows have exhibited great success in describing various quantum - mechanical properties of molecules and materials in general . however , when dealing with biologically - relevant systems two special considerations arise : ( i ) such systems are typically quite _ large _ and ( ii ) their structure and binding is often dominated by weak _van der waals _ interactions .since this review is focused on biological applications of quantum - mechanical methods , special attention will be paid to the ability of each method to scale well with system size and to adequately describe van der waals interactions . as such, the ability of a method to treat large systems involving van der waals interactions will determine its applicability to the biologically - relevant systems considered here .a simple solution to the dimensionality problem introduced by eq . ( [ schrodinger ] ) is to seek solutions of the form that is , to assume that the total electron wave function can be separated and written as a product of single - electron states ( orbitals ) .the pauli - exclusion principle and the anti - symmetry of the wave function can be enforced by forming a slater determinant of the single - particle solutions .since the fock operator used to find the orbitals depends explicitly on those orbitals , the resulting equations are generally solved self - consistently .this approach is a form of mean - field theory where each electron responds to the average field created by all other electrons residing in their single - particle orbitals .the advantage of this approach is that each orbital is now a function of the three spatial coordinates , making numerical calculations computationally feasible .wave function methods are described in detail in ref . 
the hartree - fock ( hf ) method , which takes this approach , is relatively fast and based on sound quantum mechanics , but the approximations invoked by its use miss some crucial physics . in particular , electrons are dynamic entities . the total energy of the system can be lowered if , averaged over some degree - of - freedom , the electrons correlate their behavior . correlation is ( almost ) completely missed in the hartree - fock method , which explicitly assumes single - particle states ( the static correlation due to the pauli exclusion principle is , however , fully accounted for ) . nevertheless , hf theory is a good first - order starting point for corrections that incorporate electron correlation into the total wave function and its associated energy . such methods are termed post - hf methods , since they use the results of a hf calculation as a starting point to incorporate electron correlation explicitly . there are many post - hf methods that exhibit various accuracies coming at related computational costs . one of the best features of the wave function methods is their segregation into a hierarchy of so - called `` levels of theory '' . thus , one knows , in some sense , to what degree a result can be trusted , depending on the precise method used . if better results are desired , one merely has to progress to a higher level of theory . basis sets ( the set of functions used to expand the wave functions ) are also of critical importance . they too , however , exhibit a hierarchy of complexity and applicability . figure [ fig_hf ] gives a cartoon depiction of how one can approach the numerically exact solution by combining a large basis set with a high level of theory . the most rigorous method to include electron correlation is full configuration interaction ( ci ) . in the ci method , one starts as usual with the orbitals found by a hartree - fock calculation . instead of using a single slater determinant of these functions , however , a linear combination of slater determinants is formed , each one corresponding to one possible ordering of electrons in the orbitals . in other words , all possible combinations of electron excitations are given a slater determinant , and the optimized linear combination of these yields the numerically exact wave function . this renders full ci a combinatorial problem taking a given number of electrons and producing all possible excitations to a given set of orbitals . thus , full ci scales factorially with the number of basis functions used and therefore is not practical in all but the smallest of systems . perhaps the next best thing to a full ci calculation is to use coupled cluster theory . the coupled cluster approach mimics ci but uses only small numbers of electron excitations , usually considering only excitations of one to three electrons . the most common variant of coupled - cluster theory is notated ccsd(t ) , which includes single and double excitations iteratively , and triple excitations perturbatively . this has proven incredibly reliable and represents the `` gold standard '' for accurate quantum - chemistry calculations . although it has polynomial ( rather than factorial ) scaling in the number of basis functions used , the asymptotic scaling of the generally used form renders this approach mainly useful on relatively small systems of perhaps 30 - 50 atoms . among the most used post - hf methods is møller - plesset perturbation theory at second order ( mp2 ) or higher order ( mp3 , mp4 , ... ) .
in perturbation theory , one seeks to find the solution of \[\left(\hat{h}_{0}+\lambda\hat{h}'\right){\left|\psi\right>}=e{\left|\psi\right>}\,,\label{perturbation}\] where the perturbation strength factor $\lambda$ is assumed small , and the solution to the unperturbed problem ( $\lambda=0$ ) is already known . in this case , $\hat{h}_{0}$ is the non - interacting hartree - fock hamiltonian and $\hat{h}'$ , which is assumed to be small in effect relative to $\hat{h}_{0}$ , is the hamiltonian for inclusion of electron correlation . mp2 expands this expression in terms of powers of $\lambda$ up to second order . this can be used to correct both energies and wave functions . mp2 has shown great success , but it is not perfect . comparison with coupled cluster and full ci methods has shown that mp2 often significantly overestimates the correlation , especially in delocalized systems . usage of the higher - order expansions ( e.g. mp4 ) may yield increased accuracy , but the results are not as straightforward as one might hope , as convergence of the møller - plesset series has been shown to be unreliable . in many cases , estimates of correlation may get worse with increasing order , sometimes oscillating or even diverging in the worst cases . convergence depends on both the system under study and the basis set being employed , with poor results often accompanying use of the diffuse functions required to correctly model dispersion interactions . nevertheless , mp methods are highly prized in quantum - chemistry wave function calculations because they contain a good balance of accuracy and computational efficiency . the asymptotic scaling of mp2 ( as $n^{5}$ in the number of basis functions ) makes it substantially cheaper than high - level coupled cluster methods . mp2 can be used on systems of respectable size . a system with a hundred atoms or more is not out of the reach of an mp2 calculation on a high - end computer . wave function theories have a number of nice properties , but they scale poorly with system size . a completely different approach , density functional theory ( dft ) , scales as $n^{3}$ , and is therefore much more amenable to calculation of large systems . calculations can be performed on systems consisting of perhaps several thousand atoms , making it applicable to biochemical systems . in 1964 hohenberg and kohn published a seminal paper showing that the quantum - mechanical energy of a set of atoms can be written uniquely as a functional of the electron charge density ( within the born - oppenheimer approximation ) . furthermore , the charge density that minimizes this functional is the ground - state charge density for the system , and all measurable properties of the system can be written in terms of this optimal charge density . this avoids the dimensionality problem of eq . ( [ schrodinger ] ) by shifting the quantity of interest to the charge density in real space , a function of only three variables regardless of the number of electrons . density functional theory as a modern approach was initiated when kohn and sham wrote the energy as a density functional of the form \[\begin{aligned} e{[n(\vec{r}\,)]} & = & e_{{\text{k}}}{[n(\vec{r}\,)]}+e_{{\text{n - e}}}{[n(\vec{r}\,)]}+e_{{\text{e - e}}}{[n(\vec{r}\,)]}\nonumber\\ & + & e_{{\text{xc}}}{[n(\vec{r}\,)]}+e_{{\text{n - n}}}\;,\label{dft}\end{aligned}\] where $e_{{\text{k}}}$ is the total kinetic energy of the system , in principle written as a density functional , but in practice written as a functional of the kohn - sham orbitals . an analytical density functional for $e_{{\text{k}}}$ is not known , but approximations to it lead to so called orbital - free methods .
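for reference , the second - order ( mp2 ) correction to the hf energy takes the standard spin - orbital form below , written in the usual notation ( occupied orbitals $i , j$ , virtual orbitals $a , b$ , orbital energies $\varepsilon$ , antisymmetrized two - electron integrals $\langle ij||ab\rangle$ ) ; it is quoted here from standard many - body theory rather than from the text itself :

\[
e^{(2)}_{\text{mp2}}=\frac{1}{4}\sum_{ij}^{\text{occ}}\sum_{ab}^{\text{virt}}
\frac{\left|\langle ij||ab\rangle\right|^{2}}{\varepsilon_{i}+\varepsilon_{j}-\varepsilon_{a}-\varepsilon_{b}}\;.
\]

the four - index sum ( together with the integral transformation it requires ) is what gives mp2 the $n^{5}$ cost mentioned above .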
the final term in eq . ( [ dft ] ) is the nucleus - nucleus repulsion term , which can be treated as a simple additive constant since it is uniquely determined by the positions of the nuclei and these are decoupled from the quantum - mechanical problem by use of the born - oppenheimer approximation . analytical expressions are known for both $e_{{\text{n - e}}}$ ( the nucleus - electron , effective 1-body term ) and $e_{{\text{e - e}}}$ ( the hartree term giving the average electron - electron interaction ) , leaving the _ exchange - correlation _ functional $e_{{\text{xc}}}{[n(\vec{r}\,)]}$ to account for all remaining many - body effects . the ground state is obtained by minimizing eq . ( [ dft ] ) with respect to the density , thereby finding the ground - state density . since $e_{{\text{xc}}}{[n(\vec{r}\,)]}$ is not known , however , it must be approximated in some way . this is the main approximation in dft and determines the method's applicability to a particular system . not surprisingly then , much effort is put into improving the approximations made in generating $e_{{\text{xc}}}{[n(\vec{r}\,)]}$ . one approach is to assume that the exchange - correlation energy is a _ local _ functional of the density , one that depends on $n(\vec{r}\,)$ in a point - wise fashion . this local density approximation ( lda ) is good when the density is slowly varying , becoming exact in the limit of a uniform electron density . despite its simplicity , the lda is amazingly good in many systems , especially in those with relatively concentrated charge density such as crystalline environments where metallic or covalent bonding dominates . for molecules , where directional covalent bonds are the primary interaction , it tends to perform less adequately . one can imagine the lda as the zeroth - order term in the taylor expansion of the density about each point , and envision adding additional , derivative - dependent terms . a functional depending on the density and its gradient ( first derivative ) in a point - wise fashion is called a semi - local functional , and the approximation of the true energy functional in this way is called the generalized gradient approximation ( gga ) . this approximation is a substantial improvement over lda in many systems , particularly molecules . the $e_{{\text{xc}}}$ term in eq . ( [ dft ] ) must approximate the effects of both exchange , which removes the unphysical electron self - interaction while enforcing the pauli exclusion principle , and correlation , which , roughly speaking , accounts for the fact that each electron experiences a highly dynamic environment rather than a _ mean field _ of the other electrons . in hartree - fock theory , the form of the exchange operator is known , so exchange could be treated exactly and combined with an approximate correlation functional . unfortunately , most functionals exhibit serendipitous error cancellations between their exchange and correlation pieces , making the combination of exact exchange with only an approximate correlation contribution prone to large errors . in 1993 becke proposed using a 50% - 50% mix of exact exchange and lda , eventually leading to the 3-parameter b3lyp functional and similar hybrid functionals , which are among the most accurate functionals for covalently - bound molecules . early successes of hybrid functionals led some to erroneously believe that they could describe weak van der waals interactions . unfortunately , hybrid functionals , being a linear combination of exact exchange and ( semi)-local exchange - correlation approximations , cannot account for van der waals interactions . this is because van der waals interactions are a non - local correlation effect , and any functional that is local or semi - local in correlation is by construction not able to reliably describe them .
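to make the notion of a `` local '' functional above concrete , the sketch below evaluates the textbook dirac / slater lda exchange energy , $e_{x}^{\text{lda}}=-\tfrac{3}{4}\left(\tfrac{3}{\pi}\right)^{1/3}\int n^{4/3}(\vec{r}\,)\,d^{3}r$ , on a toy radial density ; the gaussian density used is a hypothetical example , not one taken from any system discussed in the text .

```python
import numpy as np

def lda_exchange_energy(density, grid_weights):
    """Dirac/Slater LDA exchange: a *local* functional, i.e. the integrand at
    each grid point depends only on the density at that same point."""
    c_x = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return np.sum(grid_weights * c_x * density ** (4.0 / 3.0))

# toy spherically symmetric density: a hypothetical normalized gaussian blob
r = np.linspace(1e-6, 10.0, 2000)
dr = r[1] - r[0]
n_of_r = np.exp(-r ** 2) / np.pi ** 1.5          # integrates to 1 electron
weights = 4.0 * np.pi * r ** 2 * dr              # radial volume element
print("electrons on grid:", np.sum(weights * n_of_r))
print("E_x(LDA) [hartree]:", lda_exchange_energy(n_of_r, weights))
```

a gga would additionally feed the local density gradient into the integrand , and a non - local functional ( discussed below ) could not be written as a single such point - wise sum at all .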
there is ample discussion in the literature of the poor performance of standard hybrid functionals in weakly bound complexes . as evident from the discussion in section [ sec : dft ] and fig . [ scaling ] , dft is capable of treating large systems of perhaps thousands of atoms , which is one of the requirements if it is to be applicable to systems of interest in molecular biology . however , at the same time , it also has to be able to accurately describe weak van der waals interactions , which play an important role in biomolecules . historically , dft has not performed well when applied to systems with van der waals interactions ; this is probably the single most important problem that has prevented dft from gaining a strong foothold in molecular biology . below we discuss the shortcomings of standard dft and several recent developments that overcome this barrier , leading to a full applicability of dft to large biomolecules . in standard dft , the exchange - correlation functional is often assumed to be _ local _ , i.e. a single spatial integral of the exchange - correlation energy density , which depends explicitly on the charge density . this approach leads to the so called local density approximation ( lda ) . adding a dependence on the gradient of the charge density results in the generalized gradient approximation ( gga ) , while inclusion of higher - order derivative terms yields meta - gga functionals . however , this approach fails to correctly account for van der waals ( vdw ) interactions , which are non - local correlation effects ; they occur between physically separated regions of charge , generally with little overlap of their density functions . capturing these effects correctly requires a functional that expresses the exchange - correlation energy as ( at minimum ) a double spatial integral . van der waals interactions , ubiquitous in polyatomic systems , occur when electron motions in one atom ( or within one molecule ) correlate with electron motions in a nearby atom ( or molecule ) , setting up transient but interacting multipoles within each . correlation between electrons lowers their energy relative to uncorrelated electrons , so the van der waals force is always attractive . in some systems , crystalline nacl for example , the contribution of these interactions to the overall binding is negligible . in other systems these interactions can be an appreciable part of the overall interaction . noble gas dimers such as ar$_2$ and kr$_2$ are held together entirely by van der waals interactions . large diffuse molecular systems ( prime examples being biological macromolecules ) rely quite heavily on van der waals interactions for their stability , so such interactions play an integral role in their behavior . with this in mind , numerous attempts were made to include the ability to capture van der waals interactions within conventional dft . a thorough account of all these efforts is beyond the scope of the present review , but several promising approaches will be discussed .
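a classic way to see where the leading dispersion coefficient comes from is london's approximate formula relating it to atomic polarizabilities $\alpha$ and ionization energies $i$ ; the expression below is the textbook estimate , shown only to connect the physics just described to the $c_{6}$ coefficients used in the next paragraphs , and is not part of any method reviewed here :

\[
e_{\text{disp}}(r)\approx-\frac{3}{2}\,
\frac{i_{a}\,i_{b}}{i_{a}+i_{b}}\,
\frac{\alpha_{a}\,\alpha_{b}}{r^{6}}
\qquad\Longrightarrow\qquad
c_{6}^{ab}\approx\frac{3}{2}\,\frac{i_{a}\,i_{b}}{i_{a}+i_{b}}\,\alpha_{a}\,\alpha_{b}\;.
\]

the $\alpha_{a}\alpha_{b}$ product is what makes large , polarizable ( e.g. $\pi$-conjugated ) systems so strongly dispersion - bound , a point that recurs throughout the applications below .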
as stated earlier , van der waals interactions arise when electronic motions within separated atoms correlate , setting up transient multipole moments within the individual atoms . one can expand the dispersion energy of two arbitrary , polarizable charge densities in terms of the interactions of induced multipoles . if a point of interest is located at a distance $r$ that is large compared to some characteristic length scale of the charge distribution , the _ pairwise _ dispersion energy can be expanded in powers of $1/r$ as \[e_{\text{disp}}=-\frac{c_{6}}{r^{6}}-\frac{c_{8}}{r^{8}}-\frac{c_{10}}{r^{10}}-\ldots\,,\] where the constants $c_{n}$ correspond to a particular system and determine the relative strengths of the various terms . for sufficiently large distances , the dipole - dipole term dominates and dispersion interactions go as $1/r^{6}$ . this observation is the basis of the density functional theory with added dispersion ( dft - d ) method . typically , this method works by adding to the total energy a pairwise atomic correction of the form \[e_{\text{disp}}=-\sum_{i<j}\frac{c_{6}^{ij}}{r_{ij}^{6}}\,f_{\text{dmp}}(r_{ij})\,,\label{dftd}\] where $e_{\text{disp}}$ is the dispersion energy , $c_{6}^{ij}$ is an empirically - derived coefficient that is atom - pair - dependent , $f_{\text{dmp}}$ is a damping function , and the sum runs over all pairs of atoms . the damping function ranges from 0 at small $r_{ij}$ to 1 for larger separations , and is required because the asymptotic $1/r^{6}$ form becomes unphysical as distances become small . the specific form of the damping function plays a role in the accuracy of the technique . too weak a damping with decreasing distance results in over - counting of the interaction energy . too strong a damping will weaken the vdw interactions at relevant ranges . the most critical aspect of the damping function is how it behaves at intermediate distances near the bonding length of a vdw bond . much attention has been paid to the form of the damping function , and opinions differ on its optimal form . commonly , the damping function is given the form \[f_{\text{dmp}}(r_{ij})=\frac{1}{1+e^{-d\left(r_{ij}/r_{\text{vdw}}^{ij}-1\right)}}\,,\] where $d$ is a chosen constant and $r_{\text{vdw}}^{ij}$ sets the relevant distance scale for the interaction of atoms $i$ and $j$ and is generally chosen to be the sum of their van der waals radii . this was the form chosen by grimme in 2004 , when he published a set of $c_{6}$ coefficients , based on a database of dipole oscillator strength distributions , for a number of important atoms and demonstrated the method's effectiveness on a large set of molecular systems . the values of the $c_{6}$ coefficients ( and corresponding vdw radii ) depend somewhat on the choice of exchange - correlation functional , so grimme added an empirical parameter he called $s_{6}$ that scales the interaction , adjusting its strength to the functional being used . approaches like the dft - d method are not new , dating back at least as far as london himself , but they have proved extremely useful at many levels of atomic theory , and continue to be so within dft . the pairwise dispersion correction given by eq . ( [ dftd ] ) is not a density functional , but instead relies on fitting to a chosen set of external data . the data used in the fit and the interaction between the dispersion correction and the functional coupled with it both affect the results obtained . such a fitting procedure can limit transferability between systems . the original dft - d approach of grimme has been re - parameterized many times , both for improved accuracy and for application to other systems . to improve on the transferability and overall accuracy of dft - d , tkatchenko and scheffler proposed an alteration , which uses a relative $c_{6}$ coefficient calculated on - the - fly from the charge density .
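the following sketch assembles a grimme - style pairwise correction of the kind in eq . ( [ dftd ] ) , with the scaling factor $s_{6}$ included ; the coordinates , $c_{6}$ coefficients and van der waals radii are placeholder numbers chosen for illustration , not grimme's published parameters .

```python
import numpy as np

def damping(r, r_vdw, d=20.0):
    """Fermi-type damping: ~0 at short range, -> 1 at large separation."""
    return 1.0 / (1.0 + np.exp(-d * (r / r_vdw - 1.0)))

def dftd_pairwise(coords, c6, r_vdw, s6=1.0):
    """E_disp = -s6 * sum_{i<j} C6_ij * f_dmp(R_ij) / R_ij**6.
    c6[i, j] and r_vdw[i, j] are per-pair parameters (placeholder values here)."""
    e_disp = 0.0
    n_atoms = len(coords)
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(coords[i] - coords[j])
            e_disp -= s6 * c6[i, j] * damping(r, r_vdw[i, j]) / r ** 6
    return e_disp

# toy two-atom example with made-up parameters (atomic units)
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]])
c6 = np.full((2, 2), 10.0)       # hypothetical C6 coefficient
r_vdw = np.full((2, 2), 6.0)     # hypothetical sum of vdW radii
print("E_disp (toy):", dftd_pairwise(coords, c6, r_vdw))
```

the correction and its nuclear gradient cost essentially nothing compared with the underlying dft calculation , which is a large part of the method's appeal .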
in this approach ( hereafter referred to as dft+vdw ) they define the effective volume for an atom $i$ within a system , relative to the free - atom volume , as : \[\frac{v_{i}^{\text{eff}}}{v_{i}^{\text{free}}}=\frac{\int w_{i}(\vec{r}\,)\,n(\vec{r}\,)\,r^{3}\,d^{3}r}{\int n_{i}^{\text{free}}(\vec{r}\,)\,r^{3}\,d^{3}r}\;,\label{veff}\] where $r$ is measured from the nucleus of atom $i$ and $w_{i}(\vec{r}\,)$ is the fraction of the density at $\vec{r}$ arising from atom $i$ in a linear combination of free - atom charge densities . starting from the free - atom $c_{6}^{ii}$ coefficients , they define the effective coefficient for an atom within a molecule or solid as \[c_{6}^{ii}[n(\vec{r}\,)]=\left(\frac{v_{i}^{\text{eff}}}{v_{i}^{\text{free}}}\right)^{2}c_{6}^{ii,\text{free}}\;,\] with the hetero - nuclear combination rule for atoms $i$ and $j$ defined as : \[c_{6}^{ij}=\frac{2\,c_{6}^{ii}\,c_{6}^{jj}}{\frac{\alpha_{j}}{\alpha_{i}}\,c_{6}^{ii}+\frac{\alpha_{i}}{\alpha_{j}}\,c_{6}^{jj}}\;,\] where $\alpha_{i}$ is the static polarizability of atom $i$ . thus , in the dft+vdw approach one may write \[\begin{aligned} e_{\text{disp}}[n(\vec{r}\,)]=-\frac{1}{2}\sum_{i\ne j } \frac{c_{6}^{ij}[n(\vec{r}\,)]}{r_{ij}^{6}}\;,\end{aligned}\] with $i$ and $j$ ranging over all atoms . that is , the dispersion energy can be written as a functional ( albeit a non - universal one that depends on the arrangement of nuclei ) of the charge density . writing the $c_{6}$ coefficients as density functionals allows for the polarizability of atoms to be a dynamic , environment - dependent quantity . if it could be calculated , the functional derivative of this expression with respect to the charge density would yield the kohn - sham potential for the dispersion energy , allowing the latter to be calculated self - consistently . it is not clear at present whether this would significantly affect the interaction energies of vdw compounds when using the dft+vdw method . tkatchenko and scheffler do note , however , that the use of eq . ( [ veff ] ) largely cancels the charge density differences arising from the use of different functionals , making their method less sensitive to the particular exchange - correlation functional used compared with static approaches . in 2008 , tkatchenko and von lilienfeld noted that many - body effects can play a significant role in the energetics of vdw - rich systems . this is especially true in bulk , where close - packed atoms can be geometrically arranged in many complex ways . in particular , three - body , triple - dipole interactions can contribute substantially to binding energies , typically raising them relative to pure pairwise interactions . given the recent surge of inquiry into metal - organic frameworks and other molecular crystals , the ability to account for this fact may become of increasing interest . in 2010 , an expanded version of dft+vdw was described , wherein an additional three - body term was added . the three - body term was given the form \[e_{ijk}=c_{9}^{ijk}\,\frac{3\cos\theta_{i}\cos\theta_{j}\cos\theta_{k}+1}{\left(r_{ij}\,r_{jk}\,r_{ki}\right)^{3}}\;,\] where $r_{ij}$ is the distance between atoms $i$ and $j$ and $\theta_{i}$ is the interior angle at atom $i$ between $\vec{r}_{ij}$ and $\vec{r}_{ik}$ . the formulation of this expression into a density functional then follows in a fashion similar to that of the pairwise $c_{6}$ term , the fundamental difference being the inclusion of an angular dependence in the damping function . an alternative approach to the addition of pairwise atomic dispersion terms is to express the total energy of a system directly as a _ non - local _ functional of the density . that is , to write the exchange - correlation functional in such a way that it depends simultaneously on the charge density at multiple points . in principle , this is the optimal approach because the true exchange - correlation functional is _ fundamentally _ a non - local functional . treating it on such a footing allows for its integration into dft in a seamless and self - consistent manner .
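to make the combination rule and the triple - dipole term concrete , the sketch below evaluates both in the forms given above for a toy triangle of atoms ; the $c_{6}$ , polarizability and $c_{9}$ values are invented and no damping is applied , so this is a schematic of the formulas rather than a working dft+vdw implementation .

```python
import numpy as np

def c6_combination(c6_ii, c6_jj, alpha_i, alpha_j):
    """Hetero-nuclear combination rule from the pairwise coefficients and
    static polarizabilities (as written above)."""
    return 2.0 * c6_ii * c6_jj / ((alpha_j / alpha_i) * c6_ii +
                                  (alpha_i / alpha_j) * c6_jj)

def triple_dipole(coords, c9):
    """Undamped three-body (triple-dipole) term for three atoms:
    E = C9 * (3*cos(t0)*cos(t1)*cos(t2) + 1) / (r01*r12*r20)**3."""
    r01 = coords[1] - coords[0]
    r12 = coords[2] - coords[1]
    r20 = coords[0] - coords[2]
    d01, d12, d20 = (np.linalg.norm(v) for v in (r01, r12, r20))
    cos0 = np.dot(r01, -r20) / (d01 * d20)   # interior angle at atom 0
    cos1 = np.dot(-r01, r12) / (d01 * d12)   # interior angle at atom 1
    cos2 = np.dot(-r12, r20) / (d12 * d20)   # interior angle at atom 2
    return c9 * (3.0 * cos0 * cos1 * cos2 + 1.0) / (d01 * d12 * d20) ** 3

coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [2.5, 4.0, 0.0]])
print("C6_ij (toy):", c6_combination(10.0, 20.0, 1.5, 2.5))
print("E_3-body (toy):", triple_dipole(coords, c9=100.0))
```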
in the van der waals density functional ( vdw - df ) approach , the exchange - correlation functional takes the form \[e_{\text{xc}}=e_{\text{xc}}^{0}+e_{\text{c}}^{\text{nl}}\;,\] where $e_{\text{xc}}^{0}$ is a local - like piece of the functional that is assumed to be well modeled by standard functionals and $e_{\text{c}}^{\text{nl}}$ is a non - local piece , which is evaluated by considering all _ pairwise _ points in the charge density , \[e_{\text{c}}^{\text{nl}}=\frac{1}{2}\int\!\!\int d^{3}r\,d^{3}r'\;n(\vec{r}\,)\,\phi(\vec{r},\vec{r}\,')\,n(\vec{r}\,')\;.\label{nl}\] the kernel function $\phi(\vec{r},\vec{r}\,')$ describes how charge densities at $\vec{r}$ and $\vec{r}\,'$ correlate . a meaningful form for $\phi$ was described by dion et al . in 2004 , leading to the van der waals density functional ( vdw - df ) . this functional evolved from a less general one restricted to planar geometries . the analytical form of $\phi$ and its numerical computation are onerous , but since $\phi$ itself does not depend on the density , it can be calculated and tabulated once - and - for - all . the functional derivative of eq . ( [ nl ] ) with respect to the charge density was given in 2007 , allowing for completely self - consistent calculation of energies and forces using this method . the functional as originally proposed required the evaluation of a double integral over three - dimensional space , as one might expect from a non - local functional . this made the use of the functional costly relative to other local or semi - local options . however , in 2009 román - pérez and soler effected a great simplification by transforming the double integral into a single integral over fourier transforms using the convolution theorem . since fourier transforms are efficiently obtained and/or readily available in plane - wave dft codes , this dropped dramatically the time required to evaluate the vdw - df functional and made the cost of its use on par with that of a similar gga calculation . it was quickly noted that , when used on vdw - rich systems , the functional produced binding distances that were slightly larger compared with experiment or high - level calculations . this led to the assertion that the revised perdew - burke - ernzerhof exchange functional , originally chosen to accompany the vdw - df because it exhibited minimal spurious binding of its own , was too repulsive . lee et al . revised the approach in 2010 , recommending the use of a less repulsive revised version of the perdew - wang exchange functional and changing the value of a gradient coefficient . these small changes improved the method's accuracy for both energy and geometry in many systems . for an in - depth review of this approach see ref . . there are a number of other approaches that are capable of describing van der waals interactions within a dft framework . a full listing is beyond the scope of this review , but several of the more common approaches are briefly discussed here . in symmetry - adapted perturbation theory ( sapt ) , the interaction energy of a system is written as a perturbative expansion in terms of physically - meaningful interactions . the hamiltonian for a superposition of non - interacting monomers is taken as the unperturbed hamiltonian , with the interaction between monomers forming the perturbing potential . terms in the perturbation generally include electrostatic , exchange , induction , and dispersion interactions . the principal advantage of this approach is that the relative contributions from different physical interactions can be determined explicitly . this leads to an intuitive interpretation of the interaction energy . the downside to the approach is its computational cost , which grows steeply ( when taken to second order ) with increasing system size . it is therefore limited to relatively small systems . see ref .
for an excellent overview of sapt and its applications .zhao and truhlar have developed a series of functionals designed to obtain accurate energies for weakly - bound systems. these functionals have been shown to work well for the -stacking and hydrogen bonding interactions that are omnipresent in biological macromolecules. the advantage of these functionals is their efficiency , being essentially the same computational cost as a typical dft calculation .there are several families of these functionals , designed and parameterized to apply to different chemical situations .the functionals are known to poorly describe dispersion interactions in the asymptotic limit , where they decay exponentially rather than as . nevertheless , they have seen heavy use recently for their ability to accurately and efficiently capture the short - ranged contributions to dispersion interactions . in the dispersion - corrected atom - centered potential ( dcacp )approach of von lilienfeld et al. , van der waals interactions are handled by means of an effective electron - core interaction .typical plane - wave density functional theory approaches utilize pseudopotentials , which treat nuclei and core electrons together as an effective , angular momentum - dependent potential .the potential is designed such that the all - electron wave - function is reproduced faithfully . in the dcacp approach ,the non - local piece of this effective core potential is optimized to reproduce high - level calculations of molecular properties , specifically , the dispersion energies and forces within molecules .since the dcacp method uses the same type of effective core potential that is traditionally used in plane - wave calculations , its use does not impose additional computational complexity .the effective potential is designed as a van der waals correction to standard gradient - corrected exchange - correlation functionals , so potentials must be optimized for each type of atom and for every exchange - correlation functional that the method is to be paired with .optimized effective potentials have been generated for all the standard biological atoms ( carbon , nitrogen , oxygen , hydrogen , sulfur , and phosphorus ) , each with several gradient - corrected functionals .the method shows good transferability and has been used in molecular as well as solid - state applications .although van der waals interactions are generally thought of as a _ correlation _ effect , the approach of becke and johnson takes a wholly different viewpoint , treating them instead as arising from interactions between an electron - exchange hole pair in one system and an induced dipole in another .this viewpoint is motivated by the fact that the exchange hole is , in general , not spherically symmetric , so the electron - exchange hole system has a non - zero dipole moment .this dipole moment does not affect the energy of the system containing the electron - exchange hole pair since only the spherical average of the exchange hole enters the energy expression for a system .this electron - hole dipole can correlate with a separate system , however , yielding a dispersion - like interaction . 
when averaged over the entirety of a system , the approach yields molecular coefficients in good agreement with those from high - level methods .these can be decomposed into atomic coefficients and used in a scheme similar to that in the dft - d approach .a required component of the approach is the dipole moment of the exchange hole , which becke and johnson conveniently cast as a meta - gga functional by utilizing the approximate becke - roussel form for the exchange hole .further development led to the ability to calculate and coefficients .this approach is simple , elegant , and performs well over a variety of systems .as can be seen from fig .[ scaling ] , of all the quantum mechanical methods discussed above , only dft is currently capable of treating systems consisting of several hundred to several thousand atomsi.e.the lower end of the range of biologically relevant molecules . as such ,in this application section we will almost exclusively focus on studies that have used dft to investigate such systems .the methods outlined above represent current state - of - the - art dft as it applies to vdw - rich systems . inwhat follows , these methods ability to do useful biochemistry will be highlighted through a brief survey of recent studies conducted both to test them and to learn from them ._ this survey is intended to act as a showcase of the capabilities of modern dft , rather than a comparison of particular methods of its implementation ._ small molecules make a natural proving ground for new methods in dft because calculations can be compared with quantum chemistry methods .there exists an extensive body of work , much of it carried out by poner and hobza and , independently , by stefan grimme , benchmarking the dft methods discussed above against accurate wave function approaches with special focus being placed on biologically - relevant molecular systems .this work has yielded encouraging results and forms the foundation upon which studies of the physics in these systems rests .but studying small molecules is useful in its own right , since these play a pivotal role in biochemistry .most notable among the biologically - relevant small molecules are water and the building blocks of macromolecules themselves , namely , dna bases and amino acids .water has received special attention in the literature , both because of its great importance to ( bio)chemistry and because an accurate first - principles understanding of it has proven surprisingly elusive .most molecular interactions within living systems occur in an aqueous environment , so an understanding of water is a necessary precursor to developing an understanding of _ in vivo _ biochemistry . these days , the bulk behavior of water ( e.g. phase diagram , radial distribution functions ) is well modeled by parameterized force fields. although these force fields get many of the properties of water correct compared with experiment , the fact that they were parameterized to do just that limits their usefulness as a tool for understanding the atomistic interactions in water . 
at the fundamental levelthere are quantum effects , most notably the quantum - mechanical nature of the hydrogen nuclei, that can not be easily reproduced with classical models .this clouds the connection between microscopic effects and bulk behavior .a full understanding of the behavior of water can only come from a quantum mechanical description that applies at the microscopic level , but can be extended up to the macroscopic limit .the behavior of small water clusters ( h) with less than about 6 has been extensively studied at the quantum level and is largely understood. minimum energy geometries can be calculated with high level wave function methods and these have been compared with various dft treatments . at this level ,standard dft does a reasonable job at describing the geometric and energetic properties of water , but some improvement can be made by including dispersion interactions. although the hydrogen bonds that govern water s structure are not typically thought of as a van der waals effect , recent studies have shown that geometries , energies , dipole moments , and vibrational frequencies of small water clusters are all improved by inclusion of van der waals interactions, as can be seen in figs .[ flo : water_properties_1 ] and [ flo : water_properties_2 ] .the improved description of water when van der waals effects are included is not limited to small water clusters , but continues into the bulk . through a series of _ ab initio _ molecular dynamics simulations lin et al. showed that the radial distribution functions produced by standard gradient - corrected functionals tend to produce water that is over - structured compared with experimentthis was also evident in the average number of hydrogen bonds and the self - diffusion coefficient , both of which show an over - structuring of the water molecules .these results mirror obtained by numerous other groups working with a variety of different codes , exchange - correlation functionals , and basis sets this over - structuring is mitigated to a large degree by a proper treatment of van der waals interactions .the self diffusion coefficient increases three - fold and the over - structuring evident in the radial distribution functions softens when van der waals interactions are included .this is also true for bulk ice in its standard hexagonal form ( i ) where inclusion of van der waals interactions again improves the description of structural and electronic properties .it is worth pointing out that the results of dft calculations can vary quite widely depending on the choice of basis set and exchange - correlation functional used , and great care must be taken with their selection .for example , when coupled to the non - local piece of the vdw - df , the overly repulsive revised pbe exchange functional actually produces water that is _ under - structured _ compared with experiment , in contrast to most other exchange functionals .this is related to the aforementioned tendency of the original vdw - df to predict intermolecular interaction distances that are large compared with experiment and high - level wave - function methods .additionally , it has been pointed out that the properties of liquid water calculated within dft can depend quite strongly on the choice of basis set. calculations similar to those shown in fig .[ flo : water_properties_2 ] were carried out by zhang et al . 
and showed considerably less improvement in the oxygen - oxygen radial distribution function compared to experiment .the basis sets used in the two sets of calculations were fundamentally different , making a direct comparison of their appropriateness difficult . despite these issues , it is generally agreed that , when properly chosen basis sets and exchange - correlation functionals are used , the inclusion of van der waals interactions fundamentally improves the dft description of water , both at the microscopic level and in bulk .it is interesting to note that , although inclusion of van der waals interactions greatly improves the description of water , this alone does not complete the picture of important effects within water .the standard born - oppenheimer approximation used in quantum - mechanical studies treats all nuclei as classical point particles .recent work by a number of groups , however , has shown that nuclear quantum effects may play a significant role in determining the properties of water .in fact , it has been shown that such nuclear quantum effects may be more far - reaching , playing a substantial role in hydrogen bonds in general , not just between water molecules .this would , of course , have enormous consequences for a proper description of interactions within biological molecules such as proteins and dna , where hydrogen bonds often dominate the binding .for example , it has been proposed that the keto form of dna nucleobases ( the standard form required for watson - crick hydrogen bonding ) can spontaneously tautomerize via hydrogen tunneling to the enol form , a process which could be responsible for some types of dna damage . a recent study by prez et al .found that , although such tunneling does occur , the metastable enol form has a lifetime too short to play a significant role in dna mismatch damage .in fact , the effects of quantum nuclei appear to dynamically stabilize the keto form .for a recent review of these considerations see ref . .the four nucleobases , arranged in different sequences along strands of a sugar - phosphate polymer , have enabled the information of life to be stored and propagated since life began .each of these relatively simple molecules contains an aromatic ring capable of engaging in multiple hydrogen bonds .when these bases come together in a watson - crick , _ edge - on _ manner , they can form hydrogen bonds strong enough to hold two dna strands together . when brought together in a parallel _ face - on_ fashion , they form stacking interactions strong enough to give it an average persistence length of roughly 50 nm with some sequences having even larger persistence lengths .+ cooper , thonhauser , and langreth calculated the base interaction energy as a function of distance for a watson - crick , _ edge - on _ approach of two base pairs ( see fig . [ fig_h - bond]). this was done for the a : t , a : u , and g : c combinations .the g : c base pair exhibits a maximum interaction energy of about twice that of the other pairs , not surprising since it has an extra hydrogen bond , and all three show similar equilibrium binding distances . the base stacking energy as a function of geometry has been studied by several groups. the binding energies as a function of twist angle for all possible stacked base pairs are shown in fig .[ fig_stacking ] .it is noted that the methyl substitution that differentiates thymine from uracil stabilizes the systems with respect to twist . in 2006 , jurecka et al . 
published a set of accurate ,quantum - chemical binding energies of 22 molecular dimers , selected for the importance of van der waals interactions within them. the set ( dubbed the s22 dataset ) was broken into three distinct groups : ( i ) dimers for which hydrogen bonding is the key component of binding , ( ii ) dimers for which pure dispersion is the key component of binding , and ( iii ) dimers which exhibit a mixture of both of these effects .comparison with this dataset became the _ de facto _metric for assessing the ability of fledgling methods within dft to correctly account for van der waals interactions . within thisset ( which was later revised , expanded , and placed in a convenient online database) were a homodimer of uracil and an a : t heterodimer . in 2010 ,a landmark paper by von lilienfeld and tkatchenko showed that the uracil - uracil u : u and adenine - thymine a : t stacked bases exhibit large 3-body dispersion terms .going a step further , the authors addressed the magnitude of two and three - body dispersion interactions across the entire s22 dataset .some of their results are shown in fig .[ 3-body_s22 ] . using the dft+vdw approach enhanced with the triple - dipole term ( as discussed in section [ sec_methods ] ) , they found that the three distinct groups of the s22 set show markedly different dependencies on three - body dispersion interactions .the systems showing large 3-body dispersion terms ( which include the stacked u : u and a : t dimers ) were the systems deemed dispersion - dominant and those with essentially no 3-body dispersion interactions were systems dominated by hydrogen bonding . the authors argue that 3-body effects may be more important than previously thought .interestingly , for stacked nucleobases the 3-body dispersion term seems to be relatively constant , especially compared with the pairwise dispersion term .figure [ 3-body_all ] shows the 2 and 3 body dispersion terms calculated by von lilienfeld and tkatchenko for 42 stacked nucleobases and base pairs . the 3-body contribution tothe energy is relatively constant across the entire dataset while the 2-body term varies considerably , especially for the weaker - binding systems .the stacking interactions of dna bases discussed in the previous section are important for another reason .many cancer - causing agents act by intercalating between base - pairs within a strand of dna , preventing it from carrying out its normal functions .ironically , some anti - cancer drugs can also act in this way . in the latter case the dnais intentionally disturbed either to prevent its replication or to trigger cell death .one well known intercalating anticancer drug is the poly - aromatic ellipticine molecule .this molecule can intercalate between base pairs of dna where it is believed to interfere with the process of replication , effectively killing the cell. li et al .calculated the binding energy between the neutral ellipticine molecule and a single c : g base pair to be 18.4 kcal / mol. not surprisingly , the strength of the binding was shown to have a substantial dependence on the relative angle between the ellipticine and dna bases , showing a relatively strong ( several kcal / mol ) preference for near parallel and anti - parallel conformations .chun lin et al . investigated the intercalation of ellipticine between a cytosine - guanine base step ( i.e. a pair of c : g base pairs). 
as shown in fig . [ ellipticine_intercalation ] , they found that ellipticine is significantly attracted to the dna complex even when it is several angstroms away and ultimately intercalates with a binding energy of about 37 kcal / mol , in good accord with the earlier results found by li et al . further , they found that the interaction was repulsive when van der waals interactions were excluded from the calculations . von lilienfeld and tkatchenko found the pairwise dispersion energy for this system to be substantial ( tens of kcal / mol ) , with a 3-body correction term of 8.9 kcal / mol . this is certainly a significant binding event and shows the strength with which aromatic molecules can interact with the loose electrons within dna . such interactions are common for -stacked molecules such as benzene , and the large number of relatively close neighbors within the -conjugated dna bases is believed to be responsible for the large interaction energy . the intercalation of both positively charged and neutral proflavine has also been studied . neutral proflavine was found to bind to a c : g base pair with an energy of several kcal / mol and charged proflavine with an energy near 12.1 kcal / mol . the difference between the charged and uncharged binding was attributed to electrostatic effects rather than those of correlation . this conclusion was reached largely because the results of standard pbe calculations , although they get the interaction energy of each system wrong , exhibit a similar _ difference _ between the two binding energies . it is interesting to note that the binding energy is larger for the positively charged proflavine even though the negatively charged backbone was omitted from these calculations . again , a substantial preference for near parallel and anti - parallel relative angles was found . the energetics of the interaction between proflavine and a t : a base pair was also studied , with results qualitatively similar to those of the proflavine c : g system . proflavine was found to bind to t : a with a binding energy of 18 kcal / mol , again showing preference for near parallel and anti - parallel configurations . steric clashes with the methyl group on the thymine base produced some interesting features in the rotation curve but did not change the overall preferred structures . perhaps the most interesting finding of li et al . was that both intercalators studied were found to have stronger interactions with a c : g base pair than a c : g base pair has with _ another _ c : g base pair , and that the angular dependence of these interactions qualitatively differs . a c : g base pair dimer has a double - well minimum centered around 0 , with the minimum - energy configurations at a twist of about 35 and 35 . the intercalators , by contrast , exhibit only one of these minima . this may partially explain the disruption of secondary structure observed upon intercalation of these molecules , which may play an important role in their anti - cancer function .
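binding ( intercalation ) energies like those quoted above are typically obtained as total - energy differences between the bound complex and its separated fragments ; the sketch below shows only that bookkeeping , using the standard conversion 1 hartree = 627.509 kcal / mol , with hypothetical total energies that are placeholders and not values from the studies cited .

```python
HARTREE_TO_KCAL = 627.509  # standard conversion factor

def binding_energy_kcal(e_complex, e_host, e_guest):
    """Supermolecular binding energy: E_bind = E(host+guest) - E(host) - E(guest).
    A negative value means the intercalated complex is bound."""
    return (e_complex - e_host - e_guest) * HARTREE_TO_KCAL

# hypothetical DFT total energies in hartree, chosen only so the difference
# comes out in the tens-of-kcal/mol range typical of intercalation
e_dna_step    = -2148.4321
e_intercalant = -824.1150
e_complex     = -2972.6059
print("E_bind ~", round(binding_energy_kcal(e_complex, e_dna_step, e_intercalant), 1), "kcal/mol")
```

in practice such differences also need corrections for basis - set superposition error ( in atom - centered basis sets ) and , for solvated dna , some treatment of the environment , neither of which is shown here .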
owing to their large size and complexity , simulation of proteins often proves to be a formidable challenge even for simple parameterized models .the application of quantum mechanics to a full protein is , unfortunately , still beyond the reach of modern dft .recently , however , significant steps toward a quantum understanding of proteins have been made .helical chains of alanine molecules are often studied because they are relatively simple yet they exhibit the canonical helix structure present in so many proteins .in addition , when capped with a charged species they can be formed experimentally so computed properties may be compared with experiment. in one study , tkatchenko et al.looked at three helical forms ( , , and 3 ) of poly - alanine chains. by comparing with pbe , a standard gradient - corrected functional , they found significant van der waals stabilization of all three helix types relative to the fully extended structure .in fact , pbe predicts nearly equal stabilization energies for all three whereas the van der waals calculations showed a splitting of about 2 kcal / mol between the -helix and 3 structures .the authors note that the van der waals effects are of much shorter range than the standard hydrogen bond stabilizations in the helical forms , since the helices are long - ranged structures exhibiting periodic hydrogen bonds . despite this, the study found that van der waals interactions were critical to explain the observed stability of poly - alanine helices up to about 700 k. through _ ab initio _ molecular dynamics calculations tkatchenko et al . found that , when van der waals effects were excluded , the helical structure gave way to the fully extended form at a temperature well below that observed experimentally , even though hydrogen bonds were still correctly accounted for .agreement with experiment was recovered when van der waals interactions were included in the calculations , which showed a breaking up of the helical structure between 700 and 800 k. drug discovery is a multi - billion dollar business and much effort is being put into so called _ rational drug design _ where potential drug molecules are scored based on their predicted binding affinity to a particular protein target. this transfers the _ trial - and - error _ phase away from the lab , where experiments to test drug binding affinities can be relatively expensive and time - consuming , to the computer , where thousands of potential drugs can be tested for binding at relatively low cost. working toward this end , antony et al . studied the interactions of a number of protein active sites with their respective biological ligands. they found that exclusion of van der waals interactions can substantially change the ordering of ligand binding affinities .further , they found that neglect of these interactions can actually lead to the computed binding energies for a ligand with its target receptor being of the wrong sign .another study , carried out by rutledge and wetmore focused on ligands that interact with their host protein via stacking interactions and t - shaped interactions. 
as before , they found that inclusion of van der waals effects is imperative to obtain accurate energetics in such systems .with the utility of these methods established , attention can be turned toward the future and what can be accomplished with them .computation of a full macromolecule in atomistic detail is still beyond the reach of dft , even for the most advanced computers , but the method can still be used as a tool to aid in our understanding of such systems .one useful approach that has been adopted by some groups is to use dft to parameterize new force fields .typically , these are parameterized either to reproduce experimental results or the results of high - level quantum chemistry calculations .as discussed in section [ sec_methods ] , quantum chemistry methods are limited to fairly small systems .parameterizing force fields using the much larger systems that dft is capable of simulating might help average out size effects and better represent the environment that exists within macromolecules .additionally , solid - state parameter sets could be developed to deal with molecular crystals .another useful application of dft is in the refinement of experimental structures .typical x - ray and nmr techniques provide data that is consistent with more than one structure .also problematic is the placement of the x - ray invisible hydrogen atoms . given this, experimentalists often use semi - empirical calculations to refine the observed structure .use of high - level dft calculations including van der waals interactions could yield a better result , since large systems can be calculated very accurately .drug discovery is another area where useful progress is being made by incorporating dft calculations and it is expected that dft will play an important role in this area soon. although an entire protein may not be able to be treated quantum mechanically , hybrid methods that apply varying levels of theory to regions within a protein are being used with much success .one can treat the drug molecule and its binding site with full quantum - mechanical rigor while treating distant regions using well - tested classical or semi - empirical approaches .this allows the most important physics to be treated accurately and coupled to a sufficient treatment of the less important parts of the problem .this is not only a useful approach for drug design but also applies to understanding the normal operation of ligand - binding proteins .such methods are general referred to as qm / mm , i.e. quantum mechanics / molecular mechanics .finally , although the applicability of dft to calculations on full macromolecules is currently limited , linear scaling dft methods are becoming popular and provide a tantalizing way forward .these approaches , which make use of special algorithms and highly - localized basis functions , can easily treat thousands of atoms see fig .[ scaling ] .such capabilities make computation of full macromolecular systems feasible . for example, the fledgling linear - scaling code onetep has been used to calculate properties of a 20 base - pair strand of dna containing almost 1300 atoms .if augmented with the ability to adequately treat dispersion interactions , such linear scaling dft approaches may provide a practical means to apply full quantum mechanics to biological problems of real interest in the near future .j. w. ponder , c. wu , p. ren , v. s. pande , j. d. chodera , m. j. schnieders , i. haque , d. l. mobley , d. s. lambrecht , r. a. distasio jr ., m. head - gordon , g. n. 
i. clark , m. e. johnson and t. head - gordon , _ j. phys .b _ * 114 * , 2549 ( 2010 ) . r. h. french , v. a. parsegian , r. podgornik , r. f. rajter , a. jagota , j. luo , d. asthagiri , m. k. chaudhury , y .- m .chiang , s. granick , s. kalinin , m. kardar , r. kjellander , d. c. langreth , j. lewis , s. lustig , d. wesolowski , j. s. wettlaufer , w .- y .ching , m. finnis , f. houlihan , o. a. von lilienfeld , c. j. van oss and t. zemb , _ rev .phys . _ * 82 * , 1887 ( 2010 ) .d. c. langreth , b. i. lundqvist , s. d. chakarova - kck , v. r. cooper , m. dion , p. hyldgaard , a. kelkkanen , j. kleis , l. kong , s. li , p. g. moses , e. murray , a. puzder , h. rydberg , e. schrder and t. thonhauser , _ j. phys . : condens* 21 * , 084203 ( 2009 ) .
recent years have seen vast improvements in the ability of rigorous quantum - mechanical methods to treat systems of interest to molecular biology . in this review article , we survey common computational methods used to study such large , weakly bound systems , starting from classical simulations and reaching to quantum chemistry and density functional theory . we sketch their underlying frameworks and investigate their strengths and weaknesses when applied to potentially large biomolecules . in particular , density functional theory , a framework that can treat thousands of atoms on firm theoretical ground , can now accurately describe systems dominated by weak van der waals interactions . this newfound ability has rekindled interest in using this tried - and - true approach to investigate biological systems of real importance . in this review , we focus on some new methods within density functional theory that allow for accurate inclusion of the weak interactions that dominate binding in biological macromolecules . recent work utilizing these methods to study biologically - relevant systems will be highlighted , and a vision for the future of density functional theory within molecular biology will be discussed .
recently , the fcc has released the tvws for secondary access under a database - assisted architecture , where there are several geo - location databases providing spectrum availability of the tv channels .these databases will be managed by database operators ( dos ) approved by the fcc . to manage the operation costs , proper business models are essential for dos .the fcc allows dos to determine their own pricing schemes and there are two different ways in which sus can assess the tvws with the help of a database .sus can register their wsds in the database in a soft - licence style .part of the available tv spectrum is then reserved for the registered sus .unregistered sus can also access tvws in a purely secondary manner .for instance , an su can first connect to the database and upload wsd information such as location and transmission power and then obtain a list of available channels from the database . as a result ,two different pricing schemes can be employed , one for registered and the other for unregistered sus , respectively .registered sus pay a registration fee to dos and access the reserved bandwidth exclusively .this pricing scheme can be referred to as the _ registration scheme_. unregistered sus query the database only when they are in need of tv spectrum .dos charge them according to the number of database queries they make .this pricing scheme is referred to as the _ service plan scheme_. the co - existence of multiple pricing channels allows dos to better manage their costs and provides different service qualities to different types of sus .for example , the registration scheme can be adopted by sus providing rural broadband or smart metering services , since the reserved bandwidth may suffer from less severe interference . on the other hand ,the service plan scheme suits the temporary utilization of tvws such as home networking . to harvest the advantages of both pricing schemes and maximize the profit of dos , two challenges need to be addressed .first , how should dos determine pricing parameters for each scheme ?with limited available tv bandwidth , dos need to decide how much to allocate to each pricing channel .also , the registration fee in the registration scheme and the price for a certain amount of queries should be determined .second , how should sus choose between the two schemes ?both schemes have their pros and cons for different types of sus .the two challenges are coupled together .the decisions of sus on which pricing scheme to choose can affect the profit obtained by dos while the pricing parameters designed by dos ensure sus have different preferences for either scheme . in this paper , we focus on do s hybrid pricing scheme design considering both the registration and the service plan scheme . to the best of our knowledge ,there are no existing works on geo - location database considering a hybrid pricing model - .we consider one do and multiple types of sus .the sus can strategically choose between the two pricing schemes . unlike many existing works consider no su strategies when there exist multiple pricing schemes , in this paper , we assume users can have their own choices other than being directly classified into either pricing scheme . we argue that especially for a new service like database - based networking , users will consider seriously of the benefits from each scheme and other players responses . in this paper, we consider a two - stage pricing framework . 
at stage i , the do announces the amount of bandwidth to be reserved and the registration fee for the registered sus , and then the sus choose whether to register or not . at stage ii , the do announces a set of service plans for the unregistered sus to choose from . the sus then decide which pricing scheme to choose given the announced pricing schemes . if the service plan scheme is adopted , an su should further decide which particular plan to buy . in this paper , we consider two different su scenarios . in the non - strategic case , sus have fixed their pricing scheme choices . in the strategic case , sus can compete with each other in choosing either pricing scheme . in the latter case , the competition among the sus is modeled with non - cooperative game theory , since the choices of the other sus can affect an su's utility owing to the sharing of tvws . for the do , the problem is to optimally allocate bandwidth and design pricing parameters based on the estimated actions of the sus . in this paper , we consider that the do has either complete or incomplete information about the sus . by complete information , we mean that the do knows the exact _ type _ of each su . the type of an su relates to its channel quality , valuation for the spectrum , etc . in the case of incomplete information , the do knows only a distribution of the types . we model the optimal service plan design problem with contract theory . different service plans are considered to be different contract items , and an optimal contract is determined based on the knowledge of the su types . to optimally choose pricing parameters , one challenge to tackle is how the do estimates the possible actions of the sus , especially in the strategic case . there may be multiple equilibria in the game . we solve this challenge by exploring the nature of our problem and propose a computationally feasible algorithm to estimate the outcome of the game . the major contributions of this paper are summarized as follows : * we propose a hybrid pricing scheme for the do considering the heterogeneity of su types . as far as we know , it is the first work considering hybrid pricing schemes for tvws databases . * we model the competition among the strategic sus as a non - cooperative game . we prove the existence of the nash equilibrium ( ne ) under both the complete and incomplete information cases . by exploring the nature of the problem , we design computationally feasible distributed algorithms for the sus to achieve an ne within a bounded number of iterations . * we propose algorithms for the do to optimally decide pricing parameters . we formulate the service plan design with contract theory and derive optimal contract items under the complete information scenario and sub - optimal items for the incomplete information scenario . the rest of this paper is organized as follows . in section [ sec : model ] , we describe the pricing framework and the detailed system model . problem formulation is presented in section [ sec : formulation ] .
in section[ sec : non - strategic_complete ] we study the optimal pricing solution for non - strategic sus under the complete information scenario as a baseline case .then in section [ sec : strategic_complete ] and section [ sec : strategic_incomplete ] , we consider the sus to be strategic players , under complete and incomplete information scenario , respectively .numerical results are given in section [ sec : simulation ] .related works are further reviewed in section [ sec : related_works ] .finally , section [ sec : conclusion ] concludes the paper .in this section , after an overview of the system parameters , we introduce the big picture of the proposed pricing framework. then we further detail the model for the do and the sus , respectively .key notations are summarized in table [ tab : notations ] .we consider the scenario with one do and sus denoted as .we assume is public information available to the do and the sus .for the ease of analysis , we assume all the sus are in the same contention domain . fcc requires the sus to periodically access the database . in this paper, we consider a time duration of periods . in each period , before accessing the tv channels , the sus should connect to the database to obtain a list of available channels .we assume the total available tv channels in each period have an expected bandwidth of and the bandwidth of a channel is .the number of available channels is then ..key notations in this paper .[ cols="^,<",options="header " , ] when applying a two - dimensional search , we need to fix the range of . recall that is the money paid by the registered sus to the do .when , the highest revenue a su can obtain is .so should be within ] . if is too high , do will not have the incentive to consider the registration scheme .since the marginal cost of one database access is very small compared with the bandwidth reservation cost , we set the default value of to be 0 . in the case of non - strategic sus , we assume are the same among all types .we use a single value to denote the fraction of sus preferring the registration scheme .we will set to be either or in our simulations .in this section , for convenience we refer to the complete information scenario as _cis _ and the incomplete information scenario as _iis_. in this section ,we compare the two cases : non - strategic sus in cis and strategic sus in cis to show the impact of su strategy on the pricing solution .first , in fig . [ fig : epsilon0-b_r_non - strategic ] , we fixed the su type distribution to be distr .1 . we set to be either 0.05 or 0.1 .we then vary the reservation cost and plot the optimal obtained from three cases : non - strategic sus with , non - strategic sus with and strategic sus , all under cis . from fig .[ fig : epsilon0-b_r_non - strategic ] , we can see that with non - strategic sus , the do tends to reserve less bandwidth for the reservation scheme under the same and .that is because when the sus are non - strategic players , they decide whether to pay the registration fee only based on the prior knowledge of the number of sus in the registration scheme which is predetermined .therefore , many sus will overestimate the registration fee , so that the do can not increase the registration fee to obtain higher profit . as a result ,the do prefers the service plan scheme and reserves less bandwidth .second , in fig .[ fig : strategic_vs_non - strategic ] , we fixed the su type distribution to be distr . 
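to illustrate the kind of two - dimensional search described above , the sketch below scans a grid of ( reserved bandwidth , registration fee ) pairs and keeps the pair with the highest do profit under a deliberately simplified toy model ; the su response rule and the utility expressions here are invented placeholders for illustration only , not the utility functions , notation or algorithms of this paper .

```python
import numpy as np

def toy_do_profit(b_r, fee, b_total=10.0, n_su=50, epsilon=0.05, v=1.0):
    """Toy profit model (placeholder, NOT the paper's formulation):
    registered SUs keep joining while their per-SU share of the reserved
    bandwidth is worth at least the fee; unregistered SUs generate query
    revenue proportional to the leftover bandwidth."""
    n_reg = 0
    for k in range(1, n_su + 1):          # naive count of willing registrants
        if v * b_r / k >= fee:            # k-th SU still willing to register
            n_reg = k
    revenue_reg = n_reg * fee
    revenue_plan = 0.5 * v * (b_total - b_r) * (n_su - n_reg) / n_su
    return revenue_reg + revenue_plan - epsilon * b_r   # minus reservation cost

best = max(((b_r, fee, toy_do_profit(b_r, fee))
            for b_r in np.linspace(0.0, 10.0, 21)
            for fee in np.linspace(0.0, 1.0, 21)),
           key=lambda t: t[2])
print("best (B_r, fee, profit):", tuple(round(float(x), 3) for x in best))
```

the point of the sketch is only the structure of the search : for every candidate bandwidth split and fee , the do predicts how many sus register and prices the remaining demand , then keeps the most profitable pair .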
second , in fig . [ fig : strategic_vs_non - strategic ] , we fix the su type distribution to be distr . 1 and set to be 0.05 . we vary the reservation cost and plot the utility of the do and the average utility of the sus given the optimal pricing parameters . from fig . [ fig : strategic_vs_non - strategic_udo ] , we can see that in both scenarios the do utility first decreases and then remains the same after . when is higher , the do utility in the non - strategic scenario is higher than that in the strategic su scenario . that is because when is higher , the profit the do makes from the registered sus in the strategic scenario becomes smaller , whereas most of the do profit in the non - strategic su scenario comes from the non - registered sus . we can also see from fig . [ fig : strategic_vs_non - strategic_usu ] that when the sus are strategic players , their average utility is still non - zero when . that is because , in the non - strategic su scenario , when , , all sus can only get zero utility . however , in the strategic su scenario , some sus can choose the registration scheme to obtain non - zero utility . from the comparison , we can conclude that with a more intelligent su strategy , the sus achieve higher average utility . we also compare the do's revenue when applying hybrid and uniform pricing schemes . there are two uniform pricing schemes : registration only or service plan only . we set . in the case of the registration scheme only , while in the case of the service plan scheme only , . we show the maximum possible revenues the do can obtain under type distributions 1 - 3 and a random distribution in fig . [ fig : hybrid ] . the results for the random distribution are an average of 100 runs . we can see that the hybrid pricing scheme provides higher revenue for the do . by offering hybrid pricing schemes , the do has a new degree of freedom to tune the bandwidth segmentation to increase its profit . we will evaluate the impact of in the next evaluation . to show the impact of the bandwidth reservation cost on the optimal bandwidth reservation , we vary from 0 to 5.2 and plot the optimal under each in both cis and iis in fig . [ fig : epsilon0-b_r_scenario0 ] and fig . [ fig : epsilon0-b_r_scenario1 ] , respectively . in both information scenarios , decreases with the increase of . that is because with greater , the reservation cost for the same bandwidth is higher , so the do prefers to allocate more bandwidth to unregistered users . we can see that when , for all cases , which means no bandwidth is reserved for the registration scheme . we can also observe from fig . [ fig : epsilon0-b_r ] that in distr . 2 decreases the fastest among the three distributions . that is because in distr . 2 there are more sus of higher types , who are more likely to have in eqn . ( [ eqn : max_udo3 ] ) , so the do can benefit more from the service plan scheme . it is also worth noting that in the cis , decreases to zero sooner than in the iis . that is because in the cis the do can design the query plans for individual unregistered sus to make more profit . in fig . [ fig : information ] , we show the impact of the do's knowledge of the sus' personal information on the utility of the do and the sus . the su types are set to distribution 1 . we can see from fig . [ fig : information_udo ] that the do has higher utility in the cis than in the iis when . that is because when , in cis .
as a result , the bandwidth for the service plan scheme is non - zero and the do can make more revenue from the unregistered sus in the cis . we can see from fig . [ fig : information_usu ] that the sus have higher average utility in the iis than in the cis when . that is because when , in iis . as a result , the bandwidth for the service plan scheme is non - zero and the sus can get non - negative utility when choosing the service plan scheme . we can also notice that when in fig . [ fig : information_udo ] and when in fig . [ fig : information_usu ] , , which means all bandwidth is reserved for registered sus . as the do designs the registration scheme to admit only registered sus of the highest type , the utilities obtained in both information scenarios are the same . in summary , with more knowledge of the sus' information , the do enjoys higher utility . on the other hand , the hidden information provides the sus with higher utility . to show the impact of the su type distribution on the contract design , we fix , , and compute the and in the service plans for 50 unregistered sus under the 5 different su type distributions . note that these parameters may not be the optimal pricing parameters for each distribution ; however , using the same parameters for all the distributions allows us to see the impact of the su type distribution separately . fig . [ fig : contract ] shows the results for and . in fig . [ fig : contract_query ] , we can see that when the number of sus of lower types is higher ( compare distr . 3 and distr . 1 ) , the do tends to assign non - zero queries to lower type sus . that is because a query is determined by a factor of in eqn . ( [ eqn : max_udo3 ] ) . in fig . [ fig : contract_price ] we can see that if a non - zero is assigned to a lower type , a lower price should be charged . this is because the price should follow the individual rationality constraint ( eqn . ( [ eqn : ir ] ) ) for such sus . we implemented the bunching and ironing algorithm for the contract design . we assume that at stage i , the do and the sus still use algorithm [ alg : valid_q ] to estimate the contracts to ensure the existence of an ne . we generate random su type distributions with , and set , which indicates that all the sus choose the service plan scheme . we repeat the test 100 times . we show the average utility of the do obtained from the query assignment algorithms in fig . [ fig : suboptimal ] . for different , algorithm [ alg : valid_q ] achieves a do utility within 90% of that obtained from the optimal algorithm . to check the convergence time to an ne via algorithm [ alg : registration_incomplete_info ] , we fix , and generate sus with uniformly distributed types . under each , we repeat algorithm [ alg : registration_incomplete_info ] 200 times . fig . [ fig : convergence ] shows the average number of improvement steps needed to achieve an ne . it is clear that the number of steps is bounded by , which verifies theorem [ thm : ndrg_property2 ] . we may prove a tighter bound for the convergence time in our future work .
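as a rough illustration of how such an ne can be reached by improvement steps , the sketch below ( our own toy payoffs , not the utilities or algorithm [ alg : registration_incomplete_info ] of the paper ) runs best - response dynamics for sus choosing between the registration and service plan schemes ; because the toy payoffs form an exact - potential congestion game , the improvement path terminates at a pure ne , mirroring the bounded convergence observed above .

```python
# A minimal sketch (payoffs are toy stand-ins, not the paper's utilities) of best-response
# dynamics for the SUs' choice between the registration scheme (0) and the service plan
# scheme (1).  Types enter additively, so the game below is an exact-potential congestion
# game and the improvement path is guaranteed to terminate at a pure-strategy NE.
import numpy as np

rng = np.random.default_rng(0)
N = 50
theta = rng.uniform(0.0, 2.0, size=N)        # hypothetical SU types (extra value of registering)
FEE, B_RESERVED, B_FREE = 1.0, 4.0, 6.0      # assumed registration fee and bandwidth split

def utility(i, choice, n_reg):
    if choice == 0:                          # registered SUs share the reserved bandwidth
        return theta[i] + B_RESERVED / max(n_reg, 1) - FEE
    return 0.8 * B_FREE / max(N - n_reg, 1)  # unregistered SUs share the rest via query plans

choices = np.ones(N, dtype=int)              # start with everyone on the service plan
for sweep in range(10 * N):
    improved = False
    for i in range(N):
        n_reg = int(np.sum(choices == 0))
        current = utility(i, choices[i], n_reg)
        alt = 1 - choices[i]
        alt_n_reg = n_reg + 1 if alt == 0 else n_reg - 1
        if utility(i, alt, alt_n_reg) > current + 1e-12:
            choices[i] = alt                 # profitable unilateral deviation
            improved = True
    if not improved:                         # no SU wants to deviate: a pure-strategy NE
        break
print("registered SUs at the NE:", int(np.sum(choices == 0)), "after", sweep + 1, "sweeps")
```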
in this paper , to make the problem solvable , we have made several simplifications in the system model . there are several interesting ways to further extend this work . first , the registration scheme considered in this paper admits a uniform registration fee . in a more flexible setting , the registered sus could pay different amounts of registration fees and enjoy different shares of the reserved bandwidth . similarly to the query plan scheme , the do could offer several levels of registration fees . second , the pricing framework considered in this paper is not dynamic . the pricing parameters are determined once at the beginning and will not change . in a more dynamic setting , the do could redesign the pricing parameters for the following time durations and the sus could also re - select their pricing scheme given the observations in previous time durations and the newly announced pricing parameters . in this case , the interactions between the do and the sus become a repeated game . also , the sus may dynamically join and leave the query plan . therefore , the sus should consider the expectation of future su behaviors when choosing their pricing schemes . most existing works on geo - location databases can be classified into two categories . some works focus on the design of the geo - location database to protect primary users . in , gurney et al . discussed methods to calculate the protection area for tv stations . in , murty et al . designed a database - driven white space network based on measurement studies and terrain data . some other works focused on the networking issues under the assumption that the database is already set up . in , feng et al . presented a white space system utilizing a database . in , chen et al . considered the channel selection and access point association problem . one recent work also addresses the business model related to the geo - location database . in , the authors proposed that the geo - location database act as a spectrum broker reserving the spectrum from spectrum licensees . they considered only one pricing scheme , which is similar to the registration scheme discussed in our paper . compared to our previous work , in this paper we further extend the scenario to non - strategic sus and compare the pricing schemes with non - strategic and strategic sus under the complete information scenario . we also extend our theoretical analysis and numerical evaluations . many works also focus on the economic issues of dynamic spectrum sharing . in , pricing - based spectrum access control is investigated under secondary user competition . in , spectrum pricing with spatial reuse is considered . contract theory is utilized in scenarios where the spectrum buyers have hidden information . in , gao et al . leveraged contract theory to analyze the spectrum trading between a primary operator and sus . in , contract theory is applied to the cooperative communication scenario . in this paper , we also model the service plan design with contract theory . however , due to the co - existence of hybrid pricing schemes , there is uncertainty about the number of sus choosing the contract items , which is different from existing works . some works focus on hybrid pricing of other limited resources . in , wang et al . study the problem of capacity segmentation between two different pricing schemes for cloud service providers . one key difference between our work and is that the strategic sus considered in our paper can dynamically choose between pricing schemes , while in , the users are pre - categorized into different pricing schemes before the pricing schemes are designed . in this paper , we consider a hybrid pricing model for the tvws database . the sus can choose between the registration scheme and the service plan scheme . we investigate scenarios where the sus can be either non - strategic or strategic players and the do has either complete or incomplete information about the sus .
in the strategic su scenario , we model the competition among the sus as a non - cooperative game and prove the existence of an ne in two different scenarios by showing that the game is an unweighted congestion game . we model the pricing for unregistered sus with contract theory and derive suboptimal query plans for different types of sus . based on the sus' pricing scheme choices , the do optimally determines the bandwidth segmentation and pricing parameters to maximize its profit . we have conducted extensive simulations to obtain numerical results and verify our theoretical claims . the research was supported in part by grants from 973 project 2013cb329006 , china nsfc under grant 61173156 , rgc under contracts cerg 622410 , 622613 , hkust6/crf/12r , and m - hkust609/13 , as well as a grant from the huawei - hkust joint lab . s. kawade , `` long - range communications in licence - exempt tv white spaces : an introduction to soft - licence concept , '' in _ 7th international conference on cognitive radio oriented wireless networks _ , 2012 . l. yang , h. kim , j. zhang , m. chiang , and c. w. tan , `` pricing - based decentralized spectrum access control in cognitive radio networks , '' _ ieee / acm transactions on networking _ , vol . , 522 - 535 , 2013 .
according to the recent rulings of the federal communications commission ( fcc ) , tv white spaces ( tvws ) can now be accessed by secondary users ( sus ) after a list of vacant tv channels is obtained via a geo - location database . proper business models are therefore essential for database operators to manage geo - location databases . database access can be simultaneously priced under two different schemes : the registration scheme and the service plan scheme . in the registration scheme , the database reserves part of the tv bandwidth for registered white space devices ( wsds ) . in the service plan scheme , the wsds are charged according to their queries . in this paper , we investigate the business model for the tvws database under a hybrid pricing scheme . we consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the sus . the sus choices of different pricing schemes are modeled as a non - cooperative game and we derive distributed algorithms to achieve nash equilibrium ( ne ) . considering the ne of the sus , the database operator optimally determines pricing parameters for both pricing schemes in terms of bandwidth reservation , registration fee and query plans . tv white space , geo - location database , pricing , game theory , contract theory .
quantum walks are an important framework to model quantum dynamics , with applications ranging from quantum computation to quantum transport .being the quantum mechanical analogue of classical random walks , quantum walks can outperform their classical counterparts by exploiting interference in the superposition of the various paths in a graph as well as by taking advantage of quantum correlations and quantum particle statistics between multiple walkers .in fact , for multiparticle quantum walks , interactions lead to an efficient simulation of the circuit model of quantum computation .quantum walks can be formulated in both the discrete time and continuous time frameworks , where the latter can be obtained as a limit of the former . in this articlewe focus on single particle continuous time quantum walks ( ctqws ) , where the knowledge of the adjacency matrix of the graph is sufficient to completely describe the walk .several interesting algorithms have been developed in this framework .in fact , ctqws in sparse unweighted graphs are equivalent to the circuit model of quantum computation , although the corresponding simulation is not efficient . besides quantum algorithms ,ctqws are applied in areas such as quantum transport and state transfer . in most ctqw problems ,the quantity of interest is the population ( or probability amplitude ) at a particular node of the graph .for example , in the spatial search algorithm , the purpose is to maximize the probability amplitude at the solution node in the shortest possible time . in the glued trees algorithm demonstrated in , which has an exponential speed up over its classical counterpart, the walker traverses the graph therein in order to find an exit node . on the other hand , quantum transport problems involving a single excitation , for example , exciton transport in photosynthetic complexes ,can be modelled by ctqws . in such cases ,the figures of merits are typically the transport efficiency or the transfer time to a special node , known as the _ trap_. as for state transfer , the task is to send a qubit from one point of a spin - network to another with maximum fidelity . in many of these problems ,the graph in which the walk takes place possesses some symmetry which implies that the dynamics of the walker is restricted to a subspace that is smaller than the complete hilbert space spanned by the nodes of the graph . in this work ,we use invariant subspace methods , that can be computed systematically using lanczos algorithm , to obtain a reduced model that fully describes the evolution of the probability amplitude at the node we are interested in .this method involves obtaining the minimal subspace which contains this node and is invariant under the unitary evolution .this is simply the subspace that contains the node of interest , and all powers of the hamiltonian applied to it , also known as a krylov subspace .henceforth , this subspace will be denoted by .this subspace can be systematically obtained without taking into consideration the symmetries of the hamiltonian , using , for example , lanczos algorithm .this algorithm iteratively obtains the basis for the invariant subspace : the first basis element is the special node ; the basis element is calculated by applying the hamiltonian to the element and orthonormalizing with respect to the previous basis elements . 
when expressed in the lanczos basis ,any hermitian matrix becomes a tridiagonal matrix .thus , _ any problem in quantum mechanics _ wherein the dynamics is described by a time independent hamiltonian can be mapped to a ctqw on a weighted line , where the nodes are the elements of the lanczos basis . in this way ,we explore the notion of invariant subspaces to systematically reduce the dimension of the hamiltonian that completely describes the dynamics relevant to our problem .we use this method to obtain new results on several ctqw problems , as well as re - derive some other known results in a simpler manner .first , we consider the spatial search algorithm , which searches for an element contained in one of the nodes of the graph in time , which is optimal . this algorithm is known to hold optimally for structures such as the complete graph , hypercubes , lattices of dimension greater than four and more recently , for strongly regular graphs . in two dimensional lattices , the lower bound of only be achieved when the dispersion relation of the spectrum is linear at a certain energy , i.e. , it contains a dirac point , as in honeycomb ( e.g. graphene ) lattices , and crystal lattices .however , the characteristics that a graph must possess , in general , for this algorithm to run optimally remains an open question .in fact in , where the authors present a different spatial search algorithm based on the divide and conquer approach , their main criticism towards the ctqw version of the spatial search was the fact that an upper bound on the running time is unknown even if `` minor defects are introduced '' . here , we show that the algorithm runs optimally on the complete graph with imperfections in the form of broken links , and also for complete bipartite graphs ( cbg ) . in both cases ,the graphs are , in general non - regular , i.e. not all the nodes of the graph have the same degree .a particular case of the cbg is the star graph where nodes are connected only to a central node , which is a planar structure with link connectivity one .thus , this example shows that high connectivity is not a requirement for optimal quantum search .moreover , on removing links , such that , from a star graph , the emerging graph is robust as it preserves its star connectivity and search is still optimal provided that the broken link does not contain the solution . in all the graphs mentioned thus far , the hamiltonian of dimension , describing the dynamics of the algorithm , can be reduced to a hamiltonian of dimension at most four .the dynamics , driven by this reduced hamiltonian , can be viewed as a ctqw on a smaller graph , which provides an intuitive picture of the algorithm , similar to a quantum transport problem .it is worth noting that the reduced hamiltonians presented here describe the dynamics of the problem exactly and are not obtained by approximating the search hamiltonian at the avoided crossing as in .thus , this is a simple way to analyse the algorithm that , in some cases , allows us to understand why search is optimal in a certain graph without having to explicitly calculate the eigenstates of the hamiltonian .furthermore , we consider quantum transport on a graph , where an exciton is to be transferred from one node to a special node where it gets absorbed , known as the trap . 
in the scenariowhere there is no disorder , decoherence or losses , it was shown in that the transport efficiency is given by the overlap of the initial condition with the eigenstates having a non - zero overlap with the trap .we prove that this subspace is the same as the invariant subspace .this observation allows us to compute transport efficiencies without having to diagonalize the hamiltonian .we calculate the efficiency in the complete graph ( cg ) with this method ( obtaining the same result as in , which uses the eigenstates of the graph ) .furthermore , we obtain the transport efficiency on binary trees and hypercubes as a function of the number of generations and dimension respectively , for various initial conditions .finally , we show that the efficiency in all these structures increases on average , when a few links are broken randomly from the graph .a particularly interesting example is the one of breaking the link from the complete graph which connects the initial and trap nodes .in such a case , the efficiency increases to 1 , in the absence of decoherence and losses , irrespective of the size of the network . for this case , we also calculate analytically the trapping time , which does not depend on .this counter - intuitive result can be interpreted by looking at the reduced subspace of the graph , where the problem reduces to an end to end transport in a line of three nodes .similar results were obtained in , in the context of state transfer , although different methods were used for the analysis of the problem .overall , the instances presented herein show that even small perturbation to the symmetry of a structure leads to a drastic improvement of the transport efficiency in the absence of decoherence . when decoherence is present , the effect of geometry in the transport efficiency was numerically studied in , for random disordered structures .finally , we connect the results obtained for transport to the problem of state transfer in a quantum network . in the single excitation framework , the state transfer problem is equivalent to a ctqw of a single particle .we show that the fidelity of transferring an excitation from one node of the network to another node , is upper bounded by the square root of the transport efficiency in the analogous transport problem wherein is the initial state and is the trap node .this gives a simple way to upper bound the fidelity of transferring a qubit in any spin network .overall , we demonstrate that dimensionality reduction using the notion of invariant subspaces can be a useful tool to analyse ctqw based problems . by mapping a qw problem on a graph to one on a much smaller structure , the analysis of the problem becomes easier and the dynamics of the walk can be intuitively understood .krylov subspace methods and the lanczos algorithm for the analysis of ctqws have also been used in , but different results were obtained therein . in the discrete time framework , the role of symmetry and invariant subspaces were studied in .krylov subspace approaches were used to analyse adiabatic quantum search on structured database in , and to obtain bounds for information propagation on lattices in . the notion of invariant subspacesis also exploited in to simplify the analysis of parametrized hamiltonians of quantum many body systems .this paper is structured as follows : in sec . 
_ methods _ , the systematic method to obtain the reduced subspace is demonstrated . sec . _ results _ comprises the various applications of the reduction method , namely in quantum spatial search , quantum transport and state transfer . finally , we present our conclusions in sec . _ * _ dimensionality reduction of continuous time quantum walks _ : * let us consider a graph of nodes , where is the set of nodes and , the set of links . the adjacency matrix of is of dimension and is defined as follows : formally , a ctqw on the graph takes place on a hilbert space of dimension that is spanned by the nodes of the graph with . a particle starting in a state evolves according to the schrödinger equation where the hamiltonian that governs the system dynamics is the adjacency matrix , i.e. , . after time , the particle is in the state and the probability that the walker is in node is given by . the unitary evolution of the state can be expressed as so is contained in the subspace . this subspace of is invariant under the action of the hamiltonian and thus also of the unitary evolution . trivially , the dimension of this subspace is at most . however , if the hamiltonian is highly symmetrical , only a small number of powers of are linearly independent and the dimension of can be much smaller than . thus , we can reduce the dimension of the problem in the following way . let be the projector onto . then where is the reduced hamiltonian . in the derivation we used the fact that , and . now , for any state , we have where , the reduced state , . the same reasoning could be applied using the projector onto the subspace , in which case we obtain with and . this way , a reduced hamiltonian can be obtained which can be seen as the hamiltonian of a weighted graph that , in some cases , can be much simpler than the original graph we started with . here , can be the solution node for search algorithms , the trap for quantum transport and the target node for state transfer problems ( see sec . _ results _ ) . a systematic way to calculate an orthonormal basis of is given by the lanczos algorithm . this basis , which we denote by , can be obtained as follows : the first element is ; the element is obtained by orthonormalizing with respect to the subspace spanned by . the procedure stops when we find the minimum such that . it can be shown that projected in this basis has a tridiagonal form : this implies that , in fact , any problem in quantum mechanics with a time independent hamiltonian can be mapped to an equivalent problem governed by a tight - binding hamiltonian of a line with sites , with site energies and couplings . to illustrate the reduction method , we give a simple example of the quantum walk on the complete graph of n nodes , a graph wherein every node is connected to every other node , as shown in fig . [ complete ] . ( fig . [ complete ] caption : is represented in red and the other node represents the equal superposition of all nodes except . and represent their respective site energies and v the coupling between them . c ) the search hamiltonian in the reduced picture ( see eq . ) . in contrast to fig . 1b , the site energies of both nodes are equal , leading to perfect transport between them . also , the transport time is given by the inverse of the coupling , which yields the running time of the algorithm . ) the hamiltonian is given by the adjacency matrix of the graph in this case . if is a node of the graph , we have that where we define the equal superposition of all nodes except as furthermore , it is easy to see now that any state can be written as a linear combination of and . thus , to calculate , it is enough to consider the dynamics in this two dimensional subspace spanned by , driven by the reduced hamiltonian . this approach reduces the problem of calculating to the calculation of the exponential of a matrix instead of a matrix . we can see as a tight - binding hamiltonian of a structure with two sites and , with site energies and , respectively , and a coupling of , as shown in fig . [ red_completegraph ] . another interesting example is the reduction of the quantum walk on the glued trees of height with nodes to the column states , as done in , which is crucial to prove the exponential speed - up of this algorithm with respect to its classical counterpart . even when some symmetry of the graph is broken , say by breaking a link of the graph , this reduction is still very useful and captures the symmetry that remains ( see sec . _ results _ ) . this reduction method can also be used in the context of quantum transport . in fact , in the _ supplementary information _ , we show that is equal to the subspace spanned by the eigenstates of the hamiltonian which have a non - zero overlap with . this subspace is referred to as the ` non - invariant subspace ' in , where is the trapping site . let us denote this subspace as . the calculation of is important for computing the transport efficiency in various networks , in the absence of interaction with the environment . the lanczos method provides a simpler way to calculate this subspace which eliminates the need to compute the eigenstates of the full hamiltonian . this way , it also enables us to efficiently analyse the effects of perturbing the symmetry of networks on transport dynamics , as described in subsec . _ applications to quantum transport _ of _ results _ . in the following section , we use this method to analyse spatial search in highly symmetric graphs , calculate the efficiency of transport in several structures and obtain bounds on the fidelity of single qubit state transfer in spin networks .
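the reduction described in this section is easy to reproduce numerically . the sketch below ( helper names and parameters are ours ) builds the lanczos / krylov basis generated by a chosen node of the complete graph and checks that the walk amplitude at that node obtained from the reduced tridiagonal hamiltonian matches the one obtained from the full hamiltonian .

```python
# A minimal sketch (not the authors' code) of the Krylov/Lanczos reduction described above:
# starting from a chosen node |w>, build an orthonormal basis of span{|w>, H|w>, H^2|w>, ...}
# and compare the walk amplitude at |w> from the full and the reduced (tridiagonal) Hamiltonian.
import numpy as np
from scipy.linalg import expm

def krylov_basis(H, w, tol=1e-10):
    """Lanczos-style construction of an orthonormal basis of the invariant subspace of |w>."""
    n = H.shape[0]
    v = np.zeros(n); v[w] = 1.0
    basis = [v]
    while True:
        cand = H @ basis[-1]
        for b in basis:                      # orthogonalize against all previous vectors
            cand = cand - (b @ cand) * b
        norm = np.linalg.norm(cand)
        if norm < tol:                       # H applied to the last vector stays in the span: stop
            return np.array(basis).T         # columns are the basis vectors
        basis.append(cand / norm)

N, w, t = 8, 0, 1.3
A = np.ones((N, N)) - np.eye(N)              # adjacency matrix of the complete graph K_N
B = krylov_basis(A, w)
H_red = B.T @ A @ B                          # reduced Hamiltonian, tridiagonal in this basis
print("reduced dimension:", H_red.shape[0])  # 2 for the complete graph, as in the text

psi0 = np.zeros(N); psi0[w] = 1.0            # walker starts at the chosen node
amp_full = (expm(-1j * t * A) @ psi0)[w]
amp_red = (expm(-1j * t * H_red) @ (B.T @ psi0))[0]
print("amplitude at w (full)   :", amp_full)
print("amplitude at w (reduced):", amp_red)  # should agree to numerical precision
```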
the goal of the spatial search algorithm in the ctqw formalism is to find a marked basis state ; the algorithm proceeds by evolution of the initial state according to the hamiltonian , where is the adjacency matrix of and is the coupling between connected nodes , which is tuned so as to run the algorithm optimally . as described in sec . _ methods _ , the hamiltonian of a complete graph can be reduced to a two dimensional subspace , which can be seen as a line with two nodes . the reduced hamiltonian is given by and is depicted in fig . [ red_completegraph_search ] . the optimal value of is proven to be such that the dynamics is simply a rotation between and . this value is optimal because it ensures that the site energies of both the nodes and are equal , thus optimizing transport between these nodes . the initial state is approximately , so the probability amplitude at becomes approximately 1 after a time that is of the order of the inverse of the coupling . hence , the running time of the algorithm is . here , we give examples of non - regular graphs where the algorithm runs optimally , by making use of the reduction method explained in sec . _ methods _ . first , we analyse the effect of breaking links from this graph and show that the optimal running time is maintained . this can be interpreted as an inherent robustness of the algorithm to imperfections of this form . furthermore , we prove that the spatial search algorithm also runs optimally for complete bipartite graphs . * _ optimal spatial search on complete graphs with broken links _ : * ( figure caption : nodes and broken links where at most one link is broken per node . the coupling to the third node is much weaker than the coupling and can be neglected . thus , the dynamics is the same as in fig . 1c . ) here , we consider the case of breaking links from a complete graph and show analytically that the optimality of the algorithm is maintained . we assume that at most one link is removed per node , and hence in this scenario there exist two cases that require separate analytical treatment , namely one where none of the broken links were connected to the solution state , and the other where one of the broken links was connected to . we analyse the former in this section while the latter is explained in the _ supplementary information _ . let us consider that the links broken correspond to the set , that is , at most one link is removed from each node , as shown in fig . [ completegraph_kbl ] . also , let be the set of nodes comprising the broken links . the graphs obtained by breaking links from a complete graph are not regular and hence violate the requirement of regularity in networks in order to achieve a quadratic speed up . applying the lanczos algorithm , we obtain the reduced basis , where is defined as i.e. , the equal superposition of all nodes of the graph except the solution , and thus . is a state that is orthogonal to both and and is constructed as , where and . also , let such that , and , . thus , for large , the search hamiltonian in this basis is , we shall use degenerate perturbation theory to estimate the running time of the algorithm in this scenario . we write , with such that has terms of , has terms of order while contains terms ( see fig . [ red_completegraph_kbl ] ) .
has eigenstates and with eigenvalues and respectively and thus , in order for the dynamics to rotate between ( which is approximately for large ) and the solution state , the eigenvalues must be degenerate , making .the eigenstates of the perturbed hamiltonian are having eigenvalues .this gives the running time of the algorithm to be thereby preserving the optimal quadratic speed up .this result can be perceived as an inherent robustness of the algorithm to imperfections in the form of broken links .one could argue that this robustness has to do with the high connectivity of the structure .however , in the following subsection , we give the example of the star graph , a structure with low connectivity where the algorithm runs optimally that is also robust to broken links .also , in the context of quantum transport , we show that breaking a link from the complete graph can affect severely the dynamics if one starts with a localized initial state .+ * _ optimal spatial search on complete bipartite graphs _ : * another example of a highly symmetrical structure , that is in general non - regular , is the complete bipartite graph ( cbg ) . here , we show that spatial search is optimal for this class of graphs .a complete bipartite graph has two sets of vertices and such that each vertex of is only connected to all vertices of and vice - versa .this set of graphs is also denoted as , where and and we have .this is a non - regular graph , as long as .the complete bipartite graph is shown in fig . [ completebigraph ] .0.5 with the solution node , represented in red .b ) the reduced search hamiltonian for the complete bipartite graph with in the lanczos basis . are the equal superposition of the nodes in partition 1 ( excluding ) and 2 , respectively .however , the understanding of why the search is optimal in this graph is shown in fig . 3c .c ) the same hamiltonian as in fig .3b , after a basis rotation gives us an idea as to why the algorithm works optimally .the resultant basis is , and .the degeneracy between site energies of and facilitates transport between these two nodes while transport between and is inhibited by the energy gap between them ( much larger than the coupling ) .since there is a considerable overlap between the initial superposition of states and , there is a large probability amplitude at after a time ., title="fig : " ] + + 0.5 + + 0.5 quantum search was also analysed in these graphs in the formalism of discrete - time scattering quantum walks in .however , in that framework , the algorithm does not run optimally if . in this case , although each run of the algorithm takes time , the same must be repeated , on average , times to find the solution state with high probability .so , if is of , then and the total running time is linear in . in our analysis, we show that the ctqw algorithm works in time for all possible values of and . to analyse the problem , we first assume , without loss of generality , that the solution state belongs to the set of vertices ( we shall eliminate this requirement later ) . the subspace is spanned by , and .the reduced hamiltonian can be written as : let , and , . following the procedure in the previous subsection , we calculate , where a is the adjacency matrix of the cbg , and divide it in terms of and with this hamiltonian can be seen as a line with three nodes as shown in fig . [ red_completebigraph_search ] . 
in order to use perturbation theory we need to diagonalize .the eigenstates of are with eigenvalue , with eigenvalue and with eigenvalue . since has the largest overlap with , we choose such that and form a degenerate eigenspace of .the reduced search hamiltonian in the eigenbasis of , depicted in fig .[ red_completebigraph_search2 ] , gives a clearer idea as to why the search is optimal for this graph .the matrix element responsible for the speed of the search is thus , we obtain the running time : however , unlike previous cases , the success of the algorithm will not be 1 since this will be given by the overlap of the initial state with .the dynamics will rotate between and , leaving approximately invariant .thus , the probability of finding the solution by measuring at time is we have only if , in which case the graph is regular .it is important to note that one is unaware of which partition contains the solution state .the optimal measurement time depends on whether the solution node is in the partition of nodes or in the partition of nodes . depending on this, the optimal measurement time would be and respectively .thus , the strategy would be to measure interchangeably at time and then at time until the solution is found .thus , in such a scenario the expected running time would be , thereby preserving the quadratic speed up .in fact , this is an upper bound for the expected running time obtained by neglecting the probability of finding the solution even on measurement at the wrong time . in the extreme casewhen and i.e. , , we obtain a star graph . in this scenario ,the optimal is given by and the algorithm works with .thus on average , we have to repeat the algorithm twice to find the solution .we discuss this case in more detail next .+ + * _ optimal spatial search on the star graph _ : * the case of the star graph is particularly interesting because it is a planar structure with node and link connectivity equal to 1 .the structures for which quantum search is known to hold optimally are those with typically high connectivity ( complete graphs , hypercubes ). the quantum search algorithm also works with full quadratic speed - up on lattices of dimension greater than four and in two dimensional lattices with a dirac point in time .we will focus on the case where , , i.e. the solution is not contained in the central node of the star graph .the case , , where the solution is in the central node , is trivial because the graph is biased towards the solution and by starting in state , we can measure the solution with probability , in a time , which does not depend on .so , let the central node be denoted as , and assume . since this is a particular case of the cbg , eqs . and with , , yield in the basis .the eigenstates of are , and . 
in this basiswe have using degenerate perturbation theory , we obtain the ground and first excited eigenstates with energies .the running time is given by and probability of success , for the initial state is it is interesting to note that the algorithm also works , with probability , if one starts the quantum walk at the central node , since .this way , one avoids the cost of preparing the initial superposition of states to run the algorithm .furthermore , for the star graph , it is easy to analyse the robustness of the algorithm to imperfections in the graph in the form of broken links since the graph obtained after removing links is still a star graph .we assume links can be randomly broken so that we possess no knowledge of the links that were broken , nor of the value of .furthermore , we consider constant such that .we assume that the link containing the solution is not removed ; this probability is and therefore negligible for large . in this scenario ,the optimal value of is , so degenerate perturbation theory is still valid and the running time is : with success probability .thus , quantum search on the star graph is not only optimal but also is fairly robust to defects to the structure in the form of broken links .let us consider the dynamics of an exciton in a network with sites , governed by a tight binding hamiltonian with nearest neighbour couplings , defined as where is the site energy at site and is the coupling between site and . for our analysis, we shall assume that all the site energies are uniform and thus can be set to zero , simply by an overall energy shift . also , we assume and choose our energy units such that .thus , is nothing but the adjacency matrix of a graph with nodes , whose links connect nearest neighbours of the network ( henceforth , we shall use sites and nodes interchangeably ) .we consider that in one of the nodes of the graph , there is a trap that absorbs the component of the exciton s wave function at this node at a rate , known as the trapping rate .this model is motivated by the study of exciton transport in natural light harvesting systems . to model the trapping dynamics , we introduce the trapping hamiltonian : this matrix is anti - hermitian and leads to the expected non - unitary dynamics described above .we consider as figure of merit the efficiency of transport , defined as which gives the probability that the exciton is absorbed at the trap integrated over time .the total hamiltonian describing the dynamics is given by where is the adjacency matrix of the graph .the scenario assumed here is the ideal one , i.e , there is no disorder in the couplings or site energies of the hamiltonian nor decoherence during the transport . in this regime , in , the authors calculate the transport efficiency as the overlap of the initial state with the subspace spanned by the eigenstates of the hamiltonian that have a non - zero overlap with the trap . earlier in sec ._ methods _ , it was stated that this subspace is the same as the invariant subspace , which can be obtained without diagonalizing the hamiltonian .so the dynamics is such that the component of the initial condition within the space is absorbed by the trap node whereas the component outside this subspace remains in the network ( see proof in _ supplementary information _ ) .thus , computing the transport efficiency boils down to finding the overlap of the initial condition with . 
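a minimal numerical sketch of this statement , with parameter choices of our own ( complete graph , localized start , trapping rate and time grid ) : route ( i ) integrates the absorbed population under the non - hermitian hamiltonian , while route ( ii ) simply projects the initial state onto the invariant subspace generated by the trap ; the two numbers should coincide .

```python
# A sketch (assumptions ours, not from the paper) comparing two routes to the transport
# efficiency on a graph:
# (i)  propagate with H_eff = H - i*kappa*|trap><trap| and integrate the absorbed population,
# (ii) project the initial state onto the Krylov subspace generated by the trap node.
import numpy as np
from scipy.linalg import expm, orth

def invariant_subspace_projector(H, node):
    """Projector onto span{|node>, H|node>, H^2|node>, ...} via an orthonormalized Krylov matrix."""
    N = H.shape[0]
    v = np.zeros(N); v[node] = 1.0
    cols = [v]
    for _ in range(N - 1):
        v = H @ v
        nrm = np.linalg.norm(v)
        if nrm < 1e-12:
            break
        v = v / nrm                          # normalization keeps the Krylov matrix well conditioned
        cols.append(v)
    B = orth(np.column_stack(cols))          # orthonormal basis of the invariant subspace
    return B @ B.T

N, trap, start, kappa = 10, 0, 3, 1.0
H = np.ones((N, N)) - np.eye(N)              # complete graph with uniform couplings
trap_proj = np.zeros((N, N)); trap_proj[trap, trap] = 1.0
H_eff = H - 1j * kappa * trap_proj           # anti-Hermitian trapping term

# (i) absorbed population: eta = 2*kappa * integral |<trap|psi(t)>|^2 dt (trapezoidal rule)
dt, steps = 0.01, 20000
U = expm(-1j * dt * H_eff)                   # one short-time propagator, reused at every step
psi = np.zeros(N, dtype=complex); psi[start] = 1.0
trap_pop = np.empty(steps + 1)
for k in range(steps + 1):
    trap_pop[k] = abs(psi[trap]) ** 2
    psi = U @ psi
eta_dynamic = 2 * kappa * np.trapz(trap_pop, dx=dt)

# (ii) overlap of the initial state with the invariant subspace of the trap
psi0 = np.zeros(N); psi0[start] = 1.0
P = invariant_subspace_projector(H, trap)
eta_subspace = float(psi0 @ P @ psi0)

print("efficiency from dynamics :", round(eta_dynamic, 3))
print("efficiency from subspace :", round(eta_subspace, 3))   # both ~1/(N-1) for this example
```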
in the following subsections ,we give examples of how to analytically calculate transport efficiency on various structures given different initial conditions .we also analyse how transport efficiency can be improved by perturbing the symmetry of the complete graph by breaking one link . finally , we give numerical evidence that breaking a few links in highly symmetric structures leads to the improvement of transport efficiency , if the initial state is localized at one node . *_ calculation of transport efficiencies for some graphs with symmetry _ : * _ * complete graph : * _ as in sec ._ methods _ , we obtain that the reduced subspace for the complete graph with nodes is given by with the reduced hamiltonian for the transport problem is : in the aforementioned basis .if the initial state is localized at a node , the efficiency is given by this way , we recover the result obtained in without the need to solve the equations of motion or requiring to find the eigenstates of the system ._ * binary tree : * _ here we consider our graph to be a binary tree with levels , where the number of nodes is .the hamiltonian of the graph is given by : we place the trap at the root of the tree , i.e. .it is well known that a quantum walk on a binary tree can be reduced to the quantum walk on a line where each node represents a column state these column states are readily obtained by applying lanczos algorithm with the root node ( we define ) as the initial node . if the initial state of the transport problem is a state localized in column , the transport efficiency is : thus , we find that , in such a localized case , the efficiency decreases exponentially with the distance to the trap . _* hypercube : * _ another highly symmetric structure that appears frequently in the literature is the hypercube in the context of quantum computation , quantum transport and state transfer . here , we consider transport on a hypercube of dimension with sites .we label the sites of the hypercube by strings of bits such that each site is connected to another site if they differ by a single bit flip .this way , the hamiltonian of the graph can be written as : where is the pauli matrix acting on the bit . using symmetry arguments , it is shown in that the dynamics in this structure can be reduced to the subspace spanned by the symmetric states with bits 1 and bits 0 : also known as dicke states .this is done in the context of the search problem where the solution state is assumed to be at . in this picture , the hypercube can be seen as a chain with nodes . here we also assume , without loss of generality , that our trap state is applying lanczos algorithm , we also obtain that the invariant subspace , is spanned by the dicke states in eq . without using any symmetry arguments .this implies that , if the initial state is localized at a site , labelled by a bit string with bits 1 , the transport efficiency to the trap is it is interesting to note that the efficiency is not a monotonic function of the distance from the trap . 
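the hypercube case can be checked directly with the same machinery ; in the sketch below ( dimension and labelling are our choices ) the efficiency for localized starts is grouped by hamming distance to the trap , and the non - monotonic behaviour noted above is visible in the output .

```python
# A small illustration (our own parameter choices) of the hypercube discussion above:
# compute the transport efficiency <j| P |j> for localized starts |j>, where P projects
# onto the invariant subspace generated by the trap node 00...0, grouped by Hamming weight.
import numpy as np
from itertools import combinations
from scipy.linalg import orth

def hypercube_adjacency(n):
    N = 2 ** n
    A = np.zeros((N, N))
    for x in range(N):
        for b in range(n):
            A[x, x ^ (1 << b)] = 1.0         # flipping one bit connects neighbouring vertices
    return A

n = 4
A = hypercube_adjacency(n)
N = 2 ** n
v = np.zeros(N); v[0] = 1.0                  # trap at the all-zeros bit string
cols = [v]
for _ in range(N - 1):                       # normalized Krylov columns span the invariant subspace
    v = A @ cols[-1]
    cols.append(v / np.linalg.norm(v))
B = orth(np.column_stack(cols))
P = B @ B.T
print("invariant subspace dimension:", B.shape[1])   # n+1, matching the Dicke-state picture
for k in range(1, n + 1):
    sites = [sum(1 << b for b in bits) for bits in combinations(range(n), k)]
    eff = float(np.mean([P[s, s] for s in sites]))   # <s|P|s> since |s> is a basis vector
    print("hamming distance", k, ": efficiency =", round(eff, 4))
```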
if we consider the initial condition to be a statistical mixture of all sites we obtain the average efficiency : reproducing analytically the numerical result of for transport on the hypercube of dimension 4 ( in the limit of no disorder and no dephasing ) ._ * improving transport efficiency by removing links from highly symmetric graphs * : _ _ * complete graph with one broken link : * _ for a complete graph with nodes , the transport efficiency is given by provided the transport begins from a localized state .we find that breaking one link from a complete graph increases the transport efficiency .in fact breaking a link that connects the starting node to the trap makes the efficiency of transport go up to 1 .let denote the trap node , denote the starting node and be the equal superposition of the remaining nodes .the reduced space is spanned by and thus , this is counter - intuitive , as of all the available links , breaking the link that directly connects the starting node to the trap gives the maximum efficiency .similarly , it can be shown that removing a link between the initial node and another arbitrary node other than the trap gives .the above phenomenon can be better understood by calculating the dynamics of the resultant graph . in the reduced picture , the trap is coupled to which is in turn coupled to the starting node .the reduced hamiltonian of the graph in the basis is , incorporating the anti - hermitian term of the trap and considering large , for simplicity , results in , let the state of the exciton at time , .the dynamics of the system is thus , it is important to notice that due to adiabatic elimination , , resulting in an effectively two level system whose dynamics is governed by and .the schrdinger equation simplifies to the eigenstates and eigenvalues of are here , .it is worth noting that the two eigenstates are not orthogonal and the corresponding eigenvalues have both real and imaginary parts which is due to the fact that is not hermitian .now , expressing and in terms of the two eigenstates enables us to calculate the probability of reaching the trap .thus , here , and .the transport efficiency in this case is 1 .the trapping time or the transfer time , which is the average time required by the exciton to get absorbed by the trap is another relevant measure in quantum transport .the trapping time is given by , in this scenario , and , a closer look at shows that for very small , there is no trapping at all while for a large , one observes freezing of the evolution of the exciton owing to the quantum zeno effect .the optimal value of the transfer time is obtained for .this is in accordance to , wherein the authors find that the optimal conditions for transport of an exciton in photosynthetic complexes , are when the time scales of hopping and trapping converge ._ * highly symmetric graphs with broken links : * _ here , we show how the transport efficiency changes as we break links in the graphs mentioned previously in this section . 
for a graph with broken links ,we calculate the average transport efficiency by projecting the initial state , onto the subspace for each of the possible configurations of the graph with broken links and average over all of them .the initial state is set as as a statistical mixture of all nodes for the complete graph and the hypercube , while it is a statistical mixture of all leaf nodes in the case of a binary tree .the results are shown in fig .[ effbl ] .note that this approach is much faster than diagonalizing the graph hamiltonian and finding the overlap of the initial state with the eigenstates having non - zero overlap with the trap .for all these structures , we observe that the average transport efficiency always improves by breaking a few links from the graph .this can be attributed to the fact that by breaking a small number of links , the symmetry is reduced and the dimension of the subspace to which the dynamics is restricted increases .thus , this can be thought of as increasing the number of possible paths from the starting node to the trap , which previously lied outside this space owing to symmetry .however , we expect that when the number of broken links is comparable to the total number of links in the graph , the size of the reduced space would be very low as the trap gets decoupled from the graph in a large number of configurations , and hence the efficiency is low .this is visible for the case of a binary of three levels and four broken links as shown in fig .[ effbl ] a ) with the average transport efficiency being lower than in the case of no broken links . in this sectionwe show that the considerations made thus far for quantum transport can also be applied to the transfer of a qubit state in a network of spins with nearest neighbour interactions .we show that the square root of the efficiencies obtained for transport in various graphs are also upper bounds for the fidelity of the equivalent state transfer problem in the same graph .thus , all the results obtained for quantum transport can be can also be interpreted in the context of qubit transfer in a network . in particular , we conclude that the fidelity of state transfer can be enhanced by removing links in the network .let us assume we have spins disposed in the nodes of a graph , where each pair of spins interact if and only if they are connected by an edge .we model the interaction via the -hamiltonian with uniform coupling : where the sum runs over the pairs that represent an edge of the graph , and and are pauli matrices acting on the spin .the hamiltonian can be written equivalently as using the spin ladder operators and .this hamiltonian conserves the number of excitations , i.e. it commutes with the operator . if we restrict ourselves to the single excitation subspace of the total hilbert space , and define the basis we can write which is the same hamiltonian defining a continuous time quantum walk used previously . this way ,if our initial state is , the fidelity of transfer to a node is upper bounded by the overlap of with the subspace , since this is an invariant subspace of the hamiltonian . this way , where is the projection operator onto . 
but as ( the efficiency of transport in the graph defined by starting from a localized state ) , the fidelity of transfer to state is bounded by .it is important to note that the bound is , in general , not tight .the bound will only be tight when the reduced hamiltonian ( projected onto ) is the same as the reduced hamiltonian of a graph where there is perfect state transfer ( pst ) , i.e. , a graph where the maximum fidelity of state transfer is one .also , in the cases where the reduced hamiltonian is the same as the reduced hamiltonian of a graph where there is pretty good state transfer ( pgst ) , i.e. , the maximum fidelity , , where can be arbitrarily close to 0 , the bound is arbitrarily tight .this can be fulfilled , for example , if the reduced hamiltonian is a line having number of nodes equal to , where is prime , or with .a graph where this is observed is a binary tree with with levels such that fulfils these criteria . with this observation, one could think of ctqws as a way to prepare some multipartite entangled states with high fidelity , which is , in general , a difficult task .a quantum walk on the binary tree starting at the root node ( i.e. , ) , would evolve , after some time , to a state arbitrarily close to a -state with defined in eq . .this can be perceived as a way to prepare genuine multipartite entangled states with no time dependent control .another way to create such a highly entangled state is by tuning the couplings and site energies of the complete graph as in the spatial search ( see eq . [ ham_line2d ] ) such that the dynamics oscillates between a special node , with energy , and the equal superposition of all other nodes as depicted in fig .[ red_completegraph_search ] .thus , by starting the quantum walk at this special node , after a time , the quantum walk would be in the highly entangled state ( see eq .[ swbar ] ) . a physical implementation of a complete graph can be achieved in ions traps where the interaction between the ions can be approximately distance independent .in this work , we explore the notion of invariant subspaces to simplify the analysis of continuous time quantum walk ( ctqw ) problems , where the quantity of interest is the probability amplitude at a particular node of the graph .this way , we obtain new results concerning the spatial search algorithm , quantum transport and state transfer .first , we present an intuitive picture of the spatial search algorithm by mapping it to a transport problem on a reduced graph whose nodes represent the basis elements of the invariant subspace . furthermore , we show that the algorithm runs optimally ( in time ) on the complete graph with broken links and on complete bipartite graphs ( cbg ) .these constitute one of the first examples of non - regular graphs where this happens .a particular case of the cbg is the star graph , which is planar , has low connectivity and is robust to imperfections in the form of missing links .presently , we are considering the robustness of this algorithm to other kinds of defects . during the completion of this article, we came across refs . . in the former, it is shown that high connectivity is not a good indicator for optimal spatial search by giving an example of a graph with low connectivity where the algorithm runs optimally , and another graph with high connectivity , where the running time is not optimal . 
in , a diagrammatic picture of the spatial search algorithm is presented . furthermore , we present a simple method to calculate the transport efficiency in graphs without having to diagonalize the hamiltonian . the efficiency is given by the overlap of the initial state with the invariant subspace . thus , we calculate analytically the transport efficiency in structures such as the complete graph , the binary tree and the hypercube , given various initial conditions . moreover , we explore the change in transport efficiency with broken links in these graphs . for the complete graph , breaking a link from the starting node increases the efficiency from to a constant : , if the link broken was connected to the trap , and otherwise . in the former case , we analytically calculate the transfer time , which is independent of and is a function of the trapping rate . finally , we show that the square root of the efficiency of transport on a graph from a starting node to a destination ( trap ) node gives an upper bound on the fidelity of a single qubit transfer between these two nodes . this bound is tight if and only if the reduced hamiltonian is that of a spin network wherein perfect state transfer takes place . in summary , dimensionality reduction is an intuitive way to understand the behaviour of ctqws in graphs with symmetry . hence , this might lead to the design of new continuous time algorithms , the analysis of the robustness of ctqw algorithms to imperfections , and to novel state transfer and state engineering protocols . ln , sc and yo thank the support from fundação para a ciência e a tecnologia ( portugal ) , namely through programmes ptdc / poph and projects pest - oe / ege / ui0491/2013 , pest - oe / eei / la0008/2013 , uid / eea/50008/2013 , it / qusim and crup - cpu / cqvibes , partially funded by eu feder , and from the eu fp7 projects landauer ( ga 318287 ) and papets ( ga 323901 ) . mm and hn acknowledge support from the quantum artificial intelligence laboratory at google . furthermore , ln and sc acknowledge the support from the dp - pmi and fct ( portugal ) through scholarships sfrh / bd/52241/2013 and sfrh / bd/52246/2013 , respectively . we would like to thank mohan sarovar , akshat kumar and bruno mera for useful comments , as well as abolfazal bayat and rainer blatt for useful discussions while visiting the physics of information group in lisbon .
in , the authors calculate the transport efficiency of structures in the no disorder , no dephasing regime , by calculating the overlap of the initial state with the subspace spanned by the eigenstates of the hamiltonian that have a non - zero overlap with the trap node . let this subspace be denoted by . here we prove that this subspace is equal to the space containing the trap node and that is invariant under the unitary evolution , denoted by . now , where , are the minimum number of eigenstates of with , such that ( there will be eigenstates with for ) . here , by _ minimum _ number of eigenstates , it is meant that , in the case of degenerate eigenspaces , more than one eigenstate can have a non - zero overlap with . in such a scenario , this ambiguity is resolved by choosing the eigenvector from this degenerate eigenspace that has the maximum possible overlap with and orthogonalizing all the other vectors within this eigenspace with respect to it . this implies that the remaining eigenvectors in the degenerate space would have zero overlap with after orthogonalization . this procedure is explained in , where this subspace is referred to as the non - invariant subspace and its calculation provides a simple way of obtaining the efficiency of transport to a trapping site on the graph ( in the absence of dephasing and losses ) . let us first assume that and . it is simple to see that by expressing the state as where in the first step we used that for . since the states span and each of these states can be expressed in terms of elements of , we conclude that and . now , it remains to show that each element of can be expressed as where are coefficients and . for this to happen we obtain the condition defining the matrix , eq . is equivalent to the condition . thus , must be an invertible matrix , so we must have . to show that is always invertible we show that . for this we define two matrices , and the diagonal matrix such that . because for we conclude that . also , is of the vandermonde form , so its determinant is given by . because all belong to different eigenspaces for , all are different from each other and the vandermonde determinant is not zero . thus , so is invertible . this completes the proof that . thus , the subspace comprising the eigenstates of the hamiltonian having a non - zero overlap with is the same as the one containing and is invariant under the unitary evolution . the advantage of working with the latter subspace is that one does not need to diagonalize the hamiltonian to obtain this space . from a complete graph , links are broken in a manner such that at most one link is broken per node , including the solution state , and thus , . the system hamiltonian evolves in a four dimensional subspace , as we shall show subsequently . we assume that the link connected to the marked state that is broken is represented by . the remaining broken links are not connected to , and so the set of all these broken links is represented by , whose cardinality is . also , let be the set of nodes comprising these broken links . now , let be the equal superposition of the nodes corresponding to the broken links that are not connected to . also , let be the equal superposition of the nodes that have degree , i.e. , they do not correspond to any broken link . projecting onto the space gives the reduced hamiltonian in the basis , where and , the search hamiltonian is thus , the initial superposition of states can be approximated to be in the limit of large . now , let , where , and being large .
thus becomes degenerate perturbation theory enables us to separate into , and of terms of order , and respectively .we find the critical value of and the eigenvalues of to be .thus the running time is again , which is the optimal value .here we show that if the initial state , the transport efficiency is one .the efficiency of transporting an exciton from a starting node to the trap is given by now , using the schrdinger equation to replace , thus , this is using the fact that the anti - hermitian term of the hamiltonian reduces the norm of the state at the rate and for , the component of the wave function within gets absorbed completely and hence .this implies that to calculate the transport efficiency of an exciton starting from an initial state to a trap , it suffices to calculate the overlap of the initial state with .the component of the exciton outside this subspace will not get absorbed by the trap , but rather will remain in the network .
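as a check of the two statements above ( the efficiency equals the overlap of the initial state with the invariant subspace , and that subspace can be built without diagonalizing the hamiltonian ) , the sketch below constructs the subspace containing the trap node by a lanczos - style krylov iteration and compares the resulting overlap with the long - time absorbed probability under an anti - hermitian trapping term ; the complete graph , the trapping rate and the evolution time are illustrative assumptions .

```python
import numpy as np
from scipy.linalg import expm

def invariant_subspace(H, trap, tol=1e-10):
    """Orthonormal basis of the smallest H-invariant subspace containing |trap>,
    built by a Lanczos-style Krylov iteration (no diagonalization of H needed)."""
    n = H.shape[0]
    v = np.zeros(n)
    v[trap] = 1.0
    basis = [v]
    while True:
        w = H @ basis[-1]
        for u in basis:                      # Gram-Schmidt against the current basis
            w = w - (u @ w) * u
        if np.linalg.norm(w) < tol:
            return np.array(basis)           # rows form the orthonormal basis
        basis.append(w / np.linalg.norm(w))

N, trap, start, kappa = 16, 0, 1, 1.0        # illustrative parameters
H = np.ones((N, N)) - np.eye(N)              # complete-graph adjacency as the Hamiltonian

# efficiency predicted from the overlap with the invariant subspace
B = invariant_subspace(H, trap)
psi0 = np.zeros(N)
psi0[start] = 1.0
eta_overlap = np.sum((B @ psi0) ** 2)        # equals 1/(N-1) for this graph

# efficiency from the dynamics with an anti-hermitian trap: 1 - ||psi(T)||^2
H_eff = H.astype(complex)
H_eff[trap, trap] -= 1j * kappa
psiT = expm(-1j * H_eff * 300.0) @ psi0.astype(complex)
eta_dynamics = 1.0 - np.linalg.norm(psiT) ** 2

print("subspace dimension   :", B.shape[0])
print("overlap prediction   :", round(eta_overlap, 4))
print("long-time absorption :", round(eta_dynamics, 4))
```

the two printed efficiencies agree because the component of the walker outside the subspace never reaches the trap , exactly as argued above .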
continuous time quantum walks provide an important framework for designing new algorithms and for modelling quantum transport and state transfer problems . often , the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full hilbert space . in this work , we use invariant subspace methods , which can be computed systematically using the lanczos algorithm , to obtain the reduced set of states that encompasses the dynamics of the problem at hand without specific knowledge of the underlying symmetries . first , we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal : complete graphs with broken links and complete bipartite graphs , in particular the star graph . these examples show that regularity and high connectivity are not needed to achieve optimal spatial search . we also show that this method considerably simplifies the calculation of quantum transport efficiencies . furthermore , we observe improved efficiencies by removing a few links from highly symmetric graphs . finally , we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an xy spin network .
_ in vitro _microscopy is crucial for studying physiological phenomena in cells . for many applications , such as drug discovery , cancer cell biology and stem cell research , the goal is to identify and isolate events of interest .often , these events are rare , so automated high throughput imaging is needed in order to provide statistically and biologically meaningful analysis .thus , an ideal technique should be able to image and analyze thousands of cells simultaneously across a wide field of view ( fov ) . to observe dynamical processes across various spatial and temporal scales , both high spatial and high temporal resolutionare required .existing high throughput imaging techniques can not meet the space - bandwidth-_time _ product ( sbp - t ) required for widefield _ in vitro _ applications . here, we introduce a new computational microscopy technique with both high spatial and temporal resolution over a wide fov .our method is an extension of fourier ptychographic microscopy ( fpm ) , which overcomes the physical space - bandwidth product ( sbp ) limit . instead of choosing between large fov _ or _ high resolution , fpm achieves both by trading acquisition speed .illumination angles are scanned sequentially with a programmable led array source ( fig .[ setup ] ) , while taking an image at each angle . tilted illumination samples different regions of fourier space , as in synthetic aperture and structured illumination imaging .though the spatial resolution of each measurement is low , the images collected with high illumination angles ( dark field ) contain sub - resolution information .the reconstructed image achieves resolution beyond the diffraction limit of the objective - the sum of the objective and illumination nas .distinct from synthetic aperture , fpm does not measure phase directly at each angle , but instead uses nonlinear optimization algorithms similar to translational diversity and ptychography .conveniently , the led array coded source used here is implemented as a simple and inexpensive hardware modification to an existing microscope .our proposed method uses efficient source coding schemes to reduce the capture time by several orders of magnitude .( 0.2 na ) objective .multiple images are captured with coded illumination in order to reconstruct higher resolution ( up to 0.8 na ) .( b ) comparison of illumination schemes in terms of space , bandwidth and acquisition time .sequential fpm scans through each led , achieving large sbp at a cost of speed .our source - coded fpm implements hybrid patterning to achieve the same sbp with sub - second acquisition time .( c ) the number of images required for source - coded fpm ( blue ) grows more than 8 slower than for sequential fpm ( red ) as the final resolution increases .( solid lines : theoretical , points : our led array ) .[ setup],scaledwidth=75.0% ] the standard approach for large sbp microscopy is slide scanning , in which the sample is mechanically moved around while imaging with a high resolution objective to build up a large fov .scanning limits throughput and is unsuitable for _ in vitro _ imaging of dynamic events . instead of starting with high resolution and stitching together a large fov ,fpm starts with a large fov and stitches together images to recover high resolution .a major advantage is its ability to capture the required set of images with no moving parts , by varying illumination angles . 
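a minimal sketch of the fpm forward model implied above ( our simplification , not the authors ' code ) : each illumination angle shifts the sample spectrum , the objective pupil selects a low - pass window , and the camera records intensity ; grid sizes , pupil radius and the test object are illustrative .

```python
import numpy as np

def fpm_forward(obj, pupil, kx, ky):
    """One simulated low-resolution FPM intensity image: the tilted illumination
    shifts the sample spectrum, the objective pupil low-pass filters it, and the
    camera records intensity."""
    n = obj.shape[0]
    m = pupil.shape[0]
    c = n // 2
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    window = spectrum[c - ky - m // 2: c - ky + m // 2,
                      c - kx - m // 2: c - kx + m // 2]       # shifted m x m patch
    field = np.fft.ifft2(np.fft.ifftshift(window * pupil))    # field at the camera
    return np.abs(field) ** 2

# toy example: a weak phase object, a circular pupil, one dark-field illumination angle
n, m = 256, 64
rng = np.random.default_rng(0)
obj = np.exp(1j * 0.5 * rng.random((n, n)))
yy, xx = np.mgrid[-m // 2: m // 2, -m // 2: m // 2]
pupil = (xx ** 2 + yy ** 2 <= (m // 4) ** 2).astype(float)    # radius ~ objective NA
img_bf = fpm_forward(obj, pupil, kx=0, ky=0)                  # on-axis LED: brightfield
img_df = fpm_forward(obj, pupil, kx=40, ky=0)                 # large angle: dark field
print("mean intensity, brightfield vs dark field:", img_bf.mean(), img_df.mean())
```

the on - axis image is brightfield , while the large - angle image falls outside the pupil and is dark field , which is why it comes out much dimmer .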
in addition , scanners use high magnification objectives with short depth of field ( dof ) , thus requiring extensive auto - focusing mechanisms . in contrast , fpm provides robustness to focus errors because its dof is longer than that which would be provided by a high - magnification objective of equivalent na .further , out of focus images can be digitally refocused . the main limitation of fpm for real - time _ in vitro _applications is long acquisition times and large datasets .not only are a large number of images ( ) captured for each reconstruction , but also long exposure times are needed for the dark field images . to shorten exposure times ,we first built a custom hardware setup consisting of high - brightness leds and fast control circuitry ( see methods [ method - setup ] ) .this allows us to reduce total acquisition time by , enabling a full 173 led scan to capture 0.96 gigapixels of data in under 7 seconds . while this speed is already suitable for many _ in vitro _applications ( _ e.g. _ cell division processes ) , fast subcellular dynamics ( _ e.g. _ vesicle tracking ) require sub - second capture to avoid severe motion blur .since the camera s data transfer rate is the limiting factor , large sbp images can not be captured in less than 1 second unless we can reduce the data requirements .furthermore , for studies requiring time - series measurements , the large data requirement ( gigapixel per dataset ) poses severe burdens on both storage and processing . in order to reconstruct large sbp from fewer images ,we need to fundamentally change our capture strategy .to do this , we eliminate redundancy in the data by designing new source coding schemes . the redundancy arises because of a overlap requirement in fourier space for neighboring leds .this means that we must capture more data than we reconstruct if using ` sequential ' fpm .our angle - multiplexing scheme instead turns on multiple leds simultaneously for each measurement , allowing better coverage of fourier space with each image . without eliminating the overlap, we therefore fill in fourier space faster .previous work employed a random coding strategy across both brightfield and dark field regions .one problem with this scheme is that the brightfield images and dark field images have large differences in intensity ( - 100 ) .considering poisson noise statistics , this means that images with mixed brightfield and dark field components may suffer from the dark field signal being overwhelmed by the brightfield noise . 
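a back - of - the - envelope illustration of this noise argument , with assumed photon counts ( the 100 ratio follows the text above , the absolute numbers are ours ) :

```python
import numpy as np

bright, dark = 1.0e4, 1.0e2      # photons per pixel, illustrative 100x ratio
noise_mixed = np.sqrt(bright + dark)          # Poisson shot noise of the combined image
print("dark-field signal             :", dark)
print("shot noise of the mixed image :", round(noise_mixed, 1))
print("dark-field SNR, mixed capture :", round(dark / noise_mixed, 2))
print("dark-field SNR, separate      :", round(dark / np.sqrt(dark), 2))
```

under these assumptions the dark - field signal is buried at roughly unit snr when mixed with brightfield light , but is recovered at an order of magnitude higher snr when captured separately .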
as a result, we significantly improve our multiplexing results here by separating brightfield from dark field leds .furthermore , we note that asymmetric illumination - based differential phase contrast ( dpc ) provides a means for recovering quantitative phase and intensity images out to 2 the objective na with only 4 images .hence , there is no need to individually scan the brightfield leds .our new method , termed source - coded fpm , uses a hybrid illumination scheme : it first captures 4 dpc images ( top , bottom , left , right half - circles ) to cover the brightfield leds , then uses random multiplexing with 8 leds to fill in the dark field fourier space region ( fig .[ setup]b ) .source - coded fpm approaches the theoretical limit for , set by the camera s data transfer rate .we achieve of 46 megapixels / second , calculated according to ( 4 fov and 0.8 na captured in 0.8 seconds ) .though the data rate of our camera is larger ( 138 megapixels / second ) , we leave some redundancy in order to ensure robust algorithm convergence .reconstruction algorithms that explicitly take _ a priori _ information into account , such as sparsity based methods , could further improve capture speed ; here , for generality , we choose not to make any assumptions on the sample .both sequential fpm and source - coded fpm require the number of images in the dataset to grow quadratically with improved final resolution .this is due to the coverage area in fourier space increasing in proportion to the square of the final na .however , our source - coded method has a slower growth rate ( fig .[ setup]c ) and stays flat for less than 2 resolution improvement .conveniently , these techniques can flexibly trade off fov , resolution and acquisition time by choosing the illumination angle range .for _ in vitro _applications , stain - free and label - free is particularly attractive because it is non - invasive and non - toxic .fpm provides both intensity and phase , which contain morphological and cell mass information that can be used for quantitative study . in order to achieve accurate , high - quality results for unstained samples , we needed to make several modifications to the fpm algorithm .it is well known that non - convex problems such as phase retrieval will often get stuck in local minima ; the best way to avoid this is to provide a good initial guess .previous work in fpm uses a low - resolution intensity image as the initial guess .stained samples with strong intensity variations thus reconstruct successfully , since the intensity - only initialization is close to the actual solution .however , unstained samples are phase objects and so the intensity - only initial guess does not provide a good starting point .furthermore , fpm does not measure phase directly at each angle , and the phase contrast provided by the asymmetric illumination results in uneven sensitivity to phase at different spatial frequencies .a detailed phase transfer function analysis based on the weak - object ( born ) approximation shows that low - frequency phase information is poorly captured , since it results only from illumination angles that are close to the objective na .thus , low spatial frequency phase information is more difficult to reconstruct than high spatial frequency phase information , contrary to the situation for intensity reconstructions . 
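to illustrate how a dpc - style deconvolution recovers phase ( and can therefore serve as an initial guess , as described next ) , the sketch below performs a standard tikhonov inversion with a pair of idealized , antisymmetric transfer functions ; these placeholders stand in for the weak - object transfer functions referenced above and are not the functions used by the authors , and the grid size , cut - off and test object are arbitrary .

```python
import numpy as np

def dpc_tikhonov(dpc_images, transfer_funcs, alpha=1e-3):
    """Least-squares (Tikhonov) phase estimate from a set of DPC images.

    dpc_images     : normalized DPC images, e.g. (I_a - I_b) / (I_a + I_b)
    transfer_funcs : matching phase transfer functions on the same Fourier grid
    """
    num = np.zeros(dpc_images[0].shape, dtype=complex)
    den = np.zeros(dpc_images[0].shape)
    for img, H in zip(dpc_images, transfer_funcs):
        num += np.conj(H) * np.fft.fft2(img)
        den += np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / (den + alpha)))

# toy demonstration with placeholder antisymmetric transfer functions
n = 128
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
H_lr = 1j * np.clip(fx / 0.25, -1, 1)        # left-right asymmetry (placeholder)
H_tb = 1j * np.clip(fy / 0.25, -1, 1)        # top-bottom asymmetry (placeholder)

true_phase = np.zeros((n, n))
true_phase[40:90, 40:90] = 0.3
# linear forward model used only for this demonstration
dpc_lr = np.real(np.fft.ifft2(H_lr * np.fft.fft2(true_phase)))
dpc_tb = np.real(np.fft.ifft2(H_tb * np.fft.fft2(true_phase)))

phase0 = dpc_tikhonov([dpc_lr, dpc_tb], [H_lr, H_tb])
print("rms error of the initial guess:", np.sqrt(np.mean((phase0 - true_phase) ** 2)))
```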
to improve our reconstruction, we use a linearly approximated phase solution based on dpc deconvolution as a close initial guess for spatial frequencies within the 2 na bandwidth .we then run a nonlinear optimization algorithm to solve the full phase problem ( see methods [ method - algorithm ] ) , resulting in high - quality phase reconstructions with high resolution ( fig .[ compare]a ) and good low spatial frequency phase recovery ( fig .[ compare]b ) .we demonstrate our new source - coded fpm by reconstructing large sbp videos of popular cell types _ in vitro _ on a petri dish , for both growing and confluent samples .we observe sub - cellular and collective cell dynamics happening on different spatial and temporal scales , allowing us to observe rare events in both space and time , due to the large space - bandwidth-_time _ product and flexible tradeoff of time and sbp .objective with 0.7 na resolution ( sample : u2os ) . a zoom - in is shown at right , with comparison to reconstructions of the same sample before and after staining .( b ) our improved fpm algorithm provides better reconstruction of low - frequency phase information. a zoom - in region shows comparisons between phase reconstructions with and without our differential phase contrast ( dpc ) initialization scheme .( c ) to validate our source - coded fpm results , we compare to images captured with a 40 objective having high resolution ( 0.65 na ) but small fov ( sample : mcf10a ) , as well as sequential fpm .( d ) we simulate a phase contrast image and compare to one captured by a high - resolution objective ( 0.65 na , 40).[compare],scaledwidth=99.0% ] a major advantage of quantitative phase imaging is that it can visualize transparent samples in a label - free , non - invasive way .figure [ compare]a compares our reconstructions before and after staining , for the same fixed human osteosarcoma epithelial ( u2os ) sample . with staining ,the intensity image clearly displays detailed subcellular features .stained samples also display strong phase effects , proportional to the local shape and density of the sample . without staining ,the intensity image contains very little contrast and is nearly invisible ; however , the phase result clearly captures the sub - cellular features . due to the strong similarity between stained intensity and unstained phase, it follows that quantitative phase may provide a valid alternative to staining . to demonstrate the importance of using a good initial guess to initialize the phase recovery for unstained samples , we compare the fpm results both with and without our dpc initialization scheme ( fig .[ compare]b ) .both achieve the same 0.7 na resolution , with high spatial frequency features ( e.g. nucleus and filopodia ) being clearly reconstructed , as expected .however , without dpc initialization , low frequency components of the phase are not well recovered , resulting in a high - pass filtering effect on the reconstructed phase , much like zernike phase contrast . with dpc initialization , low frequencies , which describe the overall height and shape of the cells ,are correctly recovered .next , we verify the accuracy of the recovered phase values by comparing to images captured directly with a higher resolution objective ( 40 0.65 na ) .the reconstruction resolution for our method matches its expected value ( 0.7 na ) , as seen in fig .[ compare]c , which shows images of fixed human mammary epithelial ( mcf10a ) cells . 
to validate our quantitative phase result, we compare to that recovered from a through - focus intensity stack captured with the high - resolution objective , then input into a phase retrieval algorithm ( fig .[ compare]c and methods [ method - throughfocus ] ) .quantitative phase can further be used to simulate popular phase contrast modes such as dic and zernike phase contrast ( phc ) . in fig .[ compare]d , we compare an actual phc image to that simulated from our method s reconstructed result ( see methods [ method - phc ] ) .since the phc objective uses annular illumination ( ph2 , 0.25 na ) to achieve resolution corresponding to 0.9 na , it should have slightly better resolution than our reconstruction .note that the simulated phc image is effectively high - pass filtered phase information , so small details appear with better contrast .objective and achieving 0.8 na resolution .( c ) several frames of reconstructed video ( see video 1 ) from a zoom - in of one small area of confluent cells in which one cell is dividing into multiple cells .( d ) automated cell segmentation result for the full fov phase image , with ,400 cells successfully identified .( e ) calculated dry mass for each of the labeled cells in the zoom - in region over 4 hours at 2 minutes intervals . at right is a histogram of the background fluctuations in an area with no cells .[ fig : hela ] ] figure [ fig : hela ] shows a few frames from a time - lapse video of human cervical adenocarcinoma epithelial ( hela ) cell division process over the course of 4 hours , as well as an automated quantitative analysis of cell dry mass evolution .this data was captured using our improved sequential fpm ( 173 images ) - sample raw images and a schematic of the fourier space coverage are shown in fig .[ fig : hela]a .one time frame of the full fov phase reconstruction is shown in fig .[ fig : hela]b , and a few frames of the video for a single zoom - in are shown in fig . [fig : hela]c . in this region, one cell is undergoing mitosis and dividing into 4 cells , during which the cells detach from the surface and become more globular ( see video 1 ) .this situation , if imaged with a high magnification objective , would result in the detached cells moving out of the focal plane and blurring .however , fpm provides a longer dof than a high magnification objective with equivalent na , so the entire sample stays in focus across the fov .subcellular features are visible and the dynamics of actin filament formation can be tracked over time .we show only phase results ( omitting intensity ) , since samples are unstained and have little intensity contrast .since our reconstructed video contains gigapixels of quantitative phase data with ,000 cells in each frame , the practical extraction of relevant information requires automated analysis .it is well known that phc / dic images can not be automatically segmented ( due to the lack of low spatial frequency information ) ; however , our quantitative phase results do not have this problem . using an automated cell segmentation software ( cellprofiler ) applied directly to the full fov phase result , we first segment each frame of the video to find each of the ,400 cells ( fig .[ fig : hela]d ) . 
next , we compute each cell s dry mass over time through the division process , with a few sample cells plotted in fig .[ fig : hela]e .dry mass is calculated by integrating phase over each segmented cell region for each time frame ( see methods [ method - segment ] ) .note that the automated cell segmentation and dry mass calculation are sensitive to the quality of the phase result , and often fail when low spatial frequencies are not well reconstructed ; hence , the dpc initialization and other algorithm improvements implemented in this work are crucial for automated quantitative studies . objective and achieving 0.8 na resolution .( c ) sample frames of reconstructed video ( see video 2 ) for a zoom - in of one small area .top : successive frames at the maximum frame rate ( 1.25 hz ) , bottom : sample frames across the longer time - lapse ( 4.5 hours at 1 minute intervals ) .[ nsc],scaledwidth=75.0% ] source - coded fpm can observe samples across long time scales ( up to 4.5 hours ) with sub - second acquisition speed ( 1.25 hz ) .an example frame from a reconstructed large sbp phase video of adult rat neural stem cells ( nscs ) is shown in fig .[ nsc]b , in a petri dish .we use source - coded fpm to achieve the same sbp as in sequential fpm ( 0.8 na resolution across a 4 fov ) , but with only 21 images ( sample raw images shown in fig .[ nsc]a ) , as opposed to 173 . as a result, we significantly decrease the capture time , from 7 seconds to 0.8 seconds .this alleviates motion artifacts that would otherwise blur the result due to sub - cellular dynamics that happen on timescales shorter than 7 seconds .since our source - coded fpm reduces the number of images needed , we can also capture longer video sequences without incurring data management issues .hence , we create videos of both fast - scale dynamics and slow - scale evolution of nscs with details both on the sub - cellular level and across the entire cell population . for example , while vesicle transport and other organelle motions tend to occur on a short time scale and at a small length scale ( see video 2 ) , nsc differentiation into neuronal and glial lineages occurs on a longer time scale ( _ i.e. _ days ) and at a larger length scale . in the middle time scale , we observe nscs sensing their micro - environment by retracting and extending processes , reorganizing their cytoskeletons , migrating , and maturing their axonal and dendritic processes ., scaledwidth=75.0% ] with live samples imaged _ in vitro _, dynamics create motion blur artifacts that can destroy the resolution improvements gained by large sbp methods .thus , the final effective resolution is always coupled with acquisition speed and sample - dependent motion . in general , smaller subcellular features tend to move at faster speed ; hence , we find that capture times on the order of minutes always incur motion blur .for this reason , faster acquisition is essential for _ in vitro _ applications with typical cell types . to demonstrate this point, we compare the results of our source - coded fpm , sequential fpm with and without real - time hardware controls , and high magnification dpc .the dpc method is considered our benchmark , since it achieves faster acquisition speeds , avoiding motion blur , albeit with a small fov . in fig .[ fig : motion ] , we show zoomed - in phase reconstructions for two cell types with each capture scheme , from slowest to fastest . 
in each case, the final result has a nominal na of 0.8 and so each of these results _ should _ have the same resolution .however , it is clear that the slower capture schemes blur out features , rendering much of the small - scale structure and dynamics invisible . the mcf10a cells in fig .[ fig : motion]a , b exhibit the fastest dynamics observed , due to rapid shuttling of small vesicles and fluctuations of cytoskeletal fibers ( e.g. actin filaments , microtubules ) . using sequential fpm with 60 second capture time, we retain almost no details about the structure of the fibers ( fig .[ fig : motion]a ) . even with our faster hardware setup , requiring only 7 seconds acquisition time ,the fibers are still completely blurred out . by switching to our source - coded fpm , we obtain the same sbp as the previous two schemes , but with only 0.8 seconds acquisition time . now , fiber cytoskeletal details become more discernible , though there is still some motion blur , as compared to the dpc result ( video 3 ) . hence, even our source - coded fpm is missing some information in this case . in fig .[ fig : motion]b we zoom in on another fast process of vesicle transport , where we find that our method captures more of the dynamics than in the fiber fluctuation case , still with some blurring . in video 3 , we show time - lapse videos over 8 hours to observe cell migration and proliferation with subcellular details , captured using our source - coded fpm . thus , the trade - off in fov and time may be more appropriate , depending on what one aims to observe . in fig .[ fig : motion]c , d , example results for two different nscs are shown . in both cases , sequential fpm results in significant motion blur , particularly along thin , extended processes and within intracellular vesicles and organelles .in contrast , source - coded fpm is able to accurately capture the full details of the sample without motion blur artifacts .while it is difficult to compare directly because the live cells were moving in between capture schemes , in general we can say that most vesicle transport , retraction and extension of processes and other organelle motion can be clearly visualized using source - coded fpm ( see video 4 ) .the flexibility of our system in trading off fov , resolution and time means that experiments can be tailored to the sample .for example , the hela and nsc samples shown here display slower sub - cellular dynamics , so are more suited to high sbp imaging with our current scheme , whereas the mcf10a sample necessitates a trade - off of sbp in order to capture data with sufficient speed .when dynamics are on timescales faster than 0.8 seconds , one should reduce the number of images captured , either by sacrificing fov ( using a larger na objective ) or by reducing the resolution improvement factor ( using a smaller range of leds ) . in the limit, one can eliminate all the dark field led images and simply implement dpc for maximum speed of capture .however , for most biological dynamics that are studied _ in vitro _ ( e.g. differentiation , division , apoptosis ) , we find that sub - second acquisition is sufficient , and so the sbp should be maximized within this constraint . 
in summary , we have demonstrated a high - speed , large sbp microscopy technique , providing label - free quantitative phase and intensity information .our source - coded fpm method overcomes the limitations of existing large sbp methods , permitting fast , motion - free imaging of unstained live samples .this work opens up large sbp imaging to high throughput _ in vitro _applications across a large range of both spatial and temporal scales .a gallery of interactive full fov high resolution images from our experimental system can be found at http://www.gigapan.com/profiles/wallerlab_berkeley .we place a custom - built 32 surface - mounted led array ( 4 mm spacing , central wavelength 513 nm with 20 nm bandwidth ) placed mm above the sample ( fig .[ setup]a ) , replacing the microscope s standard illumination unit ( nikon te300 ) .all leds are driven statically using 64 led controller chips ( mbi5041 ) to provide independent drive channels .a controller unit based on arm 32-bit cortex^tm^ m3 cpu ( stm32f103c8t6 ) provides the logical control for the leds by the i interface at 5mhz , with led pattern transfer time of .the camera ( pco.edge 5.5 , 6.5 m pixel pitch ) is synchronized with the led array by the same controller via two coaxial cables which provide the trigger and monitor the exposure status .all raw images are captured with 14ms exposure times .we experimentally measure the system frame rate to be for capturing full - frame ( 2560 2160 ) 16 bit images .the data are transferred to the computer via cameralink interface .all _ in vitro _ experiments are performed in petri dishes placed inside a temperature and co controlled stage mounted incubator ( in vivo scientific ) .our new fpm reconstruction algorithm can be described in two steps .first , we calculate a low - resolution initialization based on dpc .next , we implement our quasi - newtons method iterative reconstruction procedure to include the higher - order scattering and dark field contributions , for 3 - 5 iterations . in our source - coded fpm ,the 4 brightfield images are directly used in the deconvolution - based dpc reconstruction algorithm to calculate phase within 2 the objectives na .the initial low - resolution intensity image is calculated by the average of all brightfield images corrected by the intensity falloffs and then deconvolved by the absorption transfer function . in our improved sequential fpm algorithm ,the 4 dpc images are numerically constructed by taking the sum of single - led images corresponding to the left , right , top , bottom half - circles on the led array , respectively . 
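the authors use a quasi - newton solver ; the sketch below is the simpler alternating - projection ( ptychography - style ) update on which such solvers build , run on synthetic data , and is only meant to show the structure of the spectrum - stitching loop , not to reproduce the actual implementation ; grid sizes , pupil radius and illumination shifts are illustrative .

```python
import numpy as np

def fpm_reconstruct(images, shifts, pupil, n_hi, iters=5):
    """Alternating-projection FPM update (simplified stand-in for a quasi-Newton solver).

    images : measured low-resolution intensity images (m x m each)
    shifts : per-image spectrum shifts (kx, ky) in pixels, set by the LED angles
    pupil  : objective pupil mask on the low-resolution grid (m x m)
    n_hi   : side length of the high-resolution reconstruction grid
    """
    m = pupil.shape[0]
    c = n_hi // 2
    O = np.fft.fftshift(np.fft.fft2(np.ones((n_hi, n_hi), dtype=complex)))  # flat start
    for _ in range(iters):
        for I_meas, (kx, ky) in zip(images, shifts):
            rows = slice(c - ky - m // 2, c - ky + m // 2)
            cols = slice(c - kx - m // 2, c - kx + m // 2)
            psi = np.fft.ifft2(np.fft.ifftshift(O[rows, cols] * pupil))
            psi = np.sqrt(I_meas) * np.exp(1j * np.angle(psi))   # keep phase, force intensity
            patch = np.fft.fftshift(np.fft.fft2(psi))
            O[rows, cols] = O[rows, cols] * (1 - pupil) + patch * pupil  # stitch the spectrum
    return np.fft.ifft2(np.fft.ifftshift(O))

# toy run: synthesize nine images from a known phase object, then reconstruct
n_hi, m = 128, 64
rng = np.random.default_rng(0)
obj = np.exp(1j * 0.4 * rng.random((n_hi, n_hi)))
yy, xx = np.mgrid[-m // 2: m // 2, -m // 2: m // 2]
pupil = (xx ** 2 + yy ** 2 <= (m // 4) ** 2).astype(float)
shifts = [(kx, ky) for kx in (-16, 0, 16) for ky in (-16, 0, 16)]
O_true, c = np.fft.fftshift(np.fft.fft2(obj)), n_hi // 2
images = [np.abs(np.fft.ifft2(np.fft.ifftshift(
    O_true[c - ky - m // 2: c - ky + m // 2, c - kx - m // 2: c - kx + m // 2] * pupil))) ** 2
    for kx, ky in shifts]
recon = fpm_reconstruct(images, shifts, pupil, n_hi)
print("reconstructed grid:", recon.shape)
```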
in the reconstruction , we divided each full fov raw image ( 2560 pixels ) into 6 sub - regions ( 560 pixels each ) , with 160-pixel overlap on each side of neighboring sub - regions .each set of images were then processed by our algorithm above to create a high resolution complex - valued reconstruction having both intensity and phase ( 2800 pixels ) .finally , all high resolution reconstructions were combined using the alpha - blending stitching method to create the full fov high resolution reconstruction .using a desktop computer ( intel i7 cpu ) , the processing time for each sub - region was in matlab .the total processing time for each full fov was .we capture intensity images with a high - resolution objective ( 40 0.65 na ) while moving the sample axially to 17 exponentially spaced positions from -64 m to 64 m using a piezostage ( mzs500-e - z , thorlabs ) .the images are then used to reconstruct phase ( fig .[ compare]c ) using a transport of intensity type algorithm based on spatial frequency domain fitting . our phase contrast simulation ( fig .[ compare]d ) fully accounts for the partially coherent annular illumination ( inner na = 0.25 , outer na = 0.28 , measured experimentally ) and apodized phase contrast pupil .the pupil consists of a -phase shifting ring with 75 attenuation ( size matching the source ) , and two apodization rings with attenuation ( widths are calculated based on ) . in our simulation , a tilted plane wave from each source point shifts the samples spectrum in the fourier space , which is then filtered by the pupil function before calculating the intensity image in the real space .the phase contrast image is the incoherent sum of all the intensity contributions from all the points on the annular source .image segmentation for each frame is performed by cellprofiler open - source software , which implements a series of automated operations including thresholding , watershedding and labeling , to return a 2d map containing segmented regions representing different cells .the 2d maps are then loaded into matlab to extract the phase within each individual cell .the total dry mass for each cell is calculated as the sum of the dry mass density . 
the dry mass density is directly related to phase by , where is the center wavelength , g is the average of reported values for refractive increment of protein .the background fluctuations are characterized from a region without any cells ( white square in figure [ fig : hela]d ) and having an area similar to the average cell size .background fluctuations are shown in the black curve and histogram in figure [ fig : hela]e .we achieve a standard deviation of 1.5pg , in units of dry mass , indicating good stability of the phase measurement .hela cells were cultured with dmem ( dulbecco s modified eagle s medium ) supplemented with 10% fbs ( fetal bovine serum ) , glutamine and penicillin / streptomycin .the cells were plated on a p75 flask and cultured in a 37 incubator with 5% co .the confluent cells were treated with 0.2% trypsin and passed as 1:8 ratio .two drops of trypisized cells were then added into a polyd - d - lysine coated 35 mm mattek glass - bottom plate with 2ml medium .after 6 hours incubation , the cells became fully attached to the plate .u2os cells were prepared using the same procedure .they were fixed in 4% formaldehyde at room temperature for 10 min and later stained with 1% toluidine blue o ( tbo ) at room temperature for 5 min and rinsed in three changes of double - distilled water .adult rat nscs were isolated from the hippocampi of 6-week - old female fischer 344 rats .to promote adhesion , tissue culture polystyrene plates were first coated with 10 / ml poly - l - ornithine ( sigma ) overnight at room temperature , followed by 5 / ml of laminin ( invitrogen ) overnight at 37 .nscs were cultured in monolayers in 1:1 dmem / f12 high - glucose medium ( life technologies ) , supplemented with n-2 ( life technologies ) and 20ng / ml recombinant human fgf-2 ( peprotech ) .media was changed every other day , and cells were subcultured with accutase upon reaching % confluency .mcf10a cells were cultured in dmem / f12 ( invitrogen ) , supplemented with 5% horse serum ( invitrogen ) , 1% penicillin / streptomycin ( invitrogen ) , 0.5 / ml hydrocortisone ( sigma ) , 100 ng / ml cholera toxin ( sigma ) , 10 / ml insulin ( sigma ) and 20ng / ml recombinant human egf ( peprotech ) .media was changed every other day , and cells were passed with trypsin upon reaching % confluency . in preparation for imaging, cells were washed once with pbs ( phosphate buffer saline ) and detached with either accutase or trypsin .300,000 cells were seeded onto 35 mm glass - bottom microwell dishes ( mattek ) and allowed to attach .funding was provided by the gordon and betty moore foundation s data - driven discovery initiative through grant gbmf4562 to laura waller ( uc berkeley ) .we would like to thank zachary phillips for help with experiments , and olivia scheideler , lydia sohn , david schaffer , peiwu qin and ahmet yildiz for providing cell samples and incubator .m. r. costa , f. ortega , m. s. brill , r. beckervordersandforth , c. petrone , t. schroeder , m. gtz , and b. berninger , `` continuous live imaging of adult neural stem cell division and lineage progression in vitro , '' development * 138 * , 10571068 ( 2011 ) .g. zheng , s. a. lee , y. antebi , m. b. elowitz , and c. yang , `` the epetri dish , an on - chip cell imaging platform based on subpixel perspective sweeping microscopy ( spsm ) , '' p. natl .sci . * 108 * , 1688916894 ( 2011 ) .a. greenbaum , w. luo , t .- w .su , z. grcs , l. xue , s. o. isikman , a. f. coskun , o. mudanyali , and a. 
ozcan , `` imaging without lenses : achievements and remaining challenges of wide - field on - chip microscopy , '' nat .methods * 9 * , 889895 ( 2012 ) .a. greenbaum , y. zhang , a. feizi , p .-chung , w. luo , s. r. kandukuri , and a. ozcan , `` wide - field computational imaging of pathology slides using lens - free on - chip microscopy , '' science translational medicine * 6 * , 267ra175 ( 2014 ) .t. gutzler , t. r. hillman , s. a. alexandrov , and d. d. sampson , `` coherent aperture - synthesis , wide - field , high - resolution holographic microscopy of biological tissue , '' opt . lett . * 35 * , 11361138 ( 2010 ) . o. bunk , m. dierolf , s. kynde , i. johnson , o. marti , and f. pfeiffer , `` influence of the overlap parameter on the convergence of the ptychographical iterative engine , '' ultramicroscopy * 108 * , 481487 ( 2008 ) .g. popescu , y. park , n. lue , c. best - popescu , l. deflores , r. r. dasari , m. s. feld , and k. badizadegan , `` optical imaging of cell mass and growth dynamics , '' am. j. physiol - cell ph . * 295 * , c538c544 ( 2008 ) .z. jingshan , r. a. claus , j. dauwels , l. tian , and l. waller , `` transport of intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes , '' opt .express * 22 * , 1066110674 ( 2014 ) .a. e. carpenter , t. r. jones , m. r. lamprecht , c. clarke , i. h. kang , o. friman , d. a. guertin , j. h. chang , r. a. lindquist , j. moffat , p. golland , and d. m. sabatini , `` cellprofiler : image analysis software for identifying and quantifying cell phenotypes , '' genome biol .* 7 * , r100 ( 2006 ) .z. f. phillips , m. v. dambrosio , l. tian , j. j. rulison , h. s. patel , n. sadras , a. v. gande , n. a. switz , d. a. fletcher , and l. waller , `` multi - contrast imaging and digital refocusing on a mobile microscope with a domed led array , '' plos one * 10 * , e0124938 ( 2015 ) .
we demonstrate a new computational illumination technique that achieves large space - bandwidth-_time _ product , for quantitative phase imaging of unstained live samples _ in vitro_. microscope lenses can have either large field of view ( fov ) or high resolution , not both . fourier ptychographic microscopy ( fpm ) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles . the result is a gigapixel - scale image having both wide fov and high resolution , i.e. large space - bandwidth product ( sbp ) . fpm has enormous potential for revolutionizing microscopy and has already found application in digital pathology . however , it suffers from long acquisition times ( on the order of minutes ) , limiting throughput . faster capture times would not only improve imaging speed , but also allow studies of live samples , where motion artifacts degrade results . in contrast to fixed ( e.g. pathology ) slides , live samples are continuously evolving at various spatial and temporal scales . here , we present a new source coding scheme , along with real - time hardware control , to achieve 0.8 na resolution across a 4 fov with sub - second capture times . we propose an improved algorithm and new initialization scheme , which allow robust phase reconstruction over long time - lapse experiments . we present the first fpm results for both growing and confluent _ in vitro _ cell cultures , capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration . our method opens up fpm to applications with live samples , for observing rare events in both space and time .
machine - type communication ( mtc ) is commonly characterized by a large number of cellular devices that are active sporadically , where a large number of devices may activate in a correlated way due to a sensed physical phenomenon ( e.g. , a power outage in the smart grid ) . in more traditional human - centric traffic where the associated payloads are relatively large, a small number of active devices can cause the network to become in outage mainly due to the lack of available resources for data transmission .in contrast , the associated payloads are relatively small in mtc such that the division of the aggregate available data rate with the small data rate required by each machine - type device ( mtd ) leads to the conclusion that the system can support a vast number of mtds .recent studies have shown that such a conclusion is misleading : the network still becomes in outage , not being able to provide access to the mtds , despite plenty of available resources to support a massive number of mtds . herethe culprit in the limited number of supported devices , is not the available resources as in human - centric traffic , but instead the bottlenecks in the access reservation protocol . specifically in lte ,the access reservation protocol that is outlined in fig .[ fig : lte_msg1-msg4 ] has two limitations that unveil with mtc .the first is in _ msg 1 _ , where only a limited number of preambles can be used to signal a sporadic request for uplink resources to the enodeb , in the rach phase .the second is in _ msg 2 _ , where a bottleneck may be caused by the limited amount of feedback resources in the access granting ( ag ) phase . in the literature , analytical models of the preamble collision probabilityhave already been considered in standardization documents and scientific papers . in the preamble collision probability is used to estimate the success probability of transmission attempts .however , we have found that existing models are incomplete and inaccurate and in this paper we introduce a superior model that closely matches the system outage breaking point of the detailed simulation .the second limitation in the ag phase has been considered separate from collisions in for bursty arrivals following the beta distribution , which is a valuable result for situations where many alarm messages are sent simultaneously . in authors present an approach to cell planning and adaptation of prach resources that only takes into account the preamble collisions . as we show in this paper ,the ag phase is a limiting factor before the amount of preamble collisions becomes an issue , since the impact of occasional collisions is effectively diminished with retransmissions . 
in authors present an analysis accounting for preamble collision and the ag phase , which however does not consider retransmissions .in this paper we propose an analytical model of the transmission failure probability in an lte cell for sporadic uplink transmissions carried over the lte random access channel .the proposed model captures the features of the existing access reservation protocol in lte , meaning that we are not proposing a new access protocol rather introducing a tool for analysis of the existing lte access reservation protocol .the purpose of the proposed model is to be able to estimate the capacity in terms of the number of terminals or rach arrival density that can be supported by lte in a given configuration while accounting for retransmissions as well as modeling the bottlenecks that appear in the contention phase and the ag phase .this is a major contribution of the paper , as the existing models do not capture these bottlenecks .three other contributions are : 1 ) an iterative procedure to determine the impact of retransmissions using a markov chain model of the retransmission and backoff procedure ; 2 ) analytical derivations of the metrics based on a markov chain , thereby achieving an analytical model that can be evaluated at click - speed .3 ) analysis of the protocol breaking point using ever increasing access loads to the network .initially we present the system model and assumptions in section [ sec : system_model ] , whereafter in section [ sec : arp_model ] we present the proposed analytical model of the access reservation bottlenecks in lte .the proposed model is compared numerically to simulation results and other models from the literature in section [ sec : results ] , and finally the conclusions are given in section [ sec : conclusion ] .we focus our analysis on a single lte cell , with mtds , also called machine - type user equipment ( ue ) .we assume that the mtc applications associated with these mtds , generate new uplink transmissions with an aggregate rate , as depicted in fig .[ fig : lte_diagram ] .that is , , where is the transmission generation rate of the out of mtc applications running on the ues .we assume this aggregate rate follows a poisson distribution with rate . for each new data transmission , up to retransmissions are allowed , resulting in a maximum of allowed transmissions .when these transmissions fail and retransmissions occur , then an additional stress is put on the access reservation protocol , since the rate of retransmissions adds to the total rate . as shown on fig .[ fig : lte_diagram ] , we split the access reservation model into two parts : ( i ) the one - shot transmission part in fig .[ fig : lte_diagram](a ) that models the bottlenecks at each stage of the access reservation protocol ; ( ii ) the -retransmission part in fig .[ fig : lte_diagram](b ) , where a finite number of retransmissions and backoffs are modeled .we focus our analysis on mtc , for which the traffic is characterized by having a very small payload .therefore , in the one - shot transmission , depicted in fig . [fig : lte_diagram](a ) , we assume that the rach and access granting phases are the system bottlenecks . in other words ,we assume that the network has enough data resources to deliver the serviced mtc traffic .the uplink resources in lte for frequency division duplexing ( fdd ) are divided into time and frequency units denoted resource blocks ( rbs ) . 
the time is divided in frames , where every frame has ten subframes , each subframe of duration ms .the system bandwidth determines the number of rbs per subframe that ranges between 6 rbs and 100 rbs .the number of subframes between two consecutive random access opportunities ( raos ) denoted varies between 1 and 20 .every rao occupies 6 rbs and up to 1 rao per subframe is allowed .the lte random access follows the access reservation principle meaning that devices must contend for uplink transmission resources in a slotted aloha fashion within a rao . as shown in fig .[ fig : lte_msg1-msg4 ] , the access procedure consists of the exchange of four different messages between a ue and the enodeb .the first message ( msg 1 ) is a randomly selected preamble sent in the first coming rao . in fig . [fig : lte_diagram](a ) the intensity of ue requests leading to preamble activations is represented by .lte has 64 orthogonal preambles , where only are typically available for contention among devices , since the rest are reserved for timing alignment .commonly , the enodeb can only detect which preambles have been activated but not if multiple activations ( collisions ) have occured .this assumption holds in small cells ( * ? ? ?17.5.2.3 ) , and refers to the worst - case scenario where the detected preamble does not reveal anything about how many users are simultaneously sending that preamble .in other words , the preamble collision is not detected at msg 1 .thereafter , in msg 2 , the enodeb returns a random access response ( rar ) to all detected preambles .the intensity of activated preambles is in fig .[ fig : lte_diagram](a ) represented by , where since in a preamble collision only 1 preamble is activated . the contending devices listen to the downlink channel , expecting msg 2 within .it should be noted that typically a maximum of 3 rar messages per subframe can be sent by the enodeb .if no msg 2 is received and the maximum of msg 1 transmissions has not been reached , the device backs off and restarts the random access procedure after a randomly selected backoff time within the interval \cap \mathbb{z}^+$ ] , where is the maximum backoff time .if received , msg 2 includes uplink grant information , that indicates the rb in which the connection request ( msg 3 ) should be sent .the connection request specifies the requested service type , e.g. , voice call , data transmission , measurement report , etc . 
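as a quick sanity check of the msg 1 contention just described , the sketch below draws a poisson number of contenders per rao and counts how many pick a preamble that at least one other device also picked ; the 54 contention preambles , the rao period and the arrival rate are assumed values , and the closed - form bound printed alongside is the jensen - type bound derived later in the analysis section .

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta_rao, lam = 54, 5, 2.0        # preambles, subframes per RAO, arrivals per subframe

collided = total = 0
for _ in range(20000):                # simulate 20000 RAOs
    n = rng.poisson(lam * delta_rao)  # contenders in this RAO
    if n == 0:
        continue
    picks = rng.integers(0, d, size=n)          # each UE picks a preamble at random
    counts = np.bincount(picks, minlength=d)
    collided += int(np.sum(counts[picks] > 1))  # UEs whose preamble was chosen twice or more
    total += n

print("simulated msg 1 collision probability :", round(collided / total, 4))
print("analytical bound (analysis section)   :", round(1 - (1 - 1 / d) ** (lam * delta_rao - 1), 4))
```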
in case of collisionthe devices receive the same msg 2 , resulting in their msg 3s colliding in the rb .in contrast to the collisions of msg 1 , the enodeb is able to detect collisions of msg 3 .the enodeb only replies to the msg 3s that did not experience collision , by sending message msg 4 , with which the required rbs are allocated or the request is denied in case of insufficient resources .the latter is however unlikely in the case of mtc , due to the small payloads .if the msg 4 is not received within since msg 1 was sent , the random access procedure is restarted .finally , if a device does not successfully finish all the steps of the random access procedure within msg 1 transmissions , an outage is declared .we now go to the analysis of the access reservation procedure .first , we model the _ one - shot transmission _ and then extend it to the _ -retransmissions _ model .the numerical results cover the complete model , as depicted in fig .[ fig : lte_diagram ] .we are interested in characterizing how often a transmission from a ue fails .this happens when the transmission is not successful in both the preamble contention and ag phases , i.e. , a request from the ue must not experience a preamble collision and the uplink grant must not become stale and dropped . we model this as two independent events : where is the collision probability in the preamble contention phase given ue request rate , while is the probability of the uplink grant being dropped from the ag queue given preamble activation rate .we start by computing .let denote the number of available preambles ( ) .let the probability of not selecting the same preamble as one other ue be .then the probability of a ue selecting a preamble that has been selected by at least one other ue , given contending ues , is : assuming poisson arrivals with rate , then : \\\nonumber & \leq 1 - \left ( 1 - \frac{1}{d}\right)^{\lambda_\text{t } \cdot \delta_\text{rao}-1},\end{aligned}\ ] ] where is the probability mass function of the poisson distribution with arrival rate .the inequality comes from applying jensen s inequality to the concave function , where is the total arrival rate ( including retransmissions ) , and is the average number of subframes between raos . if 10 raos per frame and if 2 raos per frame . ]the computed is thus an upper bound on the collision probability .the mean number of activated preambles in the contention phase per rao , is given by .as discussed in section [ sec : system_model ] , we assume that the enodeb is unable to discern between preambles that have been activated by a single user and multiple users , respectively. this will lead to a higher , than in the case where the enodeb is able to detect the preamble collisions .the main impact of this assumption is that there will be an increased rate of ag requests , even though part of these correspond to collided preambles , which even if accepted will lead to retransmissions .the can be well approximated , while assuming that the selection of each preamble by the contending users is independent , by , \cdot d,\ ] ] where is the probability of k successes , which can be well approximated with a poisson distribution with arrival rate , i.e. : to compute the outage probability due to the limitation in the ag phase , i.e. 
, due to the maximum number of uplink grants per subframe and a maximum waiting time of subframes , we consider that this subsystem can be modeled as a queuing system .we assume that the loss probability can be seen as the _ long - run fraction of costumers that are lost _ in a queuing system with impatient costumers . in lte, pending uplink grants are served with a deterministic time interval ( 1 subframe ) between each serving slot .a straightforward approach would be to use an m / d/1 model structure , as presented in , in order to compute the drop probability .unfortunately , the expression to compute for the m / d/1 queue does not have a closed - form solution .however the equivalent expression for the m / m/1 queue in has a closed - form solution , see eq . .we have compared the results of the two model types and found no noticeable difference in the computed outage numbers in practice .thus , in the following we use the m / m/1 model to compute : where is the queue load , is the number of uplink grants per rar ( ) , with and is the max waiting time ( in terms of requests ) in the uplink grant queue , i.e. , .the fact that we are using an m / m/1 model instead of an m / d/1 model , may cause a discrepancy between the simulation and model results when the queue becomes congested ( ) .however , we are interested in the switching point ( ) from which we then estimate accurately the outage breaking point , as shown in the results in section [ sec : results ] . when ues are allowed to make retransmissions the probability of an ue becoming in outage is the probability that none of the allowed transmissions attempts are successful . when retransmissions are allowed ( ) , the total arrival rate must include the extra arrivals caused by the ues retransmissionsthe number of retransmissions is however a result of the limit and transmission error probability , which in turn depends on the number of retransmissions .this chicken and egg problem can be solved iteratively using a derivative of the bianchi model applied to our system model . specifically , we are using a model adapted to lte , with a structure similar to the one presented in .the following derivations of the number of transmissions and outage probabilities have , to the best of our knowledge , not been presented previously ..,width=340 ] the mean number of required transmissions and outage probability , are computed with help of the markov chain model depicted in fig .[ fig : mc_backoff_model ] . in the markov chain model ,the state index denotes the transmission attempt stage and backoff counter .the number of allowed retransmissions is then given by . whenever the one - shot transmission is successful , depicted in fig .[ fig : lte_diagram](a ) , the ue enters the _ connect _ state : where , is short for .whenever the one - shot access fails , the ue increases the backoff counter : where and . 
at the last stage of the markov chain, the ue enters the _ drop _ state if the transmission fails : the ue enters the _ off _ state after the _ connect _ or the _ drop _ states , with probability : from the _ off _ state , the node enters the first transmission state with probability : where the probability is defined as .let be the steady state probability that a ue is at state .then can be derived as : for and .let be the steady state probability that a node is at _ connect _ state : by imposing the probability normalization condition , as detailed in appendix [ app : b_off ] , we find as : since a transmission will eventually either finish successfully in the _ connect _ state or unsuccessfully in the _ drop _ state , the outage probability can be computed as : where and , whose derivations are shown in the appendix [ app : b_off ] , can be computed as : the number of required transmissions can be estimated from the steady state probabilities , keeping in mind that represents the probability of using or more transmission attempts to deliver a packet , and is the probability of using exactly transmission attempts : from the number of transmissions , the value of can be solved iteratively using the fixed point equation : for the results presented in sec .[ sec : results ] we found that less than 20 iterations were needed to reach convergence ( less than 1% change between consecutive iterations ) .in our evaluation , we consider two prach configurations , namely the typical configuration with 5 subframes between every rao and the configuration with one rao every subframe .further , we consider first the case where only a single transmission is allowed ( one - shot , ) and then the more realistic configuration of allowed retransmissions .the model results are compared with a simulator that implements the full lte access reservation protocol as defined in and given parameters in table [ tab : lte_system_parameters ] ..lte simulation and model parameters [ cols="<,^",options="header " , ] in fig .[ fig : outage_rach_2_ntx_1 ] and [ fig : outage_rach_10_ntx_1 ] the outage probabilities are depicted for . there, the proposed model has a much better fit to the simulation results than the 3gpp tr 37.868 model ( * ? ? ?b.1 ) and the ericsson model in ( * ? ? ?* eq . ( 6 ) ) .specifically , in fig .[ fig : outage_rach_2_ntx_1 ] where the preamble collisions are the main error cause , the tr 37.868 and ericsson models are much worse than the proposed model . from fig .[ fig : outage_rach_10_ntx_1 ] it is clear that those models are not accounting for the limited number of uplink grants per rar that starts to have an impact around attempts / sec , causing an upward bend in the outage curve . in the typical configuration where retransmissions are allowed , a necessary feature of our model is that it is able to account for the feedback impact of retransmissions on the arrival rate .an intermediate metric that allows to study this is the number of transmissions per new data packet .this is shown in figs .[ fig : entx_rach_2_ntx_10 ] and [ fig : entx_rach_10_ntx_10 ] . in fig .[ fig : entx_rach_2_ntx_10 ] the number of transmissions is estimated accurately leading to a well - fitting estimation of the outage in [ fig : outage_rach_2_ntx_10 ] . 
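before turning to the numerical results , the sketch below strings the above pieces together into a click - speed evaluation : the jensen collision bound , an access - grant drop probability ( with the m / m/1/k blocking formula used as a stand - in for the impatient - customer loss expression , an assumption on our part ) , and a damped fixed - point iteration over the retransmission feedback . the parameter values ( 54 preambles , 3 grants per subframe , a 5 subframe rar window , at most 10 transmissions ) replace the elided table and are assumptions ; the backoff states are omitted since , in this simplified view , they affect delay rather than the expected number of attempts .

```python
import numpy as np

def outage_one_shot(lam_total, delta_rao=5, d=54, n_rar=3, t_rar=5):
    """Failure probability of a single access attempt (preamble collision or grant drop).

    lam_total : total RACH arrival rate in arrivals per subframe (retransmissions included)
    delta_rao : subframes between consecutive RAOs
    d         : contention preambles; n_rar : grants per subframe; t_rar : RAR window (subframes)
    """
    # collision bound obtained via Jensen's inequality (see the contention analysis above)
    p_coll = max(0.0, 1.0 - (1.0 - 1.0 / d) ** (lam_total * delta_rao - 1.0))

    # mean rate of activated preambles per subframe (Poisson thinning per preamble)
    lam_rar = d * (1.0 - np.exp(-lam_total * delta_rao / d)) / delta_rao

    # access-grant queue: M/M/1/K blocking formula as a stand-in for the
    # impatient-customer loss expression referenced above (assumption)
    rho = lam_rar / n_rar
    K = n_rar * t_rar                   # grants that fit inside the RAR window (assumption)
    if abs(rho - 1.0) < 1e-9:
        p_drop = 1.0 / (K + 1)
    else:
        p_drop = (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))
    return 1.0 - (1.0 - p_coll) * (1.0 - p_drop)

def solve_fixed_point(lam_new, max_tx=10, iters=100):
    """Damped iteration of lambda_total = lambda_new * E[transmissions]."""
    lam_total, pf, e_tx = lam_new, 0.0, 1.0
    for _ in range(iters):
        pf = outage_one_shot(lam_total)                     # per-attempt failure probability
        e_tx = (1.0 - pf ** max_tx) / (1.0 - pf) if pf < 1.0 else float(max_tx)
        lam_total = 0.5 * lam_total + 0.5 * lam_new * e_tx  # damping aids convergence
    return e_tx, pf ** max_tx                               # mean attempts, outage probability

for lam_s in (500, 1000, 2000, 3000, 4000, 6000):           # new arrivals per second
    e_tx, p_out = solve_fixed_point(lam_s / 1000.0)         # 1 ms subframes
    print(f"{lam_s:5d} arrivals/s : E[transmissions] = {e_tx:5.2f}, outage = {p_out:.2e}")
```

the sweep shows the breaking - point behaviour discussed in the results : the expected number of transmissions stays close to one up to a certain load and then grows rapidly toward the retransmission limit , at which point the outage probability jumps .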
in our evaluation , we consider two prach configurations , namely the typical configuration with 5 subframes between every rao and the configuration with one rao every subframe . further , we first consider the case where only a single transmission is allowed ( one - shot , ) and then the more realistic configuration of allowed retransmissions . the model results are compared with a simulator that implements the full lte access reservation protocol as defined in , with the parameters given in table [ tab : lte_system_parameters ] . [ table [ tab : lte_system_parameters ] : lte simulation and model parameters . ] in figs . [ fig : outage_rach_2_ntx_1 ] and [ fig : outage_rach_10_ntx_1 ] the outage probabilities are depicted for . there , the proposed model has a much better fit to the simulation results than the 3gpp tr 37.868 model ( b.1 ) and the ericsson model in ( eq . ( 6 ) ) . specifically , in fig . [ fig : outage_rach_2_ntx_1 ] , where preamble collisions are the main error cause , the tr 37.868 and ericsson models perform much worse than the proposed model . from fig . [ fig : outage_rach_10_ntx_1 ] it is clear that those models do not account for the limited number of uplink grants per rar , which starts to have an impact around attempts / sec , causing an upward bend in the outage curve . in the typical configuration where retransmissions are allowed , a necessary feature of our model is that it is able to account for the feedback impact of retransmissions on the arrival rate . an intermediate metric that allows us to study this is the number of transmissions per new data packet . this is shown in figs . [ fig : entx_rach_2_ntx_10 ] and [ fig : entx_rach_10_ntx_10 ] . in fig . [ fig : entx_rach_2_ntx_10 ] the number of transmissions is estimated accurately , leading to a well - fitting estimate of the outage in fig . [ fig : outage_rach_2_ntx_10 ] . for the case of 10 raos per frame , the markov chain model slightly overestimates the number of transmissions . however , the breaking points in the curves are the same , meaning that the supported arrival rate in the simulation in fig . [ fig : outage_rach_10_ntx_10 ] is closely matched by the one in the model . finally , the results show that the proposed model is superior to the existing models from the literature , as they do not capture the feedback impact of the retransmissions and are therefore not able to estimate the system outage capacity . the presented results also reveal an interesting insight into dimensioning the lte access reservation parameters . given that there is a times difference in resource usage for raos ( 2 vs 10 raos per frame ) , the gain in supported arrival rate is quite modest , increasing from around to around , i.e. , a 25% increase . in order to further increase the capacity of the system , it is necessary to simultaneously increase the number of rars per subframe . in this paper we have presented a low - complexity , yet accurate model to estimate the outage capacity of the lte access reservation protocol for machine - type communications , where the small payload sizes mean that system resources are typically not the limiting factor . the model accounts for both contention preamble collisions and the limited number of uplink grants in the random access response message , as well as the feedback impact that the resulting retransmissions have on the random access load . for the considered typical lte configurations , the model is able to estimate the system outage capacity very accurately . this puts it forward as a useful tool in system dimensioning , as it allows one to replace time - consuming simulations with click - speed calculations . future work should look into how diverse channel conditions and diverse traffic patterns of users can be efficiently included in the model . while the outage metric is very important from a planning perspective , other metrics such as access delay or transmission time would also be highly relevant to estimate accurately when considering real - time machine - type communications . this work is partially funded by eu , under grant agreement no . the sunseed project is a joint undertaking of 9 partner institutions and their contributions are fully acknowledged . this work has been partially supported by the danish high technology foundation via the virtuoso project . g. corrales madueo , c. stefanovic , and p. popovski , `` reengineering gsm / gprs towards a dedicated network for massive smart metering , '' in _ proc . of the ieee international conference on smart grid communications ( smartgridcomm 2014 ) _ , nov . 2014 . c. ubeda , s. pedraza , m. regueira , and j. romero , `` lte fdd physical random access channel dimensioning and planning , '' in _ proc . of the ieee vehicular technology conference ( vtc fall 2012 ) _ , sept . 2012 . c. karupongsiri , k. s. munasinghe , and a. jamalipour , `` random access issues for smart grid communication in lte networks , '' in _ proc . of the international conference on signal processing and communication systems ( icspcs 2014 ) _ , dec . 2014 . x. yang , a. fapojuwo , and e. egbogah , `` performance analysis and parameter optimization of random access backoff algorithm in lte , '' in _ proc . of the ieee vehicular technology conference ( vtc fall 2012 ) _ , sept .
a canonical scenario in machine - type communications ( mtc ) is the one featuring a large number of devices , each of them with sporadic traffic . hence , the number of served devices in a single lte cell is not determined by the available aggregate rate , but rather by the limitations of the lte access reservation protocol . specifically , the limited number of contention preambles and the limited amount of uplink grants per random access response are crucial to consider when dimensioning lte networks for mtc . we propose a low - complexity model of lte s access reservation protocol that encompasses these two limitations and allows us to evaluate the outage probability at click - speed . the model is based chiefly on closed - form expressions , except for the part with the feedback impact of retransmissions , which is determined by solving a fixed point equation . our model overcomes the incompleteness of the existing models that are focusing solely on the preamble collisions . a comparison with the simulated lte access reservation procedure that follows the 3gpp specifications , confirms that our model provides an accurate estimation of the system outage event and the number of supported mtc devices .
in just 20 years the complexity of systems studied in surface science has increased by orders of magnitude . whereas previous major problems where associated with the adsorption behavior of small adsorbates such as co on pt(111 ) , typical challenges that dominate surface science nowadays are associated with the structural , electronic , and chemical properties of large organic adsorbates and molecular networks on metal and semiconductor surfaces . here the interaction strength and geometry of systems and the implications for nanoarchitectures , self - assembly , or atomic - scale manipulation is studied by two complementary sets of techniques , namely by _ in vacuo _surface science experiments and _ ab initio _ electronic structure simulations . the capabilities of both experiment and simulation have increased tremendously . on the computational side , previously unthinkable system sizes can be dealt with thanks to the immense increase in computational power and the numerical , as well as algorithmic efficiency of density - functional theory ( dft ) . owing to the latest advances in particular in the treatment of dispersive interactions , such calculations are nowadays able to provide a reliable description of the adsorption structure and energetics of highly complex organic adsorbates on metal surfaces . an especially important aspect of theoretical surface science that should benefit from this progress is the ability to conduct computational screenings of possible geometries of complex interfaces , be it for determining the atomic configuration of surface reconstructions , topologies of organic crystals , or identifying the manifold of all possible compositions and adsorption sites in processes of heterogeneous catalysis and in compound materials . however , one thing that has not changed much in recent years is how , in simulations , optimal adsorption structures are identified .the large number of degrees of freedom in a complex adsorbate , as well as the structural complexity of large surface nanostructures , necessitates a full global search for optimal structures and overlayer phases that includes all possible binding modes and chemical changes that can occur upon adsorption .knowing the geometric structure of an interface is an important prerequisite to investigating its electronic properties and level alignment , which in turn determines the performance in potential electronic or catalytic applications .an efficient potential energy surface ( pes ) sampling tool is necessary to enable this , as opposed to just conventional local optimization of several chemically sensible initial guesses . in general, this problem can be solved by applying global geometry optimization methods . the goal of a majority of such global optimization approaches is to efficiently traverse the pes by generating new trial configurations that are subsequently accepted or rejected based on certain criteria .multiple reviews on global optimization methods specific to certain types of systems , ranging from small clusters to large biomolecules , have been published recently . in particular , much attention has been attracted by the application of global optimization methods to the study of metal clusters , binary nanoalloys , proteins , water clusters , and molecular switches . recently , global geometry optimization schemes have been specifically developed and applied in the growing field of heterogeneous catalysis , including application in reaction coordinate prediction , and biomolecular simulations . 
a wide range of different global optimization algorithms has thereby been suggested over the last two decades , ranging from simple classical statistical mechanics simulated annealing schemes to sophisticated landscape paving , puddle - jumping , or neural - network controlled dynamic evolutionary approaches . two prominent and most popular families of global geometry optimization techniques include monte - carlo based methods , such as basin hopping ( bh ) , and evolutionary principles based genetic algorithms ( ga ) . however , no general rule of preference to a specific algorithm exists , as the efficiency of classical global optimization methods is highly system dependent . there are many technical aspects that influence the performance of any global sampling technique , such as the choice of initial geometry , the ways of disturbing the configuration during the trial move , the definition of acceptance criteria , the methods employed to calculate potential energies and forces _many authors were concerned with the efficiency of applied sampling methods and suggested various improvements . moreover , the possibility of moving from the total energy as the main sampling criterion towards observable - driven and grand - canonical global sampling schemes has been suggested .the parameter that plays the most crucial role in method performance , however , is the choice of coordinates suitable for representing the geometries and , most importantly , changes in geometries during the sampling , _ _i.e.__the so - called trial moves . the essential importance of the trial geometry generation step has already been noticed in early system - specific publications . for instance , simple group rotations within proteins or clusters already led to significant gains in sampling efficiency .furthermore , several computationally more expensive ways to produce elementary trial moves were suggested based on short high - temperature molecular dynamics ( md ) runs. for certain systems they have indeed proven to be much more effective , allowing to _ e.g. _ optimize large metallic clusters within atomistic models . from a programmer s point of view , the perhaps most intuitive representation of the atomic coordinates of trial moves are cartesian coordinate ( cc ) displacements .it has been noted though that this most popular representation is often inefficient , due to its chemical blindness , which may easily lead to unphysical configurations ( _ _ e.g.__dissociated structures ) .cc displacements do not account for the overall geometry and the coupling between coordinates .different approaches have been proposed to overcome this difficulty , such as virtual - alphabet genetic algorithms , employing the idea that coordinates should stand as meaningful building blocks , or genetic algorithms with space - fixed ccs , introducing non - traditional genetic operators . the main limitation of these methods , however , is that they remain rather system specific .the choice of suitable coordinates for global optimization should rather be system independent , while at the same time adapted to the chemical structure . 
especially in the case of large adsorbatesthe completely unbiased way in which the structural phase space is sampled in ccs does not allow for an efficient structural search for conformational changes upon adsorption that leave the molecular connectivity intact .the chemically most intuitive set of coordinates for such a situation would instead be internal coordinates ( ics ) , namely bond stretches , angle bends , or torsions .already in the seminal work on bh of wales and doye , the potential usefulness of such coordinates has been noted , and later shown to be beneficial for structures connected by double - ended pathways. one of the first applications of the idea of using ics for global geometry optimization was reported in the context of protein - ligand docking , which developed into the so - called internal coordinate mechanics ( icm ) model , further extending the above mentioned concept of meaningful building blocks . another example of global optimization in ics was the attempt of introducing dihedral angles into the framework of the so - called deterministic global optimization . global optimization of clusters and molecules in ics using the z - matrix representation was suggested by dieterich and hartke . however , automatic construction of a z - matrix is close to impossible for larger , more complicated systems , especially organic molecules containing rings . besides, it also does not eliminate the problem of coordinate redundancies , which generally limits the applicability of this approach .especially suitable for a system independent description of complex molecular structures are instead so - called delocalized internal coordinates ( dics ) , _ i.e. _ non - redundant linear combinations of ics , that have been extensively used for efficient local structure optimization of covalent molecules , crystalline structures , and for vibrational calculations . in our recent work , we implemented such automatically generated collective curvilinear coordinates in a bh global sampling procedure .the similarity of these coordinates to molecular vibrations does yield an enhanced generation of chemically meaningful trial structures , especially for covalently bound systems . in the application to hydrogenated si clusters ,we correspondingly observed a significantly increased efficiency in identifying low - energy structures when compared to cc trial moves and exploited this enhancement for an extensive sampling of potential products of silicon cluster soft landing on the si(001 ) surface . in the present work, we provide a detailed methodological account of this curvilinear coordinate global optimization approach , and extend it to a conformational screening of adsorbates on surfaces .we do this by introducing constraints and extending the coordinate system to include lateral surface translations and rigid adsorbate rotations .these curvilinear coordinates are constructed automatically at every global optimization step and , similarly to the original bh scheme , we pick a random set of coordinate displacements with which a trial move is attempted . 
testing this for the _ trans_--ionylideneacetic acid molecule adsorbed on a au(111 ) surface ( as an example of a complex functionalized organic adsorbate ) , and for methane adsorbed on a ag(111 ) surface ( as an example of a small adsorbate with a shallow pes and many minima ), we find that dic trial moves increase the efficiency of bh by both a more complete sampling of the possible surface adsorption sites and by reducing the number of molecular dissociations .the paper is organized as follows : in chapter [ methods ] we outline traditional approaches to global structure optimization , explain the construction of dics and our definition of a complete coordinate set for adsorbates on surfaces , and how these coordinates are applied for conformational structure searches in a global optimization framework such as basin hopping . in chapter[ computational ] we summarize the computational details of our benchmark studies on the surface - adsorbed aggregates presented in chapter [ results ] .we conclude our work in chapter [ conclusions ] .additional remarks on software and algorithms can be found in appendix [ appendix - winak ] , details on dft calculated diol stability can be found in appendix [ appendix - dft ] .a variety of global optimization procedures exists to date , including simulated annealing , genetic algorithms , bh , and many variants and combinations thereof . one reason for the success of basin hopping might be the intriguing simplicity of the approach , summarized as follows : 1 .displace a starting geometry with a random _ global _ cartesian coordinate ( cc ) trial move normalized to a step width ; 2 .perform a _local _ structure optimization to a minimum energy structure with energy ; 3 .pick a random number and accept the new minimum energy structure with the probability where corresponds to an effective temperature and refers to the energy of the current lowest energy structure ; 4 .if the structure is accepted , place it as a new starting point and proceed with point 1 . following this procedure , as illustrated in fig .1 , the pes is sampled until all chemically relevant minima are found . a sufficiently high ensures that also higher - lying minima are accepted with adequate probability and a large area of the pescan be sampled .the size of trial moves must be large enough to escape the basin of attraction of the current local minimum , but small enough not to lead to regions of the pes that are of little interest , _e.g. _ dissociated structures or structures with colliding atoms .the corresponding displacement is constructed from random cartesian vectors normalized to the chosen step width .this approach has proven highly efficient for optimization of clusters and biomolecules . in the following we will discuss a modification of this general algorithm only with respect to point 1 by changing the construction of the displacement vector . illustration of the basin hopping global optimization approach .random trial moves ( straight arrows , 1 ) displace the geometry towards a new basin of attraction and local optimization ( zigzag arrows , 2 ) identifies the corresponding minimum energy structure .the identified minima will be rejected ( 3 ) or accepted ( 4 ) with a certain probability given by eq . 1 . ]ics can be defined in many different ways .a typically employed approach is to define a set of coordinates in a body - fixed frame as a set of bond stretches , angle bends , torsional modes , and out - of - plane angles . 
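as an aside , the four - step recipe above maps almost one - to - one onto code . the following is a minimal , self - contained sketch rather than the implementation used in this work : it relies on ase with the emt calculator purely as a stand - in for the dftb energetics , uses plain cartesian trial moves as in point 1 , and writes the acceptance rule in the standard metropolis form ( the equation referenced as eq . 1 did not survive the text extraction ) .

```python
import numpy as np
from ase.build import molecule
from ase.calculators.emt import EMT
from ase.optimize import BFGS
from ase.units import kB

def basin_hopping(atoms, nsteps=25, dr=0.4, temperature=300.0, fmax=0.025, seed=1):
    """Minimal basin-hopping sketch: random Cartesian trial move (1), local
    optimization (2), Metropolis acceptance against the currently accepted
    minimum (3/4)."""
    rng = np.random.default_rng(seed)
    current = atoms.copy()
    current.calc = EMT()                       # stand-in for the DFTB energetics
    BFGS(current, logfile=None).run(fmax=fmax)
    e_current = current.get_potential_energy()
    minima = [(e_current, current.copy())]
    for _ in range(nsteps):
        trial = current.copy()
        # 1. random global Cartesian displacement normalized to the step width dr
        move = rng.uniform(-1.0, 1.0, size=(len(trial), 3))
        trial.positions += dr * move / np.abs(move).max()
        # 2. local optimization to the nearest minimum-energy structure
        trial.calc = EMT()
        BFGS(trial, logfile=None).run(fmax=fmax)
        e_trial = trial.get_potential_energy()
        # 3./4. Metropolis criterion with an effective temperature
        if rng.random() < np.exp(min(0.0, -(e_trial - e_current) / (kB * temperature))):
            current, e_current = trial, e_trial
            minima.append((e_trial, trial.copy()))
    return minima

# example: a tiny gas-phase test case (EMT parameters for H/C/O are crude)
found = basin_hopping(molecule('H2O'), nsteps=5)
```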
any displacement from a given point in cartesian coordinate space can then be transformed to a set of internal coordinate displacements by where the set of coordinates in -space can be chosen to be non - orthogonal and complete ( _ e.g. _ via a z - matrix ) , however this is often difficult and highly system specific .an alternative approach is the construction of a highly redundant set of primitive ics to represent the independent degrees of freedom of an atom system . pulay , fogarasi , baker , and others have shown that an orthogonal , complete set of coordinates can be constructed from such a redundant coordinate set simply by singular value decomposition ( svd ) of the matrix . by construction ,matrix is positive semi - definite and hermitian , and satisfies \mathbf{u}^\dagger \quad .\ ] ] in eq .[ eq - svd ] , is a set of vectors in the space of primitive ics that defines linear combinations thereof .svd yields a subset of such independent vectors with non - zero eigenvalues that constitute a fully orthogonal system of dics . these coordinates span all primitive ics that were previously defined in . is therefore nothing else than a jacobian that transforms displacements in the space of primitive internals into displacements in the space of dics in the last equation we have defined the transformation matrix of dimension that transforms cc into dic displacements . unfortunately the back transformation can not simply be done by inversion as the matrix is non - quadratic and therefore singular .however , we can define a generalized inverse that connects displacements in cc and dic space where equation [ eq - backtransformation ] can easily be verified by multiplication from the left with . utilizing the above expressions we have thus established a bijective transformation between the displacements and .however , transformation of the absolute positions of atoms in a molecule , their nuclear gradients and second derivatives requires iterative backtransformation from dic to cc space .this is due to the fact that is simply a linear tangential approximation to the curvilinear hyperplane spanned by coordinates .further details on dic transformation properties can be found in ref . . in summary, the here presented construction allows to define displacements in terms of a random set of dics and expression thereof as cc displacements , which can then be used to generate a new trial structure from a starting geometry ( _ cf . _ basin hopping procedure , point 1 ) .illustration of the coordinate definition for a set of complete delocalized internal coordinates ( cdic ) .curvilinear coordinates are constructed by partitioning the system into subsystems such as molecule and surface ( here shown for methane adsorbed on ag(111 ) ) and associating each of those with a set of translations ( depicted in orange ) , rotations ( purple ) , and dics ( red ) . , , refer to translation , rotation , and dic vectors , while indices a and s refer to the adsorbate and the surface subsystem , respectively . , , and refer to the number of atoms in the complete system or respective subsystem . ] in the above presented formalism we have constructed dics , which fully describe all ( body - fixed ) deformations of a molecule . 
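the linear algebra above is compact enough to sketch directly . in the snippet below the wilson b matrix ( derivatives of the redundant primitives with respect to the cartesian coordinates , shape n_primitives x 3n ) is assumed to be given ; its construction is not shown . the single - step backtransformation uses a pseudo - inverse , keeping in mind that absolute geometries require the iterative procedure mentioned above .

```python
import numpy as np

def delocalize(b, tol=1e-8):
    """Construct delocalized internal coordinates from a redundant Wilson B
    matrix (n_primitives x 3N).  Returns the eigenvector matrix U spanning the
    non-redundant subspace and the Jacobian V = U^T B that maps Cartesian
    displacements onto DIC displacements."""
    g = b @ b.T                                   # symmetric, positive semi-definite
    eigval, eigvec = np.linalg.eigh(g)
    nonredundant = eigval > tol * eigval.max()    # typically 3N - 6 vectors survive
    u = eigvec[:, nonredundant]
    return u, u.T @ b

def cart_to_dic(v, dx):
    """Forward transformation of a (flattened) Cartesian displacement."""
    return v @ dx

def dic_to_cart(v, ds):
    """Single-step backtransformation via the generalized inverse of V."""
    return np.linalg.pinv(v) @ ds

# toy check with a random, full-rank B matrix for 3 atoms (9 Cartesians)
rng = np.random.default_rng(0)
b = rng.normal(size=(12, 9))
u, v = delocalize(b)
ds = cart_to_dic(v, rng.normal(size=9))
assert np.allclose(cart_to_dic(v, dic_to_cart(v, ds)), ds)
```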
by construction , overall translations and rigid rotations of isolated molecules and clusters are removed due to their invariance with respect to these coordinates .re - introducing these degrees of freedom can in some cases be useful , for example in the description of molecules or clusters adsorbed on surfaces , in dense molecular arrangements , or organic crystals .in such systems the position and orientation of a tightly - bound subsystem , such as a molecule , with respect to the rest of the system , such as a surface or the molecular neighbors , is determined by overall weaker interactions and forces .one can account for this specific chemistry of the system by partitioning it into two ( or more ) subsystems ( _ e.g. _ adsorbate and surface ) , for each of which one constructs a set of 3 rigid translations , 3 rigid rotations , and dics , where is the number of atoms in subsystem a. in the following we will refer to such a set of coordinates as _ complete _ delocalized internal coordinates ( cdics ) .this idea is illustrated in fig .[ fig - cdc - scheme ] .the pictured system , namely methane on a ag(111 ) surface , is partitioned into molecule and surface , and each of these two subsystems is assigned its own set of coordinates , the concatenation of which comprises a full - dimensional coordinate vector that is equivalent to ccs .the translations can be added as simple cartesian unit vectors of the center of mass ( or any set or subset of atoms ) , as symmetry - adapted linear combinations thereof , or even as fractional ccs in a unit cell , for example to adapt to the symmetry of a given surface unit cell . for the definition of sub - system rotations we have chosen to employ nautical or tait - bryan angles .hereby the three rotation angles , , ( also called yaw , pitch , and roll ) are defined around three distinct rotation axes with respect to a reference orientation .this reference orientation is given by the molecular geometry from which the dics are constructed and establishes an eckart frame .changes to orientation are identified and applied by standard rigid - body superposition using quaternions . in the systems where the above partitioning makes sense chemically , the concomitant decoupling of coordinates may be beneficial for the description of dynamics and global optimization . in caseswhere , for example , no significant surface reconstruction is expected upon adsorption , it may not be useful to construct dics for the substrate , where chemical bonding is more isotropic . instead, cdic partitions can be constructed for arbitrary subsets of coordinates ( _ e.g. _ the adsorbate ) in order to then be combined with the remaining ccs ( _ e.g. _ the metal substrate ) .finally , cdics ( as well as dics ) make the definition of arbitrary coordinate constraints fairly straightforward as is shown in subsection [ section - constraints ] .starting from a set of dics , constraints on primitive internals such as bond stretches , angle bends , or torsions can be applied by projection from the initial set of dic vectors . the constraint vector is taken as a simple unit vector in the space of primitive ics 1 \\ \vdots \\[1ex ] 0 \\ 0 \\ \end{smallmatrix } \right ) \quad .\ ] ] correspondingly , the projection of this constraint vector onto the eigenvector matrix yields a vector in the space of dics ( denoted by the tilde sign ) .all vectors need to be orthogonalized with respect to ( _ e.g. 
_ using gram - schmidt orthogonalization ) to ensure that the primitive coordinate is removed from the dics .the resulting matrix is then used to transform back to ccs using the inverse of in the same way as in eq .[ eq - backtransformation ] .constraints on sub - system translations or rotations in cdics can be applied by simply removing them from the available set of coordinates in -space .the iterative nature of the backtransformation to absolute ccs may require an iterative coordinate constraint algorithm such as shake . cartesian constraints can be applied by simply excluding atoms or ccs from the reference geometry from which the dics are constructed , or by projections in cartesian space equivalent to the above projections in -space . summarizing the above , we have defined a set of complete , orthogonal curvilinear coordinates describing all internal motions of molecules , which can be constructed in a fully automatic manner from a set of ccs . for systems with multiple molecules , aggregates , or for surface - adsorbed moleculeswe can supplement these with translational and rotational degrees of freedom for the individual subsystems . any changes or displacements applied in these coordinates can be transformed back to ccs and be readily applied in this space .displacements in form of dics or cdics yield molecular deformations that are more natural to the chemistry of the system , simply due to their construction based on the connectivity of atoms .we have recently shown that application of such coordinates in the framework of global optimization methods can largely facilitate a structure search by preferential bias towards energetically more favorable geometries . modification of a global optimization algorithm in terms of dic trial moves on the example of the bh procedure sketched above can be summarized as follows : 1 . displace a starting geometry with a random dic ( cdic ) trial move .therefore : * construct a set of dics ( cdics ) ; * apply constraints , if necessary ; * pick a random ( sub-)set of dics ( cdics ) and normalize them to a given step width ; * transform coordinates back to cartesian space using eq .[ eq - backtransformation ] and apply them to the structure ; 2 . perform a _local _ structure optimization to a minimum energy structure with energy ; 3 .pick a random number and accept the new minimum energy structure with the probability 4 .if the structure is accepted , place it as a new starting point and proceed with point 1 .we consider three test sytems : retinoic acid ( rea , _ cf . _ fig .[ fig - gas - pics ] ) in gas phase ; _ trans_--ionylideneacetic acid ( -acid , _ cf .[ fig - beta ] ) adsorbed on a six - layer , surface unit cell au(111 ) slab ; and methane ( ch ) on a four - layer , surface unit cell ag(111 ) slab .all test systems were built using the ase ( atomic simulation environment ) package . energetics for the global optimization runs were calculated with density functional - based tight - binding ( dftb ) performed with the dftb+ v1.1 code . to correctly account for surface - molecule interactions in the -acid on au(111 ) case , the tkatchenko - scheffler screened dispersion correction method vdw was applied as an _ a posteriori _ correction to the dftb results . since vdw was not available in dftb+ at the time , we have neglected the environment - dependent rescaling of atomwise dispersion coefficients .environment - dependence of dispersion interactions in conjunction with tight - binding methods has been introduced recently . 
as convergence criterion for the self - consistent dftb charges , 10 electrons was used for all systems .dftb parameters for interaction between c , h , o , au , and ag have been generated as described in refs . and are available upon request .si and h parameters are described in ref . .we have used the refinements as described in ref . .we implemented the coordinate generation as well as a bh procedure in a modular python package ( ` winak ` , see appendix [ appendix - winak ] ) . the set of redundant internal coordinates was constructed by including bond lengths , bending angles , and dihedral angles .bond lengths were included if they were within the sum of the covalent radii of the atoms plus 0.5 .bending angles were included if the composing atoms were bonded to each other and if the bending angle was below 170 .dihedral angles were included if the composing atoms were bonded and if the dihedral angle was smaller than 160 .the effective temperature in the bh procedure was set to 300 k. trial moves in unconstrained dics / cdics and with all stretches constrained ( constr.dics/cdics ) were always constructed using linear combinations of a varying percentage of all modes , _e.g. _ 10% , 25% , 75% or 100% of all available coordinates .these subsets were randomly chosen and in the construction of the displacement vector each dic mode was scaled with a random factor in the range ] to the , and coordinates of every atom . as a normalization, all displacements were divided by the single largest component in ccs , such that the maximum absolute displacement of an atom in , , or direction was 1 .subsequently the whole displacement vector was multiplied by a tunable dimensionless factor _ dr _ , _ i.e. _ the step width introduced above .the global minimum was always used as initial starting geometry and local optimizations were performed until the maximum residual force was smaller than 0.025 ev / .local optimizations were performed using a standard quasi - newton optimizer and cartesian coordinates .in our recent work we have already described the efficiency of the dic - based structure search approach for metallic and hydrogenated silicon clusters in gas phase and when adsorbed on surfaces . in the following we shortly review these results in the present context and proceed to evaluate the efficiency and applicability of structure search based on dic ( cdic ) trial moves for global optimization of molecules in gas phase and adsorbed on surfaces . in our previous work we have compared the efficiency of bh when using ccs and dics as trial moves by performing a series of 500-step runs with different parameters ( step width , percentage of dics ) for a hydrogenated silicon cluster si both in isolation and adsorbed on a si(001 ) surface .regardless of the choice of parameters , all the runs in dics were shown to sample a more relevant region of the pes , that is , the minima identified with dic trial moves are more often intact and the distribution of their energies is invariably centered around lower energies compared to the corresponding cc run with equivalent parameters . 
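for reference , the trial - move construction just parameterized ( random subset of modes , random scaling , normalization to the largest cartesian component , multiplication by the step width _ dr _ ) can be written down in a few lines . in the sketch below the array of dic / cdic displacement vectors , already backtransformed to cartesian space , is assumed to be precomputed , and the scaling interval [ -1 , 1 ] is an assumption , since the exact range did not survive the text extraction .

```python
import numpy as np

def trial_move(mode_displacements, percentage=0.25, dr=1.5, rng=None):
    """Build one trial displacement from a random subset of (c)DIC modes.

    mode_displacements : array of shape (n_modes, n_atoms, 3) holding the
        DIC/cDIC displacement vectors expressed in Cartesian space.
    percentage : fraction of all modes mixed into the move (0.10, 0.25, ...).
    dr : step width; after normalization the largest single Cartesian
        component of the move equals dr.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_modes = len(mode_displacements)
    n_pick = max(1, int(round(percentage * n_modes)))
    picked = rng.choice(n_modes, size=n_pick, replace=False)
    weights = rng.uniform(-1.0, 1.0, size=n_pick)      # assumed scaling interval
    move = np.tensordot(weights, mode_displacements[picked], axes=1)
    move /= np.abs(move).max()        # largest |x|, |y| or |z| component -> 1
    return dr * move                  # scaled to the step width
```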
furthermore , cc - based runs almost always failed to identify the global minimum , whereas almost all dic - based runs found it .finally , as dic trial moves produce less strained geometries , the local relaxation at each global step necessitates on average 30% fewer local optimization steps , which drastically reduces the overall simulation time .we ascribe the superior performance of dics to their collective account of the structure and chemistry of the system. dics by construction take into account the connectivity of atoms to produce collective displacements that resemble vibrational modes .such displacements are largely composed of concerted motions of groups of atoms , thus naturally suitable to explore configurational subspaces in which certain bonding patterns are preserved .the latter condition is particularly desirable in the investigation of both the gas - phase and the adsorption behavior of large functional molecules , where two intertwined aspects need to be addressed : _i ) _ the structural integrity is a necessary prerequisite to enable functionality ; _ ii ) _ even small conformational changes , or minor isomerizations , can determine dramatic variations of electronic properties ( _ e.g. _ of molecular switches ) .the structural screening of molecules of this kind should therefore be constrained to subspaces of the pes that satisfy these aspects , while ideally retaining a sufficient degree of flexibility to be able to capture non - intuitive , potentially relevant geometries .chemical formula ( top ) and ball - and - stick representation ( bottom ) of isolated all - trans retinoic acid ( rea ) .carbon atoms are colored gray , hydrogen atoms are colored white , and oxygen atoms are colored red . the global minimum geometry in gas phase is shown . ] in the case of large organic molecules , we need to additionally account for the fact that their degrees of freedom show largely varying bond stiffness .high - frequency stretch motions will for example generally not be relevant for a conformational structure search . to test the efficiency of the dic - bh approach in such cases , global optimization of isolated rea ( _ cf .[ fig - gas - pics ] ) has been performed using 100 trial moves based on random ccs and displacements along varying subsets of dics .table [ tab - cc - vs - dic ] compiles the number of symmetry - inequivalent minima found .it is evident that random ccs are particularly ill - suited for large organic molecules . in most of the cases the molecule ended up dissociated and once the molecule is torn apart , regaining an intact structure by random cc displacements is highly improbable . 
for most applications involving organic molecules , dissociated structures are not of interest and the computing time used for the local optimization step is essentially wasted . when using random ccs , this can amount to up to 95% of the overall computing cost .

table [ tab - cc - vs - dic ] . number of symmetry - inequivalent rea minima found in runs of 100 trial moves for different displacement types and step widths :

  step width        0.55   0.70   0.90   1.20
  cc                   1      0      -      -
  dic 10%              2      2      3      3
  dic 25%              5      2      2      1
  dic 100%             1      3      2      1

  step width        0.55   0.70   1.50   2.00
  constr.dic 10%       2      3      8      8

  step width        0.70   0.90   1.50   2.50
  constr.dic 25%       2      4      5      9
  constr.dic 100%      2      6      4      9

the use of dic displacements already greatly improves on this . steps that lead to dissociations can be reduced to 75% of the overall number of steps , while still finding more relevant minima compared to random ccs . the code responsible for creating the ics automatically detects dissociations , such that they can be discarded directly at runtime prior to local optimization . this shows that dics can already be useful in their most general form . however , for an efficient search of the conformational phase space of a large organic molecule this can be improved even further by removing bond stretches , as explained in chapter [ methods ] . when the bond stretches are fully constrained ( constr.dics in table [ tab - cc - vs - dic ] ) , a much finer screening of the pes is possible , and more unique and intact minima are found . many different conformers ( for example eclipsed / staggered stereochemistry ) can be observed which would be extremely difficult to find using trial moves based on ccs or unconstrained dics . dissociation events can be reduced to as low as 33% of all steps even at large step widths . while in the simple example of rea global optimization might not be necessary to identify the most relevant conformations , these results demonstrate that customizing dics with structure - preserving constraints can make them more applicable to covalently bound systems . we conclude with a note on the step width and the percentage of dics . due to the normalization , the result of the algorithm is highly dependent on the percentage of dics used . the more dics a displacement is made up of , the less influence a single dic has on the overall displacement . individual dics contribute more to the 10% than to the 25% dic displacements . to find certain minima some atoms might have to be moved over a few ångström , so a large step width would be of advantage . however , for our test system , cc displacements already failed to find a single minimum for a step width as small as 0.70 . for dic displacements the step width can and should be chosen much higher than that . optimization runs using constr.dic displacements performed best for step widths above 2.00 . we proceed to apply the here presented approach to two molecules adsorbed on surfaces , a -acid molecule on au(111 ) and methane on ag(111 ) . the first case exhibits a large number of internal degrees of freedom , the second a shallow pes regarding translations and rotations on the surface . chemical formula ( left ) and ball - and - stick representation ( right ) of _ trans_--ionylideneacetic acid ( -acid ) . carbon atoms are colored gray , hydrogen atoms are colored white , and oxygen atoms are colored red . the global minimum geometry in gas phase is shown . ]
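the runtime dissociation filter mentioned earlier in this section can be as simple as a connectivity check : recompute the bond list of a relaxed trial structure with the covalent - radii criterion used for the primitive coordinates ( sum of covalent radii plus 0.5 ) and compare it with the reference connectivity . the sketch below , based on ase , is one possible implementation and not the actual winak criterion .

```python
from ase.data import covalent_radii

def bond_pairs(atoms, margin=0.5, mic=False):
    """Atom pairs closer than the sum of their covalent radii plus `margin`
    (in Angstrom); set mic=True for periodic slabs."""
    pairs = set()
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            cutoff = (covalent_radii[atoms.numbers[i]]
                      + covalent_radii[atoms.numbers[j]] + margin)
            if atoms.get_distance(i, j, mic=mic) < cutoff:
                pairs.add((i, j))
    return pairs

def is_dissociated(trial, reference_bonds, margin=0.5, mic=False):
    """True if any bond of the reference connectivity is broken in `trial`."""
    return not reference_bonds.issubset(bond_pairs(trial, margin, mic))

# usage: reference_bonds = bond_pairs(start_geometry); trial moves whose
# relaxed structure satisfies is_dissociated(...) are discarded.
```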
even though there might be crystallographic data for large biological molecules , the current resolution of experiments is often insufficient to resolve the individual atomic positions of larger functional molecules adsorbed on surfaces . closely related to their role in nature , functional molecules could act as molecular switches when adsorbed on a metal surface . therefore , prediction of key structural elements by electronic structure methods is crucial . we study the performance of cdic - based global optimization on the example of a smaller analogue of rea , namely _ trans_--ionylideneacetic acid ( -acid , _ cf . _ [ fig - beta ] ) adsorbed on a au(111 ) surface . as the initial starting geometry , the molecule in the conformation representing the global minimum in gas phase was placed flat lying at a random adsorption site on the surface , and the structure was relaxed using local optimization . no surface relaxation was allowed . this was used as a starting point for all bh runs . as a reference , 100 global steps were performed for step widths of 0.25 , 0.5 , and 1.0 using cc trial moves . this is compared to 100 - global - step runs using 10% , 25% and 100% mixtures of constr.cdics , all with a step width of 1.5 . for surface - adsorbed molecules , the shortcomings of standard cc displacements become even more apparent , as shown in table [ tab - cc - sur ] . in 70 - 90% of all global optimization steps , a dissociated structure is obtained . most commonly the cyclohexenyl ring is torn apart or the conjugated side chain is broken . in the context of conformational switching , this would render such a minimum irrelevant for the process under study . keeping important structural motifs intact is difficult with cc displacements . one could argue that setting a smaller step width remedies this problem ; however , our results show that , as the step width decreases , the starting geometry is found with higher probability . there seems to be no optimal setting for the step width parameter : either the molecule dissociates , or the sampling is not able to escape the initial basin of attraction . as a result , we did not obtain a single new geometry other than the starting point , and there was almost no lateral adsorption site sampling . cc trial moves are thus not applicable to this case .

table [ tab - cc - sur ] . outcome of 100 global steps with cc trial moves ( number of steps ) :

  step width                  0.25    0.5    1.0
  dissociations                 73     93     98
  starting geometry             24      7      2
  different adsorption site      3      0      0
  new structures                 0      0      0

table [ tab - di - sur ] . outcome of 100 global steps with constr.cdic trial moves at a step width of 1.5 ( number of steps ) :

  fraction of cdics            10%    25%   100%
  dissociations                 62     38     37
  starting geometry              8      3     30
  different adsorption site     19     48     23
  new structures                11     11     10

the result of the sampling is changed dramatically by the introduction of cdic trial moves with constrained stretches ( constr.cdic , _ cf . _ table [ tab - di - sur ] ) . using 25% and 100% of the available constr.cdics reduces dissociation events to about 40% of all steps . for 10% constr.cdic displacements at a step width of 1.5 the number of dissociations is comparable to cc trial moves using a step width of 0.25 , however at a largely different sampling efficiency . when fewer dics are mixed into a displacement , the influence of each individual dic in the final trial move is larger due to normalization . as a consequence , and consistent with the gas - phase sampling results above , 10% constr.cdic displacements introduce the most significant changes to the molecular geometry , _ i.e. _ result in more dissociations .
inversely , 100% constr.cdic displacements sample the starting geometry more often than 10% or 25% constr.cdic trial moves . as the number of translations and rotations in comparison to the number of internal degrees of freedom in the overall displacement is smaller at higher percentages of cdics ,lateral sampling is less efficient than when using 25% or 10% constr.cdic trial moves .it is not straightforward to find the optimal position and orientation of a complex adsorbate like the one studied here , and efficient lateral sampling is therefore strongly desired .in contrast to standard cc based trial moves , different rotations and various adsorption sites of the molecule have been sampled during all three runs .a closer analysis of this aspect will be presented in section [ section - methane ] .conformational structure search for -acid on au(111 ) serves the purpose of finding new and stable configurations .especially interesting are _ cis _ isomers ( _ cf .[ fig - strucis ] , structures b and c ) since they could play an active role in a molecular switching process .these structures are particularly hard to uncover by unbiased sampling , since a concerted motion of a large number of atoms is necessary .we were able to find such minima in two out of three constr.cdic based runs ( 10% and 100% constr.cdic trial moves ) with a relatively short number of steps per run ( 25 and 88 global steps , respectively ) .it should be noted that cc based sampling , however , was not able to uncover any new conformations apart from the starting geometry .other minima of interest are conformations of the molecule that might all contribute to the apparent finite - temperature geometry observed in experiment , such as different ring conformations ( chair , boat , half - chair etc . ) or changes in internal degrees of freedom ( chain to ring angle , staggered vs. eclipsed conformations of methyl groups etc . ) .the broadest range of such structures has been identified using 10% constr.cdic displacements and a step width of 1.5 .configurations were found where the ring was bent towards the surface and away from it . we have also found stuctures where the ring is in a boat configuration ( _ cf . _[ fig - strucis ] , structure a ) or twisted with respect to the chain .methyl groups both on the ring and on the chain were found in different rotations ( staggered conformation ) .another group of minima involved modifications of the acid group , rotating either the oh or the whole cooh group by 180 .25% constr.cdic trial moves found similar minima , but were not able to sample changes in the acid group .instead , the boat configuration of the ring was sampled more often . 
the third setup ( 100% constr.cdic displacements )did not result in a completely rotated acid group , the oh group was found flipped once .this run was the only one that sampled half - chair conformations of the ring .a somewhat unexpected result was the repeated discovery of a diol structure ( _ cf .[ fig - strucis ] , structure d ) in all constr.cdic runs , while cc based runs always failed to find this structure .we have performed dft calculations that predict a stabilizing effect of the surface by 0.22 ev on this structure as compared to the gas - phase minimum conformation ; however , the acid structure nevertheless remains the global minimum ( see appendix [ appendix - dft ] ) .the energetically most favorable structures that were found by all runs are slight variations of the global minimum conformation found in gas phase , adsorbed such that the oxygen atoms are situated above a bridge site . in summary ,cc displacements were found to either repeatedly sample the starting geometry at small step widths or dissociated structures at larger step widths . using cc trial moves we were not able to find any step width that is both large enough to overcome barriers on the pes and simultaneously small enough not to dissociate the molecule . expressing the displacements in constr .cdics , however , enables a finely tunable sampling of phase space. indeed a small number of steps are sufficient to find a plethora of molecular configurations on the surface as well as different adsorption sites and rotations .methane ( ch ) adsorbed on ag(111 ) . shownare two neighboring primitive surface unit cells with the initial starting geometry used in the sampling runs on the left hand side , while high - symmetry adsorption sites are highlighted on the right hand side : fcc hollow site ( light red circle ) , hcp hollow site ( dark red circle ) , top sites ( light blue star ) , bridge sites ( dark blue star ) .all top sites , as well as all bridge sites , are symmetry equivalent . ]the example of -acid on au(111 ) illustrated the importance of chemically - adapted coordinates in sampling the internal degrees of freedom of an adsorbate on a surface .to better understand the sampling of combined surface / adsorbate degrees of freedom such as the lateral adsorption site and the orientation of the molecule on the surface we next study ch ( methane ) adsorbed on ag(111 ) .different adsorption sites of ch/ag(111 ) are found to have similar stability and the pes is dominated by small barriers and little corrugation between these sites . as a consequence , it should be an ideal test system for which we can sample all adsorption geometries in the unit cell . even though the system appears simpler than the previous test cases , a large number of distinct minima exists .the surface has four different high - symmetry adsorption sites ( _ cf . _[ fig - fcc ] ) and although the molecule is highly symmetric , ch can adsorb in multiple rotational orientations with respect to the surface . 
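the four inequivalent starting points can be generated directly with ase 's surface builders . the snippet below is a sketch of such a setup , not the production workflow : emt is again only a placeholder for the dftb energetics , the adsorption height of 2.5 ( angstrom ) is a guess , and the substrate is kept frozen as in the actual runs .

```python
from ase.build import fcc111, add_adsorbate, molecule
from ase.constraints import FixAtoms
from ase.calculators.emt import EMT
from ase.optimize import BFGS

energies = {}
for site in ('ontop', 'bridge', 'fcc', 'hcp'):     # high-symmetry sites of fcc(111)
    slab = fcc111('Ag', size=(2, 2, 4), vacuum=10.0)
    n_slab = len(slab)
    add_adsorbate(slab, molecule('CH4'), height=2.5, position=site)
    slab.set_constraint(FixAtoms(indices=list(range(n_slab))))   # frozen substrate
    slab.calc = EMT()                              # stand-in for DFTB(+vdW)
    BFGS(slab, logfile=None).run(fmax=0.025)
    energies[site] = slab.get_potential_energy()

print(energies)
```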
in principle , most of these minima might be found by visual inspection ; however , as emphasized by peterson , as soon as more than one adsorbate is introduced or if the surface features steps or defects , combinatorics makes brute - force sampling and analysis methods quickly intractable .we performed global optimization runs with cartesian displacements ( step width 0.4 ) as well as constr .cdic displacements made up of 25% and 75% of the available coordinates ( step width 1.25 ) until 260 intact ch minima were found .this took 600 global optimization steps with cc trial moves and 500 with constr.cdic trial moves , respectively .step widths were chosen according to the best performance in shorter test runs ( 100 steps each ) . as initial starting geometry, the molecule has been positioned at a top site with one hydrogen atom pointing away from the surface , followed by local optimization ( _ cf ._ left hand side of fig .[ fig - fcc ] ) .optimization results have been obtained for ch in a ( 2x2 ) unit cell to mimick a lower coverage at the surface .2d histogram of the ,-position of the carbon atom in ch in all 260 intact methane minima identified over the course of the 25% constr.cdic displacements ( left side ) and cc displacements ( right side ) sampling run ( see text ) .lighter yellow hexagons indicate a rare sampling of this position in the surface unit cell , darker orange or red hexagons are shown when this position was sampled more often .no hexagons are drawn when the simulation failed to ever visit this area on the surface .calculations were performed for a ( 2x2 ) unit cell to mimick lower coverage , but folded back into a primitive unit cell as indicated by black lines . ] to achieve a rigorous sampling on a metal surface , the first prerequisite is a complete lateral exploration of the surface unit cell . in our test case, this means that the simulation has to traverse through all four distinct high - symmetry adsorption sites ( right hand side of fig .[ fig - fcc ] ) .we therefore monitor the position of the carbon atom in all intact methane minima identified over the course of the sampling run and plot the result in a 2d histogram in fig [ fig-2dhist ] .it immediately stands out that cc trial moves fail to sample larger areas of the surface unit cell .this includes rare sampling of the bridge sites , the center of the fcc hollow site , as well as the surroundings of the hcp hollow site .top sites are instead sampled repeatedly , and computational sampling time is wasted in these revisits . 
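one way to reproduce the lateral - coverage analysis of fig . [ fig-2dhist ] is to collect the fractional in - plane coordinates of the carbon atom of every intact minimum and fold them from the ( 2x2 ) supercell back into the primitive cell . the sketch below makes the folding factor an explicit assumption of the ( 2x2 ) setup used here .

```python
import numpy as np

def folded_carbon_xy(minima, supercell=(2, 2)):
    """Fractional in-plane coordinates of the carbon atom of each minimum,
    folded back into the primitive surface unit cell.

    minima : list of ase.Atoms objects (relaxed CH4/Ag(111) structures).
    """
    xy = []
    for atoms in minima:
        carbon = int(np.flatnonzero(atoms.numbers == 6)[0])
        frac = atoms.get_scaled_positions(wrap=True)[carbon, :2]
        xy.append(np.mod(frac * np.array(supercell), 1.0))   # (2x2) -> primitive cell
    return np.array(xy)

# the 2D histogram over the primitive cell then follows directly, e.g.
# counts, xedges, yedges = np.histogram2d(xy[:, 0], xy[:, 1], bins=20,
#                                         range=[[0, 1], [0, 1]])
```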
in the case ofconstr.cdic trial moves the distribution of identified minima is more homogeneous and all four high - symmetry sites are visited .an especially interesting finding is that the most often sampled areas are not around the starting site , but around the bridge sites .there is also a slight preference for the fcc hollow site .since translations are explicitly included in constr.cdic displacements on surfaces , it is not a big surprise that our trial moves perform better in this regard .in contrast , a random cc trial move is unlikely to describe a concerted molecular translation across the surface .the second characteristic of a thorough structural sampling in this case is the variety of rotational arrangements of the methane molecule that are found .cc trial moves mostly produce geometries close to the top sites with one hydrogen atom pointing towards the center of the corresponding surface silver atom .a minimum with two hydrogen atoms wedged in - between the bridge site , the others pointing towards the top sites is found twice . the most stable configuration , which is only slightly more favorable than other minima , corresponds to adsorption at the hcp hollow site , with the hydrogen atoms rotated to lie above the ridges between silver atoms ( staggered ) .this geometry at the hcp and fcc hollow site was found in the bh run based on cc trial moves .the constr.cdic trial moves were instead able to find all these rotational arrangements , as well as a number of others .both runs ( 25% and 75% constr .cdic ) sampled a variety of different orientations at the bridge sites and close to the top sites as well .the 75% constr.cdic run even found the molecule flipped around completely once , with one of the hydrogen atoms pointing towards the surface .25% constr.cdic trial moves were the only ones that produced an eclipsed rotational configuration of hydrogen atoms at both hollow sites , with the h atoms oriented along the close - packed rows of the ag(111 ) surface .20 symmetry - inequivalent positions and orientations of ch on ag(111 ) were found in the two cdic ( 25% and 75% ) and one cc displacement based bh global optimization runs combined .depicted is the number of these minima found in each run up to a certain number of global steps .all minima that were encountered during the cc displacement based search were also discovered in at least one of the two constr.cdic runs . ][ fig - unique ] shows how the discovery of new , unique ch minima occurred over the course of the global optimization runs . in all three runscombined , a total of 20 symmetry - inequivalent minima was found , of which the cc based run was only able to find 8 .all these minima were also identified in either one or both of the constr.cdic runs .an interesting finding here is that the constr.cdic runs found twice as many inequivalent geometries as the cc run already after about 100 global optimization steps .after an initial phase of 150 steps , cc trial moves do not lead to any more new minima , whereas constr.cdic runs continue to find more . both effects together clearly show the superior performance of constr.cdic trial moves compared to simple cc trial moves .up to this point we have focused on geometries where the ch molecule remained intact. 
however , cdic trial moves are equally able to efficiently sample the space of chemically different minima that involve hydrogen dissociations and chemical reactions on the surface .cc displacements are completely defined by the single parameter of step width , and there is no straightforward way of fine tuning the search target , for example in terms of focusing on mainly dissociated or mainly intact geometries of ch .cdic displacements , however , enable a straightforward inclusion of constraints that confine configurational space , for example by constraining bond stretches . in order to analyze the distribution of dissociated and intact structures throughout a bh run, we additionally performed 500 steps of an unconstrained cdic run using 25% of available coordinates and a step width of 1.6 .figure [ fig - stacked ] shows the distribution of intact and dissociated minima for different displacement methods . as also found in the case of -acid on au(111 ) , constr.cdic displacements effectively restrict the geometry search to intact ch molecules .more importantly , unconstrained cdic trial moves find a similar distribution of dissociated and intact structures as cc displacements and are therefore equally suited to sample the space of reactive intermediates at the surface .however , this comes at a higher sampling rate of unique minima than achieved with cc trial moves .distribution of all minima found using different displacement methods for the first 500 global optimization steps of each run . shownare results for 25% constrained ( step width 1.25 ) and unconstrained cdic runs ( step width 1.6 ) and the cc run ( step width 0.4 ) .intact ch structures are shown in blue , ch in green , and all other minima ( ch , ch and c ) as orange . ]lastly , there is one more mechanism with which dics increase the computational efficiency of global optimization .the most expensive part of a global optimization step is the local optimization , which typically uses a large number of steps to converge to the local minimum energy structure .reducing the average number of such local optimization steps can lead to a significant computational speed - up . for all test caseswe find that constr.cdic trial moves generally generate geometries that are less strained compared to the ones produced by cc trial moves .this is emphasized by the fact that geometries created from cc trial moves take almost twice as many steps for local convergence as ones generated by 75% constr.cdic displacements , namely on average 528 bfgs steps compared to 267 using a force threshold of 0.025 ev / .as explained in section [ section - rea - au ] , using smaller percentages of constr.cdics introduces more severe changes to the molecule .therefore 25% constr.cdic displacements take slightly longer for convergence as 75% constr.cdic trial moves ( 309 steps ) .overall this nevertheless constitutes a significant reduction in computational cost , consistent with what we already observed in our previous work on silicon clusters . 
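the cost comparison above ( average number of quasi - newton steps per local relaxation ) is straightforward to instrument . a minimal sketch using ase 's bfgs optimizer and the 0.025 ev / force threshold of this work , with emt once more standing in for the dftb energetics :

```python
from ase.calculators.emt import EMT
from ase.optimize import BFGS

def relax_and_count(atoms, fmax=0.025):
    """Locally relax a trial structure and return (structure, energy, n_steps)."""
    relaxed = atoms.copy()
    relaxed.calc = EMT()                    # stand-in for the DFTB energetics
    opt = BFGS(relaxed, logfile=None)
    opt.run(fmax=fmax)
    return relaxed, relaxed.get_potential_energy(), opt.get_number_of_steps()

# averaging the returned step counts over a run gives the per-move cost that
# is compared above for cc versus (c)dic trial moves.
```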
in summary , we thus find that global structure search with dic trial moves not only finds more unique minima than cc displacements , but does so at reduced computational cost as well .we presented a modification of the basin hopping algorithm by performing global optimization trial moves in chemically - motivated curvilinear coordinates that resemble molecular vibrations .this approach has recently been shown to be efficient in structure determination of clusters and was here investigated and extended for the application to organic molecules both in gas phase and adsorbed on metal surfaces .the chemical nature of the collective displacements enables straightforward inclusion of rotations , translations , as well as constraints on these and arbitrary internal degrees of freedom .this allows for a finely tunable sampling of practically relevant areas of phase space for complex systems that appear in nanotechnology and heterogeneous catalysis , such as hybrid organic - inorganic interfaces , organic crystals , self - assembled nanostructures , and reaction networks on surfaces .these systems feature heterogeneous chemical bonding , where stiff energetically favorable bonding moieties , such as covalent molecules , coexist with weak and flexible bonding forces such as interactions between molecules or between a molecule and a surface . for the show cases -acid adsorbed on au(111 ) and methane adsorbed on ag(111 ) we find that collective internal coordinate based trial moves outperform cartesian trial moves in several ways : _i ) _ cdics systematically identify a larger number of structures at equal number of global optimization steps , thus allowing to reduce the number of necessary global optimization steps ; _ ii ) _ constr.cdics are able to restrict structure searches to well defined chemically - motivated subdomains of the configurational space and thus find more relevant structures ; _ iii ) _dics generate less strained structures and reduce the computational cost associated with a single global optimization step . in future workwe plan to utilize the variability of singular - value decomposed collective coordinates in the context of materials structure search to facilitate simulation of surface reconstructions and defect formation , as well as crystal polymorphism and phase stability in organic crystals and layered materials .so far we have only discussed the relevance of these coordinate systems in the context of basin hopping , however displacement moves are a common element of many different stochastic global optimization strategies , such as , for example , minima hopping , where curvilinear constraints can help guide the md . our implementation can easily be extended to feature such algorithms as well .many global optimization procedures can be divided into three steps : displacement , optimization , evaluation and analysis . in order to make dic displacements as easily accessible as possible ,we have developed a modular python - based framework called ` winak ` , where each of the three aforementioned steps can be customized .for instance , in order to change from the basin hopping method to the minima hopping method , instead of a global optimization , an md run can be started , and instead of employing a metropolis criterion , all newly identified minima are accepted . in order to make the code independent of a particular application , each part of the global optimization process has been encapsulated in an individual class . 
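the encapsulation described here can be pictured with a handful of abstract base classes . the names and interfaces below are purely illustrative and do not reproduce the actual ` winak ` api :

```python
from abc import ABC, abstractmethod

class Displacer(ABC):
    """Generates trial geometries (Cartesian, DIC or cDIC moves, MD kicks, ...)."""
    @abstractmethod
    def displace(self, atoms): ...

class Refiner(ABC):
    """Turns a trial geometry into a candidate, e.g. by local optimization or MD."""
    @abstractmethod
    def refine(self, atoms): ...          # returns (candidate, energy)

class Criterion(ABC):
    """Decides whether a candidate is accepted (Metropolis, accept-all, ...)."""
    @abstractmethod
    def accept(self, e_new, e_old): ...

class GlobalSearch:
    """Driver that only talks to the three interfaces above, so that basin
    hopping, minima hopping, etc. differ only in the plugged-in components."""
    def __init__(self, displacer, refiner, criterion):
        self.displacer, self.refiner, self.criterion = displacer, refiner, criterion

    def step(self, atoms, e_old):
        candidate, e_new = self.refiner.refine(self.displacer.displace(atoms))
        return (candidate, e_new) if self.criterion.accept(e_new, e_old) else (atoms, e_old)
```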
exchanging local optimization by an md run can easily and safely be done, independent of the coordinate generation class. the same is true for the displacement step. logging and error handling are managed by analysis routines that work with any combination of subclasses. to ensure compatibility, abstract base classes and templates are provided, on the basis of which custom procedures for the three above-mentioned steps can be developed. this code is designed to be interfaced with ase, which enables a wide range of electronic structure codes to be used in combination with ` winak `. we repeatedly encountered diol isomers in the case of _trans_--ionylideneacetic acid adsorbed on au(111) at the dftb level. to further investigate this, we recalculated the energy differences between acid and diol, both in the gas phase and on the surface, using dft. all calculations were carried out using the perdew-burke-ernzerhof (pbe) exchange-correlation functional, implemented in the fhi-aims code using the light basis set. local optimization was considered converged when the maximum residual force was smaller than 0.025 ev / . calculations including the surface were done on 4 layers of (4) au(111) with a k-grid of . surface atoms were not allowed to relax during optimization. both in the gas phase and on the surface the acid isomer is more stable, which aligns with chemical intuition. however, the energy difference is lowered from 1.09 ev in the gas phase to 0.87 ev for the adsorbed system, _i.e._ the diol is stabilized. the authors would like to thank daniel strobusch and christoph scheurer for supplying the initial coordinate construction code and for fruitful discussions. rjm acknowledges funding from the doe basic energy sciences grant no. de-fg02-05er15677. cp gratefully acknowledges funding from the alexander von humboldt foundation and within the dfg research unit for1282. dp gratefully acknowledges funding from the engineering and physical sciences research council (project ep/j011185/1).
efficient structure search is a major challenge in computational materials science. we present a modification of the basin hopping global geometry optimization approach that uses a curvilinear coordinate system to describe global trial moves. this approach has recently been shown to be efficient in structure determination of clusters [ nano letters 15, 8044 - 8048 (2015) ] and is here extended for its application to covalent, complex molecules and large adsorbates on surfaces. the employed automatically constructed delocalized internal coordinates are similar to molecular vibrations, which enhances the generation of chemically meaningful trial structures. by introducing flexible constraints and local translation and rotation of independent geometrical subunits, we enable the use of this method for molecules adsorbed on surfaces and interfaces. for two test systems, _trans_--ionylideneacetic acid adsorbed on a au(111) surface and methane adsorbed on a ag(111) surface, we obtain superior performance of the method compared to standard optimization moves based on cartesian coordinates.
a phase i trial for a new treatment is generally intended to determine a dose to use in subsequent phase ii and iii testing .phase i cancer trials have the additional complexity that the treatment in question is usually a cytotoxic agent and the efficacy usually increases with dose , and therefore it is widely accepted that some degree of toxicity must be tolerated to experience any substantial therapeutic effects .hence , an acceptable proportion of patients experiencing _ dose limiting toxicities _ ( dlts ) is generally agreed on before the trial , which depends on the type and severity of the dlt ; the dose resulting in this proportion is thus referred to as the _ maximum tolerated dose _ ( mtd ) .in addition to the explicitly stated objective of determining the mtd , a phase i cancer trial also has the implicit goal of safe treatment of the patients in the trial . however , the aims of treating patients in the trial and generating an efficient design to estimate the mtd for future patients often run counter to each other . commonly used designs in phase i cancer trials implicitly place their focus on the safety of the patients in the trial , beginning from a conservatively low starting dose and escalating cautiously .escalation is further slowed by the assignment of the same dose to groups of consecutive patients , as in the widely used 3-plus-3 design , which is convenient to administer and shortens trial duration by simultaneously following patients in groups of 3 . have documented that the overall response rates in these phase i trials are low , and substantial numbers of patients are treated at doses that are retrospectively found to be non - therapeutic . moreover , as pointed out by , these designs are very inefficient for estimating the mtd , which is implied by the 3-plus-3 design to correspond to the case .they proposed a bayesian model - based design , called the `` continual reassessment method '' ( crm ) , to choose the dose levels sequentially , making use of all past data at each stage .more than ninety new phase i methods were published between 1991 and 2006 , and there have been several reviews of the new methods ( e.g. , * ? ? ? * ) . in this paperwe focus on bayesian model - based designs and section [ sec : framework ] describes a general framework to develop and analyze them . 
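before turning to the bayesian framework, the following short python sketch makes the 3-plus-3 escalation rule mentioned above concrete. it implements one common variant of the rule (escalate after 0/3 dlts, expand the cohort after 1/3, stop after 2 or more dlts), and the dose-toxicity probabilities used are purely hypothetical.

    # Simulation sketch of one common variant of the 3-plus-3 rule, included
    # only to make the escalation scheme concrete.  The toxicity probabilities
    # are hypothetical and not taken from any trial.
    import random

    def simulate_3_plus_3(tox_probs, seed=1):
        """Return (index of declared MTD or None, number of patients treated)."""
        random.seed(seed)
        level, treated = 0, 0
        while level < len(tox_probs):
            dlts = sum(random.random() < tox_probs[level] for _ in range(3))
            treated += 3
            if dlts == 1:                        # expand the cohort at the same dose
                dlts += sum(random.random() < tox_probs[level] for _ in range(3))
                treated += 3
            if dlts >= 2:                        # too toxic: previous level is the MTD
                return (level - 1 if level > 0 else None), treated
            level += 1                           # 0/3 or 1/6 DLTs: escalate
        return len(tox_probs) - 1, treated       # highest level reached without stopping

    # hypothetical dose-toxicity curve over 6 dose levels
    print(simulate_3_plus_3([0.05, 0.10, 0.20, 0.30, 0.45, 0.60]))

running such a simulation repeatedly is a simple way to observe the conservative dosing pattern and the imprecise mtd declaration criticized above.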
as shown in sections [ sec : framework ] and [ sec : coh+1step ] , this framework allows one to incorporate the competing aims of a phase i cancer trial by choosing the loss function accordingly .it also enables one to derive certain desirable properties of the design , such as coherence , from the loss function , or to enforce them by using simple reformulations in this framework .section [ sec : examples ] provides implementation details and gives a simulation study comparing bayesian designs that correspond to different loss functions in the setting of a colon cancer trial considered by .a commonly used model - based approach to phase i cancer clinical trial design assumes the usual logistic regression model for the probability of dlt at dose level : in which and is unknown and to be estimated from the observed pairs , where if the subject , treated at dose , experiences dlt and otherwise .the frequentist approach to inference on uses the likelihood function and estimates by maximum likelihood , while the bayesian approach assumes a prior distribution of and uses the posterior distribution for inference on .denote the mtd by and the posterior distribution of based on by , and let denote the prior distribution .the bayes estimate of with respect to squared error loss is the posterior mean , and the crm proposed by uses this posterior mean to set the dose for the next patient , i.e. , . instead of the posterior mean, proposed to set equal to the -quantile of the posterior distribution , where is chosen to be slightly less than in their examples .this design is called `` escalation with overdose control '' ( ewoc ) and is called the `` feasibility bound . '' a sequence of doses is called `` bayesian feasible '' at level if for all , and the ewoc doses are optimal among bayesian - feasible ones ; see .note that the dose for the patient in crm or ewoc depends only on the posterior distribution , i.e. , is a functional of .this functional defines as a markov chain whose states are distributions on the parameter space and whose state transitions are given by the following ._ bayesian updating scheme : _ given current state ( which is a prior distribution of ) , let and generate first from and then .the new state is the posterior distribution of given .the functional for crm is , which minimizes the expected squared error loss ] , where more generally , we can consider other loss functions and define that attains ] to choose the next dose based on the current posterior distribution , by including the current design measure in the loss function ._ example 2 : bayesian - or -optimal designs ._ as described by , optimal design theory is concerned with choosing a design measure to minimize a convex function of the information matrix , where is the fisher information matrix at design point : the convex function is associated with the optimality criterion , e.g. , for -optimality and for -optimality . since is unknown , the frequentist approach uses a sequential design that replaces in by its maximum likelihood estimate at every stage .the bayesian approach puts a prior distribution on and minimizes . noting that this bayesian approach does not accomodate the fact that patients are assigned doses sequentially in phase i trials , ( * ? ? 
?* section 5 ) propose to start the optimal design after an initial sample of patients so that the dose of a patient after this initial sample can be determined by minimizing where is the empirical measure of the initial sample of design points and is the posterior distribution of based on the initial sample .we can easily extend our loss function approach to bayes sequential designs by including as an argument of the loss function in this setting .let be the current posterior distribution of and be the current empirical design measure .define the sequential bayes optimal design chooses the next design level that minimizes .the measure in ( [ eq : ell - opt ] ) represents the new empirical measure obtained by adding to the support of , with .we can also impose a relaxed feasibility constraint in the choice of : as in , where with and is a prescribed positive constant . if , then and the constraint corresponds to requiring the doses to be bayesian feasible ( see the description of ewoc above ) .the preceding section has focused on determining the next dose by minimizing ] for given by or by ( [ eq : ewocloss ] ) , where is the current posterior distribution .this is tantamount to dosing the next patient at the best guess of , where `` best '' means `` closest '' according to some measure of distance from . on the other hand , a bayesian - or -optimal design aims at generating doses that provide most information , as measured by the fisher information matrix of a design measure , for estimating the dose - toxicity curve to benefit future patients . to resolve this dilemma between treatment of patients in the trial and efficient experimental design for post - trial parameter estimation, considered the finite - horizon optimization problem of choosing the dose levels sequentially to minimize the `` global risk '' ,\ ] ] in which denotes the prior distribution of , represents the loss for the patient in the trial , is the terminal estimate of the mtd and represents a terminal loss function .the optimizing doses depend on , where the horizon is the sample size of the trial , and therefore are not of the form considered in section [ sec : framework ] . in terms of `` individual '' and `` collective '' ethics , note that ( [ eq : fin_hor ] ) measures the individual effect of the dose on the patient through , and its collective effect on future patients through . by using a discounted infinite - horizon version of ( [ eq : fin_hor ] ), we can still have solutions of the form for some functional that only depends on .specifically , take a discount factor and replace ( [ eq : fin_hor ] ) by \ ] ] as the definition of global risk .note that this global risk measures the individual effect of the dose on the patient through , and its collective effect on future patients through .this means the myopic dose that minimizes ] . therefore the -posterior density of is ^{y_i } \left[\frac{1}{1+e^{g(x_i,\rho,\eta)}}\right]^{1-y_i } \pi(\rho,\eta),\ ] ] where ^{y_i } \left[\frac{1}{1+e^{g(x_i,\rho,\eta)}}\right]^{1-y_i } \pi(\rho,\eta ) d\rho d\eta.\ ] ] the marginal -posterior distribution of is then , and the crm and ewoc doses based on are the mean and -quantile of this distribution , respectively . the integrals in ( [ eq : post ] )can be evaluated by using a numerical double - integration routine involving gaussian quadrature in .this can be used to evaluate for a posterior distribution .we can find the minimum of over by a grid search in ] , we have for large , where , , are i.i.d . and generated from . 
letting denote the posterior distribution obtained from by including and letting , the nested expectation in ( [ eq:1-stp - ell ] )can be similarly approximated by using =e_{\pi_{+\{x , y\}}}h(\eta',x')\approx b^{-1}\sum_{b=1}^b h(\eta_b',x ' ) \frac{\pi_{+\{x , y\}}(\rho_b',\eta_b')}{\pi_0(\rho_b',\eta_b')},\\ \label{eq : impsamp3 } p_\pi(y=1|x)=\int f_\theta(x)d\pi(\theta)\approx b^{-1}\sum_{b=1}^b f_{\theta_b''}(x ) \frac{\pi(\rho_b'',\eta_b'')}{\pi_0(\rho_b'',\eta_b'')},\end{gathered}\ ] ] where , and .let and denote the right - hand sides of ( [ eq : impsamp2 ] ) and ( [ eq : impsamp3 ] ) , respectively . combining ( [ eq : impsamp1])-([eq : impsamp3 ] ) gives \right\}\frac{\pi(\rho_b,\eta_b)}{\pi_0(\rho_b,\eta_b)}.\ ] ] we can minimize the right - hand side of ( [ eq:1stprsk ] ) over ] .the two - parameter logistic model ( [ eq:2pl ] ) was chosen based on previous experience with the agent , and the uniform distribution over \times [ x_{\min } , x_{\max}] ] ( denoted od ) , and the coherence violation rate ( denoted chv ) in these expressions , and denote the probability and expectation , respectively , with respect to the prior distribution in the bayesian setting , or with respect to the appropriate fixed values of in the frequentist settings , and are computed by monte carlo . in terms of risk and risk , ewoc performs better in the bayesian setting than the myopic designs ewoc , ivoc , and crm , in that order .this occurs in the frequentist settings as well , although the ordering of the myopic designs varies depending on the particular parameter values .even though it myopically minimizes the posterior risk at every stage , ivoc performs poorly in terms of the cumulative risk , risk , in the bayesian setting .a possible explanation is that its loss function ( [ eq : invewoc ] ) is a function of , whose posterior distribution ( induced by the posterior distribution of ) has relatively large variance toward the middle of the interval in which takes values and , in particular , near , resulting in low initial doses observed in the simulations . on the other hand , in the freq setting , where is relatively small and the dose - response curve is relatively flat ( e.g. , large ) , ivoc performs well in terms of the risks . in terms of estimation, ewoc has the smallest rmse , with ewoc and ewoc both comparable in the bayesian setting .moreover , ewoc has uniformly the smallest rmse in the frequentist settings , with ivoc comparable to it in freq and ivoc and ewoc comparable to it in freq .in this paper we present a general formulation of bayesian sequential design of phase i cancer trials .this formulation enables us to prove a general coherence result in theorem [ thm : coh ] applicable to any design that can be defined as the minimizer of the posterior risk when the loss function satisfies some mild conditions .although the theorem is proved for the widely - used logistic regression model ( [ eq:2pl ] ) , the last paragraph of its proof in the appendix shows that it is applicable to any dose - response model that is non - increasing in the mtd , such as the model , which is also popular . 
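the posterior computations used by crm- and ewoc-type rules can be sketched with a simple grid approximation as follows. the (rho, eta) parametrization below, with rho the dlt probability at the minimum dose and eta the mtd, together with the uniform prior on a rectangle, is an assumption chosen to be consistent with the description above; the target rate, feasibility bound, dose grid and toy observations are placeholders, and this is not the authors' implementation.

    # Grid-based sketch of CRM- and EWOC-type dose selection under a
    # two-parameter logistic model.  Parametrization, prior and all numerical
    # values below are illustrative assumptions.
    import numpy as np

    theta, x_min, x_max, alpha = 0.33, 0.0, 1.0, 0.25   # target DLT rate, dose range, feasibility bound
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda z: 1 / (1 + np.exp(-z))

    def p_dlt(x, rho, eta):
        # logistic curve pinned to P(DLT | x_min) = rho and P(DLT | eta) = theta
        slope = (logit(theta) - logit(rho)) / (eta - x_min)
        return expit(logit(rho) + slope * (x - x_min))

    # uniform prior on a grid over (rho, eta)
    rho = np.linspace(0.01, theta, 60)
    eta = np.linspace(x_min + 0.01, x_max, 120)
    R, E = np.meshgrid(rho, eta, indexing="ij")
    post = np.ones_like(R)
    post /= post.sum()

    def update(x, y):
        """Multiply the posterior by the likelihood of observation (x, y)."""
        global post
        p = p_dlt(x, R, E)
        post *= p if y == 1 else (1 - p)
        post /= post.sum()

    def next_doses():
        marg = post.sum(axis=0)                          # marginal posterior of the MTD eta
        crm = float((marg * eta).sum())                  # posterior mean -> CRM-type dose
        cdf = np.cumsum(marg)
        ewoc = float(eta[np.searchsorted(cdf, alpha)])   # alpha-quantile -> EWOC dose
        return crm, ewoc

    for x_obs, y_obs in [(0.1, 0), (0.2, 0), (0.3, 1)]:  # toy dose-response data
        update(x_obs, y_obs)
    print(next_doses())

the same grid can be reused to evaluate other functionals of the posterior, for instance the one-step-lookahead risk discussed above, by adding the corresponding expectation alongside next_doses.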
in section [ sec:1step ]we propose a new design that incorporates both the individual ethics of the current patient begin administered the dose , through a given loss function such as the ewoc loss ( [ eq : ewocloss ] ) , and the collective ethics of all future patients by including an additional term in the overall loss function to represent the dose s information content for determining another dose for the next patient .the simulation study in section [ sec : sim ] shows that this new design is indeed an improvement over myopic designs in terms of global risk minimization , post - trial estimation of the mtd , and dlt and od rates .this design provides a practical alternative to the optimal design associated with the intractable markov decision problem of minimizing ( [ eq : globrisk ] ) , which requires at each stage the daunting consideration of all future posterior distributions and calculating their associated optimal doses . for the finite - horizon problem of minimizing ( [ eq : fin_hor ] ) , have developed an approximate solution which is a time - varying mixture of myopic and -optimal designs .the new design in section [ sec:1step ] , which can be described by a time - invariant functional of the posterior distribution at each stage , is substantially simpler computationally and provides substantial improvement over the myopic designs .we conjecture that with suitably chosen ( depending on ) , its global risk ( [ eq : globrisk ] ) can approximate that of the optimal design minimizing ( [ eq : globrisk ] ) . instead of minimizing ( [ eq : globrisk ] ) directly , it may be possible to obtain a good lower bound for ( [ eq : globrisk ] ) .such a bound , which can provide a benchmark for assessing the proposed design , is a topic for future work .we also consider an `` inverted '' loss function ( [ eq : invewoc ] ) , which measures deviation from the target dlt rate on the probability scale rather than on the dose scale , and the associated myopic design ivoc . even though ivoc minimizes the myopic posterior expected loss ( [ eq : invewoc ] ) at each stage ,its cumulative global loss risk in table [ table : sim ] is far from optimum , exceeding even that of ewoc which uses a completely different loss function , in the bayesian setting . on the other hand ,the design proposed in section [ sec:1step ] can be applied with the ivoc loss function ( [ eq : invewoc ] ) to yield a substantially improved design ivoc ._ proof of theorem [ thm : coh ] . _we prove coherence in de - escalation ; the proof for escalation is similar .let be the boundaries of the dose space , which is assumed to be a finite interval. for fixed , since is a convex function of , its right derivative with respect to is nondecreasing for , and the same is also true for the left derivative for .moreover , the left and right derivatives are equal and continuous except for at most countably many points ; see .let , be the posterior distribution obtained from and the additional dose - response pair , and let . since is convex in for every , so is ; moreover , its right derivative is given by . to show that , we shall assume that because the case is trivial .it suffices to show that because is convex and has minimizer . since and , recalling that , it follows that where \pi(\theta ) d\pi(\theta') ] .hence [f_{\theta}(x_\pi)-f_{\theta'}(x_\pi)]d\pi(\theta ) d\pi(\theta')\ge 0,\ ] ] in which the inequality follows from [f_{\theta}(x_\pi)-f_{\theta'}(x_\pi)]\ge 0\ ] ] for all , and , as will be shown below . 
combining ( [ eq : ldot ] ) and ( [ eq:2a ] ) yields , completing the proof of the theorem . from the assumption that is non - increasing in for any , it follows that is non - increasing in for fixed .it therefore suffices for the proof of ( [ eq : int - pos ] ) to show that is non - increasing in .since , $ ] , which is non - increasing in since .this work was supported in part by national science foundation grants dms-0907241 at university of southern california and dms-0805879 at stanford university .the authors thank the associate editor and two referees for their helpful comments .hardwick , j. and stout , q. f. ( 2001 ) . optimizing a unimodal response function for binary variables . in atkinson ,a. , bogacka , b. , and zhigljavsky , a. , editors , _ optimum design 2000 _ , pages 195210 .kluwer academic publishers , dordrecht .li , z. , durham , s. d. , and flournoy , n. ( 1995 ) .an adaptive design for maximization of a contingent binary response . in flournoy , n. and rosenberger , w. f. , editors , _ adaptive designs _ , pages 179196 .institute of mathematical statistics .
a general framework is proposed for bayesian model-based designs of phase i cancer trials, in which a general criterion for coherence of a design is also developed. this framework can incorporate both `` individual '' and `` collective '' ethics into the design of the trial. we propose a new design which minimizes a risk function composed of two terms, with one representing the individual risk of the current dose and the other representing the collective risk. the performance of this design, which is measured in terms of the accuracy of the estimated target dose at the end of the trial, the toxicity and overdose rates, and certain loss functions reflecting the individual and collective ethics, is studied and compared with that of existing bayesian model-based designs, and the proposed design is shown to perform better. cancer trials; coherence; dose-finding; logistic regression; markov decision problem; phase i.
random multiple access protocols have played a crucial role in the development of both wired and wireless local area networks (lans), and yet the performance of even the simplest of these protocols, such as slotted-aloha, is still not clearly understood. these protocols have generated a lot of research interest in the last thirty years, especially recently in attempts to use multi-hop wireless networks (mesh and ad hoc networks) to provide low-cost high-speed access to the internet. random multiple access protocols allow users to share a resource (e.g. a radio channel in wireless lans) in a distributed manner without exchanging any signaling messages. a crucial question is to determine whether these protocols are efficient and fair, or whether they require significant improvements. in this paper, we consider non-adaptive protocols, where the transmission probability of a given transmitter is basically fixed. more specifically, we analyze the behavior of the slotted-aloha protocol in a buffered system with a fixed number of users receiving packets from independent markovian processes of pre-defined intensities. we aim at characterizing the stability region of the system. this question has been open since the first stability analysis of aloha systems in 1979 by tsybakov and mikhailov, and we will shortly explain why it is so challenging to solve. we propose an approximate stability region and prove that it is exact when the number of users grows large. to accomplish this, we characterize the mean field regime of the system when the number of users is large, explore the stability of this limiting regime, and finally explain how the stability of the mean field regime relates to the ergodicity of systems with a finite number of users. we also show, using both theoretical arguments and numerical results, that our approximation is extremely accurate even for small systems, e.g. with three users (the approximation is actually exact for two users). our approach can be generalized to other types of non-adaptive random multi-access protocols (e.g.
, csma , carrier sense multiple access ) .we present this extension at the end of the paper .consider a communication system where users share a common resource in a distributed manner using the slotted - aloha protocol .specifically , time is slotted , and at the beginning of each slot , should a given user have a packet to transmit , it attempts to use the resource with probability .let represent the vector of fixed transmission probabilities .when two users decide to transmit a packet simultaneously , a collision occurs and the packets of both users have to be retransmitted .each user is equipped with an infinite buffer , where it stores the packets in a fifo manner before there are successfully transmitted .packets arrive into user s buffer according to a stationary ergodic process of intensity .the arrival processes are independent across users , and are markov modulated .more precisely , the packet arrivals for user can be represented by an ergodic markov chain with stationary probability of being in state , and with transition kernel .the markov chains are independent across users and take values in a finite space .if at time slot , a new packet arrives into the buffer of user with probability , where the s are positive real numbers such that .the average arrival rate of packets per slot at user is then .we use these chains to represent various classes of packet inter - arrival times .the simplest example is that of bernoulli arrivals , i.e. , when the inter - arrivals are geometrically distributed with mean : this can be represented by the markov chain with one state .we could also represent inter - arrivals that are sums ( or random weighted sums ) of geometric random variables . in the followingwe denote by the proportion of traffic generated by user .denote by the number of packets in the buffer of user at the beginning of slot .the state of the system is given by at time slot . is a discrete - time markov chain .the stability region is defined as the set of vectors such that the system is stable , i.e. is ergodic , for packet arrival rates .it is important to remark that , a priori , depends on the transmission probabilities , but also on the types of arrival processes defined by the transition kernels and the parameters . but to keep the notation simple , we use to denote the stability region . the problem of characterizing the stability region has received a lot of attention in the literature in the three last decades .first of all note that when the system is _ homogeneous _ in the sense that ] be the set of such that , , and . the approximate stability region is the region lying below one of boundaries defined by : ^n , \forall i,\\ & \lambda_i=\rho_i p_i\prod_{k\neq i}(1-\rho_kp_k)\bigg\}.\end{aligned}\ ] ] more precisely , is the set of positive vectors such that there exist and with for all .note that , so the proposed approximation is exact when .assume that the traffic distribution , is fixed and let us find the maximum total arrival rate such that belongs to the closure of .it can be easily shown that at this maximum , the user such that is . indeed , since for all , , then , and we deduce the maximum arrival rate : our main result states that the actual stability region is very close to the proposed approximation when is large .to formalize this , we introduce a sequence of systems indexed by , i.e. , the arrival rates are , the transmission probabilities are , and the markov chains modulating the arrival processes are . 
in order for a system with users to give reasonable bandwidth to each user the duration of a time slot must be of order seconds , i.e. , we suppose seconds will represent time slots .we assume that users can be categorized among a finite set of classes .further we assume the proportion of users in class tends to when .the class of a user characterizes its transmission probability and the packet arrival process in its buffer .the transmission probability of user of class is .we assume that for all , .this assumption is made so as to keep the approximation expression of the stability region simple . in sectionv , we explain how to extend the assumption , and give ways to remove it .note that , as kleinrock already noticed , the assumption is needed to guarantee a certain efficiency of the system . in order that the system not be overloaded , we assume that the mean packet arrival rate of user of class is .the class of a user also defines the markov chain modulating its arrival process .if user is of class , this markov chain is assumed to be in stationary regime with probability of being in state , in which case the probability that a packet arrives in its buffer is , where are positive real numbers such that .we assume that the markov chains are independent .denote by the transition kernel of .the markov chains may be fast or slow .in particular the arrival rate may change on the order of time slots if packets are generated by a http connection since the transmission rate is changed dynamically by http . on the other hand the state describing a voip connection would evolve on the scale of seconds as the speaker alternates between silent and speech periods . to represent these two scenarios ,we introduce continuous - time markov processes , , taking values in with jump rate kernel ( expressed in transitions per second ) , and we assume that the corresponding markov chains modulating the arrivals of the various users are as follows . in this case , for all , the law of is equal to that of ( recall that seconds roughly represent slots ) . in other words ,the modulating markov chain changes state on the scale of time slots ; i.e. times faster than the speed at which the user evolves ( e.g. the speed at which the user attempts to transmit a packet ) . in this case . in this case , for all , the law of is equal to that of ; i.e. the speed of the modulating markov chains is proportional to the speed of at which the user evolves . in this case we denote by the stability region of the system indexed by .as explained earlier , the stability region depends on the transmission probabilities and on the type of arrival processes .the following theorem compares the stability region with our proposed approximation as gets large .the theorem is valid for both types of sequences of systems , 1 and 2 .since does not depend on the kernels , the theorem indicates that when is large , the stability region depends on the arrival processes through the mean arrival rates only . define .[ th : stab1 ] for all small enough , there exists such that for all : + ( a ) if , then ; + ( b ) if , then .theorem [ th : stab1 ] is proven in sections iv and v. 
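before outlining the proof, the following monte-carlo sketch shows how the approximate boundary introduced above can be evaluated and probed numerically, in the spirit of the simulations reported later. the transmission probabilities, saturation levels and load factors below are illustrative choices, not the paper's examples, and queue growth over a finite horizon is only a heuristic indicator of (in)stability.

    # Monte-Carlo sketch: simulate buffered slotted aloha with bernoulli
    # arrivals and compare observed queue growth with the approximate boundary
    # lambda_i = rho_i p_i prod_{k != i} (1 - rho_k p_k) discussed above.
    # All parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(lams, ps, n_slots=200_000):
        """Return time-averaged queue lengths for given arrival rates and tx probabilities."""
        q = np.zeros(len(lams), dtype=int)
        totals = np.zeros(len(lams))
        for _ in range(n_slots):
            attempts = (q > 0) & (rng.random(len(ps)) < ps)   # backlogged users transmit w.p. p_i
            if attempts.sum() == 1:                           # success only without collision
                q[np.argmax(attempts)] -= 1
            q += rng.random(len(lams)) < lams                 # bernoulli arrivals
            totals += q
        return totals / n_slots

    def approx_boundary(rho, ps):
        """Arrival rates on the proposed boundary for saturation levels rho in [0, 1]^n."""
        rho, ps = np.asarray(rho), np.asarray(ps)
        prod = np.prod(1 - rho * ps)
        return rho * ps * prod / (1 - rho * ps)

    ps = np.array([1 / 3, 1 / 3, 1 / 3])
    lam_star = approx_boundary([1.0, 0.9, 0.8], ps)           # a point on the approximate boundary
    print("boundary point:", lam_star)
    print("mean queues at 90% load:", simulate(0.9 * lam_star, ps))
    print("mean queues at 110% load:", simulate(1.1 * lam_star, ps))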
the main steps of the proof are as follows .+ ( 1 ) the evolution of the system when grows large is characterized : it is shown that with an appropriate scaling in time , the evolution of the distributions of the various queues is the solution of a deterministic dynamical system .this result is obtained using mean field asymptotic techniques , as presented in section iv.a .these techniques typically provide an approximate description of the evolution of the system over finite time horizons . herewe wish to study the ergodicity of slotted - aloha systems , which basically relates to the system dynamics over an infinite horizon of time .hence , classical mean field asymptotic results will be necessary but not sufficient to prove theorem 1 .the main technical contribution of this paper is to explain how mean field asymptotics can be used to infer the ergodicity of the finite systems , and this is what is done in steps ( 2 ) and ( 3 ) . + ( 2 ) we provide sufficient and necessary conditions for the global stability of the dynamical system describing the evolution of the system in the mean field regime .+ ( 3 ) finally , it is shown that the ergodicity of the initial system of queues is equivalent to the stability of the mean field dynamical system when grows large .a consequence of the above result is that when grows large , and whatever the arrival processes considered , the set traffic intensities such that there exist transmission probabilities stabilizing the system is the set with boundary : this result has been conjectured by tsybakov and mikhailov in .it has been proved in , but under the assumption that the arrival processes of the various users are correlated .the authors of have introduced the so - called _ sensitivity monotonicity _ conjecture under which they could also prove the result .theorem 1 says that when the number of users is large enough , the sensitivity monotonicity conjecture is not needed .it is worth noting that coincides with the shannon capacity region of the multi - user collision channel derived in .theorem 1 shows that the capacity region and the shannon capacity region are equivalent when the number of users grows large . in reader will find a more detailed discussion on the comparison of these two regions in communication systems .how far is the approximate region from the actual stability region ?theorem [ th : stab1 ] says that the gap tends to 0 when the number of users grows large .but even for small , is quite an accurate approximation as illustrated in the numerical examples provided later .why is the approximate region so accurate ? this accuracy can be explained by remarking that the boundaries of the regions and coincide in many scenarios .remember that can be interpreted as the stability region one would get if the evolutions of the different buffers were independent . as a consequence, it provides the exact stability condition for scenarios where , in the stability limit , the buffers become independent . a direction ( a vector with unit -norm ) is -homogeneous for the system considered if there exists a permutation of such that , for all , does not depend on . in the following , without loss of generality , when a direction is -homogeneous , the corresponding permutation is given by for all . the following proposition , proved in appendix ,formalizes the fact that the boundaries of and coincide on a set of curves corresponding to particular directions , -homogeneous directions . 
in case ,figure [ fig : ex3 ] gives a schematic illustration of these curves .assume that , where is a + -homogeneous direction for the system considered .define and similarly .then if and for , then : we now illustrate the accuracy of using numerical experiments .+ first , we consider the case of sources , each transmitting with probability 1/3 .we vary the relative values of the arrival rates at the various queues : , and .we vary from 1 to 50 .it can be shown that the approximate stability condition is in figure [ fig : comp ] ( left ) , we compare this limit to the actual stability limit found by simulation with bernoulli arrivals ( simulation 1 ) and hyper - geometric arrivals ( simulation 2 ) . in the latter case , the inter - arrivals for each user are i.i.d . , and an inter - arrival is a geometric random variable with parameter with probability 1/2 , and with probability 1/2 .this increases the variance of inter - arrivals ( when is small the variance scales as ) . in the numerical experiment, we chose .remark that the stability region is roughly insensitive to the distribution of inter - arrivals .this insensitivity has been also observed in the other examples presented in this section .the simulation results have been obtained running the system for about packet arrivals .note finally that the arrival rates are chosen so that the system is not -homogeneous .we make a similar numerical experiment when are equal to 0.6 , 0.3 , 0.1 respectively .the arrival rates at the three queues are as in example 1 .we vary from 0.1 and 10 . for , at the boundary of , queue 3 is saturated ( ) ; whereas for , queue 2 is saturated ( ) .the approximate stability condition is : figure [ fig : comp ] ( center ) illustrates the accuracy of .finally , we illustrate the accuracy of when the number of users grows . each user is assumed to transmit with probability andthe traffic distribution is such that for all .hence , again the system is not -homogeneous .one can easily show that in this direction , the approximate stability condition is : in figure [ fig : comp ] ( right ) , we compare the boundary of with that of when the distribution is linearly decreasing with .again as expected , provides an excellent approximation of the saturation level in the actual system .the two next sections are devoted to the proof of theorem 1 . in section iv , we provide a classical mean field analysis of the system , and in section v , we show how the stability in the limiting mean field regime translates into the ergodicity of the initial finite systems .in this section , we first present a generic mean field analysis of a system of interacting particles and then apply the results obtained to slotted - aloha systems .let us first give some notations ._ notations ._ let be a complete separable metric space , denotes the space of probability measures on . denotes the distribution of the -valued random variable .let be the set of right - continuous functions with left - handed limits , endowed with the skorohod topology associated with the metric , see p 168 . with this metric , is complete and separable . for two probability measures , we denote by their distance in total variation . consider a system of particles evolving in a state space at discrete time slots . is a finite set , and is at most countable . at time , the state of particle is .the first component of is fixed , and is used to represent the _ class _ of a particle as explained below . 
represents the state of particle at time .the state of the system at time can be described by the empirical measure .each particle is attached to an _ individual environment _ whose state at time slot belongs to a finite space . is a markov chain with kernel independent of .particles of the same class share the same kernel : implies .the markov chains are independent across particles , and are assumed to be in stationary regime at time 0 .let be the stationary distribution of the individual environment of a class- particle ._ evolution of the particles ._ we represent the possible transitions for a particle by a finite set of mappings from to .a -transition for a particle in state leads this particle to the state . in each time slotthe state of a particle has a transition with probability independently of everything else .if a transition occurs for a particle whose individual environment is in state , this transition is a -transition with probability , where , , and denote the state of the particle , the empirical measure before the transition and the state of its individual environment respectively .hence , in this state , a -transition occurs with probability : with for all .note that we do not completely specify the transition kernel of the markov chain .all what we require is that each particle has a transition with probability independently of the other particles .however given that transitions occur for two ( or more ) particles , these transitions can be arbitrarily correlated ( but with marginals given by ( [ jumps ] ) ) .note also that the chains evolve quickly compared to .we make the following assumptions on the transition probabilities convergence of to : + a2 .the functions are uniformly lipschitz : for all , in what follows , we characterize the evolution of the system when the number of particles grows . according to ( [ jumps ] ) , as , the evolution of slows down ( where is measured in slots ) .hence to derive a limiting behavior we define : where is measured in seconds . when , the environment processes evolve rapidly , and the particles see an average of the environments .we define the average transition rates for a particle in state by [ th : mainapp ] suppose that the initial values , , are i.i.d . and such that their empirical measure converges in distribution to a deterministic limit . then under assumptions a1 and a2 , there exists a probability measure on such that for all finite set of particles : in the above theorem denotes the trajectory of particle , which is a random variable taking values in .the result is then stronger than having the weak convergence of the distribution of in for any .for instance , it allows us to get information about the time spent by a particle in a given state during time interval ] .[ theoex ] for all time , for all , the differential equations ( [ eqdiff1 ] ) have a natural simple interpretation : is the total mean incoming flow of particles to state , whereas is the mean outgoing flow from .theorems 2 and 3 characterize the limiting system evolution on all compacts in time .hence , they do not say anything about the long - term behavior of the system . herewe will assume that the finite particle systems are ergodic and describe the mean field regime of the systems in equilibrium .to do so , we need two additional assumptions : + a3 . 
for all large enough ,the markov chain is positive recurrent .the set of the stationary distributions of the systems with particles is tight .the dynamical system ( [ eqdiff1 ] ) is globally stable : there exists a measure satisfying for all : and such that for all satisfying ( [ eqdiff1 ] ) , for all , .then the asymptotic independence of the particles also holds in the stationary regime , and is the limiting distribution of a particle : [ theost ] for all finite subsets , theorems 2 , 3 and 4 can be obtained applying classical mean field proof techniques , such as those used in .the interested reader is referred to for complete proofs of similar results in the case of a much more general system models .consider a type-1 sequence of slotted - aloha systems as described in the paragraph preceding theorem 1 . in the system with users ,consider a class- user .when it has a packet in its buffer , it transmits with probability .the packet arrivals in its buffer are driven by a -valued markov chain in stationary regime with distribution and whose transition kernel depends on only .when , a new packet arrives in its buffer with probability ( refer to section [ sec1 ] for the notation ) . the system can be represented as a system of interacting particles as described in section iv.a .each user corresponds to a particle whose state at time slot represents its class , and the length of its buffer : ; i.e. .the individual environment of particle at time slot is .denote by the empirical measure of the system at time slot : .assume that at time slot , the empirical measure is .the possible transitions for a user / particle are a packet arrival in the buffer ( we index this kind of transition by ) , and a packet departure ( indexed by ) . then . if user ( /particle ) is in state , and if its individual environment is , the probabilities of transition for the next slot are given as follows .the state becomes with probability : and with probability : where is the proportion of users of class and is the proportion of users of class with non - empty buffers .denote by the proportion of class- users at the limit when grows large .when , the functions , converge to , where : one can easily check that assumptions a1 and a2 are satisfied .moreover , the limiting averaged transition rates are : at time 0 , we apply a random and uniformly distributed permutation to the users so that their initial states become i.i.d .. this operation does not change the stability of the system . finally we scale time and consider .we can apply theorems 2 and 3 , and conclude that when grows large , the evolutions of the users become independent .furthermore , at time , if denotes the limiting probability that a user of class has packets in its buffer ( /\beta_v ] , then : where is given by ( [ eq : dyn2 ] ) with . for a given ,equations ( [ eq : dyn1bis ] ) are the kolmogorov equations corresponding to the evolution of the number of clients in a queue with poisson - modulated arrivals , exponential service requirements , and time - varying capacity equal to at time . 
in the following ,we denote by such a queue : the superscript represents the kernel of the process modulating the arrival rates , the subscript means that the capacity is time - varying .now for a given class , multiplying ( [ eq : dyn1bis ] ) by , and then summing over and , one gets ( [ mean ] ) where ; this is due to the fact that we assumed that the markov chains modulating the arrivals are initially in their stationary regimes , which implies that for any , .note finally that ( [ summean ] ) is also valid ( as a direct consequence of ( [ mean ] ) ) .we now investigate the stability of the dynamical system ( [ eq : dyn1bis])-([eq : dyn2 ] ) corresponding to a sequence of slotted - aloha of type 2 .actually , analyzing the stability of ( [ eq : dyn1bis])-([eq : dyn2 ] ) is more difficult than analyzing that of the dynamical system ( [ eq : dyn1])-([eq : dyn2 ] ) corresponding to a sequence of slotted - aloha of type 1 .as it turns out they have exactly the same stability condition .we let the reader adapt the following analysis to the case of the system ( [ eq : dyn1])-([eq : dyn2 ] ) .assume that .in the following we denote by and the unique solutions in and in , respectively , of : define the function from to ] , is the following subset of : ^v : \forall v , \lambda_v = p_v\rho_vbe^{-\sum_u\beta_u\rho_up_u}\}.\ ] ] actually , one can easily prove that is the stability region of a generalized system obtained from ( [ eq : dyn1bis])-([eq : dyn2 ] ) by adding a slot availability probability to the service rate of class- users ; i.e. this service rate becomes .we now provide an alternative representation of .define as the subset of whose upper pareto - boundary is the union of the following surfaces : ^v : \forall u,\\ & \lambda_{u}=\rho_{u}p_{u}be^{-\sum_w \beta_w\rho_w p_w}\}.\end{aligned}\ ] ] in the following , we use the following notation : for all , . we prove that when , then let component of the function be , and let \times\ldots\times [ 0,p_v] ] , is the subset of whose pareto - boundary is the union ( over ) of the following surfaces : ^v : \forall v',\\ { } & & \lambda_{v'}={bp_{v'}\rho_{v'}\over 1-\rho_{v'}{p_{v'}\over n } } \prod_u(1-\rho_u{p_u\over n})^{\beta_u^nn}\}.\end{aligned}\ ] ] one can easily see that for large enough , is very close to ( their hausdorff distance is of order ) . from thiswe deduce that there exists , such that for all , .define as : ^v : \forall v,\\ { } & & \lambda_v={bp_v\rho_v\over 1-\rho_v{p_v\over n}}\prod_u(1-\rho_u{p_u\over n})^{\beta_u^nn}\}.\end{aligned}\ ] ] we can prove ( as done after theorem 5 ) that when , .we now consider systems built from our original systems but such that each slot is available for transmission with probability , i.i.d . over slots .we show the following result by induction on , and deduce theorem 1 ( a ) applying it for .`` if there exists an small enough , such that for sufficiently large , , then the system with queues is stable .furthermore in such a case , the stationary distributions of such systems constitute a tight family of probability measures . ''let us first prove the result when . in such case ,all the queues are similar , and the system is then homogenous .we have and the system is stable iff by .now assume that .consider a particular queue : at any time , its distribution is stochastically bounded by the distribution we would obtain assuming that all the other queues are saturated . 
in the latter system ,the stationary distribution of the queue considered is that of a markovian queue of load for some . tightness follows .+ now let us assume that the result is true when , and let us prove it when .assume that for large enough , .denote by the -dimensional vector built from where the -th component has been removed .since , there exists such that : for large enough , consider the stochastically dominant system where all queues of class different than see saturated queues of class . for the latter sub - system , in view of ( [ eq21 ] ) , we can apply the induction result .we conclude that for large enough , the dominant system without queues of class is stable , and that the family of the corresponding stationary distributions is tight . + from theorem 5 applied to the dominant systems without queues of class , we know that the corresponding limiting system is globally stable. we can then apply theorem 4 to these systems to characterize the average proportion of slots left idle by the queues of class different than : when , this proportion tends to where is the lower solution of .now consider a queue of class in the dominant system .denote by its service process .we can make this process stationary ergodic just assuming that initially the system without queues of class is in stationary regime .the service rate of a queue of class converges to when .hence when is large enough , we have : \ge bp_v\exp(-\sigma)\exp(-\beta_vp_v)-\epsilon/4.\ ] ] now since , we have for large enough : -\epsilon/4.\ ] ] we deduce that in the dominant system , the queue of class are stable for large enough , and that their stationary distributions are tight .we conclude the proof noting that the original systems are stochastically dominated by systems that are stable for large enough , and such that their stationary distributions are tight .we now prove that there exists an integer such that for all , the system is unstable if or , equivalently , if where is defined in the previous paragraph .the system is monotone with respect to the arrival process : if we remove some incoming packets , the buffers can not increase .hence it is sufficient to prove that a modified system obtained from an independent thinning of the arrival process of all users is unstable .we first perform this suitable thinning . by assumption, there exists such that .then , there exists a class such that . up to extracting a subsequence, we may assume that this class does not depend on , for large enough .note also that the convergence of to implies that converges to some such that .we assume for simplicity that there exists a unique class like that ( the proof generalizes easily by performing a non - homogeneous thinning ) .then , from assumption ( [ eq : hypo1 ] ) , we can choose small enough such that satisfies for some and all large enough , and where is the smallest solution of .we now perform the thinning of the arrival processes : up to replacing by , we can assume directly that for all large enough , and .now , we define the stopping time if we prove that for an arbitrary initial condition , then the system is transient . as in the previous paragraph , we consider the dominant system where all users of class different than see saturated class- users .recall that , by construction , in this dominant system , the buffers of class- users are independent of the buffers of users of class different than . 
note also that on the event all class- users are saturated on ] , and denotes the stationary probability that the buffers from set are not empty in a system where user has been removed and has been replaced by .note that ( [ eq : szp1 ] ) ensures that these probabilities exist .remark that when removing user , the remaining system is -homogeneous . by induction , if , we deduce that for any , condition ( [ eq : szp1 ] ) is equivalent to : when the latter condition is satisfied , it is easy to show that buffer in the original system is stable , i.e. , that ( [ eq : szp2 ] ) holds. indeed consider the stochastically dominant system where users always transmit with probability .then the stability condition of buffers and is that of a two - buffer system with slot - availability probability equal to . the latter system is stable if and only if : which is equivalent to : one can verify that . to prove ( [ eq : erg ] ) , we compare the system with another system that starts empty too , and whose evolution is characterized by ( [ eq : dyn1bis ] ) where is replaced by .we denote by the solution of this new system , and define .we also denote by its total workload at time .the new system is equivalent to a system of independent queues with poisson modulated arrivals and constant capacities ( equal to for type- queue ) .note that the service rates of the queues in the new system are smaller than those in the original system .we deduce : for all , also remark that the original and the new systems have the same stationary behavior , which implies : . then : .\end{aligned}\ ] ] hence we have : to prove ( [ eq : erg ] ) , it suffices to show that for all , converges exponentially fast to when . as mentioned previously, represents the probability that a queue , initially empty , with poisson modulated arrivals , exponential service requirements and constant capacity .thus to prove ( [ eq : erg ] ) , we just need to show that a queue with poisson modulated arrivals is exponentially ergodic under the usual stability condition .this is what we prove next .the proof of the exponential ergodicity of queues with poisson modulated arrivals can be done classically , showing that the spectral gap of the corresponding markov process is positive .we give the proof for completeness .consider a queue of capacity .clients arrive according to a poisson modulated process .the service requirements are i.i.d .exponentially distributed with mean 1 .the arrivals are modulated by a -valued markov process whose transition kernel is , and stationary distribution . is a finite set .when , clients arrive at rate .assume that the queue is ergodic , i.e. , .denote by the number of clients in the queue at time .let denote the kernel of the markov process .we have : for all , , let denote the stationary probability to be in state .following , to prove exponential ergodicity , we just need to show that the spectral gap of is strictly positive .the gap is defined by : ,\ ] ] where is the set of measurable functions such that , and .the following lemma states the exponential ergodicity of the queue .we can first show that there exist two strictly positive constants such that : for all , where for some .actually , this result can be obtained using one of the classical methods to derive the tail of the stationary distribution of a queue with modulated arrivals , for example refer to theorem 2.4 in .note that is the steady state distribution of a markov process with two independent components : . 
is the markov process representing the number of clients in an queue with load , and and are independent . denote by the transition kernel of . from ( [ eq : chee ] ) , one can easily deduce that there exists a constant such that , where ( resp . ) is the cheeger constant of ( resp . ) , see . for example , is defined by : the cheeger constant and the spectral gap are related . actually , thanks to theorems 2.1 and 2.3 in , there exists a constant such that : the same inequalities ( with a different constant ) hold for . now observe that . this is due to the fact that , by theorem 2.6 in , is the minimum of the spectral gap of and that of . both are strictly positive ( is a birth - death process , see corollary 3.8 in ; can take a finite number of values ) . we conclude : hence .
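For reference, a standard discrete-time form of the spectral-gap definition and of the Cheeger inequalities invoked in the two preceding paragraphs is the following; the source works in continuous time and with its own constants, so notation and prefactors may differ.

% standard definitions; notation and constants may differ from the source
\[
  \mathrm{gap}(P) \;=\; \inf\Big\{ \frac{\mathcal{E}(f,f)}{\operatorname{Var}_\pi(f)} \;:\; f \in L^2(\pi),\ \operatorname{Var}_\pi(f) \neq 0 \Big\},
  \qquad
  \mathcal{E}(f,f) \;=\; \tfrac12 \sum_{x,y} \pi(x)\,P(x,y)\,\big(f(x)-f(y)\big)^2 ,
\]
\[
  k \;=\; \inf_{0 < \pi(A) \le 1/2} \frac{\sum_{x \in A,\, y \notin A} \pi(x)\,P(x,y)}{\pi(A)} ,
  \qquad
  \frac{k^2}{2} \;\le\; \mathrm{gap}(P) \;\le\; 2k ,
\]
for a reversible, ergodic kernel \(P\) with stationary distribution \(\pi\). The two standard facts used above, that a strictly positive Cheeger constant forces a strictly positive gap and that the gap of a process with independent components is the minimum of the gaps of the components, follow from these definitions.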
we analyze the stability of standard , buffered , slotted - aloha systems . specifically , we consider a set of users , each equipped with an infinite buffer . packets arrive into user s buffer according to some stationary ergodic markovian process of intensity . at the beginning of each slot , if user has packets in its buffer , it attempts to transmit a packet with fixed probability over a shared resource / channel . the transmission is successful only when no other user attempts to use the channel . the stability of such systems has been open since their very first analysis in 1979 by tsybakov and mikhailov . in this paper , we propose an approximate stability condition , that is provably exact when the number of users grows large . we provide theoretical evidence and numerical experiments to explain why the proposed approximate stability condition is extremely accurate even for systems with a restricted number of users ( even two or three ) . we finally extend the results to the case of more efficient csma systems . random multiple access , aloha , stability , mean field asymptotics .
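As an illustration of the system just described, the following minimal discrete-time simulation can be used to probe stability empirically. It is a sketch under simplifying assumptions: Bernoulli arrivals stand in for the stationary ergodic Markovian arrival processes of the paper, and every name and parameter value is illustrative rather than taken from the source.

....
import random

def simulate_slotted_aloha(arrival_probs, tx_probs, n_slots=200_000, seed=0):
    """Buffered slotted ALOHA with Bernoulli arrivals (illustrative sketch only).

    arrival_probs[k] : per-slot packet arrival probability at user k
    tx_probs[k]      : probability that a backlogged user k transmits in a slot
    Returns the time-averaged queue length of each user; bounded averages are a
    rough empirical indication of stability.
    """
    rng = random.Random(seed)
    n_users = len(arrival_probs)
    queues = [0] * n_users
    cumulative = [0] * n_users
    for _ in range(n_slots):
        for k in range(n_users):                        # packet arrivals
            if rng.random() < arrival_probs[k]:
                queues[k] += 1
        attempts = [k for k in range(n_users)           # transmission attempts
                    if queues[k] > 0 and rng.random() < tx_probs[k]]
        if len(attempts) == 1:                          # success iff exactly one attempt
            queues[attempts[0]] -= 1
        for k in range(n_users):
            cumulative[k] += queues[k]
    return [c / n_slots for c in cumulative]

if __name__ == "__main__":
    print(simulate_slotted_aloha([0.10, 0.10], [0.3, 0.3]))
    print(simulate_slotted_aloha([0.25, 0.25], [0.3, 0.3]))
....

With p = 0.3, a backlogged user succeeds with probability p(1 - p) = 0.21 when the other user is also backlogged, so per-slot arrival probabilities of 0.10 sit comfortably below that load while 0.25 lies above it; the time-averaged queue lengths returned by the two calls make the difference visible.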
quantum mechanics , as conventionally formulated , has two types of time evolution . an isolated system evolves according to the schrodinger equation where the time evolution operator is unitary : . however , when a measurement is made the state undergoes non - unitary von neumann projection ( i.e. , collapses ) to an eigenstate corresponding to the eigenvalue observed by the measuring device .because the two types of time evolution are so radically different , students typically demand ( and indeed are entitled to demand ) a rigorous definition of exactly when each of them apply .unfortunately , as is widely acknowledged , the conventional interpretation does not supply an entirely satisfactory definition ( see , e.g. , _ against measurement _ by j.s .bell , and the discussion of decoherence below ) .many students over the years ( including the author , when he was a student ) have wondered whether the measurement process can be described within a larger isolated system , containing both the original system and the measuring apparatus , whose dynamics is still unitary .however , measurement collapse and unitary evolution of are incompatible , because the projection operation is not invertible .collapse is fundamentally irreversible , whereas unitary evolution of the system and measuring device together is reversible. nevertheless , as we shall discuss below , unitary evolution of the larger system is compatible with the _ appearance _ of collapse to observers within the system . to make this discussion more explicit ,let be a single qubit and a device ( e.g. , stern - gerlach device ) which measures the spin of the qubit along a particular axis .the eigenstates of spin along this axis are denoted .we define the operation of as follows where denotes a state of the apparatus which has recorded a outcome , and similarly with .we can then ask what happens to a superposition state which enters the device . in the conventional formulation , with measurement collapse , _ one _ of the two final states _ or _ results , with probabilities and respectively .this probability rule is called the born rule . in the conventional formulationthe notion of probability enters precisely when one introduces wave function collapse . without collapsethe status of probability is much more subtle , as we will see .uses a device to measure a spin state . before the measurement there is a single observer state , but afterwards there are two observer states .,width=604 ] however , if the combined system evolves unitarily ( in particular , _ linearly _ ) as in ( [ unitary ] ) , we obtain a superposition of measurement device states ( see figure 1 ) : at first , this seems counter to actual experience : properly executed measurements produce a single outcome , not a superposition state .but how do we know for sure ?in fact , an observer described by the state might be unaware of the second branch of the wave function in state for dynamical reasons related to a phenomenon called _ decoherence _ .first , any object sufficiently complex to be considered either a measuring device or observer ( i.e. , which , in the conventional formulation can be regarded as semi - classical ) will have many degrees of freedom .second , a measurement can only be said to have occurred if the states and differ radically : the outcome of the measurement must be stored in a redundant and macroscopically accessible way in the device ( or , equivalently , in the local environment ) . 
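A small numerical aside may help make the premeasurement map, and the Born rule it is contrasted with, concrete. The sketch below is not taken from the source: the amplitudes are arbitrary, the device is modelled as a single auxiliary two-level system, and the premeasurement unitary is taken to be a CNOT-type interaction.

....
import numpy as np

# Qubit state a|+> + b|-> in the measured basis; the amplitudes are arbitrary choices.
a, b = 0.6, 0.8j                                  # |a|^2 + |b|^2 = 1
psi = np.array([a, b])

# Conventional (collapse) description: sample outcomes with Born probabilities.
rng = np.random.default_rng(1)
born = np.abs(psi) ** 2
samples = rng.choice([0, 1], size=100_000, p=born)
print("empirical '+' frequency:", np.mean(samples == 0), "  |a|^2 =", born[0])

# Unitary premeasurement: |+>|ready> -> |+>|D+>,  |->|ready> -> |->|D->.
# With a two-level device this map is a CNOT, so it is genuinely unitary, and the
# record states |D+> = (1, 0) and |D-> = (0, 1) are orthogonal.
device_ready = np.array([1.0, 0.0])
initial = np.kron(psi, device_ready)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
final = cnot @ initial                            # = a|+>|D+> + b|->|D->
print("norm preserved:", np.isclose(np.linalg.norm(final), 1.0))
print("branch weights:", np.abs(final[0]) ** 2, np.abs(final[3]) ** 2)
....

In this toy model the two record states are exactly orthogonal; in a realistic apparatus with many degrees of freedom they are only approximately so, which is where the argument of the next paragraph picks up.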
therefore , the ability of the second branch to affect the first branch is exponentially small : the overlap of with is typically of order , where is a macroscopic number of degrees of freedom , and the future dynamical evolution of each branch is unlikely to alter this situation .for all practical purposes ( fapp , as j.s .bell put it ) , an observer on one branch can ignore the existence of the other : they are said to have decohered .each of the two observers will perceive a collapse to have occurred , although the evolution of the overall system has continued to obey the schrodinger equation .it is sometimes said that decoherence solves the measurement problem in quantum mechanics .more accurately , decoherence illuminates the process of measurement .but it does not answer the question of whether or not wave functions actually collapse _it merely makes it clear that they need not ! _ while decoherence is used in the most practical settings , such as the analysis of atomic experiments involving entanglement , somehow the analysts often fail to notice that their calculations work perfectly well in the absence of collapse ; the measurement can proceed in a perfectly continuous manner , with the different branches rapidly losing contact with each other .the result is the appearance of a single outcome without invoking collapse .the idea that wave function collapse is unnecessary , that the whole universe could be thought of as a closed system obeying the schrodinger equation and described by a _universal wave function _ , is due to everett .his conception is a _universal _ quantum mechanics : one in which observers and measuring devices are treated on the same footing as all other quantum degrees of freedom there are no special objects which collapse wave functions .it is sometimes claimed that the many worlds interpretation is experimentally indistinguishable from the conventional interpretation .this is true for all practical purposes ( fapp ) , but see deutsch for an experiment that is sensitive to whether or not wave functions undergo non - unitary projection .note also that an implicit assumption of conventional quantum mechanics is that observers ( i.e. , objects or measuring procedures that collapse wave functions ) can not themselves been placed in superposition states .however , rapid progress is being made toward the creation of macroscopic ( schrondinger s cat ) superposition states , including , possibly , superpositions of viruses and bugs .if a bug can be placed in a superposition state , why ca nt you ? in fact the foundations of quantum mechanics are not entirely disconnected from practical issues in cosmology .the cosmic microwave background data favors inflationary cosmology , according to which density perturbations in the early universe originate in the quantum fluctuations of the inflaton field itself .it is very hard to fit this into the conventional view what collapses the wavefunction of the inflaton ?there are no observers in the early universe the very existence and locations of observers ( such as humans ) are determined by the density perturbations themselves ! 
galaxies ,stars and planets are found in the overdense regions , but quantum mechanics itself decides which regions are overdense ; there is no classical system outside or observing the universe .it seems much more natural to note that differential scattering of gravitons due to more or less energy density in a particular region separates the inflaton wavefunction into decoherent branches .the gravitons decohere the inflaton state vector through interactions . butthis is accomplished through unitary evolution and does not require von neumann projection or collapse .other observers , living on other branches of the wavefunction , see a different cmb pattern on the sky .the schrodinger dynamics governing unitary evolution of the wave function is entirely deterministic . in the absence of collapse , this is the only kind of time evolution in quantum mechanics .if the initial state of is known at some initial time , it can be predicted with perfect accuracy at any subsequent time .how , in such a deterministic system , can the notion of probability arise ? in the conventional interpretation , with only a single realized outcome of an experiment , one simply _imposes _ the born probability rule together with measurement collapse : the probability of the plus outcome is , or more generally the likelihood associated with a particular component of the wave function is given by its magnitude squared under the hilbert measure .( philosophically , this imposition of objective randomness is a violent departure from the notion that all things should have causes ; see below . ) in the absence of collapse there is no logical point at which we can introduce a probability rule it must emerge by itself , and it must explain why an experimenter , who decoheres into different versions of himself on different branches , does so with probabilities determined by . when all outcomes are realized , what is meant by the _ probability _ of a particular one ?the problem is actually worse than this . for decoherence to work in the first place, the overlap or hilbert space inner product between two vectors must have a probabilistic interpretation : if the inner product is small ( decoherence has occurred ) the two branches are assumed not to influence each other ; an observer on one branch may be unaware of the other .let us neglect this further , more subtle , complication and simply assume that decoherence and the measuring apparatus work as desired . even in this reduced context , there is still a problem , as we now discuss .the many worlds interpretation must provide a probability measure over all decoherent outcomes , each of which is realized , i.e. , from the perspective of at least one observer or recording device .the difficulty becomes clear if we consider spins : , with each spin prepared in the identical state .( see figure 2 . ) again , all possibilities are realized , including the outcome where , e.g. , all spins are measured in the state : .if is small , then this outcome is very unlikely according to the usual probability rules of quantum mechanics .however , independent of the value of , it comprises one of the distinct possible outcomes .each of these outcomes implies the existence of an observer with distinct memory records . for sufficiently large and ,it can be shown that the vast majority of the realized observers ( i.e. 
, weighting each distinct observer equally ) see an outcome which is highly unlikely according to the usual probability rules .note that counting of possible outcomes depends only on combinatorics and is independent of . as , for all values of ( excluding exactly zero ) , almost all of the realized observers find nearly equal number of and spins : there are many more outcomes of the form , e.g. , with roughly equal number of s and s than with many more of one than the other .this had to be the case , because counting of outcomes is independent of the values of , leading to a symmetry between and outcomes in the combinatorics .in contrast , the born rule predicts that the relative number of and outcomes depends on . in the large limitalmost all ( distinct ) observers observers is weighted equally . ]experience outcomes that strongly disfavor the born probability rule : _ almost all of the physicists in the multiverse see experimental violation of the born rule_. or : _ almost none of the physicists in the multiverse see outcomes consistent with the born rule ._ everett referred to the branches on which results deviate strongly from born rule predictions ( i.e. , exhibit highly improbable results according to the usual probability rule ) as _ maverick _ branches . by definition ,the magnitude of these components under the hilbert measure vanishes as becomes large .but there is no a priori sense in which the hilbert measure is privileged in many worlds . nor is there even a logical place to introduce it it must emerge in some way by itself .everett claimed to derive quantum mechanical probability by taking to infinity and discarding all zero norm states in this limit , thereby eliminating all maverick outcomes .most advocates of many worlds regard this reasoning as circular and look elsewhere for a justification of the born rule . instead, most attempts to justify the born rule have relied on _ subjective _ probability arguments .( dynamical mechanisms for removing maverick branches have also been considered . )while objective probabilities can be defined through frequencies of outcomes of a truly random process , subjective probabilities deal with degrees of belief .the conventional quantum interpretation , with von neumann projection , assumes true randomness and objective probabilities : repeated measurements produce a truly random sequence of outcomes , with frequencies given by the born rule .outcomes are unknowable , _ even in principle _ , even with perfect knowledge of the state being measured .( einstein objected to the introduction of true randomness , because it implies outcomes without causes . ) in the absence of true randomness , probabilities are subjective and represent degrees of belief .we emphasize again that universal schrodinger evolution only admits subjective , but not objective , randomness and probability . just before the measurement depicted in figure 2 , all of the observers are in identical states . 
using the basis , where are individual spin eigenstates and runs from to , we can write where denotes the state describing the observer who recorded outcome .the first line has been written to emphasize that in the basis it appears as if there are identical observers , each of whom is _ destined _ to evolve into a particular one of the .of course , the observer does not know which of the they will evolve into , because they do not ( yet ) know the spin state on their branch .but this is a _ subjective _ uncertainty because , indeed , the outcome is already pre - determined .this perspective may appear more natural if one considers the time reversal of the final state in ( [ copies ] ) .each observer evolves backward in time to one of the identical s .if the ultimate decoherent basis of the universe were known a priori , the evolution of each branch would appear entirely deterministic observed outcome is that , before the measurement , he was paired with the spin state : .this initial condition _ caused _ the measurement outcome . compare to the conventional formulation , in which , e.g. , the observer measuring spin state finds the outcome and the wave function collapses to only , with no branch . clearly there is no _ cause _ for the outcome ( as opposed to ) : the result is truly , objectively random , something that einstein objected violently to . ]of course , this still neglects the question of why _ my consciousness _ in particular has been assigned to a specific decoherent branch of the universal wave function . butthis question , which singles out a particular consciousness out of the many that are presumed to be equally real , is in a sense beyond the many worlds framework .subjective probability arguments originate in certain postulates governing the way in which we _ reason _ about probabilities . for example , following laplace , we might require that two components of the wave function related by symmetry ( i.e. , with equal coefficients and ) must have equal probabilities as outcomes . by further analysis , we might conclude that probability must be proportional to hilbert space norm squared .indeed , gleason s theorem suggests that any reasonable calculus of quantum probabilities must result in the familiar born rule .the reader can judge for himself whether these arguments are convincing .it is important to emphasize that , even if successful , such arguments only address whether a particular observer should be _ surprised _ to see the born rule at work on his branch of the wave function .they do not alter the fact that the many worlds wave function realizes almost exclusively observers who see gigantic violations of the born rule ( even , gross departures from decoherence ) .this vast majority of observers seem unlikely to believe in quantum mechanics as the correct theory of nature , let alone to conform to the reasoning described above .the subjective arguments reassure us that we are `` unlikely '' to be one of these observers . butare we reassured ?spins . before the measurement there is a single observer state , but afterwards there are observer states.,width=604 ]decoherence does not resolve the collapse question , contrary to what many physicists think .rather , it illuminates the process of measurement and reveals that pure schrodinger evolution ( without collapse ) can produce the quantum phenomena we observe .this of course raises the question : do we need collapse ? 
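The counting argument above is easy to check numerically. The sketch below compares, for N identically prepared spins with Born probability p for the '+' outcome, the fraction of the 2^N branches (every distinct observer counted equally) whose recorded '+' frequency lies within a small window around p against the Born weight carried by those same branches; the values of p, N and the window are illustrative only.

....
from math import comb

def branch_statistics(n_spins, p, eps=0.05):
    """Equal-weight branch counting versus Born weighting for n_spins
    independent, identically prepared spins (illustrative parameters).

    Returns (fraction of branches, Born-rule weight) of the event that the
    observed '+' frequency lies within eps of the Born probability p.
    """
    count_ok = 0          # number of such branches, each counted once
    born_ok = 0.0         # Born-rule weight of the same branches
    for k in range(n_spins + 1):
        if abs(k / n_spins - p) <= eps:
            n_branches = comb(n_spins, k)
            count_ok += n_branches
            born_ok += n_branches * p**k * (1 - p)**(n_spins - k)
    return count_ok / 2**n_spins, born_ok

for n in (20, 100, 400):
    frac, born = branch_statistics(n, p=0.9)
    print(n, f"equal-count fraction: {frac:.3e}", f"Born weight: {born:.3f}")
....

As N grows, the Born weight of the branches with frequency close to p tends to one, while their share of the equally counted branches collapses toward zero (the equally counted branches pile up near frequency 1/2 instead), which is precisely the tension behind the maverick-branch discussion.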
if the conventional interpretation was always ill - defined ( again , see bell for an honest appraisal ; everett referred to it as a `` philosophical monstrosity '' ) , why not remove the collapse or von neumann projection postulates entirely from quantum mechanics ? the origin of probability is the real difficulty within many worlds interpretations .the problem is subtle and experts are divided as to whether it has been resolved satisfactorily . because the wave function evolves entirely deterministically in many worlds , all probabilities are necessarily subjective and the interpretation does not require true randomness , thereby preserving einstein s requirement that outcomes have causes . _acknowledgements _ the author thanks s. lloyd , l. maccone , d. politzer , j. preskill , d. reeb , a. zee , d. zeh , and especially w. zurek for useful discussions .the author is supported by the department of energy under de - fg02 - 96er40969 .this essay is based on several talks given over the years , including at the benasque center for science ( workshop on quantum coherence and decoherence ) , caltech institute for quantum information , national taiwan university and academia sinica , taiwan .h. everett , iii , `` the theory of the universal wavefunction '' ( phd thesis ) , reprinted in b. s. dewitt and r. n. graham ( eds . ) , the many - worlds interpretation of quantum mechanics , _princeton series in physics , princeton university press , princeton , usa ( 1973)_. for a thorough overview of this approach , including novel ideas and references to earlier work , see w. zurek , phys .a * 71 * , 052105 ( 2005 ) . for a recent calculation ,see phys .lett . * 106 * , 250402 ( 2011 ) .[ arxiv:1105.4810 [ quant - ph ] ] .
i give a brief introduction to many worlds or `` no wavefunction collapse '' quantum mechanics , suitable for non - specialists . i then discuss the origin of probability in such formulations , distinguishing between objective and subjective notions of probability .
extremely large telescopes of the next century will have diameters of up to 100 m. compared to current state of the art 10 m - class telescopes , the biggest of those telescopes provide a collecting power roughly 100 times as big , and an angular resolution 10 times as good .the science - drivers for such telescopes are threefold : first , and most straight forward , we will be able to carry out spectroscopy of objects that we already know about from deep imaging , but which are too faint for spectroscopy with present day telescopes .the most prominent and cited target of such observations is the hubble deep field .second , we will image objects that we have never seen before , because they are too faint or too distant .these two science drivers are the straight forward extension of what astronomers have done during the last century , and at first sight seem related only to the collecting area of the telescopes .but since such observations will be background limited , we will only gain a factor of 10 2.5 magnitudes compared to the existing 10 m telescopes for seeing limited observations of point sources .only adaptive optics assisted observations at the diffraction limit of the telescopes will boost the limiting magnitude by a factor of 100 5 magnitudes when enlarging the mirror - size from 10 m to 100 m. high angular resolution capability therefore will be mandatory . and third , most exciting for us , is the prospect of exploring the universe at angular scales of a few milliarcseconds , the diffraction limit of such an extremely large telescope . like the hubble deep field for the faint object science, the direct imaging and spectroscopy of planets can serve as the final goal for high angular resolution astronomy .while imaging at this angular resolution will also be possible with interferometric arrays like the vlt , only the collecting area of several 1000 m will provide enough photons for spectroscopy . since an extremely large telescope will cost the order of 1 billion us$ , throughput of the instruments has highest priority .throughput in this context does not only mean imperfect transmission and detection of the light , but specifically multiplex - gain .for the faint - object - science , best throughput implies simultaneous spectroscopy of as many objects as possible .this is the standard domain of multi - object - spectroscopy . on the other hand ,if objects are to be resolved , and if we are interested in their complex structure , integral - field - spectroscopy is the the first choice . this technology is definitely required when observing with adaptive optics at the diffraction limit of a telescope , both to avoid imperfect slit - positioning on the object , and for post - observational correction of the imperfect point - spread - function by means of deconvolution .there are several reasons , both object - inherent and technical , to carry out a significant fraction of the observations with such a telescope at near - infrared wavelengths : first , many of the faint objects we are looking for like in the hubble deep field are at high redshift .therefore a lot of the well established `` optical '' spectral diagnostics are shifted beyond 1 micron .second , many of the interesting objects in the universe like nuclei of galaxies , star- and planet - forming regions are hidden behind dust .for example our galactic center is dimmed in the visible by about 30 magnitudes , while we suffer from only 3 magnitudes of extinction in k - band ( 2.2 m ) . 
and third, high angular resolution through the earth s atmosphere is much easier achieved at longer wavelengths .even though there is no principle limit to achieve the diffraction limit in the visible , the high complexity of an adaptive optics system for an extremely large telescopes with roughly actuators may suggest to start with the easier task of correcting in the near - infrared .in this section we will present current developments and concepts for integral - field - spectroscopy and multi - object - spectroscopy ( with emphasis on the technology developed at the max - planck - institut fr extraterrestrische physik ) , and outline the technology - challenge for their operation at cryogenic temperatures .a number of instruments have been built or are going to be built for integral - field - spectroscopy and multi - object - spectroscopy . even though most of them are designed for operation at visible wavelengths ,their concepts are applicable for the near - infrared as well . in this sectionwe will describe the basic idea behind the different approaches and compare their specific properties and feasibility at cryogenic temperatures .an integral - field - spectrograph obtains simultaneously the spectra for a two - dimensional field with a single exposure .it therefore distinguishes itself from several other ways of measuring spectra for a two - dimensional field , which all need multiple integrations .well known and applied in astronomy for several decades are ( 1 ) fabry - perot - imaging - spectroscopy , ( 2 ) fouriertransform - spectroscopy , ( 3 ) slit - scanning - spectroscopy .why is integral - field - spectroscopy most appropriate to ground - based astronomy , especially at the highest angular resolution ?fabry - perot - imaging - spectroscopy and fouriertransform - spectroscopy require repetitive integrations to obtain full spectra .therefore ground - based observations suffer a lot from varying atmospheric conditions .both atmospheric absorption and emission must be measured in between two adjacent wavelength - settings , and since the atmospheric properties vary on a time - scale of minutes at near - infrared - wavelengths , long single exposures , and therefore high quality spectra for faint objects are almost impossible to record with wavelength scanning techniques .the difference between integral - field - spectroscopy and slit - scanning is in principle rather small , since both instruments provide roughly the same number of image points and the same spectral sampling .but because most astronomical targets are far from being slit - like , almost all observations can gain a lot from a square field of view .three basic techniques are used in today s instruments for integral - field spectroscopy : the mirror - slicer , the fiber - slicer and the micro - pupil - array .the basic idea of image - slicing with mirrors is rather simple : a stack of tilted plane mirrors is placed in the focal plane , and each mirror reflects the light from the image in a different direction . 
at a distance at which the rays from the different mirrors are clearly separated , a second set of mirrors realigns the rays to form the long - slit ( figure [ mirror ] ) of a long - slit - spectrograph , which disperses the light along the rows of the detector .this concept was successfully applied in the 3d - instrument , a near - infrared integral - field - spectrometer developed and operated by mpe , and will be used for spiffi ii , the adaptive - optics - assisted field - spectrometer for the vlt - instrument sinfoni .the disadvantage of this concept is that shadowing at the steps of the first stack of mirrors leads to unavoidable light losses .this shadowing effect increases with smaller mirrors and a larger field of view . in order to have little light losses one would like to have large mirrors in the first stack .because this increases the total slit - length , and therefore makes the collimator of the spectrograph - optics uncomfortably big , a compromise has to be found. for spiffi ii with its approximately 1000 spatial pixels arranged in 32 slitlets , we chose the width of each mirror of the first stack to be 300 m , which leads to a total slit - length of about 300 mm .for these parameters the shadowing - effect cuts out about 11% of the total light .the whole slicer for spiffi ii will be fabricated from zerodur using classical polishing techniques .optical contacting of the individual mirrors will provide a monolithic structure that is insensitive to changes in temperature .with 3d we proved that the concept is feasible , and our recent results from cool - downs of an engineering slicer to the temperature of liquid nitrogen ensures operation in cryogenic instruments .however , the concept will find its limitation for much larger fields due to increased shadowing .other recent developments of mirror - slicers derive from the basic design with plane mirrors , and take advantage of curved mirrors .such a concept avoids part of the shadowing - effects and provides a smaller `` slit '' , thereby simplifying the design of the spectrograph - optics .a completely different approach for integral - field - spectroscopy is based on optical fibers . in the image plane the two dimensional field is sampled by a bundle of optical fibers , which are then rearranged to a `` long - slit '' . as in the mirror - slicer - concept , a normal long - slit spectrograph can be used to disperse the light . as simple and expandable as this concept seems ,many little problems are inherent to such devices . to achieve a high coupling - efficiency , an array of square or hexagonal lenslets with a filling factor of close to 100 % is used to couple the light into the fibers .however , for a high coupling efficiency the fibers have to be accurately positioned behind each lenslet and the image quality of the lenslets has to be very good .one way to loosen the constraints on the positioning accuracy and the optical quality of the lenslets is to use a larger for the fiber , which in turn increases the f - number of the spectrograph - camera and finally limits the pixel - size , especially at extremely large telescopes . 
for a cryogenic instrumentthe positioning of the fibers behind the lenslets is another critical point , since differential thermal contraction both complicates gluing of the lenslets and fibers , and due to small displacements degrades the coupling efficiency .for spiffi we therefore started the development of monolithic lens - fiber - units ( figure [ fiber ] ) , each consisting of a silica - fiber , that has been flared and a spherical lens polished onto it . even though individual fiber - lens - units could be produced with an overall transmission of more than 70 % including coupling efficiency , reflection losses and intrinsic absorption , the technology is not yet optimized for producing several 1000 fibers at reasonable cost .despite all the technical problems with the image - slicer based on flared fibers , there are three main advantages of this concept over its competitors : first , its possible extension to any number of fibers .second , this concept provides full flexibility in the arrangement of the fibers ( see section on multiple - field - spectroscopy ) .third , the flared fiber is insensitive to a change in temperature and can be used at cryogenic temperatures .the flared - fiber technology will be implemented in lucifer , the general - purpose near - infrared instrument for the lbt . like the fiber - solution, the third concept for integral - field - spectroscopy is based on a micro - lens - array in the image plane .but instead of reimaging the pupil of the individual lenses onto different fibers , the whole set of micro - pupils is now fed into a spectrograph . with the micro - pupils filling only a small fraction of the total field , with a slight tilt in the dispersion directionthe spectra on the detector fill the unused detector area between the micro - pupils without overlapping of the individual spectra .compared to the fiber - based concept , there is no additional loss of light due to the coupling of light into the fiber .also the technology of producing micro - lens - arrays is now well established , and its application in cryogenic instruments seems straight forward .but while both mirror- and fiber - slicers can disperse the light all across the detector , the spectra of the micro - pupils need to be truncated before they overlap with the spectra of another micro - pupil .therefore such instruments can provide high spectral resolution only for a very limited wavelength range .the need for multi - object - spectroscopy is obvious for faint object astronomy .whenever good statistics is crucial for the scientific interpretation , we need to have information on as many objects as possible . andsince many programs require integration times of several hours per object , simultaneous observations of a large number of objects are the only possibility to carry out the observations within a reasonable time .there are two basic concepts to carry out simultaneous spectroscopy of multiple objects in a field : using multiple slits and coupling the light into fibers . in the multi - slit approacha mask with slits located at the object positions is placed in the image - plane. 
this slit - plate is normally fabricated `` off - line '' prior to the observation .special care must be taken to avoid overlap of the spectra from different slits .therefore usually several masks and observations are required for a complete set of spectra of the objects within a given field .the big advantage of such slit - mask - spectrographs is their high optical throughput , since no additional optical element is introduced .examples for such instruments are the cfht mos and the two virmos instruments for the vlt .one way to overcome the `` off - line - production '' of the slit - masks might be with micro - mirror arrays which would allow electronically controlled object selection .the second concept of multi - object - spectroscopy is based on fibers . while in previous instruments fibers had to be placed by hand ,nowadays robots arrange the fibers , like in the aat 2df .depending on the image - scale and the f - number , the light is either coupled directly into the fiber , or a lenslet is used to reimage the telescope - pupil onto the fiber core . as the fiber - based integral - field - unit, such multi - object - spectrographs can be expanded to almost any number of objects . for the time being , no cryogenic multi - object - spectrograph forthe infrared wavelength range has been set into operation .for the lucifer instrument for the lbt , however , possible realization of the two concepts multi - slit and fiber - based in a cryogenic instrument are under study : in a multi - slit - spectrograph , the technical key - problem is that the slit - masks have to be produced `` off - line '' , and need to be inserted into the cryogenic system .one possibility would be an air - lock through which a set of slit - masks are fed into a juke - box and cooled down to the temperature of liquid nitrogen , before they are actually moved into the field . a fiber - based system , however , will require a fully cryogenic robot to position the fibers .but unlike their optical counterparts , present - day fibers for the infrared are either rather fragile ( zirconiumflouride ) , or show significant extinction towards longer wavelengths ( waterfree silica ). therefore long fibers and big movements should be avoided , and a `` spaltspinne''-like mount of the fibers with a long - slit - spectrograph located directly behind seems most promising . while the fiber - technology e.g. based on the monolithic concept described for the integral - field - unit is almost established , a reliable cryogenic robot is not yet in operation .a common problem to both multi - object - concepts described above is the need to have precise target positions .in addition no ( fiber - concept ) or very limited ( multi - slit , since all slits are parallel ) spatial information can be obtained . both problems of multi - object - spectrographs the need for precise target - positions , and the lack of spatial information will be overcome by multi - field - spectroscopy : like in multi - object - spectrographs , multiple objects are observed simultaneously , but now each object is spatially sampled with an integral - field - unit . in principleeach of the three basic concepts for integral - field - units mirror - slicer , fiber - slicer , or micro - pupil - array could be combined with the multi - object concept : little mirror - slicers or micro - pupil - arrays are the natural extension of the slit - mask ( figure [ multifield ] ) . 
assuming the same pixel - scale , the same size of the individual fields and the same size of the detector , the source density decides which slicer - concept matches best the science - program . for micro - pupil - arrays, all the objects should be within a field with a linear dimension equal to the number of fields times the size of each single field . for mirror - slicers ,the objects should be arranged more loosely , the objects at least separated by the square of the linear dimension of each subfield . most promising , however , is the combination of the fiber - based multi - object- and integral - field - concepts .the single fibers of the multi - object - spectrograph need only to be replaced by small fiber - slicers built from several lenslet - fiber - units ( figure [ multifiber ] ) . depending on the science - program , either small individual fields with about seven pixels each , or larger fields with about 100 spatial elements would be selected .the monolithic concept developed for the fiber - slicer as described above would fulfill all requirements for this kind of multi - field - spectroscopy . however , like multi - field - spectrographs based on fibers , all multi - field - solutions for the infrared wavelength - range require reliable cryogenic robots .what is the maximum pixel scale ? in order to get a rough estimate , we will assume a telescope with a diameter of 100 m. the physical size of a pixel of a present day near - infrared - detector is about 20 m .we further know from present - day near - infrared - instruments like spiffi that the f - number of any camera - optic needs to be greater than or equal to roughly 1 to achieve acceptable image quality . from this limit , and the fact that is preserved in imaging optics , one can derive a maximum pixel size of 60 mas .so whenever larger image elements are required , their flux must be spread over several pixels .the smallest pixel scale is determined by the diffraction limit of the telescope . for the h - band ( 1.65 m )the appropriate pixel scale to nyquist - sample the image is 3 mas . what is the noise regime we have to work with ? let us assume h - band observations again .most of the sky - background in this wavelength range arises from about 70 oh lines , which sum up to a total surface brightness of about 14 mag / arcsec .the flux between the oh - lines is roughly 18 mag / arcsec .the first lesson we learn from these numbers is that even for present day technology oh - suppression is crucial for deep observations . in order to loose only 1/10 of the h - band - spectrum to oh - contaminated pixels, roughly 1400 pixels are required for nyquist - sampling , corresponding to a spectral resolution of approximately 3000 in h - band .but even at this spectral resolution and with adaptive - optics - pixel - scales , observations will be background - limited assuming future detectors with a read - noise close to 1 electron and negligible dark - current , and integration - times of the order of 1 hour . for usit is obvious that spectroscopy at the diffraction limit of an adaptive - optics equipped telescope and with pixel scales of the order of milliarcseconds requires integral - field - units .of the three concepts described above mirror - slicer , fiber - slicer , micro - pupil - array , the mirror - slicer provides the most efficient use of detector elements , because it is the only technology that actually uses almost all pixels . 
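The sensitivity and sampling figures quoted in this section can be reproduced with a few lines of arithmetic. The sketch below is a back-of-the-envelope check only: the H-band limits (about 1.45-1.85 micron), the bookkeeping of roughly 70 OH lines each contaminating about 2 Nyquist-sampled pixels, and the simple signal-to-noise scalings are stand-in assumptions, not the detailed numbers behind the text.

....
import math

# Background-limited point-source gain of a 100 m over a 10 m telescope.
# Seeing limited: sky noise in a fixed aperture grows as D, signal as D^2, so SNR ~ D.
# Diffraction limited: the PSF area shrinks as (lambda/D)^2, so the sky per source
# stays constant and SNR ~ D^2.
d_ratio = 100.0 / 10.0
print("seeing-limited gain     :", 2.5 * math.log10(d_ratio), "mag")       # 2.5 mag
print("diffraction-limited gain:", 2.5 * math.log10(d_ratio ** 2), "mag")  # 5.0 mag

# OH suppression: spectral pixels and resolving power needed to lose <= 1/10 of H band.
n_oh_lines = 70                          # assumed number of strong OH lines in H band
pixels_per_line = 2                      # Nyquist-sampled line core
allowed_loss = 0.10
n_pixels = n_oh_lines * pixels_per_line / allowed_loss          # ~1400 spectral pixels
band_lo_um, band_hi_um, band_centre_um = 1.45, 1.85, 1.65       # assumed band limits
delta_lambda = (band_hi_um - band_lo_um) / (n_pixels / 2)       # 2 pixels per element
print("spectral pixels:", int(n_pixels),
      " resolving power:", round(band_centre_um / delta_lambda))   # ~1400, ~2900
....

With samplings this fine and spectra this long, detector pixels are the scarce resource, which is exactly why the near-complete use of the detector area by the mirror-slicer noted above matters.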
being not yet at its limitation in field - coverage, this concept may be the choice for the next generation of instruments . in the more distant future ,when detector - size and -availability will not limit our instrumentation - plans any more , the micro - pupil - concept , and finally the most expandable fiber - concept are most appropriate .since we will be sky - limited at near - infrared - wavelengths at any pixel - scale , the biggest gain in sensitivity ( 5 magnitudes ) over 10m - class telescopes will be achieved by adaptive - optics - assisted observations of point - like sources . in order to make most efficient use of the telescope time, multi - object spectroscopy will be one of the most important operation modes for extremely large telescopes .but with the pixel - size of a few milliarcseconds , the problem of accurate slit - positioning will require the extension of the multi - object - technique towards the multi - field - approach .for this application the fiber - based - concept combined with a cryogenic robot seems most promising . for spectroscopic surveys the object - density will finally determine the most appropriate instrumentation .but one should keep in mind that in contrast to smaller but equally sensitive future space - telescopes the maximum pixel - size for a 100 m telescope will be limited to about 50 mas . assuming oh - suppressed observations in the h - band with roughly 1400 spectral pixels for each image point , even 16 detectors with 4k 4k pixels could only cover a field 20 arcsec on a side . in order to make efficient use of an integral - field - unit for this application ,the source density should be of the order objects per arcminute , like extragalactic star - forming regions .for most applications , however , the source density will be much smaller , and the combination of deep imaging and multi - field - spectroscopy will match best .bacon r. et al . 1995 , a&as , 113 , 347b bash f. n. et al .1997 , spie , 2871 , 576 content r. et al .1997 , spie , 2871 , 1295 gilmozzi r. et al .1998 , spie , 3352 , 778 krabbe a. et al .1997 , spie , 2871 , 1179 le fevre o. et al . 1994 , a&a , 282 , 325l le fevre o. et al .1998 , spie , 3355 , 8 maihara t. et al .1993 , pasp , 105 , 940 mandel h. et al .1999 , in preparation mountain m. 1997 , spie , 2871 , 597 oliva e. & origlia l. 1992 , a&a , 254 , 466 pitz e. 1993 , a.s.p . conf ., 37 , 20 sebring t. a. et al .1998 , spie , 3352 , 792 taylor k. et al .1997 , spie , 2871 , 145 tecza m. & thatte n. 1998 , a.s.p .ser . , 152 , 271 tecza m. & thatte n. 1998 , spie , 3354 , 394 tecza m. 1999 , ludwig - maximilians - universitaet muenchen , thesis , submitted thatte n. et al .1998 , spie , 3353 , 704
integral - field - spectroscopy and multi - object - spectroscopy provide the high multiplex gain required for efficient use of the upcoming generation of extremely large telescopes . we present instrument developments and designs for both concepts , and show how these designs can be applied to cryogenic near - infrared instrumentation . specifically , the fiber - based concept stands out due to the possibility to expand it to any number of image points , and its modularity predestines it to become the new concept for multi - field - spectroscopy . which of the three concepts integral - field- , multi - object- , or multi - field - spectroscopy is best suited for the largest telescopes is discussed considering the size of the objects and their density on the sky .
as is well - known , following the pioneer work on providing a -approximate solution for max - cut problem , the semidefinite programming ( sdp ) relaxation technique has been playing a great role in approximately solving combinatorial optimization problems and nonconvex quadratic programs ; see for example , .this paper is to present a surprise case where the sdp relaxation misleads the approximation in both theory and computation .consider the -ball ( ) constrained weighted maximin dispersion problem : where are given points , for , and is the -norm of .applications of ( p ) can be found in facility location , spatial management , and pattern recognition ; see and references therein .based on the sdp relaxation technique , haines et al . proposed the first approximation algorithm for solving ( p ) . however , their approximation bound is not so clean that it depends on the optimal solution of the sdp relaxation .fortunately , when , the approximation bound reduces to very recently , the above approximation bound ( [ bd ] ) is established for the special case based on a different algorithm . butfurther extension to ( p ) with remains open . in this paper, we show that , by removing the sdp relaxation from haines et al.s approximation algorithm and simply replacing the optimal solution of the sdp relaxation with a scalar matrix , the approximation bound ( [ bd ] ) becomes to be satisfied for ( p ) . it is the sdp relaxation that makes the whole approximation algorithm not only loses the theoretical bound ( [ bd ] ) but also performs poorly in practice .the remainder of this paper is organized as follows . in section 2, we present the existing approximation algorithm based on sdp relaxation . in section 3, we propose a new simple approximation algorithm without any convex relaxation and establish the approximation bound .numerical comparison is reported in section 4 .we make conclusions in section 5 . throughout this paper , we denote by and the -dimensional real vector space and the space of real symmetric matrices , respectively .let be the identity matrix of order . denotes that is positive ( semi)definite . the inner product of two matrices and is denoted by . stands for the probability .in this section , we present haines et al.s randomized approximation algorithm based on sdp relaxation .it is not difficult to verify that lifting to yields the sdp relaxation for ( p ) : since it is assumed that , the above ( sdp ) is a convx program and hence can be solved efficiently .now , we present haines et al.s sdp - based approximation algorithm for solving ( p ) .the original version of the above algorithm did not consider the possible case for some .it has been fixed in .the existence of in step 3 of algorithm is guaranteed by the inequality : which is a trivial corollary of the following well - known result .* lemma a.3)[thm2 ] let be a random vector , componentwise independent , with let and .then , for any , for the approximation bound of algorithm 1 , the following main result holds .* theorem 3)[thm0 ] for the solution returned by the algorithm 1 , we have where . moreover , when , .finally , we remark that the equality is no longer true when .the following counterexample is taking from .* example 4.1 ) let , , , , , and . solving ( sdp ) , we obtain moreover , the above value of is unique . 
for the detail of the verification, we refer the reader to .in this section , we propose a simple randomized approximation algorithm for solving ( p ) .we remove the sdp relaxation from algorithm 1 and then replace the optimal solution with the scalar matrix .it turns out that our new algorithm just uniformly and randomly pick a solution from , which is a set of points on the surface of the unit -ball .the detailed algorithm is as follows .surprisingly , for any , we can show that our new algorithm 2 always provides the approximation bound ( [ bd ] ) for ( p ) .[ thm1 ] let .for the solution returned by algorithm 2 , we have according to the settings in algorithm 2 , for such that , we have where the inequality ( [ num:1 ] ) holds since according to the cauchy - schwarz inequality , it holds that if there is an index such that , we have next , let be an optimal solution of ( p ) .then , for and , it holds that where the inequality ( [ num:11 ] ) holds since it follows from the hlder inequality that the inequality ( [ num:4 ] ) holds due to the cauchy - schwarz inequality and the inequality ( [ num:5 ] ) follows from ( [ num:0 ] ) . for the casethat there is an index such that , we also have thus , it follows from ( [ num:1 ] ) , ( [ num:00 ] ) , ( [ num:5 ] ) and ( [ num:6 ] ) that substituting in ( [ final ] ) completes the proof .theorem [ thm1 ] implies that algorithm 2 provides a asymptotic approximation bound for ( ) as increases to infinity .in this section , we numerically compare our new simple approximation algorithm ( algorithm 2 ) with the sdp - based algorithm proposed in ( i.e. , algorithm 1 in this paper ) for solving ( p ) . in algorithm 1 , we use sdpt3 within cvx to solve .all the numerical tests are constructed in matlab r2013b and carried out on a laptop computer with ghz processor and gb ram .first , we fix , , and .then , we randomly generate the test instances where varies in . since all of the input points with form an orderly matrix .we generate this random matrix using the following matlab scripts : .... rand('state',0 ) ; x = 2*rand(n,220)-1 ; .... for each instance , we independently run each of the two algorithms times with the same setting and then plot the objective values of the returned approximation solutions in figure 1 .second , we fix , , and let .the total input points are generated using the following matlab scripts : .... rand('state',0 ) ; x = 2*rand(n,30)-1 ; .... both algorithm 1 and algorithm 2 are then independently implemented times for each with the same setting .we plot the objective values of the returned approximation solutions in figure 2 . according to figures 1 and 2 ,the qualities of the approximation solutions returned by algorithm 2 are in general much higher than those generated by algorithm 1 .the practical performance demonstrates that at least for finding approximation solutions of ( p ) , the sdp relaxation is misleading . 
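The randomized step can be written down compactly. The sketch below records one plausible reading of Algorithm 2 and of the objective it is evaluated on: the candidate point is a uniform random sign vector scaled by n^(-1/p), hence a point on the unit l_p sphere, and the objective is the weighted maximin squared Euclidean distance. The function names, the multi-trial option and the data-generation step are assumptions for illustration rather than the paper's exact specification.

....
import numpy as np

def maximin_objective(x, points, weights):
    """f(x) = min_i w_i * ||x - x_i||^2 for anchor points of shape (m, n)."""
    diffs = points - x
    return float(np.min(weights * np.sum(diffs ** 2, axis=1)))

def algorithm2_sketch(points, weights, p=3.0, n_trials=1, seed=0):
    """A plausible reading of Algorithm 2: draw uniform random sign vectors
    scaled by n**(-1/p), so that each candidate lies on the unit l_p sphere,
    and keep the best objective value over n_trials independent draws."""
    rng = np.random.default_rng(seed)
    n = points.shape[1]
    best_x, best_val = None, -np.inf
    for _ in range(n_trials):
        x = rng.choice([-1.0, 1.0], size=n) * n ** (-1.0 / p)
        val = maximin_objective(x, points, weights)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = 2.0 * rng.random((220, 10)) - 1.0     # mimics the paper's points in [-1, 1]^n
    w = np.ones(len(pts))
    print(algorithm2_sketch(pts, w, p=3.0, n_trials=20)[1])
....

A single draw (n_trials = 1) corresponds to the algorithm as analysed in the theorem above, while larger values mirror the repeated independent runs used in the numerical comparison.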
in this paper , we propose a new simple approximation algorithm for the -ball constrained weighted maximin dispersion problem ( p ) . it is inherited from the existing sdp - based algorithm by removing the sdp relaxation and then trivially replacing the optimal solution of the sdp relaxation with a particular scalar matrix . surprisingly , the simplified algorithm provides the first unified approximation bound of for any , which remained open until now except for the special cases and . numerical results also imply that the sdp relaxation technique is misleading in approximately solving ( p ) . finally , we raise the question whether the unified approximation bound can be extended to ( p ) with . schaback , r. : multivariate interpolation and approximation by translates of a basis function , in : approximation theory viii , c. k. chui and l. l. schumaker ( eds . ) , world scientific , singapore , pp . 491 - 514 ( 1995 )
consider the problem of finding a point in a unit -dimensional -ball ( ) such that the minimum of the weighted euclidean distance from given points is maximized . we show in this paper that the recent sdp - relaxation - based approximation algorithm [ siam j. optim . 23(4 ) , 2264 - 2294 , 2013 ] will not only provide the first theoretical approximation bound of , but also perform much better in practice , if the sdp relaxation is removed and the optimal solution of the sdp relaxation is replaced by a simple scalar matrix .
the estimation of the uncertainty of a theoretical model is of great importance to evaluate the predicted ability of the model .the standard statistical methods , such as the least square and fitting , are widely used in the parametrization of various models .specially in nuclear physics , the methods are used to control the validity of the data fitting procedure in the liquid drop model ( ld ) , finite - range droplet model , lublin - strasbourg drop model , woods - saxon model , and skyrme ( like ) force .various of observed data can be considered in the fitting procedure , such as the nuclear mass , radius , single particle energies , deformations , and so on .taking nuclear mass models for example , the total uncertainties are obtained to be around mev from the fitting procedure in the finite - range droplet model , the lublin - strasbourg drop model , the hartree - fock - bogoliubov model , and the weizscker - skyrme mass model .the recent version of weizscker - skyrme mass model considers the effect of the surface diffuseness , which is important for nuclei with extreme isospin .such effect is especially evident for the extremely neutron - rich nuclei , because the valence neutron may extend very far due to the lack of the coulomb barrier .recent investigation on the heaviest known neutron - halo nuclei , , show that the upper limit of the radius is a key characteristic of the two - neutron halo .it is of great interesting to investigate the details of the total uncertainty obtained from fitting procedures .the total uncertainty normally comes from three parts , the model , the experiment , and the numerical method .the uncertainty from the model consists two parts , the statistical uncertainty from the not exactly determined parameters and the systematic uncertainty from the deficiency of the model .the systematic uncertainty is hard to be estimated because its origin is the deficiency of the theoretical model . in ref . , the systematic uncertainty are obtained by comparing a variety of models .two illustrative examples are given to estimate the systematic uncertainty by analysing the residues .here we suggest a practical method to decompose the statistical and systematic uncertainties from the total uncertainty and its distribution in one model in the case of large sample . in the case of large sample, both distributions of statistical and systematic uncertainties are considered as normal distributions .the moments of the residues are used to constrain the normal function . in the parametrization of the model , the uncertainty of each model parameters is obtained through the standard fitting procedure .the standard deviation of the statistical uncertainty is estimated through the randomly generated parameters following the normal distribution defined by their uncertainties . to decompose the total uncertainty , as an example, a possible choice is applying to the ld because of its simplicity .the uncertainty of the model parameters can be easily obtained through the linear fitting procedure .one of the well known deficiencies of the ld is the lack of the shell effect .it is helpful for further discussion when the present decomposition method applying to the ld with and without the shell effect .an observed data is described by a model with a few parameters . 
after fitting procedure ,the values and uncertainties of all parameters are obtained , and for the parameters .its total uncertainty , defined by the residue , includes three parts , the uncertainties from the model , the experiment , and the numerical method . in the present study ,only uncertainties from the model are considered for simplicity , including statistical and systematic uncertainties .it is reasonable for the ld because the experimental uncertainty of the binding energy is generally very small and the numerical uncertainty of a linear and analytical model is negligible .the distribution of the total uncertainty is the sum of the distributions of the statistical and systematic uncertainties . in the case of a large sample ,it is reasonable to suppose that the statistical and systematic uncertainties follow the normal distribution , although not exactly .a normal distribution is labeled as , with the mean value and the standard deviation .the distribution of total uncertainty is : where is the normalized factor .the mean values and are generally separated , which is rarely discussed in the previous works .the moments are important quantities in the description of a distribution , such as the mean value ( first moment ) , variance ( second moment ) , skewness ( third moment ) , kurtosis ( fourth moment ) . applying the calculations of moments to eq .( [ residue ] ) : the moments in the left hand side are calculated through the distribution of . the right hand side is obtained through the properties of the normal distribution . in principal the mean values and variances of the statistical and systematic uncertainties can be obtained through eq .( [ residue3 ] ) . however it sometimes has no physical solution because the normal distribution assumption is not exactly. the variance of the statistical uncertainty can be simulated through and : ^{2}}{m}.\end{aligned}\ ] ] is randomly generated through a normal distribution .the statistical uncertainty comes from the uncertainty of the parameters of the model . for the term, we randomly select one and one parameter from all possible candidates , while other parameters the same as the best fitted values . the number is chosen to be sufficient large comparing with the number of the observed data .such procedure simulates the deviation comes from the uncertainty of the parameters of the model .it is the estimation of the variance of the statistical uncertainty . together with the first two equations in eq .( [ residue3 ] ) , one can express and by , , , and .only one unknown remains in the latter two equations in eq .( [ residue3 ] ) .one can calculate the and as the function of and minimize , to estimate the value of , which is the criteria for the present study .the method discussed is labeled as the uncertainty decomposition method ( udm ) . in the present work ,the udm is applied to the ld .the ld is an empirical model describing the binding energies and other bulk properties of nuclei .microscopic approaches describe the binding energies of most nuclei with good accuracy , such as the energy density functional theory and the hartree - fock - bogoliubov method . 
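Returning for a moment to the decomposition procedure itself, it can be summarised in a few lines of code. The sketch below reduces the moment matching to the first two moments (the paper also uses the third and fourth moments together with a minimisation criterion), assumes independent, normally distributed parameter uncertainties, and uses illustrative names and toy numbers throughout.

....
import numpy as np

def statistical_sigma(model, theta_best, theta_var, inputs, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the statistical uncertainty: redraw the fitted
    parameters within their (assumed independent, normal) uncertainties and
    measure the spread of the predictions around the best-fit values."""
    rng = np.random.default_rng(seed)
    best = np.array([model(z, theta_best) for z in inputs])
    mean_sq = 0.0
    for _ in range(n_draws):
        theta = rng.normal(theta_best, np.sqrt(theta_var))
        pred = np.array([model(z, theta) for z in inputs])
        mean_sq += np.mean((pred - best) ** 2)
    return np.sqrt(mean_sq / n_draws)

def decompose(residuals, sigma_stat):
    """Two-moment decomposition: with the statistical part centred on zero, the
    systematic component carries the residual mean and whatever variance the
    statistical part cannot account for."""
    mu_sys = float(np.mean(residuals))
    var_sys = max(float(np.var(residuals)) - sigma_stat ** 2, 0.0)
    return mu_sys, np.sqrt(var_sys)

if __name__ == "__main__":
    # toy example: a straight line fitted to data with a small quadratic term
    # deliberately left out, so a systematic component must appear
    model = lambda z, th: th[0] + th[1] * z
    rng = np.random.default_rng(1)
    z = np.linspace(0.0, 1.0, 200)
    data = 1.0 + 2.0 * z + 0.3 * z ** 2 + rng.normal(0.0, 0.05, z.size)
    theta_best = np.array([0.95, 2.30])            # least-squares line through the toy data
    theta_var = np.array([0.02 ** 2, 0.03 ** 2])   # assumed parameter variances
    residuals = data - model(z, theta_best)
    s_stat = statistical_sigma(model, theta_best, theta_var, z)
    print("sigma_stat:", s_stat, " (mu_sys, sigma_sys):", decompose(residuals, s_stat))
....

The remaining ingredient is the mass model that supplies the residues, which the survey of models continued below builds toward.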
some other microscopic approaches , such as the nuclear shell model ,concentrate on the light and medium mass nuclei .our previous works show that the shell model can give a precise description on the light nuclei from stability line to both the neutron and proton drip line .the ld can give well description on the binding energies of nuclei with the standard shell correction procedure .the original ld mass formula includes the volume energy , the surface energy , the coulomb energy , the volume term of proton - neutron asymmetry energy , and the paring energy .many additional terms are introduced to include more physical effect . such as, the surface energy of proton - neutron asymmetry is introduced to the ld mass formula .the ld mass formula is given as : where and for even - even , odd - even , and odd - odd nuclei , respectively . from and after we label eq .( [ ld6 ] ) as ld6 because of its six parameters .it should be noted that there are several forms of the surface asymmetry term when introducing to the ld .these six parameters are considered to be the most important terms for nuclear binding in a macroscopic view and reproduce experimental binding energies , generally speaking , within the precision of .the largest deviation comes from the lack of the shell effect .strutinsky procedure is the standard method to introduce the shell effect to the ld .a shell correction term is presented as the function of the number of the valence nucleons respected to the closed shell : where with is the number of the valence neutron ( proton ) respected to the nearest closed shell .it is obvious that in an illustrative view the shell correction is related to the number of the valence nucleons . here, we use an illustrative function for shell correction to ignore the numerical uncertainty and for further discussion : where and are the spin - orbit magic numbers , , , for both proton and neutron , and for neutron , and with the strength of the shell correction .the function considers the shell effect in an illustrative way that the nuclei around doubly magic nuclei get extra binding energy .the shell correction decreases when nuclei go far away from the doubly magic nuclei , with the scale of the distance .the exclusion of the magic numbers and is because the ld6 do not show the systematic necessity of the extra binding energy for the nuclei around these two magic numbers .the possible reason is that the ld works insufficiently in the light region . with the shell correction term eq .( [ shell2 ] ) , the ld mass formula can be written as : which is labeled as ld8 in the later discussion . in the following section ,the udm is applied to the ld .[ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] ( color online ) the distribution of the residues of the binding energies on the chart of nuclide of the ld7 fitted to the nuclei around the stability line in ame2012 .only the nuclei around stability line are shown in the upper panel , while the others are together shown in the lower panel ., title="fig : " ] ( color online ) the distribution of the residues of the binding energies on the chart of nuclide of the ld7 fitted to the nuclei around the stability line in ame2012 . 
only the nuclei around stability line are shown in the upper panel , while the others are together shown in the lower panel ., title="fig : " ] ( color online ) the distribution of the residues of the binding energies of the nuclei around stability line and all nuclei , and the corresponding estimated uncertainties of the ld7 fitted to the nuclei around stability in ame2012 . ]it is interesting to see what the ld and udm results are if only the nuclei around stability line are known , which is helpful for the understanding how the unmeasured nuclei can be described through the present observed data . nuclei are selected from ame2012 , which are around the stability line and with the small observed uncertainties of the binding energies ( in general , kev at , kev at , and kev at ) .table [ parastable ] presents the parameters obtained from these data for ld8 , ld6 , ld7 , and ld5 .the fittings of the ld8 and ld6 give large uncertainty on the surface asymmetry term .the ld7 and ld5 are then fitted without this term , defined as follow , respectively : it is obvious that the nuclei around stability line are less representative than all measured nuclei with larger uncertainties in each parameters of the ld , partially because of the less number of the data and partially because of the different function of each parameter .for example , both the paring and shell terms are less correlated to the isospin and other terms .the uncertainty of these two parameters changes not much for the data from the stability line to all measured nuclei because the most important change is on the isospin degree of freedom .the uncertainty of the volume and surface asymmetry term changes a lot as the isospin changes .the uncertainty of the surface asymmetry term is much larger than that of the volume asymmetry term because is generally much smaller than .thus the uncertainty of the surface asymmetry term is very large ( around ) if only nuclei near stability line is considered .it is stated in ref . that the form of the surface symmetry term is obtained not from any evidence of the nuclear mass but from the form of the volume asymmetry term .the udm results of these sets of parameters of the ld are presented in the table [ udmstable ] . because of the large uncertainties of the parameters , the estimated is larger than in most cases , which is the limitation of the udm .although the systematic uncertainties can not be determined among the large statistical uncertainties , many interesting discussions can be addressed .the standard deviations of the nuclei around stability line decrease a little from the ld5 to ld6 and from the ld7 to ld8 because of the added surface asymmetry term while the statistical uncertainty increases .it indicates that this term is not very necessary for the nuclei around stability line , which agrees with the original form of the ld . 
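The LD variants compared in this work (with and without the surface-asymmetry and shell terms) can be written as a single function. The sketch below uses typical literature-scale coefficients purely as placeholders — they are not the fitted values reported in the tables — the sign convention of the surface-asymmetry term varies between parameterizations, and the shell term implements the illustrative exponential form with the magic numbers quoted earlier.

```python
import math

# Illustrative liquid-drop coefficients in MeV -- typical literature-scale
# numbers, NOT the fitted parameters reported in the tables of this work.
A_V, A_S, A_C, A_SYM, A_SSYM, A_P = 15.5, 17.2, 0.71, 23.0, -25.0, 12.0
MAGIC_P = (28, 50, 82, 126)
MAGIC_N = (28, 50, 82, 126, 184)

def binding_energy_ld(Z, N, surface_sym=True, shell_strength=0.0, scale=10.0):
    """Liquid-drop binding energy with optional surface-asymmetry and
    illustrative shell terms.

    surface_sym=False gives the LD5/LD7-type variants (surface asymmetry
    dropped); shell_strength > 0 adds the exponential shell correction
    that rewards proximity to doubly magic nuclei, as sketched above.
    """
    A = Z + N
    I = (N - Z) / A
    delta = ((-1) ** Z + (-1) ** N) / 2        # +1 (ee), 0 (oe/eo), -1 (oo)
    B = (A_V * A
         - A_S * A ** (2 / 3)
         - A_C * Z * (Z - 1) / A ** (1 / 3)
         - A_SYM * I ** 2 * A
         + A_P * delta / math.sqrt(A))
    if surface_sym:
        # sign and magnitude of this term depend on the adopted convention
        B += A_SSYM * I ** 2 * A ** (2 / 3)
    if shell_strength:
        dz = min(abs(Z - m) for m in MAGIC_P)
        dn = min(abs(N - m) for m in MAGIC_N)
        B += shell_strength * math.exp(-(dz + dn) / scale)
    return B

# e.g. binding_energy_ld(82, 126, shell_strength=5.0) for 208Pb
```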
in the ld6 and ld8 , of the residues from stability line are similar compared with to that from all measured nuclei .although the surface asymmetry term can be excluded when the data is around stability line and its uncertainty is rather large if included , its value is acceptable obtained ( the best fitted values in table [ para ] are inside in table [ parastable ] ) and very useful for the prediction .it is reasonable because the surface asymmetry term has its physical meaning on the isospin degree of freedom .the real uncertainty is less than the estimation .the situation of the heavy nuclei is similar .although the estimated uncertainty is large , the real uncertainty is much smaller for both the binding energies and the neutron separation energies because the liquid drop assumption is suitable for the heavy nuclei . in the ld5 and ld7 , of the residues from all measured nucleiare dramatically larger compared with that from stability line .it is expected because the nuclei near stability line , , is nearly one dimension on the chart of nuclei , which misses many terms on isospin degree of freedom correlated to other degrees of freedom .such as , for certain , is limited around .the correlation between and other are missing .figure [ ld7sd ] presents the comparison of the results of the ld7 for both the nuclei around stability line and all measured nuclei .it is clearly seen that the parameters obtained from the stability line fail when approaching to neutron- and proton - rich nuclei .it is nice to see that the standard deviations of the statistical uncertainties keep almost the same from the stability line to all measured nuclei in both the ld5 and ld7 .such estimated can be used in the udm and give nice description on the residues , seen from fig .[ ld7counts ] taking the ld7 for example .although estimated seem to be large when only data around stability line is concentrated , their values are indeed useful for predictions in the global chart of nuclide even if some important terms are missing , such as the surface asymmetry term .but of course the systematic uncertainty may be very large .it is expected that the estimated obtained from the present observed data in sec .[ sec : level3 ] is useful to scale the statistical uncertainty for unmeasured nuclei .but as discussed before , such estimations are more suitable for a global investigation , may not for local cases . in fig .[ ld7counts ] , the normalized factor of is set to be one , which shows that the statistical uncertainty is very large that the systematic uncertainty can not be clarified for the nuclei around the stability line .it is seen that the systematic uncertainty less contributes to the residues of these nuclei with the same set of parameters when the data changes to all measured nuclei .in conclusion , a method is suggested to decompose the statistical and systematic uncertainties of a theoretical model after fitting procedure .the two uncertainties are obtained through the total uncertainty and the model parameters obtained from the fitting . such uncertainty decomposition method ( udm )are applied to the liquid drop model ( ld ) as an example .the estimated distribution can well reproduce the distribution of the residues of the binding energies .the specific nuclei locate where the physical considerations expect in the estimated distribution obtained without these considerations . 
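The comparison between a fit restricted to the stability line and its extrapolation to all measured nuclei can be reproduced in a few lines. In the sketch below, the selection rule for "near stability" (Green's approximation with an arbitrary width) and the linear design matrix are assumptions of this illustration, not the cuts used above.

```python
import numpy as np

def stability_line_Z(A):
    """Green's approximation to the valley of beta stability, used here
    only as a convenient selection rule (not the cut used in the text)."""
    return A / (1.98 + 0.0155 * A ** (2 / 3))

def compare_stability_vs_all(X, y, Z, A, width=2.0):
    """Fit a linear design matrix X on nuclei near the stability line and
    return the residual standard deviations on that subset and on the
    full data set -- the two quantities compared in the discussion above."""
    near = np.abs(Z - stability_line_Z(A)) < width
    theta, *_ = np.linalg.lstsq(X[near], y[near], rcond=None)
    return (y[near] - X[near] @ theta).std(), (y - X @ theta).std()
```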
such as the light nuclei and nuclei close to shell locate mostly inside the distribution of the systematic uncertainty , while the heavy nuclei and nuclei far from shell inside the distribution of the statistical uncertainty .the results are obtained purely from the mathematic forms of the ld and the observed data .thus the udm may be useful for the discovery of hinted physics in certain cases .the validity of the udm is tested through various approaches .it is acceptable that the distribution of the statistical and systematic uncertainties are assumed to be normal .the present work is constrained in the simple model under the simple statistical assumptions to see what can be obtain from the residues .it should be noted that the realistic distributions of the statistical uncertainty from the model parameters and the parameters themselves are still not clearly known for the present simple model and some modern nuclear mass model , such as the hartree - fock - bogoliubov model and the weizsecker - skyrme mass model . some advanced method to deal with the systematic uncertainty of the mass model are of great interesting , such as the image reconstruction techniques , the radial basis function approach , and the method for the evaluation of ame2012 .the former two reconstruct the uncertainty through the systematic trends of the residues , while the latter evaluates huge amount of the observed values and their uncertainties .the present work also shows the effectiveness and limitation of the udm in the investigation of the unmeasured nuclei . in a global view , the statistical uncertainty are well estimated even when certain important terms in the model are missing .the statistical uncertainty may be overestimated or underestimated in the specific regions of nuclei , such as the heavy and light nuclei .more statistical methods combined with the physical considerations should be considered to simulate the uncertainties of the parameters and the statistical uncertainties .the present work shows that the residues indeed includes more information beyond the most used two , the mean value and the standard deviation , which are rarely discussed before .it is expected that the udm can be used in other theoretical works , such as fitting effective nuclear force to nuclear data , which is helpful for the study of the statistical and systematic uncertainties of the levels through the nuclear shell model .the author acknowledge to the useful suggestion from dr .chong qi , and the collection of the data by the students enrolled the course `` applied statistics '' in 2013 . this work has been supported by the national natural science foundation of china under grant no .11305272 , the specialized research fund for the doctoral program of higher education under grant no . 20130171120014 , the fundamental research funds for the central universities under grant no .14lgpy29 , the guangdong natural science foundation under grant no .2014a030313217 , and the pearl river s&t nova program of guangzhou under grant no .201506010060 .99 j. dobaczewski , w. nazarewicz , and p.g .reinhard , j. phys .g : nucl . part .* 41 * , 074001 ( 2014 ) .w.d . myers and w.j .swiatecki , nucl . phys . * 81 * , 1 ( 1966 ) .p. mller and j.r .data nucl .data tables * 59 * , 185 ( 1995 ) .k. pomorski and j. dudek , phys .c * 67 * , 044316 ( 2003 ) .j. dudek and t. werner , j. phys .phys . , * 4 * , 1543 ( 1978 ) .j. dudek , _ et al ._ , j. phys .phys . , * 5 * , 1359 ( 1979 ) .j. bartel , nucl . phys .a * 386 * , 79 ( 1982 ) .e. 
chabanat , _ et al .phys . a * 627 * , 710 ( 1997 ) .e. chabanat , _ et al ._ , nucl . phys .a * 635 * , 231 ( 1998 ) .m. liu , n.wang , y. deng , and x. wu , phys .c * 84 * , 014333 ( 2011 ) .n. wang , m. liu , x.z .wu , and j. meng , phys .b * 734 * , 215 ( 2014 ) . t. suzuki , t. otsuk , c.x .yuan , and a. navin , phys .b * 753 * , 199 ( 2016 ) .yuan , t. suzuki , t. otsuka , f.r .xu , and n. tsunoda , phys rev .c * 85 * , 064324 ( 2012 ) ; c.x .yuan , c. qi , f.r .xu , t. suzuki , and t. otsuka , ibid * 89 * , 044327 ( 2014 ) k. heyde , basic ideas and concepts in nuclear physics , 2nd ed .( iop , bristol , 1999 ) .h. jiang , g.j .zhao , and a. arima , phys .c * 85 * , 024301 ( 2012 ) .a.e.l . dieperink and p. van isacker , eur .j. a * 32 * , 11 ( 2007 ) .m. wang , _ et al .c * 36*(12 ) , 1603 ( 2012 ) .
A method is suggested to decompose the statistical and systematic uncertainties from the residues between the calculations of a theoretical model and the observed data. The residues and the model parameters are obtained through standard statistical fitting procedures. The present work concentrates on the decomposition of the total uncertainty, whose distribution corresponds to that of the residues. The distribution of the total uncertainty is treated as the combination of two normal distributions, one statistical and one systematic. The standard deviation of the statistical part is estimated by drawing random parameters distributed around their best-fit values. With this standard deviation fixed, the two normal distributions are obtained by matching the moments of the distribution of the residues. The method is applied to the liquid drop model (LD). The statistical and systematic uncertainties are decomposed from the residues of the nuclear binding energies with and without the consideration of the shell effect in the LD. The estimated distributions of the statistical and systematic uncertainties describe the distribution of the residues well. The assumption that the statistical and systematic uncertainties are normally distributed is examined through various approaches. The comparison between the distributions of specific groups of nuclei and those of the statistical and systematic uncertainties is consistent with physical expectations, although the latter two are obtained without any knowledge of these considerations. For example, the LD is more suitable for describing heavy nuclei: the light nuclei are indeed distributed mostly inside the distribution of the systematic uncertainty, while the heavy nuclei lie mostly inside that of the statistical uncertainty. A similar situation is found for the nuclei close to and far from closed shells. The present method is also applied to nuclei around the stability line, and the results are used to investigate all measured nuclei, which shows the usefulness of the UDM in the exploration of unmeasured nuclei.
according to the implementation of a differential equation , most approaches to continuous - time optimization can be classified as either a dynamical system ,, or a neural network ,,, .the dynamical system approach relies on the numerical integration of differential equations on a digital computer . unlike discrete optimazation methods, the step sizes of dynamical system approaches can be controlled automatically in the integration process and can sometimes be made larger than usual .this advantage suggests that the dynamical system approach can in fact be comparable with currently available conventional discrete optimal methods and facilitate faster convergence , .the application of a higher - order numerical integration process also enables us to avoid the zigzagging phenomenon , which is often encountered in typical linear extrapolation methods . on the other hand ,the neural network approach emphasizes implementation by analog circuits , very large scale integration , and optical technologies .the major breakthrough of this approach is attributed to the seminal work of hopfield , who introduced an artificial neural network to solve the traveling salesman problem ( tsp ) . by employing analog hardware ,the neural network approach offers low computational complexity and is suitable for parallel implementation . for continuous - time equality - constrained optimization, existing methods can be classified into three categories : feasible point method ( or primal method ) , augmented function method ( or penalty function method ) , and the lagrangian multiplier method .determining whether one method outperforms the others is difficult because each method possesses distinct advantages and disadvantages .readers can refer to ,,, and the references therein for details .the feasible point method directly solves the original problem by searching through the feasible region for the optimal solution .each point in the process is feasible , and the value of the objective function constantly decreases .compared with the two other methods , the feasible point method offers three significant advantages that highlight its usefulness as a general procedure that is applicable to almost all nonlinear programming problems : i ) the terminating point is feasible if the process is terminated before the solution is reached ; ii ) the limit point of the convergent sequence of solutions must be at least a local constrained minimum ; and iii ) the approach is applicable to general nonlinear programming problems because it does not rely on special problem structures such as convexity . 
in this paper ,a continuous - time feasible point approach is proposed for equality - constrained optimization .first , the equality constraint is transformed into a continuous - time dynamical system with solutions that always satisfy the equality constraint .then , the singularity is explained in detail and a new projection matrix is proposed to avoid singularity .an update ( or say a controller ) is subsequently designed to decrease the objective function along the solutions of the transformed system .the invariance principle is applied to analyze the behavior of the solution .we also propose a modified approach for addressing cases in which solutions do not satisfy the equality constraint .finally , the proposed optimization approach is applied to two examples to demonstrate its effectiveness .local convergence results do not assume convexity in the optimization problem to be solved .compared with global optimization methods , local optimization methods are still necessary .first , they often server as a basic component for some global optimizations , such as the branch and bound method . on the other hand, they can require less computation for online optimization .compared with the discrete optimal methods offered by matlab , at least two illustrative examples show that the proposed approach avoids convergence to a singular point and facilitates faster convergence through numerical integration on a digital computer . in view of these , the contributions of this paper are clear and listed as follows .\i ) a new projection matrix is proposed to remove a standard regularity assumption that is often associated with feasible point methods , namely that the gradients of constraints are linearly independent , see ( * ? ? ?* , equ.(4 ) ) , ( * ? ? ?* , equ.(2.3 ) ) , ( * ? ? ?* , assumption 1 ) .compared with a commonly - used modified projection matrix , the proposed projection matrix has better precision .moreover , its recursive form can be implemented more easily .\ii ) based on the proposed matrix , a continuous - time , equality - constrained optimization method is developed to avoid convergence to a singular point .the invariance principle is applied to analyze the behavior of the solution .\iii ) the modified version of the proposed optimization is further developed to address cases in which solutions do not satisfy the equality constraint .this ensures its robustness against uncertainties caused by numerical error or realization by analog hardware .we use the following notation . is euclidean space of dimension . denotes the euclidean vector norm or induced matrix norm . is the identity matrix with dimension denotes a zero vector or a zero matrix with dimension direct product and operation are defined in _appendix a_. the function _ { \times}:\mathbb{r } ^{3} ] and the matrix of second partial derivatives of known as hessian is given by and _ { ij}. ] are the equality constraints .they are both twice continuously differentiable .denote by {cccc}\nabla c_{1}\left ( x\right ) & \nabla c_{2}\left ( x\right ) & \cdots & \nabla c_{m}\left ( x\right ) \end{array } \right ] \in\mathbb{r } ^{n\times m}. ] and {\times}^{t}]^{t}\in\mathbb{r } ^{4\times3}. ] is defined in _appendix b. 
_ all solutions of the attitude kinematics satisfy the constraint driven by any .the explanation is given as follows .it is easy to check that {\times}q=0 ] then .since the two sets and are not connected , the solution of ( [ dynamical system ] ) starting from either set can not access the other .although , we still expect the global minimum that is why we often require that the initial value be close to the global minimum besides this , it is also expected that the function is chosen to make the set as large as possible so that the probability of is higher .if , then the function can be chosen to satisfy * theorem 1*. suppose that and where is with full column rank , and the space spanned by the columns of is the null space of then _ proof . _ since the remaining task is to prove namely for any there exists a control input that can transfer any initial state to since there exist such that and by the definition of design a control input {c}\frac{1}{\bar{t}}\left ( \bar{u}-u_{0}\right ) , \\ 0 , \end{array}\begin{array } [ c]{c}0\leqt\leq\bar{t}\\ t>\bar{t}. \end{array } \right . .\ ] ] with the control input above , we have when .then hence consequently , from the proof of _ theorem 1 _ , the choice of becomes a controllability problem .however , it is difficult to obtain a controllability condition of a general nonlinear system .correspondingly , it is difficult to choose for a general nonlinear function to satisfy motivated by the linear case above , we aim to design a function range is the null space of for any fixed this idea can be formulated as , where function is the projection matrix , which orthogonally projects a vector onto the null space of .one well - known projection matrix is given as follows ,,: we can easily verify that this projection matrix requires that should have full column rank , i.e. , every is a regular point .however , the assumption does not hold in cases where is singular .this condition is the major motivation of this paper .for example , consider an equality constraint as where {cc}x_{1 } & x_{2}\end{array } \right ] ^{t}\in\mathbb{r } ^{2}. ] has a unique feasible direction and the point {cc}0 & 0 \end{array } \right ] ^{t} ] has two feasible directions .this causes the singular phenomena .the singularity often occurs at the intersection of the feasible sets , where exist non - unique feasible directions .mathematically , is singular .concretely , the gradient vector of is{c}2x_{1}+2\\ -2x_{2}+2 \end{array } \right ] .\ ] ] at the points and the gradient vector of is{c}-2\\ 2 \end{array } \right ] , \nabla c\left ( x_{p_{2}}\right ) = \left [ \begin{array } [ c]{c}2\\ 2 \end{array } \right]\ ] ] and by ( [ projectionmatrix ] ) , the projection matrices are further {cc}0 & 1\\ 1 & 0 \end{array } \right ] , f\left ( x_{p_{2}}\right ) = \left [ \begin{array } [ c]{cc}0 & -1\\ -1 & 0 \end{array } \right]\ ] ] respectively . whereas, at the point the gradient vector of is{c}0\\ 0 \end{array } \right ] .\ ] ] for such a case , does not exist . to avoid singularity ,a commonly - used modified projection matrix is given as follows where is a small positive scale .we have no matter how small is . 
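The trade-off just mentioned is easy to see numerically. The sketch below compares the standard projector, the ε-regularized form, and an exact null-space projector built from an SVD that simply discards linearly dependent gradient directions — used here only as a stand-in for the recursive construction proposed later in this section. The test gradients mirror the rank-deficient example discussed below.

```python
import numpy as np

def projection_standard(G):
    """F = I - G (G^T G)^{-1} G^T; requires the constraint gradients
    (the columns of G) to be linearly independent."""
    n = G.shape[0]
    return np.eye(n) - G @ np.linalg.solve(G.T @ G, G.T)

def projection_regularized(G, eps=1e-6):
    """Commonly used modification F = I - G (G^T G + eps I)^{-1} G^T:
    defined for any G, but never an exact projector when eps > 0."""
    n, m = G.shape
    return np.eye(n) - G @ np.linalg.solve(G.T @ G + eps * np.eye(m), G.T)

def projection_nullspace(G, tol=1e-8):
    """Exact projector onto the null space of G^T built from an SVD;
    linearly dependent gradient columns are discarded automatically
    (an SVD stand-in for the recursive construction proposed below)."""
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    smax = s.max() if s.size else 0.0
    r = int(np.sum(s > tol * max(smax, 1.0)))   # numerical rank
    U = U[:, :r]
    return np.eye(G.shape[0]) - U @ U.T

# Rank-deficient gradients of the kind discussed in the text: the third
# column equals the sum of the first two, so G^T G is exactly singular
# and projection_standard(G) would fail.
G = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 2.0],
              [1.0, 1.0, 2.0],
              [1.0, 1.0, 2.0]])
F = projection_nullspace(G)
print(np.linalg.norm(F @ F - F))   # ~1e-16 : F is an exact projector
print(np.linalg.norm(F @ G))       # ~1e-16 : every gradient is annihilated
print(np.linalg.norm(projection_regularized(G, 1e-6) @ G))  # small but nonzero
```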
on the other hand , to obtain by ( [ modifiedprojectionmatrix ] ) , a very small will cause ill - conditioning problem especially for a low - precision processor .for example , consider the following gradient vectors:{cccc}1 & 1 & 1 & 1 \end{array } \right ] \nonumber\\ \nabla c_{2 } & = \left [ \begin{array } [ c]{cccc}2 & 1 & 1 & 1 \end{array } \right ] \nonumber\\ \nabla c_{3 } & = \left [ \begin{array } [ c]{cccc}3 & 2 & 2 & 2 \end{array } \right ] .\label{gradient}\ ] ] taking as the precision error , we employ ( [ modifiedprojectionmatrix ] ) with different to obtain the projection matrix .as shown in fig.2 , the error varies with different .the best precision error can be achieved only at with a precision error around . reducing will increase the numerical error .the best cure is to remove the linearly dependent vector directly from .for example , in {ccc}\nabla c_{1}\left ( x\right ) & \nabla c_{2}\left ( x\right ) & \nabla c_{3}\left ( x\right ) \end{array } \right ] \in\mathbb{r } ^{n\times3} ] then _ proof ._ see _ appendix c_. * theorem 2*. suppose that and the function is designed to be then _ assumption 1 _ is satisfied with and _ proof . _ since , the function is defined as in ( [ fx ] ) so that by _ lemma 1 . _ therefore , _ assumption 1 _ is satisfied with further by _ lemma 1 _ , * theorem 3*. suppose that and the function is in a recursive form as follows: then _ assumption 1 _ is satisfied with and and _ proof ._ see _ appendix d_. * remark 4*. in ( [ modrecursiveform ] ) , if then namely this is the normal way to construct a projection matrix . on the other hand ,if can be represented by a linear combination of then in this case , consequently , the projection matrix will reduce to the previous one , that is equivalent to removing the term this is consistent with the best way .* remark 5*. in practice , the impulse function is approximated by some continuous functions such as , where is a large positive scale .let us revisit the example for the gradient vectors ( [ gradient ] ) .taking as the error again , we employ ( [ modrecursiveform ] ) with to obtain the projection matrix with this demonstrates the advantage of our proposed projection matrix over ( [ modifiedprojectionmatrix ] ) .furthermore , compared with ( [ projectionmatrix ] ) or ( [ modifiedprojectionmatrix ] ) , the explicit recursive form of the proposed projection matrix is also easier for the designer to implement .in this section , by using lyapunov s method , the update ( or say controller ) is designed to result in .however , the objective function is not required to be positive definite .we base our analysis upon the lasalle invariance theorem . taking the time derivative of along the solutions of ( [ dynamical system ] )results in where in order to get a direct way of designing is proposed as follows where and .then ( [ dv ] ) becomes substituting ( [ controller ] ) into the continuous - time dynamical system ( [ dynamical system ] ) results in with solutions which always satisfy the constraint the closed - loop system corresponding to the continuous - time dynamical system ( [ dynamical system ] ) and the controller ( [ controller ] ) is depicted in fig.3 . unlike a lyapunov function, the objective function is not required to be positive definite . as a consequence ,the conclusions for lyapunov functions are not applicable . instead, the invariance principle is applied to analyze the behavior of the solution of ( [ dynamics1 ] ) . * theorem 4*. 
under _ assumption 1 _ , given , if the set is bounded , then the solution of ( [ dynamics1 ] ) starting at approaches , where if in addition then there must exist a ^{t} ] in this case , then the set is empty .although the proposed approach ensures that the solutions satisfy the constraint , this approach may fail if or if numerical algorithms are used to compute the solutions . moreover , if the impulse function is approximated , then the constraints will also be violated . with these results ,the following modified closed - loop dynamical system is proposed to amend this situation .similar to , we introduce the term into ( [ dynamics1 ] ) , resulting in where .define then where is utilized .if the impulse function is approximated , then and can be ignored in practice .therefore , the solutions of ( [ modifieddynamics1 ] ) will tend to the feasible set if is of full column rank .once the modified dynamical system ( [ modifieddynamics1 ] ) degenerates to ( [ dynamics1 ] ) .the self - correcting feature enables the step size to be automatically controlled in the numerical integration process or to tolerate uncertainties when the differential equation is realized by using analog hardware .* remark 7*. the matrix plays a role in coordinating the convergence rate of all states by minimizing the condition number of the matrix functions like .moreover , it also plays a role in avoiding instability in the numerical solution of differential equations by normalizing the lipschitz condition of functions like concrete examples are given in the following section .for a given lyapunov function , the crucial step in any procedure for estimating the attraction domain is determining the optimal estimate .consider the system of differential equations: where is the state vector , is a hurwitz matrix , and is a vector function .let be a given quadratic lyapunov function for the origin of ( [ marginsys ] ) , i.e. , is a positive - definite matrix such that .then the largest ellipsoidal estimate of the attraction domain of the origin can be computed via the following equality - constrained optimization problem : = 0.\nonumber\ ] ] since is bounded , the subset = 0\}\ ] ] is bounded no matter what is . 
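Before turning to the concrete instance below, here is a minimal sketch of how the modified closed-loop dynamics can be integrated numerically (the analogue of solving with ode45). The objective, constraint, gains, and the SVD-based projector are placeholders of this sketch; the recursive projection construction of the preceding section is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def nullspace_projector(G, tol=1e-8):
    """SVD-based projector onto the null space of G^T (columns of G are
    the constraint gradients); dependent columns are ignored."""
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    smax = s.max() if s.size else 0.0
    U = U[:, : int(np.sum(s > tol * max(smax, 1.0)))]
    return np.eye(G.shape[0]) - U @ U.T

def rhs(t, x, grad_f, grad_c, c, k=1.0, k_c=5.0):
    """Modified closed-loop dynamics
         x' = -k * F(x) grad_f(x) - k_c * grad_c(x) @ c(x),
    where the second term pulls the state back toward the feasible set
    whenever c(x) != 0."""
    G = grad_c(x)                                # shape (n, m)
    return -k * nullspace_projector(G) @ grad_f(x) - k_c * G @ c(x)

# Toy instance (objective, constraint and gains are placeholders, not the
# examples of the paper): minimize (x1-2)^2 + (x2+1)^2 on the unit circle.
f      = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])
c      = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
grad_c = lambda x: np.array([[2.0 * x[0]], [2.0 * x[1]]])

x0 = np.array([-3.0, 1.0])                       # infeasible start
sol = solve_ivp(rhs, (0.0, 50.0), x0, args=(grad_f, grad_c, c),
                rtol=1e-8, atol=1e-10)
x_end = sol.y[:, -1]
print(x_end, f(x_end), c(x_end))                 # ends near the constrained minimum
```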
for simplicity , consider ( [ marginsys ] ) with ^{t}\in\mathbb{r } ^{2}, ] where then the optimization problem is formulated as since the problem is further formulated as then ^{t}\\ \nabla c\left ( x\right ) & = \left [ \begin{array } [ c]{c}d_{2}-0.1d_{1}^{2}-0.2d_{1}d_{3}\\ d_{2}-0.1d_{1}^{2}+d_{3}\end{array } \right ] \\d_{1 } & = x_{1}+1,d_{2}=x_{2}+1,d_{3}=x_{1}+x_{2}+2.\end{aligned}\ ] ] in this example , we adopt the modified dynamics ( [ modifieddynamics1 ] ) , where is chosen as ( [ fx ] ) with and the parameters chosen as we solve the differential equation ( [ modifieddynamics1 ] ) by using the matlab function ode45 with variable - step .compared with the matlab optimal constrained nonlinear multivariate function fmincon , we derive the comparisons in table 1 .{ccccc}\hline\hline { \small method } & { \small initial point } & { \small solution } & { \small optimal value } & { \small cpu time ( sec.)}\\\hline { \small matlab fmincon } & { \small [ -3 1]} & { \small [ -1 -1]} & { \small 2.0000 } & { \small not available}\\ { \small new method } & { \small [ -3 1]} & { \small [ 0.2062 -0.8546]} & { \small 0.7729 } & { \small 0.125}\\\hline { \small matlab fmincon } & { \small [ 2 -4]} & { \small [ -1 -1]} & { \small 2.0000 } & { \small not available}\\ { \small new method } & { \small [ 2 -4]} & { \small [ 0.2062 -0.8545]} & { \small 0.7726 } & { \small 0.0940}\\\hline { \small matlab fmincon } & { \small [ 1 -4]} & { \small [ 0.2143 -0.8533]} & { \small 0.7740 } & { \small 0.2030}\\ { \small new method } & { \small [ 1 -4]} & { \small [ 0.2056 -0.8550]} & { \small 0.7733 } & { \small 0.1100}\\\hline\hline\end{tabular } .}\ ] ] the point ^{t} ] as shown in table 1 , under initial points ^{t}\in\mathcal{f} ] the matlab function fails to find the minimum and stops at the singular point , whereas the proposed approach still finds the minimum .under initial point ^{t}\notin\mathcal{f}, ] , the solutions of ( [ modifieddynamics1 ] ) change direction and then move to the minimum ^{t} ] is known as the _ essential matrix_. by using the direct product and the operation , the equations in ( [ essential ] ) are equivalent to where{c}m_{2,1}^{t}\otimes m_{1,1}^{t}\\ \vdots\\ m_{2,n}^{t}\otimesm_{1,n}^{t}\end{array } \right ] \in\mathbb{r } ^{n\times9},\nonumber\\ \varphi & = \text{vec}\left ( \left [ t\right ] _ { \times}r\right ) .\label{a}\ ] ] in practice , these image points and are subject to noise , . therefore , and are often solved by the following optimization problem where vec^{t}\in\mathbb{r } ^{12} ] the time derivative of along the solutions of ( [ dynamical system ] ) is where {c}\left ( i_{3}-tt^{t}\left/ \left\vert t\right\vert ^{2}\right .\right ) ^{t}h^{t}\left ( r^{t}\otimes i_{3}\right ) ^{t}\\ h^{t}\left ( r^{t}\otimes i_{3}\right ) ^{t}\left ( i_{3}\otimes\left [ t\right ] _ { \times}\right ) ^{t}\end{array } \right ] \in\mathbb{r } ^{6\times9}.\ ] ] the simplest way of choosing is . 
in this case , the eigenvalues of the matrix are often ill - conditioned , namely convergence rates of the components of depend on the eigenvalues of as a consequence , some components of converge fast , while the other may converge slowly .this leads to poor asymptotic performance of the closed - loop system .it is expected that each component of can converge at the same speed as far as possible .suppose that there exists a such that then by _ theorem 4 _ , approach the set each element of which is a global minimum since in the set .moreover , each component of converges at a similar speed .however , it is difficult to obtain such a , since the number of degrees of freedom of is less than the number of elements of .a modified way is to make a natural choice is proposed as follows where denotes the moore penrose inverse of .the matrix is to make positive definite , where is a small positive real . from the procedure above, needs to be computed every time .this however will cost much time .a time - saving way is to update at a reasonable interval .then ( [ dynamics1 ] ) becomes where is defined in ( [ f ] ) . the differential equation can be solved by runge - kutta methods , etc .the solutions of ( [ dynamics2 ] ) satisfy the constraints , where vec^{t}. ] ^{t}, ] ^{t}, ] ^{t}. ] we solve the differential equation ( [ modifieddynamics1 ] ) by using matlab function ode45 with variable - step .compared with matlab optimal constrained nonlinear multivariate function fmincon , we have the following comparisons:{ccc}\hline\hline { \small method } & & { \small cpu time ( sec.)}\\\hline { \small matlab fmincon } & { \small 1.2469e-004 } & { \small 0.2500}\\ { \small new approach } & { \small 1.8784e-005 } & { \small 0.1400}\\\hline\hline \end{tabular } .}\ ] ] as shown in table 2 , the proposed approach requires less time to achieve a higher accuracy .given that , the solution is a global minimum .the evolution of each element of is shown in fig.5 .the state eventually reaches a rest state at a similar speed . with different initial values ,several other simulations are also implemented .based on the results , the proposed algorithm has met the expectations .an approach to continuous - time , equality - constrained optimization based on a new projection matrix is proposed for the determination of local minima . with the transformation of the equality constraint into a continuous - time dynamical system ,the class of equality - constrained optimization is formulated as a control problem .the resultant approach is more general than the existing control theoretic approaches .thus , the proposed approach serves as a potential bridge between the optimization and control theories . compared with other standard discrete - time methods , the proposed approach avoids convergence to a singular point and facilitates faster convergence through numerical integration on a digital computer ._ a. kronecker product and vec _the symbol vec is the column vector obtained by stacking the second column of under the first , and then the third , and so on . with \in\mathbb{r } ^{n\times m} ] where the symbol _ { \times}: ] we have _ { \times}x=0_{3\times1},$ ] _ { \times}\right ) & = hx,\\ h & = \left [ \begin{array } [ c]{ccccccccc}0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \end{array } \right ] ^{t}.\end{aligned}\ ] ] since if and if we have , according to this , we have the following relationship this implies that , namely . 
on the other hand ,any is rewritten as where is utilized .hence consequently , denote first , by _ theorem 2 _ , _ _ it is easy to see that the conclusions are satisfied with .assume and then prove that holds .if so , then we can conclude this proof .by we have by _ lemma 1 _ , we have namely , where \(i ) proof of _ proposition 1 . _ in the space the set is compact iff it is bounded and closed by theorem 8.2 in .hence , the remainder of work is to prove that is closed .suppose , to the contrary , not closed .then there exists a sequence with whereas , and which imply contradiction implies that is closed .hence , the set is compact . by ( [ dv1 ] ) , with respect to ( [ dynamics1 ] ) , . by _ assumption 1 _ ,all solutions of ( [ dynamics1 ] ) satisfy therefore , is positively invariant with respect to ( [ dynamics1 ] ) .\(ii ) proof of _ proposition 2 . _ since _ _ is compact and positively invariant with respect to ( [ dynamics1 ] ) , by _ theorem 4.4 _ ( invariance principle ) _ _ in , the solution of ( [ dynamics1 ] ) starting at approaches namely in addition , since ( [ dynamics1 ] ) becomes in , the solution approaches a constant vector \(iii ) proof of _ proposition 3 . _ since and satisfy the following two equalities there exists a such that for any as a consequence , for any there must exist such that .otherwise , therefore , is a kkt point .furthermore , by theorem 12.6 in , is a strict local minimum if , for all
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. To avoid such a singularity, we propose a new projection matrix, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Then, the singularity is explained in detail and a new projection matrix is proposed to avoid it. An update law (or controller) is subsequently designed to decrease the objective function along the solutions of the transformed system. The invariance principle is applied to analyze the behavior of the solutions. We also propose a modified approach for addressing cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to two examples to demonstrate its effectiveness.

Keywords: optimization, equality constraints, continuous-time dynamical systems, singularity
we present a generative model for constructing networks that grow via competition between cycle formation and the addition of new nodes .the algorithm is intended to model situations such as trading networks , kinship relationships , or business alliances , where networks evolve by either establishing closer connections by adding links to existing nodes or alternatively by adding new nodes . in arranging a marriage , for example , parents may attempt to find a partner within their pre - existing kinship network . for reasons such as alliance building and incest avoidance ,such a partner should ideally be separated by a given distance in the kinship network .such a marriage establishes a direct tie between families , creating new cycles in the kinship network .alternatively , if they do not find an appropriate partner within the existing network , they may seek a partner completely outside it , thereby adding a new node and expanding it .another motivating example is trading networks .suppose two agents ( nodes ) are linked if they trade directly .to avoid the markups of middlemen , and for reasons of trust or reliability , an agent may seek new , more distant , trading partners . if such a partner is found within the existing network a direct link is established , creating a cycle . if not , a new partner is found outside the network , a direct link is established , and the network grows . a similar story can be told about strategic alliances of businesses ; when a business seeks a partner , that partner should not be too similar to businesses with which relationships already exist .thus the business will first take the path of least effort , and seek an appropriate partner within the existing network of businesses that it knows ; if this is not possible , it may be forced to find a partner outside the existing network .all of these examples share the common property that they involve a competition between a process for creating new cycles within the existing network and the addition of new nodes to the network .while there has been an explosion of work on generative models of graphs , there has been very little work on networks of this type . the only exception that we are aware of involves network models of autocatalytic metabolisms .such autocatalytic networks have the property that network growth comes about through the addition of autocatalytic cycles , which can either involve existing chemical species or entirely new chemical species .previous work has focused on topological graph closure properties , or the simulation of chemical kinetics , and was not focused on the statistical properties of the graphs themselves .we call graphs of the type that we study here _ feedback networks _ because the cycles in the graph represent a potential for feedback processes , such as strengthening the ties of an alliance or chemical feedback that may enhance the concentration corresponding to an existing node . we study the degree distributions of the graphs generated by our algorithm , and find that they are well - described by distribution functions that have recently been proposed in nonequilibrium statistical mechanics , more precisely in nonextensive statistical mechanics .such distributions occur in the presence of strong correlations , e.g. 
phenomena with long - range interactions .our intuition for why these distributions occur here is that the cycle generation inherently generates long range correlations in the construction of the graph .the growth model we propose closely mimics the examples given above . for each time step ,a starting node is randomly selected ( e.g. the person or family looking for a marriage partner ) and a target node ( the marriage partner ) is searched for within the existing network .node is not known at the outset but is searched for starting at node .the search proceeds by attempting to move through the existing network for some number of steps without retracing the path .if the search is successful a new link ( edge ) is drawn from to . if the search is unsuccessful , as explained below , a new node is added to the graph and a link is drawn from to . this process can be repeated for an arbitrary number of steps . in our simulations , we begin with a single isolated node but the initial condition is asymptotically not important . for each time stepwe randomly draw from a scale free distribution the starting node , the distance ( number of steps necessary to locate starting at assuming that such a location does occur ) , and for each node along the search path , the subsequent neighbor from which to continue the search .while node is nt randomly selected at the outset , it is obviously guaranteed that the shortest path distance from to is at most .we now describe the model in more detail including the method for generating search paths , and the criterion for a successful search . ** _ selection of node *. the probability of selecting a given node from among the nodes of the existing network is proportional to its degree raised to a power .the parameter is called the _attachment parameter_. + ^{\alpha } } \over { { \sum_{m=1}^{n } } { \mbox [ { deg}}(m)]^{\alpha } } } \label{eq1}\ ] ] * * _ assignment of search distance *. an integer is chosen with probability where is the _ distance decay parameter _ ..is required to make the sum in the normalization converge . ] + + in our experiments , we use the approximation of for computing the denominator of eq .[ eq2 ] . * * _ generation of search path_*. in the search for node , assume that at a given instant the search is at node , where initially .a step of the search occurs by randomly choosing a neighbor of , defined as a node with an edge connecting it to .we do not allow the search to retrace its steps , so nodes that have already been visited are excluded .furthermore , to make the search more efficient , the probability of choosing node is weighted based on its unused degree , which is defined as the number of neighbors of that have not yet been visited . the probability for selecting a given neighbor is } \over { \sum_{m=1}^m { \big[1+{\mbox u}(m)^\gamma\big ] } } } , \label{eq3}\ ] ] where is the number of unvisited nearest neighbors of node is called the _ routing parameter_. if there are no unvisited neighbors of the search is terminated , a new node is created , and an edge is drawn between the new node and node . otherwise this process is repeated up to steps , and a new edge is drawn between node and node . 
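The three steps above — including the two possible outcomes of the search, creating a new node versus closing a cycle — can be simulated compactly. The sketch below uses networkx, truncates the distance distribution at an arbitrary d_max for normalization, gives isolated nodes unit attachment weight, and lets nx.Graph silently ignore a duplicate edge when the located target is already a neighbor; these are choices of this illustration rather than of the original model.

```python
import random
import networkx as nx

def grow_feedback_network(T, alpha=1.0, beta=2.0, gamma=1.0, d_max=100, seed=None):
    """Grow a feedback network for T steps.

    alpha : attachment parameter (start node chosen with weight degree**alpha)
    beta  : distance decay       (search length d chosen with weight d**-beta)
    gamma : routing parameter    (next hop chosen with weight 1 + U(m)**gamma)
    """
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_node(0)
    d_values = list(range(1, d_max + 1))
    d_weights = [d ** -beta for d in d_values]          # truncated power law
    for _ in range(T):
        nodes = list(G.nodes)
        # start node i ~ degree**alpha (isolated nodes get unit weight)
        w = [max(G.degree(n), 1) ** alpha for n in nodes]
        i = rng.choices(nodes, weights=w)[0]
        d = rng.choices(d_values, weights=d_weights)[0]
        # non-retracing walk of up to d steps, biased toward unused degree
        visited, j, found = {i}, i, True
        for _ in range(d):
            nbrs = [n for n in G.neighbors(j) if n not in visited]
            if not nbrs:
                found = False                            # dead end
                break
            uw = [1 + sum(1 for m in G.neighbors(n) if m not in visited) ** gamma
                  for n in nbrs]
            j = rng.choices(nbrs, weights=uw)[0]
            visited.add(j)
        if not found:
            j = G.number_of_nodes()                      # node creation
            G.add_node(j)
        G.add_edge(i, j)                                 # cycle formation otherwise
    return G

# e.g. G = grow_feedback_network(5000, alpha=1.0, beta=2.0, gamma=1.0)
#      degree_counts = nx.degree_histogram(G)
```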
in the first case we call this _ node creation _ , and in the second case , _typical feedback networks with for of , , , are shown in figures [ degree graphs ] and [ weight graphs ] .( 700,700 ) ( 0,0 ) ( 0,0)(c ) ( 350,0 ) ( 350,0)(d ) ( 0,350 ) ( 0,350)(a ) ( 350,350 ) ( 350,350)(b ) ( 700,700 ) ( 0,0 ) ( 0,10)(c ) ( 350,0 ) ( 350,10)(d ) ( 0,350 ) ( 0,360)(a ) ( 350,350 ) ( 350,360)(b ) the two figures display different depictions of the same four graphs . in figure [ degree graphs ]the sizes of the nodes represent their degrees and in figure [ weight graphs ] the thickness of the edge is proportional to the number of successfully created feedback cycles in which the edge participated ( i.e. the number of times the search traversed this edge ) .the attachment parameter controls the extent to which the graph tends to form hubs ( highly connected nodes ) . when there is no tendency to form hubs , whereas when is large there tend to be fewer hubs .as the distance decay parameter increases the network tends to become denser due to the fact that is typically very small .as increases the search tends to seek out nodes with higher connectivity , there is a higher probability of successful cycle formation , and the resulting graphs tend to be more interconnected and less tree - like . despite that fact that network formation in our model depends purely on local information , i.e. each step only depends on information about nodes and their nearest neighbors , the probability of cycle formation is strongly dependent on the global properties of the graph , which evolve as the network is being constructed . in our modelthere is a competition between successful searches , which increase the degree of two nodes and leave the number of nodes unaltered , and unsuccessful searches , which increase the degree of an existing node but also create a new node with degree one .successful searches lower the mean distance of a node to other nodes , and failed searches increase this distance .this has a stabilizing effect a nonzero rate of failed searchesis needed to increase distances so that future searches can succeed . using this mechanism to growthe network ensures that local connectivity structures , in terms of the mean distance of a node to other nodes , are somewhat similar across nodes thus creating long - range correlations between nodes .because these involve long - range interactions , we check whether the resulting degree distributions can be described by the form where the _ -exponential _function is defined as ^{{1}/{(1-q ) } } \;\;\;\;\;\ ; ( e_{1}^{x } = e^{x } ) \label{eq5}\ ] ] if , and zero otherwise .this reduces to the usual exponential function when , but when it asymptotically approaches a power law in the limit .when , the case of interest here , it asymptotically decays to zero .the factor coincides with if and only if ; is a characteristic degree number .the -exponential function arises naturally as the solution of the equation , which occurs as the leading behavior at some critical points .it has also been shown to arise as the stationary solution of a nonlinear fokker - planck equation also known as the _ porous medium equation_. various mesoscopic mechanisms ( involving multiplicative noise ) have already been identified which yield this type of solution .finally , the -exponential distribution also emerges from maximizing the entropy under a constraint that characterizes the number of degrees per node of the distribution .let us briefly recall this derivation . 
consider the entropy ^q}{q-1}\\ \bigl[s_1=s_{bg } & \equiv & -\int_0^\infty dk \ ,p(k ) \ln p(k)\bigr ] \ , , \nonumber\end{aligned}\ ] ] where we assume as a continuous variable for simplicity , and stands for _ boltzmann - gibbs_. if we extremize with the constraints and ^q } { \int_0^\infty dk \ , [ p(k)]^q } = k > 0\ , , \label{constr2}\ ] ] we obtain where the lagrange parameter is determined through eq .( [ constr2 ] ) . both constraints ( [ constr1 ] ) and ( [ constr2 ] ) impose .now to arrive at the ansatz ( [ eq4 ] ) that we have used in this paper , we must provide some plausibility to the factor in front of the -exponential .it happens this factor is the most frequent form of density of states in condensed matter physics ( it exactly corresponds to systems of arbitrary dimensionality whose quantum energy spectrum is proportional to an arbitrary power of the wave - vector of the particles or quasi - particles ; depending on the system , can be positive , negative , or zero , in which case the ansatz reproduces a simple -exponential ) . such density of states concurrently multiplies the boltzmann - gibbs factor , which is here naturally represented by .in addition to this , ansatz ( [ eq4 ] ) provided very satisfactory results in financial models where a plausible scale - free network basis was given to account for the distribution of stock trading volumes .an interesting financial mechanism using multiplicative noise has been recently proposed which precisely leads to a stationary state distribution of the form ( [ eq4 ] ) .it is for this ensemble of heuristic reasons that we checked the form ( 4 ) .the numerical results that we obtained proved a posteriori that this choice was a good one .( 700,700 ) ( 0,0 ) ( 0,0)(c ) ( 350,0 ) ( 350,0)(d ) ( 0,350 ) ( 0,350)(a ) ( 350,350 ) ( 350,350)(b ) [ cols="^,^,^,^,^,^,^,^,^ " , ] [ fitted values ] to study the node degree distribution , i.e. the frequency with which nodes have neighbors , we simulate 10 realizations of networks with for different values of the parameters and .some results are shown in figure [ degree distributions ] .we fit -exponential functions to the empirical distributions using the gauss - newton algorithm for nonlinear least - squares estimates of the parameters . due to limitations of the fitting software we used, we had to manually correct the fitting for the tail regions of the distribution . in table[ fitted values ] we give the parameters of the best fits for various values of , , and , demonstrating that the degree distribution depends on all three parameters .the solid curves in figure [ degree distributions ] represent the best fit to a -exponential .the fits to the -exponential are extremely good in every case . to test the goodness of fit, we performed kolmogorov - smirnov ( ks ) and wilcoxian ( w ) rank sum tests .due to the fact that the -exponential is defined only on , we used a two sample k - s test . 
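A sketch of this fitting-and-testing step: the ansatz — a power-law prefactor times a q-exponential — is fitted to the empirical degree probabilities with nonlinear least squares, and a two-sample KS statistic is computed against a sample drawn from the fitted form. The starting values, the sampling of the fitted distribution on the observed degree grid, and all function names are choices of this sketch, not the exact procedure used above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import ks_2samp

def q_exponential(x, q):
    """e_q^x = [1 + (1-q) x]_+^{1/(1-q)}  (ordinary exponential as q -> 1)."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def degree_ansatz(k, c, delta, q, kappa):
    """c * k**delta * e_q^{-k/kappa}: the form fitted to the degree histograms."""
    return c * k ** delta * q_exponential(-k / kappa, q)

def fit_degree_distribution(degrees):
    """Nonlinear least-squares fit of the ansatz to the empirical
    degree probabilities (assumes all degrees are >= 1)."""
    k, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    p0 = (p.max(), 0.0, 1.2, float(np.mean(degrees)))   # rough starting values
    params, cov = curve_fit(degree_ansatz, k.astype(float), p, p0=p0, maxfev=20000)
    return params, cov

def ks_against_fit(degrees, params, n=10000, rng=None):
    """Two-sample KS test between the data and a sample drawn from the
    fitted distribution evaluated on the observed degree grid."""
    rng = np.random.default_rng(rng)
    k = np.arange(1, max(degrees) + 1, dtype=float)
    w = degree_ansatz(k, *params)
    w /= w.sum()
    sample = rng.choice(k, size=n, p=w)
    return ks_2samp(degrees, sample)
```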
to deal with the problem that the data are very sparse in the tail , we excluded data points with sample probability less than .for the k - s test the null hypothesis is never rejected , and for the w test one case out of twelve is rejected , with a value of .thus we can conclude that there is no evidence that the -exponential is not the correct functional form .( 700,700 ) ( 0,0 ) ( 0,0)(d ) ( 233,0 ) ( 233,0)(e ) ( 466,0 ) ( 466,0)(f ) ( 0,300 ) ( 0,300)(a ) ( 233,300 ) ( 233,300)(b ) ( 466,300 ) ( 466,300)(c ) from eq .( 4 ) we straightforwardly verify that , in the limit , we obtain ( see also figure [ degree distributions ] ) a pareto distribution , of the form , where and .this corresponds to scale - free behavior , i.e. the distribution remains invariant under the scale transformation . in general, however , scale free behavior is only approached asymptotically , and the -generalized exponential distribution , which contains the pareto distribution as a special case , gives a much better fit .* parameters of model vs. -exponential*. to understand how the parameters of the -exponential depend on those of the model , we estimated the parameters of the -exponential for , and .figure [ param delta dependency ] studies the dependence of on the graph parameters , and figure [ param dependencies ] studies the dependence of and .it is clear that depends solely on the attachment parameter .the other two q - exponential parameters ( and ) depend on all three model parameters .the parameter diverges when and grow large and .the parameter grows rapidly as each of the three model parameters increase .in figure [ edge distribution ] we study the distribution of edge weights , where an edge weight is defined as the number of times an edge participates in the construction of a feedback cyle ( i.e. how many times it is traversed during the search leading to the creation of the cycle ) . from this figure it is clear that this property is nearly independent of the attachment parameter , but is strongly depends on the routing parameter .in this paper we have presented a generative model for creating graphs representing feedback networks .the construction algorithm is strictly local , in the sense that any given step in the construction of a network only requires information about the nearest neighbors of nodes .nonetheless , the resulting networks display long - range correlations in their structure .this is reflected in the fact that the -exponential distribution , which is associated with long - range correlation in problems in statistical mechanics , provides a good fit to the degree distribution .we think this adds an important contribution to the literature on the generation of networks by illustrating a mechanism that specifically focuses on the competition between consolidation by adding cycles , which represent stronger feedback within the network , and growth in size by simply adding more nodes . in future work , we hope to apply the present model to real networks such as biotech intercorporate networks , medieval trading networks , marriage networks , and other real examples .d. r. white , ring cohesion theory in marriage and social networks , mathematiques et sciences humaines * 168 * , 5 ( 2004 ) ; k. hamberger , m. houseman , i. daillant , d. r. white , and l. barry , matrimonial ring structures , mathematiques et sciences humaines * 168 * 83 ( 2004 ) , _ social networks special issue _ edited by alain degenne .d. r. white , w. w. powell , j. owen - smith , and j. 
moody , networks , fields and organizations : micro - dynamics , scale and cohesive embeddings , computational and mathematical organization theory * 10 * , 95 ( 2004 ) ; w. w. powell , d. r. white , k. w. koput , and j. owen - smith , network dynamics and field evolution : the growth of interorganizational collaboration in the life sciences , american journal of sociology * 110*(4 ) , 1132 ( 2005 ) .b. bollobs , and o. m. riordan , mathematical results on scale - free random graphs , in _ handbook of graphs and networks : from the genome to the internet _ , edited by s. bornholdt and h. g. schuster ( berlin , wiley - vch , 2003 ) .l. a. adamic , r. m. lukose , and b. a. huberman , local search in unstructured networks , in _ handbook of graphs and networks : from the genome to the internet _ , edited by s. bornholdt and h. g. schuster ( berlin , wiley - vch , 2003 ) , cond - mat/0204181 r. j. bagley , and j. d. farmer , spontaneous emergence of a metabolism , in _ artificial life ii _, edited by c. g. langton , c. taylor , j. d. farmer , s. and rasmussen , ( addison wesley , redwood city , 1991 ) .b. bollobs , _ random graphs _, ( london - new york , academic press , inc . , 1985 ) ; e. palmer , _ graphical evolution : an introduction to the theory of random graphs _ , ( new york , wiley , 1985 ) ; p. erds , and a. rnyi , on the evolution of random graphs , bulletin of the institute of international statistics * 38 * , 343 ( 1961 ) . c. tsallis , possible generalization of boltzmann - gibbs statistics , j.stat.phys . *52 * , 479 ( 1988 ) ; e. m. f. curado , and c. tsallis , generalized statistical mechanics : connection with thermodynamics , j.phys.a * 24 * , 69 ( 1991 ) ; corrigenda , * 24 * , 3187 ( 1991 ) and * 25 * , 1019 ( 1992 ) ; c. tsallis , r. s. mendes and a. r. plastino , the role of constraints within generalized nonextensive statistics , physica a * 261 * , 534 ( 1998 ) .a. r. plastino , and a. plastino , non - extensive statistical mechanics and generalized fokker - planck equation , physica a * 222 * , 347 ( 1995 ) ; c. tsallis , and d. j. bukman , anomalous diffusion in the presence of external forces : exact time - dependent solutions and their thermostatistical basis , phys.rev.e * 54 * , 2197 ( 1996 ) ; i. t. pedron , r. s. mendes , l. c. malacarne , and e. k. lenzi , nonlinear anomalous diffusion equation and fractal dimension : exact generalized gaussian solution , phys.rev.e * 65 * , 041108 ( 2002 ) . c. anteneodo , and c. tsallis , multiplicative noise : a mechanism leading to nonextensive statistical mechanics , j.math.phys . * 44 * , 5194 ( 2003 ) ; t. s. biro , and a. jakovac , power - law tails from multiplicative noise , phys.rev.lett .* 94 * , 132302 ( 2005 ) ; c. anteneodo , non - extensive random walks , physica a ( 2005 ) , in press [ cond - mat/0409035 ] .r. osorio , l. borland , and c. tsallis , distributions of high - frequency stock - market observables , in _ nonextensive entropy - interdisciplinary applications _ , edited by m. gell - mann and c. tsallis ( oxford university press , new york , 2004 ) , p. 321333 .s. m. d. queirs , on the distribution of high - frequency stock market traded volume : a dynamical scenario , cond - mat/0502337 ; s. m. d. queirs , on the emergence of a generalised gamma distribution . application to traded volume in financial markets , europhys.lett .( 2005 ) , in press .
we investigate a simple generative model for network formation . the model is designed to describe the growth of networks of kinship , trading , corporate alliances , or autocatalytic chemical reactions , where feedback is an essential element of network growth . the underlying graphs in these situations grow via a competition between cycle formation and node addition . after choosing a given node , a search is made for another node at a suitable distance . if such a node is found , a link is added connecting this to the original node , and increasing the number of cycles in the graph ; if such a node can not be found , a new node is added , which is linked to the original node . we simulate this algorithm and find that we can not reject the hypothesis that the empirical degree distribution is a -exponential function , which has been used to model long - range processes in nonequilibrium statistical mechanics .
black holes are thought to be ubiquitous in dense stellar systems . matter accreting onto supermassive black holes near the centers of galaxies is believed to be responsible for the energetic emission produced by active galactic nuclei ( zeldovich 1964 ; salpeter 1964 ; lynden - bell 1969 ; rees 1984 ) .furthermore , it has been conjectured that all galaxies harbor such black holes at their centers ( but see gebhardt et al .2001 for recent observations that some do not ) .although definitive proof of this hypothesis is still lacking , there exists evidence in some galaxies , such as ngc 4258 ( greenhill et al .1995 ; kormendy & richstone 1995 ) and our own galaxy ( see melia & falcke 2001 for a review ) , for the presence of an unresolved central dark mass of such high density that it is unlikely to be anything other than a black hole ( maoz 1998 ) . in the case of the galactic center , which is thought to coincide with the unusual radio source sgr a * , future observations will measure the orbits of individual stars within of sgr a * ( see , e.g. , ghez et al .in addition , forthcoming radio observations will significantly improve the current limits on the proper motion of sgr a * itself ( m. reid 2001 , private communication ) .it is important , therefore , to understand the general properties of the dynamics of massive bodies in dense stellar systems so that the observations can be unambiguously interpreted and predictions can be made to stringently test underlying theories . to pursue this goal, we present a simple model for the dynamics of a single massive black hole at the center of a dense stellar system .our approach is motivated by the recognition ( chandrasekhar 1943a ) that the force acting on an object in a stellar system broadly consists of two independent contributions : one part , which originates from the `` smoothed - out '' average distribution of matter in the stellar system , will vary slowly with position and time ; the second part , which arises from discrete encounters with individual stars , will fluctuate much more rapidly .the smooth force itself is expected to be made up of two pieces : the first is the force arising from the potential of the aggregate distribution of stars at the position of the object ; and the second is the dissipative force known as dynamical friction , which causes the object to decelerate as it moves through the stellar background ( chandrasekhar 1943b ) .the problem of the dynamics of a black hole in a stellar system is then similar in spirit to the langevin model of brownian motion ( see , e.g. , chandrasekhar 1943a ) , which describes the irregular motions suffered by dust grains immersed in a gas . in the langevin analysis , a brownian particle experiences a decelerating force due to friction which is proportional to its velocity , and it experiences an essentially random , rapidly fluctuating force owing to the large rate of collisions it suffers with the gas molecules in its neighborhood . we extend this method of analysis to the black hole problem .we take the stellar system to be distributed according to a plummer potential ( see binney & tremaine 1987 , hereafter bt ) because the dynamical equations are then relatively tractable , and because this density profile provides a reasonably good fit to actual stellar systems . 
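for later reference , here is a small sketch of the plummer profile just mentioned , in its standard textbook parametrization ; the units ( g = m = a = 1 ) , the function names and the quoted closed forms are our own choices , written down as assumptions rather than as the paper's ( elsewhere unstated ) conventions .
....
import numpy as np

G = 1.0  # gravitational constant in model units (assumption)

def plummer_density(r, M=1.0, a=1.0):
    """rho(r) = 3 M / (4 pi a^3) * (1 + r^2/a^2)^(-5/2)."""
    return 3.0 * M / (4.0 * np.pi * a**3) * (1.0 + (r / a) ** 2) ** (-2.5)

def plummer_potential(r, M=1.0, a=1.0):
    """phi(r) = -G M / sqrt(r^2 + a^2)."""
    return -G * M / np.sqrt(r**2 + a**2)

def enclosed_mass(r, M=1.0, a=1.0):
    """Mass in stars inside radius r: M r^3 / (r^2 + a^2)^(3/2)."""
    return M * r**3 / (r**2 + a**2) ** 1.5

def core_sigma2_1d(M=1.0, a=1.0):
    """One-dimensional velocity dispersion squared at the centre, G M / (6 a),
    for the isotropic Plummer model; the 3-d mean squared speed is 3x this."""
    return G * M / (6.0 * a)

def harmonic_frequency(M=1.0, a=1.0):
    """Frequency of small oscillations about the centre, sqrt(G M / a^3),
    obtained by expanding phi(r) to second order in r (standard result)."""
    return np.sqrt(G * M / a**3)

if __name__ == "__main__":
    print(enclosed_mass(np.array([0.5, 1.0, 2.0])))   # enclosed mass fractions
    print(core_sigma2_1d(), harmonic_frequency())     # ~0.167 and 1.0 in these units
....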
in 2, we set up the model equations and also provide a justification for breaking up the force on the black hole into two independent parts , one smooth and slowly varying , and the other rapidly fluctuating .the equations of motion for the black hole are shown to be similar to those of a brownian particle in a harmonic potential well . in 3 , we solve the equations of motion for the average position and velocity of the black hole , and the time - autocorrelation function of its position and velocity , obtaining both the transient and steady - state components of these functions . in 4 ,we derive the probability distributions of the black hole s position and velocity by solving the fokker - planck equation of the model .it is shown that in the steady state , these two variables are distributed independently with a gaussian distribution .the conclusions of 3 and 4 are tested in 5 by comparing them with the results of n - body simulations of various systems . in 6 ,we combine the results of the model and observational limits on the proper motion of sgr a * with physical arguments relating to the maximum lifetime of the cluster of stars surrounding the black hole ( following the approach in maoz 1998 ) to derive lower limits on the mass of sgr a*. finally , 7 summarizes the paper .consider a black hole of mass in a cluster of stars which we take to be described by a plummer model of total mass and length parameter .thus , the density and potential profiles are given , respectively , by where is the gravitational constant and is the radial position vector from the center of the stellar system , which is taken as the origin .the total mass in stars inside radius is then given the potential and density profiles , one can calculate the phase space distribution function , which in general depends both on position , and velocity , and which is defined such that is the mass in stars in the phase space volume .we make the assumption that for the spherically symmetric plummer model , is a function of the relative energy per unit mass only ( and independent of specific angular momentum ) , where , being the relative potential .the distribution function can then be calculated by the following equation ( see bt ) : for the plummer case , we get with these preliminaries , we are in a position to calculate the forces on the black hole in this model. there are three such forces : the restoring force of the stellar potential , dynamical friction , and a random force due to discrete encounters with stars .the restoring force on the black hole of the stellar potential is given by , where is the position vector of the black hole . now since the black hole is much more massive than the stars , its typical excursion from the center is small compared with , and we are entitled to neglect terms in the above equation of higher order than . 
the dominant restoring force on the black hole thus takes the form of hooke s law : where the `` spring constant '' is given by as the black hole moves through the sea of stars , it experiences a force of deceleration known as dynamical friction .we use for this the chandrasekhar dynamical friction formula ( chandrasekhar 1943b , bt ) : where in the above , is the velocity of the black hole , is the mass of each star ( in the following , we take all stars to have equal masses , for simplicity ) , and is the coulomb logarithm , which will be calculated below .note that the above formula was originally derived for the case of a mass moving through a homogeneous stellar system , for which the distribution function would be independent of .however , it is a good approximation to replace this in the case of non - homogeneous systems with the distribution function in the vicinity of the black hole ( see bt ) , especially since the distribution function for the plummer model varies slowly with in the region in which the black hole hole is confined ( ) . since the black hole moves very slowly compared with the stars, we may replace in the integral by to obtain ( see bt ) : but and for , .thus , we finally get or , since , the factor in the coulomb logarithm is given by where and are , respectively , the maximum and minimum impact parameters between the black hole and the stars that need be considered ; is usually set to be ( see bt , maoz 1993 ) , where is the typical relative speed between the black hole and the stars with which it interacts . since the velocity of the black hole is much smaller than that of the stars , we set to be the mean squared velocity of the stars : the maximum impact parameter is not well - defined ; however , an error in the choice of results in a much smaller error in the coefficient of dynamical friction , in which enters as the argument of the logarithm function .( note that there are other uncertainties as well : in the above formula for dynamical friction , the velocity dependence of the coulomb logarithm was ignored by replacing it with a typical relative velocity , , which is treated as a constant ; when the velocity of the black hole is small , it is not clear that this is a good approximation , and it is possible that the magnitude of the coefficient of dynamical friction would be modestly reduced relative to the above expressions [ see merritt 2001 ] . ) in this paper , we adopt a density - weighted formula for the coulomb logarithm given by maoz ( 1993 ) , which provides an implicit expression for ; in this case , should replace the coulomb logarithm in the above equations . 
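before specializing to the plummer expression for the maximum impact parameter below , the logarithmic insensitivity mentioned here can be illustrated in a few lines ; the minimum impact parameter is taken as g m_bh divided by the typical squared relative speed ( the conventional choice referred to above ) , the core mean squared speed g m / ( 2 a ) is a standard plummer result used as an assumption , and all numerical values are placeholders .
....
import numpy as np

G = 1.0
M_cluster, a = 1.0, 1.0          # Plummer mass and scale in model units (assumed)
M_bh = 0.01                      # black-hole mass (illustrative)

# Typical relative speed taken as the stellar mean squared speed in the core,
# <v^2> = G M / (2 a) for the Plummer model (assumption).
v2_typ = G * M_cluster / (2.0 * a)
b_min = G * M_bh / v2_typ        # conventional choice of minimum impact parameter

for b_max in (0.3 * a, 0.6 * a, 1.0 * a, 2.0 * a):
    print(f"b_max = {b_max:.2f}  ->  ln(b_max/b_min) = {np.log(b_max / b_min):.2f}")
# A factor of ~7 spread in b_max changes ln(Lambda) by less than a factor of two,
# which is the sense in which the friction coefficient is insensitive to b_max.
....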
for the case of the plummer potential , we obtain this gives a value for that is somewhat smaller than the core radius ( a value often chosen for ) , and indeed we find this choice provides a slightly better fit to our simulations ( see 5 ) than the alternative choice of .the third force acting on the black hole is the stochastic force denoted as , which arises from random discrete encounters between the black hole and the stars .this force can not be written down analytically in closed form , and is only defined statistically as described below .we can therefore characterize the dynamics of the black hole by which is the equation of motion of a harmonically bound brownian particle .the spatial components of this linear vector equation are separable into equivalent terms , and we will without loss of generality concern ourselves only with its -component : ( we will use and interchangeably in the following . ) this is a stochastic differential equation since , as noted above , the form of is not known . however , since this stochastic force is random and rapidly varying , we expect : ( 1 ) that this force is independent of ; ( 2 ) that it is zero on average ; and ( 3 ) that to an excellent approximation this force is uncorrelated with itself at different times .we may formalize these statements as where is a dirac delta function and the angular brackets denote an average over an ensemble of `` similarly prepared '' systems of stars in each of which the black hole has the same initial position and velocity .we take the factor to be independent of ; its magnitude will be determined in the next section . while this definition will not allow us to solve equation ( [ eqofmotion ] ) explicitly, we will obtain closed expressions for the time autocorrelation function of the black hole position and velocity in the next section . that the components of the random force can be separated and characterized as in the latter part of equation ( [ stochdef ] )is at this stage an assumption ; its justification must ultimately come from the agreement between the results of the model and the numerical simulations , as detailed in 5 .the autocorrelation function of the stochastic force on an individual star has been calculated before ( chandrasekhar 1944a , b ; cohen 1975 ) , in the approximation that the test star and its surrounding stars move along straight lines on deterministic orbits ; in this approximation , the autocorrelation function falls off as slowly as the inverse of the time lag for a uniformly dense infinite system . however , for the case we study in this paper , the fall - off will be much faster as a consequence of the rapid decrease in the density of the system outside the core radius ( see cohen 1975 ) , and because fluctuations will tend to throw the black hole and the field stars off their deterministic paths and by doing so reduce the correlation ( see maoz 1993 ) .another difference arises from the fact that we consider here a test object which is much more massive than the surrounding stars ; since the black hole moves very slowly relative to the stars , in the time that the motion of the black hole changes appreciably , the correlations in the force due to the stars would have worn off .our choice of the delta function to represent the force autocorrelation function is somewhat of an idealization , but is justified _ a posteriori _ by the good agreement between the model outlined above and the results of simulations described in 5 . 
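a minimal sketch of the resulting stochastic equation of motion , a harmonically bound particle with friction and delta - correlated forcing , integrated with a simple euler - maruyama scheme ; the symbol names ( omega for the harmonic frequency , beta for the friction coefficient , c for the noise strength ) and the parameter values are our own illustrative choices , not quantities taken from the paper .
....
import numpy as np

rng = np.random.default_rng(0)

def simulate_bh(omega=1.0, beta=0.05, C=1e-4, dt=1e-3, n_steps=1_000_000):
    """Euler-Maruyama integration of  dx = v dt,
    dv = (-omega^2 x - beta v) dt + sqrt(C) dW   (one component)."""
    x = np.empty(n_steps)
    v = np.empty(n_steps)
    x[0] = v[0] = 0.0
    kick = np.sqrt(C * dt)
    for i in range(1, n_steps):
        v[i] = (v[i - 1]
                + (-omega**2 * x[i - 1] - beta * v[i - 1]) * dt
                + kick * rng.standard_normal())
        x[i] = x[i - 1] + v[i - 1] * dt
    return x, v

if __name__ == "__main__":
    x, v = simulate_bh()
    x, v = x[len(x) // 4:], v[len(v) // 4:]   # discard the transient
    print("<v^2> :", np.mean(v**2), "  predicted C/(2 beta)       :", 1e-4 / (2 * 0.05))
    print("<x^2> :", np.mean(x**2), "  predicted C/(2 beta w^2)   :", 1e-4 / (2 * 0.05 * 1.0**2))
....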
before going on to solve the equations of motion , it is useful to list the approximations that have gone into setting up our model .we have assumed that a black hole of mass is located near the center of a stellar system of total mass and characteristic length scale , and that the mass of individual stars .hence , the black hole s velocity is expected to be very small compared with the velocities of the stars , and its position is expected to be confined in a small region , .we assume that the total force on the black hole is made up of two independent , separable parts .one ( i.e. , ) , which is due to very rapid fluctuations in the immediate surroundings of the black hole , is assumed to average to zero and to be uncorrelated with itself .the other part , which consists of dynamical friction and the force due to the aggregate stellar system , varies smoothly with the black hole s position and velocity on a time scale very much longer than that of the fluctuations .we assume that dynamical friction is given by the chandrasekhar formula , which entails a number of additional approximations ( see tremaine & weinberg 1984 ; weinberg 1986 ; nelson & tremaine 1999 ) .the chandrasekhar formula was originally derived for an infinite and homogeneous stellar system , but it is often employed for non - homogeneous systems by replacing the homogeneous density by the local density .the maximum effective impact parameter ( for relaxation encounters between the stars and the black hole ) that enters the coulomb logarithm is not well - defined ; we assume it to be given implicitly by the density - weighted expression ( [ bmax ] ) above .the gravitational encounters between the stars and the black hole are treated as a succession of binary encounters of short duration , i.e. as a markov process .the chandrasekhar formula approximates the orbits on which stars move past the black hole as keplerian hyperbolae , even though the actual stellar orbits are more complex .this formula neglects the self - gravity of stars in the wake induced by the black hole . despite these approximations ,chandrasekhar s formula has been found to provide an accurate description of dynamical friction in a variety of astrophysical situations ( see bt and references therein ) . in the present context, we will gauge the reliability of its use by appealing to numerical simulations to test the applicability of our model .we conclude this section by demonstrating that the time - scale for fluctuations in is very much smaller than the time - scale on which the position and velocity of the black hole change . near the center of a plummer model , where the massive black hole is localized , the stellar density is , since ; therefore , the typical separation between stars is .the typical stellar velocity is .the average time period of changes in , caused by discrete stellar encounters , is then approximately .now the characteristic time period with which the black hole s motion changes is , where , as is shown in the next section .we thus have if there are a total of stars each of mass ( i.e. , ) . 
therefore , for large , and we are justified in separating the total force on the black hole into slowly varying and rapidly fluctuating contributions .if we choose the initial position and velocity of the black hole to be then we can solve equation ( [ eqofmotion ] ) formally as using leibniz s rule to differentiate the second term on the right hand side of equation ( [ x ] ) under the integral sign , we can also solve for the velocity : in the above equations , in the case of interest with , we have , and so .note that the exact results of equation ( [ defs ] ) and not the above approximations have been used in comparing the predictions of the model with the numerical simulations of 5 . using the first of the properties in equation ( [ stochdef ] ), we have the following : note that in the steady state ( i.e. , as ) , the average values of the position and velocity components are zero . in the above equations and the subsequent equations , angular brackets have the same meaning as in equation ( [ stochdef ] ) . using the second of the properties in equation ( [ stochdef ] ), we can employ the delta function to perform the resulting double integral and solve for the time autocorrelation functions of the black hole position and velocity , with a time lag : in the same way we can calculate another quantity that will be useful later : using equations ( [ xcorrfull ] ) and ( [ vcorrfull ] ) , we can obtain the corresponding steady state autocorrelation functions for position and velocity , which are functions of the time lag alone , by letting go to infinity and thus letting transients die out , since , in most cases of interest , , these functions are essentially pure damped cosine terms with zero phase .note that the stationary state autocorrelation functions are related by it remains now to determine the constant .if we multiply ( [ eqofmotion ] ) by , rearrange and take the ensemble average , we obtain in the steady state , the average rate of change of total energy of the black hole is zero , and we get in other words , the `` heating '' by the medium due to fluctuations must in the steady state equal the `` cooling '' due to viscous dissipation by the force of dynamical friction , which is a form of the general relationship between the processes of fluctuation and dissipation ( see bekenstein & maoz 1992 ; maoz 1993 ; nelson & tremaine 1999 ) .the value of in the steady state is easily evaluated by setting in equation ( [ vcorr ] ) ; thus , bt calculate the total heating per unit time to be [ adapted from equation ( 8 - 66 ) in bt ] .isotropy implies that the heating due to the -component alone will be a third of this quantity , namely since the black hole velocity is small , we can replace the lower limit in the integral above by zero .then , for the plummer model we obtain where by plugging this back into the expression for and using equations ( [ cooling ] ) , ( [ fluctdiss ] ) and ( [ b ] ) , we obtain finally and the first equality in the above equation was obtained from equation ( [ vcorr ] ) by setting .note that this is slightly higher than the value that would have been obtained had the black hole s kinetic energy been in strict equipartition with that of the stars in the core of the plummer potential ; had that been the case , the numerical coefficient above would have been 1/6 instead of 2/9 ( see equation ( [ msvstar ] ) , where the mean squared 3-dimensional velocity of the stars in the core has been calculated ) . 
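the damped - cosine form of the stationary autocorrelation functions can be checked numerically with the sketch below ; it assumes the simulate_bh routine from the previous listing is available in the same file , and it compares against the pure damped cosine ( the small phase correction of order beta / omega is neglected , as the text itself suggests for weak damping ) .
....
import numpy as np

# Assumes simulate_bh from the previous sketch is defined in the same module.
omega, beta, C, dt = 1.0, 0.05, 1e-4, 1e-3
_, v = simulate_bh(omega, beta, C, dt, n_steps=1_000_000)
v = v[len(v) // 4:]                                  # discard the transient

lag_steps = np.arange(0, 20_000, 20)                 # lags up to tau = 20 (a few periods)
acf = np.array([np.dot(v[: len(v) - k], v[k:]) / (len(v) - k) for k in lag_steps])
tau = lag_steps * dt

omega1 = np.sqrt(omega**2 - beta**2 / 4.0)           # shifted frequency of the damped oscillator
model = (C / (2 * beta)) * np.exp(-beta * tau / 2.0) * np.cos(omega1 * tau)

print("measured <v^2>       :", acf[0])
print("predicted C/(2 beta) :", C / (2 * beta))
print("rms deviation from the damped-cosine form:", np.sqrt(np.mean((acf - model) ** 2)))
....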
as an aside , we point out that a similar calculation for a maxwellian distribution of stars , where is the root mean squared value of a single component of velocity would have yielded and this is the familiar condition for equipartition between the kinetic energies of the black hole and a star . returning to the plummer model, we have , by making use of equations ( [ c ] ) , ( [ xcorr ] ) and ( [ vcorr ] ) , the following expressions for the mean squared position and velocity components of the black hole in the steady state : ( compare equation ( [ rmsx ] ) to equation ( 101 ) in bahcall & wolf 1976 ; the latter is rederived in lin & tremaine 1980 just after their equation ( 16 ) ; these equations agree with our result above to within a factor of order unity ) .if we have stars in the cluster of total mass , then , and we can rewrite the above equations as the treatment of chandrasekhar ( 1943a ) and wang and uhlenbeck ( 1945 ) , we can derive a partial differential equation , called the fokker - planck equation , for the joint probability distribution of the position and velocity components of the black hole .let represent the probability distribution of the - components of the black hole s position and velocity at time ; i.e. , is the probability that at time , the black hole lies between and and has a velocity between and .let , the black hole is at and , given that at time , it was at and ; is taken to be an interval that is long compared with the time - scale over which the stochastic force varies but is short compared with the time - scale on which the black hole s position and velocity change .the evolution of the probability is expected to be governed by the following equation : note that in writing this equation , we are assuming that the black hole s motion is a markov process which depends only on its position and velocity an `` instant '' before , and is independent of its previous history . rewriting the expression for in the above equation as whereeach is either or , and where for brevity we have defined the first term on the right hand side is simply , which cancels with the same term on the left hand side . dividing both sides by and taking the limit , we obtain where the coefficients are the diffusion coefficients of this general fokker - planck equation in two variables , and are defined as the diffusion coefficients can be calculated very easily by using the equation of motion ( [ eqofmotion ] ) and the definition of the autocorrelation of the random force in equations ( [ stochdef ] ) .we have and by integrating the equation of motion for a short time which is long enough that many random encounters have taken place but not so long that the black hole s and have changed appreciably , based on the above and equations ( [ stochdef ] ) , we find that the only diffusion coefficients that do not vanish as are : thus , the fokker - planck equation reduces to the stationary distribution is found by setting the time derivative on the left hand side of equation ( [ fpred ] ) to zero .the solution of equation ( [ fpred ] ) is complicated , but we write it down in terms of the quantities derived in previous sections ( see chandrasekhar 1943a ) : where note that equation ( [ fpsol ] ) describes a general gaussian distribution in the two variables and . 
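written out with the notation used in the sketches above ( w for the distribution , beta for the friction coefficient , omega for the harmonic frequency , c for the noise strength ) , the reduced fokker - planck equation and its stationary solution take the standard kramers form ; the display below is offered as a consistency check in our own notation , not as a quotation of equations ( [ fpred ] ) or ( [ fpsol ] ) .
$$\frac{\partial W}{\partial t} = -\, v \,\frac{\partial W}{\partial x} + \frac{\partial}{\partial v}\!\left[\left(\omega^2 x + \beta v\right) W\right] + \frac{C}{2}\,\frac{\partial^2 W}{\partial v^2},$$
$$W_{\infty}(x,v) \;\propto\; \exp\!\left[-\frac{\beta}{C}\left(v^2 + \omega^2 x^2\right)\right], \qquad \langle v^2\rangle = \frac{C}{2\beta}, \qquad \langle x^2\rangle = \frac{C}{2\beta\,\omega^2}.$$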
of particular interest is the stationary distribution , which is obtained from equation ( [ fpsol ] ) by taking the limit : this is the product of two _ independent _ gaussian distributions in the variables and .this is a consequence of the linear nature of equations ( [ veceqofmotion ] ) and ( [ eqofmotion ] ) .it is easy to obtain the marginal stationary distributions of these variables by integrating out one or the other : where and are given by equations ( [ eq : x^2 ] ) and ( [ eq : v_x^2 ] ) , respectively .we have performed a number of computer simulations to test the validity of the model presented in 24 .the code we use solves the combined dynamics of the black hole and the stars using different equations of motion : where and are the mass and position of the black hole , respectively , and and are the mass and position of the -th star , respectively ; the coulomb force is softened by the parameter to prevent numerical divergences when a star passes very close to a black hole ; and is the analytical expression for the stellar potential in equation ( [ pot ] ) .thus , the black hole interacts with the stars through a softened coulomb force , and the stars interact with each other through an analytical gravitational field . combining an analytical potential with the traditional `` direct summation ''n - body technique ensures that accuracy is not sacrificed in calculating the motion of the black hole ( the object of greatest interest for us ) .the particles themselves are moved ( with varying step - sizes which are calculated at every time step ) using suitably modified versions of the fourth - order integrators of aarseth ( 1994 ) . the improved efficiency in the calculation is thus obtained at the price of having to keep the potential due to the stars ( although not necessarily their density profile ) fixed .however , this approximation does not appear to have a significant effect on our results .we have performed a number of simulations using other methods to test the results .these include the direct summation n - body code known as nbody1 ( aarseth 1994 ) for a relatively small number of particles , and the program known as scfbdy , described in detail in quinlan & hernquist ( 1997 ) .the latter program expresses the potential as an expansion in an appropriate set of basis functions instead of having a fixed potential in equation ( [ eq - st ] ) above ; the coefficients of this expansion are self - consistently updated at chosen time steps .although the precise motion of the black hole is not identical for different simulation methods since the force on the black hole in each case is calculated differently we have found that they all give similar results as far as the statistical properties of the black hole s dynamics are concerned . 
in particular , the mean squared values of the black hole s position and velocity in the stationary state of the system are approximately equal irrespective of the method used , and are similar to the values derived from the model presented in this paper .we believe that this is because the statistical properties of the black hole s motion are determined primarily by the properties of the restoring force and dynamical friction which are provided by the unbound stars , outside the region of the black hole s gravitational influence .these regions are relatively unaffected by the central black hole if its mass is much smaller than the total mass of the stellar system .that being so , we have used the method of the fixed potential for the simulations described below in order to be able to integrate efficiently large numbers of stars for long spans of time .our standard plummer model has parameters and ; in these units , the gravitational energy of the initial stellar system alone is , and the circular period at is .we take the mass of the black hole to be .the softening length was chosen to be .for these parameters , and . in figure 1 , we show the results of a simulation in which the black hole was started off with zero velocity from the origin in a system of stars .the first and second panels show the evolution of the black hole s x - component of position and velocity , respectively .the third panel shows the autocorrelation function of the x - component of the black hole s position as calculated from the simulation ( the calculation was stopped at time ) , and as computed from our model ; the two curves are in good agreement , at least for short time lags , and the discrepancies could be due to the uncertainty in the maximum effective impact parameter in the dynamical friction formula . note the persistence of the actual autocorrelation function of the black hole , which will be discussed further below .the autocorrelation function of is not shown ; according to equation ( [ corrrel ] ) it can be simply derived from the autocorrelation function of by taking a double time derivative .in figure 2 we test equations ( [ rmsxn ] ) and ( [ rmsvn ] ) which predict that the root mean squared position and velocity components of the black hole should decline with the total number of stars as .we show the results of 4 simulations with ,500 , 25,000 , 50,000 and 100,000 ; in each case , the simulation was stopped at time ; the agreement with the predictions of the model is evidently good . in figure 3 , we test equations ( [ fpsolstx ] ) and ( [ fpsolstv ] ) , which predict that the black hole s position and velocity components in the steady state should be gaussian distributed ; the empirically binned distributions were computed for the case with .the agreement with the model predictions is again very good . in the above simulations , the black hole s orbit remains close to the center and appears to be essentially stochastic , in that it does not seem to be confined to a special sheet or line in phase space . 
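the fixed - potential integration scheme described above can be sketched compactly as follows : each star feels the analytic plummer field plus the softened pull of the black hole , while the black hole feels the softened pull of all stars ; the sketch uses a fixed - step leapfrog instead of the fourth - order variable - step integrators of the original code , sampling of the initial stellar positions and velocities is omitted , and the softening length and masses are placeholders .
....
import numpy as np

G = 1.0

def plummer_field(pos, M=1.0, a=1.0):
    """Smooth-field acceleration -G M r_vec / (r^2 + a^2)^(3/2) of the Plummer model."""
    r2 = np.sum(pos**2, axis=-1, keepdims=True)
    return -G * M * pos / (r2 + a**2) ** 1.5

def accelerations(x_bh, x_st, m_bh, m_st, eps, M=1.0, a=1.0):
    """BH feels the softened pull of every star; each star feels the analytic
    Plummer field plus the softened pull of the BH (stars do not pull each other)."""
    d = x_st - x_bh                                   # (N, 3) separations
    r2 = np.sum(d**2, axis=1)[:, None] + eps**2
    a_bh = G * m_st * np.sum(d / r2**1.5, axis=0)     # sum of star pulls on the BH
    a_st = plummer_field(x_st, M, a) - G * m_bh * d / r2**1.5
    return a_bh, a_st

def leapfrog(x_bh, v_bh, x_st, v_st, m_bh, m_st, dt, n_steps, eps=0.01):
    """Fixed-step kick-drift-kick integration of the combined system.
    x_bh, v_bh: float arrays of shape (3,); x_st, v_st: arrays of shape (N, 3).
    All arrays are modified in place; returns the BH trajectory."""
    traj = np.empty((n_steps, 3))
    a_bh, a_st = accelerations(x_bh, x_st, m_bh, m_st, eps)
    for i in range(n_steps):
        v_bh += 0.5 * dt * a_bh
        v_st += 0.5 * dt * a_st
        x_bh += dt * v_bh
        x_st += dt * v_st
        a_bh, a_st = accelerations(x_bh, x_st, m_bh, m_st, eps)
        v_bh += 0.5 * dt * a_bh
        v_st += 0.5 * dt * a_st
        traj[i] = x_bh
    return traj
....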
at any point in time ,many stars are bound to the black hole in the sense that they have negative energy with respect to it .most of these stars are within the gravitational sphere of influence of the black hole and their total mass is comparable to that of the black hole .the autocorrelation functions of the black hole s position and velocity do not appear to damp entirely with ever increasing time lag , in contrast to equations ( [ xcorr ] ) and ( [ vcorr ] ) .although figure 1 shows only a small part of the autocorrelation function , it turns out that the oscillations persist for indefinitely long at roughly the residual ( and , apparently , somewhat varying ) amplitude shown at the right of the third panel in the figure .the frequency of the oscillations is close to the fundamental frequency calculated in the paper and evident in the third panel of figure 1 .we attribute these oscillations to the presence of very weakly damped coherent modes in the stellar system , of the kind reported by miller ( 1992 ) and miller & smith ( 1992 ) , and calculated by mathur ( 1990 ) and weinberg ( 1994 ) .while it is not the purpose of this paper to study such modes , we have , in an attempt to identify the source of the above oscillations , performed the following experiment .we set up the system of stars as above but _ without _ the black hole at its center , and kept track of one component of the total force at the origin of the system .the discrete fourier transform of the sequence of forces at successive time steps revealed a strong peak very close to the fundamental angular frequency of oscillations at the bottom of the gravitational potential well : .this could account for the undamped low - amplitude oscillations , at roughly the above frequency , seen in the autocorrelation functions of the black hole s position and velocity .consider a simplified situation in which the black hole is subject to an additional force , which is due to the conjectured undamped mode of this frequency mentioned above ; here , and are taken to be independent of time , for simplicity . if we add this force to the right hand side of equation ( [ eqofmotion ] ) , assume that it is independent of the random force , andperform an analysis similar to that in 3 , it is easy to see that we would obtain a new contribution to each of the autocorrelation functions in equations ( [ xcorr ] ) and ( [ vcorr ] ) in the form of an additive term which is proportional to , i.e. , a term that does not damp with increasing ( note that for most systems , is approximately equal to , the frequency with which the black hole s position and velocity autocorrelation functions oscillate ) .the addition of such a term would not affect the good agreement for small time lags between equations ( [ xcorr ] ) and ( [ vcorr ] ) and the results of numerical simulation ( figure 1 ) , since the amplitude of these residual oscillations is very much smaller than the amplitude of the autocorrelation functions for small .we may use our model to derive a lower limit on the mass of the black hole in the galactic center , sgr a*. the observed upper limit of 20 km s on the intrinsic proper motion of sgr a * ( reid et al . 1999 ) , when combined with equation ( [ rmsv ] ) , provides such a limit .measurement of proper motions of stars close to sgr a * indicate that a total mass of resides within a distance of parsec of sgr a * ( see , e.g. 
, eckart & genzel 1997 , ghez et al .1998 , ghez et al .not all of this mass need be attributed to the black hole ; some of it could be due to a cluster of stars surrounding it .if we assume that this cluster is distributed according to a plummer profile , then we can apply the results derived in previous sections to the entire system comprised of the stellar cluster and the black hole .let us set the total mass inside a distance pc from sgr a * to be . using as the mass of the black hole, we have the condition where the second term is the mass of the stellar cluster ( of total mass ) within . combining this with equation ( [ rmsv ] ) in the form we obtain where is the maximum mean squared speed of one component of the black hole s velocity. we can now set km s to get an approximate lower limit on , assuming that .this relation is plotted in figure 4(a ) for various values of , the scale length parameter of the plummer cluster .the mass of the black hole must be given by points lying _ above _ the curved solid line .evidently , this relation implies a lower limit on .we can derive stricter limits on by noting , as did maoz ( 1998 ) , that the allowed values of are restricted by the condition that the upper limit of the lifetime of a cluster of stars is its evaporation time , when stars would have escaped from the cluster because of scattering .the evaporation timescale of the cluster should be long enough to make it probable that the cluster be observed at the present epoch .a reasonable assumption is that this timescale , , is bounded between the values 1 gyr and 10 gyr , the latter being the approximate age of the galaxy .the evaporation timescale is , where is the median relaxation time , given by ( see , e.g. , maoz 1998 , bt ) where is the number of stars in the cluster ( with given by equation [ mr ] ) , and is the system s median radius ; for the plummer profile .setting = 1 gyr , we obtain the dashed line in figure 4(a ) ; 1 gyr denotes the region to the _ right _ of this line .hence , the allowed values of and lie to the right of this dashed line and above the solid line .the minimum value of under these assumptions is given by point a at which pc , and the total mass of the star cluster is .if we perform a similar calculation for 10 gyr , we obtain the dotted line in figure 4(a ) ; the minimum value of is then given by point b at which pc and .reid et al . ( 1999 ) expect further observations to reduce the limit on the peculiar motion of sgr a * from 20 km s to 2 km s over the next few years .figure 4(b ) repeats the above calculations for this limit .the condition 1 gyr then gives a new lower limit for ( point a ) , at which pc , and .point b , the minimum of if 10 gyr , is now characterized by pc , and . in the latter case ,the attainable lower limit on will be interestingly close to its upper limit of .in this paper , we have developed a stochastic model to describe the dynamics of a black hole near the center of a dense stellar system .the total force on the black hole is decomposed into a slowly varying part originating from the response of the whole stellar system , and a random , rapidly fluctuating part originating from discrete encounters with individual stars .we have shown that the time scale over which the latter force fluctuates is very short compared with the time scale over which the former changes ; hence the justification for the separation of the total force into these two independent components . 
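the logic of this section can be packaged into a short script , shown below only as an illustration of the argument : the velocity scaling uses the 2/9 coefficient quoted earlier together with the plummer core dispersion , the evaporation time is taken as roughly 136 median relaxation times with the usual textbook relaxation formula and a plummer half - mass radius of about 1.3 times the scale length , and the observational inputs are placeholders of the right order of magnitude rather than the values used in the paper ; all of these are stated assumptions , so the printed number should not be read as the paper's limit .
....
import numpy as np

G = 4.30e-3                      # pc (km/s)^2 / M_sun

def bh_rms_velocity_1d(M_bh, M_cl, r0, m_star=1.0):
    """sqrt(<v_x^2>) of the hole from <v_x^2> = (2/9) G m_* M_cl / (M_bh r0),
    a near-equipartition scaling reconstructed from the coefficients quoted
    in the text (treated here as an assumption)."""
    return np.sqrt((2.0 / 9.0) * G * m_star * M_cl / (M_bh * r0))

def evaporation_time_gyr(M_cl, r0, m_star=1.0):
    """~136 median relaxation times, with
    t_rh = 0.138 N sqrt(r_h^3 / (G M)) / ln(0.4 N)  and  r_h ~ 1.3 r0
    for a Plummer cluster (standard formulas, used as assumptions)."""
    N = M_cl / m_star
    r_h = 1.3 * r0
    t_rh_myr = 0.138 * N * np.sqrt(r_h**3 / (G * M_cl)) / np.log(0.4 * N) * 0.978
    return 136.0 * t_rh_myr / 1e3

# Placeholder observational inputs (illustrative orders of magnitude only).
M_tot, r_obs = 2.6e6, 0.01       # enclosed mass (M_sun) within r_obs (pc)
v_limit = 20.0                   # proper-motion limit on the hole (km/s)

lower_limits = []
for M_bh in np.logspace(3.0, np.log10(0.9 * M_tot), 400):
    for r0 in np.logspace(-3.0, 0.5, 400):
        # Total cluster mass whose Plummer profile leaves M_tot inside r_obs.
        M_cl = (M_tot - M_bh) * (r_obs**2 + r0**2) ** 1.5 / r_obs**3
        if (bh_rms_velocity_1d(M_bh, M_cl, r0) <= v_limit
                and evaporation_time_gyr(M_cl, r0) >= 1.0):
            lower_limits.append(M_bh)
            break
if lower_limits:
    print(f"lower limit on M_bh ~ {min(lower_limits):.1e} M_sun (illustrative inputs)")
....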
the slowly varying force itselfis approximated as the sum of two contributions : the force on the black hole due to the potential of the whole stellar system and a force of dynamical friction which causes the black hole to decelerate as it moves through the stellar system .the stochastic force at the position of the black hole is assumed to have a zero average and to be essentially uncorrelated with itself over time scales that are short compared with the characteristic period over which the velocity of the black hole changes considerably , but long enough for many independent fluctuations of the stochastic force to have occurred . if the stellar system is approximated by a plummer model , then the problem essentially reduces to describing the brownian motion of a particle in a harmonic potential .we have shown that after long times , the black hole s velocity has a zero average and its average location coincides with the center of the stellar potential .however , the root mean squared position and velocity tend towards non - zero values which are independent of time and of the black hole s initial position and velocity .the steady - state time autocorrelation functions of the position and velocity were shown to be approximately given by damped cosine functions . for a maxwellian distribution of stars , strict equipartition of kinetic energy between the black hole and the stars is achieved .the model was completed by solving for the one unknown parameter the mean squared magnitude of the stochastic force on the black hole by making use of the close relationship between processes of fluctuation and dissipation , according to which the `` heating '' by the fluctuating force equals the `` cooling '' caused by the dissipative force of dynamical friction ( the third force on the black hole the force due to the stellar potential is conservative ) .a fokker - planck equation for the diffusion of the probability distribution of the black hole s position and velocity was developed and solved .the solution implies that in the steady state these variables are distributed independently as gaussians .the predictions of the model were tested by comparing with the results of various n - body simulations ; the agreement is good , thus justifying the elements of the model . in the simulations , we find possible signs of the existence ofweakly damped coherent modes associated with the stellar system ( weinberg 1994 ) .the total force at the origin of the system has a strong fourier component at the frequency of fundamental oscillations characteristic of the approximately harmonic form of the stellar potential near the center .if we consider the black hole to be a test particle affected by such a force , then the autocorrelation functions of the black hole s position and velocity would be found not to damp with the time lag , but to persist ; the results of simulation do indeed show the presence of such oscillations at very low amplitude for arbitrarily long time lags .finally , we applied the results of the model to sgr a*. 
observational limits on the peculiar motion of sgr a * were used to obtain a lower limit on its mass , under the assumption that it is localized near the center of a system of equal mass stars distributed according to the plummer model .more stringent limits were then deduced by requiring that the evaporation timescale of the cluster of stars be larger than 1 gyr .the plummer model is a reasonable choice for the black hole problem considered here since it results in a separable system of linear stochastic differential equations ; it is not clear that arbitrary potential - density pairs would give equations that are similarly tractable . in particular, what would happen in the case of black holes at the centers of galaxies if those are described by singular power - law density profiles ?a simple - minded generalization of our method would not necessarily work ( for example , because of possible divergences in the distribution function ) .however , we believe that our model captures the qualitative features of more complicated situations for two related reasons .first , we note that the plummer model is not an equilibrium solution for a cluster of stars with a black hole present ( see , e.g. , huntley & saslaw 1975 and saslaw 1985 ) ; even in non - singular models such as the plummer model , the black hole ultimately induces a density cusp .we have ignored this complication in our model and found that the model nevertheless provides a good description of detailed numerical simulations .second , the black hole tends to carry its cusp of bound stars with it as it moves around ; thus , it is as if a black hole of a somewhat larger effective mass were moving in a background consisting of unbound stars . withthe cusp effectively removed , the density profile of this background would be flat near the center . since the restoring force and dynamical friction are provided mainly by the unbound stars , we believe that the essential components of our model are still valid .( for similar reasons see 5 for details the fact that our simulations use a fixed stellar potential is not expected to alter our conclusions . )we have carried out numerical simulations for the case of a particular density profile with a singularity , namely the hernquist ( 1990 ) model .we find that the qualitative results are similar to those described here .in particular , the early - time autocorrelation functions of the black hole s force and velocity continue to be described well by damped cosine functions of fixed frequency .the detailed characterization of the black hole behavior in terms of the model parameters is different , and requires a more careful calculation .we thank g. quinlan for providing the simulation code , s. tremaine for enlightening discussions and a careful reading of the manuscript , and m.j .reid and g. rybicki for useful discussions .we also thank the editor , ethan vishniac , for his helpful advice . this work was supported in part by nasa grants nag 5 - 7039 , 5 - 7768 , and by nsf grants ast-9900877 , ast-0071019 ( for al ) .
we develop a simple physical model to describe the dynamics of a massive point - like object , such as a black hole , near the center of a dense stellar system . it is shown that the total force on this body can be separated into two independent parts , one of which is the slowly varying influence of the aggregate stellar system , and the other being the rapidly fluctuating stochastic force due to discrete encounters with individual stars . for the particular example of a stellar system distributed according to a plummer model , it is shown that the motion of the black hole is then similar to that of a brownian particle in a harmonic potential , and we analyze its dynamics using an approach akin to langevin s solution of the brownian motion problem . the equations are solved to obtain the average values , time - autocorrelation functions , and probability distributions of the black hole s position and velocity . by comparing these results to n - body simulations , we demonstrate that this model provides a very good statistical description of the actual black hole dynamics . as an application of our model , we use our results to derive a lower limit on the mass of the black hole sgr a * in the galactic center .
mathematics can be done on two different levels .one level is rather informal , based on informal explanations , intuition , diagrams , etc . , and typical for everyday mathematical practice .another level is formal mathematics with proofs rigorously constructed by rules of inference from axioms .a large portion of mathematical logic and interactive theorem proving is aimed at linking these two levels .however , there is still a big gap : mathematicians still do nt feel comfortable doing mathematics formally and proof assistants still do nt provide enough support for dealing with large mathematical theories , automating technical problems , translating from one formalism to another , etc .we consider the following issue : there are several very mature and popular interactive theorem provers ( including isabelle , coq , mizar , hol - light , see for an overview ) , but they still can not easily share the same mathematical knowledge .this is a significant problem , because there are increasing efforts in building repositories of formalized mathematics , but still developed within specific proof assistants .building a mechanism for translation between different proof assistants is non - trivial because of many deep specifics of each proof assistant ( there are some recent promissing approaches for this task ) . instead of developing a translation mechanism , we propose a proof representation and a corresponding xml - based format .the proposed proof representation is light - weight and it does not aim at covering full power of everyday mathematical proofs or full power of first order logic .still , it can cover a significant portion of many interesting mathematical theories .the underlying logic of our representation is coherent logic , a fragment of first - order logic .proofs in this format can be generated in an easy way by dedicated , coherent logic provers , but in principle , also by standard theorem provers .the proofs can be translated to a range of proof assistant formats , enabling sharing the same developments .we call our proof representation `` coherent logic vernacular '' ._ vernacular _ is the everyday , ordinary language ( in contrast to the official , literary language ) of the people of some country or region .a similar term , _ mathematical vernacular _ was used in 1980 s by de bruijn within his formalism proposed for trying to _ put a substantial part of the mathematical vernacular into the formal system _ .several authors later modified or extended de bruijn s framework .wiedijk follows de bruijn s motivation , but he also notices : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ it turns out that in a significant number of systems ( ` proof assistants ' ) one encounters languages that look almost the same 
.apparently there is a canonical style of presenting mathematics that people discover independently : something like a _ natural _ mathematical vernacular .because this language apparently is something that people arrive at independently , we might call it _ the _ mathematical vernacular ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we find that this language is actually closely related to a proof language of coherent logic , which is a basis of our proof representation presented in this paper .our proof representation is developed also with _readable proofs _ in mind .readable proofs ( e.g. , textbook - like proofs ) , are very important in mathematical practice .for mathematicians , the main goal is often , not only a trusted , but also a clear and intuitive proof .we believe that coherent logic is very well suited for automated theorem proving with a simple production of readable proofs .in this section , we give a brief overview of interactive theorem proving and proof assistants , of coherent logic , which is the logical basis for our proof representation , and of xml , which is the technical basis for our proof format .interactive theorem proving systems ( or _ proof assistants _ ) support the construction of formal proofs by a human , and verify each proof step with respect to the given underlying logic .the proofs can be written either in a _ declarative _ or in a _ procedural _ proof style . in the procedural proof style , the proof is described by a sequence of commands which modify the incomplete proof tree . in the declarative proof style the formal document includes the intermediate statements .both styles are avaible in hol - light , isabelle and coq proof assistants whereas only the declarative style is available in mizar , see for a recent discussion .the procedural proof style is more popular in the coq community .formal proofs are typically much longer than `` traditional proofs . ''progress in the field can be measured by proof scripts becoming shorter and yet contain enough information for the system to construct and verify the full ( formal ) proof .`` traditional proofs '' can often hardly be called proofs , because of the many missing parts , informal arguments , etc . 
using interactive theorem provinguncovered many flaws in many published mathematical proofs ( including some seminal ones ) , published in books and journals .coherent logic ( cl ) was initially defined by skolem and in recent years it gained new attention .it consists of formulae of the following form : which are implicitly universally quantified , and where , , denotes a sequence of variables ( ) , ( for ) denotes an atomic formula ( involving zero or more of the variables from ) , denotes a sequence of variables ( ) , and ( for ) denotes a conjunction of atomic formulae ( involving zero or more of the variables from and ) . for simplicity , we assume that there are no function symbols with arity greater than zero ( so , we only consider symbols of constants as ground terms ). the definition of cl does not involve negation . for a single atom , can be represented in the form , where stands for the empty disjunction , but more general negation must be expressed carefully in coherent logic . in order to reason with negation in general , new predicate symbols are used to abbreviate subformulas . furthermore , for every predicate symbol ( that appears in negated form ) , a new symbol is introduced that stands for , and the following axioms are postulated ( cf . ) : , .cl allows existential quantifications of the conclusion of a formula , so cl can be considered to be an extension of resolution logic .in contrast to the resolution - based proving , the conjecture being proved is kept unchanged and directly proved ( refutation , skolemization and transformation to clausal form are not used ) .hence , proofs in cl are natural and intuitive and reasoning is constructive .readable proofs ( in the style of forward reasoning and a variant of natural deduction ) can easily be obtained .a number of theories and theorems can be formulated directly and simply in cl . in cl ,constructive provability is the same as classical provability .it can be proved that any first - order formula can be translated into a set of cl formulas ( in a different signature ) preserving satisfiability ( however , this translation does not always preserve constructive provability ) .coherent logic is semi - decidable and there are several implemented semi - decision procedures for it .argoclp is a generic theorem prover for coherent logic , based on a simple proof procedure with forward chaining and with iterative deepening .argoclp can read problems given in tptp form and can export proofs in the xml format that we describe in this paper .these proofs are then translated into target languages , for instance , the isar language or natural language thanks to appropriate xslt style - sheets ._ extensible markup language _( xml ) is a simple , flexible text format , inspired by sgml ( iso 8879 ) , for data structuring using tags and for interchanging information between different computing systems .xml is primarily a `` metalanguage''a language for describing other customized markup languages .so , it is not a fixed format like the markup language html in xml the tags indicate the semantic structure of the document , rather than only its layout .xml is a project of the world wide web consortium ( w3c ) and is a public format .almost all browsers that are currently in use support xml natively .there are several schema languages for formaly specifying the structure and content of xml documents of one class .some of the main schema languages are dtd ( _ data type definition _ ) , xml schema , relax , etc . 
specifications in the form of schema languages enable automatic verification ( `` validation '' ) of whether a specific document meets the given syntactical restrictions . _ extensible style - sheet language transformation _ ( xslt ) is a document processing language that is used to transform the input xml documents to output files . an xslt style - sheet declares a set of rules ( templates ) for an xslt processor to use when interpreting the contents of an input xml document . these rules tell the xslt processor how that data should be presented : as an xml document , as an html document , as plain text , or in some other form . the proposed proof representation is very usable and expressive , yet very simple . it uses only a few inference rules , a variant of the rules given in . given a set of coherent axioms and a coherent conjecture , the goal is to prove , using the rules given below , the following ( where $\vec{a}$ denote a vector of new symbols of constants ) :
$$AX ,\ A_1(\vec{a}) , \ldots , A_n(\vec{a}) \;\vdash\; \exists \vec{y} \, \big( B_1(\vec{a},\vec{y}) \vee \ldots \vee B_m(\vec{a},\vec{y}) \big)$$
the rules are applied in a forward manner , so they can be read from bottom to top . in the rules below we assume :
* $ax$ is a formula of the form ( [ eq : clf ] ) ( page ) ;
* $\vec{a}$ , $\vec{b}$ , $\vec{c}$ denote vectors of constants ( possibly of length zero ) ;
* in the rule mp , $\vec{b}$ are fresh constants ;
* $\vec{x}$ and $\vec{y}$ denote vectors of variables ( possibly of length zero ) ;
* $A_i$ ( $B_j$ ) have no free variables other than from $\vec{x}$ ( and $\vec{y}$ ) ;
* $A_1(\vec{a}) , \ldots , A_n(\vec{a})$ are ground atomic formulae ;
* $B_1(\vec{a},\vec{b}) , \ldots , B_m(\vec{a},\vec{b})$ and $B_1(\vec{c}) , \ldots , B_n(\vec{c})$ are ground conjunctions of atomic formulae ;
* $\underline{B}$ denotes the list of conjuncts in $B$ .
$$\frac{\Gamma ,\ ax ,\ \underline{A_1(\vec{a}) \wedge \ldots \wedge A_n(\vec{a})} ,\ B_1(\vec{a},\vec{b}) \vee \ldots \vee B_m(\vec{a},\vec{b}) \vdash P}{\Gamma ,\ ax ,\ \underline{A_1(\vec{a}) \wedge \ldots \wedge A_n(\vec{a})} \vdash P}\ (\mathrm{mp})$$
$$\frac{\Gamma ,\ \underline{B_1(\vec{c})} \vdash P \qquad \ldots \qquad \Gamma ,\ \underline{B_n(\vec{c})} \vdash P}{\Gamma ,\ B_1(\vec{c}) \vee \ldots \vee B_n(\vec{c}) \vdash P}\ (\mathrm{cs})$$
$$\frac{}{\Gamma ,\ \underline{B_i(\vec{a},\vec{b})} \vdash \exists \vec{y} \, \big( B_1(\vec{a},\vec{y}) \vee \ldots \vee B_m(\vec{a},\vec{y}) \big)}\ (\mathrm{as})$$
$$\frac{}{\Gamma ,\ \bot \vdash P}\ (\mathrm{efq})$$
none of these rules change the goal , which helps generating readable proofs as the goal can be kept implicit . note that the mp rule actually combines universal instantiation , conjunction introduction , modus ponens , and elimination of ( zero or more ) existential quantifiers . this seems a reasonable granularity for an inference step , albeit probably the maximum for keeping proofs readable . compared to which defines the notion of obvious inference rule by putting constraints on an automated prover , our position is : the obvious inferences are the ones defined by the inference rules above . compared to the rules given in , we choose to separate the cs rule ( disjunction elimination ) and the efq rule from the single combined rule in , in order to improve readability . case distinction ( split ) is an important way of structuring proofs that deserves to be made explicit . also , efq could be seen as a cs with zero cases , but this would be less readable . any coherent logic proof can be represented in the following simple way ( mp is used zero or more times , cs involves at least two other objects ) : the proof representation described in section [ sec : proofrepresentation ] is used as a basis for our xml - based proof format .
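to make the procedural reading of these rules concrete , here is a toy forward - chaining search in the spirit described above ( repeatedly apply mp with fresh witnesses , split on disjunctions , close branches by as or efq ) ; it is a deliberately naive sketch , not the argoclp implementation , and the data encoding ( tuples for atoms , the atom ( "false" , ) for bottom , python lists for rules ) is entirely our own .
....
from itertools import count

FRESH = count()

def is_var(t):
    """Variables are capitalized strings, e.g. 'X'; everything else is a constant."""
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, fact, subst):
    """Try to extend subst so that pattern equals fact; return None on failure."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    s = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if s.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return s

def body_matches(body, facts, subst=None):
    """Yield every substitution that makes each body atom a known fact."""
    subst = subst or {}
    if not body:
        yield subst
        return
    for fact in facts:
        s = match(body[0], fact, subst)
        if s is not None:
            yield from body_matches(body[1:], facts, s)

def instantiate(atom, subst):
    return (atom[0],) + tuple(subst.get(t, t) if is_var(t) else t for t in atom[1:])

def prove(facts, rules, goal, depth=12):
    """goal = list of ground disjuncts (each a list of atoms); returns True/False."""
    if ("false",) in facts:                          # efq: a contradiction closes the branch
        return True
    if any(all(a in facts for a in d) for d in goal):  # as: some goal disjunct already holds
        return True
    if depth == 0:
        return False
    for body, heads in rules:
        for s in body_matches(body, facts):
            branches = []
            for exist_vars, atoms in heads:
                s2 = dict(s)
                for v in exist_vars:                 # mp introduces fresh witnesses
                    s2[v] = f"w{next(FRESH)}"
                branches.append(frozenset(instantiate(a, s2) for a in atoms))
            if any(b <= facts for b in branches):
                continue                             # this instance adds nothing new
            # cs: the proof must succeed on every disjunct of the conclusion
            if all(prove(facts | b, rules, goal, depth - 1) for b in branches):
                return True
    return False

# Example: from p(a) and the axioms  p(X) -> q(X) \/ r(X)  and  r(X) -> false,
# conclude q(a).
facts = {("p", "a")}
rules = [
    ([("p", "X")], [([], [("q", "X")]), ([], [("r", "X")])]),
    ([("r", "X")], [([], [("false",)])]),
]
print(prove(facts, rules, [[("q", "a")]]))           # -> True
....
with this procedural picture in mind , we now return to the xml - based proof format itself .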
it is developed as an interchange format for automated and interactive theorem provers .proofs ( for coq and isabelle / isar ) that are produced from our xml documents are fairly readable .the xml documents themselves can be read by a human , but much better alternative is using translation to human readable proofs in natural language ( formatted in latex , for instance ) .the proof representation is described by a dtd ` vernacular.dtd ` . as an illustration, we show some fragments : .... ... < !-- * * * * * * * * theory * * * * * * * * * * * * * * -- > < !element theory ( theory_name , signature , axiom * ) > < !element theory_name ( # pcdata ) > < !element signature ( type * , relation_symbol * , constant * ) > <! element relation_symbol ( type * ) > < !attlist relation_symbol name cdata # required > < !element type ( # pcdata ) > < !element axiom ( cl_formula ) > < !attlist axiom name cdata # required > ... .... the above fragment describes the notion of theory .( definitions , formalized as pairs of coherent formulae , are used as axioms . ) a file describing a theory could be shared among several files with theorems and proofs . .... ... < ! -- * * * * * * * * theorem * * * * * * * * * * * * * * -- > < ! element theorem ( theorem_name , cl_formula , proof+ ) > < !element theorem_name ( # pcdata ) > <! element conjecture ( name , cl_formula ) > < !-- * * * * * * * * proof * * * * * * * * * * * * * * -- > < !element proof ( proof_step * , proof_closing , proof_name ? ) > < !element proof_name empty > < !attlist proof_name name cdata # required > < ! -- * * * * * * * * proof steps * * * * * * * * * * * * * * -- > < ! element proof_step ( indentation , modus_ponens ) > < ! element proof_closing ( indentation , ( case_split|efq|from ) , ( goal_reached_contradiction|goal_reached_thesis ) ) > ... .... the above fragment describes the notion of a theorem and a proof .as said in section [ sec : proofrepresentation ] , a proof consists of a sequence of applications of the rule modus ponens and closes with one of the remaining proof rules ( , , or ) . within the last three, there is the additional information on whether the proof closes by ( by detecting a contradiction ) or by detecting one of the disjuncts from the goal .this information is generated by the prover and can be used for better readability of the proof but also for some potential proof transformations . within each proof stepthere is also the information on indentation .this information , useful for better layout , tells the level of subproofs and as such can be , in principle , computed from the xml representation .still , for convenience and simplicity of the xslt style - sheets , it is stored within the xml representation .we implemented xsl transformations from xml format to isabelle / isar ( ` vernacularisar.xls ` ) , coq ( ` vernacularcoqtactics.xls ` ) , and to a natural language ( english ) in latex form and in html form ( ` vernaculartex.xls ` and ` vernacularhtml.xls ` ) .the translation from xml to the isar language is straightforward and each of our proof steps is trivially translated into isar constructs .naturally , we use native negation of isar ( and coq ) instead of defined negation in coherent logic . 
the translation to coq has been written in the same spirit as the isar output despite the fact proofs using tactics are more popular in coq than declarative proofs .we refer to the assumptions by their statement instead of their name ( for example : ` by cases on ( a = b \/ a < > b ) ` ) .moreover , when we can , we avoid to refer to the assumptions at all .we did not use the declarative proof mode of coq because of efficiency issues .we use our own tactics to implement the inference rules of cl to improve readability .internally , we use an ltac tactic to get the name of an assumption .the forward reasoning proof steps consist of applications of the ` assert ` tactic of coq .equality is translated into leibniz equality .the translation to latex and html includes an additional xslt style - sheet that optionally defines specific layout for specific relation symbols ( so , for instance , can be the layout for ` cong(a , b , c , d ) ` ) .the developed xslt style - sheets are rather simple and short each is only around 500 lines long .this shows that transformations for other target languages ( other theorem provers , like mizar and hol light , latex with other natural languages , mathml , omdoc or tptp ) can easily be constructed , thus enabling wide access to a single source of mathematical contents . our automated theorem prover for coherent logic argoclp exports proofs in the form of the xml files that conforms to this dtd .argoclp reads an input theory and the conjecture given in the tptp form ( assuming the coherent form of all formulae and that there are no function symbols or arity greater than 0 ) .argoclp has built - in support for equality ( during the search process , it uses an efficient union - find structure ) and the use of equality axioms is implicit in generated proofs .the generated xml documents are simple and consist of three parts : ` frontpage ` ( providing , for instance , the author of the theorem , the prover used for generating the proof , the date ) , ` theory ` ( providing the signature and the axioms ) and , organized in chapters , a list of conjectures or theorems with their proofs .this way , some contents ( ` frontpage ` and ` theory ` ) can be shared by a number of xml documents . on the other hand ,this also enables simple construction of bigger collections of theorems .the following is one example of an xml document generated by argoclp : .... < ?xml version="1.0 " encoding="utf-8 "doctype main system " vernacular.dtd " > < ?xml - stylesheet href="vernacularisar.xsl " type="text / xsl " ?> < main > < xi : include href="frontpage.xml " parse="xml " xmlns : xi="http://www.w3.org/2003/xinclude"/ > < xi : include href="theory_thm_4_19.xml " parse="xml " xmlns : xi="http://www.w3.org/2003/xinclude"/ > < chapter name="th_4_19 " > < xi : include href="proof_thm_4_19.xml " parse="xml " xmlns : xi="http://www.w3.org/2003/xinclude"/ > < /chapter > </main > .... the overall architecture of the framework is shown in figure [ fig : architecture ] .our xml suite for coherent logic vernacular is used for a number of proofs generated by our prover argoclp . 
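The top-level collection document shown above is easy to write by hand, but when many theorems share the same frontpage and theory it can be convenient to generate it. The sketch below (our own illustration; the helper name and file names are hypothetical) assembles such a document with lxml. Note that it uses the standard XInclude namespace `http://www.w3.org/2001/XInclude`, which standard processors resolve; the example above shows a different namespace URI.

....
from lxml import etree

XI_NS = "http://www.w3.org/2001/XInclude"

def make_collection(frontpage, theory, chapters):
    """Build a <main> document that includes shared parts and per-theorem proofs.

    chapters: list of (chapter_name, proof_file) pairs.
    """
    main = etree.Element("main", nsmap={"xi": XI_NS})
    for href in (frontpage, theory):
        inc = etree.SubElement(main, "{%s}include" % XI_NS)
        inc.set("href", href)
        inc.set("parse", "xml")
    for name, proof_file in chapters:
        chapter = etree.SubElement(main, "chapter", name=name)
        inc = etree.SubElement(chapter, "{%s}include" % XI_NS)
        inc.set("href", proof_file)
        inc.set("parse", "xml")
    return main

main = make_collection("frontpage.xml", "theory_thm_4_19.xml",
                       [("th_4_19", "proof_thm_4_19.xml")])
xml_bytes = etree.tostring(main, xml_declaration=True, encoding="UTF-8",
                           pretty_print=True,
                           doctype='<!DOCTYPE main SYSTEM "vernacular.dtd">')
with open("thm_4_19.xml", "wb") as out:
    out.write(xml_bytes)
....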
in this sectionwe discuss proofs of theorems from the book _ metamathematische methoden in der geometrie _, by wolfram schwabhuser , wanda szmielew , and alfred tarski , one of the twenty - century mathematical classics .the theory is described in terms of first - order logic , it uses only one sort of primitive objects points , has only two primitive predicates ( or arity 4 and of arity 3 , intuitively for congruence and betweenness ) and only eleven axioms .the majority of theorems from this book are in coherent logic or can be trivially transformed to belong to coherent logic .after needed transformations , the number of theorems in our development ( 238 ) is somewhat larger than in the book .here we list a proof of one theorem ( 4.19 ) from tarski s book .the theorem was proved by argoclp ( using the list of relevant axioms and theorems produced by a resolution theorem prover ) , the proof was exported in the xml format , and then transformed to a proof in natural language by appropriate xsl transformation ( is an infix notation for and it denotes that the pairs of points and are congruent , denotes that the point is between the points and , denotes that the points , and are collinear ) . assuming that and and it holds that . _ proof : _qed below is the same proof in isabelle / isar form : + below is the same proof in coq form : from the set of individual theorems ( 238 ) , the prover argoclp completely automatically proved 85 ( 36% ) of these theorems and generated proofs in the xml format .we created a single xml document that contains all proved theorems and other theorems tagged as conjectures .the whole document matches the original book by schwabhuser , szmielew , and tarski and can be explored in the latex ( or pdf ) form , html or as isabelle or coq development .in , wiedijk proposes a mathematical vernacular that is in a sense the common denominator of the proof languages of hyperproof , mizar and isabelle / isar .we agree with his conclusion in the last sentence of the quotation in the introduction , but we think that the three proof languages were _ not _ discovered independently . natural deduction has been introduced by the polish logicians ukasiewicz and jakowski in the late 1920 s , in reaction on the formalisms of frege , russell and hilbert . the term _ natural deduction _ seems to have been used first by gentzen , in german : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ich wollte zunchst einmal einen formalismus aufstellen , der dem wirklichen schlieen mglichst nahe kommt .so ergab sich ein kalkl des natrlichen schlieens " ._ ( first of all i wanted to set up a formalism that comes as close as possible to actual reasoning . 
thus arose a calculus of natural deduction".)gentzen , untersuchungen ber das logische schlieen ( mathematische zeitschrift 39 , pp.176210 , 1935 ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the qualifier _ natural _ was of course particularly well - chosen to express that the earlier formalisms were unnatural ! as this was indeed the case , natural deduction quickly became the predominant logical system , helped by the seminal work by gentzen on cut - elimination .( ironically , this technical work in proof theory is best carried out with proofs represented in _ sequent calculus _ , using natural deduction on the meta - level . )it should thus not come as a surprise that the vernacular we propose also is based on natural deduction .one difference with wiedijk s vernacular is that ours is based on coherent logic instead of full first - order logic .this choice is motivated in section [ subsec : cl ] ( easier semi - decision procedure and more readable proofs ) .another difference is that wiedijk allows proofs to be incomplete , whereas we stress complete proof objects .this difference is strongly related to the fact that wiedijk s vernacular is in the first place an input formalism for proof construction , whereas our vernacular is an output formalism for proof presentation and export of proofs to different proof assistants . as far as we know, the mathematical vernacular proposed by wiedijk s has not been implemented on its own , although hyperproof , mizar and isabelle / isar are developed using the same ideas .a number of authors independently point to this or similar fragments of first - order logic as suitable for expressing significant portions of standard mathematics ( or specifically geometry ) , for instance , avigad et.al . and givant and tarski et.al . in the context of a new axiomatic foundations of geometry . a recent paper by ganesalingam and gowers is also related to our work .their goal is comparable to ours : full automation combined with human - style output .they propose inference rules which are very similar to our coherent logic based proof system .for example , their rule corresponds to the rule , corresponds to , corresponds to ( with length of greater than 0 ) , corresponds to the rule . yet , some rules they proposed are not part of our set of rules .the logic they use is full first - order , with a plan to include second - order features ( this would also be perfectly possible for coherent logic , which is the first - order fragment of _ geometric _ logic , which is in turn a fragment of higher - order logic , see ) . 
upon closer inspection, the paper by ganesalingam and gowers seems to stay within the coherent fragment , and proofs by contraposition and contradiction are delegated to future work .we find some support for our approach in the observation by ganesalingam and gowers that it will be hard to avoid that such reasoning patterns are applied in `` inappropriate contexts '' . on the other hand ,the primary domain of application of their approach is metric space theory so far , with the ambition to attack problems in other domains as well .it would be very interesting to test the two approaches on the same problem sets .one difference is that insists on proofs being faithful to the thought processes , whereas we would be happy if the prover finds a short and elegant proof even after a not - so - elegant proof search .another difference is that we are interested in portability of proofs to other systems . to our knowledge ,the prover described in is not publicly available .compared to omdoc , our proof format is much more specific ( as we specify the inference rules we use ) and has less features . it can be seen as a specific set of ` methods ` elements of the ` derive ` element of omdoc .an alternative to using coherent logic provers would be using one of the more powerful automated theorem provers and exploiting existing and ongoing work on proof reconstruction and refactoring ( see , for example , ) .this is certainly a viable option .however , reconstructing a proof from the log of a highly optimized prover is difficult .one problematic step is deskolemization , that is , proof reconstruction from a proof of the skolemized version of the problem .( the most efficient provers are based on resolution logic , and clausification including skolemizing is the first step in the solution procedure . )what can be said about this approach in its current stage is that more theorems can be proved , but their proofs can still be prohibitively complicated ( or use additional axioms ) .it has been , however , proved beneficial to use powerful automated theorem provers as preprocessors , to provide hints for argoclp .the literature contains many results about exchanging proofs between proof assistant using deep or shallow embeddings .boessplug , cerbonneaux and hermant propose to use the -calculus as a universal proof language which can express proof without losing their computational properties . to our knowledge , these works do not focus on the readability of proofs .over the last years a lot of effort has been invested in combining the power of automated and interactive theorem proving : interactive theorem provers are now equipped with trusted support for sat solving , smt solving , resolution method , etc .these combinations open new frontiers for applications of theorem proving in software and hardware verification , but also in formalization of mathematics and for helping mathematicians in everyday practice . exporting proofs in formats such as the presented one opens new possibilities for exporting readable mathematical knowledge from automated theorem provers to interactive theorem provers . 
in the presented approach ,the task of generating object - level proofs for proof assistants or proofs expressed in natural language is removed from theorem provers ( where it would be hard - coded ) and , thanks to the interchange xml format , delegated to simple xslt style - sheets , which are very flexible and additional xslt style - sheets ( for additional target formats ) can be developed without changing the prover . also , different automated theorem provers can benefit from this suite , as they do nt have to deal with specifics of proof assistants .the presented proof representation is not intended to serve as `` the mathematical vernacular '' .however , it can cover a significant portion of many interesting mathematical theories while it is very simple .often , communication between an interactive theorem prover and an external automated theorem prover is supported by a verified , trusted interface which enables direct calling to the prover . on the other hand ,our work yields a common format which can be generated by different automated theorem provers and from which proofs for different interactive theorem provers can be generated .the advantage of our approach relies on the fact that the proof which is exported is not just a certificate , it is meant to be human readable .the current version of the presented xml suite does not support function symbols of arity greated than 0 .for the future work , we are planning to add that support to the proof format and to our argoclp prover . in the current version , for simplicity ,the generated isar and coq proofs use tactics stronger than necessary .we will try to completely move to basic proofs steps while keeping simplicity of proofs . beside planning to further improve existing xslt style - sheets , we are also planning to implement support for additional target languages such as omdoc .marc bezem and thierry coquand . .in geoff sutcliffe and andrei voronkov , editors , _ 12th international conference on logic for programming , artificial intelligence , and reasoning lpar 2005 _ , volume 3835 of _ lecture notes in computer science_. springer - verlag , 2005 .jasmin christian blanchette . .in jasmin christian blanchette and josef urban , editors , _ third international workshop on proof exchange for theorem proving , pxtp 2013 , lake placid , ny , usa , june 9 - 10 , 2013 _ , volume 14 of _ epic series _, pages 1126 .easychair , 2013 .jasmin christian blanchette , lukas bulwahn , and tobias nipkow . .in cesare tinelli and viorica sofronie - stokkermans , editors , _ frontiers of combining systems , 8th international symposium , proceedings _ , volume 6989 of _lecture notes in computer science _ , pages 1227 .springer , 2011 .mathieu boespflug , quentin carbonneaux , and olivier hermant . .in _ second workshop on proof exchange for theorem proving ( pxtp ) _ , volume 878 of _ ceur workshop proceedings _ , pages 2843 .ceur-ws.org , 2012 .john fisher and marc bezem . .in cliff b. jones , zhiming liu , and jim woodcock , editors , _4th international colloquium on theoretical aspects of computing ictac 2007 _ , volume 4711 of _ lecture notes in computer science_. springer - verlag , 2007 .cezary kaliszyk and alexander krauss . .in sandrine blazy , christine paulin - mohring , and david pichardie , editors , _ interactive theorem proving - 4th international conference , itp 2013 , rennes , france , july 22 - 26 , 2013 . 
proceedings _ , volume 7998 of _ lecture notes in computer science _ , pages 5166 .springer , 2013 .cezary kaliszyk and josef urban . .in maria paola bonacina , editor , _ automated deduction - cade-24 - 24th international conference on automated deduction _ , volume 7898 of _ lecture notes in computer science _ , pages 267274 .springer , 2013 .steven obua and sebastian skalberg . .in ulrich furbach and natarajan shankar , editors , _ automated reasoning _ , volume 4130 of _ lecture notes in computer science _, page 298302 .springer berlin heidelberg , 2006 .steffen juilf smolka and jasmin christian blanchette . .in jasmin christian blanchette and josef urban , editors , _ third international workshop on proof exchange for theorem proving , pxtp 2013 _ , volume 14 of _ epic series _ , pages 117132 .easychair , 2013 .sana stojanovi , vesna pavlovi , and predrag janii ' c. . in pascal schreck , julien narboux , andjrgen richter - gebert , editors , _ automated deduction in geometry _ ,volume 6877 of _ lecture notes in computer science_. springer , 2011 .markus wenzel . .in yves bertot , gilles dowek , andr hirschowitz , c. paulin , and laurent thry , editors , _ theorem proving in higher order logics ( tphols99 ) _ , volume 1690 of _ lecture notes in computer science _ , pages 167184 .springer , 1999 .
we propose a simple , yet expressive proof representation from which proofs for different proof assistants can easily be generated . the representation uses only a few inference rules and is based on a fragment of first - order logic called coherent logic . coherent logic has been recognized by a number of researchers as a suitable logic for many everyday mathematical developments . the proposed proof representation is accompanied by a corresponding xml format and by a suite of xsl transformations for generating formal proofs for isabelle / isar and coq , as well as proofs expressed in a natural language form ( formatted in latex or in html ) . also , our automated theorem prover for coherent logic exports proofs in the proposed xml format . all tools are publicly available , along with a set of sample theorems .
first of all , i should like to thank the organizing committee for having given me an opportunity to talk about the history of cosmic ray research .it is , of course , impossible to give a comprehensive survey of one hundred years of cosmic ray research in a short time .so i must apologize at the outset that this will be just a snapshot from our long history that i have heard from pioneers .recently , i had two opportunities to review the history of cosmic ray research .i published an illustrated narrative on `` what are cosmic rays?''for younger readers [ 1 ] , and also an article entitled `` one hundred years of research on cosmic rays '' .the latter was published by the japan astronomy society in a series of books on modern astronomy that were written in japanese [ 2 ] .for this reason , and because next year will be the centenary of the discovery of cosmic rays , it seemed an appropriate time to present my personal views on the history of cosmic ray research in english .the early history was published before in 1985 in a book edited by sekido and elliot [ 3 ] .however , we note this did not include all contributions .the start of cosmic ray research was deeply connected with the two famous discoveries : x - rays by roentgen and radio - activity by becquerel . these were discovered both in year of 1900 .the discovery of cosmic rays was made when people were pushing forward the study of radio - activity .the mineral discovered by madame curie and schmidt in 1896 called pitchblende was an indispensable key mineral for the study of the properties of the radio - activity , since they showed extremely strong radio - activity .when people tried to put the stone including the mineral on the table and measured its intensity as a function of the distance , people noticed a curious fact that the flux did not go to zero even at the time in which the observer measured the intensity from a long distance from the sample .what are the components that they could not remove from the base line of the data ?theodor wulf carried the electrometer on the eiffel tower , but the meter did not show zero as expected .three scientists challenged this matter .hess ( austrian ) , kolhoerster ( german ) and gockel ( swiss ) .they used the balloon flight to study the difference of the ionization as a function of the altitude .they got on the balloon and courageously flied up over 5,300 m height , carrying an instrument called the electrometer .they might have been very cold at the highest point and might have strong headache due to low atmosphere pressure . however , they repeated the measurement of the ionization .and finally , they found that the ionization increased when the balloon climbed up higher altitude .hess named these radio - activities `` high altitude rays '' ( hoehen strahlung ) . who gave the name `` cosmic rays '' to these high altitude rays ?it was made by a famous us scientist , millikan , who found the unit of the electric charge .he carried the electroscope in the rocky mountain and sank the detector in muir lake ( 3,600 m ) , located near the highest peak of the rocky mountain , mt.whitney ( 4418 m ) ( figure 1 ) .then he sank the detector into another lake of lower altitude ( 2000 m ) and compared the difference of the ionization .then he found the difference purely came from just the absorption by the atmosphere and concluded that those activities were induced by the `` rays '' coming from the top of the earth , i.e. 
, the universe .so he named it as `` cosmic rays '' .it was the year of 1925 . in his paper to the american academy, he wrote that the penetrating rays were of cosmic origin and also he described that he sank the detector in a beautiful snow - fed lake .this might be pointed out as the first experiment using the water of the lake as the detector located at high altitude .it may be worthwhile to note that millikan had a picture on the early universe that the early universe was made by a big nucleus .so it would be natural to image that high - altitude rays are coming from large blob of nucleus [ 4 ] .in the long history of cosmic ray research , many new particles have been discovered .the first great discovery could be pointed out as the discovery of positrons , an anti - matter , which was made by anderson in 1932 .it would be interesting to know that millikan prepared the budget for the experiment of anderson. then muons were found by anderson and nedermyer in 1937 .when muons were found , yukawa believed that these particles might be the particle as he predicted in order to combine protons and neutrons inside nucleus to keep the nucleus stable ( 1935 ) .however , the feature was different from his prediction .the meson theory once encountered a difficulty . in 1942 ,sakata , inoue and tanigawa pointed out that the mesons found in cosmic rays might be of different kind from yukawa mesons .this hypothesis is called as `` two kinds of meson '' theory . in 1947 ,lattes , occhialini and powell discovered the yukawa meson in the emulsions exposed at high altitude weather station located at mt .chacaltaya in bolivia ( 5,250 m ) . thus the yukawa meson theory was confirmed .the nobel prize was given to yukawa in 1948 and to powell in 1950 .after that , a search for new particles in cosmic rays was steadily made .however , until 1970 , no new particle was discovered in cosmic rays and the job to find out new particles was considered as the matter of particle accelerator experiments . breaking this long silence , in 1970 ,short life particles were found in the emulsion experiments exposed at high altitude by the balloon flight by niu et al [ 5 ] . by present terminology ,they are called as the charmed particles .a number of charmed particles were produced by the accelerators at bnl and slac in 1974 and their properties were investigated in detail .i must describe here also the quark hunting experiments extended around 1966 , when i was a post - graduate course student of nagoya university . around that time , scientists considered that quarks might be involved in cosmic rays .they might be produced by the nuclear interaction processes between very high energy cosmic rays and the atmospheric nucleus . however , even though an enthusiastic search for the quarks has been made , they have not been found at all . finally , around 1974 , a theory was proposed that quarks must be confined in a bag , called mit bag model .this arises from a special property between two quarks that have a quantum number `` color '' .nowadays it is known as the `` asymptotic freedom '' .the force between two quarks behaves completely in different way in comparison with the coulomb force or the gravitational force .when two quarks approach in a short distance , the force between two quarks becomes weak . 
on the other hand , when two quarks separate each other in a certain distance , then strong attractive force appears between them .therefore , quarks are difficult to escape from the bag .this concept is well known as `` the confinement of the quark '' in the bag . in my opinion , this peculiar property between quarks has not been completely understood yet and the property must be classified into the same category of the physics : why there are heavy electrons ( muons ) in nature , why there are two kinds of particles that obey the bose - einstein statistics and the fermi - dirac statistics etc .we must accept the concept as they are .around 1980 , a fantastic theory was proposed .not only two forces , the electro - magnetic force and the weak force , but also three forces involving the strong force were unified .in other words , those three forces stem from the same origin . as a consequence of the theory, protons must decay into positrons and neutral pions .the theory is called as the grand unified theory ( gut ) .if the prediction is correct , protons are expected to be no more stable .people constructed large - volume water - tanks in the underground mines and waited for the proton decay .however , what they saw was not the evidence of the proton decay but the neutrino burst produced by the super nova explosion appeared near the large magellanic cloud in 1987 .koshiba got nobel prize thanks to this discovery .in addition , with the use of the same detector , another important evidence of the neutrino oscillation was found .even now in the underground laboratories , many people are searching the evidence for the dark matter , wimps ( weakly interacting massive particles ) .the study of the universe by means of cosmic rays may be alternatively defined as follows : where and how cosmic rays are accelerated . in other words, we are studying particle acceleration processes in various parts of the universe . according to the efforts extended by a number of scientists, we have now a good knowledge on the energy spectrum of cosmic rays .the energy range is extended over 12 orders of magnitude from to .the intensity drops off quickly with the energy and typical flux at 1 gev ( ) is 100 particles//sec / str .they are sometimes produced in association with solar flares . by solar flares, particles are accelerated beyond 100 gev .forbush noticed at first that those high energy cosmic rays are coming from the sun .they were observed in association with the large solar flare in february 28 , 1942 . according to our recent observation in association with the solar flare ,particles are accelerated over 56 gev [ 6 ] .the flux of cosmic rays near 100 tev ( ) is approximately 1 particle//sec / str .they may be produced by the shock acceleration processes .a very prominent particle acceleration theory was proposed in 1977 by five scientists [ 7 ] , [ 8 ] , [ 9 ] .it is now well known and called as the shock acceleration theory . since these particles have been accelerated to such high energies , it is impossible to measure their energies by the magnet - spectrometer . therefore , they are typically measured by the air shower method .one of the clear evidence for the shock acceleration mechanism was obtained by the simultaneous observations in the x - ray region ( asca ) and very high energy gamma - ray region ( hess ) ( figures 2 and 3 ) .the number of sources observed in the vhe gamma - ray regions has been accumulated and it turns out to be higher than 50 . 
in the near future, a powerful cherenkov telescope cta will be constructed .then the number of sources will exceed 1,000 .this means a new stage of cosmic ray research and we are really entering into the gamma - ray astronomy . in the new stage of vhegamma - ray study , at first , people will classify those sources into proton acceleration sources and electron acceleration sources . then within themwe will search different sources that were produced by the different acceleration mechanism . about the diffuse gamma - rays sources , recently the study entered into a new stage . due to new discoveries by the surface detectors , milagro and tibet as array , and also by the satellite board detector , fermi gamma - ray telescope , many bright sources that emit gamma - rays have been observed .also a global trend of the intensity difference ( anisotropy ) has been observed , which is induced by the modulation of the solar magnetic field ( solar wind ) .the milagro team and tibet team are going to construct new detectors in their observatories independently : one is at high altitude in mexico ( 5,000 m ) and the other is at high altitude in tibet ( 4,200 m ) .they are based on the water cherenkov method and they are designed to be able to separate the showers originated by protons from gamma - rays . in the latter case the muon components involved in the showers must be 50 times less than the showers induced by protons . we could find gamma - ray sources with energies higher than 100tev by means of these experiments . here, i would like to point out the possibility that the majority of cosmic rays might be accelerated by the stellar flares up to 100 gev with energy spectrum of ( the first step acceleration ) and those seed cosmic rays will be re - accelerated to 100 tev by the shock acceleration process in the snr with energy spectrum of [10 ] .the cosmic rays beyond , called the highest energy cosmic rays , are certainly the energy frontier .accelerators could not produce such a high energy at the moment .the highest energy region is very attractive region for both particle physics and astrophysics . protons with energy higher than not be involved in our galaxy , so that their origin must be different from our galaxy : either coming from other celestial objects or from the decay of exotic particles such as cosmic strings . however , at the moment , the observation results seem to include the systematic errors as shown in figure 4 .probably the difference of the energy spectrum originates from the systematic errors associated with the determination of the energy of each shower . in order to determine the incident energy of the showers , results of the monte carlo calculation have to be considered . in order to obtain the correct energy spectrum ,the monte carlo code of the simulation must be calibrated . however , there is no accelerator data on the forward production region of secondary particles beyond .the lhcf experiment will soon provide important data for the calibration of mc code for each air shower group and we will be able to understand the correct feature of the nature around the gzk cut - off region .after that , as the next step of the research , large air shower experiments like euso and/or auger north projects will be made . then we may enter a new step on cosmic ray research with the energy higher than ev .in closing this talk , i would like to introduce an interesting story that i heard from prof .oda , the father of x - ray astronomy of japan together with prof .s. hayakawa . 
since oda stayed at mit under prof .rossi when he was young , he could know why and how the air shower experiment started . around 1951 - 1956 , the existence of the magnetic field in the galactic arm was found .enrico fermi proposed to bruno rossi to measure the cosmic rays around by means of an air shower experiment .since the strength of the magnetic field in the galactic arm was already estimated to be about 3 at that time , protons with energy higher than could not be present in the galactic arm , therefore a cut - off at would have been expected .however , there is a shoulder near but no cut - off for cosmic rays and the cosmic ray spectrum is extended over .anyway , this is a reason how the air shower experiment started at mit .finally , i would like to introduce another topics related to japanese cosmic ray research .as i have already stated above section , around 1947 , the arrival direction of cosmic rays had not known at all . in 1950 prof .yataro sekido intended to measure it . in 1955 , he planned to build a large cherenkov telescope , with a diameter of 4 m [ 11 ] .he planned to find sources of cosmic - rays to measure the intensity of cosmic - ray muons as a function of arrival direction in the sky .people believed that cosmic rays arrived rectilinearly from their sources . around that time, the existence of the magnetic field in the galactic arm was not known .however , almost at the final stage of the construction , the existence of a weak magnetic field was discovered .so prof . satio hayakawa recommended to sekido to stop constructing the telescope .however , yataro sekido insisted on finishing its construction and making observations .hayakawa agreed with this policy .yataro sekido actually made a sky map of muon intensity with the large telescope .he obtained an almost uniform arrival map of muons over the sky . then prof .nagashima was invited to nagoya university to study the modulation of arrival directions of high energy cosmic - rays ( about 10 tev ) by magnetic fields - either the solar magnetic field , the interplanetary magnetic field or the galactic magnetic field .he found an anisotropy in arrival directions with an anisotropy of 2 over the mean intensity .recently , with the use of an air shower detector located at the south pole , a nice sky map of the anisotropy of cosmic - ray intensities at high energies was obtained .quite surprisingly , measurements of the southern sky coincide with measurements of the northern sky ( figure 5)[12 ] .what i should like to say here is that the detection of cosmic - ray sources at the highest energies will be subject to the problems that occurred at lower energies .so we should exercise some caution following sekido s historical experience .modern astronomy ( gendai no tenmongaku in japanese ) vol .17 ( 2008 ) 121 - 131 , published by japan hyoron co. early history of cosmic ray studies , edited by sekido and elliot published in d. reidel co. , astrophysics and space science library vol . 118humitaka sato , private communication ( 2010 ) .k. niu et al , progr .46 ( 1971 ) 1644 .y. muraki et al , astroparticle physics , 29 ( 2008 ) 229 - 242 .axford , e. leer and g. skadron , proceed .15th icrc ( plovdiv , bulgaria ) 11(1977 ) 132 .blandford and j.p ostriker , preprint 1977 , astrophys .j. 221 ( 1978 ) l29 .krymiskii , sov .( 1977 ) 327 .y. muraki , proceed .31st icrc ( lodz , poland ) paper og2.1,(2009)id-829 .satoru mori , proceed .of the cosmic ray research section of nagoya university ( in japanese ) vol .9 . 
IceCube Collaboration, arXiv:1005.2960v1 [astro-ph.HE], 17 May 2010.
This short note describes the history of cosmic ray research. A part of it was presented orally at the international conference CRIS 2010 held in Catania, Italy. The note is based on the English translation of a Japanese article entitled "One Hundred Years of Research on Cosmic Rays", published in 2008 by the Japan Astronomy Society in its book series "Modern Astronomy" (volume 17).
nanoparticles exhibit chemical and structural inhomogeneities insomuch as they appear in various morphologies , regardless of the production method used .even a well - crystallized nanoparticle with a well - defined polyhedral shape consists of low - coordinated surface atoms that are obviously dissimilar to high - coordinated ( bulk - like ) atoms .one recognizes immediately by inspection that the bulk - like atoms have indeed a local ( bonding ) environment that resembles the _ bulk solid _ _ _ _ _ whereas the surface atoms along the edges or at the corners have similarities with the atoms of _ atomic clusters _ _ _ _ _ in regard to bonding and local coordination . utilizing these similarities ( or dissimilarities ) in elucidatingthe properties of nanoparticles would be rewarding since one could then employ similarity search methods in computer - aided design of nanoparticles with customized properties .we should like to complement this consideration by noting that similarity measures / indices based on quantum - chemical descriptors have long been available , which proved to be promising in carrying out various tasks related to a multitude of physicochemical phenomena , such as comparing properties and reactivities of different molecular systems , deriving quantitative structure - activity or -property relationships , and identification of the active molecular sites .clearly these efforts demonstrate the utility of the similarity - based analysis in molecular design . on the other hand ,the structure and/or properties of nanoparticles have _ never _ _ _ _ _ been explored via the notion of the ( quantum ) similarity .this is , in our opinion , due to lacking a _ complete_____ similarity - based representation of atomic environments , which furnishes an adequate description for the nanoparticle atoms .the present study is thus devoted to fulfill an objective along this line : first , we develop a descriptor - based representation of atomic environments by devising _ two _ _ _ _ _ local similarity indices that are defined from the atom - partitioned _ shape function_____ , where and denote the electronic density function and number of electrons , respectively .then , we employ this representation to explore the size- , shape- , and composition - dependent nanocrystal energetics via the notion of quantum similarity .we focus on an _ energy difference _ _ _ _ _ , which is related to the _ atomic chemical potential _ _ _ _ _ , for its utility in the modelling and simulation of nanoparticles .this energy difference and the local similarity indices were obtained by performing first - principles calculations based on the density functional theory ( dft ) for ( i ) a set of _ database _ _ _ _ _ systems including unary atomic clusters in the shape of regular polyhedra and the bulk solids of c , si , pd , and pt , and ( ii ) a _ test set _ _ _ _ _ for validation , which includes a variety of unary nanocrystals as well as binary pt - pt nanoalloys and pt - c nanocompounds . 
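Since the displayed definitions do not survive in this rendering, we reproduce, as background, the standard definitions from the conceptual-DFT and quantum-similarity literature on which the two local indices build (their exact local forms are those fixed by equations (eq1st) and (eq2nd) below): the shape function of an N-electron system, the Hirshfeld (stockholder) weight of an atom A constructed from isolated-atom densities, and the global Carbó similarity index between two systems A and B:

\[ \sigma(\mathbf{r}) = \frac{\rho(\mathbf{r})}{N}, \qquad \int \sigma(\mathbf{r})\, d\mathbf{r} = 1 , \]
\[ w_A(\mathbf{r}) = \frac{\rho^{0}_{A}(\mathbf{r}-\mathbf{R}_A)}{\sum_{B} \rho^{0}_{B}(\mathbf{r}-\mathbf{R}_B)} , \]
\[ C_{AB} = \frac{\int \rho_A(\mathbf{r})\,\rho_B(\mathbf{r})\, d\mathbf{r}}{\sqrt{\int \rho_A^{2}(\mathbf{r})\, d\mathbf{r}\;\int \rho_B^{2}(\mathbf{r})\, d\mathbf{r}}} . \]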
regarding the energy difference as a `` property '' and exploring its correlations with the local similarity indices , we find that there _ exists _ _ _ _ _ an interconnection between nanocrystal energetics and quantum similarity measures .furthermore , we introduce an interpolation procedure in order to obtain total energies and energy differences from the two similarity indices .the latter enables us to demonstrate that the similarity - based energies could be utilized in computer - aided design of nanoparticles .although one of the similarity indices ( denoted by below ) employed in this study is of the same form as the local carb index , we find it necessary to introduce a _second _ _ _ _ _ similarity index ( denoted by below ) in order to achieve an adequate chemical representation .our findings reveal that using a single similarity index , cf . , leads to a bijection between and _ if and only if _ _ _ _ _ the set of systems are restricted to include _ equilibrium _ _ _ _ _ structures .this has the obvious drawback that it requires _ a priori _ _ _ _ _ knowledge of the _ equilibrium _ _ _ _ _ geometries .thus we find the introduction of a second similarity index necessary , which enables us to extend the aforementioned bijection to cover a generalized set of systems , including strained ( compressed or dilated ) systems .hence the foregoing drawback is overcome by using _two _ _ _ _ _ similarity indices with adequately tailored functional forms given below .it should be emphasized that this approach makes a similarity - based representation of the potential energy surfaces accessible thanks to the inclusion of the strained systems , which facilitates an adequate description of physicochemical processes .the rest of the paper is organized as follows : the next section is devoted to the methodological aspects which also summarizes the computational details .this is followed by a discussion of the calculation results before concluding remarks given in the last section .in this section we first define the energy difference and the similarity indices that constitute the employed chemical representation .we then describe ( i ) the set of _ database _ _ _ _ _ systems that are used to construct a database for the purpose of exploring correlations between energy differences and local similarity indices , and ( ii ) the _ test set _ _ _ _ _ employed for validation .next we introduce an _ ad hoc _ _ _ _ _ interpolation procedure that enables one to compute the similarity - based energy differences and total energies .we finalize this section with a description of our computational modelling framework .the aforementioned energy difference is defined by , \label{eqmu } \end{aligned}\ ] ] where is the number of atoms in the system under consideration , is the dft - calculated energy per atom for the system with the nearest - neighbor distance , is the dft - calculated energy per atom for the bulk solid with the _ equilibrium _ _ _ _ _ nearest - neighbor distance corresponding to the minimum of the total energy , and denotes the negative of the measured _ heat of atomization _ _ _ _ _ for the bulk solid .it is useful to define for the _ equilibrium _ _ _ _ _ value of the nearest - neighbor distance .note that would be equal to the _ atomic _ _ _ _ _ chemical potential at zero temperature ( ) for a _ unary _ _ _ _ _ system if the _zero of energy _ _ _ _ _ is set to the energy of the atom , i.e. 
, .recent investigations by one of the present authors have indicated that the energy difference could be utilized to introduce a _ scale _ _ _ _ _ of energy on which small ( less stable , more reactive ) and large ( more stable , less reactive ) nanocrystals are naturally ordered near the higher ( ) and lower ( ) ends of the scale , respectively . in the present study ,the isolated ( free ) atom is employed as a _ reference _ _ _ _ _ system for each atomic species : namely , the free c , si , pd , or pt atoms are used as a reference for the c , si , pd , or pt atoms in any system , viz . atomic clusters , bulk solids , or nanocrystals , respectively .this is advantageous for computational efficiency and avoids the need for an alignment procedure .it implies that the similarity of an atom of a certain type in a system is measured with respect to the free atom of the same type , regardless of the type of the system ( atomic cluster , bulk solid , or nanocrystal ) . the local ( atom - partitioned ) carb index , which here serves as an indicator of similarity of an atom , located at , of type in the system under consideration to the free atom ,could then be expressed as where and denote the shape function of the system under consideration and free atom , respectively .the use of the hirsfeld partitioning function in equation ( [ eq1st ] ) is encouraged by the holographic electron density theorem . here denotes the electron density of the isolated atom located at point , and is the promolecular electron density . as explained above, using does not suffice for obtaining a full - fledged representation of atomic environments .thus we introduce a _second _ _ _ _ _ indicator of local similarity given by as explained below , the introduction of enables one to treat _ energetic trends _ _ _ _ _ when the variations with the interatomic distance are taken into account .note that if the system itself is the free atom .in this study , first - principles calculations are employed for building a database that comprises and ( , ) values for the _ unary _ _ _ _ _ atomic clusters in the shape of platonic or archimedean solids .in addition to these regular polyhedra , dimers c , si , pd , and pt , and the bulk solids of c , si , pd , and pt are included in this database .it should be emphasized that not only equilibrium systems ( ) but also _ compressed _ _ _ _ _ or _ dilated _ _ _ _ _ systems ( or ) are included in this set . the systems in this set are called here _ database _ _ _ _ _ systems for ease of speech , which are thoroughly used for the purpose of exploring the correlations between the and or values .that the set of _ database _ _ _ _ _ systems comprise only _ equivalent _ _ _ _ _ atoms makes it possible to set in equation ( [ eqmu ] ) , where denotes the dft - calculated total energy . for each _ database _ _ _ _ _ system , plotting values as a function of yields a convex curve that could accurately be parameterized , as demonstrated below .accordingly , for a given _ database _ _ _ _ _ system , the energy difference defined in equation ( [ eqmu ] ) is represented by where denotes a polynomial function of forth order ( whose coefficients are determined by fitting to the dft - calculated values ) . it is found that one must employ a distinct function with a unique set of polynomial coefficients for each system . 
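A minimal numerical sketch of this parameterization is given below, in Python with NumPy. The fourth-order polynomial fit per database system follows the text directly; the step that combines database systems to estimate the contribution of an arbitrary atom is not the interpolation formula introduced in the next paragraphs (whose coefficients in equation (wii) are not reproduced here) but a simple inverse-distance weighting in the (s, t) plane, included only to illustrate the general idea. All names are ours.

....
import numpy as np

def fit_database_system(s_values, e_values):
    """Fit E(s) with a fourth-order polynomial for one database system.

    s_values, e_values: Carbo-index and energy-difference values sampled
    over a range of nearest-neighbor distances (compressed and dilated).
    Returns a callable polynomial P with E ~ P(s).
    """
    return np.poly1d(np.polyfit(s_values, e_values, deg=4))

def estimate_atom_energy(s_i, t_i, database):
    """Illustrative stand-in for the interpolation step (not equation (wii)).

    database: list of (s_curve, t_curve, P) tuples, one per database system,
    where (s_curve, t_curve) trace the system in the similarity plane and
    P is its fitted polynomial.  The atomic contribution is a weighted
    average of the P(s_i), weighted by inverse distance to each curve.
    """
    weights, values = [], []
    for s_curve, t_curve, poly in database:
        d = np.min(np.hypot(s_i - np.asarray(s_curve),
                            t_i - np.asarray(t_curve)))
        weights.append(1.0 / (d + 1.0e-6))
        values.append(poly(s_i))
    weights = np.asarray(weights)
    return float(np.dot(weights, values) / weights.sum())

def total_energy_difference(atom_indices, database):
    """Sum the atomic contributions over all atoms of a test system."""
    return sum(estimate_atom_energy(s, t, database) for s, t in atom_indices)
....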
thus the relationships are tabulated for all _ database _ _ _ _ _ systems with c , si , pd , pt in tables 1 - 4 in supplementary data .as mentioned above , a variety of nanocrystals are utilized as _ test _ _ _ _ _ systems for the purpose of validation , which exhibit structural inhomogeneity owing to the presence of a number of _ inequivalent _ _ _ _ _ atoms . this __ _ _ _ set is designed to cover a variety of nanocrystal sizes and shapes as well as a range of nanoalloy compositions with various mixing patterns .thus a number of unary c , si , pd , or pt nanocrystals ( supplementary data , table 5 ) , uniformly mixed ( supplementary data , table 6 ) , core - shell segregated ( supplementary data , table 7 ) , and phase separated ( supplementary data , table 8) pt - pd nanoalloys , and pt - c nanocompounds ( supplementary data , table 9 ) are contained in the _ test _ _ _ _ _ set . in practice ,the atoms of the _ test _ _ _ _ _ systems were constrained to occupy the diamond ( c and si nanocrystals ) , fcc ( pd and pt nanocrystals and pt - pd nanoalloys ) , and zinc - blende ( pt - c nanocompounds ) lattice sites .it should be mentioned that platinum carbide nanocrystals are considered here only for the purpose of studying some challenging systems since the bonding characteristics of platinum carbide , which was synthesized for the first time in 2005 under extreme conditions via a high - pressure and high - temperature method in a diamond anvil cell with laser heating , is peculiar owing to the mixed covalent - ionic - metallic interatomic interactions . with the aid of tabulated relationships ( supplementary data , tables 1 - 4 ) , the energies of the _ test _ _ _ _ _systems are obtained according to the following procedure : the contribution to the energy by the atom of a _ test _ _ _ _ _ system is obtained , via interpolation , by where denotes the interpolation coefficients , and is the label for _ database _ _ _ _ _ systems ( whereas denotes atoms of the _ test _ _ _ _ _ system under consideration ) .performing a sum over yields a similarity - based _ total _ _ _ _ _ energy difference note that should be compared to a dft - calculated _ total _ _ _ _ _ energy difference given by , \label{eqedft } \end{aligned}\ ] ] for a nanoalloy / nanocompound made of a and b atoms , owing to the inclusion of and in the definition of , cf .equation ( [ eqmu ] ) .one obviously needs to set in equation ( [ eqedft ] ) for a unary nanocrystal made of a atoms .the dft - calculated energies employed in equations ( [ eqmu ] ) and ( [ eqedft ] ) as well as shape functions employed in equations ( [ eq1st ] ) and ( [ eq2nd ] ) were obtained within the generalized gradient approximation ( gga ) using the pbe exchange correlation potential , and employing the projector augmented - wave ( paw ) method , as implemented in vasp code .spin - polarization was taken into account and _ scalar _ _ _ _ _ relativistic effects were included in all calculations .the 2 and 2 , 3 and 3 , 4 and 5 , and 5 and 6 states are treated as valence states for carbon , silicon , palladium , and platinum , respectively .plane wave basis sets were used to represent the electronic states , which were determined by imposing a kinetic energy cutoff of 400 , 245 , 250 , and 230 ev for c , si , pd , and pt , respectively .primitive and/or conventional unit cells were used in the calculations for the bulk solids , viz .c and si in the diamond structure and pd and pt in the face - centered - cubic ( fcc ) structure , whose brillouin zones were 
sampled by fine * k*-point meshes generated according to monkhorst - pack scheme , ensuring convergence with respect to the number of * k*-points .a variety of cubic supercells with a side length in the range 1530 were used for the atomic clusters and nanocrystals , which included a vacuum region that put at least 10 distance between nearest atoms of two systems in neighboring supercells .only -point was used for brillouin zone sampling in the case of the cluster or nanocrystal supercells .the error bar for the energy convergence was on the order of 1 mev / atom in all calculations .the overlap integrals employed in equations ( [ eq1st ] ) and ( [ eq2nd ] ) were evaluated via an adaptive multidimensional integration routine within a spherical region about the atomic centers in real space . for efficiency in computing the integrands in equations ( [ eq1st ] ) and( [ eq2nd ] ) , spline interpolations of the electron density functions and were performed using the three - dimensional gridded data written by vasp .the integration region for any atom was imposed by setting the integrands to zero at every point where e / . that this approach yields sufficiently accurate results was checked by computing the normalization integrals such as and also the integrals such as which should yield for the atomic clusters in the shape of platonic or archimedean solids . for infinite systems such as bulk solids ( for which the shape function is zero everywhere but normalized to unity )one could still apply this approach thanks to the inclusion of the denominator terms in equations ( [ eq1st ] ) and ( [ eq2nd ] ) , and _ spatial localization _ _ _ _ _ imposed by the partitioning function .it was , however , required to use a sufficiently large supercell in which the region of integration is well confined .furthermore , it was found that the integrals of the types and show slow convergence with respect to the supercell size whereas the ratio converges rather quickly .hence sufficiently large supercells were used in the computation of and for the bulk solids , and it was confirmed that the computed values of and are independent of the size of the employed supercells .in this section we first investigate the correlations between the energy difference and local ( atom - partitioned ) carb index .we then explore the aforementioned interconnection between energetics and quantum similarity indices .next we employ the interpolation procedure developed in the preceding section in order to devise a means for characterizing energetic heterogeneity of nanoparticles .finally we expound the similarity - based approach developed here by comparing the similarity - based energies to dft - calculated energies for a number of unary c , si , pd , or pt nanocrystals , uniformly mixed , core - shell segregated , and phase separated pt - pd nanoalloys , and pt - c nanocompounds , cf .the _ test _ _ _ _ _ set . ( a ) and energy difference ( b ) versus the atom - partitioned carb index for the unary _ database _ _ _ _ _ systems . the systems made of c , si , pd , and pt atoms are represented by diamonds , squares , triangles , and circles , respectively .the points ( a ) or curves ( b ) corresponding to dimers and bulk solids are labeled while the unlabeled symbols represent the atomic clusters in the shape of regular polyhedra . 
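To make the numerical evaluation concrete, the sketch below computes normalized overlap ratios between a system density and a free-atom density given on the same real-space grid (for instance, read from the gridded charge-density data written by VASP). It is our own illustration: the partitioning weight is taken to be the standard Hirshfeld one, the cutoff value is an assumption (the threshold used in the actual calculations is not reproduced here), and the two returned ratios stand for generic s- and t-type quantities rather than for equations (eq1st) and (eq2nd) literally.

....
import numpy as np

def overlap_ratios(rho_sys, rho_atom, weight, cutoff=1.0e-4):
    """Normalized overlap ratios between a system and a free-atom density.

    rho_sys, rho_atom : 3D arrays of electron density on the same grid
    weight            : 3D array of the atom's partitioning weight on that
                        grid, e.g. the Hirshfeld weight rho_A0 / sum_B rho_B0
    cutoff            : integrands are zeroed where both densities fall
                        below this value, confining the integration region
                        around the atom (assumed value)

    The voxel volume cancels in the ratios, so plain grid sums are used.
    """
    mask = (rho_sys >= cutoff) | (rho_atom >= cutoff)
    num  = np.where(mask, weight * rho_sys * rho_atom, 0.0).sum()
    den1 = np.where(mask, weight * rho_sys ** 2,       0.0).sum()
    den2 = np.where(mask, weight * rho_atom ** 2,      0.0).sum()
    s_like = num / np.sqrt(den1 * den2)   # Carbo-type, cosine-like ratio
    t_like = num / den1                   # non-symmetric companion ratio (illustrative)
    return s_like, t_like
....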
]the plot of the _ equilibrium _ _ _ _ _ energy difference versus the local carb index is displayed in figure 1(a ) .it is noticeable that there exists a correlation between and , which appears to be a distinct relationship for each atomic species .note that the correlation is seemingly _ linear _ _ _ _ _ for the pd and pt systems ( as marked by the dashed line passing through the set of pt systems ) .furthermore , it is seen that the set of pd and pt systems are grouped , i.e. , they fall nearly on the same line . for the si systemsthe correlation could also be regarded _approximately _ _ _ _ _ linear whereas the points for the c systems fall on a monotonic curve that is _ not _ _ _ _ _ linear and show a more pronounced scatter . yet , overall , there seems to exist a roughly one - to - one correspondence , i.e. , bijection , between the _ equilibrium _ _ _ _ _ energy difference and , regardless of the atomic species .this implies that the variety of local environments sampled by the _ database _ _ _ _ _ systems are adequately reflected by the atom - partitioned carb index . on one hand , this finding implying that a single number , viz . , per atom ( as opposed to a function of space ) suffices to capture _ energetics trends _ _ _ _ _ is striking in the view of the holographic electron density theorem which applies to the local electronic ground state density , i.e. , a function of space restricted to some region ( as opposed to a number ) .it could , on the other hand , be expounded by noting that _ similar _ _ _ _ _ atoms ( viz .atoms with close values ) would exhibit _ similar _ _ _ _ _ energetic stability ( as indicated by close values ) .( a)-(d ) and the similarity index ( e)-(h ) versus the similarity index for the c ( a ) and ( e ) , si ( b ) and ( f ) , pd ( c ) and ( g ) , and pt ( d ) and ( h ) systems .the circles represent the calculation results to which the solid - line curves are fitted . the curves corresponding to dimers and bulksolids are labeled while the unlabeled symbols represent the atomic clusters in the shape of regular polyhedra . ]it is obviously interesting to see if the preceding analysis for the unstrained systems could as well be applied to the ( negatively or positively ) strained systems , i.e. , if one could introduce a generalized - relationship .thus the plot of the energy difference versus the atom - partitioned carb index is drawn in figure 1(b ) for the pt systems , where the small ( large ) circles represents strained ( equilibrium ) systems .note that the large solid ( red ) circles as well as the ( red ) dashed lines on both panels of figure 1 are identical .the small ( black ) circles represent the dft - calculated ( , ) values .it is seen that the ( , ) points fall on a distinct convex curve ( represented by solid lines ) for each system .this observation is of practical significance , which makes it possible to parameterize as a function of . in practice , this parameterization was carried out by using a distinct polynomial function with a unique set of polynomial coefficients \{ } for each system , yielding the solid - line curves given in figure 1(b ) , which are obtained via fitting to the dft - calculated points . repeating the same procedure for the c , si , and pd systems results in the - curves given in the top panels of figure 2 where , for each atomic species , the curves for the atomic clusters lay necessarily between the curves for the dimer and bulk solid . 
on the other hand , despite the utility of the - parameterization , there exists now _ no _ _ _ _ _ one - to - one correspondence between the energy difference and the atom - partitioned carb index , i.e. , a given value of does not correspond to a _ unique _ _ _ _ _ _ database _ _ _ _ _ system .thus one can utilize the atom - partitioned carb index as a measure of similarity _ only _ _ _ _ _ for equilibrium systems .this , however , require _ a priori _ _ _ _ _ knowledge of the interatomic distances \{ } ; in other words , the equilibrium geometries .as mentioned above , this limitation is lifted by devising a _ second _ _ _ _ _ indicator of local similarity given in equation ( [ eq2nd ] ) .note that one would in principle need to use the local electronic ground state density rather than numeric value of an integral of it , viz . , in order to differentiate closely - related systems , cf .the holographic electron density theorem .our analysis reveals that a pair of local similarity indices constitute an adequate chemical representation adopted here since a single similarity index fails to reflect _energetic trends _ _ _ _ _ once the variations with the interatomic distance are taken into account .the bottom panels of figure 2 show the plot of versus for the _ database _ _ _ _ _ systems .although the two similarity indices are not genuinely independent of each other , cf .equations ( [ eq1st ] ) and ( [ eq2nd ] ) , the - curves appear to be distinct for each system . furthermore all curves lie in the same region bounded by the dimer curve acting as an upper bound and the curve for the bulk solid , which serves as a lower bound .interestingly , one could make the same observation in the top panels of figure 2 , where all the - curves are also bounded by the dimer and bulk curves .thus the - curves and the - curves are roughly ordered in a similar fashion .this finding encourages one to utilize the _second _ _ _ _ _ similarity index ( in combination with the local carb index ) to establish a bijection between the energy differences and similarity indices .the latter is achieved by employing the _ ad hoc _ _ _ _ _ interpolation formula given in equation ( [ wii ] ) , which enables one to obtain the contribution of atom to the similarity - based total energy difference from the similarity indices .computed via equation ( [ eqeloc ] ) for a series of pd nanocrystals ( top graphs ) , pt - pd nanoalloys ( middle graphs ) and pt nanocrystals ( bottom graphs ) .the scales on the left hand side represent the computed values of for pd atoms ( top scale ) and pt atoms ( bottom scale ) . ]figure 3 displays color - coded graphs of values for a series of pd nanocrystals ( top graphs ) , pt - pd nanoalloys ( middle graphs ) and pt nanocrystals ( bottom graphs ) .the scales on the left hand side represent the range for the computed values of for pd atoms ( top scale ) and pt atoms ( bottom scale ) .the higher end ( blue ) of these scales corresponds to less stable ( more reactive ) atoms while the lower end ( red ) corresponds to more stable ( less reactive ) atoms .it is encouraging to see that the bulk - like atoms near to the center of a nanocrystal have and values around the lower end whereas and values are considerably higher for the low - coordinated atoms located on the faces , along the edges , or at the corners . 
thus, using these values facilitates the characterization of the energetic (site-specific, morphology-dependent) inhomogeneity of the nanocrystals. moreover, the energetics trends with respect to size appear reasonable, as one approaches bulk-like energies in going from small to large nanocrystals. besides, using the set of atomic values enables one to look into the local (e.g., site-specific) mixing of the two types of atoms in a binary alloy. for example, comparative inspection of the pure pt, the pt-pd, and the pure pd particles in figure 3 shows that the pt atoms at the corners of the nanoalloy are less energetic in comparison with those of the pure pt nanocrystal, indicating that alloying a pt nanocrystal with pd increases the energetic stability of the corner (pt) atoms. since a similar analysis could be applied to any nanocrystal in a site-specific manner, using the set of atomic values would clearly be useful in elucidating trends in the size-, shape-, and composition-dependent nanocrystal energetics.

[figure 4 caption: the similarity-based total energy difference introduced in equation ([eqesim]) versus the dft-calculated total energy difference given in equation ([eqedft]) for a) the unary c, si, pd, or pt nanocrystals, b) the pt-pd nanoalloys with various mixing patterns, and c) the pt-c nanocompounds.]

performing a sum over atoms as in equation ([eqesim]) enables one to obtain the similarity-based total energy difference from the set of atomic contributions, cf. figure 3. it is then crucial to inquire whether it could be utilized in lieu of the directly calculated dft value for practical purposes, e.g., in the computer-aided design of nanocrystals. thus the plot of the similarity-based versus the dft-calculated total energy difference is drawn for the test systems in figure 4, where the calculation results are included for a variety of nanocrystal sizes and shapes as well as a range of alloy/compound compositions with various mixing patterns. accordingly, the upper, middle, and lower panels of figure 4 are devoted to the unary (c, si, pd, or pt) nanocrystals, the pt-pd nanoalloys, and the pt-c nanocompounds, respectively. not only unstrained but also negatively or positively strained systems are included in these panels. here, strained systems are characterized by the value of the interatomic distance, which is varied in the range of [1.37, 1.55] for the c nanocrystals, [2.20, 2.45] for the si nanocrystals, [2.57, 3.11] for the pd nanocrystals, [2.57, 2.91] for the pt nanocrystals, [2.55, 3.12] for the pt-pd nanoalloys, and [1.90, 2.25] for the pt-c nanocompounds. a power-law regression analysis on the points marked by the filled symbols, which ensures that the similarity-based energy difference vanishes as the dft-calculated one goes to zero, results in the dashed lines shown in the panels of figure 4, given by
\[
\Delta E^{\rm sim} = c \, \left[\Delta E^{\rm DFT}\right]^{\gamma} \pm \delta , \label{eqfit}
\]
where both energy differences are in ev, and the coefficients c, the exponents \gamma, and the error bars \delta are listed in table 1. note that the closeness of the values of c and \gamma to unity (in association with a small \delta) indicates that the similarity-based energy would follow the same energetics trends as the dft-calculated one. on the other hand, having either c or \gamma smaller than unity is an indication that the similarity-based interpolation procedure results in underestimation. the latter turns out to be the case, as revealed by inspection of the slopes of the dashed lines in figure 4 as well as the entries of table 1. yet the standard deviation is relatively small, i.e., on the order of 5% in all cases, not only for the unary nanocrystals but also for the pt-pd nanoalloys and pt-c nanocompounds.
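the regression of equation ([eqfit]) can be reproduced with a short script. the sketch below fits y ≈ c x^γ by ordinary least squares in log-log space, which automatically enforces y → 0 as x → 0; the variable names and the numerical data are illustrative assumptions, not values from table 1.

```python
# Hedged sketch: power-law regression y = c * x**gamma, fitted in log-log space.
# x stands for the DFT-calculated and y for the similarity-based total energy
# difference, following the plot in figure 4; the data are placeholders.
import numpy as np

def power_law_fit(x, y):
    """Fit y = c * x**gamma by linear least squares on log-transformed data."""
    logx, logy = np.log(x), np.log(y)
    gamma, logc = np.polyfit(logx, logy, 1)
    c = np.exp(logc)
    residuals = y - c * x**gamma
    delta = residuals.std()        # rough error bar, cf. delta in Eq. (eqfit)
    return c, gamma, delta

x = np.array([0.2, 0.5, 1.0, 2.0, 4.0])      # hypothetical Delta_E^DFT (eV)
y = np.array([0.19, 0.46, 0.93, 1.83, 3.60])  # hypothetical Delta_E^sim (eV)
print(power_law_fit(x, y))
```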
in practice, one could invert equation ([eqfit]) in order to obtain the total energy differences more accurately for these systems. it should be remarked that this description is not restricted to equilibrium configurations, since a relatively wide range of interatomic distances is considered above. hence, portions of the potential energy surfaces of the nanocrystals are rendered accessible. further analysis reveals that the scatter of the points about the regression line is much less pronounced for slightly strained systems, whereas it is significantly larger for the highly compressed systems. one should consequently recognize that the similarity-based approach exemplified here is more suitable for describing the portion of the potential energy surface that lies in the vicinity of equilibrium, the availability of which is clearly of great service in a multitude of design problems.

[table 1: the values of the coefficient, the exponent, and the error bar, introduced in equation ([eqfit]), for the test systems.]
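the inversion mentioned above amounts to solving the fitted relation for the dft-scale energy; a one-line sketch is given below, with a placeholder coefficient and exponent standing in for the entries of table 1.

```python
# Hedged sketch: inverting Delta_E_sim = c * Delta_E_dft**gamma to correct a
# similarity-based estimate back toward the DFT scale. The numbers are
# placeholders, not the fitted values of Table 1.
def invert_power_law(e_sim, c, gamma):
    return (e_sim / c) ** (1.0 / gamma)

print(invert_power_law(1.5, c=0.95, gamma=0.97))  # corrected energy in eV
```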
we first develop a descriptor - based representation of atomic environments by devising _ two _ _ _ _ _ local similarity indices defined from an atom - partitioned quantum - chemical descriptor . then we employ this representation to explore the size- , shape- , and composition - dependent nanocrystal energetics . for this purpose we utilize an _ energy difference _ _ _ _ _ that is related to the atomic chemical potential , which enables one to characterize energetic heterogeneities . employing first - principles calculations based on the density functional theory for a set of _ database _ _ _ _ _ systems , viz . unary atomic clusters in the shape of regular polyhedra and the bulk solids of c , si , pd , and pt , we explore the correlations between the energy difference and similarity indices . we find that there _ exists _ _ _ _ _ an interconnection between nanocrystal energetics and quantum similarity measures . accordingly we develop a means for computing total energy differences from the similarity indices via interpolation , and utilize a _ test set _ _ _ _ _ comprising a variety of unary nanocrystals and binary nanoalloys / nanocompounds for validation . our findings indicate that the similarity - based energies could be utilized in computer - aided design of nanoparticles .
the cell cycle is the sequence of events by which a growing cell replicates all its components and divides them evenly between two daughter cells. many theoreticians have understood the cell cycle as a periodic process driven by a biochemical limit cycle oscillator. however, a growing body of experimental and theoretical evidence indicates that the eukaryotic cell cycle is a toggle switch between two stable steady states, controlled by checkpoints. this point of view was adopted by chen et al. in a recent mathematical model of the budding yeast cell cycle, and bistability in the yeast cell cycle control system has been confirmed recently by experiments from cross's laboratory. bifurcation theory is a mathematical tool for characterizing steady state and oscillatory solutions of a system of nonlinear ordinary differential equations (odes). the goal of this work is a detailed bifurcation analysis of chen's model. our bifurcation analysis supplements the numerical simulation carried out by chen et al. and clarifies their quantitative comparisons between experiment and theory. in addition, bifurcation theory helps us to identify control modules within chen's complicated model, thereby bringing some new insights to the yeast cell cycle control mechanism. a more thorough understanding of cell cycle control in yeast can be very helpful in future efforts to model mammalian cell cycle controls. this paper is organized as follows. in section ii, we give a brief introduction to the budding yeast cell cycle. in section iii, we introduce chen's model and present its one-parameter bifurcation diagram. in section iv, we study saddle-node bifurcations in chen's model in order to provide a rigorous foundation for interpreting cross's experiment on bistability of the control system. in section v, we propose a reduced model with four time-dependent variables, which retains the main dynamical characteristics of the extended model. we characterize this model using two-parameter bifurcation diagrams. in section vi, we further reduce chen's model to three variables and demonstrate that the abbreviated model displays bifurcations and birhythmicity similar to more complex models. in section vii, we use the three-variable model to study effects of extrinsic fluctuations. the closing section is devoted to discussion. the nine-variable mathematical model and its parameters are given in the appendix. _cell cycle phases._ the cell cycle is the process by which one cell becomes two. the most important events in the cell cycle are the replication of the cell's dna and the separation of the replicated dna molecules to the daughter cells. in eukaryotic cells, these events (replication and separation) occur in temporally distinct stages (s phase and m phase, respectively). s and m phase are separated in time by gaps called g1 and g2 phases. during s phase ("synthesis"), double-stranded dna molecules are replicated to produce pairs of sister chromatids. during m phase ("mitosis"), sister chromatids are separated so that each daughter cell receives a copy of each chromosome. the g1 checkpoint mechanism controls the initiation of s phase, and a g2 checkpoint mechanism controls entry into m phase. a mitotic checkpoint controls the transition from m phase back to g1 phase. the checkpoints monitor cell size, dna damage and repair, dna replication, and chromosome alignment on the mitotic spindle.
_ molecular controls of budding yeast cell cycle ._ based on current knowledge about the molecular components controlling progression through the budding yeast cell cycle , a molecular wiring diagram was proposed by chen et al .a slightly simplified version of their diagram is presented in figure 1 .the molecular components can be divided into four groups : cyclins , inhibitors , transcription factors , and proteolytic machinery .there are two families of cyclins in figure 1 : cln s and clb s .denotes a concentration for protein abc . ]these cyclins combine with kinase subunits ( cdc28 ) to form active cyclin - dependent kinase heterodimers that trigger cell cycle events ( cdc28/cln2 initiates budding , cdc28/clb5 initiates dna synthesis , cdc28/clb2 initiates mitosis ) .cdc28 subunits are in constant , high abundance throughout cell cycle ; hence , the activity of cdc28/cyclin heterodimers is controlled by the availability of cyclin subunits .for this reason , cdc28 is not shown in figure 1 ; only the cyclin subunits are specified .( each cyclin molecule is understood to have a cdc28 partner . )sic1 ( in figure 1 ) is a cyclin - dependent kinase inhibitor : it binds to cdc28/clb dimers to form inactive trimers ( cdc28/clb / sic1 ) .sic1 does not bind to or inhibit cdc28/cln dimers .mcm1 , mbf , sbf and swi5 are transcription factors for synthesis of clb2 , clb5 , cln2 and sic1 , respectively .the degradation of these proteins is regulated by a ubiquitination pathway .proteins destined for degradation are first labeled by attachment of multiple ubiquitin molecules .ubiquitin moieties are attached to clb2 and clb5 by the apc ( anaphase promoting complex ) in conjunction with either cdc20 or hct1 .sic1 is ubiquitinated by a different mechanism(the scf " ) , which ( unlike the apc ) requires that its substrates be phosphorylated .budding yeast cells progress through the division cycle as the levels of the species in figure 1 come and go .thus the problem of cell cycle control is to understand the temporal fluctuations of these species . because the species in figure 1 are directly or indirectly interacting with all other species , simultaneous determination of their fluctuating concentrations require a precise mathematical model . using mass action and michaelis - menten rate laws , the complex wiring diagram in figure 1can be converted into ordinary differential equations , and from them the molecular levels can be computed .the model proposed by chen et al . includes about a dozen ode s and eleven algebraic equations with more than parameters .( refer to for a complete description of the wiring diagram and a derivation of the mathematical model , as well as for estimates of the rate constants in the model . ) in the appendix we present a reduced version of chen s model to be used in this paper for bifurcation analysis . from the original model, we drop the target variables ( spindle and bud formation , and dna synthesis ) because they are decoupled from the rest of the model .we reserve mass as the principal bifurcation parameter .we use the same parameter values as ref . , and they are presented in the appendix , table i. using the software package auto , we created a one - parameter bifurcation diagram ( figure 2 ) of the budding yeast cell cycle model , eqn .( a1-a20 ) , for parameter values given in table i. 
two saddle-node bifurcations connect the stable steady states in figure 2. there is also a subcritical hopf bifurcation on the upper branch of steady states, from which a branch of unstable limit cycles originates. these unstable oscillations disappear at an infinite-period saddle-loop (sl) bifurcation. a second branch of limit cycle oscillations, shown by filled circles, disappears at a different sl bifurcation point and is stable. the stable steady states (solid line) at low values of [clb2] represent the g1 phase of the cell cycle. the stable oscillatory states (filled circles) represent autonomous progression through s, g2 and m phases of the cell cycle and then back into s phase. to get a full picture of cell cycle events, we must combine the dynamics of the cyclin-dependent kinase "engine" (as summarized in figure 2) with equations for cell growth and division (changes in cell mass m). to this end, we supplemented eqn. (a1-a20) with an equation for exponential mass growth, written in differential form as dm/dt = \mu m, and a rule for cell division (m is reset to a fraction of its value whenever clb2-dependent kinase activity drops below 0.005). following chen et al., we choose an asymmetric division fraction because budding yeast cells divide asymmetrically. with these changes, we compute a solution of the full system, eqn. (a1-a20) plus the dynamics of m, and plot the resulting "trajectory of motion" on the bifurcation diagram (the red line in figure 2). this trajectory shows that the control system stays in the g1 phase while the cell mass is small. as the mass increases further, the control system is captured by the stable limit cycle. as a result, [clb2] eventually drops below 0.005, causing the cell to divide and the control system to return to the stable g1 state. this bifurcation diagram of the chen et al. model exhibits the same features as cell cycle models of frog eggs and fission yeast, namely, saddle-node bifurcations associated with stable and unstable oscillations. yet, there are subtle differences between these bifurcation diagrams. in the frog egg and fission yeast models, the large-amplitude stable limit cycles end at a saddle-node invariant-circle (snic) bifurcation, not an sl bifurcation. in our case, stable oscillations coexist with the stable steady states over a small range of mass values. however, when the budding yeast cell cycle model is supplemented by the mass growth equation, such differences seem unimportant. recently, cross et al. experimentally confirmed bistability in the activity of clb2-dependent kinase in budding yeast cells. it is interesting to mention that this result was predicted by a schematic sketch (figure 9 of ref.) intuitively drawn from the interrelations of cdc28/clb2 with the g1-phase cyclin cln2 and the apc specificity factor cdc20. we confirm this informal prediction of chen et al. by a rigorous bifurcation analysis of their model. in their experimental work, cross et al. constructed a strain that, under different experimental conditions, may lack the activities of cln2 or cdc20 or both. (to be precise, cross et al. used cln3 in place of cln2, but that technical detail makes no difference to our analysis.) by manipulating the activities of cln2 or cdc20, they found that [clb2] can be either high or low, depending on initial conditions.
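for readers who wish to reproduce this kind of trajectory, the sketch below outlines the simulation protocol in python: the cell cycle odes are integrated together with exponential mass growth, and the mass is rescaled whenever [clb2] crosses the division threshold from above. the right-hand-side function, the division fraction, and the index of [clb2] and of m in the state vector are assumptions of this sketch, to be supplied from the model equations and table i.

```python
# Hedged sketch of the growth-division protocol: integrate the model ODEs plus
# dm/dt = mu*m, and divide (rescale m) whenever [Clb2] falls through the
# threshold. `rhs(t, y)` must return d(state)/dt for the protein variables and
# the mass; the numerical constants are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

CLB2_INDEX = 0            # assumed position of [Clb2] in the state vector
MASS_INDEX = -1           # assumed position of the cell mass m
THRESHOLD = 0.005         # division threshold on Clb2-dependent kinase activity
DAUGHTER_FRACTION = 0.5   # assumed asymmetric division factor

def division_event(t, y):
    return y[CLB2_INDEX] - THRESHOLD
division_event.terminal = True
division_event.direction = -1     # fire only while [Clb2] is decreasing

def run_cell_cycles(rhs, y0, t_end):
    """Integrate `rhs` (protein ODEs plus mass growth) with division resets."""
    t, y = 0.0, np.asarray(y0, dtype=float)
    times, states = [t], [y.copy()]
    while t < t_end:
        sol = solve_ivp(rhs, (t, t_end), y, events=division_event, max_step=0.5)
        times.extend(sol.t[1:])
        states.extend(sol.y.T[1:])
        t, y = sol.t[-1], sol.y[:, -1].copy()
        if sol.status == 1:                    # a division event was detected
            y[MASS_INDEX] *= DAUGHTER_FRACTION
    return np.array(times), np.array(states)
```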
in terms of bifurcation theory, cross et al. thereby provided evidence for an s-shaped steady state curve bounded by saddle-node bifurcations, with transitions driven by the activity of cln2 or cdc20. indeed, we found that chen's model displays such bifurcations when [cln2] and [cdc20] are considered as bifurcation parameters. in accord with the experimental protocol of cross et al., we consider [cln2] and [cdc20] as parameters, and therefore discard eqn. (a1) and eqn. (a4-a5) from chen's model. we performed bifurcation analysis for the remaining six odes. in figure 3, we show a combination of two bifurcation diagrams. in the left bifurcation diagram we set [cln2]=0 and vary [cdc20], whereas in the right bifurcation diagram we set [cdc20]=0 and vary [cln2]. as the mass is the same in both cases, the two stable steady states in figure 3 represent g1 phase (low clb2-dependent kinase activity) and s/g2/m phase (high clb2-dependent kinase activity). figure 3 shows that increasing [cln2] drives the transition from g1 into s/g2/m, while activation of [cdc20] drives the transition from s/g2/m back to g1. using auto's facility for computing two-parameter bifurcation diagrams, we extended the saddle-node bifurcations in figure 3 into the parameter planes spanned by ([cln2], m), ([cdc20], m), and ([cln2], [cdc20]). in figure 4a, there are multiple steady states inside the cusp-shaped region bounded by the dashed lines, as expected. in figure 4b, there are two different bistable domains, bounded by dashed and dotted lines, respectively. where the domains overlap, we found that the control system has five steady states. we found that two different modules independently lead to the bistable domains in figure 4b. the dashed-line curve is due to the hct1 module of the wiring diagram in figure 1, whereas the dotted-line curve is due to the sic1 module. finally, figure 4c shows the bistable region on the ([cln2], [cdc20]) plane. because eqn. (a1-a20) take into account many known details of cell cycle control, the model is very complex. it is difficult to understand from eqn. (a1-a20) which nonlinearities lead to specific features of the bifurcation diagram shown in figure 2. to overcome this difficulty, we simplify chen's model by defining a core module that retains the main dynamical features of the full set of equations. the reduced model can be useful in understanding the roles of nonlinear feedbacks in the control system. in figure 5 we propose a simplified wiring diagram for the budding yeast cell cycle. we discarded from the original wiring diagram the sic1 and clb5 modules and cdc20's activation, retaining only four odes.
\[
\begin{aligned}
\frac{d}{dt}[{\rm cln2}] &= m\,(k_{s,n2}' + k_{s,n2}''\,[{\rm sbf}]) - k_{d,n2}\,[{\rm cln2}], &(1)\\
\frac{d}{dt}[{\rm clb2}] &= m\,(k_{s,b2}' + k_{s,b2}''\,[{\rm mcm1}]) - \bigl(k_{d,b2}' + (k_{d,b2}'' - k_{d,b2}')\,[{\rm hct1}] + k_{d,b2}'''\,[{\rm cdc20}]\bigr)\,[{\rm clb2}], &(2)\\
\frac{d}{dt}[{\rm hct1}] &= \frac{(k_{a,t1}' + k_{a,t1}''\,[{\rm cdc20}])\,(1-[{\rm hct1}])}{j_{a,t1} + 1 - [{\rm hct1}]} - \frac{v_{i,t1}\,[{\rm hct1}]}{j_{i,t1} + [{\rm hct1}]}, &(3)\\
\frac{d}{dt}[{\rm cdc20}] &= (k_{s,20}' + k_{s,20}''\,[{\rm clb2}]) - k_{d,20}'\,[{\rm cdc20}], &(4)
\end{aligned}
\]
where [sbf] is given by eqn. (a13-a14) with [clb5]=0, [mcm1] is given by eqn. (a11), and v_{i,t1} is given by eqn. (a15). with the elimination of the dynamics of cdc20 activation, we define a new degradation parameter k_{d,20}' in eqn. (4); we note that the results in this section do not change for reasonable alternative choices of this parameter. although figure 5 is much simpler than figure 1, eqn. (1-4) are still quite complex. the largest uncertainties arise from the two transcription factors (sbf and mcm1), which are described by nonlinear goldbeter-koshland functions. their role is to switch solutions from one branch to another. as the effects of the transcription factors can be studied experimentally, we explore their roles via two-parameter bifurcation diagrams. first, using mass as the primary bifurcation parameter, we computed a one-parameter bifurcation diagram similar to figure 2. then we continued the codimension-one bifurcations into two-parameter domains, using the intensity coefficients of the two transcription factors as the secondary bifurcation parameters. figure 6a shows a two-parameter bifurcation diagram of eqn. (1-4) on the plane spanned by mass and the sbf intensity coefficient. saddle-node bifurcations appear in figure 6a only above a threshold value of this coefficient. a bistable domain lies inside the dashed lines; it widens at smaller mass and larger values of the coefficient. the dot-dashed line showing hopf bifurcation points continues inside the bistable domain, and the hopf bifurcation line is accompanied by a cyclic fold curve; these curves eventually coalesce at a larger mass value. inside the bistable domain the cyclic fold coalesces with a locus of saddle loops (solid line in figure 6a). figure 6b shows a two-parameter bifurcation diagram on the plane spanned by mass and the mcm1 intensity coefficient. a crucial difference between figure 6a and figure 6b is the existence of a bistable domain at vanishing mcm1 intensity. if the mcm1 intensity is small, the effect of [mcm1] regulation is negligible, but at larger intensities [mcm1] can destroy bistability.
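since the goldbeter-koshland switch plays a central role in eqn. (1-4), a small illustration may be helpful. the sketch below implements the standard closed form used in tyson-novak-style models and evaluates an [mcm1]-like switch driven by [clb2], cf. eqn. (a11); the parameter values in the example call are placeholders, not the rate constants of table i.

```python
# Hedged sketch: the Goldbeter-Koshland function in the standard form used in
# Tyson/Novak-style cell cycle models, applied to an [MCM1]-type switch.
# The numerical parameters below are illustrative assumptions.
import math

def goldbeter_koshland(va, vi, ja, ji):
    """Steady-state fraction of a substrate interconverted by an activating
    rate va (Michaelis constant ja) and an inhibiting rate vi (constant ji)."""
    b = vi - va + ja * vi + ji * va
    return 2.0 * va * ji / (b + math.sqrt(b * b - 4.0 * (vi - va) * va * ji))

def mcm1(clb2, ka_mcm, ki_mcm, ja_mcm, ji_mcm):
    """[MCM1] as a zero-order ultrasensitive switch driven by [Clb2]."""
    return goldbeter_koshland(ka_mcm * clb2, ki_mcm, ja_mcm, ji_mcm)

# Example: with small Michaelis constants the switch is nearly all-or-none.
for clb2 in (0.1, 0.5, 1.0, 2.0):
    print(clb2, mcm1(clb2, ka_mcm=1.0, ki_mcm=0.6, ja_mcm=0.01, ji_mcm=0.01))
```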
returning to figure 6b, a hopf bifurcation line originates there from a bogdanov-takens bifurcation. this line is accompanied by a line of saddle loops. the saddle loops change stability where the line of cyclic folds coalesces with the line of saddle loops. we found that eqn. (a1-a20) display two-parameter bifurcation diagrams similar to figure 6a-b. notice from figure 6b that the domain of bistability is quite independent of the activity of mcm1, but the existence of the primary hopf bifurcation in the model depends sensitively on the activity of mcm1. the eukaryotic cell cycle engine is a highly conserved molecular machine. it is expected that mathematical models of cell cycle controls in different organisms exhibit qualitatively similar dynamics, as revealed by similar bifurcation diagrams. but there can also be peculiarities in these models, subject to particular parameter selections. as we mentioned in section iii, the bifurcation diagram in figure 2 does not involve a snic bifurcation of the kind seen in bifurcation diagrams of mathematical models for frog eggs and fission yeast. although this difference is rather subtle and does not contradict any features of cell cycle physiology, we point out that chen's model, eqn. (a1-a20), can display a snic bifurcation for an appropriate choice of parameter values (not shown). in this section, we examine a snic bifurcation in a three-variable model. to further simplify the model, we neglect cln2 in the wiring diagram of figure 5. as a result, we have a model with three time-dependent variables,
\[
\begin{aligned}
\frac{d}{dt}[{\rm clb2}] &= m\,(k_{s,b2}' + k_{s,b2}''\,[{\rm mcm1}]) - \bigl(k_{d,b2}' + (k_{d,b2}'' - k_{d,b2}')\,[{\rm hct1}] + k_{d,b2}'''\,[{\rm cdc20}]\bigr)\,[{\rm clb2}], &(5)\\
\frac{d}{dt}[{\rm hct1}] &= \frac{(k_{a,t1}' + k_{a,t1}''\,[{\rm cdc20}])\,(1-[{\rm hct1}])}{j_{a,t1} + 1 - [{\rm hct1}]} - \frac{v_{i,t1}\,[{\rm hct1}]}{j_{i,t1} + [{\rm hct1}]}, &(6)\\
\frac{d}{dt}[{\rm cdc20}] &= (k_{s,20}' + k_{s,20}''\,[{\rm clb2}]) - k_{d,20}\,[{\rm cdc20}]. &(7)
\end{aligned}
\]
in eqn. (5-7), [mcm1] is given by eqn. (a11) and v_{i,t1} is given by eqn. (a15), in which we assume [cln2]=0. we also changed the values of some parameters in table i. at fixed [clb2], eqn. (7) relaxes to the steady state [cdc20]^0 = (k_{s,20}' + k_{s,20}''[clb2])/k_{d,20}. substituting this functional relation between [cdc20] and [clb2] into eqn. (5-6), we can think of ([clb2], [hct1]) as a two-variable system, susceptible to phase-plane analysis. the nullclines of the two-variable system are plotted in figure 7; from the intersections of these nullclines we find the steady-state solutions for [clb2] and [hct1], and consequently for [cdc20]. to study the effects of extrinsic fluctuations (section vii), eqn. (5-7) are supplemented with additive noise terms and with the mass growth equation, yielding eqn. (8-11), in which the deterministic parts are the right-hand sides of eqn. (5-7), the noise is gaussian and white with zero mean and unit variance, and we assume that mass increase is not affected by random fluctuations. we simulated eqn. (8-11) using standard numerical techniques for stochastic differential equations. in figure 10 we overplot two different simulations. the dashed lines show the time evolutions of m, [clb2], [cdc20] and [hct1] when birhythmicity occurs far from start. in this case, noise does not interfere with mitosis, and the cell divides each time [clb2] drops below the division threshold. the solid lines show the case when birhythmicity occurs close to start, as in figure 8; the outcome of this case is described below.
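the phrase "standard numerical techniques for stochastic differential equations" presumably refers to an euler-maruyama-type scheme; a minimal sketch is given below, assuming additive noise of amplitude sigma on the three protein variables and noise-free exponential mass growth. the right-hand-side function, the noise amplitude, the time step, and the growth rate are assumptions of this sketch, not the values used for figure 10.

```python
# Hedged sketch: Euler-Maruyama integration of Eqs. (8-11), i.e. Eqs. (5-7)
# with additive Gaussian white noise, plus deterministic mass growth.
# `rhs(y, m)` should return the right-hand sides of Eqs. (5-7); sigma, mu and
# dt are illustrative assumptions.
import numpy as np

def euler_maruyama(rhs, y0, m0, sigma=0.01, mu=0.0077, dt=0.01,
                   n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    y, m = np.array(y0, dtype=float), float(m0)
    traj = np.empty((n_steps + 1, y.size + 1))
    traj[0, :-1], traj[0, -1] = y, m
    sqrt_dt = np.sqrt(dt)
    for k in range(1, n_steps + 1):
        y = y + rhs(y, m) * dt + sigma * sqrt_dt * rng.standard_normal(y.size)
        y = np.maximum(y, 0.0)      # keep concentrations non-negative
        m = m + mu * m * dt         # mass growth carries no noise term
        traj[k, :-1], traj[k, -1] = y, m
    return traj
```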
in this case noisecan switch the control system from slow , large amplitude oscillations to fast , small amplitude oscillations . as a result , [ clb2 ]does not go below and the cell can not divide .consequently , mass grows and the system goes to the stable steady state ( see filled diamonds at in figure 8) .therefore , in the presence of noise , birhythmicity may lead to mitotic arrest .in this work , we carried out bifurcation analysis of a model of the budding yeast cell cycle , based on earlier work by chen et al . which successfully accounts for many observed features of proliferating yeast cells .our results show that , despite a peculiarity in topology of the bifurcation diagram , the budding yeast cell cycle model displays the same basic features previously associated with frog egg and fission yeast models ; namely , saddle - node bifurcations associated with stable and unstable oscillations .we explored bistability and hysteresis in this model by numerical bifurcation analysis .some of our bifurcation diagrams can be useful for designing new experiments .for instance , our two parameter bifurcation analysis ( figure 4b ) suggests that the [ hct1 ] and [ sic1 ] modules may lead independently to bistable states , and there can be regions in parameter space with three stable steady states , when these two modules operate cooperatively .we found that a reduced model with four time - dependent variables retains the main characteristics of the bifurcation diagram of chen s model .this reduction allows us to explore the dominant roles of sbf and mcm1 transcription factors in budding yeast checkpoint controls .our two - parameter bifurcation diagrams ( figure 6 ) also can be useful in designing experiments for cell cycle controls by transcription factors .the budding yeast cell cycle model of chen et al .is parameter rich .although the parameter set presented in table i leads to a satisfactory fit of the model to many experimental observations , the choice of parameter values should be further constrained by new biochemical data about the protein - protein interactions and further improved by automatic parameter estimation techniques .on the other hand , different sets of parameters , leading to different bifurcation scenarios , are interesting from a theoretical standpoint .we have proposed a set of parameters for a reduced , three - variable model leading to a snic bifurcation .an interesting feature accompanying the appearance of a snic bifurcation in the reduced model is birhythmicity .birhythmicity has been found in a chemical system , but for biological systems , it is known theoretically only .we have shown that in the presence of extrinsic fluctuations , birhythmicity can lead to mitotic arrest .the fact that noise can switch a biochemical system from one stable solution to another is well known ( e.g. ref . ) , but switching from one stable oscillations to another is a less studied research area .a more systematic study of switching between stable limit cycles is a problem for the future .authors thank kathy chen and other members of computational cell biology group at virginia tech for many stimulating discussions .this work was supported by a grant from darpa s biocomputation program(afrl - 02 - 0572 ) . * figure captions * fig .wiring diagram of a budding yeast cell cycle model .2 . a one - parameter bifurcation diagram of eqn . 
( a1-a20 ) for parameter values in table i.solid lines indicate stable steady states .dashed lines indicate unstable steady states .solid circles denote the maximum and minimum values of ] drops below .bistability and hysteresis driven by [ cln2 ] and [ cdc20 ] . on the left plane \equiv 0 ] .mass is fixed at 1 .filled diamonds show stable steady states , dashed lines show unstable steady states .dotted lines and arrows indicate the start and finish transitions of the hysteresis loop .( start refers to the g1 s transition , finish refers to the m g1 transition . )two - parameter bifurcation diagram on the ,m) ] plane .there are two independent pairs of saddle - node bifurcation curves in this figure ( dashed curves and dotted curves ) .depending on the overlaps of the regions bounded by these curves , the number of steady states varies from i to v. fig .two parameter bifurcation diagram on the , [ { \rm cdc20}]) ] is the region where [ mcm1 ] changes abruptly from 0 to 1 .bifurcation diagram of eqn .( 5 - 7 ) .filled diamonds show stable steady states , dashed lines show unstable steady states .stable limit cycle oscillations are shown by filled circles , unstable limit cycle oscillations are shown by open circles . fig .. two - parameter bifurcation diagram of eqn .( 5 - 7 ) on the plane .bistability is found inside the sn curve ( solid line ) .three different hopf bifurcations ( violet lines ) originate from three bogdanov - takens bifurcation points shown by filled circles at , and .cyclic folds are shown by lines in cyan , saddle loops by lines in green , and the red solid line shows snic bifurcations . , which runs next to , is not shown on the diagram . runs from a degenerate hopf bifurcation on to a degenerate hopf bifurcation on . runs from a degenerate saddle loops on to a degenerate saddle loop on ( not shown ) , crossing over on the way . where and run very close together , only is plotted on the figure . fig .stochastic simulations of eqn .( 8 - 11 ) .dashed lines show a case when birhythmicity occurs far from the snic bifurcation . in this case , noise does not interfere with cell cycle progression .solid lines show the case when birhythmicity occurs close to the snic bifurcation , as in figure 8 . in the presence of noise ,the latter case leads eventually to mitotic arrest .parameters are : .solid lines for , dashed lines for . 
= m ( k_{s , n2}'+k_{s , n2 } '' [ { \rm sbf } ] ) - k_{d , n2 } [ { \rm cln2 } ] , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#1 \frac{d}{dt}[{\rm clb2}]_t = m ( k_{s , b2}'+k_{s , b2 } '' [ { \rm mcm1 } ] ) - ( k_{d , b2}'+(k_{d , b2}''-k_{d , b2 } ' ) [ { \rm hct1 } ] + k_{d , b2 } ' '' [ { \rm cdc20 } ] ) [ { \rm clb2}]_t,\ \ \ \ \ \ \ ; \ \ \ \ \\\ % % % equation#2 \frac{d}{dt}[{\rm hct1}]=\frac{(k_{a , t1}'+k_{a , t1 } '' [ { \rm cdc20}])(1-[{\rm hct1}])}{j_{a , t1}+1-[{\rm hct1}]}-\frac{v_{i , t1 } [ { \rm hct1}]}{j_{i , t1}+[{\rm hct1 } ] } , \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#3 \frac{d}{dt}[{\rm cdc20}]_t=(k_{s,20}'+k_{s,20 } '' [ { \rm clb2 } ] ) -k_{d,20 } [ { \rm cdc20}]_t,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ; \\ % % % equation#4 \frac{d}{dt}[{\rm cdc20}]=k_{a,20}([{\rm cdc20}]_t-[{\rm cdc20}])-(v_{i,20}+k_{d,20 } ) [ { \rm cdc20 } ] , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#5 \frac{d}{dt } [ { \rm clb5}]_t = m ( k_{s , b5}'+k_{s , b5 } '' [ { \rm mbf } ] ) - ( k_{d , b5}'+k_{d , b5}''[{\rm cdc20 } ] ) [ { \rm clb5}]_t , \ ; \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#6 \frac{d } { dt}[{\rm sic1}]_t= k_{s , c1}'+k_{s , c1 } '' [ { \rm swi5 } ] - ( k_{d1,c1}+\frac{v_{d2c1}}{j_{d2,c1}+[{\rm sic1}]_t } ) [ { \rm sic1}]_t,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#7 \frac{d } { dt } [ { \rm clb5|sic1}]= k_{as , b5 } [ { \rm clb5 } ] [ { \rm sic1 } ] -\nonumber \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ! \ ! \\ - ( k_{di , b5}+ k_{d , b5}'+ k_{d , b5 } '' [ { \rm cdc20 } ] + k_{d1,c1}+\frac{v_{d2c1}}{j_{d2,c1}+[{\rm sic1}]_t } ) [ { \rm clb5|sic1 } ] , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ % % % equation#8 \frac{d } { dt } [ { \rm clb2|sic1}]= k_{as , b2 } [ { \rm clb2 } ] [ { \rm sic1 } ] -\nonumber \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ - ( k_{di , b2}+ ( k_{d , b2}'+(k_{d , b2}''-k_{d , b2 } ' ) [ { \rm hct1 } ] + k_{d , b2 } ' '' [ { \rm cdc20 } ] ) + k_{d1,c1}+\frac{v_{d2c1}}{j_{d2,c1}+[{\rm sic1}]_t } ) [ { \rm clb2|sic1 } ] , \ \ \ \ \ \ \\ % % % equation#9 v_{d2,c1}=k_{d2,c1}(\epsilon_{c1,n3 } [ { \rm cln3}]^*+\epsilon_{c1,k2 } [ { \rm bck2}]+[{\rm cln2}]+ \epsilon_{c1,b5 } [ { \rm clb5}]+\epsilon_{c1,b2 } [ { \rm clb2 } ] ) , \ \ \ ! \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \\ { [ { \rm mcm1 } ] } = g(k_{a , mcm}[{\rm clb2}],k_{i , mcm } , j_{a , mcm},j_{i , mcm } ) , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ; \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm swi5 } ] } = g(k_{a , swi}[{\rm cdc20}],k_{i , swi}'+k_{i , swi } '' [ { \rm clb2 } ] , j_{a , swi},j_{i , swi}),\ \ ! \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm sbf}]}={[{\rm mbf}]}=g(v_{a , sbf},k_{i , sbf } ' + k_{i , sbf}''[{\rm clb2}],j_{a , sbf},j_{i , sbf } ) , \ ! \ ! 
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ v_{a , sbf}=k_{a , sbf}([{\rm cln2}]+\epsilon_{sbf , n3 } ( [ { \rm cln3}]^*+ [ { \rm bck2 } ] ) + \epsilon_{sbf , b5 } [ { \rm clb5 } ] ) , \ \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ v_{i , t1}=k_{i , t1}'+k_{i , t1}''([{\rm cln3}]^*+\epsilon_{i , t1,n2 } [ { \rm cln2}]+\epsilon_{i , t1,b5 } [ { \rm clb5}]+\epsilon_{i , t1,b2 } [ { \rm clb2 } ] ) , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm clb2}]_t}=[{\rm clb2}]+[{\rm clb2|sic1 } ] , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm clb5}]_t}=[{\rm clb5}]+[{\rm clb5|sic1 } ] , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm sic1}]_t}=[{\rm sic1}]+[{\rm clb2|sic1}]+[{\rm clb5|sic1}],\!\ ! \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm bck2}]}=m [ { \rm bck2}]^0,\!\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ { [ { \rm cln3}]^*}=[{\rm cln3}]_{max } \frac{m d_{n3}}{j_{n3 } + m d_{n3}}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ : \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \nonumber\end{aligned}\ ] ] p. nurse , cell * 100 * , 71 ( 2000 ) .b. alberts , d. bray , j. lewis , m. raff , k. roberts , and j. d. watson , _ molecular biology of the cell _ 3rd edition , new york : garland publishers ( 1994 ) .a. murray and t. hunt , _ the cell cycle _ , new york , w. h. freeman co. ( 1989 ) .r. norel and z. agur , science * 251 * , 1076(1991 ) .m. n. obeyesekere , s. o. zimmerman , e. s. tecarro , g. auchmuty , bull .math biol .* 61 * , 917(1999 ) .v. hatzimanikatis , k. h. lee , and j. e. bailey , biotechnol . bioeng . * 65 * , 631(1999 ) .k. nasmyth , trends .* 12 * , 405 ( 1996 ) . c. p. fall , e. s. marland , j. m. wagner , j. j. tyson editors , _ computational cell biology _ springer - verlag , new york ( 2002 ) .k. chen , a. csikasz - nagy , b. georffy , j. val , b. novak and j. j. tyson , mol .biol . cell . *11 * , 369 ( 2000 ) .f. r. cross , v. archambault , m. miller and m. klovstad , mol .biol . cell .* 13 * , 52 ( 2002 ) .w. sha , j. moore , k. chen , a. d. lassaletta , c. yi , j. j. tyson , j. c. sible , pnas usa * 100 * , 975 ( 2003 ) .y. a. kuznetsov , _ elements of applied bifurcation theory _ , new york , springer verlag , ( 1995 ) .s. h. strogatz , _ nonlinear dynamics and chaos_,reading , ma , addison - wesley , ( 1994 ). k. w. kohn , mol .cell * 10 * , 2703(1999 ) .m. t. borisuk and j. j. tyson , j. theor . biol .* 195 * , 69 ( 2000 ) .b. novak , z. pataki , a. gilberto and j. j. tyson , chaos * 11 * , 277 ( 2001 ) .e. j. doedel , t. f. fairgrieve , b. sandstede , a. r. champneys , y. a. kuznetsov , x. wang , _ auto 97 : continuation and bifurcation for ordinary differential equations ( with homcont ) _ , 1998 .a. goldbeter , _ biochemical oscillations and cellular rhythms _ , cambridge ( 1996 ) .a. goldbeter , d. e. koshland , pnas usa * 78 * , 6840 ( 1981 ) . j. j. tyson , a. 
csikasz - nagy , and bela novak , bioessay * 24 * , 1095 ( 2002 ) .j. j. tyson and b. novak , j. theor .biol . * 210 * , 249 ( 2001 ) .j. j. tyson , k. chen , and b. novak , nature reviews * 2 * , 908 ( 2001 ) .m. kaern and a. hunding , j. theor .193 * , 47(1998 ) .d. battogtokh and j. j. tyson , preprint ( 2004 ) .( arxiv : q - bio.sc/0402040 ) a. sveiczer , j. j. tyson and b. novak , biophys . chem . * 92 * , 1(2001 ) . n. g. van kampen , _stochastic processes in physics and chemistry _ , elsevier scienceb .v. , amsterdam , ( 1992 ) .r. steuer , _ effects of stochasticity in models of the cell cycle : from quantized cycle times to noise - induced oscillations _ , in press .j. m. sancho , m. san - miguel , s. l. katz and j. d. gunton , phys .a * 26 * , 1589(1982 ) .j. garcia - ovajo , j. m. sancho , _ noise in spatially extended systems _ , springer - verlag , new york , ( 1999 ) .j. w. zwolak , j. j. tyson , and l. t. watson , _parameter estimation for a mathematical model of the cell cycle in frog eggs _ , technical report tr-02 - 18 , computer science , virginia tech .d. battogtokh , d. k.asch , m. e. case , j. arnold , and h. b. schuttler , pnas usa * 99 * , 16904 ( 2002 ) .t. haberrichter , m. marhl , and r. heinrich , biophys . chem . * 90 * , 17(2001 ) . c. perez - iratxeta , j. halloy , f. moran , j. l. martiel , a. goldbeter , biophys. chem . * 74 * , 197(1998 ) .h. m. mcadams and a. arkin , annu .* 27 * , 199(1998 ) .j. hasty , j. paradines , m. dolnik , j. j. collins , pnas usa * 97 * , 2075(2000 ) .
we study the bifurcations of a set of nine nonlinear ordinary differential equations that describe the regulation of the cyclin - dependent kinase that triggers dna synthesis and mitosis in the budding yeast , _ saccharomyces cerevisiae_. we show that clb2-dependent kinase exhibits bistability ( stable steady states of high or low kinase activity ) . the transition from low to high clb2-dependent kinase activity is driven by transient activation of cln2-dependent kinase , and the reverse transition is driven by transient activation of the clb2 degradation machinery . we show that a four - variable model retains the main features of the nine - variable model . in a three - variable model exhibiting birhythmicity ( two stable oscillatory states ) , we explore possible effects of extrinsic fluctuations on cell cycle progression .