in many cases krylov space solvers are the methods of choice for the inversion of large sparse matrices . while most krylov space solvers are parameter free and do not have to be tuned to a particular problem , exploiting special algebraic properties of the matrix can lead to considerable acceleration of these algorithms .a recently discussed example is given by -hermitean matrices , e.g. , where the number of matrix - vector products of algorithms like qmr or bicg can be reduced by a factor of two if multiplications by and are cheap .another case which has been discussed in some detail recently is the application of krylov space solvers to shifted equations , i.e. where the solution to has to be calculated for a whole set of values of .this kind of problem arises in quark propagator calculations for qcd as well as other parts of computational physics ( see ) .it has been realized that several algorithms allow one to perform this task using only as many matrix - vector operations as the solution of the most difficult single system requires .this has been achieved for the qmr , the mr and the lanczos - implementation of the bicg method .we present here a unifying discussion of the principles to construct such algorithms and succeed in constructing shifted versions of the cg , cr , bicg and bicgstab algorithms , using only two additional vectors for each mass value . the method is also easily applicable to many other cases .the key to this construction is the observation that shifted polynomials , defined by where is the polynomial constructed in the krylov space method , are often still useful objects. since vectors generated by these shifted polynomials are simply scaled vectors of the original vectors , they are easily accessible . in the following sections we discuss the properties and construction of shifted polynomials in several cases .we then present the shifted versions of the above mentioned algorithms and finally perform some numerical tests .our ultimate goal is to construct an algorithm for a whole trajectory of matrices while only applying the matrix - vector operations for the inversion of one matrix , presumably the one with the slowest convergence . in the class of krylov space solvers , one deals with residuals or iterates which are in some ways derived from polynomials of the matrix : we simply define the shifted polynomial as is determined by the normalization conditions for required in the algorithm .it is easy to see that we can construct solvers which generate iterates of the form without additional matrix - vector products for multiple values of since the calculation of can be derived from matrix - vector products of one single system .if is a polynomial which reduces the vector , e.g. which is an approximation to in some complex region containing the relevant eigenvalues of and , will be a useful polynomial , too .another class of useful polynomials are the leja - polynomials , where the roots of the polynomial are given by the leja points of a compact set in the complex plane not containing the origin implicitely defined by the leja points are usually not uniquely defined .the polynomial defined by is a good approximation of in .the application of leja polynomials to matrix inversion problems has been described in . if is translation invariant , e.g. 
which is for example true if k is a circle with center on the positive real axis and is real and positive , all leja points are translation invariant and the shifted polynomial is exactly the leja polynomial for the translated region .the application of leja polynomials to construct krylov space methods for the wilson matrix is currently under investigation . in the case of formally orthogonal polynomials , which are usually generated in cg and lanczos - type algorithms, we can also see that the shifted polynomials are exactly the polynomials generated by the process for the shifted matrix . to see this , we introduce the lanczos polynomials have the property of formal orthogonality , namely or , for the non - hermitean process , for some vector .it should be noted that usually is uniquely determined up to a scalar constant ( in the case it is not uniquely determined the lanzcos process can break down ) . since we must have since is a formally orthogonal polynomial for as well .we therefore expect that the polynomials generated in cg and lanczos - type algorithms are of a shifted structure .we can indeed generate the exact processes for several values of using only one matrix - vector operation each iteration by calculating the shifted polynomials .in the following we will show how to calculate the parameters of the shifted polynomials from the original process in the case of the above mentioned recurrence relations .this recurrence is found in mr - type methods or in hybrid methods using mr - type iterations .we assume here that the leading coefficient is one .the polynomial is given directly as a product of its linear factors : to calculate the shifted polynomial , we look at a linear factor resulting in and the shifted polynomial is therefore given by if the spectrum of the matrix lies in the right half of the complex plane we can expect that all inverses of the roots lie there , too .we can then easily see that for , so that the shifted polynomial converges better than the original polynomial with a rate growing with .this is not surprising since we expect the condition number of the matrix to decrease for .let us construct an algorithm using this shifted polynomial .if the single update is given by we can generate the solutions by remarkably , if is generated by the minimal residual condition , this is exactly the same algorithm which was found in with a completely different approach , namely by a taylor - expansion of the residual in and resummation of the series .this is not completely surprising , since in the derivation in approximations were made to achieve that no additional matrix - vector products are needed and the small recursion length is kept , which automatically leads to the shifted polynomial .however , the taylor - expansion becomes prohibitively complex when applied to algorithms like bicgstab , whereas the shifted polynomial method can easily be transferred .let us now apply these ideas to the case of three - term recurrences , which usually appear in algorithms derived from the lanczos process .we look at a general three - term recurrence relation of the form we want to calculate the parameters of the shifted polynomial the equations are given by matching the parameters of the parameters are not completely fixed .one possible choice is this was realized in to construct the qmr and tfqmr method for shifted matrices .the lanczos vectors are in fact independent of .if we want to use directly as a residual , we impose the condition .this determines the parameters of the 
shifted polynomial : with and the initial conditions . for the case of the lanczos processit is easy to proof by induction that the parameters , and are indeed the parameters generated by the lanczos process for the matrix if the process does not break down .the update of the solution vector is given by this is basically the bioresu-algorithm from .there the equations ( [ simple ] ) are used and an overall normalization factor is recursively determined .it should be noted that this method does not only apply to the lanczos process , but for general parameters , and .the shifted polynomial will then not be the polynomial generated for the shifted process , but the shifted systems still converge if .now let us turn to the more interesting case of coupled two - term recurrence relations .these relations have generally a superior numerical stability compared to the equivalent three - term recurrence .we look at recurrences of the cg - type form where the initial condition has been used.the method can simply be applied to a more general choice of parameters .we want to calculate the parameters needed to generate the shifted polynomial .unfortunately will generally not be a shifted polynomial .this is however not a problem , since since we can calculate without additional matrix - vector products from if the vectors are needed , we can reformulate the recursion as follows : we have in exact arithmetic .depending on the algorithm one or both vectors and have to be stored for all values of .let us calculate the parameters of the shifted process . to do this, we derive the three - term recurrence for : the parameters are given by with the initial conditions and .we thus find for the shifted parameters at the expense of calculating by introducing an additional vector and additional dot products , we can also calculate the shifted parameters and using the original formulae .these formulae do not only apply to the cg process , which will be demonstrated below .we have thus shown that one can implement coupled two - term recurrences for shifted matrices .we can now derive shifted versions of solvers based on these recursion relations by simply calculating the shifted parameters and using the proportionality between the shifted and original polynomials . whether we succeed in deriving the shifted algorithm without any additional matrix - vector productsdepends on whether matrix - vector products of vectors which are derived from polynomials which have no shifted structure are needed .in some cases we can eliminate these matrix - vector products by expressions involving other vectors .in this section we develop shifted algorithm variants of the following algorithms : cg , cr , bicg , bicgstab . in addition shifted versions of the solvers qmr , tfqmr and mr are known , so that for most popular krylov space methods shifted solvers are available . note that since tfqmr is based on cgs , the shifted version of the latter algorithm is basically also available . in table[ tab1 ] we present the currently known short recursion methods for shifted matrices with memory requirements . to avoid a proliferation of new names we propose to simply add -m to the name of an algorithm to indicate its shifted version ..[tab1 ] memory requirements and references for shifted system algorithms for unsymmetric or nonhermitean matrices .we list the number of additional vectors neccessary for additional values of ( which is independent of the use of the -symmetry ) . 
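for reference , the construction described in the preceding paragraphs can be summarized in explicit notation . this is a sketch in one standard convention , writing the shifted systems as ( a + \sigma ) x^\sigma = b with zero initial guess and denoting by p_k the residual polynomial of the unshifted process ; the symbols are introduced here and the sign conventions need not coincide with those of the original equations :

\[
r_k = p_k(a)\,b , \qquad p_k(0)=1 , \qquad
p_k^{\sigma}(\lambda) = \frac{p_k(\lambda-\sigma)}{p_k(-\sigma)} , \qquad
r_k^{\sigma} = \zeta_k^{\sigma}\, r_k , \quad \zeta_k^{\sigma} = \frac{1}{p_k(-\sigma)} .
\]

for a cg - type coupled two - term recurrence with parameters \alpha_k and \beta_k , the shifted parameters can then be taken as

\[
\zeta_{k+1}^{\sigma} = \frac{\zeta_k^{\sigma}\,\zeta_{k-1}^{\sigma}\,\alpha_{k-1}}
{\alpha_k\,\beta_{k-1}\,\bigl(\zeta_{k-1}^{\sigma}-\zeta_k^{\sigma}\bigr)
+\zeta_{k-1}^{\sigma}\,\alpha_{k-1}\,\bigl(1+\sigma\alpha_k\bigr)} , \qquad
\alpha_k^{\sigma} = \alpha_k\,\frac{\zeta_{k+1}^{\sigma}}{\zeta_k^{\sigma}} , \qquad
\beta_k^{\sigma} = \beta_k\,\Bigl(\frac{\zeta_{k+1}^{\sigma}}{\zeta_k^{\sigma}}\Bigr)^{2} ,
\]

with \zeta_0^{\sigma} = \zeta_{-1}^{\sigma} = 1 , \alpha_{-1} = 1 and \beta_{-1} = 0 . these relations follow from evaluating the three - term form of the residual recurrence at \lambda = -\sigma and reduce to the unshifted parameters for \sigma = 0 .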
note that we cannot easily generalize this method to the cgne algorithm , since ( a + \sigma )^\dagger ( a + \sigma ) is not generally a shifted matrix . for staggered fermions , however , we are in the lucky position that the matrix has the structure a = m + d with m real and d anti - hermitean , so that a^\dagger a = m^2 - d^2 is a shifted matrix . since the cg and cr algorithms are optimal for staggered fermions , we have optimal shifted algorithms available for this case . for wilson fermions the interesting algorithms are mr and bicgstab , the former due to its simple implementation and small memory requirements and the latter due to its superior performance and stability . we present here a version of the cg algorithm for shifted matrices ; the corresponding bicg variants are derived analogously . note that the initial guess has to be set to zero . this algorithm is a straightforward realization of the formulae ( [ eqparbegin ] ) - ( [ eqparend ] ) . note that we need only 2 additional vectors for each value of the shift , even in the nonsymmetric bicg case , since we can calculate the shifted parameters from the parameters of a single system . the unshifted matrix has to be chosen in such a way that all other systems are obtained from it by the shifts , which usually means that it corresponds to the system with the slowest convergence . the cr algorithm is the truncated version of the generalized conjugate residual method , which is a coupled two - term version of the gmres algorithm ( see the references therein ) . we formulate an algorithm which applies the shifted polynomials to the shifted matrices ; the cr algorithm applied directly to a shifted matrix does in this case not necessarily generate the shifted polynomial . the structure is identical to cg - m , but the parameters are calculated differently , namely from the conjugate residual inner products . note that formulae ( [ eqparbegin ] ) - ( [ eqparend ] ) still apply although we do not generate the lanczos polynomial . note also that we do not know a priori whether this algorithm converges for the shifted systems ; this has to be checked . if the matrix has only eigenvalues with positive real part , we can however expect the parameters entering ( [ eqparend ] ) to have definite signs , and in that case one can easily see from formula ( [ eqparend ] ) that convergence of the shifted systems follows . this suggests that we can expect convergence if the zero shift corresponds to the system with the worst condition , which was confirmed in tests with the wilson fermion matrix . in the bicgstab algorithm , we generate sequences of residuals of the form r_k = \phi_k ( a ) \psi_k ( a ) r_0 , where \psi_k is the bicg residual polynomial and \phi_k ( x ) = \prod_j ( 1 - \omega_j x ) , with the parameters \omega_j derived from a minimal residual condition .
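before completing the shifted bicgstab construction in the following paragraphs , the cg - m iteration described above can be illustrated by a short sketch . this is a minimal python illustration of the multi - shift cg idea in the notation of the relations given earlier ( hermitean a , shifts \sigma \ge 0 , zero initial guesses ) ; it is not a transcription of the original pseudo - code , and all variable names are chosen here for readability .

```python
import numpy as np

def cg_m(A, b, sigmas, tol=1e-8, maxiter=1000):
    """multi-shift cg sketch: approximately solve (A + sigma*I) x = b for all
    shifts at once, with a single matrix-vector product per iteration.
    assumes A hermitian and A + sigma*I positive definite for every shift."""
    x = {s: np.zeros_like(b) for s in sigmas}    # shifted iterates
    p = {s: b.copy() for s in sigmas}            # shifted search directions
    zeta = {s: 1.0 for s in sigmas}              # zeta_k^sigma
    zeta_old = {s: 1.0 for s in sigmas}          # zeta_{k-1}^sigma
    r, pr = b.copy(), b.copy()                   # seed (sigma = 0) residual and direction
    rr = np.vdot(r, r).real
    alpha_old, beta_old = 1.0, 0.0
    bnorm = np.sqrt(rr)
    for _ in range(maxiter):
        Ap = A @ pr
        alpha = rr / np.vdot(pr, Ap).real
        for s in sigmas:
            denom = (alpha * beta_old * (zeta_old[s] - zeta[s])
                     + zeta_old[s] * alpha_old * (1.0 + s * alpha))
            zeta_new = zeta[s] * zeta_old[s] * alpha_old / denom
            # alpha_k^sigma = alpha_k * zeta_{k+1}^sigma / zeta_k^sigma
            x[s] += (alpha * zeta_new / zeta[s]) * p[s]
            zeta_old[s], zeta[s] = zeta[s], zeta_new
        r -= alpha * Ap
        rr_new = np.vdot(r, r).real
        beta = rr_new / rr
        for s in sigmas:
            beta_s = beta * (zeta[s] / zeta_old[s]) ** 2
            p[s] = zeta[s] * r + beta_s * p[s]
        pr = r + beta * pr
        alpha_old, beta_old, rr = alpha, beta, rr_new
        if np.sqrt(rr) <= tol * bnorm:           # seed system converges slowest for sigma >= 0
            break
    return x
```

since r_k^\sigma = \zeta_k^\sigma r_k and |\zeta_k^\sigma| \le 1 for \sigma \ge 0 and positive definite a , it suffices in exact arithmetic to monitor the residual of the seed system , in line with the convergence remarks above .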
for the shifted algorithm we have the update of the solutionhas the form the problem is that the update of itself requires the calculation of , which straightforwardly means we have one additional matrix - vector multiplication for each value of .but we can use the relation to eliminate this matrix - vector product at the expense of one auxiliary vector to store .this method is safe since only if the algorithm breaks down anyway .the complete algorithm is then given by ( note that ) 0.3 cm the convergence of the shifted algorithms can be verified by checking that it is however generally advisable for all shifted algorithms to test all systems for convergence after the algorithm finishes since a loss of the condition ( [ shiftc ] ) due to roundoff errors might lead to erratic convergence .there are two major limitations to shifted algorithms which diminish their usefulness considerably .first , we have to start with the same residual for all values of , which means that can not have -dependent left preconditioning .secondly , preconditioning must retain the shifted structure of the matrix . while preconditioning can reduce the computational effort it has also the important property of numerically stabilizing the inversion algorithm , which is essential to achieve convergence in many cases .a class of preconditioners which is potentially suitable for shifted systems are polynomial preconditioners .we note here that we do not expect to considerably accelerate the matrix inversion algorithms by polynomial preconditioning in the case of the wilson matrix since the polynomials generated by methods like bicgstab are already nearly optimal .we apply a preconditioning polynomial and solve the equation will generally depend on the shift , so we are looking for polynomials which statisfy and which are good preconditioners . for the linear case ,the general solution is where is an arbitrary constant .the case was proposed for the wilson fermion matrix in , leading to the preconditioned matrix which is fortunately a reasonable preconditioner for the wilson fermion matrix , so that the total work is approximately the same as for the unpreconditioned system .we lose ( for general sources ) however a factor of two compared to the usual even - odd preconditioning .we assume that generally we do not have to worry too much if is a good preconditioner for since usually these systems converge faster .problems only arise if the desired precision is close to the precision where the residuals stagnate . 
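the linear case just mentioned can be made explicit . the following is a reconstruction from the stated requirement rather than a quotation of the original formula ; the constants c_0 and c_1 are introduced here , and this is presumably the condition referred to as ( [ condi ] ) in the text . writing the preconditioning polynomial for shift \sigma as p_\sigma ( y ) = c_1 y + c_0 ( \sigma ) and demanding that the preconditioned matrices again form a shifted family ,

\[
p_\sigma(a+\sigma)\,(a+\sigma) \;=\; p_0(a)\,a \;+\; \tilde\sigma(\sigma)\,\mathbb{1} ,
\]

one finds that c_1 must be independent of \sigma and that c_0 ( \sigma ) = c_0 - 2 c_1 \sigma , i.e.

\[
p_\sigma(y) \;=\; c_1\,(y-2\sigma) + c_0 , \qquad
\tilde\sigma(\sigma) = c_0\,\sigma - c_1\,\sigma^2 ,
\]

with c_0 the arbitrary constant mentioned above . for instance , c_1 = -1 and c_0 = 2 give the \sigma = 0 preconditioned matrix a ( 2 - a ) = 1 - ( 1 - a )^2 , which for the wilson matrix a = 1 - \kappa d yields 1 - \kappa^2 d^2 ; this particular choice is quoted here only as an illustration and may differ from the constants actually proposed for the wilson fermion matrix .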
given a preconditioner of the form can calculate the preconditioner by requiring that ( [ condi ] ) holds , which results in a system of equations for the parameters .suitable polynomials can for example be constructed from chebychev- , leja- or gmres - polynomials .we will not examine this approach further and only apply linear preconditioners in our numerical tests .the algorithms were tested on quenched configurations at , fixed to coulomb gauge .we used generally 32-bit precision for the vectors and matrix and 64-bit precision for the accumulation of dot products and parameter recursions .the tests were performed on a cray t3d machine using the milc code basis and configurations .other tests of the qmr and mr methods can be found in .the set of hopping parameter values was taken from an actual heavy - light calculation with gaussian wall sources .we compared the results against t even - odd preconditioned bicgstab using the result of lower values as initial guesses .we also applied the methods to clover fermions on the same configurations .we performed tests for two lattices , two spin- and colorindices and sources of size 2 and 6 .we found comparable results in all cases . in figure [ fig1 ]we show the convergence history of a sample run with wilson fermions taken from an actual production run for heavy - light systems .the method is ( averaged over our test runs ) only about 14% faster than bicgstab with continued guesses , which is due to the fact that the gap between the light mass and the heavier masses is too large .the desired accuracy was for the 3 heavier and for the lighter masses .it is easy to see , however , that this factor increases rapidly for mass values which lie closer together , since the continued guess method can not keep the total number of matrix multiplications constant in contrast to shifted methods .the method is advantageous in a specific case with masses ( we assume nonlocal sources here ) , if the first term is simply the total number of iterations using the standard algorithm , the last term is twice the number of iterations for the slowest system using a zero guess .obviously one can construct examples where this number can become very large .note that for point sources the shifted method wins another factor 2 in the wilson fermion case .we also tested the mr - m method with an overrelaxation parameter .while the mr algorithm performed comparably to the bicgstab algorithm in this situation , we found that the residuals of the higher mass systems stagnate at a value of .this problem was less pronounced on smaller lattices , so that we assume that it is connected to a loss of condition ( [ shiftc ] ) due to roundoff errors .it might however also be connected to our specific implementation .the same problem can also be seen in in figure 2 .we used the tadpole - improved value of the clover constant and values of so that the inversion takes approximately as long as in the wilson case . 
for the bicgstab algorithm we used the preconditioned matrix for bicgstab - m we used the linear preconditioner ( [ linear ] ) with preconditioned matrix does not separate nicely like in the wilson case , which makes however no difference in the computational effort for general sources .it does however serve its main purpose , namely to stabilize the algorithm sufficiently so that it converges in our test cases .we find that the implementation of the preconditioner is important in the sense that a violation of condition ( [ condi ] ) due to roundoff errors can lead to a stagnation of the shifted residuals .the number of iterations needed with zero initial guess is approximately the same for the bicgstab and bicgstab - m for the smallest mass which means that the linear preconditioner reduces the condition of the matrix as well as the preconditioner ( [ ilu ] ) .the further conclusions are therefore similar to the wilson fermion case . in figure [ figcl1 ]we show a convergence history for a system with clover fermions .note that we saw examples of a loss of precision in the shifted residuals which lead to early stagnation , so that it is advisable to check the residuals of the shifted systems for convergence . herethe mass values lie effectively closer together and a bigger improvement can be seen .we presented a simple point of view to understand the structure of krylov space algorithms for shifted systems , allowing us to construct shifted versions of most short recurrence krylov space algorithms .we developed the shifted cg - m and cr - m algorithm which can be applied to staggered fermion calculations . since efficient preconditioners for the staggered fermion matrixare not known , a very large improvement by these algorithms can be expected .we also presented the bicgstab - m method , which , among the shifted algorithms , is the method of choice for quark propagator calculations using wilson ( and presumably also clover ) fermions if enough memory is available .it becomes available simply by extending existing bicgstab implementations .we investigated the efficiency of this method in realistic applications and found that , for sources other than point sources , the improvement depends heavily on the values of the quark masses .the improvement is generally higher for masses which lie closer together .the numerical stability of convergence of the shifted systems is found to be very good so that this method is feasible in 32-bit arithmetic .the application of this method to clover fermions is possible . using simple linear polynomial preconditioningwe can stabilize the solver sufficiently even for relatively small quark masses .we proposed a way to apply higher order polynomial preconditioners to shifted matrix solvers which may be neccessary in the case of very small quark masses .roundoff errors might however in some cases affect the convergence of the shifted systems so that the final residuals have to be checked .this work was supported in part by the u.s .department of energy under grant no .de - fg02 - 91er40661 .computations were performed on the cray t3d at the san diego supercomputer center .i would like to thank s. pickels , c. mcneile and s. gottlieb for helpful discussions .van der vorst , bi - cgstab : a fast and smoothly converging variant of bi - cg for the solution of nonsymmetric linear systems , siam j. sci statist .comput . 13( 1992 ) 631 , m. h. gutknecht , variants of bicgstab for matrices with complex spectrum , siam j. sci .comput . 14( 1993 ) 1020
we investigate the application of krylov space methods to the solution of shifted linear systems of the form ( a + \sigma ) x = b for several values of \sigma simultaneously , using only as many matrix - vector operations as the solution of a single system requires . we find a suitable description of the problem , allowing us to understand known algorithms in a common framework and to develop shifted methods based on short recurrence methods , most notably the cg and the bicgstab solvers . the convergence properties of these shifted solvers are well understood and the derivation of other shifted solvers is easily possible . the application of these methods to quark propagator calculations in quenched qcd using wilson and clover fermions is discussed and numerical examples in this framework are presented . with the shifted cg method an optimal algorithm for staggered fermions is available .

iuhet-353 , december 1996

krylov space solvers for shifted linear systems

beat jegerlehner

indiana university , department of physics , bloomington , in 47405 , u.s.a .
in wind instruments , a single transition between two successive notes often requires the coordinated movement of two or more fingers ( for simplicity , we shall refer to all digits , including thumbs , as fingers ) .one of the reasons why players practise scales , arpeggi and exercises is to learn to make smooth , rapid transitions between notes , without undesired transients . for players ,this motivation is particularly important for slurred notes , where the transition involves no interruption to the flow of air , and should ideally produce no wrong notes and no detectable silence between the notes . in practice , finger movements are neither instantaneous nor simultaneous , and it takes a finite time to establish a new standing wave in the instrument bore .slurred transitions involving the motion of only a single finger can produce transients that result from the finite speed of the finger that pushes a key in one direction , or of the spring that returns it to its rest position . for transitions involving the motion of two or more fingers , there can be an additional transient time due to the time differences between the movements of each finger , which invites the question : how close to simultaneous can flutist finger movements be , and are there preferred finger orders in particular note transitions ? although previous studies have monitored finger motion on the flute , they have been concerned with the flute as a controller for electronic instruments . the midi flute developed at ircam initially used optical sensors , but the final version used hall effect sensors with magnets attached to the keys .the `` virtually real flute '' used linear hall effect sensors and could detect the speed of key transitions .the hyper flute employed a large number of sensors , but only two keys had linear hall effect sensors . used infra - red tracers attached to a player s fingernails and recorded their motion with a video camera .although suitable for detecting the gestures of a player , this approach does not provide sufficient resolution ( in either space or time ) to monitor the detailed fingering behaviour occurring in note transitions .this paper reports explicit measurements of the times taken for keys to open and to close under the action of fingers and springs , and determines the key order and relative timing in transitions involving multiple fingers .the flute was chosen partly because of the similarity in size and construction of most of its keys , which means that keys move with approximately similar speeds , and also that similar sensors could be used for each .these sensors monitored the position of each key using reflected , modulated infra - red radiation and had the advantage that they did not alter the mass of keys nor affect their motion .the flute has the further advantage that measured acoustic impedance spectra are available for all standard fingerings , in addition to acoustical models of all possible fingerings .in most woodwind instruments , the played note is determined in part by the combination of open and closed holes in the side of its bore , which is called a fingering .each fingering produces a number of resonances ( corresponding to extrema in the input impedance ) , one or more of which can be excited by a vibrating reed or air jet . 
on many modern woodwindsthere are more holes in the instrument than fingers on two hands .consequently some keys operate more than one tone hole , often using a system of clutches , and some fingers are required to operate more than one key .the acoustical impedance spectrum of the flute for a particular fingering can be predicted by acoustical models , and important details of its behaviour can be deduced from this .the virtual flute is an example of a web service using such an acoustical model to predict the pitch , timbre and ease of playing .this service however does not yet give indications on the playing difficulties associated with transitions between two different fingerings .the embouchure of the flute is open to the air and so the instrument plays approximately at the minima in the input impedance at the embouchure .the player selects between possible minima by adjusting the speed of the air jet , and consequently a periodic vibration regime is established with fundamental frequency close to that of a particular impedance minimum or resonance .fine tuning is achieved by rotating the instrument slightly , which has the effect of changing the jet length , the occlusion of the embouchure hole and thus the solid angle available for radiation , thereby modifying the acoustical end effect .changing from one fingering to another usually changes most of the frequencies of the bore resonances and consequently also the note played . discuss flute control parameters in detail . in some cases ,no fingering changes are required : the player can select among different resonances by adjusting the speed and length of the jet at the embouchure .thus the standard fingering for f4 is used by players to play the notes f4 and f5 , ( and can also play c6 , f6 , a6 and c7 ) .many of the transitions between two notes separated by one or two semitones involve moving only a single finger . provided that fingers or springs move the key or keys sufficiently quickly, one would expect no strong transients when slurring such transitions .small transient effects can always arise because of the fact that the strength of the resonances in the bore of the flute are somewhat reduced when a key is almost , but not completely closed ( an example is given later in fig . [fig : freqamplkeyvstime ] ) .several mechanisms can produce undesirable transients in note transitions .one of particular interest to us may occur when a slurred transition involves the motion of two or more fingers .the speed of moving keys is limited by the speed of fingers in one direction and of the key springs that move them in the opposite direction .further , the acoustic effects produced by the motion of a key are not linear functions of key displacement , so it is difficult to define simultaneous motion , particularly for keys moving in opposite directions . 
in practice , fingers will always move at slightly different times and with different speeds ( how different these times are is one of the questions that we address ) . this means that , between two notes that involve the motion of more than one finger , there are several possible intermediate discrete key configurations , as well as the continuous variations during key motion . furthermore , these different intermediate states may have quite different acoustic properties , which are not necessarily intermediate between those of the initial and final configurations . we divide intermediate fingerings into three categories :

- safe : if the minima of the input impedance for the intermediate fingering lie at frequencies close to ( or intermediate between ) those of the initial and final states , and have similar magnitudes , then a steady oscillation of the jet can be maintained during a slurred transition . a transition that passes transiently through such a fingering can be called a safe transition and is illustrated schematically in fig . [ fig : unsafea ] . this result is discussed in more detail in section [ sec : two - fing - trans ] .
- unsafe ( detour ) : if one of the intermediate states exhibits an impedance minimum at a frequency unrelated to both notes in the desired transition , it may cause an undesired note to sound briefly during the transition ( see fig . [ fig : unsafea ] ) .
- unsafe ( dropout ) : if there are no deep impedance minima at frequencies close to those of the initial and final states , jet oscillation may not be maintained during a slurred transition , because the jet length and speed are inappropriate for the frequency of the nearest minimum . in this case , the intensity of the tone will decrease substantially . fig . [ fig : unsafeb ] shows an example of a note transition for which one of the intermediate fingerings has a weak resonance .

we could also define a transition as ` unsafe ( trapped ) ' when the second fingering has , as well as the resonance usually used , a strong resonance at a frequency close to that of the first note and so may ` trap ' the jet ; examples of this situation are given in the literature .

[ figure caption , fig : statese5f5 : transitions between e5 and f#5 . in the safe transition , all intermediate fingerings produce notes with a pitch very close to that of the initial or final note . in the unsafe ( detour ) transition , the tone is not interrupted , but the transitory notes are not close to e5 or f#5 . the lower frame shows possible intermediate key states and transitions in that transition . the safe paths are shown with dark arrows , and labelled with the number of the key that moves . white circles indicate open tone holes , black indicates holes closed by a key directly under the finger , and grey shows those closed indirectly by the mechanism . the legends show a common notation for these fingerings . ]

[ figure caption : transitions between f6 and f#6 . the top graph shows the impedance spectra , showing the minima that play f6 and f#6 , and the second graph shows the impedance spectra for the possible intermediate fingering states . the possible transition paths are shown in the schematic graph of frequency vs time . the fingerings involved are shown in the lowest schematic . the keys controlled by the right hand are emphasised and the rest shown pale . ]

in some cases , such as the c6 to d6 transition ( also analysed later ) , there is no safe intermediate fingering , so unless the fingers move nearly simultaneously , audible transients are expected . of course , even for our definition of safe transitions , transients are expected in the flute sound : it takes time for a wave to travel down the bore , to reflect at an open tone hole and to return , and several such reflections may be required to establish a standing wave with a new frequency . finally , it should be remembered that some transients are an important part of the timbre of wind instruments and may be expected by musicians and listeners .

an optical method was chosen because , unlike magnetic systems , there was no need to attach magnets or other material to the keys and thus alter their mass or feel under the fingers . mechanical contacts suffer reliability problems and exhibit bounce and/or hysteresis . a reflective photosensor was glued below the edge of each key so that the intensity of light reflected from the edge of the key increased monotonically as the key was closed ( see fig . [ fig : flutedemo ] ) . the chosen sensors ( kodenshi sg-2bc ) were small ( 4 mm diameter ) , low mass ( 160 mg ) , and combined an infrared light emitting diode ( led ) with a high - sensitivity phototransistor ( peak sensitivity at 940 nm ) . rather than relying on the dc level of reflected light , the sensor signal was distinguished from the background illumination by modulating the output of the led in the sensor with a 5 khz sine wave . the phototransistor in the sensor was connected as an emitter follower with an rc stage to filter out dc variations due to changes in lighting conditions . because the background illumination and degree of shading can vary for each experiment , the dc bias was adjustable to provide adequate dynamic range for the 5 khz signal without clipping ; this was set using a separate 8-element bar led display for each channel . this procedure removes most , but not all , of the dependence on background illumination : a small component remains because of non - linearities . the sensor signals from 16 keys and the sound were recorded on a computer using two motu 828 firewire audio interfaces ( 24 bit at 44.1 khz ) .
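as an illustration of how the modulated sensor signal can be turned into a key - position trace , the following sketch demodulates one recorded channel by band - pass filtering around the carrier and taking the analytic - signal envelope . the filter orders and bandwidths are illustrative choices made here , not values taken from the measurement chain described above .

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 44100.0       # sampling rate of the audio interface
F_MOD = 5000.0     # led modulation frequency

def key_position_envelope(sensor, fs=FS, f_mod=F_MOD, bandwidth=400.0, f_smooth=200.0):
    """recover a signal proportional to key position from one modulated
    photosensor channel: band-pass around the 5 khz carrier, take the
    analytic-signal magnitude, then low-pass to remove carrier ripple."""
    sos_bp = butter(4, [f_mod - bandwidth, f_mod + bandwidth],
                    btype="bandpass", fs=fs, output="sos")
    carrier = sosfiltfilt(sos_bp, sensor)
    envelope = np.abs(hilbert(carrier))
    sos_lp = butter(4, f_smooth, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, envelope)
```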
because the same hardware was used to sample both the sound and the output of the key sensors, we can be certain that any delays in sampling due to latencies and buffering will be identical and consequently will cancel exactly when timing differences are calculated .the sensor output as a function of position was measured in experiments in which the sensor output was recorded while a key was slowly closed using a micrometer screw .these showed that the amplitude of the modulated output from the sensor was a monotonic , but nonlinear function of key position . in a further series of experiments ,the flute was played by a blowing machine while a key was slowly closed by the micrometer .the frequency and intensity are plotted as a function of sensor output in fig .[ fig : freqamplkeyvstime ] .the shape of this plot is explained by the compressibility of the key pad .once the pad makes contact with the rim of the tone hole chimney , the playing frequency reaches its lowest value and remains unchanged as the pad is compressed while the key is further depressed .( the sound level is high when the key is fully open and also when the key is closed with the pad compressed .the sound level is lower , however , when the hole is almost closed by the uncompressed pad , which is presumably a consequence of leaks between the pad and the rim , this can be seen in fig .[ fig : freqamplkeyvstimemulti ] ) measurements such as that shown in fig .[ fig : freqamplkeyvstime ] can not be simply used to calibrate measurements made with players rather than blowing machines .the main reason is that , for some players , the tip of the finger occasionally overhangs the key and contributes a small component to the reflection measured by the sensor .the sound produced by the flute was recorded using a rhode nt3 microphone placed on a fixed stand and digitized using one input of the motu audio interfaces .the midpoint of the flute was approximately 50 cm away from the microphone .the frequency was calculated from the recorded sound file using praat sound analysis software , using autocorrelation with an analysis window of 15 ms .the time resolution for frequency transitions was estimated to be 8 ms by studying a semitone discontinuity in a sinusoidal signal ( i.e. between 1000 hz and 1059 hz ) .the intensity and sound level were also extracted using praat .one of the purposes of this study concerns the relative timing of the open / closed transitions , so it is necessary to relate a defined value of sensor output to the effective opening / closing of the associated key .most of the variation in sound frequency occurs close to the point of key closure , so the saturation point in fig .[ fig : freqamplkeyvstime ] was considered as a possible choice . in practice , because of the variations described earlier , this value would be somewhat different for each flutist , key and level of background illumination . instead , guided by curves such as those shown in fig .[ fig : freqamplkeyvstime ] , we set the reference value for a key transition between 70% and 85% of the total variation in sensor output , the exact value depending on the key and situation ( see fig . [fig : freqamplkeyvstime ] ) . 
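the reference - value criterion just described can be sketched in a few lines . the default fraction of 0.75 below sits in the 70%-85% band quoted above ; the sign convention ( larger demodulated signal = more closed ) and the normalisation to the full sensor swing are assumptions of this sketch .

```python
import numpy as np

def key_event_times(position, t, ref_fraction=0.75, closing=True):
    """times at which a key is considered effectively closed (or opened),
    defined as the crossing of a reference value placed at a given fraction
    of the total sensor swing."""
    lo, hi = np.min(position), np.max(position)
    norm = (position - lo) / (hi - lo)
    if not closing:
        norm = 1.0 - norm                      # look for opening instead of closing
    ref = ref_fraction
    crossings = np.where((norm[:-1] < ref) & (norm[1:] >= ref))[0]
    # linear interpolation between the two samples bracketing each crossing
    frac = (ref - norm[crossings]) / (norm[crossings + 1] - norm[crossings])
    return t[crossings] + frac * (t[crossings + 1] - t[crossings])
```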
determining the duration of effective key opening and closing is also complicated by the saturation of sensor output described above .after examination of a range of traces under different conditions , we chose to measure the time taken between sensor signals of 30% and 70% of the maximum range of the sensor output .this rate of change was then multiplied by a factor of 100/40 to produce a measurement of the effective key closing or opening time .examples are shown in fig .[ fig : freqamplkeyvstimemulti ] .an automated software routine was used to detect and characterise the key movements in each recording .first , it detected each time the output of a key sensor passed through a value midway between neighbouring fully closed and fully open states .these then served as initial starting points to find the nearest times when a key was effectively open or closed .these allowed calculation of the duration of each open / closed or closed / open transition .we estimated an uncertainty in each individual measurement by determining how long it took each key sensor output to vary by % around the effective open or closing value .this value , was on average 1.4.4 ms ( n=2069 ) for a closing key and 4.1.6 ms ( n=1639 ) for an opening key .a single key may be associated with several different note transitions , so the key movements , detected as described above , need to be associated with the note transitions of interest . in order to find note transitions automatically ,the detected pitch was quantised to the set of notes used in the exercise .these quantised data were smoothed using a filter which calculates the median value within a window of 50 ms so that only sufficiently long values corresponding to note transitions are detected . for each note transition thus detected , the corresponding nearest transition reference times for key motion are found .the pitch and key position detection are shown for two particular exercises in fig .[ fig : freqamplkeyvstimemulti ] . 6 is shown in parts * d * to * g*.their sensor values are shown in * f * and * g*. the dark shaded bands again show the uncertainty in the key transition value and time .the pale shading shows the time between 30% and 70% of the key sensor value , which we discuss in the text .again , the arrows show the interval between note transition and effective key opening / closing .the difference between these arrows gives the delay between the two keys , here about 8 ms . 
]the following parameters were calculated for each key transition associated with a given note transition : the effective duration of the key transition , the offset in time between the key transition and the pitch transition ( see fig .[ fig : freqamplkeyvstimemulti ] ) and an estimate of the uncertainty in the key closing or opening time .the 11 players participating in this experiment were divided into 3 categories according to experience .professionals ( players p1 to p7 ) are those with more than 8 years experience , and for whom flute playing is a significant part of their current professional life , either as performers or teachers .amateurs ( a1a3 ) have between 3 and 8 years of flute playing experience , and regularly play the flute as a non - professional activity .beginners ( b1 ) have less than 3 years experience .these experiments were conducted in a room with a low reverberance that was significantly isolated from external noise .all measurements were performed on a specific flute from the laboratory , a c - foot pearl flute fitted with a sensor near each key .the flute is a plateau or closed key model , i.e. the keys do not have holes that must be covered by the fingers and it has a split e mechanism , which means that there is only one hole functioning as a register hole in the fingering for e6 .players could use their own head joint if desired .a typical session took about 75 minutes .each subject was asked to warm up freely for about 15 minutes , in order to become accustomed to the experimental flute , the change in balance and some awkwardness caused by the cables .they also used this time to rehearse the particular exercises in the experimental protocol .the musical exercises , written in standard musical notation , were delivered to the subjects approximately one week before the recording session .some players did not complete all exercises .the players were recorded performing a selection of note transitions , scales , arpeggi and musical excerpts . except for the musical excerpts ,each written exercise was performed at least twice , at two different tempi : players were asked to play `` fast '' ( explained to them thus : as fast as possible while still feeling comfortable and sure that all the notes in the exercise were present ) and `` slow '' ( in a slow tempo but still comfortable to perform the exercise once in a single breath ) . in the case of the fast performance , the musician was asked to repeat the exercise as many times as possible ( but still comfortable ) in a single breath .the times taken for each key to be effectively depressed or released ( i.e. to make the relevent key open or close depending upon its mechanism ) are shown in table [ tab : dur ] . in all cases ,pressing times are quicker , perhaps because the finger can transfer momentum gained in a motion that may begin before contact with the key , whereas a released key has to be accelerated from rest by a spring . the large variation among the durations for finger activated motion may include the variations in strength and speed of the fingers in the somewhat different positions in which they are used .there is rather less variation among the mechanical release times .however , some keys differ noticeably from the others .large variation in the latter involves the key mechanism : some keys move alone , others in groups of two or three , because of the clutches that couple their motion . 
in particular , the left thumb key and the right little finger ( d key ) have stiffer springs , so their release movement is significantly faster ( ) than for other keys .overall , slow tempi produce significantly slower ( ) key press times ..durations ( average standard deviation and sample size in brackets ) of different key movements [ cols=">,<,^,^ " , ] when only one key is involved in a note transition , the pitch change is a direct consequence of the motion of that key . as explained above , the transition from one note to another is not discrete .the frequency of the impedance minimum corresponding to the fundamental of one note is shifted gradually as the opening cross - section of the hole is increased .a relatively small range of key positions , near the fully closed limit is associated with most of the change in pitch ( fig .[ fig : freqamplkeyvstime ] ) : variations in position near the fully open state have much less effect .the delay between detected key motion and frequency change was 1.9.8 ms ( n=1303 ) , which is less than the uncertainties in the measurements : the uncertainty in frequency change is several ms and the experimental uncertainty in key motion a few ms ( fig .[ fig : freqamplkeyvstimemulti ] ) .( the time for radiated sound from the flute to reach the microphone was only about 1 or 2 ms . ) when two keys are involved in a note transition , there are two possible intermediate key configurations due to the non - simultaneous movement of the fingers .examples involving the index and ring fingers of the right hand moving in opposite directions are the transitions f4 to f , f5 to f and f6 to f , but only f6/f involves an unsafe intermediate . using x to indicate depressed and o unpressed , and only indicating the first three fingers of the right hand, this transition goes from xoo to oox , with the possible intermediates being xox and ooo , as shown in fig .[ fig : unsafeb ] .the fingering with both keys depressed ( xox ) plays a note 14 cents flatter than f6 .the fingering with neither key depressed ( ooo ) is more complicated .if the speed of the air jet is well adjusted to play f6 or f , then this fingering does not play a clear note ( see fig . [fig : oscillograms ] ) . if the speed of the air jet is somewhat slower than would normally be used to play these notes , then it will play a note near b5 .so , apart from the ideal , unrealisable , ` simultaneous ' finger movement , there can only be one safe path for the transition xoo to oox and that goes via the fingering xox ( in which both keys are briefly depressed ) : the slight perturbation in pitch can not be detected in a rapid transition . 6 transitions ( nominally 1397 to 1480 hz ) , with key sensor signals shown below .spectra were taken with windows of 1024 samples ( 23 ms ) , separated by 256 samples ( 6 ms ) , gray levels are proportional to the logarithm of the amplitude in each frequency bin .the example shown on top with both keys closed during the transient is a safe transition ( see text ) , and that on the bottom with both keys open during the transient is an unsafe ( dropout ) .the former shows a brief and relatively continuous transient . in the latter ,the harmonic components of the sound are interrupted for tens of ms . 
during this transient , a larger band component appears at about 12 khz , unrelated to the resonances of the bore , corresponding to the edge tone produced in the absence of a suitable resonance in the unsafe transition . ]

we have also sought to compare intermediate states used during different exercises involving the same transition . players were asked to rapidly alternate between the two notes as well as to play the transition in the context of a scale . the exercise of rapidly alternating between the xoo and oox fingerings is an artificial one : for such rapid alternations , players often use special trill fingerings , in which intonation and/or timbre are sacrificed in return for ease of rapid performance . to perform this trill , a flutist would normally alternate the xox and oox fingerings , i.e. transform it into a single - finger transition using the ring finger only . thus the exercise is one that flutists will not have rehearsed before this study . by contrast , the same key transition in the context of a scale ( here the b minor scale ) is one which experienced flutists will have practised over years . considerable differences were found between slow and fast trials ( data not shown ) . the results for all players are presented in fig . [ fig : histograms ] .

[ figure caption , fig : histograms : delays between the two keys in the f6-f#6 transition in alternations ( hollow rectangles ) and scale exercises ( filled rectangles ) for different players . rectangles represent the range between the first and third quartiles with a central line representing the median . error bars extend to the range of data points not considered as outliers . outliers are represented as crosses . ]

[ sec : statistics ] fig . [ fig : histograms ] shows that the descending transition ( f#6 to f6 ) has a relatively consistent behaviour . for note alternations , professional musicians used a safe finger order 72% of the times ( that is , transiting through the xox state where both keys are depressed ) . although they sometimes ( 48% ) used the unsafe finger order in the scale context , the inter - key delay was on average 3.8.6 ms ( n=210 ) , so that this transition is close to simultaneous .
in the ascending case ( f6 to f ) of the alternation exercise professionalsused safe transitions in 57% of cases , although the behaviour was less homogeneous among players ( p<0.001 in the f6/f ; p=0.02 in f/f6 ) , but in the scale context musicians used safe transitions 97% of the time ( = -26 ms ( n=33 ) ) .although the transition from f4 to f uses the same fingerings as f6 to f , in this case all transition pathways are safe .interestingly , most of the professionals tend to use the finger order which would be safe for f6 to f with similar time differences between keys ( p=0.02 for f / f and p=0.13 for f/f ) , even though there is no unsafe intermediate for f4 to f .for one professional ( p5 ) and most amateurs , the time differences did change significantly but with no consistent direction ( either becoming more or less simultaneous ) .the finger order remained the same as for the other subjects .the example we will use is the e5 to f transition , which involves three different keys ( 11 , 12 and 13 ) , moved respectively by the index , middle and ring fingers of the right hand , with the little finger remaining depressed throughout . using the notation described above and neglecting other keys , e5 is played using xxo ( ) and f is played oox ( ) .thus the fingers are lifted from the keys 11 ( index ) and 12 ( middle ) and key 13 ( ring ) is depressed .discounting the idealised simultaneous movement of the fingers , there are six possible pathways involving discrete transients . from an acoustical point of view , only one of these is safe : key 11 moves first , then key 13 then key 12 ( i.e. xxo , oxo , oxx , oox ) .this path is safe because oxo and oxx both play slightly flat versions of f .conversely , when descending from f to e5 , the only safe transition is 12 , 13 and then 11 ( oox , oxx , oxo , xxo ) .any other path involves a fingering that produces a note near g5 ( ooo ) , f5 ( xox or xoo ) or d5 ( xxx ) , which may or may not be noticeable depending on the duration of the intermediate .the paths are shown in fig .[ fig : statese5f5 ] .the results for all flutists for this transition are summarised in fig .[ fig : unsafetyoveral ] .for this transition , most professionals are nearly safe , using pathways that are unsafe for only 20 ms .the durations in the intermediate states vary substantially among players . even though this context ( a d major scale ) is one that flutists would have practised many times , the delay times are substantially longer here than for the two finger , contrary motion example shown above , which uses two of the same fingers . from examining the average and standard deviations of the inter - key time difference, we also observe that some of the subjects ( p2 and p5 ) have significantly smaller time differences ( data not shown ) .to summarise , professionals spend an average of 13 9 ms ( n=225 ) in unsafe transitions whereas amateurs spend 25 16 ms ( n=88 ) , which are significantly different ( ) . 5 in a d scale ( three fingers ) , and f6-f in a b minor scale and also in an alternation exercise ( two fingers ) .] the three long fingers and the thumb of the left hand move in the transition from c6 ( oxoo , or ) to d6 ( xooo or ) , with the index finger releasing a key and the others depressing keys . 
here, there is no completely safe path of transient fingerings , because the intermediate states involving a change in position of any two fingers all sound a note different from c6 or d6 .there are partially safe paths , for example , starting with c6 and moving first either the middle or ring finger still produces a note very close to c6 .average times spent in unsafe transitions , measured in the context of a g major scale , are shown in fig .[ fig : unsafetyoveral ] . as with the previous example, most flutists exhibited some preferred paths , but they are not consistent among players , and sometimes the same player may use different finger orders while playing quickly or slowly . to summarise , all professional players spend less time in unsafe configurations than do the amateurs : unsafe intermediate states last for an average of 34 14 ms ( n=65 ) for amateurs , 21 11 ms ( n=133 ) for professionals .the difference is significant ( ) .thus , for both professionals and amateurs , the time spent in unsafe configurations is larger when four fingers rather than three are involved .typical values of delays between fingers are about 10 to 20 ms for transitions that involve the motion of two or more fingers and significantly longer for amateurs than for professionals . in multi - finger transitions ,the most frequent finger orders are often those that avoid or minimise the use of transient fingerings that are unsafe , as defined above .this is particularly true for transitions involving two fingers , but less evident in more complicated transitions . we can compare the finger motion delays for the four , three and two - finger changes discussed above .( for one finger , the delay by definition is zero , as it does not include the time for key motion . )this is shown in fig .[ fig : unsafetyoveral ] , which summarises the results obtained in the examples studied in this article .these results can be related to similar studies on repetitive tapping movements in non - musical exercises . in these , single finger movements show a variability in inter - tap intervals of about 30 ms , increasing to 60 ms in the most agile fingers when two fingers are involved .when ring and little fingers are involved , this value is increased to 120 ms .these high values suggest that the inaccuracies in multi - tap intervals are mostly due to the duration of the finger motion rather than to synchronisation between the motion of two fingers , but no references were found for measurements of these values .some informal tests on our subjects show that when no sound output is involved the standard deviation in delays between keys can increase from approx . 20 to 40 ms , independently of the proficiency .these values suggest that in musicians the sound feedback improves the performance of the gesture .finally , delays between fingers are unimportant if their effect on sound can not be detected . studied perceptual attack times in different pairs of instruments .these have variations that range from 6 to 25 ms , but in the flute they are about 10 ms .minimum durations needed to identify pitch are typically 4 periods ( less than 10 ms for flute notes ) .for single key transitions , the transition time is simply determined by the time for a finger to push a key in one direction , typically 10 ms , or of a spring to push it in the other , typically 16 ms .when more than one finger is involved , the delay times between individual key movements must be added to this . 
for a transition involving only two fingers , and thus only two possible pathways , players generally coordinate their fingers so that the unsafe pathway is avoided . for some transitions , there is no safe path . professionals , unsurprisingly , are more nearly simultaneous than amateurs , and for both groups the total delay increases with the number of fingers moving in contrary motion .
motion of the keys was measured in a transverse flute while beginner , amateur and professional flutists played a range of exercises . the time taken for a key to open or close is typically 10 ms when pushed by a finger or 16 ms when moved by a spring . delays between the motion of different fingers were typically tens of ms , with longer delays as more fingers are involved . because the opening and closing of keys is never exactly simultaneous , transitions between notes that involve the movement of multiple fingers can occur via several possible pathways with different intermediate fingerings . a transition is classified as ` safe ' if it can be slurred from the initial to the final note with little perceptible change in pitch or volume ; other transitions are ` unsafe ' and may involve a transient change in pitch or a decrease in volume . in transitions with multiple fingers , players on average used safe transitions more frequently than unsafe ones . professionals exhibited smaller average delays between the motion of fingers than did amateurs .
nature has followed mathematical principles of structural organization in the construction of macromolecular configurations .our proposal in the present work is the modelling of the folded stage of proteins by some combinatorial optimization techniques associated to euclidean full steiner trees .this means that henceforth we take the 3-dimensional euclidean space as our metric manifold .the analysis to be undertaken can be summarized by the trial of obtaining the potential energy minimization of a protein structure through the problem of length minimization of an euclidean steiner tree .our fundamental pattern of input points will be given by sets of evenly spaced points along a right circular helix of unit radius .we have , where is the angular coordinate and stands for the pitch of the helix .we also use the result of steiner points belonging to another helix of the same pitch and smaller radius or where the function above is easily obtained from the requirement of meeting edges at an angle of on each steiner point . to be rigorous ,we should write , where the max above should be understood as a piecewise choice of the maple software .we now introduce a generalization of the formulae above by thinking on subsequences of input and steiner points , corresponding to non - consecutive points . these subsequences are of the form : where , are the number of intervals of skipped points before the present point on each subsequence and is the number of skipped points .we also have : ;\,\,\,\,\,\,\,\,\,1\leq l_s\leq l_{s max } = \left[\frac{n - k-2}{m}\right]\\ \nonumber\\0\leq j\leq m-1 , \,\,\,\,\,\,\,1\leq k \leq m \nonumber\end{aligned}\ ] ] and the square brackets stand for the greatest integer value .the sequences corresponding to eqs.(1.1 ) and ( 1.2 ) are of course included in the scheme above .they are and , respectively . 
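as a numerical companion to the construction above, the sketch below generates an evenly spaced helical point set and computes the length of its euclidean minimum spanning tree, the quantity against which the steiner tree length is compared when forming the steiner ratio. the explicit parametrization (cos jω, sin jω, αjω) is an assumption of the sketch (the article's own formula is not reproduced here), and the helper names are ours.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def helix_points(n, omega, alpha, radius=1.0):
    """n points evenly spaced (angular step omega) along a circular helix."""
    j = np.arange(n)
    return np.column_stack((radius * np.cos(j * omega),
                            radius * np.sin(j * omega),
                            alpha * omega * j))

def mst_length(points):
    """length of the euclidean minimum spanning tree of a 3-d point set."""
    return minimum_spanning_tree(squareform(pdist(points))).sum()

pts = helix_points(n=200, omega=0.5, alpha=0.3)
print('spanning tree length:', mst_length(pts))
# the steiner ratio compares the length of the steiner minimal tree with this
# value; the article obtains that ratio analytically for helical point sets,
# whereas a purely numerical treatment would require an smt solver.
```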
in the general case, we can define new sequences of and points instead those given by eqs.(1.1 ) and ( 1.2 ) .we shall have respectively , the present development is independent of a specific coordinate representation of the points .if we now assume helical point sets whose points are evenly spaced along right circular helices , we get the function is obtained through the same requirement of meeting edges at on each steiner point .we have , analogously , where in figure ( 1 ) below we show some sequences of input points for .[ hp ] from eqs.(2.5 ) and ( 2.6 ) and figure ( 1 ) , we can write for the length of the spanning trees +(m-1)\sqrt{\alpha^2\omega^2+a_1 + 1}.\ ] ] the length of the steiner trees is then \right)\\ & + & \sqrt{m^2\alpha^2\omega^2+r_m(\omega,\alpha)(a_m+1)}\,\,\,.\,\,\sum_{k=1}^{m}\left[\frac{n - k-2}{m}\right]\nonumber\\ & + & 2\sqrt{m^2\alpha^2\omega^2+(1-r_m(\omega,\alpha))^2+r_m(\omega,\alpha)(a_m+1)}\nonumber.\end{aligned}\ ] ] after using some useful relations like =n - m\ ] ] =n - m-2\ ] ] and taking the limit for , we get by following the prescriptions for writing the steiner ratio , we can write for the steiner ratio function of very large helical point sets with points evenly spaced along right circular helices where the process above should be understood in the sense of a piecewise function formed by the functions corresponding to the values .(2.15 ) is our proposal for a steiner ratio function ( srf ) .it allows for an analytic formulation of the search of the steiner ratio which is then defined as the minimum of the srf function , eq.(2.15 ) .actually , there is a further restriction to be imposed on function ( 2.15 ) in order to characterize it as an useful srf function .this restriction is that we should consider only full steiner trees , i.e. , non - degenerated steiner trees in which there are exactly three edges meeting at each steiner point .this restriction can be imposed on the spanning trees , by requesting that the angle between consecutive edges formed with the points as vertices should be lesser than .we have in figure ( 2 ) below we can see the restrictions corresponding to eq.(2.16 ) , for .the horizontal line is .[ hp ] the spanning tree is the only one which corresponds to full steiner trees in a large region of the -interval convenient for our work .the other trees , correspond to forbidden regions in the same -interval .the corresponding steiner trees to be obtained from the positions of the points and are necessarily degenerate and should not be taken into consideration .thus , the prescription ( 2.15 ) for the srf function turns into where the function ( 2.17 ) has a global minimum in the point and for a proof see .the last value corresponds to the famous main conjecture of ref . about the value of the steiner ratio in 3-dimensional euclidean space .it lead us also to think that nature has solved the problem of energy minimization in the organization of intramolecular structure by choosing steiner trees as an intrinsic part of this structure .in the following we continue to work in a manifold with an euclidean distance definition .let us now introduce a tree as that of figure ( 3 ) [ hp ] there are input points ( position vectors ) and steiner points ( position vectors ) .if is not an integer number , there is not a tree with these , values . 
in figure( 3 ) with , we assume to be a feasible value .the knowledge of the steiner problem tell us that this tree structure is not stable since its total length can be reduced by decreasing the number .the usual stable steiner problem corresponds to . in this sectionwe shall give another proof of this fact by exploring the concept of a steiner network with physical interaction among its vertices .the structure depicted at figure ( 3 ) is a representative of the network which models the fundamental interactions inside a biomacromolecule .let us consider the interaction of this structure with similar structures .let the resulting interaction forces as applied to input and steiner points be , , respectively and let be the length of an edge between a steiner point and an input point on its neighbourhood .we have the following identities : where , , , stand for the modulus of the parallel components to the edges of the resulting forces , , respectively .the total length of the tree above is from eqs.(2.20 ) , ( 3.1 ) , we can write the total length in the form we now specialize this set of applied forces at the vertices as being collinear with the edges joining them , or is the elastic constant .the assumption of local equilibrium of these forces lead to the conditions : this is a set of generalized fermat problems or steiner problems . for this equilibrium configuration , eq.(3.3 )turns into the stability of this equilibrium configuration under a variation of the applied forces can be tested by latexmath:[\[\delta l=\sum_{j=1}^{n}\mathbf{r}_j\,\,\cdot\,\,\delta\hat{f}_{j we take cartesian coordinates for the vectors , and we consider the three independent variations , , in the coordinates of the steiner points .the corresponding variations in the length of the tree are of the form (x_j - x_{s_k})}{||\mathbf{r}_j-\mathbf{s}_k||^3}=0\ ] ] and two other analogous expressions for the variations , . from the arbitrariness of these variations we can write , }{||\mathbf{r}_j-\mathbf{s}_k||^3}=0\,.\ ] ] we can also write we now write the position vectors , for the configuration depicted at figure ( 3 ) .the points can be taken as evenly spaced along right circular helices which radii are 1 and , respectively .we have , is a function which can be derived from the equilibrium conditions in eqs.(3.7)(3.9 ) . for is only one solution given by this solution coincides with eq.(1.3 ) . for , another useful solution could be obtained from the equations : curiously , nature has chosen this solution for to keep sure of partial equilibrium of side chains between the amide plane conformation in proteins .for the configuration given by eqs.(3.15)(3.16 ) ,eq.(3.15 ) can be written as where the geometrical object can be written in the coordinates of eqs.(3.16 ) and ( 3.17 ) as ^2+\alpha^2\omega^2[2r_p(j\!\!-\!\!1)k\cos(j\!\!-\!\!1\!\!-\!\!k)\omega\!\!-\!\!k^2]}{[1+r_p^2\!\!-\!\!2r_p\cos(j\!\!-\!\!1\!\!-\!\!k)\omega+\alpha^2\omega^2 ( j\!\!-\!\!1\!\!-\!\!k)^2]^{3/2}}.\ ] ] to each -value , there will be a term which dominates the sum above . 
however , we can not have for .this can be seen from the fact for a vertex ( ) there are ( ) nearest external vertices .the sequence of their consecutive position vectors is and the requirement corresponds to an integer -value only for .this case which is known to correspond to the most stable problem has as a possible configuration the figure ( 4 ) below [ hp ]we have stressed on some past publications that there is a self - consistent treatment of the intramolecular organization of biomacromolecules in terms of steiner networks .this representation is able at deriving information concerning its stability and evolution .the supporting facts for stability are now well - established and the ideas related to the evolution of macromolecules are in their way to be developed and accepted as a preliminary theory of molecular evolution .the missing subject is a full description of geometric chirality and in order to unveil some of its properties , we have proposed to study the influence of some proposals for chirality measure on the dynamics of optimization problems .these are aimed at studying the structures which energy is around the assumed energy of the minimum solution and the variation process of the chiral properties in the neighbourhood of this minimum .we think that this research line is worth of serious scientific work and should take advantage of the best efforts of very good scientific researchers for some years .mondaini , r.p . : the disproof of a conjecture on the steiner ratio in and its consequences for a full geometric description of macromolecular chirality .biomat symp .* 2 * , 101177 ( 2003 ) .
the present work reports the application of optimization techniques derived from the study of euclidean full steiner trees to macromolecules such as proteins . we use the euclidean steiner ratio function ( srf ) as a lyapunov function in order to perform an elementary stability analysis . * keywords : * steiner , biomacromolecular structure , full trees , geometric chirality .
co - expression networks have been frequently used to reverse engineer the whole - genome interactions between complex multicellular organisms by ascertaining common regulation and thus common functions .a gene co - expression network is represented as an undirected graph with nodes being genes and edges representing significant inter - gene interactions .such a network can be constructed by computing linear ( e.g. ) or non - linear ( e.g. ) co - expression measures between paired gene expression profiles across multiple samples . as the first formal and wide - spread correlation measure , pearson s correlation coefficient ( pcc ) , or alias pearson s , is one widely used technique in co - expression network construction .however , all - pairs pcc computation of gene expression profiles is not computationally trivial for genome - wide association study with large number of gene expression profiles across a large population of samples , especially when coupled with permutation tests for statistical inference .the importance of all - pairs pcc computation and its considerable computing demand motivated us to investigate its acceleration on parallel / high - performance computing architectures .pcc statistically measures the strength of linear association between pairs of continuous random variables , but does not apply to non - linear relationship .thus , we must ensure the linearity between paired data prior to the application of pcc . given two random variables and of dimensions each ,the pcc between them is defined as - \bar{u})(v[k ] - \bar{v})}{\sqrt{\sum_{k=0}^{l-1}(u[k ] - \bar{u})^2 \sum_{k=0}^{l-1}(v[k]-\bar{v})^2 } } \label{equation : pearsonr}\ ] ] in equation ( [ equation : pearsonr ] ) , ] .notations are likewise defined for .given a variable pair , the sequential implementation of equation ( [ equation : pearsonr ] ) has a linear time complexity proportional to .moreover , it is known that the absolute value of the nominator is always less than or equal to the denominator .thus , always varies in ] and - \bar{u})^2} ] and there are two boundary cases needing to be paid attention to : one is when and the other is when .when , because no cell in the upper triangle appears before row ; and when , because all cells in the upper triangle are included .in this way , we have defined equation ( [ equation : j ] ) based on our job numbering policy , i.e. all job identifiers vary in and jobs are sequentially numbered left - to - right and top - to - bottom in the upper triangle ( see fig .[ fig : direct ] for an example ) .reversely , given a job identifier ( ) , we need to compute the coordinate in order to locate the corresponding variable pair . as per our definition , we have it needs to be stressed that there is surely an integer value satisfying these two inequalities based our job numbering policy mentioned above . by solving , we get this is because ( ) and thus has two distinct solutions theoretically , and ( ) all values are to the left of the symmetric axis , meaning strictly monotonically decreasing as a function of . meanwhile , by solving , we get similarly , this is because ( ) and thus has two distinct solutions theoretically , and ( ) all values are to the left of the symmetric axis , meaning strictly monotonically decreasing as a function of . in this case , by defining , and , we can reformulate equations ( [ equation : inequality_y_hi ] ) and ( [ equation : inequality_y_low ] ) to be . 
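for reference, here is a compact numpy sketch of the all-pairs computation defined by the pcc formula above: rows are centred and scaled to unit norm, after which the full correlation matrix is a single matrix product. this is a gemm-style formulation in the same spirit as the mkl baseline discussed later, not the lightpcc implementation itself, and it assumes no zero-variance rows.

```python
import numpy as np

def all_pairs_pcc(X):
    """all-pairs pearson correlation for an m x l matrix X
    (m variables, each observed over l samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)                # centre each row
    Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)   # unit-norm rows
    return Xn @ Xn.T                                      # entries in [-1, 1]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 500))    # e.g. 1000 expression profiles
R = all_pairs_pcc(X)
assert np.allclose(np.diag(R), 1.0)
assert np.allclose(R, np.corrcoef(X))   # cross-check against numpy
```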
in this case , because , we know that ] , which satisfies equation ( [ equation : inequality_j ] ) .meanwhile , it is known that there always exists one and only one integer in ( can be easily proved ) and this integer equals , regardless of the value of . since ] .this is reasonable because the runtime of pcc computation is merely subject to and and independent of specific values .they are randomly generated by setting to 16,000 ( 16k ) , 32,000 ( 32k ) or 64,000 ( 64k ) and to 5,000 ( 5k ) .table [ tab : artificial_alglib ] shows the performance comparison between lightpcc and alglib .compared to alglib , lightpcc runs , and faster using one phi and , , and faster using 16 phis for =16k , =32k and =64k , respectively .moreover , the speedup gradually increases as grows larger .it is worth mentioning that many applications require determining the statistical significance of pairwise correlation . for this purpose ,permutation test is a frequently used approach for statistical inference .however , this approach needs to repeatedly permute vector variables at random and compute pairwise correlation from the random data , where the more iterations ( typically iterations ) are conducted , the more precise statistical results ( e.g. -value ) can be expected ( except for the cases of complete permutation tests that rarely happen ) . in this case, the runtime with a specified number of permutation tests can be roughly inferred from the runtime per iteration and the number of iterations conducted ..comparison with alglib on artificial data [ cols= " < ,< , < , < , < , < , < " , ] on this real dataset , compared to alglib , lightpcc runs faster on a single phi , faster on 2 phis , faster on 4 phis , faster on 8 phis , and faster on 16 phis ( see table [ tab : human ] ) . in comparison to single - threaded mkl , lightpcc runs faster on a single phi and faster on 16 phis . when it comes to 16-threaded mkl, lightpcc is not able to outperform the former until phis are used , similar to the assessment using artificial datasets . for this case ,our algorithm reaches a maximum speedup of 4.3 with 16 phis .furthermore , we evaluated mkl on a single phi as well .the experimental result showed that it took 11.6 seconds for phi - based mkl to finish the computation , resulting in a speedup of 2.77 over our algorithm on one phi .as for parallel scalability , lightpcc also demonstrates good performance ( refer to figure [ fig : real ] ) , where compared to the execution on one phi , the speedup is 1.6 on 2 phis , 3.2 on 4 phis , 6.2 on 8 phis and 11.9 on 16 phis , respectively .pcc is a correlation measure investigating linear relationship between continuous random variables and has been widely used in bioinformatics . for instance , one popular application is to compute pairwise correlation between gene expression profiles and then build a gene co - expression network to identify common regulation and thus common functions . in addition , pcc can be applied to some computational problems ( e.g. feature selection and correlation clustering ) in machine learning as well . in this paper ,we have presented lightpcc , the first parallel and distributed all - pairs pcc computation algorithm on phi clusters .it is written in c++ template classes and harnesses three levels of parallelism ( i.e. 
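the bijection between job identifiers and upper-triangle coordinates described above can be written compactly. the sketch below is our own reconstruction, assuming jobs are numbered left-to-right, top-to-bottom over the strict upper triangle (diagonal excluded), so the exact index algebra may differ from the article's in minor conventions.

```python
import math

def coord_to_job(y, x, n):
    """map coordinates (y, x), y < x < n, to the job identifier."""
    return y * (2 * n - y - 1) // 2 + (x - y - 1)

def job_to_coord(j, n):
    """inverse map: job identifier j back to coordinates (y, x)."""
    # rows 0 .. y-1 hold f(y) = y*(2n - y - 1)/2 jobs; take the largest y
    # with f(y) <= j (quadratic root, then adjust for rounding)
    y = int((2 * n - 1 - math.sqrt((2 * n - 1) ** 2 - 8 * j)) / 2)
    while y > 0 and y * (2 * n - y - 1) // 2 > j:
        y -= 1
    while (y + 1) * (2 * n - y - 2) // 2 <= j:
        y += 1
    x = y + 1 + (j - y * (2 * n - y - 1) // 2)
    return y, x

n = 6
for j in range(n * (n - 1) // 2):        # every job maps back to itself
    y, x = job_to_coord(j, n)
    assert 0 <= y < x < n and coord_to_job(y, x, n) == j
```

with such a bijection, each worker can take a contiguous block of job identifiers and recover the variable pairs it owns without materialising the full job matrix.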
simd - instruction - level parallelism , thread - level parallelism and accelerator - level parallelism ) to achieve high performance .furthermore , we have proposed a general framework for workload balancing in symmetric all - pairs computation .this framework assigns unique identifiers to jobs in the upper triangle of the job matrix and builds bijective functions between job identifier and coordinate space .we have evaluated the performance of lightpcc using a set of gene expression profiles and further compared it to a sequential c++ implementation in alglib and an implementation using the cblas_dgemm gemm routine in mkl , both of which run on the cpu .performance evaluation showed that lightpcc runs up to faster than alglib on one 5110p phi and up to faster on 16 phis , with a corresponding speedup of up to 6.8 on one phi and up to 71.4 on 16 phis over single - threaded mkl .besides , lightpcc yielded good parallel scalability with respect to varied number of phis .as part of our future work , we plan to apply this work to construct genome - wide gene expression network ( e.g. from conventional microarray data , emerging rna - seq data or diverse genomic data ) and integrate it with statistical and graph analysis methods to identify critical pathways .in addition , our current implementation does not distribute pcc computation onto cpu cores .therefore , we expect to further boost its performance by employing an alternative cpu - phi coprocessing model that enables concurrent workload distribution onto both cpus and phis .this research is supported in part by us national science foundation under iis-1416259 and an intel parallel computing center award ._ conflict of interest _ : none declared .a. j. butte and i. s. kohane , `` unsupervised knowledge discovery in medical databases using relevance networks . '' in _ proceedings of the amia symposium_.1em plus 0.5em minus 0.4emamerican medical informatics association , 1999 , p. 711 .a. a. margolin , i. nemenman , k. basso , c. wiggins , g. stolovitzky , r. d. favera , and a. califano , `` aracne : an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context , '' _ bmc bioinformatics _, vol . 7 , no .suppl 1 , p. s7 , 2006 .chang , a. h. desoky , m. ouyang , and e. c. rouchka , `` compute pairwise manhattan distance and pearson correlation coefficient of data points with gpu , '' in _ acis international conference on software engineering , artificial intelligences , networking and parallel / distributed computing_.1em plus 0.5em minus 0.4emieee , 2009 , pp .501506 .e. kijsipongse , c. ngamphiw , s. tongsima _ et al ._ , `` efficient large pearson correlation matrix computing using hybrid mpi / cuda , '' in _ international joint conference on computer science and software engineering_.1em plus 0.5em minus 0.4emieee , 2011 , pp . 237241 . y. wang , h. du , m. xia , l. ren , m. xu , t. xie , g. gong , n. xu , h. yang , and y. he , `` a hybrid cpu - gpu accelerated framework for fast mapping of high - resolution human brain connectome , '' _ plos one _ , vol . 8 , no . 5 , p. e62789 , 2013 .y. wang , m. j. anderson , j. d. cohen , a. heinecke , k. li , n. satish , n. sundaram , n. b. turk - browne , and t. l. willke , `` full correlation matrix analysis of fmri data on intel xeon phi coprocessors , '' in _ proceedings of the international conference for high performance computing , networking , storage and analysis_.1em plus 0.5em minus 0.4emacm , 2015 . y. liu , t .-tran , f. lauenroth , and b. 
schmidt , `` swaphi - ls : smith - waterman algorithm on xeon phi coprocessors for long dna sequences , '' in _ ieee international conference on cluster computing_.1em plus 0.5em minus 0.4emieee , 2014 , pp .257265 .s. misra , k. pamnany , and s. aluru , `` parallel mutual information based construction of genome - scale networks on the intel xeon phi coprocessor , '' _ ieee / acm transactions on computational biology and bioinformatics _ , vol .12 , no . 5 , pp . 10081020 , 2015 .l. jin , z. wang , r. gu , c. yuan , and y. huang , `` training large scale deep neural networks on the intel xeon phi many - core coprocessor , '' in _ 2014 ieee international parallel & distributed processing symposium workshops_.1em plus 0.5em minus 0.4emieee , 2014 , pp .16221630 .a. viebke and s. pllana , `` the potential of the intel xeon phi for supervised deep learning , '' in _ 17th ieee international conference on high performance computing and communications_.1em plus 0.5em minus 0.4emieee , 2015 , pp .758765 .y. liu , b. schmidt , and d. l. maskell , `` msa - cuda : multiple sequence alignment on graphics processing units with cuda , '' in _ ieee international conference on application - specific systems , architectures and processors_.1em plus 0.5em minus 0.4emieee , 2009 , pp .121128 .t. kiefer , p. b. volk , and w. lehner , `` pairwise element computation with mapreduce , '' in _ proceedings of the 19th acm international symposium on high performance distributed computing_.1em plus 0.5em minus 0.4emacm , 2010 , pp .826833 .q. zhu , a. k. wong , a. krishnan , m. r. aure , a. tadych , r. zhang , d. c. corney , c. s. greene , l. a. bongo , v. n. kristensen __ , `` targeted exploration and analysis of large cross - platform human transcriptomic compendia , '' _ nature methods _ , vol . 12 , no . 3 , pp .211214 , 2015 .x. pan , d. papailiopoulos , s. oymak , b. recht , k. ramchandran , and m. i. jordan , `` parallel correlation clustering on big graphs , '' in _ advances in neural information processing systems _ , 2015 , pp .f. zhao , y. qu , j. liu , h. liu , l. zhang , y. feng , h. wang , j. gan , r. lu , and d. mu , `` microarray profiling and co - expression network analysis of lncrnas and mrnas in neonatal rats following hypoxic - ischemic brain damage , '' _ scientific reports _ , vol . 5 , 2015 .b. issac , n. f. tsinoremas , and e. capobianco , `` abstract b1 - 04 : co - expression profiling and transcriptional network evidences from rna - seq data reveal specific molecular subtype features in breast cancer , '' _ cancer research _, vol . 75 , no . 22 supplement 2 , pp .b104 , 2015 .b. wang , a. m. mezlini , f. demir , m. fiume , z. tu , m. brudno , b. haibe - kains , and a. goldenberg , `` similarity network fusion for aggregating data types on a genomic scale , '' _ nature methods _ , vol .11 , no . 3 , pp .333337 , 2014 .
co - expression networks are a critical technique for the identification of inter - gene interactions , which usually relies on all - pairs correlation ( or similar measure ) computation between gene expression profiles across multiple samples . pearson s correlation coefficient ( pcc ) is one widely used technique for gene co - expression network construction . however , all - pairs pcc computation is computationally demanding for large numbers of gene expression profiles , thus motivating our acceleration of its execution using high - performance computing . in this paper , we present lightpcc , the first parallel and distributed all - pairs pcc computation on intel xeon phi ( phi ) clusters . it achieves high speed by exploiting the simd - instruction - level and thread - level parallelism within phis as well as accelerator - level parallelism among multiple phis . to facilitate balanced workload distribution , we have proposed a general framework for symmetric all - pairs computation by building bijective functions between job identifier and coordinate space for the first time . we have evaluated lightpcc and compared it to two cpu - based counterparts : a sequential c++ implementation in alglib and an implementation based on a parallel general matrix - matrix multiplication routine in the intel math kernel library ( mkl ) ( all use double precision ) , using a set of gene expression datasets . performance evaluation revealed that with one 5110p phi and 16 phis , lightpcc runs up to and faster than alglib , and up to and faster than single - threaded mkl , respectively . in addition , lightpcc demonstrated good parallel scalability with respect to the number of phis . source code of lightpcc is publicly available at http://lightpcc.sourceforge.net . pearson s correlation coefficient ; co - expression network ; all - pairs computation ; intel xeon phi cluster
channel state information at the transmitter ( csit ) plays an important role in interference management in wireless systems .interference networks with global and instantaneous csit provide a great improvement of performance .for example , in a -user interference channel with global and instantaneous csit , the sum degrees of freedom ( sum - dof ) linearly increases with the number of user pairs , which is much higher than that of the interference channel with no csit . in practice , however , obtaining global and instantaneous csit for transmitter cooperation is especially challenging , when the transmitters are distributed and the mobility of wireless nodes increases . in an extreme case wherethe channel coherence time is shorter than the csi feedback delay , it is infeasible to acquire instantaneous csit in wireless systems .obtaining global knowledge of csit is another obstacle for realizing transmitter cooperation when the backhaul or feedback link capacity is very limited for csit sharing between the distributed transmitters .therefore , in this paper , we investigate a fundamental question : is it still possible to obtain dof benefits in interference networks under these two practical constraints in this paper we seek to answer this question by developing an interference alignment algorithm exploiting local and moderately - delayed csit .recently , an intriguing way of studying the effect of delayed csit in wireless networks has been initiated by the work .in particular , in the context of the vector broadcast channel , showed that completely - delayed csit ( i.e. , csi feedback delay larger than the channel coherence time ) is still useful for improving the sum - dof performance by proposing an innovative transmission strategy .the key idea of the transmission method was that a transmitter utilizes the outdated csit to align inter - user interference between the past and the currently observed signals .subsequently in , the sum - dof was investigated for a variety of interference networks ( interference and x - channel ) when completely outdated csi knowledge was available at transmitters .later , the characterization of the delayed csit effects was extended to the case where the feedback delay is less than the channel coherence time , i.e. , _ moderately - delayed csit regime _ .this regime is particularly interesting because , in practice , the feedback delay can be less than the channel coherence time depending on the user mobility .further , by leveraging the results obtained with completely - delayed csit , it is possible to obtain a complete picture on how the csi feedback delay affects the scale of the capacity . in the moderately - delayed csit regime ,a transmitter is able to exploit alternatively both current and delayed csi .this observation naturally leads to the question of whether current and delayed csit can be jointly exploited to obtain synergistic benefits . 
in recent work in ,the benefits of jointly exploiting current and outdated csit are substantial over separately using them in the context of vector broadcast channel .in particular , it was shown that , up to a certain delay in the feedback , sum - dof loss does not occur for the multi - input - single - output ( miso ) broadcast and interference channel .local csit is also a preferable requirement in the design of wireless systems particularly when the transmitters are not co - located , and the capacity of the backhaul links is limited .when the transmitters are distributed , each transmitter may obtain local csi between itself and its associated receivers using feedback links without further exchange of information between the transmitters .the impact of local or incomplete csit has been actively studied , especially for multiple - input - multi - output ( mimo ) interference networks . in particular , proposed iterative algorithms for interference alignment with local but instantaneous csit and demonstrated the dof benefits in mimo interference channels .the limitation , however , is that the convergence of the algorithms in is not guaranteed when csi delay is considered . in ,the feasibility of interference alignment was characterized by an iterative algorithm using incomplete but instantaneous csit in a -user mimo interference channel . in was shown that it is possible to strictly increase the dof with completely - delayed and local csit for the the two - user mimo and -user miso interference channels which are more closely related to our work .nevertheless , to the best of our knowledge , characterizing the benefits of dof is still an open problem in single - antenna interference channels with local and moderately - delayed csit . in this paperwe propose a distributed interference management technique for interference networks with distributed and moderately - delayed csit .the proposed method is a structured space - time repetition transmission technique that exploits both current and outdated csit jointly to align interference signals at unintended receivers in a distributed manner .since this method is a generalization of the space - time interference alignment ( stia ) in a vector broadcast channel by imposing the distributed csit constraint , we refer to it as _ distributed stia_. " one distinguishing feature of the proposed method is that a transmitter only uses local csit , thereby reducing overhead incurred by csit sharing between transmitters , which differs from the conventional ia in and stia in . with the proposed method ,we show that the optimal sum - dof is achievable for the x - channel with local csit , provided that the csi feedback delay is less than fraction of the channel coherence time .this result implies that there is no loss in dof even if the transmitters have local and delayed csit for this class of x - channels .furthermore , for the three - user interference channel with local csit , we demonstrate that a total of sum - dof is achievable when the csi feedback delay is three - fifths of the channel coherence time , i.e. , . by leveraging the sum - dof result in with our achievability results ,we establish inner bounds of the trade - off for both channels .a major implication of this result is that local and moderately - delayed csit obtains strictly better the sum - dof over the no csit case in a certain class of interference channels . 
as a byproduct , leveraging the sum rate outer bound result in and the proposed method , we characterize the sum - capacity of the two - user x - channel with a set of particular channel coefficients within a constant number of bits .the rest of the paper is organized as follows . in sectionii , we describe signal models of the x - channel , the three - user interference channel , and csi feedback model .we explain the key idea of the proposed transmission method through a motivating example in section iii . in section iv , we characterize the trade - off region between the sum - dof and csi feedback delay of the x - channel .we provide an inner bound of the trade - off region for the three - user interference in section v. in section , we provide the sum - capacity results for the two - user x - channel within a constant number of bits .the paper is concluded in section vii . throughout this paper , transpose , conjugate transpose , and inverse of a matrix represented by , , , respectively .in addition , represents a complex gaussian random variable with zero mean and unit variance .in this section , we explain two signal models for the x and three - user interference channel and describe the csi feedback assumptions used for this paper . x - channel with a single antenna . , width=288 ] we consider the x - channel with transmitters and two receivers , each with a single antenna .as illustrated in fig .[ fig:1 ] , transmitter intends to send an independent message for receiver using input signal ] at receiver is & = \sum_{k=1}^k{h}_{\ell , k}[n]x_{k}[n ] + { z}_{\ell}[n],\end{aligned}\ ] ] where ] represents the channel value from transmitter to user .all channel values in a different fading block are drawn from an independent and identically distributed ( iid ) continuous distribution across users .assuming that feedback links are error - free but have the feedback delay of time slots , transmitter has knowledge of the channel vector ,h_{\ell , k}[2],\ldots , h_{\ell , k}[n - t_{\rm fb}]\} ] .then , the input signal of transmitter is generated as a function of the transmit messages and the delayed and local csit , i.e. , =f_k(w_{1,k},w_{2,k } , { \bf h}_{k}^{n - t_{\rm fb}}) ] .we also consider the three - user interference channel where all the transmitters and the receivers have a single antenna as illustrated in fig .[ fig:2 ] .the difference with the x - channel is that , in this channel , transmitter for intends to send one unicast message to its corresponding receiver using input signal ] , the input signal is generated by a function of the message and the channel knowledge , i.e. , =f_k(w_{1,k } , { \bf h}_{k}^{n - t_{\rm fb}}) ] , is given by =\sum_{k=1}^3h_{\ell , k}[n]x_k[n]+z_{\ell}[n],\end{aligned}\ ] ] where the transmit power at each transmitter is assumed to be , |^2\right ] \leq p ] in time slot .for instance , as depicted in fig .[ fig:3 ] , in time slot 9 , the transmitter has delayed channel knowledge for the first and second blocks , whereas it has current channel knowledge for the third block . and csi feedback delay is , a transmitter is able to access current and outdated csi over one - thirds and two - thirds of the channel coherence time ., width=307 ] [ fig:3 ] let us introduce a parameter that measures csi obsoleteness compared to the channel coherence time . we refer to this parameter as the normalized csi feedback delay : the case of corresponds to the case of completely outdated csi regime as considered in . 
in this case , only completely outdated csi is available at the transmitter .we refer to the case where as the instantaneous csit point .since there is no csi feedback delay , the transmitter can use the current csi in each slot . as illustrated in fig .[ fig:3 ] , if , the transmitter has instantaneous csi over one - thirds of the channel coherence time and completely outdated csi for the previous channel blocks .since the achievable data rate of the users depends on the parameters and signal - to - noise ratio ( snr ) , we express it as a function of and . in particular , for codewords spanning channel uses , a rate of message , , is achievable if the probability of error for the message approaches zero as .then , the sum - dof trade - off of the x - channel with local csit is defined as a function of the normalized feedback delay , with the same definition of a rate for message , the sum - dof trade - off of the three - user interference channel with local csit is note that the sum - dof regions of interference networks with local and delayed csit are less or equal to those of the networks with global and delayed csit .in this section , we illustrate the core ideas behind our approach using the x - channel as an example . gaining insights from this section ,we extend our method into the x - channel and three - user interference channel in a sequel .let us consider the x - channel with local and delayed csit and focus on the case of , i.e. , when feedback time is two - thirds of the channel coherence time .it is known from , that the sum dof with instantaneous csit ( corresponding to is 4/3 ) .we will show that even with , the sum dof of is achievable .in particular , we will show that four independent data symbols are delivered over 3 channel uses ,h_{i , j}[4],h_{i , j}[9]\} ] is able to access current csi because =h_{i , j}[9] ] and =b_1 ] and =b_2 ] and ,h_{2,2}[n]\} ] . with these csit , transmitter 1 and transmitter 2send a superposition of two information symbols using a proposed interference alignment technique .using the fact that =h_{i , j}[9] ] in time slot 4 .similarly , receiver 2 acquired a linear combination of undesired symbols and in time slot 1 , i.e. 
, (a_1,b_1) ] and (a_1,b_1) ] , the received signals in time slot 9 are : & = { h}_{1,1}[9]\left(\frac{h_{2,1}[1]}{h_{2,1}[9]}a_1+\frac{h_{1,1}[4]}{h_{1,1}[9]}a_2\right)\nonumber \\ & + { h}_{1,2}[9]\left(\frac{h_{2,2}[4]}{h_{2,2}[9]}b_1+\frac{h_{1,2}[4]}{h_{1,2}[9]}b_2\right)\nonumber \\ & = l_1[9](a_1,b_1)+l_1[4](a_2,b_2 ) , \\ { y}_{2}[9 ] & = { h}_{2,1}[9]\left(\frac{h_{2,1}[1]}{h_{2,1}[9]}a_1+\frac{h_{1,1}[4]}{h_{1,1}[9]}a_2\right)\nonumber \\ & + { h } _ { 2,2}[9]\left(\frac{h_{2,2}[1]}{h_{2,2}[9]}b_1+\frac{h_{1,2}[4]}{h_{1,2}[9]}b_2\right)\nonumber \\ & = l_2[1](a_1,b_1)+l_2[9](a_2,b_2).\end{aligned}\ ] ] we explain the decoding method to recover intended symbols by focusing on receiver 1 .receiver 1 obtains a new linear combination containing desired symbols (a_1,b_1) ] .then , the concatenated input - output relationship is given by \\ { y}_{1}[9]-y_1[4]\\ \end{array}% \!\!\right]\!= \!\left[\!\!% \begin{array}{cc } { h}_{1,1}[1 ] & { h}_{1,2}[1 ] \\\frac{{h}_{1,1}[9]h_{2,1}[1]}{h_{2,1}[9 ] } & \frac{{h}_{1,2}[9]h_{2,2}[4]}{h_{2,2}[9 ] } \\\end{array}% \!\!\right]\!\!\left[\!\!% \begin{array}{c } a_1 \\ b_1\\ \end{array}% \!\!\right ] .\label{eq : example_decoding}\end{aligned}\ ] ] since the channel values were selected from a continuous distribution per each block , receiver 1 is able to recover and almost surely by applying a zf decoder . by symmetry, receiver 2 operates in a similar fashion , which implies that a total of sum - dof is achievable .[ interpretation of the proposed transmission method ] now we reinterpret the proposed transmission method from the perspective of higher - order message transmission techniques in , which is helpful for understanding the key principle of the proposed transmission method . in the phase when the transmitters do not have knowledge of csit due to feedback delay , they send a information symbol per time slot , which can be interpreted as the first - order message . in the second phasewhen both current and delayed csit is available , the two transmitters distributively construct a second - order message based on their local csit .unlike the two - user vector broadcast channel in , the second - order message (a_1,b_1)+l_1[4](a_2,b_2) ] for transmitter 1 and }{h_{2,2}[7 ] } , \frac{h_{1,2}[4]}{h_{1,2}[7 ] } \right\} ] for receiver .alternatively , in time slot , transmitter sends information symbol =s_{2,k} ] for and delayed csi ,h_{2,k}[n]\} ] and ] and ] and overheard one interfering equation ] for , then it is possible to eliminate the effect of the sum of interfering symbols from ] as side - information . to satisfy this, we choose the precoding coefficients ] as {v}_{1,k}[n]= h_{2,k}[t_1 ] , \nonumber \\ h_{1,k}[n]{v}_{2,k}[n]= h_{1,k}[t_2 ] , \label{stia_cond}\end{aligned}\ ] ] where , .this inter - user interference alignment condition enables that information symbols for receiver ( 2 ) have the same interference shape with the linear combination of the interference signal overheard by receiver 2 ( 1 ) in time slot in the first phase . 
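before completing the general scheme, the three-slot example above can be checked numerically. the sketch below (ours, noiseless, with h[t][l, k] denoting the channel from transmitter k to receiver l in slot t) builds the precoders from each transmitter's local csi only and verifies that both receivers recover their symbols by aligned interference cancellation followed by zero-forcing; note that the coefficient written as h_{2,2}[4] in the slot-9 expression for receiver 1 is taken here as h_{2,2}[1], which is the choice required to align (a_1, b_1) at receiver 2.

```python
import numpy as np

rng = np.random.default_rng(1)

# independent channels for the three slots used by the example (1, 4 and 9);
# h[t][l, k] is the channel from transmitter k to receiver l in slot t
h = {t: rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for t in (1, 4, 9)}
a1, a2, b1, b2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# phase one: plain transmissions, no csit needed
y = {1: h[1] @ np.array([a1, b1]),        # receiver l observes y[t][l]
     4: h[4] @ np.array([a2, b2])}

# phase two (slot 9): each transmitter precodes with its own outgoing
# channels so that the interference aligns with what was overheard earlier
x1 = (h[1][1, 0] / h[9][1, 0]) * a1 + (h[4][0, 0] / h[9][0, 0]) * a2  # tx 1
x2 = (h[1][1, 1] / h[9][1, 1]) * b1 + (h[4][0, 1] / h[9][0, 1]) * b2  # tx 2
y[9] = h[9] @ np.array([x1, x2])

# receiver 1: cancel the aligned (a2, b2) interference, then zero-force
r1 = np.array([y[1][0], y[9][0] - y[4][0]])
H1 = np.array([[h[1][0, 0], h[1][0, 1]],
               [h[9][0, 0] * h[1][1, 0] / h[9][1, 0],
                h[9][0, 1] * h[1][1, 1] / h[9][1, 1]]])
print(np.allclose(np.linalg.solve(H1, r1), [a1, b1]))   # True

# receiver 2: cancel the aligned (a1, b1) interference, then zero-force
r2 = np.array([y[4][1], y[9][1] - y[1][1]])
H2 = np.array([[h[4][1, 0], h[4][1, 1]],
               [h[9][1, 0] * h[4][0, 0] / h[9][0, 0],
                h[9][1, 1] * h[4][0, 1] / h[9][0, 1]]])
print(np.allclose(np.linalg.solve(H2, r2), [a2, b2]))   # True
```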
since channel ] for , receiver 1 performs the aligned interference cancellation to extract the equations that contain the desired symbols as -y_1[t_2]&=l_{1,1}[n]+l_{1,2}[n]-l_{1,2}[t_2 ] , \nonumber \\ & = l_{1,1}[n ] , \nonumber \\ & = \sum_{k=1}^k \underbrace{h_{1,k}[n]v_{1,k}[n]}_{\tilde{h}_{1,k}[n]}s_{1,k}.\end{aligned}\ ] ] after the aligned interference cancellation , we obtain the concatenated system of equations for receiver 1 as \\ \hline y_1[t_3]-y_1[t_2 ] \\% y_1[4]-y_1[2 ] \\ \vdots \\ y_1[t_{k\!+\!1}]-y_1[t_2]\\! \right ] & \!\!\!=\!\!\ !\underbrace { \left[\!\!\ \begin{array}{ccc } h_{1,1}[t_1 ] & \cdots \!\!\!\ ! & \!\!\!\! h_{1,k}[t_1 ] \\\hline \tilde{h}_{1,1}[t_3 ] & \cdots \!\!\!\ ! & \!\!\!\ !\tilde{h}_{1,k}[t_3 ] \\% \hline % \tilde{h}_{1,1}[t_4 ] & \cdots \!\!\!\ ! & \!\!\!\! \tilde{h}_{1,k}[t_4 ] \\% \hline \vdots & \ddots \!\!\!\ ! & \!\!\!\ !\vdots\\ \tilde{h}_{1,1}\![t_{k\!+\!1}\ ! ] & \cdots & \!\!\!\! \tilde{h}_{1,k}[t_{k\!+\!1}\ ! ]\\ % \hline \end{array}\!\!\!% \right]}_{{\bf \hat h}_1}\!\!\!\ ! \left[\!\!\!% \begin{array}{c } { s}_{1,1 } \\% \hline s_{1,2 } \\ \vdots \\s_{1,k}\\ \end{array}\!\!\ !\nonumber\end{aligned}\ ] ] since we pick the precoding coefficients ] , the effective channel matrix for has a full rank of almost surely , thereby in the high regime , receiver 1 is able to decode the desired symbols by applying a zf decoder . by symmetry ,it is possible to obtain data symbols over time slots for receiver 2 , provided that the feedback delay is less than the fraction of the channel coherence time . as a result ,a total sum - dof is achievable over channel uses .recall that a total number of channel uses was and we have shown that sum - dof are achievable over time slots , i.e. , .for the residual time resources , we simply apply a tdma transmission method achieving sum - dof .then , as goes to infinity , the asymptotically achievable sum dof is given as : which completes the proof .now , we make several remarks on the implication of our results . with the proposed achievability method used for proving theorem 1, one can easily prove that the optimal sum - dof of is achievable for the mimo x - channel with antennas at each node almost surely when the transmitters have local csi and the normalized feedback delay is less than .from the fact that any achievable sum - dof in the x - channel is also achievable in the x - channel for , we are able to establish a lower bound of the sum - dof region for x - channel as .the lower bound does not scale with neither or , the number of transmitters or receivers .nevertheless , the lower bound is strictly better than the best known lower bound for the case with delayed csi alone [ 9 ] for all values of .the proposed method achieves the optimal sum - dof for x - channel with local csit as long as the normalized feedback delay is less than .this does not necessarily imply that the maximum allowable normalized feedback delay achieving the optimal sum - dof is .in other words , we do not establish any optimality claim on our achievable sum - dof region with respective to the normalized feedback delay .the problem of characterizing the maximum allowable feedback delay remains an interesting open problem .let us consider an analogous network in which users ( transmitters ) per cell intend to communicate to their respective base station ( receiver ) while interfering with each other .in particular , when the number of cells is two , then we refer this network to a two - cell interfering multiple access channel . 
in this network, one can easily apply the proposed method to show that the sum - dof of is achievable with local csit if the normalized feedback delay is less than . to shed further light on the significance of the trade - off region derived in theorem 1 , it is instructive to compare it with the other regions achieved by different methods when . for the two - user x - channel , by leveraging the transmission method proposed in , it is possible to establish a trade - off region for the x - channel with global and delayed csit , which is stated in the following corollary .2 x - channel.,width=316 ] [ corollary1 ] a csi feedback delay - dof gain trade - off region for the two - user x - channel with global and delayed csit is given by _ proof : _ achievability of the trade - off region is direct from theorem 1 and the agk method proposed in . for the case of , from theorem 1 ,it was shown that of sum - dof are achievable .alternatively , for the completely - delayed regime , , the transmission method in exploiting global csit allows to attain of sum - dof . as a result, it is possible to achieve any points in the line connecting two points between , and through a time - sharing technique .using a time sharing technique between ia and tdma , the region of is achievable with global csit for .analogously , if we apply the time sharing method between ia and gmk method , then the region of is achieved with global csit for . therefore , as illustrated in fig .[ fig:4 ] , the proposed method allows to attain a higher trade - off region between csi feedback delay and sum - dof than those obtained by the other methods when the csi feedback is not too delayed .for example , when , the proposed method achieves the sum - dof of , which yields the sum - dof gain over ia - tdma and sum - dof gain over ia - gmk even with local csit .another remarkable point is that csit sharing between transmitters does not contribute to improve the sum - dof if the feedback delay is less than two - thirds of the channel coherence time .whereas , global csit knowledge is useful to increase the dof performance as the normalized feedback delay increases beyond .in this section we characterize the trade - off region between the sum - dof and csi feedback delay for the three - user interference channel with local and delayed csit . in this channel ,designing interference alignment algorithm with local and delayed csit is more challenging than the case of the x - channel .this difficulty comes from that it may be impossible for simultaneously aligning interference at more than two receivers in a distributed manner .interestingly , even in this setting , we show that local and delayed csit still provides a gain in dof beyond that obtained by tdma .the following is the main result of this section .[ theorem2 ] for the three - user interference channel with distributed and delayed csit , the trade - off region between the sum - dof and the feedback delay is we focus on proving the point of because other points connecting and are simply attained by time sharing between the proposed method and tdma transmission . to show the achievability of , we demonstrate that a total of six independent symbols can be reliably delivered over five time slots . without loss of generality , we assume that transmitter is able to access local and delayed csit over three time slots ,h_{2,k}[n],h_{3,k}[n]\} ] for under the premise of .phase one spans three time slots . 
in this phase, a scheduled transmission is applied , which requires no csit .specifically , in time slot 1 , transmitter 1 and transmitter 2 send information symbol and .then , the received signals are &= h_{1,1}[1]a_1+h_{1,2}[1]b_1=l_{1}[1](a_1,b_1 ) , \nonumber \\{ y}_{2}[1]&= h_{2,1}[1]a_1+h_{2,2}[1]b_1=l_{2}[1](a_1,b_1 ) , \nonumber \\{ y}_{3}[1]&= h_{3,1}[1]a_1+h_{3,2}[1]b_1=l_{3}[1](a_1,b_1).\end{aligned}\ ] ] in time slot 2 , transmitter 1 and transmitter 3 send symbol and .then , the received signals are &= h_{1,1}[2]a_2+h_{1,3}[2]c_1=l_{1}[2](a_2,c_1 ) , \nonumber \\{ y}_{2}[2]&= h_{2,1}[2]a_2+h_{2,3}[2]c_1=l_{2}[2](a_2,c_1 ) , \nonumber \\{ y}_{3}[2]&= h_{3,1}[2]a_2+h_{3,3}[2]c_1=l_{3}[2](a_2,c_1).\end{aligned}\ ] ] in time slot 3 , transmitter 2 and transmitter 3 send information symbol and .then , the received signals are &= h_{1,2}[3]b_2+h_{1,3}[3]c_2=l_{1}[3](b_2,c_2 ) , \nonumber \\{ y}_{2}[3]&= h_{2,2}[3]b_2+h_{2,3}[3]c_2=l_{2}[3](b_2,c_2 ) , \nonumber \\ { y}_{3}[3]&= h_{3,2}[3]b_2+h_{3,3}[3]c_2=l_{3}[3](b_2,c_2).\end{aligned}\ ] ] phase two uses two time slots .recall that , in the second phase , transmitters exploit both current and outdated csit . in timeslot 4 , transmitter 1 sends a superposition of and with the precoding coefficients ] ; transmitter 2 and transmitter 3 send and with the precoding coefficients ] .the construction method of the precoding coefficients is to provide the same interference shape to receivers what they previously obtained during the phase one in a distributed manner .specifically , the precoding coefficients are chosen as =\frac{h_{3,1}[1]}{h_{3,1}[4]} ] , =\frac{h_{3,2}[1]}{h_{3,2}[4]} ] .this allows for receiver 2 and 3 to obtain the aligned interference shape that they acquired in time slot 1 and time slot 2 , respectively .then , the received signals are &= h_{1,1}[4](v_{a,1}[4]a_1\!+\!v_{a,2}[4]a_2 ) \nonumber \\ & + h_{1,2}[4]v_{b,1}[4]b_1 + h_{1,3}[4]v_{c,1}[4]c_1,\nonumber \\ & = l_{1}[4](a_1,b_1)+l_{1}[4](a_2,c_1 ) , \nonumber \\ { y}_{2}[4]&= h_{2,1}[4](v_{a,1}[4]a_1\!+\!v_{a,2}[4]a_2 ) \nonumber \\ & + h_{2,2}[4]v_{b,1}[4]b_1 + h_{2,3}[4]v_{c,1}[4]c_1,\nonumber \\ & = l_{2}[4](a_1,b_1)+l_{2}[2](a_2,c_1 ) , \nonumber \\ { y}_{3}[4]&= h_{3,1}[4](v_{a,1}[4]a_1\!+\!v_{a,2}[4]a_2 ) \nonumber \\ & + h_{3,2}[4]v_{b,1}[4]b_1 + h_{3,3}[4]v_{c,1}[4]c_1,\nonumber \\ & = l_{3}[1](a_1,b_1)+l_{3}[4](a_2,c_1).\end{aligned}\ ] ] in time slot 5 , transmitter 2 sends a linear combination of and using the precoding coefficients ] ; transmitter 1 and transmitter 3 send information symbol and by applying the precoding ] , respectively . 
in particular , the precoding coefficients are selected as =\frac{h_{3,2}[1]}{h_{3,2}[5]} ] , =\frac{h_{3,1}[1]}{h_{3,1}[5]} ] so that receiver 1 receives the aligned interference shape with what it obtained in time slot 3 .then , the received signals are &= h_{1,1}[5]v_{a,1}[5]a_1 + h_{1,2}[5]v_{b,1}[5]b_1\nonumber \\ & + h_{1,2}[5]v_{b,2}[5]b_2 + h_{1,3}[5]v_{c,2}[5]c_2,\nonumber \\ & = l_{1}[5](a_1,b_1)+l_{1}[3](b_2,c_2 ) , \nonumber \\{ y}_{2}[5]&= h_{2,1}[5]v_{a,1}[5]a_1 + h_{2,2}[5]v_{b,2}[5]b_1\nonumber \\ & + h_{2,2}[5]v_{b,2}[5]b_2 + h_{2,3}[5]v_{c,2}[5]c_2,\nonumber \\ & = l_{2}[5](a_1,b_1)+l_{2}[5](b_2,c_2 ) , \nonumber \\ { y}_{3}[5]&= h_{3,1}[5]v_{a,1}[5]a_1 + h_{3,2}[5]v_{b,2}[5]b_1\nonumber \\ & + h_{3,2}[5]v_{b,2}[5]b_2 + h_{3,3}[5]v_{c,2}[5]c_2,\nonumber \\ & = l_{3}[1](a_1,b_1)+l_{3}[5](b_2,c_2).\end{aligned}\ ] ] now, we explain how each receiver decodes its two desired symbols through a successive interference cancellation technique . * receiver 1 first obtains a linear combination of and by subtracting ] , i.e. , -y_1[3]=l_1[5](a_1,b_1) ] and (a_1,b_1) ] from ] and (a_2,c_1) ] using side information obtained in time slot 2 , i.e. , ] with (a_1,b_1) ] .subtracting (a_1,b_1) ] , it is possible to obtain a new linear equation that contains information symbols and only , i.e. , (b_2,c_2) ] and (b_2,c_2) ] using side information acquired in time slot 1 , i.e. , ] and (b_2,c_2) ] from ] . since receiver 3 already obtained a different equation (a_2,c_1) ] and multiplying normalization factor , receiver 1 has the following resultant input - output relationship : \\ \frac{{y}_{1}[9]}{\sqrt{p_{2}^{\star}}}-y_1[4]\\ \end{array}% \right]&\!=\!\ ! \underbrace{\left[% \begin{array}{cc } { h}_{1,1}[1 ] & { h}_{1,2}[1 ] \\ \frac{{h}_{1,1}[9]h_{2,1}[1]}{h_{2,1}[9 ] } & \frac{{h}_{1,2}[9]h_{2,2}[4]}{h_{2,2}[9 ] } \\\end{array}% \right]}_{{\bf h}_1}\!\!\left[\!\!% \begin{array}{c } a_1 \\b_1 \\ \end{array}% \!\!\right ] \nonumber \\&+\underbrace{\left[\!\!% \begin{array}{c } z_1[1 ] \\ \frac{{z}_{1}[9]}{\sqrt{p_{2}^{\star}}}-z_{1}[4 ] \\\end{array}% \!\!\right]}_{{\bf z}_1}. \label{eq : sumrate1}\end{aligned}\ ] ] similarly , the resulting input - output relationship at receiver 2 is \\ \frac{{y}_{2}[9]}{\sqrt{p_{1}^{\star}}}-y_2[1]\\ \end{array}% \right]&\!=\!\ ! \underbrace{\left[% \begin{array}{cc } { h}_{2,1}[4 ] & { h}_{2,2}[4 ] \\\frac{{h}_{2,1}[9]h_{1,1}[4]}{h_{1,1}[9 ] } & \frac{{h}_{2,2}[9]h_{1,2}[4]}{h_{1,2}[9 ] } \\\end{array}% \right]}_{{\bf h}_2}\!\!\left[\!\!% \begin{array}{c } a_2\\ b_2 \\\end{array}% \!\!\right ] \nonumber \\&+\underbrace{\left[\!\!% \begin{array}{c } z_2[4 ] \\ \frac{{z}_{2}[9]}{\sqrt{p_{1}^{\star}}}-z_{2}[1 ] \\\end{array}% \!\!\right]}_{{\bf z}_2}. \label{eq : sumrate2}\end{aligned}\ ] ] note that the covariance matrices ] are ~~\rm{and}~~ { \bf z}_2= \sigma^2\left[% \begin{array}{cc } 1 & 0 \\ 0 & 1\!+\!\frac{1}{p_{2}^{\star } } \\ \end{array}% \right].\end{aligned}\ ] ] since we have used 3 channel uses , the achievable sum - rate of the two - user x - channel is }{3},\end{aligned}\ ] ] which completes the proof . 
to evaluate the performance of the proposed approach within a constant gap , it is instructive to compare our sum - rate result with an existing outer bound result in , which is restated in the lemma below .[ lemma1 ] the rate tuple of the gaussian two - user x - channel with the same set of channel coefficients \} ] and and are orthogonal matrices , the sum - rate gap is bounded regardless of snr as we prove the constant gap result using both theorem [ theorem3 ] and lemma [ lemma1 ] for a particular set of channel values .suppose the channel coefficients whose absolute values are one but with different phases , i.e. , =e^{-jt\theta_{i , k}}}$ ] for . for this class of channels , from lemma [ lemma1 ] ,the sum - rate outer bound is given by where .further , we assume that the phases of the channel coefficients are selected so that and are orthogonal matrices .since from ( [ eq : pwcont1 ] ) and ( [ eq : pwcont2 ] ) , the achievable sum rate of the proposed method is given by therefore , the gap between the outer bound in ( [ eq : outerex ] ) and the achievable rate in ( [ eq : innerex ] ) is bounded as since and for all , the gap further simplifies as this completes the proof .this corollary reveals that the proposed method achieves the sum - capacity of two - user x - channel within a constant gap for the entire snr range for this particular class of channel coefficients , i.e. , phase fading channels .this analysis should be carefully interpreted because it holds for the special sets of channel realizations . to provide a result for arbitrary channel realizations ,the achievable ergodic rates of the two - user - x channel are compared with those obtained from the rate outer bound expression in and tdma transmission through simulations to demonstrate the superiority of the proposed method in the finite snr regime .[ fig : dof ] illustrates the ergodic sum - rate obtained by the tdma method , the proposed method , and outer bound expression in lemma [ lemma1 ] when each channel is drawn from the complex gaussian distribution , i.e. , .one interesting observation is that the proposed interference alignment method always provides a better sum - rate than the tdma method over the entire snr regime , as the proposed method obtains the signal diversity gain from the repetition transmission method .further , the proposed interference alignment achieves the ergodic sum - capacity of the two - user x - channel within a constant number of bits bits / sec / hz over the entire range of snr .in particular , in the low snr regime , i.e. , db , the sum - capacity within one bit / sec / hz is achievable .in this paper , we proposed a new interference management technique for a class of interference networks with local and moderately - delayed csit . with the proposed method, we characterized achievable trade - offs between the sum - dof and csi feedback in the interference networks with local csit . 
from the established trade - offs , we demonstrated the impact on how local and delayed csit affects the scale of network capacity in the interference networks .further , by leveraging a known outer bound result , we showed that the proposed method achieves the sum - capacity of the two - user x - channel within a constant number of bits .incorporating the effect of imperfect csi and continuous block fading models would be desirable to refine and complete the analysis further .another interesting direction for future study would be to investigate the effects of relays in interference networks with moderately - delayed csit by leveraging the idea of space - time physical layer network coding .v. r. cadambe and s. a. jafar , `` interference alignment and the degrees of freedom of the user interference channel , '' _ ieee transactions on information theory _ , vol .54 , pp .3425 - 3441 , aug .m. maddah - ali , a. motahari , and a. khandani , `` communication over mimo x channels : interference alignment , decomposition , and performance analysis , '' _ ieee transactions on information theory _ , vol .54 , no . 8 , pp . 3457 - 3470 , aug .2008 . c. s. vaze and m. k. varanasi `` the degree - of - freedom regions of mimo broadcast , interference , and cognitive radio channels with no csit , '' _ ieee transactions on information theory _ , vol .58 , no . 8 , pp . 5354 - 5374 , aug .m. a. maddah - ali and d. tse , `` completely stale transmitter channel state information is still very useful , '' vol .4418 - 4431 , july 2012 .a. ghasemi , a. s. motahari , and a. k. khandani , `` interference alignment for the mimo interference channel with delayed local csit , '' , feb .[ online]:arxiv:1102.5673 .h. maleki , s. a. jafar , and s. shamai , `` retrospective interference alignment over interference networks , '' , vol . 6 , no .228 - 240 , june 2012 .r. tandon , s. mohajer , h. v. poor , and s. shamai ( shitz ) , `` degrees of freedom region of the mimo interference channel with output feedback and delayed csit , '' vol .1444 - 1457 , march 2013 .t. gou and s. a. jafar , `` optimal use of current and outdated channel state information - degrees of freedom of the miso bc with mixed csit , '' vol . 16 , no . 7 , pp .1084 - 1087 , july 2012 . n. lee and r. w. heath jr ., `` not too delayed csit achieves the optimal degrees of freedom , '' pp .1262 - 1269 , oct .2012 .k. gomadam , v. r. cadambe , and s. a. jafar , `` a distributed numerical approach to interference alignment and applications to wireless interference networks , '' vol .57 , no . 6 , pp . 3309 - 3322 , jun . 2011
this paper proposes an interference alignment method with distributed and delayed channel state information at the transmitter ( csit ) for a class of interference networks . the core idea of the proposed method is to align interference signals over time at the unintended receivers in a distributed manner . with the proposed method , achievable trade - offs between the sum of degrees of freedom ( sum - dof ) and feedback delay of csi are characterized in both the x - channel and the three - user interference channel to reveal how the csi feedback delay affects the sum - dof of the interference networks . a major implication of the derived results is that distributed and moderately - delayed csit is useful to strictly improve the sum - dof over the case of no csi at the transmitter in a certain class of interference networks . for a class of x - channels , the results show how to optimally use distributed and moderately - delayed csit to yield the same sum - dof as instantaneous and global csit . further , leveraging the proposed transmission method and the known outer bound results , the sum - capacity of the two - user x - channel with a particular set of channel coefficients is characterized within a constant number of bits .
mathematical models of the distributed service provision problem have been studied thoroughly in computer science under the name of _ selfish load balancing _ and _ congestion games _ .most results apply concepts borrowed from game theory and concern worst - case analysis , in particular the computation of the so - called price of anarchy " , i.e. the ratio between the cost of the worst nash equilibrium ( ne ) and the optimal social cost .several works also address algorithmic issues , such as the question of designing distributed dynamics that converge to nes , their convergence time , or computational complexity . in many practical problems, service providers should be more interested in the average - case scenario , in particular in the average cost of service / resource allocations determined by the selfish behavior of users . in order to be able to compare the expected cost of different service allocations , a service provider is called to the arduous computational task of evaluating an average over the possibly huge number of different nes that are obtained as result of the allocation . in addition, service providers do not always have perfect information about the user behavior a fact that is usually modeled by including some stochastic parameter into the problem . in the presence of stochasticity , algorithms based on monte carlo methods are extremely inefficient even for moderately large problem sizes , whereas recent works have shown that much better results can be obtained using message - passing algorithms inspired by statistical physics methods . in our formulation, we assume that the total service capacity of the existing infrastructure can be partially adapted by activating or deactivating some of the service units . our goal is to find the configuration of active server units that achieves the best trade - off between maintenance costs for the provider and user satisfaction .for the sake of example , we assume maintenance costs expressed in terms of energy costs to keep the service unit active . for any given configuration of service units and users ,we propose to use a belief - propagation ( bp ) algorithm to evaluate the cost of every service configuration .moreover , we put forward an approximate method , also based on bp , which allows to perform the average over the stochastic parameters within the same message - passing algorithm used to average over the nes .the information obtained is then used to optimize the service units allocation .this can be done easily either exhaustively or by means of decimation methods .the service provision problem is represented by a bipartite graph , in which and are the sets of nodes , _ users _ and _ service units _ respectively , and is the set of edges connecting nodes and . in generalthe graph is not complete , i.e. , users can not connect to any service unit .in addition , every service unit has an operational energy cost , .thus , in some time periods it may be convenient to keep active only part of the existing service units ( ) and deactivate the others ( ) .the first ingredient of the model is the rational behavior of the users . in many problems , such as selfish load balancing , this is introduced by assuming that the quality of service received by a user , when selecting a service unit , depends on the load of the unit at the time of service , defining a correlation between users utilities . 
here ,we simplify the model assuming that users satisfaction in selecting a service unit does not depend on the state of the service unit itself ( provided it is available ) .each edge has a weight , , that represents the satisfaction obtained by user when selecting service unit .however , users decisions are not independent , as there is a limitation to the number of individuals that can be served at the same time by the same service unit .the weight in the opposite direction , , is the workload sustained by the service unit when providing the service to user . if we assume that each service unit can sustain a maximum load , the sum of the workloads of all users selecting unit should not exceed .this set of hard _ capacity constraints _ introduces an indirect competition between users .more precisely , suppose that user considers service unit to be the preferable one ( i.e. , ) , but adding the workload of to the total load already faced by unit , it would exceed .then we say that service unit is _ saturated _ for user and the latter has to access another of the service units accessible to her .she will thus turn to the unit with the second best weight . if this one is _available _ , user will make use of it , otherwise she will try the third one on her list and so on .note that multiple connections from the same user to many service units are not allowed .the second ingredient is stochasticity .we imagine that in any realistic situation the activity of the users could follow very complex temporal patterns .users could leave the system and come back , using different service units depending on their preference and the current availability .the stochastic nature of the problem is summarized into a set of stochastic parameters . atany given time , with probability the user is absent from the service system and , whereas with probability she is present and .for the moment , we can assume that the actual realization , , is known .given the bipartite graph , the configuration of active service units and the set of parameters , every user tries to maximize her own utility using the best service unit available ( i.e. , among those that are not saturated or inactive ) .such a system model can represent several different application scenarios .for example , we can represent videoconferencing , including several multipoint control units ( mcus ) or a heterogeneous wireless access network , where points of access , possibly using different technologies , are available ( e.g. , 3g / lte , wifi , wimax ) to the users . in these scenarios , indeed , it would be convenient to switch off service units when underloaded .the system outlined above poses the following service provision problem : _ at any given time period , which service units should be activated by a central controller , in order to maximize the users satisfaction and minimize the system energy consumption ? 
_since the decision of the central controller has to account for the rational behavior of the users , we address the optimization problem as follows .first , we consider a system configuration , where the active service units are given , and model the users association process as a game .the players of the game are the users that have to select a service unit among the active ones .we solve the game so that , for each user pattern , , the corresponding ne strategy profiles can be identified ; note that , given , there may exist multiple nes .then , in order to evaluate the performance of the system configuration , we define an objective function , which accounts for the energy cost of the active units and for the users satisfaction .since , for a given , different nes are reached depending on the order of arrival of the users , we average the objective function first over all nes corresponding to , and then over all possible realizations of the users arrival process .finally , we use the obtained result as an indication of the system configuration performance , and we select the system configuration that optimizes such an index . let us now detail the procedure outlined above .we denote the tagged system configuration by , and define as the set of service units that can be selected by user .also , let be the set of users that can select service unit and let be the set of nodes with . in the game ,the action of the generic user ( player ) consists in choosing one of the service units connected to her , e.g. , with .the payoff is if unit is active and not saturated otherwise it is . if no unit is chosen , and the payoff is , being a penalty value .more precisely : if instead user is absent , is the only possible value and we set note that , at every perturbation in the system , e.g. due to the departure of a user , a player may decide to connect to another service unit than the currently selected one , if she can increase her payoff .it is useful to represent the ne conditions in terms of _ best - response relations _ : a strategy profile is a pure ne if and only if , for each user , is the best response to the actions of the others , i.e. , in principle , the weight given to each ne should depend on specific details of the dynamics of the users ( e.g. on the order of arrival of the users and on the order according to which users unilaterally deviate from the current strategy profile if they find it convenient ) .unfortunately these details are largely unknown in any realistic setup .it is thus worth considering a simplifying hypothesis in which all the nes are weighted uniformly and the complex user dynamics is summarized in the average over the realizations of the stochastic parameters . in generalwe do not know which users are actually present in the system , but we assume to know the probability that user is present , . the problem consists in optimizing the trade - off between the system energy cost and the expected users satisfaction , i.e. 
in finding the configuration of active service units which maximizes the following objective function : - \alpha\sum_{s}r_{s}x_{s } \nonumber\end{aligned}\ ] ] where represents the average over the values of that satisfy the ne conditions ( which depend implicitly on and ) .the objective function is composed of two contrasting terms : a first contribution that measures the overall quality of the service , and a second term that quantifies the total cost of active service units ( alternatively , the service provider s revenue could depend explicitly on the perceived quality of the service ) .the parameter is used to set the relative importance of the two objectives .we can finally formulate the optimization goal of the central controller which is , given , the vector of unit capacities , the payoff matrix , the vector of presence probabilities , and the parameter , to find a minimizing such that . in conclusion , the vector represents the system configuration that corresponds to the best tradeoff between the system energy cost and the user satisfaction .the ne conditions define a set of local hard constraints on the individual actions , such that finding a pure ne can be translated into finding a solution of a constraint - satisfaction problem ( csp ) . using the node variables , we can formulate the associate csp over a factor graph with many small loops even when the original graph had none , which is not appropriate to develop a solution algorithm based on message passing . in the followingwe switch to an equivalent representation , using edge variables , that is much more convenient for message passing applications .the actions of the users can be described using three - states variables defined on the undirected edges ( see figure [ fig2 ] ) where `` saturated for '' refers to the case in which if connects to , the latter violates the capacity constraint , while `` available '' refers to the case where is active and able to accommodate user .the nes are the configurations taken by the variables that satisfy the following set of constraints .first , users can not have access to more than one service unit at the same time : \leq t_{u } \quad ( \forall u\in{\mathcal u}) ] . andthird , users try to use the best service unit available : .the stochastic optimization problem consists in finding the configuration of active service units such that it maximizes the objective function }w_{us } \right\rangle \right ] -\alpha\sum_{s}r_{s}x_{s}.\ ] ] the most difficult part of this optimization problem is that of performing the average over the nes in the presence of stochasticity , that is the essential step to be able to evaluate the average costs and benefits from activating / deactivating different service units .once this is done , the optimization step over the becomes trivial and it can be done either exhaustively or by means of decimation methods . 
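To make the Nash-equilibrium conditions above concrete, the following toy sketch runs asynchronous best-response dynamics: each present user repeatedly switches to the active, non-saturated unit with the highest satisfaction, and a fixed point of this dynamics is exactly a pure NE in the sense defined above. The instance size, weights, loads, capacities and presence pattern are illustrative assumptions, and since best response is not guaranteed to converge in general, the number of sweeps is capped.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy instance (all quantities are illustrative assumptions)
n_users, n_units = 12, 4
w = rng.uniform(0.1, 1.0, size=(n_users, n_units))    # satisfaction w_{us}
load = rng.integers(1, 4, size=(n_users, n_units))    # workload l_{us}
cap = np.full(n_units, 8)                              # capacities
x = np.ones(n_units, dtype=bool)                       # active service units
tau = rng.random(n_users) < 0.8                        # which users are present

def best_response(assign, u):
    """Best unit for user u given the others' assignments (None = stay out)."""
    used = np.zeros(n_units)
    for v, s in enumerate(assign):
        if v != u and s is not None:
            used[s] += load[v, s]
    best, best_w = None, -np.inf
    for s in range(n_units):
        if x[s] and used[s] + load[u, s] <= cap[s] and w[u, s] > best_w:
            best, best_w = s, w[u, s]
    return best

def find_nash(max_sweeps=100):
    assign = [None] * n_users
    for _ in range(max_sweeps):
        changed = False
        for u in rng.permutation(n_users):
            if not tau[u]:
                continue
            s = best_response(assign, u)
            if s != assign[u]:
                assign[u], changed = s, True
        if not changed:                      # nobody wants to deviate: pure NE
            break
    return assign

ne = find_nash()
print("assignment:", ne)
print("total satisfaction:",
      sum(w[u, s] for u, s in enumerate(ne) if s is not None))
```

Averaging the satisfaction reached at such equilibria over many random update orders and many draws of the presence variables gives the brute-force Monte Carlo estimate of the objective that the message-passing approach described next is meant to replace.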
in the next sectionwe describe an approximate method to perform the average over the nes and the stochastic parameters in a computationally efficient way .in general one should first average over the pure nes at fixed realization of the stochastic parameters and then perform the average over the distribution of the latter .the double average is extremely costly at a computational level .the message passing approach allows one to perform these two steps together although at the cost of introducing an approximation in the computation .highly simplifies the representation and allows to get rid of small loops .the other two types of variable nodes are blue and empty nodes , corresponding respectively to the stochastic parameters and the service provider s variables .square nodes and are standard factor nodes containing the capacity constraints and the best - response conditions.,width=377 ] for an observable , the average over nes is }{z({\mathbf x},{\mathbf t } ) } o({\mathbf y},{\mathbf x},{\mathbf t})\end{aligned}\ ] ] in which ] .the main difficulty of performing the quenched average is due to the presence of the normalization factor at the denominator of . a mean - field approximation , based on the factorization ansatz ,can be used to transform our quenched average into an easily computable annealed one . in this approximationwe get the factor graph associated to the problem is shown in figure [ fig2 ] .in addition to the usual terms and , corresponding to hard constraints , it also contains energetic terms on the nodes .the energetic terms are unknown but can be computed implicitly introducing a new set of messages and that must be adjusted in order to have the correct probability marginal on each variable node .on such a factor graph , it is possible to derive the following set of message passing equations by means of which the observable can be approximately evaluated .the proportionality symbol means that the marginal probabilities need to be correctly normalized .a complete description of the method will be presented elsewhere .in this section , we present some numerical results obtained by our algorithm on random graphs generated with the following procedure .both the users and the service units are placed at random in the unit square of the two - dimensional euclidean space . for each user , only the nearest service units are assumed to be accessible , and , for each of these , the workload is , i.e. , an integer proportional to the square of the distance between and ( the proportionality constant is such that the maximum weight is equal to a specified value ) . 
recall that the payoff for a disconnected user is ( see ( [ eq : pi ] ) ) , whereas for connected users is .finally , the presence probabilities ] .we have considered four scenarios , whose parameters are reported in table [ tab : para ] ; the ranges for all these parameters are such that the instances are non trivial ..scenarios considered in the simulations [ cols="<,^,^,^",options="header " , ] as a first test of our message passing approach , in scenario s1 we compared it to an exhaustive enumeration of the nes for fixed , averaging the results over a sample of values of .more specifically , we considered ( for a given configuration of the service units ) the following two observables , in terms of which one can compute the objective function we propose to use for the greedy procedure : }w_{su}\right\rangle \right]\end{aligned}\ ] ] which is the average ( over the realization of and over the nes ) of the sum of the workloads for the users that are present and connected to some service unit , and } \right > \right]\end{aligned}\ ] ] which is the average ( again , over the realization of and over the nes ) of the number of disconnected users .we compare the value obtained by our algorithm for with where is a random sample of realizations of extracted from ( and is the size of the sample ) and where is the average over the nes ( for fixed and ) of the sum of the workloads for connected users , which is computed with an exhaustive enumeration of the possible allocations .a similar comparison is done for .of course , in this scenario the number of users is limited since the exhaustive enumeration is possible only if the size of the instance is very small .scatter plots a and b in fig .[ figcomparison ] compare our algorithm with the exhaustive enumeration under scenario s1 . as the sample size increases ,the data points tend to collapse onto the diagonal , i.e. , as the accuracy of the sampling procedure improves , the results obtained by sampling tend to those obtained with the message passing algorithm , except for a small number of `` outliers '' ( less than one percent of the instances ) .this confirms that , even on very small instances , the two hypotheses on which our method is based , namely the decorrelation assumption of the cavity method and the factorization hypothesis for the partition function , are a good approximation .+ in the next scenario s2 , we compare the results obtained by our algorithm with those obtained by computing the average over the nes ( for fixed ) with bp , and then averaging over with an explicit sampling .this allows us to test , on larger instances , the factorization assumption for the partition function .note that our algorithm requires only one convergence of the message passing procedure to perform both averages .the explicit sampling , instead , requires convergences of a message passing , which is ( almost ) as complex as ours ; thus , it is roughly slower by a factor .this limits the number of instances that we have been able to analyze to less than 100 .again , scatter plots c and d of fig .[ figcomparison ] show that , as the sample size increases , the data points tend to collapse onto the diagonal , with the exception of a few cases for the estimation of . 
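The random geometric instances used in these scenarios can be generated along the following lines. The exact satisfaction weights and proportionality constants are not recoverable from the extracted text, so the specific choices below (satisfaction decreasing with squared distance, uniformly drawn presence probabilities) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_instance(n_users=30, n_units=10, k_nearest=3, max_load=5):
    """Geometric instance as described above: users and service units placed
    uniformly in the unit square, each user reaching her k nearest units, with
    an integer workload proportional to the squared distance (scaled so that
    the maximum equals max_load)."""
    users = rng.random((n_users, 2))
    units = rng.random((n_units, 2))
    d2 = ((users[:, None, :] - units[None, :, :]) ** 2).sum(axis=2)
    neigh = np.argsort(d2, axis=1)[:, :k_nearest]
    reachable = np.zeros((n_users, n_units), dtype=bool)
    np.put_along_axis(reachable, neigh, True, axis=1)
    load = np.ceil(max_load * d2 / d2[reachable].max()).astype(int)
    load[~reachable] = 0
    w = np.where(reachable, 1.0 / (1.0 + d2), 0.0)     # assumed satisfaction
    eta = rng.random(n_users)                          # presence probabilities
    return reachable, load, w, eta

reachable, load, w, eta = random_instance()
print("average degree per user:", reachable.sum(axis=1).mean())
print("load range on reachable edges:",
      load[reachable].min(), "-", load[reachable].max())
```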
finally , in scenario s3 we provide an example of optimization .we used our greedy decimation heuristic based on the message passing algorithm for a single instance .the heuristic we use to find the optimal allocation is the following .we start by computing the value of the objective function }w_{us } \right\rangle \right]\end{aligned}\ ] ] when all the service units are on ( i.e. for each ) .then , we compute the same objective function for all the configurations obtained by switching off one service unit .we actually switch off the service unit that corresponds to the smallest drop in the objective function .the same procedure is then iterated , computing the variations in the objective function associated to switching off each of the service units that are still on , and actually switching off the one that minimizes the drop , until all the service units are off ( or we decide to stop ) .the results of this `` greedy decimation '' are shown in fig .we observe that during the first 8 steps of the decimation ( i.e. as we switch off the first 8 service units ) the value of the objective function decreases very modestly ( dropping by 0.18% overall ) , while for larger number of steps the drops are much greater .we therefore decide to stop the decimation after 8 steps .this allows to switch - off 16% of the service units ( i.e. to save 16% of the electric power ) without affecting at all the service level .in this paper , we presented a novel computationally efficient optimization approach for distributed resource allocation problems under user behavior uncertainty .we propose a belief propagation scheme to compute the costs of different service configurations .this is obtained by averaging over all the possible nash equilibrium points associated to a given system configuration .the authors acknowledge the european grants fet open 265496 , erc 267915 and italian firb project rbfr10quw4 . e. koutsoupias and c. h. papadimitriou .worst - case equilibria ._ symp . on theoretical aspects of computer science _e. even - dar , a. kesselman , and y. mansour .convergence time to nash equilibria . in _ proc .30th international colloq . on automata , languages and programming _ , pp .502 - 513 ( 2003 ) .a. goel and p. indyk .stochastic load balancing and related problems . _symp . on foundations of computer science _j. kleinberg , y. rabani , and e. tardos .allocating bandwidth for bursty connections .29th acm symposium on theory of computing _ ( 1997 ) .f. altarelli , a. braunstein , a. ramezanpour , and r. zecchina , stochastic matching problem , _ phys .lett . _ * 106 * ( 2011 ) m. mzard and a. montanari _ information , physics and computation_. oxford graduate texts ( 2009 ) .
we develop a computationally efficient technique to solve a fairly general distributed service provision problem with selfish users and imperfect information . in particular , in a context in which the service capacity of the existing infrastructure can be partially adapted to the user load by activating just some of the service units , we aim at finding the configuration of active service units that achieves the best trade - off between maintenance ( e.g. energetic ) costs for the provider and user satisfaction . the core of our technique resides in the implementation of a belief - propagation ( bp ) algorithm to evaluate the cost of the service configurations . numerical results confirm the effectiveness of our approach .
recently , the problems of information science were investigated from statistical mechanical point of view . among them , the image restoration is one of the most suitable subjects . in the standard approach to the image restoration ,an estimate of the original image is given by maximizing a posterior probability distribution ( the map estimate ) .in the context of statistical mechanics , this approach corresponds to finding the ground state configuration of the effective hamiltonian for some spin system under the random fields . on the other hand , it is possible to construct another strategy to infer the original image using the thermal equilibrium state of the hamiltonian . from the bayesian statistical point of view , the _ finite temperature restoration _ coincides with maximizing a posterior marginal distribution ( the mpm estimate ) and using this strategy , the error for each pixel may become smaller than that of the map estimate . as we use the average of each pixel ( spin ) over the boltzmann - gibbs distribution at a specific temperature , the thermal fluctuation should play an important role in the mpm estimate .then , the temperature controls the shape of the distribution and if we choose the temperature appropriately , the sampling from the distribution generates the important configurations for a fine restoration . besides this hill - climbing mechanism by the thermal fluctuation , we may use another type of fluctuation , namely , the quantum fluctuation which leads to quantum tunneling between the states . if we use the sampling from the boltzmann - gibbs distribution based on the quantum fluctuation , it may be possible to obtain much more effective configurations for a good restoration .the idea of the mrf s model using the quantum fluctuation was recently proposed by tanaka and horiguchi , however , they investigated the quantum fluctuation in the context of the optimization ( the map estimate by the quantum fluctuation ) and they used the ground state as the estimate of the original image .we would like to stress that we use the distribution based on the quantum fluctuation itself and the expectation value is used to infer the original image .it is highly non - trivial problem to investigate whether the mpm estimate based on the quantum fluctuation becomes better than the map estimate or the thermal fluctuation based mpm estimate .this is a basic concept of this paper .this paper is organized as follows . in the next sec .ii , we explain our model system and the basic idea of our method in detail . in sec .ii , we also introduce the criterion of the restoration , that is , the overlap between the original image and the result of the restoration . in sec .iii , we introduce the infinite range model in order to obtain analytical results on the performance of the restoration , and calculate the overlap explicitly . in sec .iv , we show that quantum monte carlo simulations in 2-dimension support our analytical results . in sec .v , we introduce the iterative algorithm which is derived by mean - field approximation and apply this algorithm to image restoration for standard pictures . the last sec .vi is devoted to discussion about all results we obtain . 
in this section, we also mention the inequality which gives the upper bound of the overlap .let us suppose that the original image is represented by a configuration of ising spins with probability .these images are sent through the noisy channel by the form of sequence .then , we regard the output of the sequence through the noisy channel as .the output probability for the case of the binary symmetric channel ( bsc ) is specified by the following form ; we easily understand the relevance of this expression for the bsc ; lets suppose that each pixel changes its sign with probability and remains with during the transmission , that is , we easily see that there is a simple relation between flip probability and inverse temperature as .this is reason why we refer to this type of noise as _ binary symmetric _ channel . using the assumption that each pixel in the original image is corrupted independently ( so - called _ memory - less channel _ ) ,namely , , we obtain eq .( [ poutbc ] ) .this bsc is simply extended to the following gaussian channel ( gc ) where is a standard deviation of observable ( corrupted pixel ) from scaled original pixel .then , the posterior probability , which is the probability that the source sequence is provided that the output is , leads to by the bayes theorem . as we treat the bw image and the bsc ( [ poutbc ] ) , a likelihood is appropriately written by appearing in the bayesian formula ( [ pcond ] ) is a model of the prior distribution and we usually use the following type ; where means the sum with respect to the nearest neighboring pixels and controls the smoothness of the picture according to our assumption . substituting eqs .( [ likelihood ] ) and ( [ ferro ] ) into eq .( [ ppost ] ) , we obtain the posterior probability explicitly ; in the framework of the map estimate , we regard a configuration which maximizes the posterior probability latexmath:[ ] denotes the average over the distribution .the standard replica calculations and saddle point method lead to the following coupled equations . =m_{0 } & = & { \tanh}({\beta}_{0}m_{0 } ) \label{m01 } \\\mbox{}[\langle { \sigma}_{ik}^{\alpha } \rangle_{h,\beta_{m},\gamma}]=m & = & \frac{{\rm tr}_{\xi}\,{\rm e}^{{\beta}_{s}m_{0}{\xi } } } { 2\,{\cosh}({\beta}_{s}m_{0 } ) } \int_{-\infty}^{\infty}\!\!\ !du\ , { \omega}^{-1 } \int_{-\infty}^{\infty}\!\!\ ! d{\omega}\ , { \phi}y^{-1}\ , { \sinh}y \label{m1 } \\ \mbox{}[{\xi}_{i}\langle { \sigma}_{ik}^{\alpha } \rangle_{h,\beta_{m},\gamma}]= t & = & \frac{{\rm tr}_{\xi}\,{\rm e}^{{\beta}_{s}m_{0}{\xi } } } { 2\,{\cosh}({\beta}_{s}m_{0 } ) } \int_{-\infty}^{\infty}\!\!\ ! du\ , { \omega}^{-1 } \int_{-\infty}^{\infty}\!\!\ ! d{\omega}\ , { \xi}\ , { \phi}y^{-1}\ , { \sinh}y \label{t1 } \\\mbox{}[\langle ( { \sigma}_{ik}^{\alpha})^{2 } \rangle_{h,\beta_{m},\gamma } ] = q & = & \frac{{\rm tr}_{\xi}\,{\rm e}^{{\beta}_{s}m_{0}{\xi } } } { 2\,{\cosh}({\beta}_{s}m_{0 } ) } \int_{-\infty}^{\infty}\!\!\ !du \left [ { \omega}^{-1 } \int_{-\infty}^{\infty}\!\!\ ! d{\omega}\ , { \phi}y^{-1}\ , { \sinh}y \right]^{2 } \label{q1 }\\ \mbox{}[\langle { \sigma}_{ik}^{\alpha}{\sigma}_{il}^{\alpha } \rangle_{h,\beta_{m},\gamma}]=s & = & \frac{{\rm tr}_{\xi}\ , { \rm e}^{{\beta}_{s}m_{0}{\xi } } } { 2\,{\cosh}({\beta}_{s}m_{0 } ) } \int_{-\infty}^{\infty}\!\!\ !du\ , { \omega}^{-1 } { \biggr [ } \int_{-\infty}^{\infty}\!\!\ !d{\omega}\ , { \phi}^{2}y^{-2 } { \cosh}y \nonumber \\ \mbox { } & + & { \gamma}^{2 } \int_{-\infty}^{\infty}\!\!\ ! 
d{\omega}\ , y^{-3}{\sinh}y { \biggr ] } , \label{s1}\end{aligned}\ ] ] where means the average by the posterior probability using the same way as eq .( [ avequantum ] ) . or means gaussian integral measure . in order to obtain the above saddle point equations, we used the replica symmetric and the static approximation , that is , we also defined functions , and as then the overlap which is a measure of retrieval qualityis calculated explicitly as = m = \frac{{\rm tr}_{\xi}\,{\xi}{\rm e}^{{\beta}_{s}m_{0}{\xi } } } { 2\,{\cosh}({\beta}_{s}m_{0 } ) } \int_{-\infty}^{\infty}\!\!\ !du \int_{-\infty}^{\infty}\!\!\ !dw \nonumber \\ \mbox { } & { \times } & { \rm sgn } \left [ u\sqrt{(\tau h)^{2}+q(j\beta_{j})^{2 } } + ( { \tau}_{0}h+j_{0}{\beta}_{j}t){\xi}+{\beta}_{m}m + j{\beta}_{j}w\sqrt{s - q } \right ] \label{overlap2}\end{aligned}\ ] ] where the above overlap depends on through ( [ m1 ] ) .we first consider the case of , that is to say , the conventional image restoration .we choose a snapshot from the distribution ( [ ferro ] ) at source temperature . according to nishimori and wong , we fix the ratio and adjust as a parameter for simulated annealing and controls as a quantum fluctuation .if we set , the lines of should be identical to the results by the _ thermal _ mpm estimate .on the other hand , if we choose and , the resultant line represents the performance of the _ quantum _ map estimate .we should draw attention to the fact that the quantum fluctuation vanishes at . in practical applications of the _ quantum annealing _ based on quantum monte carlo simulations , we should reduce from to during monte carlo updates .however , the resultant performance obtained here is calculated analytically provided that the system reaches its equilibrium state .therefore , we can regard the result as a performance when is decreased slowly enough . in fig .[ fig1 ] , we set the ratio to its optimal value and plot the overlap for the case of and . obviously , for the case of , the maximum is obtained at a specific temperature . however ,if we add a finite quantum fluctuation , the optimal temperature is shifted to the low temperature region . in fig .[ fig2 ] , we plot for the case of with the fixed optimal ratio .this figure shows that if we set the parameters to their optimal value in the thermal mpm estimate , the quantum fluctuation added to the system destroys the recovered image ( see the lines for the case of ) .therefore , we may say that it is impossible to choose all parameters and so as to obtain the overlap which is larger than . this fact is also shown by -dimensional plot in fig .although , we found that a finite does not give the absolute maximum of the overlap , the _ quantum _ mpm estimate has another kind of advantages .[ fig3 ] indicates , the overlap of the the quantum mpm estimate is almost flat in comparison with or .this is a desirable property from practical point of view .this is because the estimation of the hyper - parameters is one of the crucial problems to infer the original image , and in general , it is difficult to estimate them beforehand . therefore , this robustness for hyper - parameter selection is a desirable property .we also see this property in fig .as we already mentioned , the overlap at and corresponds to the result which is obtained by quantum annealing , that is to say , the quantum map estimate .we see that the result of the quantum mpm estimate is slightly better than that of the quantum map estimate .we next show the effect of the parity check term . 
in fig .[ fig4 ] , we set and and plot the overlap as a function of for several values of .we see that the performance of the restoration is improved by introducing the parity check term which has much information about the local structure of the original image . in the next section, we check the usefulness of this method in terms of quantum monte carlo simulation .in this section , monte carlo simulations in realistic -dimension are carried out in order to check the practical usefulness of our method .we use the _ standard pictures _ which are provided on the web site as the original image , instead of the ising snapshots . in order to sampling the important points which contribute to the local magnetization , we use the quantum monte carlo method which was proposed by suzuki . as we mentioned in the previous sections, we can treat the -dimensional quantum system as -dimensional classical system by the trotter decomposition . in this sense, the transition probability of the metropolis algorithm leads to \label{trans}\end{aligned}\ ] ] where is energy of the classical spin system in -dimension ( in the present case , -dimension ) as follows .\nonumber \\ \mbox { } & - & \frac{h}{p}\sum_{ijk}{\tau}_{i , j } { \sigma}_{i , j , k } - b\sum_{ijk}{\sigma}_{i , j , k}{\sigma}_{i , j , k+1 } \label{energy}\end{aligned}\ ] ] where we defined . the transition probability eq .( [ trans ] ) with eq .( [ energy ] ) generates the boltzmann - gibbs distribution asymptotically and using the importance sampling from the distribution , we can calculate the expectation value of the -th spin , namely , , and using this result , we obtain an estimate of the -th pixel of the original image as .we show the results in fig.s [ fig5 ] and [ fig6 ] . from these figures, we see that there exists the optimal value of the transverse field . in fig.s [ fig7 ] and [ fig8 ] , we display the results by quantum monte carlo simulations when we add the parity check term for the parameter sets and .we see that the resultant pictures using the parity check term are almost perfect ( see and ) .in the previous sections , we see that the quantum fluctuation works effectively on image restoration problems in the sense that the quantum fluctuation suppress the error of the hyper - parameter s estimation in the markov random fields model . in addition , by making use of the quantum monte carlo simulations , we could apply it to the image restoration of the 2-dimensional standard pictures .however , in order to carry out the simulations , it takes quite long time to obtain the average and it is not suitable for practical situations . in this section , in order to overcome this computational time intractability , we derive the iterative algorithm based on the mean - field approximation .this algorithm shows fast convergence to the approximate solution . within the mean - field approximation ,we rewrite the density matrix for 2-dimensional version of the effective hamiltonian as where we defined as with in the above expressions , means eigenvalues of the matrix ( {11}=h_{ij}^{(+ ) } , [ \hat{\mbox{\boldmath }}_{ij}]_{22}=h_{ij}^{(- ) } , [ \hat{\mbox{\boldmath }}_{ij}]_{12}= [ \hat{\mbox{\boldmath }}_{ij}]_{21}=-\gamma ] .we solve the mean - field equations ( [ iteration ] ) with respect to until the condition .we show its performance in fig .[ fig9 ] and table [ table1 ] . 
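A compact sketch of the mean-field iteration just described is given below. It uses the standard single-site result for a spin in a transverse field, m = (h_eff / sqrt(h_eff^2 + Γ^2)) tanh sqrt(h_eff^2 + Γ^2), which is what the eigenvalues of the 2x2 matrix above produce; since that matrix and the update rule are garbled in the extracted text, this parameterisation, the toy image, the field h = ½ ln((1-p)/p) for the BSC, and all numerical values should be read as assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_field_restore(tau, beta_m, h, gamma, tol=1e-6, max_iter=500):
    """Damped mean-field iteration for the transverse-field restoration model.
    Each pixel sees an effective longitudinal field
        h_eff = beta_m * (sum of neighbouring magnetisations) + h * tau,
    and m = h_eff / sqrt(h_eff**2 + gamma**2) * tanh(sqrt(h_eff**2 + gamma**2))
    is iterated until the largest update falls below tol.  Periodic boundaries
    are used purely for convenience."""
    m = tau.astype(float)
    for _ in range(max_iter):
        nn = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
              np.roll(m, 1, 1) + np.roll(m, -1, 1))
        field = beta_m * nn + h * tau
        delta = np.sqrt(field ** 2 + gamma ** 2)
        m_new = field / delta * np.tanh(delta)
        if np.max(np.abs(m_new - m)) < tol:
            m = m_new
            break
        m = 0.5 * m + 0.5 * m_new                      # damping for stability
    return np.where(m >= 0, 1, -1)

L, p = 64, 0.1
ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
xi = np.where((ii - L / 2) ** 2 + (jj - L / 2) ** 2 < (L / 3) ** 2, 1, -1)  # toy original
tau = np.where(rng.random((L, L)) < p, -xi, xi)        # BSC corruption with flip rate p
restored = mean_field_restore(tau, beta_m=0.9, h=0.5 * np.log((1 - p) / p), gamma=0.8)
print("overlap with the original image:", (restored == xi).mean())
```

Because every sweep is a fully vectorised update, this converges in milliseconds even for large images, which is the speed advantage over the Monte Carlo sampling noted above.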
from table[ table1 ] , we see that if we introduce appropriate quantum fluctuation , the performance is remarkably improved , and in addition , the speed of the convergence becomes much faster .however , if we add the quantum fluctuation too much , the fluctuation destroys the recovered image .we also see that the optimal value of exists around .in this paper , we investigated to what extent the quantum fluctuation works effectively on image restoration . for this purpose, we introduced an analytically solvable model , that is , the infinite range version of the mrf s model .we applied the technique of statistical mechanics to this model and derived the overlap explicitly .we found that the quantum fluctuation improves the quality of the image restoration dramatically at a low temperature region . in this sense , the error of the estimation for the hyper - parameters can be suppressed by the quantum fluctuation .however , we also found that the maximum value of the overlap never exceeds that of the classical ising case .we may show this fact by the following arguments ; first of all , the upper bound of the overlap for the classical system is given by setting , that is , \nonumber \\ \mbox { } & = & { \rm tr}_{\{\tau,\xi\ } } { \xi}_{i}{\rm e}^{\beta_{\tau}\sum_{i}\tau_{i}\xi_{i } } p(\ { \xi\ } ) { \cdot } \frac{{\rm tr}_{\sigma } \sigma_{i}{\rm e}^{\beta_{\tau}\sum_{i}\tau_{i } \sigma_{i}}p_{m}(\ { \sigma\ } ) } { |{\rm tr}_{\sigma } \sigma_{i}{\rm e}^{\beta_{\tau}\sum_{i}\tau_{i}\sigma_{i } } p_{m}(\ { \sigma\ } ) | } \nonumber \\ \mbox { } & = & { \rm tr}_{\tau } \sigma_{i}{\rm e}^{\beta_{\tau}\sum_{i}\tau_{i}\sigma_{i } } p_{m}(\ { \sigma\ } ) | . \end{aligned}\ ] ] for the quantum system , the overlap is bounded by this maximum value as \nonumber \\ \mbox { } & \,\leq\ , & | { \rm tr}_{\{\tau,\xi\ } } { \xi}_{i } { \rm e}^{\beta_{\tau}\sum_{i}\tau_{i}\xi_{i } } p(\ { \xi\ } ) { \rm sgn } [ { \rm tr}_{\hat{\sigma } } \hat{\sigma}_{i}^{z } { \rm e}^{h\sum_{i}\tau_{i}\hat{\sigma}_{i}^{z}+ \gamma \sum_{i}\hat{\sigma}_{i}^{x } } p_{m}(\ { \hat{\sigma}_{i}^{z}\ } ) ] | \nonumber \\ \mbox { } & = & { \rm tr}_{\tau}|{\rm tr}_{\xi } { \xi}_{i}{\rm e}^{\beta_{\tau}\sum_{i}\tau_{i}\xi_{i } } p(\ { \xi\ } ) |=m_{\rm max}^{{\rm ( thermal)}}.\end{aligned}\ ] ] we can see this inequality more directly as follows .\nonumber \\ \mbox { } & = & m^{{\rm ( quantum)}}(h , p_{m},\gamma ) , \label{inequality3 } \end{aligned}\ ] ] where the identity was used . we should notice that in the left hand side of the above inequality ( [ inequality3 ] ) , the arguments of the trace w. r. t. always take positive values , while in the right hand side , they can be negative . 
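For completeness, the Trotter-decomposed quantum Monte Carlo sampler of Sec. IV can be sketched as follows. The inter-slice coupling B = ½ ln coth(Γ/P) is the standard Suzuki-Trotter value, used here as an assumption because its definition in the extracted text is garbled; the inverse temperature is taken to be absorbed into β_m, h and Γ, and the lattice size, hyper-parameters and sweep counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def quantum_mpm_restore(tau, beta_m, h, gamma, P=8, sweeps=60, burn_in=20):
    """Suzuki-Trotter sketch of the quantum MPM estimate: the 2D transverse-field
    system is mapped onto P coupled replicas with intra-slice couplings beta_m/P
    and h/P and an inter-slice coupling B = 0.5*ln(coth(gamma/P)).  The pixel
    estimate is the sign of the magnetisation averaged over slices and sweeps.
    (Single-spin Metropolis updates in pure Python: slow but illustrative.)"""
    L = tau.shape[0]
    B = -0.5 * np.log(np.tanh(gamma / P))
    s = np.repeat(tau[None, :, :], P, axis=0).astype(int)     # (P, L, L) replicas
    mag = np.zeros((L, L))
    for sweep in range(sweeps):
        for _ in range(P * L * L):
            k, i, j = rng.integers(P), rng.integers(L), rng.integers(L)
            nn = (s[k, (i + 1) % L, j] + s[k, (i - 1) % L, j] +
                  s[k, i, (j + 1) % L] + s[k, i, (j - 1) % L])
            trotter = s[(k + 1) % P, i, j] + s[(k - 1) % P, i, j]
            dlogw = -2 * s[k, i, j] * ((beta_m / P) * nn
                                       + (h / P) * tau[i, j]
                                       + B * trotter)
            if dlogw >= 0 or rng.random() < np.exp(dlogw):     # Metropolis acceptance
                s[k, i, j] = -s[k, i, j]
        if sweep >= burn_in:
            mag += s.mean(axis=0)
    return np.where(mag >= 0, 1, -1)

L, p = 24, 0.1
ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
xi = np.where((ii - L / 2) ** 2 + (jj - L / 2) ** 2 < (L / 3) ** 2, 1, -1)  # toy original
tau = np.where(rng.random((L, L)) < p, -xi, xi)                 # BSC corruption
restored = quantum_mpm_restore(tau, beta_m=0.9, h=0.5 * np.log((1 - p) / p), gamma=0.8)
print("overlap with the original image:", (restored == xi).mean())
```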
in order to check the usefulness of the method, we carried out quantum monte carlo simulations in realistic -dimension .we found that the results by the simulation support qualitative behavior of the analytical expressions for overlap .we introduced the iterative algorithm in terms of the mean - field approximation and applied it to image restoration of the standard pictures .we found that the quantum fluctuation suppress the error of the hyper - parameter estimation .in addition , we found that the speed of the convergence to the solution is accelerated by the quantum fluctuation .from all results obtained in this paper , we concluded that the quantum fluctuation turns out to enhance tolerance against uncertainties in hyper - parameter estimation .however , if much higher quantities of restoration are required , we must estimate those parameters using some methods .one of the strategies for this purpose is selecting the parameters and which maximize a_ marginal likelihood_. by making use of the infinite range model , the usefulness of this method can be evaluated .the details of the analysis will be reported in forth coming paper .the author acknowledges h. nishimori for fruitful discussions and useful comments .he also thanks k. tanaka for kind tutorial on the theory of image restoration and drawing his attention to reference .he acknowledges d. bolle , a. c. c. coolen , d. m. carlucci , t. horiguchi , p. sollich and k. y. m. wong for valuable discussions .the author thanks department of physics , tokyo institute of technology and department of mathematics , kings college , university of london for hospitality .this work was partially supported by the ministry of education , science , sports and culture , grant - in - aid for encouragement of young scientists , no . 11740225 , 1999 - 2000 andalso supported by the collaboration program between royal society and japanese physical society .9999 s. geman and d. geman , ieee trans . , * pami 6 * , 721 ( 1984 ) . j. marroquin , s. mitter and t. poggio , j. american stat .assoc . , * 82 * 76 , ( 1987 ). j. m. pryce and a. d. bruce , j. phys . a : math* 28 * , 511 ( 1995 ) .k. tanaka and t. horiguchi , the transactions on the institute of electronics , information and communication engineers * j80-a-12 * , 2117 ( 1997 ) ( in japanese ) .m. suzuki , prog . of theor .* 56 * 1454 ( 1976 ) .d. sherrington and s. kirkpatrick , phys .lett . * 35 * , 1792 ( 1975 ) .k. chakrabarti , a. dutta and d. sen , _ quantum ising phases and transitions in transverse ising models _ , lecture note in physics * 41 * , springer ( 1996 ) .a. j. bray and m. a. moore , j. phys .c * 13 * , l655 ( 1985 ) .h. nishimori and k. y. m. wong , phys .e * 60 * 132 ( 1999 ) .s. kirkpatrick , c. d. gelatt jr . and m. d. vecchi , science * 220 * ( 1983 ) .the standard pictures which are used in this paper are available at ftp://ftp.lab1.kuis.kyoto - u.ac.jp / pub / sidba/. t. kadowaki and h. nishimori , phys .e * 58 * 5355 ( 1998 ) .d. m. carlucci and j. inoue , phys .e * 60 * 2547 ( 1999 ) .j. inoue and d. m. carlucci , submitted to phys .e ( 2000 ) .
quantum fluctuation is introduced into the markov random fields ( mrf s ) model for image restoration in the context of the bayesian approach . we investigate the dependence of the quantum fluctuation on the quality of bw image restoration by making use of statistical mechanics . we find that the maximum posterior marginal ( mpm ) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori ( map ) estimate or the thermal fluctuation based mpm estimate . pacs numbers : 02.50.-r , 05.20.-y , 05.50.-q
[ [ program - maintenance - and - trace - analysis . ] ] * program maintenance and trace analysis .* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + several experimental studies ( e.g. , ) show that maintenance is the most expensive phase of software development : the initial development represents only 20 % of the cost , whereas error fixing and addition of new features after the first release represent , each , 40 % of the cost .thus , 80 % of the cost is due to the maintenance phase .a key issue of maintenance is program understanding . in order to fix logical errors, programmers have to analyze their program symptoms and understand how these symptoms have been produced . in order to fix performance errors ,programmers have to understand where the time is spent in the programs . in order to add new functionality , programmers have to understand how the new parts will interact with the existing ones .program analysis tools help programmers understand programs . for example, type checkers help understand data inconsistencies . slicing tools help understand dependencies among parts of a program .tracers give insights into program executions .some program analysis tools automatically analyze program execution traces .they can give very precise insights of program ( mis)behavior . we have shown how such trace analyzers can help users debug their programs . in our automated debuggers , a trace query mechanism helps users check properties of parts of traced executions in order to understand misbehavior . in this article, we show that trace analysis can be pushed toward monitoring to further help understand program behavior . whereas debuggers are tools that retrieve run - time information at specific program points , monitors collect information relative to the whole program executions .for example , some monitors gather statistics which help detect heavily used parts that need to be optimized ; other monitors build graphs ( e.g. , control flow graphs , dynamic call graphs , proof trees ) that give a global understanding of the execution .[ [ execution - monitoring . ] ] * execution monitoring . * + + + + + + + + + + + + + + + + + + + + + + + monitors are trace analyzers which differ from debuggers .monitoring is mostly a `` batch '' activity whereas debugging is mostly an interactive activity . in monitoring ,a _ set _ of properties is specified beforehand ; the whole execution is checked ; and the global collected information is displayed . in debugging ,the end - user is central to the process ; he specifies on the fly the very next property to be checked ; each query is induced by the user s current understanding of the situation at the very moment it is typed in .monitoring is therefore less versatile than debugging .the properties specified for monitoring have a much longer lifetime , they are meant to be used over several executions .it is , nevertheless , impossible to foresee all the properties that programmers may want to check on executions .one intrinsic reason is that these properties are often driven by the application domain .therefore monitoring systems must provide some genericity .[ [ existing - approaches - to - implement - monitors . ] ] * existing approaches to implement monitors .* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + unfortunately , monitors are generally implemented by ad hoc instrumentation .this instrumentation requires a significant programming effort . 
when done at a low level , for example by modifying the compiler and the runtime system , it requires deep knowledge that mostly only the language compiler implementors havehowever , the monitored information is often application - dependent , and application programmers or end - users know better what has to be monitored .but instrumenting compilers is almost impossible for them .an alternative to low - level instrumentation is source - level instrumentation ; run - time behavior information can be extracted by source - to - source transformation , as done for ml and prolog for instance . such instrumentation ,although simpler than low - level compiler instrumentation , can still be too complex for most programmers .furthermore , for certain new declarative programming languages like mercury , instrumentation may even be impossible .indeed , in mercury , the declarative semantics is simple and well defined , but the operational semantics is complex .for example , the compiler reorders goals according to its needs .furthermore , input and output can be made only in deterministic predicates .this complicates code instrumentation .thus , ad hoc instrumentation is tedious at a low level and it may be impossible at a high level .on the other hand , the difficult task of instrumenting the code to extract run - time information has , in general , already been achieved to provide a debugger .debuggers , which help users locate faults in programs are based on tracers .these tracers generate execution traces which provide a precise and faithful image of the operational semantics of the traced language .these traces often contain sufficient information to base monitors upon them .[ [ our - proposal . ] ] * our proposal . * + + + + + + + + + + + + + + + in this article, we propose a high - level primitive built on top of an event oriented execution tracer .the proposed monitoring primitive , called foldt , is a fold which operates on a list of events .an event oriented _ trace _ is a sequence of events .an _ event _ is a tuple of event attributes .event attribute _ is an elementary piece of information that can be extracted from the current state of the program execution .thus , a trace can be seen as a sequence of tuples of a database ordered by time .many tracers are event - oriented : for example , prolog tracers based on byrd box model , tracers for c such as dalek and coca , the egadt tracer for pascal , the esa tracer for ada , and the ebba tracer for distributed systems .one of the key advantages of our approach is that it allows a clean separation of concerns ; the definition of the monitors is totally distinct from both the user source code and the language compiler .we have implemented foldt on top of the mercury trace .we give a number of applications of the foldt operator to compute various monitors : execution profiles , graphical abstract views , and test coverage measurements .each example is implemented by a few lines of mercury which can be written by any mercury programmer .these applications show that the mercury trace , indeed , contains enough information to build a wide variety of interesting monitors .detailed measurements show that , under some requirements , foldt can have acceptable performance for executions of several millions of execution events .therefore our operator lays the foundation for a generic and powerful monitoring environment .the proposed scheme has been integrated into the mercury environment .it is fully operational and part of the mercury distribution 
.note that we have implemented the foldt operator on top of mercury mostly for historical reasons .we acknowledge that some of the monitors were particularly easy to write thanks to the neatness of mercury libraries , in particular the set library ( e.g. , figure [ cfg - source ] ) .nevertheless , foldt could be implemented for any system with an event - oriented tracer .[ [ plan . ] ] * plan . * + + + + + + + in section [ collect - section ] , we introduce the foldt operator and describe its current implementation on top of the mercury tracer . in section [ collect - app ] , we illustrate the genericity of foldtwith various kinds of monitors . all the examples are presented at a level of detail that does not presuppose any knowledge of mercury .section [ performance - section ] discusses performance issues of foldt .section [ collect - related ] compares our contribution with related work .a thorough description of the mercury trace can be found in appendix [ trace - appendix ] .appendix [ queens - prog - appendix ] lists a mercury program solving the n queens problem , which is used at various places in the article as an input for our monitors .in this section , we first define the foldt operator over a general trace in a language - independent manner .we describe an implementation of this operator for mercury program executions , and then present its current user interface .a trace is a list of events ; analyzing a trace therefore requires to process such a list .the standard functional programming operator fold encapsulates a simple pattern of recursion for processing lists .it takes as input arguments a function , a list , and an initial value of an accumulator ; it outputs the final value of the accumulator ; this final value is obtained by successively applying the function to the current value of the accumulator and each element of the list . as demonstrated by hutton , fold has a great expressive power for processing lists .therefore , we propose a fold - like operator to process execution traces ; we call this operator foldt . before defining foldt, we define the notions of event and trace for sequential executions .( execution event , event attributes , execution trace ) + _ an execution event _ is an element of the cartesian product , where for are arbitrary sets called _event attributes_. _ an execution trace _ is a ( finite or infinite ) sequence of execution events ; the set of all execution traces is denoted by .we note the size ( its number of events ) of a finite trace and the size of infinite traces .the following definition of foldt is a predicative definition of a fold operating on a finite number of events of a ( possibly infinite ) trace .the set of predicates over is denoted by .( foldt ) + [ foldt - def ] a _ foldt monitor of type _ is a 3-tuple : such that : , either + ( 1 ) + + ( 2 ) + + is called _ the result of the monitor on trace . we use the notation to mean that there exists a unique , and for the sequence ( in the mathematical sense ) .operationally , an accumulator of type is used to gather the collected information .it is first initialized ( ) .the predicate collect is then applied to each event of the trace in turn , updating the accumulator along the way ( ) .there are two ways to stop this process : ( 1 ) the folding process stops when the end of the execution is reached if the trace is finite ( ) ; ( 2 ) if collect fails before the end of the execution is reached ( ) . 
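These two ways of stopping are easy to mimic in any language. The sketch below is an illustrative Python rendering of the operator, not the Mercury interface: events are modelled with only a handful of attributes, and a collect that returns None plays the role of semi-deterministic failure.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional, Tuple, TypeVar

A = TypeVar("A")   # accumulator type
C = TypeVar("C")   # collected (result) type

@dataclass
class Event:
    """Simplified execution event; the real Mercury trace carries more attributes."""
    number: int
    call: int
    depth: int
    port: str        # e.g. "call", "exit", "fail", "redo"
    procedure: str

def foldt(events: Iterable[Event],
          initialize: Callable[[], A],
          collect: Callable[[Event, A], Optional[A]],
          post_process: Callable[[A], C]) -> Tuple[C, Optional[Event]]:
    """Fold collect over the event stream; stop early if collect 'fails'
    (returns None), then apply post_process.  Returns the result and the event
    at which the fold stopped (None if the whole trace was consumed)."""
    acc = initialize()
    for event in events:
        new_acc = collect(event, acc)
        if new_acc is None:                  # semidet failure: stop here
            return post_process(acc), event
        acc = new_acc
    return post_process(acc), None

# usage: count events per port over a small hand-written trace
trace = [Event(1, 1, 1, "call", "main/2"),
         Event(2, 2, 2, "call", "queens/2"),
         Event(3, 2, 2, "exit", "queens/2"),
         Event(4, 1, 1, "exit", "main/2")]
result, stopped_at = foldt(
    trace,
    initialize=dict,
    collect=lambda e, acc: {**acc, e.port: acc.get(e.port, 0) + 1},
    post_process=lambda acc: acc)
print(result)                                # {'call': 2, 'exit': 2}
```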
in both cases ,the last value of the accumulator ( ) is processed by post_process , which returns a value ( ) of type ( ) .note that this definition holds for finite and infinite traces ( thanks to the second case of definition [ foldt - def ] ) .this is convenient to analyze programs that run permanently .the ability to end the foldt process before the end of the execution is also convenient to analyze executions part by part as explained in section [ part - by - part - analysis - section ] .a further interesting property , which is useful to execute several monitors in a single program execution , is the possibility to simultaneously apply several fold on the same list using a tuple of fold ; in other words : + where : , , .we prototyped an implementation of foldt for the mercury programming language . after a brief presentation of mercury and its trace system ,we describe our foldt implementation .mercury is a logic and functional programming language .the principal differences with prolog are as follows .mercury supports functions and higher - order terms .mercury programs are free from side - effects ; even input and output are managed in a declarative way .mercury strong type , mode and determinism system allows a lot of errors to be caught at compile time , and a lot of optimizations to be done .the trace generated by the mercury tracer is adapted from byrd box model .its attributes are the event number , the call number , the execution depth , the event type ( or port ) , the determinism , the procedure ( defined by a module name , a name , an arity and a mode number ) , the live arguments , the live non - argument variables , and the goal path . a detailed description of these attributes together with an example of event is given in appendix [ trace - appendix ] .an obvious and simple way to implement foldt would be to store the whole trace into a single list , and then to apply a fold to it .this naive implementation is highly inefficient , both in time and in space .it requires creating and processing a list of possibly millions of events .most of the time , creating such a list is simply not feasible because of memory limitations . with the current mercury trace system, several millions of events are generated each second , each event requiring several bytes .to implement realistic monitors , run - time information needs to be collected and analyzed simultaneously ( on the fly ) , * without explicitly creating the trace . * in order to achieve analysis on the fly , we have implemented foldtby modifying the mercury trace system , which works as follows : when a program is compiled with tracing switched on , the generated c code is instrumented with calls to the tracer ( via the c function trace ) . before the first event ( resp . after the last one ) , a call to an initialization c function trace_init ( resp . to a finalization c functiontrace_final ) is inserted .when the trace system is entered through either one of the functions trace , trace_init , or trace_final , the very first thing it does is to look at an environment variable that tells whether the mercury program has been invoked from a shell , from the standard mercury debugger ( mdb ) , or from another debugger ( e.g. , morphine ) .we have added a new possible value for that environment variable which indicates whether the program has been invoked by foldt . 
in that case , the trace_init function dynamically links the mercury program under execution with the object file that contains the object code of collect , initialize , and post_process .dynamically linking the program to its monitor is very convenient because neither the program nor the monitor need to be recompiled .once the monitor object file has been linked with the program , the c function trace_init can call the procedure initialize to set the value of a global variable accumulator_variable ( of type ) . at each event, the c function trace calls the procedure collect which updates accumulator_variable . if collect fails or if the last event is reached , the c function trace_final calls the procedure post_process with accumulator_variable and returns the new value of this accumulator ( now of type ) . in this section, we first describe what the user needs to do in order to define a monitor with foldt .then , we show how this monitor can be invoked . ' '' '' 1 : - type accumulator_type = = < a mercury type > .initialize(accumulator ) : - < mercury goals which initialize the accumulator > .collect(event , accumulatorin , accumulatorout ) : - < mercury goals which update the accumulator > .: - type collected_type = = < a mercury type > .post_process(accumulator , foldtresult ) : - < mercury goals which post - process the accumulator > . ' '' '' we chose mercury to be the language in which users define the foldt monitors to monitor mercury programs .as a matter of fact , it could have been any other language that has an interface with c , since the trace system of mercury is written in c. the choice of mercury , however , is quite natural ; people who want to monitor mercury programs are likely to be mercury programmers .the items users need to implement in order to define a foldt monitor are given in figure [ use - collect ] .lines preceded by ` % ' are comments .first of all , since mercury is a typed language , one first needs to define the type of the accumulator variable accumulator_type ( line 2 ) .then , one needs to define initialize which gives the initial value of the accumulator , and collect which updates the accumulator at each event(line 9 ) .optionally , one can also define the post_process predicate which processes the last value of the accumulator .post_process takes as input a variable of type accumulator_type ( ) and outputs a variable of type collected_type ( ) .if collected_type is not the same type as accumulator_type , then one needs to provide its definition too ( line 13 ) .types and modes of predicates initialize , collect and post_process should be consistent with the following mercury declarations : ` : - pred `` initialize``(accumulator_type::out ) is det . `+ ` : - pred `` collect``(event::in , accumulator_type::in , ` + ` accumulator_type::out ) is semidet . `+ ` : - pred `` post_process``(accumulator_type::in , collected_type::out ) ` + ` is det . `these declarations state that initialize is a deterministic predicate ( is det ) , namely it succeeds exactly once , and it outputs a variable of type accumulator_type ; collect is a semi - deterministic predicate , namely it succeeds at most once , and it takes as input an event and an accumulator .if collect fails , the monitoring process stops at the current event. this can be very useful , for example to stop the monitoring process before the end of the execution if the collecting data is too large , or to collect data part by part ( e.g. 
, collecting the information by slices of 10000 events ) .this also allows foldt to operate over non - terminating executions .the type event is a structure made of all the event attributes .to access these attributes , we provide specific functions which types and modes are : + `` ` : - func < attribute_name>(event::in ) = < attribute_type>::out . ` '' , + which takes an event and returns the event attribute corresponding to its name .for example , the function call depth(event ) returns the depth of event .the complete list of attribute names is given in appendix [ trace - appendix ] .figure [ count - call - collect ] shows an example of monitor that counts the number of predicate invocations ( calls ) that occur during a program execution .we first import library module int ( line 1 ) to be able to manipulate integers .predicate initialize initializes the accumulator to ` 0 ' ( line 3 ) .then , for every execution event , collect increments the counter if the event port is call , and leaves it unchanged otherwise ( line 5 ) . since collectcan never fail here , the calls to collect proceed until the last event of the execution is reached . note that those five lines of code constitute _ all the necessary lines _ for this monitor to be run . for the sake of conciseness , in the following figures containing monitors, we sometimes omit the module importation directives as well as the type of the accumulator when the context makes them clear . ' '' '' 1 : - import_module int .: - type accumulator_type = = int .initialize(0 ) .collect(event , c0 , c ) : - if port(event ) = call then c = c0 + 1 else c = c0 . ' '' '' [ invoke - fold ] currently , foldt can be invoked from a prolog query loop interpreter .we could not use mercury for that purpose because there is no mercury interpreter yet .we have implemented a prolog predicate named run_mercury , which takes a mercury program call as argument , and which forks a process in which this mercury program runs in coroutining with the prolog process .the two processes communicate via sockets .when the first event of the mercury program is reached , the hand is given to the prolog process which waits for a foldt query .the command foldt has two arguments ; the first one should contain the name of the file defining the monitor to be run ; the second one is a variable that will be unified with the result of the monitor .when foldt is invoked , ( 1 ) the file containing the monitor is used to automatically produce a mercury module named foldt.m ( by adding the declarations of initialize , collect , and post_process , as well as the definitions of the event type and the attribute accessing functions ) ; ( 2 ) foldt.m is compiled , producing the object file foldt.o ; ( 3 ) foldt.o is dynamically linked with the mercury program under coroutining .of course , steps ( 1 ) and ( 2 ) are only performed if the file containing the monitor is newer than the object file foldt.o .a monitor stops either because the end of the execution is reached , or because the collect predicate failed ; in the latter case , the current event ( i.e. , the event the next query will start at ) is the one occurring immediately after the event where collect failed . ' '' '' ` [ morphine ] : run_mercury(queens ) , foldt(count_call , result ) . `+ ` ` _ ` a 5 queens solution is [ 1 , 3 , 5 , 2 , 4 ] ` _ + ` last event of queens is reached ` + ` result = 146 more ? 
( ;) ` + ` [ morphine ] : ` ' '' '' a possible session for invoking the monitor of figure [ count - call - collect ] is given in figure [ morphine - session ] . at the right - hand side of the ` : ' prompt , there are the characters typed in by a user .the line in italic is output by the mercury program ; all the other lines are output by the prolog process .we can therefore see that the program queens ( which solves the queens problem , cf appendix [ queens - prog - appendix ] ) produces 146 procedure calls .being able to call foldt from a prolog interpreter loop enables users to write scripts that control several foldt invocations .figures [ depth - monitor ] and [ depth - session ] illustrate this .the monitor of figure [ depth - monitor ] computes the maximal depth for the next 500 events . in the session of figure [ depth - session ] , a user ( via the .directive ) defines the predicate print_max_depth that calls the monitor of figure [ depth - monitor ] and prints its result in loop until the end of the execution is reached .this is useful for example for a program that runs out of stack space to check whether this is due to a very deep execution and to know at which events this occurs .note that the fact that the monitor is dynamically linked with the monitored program has an interesting side - effect here : one can change the monitor during the foldt query resolution ( by modifying the file where this monitor is defined ) .indeed , in our example , one could change the interval within which the maximal depth is searched from 500 to 100 .the monitor would be ( automatically ) recompiled , but the foldt query would not need to be killed and rerun .this can be very helpful to monitor a program that runs permanently ; the monitored program is simply suspended while the monitor is recompiled . ' '' '' 1 initialize(acc(0 , 0 ) ) .collect(event , acc(n0 , d0 ) , acc(n0 + 1 , max(d0 , depth(event ) ) ) ) : - n0 < 500 . ' '' '' ' '' '' ` [ morphine ] : [ user ] .` + ` print_max_depth : - ` + ` foldt(max_depth , acc ( _ , maxdepth ) ) , ` + ` print(````the maximal depth is ` ' ' ` ) , print(maxdepth ) , nl , ` + ` print_max_depth . `+ ` ^d ` + ` [ morphine ] : run_mercury(qsort ) , print_max_depth . `+ + ` ` _ ` the maximal depth is 54 ` _ + ` ` _ `the maximal depth is 28 ` _+ ` ` _ ` the maximal depth is 50 ` _+ ` ` _ ` [ 0 , 2 , 4 , 6 , 7 , 8 , ... , 94 , 95 , 99 , 99 ] ` _ + ` last event of qsort is reached ` + ` ` _ ` the maximal depth is 53 ` _ + ` [ morphine ] : ` ' '' '' as a matter of fact ( as the prompt suggests ) , the prolog query loop that we use is morphine , an extensible debugger for mercury `` la opium '' . 
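the slice - by - slice pattern used by the max_depth monitor and the print_max_depth loop can also be mimicked, outside mercury, by the following illustrative python sketch: the inner fold fails once the slice is full, and an outer loop resumes the fold at the event that follows the failure. the slice size of 500 follows the example above; the event representation and the use of a shared iterator to model resumption are assumptions of the sketch.

# illustrative python sketch of part-by-part trace analysis: an inner fold that "fails" after a
# fixed number of events, driven by an outer loop that resumes where the previous fold stopped.
import random

def max_depth_slice(events_iter, slice_size=500):
    count, depth = 0, 0
    for event in events_iter:          # the shared iterator models "the event the next query will start at"
        count += 1
        depth = max(depth, event["depth"])
        if count >= slice_size:        # collect fails: stop this slice
            return depth, False        # False: end of execution not yet reached
    return depth, True                 # True: last event reached

def print_max_depths(events_iter):
    finished = False
    while not finished:
        depth, finished = max_depth_slice(events_iter)
        print("the maximal depth is", depth)

# usage on a synthetic trace of 1200 events with random depths (three slices: 500 + 500 + 200):
trace = iter([{"depth": random.randint(1, 60)} for _ in range(1200)])
print_max_depths(trace)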
the basic idea of morphine is to build on top of a prolog query loop a few coroutining primitives connected to the trace system ( like foldt ) .those primitives let one implement all classical debugger commands as efficiently as their hand - written counter - parts ; the advantage is , of course , that they let users implement more commands than the usual hard - coded ones , fitting their own needs .invoking foldt from a debugger has a further advantage ; it makes it very easy to call a monitor during a debugging session , and vice versa .indeed , some monitors are very useful for understanding program runtime behavior , and therefore can be seen as debugging tools .in this section , we describe various execution monitors that can be implemented with foldt .we first give monitors which compute three different execution profiles : number of events at each port , number of goal invocations at each depth , and sets of solutions .then , we describe monitors that produce two types of execution graphs : dynamic control flow graph and dynamic call graph . finally , we introduce two test coverage criteria for logic programs , and we give the monitors that measure them .[ collect - monitoring ] ' '' '' 1 : - import_module int , array .: - type accumulator_type = = array(int ) .: - mode acc_in : : array_di .: - mode acc_out : : array_uo .initialize(array ) : - init(5 , 0 , array ) .collect(event , array0 , array ) : - port = port(event ) , port_to_int(port , intport ) , lookup(array0 , intport , n ) , set(array0 , intport , n+1 , array ) .: - pred port_to_int(port::in , int::out ) is det .port_to_int(port , number ) : - ( if port = call then number = 0 else if port = exit then number = 1 else if port = redo then number = 2 else if port = fail then number = 3 else number = 4 ) . ' '' '' in figure [ count - call - collect ] , we have given a monitor that counts the number of goal invocations .figure [ statistic - collect ] shows how to extend this monitor to count the number of events at each port .we need 5 counters that we store in an array . in the current implementation of foldt, the default mode of the second and third argument of collect , respectively equal to in and out , can be overridden ; here , we override them with array_di and array_uo ( lines 4 and 5 ) . modesarray_di and array_uo are special modes that allow arrays to be destructively updated .predicate initializecreates an array array of size 5 with each element initialized to 0 ( line 8) .predicate collect extracts the port from the current event ( line 11 ) and converts it to an integer ( line 12 ) ( cf appendix [ trace - appendix ] ) ; we ignore them here for the sake of conciseness . ] .this integer is used as an index to get ( lookup/3 ) and set ( set/4 ) array values . the goal lookup(array0 , intport , n ) returns in n the intport element of array0 .the goal set(array0 , intport , n+1 , array ) sets the value n+1 in the intport element of array0 and returns the resulting array in array .[ histogramme - collect - section ] ' '' '' 1 initialize(acc ) : - init(32 , 0 , acc ) .collect(event , acc0 , acc ) : - ( if port(event ) = call then depth = depth(event ) , ( if semidet_lookup(acc0 , depth , n ) then set(acc0 , depth , n+1 , acc ) else size(acc0 , size ) , resize(acc0 , size*2 , 0 , acc1 ) , set(acc1 , depth , 1 , acc ) ) else acc = acc0 ) . 
' '' '' figure [ histogram - collect ] implements a monitor that counts the number of calls at each depth .predicate initialize creates an array of size 32 with each element initialized to 0 ( line 4 ) . at call events ( line 7 ), predicate collect extracts the depth from the current event ( line 8) and increments the corresponding counter ( lines 10 and 14 ) .whenever the upper bound of the array is reached , i.e. , whenever semidet_lookup/4 fails ( line 9 ) , the size of the array is doubled ( lines 13 ) . ' '' '' 1 : - type solution >proc_name / arguments .: - type accumulator_type = = list(solution ) .initialize ( [ ] ) .collect(event , accin , accout ) : - ( if port(event ) = exit , solution = proc_name(event)/arguments(event ) , not(member(solution , accin ) ) then accout = [ solution | accin ] else accout = accin ) . ' '' '' the monitor of figure [ solutions - collect ] collects the solutions produced during the execution .we define the type solution as a pair containing a procedure and a list of arguments ( line 1 ) .the collected variable is a list of solutions ( line 2 ) , which is initialized to the empty list ( line 4 ) . if the current port is exit ( line 8) and if the current solution has not been already produced ( lines 9,10 ) , then the current solution is added to the list of already collected solutions ( line 12 ) . note that for large programs , it would be better to use a table from predicate names to set of solutions instead of lists .other execution abstract views that are widely used and very useful in terms of program understanding are given in terms of graphs . in the following ,we show how to implement monitors that generate graphical abstractions of program executions such as control flow graphs and dynamic call graphs .we illustrate the use of these monitors by applying them to the queens program given in appendix [ queens - prog - appendix ] .this line program generates events for a board of . in this article, we use the graph drawing tool dot .more elaborated visualization tools such as in would be desirable , especially for large executions .this is , however , beyond the scope of this article . ' '' '' ' '' ''we define the _ dynamic control flow graph _ of a logic program execution as the directed graph where nodes are predicates of the program , and arcs indicate that the program control flow went from the origin to the destination node .the dynamic control flow graph of the 5 queens program is given in figure [ cfg - eps ] .we can see , for example , that , during the program execution , the control moves from predicate main/2 to predicate data/1 , from predicate data/1 to predicate data/1 and predicate queen/2 .note that such a graph ( or variants of it ) is primarily useful for tools and only secondarily for humans . ' '' '' 1 : - type predicate > proc_name / arity .: - type arc > arc(predicate , predicate ) .: - type graph = = set(arc ) .: - type accumulator_type > collected_type(predicate , graph ) .initialize(collected_type(``user''/0 , set__init ) ) .collect(event , acc0 , acc ) : - port = port(event ) , ( if ( port = call ; port = exit ; port = fail ; port = redo ) then acc0 = collected_type(previouspred , graph0 ) , currentpred = proc_name(event ) / proc_arity(event ) , arc = arc(previouspred , currentpred ) , set__insert(graph0 , arc , graph ) , acc = collected_type(currentpred , graph ) else acc = acc0 ) . 
' '' '' an implementation of a monitor that produces such a graph is given in figure [ cfg - source ] .graphs are encoded by a set of arcs , and arcs are terms composed of two predicates ( lines 1 to 3 ) .the collecting variable is composed of a predicate and a graph ( line 4 ) ; the predicate is used to remember the previous node .the collecting variable is initialized with the fake predicate user/0 , and the empty graph ( line 6 ) . at call ,exit , redo , and fail events ( line 10 ) , we insert in the graph an arc from the previous predicate to the current one ( lines 11 to 14 ) . ' '' '' ' '' '' note that in our definition of dynamic control flow graph , the number of times each arc is traversed is not given .even if the control goes between two nodes several times , only one arc is represented .one can imagine a variant where , for example , arcs are labeled by a counter ; one just needs to use multi - sets instead of sets .the result of such a variant applied to the 5 queens program is displayed figure [ cfg - cpt ] .note that here , the queens program was linked with a version of the library that has been compiled without trace information .this is the reason why one should not be surprised not to see any call to , e.g , io__write_string/3 in this figure . ' '' '' ' '' ''a _ static call graph _ of a program is a graph where the nodes are labeled by the predicates of the program , and where arcs between nodes indicate potential predicate calls .we define the _ dynamic call graph _ of a logic program execution as the sub - graph of the ( static ) call graph composed of the arcs and nodes that have actually been traversed during the execution . for example , in figure [ dcg - eps ] , we can see that predicate main/2 calls predicates data/1 , queen/2 , and print_list/2 . in this particular example , the static and dynamic call graphs are identical . ' '' '' 1 : - type accumulator_type > ct(stack(predicate ) , graph ) .initialize(ct(stack , set__init ) ) : - stack__push(stack__init , `` user''/0 , stack ) .collect(event , ct(stack0 , graph0 ) , acc ) : - port = port(event ) , currentpred = proc_name(event ) / proc_arity(event ) , update_call_stack(port , currentpred , stack0 , stack ) , ( if port = call then previouspred = stack__top_det(stack0 ) , set__insert(graph0 , arc(previouspred , currentpred ) , graph ) , acc = ct(stack , graph ) else acc = ct(stack , graph0 ) ) .: - pred update_call_stack(trace_port_type::in , predicate::in , stack(predicate)::in , stack(predicate)::out ) is det .update_call_stack(port , currentpred , stack0 , stack ) : - ( if ( port = call ; port = redo ) then stack__push(stack0 , currentpred , stack ) else if ( port = fail ; port = exit ; port = exception ) then stack__pop_det(stack0 , _ , stack ) else stack = stack0 ) . ' '' '' an implementation of a monitor that builds the dynamic call graphs is given in figure [ dcg - source ] . in order to define this monitor, we use the same data structures as for the previous one , except that we replace the last traversed predicate by the whole call stack in the collected variable type ( line 2 ) .this stack is necessary in order to be able to get the direct ancestor of the current predicate .the set of arcs is initialized to the empty set ( lines 4 ) and the stack is initialized to a stack that contains a fake node user/0 ( line 5 ) . in order to construct the set of arcs ,we insert at call events an arc from the previous predicate to the current one ( line 12 ) . 
the call stack is maintained on the fly by the update_call_stack/4 predicate ; the current predicate is pushed onto the stack at call and redo events ( line 22 ) , and popped at exit , fail , and exception events ( line 24 ) .the result of the execution of this monitor applied to the 5 queens program is displayed in figure [ dcg - eps ] .note that the call stack is actually available in the mercury trace .we have intentionally not use it here for didactical purpose in order to demonstrate how this information can easily ( but not cheaply ! )be reconstructed on the fly .[ tests ] in this section , we define two notions of test coverage for logic programs , and we show how to measure the corresponding coverage rate of mercury program executions using the foldt primitive . the aim here is not to provide the ultimate definition of test coverage for logic programs , but rather to propose two possible definitions , and to show how the corresponding coverage rate measurements can be quickly prototyped . as a consequence ,the proposed monitors can not pretend to be optimal either in functionality , or in implementation .the aim of test coverage is to assess the quality of a test suite . in particular, it helps to decide whether it is necessary to generate more test cases or not . for a given coverage criterion , one can decide to stop testing when a certain percentage of coverage is reached .the usual criterion used for imperative languages are _ instruction _ and _ branch _ criteria .the _ instruction coverage rate _ achieved by a test suite is the percentage of instructions that have been executed .the _ branch coverage rate _ achieved by a test suite is the percentage of branches that have been traversed during its execution .one of the weaknesses of instruction and branch coverages is due to boolean expressions .the problem occurs when a boolean expression is composed by more than one atomic instruction : it may be that a test suite covers each value of the whole condition without covering all values of each atomic part of the condition .for example , consider the condition ` a or b ' and a test suite where the two cases ` , ' and ` , ' are covered . in that case , every branch and every instruction is exercised , and nevertheless , b never succeeded. if b is erroneous , even % instruction and branch coverage will miss it .whereas in imperative programs , you get conditional branches only in the conditions of if - then - else and loops , in logic programs you get them at every unification and call ( whose determinism allows failure ) ; therefore this issue is crucial for logic programs . in order to address the above problem , we need a coverage criterion that checks that each single predicate defined in the tested program succeeds and fails a given number of times .but we do not want to expect every predicate to fail because some , like printing predicates , are intrinsically deterministic .therefore , we want a criterion that allows the test designer to specify how many times a predicate should succeed and fail . therefore we define a _ predicate criterion _ as a pair composed of a predicate and a list of exit and fail . 
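to make the predicate criterion concrete before discussing the mercury - specific criteria below, the following illustrative python sketch counts, per predicate, the exit and fail events observed during an execution and compares the counts against a user - supplied criterion. the criterion format (a mapping from predicate names to minimal numbers of exits and fails) and the event representation are assumptions of the sketch; the monitors actually used in the article are of course written in mercury.

# illustrative python sketch of a predicate-coverage measurement in the foldt style.
def initialize():
    return {}                                         # predicate -> [exit_count, fail_count]

def collect(event, acc):
    if event["port"] in ("exit", "fail"):
        key = event["proc_name"] + "/" + str(event["arity"])
        counts = acc.setdefault(key, [0, 0])
        counts[0 if event["port"] == "exit" else 1] += 1
    return acc

def coverage_rate(acc, criterion):
    # criterion: predicate -> (minimal number of exits, minimal number of fails)
    covered = sum(1 for pred, (min_exit, min_fail) in criterion.items()
                  if acc.get(pred, [0, 0])[0] >= min_exit
                  and acc.get(pred, [0, 0])[1] >= min_fail)
    return 100.0 * covered / len(criterion) if criterion else 100.0

# usage: a deterministic predicate is only expected to succeed, a semi-deterministic one to do both.
criterion = {"main/2": (1, 0), "nodiag/3": (1, 1)}
acc = initialize()
for e in [{"port": "exit", "proc_name": "main", "arity": 2},
          {"port": "exit", "proc_name": "nodiag", "arity": 3},
          {"port": "fail", "proc_name": "nodiag", "arity": 3}]:
    acc = collect(e, acc)
print(coverage_rate(acc, criterion))                  # -> 100.0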
in the case of mercury , we can take advantage of the determinism declaration to automatically determine if a predicate should succeed and fail . here is an example of predicate criterion that can be automatically defined according to the determinism declaration of each predicate ( the table mapping each determinism declaration to the expected numbers of exit and fail events is omitted here ) . the event structure is illustrated by figure [ event_structure ] . the displayed structure is related to an event of the execution of a qsort program which sorts the list of integers using a _ quick sort _ . the information contained in that structure indicates that qsort : partition/4 - 0 is currently invoked , it is the tenth trace event being generated , the sixth goal being invoked , and it has four ancestors ( depth is 5 ) . at this point , only the first two arguments of partition/4 are instantiated : the first one is bound to the list of integers and the second one to the integer 3 ; the third and fourth arguments are not live , which is indicated by the atom ` - ' . there are two live local variables : h , which is bound to the integer 1 , and t , which is bound to the list of integers . the goal path tells that this event occurred in the then branch ( t ) of the second conjunction ( c2 ) of the first switch ( s1 ) of partition/4 .

.....
: - module queens .

nodiag ( _ , _ , [ ] ) .
nodiag(b , d , [ n|l ] ) : -
    nmb is n - b ,
    bmn is b - n ,
    ( d = nmb -> fail ; d = bmn -> fail ; true ) ,
    d1 is d + 1 ,
    nodiag(b , d1 , l ) .
program execution monitoring consists of checking whole executions for given properties , and collecting global run - time information . monitoring helps programmers maintain their programs . however , application developers face the following dilemma : either they use existing monitoring tools which never exactly fit their needs , or they invest a lot of effort to implement relevant monitoring code . in this article we argue that , when an event - oriented tracer exists , the compiler developers can enable the application developers to easily code their own monitors . we propose a high - level primitive , called foldt , which operates on execution traces . one of the key advantages of our approach is that it allows a clean separation of concerns ; the definition of monitors is totally distinct from both the user source code and the language compiler . we give a number of applications of the use of foldt to define monitors for mercury program executions : execution profiles , graphical abstract views , and test coverage measurements . each example is implemented by a few lines of mercury . * keywords : * monitoring , automated debugging , trace analysis , test coverage , mercury .
the enhancement and detection of elongated structures is important in many biomedical image analysis applications .these tasks become problematic when multiple elongated structures cross or touch each other in the data . in these casesit is useful to decompose an image in local orientations by constructing an orientation score . in the orientation score ,we extend the domain of the data to include orientation in order to separate the crossing or touching structures ( fig .[ fig:2dos ] ) . from 3d data we construct a 3d orientation score , in a similar way as is done for the more common case of 2d data and 2d orientation score .next , we consider operations on orientation scores , and process our data via orientation scores ( fig .[ fig : overviewoperations ] ) .for such operations it is important that the orientation score transform is invertible , in a well - posed manner . in comparison to continuous wavelet transforms on the group of 3d rotations , translations and scalings, we use all scales simultaneously and exclude the scaling group from the wavelet transform and its adjoint , yielding a coherent state type of transform , see app.a .this makes it harder to design appropriate wavelets , but has the computational advantage of only needing a single scale transformation .the 2d orientation scores have already showed their use in a variety of applications . in orientation scores were used to perform crossing - preserving coherence - enhancing diffusions .these diffusions greatly reduce the noise in the data , while preserving the elongated crossing structures .next to these generic enhancement techniques , the orientation scores also showed their use in retinal vessel segmentation , where they were used to better handle crossing vessels in the segmentation procedure . to perform detection and enhancement operations on the orientation score ,we first need to transform a given greyscale image or 3d dataset to an orientation score in an invertible way . in previous worksvarious wavelets were introduced to perform a 2d orientation score transform .some of these wavelets did not allow for an invertible transformation ( e.g. gabor wavelets ) .a wavelet that allows an invertible transformation was proposed by kalitzin .a generalization of these wavelets was found by duits who derived a unitarity result and expressed the wavelets in a basis of eigenfunctions of the harmonic oscillator .this type of wavelet was also extended to 3d .this wavelet however has some unwanted properties such as poor spatial localization ( oscillations ) and the fact that the maximum of the wavelet did not lie at its center ( * ? ? ?4.11 ) . in class of cake - wavelets were introduced , that have a cake - piece shaped form in the fourier domain ( fig .[ fig : cakewavelets ] ) .the cake - wavelets simultaneously detect oriented structures and oriented edges by constructing a complex orientation score .because the different cake - wavelets cover the full fourier spectrum , invertibility is guaranteed .in this paper we propose an extension of the 2d cake - wavelets to 3d .first , we discuss the theory of invertible orientation score transforms .then we construct 3d cake - wavelets and give an efficient implementation using a spherical harmonic transform .finally we mention two application areas for 3d orientation scores and show some preliminary results for both of them . 
in the first application, we present a practical proof of concept of a natural extension of the crossing preserving coherence enhancing diffusion on invertible orientation scores ( cedos ) to the 3d setting . compared to the original idea of coherence enhancing diffusion acting directly on image - data have the advantage of preserving crossings .diffusions on se(3 ) have been studied in previous ssvm - articles , see e.g. , but the full generalization of cedos to 3d was never established .is correlated with an oriented filter to detect structures aligned with the filter orientation .bottom left : this is repeated for a discrete set of filters with different orientations .bottom right : the collection of 3d datasets constructed by correlation with the different filters is an orientation score and is visualized by placing a 3d dataset on a number of orientations.,title="fig:",scaledwidth=60.0% ] + is correlated with an oriented filter to detect structures aligned with the filter orientation .bottom left : this is repeated for a discrete set of filters with different orientations .bottom right : the collection of 3d datasets constructed by correlation with the different filters is an orientation score and is visualized by placing a 3d dataset on a number of orientations.,title="fig:",scaledwidth=70.0% ]an invertible orientation score :{\mathbb{r}}^3 \times s^2 \rightarrow { \mathbb{c}} ] .we can further simplify the reconstruction for wavelets for which ({\boldsymbol{\omega } } ) \d \sigma({\mathbf{n } } ) \approx 1 ] by )({\mathbf{x}},{\mathbf{n}}_i)=(\overline{\psi_{{\mathbf{n}}_i } } \star f)({\mathbf{x } } ) .\label{eq : construction1discrete}\ ] ] the exact reconstruction formula is in the discrete setting given by )({\mathbf{x } } ) \\ & = \mathcal{f}_{{\mathbb{r}}^3}^{-1 } \left [ ( m_\psi^d)^{-1 } \mathcal{f}_{{\mathbb{r}}^3 } \left [ \tilde { { \mathbf{x } } } \rightarrow \sum_{i=1}^{n_o } ( \check { \psi}_{{\mathbf{n}}_{i } } \star { \mathcal{w}}_\psi^d[f](\cdot,{\mathbf{n}}_i))(\tilde { { \mathbf{x } } } ) \d \sigma({\mathbf{n}}_i ) \right ] \right ] ( { \mathbf{x } } ) , \end{split } \label{eq : reconstruction1discrete}\ ] ] with the discrete spherical area measure which for reasonably uniform spherical sampling can be approximated by , and ({\boldsymbol{\omega } } ) \right|^2 \d \sigma({\mathbf{n}}_i).\ ] ] again , an exact reconstruction is possible iff .a class of 2d cake - wavelets , see , was successfully used for the 2d orientation score transformation .we now generalize these 2d cake - wavelets to 3d cake - wavelets .our 3d transformation using the 3d cake - wavelets should fulfill a set of requirements , compare : 1 .the orientation score should be constructed for a finite number ( ) of orientations .2 . the transformation should be invertible and all frequencies should be transferred equally to the orientation score domain ( ) .3 . the kernel should be strongly directional .4 . the kernel should be polar separable in the fourier domain , i.e. , , with . because by definition the wavelet has rotational symmetry around the -axis we have .5 . the kernel should be localized in the spatial domain , since we want to pick up local oriented structures .the real part of the kernel should detect oriented structures and the imaginary part should detect oriented edges . the constructed oriented score is therefore a complex orientation score .we now discuss the procedure used to make 3d cake - wavelets . 
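before doing so, the discrete construction of eq. ( [ eq : construction1discrete ] ) and the approximate reconstruction discussed above can be illustrated by the following numpy sketch, which correlates the data with a set of orientation - selective filters in the fourier domain and sums the back - projected channels. the random filters used here are placeholders for the cake - wavelets constructed below, and the assumption that the squared filters sum to one (so that the approximate reconstruction is accurate) is built into the example.

# illustrative numpy sketch: build a discrete orientation score by fourier-domain correlation
# with a set of oriented filters, and reconstruct approximately by summing the filtered channels.
import numpy as np

def orientation_score(f, psi_hat_list):
    # f: 3d volume; psi_hat_list: fourier transforms of the oriented wavelets.
    # correlation with psi corresponds to multiplication by conj(psi_hat) in the fourier domain.
    f_hat = np.fft.fftn(f)
    return [np.fft.ifftn(np.conj(p_hat) * f_hat) for p_hat in psi_hat_list]

def reconstruct_approx(score, psi_hat_list):
    # accurate whenever sum_i |psi_hat_i|^2 is (approximately) one, the design goal of the cake-wavelets
    rec_hat = sum(p_hat * np.fft.fftn(u_i) for u_i, p_hat in zip(score, psi_hat_list))
    return np.real(np.fft.ifftn(rec_hat))

# usage with random placeholder filters normalised so that their squared sum equals one:
shape = (32, 32, 32)
raw = [np.random.rand(*shape) for _ in range(6)]
norm = np.sqrt(sum(r ** 2 for r in raw))
psi_hats = [r / norm for r in raw]
f = np.random.rand(*shape)
U = orientation_score(f, psi_hats)
print("max reconstruction error:", np.abs(reconstruct_approx(U, psi_hats) - f).max())

with this pipeline in mind, we return to the construction of the wavelets themselves.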
according to requirement 4we only consider polar separable wavelets in the fourier domain , so that . for the radial function we use , as in , which is a gaussian function with scale parameter multiplied by the taylor approximation of its reciprocal to order to ensure a slower decay .this function should go to 0 when tends to the nyquist frequency .therefore the inflection point of this function is fixed at with by setting . in practicewe have , and because radial function causes to become really small when coming close to the nyquist frequency , reconstruction eq . becomes unstable .we solve this by either using approximate reconstruction eq . or by replacing , with small .both make the reconstruction stable at the cost of not completely reconstructing the highest frequencies which causes some additional blurring .we now need to find an appropriate angular part for the cake - wavelets .first , we specify an orientation distribution , which determines what orientations the wavelet should measure . to satisfy requirement 3 this function should be a localized spherical window , for which we propose a b - spline , with and the order b - spline given by the parameter determines the trade - off between requirements 2 and 3 , where higher values give a more uniform at the cost of less directionality .first consider setting so that has compact support within a convex cone in the fourier domain .the real part of the corresponding wavelet would however be a plate detector and not a line detector ( fig .[ fig : cakepiececreatesplatedetector ] ) .the imaginary part is already an oriented edge detector , and so we set where the real part of the earlier found wavelet vanishes by anti - symmetrization of the orientation distribution while the imaginary part remains . as to the construction of , there is the general observation that we detect a structure that is perpendicular to the shape in the fourier domain , so for line detection we should aim for a plane detector in the fourier domain . to achieve thiswe apply the funk transform to , and we define where integration is performed over denoting the great circle perpendicular to .this transformation preserves the symmetry of , so we have .thus , we finally set for an overview of the transformations see fig .[ fig : cakewavelets ] . in subsection [ ssect : cake ] we defined the real part and the imaginary part of the wavelets in terms of a given orientation distribution . in order to efficiently implement the various transformations ( e.g. funk transform ) , and to create the various rotated versions of the wavelet we express our orientation distribution in a spherical harmonic basis up to order : because of the rotational symmetry around the -axis, we only need the spherical harmonics with , i.e. , . for determining the spherical harmonic coefficients we use the pseudo - inverse of the discretized inverse spherical harmonic transform ( see ( * ? ? ?* section 7.1 ) ) , with discrete orientations given by an icosahedron of tesselation order 15 .according to , the funk transform of a spherical harmonic equals with the legendre polynomial of degree evaluated at .we can therefore apply the funk transform to a function expressed in a spherical harmonic basis by a simple transformation of the coefficients .we have .we therefore anti - symmetrize the orientation distribution eq .( [ eq : antisymmetrize ] ) via . to make the rotated versions of wavelet we have to find in . 
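two of the ingredients just described can be made concrete in a short numpy sketch: the radial profile (a gaussian multiplied by the taylor polynomial of its reciprocal, which is one formula consistent with the description above), and the funk transform applied in the spherical harmonic domain as a per - degree scaling by a factor proportional to the legendre polynomial evaluated at zero (the constant 2*pi assumed below is not stated in the text). the parameter values are arbitrary.

# illustrative numpy sketch of the radial profile and of the funk transform acting on
# spherical-harmonic coefficients (m = 0 only, by the rotational symmetry of the wavelet).
import numpy as np

def radial_profile(rho, s, order):
    # gaussian times the order-'order' taylor polynomial of its reciprocal (slower decay than a gaussian)
    t = rho ** 2 / (4.0 * s)
    term = np.ones_like(t)
    taylor = np.ones_like(t)
    for k in range(1, order + 1):
        term = term * t / k
        taylor = taylor + term
    return np.exp(-t) * taylor

def legendre_at_zero(l):
    # P_l(0): zero for odd l, (-1)^(l/2) (l-1)!!/l!! for even l
    if l % 2 == 1:
        return 0.0
    val = 1.0
    for k in range(1, l // 2 + 1):
        val *= -(2.0 * k - 1.0) / (2.0 * k)
    return val

def funk_transform_sh(coeffs):
    # per-degree scaling; odd degrees are annihilated, consistent with the symmetrisation above
    return np.array([2.0 * np.pi * legendre_at_zero(l) * c for l, c in enumerate(coeffs)])

# usage: a radial profile sampled up to a (normalised) nyquist frequency, and a toy coefficient vector
rho = np.linspace(0.0, np.pi, 64)
profile = radial_profile(rho, s=0.4, order=8)
print(funk_transform_sh(np.array([1.0, 0.5, 0.25, 0.1])))

we now return to the question, raised just above, of obtaining the rotated versions of the wavelet.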
to achieve this we use the steerability of the spherical harmonic basis .spherical harmonics rotate according to the irreducible representations of the so(3 ) group ( wigner - d functions ) here and denote the euler angles with counterclockwise rotations , i.e. , .this gives because both anti - symmetrization and funk transform preserve the rotational symmetry of , we have , and eq . ( [ eq : rotationh ] )reduces to now use the invertible orientation score transformation to perform data - enhancement according to fig . [fig : overviewoperations ] .because is not a lie group , it is common practice to embed the space of positions and orientations in the lie group of positions and rotations se(3 ) by setting with any rotation for which .this holds in particular for orientation scores .the operations which we consider are scale spaces on se(3 ) ( diffusions ) , and are given by with here is the solution of } , \label{eq:}\ ] ] where in coherence enhancing diffusion on orientation scores ( cedos ) is adapted locally to data } ] . is an isometric transform onto a unique closed reproducing kernel space with as an -subspace .we distinguish between the isometric wavelet transform and the unitary wavelet transform .we drop the formal requirement of being square - integrable and being admissible in the sense of , and replace the requirement by , as it is not strictly needed in many cases .this includes our case of interest and its left - regular action on where gives rise to an orientation score with _ any _ rotation mapping onto and symmetric around the -axis . herethe domain is the coupled space of positions and orientations : , cf . . from the general theory of reproducing kernel spaces , ( where one does not even rely on the group structure ), it follows that is unitary , where denotes the abstract complex reproducing kernel space consisting of functions on with reproducing kernel with left - regular representation given by .now , as the characterization of the inner product on is awkward , we provide a basic characterization next via the so - called inner product .this is in line with the admissibility conditions in .[ mpsirecon ] let be such that ( [ eq : admissibilityrequirement ] ) holds .then is unitary , and we have where ,\mathcal{t}_{m_{\psi}}[{\mathcal{w}}_\psi g])_{\mathbb{l}_{2}({\mathbb{r}}^3\rtimes s^2))} ] .let on .the space is a closed subspace of hilbert space , where \in\mathbb{l}_{2}(\mathbb{r}^3)\}$ ] , and projection of embedding space onto the space of orientation scores is given by , where is the natural extension of the adjoint to the embedding space .
the enhancement and detection of elongated structures in noisy image data is relevant for many biomedical applications . to handle complex crossing structures in 2d images , 2d orientation scores were introduced , which already showed their use in a variety of applications . here we extend this work to 3d orientation scores . first , we construct the orientation score from a given dataset , which is achieved by an invertible coherent state type of transform . for this transformation we introduce 3d versions of the 2d cake - wavelets , which are complex wavelets that can simultaneously detect oriented structures and oriented edges . for efficient implementation of the different steps in the wavelet creation we use a spherical harmonic transform . finally , we show some first results of practical applications of 3d orientation scores . * keywords : * orientation scores , reproducing kernel spaces , 3d wavelet design , scale spaces on se(3 ) , coherence enhancing diffusion on se(3 )
current dense and future super dense mobile broadband networks are subject to various scenarios of simultaneous interfering communication links . in cellular networks ,interference from neighboring base stations ( bss ) is still one of the most prominent performance degradation factors resulting in outages or performance losses at the cell edges as well as increasing the need for complex handovers .a classical approach to tackle interference is through medium access control and medium sharing techniques , which in turn severely compromise the performance of each individual user in the network due to explicit time sharing over the common resources . as we move towards denser networks with bss and access points covering smaller areas to get antennas closer to the users, interference is becoming increasingly challenging .interference management in cellular networks has been first and foremost implemented through smart reuse of network resources , mostly through the so - called frequency division multiple access ( fdma ) techniques .previous generations of cellular network standards employed orthogonal _ reuse- _ schemes , where neighboring cells do not interfere on each others resources . a frequency band used by a cell is not allowed , in this paradigm , to be used by neighboring cells , thereby greatly lowering the inter - cell interference floor . while the previous generation of mobile communications , namely universal mobile telecommunications system ( umts ) , moved from the reuse- to a reuse- paradigm , todays long - term evolution ( lte ) specifications include a more fine - grained approach . in classically deployed networks with large homogeneous cells ,a core observation was that interference is mainly an issue for mobile terminal ( mts ) laying far from their respective bss , i , at the cell edges . according to this approach , lte bss separate frequency bands dynamically andensure that those allocated to the cell edges are non - overlapping .such fractional frequency reuse ( ffr ) schemes are a very efficient form of interference management as it requires relatively low coordination from the bss part . on the other hand, it may require more advanced power control in the downlink , and from the network point of view , bss inefficiently use the time and frequency resources .capitalizing on the wide deployment of multiple antennas , especially at the bs side , and the advances in multi - antenna signal processing techniques , a new approach for interference management has made its way into mobile communication standards .coordinated multi - point ( comp ) is a broad umbrella name for coordination schemes that aim at realizing multi - user communications , i sharing the medium among multiple network nodes over space on top of the possible sharing over time and frequency resources .focusing on the downlink and considering joint transmission ( jt ) comp , in the theoretical limit of infinitely many distributed antennas , one could exactly pinpoint each mt and ensure that the signal intended for it adds up at its position , while creating no interference for the other mts in the network . in this case ,interference is not only removed , but is actually harnessed and exploited to increase the received signal power at each mt . 
however , for the practical implementation of jt comp schemes , sharing of channel state information ( csi ) and data for the targeted mts among the coordinated bss as well as tight synchronization at the data level among them are necessary .these requirements are actually constituting the major downfall of jt comp in practical cellular networks , rendering hard to achieve its theoretical gains in practice . on top of that ,it was shown in that , imperfect and/or outdated csi and uncoordinated interference have a very large impact on the performance of conventional jt comp schemes .practical radio - frequency ( rf ) components , such as oscillators with phase noise , were also shown to have a similar effect . as an alternative to jt comp for the downlink of cellular networks , coordinated beamforming ( cb )is based on shared knowledge of the spatial channels between the coordinated bss and their intended mts to separate the different data streams without exchanging mts data . as such, cb schemes come with less stringent synchronization and coordination requirements , while retaining at least a large part of the jt comp performance . with cb , coordinated bss only share csi , andas long as the csi is up to date , synchronization is unneeded and each bs in the coordination cluster may transmit independently .recent releases of the lte specifications by the 3^rd^ generation partnership project ( 3gpp ) have integrated the necessary elements to estimate the interfering channels on the mt part , with added reference signals and coordination of these signals among the coordinated bss .3gpp also included advanced 3-dimensional ( 3d ) beamforming capabilities and more complex antenna patterns in the latest standards as well as associated simulation tools .although the standardization of csi exchange between bss is still left to the discretion of the vendors , the aforementioned improvements enable the practical implementation of cb schemes , on which we focus the present article .the theoretical design of cb schemes has been lately the subject of many research papers , of which representative examples are . among these schemes , some target at the so - called multiple - input multiple - output ( mimo ) _ interference channel ( ifc ) _ , where each multi - antenna bs belonging to the coordination cluster wishes to serve exactly one multi - antenna mt , while are intended to the more general mimo _ interference broadcast channel ( ibc ) _ , where each coordinated multi - antenna bs may serve concurrently more than one multi - antenna mts . 
in this article , we present comparative performance evaluation results among the recent cb schemes , which constitute future candidates for implementation in practical cellular networks due to their offered theoretical performance gains coming with reduced coordination overhead , and their increased level of compatibility to the latest relevant standards specifications .to advocate on the adequacy of interference coordination , only at the beamforming level , as an enabling approach for boosting the performance of dense networks , we consider as example scenarios of interest small - cell network deployments , where high capacity and tightly synchronized on the signal level links among the bss belonging in a coordination cluster are not feasible .in such scenarios , coordination may be fully dynamic as a result of a scheduling mechanism , and hence , carried out through dedicated wireless links .we focus on revealing the potential resilience of the cb schemes to uncoordinated interference and investigating their performance with standardized feedback .the latter goal may also serve as an indicator of the impact of the quality or latency of csi to the performance of the considered schemes . to achieve the former goal, we propose a parametric system model where the powers of intra - cluster interference ( ici ) and out - of - cluster interference ( oci ) are defined relatively to the power of the desired signal .the impact of oci on both the clustered and centralized cb schemes designed for the ifc , and on the decentralized schemes that can be applied to the ibc is assessed .we then discuss how to adapt these schemes in current and future standards , and how practical feedback and quantization may impact their performance .finally , we conclude with some specific research directions , that may be pursued to improve the performance and integration of cb schemes in future lte networks and beyond .to investigate the impact of interference in coordinated transmission schemes , we hereinafter present a simple system model that captures the relative effect of ici and oci in the received signal . for the interference experienced by each mt associated to a bs belonging to a coordination cluster , we make the following assumptions : * the aggregate ici is of relative power ] compared to that of the desired signal .this parameter indicates the effectiveness of bs clustering for coordinated transmission .low values of indicate that most of the interference for a specific mt has been included within the cluster . using the latter two assumptions ,the proposed system model is mathematically described as follows .we consider an infinitely large cellular network from which we single out bss , indexed in the set , to form a coordination cluster . on some time - frequency resource unit, the bs cluster aims at providing service to mts indexed in the set .a set of mts associated to bs is denoted by such that , all sets for all form a partition of the set . 
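the symbols for the relative ici and oci powers are lost in the extraction above; in the illustrative numpy sketch below they are simply called alpha and beta. the sketch only shows one way to synthesise received samples that respect the normalisation described above (unit desired power per receive antenna, relative ici power alpha split over the intra - cluster interferers, relative oci power beta with nakagami - m amplitudes); it is not the simulation code used for the figures.

# illustrative numpy sketch of the parametric interference model: unit-power desired link,
# intra-cluster interference of relative power alpha, out-of-cluster interference of relative
# power beta with nakagami-m distributed amplitudes, and thermal noise set by the snr.
import numpy as np

rng = np.random.default_rng(0)

def cn(*shape):
    # i.i.d. circularly symmetric complex gaussian entries of unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def received_vector(alpha, beta, snr_db, nr=2, nt=2, n_intra=2, m=1.0):
    y = (cn(nr, nt) / np.sqrt(nt)) @ cn(nt)                     # desired signal, unit power per antenna
    for _ in range(n_intra):                                    # ici: total relative power alpha
        y += np.sqrt(alpha / n_intra) * (cn(nr, nt) / np.sqrt(nt)) @ cn(nt)
    amp = np.sqrt(rng.gamma(shape=m, scale=beta / m, size=nr))  # nakagami-m amplitudes, E[amp^2] = beta
    y += amp * np.exp(2j * np.pi * rng.random(nr))              # aggregate oci
    y += 10 ** (-snr_db / 20.0) * cn(nr)                        # thermal noise
    return y

print(received_vector(alpha=1.0, beta=0.1, snr_db=10.0))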
without loss of generality , we assume that each bs is equipped with a -element antenna array whereas , each mt has antennas .let also represent the -dimension vector with the information bearing signal transmitted from the bs and intended for the mt .then , the baseband received -dimension vector at the mt can be expressed as where denotes the channel matrix between the mt and the bs , and the -dimension vector is the oci , for which we model the amplitude of its elements as independent and identically distributed nakagami- random variables .it can be shown that this modeling of oci includes that of .in addition , the -dimension vector represents the noise modeled as additive white gaussian such that .we further normalize the channel matrices in order to have , in average , ici power at the signal level as and oci power at the signal level as , where we have assumed that for with .the system model of is capable of describing a wide range of interference scenarios by varying the parameters and as well as the distribution of , thereby capturing how interference coordination might perform for mts in different network setups .an example illustration of this model is depicted in fig [ fig : general_ibc ] .the three bss in the center of the figure are assumed to form a coordination cluster .the mts falling into the regions covered by these bss are subject to relative interference from intra - cluster bss , and aggregate interference from each of the out - of - cluster bss .in this section , the system model of section [ sec : cellular_model ] is first employed to a simplistic cellular network in order to demonstrate the theoretical gains of jt comp and cb schemes over representative non - coordinated ones as well as to compare jt comp with cb .then , we present performance comparisons among cb schemes requiring full csi exchange among coordinating bss as well as schemes that operate with limited coordination overhead .the compared schemes differ on the considered design objective and the level of taking network interference under consideration .bss , each designed to service a group of mts ( on the bottom as black and grey dots ) .bss are separated into two groups , with one of the groups including the bss in the center that form a coordination cluster .each mt associated to a bs belonging in the latter cluster is subject to ici of relative power ( in red ) as well as to oci of relative power ( in yellow).,width=624 ] we consider a cluster of bss as a part of a large cellular network , which aims at serving mts in every time - frequency resource unit ; one mt is associated to the one bs and the other mt to the other bs . focusing on the presented system model and using the classical bounds for the individual mt rates in multiple - input single - output ( miso ) ifcs , it holds that : multi - antenna bss and single - antenna mt assigned per bs .the case where is suitable for describing cell - edge mts which are subject to ici having the same relative strength with their intended signal . 
in both cases , with respect to the power of the intended signal.,width=470 ] * with full reuse of time - frequency resources , each mt is subject to interference from every bs not associated with , and its rate is upper bounded as ; * with orthogonal allocation of the resources , ici is absent but the prelog factor appears on each individual mt rate , yielding ; * with the cb scheme based on interference alignment ( ia ) , ici can be completely nulled , and the individual mt rate becomes ; and * with ideal jt comp , the interference power actually boosts the intended signals and the individual mt rate is given by .the latter rates for each individual mt are sketched in fig [ fig : comparisons_toy ] with oci being db lower than that of the power of the intended signal , i , , and for two different values of , which reveals the relative power of ici .as expected , both coordinated transmission schemes provide substantial gains compared with full reuse and orthogonal transmission when the network operates in the interference - limited regime , i , when increases . as approaches the gain of jt comp over ia decreases .for example , for db and in fig [ fig : comparisons_toy ] , ia results in a nearly gain over orthogonal transmission while , this gain becomes nearly for jt comp .when decreases , the latter gain of ia remains the same whereas , that of jt comp decreases to nearly .this example illustrates that , in many cases of interest , a large part of the coordination gain comes more from the removal of interference from the signal of interest rather than from stacking the powers of multiple bss .it is also noted that , when considering practical implementation issues in achieving jt comp , the bonus of full coordination becomes even lower , since jt comp is more afflicted by degraded csi and dirty rf than cb . and .the coordination cluster comprises of -antenna bss and -antenna mt associated with each bs . a maximum of iterations was used for each of the iterative schemes maximum sinr , wmmse , and reconfigurable .the performance of full reuse and orthogonal mimo transmission is also depicted.,width=470 ] we hereinafter focus on the -user ifc , which constitutes a special case of the system model of section [ sec : cellular_model ] where each comprises of exactly one mt . 
in figs[ fig : ifc_results_1 ] and [ fig : ifc_results_2 ] , we consider a coordination cluster of bss with and , and compare the ergodic performance with optimum receivers for different values of and , and spatially independent rayleigh fading of the following cb schemes : _ i _ ) ia that aims at aligning , and then nulling interference at each mt belonging in the bs cluster ; _ ii _ ) maximum signal - to - interference - plus - noise ratio ( sinr ) that targets at maximizing the received sinr of each transmitted information data stream in the cluster ; _ iii _ ) weighted minimum mean squared error ( wmmse ) that minimizes a metric for the whole network that is based on the mmse ; and _ iv _ ) reconfigurable .the latter scheme combines a network - wide mmse criterion with the single - user mimo waterfilling solution in order to maximize the rate of each mt associated with the coordination cluster , accordingly to the condition of its desired channel and the whole network s interference level .although , for all aforementioned cb schemes , we consider here a centralized implementation with full csi exchange among coordinating bss , it is noted that , for the maximum sinr , wmmse , and the reconfigurable schemes , distributed versions are also available , where explicit csi exchange among bss is avoided , and thus , coordination overhead can be potentially reduced . and . the coordination cluster comprises of -antenna bss and -antenna mt associated with each bs .a maximum of iterations was used for each of the iterative schemes maximum sinr , wmmse , and reconfigurable .the performance of full reuse and orthogonal mimo transmission is also depicted.,width=470 ] the ia , maximum sinr , wmmse , and the reconfigurable cb schemes are linear schemes , which means that each bs transmits its signal using precoded symbols as , where represents the precoding matrix and is the -dimension information stream vector . upon signal reception ,each mt estimates the desired transmitted symbols using a decoding matrix , forming . for ia and the maximum sinr schemes in figs [ fig : ifc_results_1 ] and [ fig : ifc_results_2 ] , each set to according to the ia feasibility conditions , and for all was obtained in closed form for ia and iteratively for maximum sinr .for both the iterative schemes wmmse and reconfigurable , each was initialized as and obtained at the end of the algorithmic iterations or upon convergence , explicitly for the reconfigurable scheme and implicitly for wmmse together with all matrices . 
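since the precoding and decoding matrices of the schemes above appear only abstractly in the extracted text, the following numpy sketch shows, for a single mt of the cluster, how a generic unit - power linear precoder and an mmse (irc - type) receive filter act on the interference model of the previous section and how the resulting post - filtering sinr can be evaluated. this is generic linear processing for illustration only, not a re - implementation of ia, maximum sinr, wmmse or the reconfigurable scheme.

# illustrative numpy sketch: linear transmit precoding, an mmse (irc) receive filter, and the
# resulting post-filtering sinr for one mt facing a single intra-cluster interferer.
import numpy as np

rng = np.random.default_rng(1)
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

nt, nr = 4, 2                                      # bs antennas, mt antennas (one stream per mt)
h_des, h_int = cn(nr, nt), cn(nr, nt)              # desired and interfering channels
f_des = cn(nt, 1); f_des /= np.linalg.norm(f_des)  # unit-power placeholder precoders
f_int = cn(nt, 1); f_int /= np.linalg.norm(f_int)
sigma2 = 0.01                                      # noise power

q = h_int @ f_int @ f_int.conj().T @ h_int.conj().T + sigma2 * np.eye(nr)  # interference+noise covariance
g = h_des @ f_des                                                          # effective desired channel
w = np.linalg.solve(q + g @ g.conj().T, g)                                 # mmse / irc receive filter

sinr = (np.abs(w.conj().T @ g) ** 2 / np.real(w.conj().T @ q @ w)).item()
print("post-filtering sinr [dB]:", 10 * np.log10(sinr))

we now return to the behaviour of the iterative schemes at convergence.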
more specifically , the reconfigurable scheme outputs to be sent by each coordinated bs together with their beamforming directions whereas , wmmse only generates the transmit covariances matrices with possibly some streams set to zero power , and thus unusable .this means that , for the latter scheme the optimum needs to be searched in some way , a fact that will cause an extra overhead in practical networks and possibly decrease performance .as it can be concluded from figs [ fig : ifc_results_1 ] and [ fig : ifc_results_2 ] for a maximum of iterations per iterative scheme , the performance of all considered cb schemes is susceptible to ici and oci .this behavior for oci was also observed in for ia .for example , for db , it is shown that the performance of all cb schemes drops approximately between the two interference scenarios , according to which decreases from to and increases from to .interestingly , for the considered interference cases in both figures and db , the maximum sinr , wmmse , and reconfigurable schemes , that take oci interference under consideration , provide equal to or slightly more than improvement compared to ia .this behavior witnesses that maximum sinr , wmmse , and reconfigurable schemes are highly resilient to the values , however , their resilience to values is low , especially for wmmse and high values .this result tends to reinforce the necessity of considering oci when designing cb schemes , and justify their study under practical network conditions . as also demonstrated in fig [ fig : ifc_results_2 ], the majority of the cb schemes perform very close or slightly better to full reuse and for db , all cb schemes outperform orthogonal mimo transmissions. however , for db , orthogonal transmissions is the best option , a fact that witnesses that to achieve the best performance for general values of , the coordination cluster needs to adopt a dual - mode operation , which switches depending on the values between the reconfigurable cb scheme for example and orthogonal mimo transmissions . in the cb schemes discussed before ,mts served by the clustered bss are assumed to be clustered so as to create a separate group .this transpires in the current lte standard , in particular , lte release describes a _ comp cluster _ in which bss may coordinate their transmissions .this comp cluster forms the basis into which the techniques may be implemented , although as we will discuss in the following section , information exchange between the coordinated bss is still not standardized . 
inside a comp cluster , a mt may estimate the channels of its interferers through specific csi structures and commands . this csi may then be used to compute interference - aware receive filters or fed back to its associated bs for further processing . the bss inside a comp cluster may also be able to exchange csi when operating in time - division duplexing mode , by making use of the channel reciprocity property that is lacking in the more common frequency - division duplexing ( fdd ) mode . notwithstanding , csi exchange among coordinated bss is still a complex operation ; it weighs heavily on the backbone network , and as of today , there is also no specific standardized mechanism on how or when to transfer this information . therefore , the straightforward implementation of the described cb schemes outside of a vendor - locked configuration is still out of reach . one example of a cb scheme with limited coordination overhead is the downlink ia presented in , which is suitable for the more general ibc . this scheme capitalizes on the standard 's specifications to allow each mt to estimate its strongest interferer and feed back good precoder candidates to its associated bs . consider a cellular network with coordination clusters of bss , where each coordinated bs aims at sending a single information stream to its associated mts . the basic idea of downlink ia is to force the signal received at each mt associated with the cluster from the non - intended coordinated bs into a signal subspace of rank , thus freeing up one decoding dimension from interference for the desired signal . these decoding directions create equivalent miso channels for the mts associated with the cluster , which the mts can then feed back to their associated bs . each bs can then employ any multi - user mimo technique , such as zero - forcing beamforming , to multiplex the information streams towards its respective mts . the benefit of the downlink ia scheme is that , although each bs only frees a single dimension , the interference - free direction is different for each mt , thereby enabling multi - user diversity . one can contrast downlink ia with ffr schemes , where the dimension freed in frequency is the same for all mts . in our performance evaluation , we further assume that each mt can learn the precoder chosen by the bs for its stream at the end . since the mts have already estimated the channel from the non - intended clustered bs , they have the necessary information to update their receivers to interference rejection combining ( irc ) receivers , as described in . [ figure caption : and for an ibc with -antenna nodes and a main interferer for each coordinated bs ; for both the eigenbeams and downlink ia schemes , irc receivers have been used ; the performance of the wmmse scheme is also illustrated for comparison purposes . ] the performance of downlink ia with irc receivers over spatially independent rayleigh fading channels is illustrated in fig . [ fig : ibc_results ] as a function of the for different values of , , , , and . within this figure , we also plot the performance of a more classical multi - user mimo scheme in which interference is not exploited , and for which the decoding direction of each mt assigned to the cluster is chosen as the strongest eigenvector of its intended channel ; this scheme is denoted as the eigenbeams scheme . note that with the eigenbeams scheme each bs can support mts , whereas with downlink ia each bs serves mts .
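the receive - side steps of the downlink ia scheme can be sketched in a few lines . the snippet below assumes , purely for illustration , that the dominant interferer 's signal arrives along a single known direction ( the rank reduction being provided by the coordination ) , computes the decoding direction orthogonal to it , feeds back the equivalent miso channels , applies zero - forcing at the bs , and finally refines each receiver to an irc / mmse filter . the interference model , the noise level , and all symbol names are simplifying assumptions of ours , not the exact construction of the cited scheme .

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, noise = 2, 2, 0.1        # antennas per node, mts per bs, noise power (assumed values)
crandn = lambda *s: (rng.normal(size=s) + 1j * rng.normal(size=s)) / np.sqrt(2)

H = [crandn(N, N) for _ in range(K)]     # desired channels, serving bs -> mt k
G = [crandn(N, N) for _ in range(K)]     # channels from the dominant interfering bs
w_int = crandn(N)
w_int = w_int / np.linalg.norm(w_int)    # interferer's (coordinated) transmit direction

# 1) each mt picks a decoding direction orthogonal to the interference it receives
u, h_eq = [], []
for k in range(K):
    i_dir = G[k] @ w_int
    q, _ = np.linalg.qr(np.column_stack([i_dir, np.eye(N)]))
    u.append(q[:, 1])                          # unit vector orthogonal to i_dir
    h_eq.append(q[:, 1].conj() @ H[k])         # equivalent miso channel, fed back to the bs

# 2) the bs multiplexes its mts with zero-forcing over the fed-back miso channels
W = np.linalg.pinv(np.array(h_eq))
W = W / np.linalg.norm(W, axis=0, keepdims=True)

# 3) knowing the precoders, each mt refines its receiver to an irc (mmse) filter
for k in range(K):
    ext = G[k] @ w_int
    Rk = np.outer(ext, ext.conj()) + noise * np.eye(N)
    for j in range(K):
        if j != k:
            v = H[k] @ W[:, j]
            Rk += np.outer(v, v.conj())        # intra-cell leakage after zf mismatch
    s = H[k] @ W[:, k]
    u_irc = np.linalg.solve(Rk, s)
    sinr = np.abs(u_irc.conj() @ s) ** 2 / np.real(u_irc.conj() @ Rk @ u_irc)
    print(f"mt {k}: post-irc sinr = {sinr:.2f}")
```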
as seen from the figure , and as expected , for cell - edge mts there is potentially a very large sinr gain coming from the removal of the interfering coordinated bs . in that case , downlink ia shows a gain over the eigenbeams scheme in the average sum rate per coordinated bs . on the other hand , the gain of downlink ia for mts that are not at the cell edge , and thus do not experience a strong interferer , is reduced . as highlighted by the theoretical example in section [ sec : example ] , the performance of downlink ia depends on the values of both and ; if is much larger than , we have a strong interest in removing the interference even if it means being somewhat misaligned with our own channel . on the other hand , if the remaining oci is on the level of the ici , downlink ia provides smaller gains than a straightforward multi - user mimo scheme like eigenbeams . this is in line with recent analyses , e.g. in , where it was shown that blindly applying ia in a clustered cellular network is altogether detrimental . we can also conclude from fig . [ fig : ibc_results ] that the performance of the wmmse scheme is poor in this context , since it aims at minimizing the interference from the non - intended clustered bs even if it has to shut down transmissions to its mts . at convergence of this iterative algorithm , a subset of the mts will experience a very high sinr , but since some streams will be unused , the overall performance is lower than that of downlink ia or eigenbeams . the cb schemes presented in the previous section necessitate some sort of csi exchange among the bss belonging to the coordination cluster . however , there is still no standardized mechanism in the current lte specifications for full csi exchange in cellular networks . this means that non - proprietary attempts at achieving cb are not truly possible as of today . as such , cb is not feasible outside of a vendor - locked coordinated set of bss or a single bs with remote radio heads . this precludes many of the presented advanced cb schemes , which require csi exchange and possibly joint computation of the transmission parameters among the coordinated bss . [ figure caption : -antenna nodes and a main interferer for each coordinated bs ; in all cases , and , and mts are assumed to compute the optimal irc receivers having knowledge of the precoder chosen by their assigned bs . ] focusing on the lte release feedback specifications , we henceforth compare the performance of the downlink ia and eigenbeams schemes for the ibc scenario of fig . [ fig : ibc_results ] under standardized feedback , and contrast it with the ideal feedback case . in particular , the csi feedback needed in these schemes is limited to a channel quality indicator ( cqi ) and a precoding matrix indicator ( pmi ) for each frequency subband . the physical layer procedures related to this feedback and the pmi codebooks are described in . in fig . [ fig : standard_ibc ] , we evaluate the performance of downlink ia and eigenbeams with practical feedback using -antenna network nodes and a -bit codebook that creates a family of possible precoders . to apply this codebook to the considered ibc scenario , we feed back the equivalent channel by mapping it , via the pmi , to the closest precoder in the family .
as depicted in fig . [ fig : standard_ibc ] , this procedure results in a net performance loss of about for the downlink ia scheme and for the eigenbeams scheme . it can be shown that this loss is not entirely linked to the somewhat coarse feedback quantization , but rather to the way the codebook is constructed in . in fact , increasing the number of bits in the feedback scheme , while keeping the same codebook construction , does not improve the performance substantially . this indicates that the sheer number of bits for the feedback channel is not itself the strongest indicator of feedback quality , and that codebook construction is in fact a fundamental question . higher precision in the feedback process as well as accurate csi estimation are thus still two of the key questions to answer today for coordinated transmission schemes , as well as for many other channel - dependent signal processing techniques . in addition , practical csi exchange between bss participating in a coordination cluster is undefined as of the latest lte release . there is actually no standardized way of encoding csi in fdd systems . the specifications of the backbone communications in a comp set are also left to vendor implementations , precluding any inter - vendor comp set from being set up in practice . as network deployments become denser , interference arises as a dominant performance degradation factor that is almost irrespective of the underlying physical - layer technology . the feature of coordinating bs transmissions to manage interference in cellular networks is already a part of the latest lte release , offering significant potential for performance improvement , especially at the cell edges . among the recently proposed coordination schemes , there exist cb schemes that require coordination overhead that is more or less compatible with the current standard 's specifications , adapt satisfactorily to ici , and show some resilience to oci . however , to maximize the benefit from cb in future communication networks , certain advances need to take place . one of these is bs clustering , which needs to be both dynamic and scalable . efficient clustering methods , based for example on network connectivity or received sinr , that keep oci levels to a minimum can be combined with cb schemes to boost network performance . another necessary advance in coordination schemes is the design of techniques for information exchange with low overhead among the coordinated network nodes . the coordination overhead of the latest cb schemes is still far from what can be supported in the current lte release . this necessity becomes even more prominent in fully distributed cb schemes , where information needs to be exchanged iteratively between transmitters and receivers . in fact , it is yet unclear how to practically implement iterative cb schemes and their required information exchange . there are open issues in the actual form the information messages will take , the structure of the message - passing shells , and , most importantly , the quantization that has to be done on the message content .
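the codebook - based feedback discussed above ultimately reduces to a nearest - codeword search : the mt maps its equivalent channel to the best - aligned precoder of a finite family and reports only that index . the sketch below illustrates this pmi selection with a random unit - norm codebook , which is only a stand - in for the standardized lte codebook and says nothing about its actual construction .

```python
import numpy as np

def pmi_select(h_eq, codebook):
    """index of the codebook precoder best aligned with the equivalent channel."""
    h = h_eq / np.linalg.norm(h_eq)
    gains = [float(np.abs(h.conj() @ w) ** 2) for w in codebook]
    return int(np.argmax(gains)), max(gains)

rng = np.random.default_rng(2)
nt, bits = 4, 4
raw = rng.normal(size=(2 ** bits, nt)) + 1j * rng.normal(size=(2 ** bits, nt))
codebook = [v / np.linalg.norm(v) for v in raw]          # stand-in for the lte codebook
h_eq = rng.normal(size=nt) + 1j * rng.normal(size=nt)    # equivalent (miso) channel to quantize
pmi, gain = pmi_select(h_eq, codebook)
print(f"reported pmi = {pmi}, alignment |h^H w|^2 = {gain:.2f} (1.0 would be lossless)")
```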
up to this day, there is little research on designing cb schemes where the iterative computation supports noisy or quantized messages .this is also related to the accuracy of csi as measured by members of the coordination cluster .last but not least , coordination schemes need to be designed to account for the characteristics of technologies intended for next generation networks , such as for example full duplex radios and massive mimo with possibly hybrid analog and digital processing .v. jungnickel , k. manolakis , w. zirwas , b. panzner , v. braun , m. lossow , m. sternad , r. r. apelfrjd , and t. svensson , `` the role of small cells , coordinated multipoint , and massive mimo in 5 g , '' _ ieee commun . mag ._ , vol .52 , no . 5 , pp . 4451 , may 2014 .a. s. hamza , s. s. khalifa , h. s. hamza , and k. elsayed , `` a survey on inter - cell interference coordination techniques in ofdma - based cellular networks , '' _ ieee commun .surveys & tut ._ , vol . 15 , no . 4 , pp .16421670 , aug .j. lee , y. kim , h. lee , b. l. ng , d. mazzarese , j. liu , w. xiao , and y. zhou , `` coordinated multipoint transmission and reception in lte - advanced systems , '' _ ieee commun . mag ._ , vol .50 , no .4450 , nov .d. gesbert , s. hanly , h. huang , s. s. shamai , o. simeone , and w. yu , `` multi - cell mimo cooperative networks : a new look at interference , '' _ ieee j. sel . area ._ , vol . 28 , no . 9 , pp .13801408 , dec . 2010 .r. irmer , h. droste , p. marsch , m. grieger , g. fettweis , s. brueck , h .-mayer , l.thiele , and v. jungnickel , `` coordinated multipoint : concepts , performance , and field trial results , '' _ ieee commun . mag ._ , vol .49 , no . 2 , pp .102111 , feb . 2011 .k. gomadam , v. r. cadambe , and s. a. jafar , `` distributed numerical approach to interference alignment and applications to wireless interference networks , '' _ ieee trans . on inf .57 , no . 6 , pp . 33093322 , jun. 2011 .q. shi , m. razaviyayn , z .- q .luo , and c. he , `` an iteratively weighted mmse approach to distributed sum - utility maximization for a mimo interfering broadcast channel , '' _ ieee trans . on signal process ._ , vol .59 , no . 9 , pp .43314340 , sep . 2011 . g. c. alexandropoulos and c. b. papadias , `` a reconfigurable iterative algorithm for the -user mimo interference channel , '' _ signal process .( elsevier ) _ , vol .93 , no . 12 , pp .33533362 , dec . 2013 .r. w. heath , jr ., t. wu , y. h. kwon , and a. c. k. soong , `` multiuser mimo in distributed antenna systems with out - of - cell interference , '' _ ieee trans . on signal process ._ , vol .59 , no .10 , pp . 48854899 , oct .
modern cellular networks in traditional frequency bands are notoriously interference - limited , especially in urban areas , where base stations are deployed in close proximity to one another . the latest releases of long term evolution ( lte ) incorporate features for coordinating downlink transmissions as an efficient means of managing interference . recent field trial results and theoretical studies of the performance of joint transmission ( jt ) coordinated multi - point ( comp ) schemes revealed , however , that their gains are not as high as initially expected , despite the large coordination overhead . these schemes are known to be very sensitive to defects in synchronization or information exchange between coordinating base stations , as well as to uncoordinated interference . in this article , we review recent advanced coordinated beamforming ( cb ) schemes as alternatives , requiring less overhead than jt comp while achieving good performance in realistic conditions . by stipulating that , in certain lte scenarios of increasing interest , uncoordinated interference constitutes a major factor in the performance of comp techniques at large , we hereby assess the resilience of state - of - the - art cb schemes to uncoordinated interference . we also describe how these techniques can leverage the latest specifications of current cellular networks , and how they may perform when standardized feedback and coordination are considered . this allows us to identify some key roadblocks and research directions to address as lte evolves towards the future of mobile communications .
due to the proliferation of devices such as smart phones , tablets , and personal digital assistants ( pdas ) , and the exponential growth of the number of subscribers , the world has recently witnessed a dramatic increase in wireless traffic . the low - hanging fruit in terms of spectral efficiency gains of traditional point - to - point links has been picked , as these links have reached their theoretical limits ; only incremental gains in spectral efficiency appear feasible at this point . thus , is it possible to significantly improve the overall spectral efficiency of networks any further ? note that wireless nodes traditionally operate in half - duplex ( hd ) mode by separating the uplink and downlink channels into orthogonal signaling ( time or frequency ) slots . full - duplex ( fd ) mode ( i.e. , both uplink and downlink on the same channel at the same time ) , if possible , has the potential to double the spectral efficiency instantly . the tremendous implication of fd wireless nodes is thus not only to transform cellular network designs radically , but also to double the capacity , speed , or number of subscribers of cellular networks . however , a key challenge in implementing an fd transceiver is the presence of loopback interference ( li ) . since the li is caused by the self - transmitted signal in the transceiver , fd radio was until recently considered practically infeasible . this long - held pessimistic view has been challenged in the wake of recent advances in antenna design and the introduction of analog / digital signal processing solutions . to this end , several single and multiple antenna fd implementations have been developed through new li cancellation techniques . antenna separation / radio frequency ( rf ) shielding , analog / digital , and hybrid analog - digital circuit domain approaches can achieve significant levels of li cancellation in single antenna fd systems . multiple antenna li suppression / cancellation techniques are largely based on the use of directional antennas and spatial domain cancellation algorithms . the implementation of single antenna fd technology with li cancellation was demonstrated in . the authors in and characterized the spatial suppression of li in fd relaying systems . a multiple - input multiple - output ( mimo ) fd implementation ( midu ) was presented in , while reported the design and implementation of an in - band wifi - phy based fd mimo system . fd systems find use in several new applications that exploit their ability to transmit and receive simultaneously . examples include one - way and two - way fd relay transmission , simultaneous sensing / operation in cognitive radio systems , and reception / jamming for enhanced physical layer security . another possible advantageous use of fd communications is simultaneous uplink ( ul ) / downlink ( dl ) transmission in wireless systems such as wifi and cellular networks . however , such transmissions introduce li and internode interference in the network , as the dl transmission will be affected by the li and the ul user will interfere with the dl reception . therefore , in the presence of li and internode interference , it is not clear whether fd applied to ul / dl user settings can harness performance gains . to this end , several works in the literature have presented useful results considering topological randomness , which is a main characteristic of wireless networks . a new modeling approach that captures topological randomness in the network geometry and is capable of producing tractable analytical results is based on stochastic geometry .
in a fd cellular analytical model based on stochastic geometry was used to derive the sum capacity of the system .however , assumed perfect li cancellation and therefore , the effect of li is not included in the results .the application of fd radios for a single small cell scenario was considered in . specifically in this work , the conditions where fd operation provides a throughput gain compared to hd and the corresponding throughput results using simulations were presented . in ,the combination of fd and massive mimo was considered for simultaneous ul / dl cellular communication .the information theoretic study presented in , has investigated the rate gain achievable in a fd ul / dl network with internode interference management techniques . in , joint precoder designs to optimize the spectral and energy efficiency of a fd multiuser mimo system were presented .however considered fixed user settings for performance analysis and as such the effect of interference due to distance , particularly relevant for wireless networks with spatial randomness , is ignored .tools from stochastic geometry has been used to analyze the throughput of fd networks in .specifically , studied the throughput of multi - tier heterogeneous networks with a mixture of access points ( aps ) operating either in fd or hd mode .the throughput gains of a wireless network of nodes with both fd and hd capabilities has been quantified in , while analyzed the mean network throughput gains due to fd transmissions in multi - cell wireless networks . in this paper, we consider a wireless network scenario in which a fd ap is communicating with the single antenna spatially random user terminals to support simultaneous ul and dl transmissions . specifically we consider a poisson point process ( ppp ) for the dl users and assume that the scheduled ul user is located distance apart .the ap employs multiple antennas and therefore , precoding can be applied for proper weighting of the transmitted and received signals and spatial li mitigation / cancellation .we develop a performance analysis and characterize the network performance using ul and dl average sum rate as the metric .further , we present insightful expressions to show the effect of network parameters such as the user spatial density , the li and internode interference ( through ul / dl user distance parameter ) on the average sum rate . our contributions are summarized as follows : * we consider both li and internode interference and derive expressions for the ul and dl average sum rate when several precoding techniques are applied at the ap . specifically , precoding schemes based on the maximum ratio combining ( mrc)/ maximal ratio transmission ( mrt ) , zero - forcing ( zf ) for li cancellation and the optimal precoding scheme for sum rate maximization are investigated . in order to highlight the system behavior and shed insights into the performance , simple expressions for certain special cases are also presented .further , as an immediate byproduct , the derived cumulative density functions ( cdfs ) of the signal - to - interference noise ratios ( sinrs ) can be used to evaluate the system s ul and dl outage probability .* our findings reveal that for a fixed li power , when the internode interference is increased , the scheme achieves a better performance than the scheme . on the other hand , by keeping the amount of internode interference constant , while decreasing the li the scheme performs better than the scheme. 
moreover , in the presence of li , increasing the receive antenna number at the fd ap with the scheme , is more beneficial in terms of the average sum rate than increasing the number of transmit antennas at the ap .* we compare the sum rate performance of the system for fd and hd modes of operation at the ap to elucidate the snr regions where fd outperforms hd .our results reveal that , the choice of the linear processing play a critical role in determining the fd gains .specifically , optimal design can achieve up to average sum rate gain in comparison with hd scheme in all li regimes .however , at high li strength as well as high transmit power regime ( i.e. , db ) , fd mode with scheme becomes inferior as compared to the hd mode .moreover , our results indicate that different power levels at the ap and ul user have a significant adverse effect to decrease the average sum rate in the hd mode of operation than the fd counterpart .the rest of the paper is organized as follows .section [ sec : system model and assumption ] presents the system model and section [ sec : performance analysis ] analyzes the ul and dl average sum rate of different precoding schemes applied at the ap .we compare the sum rate of the counterpart hd mode of operation as well as some special cases in section iv .we present numerical results and discussions in section [ sec : numerical results ] before concluding in section [ sec : conclusion ] .* notation : * we will follow the convention of denoting vectors by boldface lower case and matrices in capital boldface letters .the superscripts , , and , denote the conjugate transpose , euclidean norm , the trace of a matrix , and the matrix inverse , respectively . stands for the expectation of random variable and is the identity matrix of size . a circularly symmetric complex gaussian random variable with mean and variance represented as . is the exponential integral ( * ? ? ?* eq . ( 8.211.1 ) ) . is the gauss hypergeometric function ( * ? ? ?* eq . ( 9.111 ) ) and is the appell hypergeometric function ( * ? ? ?* eq . ( 5.8.2 ) ) .is the meijer g - function ( * ? ? ?* eq . ( 9.301 ) ) and is parabolic cylinder function ( * ? ? ?* eq . ( 9.241.2 ) ) .consider a single cell wireless system with an ap , where data from users in the ul channel , and data to the users in the dl channel are transmitted and received at the same time on the same frequency as shown in fig .[ fig : system model ] .all users in the cell are located in a circular area with radius and the ap is located at the center .we assume that users are equipped with a single antenna , while the ap is equipped with receive and transmit antennas for fd operation .the single antenna assumption is made for several pragmatic reasons .first , most mobile handsets are single antenna devices .second , since in the case of multiple antennas , the capacity is unknown or at least it will be a complicated optimization problem .third , single antenna user equipment is also an exceedingly common assumption made in massive mimo and other wireless literature .also since we assume multiple - antenna ap , it can cancel li and provide a good rate for the ul / dl user etc . 
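the transmit strategies analyzed in this setup can be pictured with a short numpy sketch : mrt steered towards the dl user , mrc at the receive side for the ul user , and a zero - forcing precoder that nulls the residual li seen after the mrc combiner . the channel dimensions , the convention used for the beamforming gain , and the residual - li metric below are illustrative assumptions of ours rather than the paper's exact formulation .

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nr = 4, 4                                   # transmit / receive antennas at the fd ap (toy)
crandn = lambda *s: (rng.normal(size=s) + 1j * rng.normal(size=s)) / np.sqrt(2)

h_d = crandn(nt)         # ap -> dl user channel
h_u = crandn(nr)         # ul user -> ap channel
H_aa = crandn(nr, nt)    # residual loopback-interference channel

w_r = h_u / np.linalg.norm(h_u)                 # mrc receive filter for the ul stream

# mrt: transmit along the dl channel, ignoring the li it creates at the receive side
w_mrt = h_d.conj() / np.linalg.norm(h_d)

# zf: transmit in the null space of a = H_aa^H w_r so the combined li vanishes
a = H_aa.conj().T @ w_r
P = np.eye(nt) - np.outer(a, a.conj()) / np.linalg.norm(a) ** 2
w_zf = P @ h_d.conj()
w_zf = w_zf / np.linalg.norm(w_zf)

for name, w in (("mrc/mrt", w_mrt), ("mrc/zf", w_zf)):
    dl_gain = np.abs(h_d @ w) ** 2
    res_li = np.abs(w_r.conj() @ H_aa @ w) ** 2
    print(f"{name}: dl beamforming gain = {dl_gain:.2f}, residual li after combining = {res_li:.2e}")
```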
in the sequel , we use subscript- for the ul user , subscript- for the dl user , and subscript- for the ap .similarly , we will use subscript- , subscript- , subscript- , and subscript- to denote the ap - to - ap , ap - to - dl user , ul user - to - dl user , and ul user - to - ap channels , respectively .we model the locations of the dl users inside the disk as an independent two - dimensional homogeneous ppp with spatial density .the ap selects a dl user that is physically nearest to it as well as an ul user distance away can also be random without affecting the main conclusions , since we can always derive the results by first conditioning on and then averaging over .the fixed inter user distance assumption can be shown to preserve the integrity of conclusions even with random transmit distances . ] from the dl user in a random direction of angle . in our setup , we study a worst - case scenario where users are located in the smallest allowed distance as this scenario serves as a useful guideline for practical fd network design .parametrization in terms of the inter user ul / dl distance has also been adopted by some of the existing papers . ]the ap selects a dl user that is physically nearest to it .we use the terms `` nearest dl user '' and `` scheduled dl user '' interchangeably throughout the paper to refer to this user .selection of a nearest user is necessary for an fd ap since transmitting very high power signals towards distant periphery users in order to guarantee a quality - of - service can cause overwhelming li at the receive side of the ap .moreover , cell sizes have been shrinking progressively over generations of network evolution . therefore in some next generation networkseach user will be in the coverage area of an ap and can be considered as a nearest user . as a benchmark comparison we also consider the random user selection ( rus ) in section [ sec : numerical results ] . under rusthe ap randomly selects one of all candidate dl users with equal probability . for a more realistic propagation model , we assume that the links experience both large - scale path loss effects and small - scale rayleigh fading . for the large - scale path loss ,we assume the standard singular path loss model , , where denotes the path loss exponent and is the euclidean distance between two nodes .if is at the origin , the index will be omitted , i.e. , . in order to facilitate the analysis, we now set up a polar coordinate system in which the origin is at the ap and the scheduled dl user is at .therefore , we have . in the following , we will need the exact knowledge of the spatial distribution of the in terms of and . since we assume that nearest dl user is scheduled for downlink transmission, denotes the distance between the ap and the nearest dl user .therefore , the probability distribution function ( pdf ) of the nearest distance for the homogeneous ppp with intensity is given by . moreover , angular distribution is uniformly distributed over ] where represents the residue at , after some manipulations , can be expressed as where and . in general , the double integral in does not admit a simple analytical solution for an arbitrary value of .however , the cdf can be straightforwardly evaluated using numerical integration . 
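the nearest - neighbour distance law used above is easy to check numerically : for a homogeneous ppp of intensity lambda on the plane , the distance to the nearest point has pdf f ( r ) = 2 pi lambda r exp ( - pi lambda r^2 ) and mean 1 / ( 2 sqrt ( lambda ) ) . the sketch below drops a ppp realization in the cell disk , records the distance from the ap to the closest dl user , and compares the empirical mean with the infinite - plane prediction ; the numerical values of the density and cell radius are arbitrary toy choices .

```python
import numpy as np

rng = np.random.default_rng(4)
lam, R, trials = 0.05, 40.0, 20000          # density [users/m^2], cell radius [m]: toy values

def nearest_dl_distance():
    n = rng.poisson(lam * np.pi * R ** 2)   # number of dl users in the disk
    if n == 0:
        return np.inf
    return (R * np.sqrt(rng.uniform(size=n))).min()   # uniform radii in a disk

d = np.array([nearest_dl_distance() for _ in range(trials)])
d = d[np.isfinite(d)]
print("empirical mean nearest distance :", round(d.mean(), 3))
print("1/(2*sqrt(lambda)) prediction    :", round(1.0 / (2.0 * np.sqrt(lam)), 3))
```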
by substituting into , after some manipulations , the exact average capacity of the ul user can be written as the following propositions characterize for the interference - limited scenario with and the special cases with and correspond to free space propagation and typical rural areas , respectively , and constitute useful bounds for practical propagation conditions ] and .[ propos : average capacity of the ul user mrc / mrt ] the spatial average rate of the ul user in case-1 for is given by where is the gamma function ( * ? ? ?* eq . ( 8.310.1 ) ) .moreover , , , and . to prove this proposition , the following lemma is useful .the proof of lemma [ propos : cdf for sinrd uplink mrc / mrt case1 alpha2 ] is presented in appendix [ apx : propos : cdf for sinrd uplink mrc / mrt case1 alpha2 ] .[ propos : cdf for sinrd uplink mrc / mrt case1 alpha2 ] the cdf of , for is given by next , by using lemma [ propos : cdf for sinrd uplink mrc / mrt case1 alpha2 ] , and plugging into , after some algebraic manipulation , the desired result in can be obtained . before proceeding further, we present the following lemma , which will be used to establish an upper bound on the achievable rate of the ul user for .[ lemma : cdf for sinrd uplink mrc / mrt case1 alpha4 ] the cdf of for is lower bounded as see appendix [ apx : propos : cdf for sinrd uplink mrc / mrt case1 alpha4 ] .[ prop : acheivable rate uplink alpha4 ] for , the spatial average rate of the ul user is upper bounded by by substituting the lower bound of from lemma [ lemma : cdf for sinrd uplink mrc / mrt case1 alpha4 ] into , and applying the transformation , an upper bound for the average rate of the ul user can be expressed as where the integral , can be expressed ( * ? ? ?( 17 ) ) in terms of the tabulated meijer g - function as the above integral can be solved with the help of ( * ? ? ?* eq . ( 21 ) ) and ( * ? ? ?* eq . ( 8.2.2.14 ) ) to yield the desired result in .note that the meijer g - function is available as a built - in function in many mathematical software packages , such as maple and mathematica . _case-2 ) : _ in this case , the cdf of can be expressed as where and with .moreover , and denote the cdf and the pdf of the and , respectively . in the sequel, we first derive the expressions for and , then use these expressions to derive the .it is easy to show that follows central chi - square distribution with degrees - of - freedom to denote that is chi - square distributed with degrees - of - freedom . ]whose pdf is given by before deriving the cdf of , we first note that can be written as where is unitary matrix , , , and is the largest eigenvalue of the wishart matrix . here , the second equality is due to the eigen - decomposition .it is well known that and follows a beta distribution with shape parameters and , which is denoted as .accordingly , the cdf of , can be found as where is the lower incomplete gamma function defined by ( * ? ? ?( 8.350.1 ) ) . in, the equality ( a ) follows by substituting , ( b ) is obtained by expressing the function in terms of meijer g - function according to ( * ? ? ? * eq .( 8.4.16.1 ) ) , and ( c ) follows with the help of ( * ? ? ?* eq . ( 7.811.3 ) ) . for , using the equality (* eq . ( 8.2.2.18 ) ) and applying ( * ? ? ?* eq . ( 8.4.3.4 ) ) , the cdf in leads to a closed - form result for the cdf of the exponential rv with parameter , which confirms our analysis .now , by substituting and into , we have evaluation of is difficult . 
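whatever the exact form of the sinr , expressions of this type can always be cross - checked against a direct monte carlo average of log2 ( 1 + sinr ) over the fading and the geometry . the sketch below does this for a toy ul sinr model ( gamma - distributed desired power over an exponentially distributed residual - li term ) , which is only a stand - in for the exact expressions used in the propositions , not a reproduction of them .

```python
import numpy as np

rng = np.random.default_rng(5)

def average_rate(sample_sinr, n=200000):
    """spatial/fading average of log2(1 + sinr) estimated by monte carlo."""
    return float(np.mean(np.log2(1.0 + sample_sinr(n))))

# toy stand-in for the ul sinr under mrc at the ap (not the paper's exact expression):
nr, li_power, noise = 4, 0.1, 0.01
def toy_ul_sinr(n):
    desired = rng.gamma(shape=nr, scale=1.0, size=n)     # ||h_u||^2 for nr rayleigh branches
    residual_li = li_power * rng.exponential(size=n)     # rayleigh-faded residual li term
    return desired / (residual_li + noise)

print("monte carlo average ul rate [bps/hz]:", round(average_rate(toy_ul_sinr), 2))
```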
in order to circumvent this challenge ,a common approach adopted in the performance analysis literature is to neglect the awgn term .therefore , by using ( * ? ? ?* eq . ( 2.24.3.1 ) ) can be evaluated as the average rate of the ul user in the interference - limited case can be expressed as the proof is straightforward and follows from the definition of .* _ evaluation of _ : * in order to derive a general expression for the , according to , we need to obtain the cdf of in , which can be written as where . note that in our system model the randomness of the is due to the fading power envelope .as such , can be re - expressed as plugging into and using the identity ( * ? ? ?* eq . ( 8.356.3 ) ) , exact average rate of the dl user can be written as where is upper incomplete gamma function ( * ? ? ?* eq . ( 8.350.2 ) ) .moreover , for ( interference - limited case ) , using the cdf provided in the following lemma , proposition [ prop : acheivable rate downlink ] presents the average rate of the dl user .[ lemma : cdf of sinrd general mrc / mrt ] the cdf of , can be expressed as for , with the help of ( * ? ? ?* eq . ( 6.451.1 ) ) , can be written as using the mclaurin series representation of the exponential function and expressing ^{-1})^{-{n_{{\mathsf{d}}}}} ] ) and is the element of . then from the law of large numbers for the asymptotic large regime, we have where denotes the almost sure convergence . as a result, the li can be reduced significantly by scaling the ap transmit power with together with the mrc receiver .hence , the average sum rate for the scheme with asymptotic large regime can be expressed as where ( after replacing with ) provides an expression for the first expectation term in the interference - limited case . moreover ,the right hand side expectation term is given by .in this subsection , we compare the performance of the hd and fd modes of operation at the ap . in the hd mode of operation, ap employs orthogonal time slots to serve the ul and dl user , respectively .in order to keep our comparisons fair , we consider _ `` antenna conserved '' _ ( ac ) and _ `` rf - chain conserved '' _( rc ) scenarios which are adopted in the existing literature . under ac condition , the total number of antennas used by the hd ap and fd ap are kept identical .however , the number of rf chains employed by the hd ap is higher than that of the fd ap and hence former system would be a costly option . under rc condition ,the total number rf chains used hd and fd modes are kept identical .therefore , in dl ( or ul ) transmission , the hd ap only uses ( or ) antennas under the rc condition , while it uses antennas under the ac condition .the average sum rate under the rc condition , using the weight vector for the mrc receiver , and the mrt precoding vector can be expressed as where ( ) is a fraction of the time slot duration of , used for dl transmission , , and , where and are the transmit power of the ap and ul user , respectively , in the hd - rc mode . under the ac condition , the average achievable rate can be expressed as where , and , where and are the transmit power at the ap and ul user , respectively . using with change of variables , the second expectation of andcan be obtained . moreover , after some algebraic derivations , we get where and under rc and ac conditions and , respectively . for the fd and hd ap ( , and m ) .simulation results are shown by dashed lines.,width=343,height=257 ] [ fig : sum_rate_v_delta ] we end this section with the following remarks . 
in general , the corresponding snrs for dl and ul transmissions in hd mode are larger than those of in fd mode . however , although hd mode does not induce li and internode interference , it imposes a pre - log factor on the spectral efficiency . since , most of the results contain meijer g - functions , a direct comparison of the average sum rate of the fd and hd modes is challengingnevertheless , let us consider the achievable rate region of both fd and hd modes .the rate region frontiers can be found by sweeping over the full range of ] .we also included the benchmark performance achieved by interference - limited assumption for the first three antenna pairs .this results implies that the achievable rate given by proposition [ propos : average capacity of the ul user mrc / mrt ] , proposition [ prop : acheivable rate downlink ] , and are good predictors of the system s sum rate . for the fd and hd ap and for different precoding schemes ( m , db , , and ).,width=340,height=253 ] [ fig : sum_rate_v_nu ] an interesting observation that can be extracted from fig .[ fig : sum_rate_antenna config ] is that , increasing the receive antenna number at the fd ap is more beneficial to the ul and dl average sum rate than increasing the number of transmit antenna elements .for example , the relative performance gain of ] is more than that of ] , especially at high transmit power levels of db .as noted before , this can be explained by the fact that by doubling the number of transmit / receive antenna number at the ap , the numerator of and is increased in the same proportions .however , by doubling the , the li at ap is boosted , leading to a decrease in and consequently in the average sum rate .the performance of a wireless network scenario in which a multiple antenna equipped fd ap communicates with spatially random single - antenna user nodes in the ul and dl channels simultaneously has been analyzed .in particular , we considered precoding schemes based on the principles of mrc , mrt and zf and studied the system performance in terms of the ul and dl average sum rate .further , we have considered the problem of optimal precoding design for the ul and dl sum rate maximization and reformulated the problem as a sdp , which can be efficiently solved .analysis and simulation results demonstrated the superiority of the optimal precoding scheme over the mrc / mrt and mrc / zf schemes .we further studied the effect of resource allocation , li and internode interference , and antenna configuration on the system sum rate .we found that the mrc / mrt scheme can offer a higher ul and dl average sum rate compared to the mrc / zf scheme , when the li is significantly canceled or the internode interference is weak enough , and vice versa .furthermore , we observed that the performance gap between the fd and hd modes can be further increased by deploying more transmit / receive antennas at the ap .as for future research work , the performance gains due to fd transmission in setups such as heterogeneous network architectures with mixed fd / hd mode operation , cooperative relaying and mimo may be characterized to further establish the viability of the usage of fd terminals .m , , and ).,width=340,height=249 ] [ fig : sum_rate_antenna config ]following , the corresponding to and is given by with the help of ( * ? ? ? * eq .( 3.661.4 ) ) , and making the change of variable , we obtain to the best of our knowledge , the integral in does not admit a closed - form solution . 
in order to proceed , we use taylor series representation ( * ? ? ?* eq . ( 1.211.1 ) ) for the term , and write a change of variable , and after some manipulations , can be expressed as finally , using ( * ? ? ?* eq . ( 5.8.2 ) ) , we arrive at the desired result given in .following , the corresponding to and can be written as by using , the inner integral can be obtained as where , , and , with , , and .we can simplify the above integral in the case of .hence , after a simple substitution , can be written as in order to simplify , we adopt a series expansion of the exponential term . substituting the series expansion of into the yields let us denote . by making the change of variable , we obtain now with the help of ( * ? ? ?* eq . ( 9.111 ) ) the integral in can be solved to yield .a. sabharwal , p. schniter , d. guo , d. w. bliss , s. rangarajan , and r. wichman , `` in - band full - duplex wireless : challenges and opportunities , '' _ieee j. sel .areas commun .1637 - 1652 , sep . 2014 .d. w. bliss , t. hancock , and p. schniter , hardware and environmental phenomenological limits on full - duplex mimo relay performance , " in _proc.46th asilomar conf . on signals , systems , and computers ( asilomar 2012 )_ , pacic grove , ca , nov .2012 , pp .34 - 39 .e. aryafar , m. a. khojastepour , k. sundaresan , s. rangarajan , and m. chiang , `` midu : enabling mimo full duplex,''in _ proc .18th intl .conf . mobile computing and networking ( acm mobicom 12 ) _ ,new york , ny , aug .2012 , pp .257 - 268 .d. korpi , l. anttila , v. syrjl , and m. valkama , widely linear digital self - interference cancellation in direct - conversion full - duplex transceiver , " _ ieee j. sel. areas commun .32 , pp . 1674 - 1687 ,h. a. suraweera , i. krikidis , g. zheng , c. yuen , and p. j. smith , lowcomplexity end - to - end performance optimization in mimo full - duplex relay systems , " _ ieee trans .wireless commun ._ , vol . 13 , pp . 913 - 927 , jan. 2014 .h. q. ngo , h. a. suraweera , m. matthaiou , and e. g. larsson , `` multipair full - duplex relaying with massive arrays and linear processing , '' _ ieee j. sel .areas commun .1721 - 1737 , sep 2014 .t. riihonen and r. wichman , energy detection in full - duplex cognitive radios under residual self - interference , " in _ proc .cognitive radio oriented wireless networks ( crowncom ) , _ oulu , finland , june 2014 , pp .57 - 60 .g. zheng , i. krikidis , j. li , a. p. petropulu , and b. e. ottersten , improving physical layer secrecy using full - duplex jamming receivers , " _ ieee trans .signal process .4962 - 4974 , oct .2013 .m. vehkaper a , m. girnyk , t. riihonen , r. wichman , and l. rasmussen , `` on achievable rate regions at large - system limit in full - duplex wireless local access , '' in _ proc .black sea conf . commun . and networking ( blackseacom ) _ , batumi , georgia , july 2013 , pp .7 - 11 .k. sundaresan , m. khojastepour , e. chai , and s. rangarajan , full - duplex without strings : enabling full - duplex with half - duplex clients , " in _ proc .20th intl .mobile computing and networking ( mobicom 2014 ) _ , maui , hi , sep .2014 , pp .55 - 66 .s. goyal , p. liu , s. s. panwar , r. a. difazio , r. yang , j. li , and e. bala , `` improving small cell capacity with common - carrier full duplex radios , '' in _ proc .ieee intl .( icc 2014 ) _ , sydney , australia , june 2014 , pp .4987 - 4993 .b. yin , m. wu , c. studer , j. r. cavallaro , and j. 
lilleberg , `` full - duplex in large - scale wireless systems , '' in _ proc .asilomar conf .signals , systems and computers ( asilomar 2013 ) _ , pacic grove , ca , nov . 2013 , pp .1623 - 1627 .d. nguyen , l .- n .tran , p. pirinen , and m. latva - aho , `` precoding for full duplex multiuser mimo systems : spectral and energy efficiency maximization , '' _ ieee trans . signal process .4038 - 4050 , aug .2013 .s. wang , v. venkateswaran , and x. zhang , exploring full - duplex gains in multi - cell wireless networks : a spatial stochastic framework , " _ in proc .ieee conference on computer communications ( infocom 2015 ) _ , hong kong , apr .2015 , pp .855 - 863 . european fp7 project duplo ( full - duplex radios for local access ) , " european commission - research : the seventh framework programme , http://www.fp7-duplo.eu/index.php/general-info , tech . rep .2012 .v. s. adamchik and o. i. marichev , `` the algorithm for calculating integrals of hypergeometric type functions and its realization in re- duce system , '' in _ in proc .symbolic and algebraic comput . _ ,tokyo , japan , 1990 , pp .212 - 224 .m. charafeddine , a. sezgin , and a. paulraj,rate region frontiers for n - user interference channel with interference as noise , " in _ proc .45th allerton conf .communication , control , and computing _ , monticello , il , sep .
a full - duplex ( fd ) multiple antenna access point ( ap ) communicating with single antenna half - duplex ( hd ) spatially random users to support simultaneous uplink ( ul)/downlink ( dl ) transmissions is investigated . since fd nodes are inherently constrained by the loopback interference ( li ) , we study precoding schemes for the ap based on maximum ratio combining ( mrc)/maximal ratio transmission ( mrt ) , zero - forcing and the optimal scheme for ul and dl sum rate maximization using tools from stochastic geometry . in order to shed insights into the system s performance , simple expressions for single antenna / perfect li cancellation / negligible internode interference cases are also presented . we show that fd precoding at ap improves the ul / dl sum rate and hence a doubling of the performance of the hd mode is achievable . in particular , our results show that these impressive performance gains remain substantially intact even if the li cancellation is imperfect . furthermore , relative performance gap between fd and hd modes increases as the number of transmit / receive antennas becomes large , while with the mrc / mrt scheme , increasing the receive antenna number at fd ap , is more beneficial in terms of sum rate than increasing the transmit antenna number . full - duplex , stochastic geometry , average sum rate , precoding , interference , performance analysis .
formation of dna loops is a common motif of protein - dna interactions . a segment of dna forms a loop - like structure when either its ends get bound by the same protein molecule or a multi - protein complex , or when the segment gets wound around a large multi - protein aggregate , or when the segment connects two such aggregates . in bacterial genomes , dna loops were shown to play important roles in gene regulation ; in eucaryotic genomes , dna loops are a common structural element of the condensed protein - dna media inside nuclei .understanding of the structure and dynamics of dna loops is thus vital for studying the organization and function of the genomes of living cells . the amount of experimental data on the dna physical properties and protein - dna interactions both _ in vivo _ and _ in vitro _ has grown dramatically in recent years . with the advent of modern experimental techniques , such as micromanipulation and fast resonance energy transfer , researchers were presented with unique opportunities to probe the properties of individual macromolecules .x - ray crystallography , nmr , and 3d electron cryomicroscopy provided numerous structures of protein - dna complexes with resolution up to a few angstroms , including such huge biomolecular aggregates as rna polymerase and nucleosome .the ever growing volume of data provides theoretical modeling , which has generally been recognized as a vital complement of experimental studies , with an opportunity to revise the existing models , and to build new improved models of biomolecules and biomolecular interactions .several existing dna models are based on the theory of elasticity .these models treat dna as an elastic rod or ribbon , sometimes carrying an electric charge .the geometrical , energetic , and dynamical properties of such ribbon can be studied at finite temperature using monte - carlo or brownian dynamics techniques employing a combined elastic / electrostatic energy functional .such studies usually involve extensive data generation , for example , numerous monte carlo structural ensembles , and require significant investment of computational resources .alternatively , one could resort to faster theoretical methods , such as statistical mechanical analysis of the elastic energy functional or normal mode analysis of the dynamical properties of the elastic rod .a fast approach to studying the static properties of dna loops such as the loop energy , structure , and topology consists in solving the classical equations of elasticity , dated back to kirchhoff and derived on the basis of the same energy functional .the equations can be solved with either fixed boundary conditions for the ends of the loop or under a condition of a constant external force acting on the ends of the loop . in order to achieve a realistic description of the physical properties of dna, the classical elastic functional has to be modified : ( i ) the modeled elastic ribbon has to be considered intrinsically twisted ( and , possibly , intrinsically bent ) in order to mimic the helicity of dna , ( ii ) the ribbon has to carry electrostatic charge , ( iii ) be anisotropically flexible , i.e. , different bending penalties should be imposed for bending in different directions , ( iv ) be deformable , e.g. 
, through extension and/or shear , ( v ) have different flexibility at different points , in order to account for dna sequence - specific properties , ( vi ) be subject to possible external forces , such as those from proteins or other dna loops .the earlier works have presented many models including several of these properties , e.g. , bending anisotropy and sequence - specificity ; extensibility , intrinsic twist / curvature , and electrostatics ; extensibility , shearability , and intrinsic twist ; intrinsic curvature and electrostatics ; or intrinsic twist and forces due to self - contact . yet , to the best of our knowledge , a completely realistic treatment , where the theory of elasticity would be modified as to include all of the listed dna properties , has never been published . while for dna segments of large length some of these properties can be disregarded or averaged out using a proper set of effective parameters , we feel that a proper model of dna on the scale of several hundred base pairs which is a typical size of dna loops involved in protein - dna interactions must be detailed , including a proper description of all physical properties of real dna .this work offers a step towards such a generalized elastic dna model .the kirchhoff equations of elasticity are derived in sec .[ sec : theory ] below for an intrinsically twisted ( and possibly bent ) elastic ribbon with anisotropic bending properties .the terms corresponding to external forces and torques are included and can also be used to account for the electrostatic self - repulsion of the rod , as described in sec .[ sec : electrostatics ] .all the parameters are considered to be functions of the ribbon arclength , thus making the dna model sequence - specific .only the dna deformability is omitted from the derived equations - yet can be straightforwardly included in the problem , as discussed in sec .[ sec : discussion ] . the numerical algorithm for solving the modified kirchhoff equations , based on the earlier work , is presented ( sec .[ sec : lacsols_short ] ) . the proposed model is used to predict and analyze the structure of the dna loops clamped by the _ lac _repressor , a celebrated _ e. coli _ protein , reviewed in sec .[ sec : lac_operon ] .the system provides a typical biological application for the developed model and is used to extensively analyze the effect of bending anisotropy ( sec .[ sec : anisotropy ] ) and electrostatic repulsion ( sec . [ sec : electrostatics ] ) in our model and to evaluate the corresponding model parameters .the dna sequence - specific properties in terms of elastic moduli and intrinsic curvature , while included in the derived equations , were not used in the study of the _ lac _ repressor system .the further developments that would make the model truly universal and the model s scope of applicability are discussed in sec .[ sec : discussion ] .notably , it is shown how the elastic rod model can be combined with the all - atom model in multi - scale simulations of protein - dna complexes or how all - atom dna structures can be built on the basis of the coarse - grained model .in this section we describe first the classical kirchhoff theory of elasticity and then how this theory can be applied to modeling dna loops .the classical theory of elasticity describes an elastic rod ( ribbon ) in terms of its centerline and cross sections ( fig .[ fig : elrod]a ). the centerline forms a three - dimensional curve parametrized by the arclength _ s_. 
the cross sections are `` stacked '' along the centerline ; a frame of three unit vectors , , uniquely defines the orientation of the cross section at each point .the vectors and lie in the plane of the cross section ( for example , along the principal axes of inertia of the cross section ) , and the vector is the normal to that plane .therefore , we frequently drop the explicit notation `` '' from the equations throughout the paper . ] .if the elastic rod is inextensible then the tangent to the center line coincides with the normal : ( the dot denotes the derivative with respect to ) .parameterization of the elastic rod .( a ) the centerline and the intrinsic local frame , , .( b ) the principal normal and the binormal form the natural local frame for the 3d curve . ]the components of all the three vectors can be expressed through three euler angles , , , which define the rotation of the local coordinate frame relative to the lab coordinate frame .alternatively , one can use four euler parameters , , , , related to the euler angles as and subject to the constraint ( see , e.g. , ) .the computations presented in this paper employ the euler parameters in order to avoid the polar singularities inherent in the euler angles .following kirchhoff s analogy between the sequence of cross - sections of the elastic rod and a motion of a rigid body , the arclength can be considered as a time - like variable. then the spatial angular velocity of rotation of the local coordinate frame can be introduced : the vector is called the vector of strains .geometrically , its components and are equal to the principal curvatures of the curve , so that the total curvature equals and the vectors of principal normal and binormal ( fig .[ fig : elrod]b ) are the third component of the vector of strains is the local twist of the elastic rod around its axis .all three components can be expressed via the euler parameters : if the elastic rod is forced to adopt a shape different from that of its natural ( relaxed ) shape , then elastic forces and torques develop inside the rod : the components and compose the shear force ; the component is the force of tension , if , or compression , if , at the cross section at the point . and are the bending moments , and is the twisting moment . in equilibrium ,the elastic forces and torques are balanced at every point by the body forces and torques , acting upon the rod : the body forces and torques of the classical theory usually result from gravity or from the weight of external bodies as in the case of construction beams . in the case of dna ,such forces are mainly of electrostatic nature , as will be described below .the last equation required to build a self - contained theory of the elastic rod relates the elastic stress to the distortions of the rod .the classical approach stems from the bernoulli - euler theory of slender rods , which stipulates the elastic torques to be linearly dependent on the curvatures and twist of the inherently straight rod : the linear coefficients and are called the bending rigidities of the elastic rod , and is called the twisting rigidity . for a solid rod , the classical theory finds that , , where is the young s modulus of the material of the rod , and , are the principal moments of inertia of the rod s cross - section .the twisting rigidity is proportional to the shear modulus of the material of the rod ; in the simple case of a circular cross - section , . 
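numerically , the strain vector is easy to recover from a discretized set of cross - sectional frames : the relative rotation between consecutive frames is close to the identity for small arclength steps , and its skew - symmetric part encodes the two curvatures and the twist . the sketch below implements this extraction and checks it on frames generated with constant strains ; the ordering and sign conventions are our own and must be matched to whichever convention is adopted for the directors , and the step size must be small compared to the inverse strains .

```python
import numpy as np
from scipy.linalg import expm

def strains_from_frames(R, ds):
    """body-frame strain vector (k1, k2, omega) along a discretized rod.
    R is an (n, 3, 3) array whose columns are the directors d1, d2, d3;
    valid when ds is small compared to 1/|strain| (first-order extraction)."""
    out = []
    for i in range(len(R) - 1):
        A = R[i].T @ R[i + 1]            # relative rotation between neighbouring cross sections
        S = 0.5 * (A - A.T) / ds         # skew part ~ [strain]_x for small ds
        out.append([S[2, 1], S[0, 2], S[1, 0]])
    return np.array(out)

# consistency check: frames generated with constant strains should be recovered
k1, k2, om, ds, n = 0.2, 0.0, 1.85, 0.02, 200     # toy values, not dna parameters
Shat = np.array([[0.0, -om, k2], [om, 0.0, -k1], [-k2, k1, 0.0]])
R = [np.eye(3)]
for _ in range(n):
    R.append(R[-1] @ expm(Shat * ds))             # dR/ds = R * [strain]_x
print(strains_from_frames(np.array(R), ds).mean(axis=0))   # ~ (k1, k2, om)
```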
the equations , , , , and form the basis of the kirchhoff theory of elastic rods .we simplify the equations , first , by making all the variables dimensionless : where is the length of the rod .second , we express the derivatives through , , , and the euler parameters , using equations ( [ eq : cur_eu1]-[eq : cur_euo ] ) and the constraint , differentiated with respect to .third , we eliminate the variables and using equations and arrive at the following system of differential equations of 13-th order : the solutions to this system correspond to the equilibrium conformations of the elastic rod .the 13 unknown functions , , , and are directly obtainable from by virtue of . ] describe the geometry of the elastic rod and the distribution of the stress and torques along the rod .the equations can be solved for various combinations of initial and boundary conditions .the case considered in this paper will be the boundary value problem , when the equilibrium solutions are sought for the elastic rod with fixed ends that is , with known locations , of its ends at and , and known orientation , of the cross section at those ends .such case would correspond , for example , to a dna loop whose ends are bound to a protein .( a ) the elastic rod fitted into an all - atom structure of dna .( b ) a coordinate frame associated with a dna base pair , according to . ] in general , the system ( [ eq : grand_1]-[eq : grand_11 ] ) has multiple solutions for a given set of boundary conditions .the dimensionless elastic energy of each solution is computed according to the bernoulli - euler approximation as the quadratic functional of the curvatures and the twist : the straight elastic rod becomes the zero - energy ground state for the functional ( [ eq : ener_funct ] ) .if the interactions of the elastic rod with external bodies , expressed through the forces and torques , are not negligible then the energy functional will include additional terms due to those interactions .elastic rod theory is a natural choice for building a model of dna a long linear polymer .the centerline of the rod follows the axis of the dna helix and watson - crick base pairs form cross - sections of the dna `` rod '' ( fig .[ fig : elrod_dna]a ) . a coordinate frame can be associated with each base pair according to a general convention ( fig .[ fig : elrod_dna]b ) . for a known all - atom structure of a dna loop , an elastic rod ( ribbon ) can be fitted into the loop using those coordinate frames .conversely , a known elastic rod model of the dna loop can be used directly to build an all - atom structure of the loop ( see app . [sec : appendixf ] ) . finally , if the all - atom structure is known only for the base pairs at the ends of the loop ( as in the case of the _ lac _ repressor cf sec . [ sec : lac_solutions ] )then the coordinate frames can be associated with those base pairs and provide boundary conditions for eqs .( [ eq : grand_1]-[eq : grand_11 ] ) . 
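as an illustration of the boundary value problem with fixed ends , the planar , isotropic , intrinsically straight special case of the kirchhoff equations reduces to four first - order odes for the centerline coordinates , the tangent angle , and the bending moment , with the two constant internal force components entering as unknown parameters . the sketch below solves this reduced problem with scipy 's collocation solver for one arbitrarily chosen set of clamped - end conditions ; it is a toy planar stand - in for the full 13th - order three - dimensional system , and convergence for strongly bent loops generally requires a better initial guess .

```python
import numpy as np
from scipy.integrate import solve_bvp

EI = 1.0     # dimensionless bending rigidity (assumed)

def rhs(s, w, p):
    # state w = (x, y, theta, mu); mu = d(theta)/ds is the scaled bending moment;
    # p = (nx, ny) are the constant internal force components (unknown parameters)
    x, y, th, mu = w
    nx, ny = p
    return np.vstack([np.cos(th), np.sin(th), mu,
                      (nx * np.sin(th) - ny * np.cos(th)) / EI])

def bc(wa, wb, p):
    # clamped ends: positions and tangent angles prescribed at s = 0 and s = 1
    return np.array([wa[0], wa[1], wa[2],
                     wb[0] - 0.6, wb[1] - 0.2, wb[2] - np.pi])

s = np.linspace(0.0, 1.0, 60)
w0 = np.zeros((4, s.size))
w0[2] = np.pi * s                       # crude initial guess: uniformly rotating tangent
sol = solve_bvp(rhs, bc, s, w0, p=np.zeros(2), verbose=0)

mu = sol.sol(s)[3]
energy = 0.5 * EI * np.sum(0.5 * (mu[:-1] ** 2 + mu[1:] ** 2) * np.diff(s))
print("converged:", sol.status == 0, " end forces (nx, ny):", sol.p,
      " elastic energy:", round(float(energy), 3))
```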
however , several modifications of the classical theory are necessary in order to describe certain essential properties of dna .first of all , the relaxed shape of dna is a helix , which is described by a tightly wound ribbon rather than a straight untwisted rod treated by our equations so far .the helix has an average twist of / so that one helical turn takes about 36 .this is much smaller than the persistent length of dna bending ( 500 ) or twisting ( 750 ) , so even a relatively straight segment of dna is highly twisted .we introduce the intrinsic twist of the elastic rod as a parameter in our model .it is considered to be a function of arclength , because the twist of real dna varies between different sequences .second , certain dna sequences are also known to form intrinsically curved rather than straight helices . in terms of our theorythat means that the curvature of the relaxed rod may be different from zero in certain sections of the rod .the intrinsic curvatures are introduced in our model similarly to the intrinsic twist as functions of arclength determined by the sequence of the dna piece in consideration . the intrinsic twist and curvaturesresult in the modified bernoulli - euler equation : where now , the elastic torques are proportional to the changes in the curvatures and twist from their intrinsic values , rather than to their total values .note , however , that the `` geometrical '' eqs . and consequently , eqs .( [ eq : grand_5]-[eq : grand_8 ] ) still contain the full values of the twist and curvature , so that switching from eq . to eq .does not simply result in replacing and with and throughout the system ( [ eq : grand_1]-[eq : grand_11 ] ) .another important structural property of real dna , which we want to encapsulate in our theory , is the sequence - dependence of the dna flexibility , i.e. , that certain sequences of dna are more rigid than others .accordingly , the bending and twisting rigidities , , and in eq . become functions of arclength rather than constants . the exact shape of these functions depends on the sequence of the studied dna piece ( cf app . [ sec : appendixd ] ) .the dimensionless bending rigidities , , and ( as well as the dimensionless forces and torques ) now become scaled not by , but by an arbitrary chosen value of the twisting modulus ( for example , the average twisting rigidity of dna ) : ( _ cf _ eq .( [ eq : mod_scale_0],[eq : for_scale_0 ] ) ) . with the above changes ,the new grand system of differential equations becomes : and the new energy of the elastic rod is computed as : the system ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) describes the elastic rod model of dna in the most general terms .not all of the options provided by such model will be explored in the present work ; most times the equations will be simplified in one way or another .the unexplored possibilities and situations when they might become essential will be discussed in sec .[ sec : discussion ] . to conclude this section ,let us observe two immediate results of switching from the classical equations ( [ eq : grand_1]-[eq : grand_11 ] ) to the more realistic equations ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) .first , the high intrinsic twist results in strongly oscillatory behavior of solutions to the system ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) . 
due to the non - linear character of the system, the oscillatory component can not be separated from the rest of the solution .second , a dna loop that contains intrinsically bent segments ( those inside which ) may not be uniformly twisted as it would be in the classic case even if the loop were isotropically flexible ( ) and no external torques were acting upon the loop ( ) .whereas the classical theory necessitates that in such case ( _ cf _ eq . ) , the right - hand part of the updated eq .is non - zero if .in this section we first describe the test case system for our theory : the complex of the _ lac _ repressor protein with dna .then this protein - dna system is used to illustrate the numeric algorithm of solving the equations of elasticity .finally , different solutions for the dna loop clamped by the _ lac _ repressor are presented . for a test case study ,the developed theory was used to build a model of the dna loop induced in the _ e. coli _genome by the _ lac _repressor protein ._ lac _ repressor functions as a switch that shuts down the lactose ( _ lac _ ) operon a famous set of _e. coli _ genes , the studies of which laid one of the cornerstones of modern molecular biology .the genes code for proteins that are responsible for lactose digestion by the bacterium ; they are shut down by the _repressor when lactose is not present in the environment .when lactose is present , a molecule of it binds inside the _ lac _repressor and deactivates the protein , thereby inducing the expression of the _ lac _ operon ( fig .[ fig : lac_operon]a - b ) .the _ lac _ repressor consists of two dna - binding `` hands '' , as it can be seen in the crystal structure of the protein ( fig .[ fig : lac_operon]c ) .each `` hand '' recognizes a specific 21-bp long sequence of dna , called the operator site .the _ lac _repressor binds to two operator sequences and causes the dna connecting those sequences to fold into a loop .there are three operator sequences in the _ e. coli _ genome : o , o , and o ; the repressor binds to o and either o or o , so that the resulting dna loop can have two possible lengths : 385 bp ( o-o ) or 76 bp ( o-o ) ( fig .[ fig : lac_operon]b ) .all three operator sites are necessary for the maximum repression of the _ lac _ operon .while the long o-o loop ( 385 bp ) is the easier to form , the short o-o loop ( 76 bp ) contains the _ lac _ operon promoter so that folding this 76 bp region into a loop is certain to disrupt the expression of the _ lac _ operon . it would hardly be possible to crystallize the dna loops induced by the _repressor merely because of their size and thus the crystal structure of the _ lac _ repressor was obtained with only two disjoint operator dna segments bound to the protein .the equations of elasticity , discussed above , can be used to build elastic rod structures of the missing loops , connecting the two dna segments .such structure would allow to further the study of the _ lac _ repressor - dna interactions in several ways .first , the force of the protein - dna interactions computed after solving the equations of elasticity can be used in modeling the changes in the structure of the _ lac _ repressor that likely occur under the stress of the bent dna loop .second , the elastic rod structure of the loop may serve as a scaffold on which to build all - atom structures of its parts of interest such as the binding sites of other proteins , important for the _ lac _ operon expression , e.g. 
, rna polymerase and cap , or , indeed , of the whole loop ( cf . app . [ sec : appendixf ] ) . all - atom simulations of these sites , either alone or with the proteins docked , may provide interesting keys to the interactions of the regulatory proteins with the bent dna loop and , therefore , to the mechanism of the _ lac _ operon repression . third , one can predict how changing the dna sequence in the loop would influence the interactions between the _ lac _ repressor and the dna , by changing the sequence - dependent dna flexibility in our model and observing the resulting changes in the structure and energy of the dna loop . finally , the _ lac _ repressor system can be used to tune the elastic model of dna per se , in terms of parameters and complexity level , by comparing the predictions resulting from our model with the experimental data . these questions will be further discussed in sec . [ sec : discussion ] , while the following sections detail our study of the _ lac _ repressor system . the crystal structure of the _ lac _ repressor - dna complex provides the boundary conditions for the equations of elasticity ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) . the terminal base pairs of the protein - bound dna segments are interpreted as the cross - sections of the loop at its beginning and end , and orthogonal frames are fitted to those base pairs , as illustrated in fig . [ fig : elrod_dna]b . the positions of the centers of those frames and their orientations relative to the lab coordinate system ( lcs ) provide 14 boundary conditions : , , , for equations ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) . in order to match the 13-th order of the system , the boundary condition for one of the s is dropped ; it will be automatically satisfied because the identity is included in the equations .

figure [ fig : iter_short ] : evolution of the elastic rod structure during the solution of the bvp for the short loop . ( a ) the initial solution : a closed circular loop . ( b ) the solution after the first iteration cycle . ( c , d ) the solutions after the second iteration cycle , for the clockwise ( c ) and counterclockwise ( d ) rotation of the end . ( e , f ) the solutions after the third iteration cycle ; the previous solutions are shown in light color ; the views from the top include the forces that the dna loop exerts on the dna segments bound to the _ lac _ repressor . the protein - bound dna segments from the _ lac _ repressor crystal structure are shown for reference only , as they played no role during the iteration cycles except for providing the boundary conditions .

the iterative continuation algorithm used for solving the bvp is the same as that used in our previous work ( with some modification when the electrostatic self - repulsion is
included in the equations , as described in sec . [ sec : electrostatics ] ) . the solution to the problem is constructed in a series of iteration cycles . the cycles start with a set of parameters and boundary conditions for which an exact solution is known , rather than the desired ones . gradually , the parameters and the boundary conditions are changed towards the desired ones . usually , only some of the parameters are changed during each specific iteration cycle : for example , only or only . during the cycle , the parameters evolve towards the desired values in a number of iteration steps ; the number of steps is chosen depending on the sensitivity of the problem to the modified parameters . at each step , the solution found at the previous step is used as an initial guess ; with a proper choice of the iteration step , the two consecutive solutions are close to each other , which guarantees the convergence of the numerical bvp solver . for the latter , the classical software package colnew is employed . colnew uses a damped quasi - newton method to construct the solution to the problem as a set of collocating splines . the exact solution , from which the iteration cycles started , was chosen to be a circular , closed ( ) elastic loop with zero intrinsic curvature , constant intrinsic twist = 34.6 deg / bp ( the average value for the classical b - form dna ) , constant elastic moduli , and no electrostatic charge ( ) . ( alternatively , one could have started from a solution of the system ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) obtained analytically , as described in , in the case of an isotropically flexible ( ) elastic loop ; that would save us the first three iteration cycles . however , for that to be possible , the boundary conditions would have to be symmetric , that is , the angles between the normal to the cross - section and the end - to - end vector would have to be the same at both and ends ; in our case , the angles in question were equal to 65 and 99 at the and ends , respectively . ) this solution is shown in fig . [ fig : iter_short]a ; the explicit form of the solution is given in app . [ sec : appendixa ] . the loop started ( and ended ) at the center of the terminal base pair of one of the protein - bound dna segments . the coordinate frame associated with that base pair ( or , equivalently , with the loop cross - section at ) was chosen as the lcs . the orientation of the plane of the loop was determined by a single parameter : the angle between the plane of the loop and the -axis of the lcs . in the first iteration cycle , the value of was changed , so that the end of the loop moved by 45 to its presumed location at the beginning of the second dna segment ( fig . [ fig : iter_short]b ) . in the second iteration cycle , the cross - section of the elastic rod at the end was rotated so as to satisfy the boundary conditions for ( fig . [ fig : iter_short]c , d ) . the rotation consisted in simultaneously turning the normal of the cross - section to coincide with the normal to the terminal base pair and rotating the cross - section around the normal in order to align the vectors and with the axes of the base pair .

figure [ fig : iter_concom ] : extraneous solutions to the bvp obtained for a different orientation of the initial circular loop . the initial loop from fig . [ fig : iter_short]a is shown in panel ( a ) in light color .

depending on the direction of the rotation , two different solutions to the problem arise . rotating the end clockwise results in the solution shown in fig . [ fig : iter_short]c . rotating the end counterclockwise results in the solution shown in fig . [ fig : iter_short]d.
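the continuation strategy just described can be summarized in a short sketch : a chosen subset of parameters is walked from values for which a solution is known to the desired values , and the bvp is re - solved at every step with the previous solution as the initial guess . the sketch below uses scipy 's collocation solver as a stand - in for colnew , and all function and parameter names are illustrative assumptions rather than the actual code used in this work .

```python
import numpy as np
from scipy.integrate import solve_bvp

def continuation_cycle(rhs, bc, y_start, s, params_from, params_to, n_steps=20):
    """One continuation (iteration) cycle: walk a parameter set from
    `params_from` to `params_to` in `n_steps` steps, re-solving the BVP at
    each step with the previous solution as the initial guess.

    rhs(s, y, p) and bc(ya, yb, p) define the boundary value problem;
    y_start has shape (n_unknowns, s.size).
    """
    y_guess = y_start
    sol = None
    for step in range(1, n_steps + 1):
        frac = step / n_steps
        # linear interpolation of every continued parameter
        p = {k: (1 - frac) * params_from[k] + frac * params_to[k]
             for k in params_to}
        sol = solve_bvp(lambda x, y: rhs(x, y, p),
                        lambda ya, yb: bc(ya, yb, p),
                        s, y_guess, tol=1e-6, max_nodes=100000)
        if not sol.success:
            raise RuntimeError(f"BVP solver failed at continuation step {step}")
        y_guess = sol.sol(s)          # previous solution seeds the next step
    return sol
```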
the former solution is underwound by per bp on the average , and the latter solution is overwound by per bp on the average ; hence , they will hereafter be named `` u '' and `` o '' . the two solutions may be transformed into each other during an additional iteration cycle : namely , a rotation of the end clockwise by around its normal transforms the u solution into the o solution , and vice versa . then , a rotation of the end by another turn restores the u solution , so that a continuous rotation of the end results in switching between the two solutions . that happens because of a self - crossing of the dna loop , which is not yet prevented in the model at this point . topologically , rotating the end increases the linking number of the loop by , and a self - crossing reduces the linking number by the same amount ; therefore , two full turns are exactly compensated by one self - crossing , and the original solution gets restored after two turns . in the third iteration cycle , the bending moduli and ( so far , equal to each other ) were changed from 0.5 to 2/3 , which is the ratio between the dna bending and twisting moduli that most current experiments agree upon ( the commonly accepted persistence length of dna bending is 500 ; there is less agreement as to what the twisting persistence length should be , but most of the recent data agree on the value of 750 ; from the relations for the bending and twisting moduli and , one obtains cm , cm , and 2/3 ) . such an increase in the bending rigidity slightly changed the geometry of the u and o solutions ( fig . [ fig : iter_short]e - f ) and increased the unwinding / overwinding to and , respectively . the change in has a clear topological implication : the deformation of the looped dna is distributed between the writhe ( bending ) of the centerline and the unwinding / overwinding of the dna helix . when the bending becomes energetically more costly , the centerline of the loop straightens up ( on the average ) and the deformation shifts towards more change in twist . notably , two more solutions may result from the iteration procedure , depending on the orientation of the initial simplified circular loop ( fig . [ fig : iter_concom ] ) . however , for the 76 bp loop these solutions are not acceptable , because the centerlines of the corresponding dna loops would have to run right through the structure of the _ lac _ repressor ( cf . fig . [ fig : iter_concom]c - d and fig . [ fig : lac_operon]c ) . therefore , only the two former solutions to the problem are acceptable in the case of the short loop . the shapes of the solutions obtained after the third iteration cycle become our first - approximation answer to what the structure of the dna loop created by the _ lac _ repressor must be . the solutions are portrayed in fig . [ fig : iter_short]e - f , and the profiles of their curvature , twist , and elastic energy density are shown in fig .
[ fig : short_plots ] ( left columns of the two panels ) .the u solution forms an almost planar loop , its plane being roughly perpendicular to the protein - bound dna segments ( fig .[ fig : iter_short]e ) .the shape of the loop resembles a semicircle on two relatively straight segments connected by short curved sections to the _ lac _ repressor - bound dna .accordingly , the curvature of the loop is highest in the middle and at the ends , the strongest bend being around 6 deg / bp , and drops in between ( fig .[ fig : short_plots ] ) .the average curvature of this loop is 3.7 deg / bp .the unwinding is constant , by virtue of for the same reason the energy density profile simply follows the curvature plot .the total energy of the loop is 33.0kt , of which 26.8kt are accounted for by bending , and 6.2kt by twisting .the stress of the loop pushes the ends of the protein - bound dna segments ( and consequently , the _ lac _ repressor headgroups ) away from each other with a force of about 10.5pn ( fig .[ fig : iter_short]e ) .the o solution leaves and enters the protein - bound dna segments in almost straight lines , connected by a semicircular coil of about the same curvature as that of the u solution , not , however , confined to any plane ( fig .[ fig : iter_short]f ) .the average curvature equals 3.6 deg / bp .the energy of this loop is higher : 38.2kt , distributed between bending and twisting as 28.5kt and 9.7kt , respectively .the forces of the loop interaction with the protein - bound dna segments equal 9.2pn and are pulling the ends of the segments past each other ( fig .[ fig : iter_short]f ) .since the energy of the u loop is 5kt lower than that of the o loop , one could conclude that this form of the loop should be dominant under conditions of thermodynamic equilibrium . yet , both energies are too high : the estimate of the energy of the 76 bp loop from the experimental data is approximately 20kt at high salt concentration ( see app . [sec : appendixb ] ) . therefore , one can not at this point draw any conclusion as to which loop structure prevails , and further improvements to the model are needed , such as those described in sections [ sec : anisotropy]-[sec : electrostatics ] . four solutions of the bvp problem for the 385 bp - long dna loop .underwound solutions are marked by the letter ` u ' , overwound ones , by ` o ' . ] using the same algorithm , the bvp was solved for the 385 bp loop .similarly , four solutions were obtained ( cf figs .[ fig : iter_short]e - f , [ fig : iter_concom]c - d ) . with the longer loop, the previously inacceptable solutions are running around the _ lac _ repressor rather than through it and therefore , are acceptable .all four solutions ( denoted as u , u , o , o ) are shown in fig .[ fig : long_sols ] . the solutionsu and u are underwound , o and o are overwound .the geometric and energetic parameters of the four solutions are shown in table [ tab : long_sols ] ..[tab : long_sols]geometric and energetic properties of the four solutions of the bvp problem for the 385 bp - long dna loop , in the case . 
in the broad range of parameters and , the four long loops showed the same tendencies as the two short loops . the bending anisotropy allowed for a significant reduction in the elastic energies , making the loops effectively softer ( more bendable ) . the shapes of the loops , after undergoing some transformation following the change in the bending anisotropy or in the loop rigidity , eventually reached asymptotic states . the asymptotic states for the soft loops ( those with a small ratio and/or high ) were strongly bent conformations with practically zero unwinding / overwinding , where the twisting energy was of the same order as the small bending energy . the asymptotic states for the rigid loops ( those with a large ratio and on the order of 1 ) were the conformations with the least possible bending for each given loop topology , where the twisting achieved the worst possible value so as to optimize the bending , yet the latter still accounted for most of the elastic energy . as in the case of the short loops , the shapes of the overwound and underwound solutions converged when the loop rigidity reached particularly large values .

figure [ fig : uo_diff_long ] : elastic energy difference between u and ( a ) o , ( b ) u , and ( c ) o solutions for the 385 bp - long dna loop . 3d plots of the energy difference in the coordinates and are shown on the left , and contour maps of the 3d plots for the values of 0.5 , 1 , 1.5 , 2 , 2.5 , and 3kt are shown on the right .

the underwound u solution was again the one with the least elastic energy among the four solutions throughout the whole studied range of and ( and ) values . the maps of the energy difference between u and the other three solutions are shown in fig . [ fig : uo_diff_long ] . the energy of the o solution does not normally differ from that of the u solution by more than several kt ; therefore , the o solution should contribute to a small extent to the thermodynamic ensemble of the loop shapes and the properties of the _ lac _ repressor - bound 385 bp dna loop . in contrast , the energies of the u and o loops are consistently 2 - 2.5 times larger than the energy of the u loop , and the difference amounts to small kt values only for unlikely combinations of and . therefore , one can safely conclude that these two loops , even though uninhibited by steric overlap with the _ lac _ repressor , are still extraneous solutions , as in the case of the 76 bp loop , and may be excluded from any computation involving the properties of the thermodynamic ensemble of the 385 bp loop conformations . the last , but perhaps the most important , extension of the classical theory included in our model consists of `` charging '' the modeled dna molecule . the phosphate groups of a real dna carry a substantial electric charge : per helical turn , which significantly influences the conformational properties of dna . the dna experiences strong self - repulsion that stiffens the helix and increases the distance of separation at the points of near self - contact .
also , all electrostatically charged objects in the vicinity of a dna such as amino acids of an attached protein , or lipids of a nearby nuclear membrane interact with the dna charges and influence the dna conformations .below , we describe our model of the electrostatic properties of dna and the effects of electrostatics on the conformation of the _ lac _ repressor dna loops .the electrostatic interactions of dna with itself and any surrounding charges are introduced in our theory through the body forces : where is the electric field at the point and is the density of dna electric charge at the point .the present simplified treatment places the dna charges on the centerline , as it was done in other studies .implications of a more realistic model will be discussed in sec .[ sec : discussion ] .the charge density is modeled by a smooth differentiable function with relatively sharp maxima between the dna base pairs , where the phosphate charges should be located .the chosen ( dimensionless ) function : is somewhat arbitrary but specifics are unlikely to significantly influence the results of our computations , as will be discussed below . denotes the number of base pairs in the modeled dna loop ( which is assumed to be starting and ending with a base pair ) and denotes the total charge of that dna loop .that charge is reduced from its regular value of per base pair due to manning counterion condensation around the phosphates : . in this work ,we assume , which is an observed value for a broad range of salt concentrations .the electric field is composed of the field of external electric charges , not associated with the modeled elastic rod whichever are included in the model and from the field of the modeled dna itself ( fig .[ fig : el_setup ] ) . is computed using the debye screening formula : where is the radius of debye screening in an aqueous solution of mono valent electrolyte of molar concentration at 25 .the dielectric permittivity is that of water : .electrostatic interactions in the elastic rod problem .the electric field is computed at each point of the rod as the sum of the `` external '' field , produced by the charges , not associated with the elastic rod , and the `` internal '' field , produced by the charges , placed in the maxima of the charge density of the elastic rod . here , and ; see eq . for detail .the area of the rod shown in light color lies within the cutoff and is excluded from computations of the electric field at the point . ] the first term in eq. represents the dna interaction with external charges , located at the points ; the sum runs over all those charges .the second term represents the self - repulsion of the dna loop , and that sum runs over all the maxima of the charge density , where the dna phosphates charges of are located .this sum approximates the integral over the charged elastic rod ; computing such integral would be more consistent with the chosen model .however , this approximation is rather accurate , as will be shown below , and is made in order to significantly reduce the amount of computations required to calculate the electric field . more importantly ,certain phosphate charges are excluded from the summation in the second term ( hence the prime sign next to the sum ) .those excluded are the charges that are located closer to the point than a certain cutoff distance ( fig . 
[ fig : el_setup ] ) . the reason for introducing such a cutoff is that the dna elasticity is partially of electrostatic origin , so that the energetic penalty for dna bending and twisting , approximated by the elastic functional , already includes the contribution from electrostatic repulsion between neighboring dna charges . it is debatable what `` neighboring '' implies here , i.e. , how close two dna phosphates should be in order to contribute significantly to dna elasticity . in this work , the cutoff distance is chosen to be equal to one step of the dna helix ( =36 ) . this , on the one hand , is the size of the smallest structural unit of dna , beyond which it does not make sense at all to use a continuum model of the double helix ; so , at the very least , the phosphate pairs within such a unit should be excluded from the explicit electrostatic component . on the other hand , the forces of interaction between phosphates separated by more than that distance from each other are already much smaller than the elastic force , as will be shown below . thus , even though the chosen cutoff might be too small , the resulting concomitant stiffening of the dna is negligible . for comparison , calculations with cutoff values and were also performed . thus , the electric field , computed using , is substituted into , and the resulting body forces appear in eqs . ( [ eq : adv_grand_1 ] ) , ( [ eq : adv_grand_2 ] ) , ( [ eq : adv_grand_4 ] ) of the `` grand '' system , in place of the previously zeroed terms . `` unzeroing '' those terms , however , is not the only change to the equations . more importantly , these body force terms now depend on the conformation of the entire elastic loop , due to the self - repulsion term in . this makes the previously ordinary differential equations of elasticity integro - differential and therefore requires a new algorithm for solving them . the solutions of the integro - differential equations minimize the new energy functional : where is the elastic energy computed as in , is the electrostatic energy computed , in accordance with , as , and is the electrostatic `` ground state '' energy : computed using the same formula for a straight dna segment of the same length as the studied loop . to solve the integro - differential equations , the following algorithm is employed . as previously , the electrostatic interactions are `` turned on '' during a separate iteration cycle . at each step of the latter , the equations are solved for the electric field , where the `` electrostatic weight '' grows linearly from 0 to 1 .

figure [ fig : el_eff ] : changes in the predicted structure and energy of the 76 bp dna loops after electrostatic interactions are taken into account . left column : the u solution ; right column : the o solution . ( a , b ) elastic ( e ) , electrostatic ( q ) , and total ( t ) energy of the elastic loop vs. the weight of the electrostatic interactions . ( c , d ) uncharged ( , in light color ) and completely charged ( , in dark color ) structures of the elastic loop . the bottom views are rotated by 70 around the vertical axis relative to the top views . ( e , f ) the r.m.s.d . between the charged and uncharged loop structures , in dna helical steps h. the data in this figure correspond to , , ionic strength of 10 mm , and an electrostatics exclusion radius .

however , each step of this iteration cycle becomes its own iterative sub - cycle . the electric field is computed at the beginning of the sub - cycle , and the equations are solved with this constant field .
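as an illustration of the field evaluation that enters each sub - cycle , the sketch below sums debye - screened contributions from the phosphate charges while skipping every charge closer to the evaluation point than the exclusion radius ( the primed sum above ) . only the self - repulsion part is shown ; the external charges would simply be added as a second , unprimed sum of the same form . the si - style prefactor , the water permittivity value , and all names are assumptions of this sketch , not the actual code used in this work .

```python
import numpy as np

def debye_field(r_eval, r_phos, q_phos, r_debye, r_cut, eps_rel=78.5):
    """Screened Coulomb (Debye-Hueckel) field at the points `r_eval` produced
    by the point charges `q_phos` located at `r_phos`, omitting every charge
    closer to the evaluation point than the exclusion radius `r_cut`.

    eps_rel ~ 78.5 is the relative permittivity of water near room
    temperature (an assumption of this sketch).
    """
    k_e = 1.0 / (4.0 * np.pi * 8.854e-12 * eps_rel)   # Coulomb prefactor / eps
    field = np.zeros_like(r_eval, dtype=float)
    for i, r in enumerate(r_eval):
        d = r - r_phos                       # vectors from each charge to r
        dist = np.linalg.norm(d, axis=1)
        keep = dist > r_cut                  # exclusion cutoff (the primed sum)
        dist, dvec, q = dist[keep], d[keep], q_phos[keep]
        screen = np.exp(-dist / r_debye)
        # minus gradient of q*exp(-dist/rD)/dist gives
        # q*screen*(1/dist**2 + 1/(rD*dist)) along the unit vector dvec/dist
        field[i] = k_e * np.sum(
            (q * screen * (1.0 / dist**2 + 1.0 / (r_debye * dist)))[:, None]
            * dvec / dist[:, None], axis=0)
    return field
```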
then the field is re - computed for the new conformation of the elastic rod , the equations are solved again for the new field , and so on until convergence of the rod to a permanent conformation ( and , consequently , of the field to a permanent value ) . the weight is kept constant throughout the sub - cycle . to enforce convergence ,the field used in each round of the sub - cycle is weight - averaged with that used in the previous round : the averaging weight is selected by trial and error so as to speed up convergence . for the _ lac _ repressor system , the trivial choice of turned out to be satisfactory .this approach to solving the integro - differential equations assumes that the elastic rod conformation changes smoothly with the growth of the electric field .for intricate rod conformations , which might change in a complicated manner with the addition of even small electrostatic forces , this approach may conceivably fail .yet , it worked extremely well for the studied case of the dna loop clamped by the _lac _ repressor .the changes to the structure and energy of the 76 bp dna loops due to electrostatic interactions were computed for a broad range of ionic strength ( 0 , 10 , 15 , 20 , 25 , 50 , and 100 mm ) and three different cutoff values ( , , ) .the computations were performed with and the previously selected ( resulting in the elastic moduli of and ) .the external charges included in the model were those associated with the phosphates of the dna segments from the crystal structure ( see fig . [fig : el_setup ] ) .the iteration cycle was divided into 100 sub - cycles , which showed a remarkable convergence : the length of no sub - cycle exceeded three iteration rounds .the changes in the structure and energy of the elastic loops due to electrostatic interactions are presented in fig . [fig : el_eff ] for the ionic strength of 10 mm and the exclusion radius of .the structure of the u solution almost does not change : the r.m.s.d . between the original ( ) and the final ( ) structures does not exceed .neither do the curvature and twist profiles of this loop significantly change ( fig .[ fig : short_plots ] , 3d column ) .the energy of the loop changes by the electrostatic contribution of 6.1kt , that because the structure is not changing practically does not depend on ( fig .[ fig : el_eff ] ) .this energy increase is mainly accounted for by the interaction of the loop termini with each other and the external dna segments ( fig .[ fig : short_plots ] ) . the self - repulsion accounts for 55% of the electrostatic energy ;45% comes from the interaction with the external dna segments .the apparent reason for the absence of a large change in the geometry of the u loop lies in the fact that this loop is an open semicircular structure , which at no point comes into close contact with itself or the external dna .in contrast , the initial structure of the o loop is bent over so that the beginning of the loop almost touches the end of the loop and the attached dna segment ( fig .[ fig : aniso_drw ] , [ fig : el_eff]d ) . as a result, the electrostatic interactions force a significant change in the structure and energy ( fig .[ fig : el_eff]b , d , f ) .the structure opens up , the gap at the point of the near self - crossing increases , the r.m.s.d . 
between the final and the original structures reaches ( fig . [ fig : el_eff]f ) ; the dna overwinding almost doubles ( fig . [ fig : short_plots ] , 6th column ) . this allows the electrostatic energy to drop from 13.2kt to 8.0kt , yet the elastic energy grows by 1.7kt ( fig . [ fig : el_eff]b ) ; in total , the energy reaches 36.3kt , so that the gap from the u loop increases from 3.3kt to 7.4kt . as in the case of the u loop , the main contribution to the electrostatic interactions comes from the loop ends ( fig . [ fig : short_plots ] ) ; the energy distribution between self - repulsion and the interactions with the external dna charges is practically the same .

figure [ fig : ion_change ] : the effect of ionic strength on the predicted structure and energy of the 76 bp dna loops . left column : the u solution ; right column : the o solution . ( a , b ) elastic ( e ) , electrostatic ( q ) , and total ( t ) energy of the elastic loop . shown are only the plots for the exclusion radius of ; the energy plots for the other exclusion radii are almost indistinguishable . ( c , d ) snapshots of the elastic loop structures for 10 mm ( dark ) , 25 mm ( medium ) , and 100 mm ( light color ) . points where the snapshots were taken are shown as dots of the corresponding colors on the axes of panels ( a ) , ( b ) , ( e ) , ( f ) . the bottom views are rotated by 70 around the vertical axis relative to the top views . ( e , f ) the r.m.s.d . of the loop structures from those computed without electrostatics ( equivalent to infinitely high salt ) . the lines , from top to bottom , correspond to the exclusion radii of , , and .

naturally , the calculated effect diminishes when the ionic strength of the solution increases and the electrostatics becomes more strongly screened . fig . [ fig : ion_change ] shows what happens to the structure and energy of the u and o loops when the ionic strength changes in the range of 10 mm to 100 mm ( which covers the range of physiological ionic strengths ) . the structure and elastic energy of the u loop again show almost no change ; the total energy of that loop falls from 29.7kt to 23.5kt due to the drop in electrostatic energy . the structure of the o loop returns to almost what it was before the electrostatics was computed ( within an r.m.s.d . of ) ; the elastic energy of this loop drops back to 26.6kt , and the electrostatic energy to a mere 0.5kt . these results show that theoretical estimates of the energy of dna loop formation in vivo need to employ as good an estimate of the ionic strength conditions as possible . the _ lac _ repressor loops were extensively used to analyze all the assumptions and approximations of our model and showed that those were indeed satisfactory . the calculations were repeated for the self - repulsion cutoffs of and . the resulting change in the loop energy at mm equals only and for the u loop , and and for the o loop ; all these values drop to below 0.1kt when the ionic strength rises to 100 mm . the difference is mainly in the electrostatic energy , and the elastic energy is always within 0.1kt of that of the structures obtained with . accordingly , the r.m.s.d .
from the uncharged structure varies by at most for the different cutoffs ( fig .[ fig : ion_change]e , f ) .therefore , even the `` largest '' cutoff is satisfactory for the electrostatic calculations while at the same time increasing the speed of computations .an additional advantage of using larger cutoff is that the concomitant stiffening of the modeled dna , which possibly takes place due to too many phosphate pairs included in the electrostatic interactions , is reduced .such stiffening , however , is truly negligible : in all the cases , the electrostatic force does not exceed 1 - 2pn per base pair , compared to the calculated elastic force in the range of 10 - 20pn .changes in the cutoff results in only insignificant changes of the electrostatic force . nor does evaluating the electric field and energy using the sums and ( instead of a more consistent integral over the loop centerline ) result in any significant difference .test calculations showed that in all the studied cases the two ways of evaluating the energy differ by at most 0.02% .finally , it was tested in how far the particular choice for the charge density of dna influences the computation results .the calculations were repeated for the constant charge density ( in dimensionless representation ) .the energies of the loop conformation never changed by more than of their values in the whole range of and ; the r.m.s.d .between the loop conformations obtained with different never exceeded .therefore , the electrostatic properties of the elastic rod in the current model can safely be computed with constant electrostatic density , further saving the computation time .the electrostatic computations were similarly performed for the 385 bp loops , in the same range of salt concentrations and for exclusion radii of and . for the u and o loops ,the results were qualitatively the same as in the case of the short loops .the elastic loops became more open and straigtened up with respect to the _ lac _ repressor ; the energy of the loops increased by 0 - 6kt , depending on the salt concentration ( fig .[ fig : ion_change_long ] ) .the u loop was again the one to change its structure and energy to the least extent .the results of the computations using the different cutoff radii were practically the same ; approximating the self - repulsion field by a discrete sum gave almost exact results ; replacing the charge density function with the constant function had no significant influence on the results .one difference from the short loop case consisted , though , in the more significant change of the long loop structures with respect to the uncharged loops .the r.m.s.d.s reached for the u loop and for the o loop ( fig .[ fig : ion_change_long ] , cf fig .[ fig : ion_change]e , f ) . as previously , the major part of the electrostatic effect consisted in repulsion between the ends of the loop , brought closely together by the protein , and the protein - attached dna segments .this repulsion tended to change the direction of the ends of the loop , bending them away from each other and the dna segments . 
in the case of the short loops , it was impossible to notably change the direction without significantly stressing the rest of the loop , yet the long loops could more easily accommodate opening up at the ends and therefore changed their structures more significantly . the larger structural change necessitated longer calculations . for the long loops , the iteration steps typically consisted of 5 - 6 iteration sub - cycles , and even of a few dozen sub - cycles at especially stiff steps . the u and o loops showed a similar response to the electrostatics at high salt concentration ( above 25 mm ) . their electrostatic energy lay in the range 0 - 5kt , and the structural change due to the increased bending of the loop ends amounted to an r.m.s.d . from the uncharged structure of up to for the u solution and up to for the o solution ( fig . [ fig : ion_change_long ] ) . yet , low salt and stronger electrostatics rendered the solutions unstable . electrostatic computations with no salt screening ( 0 mm ) transformed the u solution into the u solution and the o solution into the o solution . what made the solutions unstable was apparently the ever - increasing bending of the ends of the loops away from each other , which , in combination with the bending anisotropy , also caused high twist oscillations near the loop ends ( as described in sec . [ sec : anisotropy_structure ] ) . the combination of twisting and bending caused the loops to flip up , just as one can cause a piece of wire to flip up and down by twisting its ends between one's fingers . for the intermediate salt concentrations ( 10 - 20 mm ) , the oscillations of the intermediate solutions between the two possible states resulted in non - convergence of the iterative procedure . in this respect , using a larger electrostatic exclusion radius improved convergence . for the exclusion radius of , the iterations did not converge for salt concentrations of 15 and 20 mm ; convergence for the 10 mm salt was achieved but resulted again in flipping up to the stable solutions . for the exclusion radius of , the iterations successfully converged to the u and o solutions ( albeit somewhat changed due to the electrostatics ) for the 15 and 20 mm salt concentrations and did not converge for 10 mm only . such instability of the u and o loops serves , of course , as another argument for disregarding them in favor of the stable u and o solutions . upon introducing the electrostatic self - repulsion , an interesting experiment could be performed . self - crossing of the solutions during the iteration cycles , as described in sec . [ sec : lacsols_short ] , was no longer possible . therefore , one could explore whether superhelical structures of the loop could be built by further twisting the ends of the loop . one extra turn around the cross - section normal at the end did generate new structures of the u and o loops .
yet , those structures were so stressed and had such a high energy ( on the order of 50kt higher than their predecessors ) that it was obvious they would not play any part at all in the real thermodynamic ensemble of the _ lac _ repressor loops . any further twisting of the ends resulted in non - convergence of the iterative procedure . clearly , this length of dna loop is insufficient to produce a rich spectrum of superhelical structures . below , we review the presented modeling method and its limitations , compare it with similar previously reported models , discuss the possible applications and further developments of the method , and summarize what has been learned about the _ lac _ repressor - dna complex . the presented modeling method consists in approximating dna loops with electrostatically charged elastic rods and computing their equilibrium conformations by solving the modified kirchhoff equations of elasticity . the solutions to these equations provide zero - temperature structures of dna loops : the equilibrium points around which the loops fluctuate at a finite temperature . from these solutions , one can automatically obtain both global and local structural parameters , such as the twist and curvature at every point of the loop , the loop 's radius of gyration , the linking number of the loop and its distribution between writhe and twist , various protein - dna distances , the distances between dna sites of special interest , etc . the solutions readily provide an estimate of the energy of the dna loop and the forces that the protein has to muster in order to confine the loop termini to the given conformation , the distribution of the energy between bending and twisting , and the profiles of stress and energy along the dna loop . our method allows one to build a family of topologically different loop structures by applying such simple geometrical transformations as twisting and rotating the loop ends , or by varying the initial loop conformation that serves as the starting point for solving the bvp . by comparing the energies of the topologically different conformations and assigning a boltzmann probability of finding the real dna loop in each of them , one can either deduce the lowest - energy structure , which the loop should predominantly adopt ( as was mostly the case in the studied _ lac _ repressor - dna complex ) , or compute the loop properties of interest by taking boltzmann averages over several obtained structures of comparable energies .
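a minimal sketch of this boltzmann weighting is given below ; it only assumes that the energies of the candidate structures are already expressed in units of kt , and the function names are illustrative rather than the paper 's .

```python
import numpy as np

def boltzmann_weights(energies_kt):
    """Relative Boltzmann weights of the topologically different loop
    structures, with energies already expressed in units of kT."""
    e = np.asarray(energies_kt, dtype=float)
    w = np.exp(-(e - e.min()))     # shift by the minimum for numerical stability
    return w / w.sum()

def boltzmann_average(energies_kt, values):
    """Ensemble average of a per-structure property (e.g. a protein-DNA
    distance) over the same set of structures."""
    w = boltzmann_weights(energies_kt)
    return np.sum(w * np.asarray(values, dtype=float))

# for the two 76 bp conformations discussed above (33.0 kT and 38.2 kT),
# boltzmann_weights([33.0, 38.2]) gives roughly [0.994, 0.006], i.e. the
# U loop dominates the equilibrium ensemble.
```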
for dna loops of a size of several persistence lengths ( 150 bp ) , only a few topologically different structures of comparable energy can exist and our simplified search of the conformational space should be sufficient to discover all members of the topological ensemble as it has been demonstrated in this work .yet longer dna loops should produce large topological families of structures and , unless a fast exhaustive search procedure is discovered , this limits the applicability of our method to dna loop on the order of or shorter than 1,000 bp .the method is still good for generating sample structures of longer loops , but more structures of comparable energy are likely to be missed .the lower boundary of applicability of our method is stipulated by the loop diameter : the kirchhoff theory is applicable only if the elastic rod is much longer than its diameter .the diameter of the dna double helix is 2 nm , which is equivalent to 6 bp .therefore , the loop studied by our method should be at least several times longer than that : roughly , 50 bp and longer .ideally , one would request that the loop exceed the persistence length of dna , yet proteins are known to bend dna on smaller scales as , for example , the _ lac _ repressor does so we apply the theory to dna loops of around 100 bp in length . hence , we suggest that the presented model can be applied to studying the conformations of dna loops of about 100 bp to 1,000 bp length . buildinga single conformation is a fast process that takes only several hours of computation on a single workstation .the problem can be solved for a certain set of boundary conditions , as presented here or the loop boundaries can be systematically moved and rotated and the dependence of the loop properties on the boundary conditions can be studied by re - solving the problem in each new case .such approach has the advantage over the monte carlo simulations in avoiding having to build and analyze a massive set of structures sampling the conformational space . at a finite temperature , the dna loops exist as an ensemble of conformations . while generating all feasible topologically different structures , our method still neglects the thermal vibrations of each of them and the related entropic effects . yet , those effects are likely to be insignificant , since the length of the studied dna loops is limited to several persistence lengths , as discussed above. the related thermal oscillations of the loop structure should be small , although a separate study to quantify the effect of the oscillations seems worthwhile . for longer dna loops , an extensive sampling of the conformational space for example , using the monte carlo approach is indispensable .compared to the pre - existing analytical and computational dna models based on the theory of elasticity , our model provides a universal and flexible description of dna properties and interactions .most previous methods either considered the dna to be isotropically flexible , or did not consider the effects of the dna intrinsic twist and curvature , or limited the treatment to a purely elastic model , that is , to the cases when the electrostatic properties of dna could be disregarded . 
in the present work , a model of anisotropically flexible , electrostatically charged dna with intrinsic twist and intrinsic curvature has been employed , kirchhoff equations were derived in their most general form ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) , and all these dna properties have been extensively studied in the case of the _ lac _ repressor loops . as it has been shown in sec .[ sec : anisotropy ] , the combination of the intrinsic twist with the anisotropic flexibility is essential in order to correctly estimate the energy of the dna loop , as well as the local bend and twist at each point of the loop .the electrostatic interactions are important in the case of a close contact of the loop with itself or with other molecules ( sec .[ sec : electrostatics ] ) .the universality of our approach allows us to include all these cases into the scope of approachable problems .moreover , all the dna parameters : the bending moduli , the intrinsic twist and curvature , the charge density are treated in our equations as functions of the arclength .this provides an automatic tool for studying the sequence - dependent properties of the dna loops .the elastic moduli and intrinsic parameters need to be specified for different dna sequences , e.g. , as outlined in . then the properties of any loop are approximated by functions , , , , , smoothly changing their values between those associated with each base pair in the loop sequence ( as discussed in app .[ sec : appendixd ] ) and the influence of the dna sequence on the conformation and energy of that loop can be studied by simply substituting their functions associated with different dna sequences into eqs .( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) . of course, extensive testing of such parameters and comparison with experimental results on dna bending and loop energetics has to be done prior to performing any such studies of sequence - dependent effects .another opportunity that our model provides consists in its ability to mimic the effect of protein binding within dna loops .for example , if a protein is known to bend dna over a certain region then the intrinsic curvature term can be adjusted so as to enforce the required curvature over that region of the dna loop . to strictly enforce the curvature ,the rigidity of that region would have to be increased as well .studying the solutions to the kirchhoff equations with such intrinsic curvature term would reveal what changes to the energy and conformation occur in the dna loop upon binding of the specified protein .a recent study of the cap protein binding within the _ lac _ repressor loop , performed in this vein , suggested an explanation for the experimentally observed cooperation between the cap and the _ lac _ repressor in dna binding .certainly , the different features of our model have to be used only as each specific problems dictates .for example , the model of isotropically flexible dna seems sufficient for determining the global properties of dna loops , such as the linking numbers of different conformations , their radii of gyration , or the energy distribution between bend and twist . in the present study , increasing the complexity of the model and varying the elasticity properties did not much change the shape of the loop , which determines these global properties ( cf . 
figs .[ fig : iter_short ] , [ fig : long_sols]-[fig : range_0 ] , [ fig : el_eff ] , [ fig : ion_change ] ) .presumably , the reduced model should be sufficient for computing the global properties of other dna loops .if , however , the energy of the loop has to be estimated , or the energies of different loop conformations need to be compared , or , indeed , the local structure at a certain point of the loop needs to be predicted , with a view to study how binding of a certain protein in that area changes upon the loop formation then the anisotropic loop model becomes imperative , as it has been demonstrated in sec .[ sec : anisotropy ] . finally , the problems which involve loop conformations with a near self - contact such as those of the o loop or the tightly wound superhelical structures necessitate including the electrostatic force term into kirchhoff equations .the same holds for the problems studying the effect of salt concentration on structure and energetics of dna loops .the speed and high adaptability of our modeling approach makes it a good candidate for multi - scale modeling simulations .the most obvious application would be to study the structure and dynamics of a system similar to the _ lac _ repressor - dna complex : a protein or a protein aggregate holding a long dna loop .the structure of the loop and the forces and torques that the loop exerts on the protein are directly obtainable from the elastic rod computations .such forces would normally consist of the elastic forces at the loop boundaries and , perhaps , the electrostatic forces if parts of the dna loop approach the protein - dna complex closely .the electrostatic forces are directly computable from the predicted loop geometry .the forces and torques can then be plugged in an all - atom simulation of the structure of the protein aggregate and the dna segments directly bound to it , similarly to how it is done in steered molecular dynamics simulations .the state of the dna segments that provide the boundaries of the dna loop can be routinely checked during the all - atom simulations .whenever a notable change in the positions and orientations of the boundaries occur , the coarse - grained elastic computations have to be repeated ( using the previous solutions as the initial guess ) and the simulations continue with the updated values of the forces and torques .the fast coarse - grained computations presented here result in only a marginal increase in the total time required for the all - atom simulation .the multi - scale simulations provide a direct means to study the changes in structure and dynamics of the the protein aggregate under the force of the dna loop it creates .another application of multi - scale modeling could be to a system where several proteins bind to the same dna loop and alter the dna geometry at or near their binding sites .such systems arise during gene transcription , when multiple transcription factors and rna polymerase components bind at or near a gene promoter , during dna replication when helicases and gyrases concurrently alter the dna topology , inside eukaryotic chromatin with multiple nucleosomes clustering on long dna threads .simulations of the separate protein - dna complexes could be run on atomic scale , and the stress imposed on the dna in one simulation could be passed to another via an elastic rod model of the dna segment(s ) connecting the different complexes .the binding of some proteins could even be mimicked with the intrinsic curvature and twist terms , as outlined 
above .finally , the local dna geometry predicted in our model can be used to replace the coarse - grained elastic ribbon with an all - atom dna structure in selected areas of the loop or even over the whole loop ( cf app .[ sec : appendixf ] ) .an example is shown in fig .[ fig : lac_operon]c , d , where the all - atom structure was placed on top of the predicted 76 bp and 385 bp u loops .a local segment of the all - atom structure can be simulated with the forces and torques from the rest of the loop applied to its boundaries using the same multi - scale simulation technique as outlined above .the segment can be simulated on its own , so as to compare the segment s structural dynamics in an unconstrained conformation , or inside the large bent loop . or , if the segment in question is a binding site of a certain protein , that protein could be docked to the dna segment , and the simulation of the two would allow one to study the changes in the protein binding to dna upon the loop formation . with the advent of the computational power ,even the simulations of the all - atom structures of moderately sized dna loops , such as the 76 bp loop studied here , would become possible .for such a simulation , the all - atom loop structure , predicted on the basis of the elastic loop model , can serve as a good starting point .naturally , before being used in such advanced simulations , the model has to be refined and extensively tested using all the available experimental data . in the present state, we have not even converged to a single preferred set of elastic moduli , as a whole family of values may account for the _ lac _ repressor loop bending energies .refining and adjustment of the model parameters on the basis of other data is at this point imperative for making the suggested universal elastic model fully functional on all levels .such data can come from the experiments on dna interaction with other dna - binding proteins , including topoisomerases , from the energetics of dna minicircles , and from the analysis of deformations in dna x - ray structures , following the ideas of olson _ et al _ .another necessary adjustment would have to account for the well - known sequence - specificity of dna properties .the bending moduli have to be determined in the sequence - specific fashion , as functions of the arclength , varying according to the dna sequence at each point of the studied loop ( see app .[ sec : appendixd ] ) .the intrinsic curvatures and varying intrinsic twist are also known to have a significant effect on the structure and dynamics of some dna sequences and parameterizing these functions is therefore also important for our model . 
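as a sketch of how such sequence - dependent parameterization might enter the model , the fragment below turns per - base - pair values of a modulus or an intrinsic curvature / twist into a smooth function of the dimensionless arclength , and evaluates the modified bernoulli - euler torques as deviations from the intrinsic values . the gaussian smoothing and all names are assumptions of this illustration , not the interpolation scheme of app . [ sec : appendixd ] .

```python
import numpy as np

def sequence_profile(per_bp_values, s, smoothing=0.01):
    """Smooth arclength profile of a per-base-pair quantity (an elastic
    modulus, intrinsic curvature, or intrinsic twist).

    per_bp_values : one value per base pair along the loop.
    s             : dimensionless arclength grid on [0, 1].
    A Gaussian kernel of width `smoothing` is used purely for illustration.
    """
    values = np.asarray(per_bp_values, dtype=float)
    centers = np.linspace(0.0, 1.0, len(values))
    w = np.exp(-0.5 * ((s[:, None] - centers[None, :]) / smoothing) ** 2)
    w /= w.sum(axis=1, keepdims=True)      # normalize the kernel weights
    return w @ values

def modified_torques(k1, k2, k3, k1_0, k2_0, omega_0, a1, a2, c):
    """Modified Bernoulli-Euler law: torques proportional to the deviation of
    the curvatures and twist from their intrinsic (relaxed) values, with
    arclength-dependent rigidities a1(s), a2(s), c(s)."""
    return a1 * (k1 - k1_0), a2 * (k2 - k2_0), c * (k3 - omega_0)
```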
before the parameters are better defined , the model is still good for solving general problems , such as selection between alternative dna loop topologies or energy estimates within several kt , as discussed in the previous section , but not for more subtle quantitative predictions of the structural and energetical properties of dna loops .several extensions to the model may prove necessary in order to achieve realistic dna descriptions in certain cases .one obvious extension is including dna deformability into eqs .( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) .dna is known to be a shearable and extensible molecule , as is evidenced by the deformations observed in the all - atom structures or micromanipulation experiments .the dna deformability can influence both the local and global structure of the modeled dna , especially in view of the observed coupling between the dna stretch and twist . in order to include the dna deformability in our model , eqs .( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) have to be modified as outlined in app .[ sec : appendixe ] , raising the order of the system ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) to 16 .another vital modification consists in adding a steric repulsion parameter to the other force terms . in the present study ,there was no need for such a term because of its insignificance compared to the dna self - repulsion .however , if the dna interaction with a positively charged object ( e.g. , the histone core of the nucleosome ) is to be described , then the steric repulsion term becomes imperative lest the elastic solutions collapse on the positive charges , causing non - convergence of the iterative process .the steric repulsion can be described as the van der waals 6 - 12 potential and would not be qualitatively different from the electrostatic repulsion introduced in . only the electric charges in eq .have to be replaced by the van der waals coefficients , the degrees of the denominators have to be changed to 6 or 12 , and the inverse screening radius has to be set to zero otherwise , the forces of steric repulsion are computed in the same way and through the same iterative algorithm applied to solving the integro - differential equations .as for the electrostatic self - repulsion , its description in our model can be rendered more realistic by placing the phosphate charges on the outside of the double helix rather than on the centerline as in the present simplified treatment . placing the phosphates at the points of the rod cross - section , where and are determined by the dna chemical structure ( fig . [fig : elrod_dna ] ) , would result in replacing the radii in eq .by .that would make the electric field dependent not only on the radius - vector , but also on the the euler parameters that determine the orientation of the local coordinate frame .this additional dependence is unlikely to result in any algorithmic difficulties with solving the equations .physically , however , moving the phosphate charges away from the centerline means introducing external torques in each cross - section in additional to external forces , and the torques could change the calculated loop structure again , unlikely in the general case , but possibly in the case of a close approach of the charged elastic ribbon to itself or any other charged object involved in the model .as has been noted above , the discussed modeling method produces static , zero - temperature structures of dna loops . 
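the substitutions described above can be made concrete in a small runnable sketch of the two pair interactions , evaluated between discretisation points of the centerline . the per - point charge placement , the neighbour cutoff and the parameter values are simplifications for illustration ; in the model itself these terms enter the iterative solution of the integro - differential equations .

```python
import numpy as np

def debye_huckel(r, q1, q2, kappa, eps_rel=80.0):
    """Screened Coulomb pair energy (SI units); kappa is the inverse screening radius."""
    k_e = 8.9875517873681764e9                     # 1/(4 pi eps_0) in N m^2 / C^2
    return k_e * q1 * q2 * np.exp(-kappa * r) / (eps_rel * r)

def steric_6_12(r, c12, c6):
    """6-12 term obtained by the substitutions described in the text:
    charges -> van der Waals coefficients, denominators -> r**12 and r**6, kappa -> 0."""
    return c12 / r**12 - c6 / r**6

def pairwise_energy(points, pair, skip=5):
    """Sum a pair potential over centerline points, skipping `skip` neighbours
    along the arclength so the discretised rod does not interact with itself locally."""
    e = 0.0
    for i in range(len(points)):
        for j in range(i + skip, len(points)):
            e += pair(np.linalg.norm(points[i] - points[j]))
    return e

# illustrative use: self-repulsion of a 10 nm circular loop, one charge per point,
# ~1 nm screening length (roughly 0.1 M monovalent salt)
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = 1e-8 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
q = -1.602e-19
e_el = pairwise_energy(pts, lambda r: debye_huckel(r, q, q, kappa=1.0e9))
```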
yet, the entropic contribution to the free energy of different dna states may sometimes be important .interestingly , there is a way to estimate the structural entropy of a dna loop with our method .one could employ the intrinsic curvature / twist terms and perform statistical sampling by assigning bends and unwinding / overwinding with the energetic penalty between 0 and 1kt at random points of the loop , analyzing the resulting changes in the loop structure and energy .since the zero temperature structure of the loop is used as the starting point in the iterative calculations of the randomly bent and twisted structures , the iterations should converge much faster than those running from scratch .thus the ensemble of thermally excited structures can be generated and used to obtain the properties of the studied dna loop at a final temperature .the entropy of each looped state can be estimated , for example , through the volume of space swept by all the different structures from the thermal ensemble . with the modifications , outlined above , the proposed model is likely to describe dna loops very realistically , yet still be less detailed and computationally much faster than all - atom models .one drawback , however , lies in the non - linear nature of the elastic problems .the solutions to the equations of elasticity are known to exhibit a non - trivial dependence on the problem parameters , for example , the boundary conditions . a latent response to the change of a certain condition can lead to an abrupt change in the shape of the bvp solution , for example , if the point of instability of the latter is achieved .if the solution changes too abruptly , the iterative procedure will fail due to non - convergence of the bvp solver .another reason for the non - convergence is meeting a bifurcation point , as it happened at low salt concentrations with the u and o solutions for the 385 bp loop ( sec .[ sec : electro_385 ] ) .such problems seem to be inherent to our method .if they are encountered , we would recommend to thoroughly analyze the nature of the non - convergence , for example , through monitoring the evolution of the intermediate solutions prior to the non - convergence . in the case of abrupt changes ,the ultimate structure can perhaps be guessed , or achieved along a different pathway with an equivalent endpoint that might not be leading the rod through the point of abrupt change ( for instance , rotating the end of the elastic rod counterclockwise by if a clockwise rotation by is causing problems ) . in the case of a bifurcation ,the bifurcating solutions branches need to be analyzed which by itself may provide useful insights into the structural properties and transformations of the studied loop . 
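the sampling procedure suggested above could be organised as in the following sketch . here solve_rod_bvp stands for the iterative bvp solver of the model ( not implemented here ) , the kink shape is an assumption , and the amplitude is chosen so that the quadratic bending penalty stays below the prescribed bound of order kt .

```python
import numpy as np

def sample_thermal_ensemble(solve_rod_bvp, zero_t_solution, loop_length,
                            bend_modulus, n_samples, kT=1.0, seed=None):
    """Generate a set of thermally perturbed loop structures as suggested above."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_samples):
        s0 = rng.uniform(0.0, loop_length)        # random point along the loop
        penalty = rng.uniform(0.0, kT)            # energetic penalty between 0 and kT
        width = 0.05 * loop_length                # extent of the imposed bend (assumed)
        # amplitude of an intrinsic-curvature kink whose elastic cost is ~ penalty,
        # from the quadratic bending energy  E ~ (A/2) * dkappa**2 * width
        dkappa = np.sqrt(2.0 * penalty / (bend_modulus * width))
        perturbation = {"s0": s0, "width": width, "dkappa": dkappa}
        # an under/overtwist perturbation would be generated analogously from the
        # twist modulus; re-solving from the zero-temperature structure speeds convergence
        ensemble.append(solve_rod_bvp(perturbation, initial_guess=zero_t_solution))
    return ensemble
```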
to conclude the paper , let us summarize what has been learned with our method about the specific system , the _ lac _ repressor and its dna loop .for both possible lengths of the loop , it has been shown that the underwound loop structure pointing away from the _ lac _ repressor should be predominant under thermodynamic equilibrium conditions , unless other biomolecules interfere .the predicted structure of the u loop depends only slightly on the salt concentration , although the loop energy exhibits a stronger dependence .the experimentally observed energy of the loop can be obtained with the right combination of parameters - which , of course , have to be extensively tested with other protein - dna systems .another reason why the predicted structure of the dna loop has to be treated with caution is the dynamic nature of the _ lac _ repressor - dna complex .the x - ray structure , on which our conclusions were based , has been obtained without the dna loop connecting the protein - bound dna pieces and , therefore , can be presumably changed by the stress of the bent dna loop . yet, our results pave the way for analyzing these changes .the forces of the protein - dna interactions , computed with our model , can be applied in a multi - scale simulation of the _ lac _ repressor - dna complex , as described above .such simulation would reveal the equilibrium state of the _ lac _ repressor with the bound dna loop , or at least show the spectrum of structural states visited by the protein during its dynamics with the attached loop .the results of such dynamics may even change our conclusions about the structure and energetics of the dna loops created by the _ lac _ repressor .when the boundary conditions change , the o loop can become energetically preferable to the u loop , or even one of the dismissed extraneous solutions may become feasible . yet , to make multi - scale simulations possible , that would study the complexes between _ lac _ repressor , or other proteins , and the dna loops in their dynamic nature , was the driving force behind the development of our coarse - grained dna model which , per se , does not pretend to yield final conclusions about the _ lac _ repressor - dna complex .in conclusion , a universal theoretical model of dna loops has been presented .the model unifies several existing dna models and provides description of many physical properties of real dna : anisotropic flexibility , salt - dependent electrostatic self - repulsion , intrinsic twist and curvature all the properties being sequence - dependent .the model is applicable to a broad range of problems regarding the interactions of dna - binding proteins with dna segments of moderate length ( 1001,000bp ) and can serve as a basis of all - atom or multi - scale simulations of protein - dna complexes .the application of the model to the _ lac _ repressor - dna complex revealed a likely structure of the 76 bp and 385 bp dna loops , created by the protein .the experimentally measured energy of the dna loop formation is obtainable with a proper set of parameters .the obtained forces of the protein - dna interactions can be used in a multi - scale simulation of the _ lac _ repressor - dna complex . further comparison with experimental datawill be beneficial for optimizing the parameters and approximations of the model .this work was supported by grants from the roy j. 
carver charitable trust , the national institute of health ( phs 5 p41 rr05969 ) , and the national science foundation ( bir 94 - 23827eq ) .the figures in this manuscript greatly benefited from the molecular visualization program vmd .the following eleven dimensionless functions constitute the initial simplified solution to the equations ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) , the planar circular uncharged isotropic rod : here is an arbitrary parameter that determines the angle between the axis of the lcs and the plane of the elastic rod . according to , the equilibrium constant of binding of the _ lac _ repressor to a single - operator dna equals about m for o and m for o at high salt concentration ( 0.2 m ) .this results in the free energies of binding and .the equilibrium binding constant of the _ lac _ repressor to the dna promoter , containing both o and o sites , equals m , resulting in the free energy .this results in the free energy of formation of the 76 bp dna loop . in order to derive ,let us consider a short section of a tightly twisted rod with no intrinsic curvature .then the total curvature equals .the bending energy of the section is : where is the angle between the binormal and ( fig .[ fig : elrod]b ) . a simple textbook calculation using frenet formul yields since the rod is tightly twisted , the term grows much faster with than both other terms in and .therefore , the second term in is an integral of a fast oscillating function and as such is much smaller than the first term .accordingly , and , since the approximation of a single bending modulus assumes we arrive at .the sequence - dependent parameters could be related to the arclength of the rod in several possible ways .the simplest would be a step - wise assignment of the parameters . at the points , corresponding to each dna step between two neighboring base pairs , the parameters ( elastic moduli , , and , intrinsic curvature and twist ) would adopt the values correspondent to that dna step .connecting those points with smooth functions ( for example , spline - based ones ) would result in the desired parameter setup , , etc ., for the particular loop .a slightly modified approach could keep the dna step - based parameters constant in a certain area in the middle between the base pairs and limit the zone of a smooth transition from value to value to a certain width .then , the sequence - based parameter functions between the dna steps to and from the base pair located at the point would look like : certainly , another smoothly differentiable function can be used in the transition zone instead of the sine .finally , a more complicated approach can relate the parameters to a longer dna sequence surrounding each point rather than to only three neighboring base pairs defining the two adjacent dna steps .such approach would be more realistic , as the experimental and simulation data indicate . in that case, the model parameter functions would have to depend on multiple base pairs neighboring each point , for example , as described in .the dna deformability can be described in our model by additional three variables , combined into the shift vector .its components are the amount of shear in the two principal directions , and the component is the amount of extension along the normal . the vector of shift and the elastic force are linearly related to each other , similarly to the vector of curvature and the torque ( cf eq . 
): where are the shear moduli in the two principal directions , and is the extension modulus of dna .those parameters would also have to be determined from and extensively verified against the experimental data , e.g. those presented in .thus introduced , deformability changes eq . into propagating into eq . . finally , the systems ( [ eq : grand_1]-[eq : grand_11 ] ) , ( [ eq : adv_grand_1]-[eq : adv_grand_11 ] ) become of 16-th rather than 13-th order, but could be similarly solved by the continuation method . the following algorithm has been used in this work to build all - atom dna structures on the basis of coarse - grained elastic rod solutions .first , the all - atom structures of the base pairs , with the sugar phosphate backbone groups attached , were built in the idealized b - conformations using quanta .second , the chosen elastic rod solution was used to obtain local coordinate frames from at each point corresponding to the location of each base pair of the loop along the centerline of the dna helix .third , the all - atom base pairs were centered at the points and aligned with the coordinate frames as illustrated in fig .[ fig : elrod_dna ] .fourth , the sugar phosphate groups were connected with each other and the _ lac _ repressor - bound dna segments by phosphodiester bonds .fifth , two 50-step minimization rounds with x - plor using charm22 force field were performed in order to relieve bad interatomic contacts and chemical group conformations ( bonds , angles , dihedral angles ) resulting from this idealized placement , especially for the dna backbone . the resulting all - atom structure , while still stressed at certain points and overidealized at others , presents a reasonable starting point for any all - atom or multi - scale simulations , as described in sec .[ sec : discussionb ] .
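steps 2 - 3 of the reconstruction algorithm above can be written compactly with numpy , as in the following sketch ; building the local frames from the euler parameters of the solution and forming the backbone bonds ( steps 4 - 5 ) are omitted , and the toy centerline , frames and two - atom " base pair " are purely illustrative .

```python
import numpy as np

def place_base_pairs(centerline, frames, template_atoms):
    """
    centerline     : (N, 3) positions r(s_i) of the base-pair centres
    frames         : (N, 3, 3) rotation matrices whose columns are d1, d2, d3
    template_atoms : (M, 3) atoms of one idealised base pair in its local frame
    returns        : (N, M, 3) atom positions along the loop
    """
    return centerline[:, None, :] + np.einsum("nij,mj->nmi", frames, template_atoms)

# toy usage: a planar circular "loop" with frames following the tangent
n = 12
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
r = np.column_stack([np.cos(phi), np.sin(phi), np.zeros(n)])
d1 = np.column_stack([np.cos(phi), np.sin(phi), np.zeros(n)])      # radial
d3 = np.column_stack([-np.sin(phi), np.cos(phi), np.zeros(n)])     # tangent
d2 = np.cross(d3, d1)
frames = np.stack([d1, d2, d3], axis=-1)                            # columns d1, d2, d3
template = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])            # two placeholder "atoms"
atoms = place_base_pairs(r, frames, template)
```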
a versatile approach to modeling the conformations and energetics of dna loops is presented . the model is based on the classical theory of elasticity , modified to describe the intrinsic twist and curvature of dna , the dna bending anisotropy , and electrostatic properties . all the model parameters are considered to be functions of the loop arclength , so that the dna sequence - specific properties can be modeled . the model is applied to the test case study of a dna loop clamped by the _ lac _ repressor protein . several topologically different conformations are predicted for various lengths of the loop . the dependence of the predicted conformations on the parameters of the problem is systematically investigated . extensions of the presented model and the scope of the model s applicability , including multi - scale simulations of protein - dna complexes and building all - atom structures on the basis of the model , are discussed .
the solvency ii directive requires that insurance undertakings have to calculate the _ solvency capital requirement _ taking into account for the correlation among the risk driver .this implies the existence of a diversification effect .the evaluation of the solvency ii capital requirement net of diversification effect is a needful procedure to know the real capital absorption of the lines of business and to evaluate the relative financial performance .academic research addresses the capital allocation for many years .indeed they were formulated various approaches to the problem by moving from game theory or establishing the principles of coherence through axiomatic definitions for evaluating allocation methods in relation to the specific risk measures .this last line of research has provided significant applications respect to various risk measures assuming different distributions for the underlying risk variable and identifying the euler s allocation principle as the highest performing .+ the most important papers we refer are : * tasche ( 1999 ) define the rorac compatibility as the most important economic property of an allocation principle and state that , for risk measures with continuous derivatives , the unique continuous per - unit allocation principle rorac compatible is that of euler * denault ( 2001 ) establishes the principles of coherence for an allocation principle and derive the euler allocation principle moving from game theory * a. buch , g. dorfleitner ( 2008 ) state that the euler allocation principle associated with a coherent risk measure produce a coherent allocation of risk capital the aim of this paper is to study the solvency ii capital requirement allocation for european insurance companies that calculates the scr by means of the standard formula providing an allocation principle and an approach to evaluate the financial performance of the risk capital invested .+ + our way it is to consider the scr as risk measure noticing that , under the set of hypothesis underlying the standard formula , is coherent in the sense of artzner ( 1999 ) .then , by means of the euler s allocation principle , we derive the closed formulas to calculate the allocated scr among the risk considered in the multilevel aggregation scheme of solvency ii standard formula . due to the cited results we know that the allocation provided is coherent in the sense of denault ( 2001 ) and rorac compatible .then we show that , given the rorac compatibily , this result can be used to evaluate the financial performance of an insurance portfolio . + +the paper is organized as follows . in the section [ par : theory ]we introduce all theoretical background used in the following sections as euler theorem , coherence of risk measures ( ) , coherence of risk capital allocation ( ) , rorac compatibility and coherence of the euler s allocation ( and ) . in the section [ sec : hp fs ]we describe the standard formula approach to scr calculation and show a set of coherent hypothesis and definitions . in the section [ sec : corpo ]we define the diversification effect as variable and we provide the formulas for the scr allocation among each single macro and micro risk included in the multilevel aggregation scheme of the standard formula . 
in the section [ sec : risk appetite ]we provide a mean variance model for the rorac to evaluate the underwriting and reinsurance policies and to define the risk appetite on each sub - portfolios by solving an optimization problem .follows the conclusion and the perspectives for future research .we consider an insurance company whose portfolio of insurance contract is composed by -homogeneous sub - portfolios .we define a set of random variable in the probability space ] that are rorac compatible , they are uniquely determined as : this is called euler s allocation principle of the risk measure among the -sub - portfolios . from a mathematical point of view , the euler allocation principle derives from the application to the risk measures considered of the well known euler s homogeneous function theorem .the euler s allocation principle described in the previous subsection , is one of the most popular allocation method proposed in the literature .this is due for its suitable properties . in this way ,a very important contribution is that of buch et g. dorfleitner ( 2008 ) . from an axiomatic point of view, they study the relation between the properties of the euler s allocation principle and those of the risk measure to which the allocation is applied . what they find is resumed in the following proposition .[ prop : buch_dorfleitner ] the euler s allocation principle applied to a coherent risk measure has the properties of full allocation , `` no undercut '' and riskless allocation " so it is coherent in the sense of denault ( 2001 ) .this result has a main role for our research because we will use it to prove that the allocation methodology that we find to calculate the allocated scr in the solvency ii standard formula framework by means of the euler s principle , is coherent in the sense of denault ( 2001 ) .this , united to the rorac compatibility ensured by the euler s principle , and the closed formulas implies very suitable properties and practical implications for our results .the solvency ii directive provide that the insurance companies have to calculate their regulatory capital requirement , named solvency capital requirement ( hereafter referred as scr ) , by means of a risk based methodology . 
from a practical point of view, they can choose between a standard formula provided by eiopa or to produce him self an internal model .in the following we take into account only for the case that the scr is calculated with the standard formula ( hereafter referred as fs ) .the fs provides that the company should calculate the scr through the modular approach that will be defined .+ + the risk - based modular approach considered in the solvency ii framework provides that the company has to consider the global risk which is exposed to by dividing it into single components .these components are related with different sources of risk ( hereafter named `` risk '' ) like reserve risk , mortality risk , interest rate risk e so on .the modular scheme considers macro risks .the generic macro risk ( ) is composed by micro risk .we use the following notation for all variables will be defined : the first digit of the subscript identifies the macro - risk and is from to , the second one identifies the micro - risk and is from to ( where identify the overlying macro - risk ) .+ + we define a set of random variable on the probability space $ ] .let ( with ) the random variable that describe losses , over an annual time horizon , associated with the micro - risk and let the respective r.v .unexpected losses .the generic macro risk depends of a random variable .the total risk of the company is described with the random variable .the solvency ii capital requirement of the company is defined by means of the specific risk measure on following defined : [ def : sf ] 1 . is the capital requirement referred to the micro - risk defined as : + is approximated by means of specific formulas provided by eiopa . is the capital requirement referred to the macro - risk calculated by aggregating the underlying micro - risk : + + where represents the linear correlation coefficients provided by eiopa . is the overall capital requirement of the company and is calculated by aggregating the underlying macro - risk : + + + where represents the linear correlation coefficients provided by eiopa .note that the square - root aggregation formula ( [ eq : scr_ix ] and [ eq : scr_i ] ) implies that the r.v . ( ) are jointly normal distributed and linearly correlated[multiblock footnote omitted ] so that the scr is a coherent risk measure .+ +with reference to the scrs of the macro - risk and micro - risk , moving from lemma [ lemma : tasche_euler_theorem ] the allocation formulas are obtained .[ macrorisktheorem ] in the case of solvency ii standard formula , the rorac compatible allocation of the overall scr among the constituents macro - risk is uniquely determine as : where is the allocated _ i - th _ macro - risk. 
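the aggregation formulas of definition [ def : sf ] and the euler allocation of theorem [ macrorisktheorem ] ( whose stripped display presumably reads ascr_i = scr_i ( sum_j corr_ij scr_j ) / scr ) can be illustrated with a short runnable python sketch ; the correlation entries and scr figures below are invented for illustration and are not the eiopa calibration .

```python
import numpy as np

def aggregate(scr, corr):
    """Square-root aggregation of the standard formula: sqrt(s^T C s)."""
    scr = np.asarray(scr, dtype=float)
    return float(np.sqrt(scr @ corr @ scr))

def euler_allocation(scr, corr):
    """Euler (RORAC-compatible) allocation of the aggregated requirement."""
    scr = np.asarray(scr, dtype=float)
    return scr * (corr @ scr) / aggregate(scr, corr)

# toy example with three macro-risks (illustrative figures, not the EIOPA calibration)
corr_macro = np.array([[1.00, 0.25, 0.25],
                       [0.25, 1.00, 0.50],
                       [0.25, 0.50, 1.00]])
scr_macro = np.array([100.0, 60.0, 40.0])

scr_total = aggregate(scr_macro, corr_macro)
alloc_macro = euler_allocation(scr_macro, corr_macro)     # sums to scr_total (full allocation)
alloc_ratio = alloc_macro / scr_macro                     # per-module allocation ratio, reused
                                                          # for the micro-risks as in the theorem below
```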
from lemma [ lemma : tasche_euler_theorem ] ( euler s allocation principle ) holds that : + + where the partial derivative is : and so : moving from theorem [ macrorisktheorem ] it is possible to reach a similar result for the micro - risk allocation .it is useful to define the variable allocation ratio as : + + [ microrisktheorem ] in the case of solvency ii standard formula , the rorac compatible allocation of the macro - risk scr ) among the micro - risk is uniquely determined as : where the variable is the allocated micro - risk .from lemma [ lemma : tasche_euler_theorem ] ( euler s allocation principle ) we have that : by means elementary algebra holds that : so that : the theorems [ macrorisktheorem ] and [ microrisktheorem ] enables to conclude that , under assumptions ( [ def : sf ] ) , the rorac compatible and coherent allocation of scr is uniquely determined by means the euler s principle and can be expressed by means of the closed expressions reported .the risk based approach for the solvency ii capital requirement calculation enables insurance companies to evaluate their profitability taking into account for the capital absorption of the each sub - portfolio .furthermore , the solvency ii directive requires that insurance undertaking have to evaluate their underwriting and reinsurance policies and to define the limits for the risk appetite . in order to do it, we propose to lead back the problem to the classical portfolio theory . in this waywe show that it is possible to use the same integrate framework for all the named evaluation .in particular , we consider a mean - variance model on the sub - portfolios rorac . ] .+ to evaluate different underwriting and reinsurance strategies according with the defined risk appetite , we propose the following optimization problem : where : - : : is the expected global _rorac _ of the company - : : cv is the vector with coefficient of variation of the rorac : + [ cols="^ " , ] |=========================================== | |=========================================== - : : p is the vector of lobs premium . and are the contraints for the premium depending on the commercial strategy of the company - : : r is reisurance program subjected to qualitative and quantitative contraints - : : is risk appetite limit - : : and are the bounds imposed to the overall scr to limit the solution to wich compatible with the capital availabilities of the company from a practical point of view it is sufficient to test the realistic scenario of underwriting ( given the commercial power ) and reinsurance ( given the market offer ) and choose the global strategy that are optimal . 
in the following table , we show a numerical example based on a risk profile measurement rorac compatible derived from an anonymous non - life company data base : + [ cols="^,^,^,^",options="header " , ] the data above can be reported in the figure [ riskreturn ] that represents the contribution of each lob risk performance to the company s risk situation .in this paper we have shown that , under solvency ii standard formula framework , is possible to obtain a solvency capital requirement allocation among micro and macro risks that is coherent in the sense of denault and rorac compatible .we demonstrated the results by means of the euler s allocation principle .+ then , given the allocated scr , we have provided a procedure to evaluate the underwriting and reinsurance policies and to determine the risk appetite of the stakeholder by means of a rorac index and collocating the argument under the classical portfolio theory . + some possible developments can be the construction of a model for the micro risk allocation among sub - portfolios for all the case in which is not possible to apply the formulas we provided ( eg . the allocation of the interest rate risk among life lobs etc . ) .this may have a very strong effect in the real analysis due for its straightforward possibility of application eg .the valuation of underwriting strategy and reinsurance strategy .acerbi c. , tasche d. , 2002 _ on the coherence of expected shortfall_. journal of banking and finance 26 , 1487 - 1503 .albrecht p. , 2004 _ risk based capital allocation_. in : encyclopedia of actuarial science .wiley , chichester .artzner p. , delbaen f. , eber j.m . , heath d. , 1997 _ thinking coherently_. risk 10 , 68 - 71 .artzner p. , delbaen f. , eber j.m . , heath d. , 1999 _ coherent measures of risk_. mathematical finance 9 , 203 - 228 .buch a. , dorfleitner g. , 2008 _ coherent risk measures , coherent capital allocations and the gradientallocation principle_. insurance : mathematics and economics 42 235242 .bullen p.s . ,mitrinovic d.s . , vasic m. , 2003 _ handbook of means and their inequalities_. kluwer , dordrecht , boston , london .delbaen f. , 2002 _ coherent risk measures on a general probability space_. in : advances in finance and stochastics .springer , berlin , pp 1 - 37 .denault m. , 2001 _ coherent allocation of risk capital_. journal of risk 4 , 7 - 21 .dhaene j. , goovaerts m. , kaas r. , 2003 ._ economic capital allocation derived from risk measures_. north american actuarial journal 7 , 44 - 59 dhaene j. , tsanakas a. , valdez e. , vanduffel s. , 2005 ._ optimal capital allocation principles_. in : 9th international congress on insurance : mathematics and economics .ime2005 , 6 - 8 july , quebec , canada .dhaene j. , laeven r. , vanduffel s. , darkiewicz g , goovaerts m. , 2007 ._ can a coherent risk measure be too subadditive?_. journal of risk and insurance .eiopa , 25.07.2014 , _ the underlying assumption in the standard formula for solvency capital requirement calculation ._ - https://eiopa.europa.eu .grndel h. schmeiser h. , 2007 ._ capital allocation for insurance companies - what good is it?_. journal of risk and insurance . maume - deschamps v. , rullire d. , said k. , 2015 , _ a risk management approach to capital allocation_. + arxiv preprint arxiv:1506.04125v1 [ q-fin.rm ] .panjer h. , 2001 _ measurement of risk , solvency requirements and allocation of capital within financial conglomerates_.university of waterloo . working paper .sandstrom a. 
, 2007 , _ solvency ii : calibration for skewness_. scandinavian actuarial journal .tasche d. , 1999 , _ risk contributions and performance measurement_. working paper , technische universitat munchen , 1999 .tasche d. , 2002 , _ expected shortfall and beyond_. journal of banking and finance 26 , 15191533 .tasche d. , 2004 , _ allocating portfolio economic capital to sub - portfolios_. in : dev , a. ( ed . ) , economic capital a practioner guide .risk books , london , pp .tasche d. 2007 , _ capital allocation to business units and sub - portfolios : the + euler principle_. arxiv.org + http://ideas.repec.org/p/arx/papers/0708.2542.html .urban m. , dittrich j. , klppelberg c. , stlting r. , 2004 ._ allocation of risk capital to insurance portfolios_. bltter dgvfm 26 , 389406 .koyluoglu u. , stoker j. , 2002 , _honour your contribution_. risk , 15(4):9094
the aim of this paper is to introduce a method for computing the allocated solvency ii capital requirement ( scr ) of each risk to which the company is exposed , taking into account the diversification effect among different risks . the suggested method is based on the euler principle . we show that it has suitable properties such as coherence in the sense of denault ( 2001 ) and rorac compatibility , as well as practical implications for companies that use the standard formula . further , we show how this approach can be used to evaluate underwriting and reinsurance policies and to define a measure of the company s risk appetite based on the return on capital at risk . + + * keywords * + solvency capital requirement allocation , euler principle , standard formula , return on risk adjusted capital , risk - return profile , underwriting policy , rorac compatibility
the year 2005 was declared the world year of physics which is an international celebration of physics .it marks the hundredth anniversary of the pioneering contributions of albert einstein , the greatest man in the twentieth century as chosen by time magazine . in 1905 , one hundred year ago , a swiss patent employee , albert einstein , published a paper entitled on the electrodynamics of moving bodies " which described what is now known as special relativity .it drastically changed human fundamental concepts of motion , space and time .the year 1905 was the miraculous year for einstein .in the same year he also published three other trailblazing papers .one accounted for the photoelectric phenomena and made up a part of the foundation of quantum mechanics .he won the nobel prize in physics due to the ideas of this paper in 1921 .the second one was about the explanation of brownian motion and helped to establish the reality of the molecular nature of matter and to present convincing evidence for the physical existence of the atom .the third one gave the most famous and beautiful equation in special relativity , , which has received various experimental verifications and has had wide application in modern physics .these groundbreaking papers have shattered many cherished scientific beliefs and greatly promoted the development of modern physics .they won for einstein the greatest physicist as newton in all human history .next , we will follow the process of the establishment of special relativity and summarize some useful skills in research. one of the most famous puzzles at the end of the nineteenth century was the ether which was proposed as a medium to support the electromagnetic wave propagation .maxwell s fundamental equations about the electromgnetic field were published in 1862 .it leads to the electromagnetic wave equation in free space , where is any component of or .classical mechanics tells us that wave propagation needs a medium to support it .for example , sound waves have to travel in the air medium . for the propagation of electromagnetic wave, physicists presumed an ether medium which was entirely frictionless , pervaded all space , and was devoid of any interaction with matter . although many ingenious physics papers during 1885 - 1905 were dedicated to verifying it , the ether refused to reveal its presence to the pursuers . in 1881 , a 28-year - old american physicist , albert michelson , realised the possibility of an experimental test for the existence of ether by measuring the motion of the earth through it .he performed the experiment in potsdam , germany .although he got negitive results in detecting the relative motion of the earth and ether , his measurement is not so accurate as to give an important result .six years later , albert michelson and edward morley in cleveland , ohio carried out a high - precision experiment to demonstrate the existence of ether with an interferometer , which is now called the michelson interferometer .this experiment is a high - precision repetition of michelson s experiment in potsdam . 
in their experiment shown in fig .1 , a beam of light from the source was directed at an angle of 45 degree at a half - silvered mirror and was split into two beams 1 and 2 at point o by the mirror too .these beams 1 and 2 traveled at a right angle to each other .the two beams were reflected by separate mirrors , then recombined and entered a telescope to form a fringe pattern .the fringe pattern would shift if there was an effect due to the relative motion of the earth and the ether when the apparatus was rotated .therefore , by monitoring the changes in the fringe pattern , they could tell the relative motion of the earth and the ether .even with the high - precision apparatus , they did not find any experimental evidence for the existence of relative motion of the earth and ether .these results indicated that either there is no ether or the earth is in the ether rest frame all the time during the experiment . since the earth is always altering its velocity when moving around the sun , the experimental result appeared to show that there was no existence of ether .all the repetitions of their experiment in the succeeding years were still unable to detect any relative motion of the earth and the ether . ] but ether died hard .many physicists including michelson himself made great efforts to retain the ether yet explain the michelson experiment .he attributed the negative results to the earth dragging some of the ether along with its motion .as a consequence the ether was motionless with respect to the earth near its surface .george fitzgerald put forward another possible explanation in 1892 following the lorentz - fitzgerald contraction equation , as we now know , where is called the proper length of an object which is measured in the rest frame of the object . to a moving observer of velocity ,any length along the direction of motion undergoes a length contraction by a factor of .he proposed that the experimental apparatus would shorten in the direction parallel to the motion through the ether .this shrinkage would compensate the light paths and prevent a displacement of the fringes due to the relative motion of the earth and the ether .hendrik lorentz discovered the well - known lorentz transformation in 1904 under which the electromagnetic theory expressed by the celebrated maxwell equations were in form invariant in all inertial frames . where we assume coordinate system moves relative to another one in the -axes with uniform velocity and .although he laid the foundation for the theory of relativity with his mathematical equations , lorentz still tried to fit these remarkable equations into the ether hypothesis and save the ether from the contradiction of the michelson experiment .all these efforts failed to explain the michelson experiment while retaining the ether .it was genius albert einstein who abandoned the ether entirely .he wrote in his celebrated paper on relativity in 1905 : the introduction of a ` light ether ' will prove to be superfluous , inasmuch as in accordance with the concept to be developed here , no ` space at absolute rest ' endowed with special properties will be introduced , nor will a velocity vector be assigned to a point of empty space at which electromagnetic processes are taking place . 
"furthermore he developed the special relativity by conjucturing two postulations : \1 .the laws of nature are the same in all coordinate systems moving with uniform motion relative to one another .the speed of light is independent of the motion of its source .of which , the first one is a natural generalization for all kinds of physical experience since it is reasonable to expect that the laws of nature are the same with respect to different inertial frames of reference . whereas the second one just represents a simple experimental fact . in michelson s experiment ,the speed of light was found to be constant with respect to the earth . put in other words ,the speed of light is the same for observers in different inertial frames of reference since an observer on the earth at two different times may be regarded as an observer in two different inertial frames of reference .it is only a small jump to the postulate of einstein s special relativity that the speed of light is independent of the motion of its source . from the above condensed outline of the establishment of special relativity, we learn that when proposing an idea or theory with which we may account for some unexplained phenomena , they should be based upon the given facts of experiment or phenomena .sometimes , the ideas may contradict well - known theories which are not experimentally proved , such as the ether .if the problem is subtle and complicated , especially in physics , we should make incisive analyses , see through the general appearance and grasp the underlying nature .the above method is of practical use in research which could be seen from the following four examples .a good example in illustration of the method is the origin of quantum physics which now plays an important role in various scientific areas . at the end of the nineteenth century ,classical physics achieved great success .but some experimental results were incompatible with the classical physics such as the specific heat of a solid , the photoelectric effect and the thermal radiation of a black body .kirchhoff initiated his theoretical research on thermal radiation in the 1850s . by the end of the nineteenth centurytwo important empirical formulae on black - body radiation had been derived based upon the fundamental thermodynamics .wien proposed a formula for the energy density inside a black body in 1896 , where is the temperature of the wall of a black body , is the radiation frequency and , are two constants .rayleigh and jeans derived a result from a different approach in 1990 , where is the boltzmann s constant .the rayleigh - jeans formula was in agreement with the experimental curve at low frequencies whereas the wien formula fitted the experimental curve well at high frequencies .it was a great discrepancy at the turn of twentieth century that they failed to completely explain black - body radiation .seemingly there was no way out since these formulae were based upon the fundamentals of classical physics .things changed on dec .14 , 1900 when at a german physical society meeting max planck presented his paper entitled on the theory of energy distribution law of normal spectrum " which not only solved the puzzel of black - body radiation but uncovered the quantum world .it marks the birth of quantum physics .he assumed that this energy could take on only a certain discrete set of values such as 0 , ... 
, where is now known as planck s constant .these values are equally spaced rather than being continuous .this assumption apparently contradicted the equipartition law and common sense .he argued that the wall of a black body emitted radiation in the form of quanta with energy of integer mutiple of .based on this bold assumption , planck gave a formula of the energy density at frequency , which were in complete agreement with experimental results on general grounds .his formula was an ingenious interpolation between the wien formula and the rayleigh - jeans formula .we now know that it gives the correct explanation of the black body radiation spectrum .this proposal established his status in science .as einstein said : `` very few will remain in the shrine of science , if we eliminate those moved by ambition , calculation , of whatever personal motivations ; one of them will be max planck . ''another example concerns the situation of string theory in its early stage .we know that quantum field theory worked well in the unification of quantum mechanics and electromagnetism in the 1940s . that it could also describe the weak and strong interactions was understood by the end of 1960s .it has played a significant role in our understanding of particle physics in many ways , from the formulation of the four - fermion interaction theory to the unification of electromagnetic and weak interactions .but when we attempted to incorporate it with gravity at high energy scale , severe problems appeared .for example , when , the interaction of gravitation can not be negligible , where is planck energy .the short - distance divergence problem of quantum gravity arouse .it was non - renormalizable even though we have employed the usual renormalization extracting the meaningful physical terms from the divergences .string theory solved this severe problem . according to the postulates in string theory , all elementary particles , as well as the gravitonwere regarded as one dimensional strings rather than point - like particles which were generally accepted at the time .but the generally accepted point - like particle concept was not experimentally proved .each string has a lot of different harmonics and the different elementary particles were regarded as different harmonics in string theory .therefore the world - line of a particle in quantum field theory shown in fig .2 was replaced by its analog in string theory , the world - sheet of a string which could join the world - sheet of another string smoothly . as a consequence ,the vertex of an interaction in a feynman diagram was smeared out . in string theory the massless spintwo particle in the string spectrum was just right identified as the graviton which mediates gravitation . at low energy scalethe interaction of massless spin two particle is the same as that required by general relativity . from this simple string postulate, string theory leads to a number of fruitful results .it is the only currently known consistent theory of quantum gravity which does not have the above divergence problem .one of the vibrational forms of the string possesses just the right property spin two to be a graviton whose couplings at long distance are those of general relativity .it admits chiral gauge couplings which have been the great difficulty for other unifying models .in addition , string theory predicts supersymmetry and generates yang - mills gauge fields and it has found many applications to mathematics in the area of topology and geometry . 
also in string theory there are no dimentionless adjustable parameters which generally appear in quantum field theory , such as the fine - structure constant .nowadays , string theory ( detailed descriptions may be found in refs. ) has already become one of the most active areas of research in physics .thanks to the postulate of one dimensional string .it sheds light on and promises new insights to some deepest unsolved problems in physics , for example , what cause the cosmic inflation ? how does time begin ? what constitutes the dark matter and what is the so - called dark energy ?the third example is upon the process of the discovery of the neutrino.(in this paper , all neutrino mean only . ) in 1896 , radioactivity was discovered by henri becquerel which marked the birth of modern nuclear physics .subsequently three types of radioactive rays were identified .they were called alpha ray , beta ray and gamma ray separately .becquerel established that the beta rays were high - speed electrons in 1990 .employing electric and magnetic fields , he deflected beta rays and found that they were negatively charged and that the ratio of charge to mass of the beta particle was the same as that of an electron .after more accurate measurements on beta decays physicists found a serious problem . unlike alpha decay and gamma decay in which the emitted particles carried away the well - defined energy which is equal to the total energy difference of the initial and final states , beta decay emited electrons with a continuous energy spectrum .it meant that a particular nucleus emitted an electron bearing unpredictable energy in a particular transition .this experimental result apparently violated the conservation laws of energy and momentum .wolfgang pauli proposed an entirely new particle - neutrino in order to solve this serious problem . in his open letter to the group of radioactives at the meeting of the regional society in tubingen on december 4 , 1930 , he proposed the neutrino based on the given fact of experiment : `` ... this is the possibility that there might exist in the nuclei electrically neutral particles , which i shall call neutrons , which have spin 1/2 , obey the exclusion principle and moreover differ from light quanta in not travelling with the velocity of light . ''i admit that my remedy may perhaps appear unlikely from the start , since one probably would long ago have seen the neutrons if they existed . but ` nothing venture , nothing win ' , and the gravity of the situation with regard to the continuous beta spectrum is illuminated by a pronouncement of my respected predecessor in office , herr debye , who recently said to me in brussels ` oh , it s best not to think about it at all - like the new taxes ' . one ought to discuss seriously every avenue of rescue . 
" in his letter , pauli called his new proposed particle - the `` neutron '' which is now called neutrino due to enrico fermi .pauli proposed that this new speculative neutral particle might resolve the nonconservation of energy .if the proposed neutrino and the electron were emitted simultaneously , the continuous spectum of energy might be explained by the sharing of energy and momentum of emitted particles in beta decay .it is worth mentioning that long before the neutrino was experimentally detected , enrico fermi incorporated pauli s proposal in his brilliant model for beta decay in the framework of quantum electrodynamics in 1934 .he showed clearly with his beta decay theory that the neutron decayed into a proton , an electron and a neutrino simultaneously .the neutrino was experimentally detected by fred reines and clyde cowan , at the los alamos lab in 1956 using a liquid scintillation device .this important discovery won the 1995 nobel prize in physics .a lot of famous phenomena and problems solved and unsolved , related with the neutrino were found .parity violation takes place whenever there is the neutrino taking part in a weak interaction .this is just as the behavior of monopole under parity. they may be the same particle we think drawing inspiration from einstein .so it causes the above violation .time reversal violation of this kind of weak interaction also is due to sign change of charges . when detecting neutrino emitted from the sun , fewer solar neutrino capture rate than the predicted capture rate in chlorine from detailed models of the solar interiorwas found in 1968 .this is the solar neutrino puzzel .we explain the flavor change easily by the new nature of neutrino in solar neutrino puzzel . in short ,the change is by pair creation and annihilation .later the same phenomena were also observed by other groups using different materials .as the most fascinating particle , the neutrino is so important that neutrino physics has become one of the most significant branches of modern physics .thanks to the conjecture of the neutrino by pauli .although his proposal contradicted the well - accepted knowledge at the time on beta decay process , his new beta decay process involving the neutrino was not completely impossible experimentally . with this proposal we could overcome the serious problem and rescue the fundamental conservation laws of energy and momentumthe little neutrino has found its application to a number of different research areas in physics , such as in particle physics , nuclear physics , cosmology and astrophysics . ]finally we give an example of astronomy to show the usefulness of our method deduced from the establishment of special relativity . before the sixteenth century, it was extensively accepted that the sun , the moon and planets all orbited about the earth which was at the center of the universe . 
in his famous book , almagest ,the antient greek astronomer claudius ptolemy proposed the earth - centered model of the universe .he proved that the earth was round and the gravity everywhere pointed to the center of the earth .every planet moved along an epicycle whose center revolved around the earth just as the sun and the moon revolving around the earth .the epicycle was carried along on a larger circle like a frisbee spinning on the rim of a rotating wheel shown in fig .he postulated the epicycle to explain the looping motion of a planet .the people on the earth would not see the observed irregular motion of a planet if the planet moved around the earth in a circular orbit , rather than in a epicycle .the ptolemaic model of the universe was the obvious and direct inference when people observed the motion of the sun day after day and the motions of the moon and the planets night after night .therefore ptolemy s theory was well - accepted and prevailed for a long time . ] in about 1510 , nicholas copernicus presented the helio - centric model . in his celebrated bookpublished in 1543 , de revolutionibus orbium coelestium , he postulated that the planets including the earth all moved around the sun shown in fig . 4 .( in 1781 , william and caroline herschel discovered uranus , the first planet found beyond the saturn boundary , which was generally acknowledged as the outer limit of the solar system for thousands of years .neptune was discovered in 1846 by johann galle .the discovery of neptune was a great triumph for theoretical astronomy since neptune was at first predicted by adams and le verrier using mathematical arguments based on newton s universal gravitation law and then observed near their predicted locations .pluto was predicted by percival lowell and found in 1930 by clyde tombaugh . )the earth spin about its axis one rotation per day and revolved around the sun in the plane of the ecliptic .he explained the apparent looping motions of the planets in a simple way using his new helio - centric model .they were the direct consequence of the relative motion of the planets and the earth when people saw from the earth .he could not prove his radical helio - centric model at the time .although he simplified the cumbersome ptolemaic system , both the earth - centered model and the helio - centric model could account for the observations of motion of the celestial bodies .copernicus s theory gives an alternative theory of ptolemy .even though the problem was subtle and complicated , we should grasped the hidden nature behind the phenomena .as copernicus pointed out that the extremely massive sun must rule over the much smaller planet and the earth .he therefore drew his conclusion that it was the earth that moved around rather than the sun .it was isaac newton who provided the correct explanation of kepler s laws and convinced people that the earth and other planets revolved around the massive sun due to the attractive force with his ingenious universal law of gravitation . 
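the displayed equation of the law of gravitation appears to have been lost in extraction ; its standard form , which the following sentence describes term by term , is

```latex
F \;=\; \frac{G\,m\,M}{r^{2}}
```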
where is the universal gravitation force between the two bodies with mass m and m respectively , g is called the gravitational constant , and r is the distance between the centers of mass of the two bodies .galileo galilei first observed the four satellites orbited around jupiter which exhibited undoubtedly that the earth were not the center of all circular motions of the celestial bodies utilizing his telescopes .he stated the four small bodies moved around the larger planet - jupiter like venus and mercury around the sun .also he observed the phases of venus which was the direct result of the planet moving around the sun . ]we have reviewed the process of the establishment of special relativity against the background of physics around the turn of the twentieth century in our paper .moreover we have outlined the scientific method which helps to do research .some examples are presented in order to illustrate the usefulness of the method .we have discussed the origin of quantum physics and string theory in its early years of development .discoveries of the neutrino and the correct model of solar system have also been demonstrated .we have shown that the method is of practical use in a wide range , from physics to astronomy , from ancient science to modern ones .this year is the unprecedented world year of physics which acknowledges the contribution of physics to the world .it marks the hundredth anniversary of the pioneering contributions of albert einstein in 1905 as well as the fiftieth anniversary of his death in 1955 .we dedicate this paper to albert einstein .10 a. einstein , annalen der physik * 17 * , 891 ( 1905 ) a. einstein , annalen der physik * 17 * , 132 ( 1905 ) a. einstein , annalen der physik * 17 * , 549 ( 1905 ) a. einstein , annalen der physik * 17 * , 639 ( 1905 ) j. d. jackson , classical electrodynamics , 2nd edition , john wiley & sons , new york , ( 1975 ) a. beck and p. havas , the collected papers of albert einstein , english translation * 2 * , princeton university press , princeton , usa(1987 ) max planck , annalen der physik , * 4 * , 553 ( 1901 ) max planck , where is science going ?( several essays by max planck .prologue by albert einstein ) , dover , new york ( 1981 ) s. l. glashow , nucl .22 , 579 ( 1961 ) .s. weinberg , phys .* 19 * , 1264 ( 1967 ) .a. salam , in elementary particle theory , proc . of the 8th nobel symposium , aspenasgarden , 1968 , ed . by n. svartholm , p.367 m. b. green , j. h. schwarz and e. witten , superstring theory , vols. 1 and 2 , cambridge university press , cambridge , uk(1987 ) charles enz , no time to be brief - a scientific biography of wolfgang pauli , oxford university press , ( 2002 ) e. fermi , z. fur physik * 88*,161 , ( 1934 ) f. reines and c. cowan , science , 124 ( 1956 ) 10 j. adamczewski , nicolaus copernicus and his epoch , copernicus society of americ isaac newton , mathematical principles of natural philosophy and his system of the world , univ . of calif . press , berkeley , ca ( 1962 )
we review the physics at the end of the nineteenth century and briefly summarize the process by which albert einstein established special relativity . following in the giant s footsteps , we outline the scientific method which helps to do research , and we give some examples to illustrate this method . we discuss the origin of quantum physics and of string theory in its early years of development . the discoveries of the neutrino and of the correct model of the solar system are also presented .
we thank n. menon and c. d. santangelo for support and discussions , and acknowledge funding from national science foundation grants dmr 0907245 and dmr 0846582 .
we briefly discuss an arch - like structure that forms and grows during the rapid straightening of a chain lying on a table . this short note accompanies a fluid dynamics video submission ( v029 ) to the aps dfd gallery of fluid motion 2011 . the accompanying video shows fifty feet of chain ( silver - plated base metal , mm links ) arranged on about two feet of table top ; off camera , a power drill pulls one end at a velocity of 8 m / s . the relevant forces are those due to the chain s inertia and a line tension whose role is identical to that of pressure in a one - dimensional incompressible fluid . gradients in this tension amplify and advect small out - of - plane disturbances , presumably with some upward rectification by the table . the advection velocity drops to zero as the chain approaches its terminal velocity . there , the tension is uniform , structures are nearly `` frozen '' in the laboratory frame , and the vertical slack accumulates behind and within a slowly growing arch . by the end of the video , this arch has grown to about ten centimeters high . however , an analysis that takes gravity into account suggests that such a structure may remain stable when grown to a height on the order of a meter , the natural length scale of the system with gravity included ( ) . structures of such a size are difficult to access experimentally , as they do not evolve within the corresponding natural time scale of about one second ( ) . this is because the system is oblivious to gravity during the observation period of the experiment ; in a region with local curvature on the order of a few tens of inverse meters , an element of chain sees a large ratio of inertial to gravitational accelerations .
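A rough reading of the two quoted scales, under the assumption (ours, not stated explicitly above) that the relevant velocity is the 8 m/s pull speed and that the scales are the usual inertial-gravitational combinations:

$$\ell \sim \frac{v^{2}}{g} \approx \frac{(8\ \mathrm{m/s})^{2}}{9.8\ \mathrm{m/s^{2}}} \approx 6.5\ \mathrm{m}, \qquad \tau \sim \frac{v}{g} \approx 0.8\ \mathrm{s},$$

i.e. a gravitational length scale of meters rather than centimeters, far above the ten-centimeter arch seen in the video, and a time scale of just under a second.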
in many physical systems , boundary conditions are both the most important and the most difficult part of a theoretical treatment . in computational approaches ,boundaries pose further difficulties .even with an analytic form of the correct physical boundary condition in hand , there are usually many more unstable numerical implementations than stable ones .nowhere is the boundary problem more acute than in the computation of gravitational radiation produced in the coalescence of two black holes . in order to avoid the topological complications introduced by the black holes ,the proposed strategy for attacking this problem , initially suggested by w. unruh , is to excise an interior region surrounded by an apparent horizon . these are uncharted waters and there are many different tactics that can be pursued to attain an apparent horizon boundary condition . one common feature of all current approaches to this problem is the use of a cauchy evolution algorithm in the interior region bordering the apparent horizon . in this paperwe present an alternative tactic based upon a characteristic evolution in that inner region ; and we present a simple model of its global implementation . in order to provide orientation , we begin with a synopsis of the apparent horizon boundary condition and its computational difficulties .an apparent horizon is the boundary of the region on a cauchy hypersurface containing trapped surfaces .this explicit reference to a cauchy hypersurface in the definition gives an apparent horizon an elusive nature .indeed , there are cauchy hypersurfaces in the extended schwarzschild spacetime which come arbitrarily close to the final singularity but do not contain an apparent horizon .there is strong reason to believe that the same is true in any spherically symmetric black hole spacetime .on the other hand , when they exist , apparent horizons are useful spacetime markers because they must lie inside the true event horizon .consequently , signals can not propagate causally from the apparent horizon to future null infinity . thus truncation of the interior spacetime at the apparent horizon does not affect the gravitational waves radiated to infinity .this is the physical rationale behind the apparent horizon boundary condition .there is a gauge ambiguity in the inner boundary defined by an apparent horizon which is associated with the choice of cauchy foliation .such an ambiguity is not associated with the event horizon .however , the event horizon is of no practical use in a cauchy evolution since it can only be constructed in retrospect , after the global geometry of the spacetime has been determined .a better alternative is the trapping horizon , defined as the boundary of the spacetime region containing trapped surfaces . herethe reference to cauchy hypersurfaces is dropped while retaining the quasilocal concept of trapped surfaces .trapping horizons exist in any black hole spacetime whereas the existence of apparent horizons is dependent on the choice of cauchy foliation . in practice ,the problem of locating trapped surfaces is partially solved in the process of setting initial data . for the 3-dimensional problem of two inspiraling black holes ,there are several numerical approaches for determining appropriate initial cauchy data .an apparent horizon , when it exists , is a marginally trapped surface and lies on the trapping horizon . 
once the initial cauchy hypersurface cuts across a trapping horizon in this way , the scenario for pathological foliations is not present initially ; and a reasonable choice of lapse should guarantee that future cauchy hypersurfaces continue to contain that component of the apparent horizon .however , in the two black hole problem , besides the two disjoint apparent horizons present initially , an outer apparent horizon ( surrounding them ) is expected to form at a later time .finding and locating this outer apparent horizon can make the computational problem enormously easier by using it as the new inner boundary at this stage .excellent progress has been made in designing apparent horizon finders and trackers for this purpose .however , it is not known what lapse condition on a cauchy foliation would guarantee that an outer apparent horizon form at the earliest possible time .besides these geometrical issues there are a number of serious computational difficulties in implementing an apparent horizon boundary condition . in order to obtain gravitational waveforms ,the computational domain must cover a time interval of the order of several hundred in the exterior region whereas typically a singularity forms on a time of order in the region close to the apparent horizon .thus a slicing which avoids the singularity for several hundred will necessarily develop coordinate singularities .in addition , the inner boundary traced out by an apparent horizon is generically spacelike ( at best lightlike ) .thus if the coordinates defining the numerical grid were to remain constant in time on the boundary ( `` apparent horizon locking '' ) then the coordinate trajectories would have to be superluminal . while horizon locking works in the spherically symmetric case , it is difficult to implement in a cartesian 3-dimensional grid .the alternative is to let the apparent horizon move through the coordinate grid . at the same time , the location of the apparent horizon must be determined by solving an elliptic equation or an equivalent extremum problem .the requirements on the grid are further complicated when the black hole is spinning . on top of all these difficulties , the computational techniques must ensure that the strong fields inside the apparent horizon boundary do not severely leak into the exterior due to finite difference approximations . causal differencing and algorithms based upon a strictly hyperbolic version of the initial value problem been proposed to avoid this .however , no 3-dimensional cauchy code has yet been successful in evolving a schwarzschild black hole .= 3.2 in it is clear that the 3-dimensional coalescence of black holes challenges the limits of computational know - how .we wish to present here a new approach for excising an interior trapped region which might provide enhanced flexibility in tackling this important problem . in this approach , we locate the interior boundary of the cauchy evolution _outside _ the apparent horizon . across this inner cauchy boundarywe match to a characteristic evolution based upon an ingoing family of null hypersurfaces .it is the inner boundary condition for the characteristic evolution which is then given by a null hypersurface version of the apparent horizon boundary condition . 
in the case of two black holes ,the inner boundary would consist of two disjoint topological spheres , chosen so that their inner directed null normals are converging .fig [ fig : tblackholes ] provides a schematic picture of the global strategy .two disjoint characteristic evolutions , based upon ingoing null hypersurfaces , are matched across worldtubes a and b to a cauchy evolution of the shaded region .the interior boundary of each of these characteristic evolutions borders a region containing trapped surfaces .the outer boundary of the cauchy region is another worldtube c , which matches to an exterior characteristic evolution based upon outgoing null hypersurfaces extending to null infinity .this strategy offers several advantages in addition to the possibility of restricting the cauchy evolution to the region outside the black holes .although , finding a marginally trapped surface on the ingoing null hypersurfaces remains an elliptic problem , there is a natural radial coordinate system to facilitate its solution .however it is also possible to locate a trapped surface on the ingoing null hypersurface by a purely algebraic condition .since this trapped surface ( when it exists ) lies in the region invisible to it can be used to replace the trapping horizon as the inner boundary . in either case, moving the black hole through the grid reduces to a 1-dimensional radial motion , leaving the angular grid intact and thus reducing the complexity of the computational masks which excise the inner region .( the angular coordinates can even rotate relative to the cauchy coordinates in order to accommodate spinning black holes ) .the chief problem of this approach is that a caustic may be encountered on the ingoing null hypersurface before entering the trapped region .this is again a problem whose solution lies in choosing the right initial data and also the right geometric shape of the two - surface across which the cauchy and characteristic evolutions are matched .there is a great deal of flexibility here because of the important feature that initial data can be posed on a null hypersurface without constraints .the strategy of matching an interior cauchy evolution to an exterior _ outgoing _ characteristic evolution has been described and implemented to provide a computational cauchy outer boundary condition in various cases , ranging from 1 and 2 dimensional simulations to 3-dimensional simulations that include .a slight modification allows changing an outgoing null formalism ( and its evolution code ) to an ingoing one .this is briefly reviewed in sec .[ sec : null ] . by matching cauchy and characteristic algorithms at both an inner and outer boundary, the ability to include facilitates locating the true event horizon while excising an interior trapped region . in sec .[ sec : trapped ] , we discuss the problem of locating trapped surfaces on an ingoing null hypersurface . in sec . [ sec : spherical ] , we present an implementation of these ideas to the global evolution of spherically symmetric , self gravitating scalar waves propagating in a black hole spacetime . in this case , the performance of the matching approach equals that of previous cauchy - only schemes that have been applied to this problem .we introduce a unified formalism for coordinates based upon either ingoing or outgoing null hypersurfaces .let label these hypersurfaces , ( ) , be labels for the null rays and be a surface area distance . 
in the resulting coordinates , the metric has the bondi - sachs form where , with a unit sphere metric . in the outgoing case ,writing , it is convenient to express the metric variables in the form where .this yields the standard outgoing null coordinate version of the minkowski metric by setting . in the ingoing case ,writing , the only component of the minkowski metric which differs is .this can be effected by the substitution in the outgoing form of the metric .the substitution ( [ eq : betasub ] ) can also be used in the curved space case to switch from outgoing to ingoing coordinates , in which case it is equivalent to an imaginary shift in the integration constant for the einstein equation determining ( see equation ( [ eq : beta ] ) below ) .this leads to the ingoing version of the metric of course , at a given space - time point the values of the coordinates and and the metric quantities , , and are not the same in the ingoing and outgoing cases ; but since we do not consider transformations between outgoing and ingoing coordinates there is no need to introduce any special notation to distinguish between them .this same substitution also provides a simple switch from the outgoing to the ingoing version of einstein equations written in null coordinates .this is consistent because contains a free integration constant which can be chosen to be complex ( as long as it leads to a real metric ) . in order to see how this works consider the outgoing version of the null hypersurface equations : where is the covariant derivative and the curvature scalar of the 2-metric .the -equation ( [ eq : beta ] ) allows the substitution ( [ eq : betasub ] ) to be regarded as a change in integration constant .then carrying out this substitution in equations ( [ eq : beta])-([eq : v ] ) leads to the ingoing version of the null hypersurface equations : this formal substitution also applies to the dynamical equations and provides a simple means to switch between evolution algorithms based upon ingoing and outgoing null cones .as we have already noted , although the same coordinate labels and are used for notational simplicity in both the outgoing metric ( [ eq : umet ] ) and the ingoing metric ( [ eq : vmet ] ) , they represent different fields .an exception occurs for spherical symmetry where the the surface area coordinate can be defined uniquely in terms of the same 2-spheres of symmetry used in both the ingoing and the outgoing coordinates . in this case , the spacelike or timelike character of the hypersurfaces is consistent under the substitution ( [ eq : betasub ] ) because the change involved in going from ( [ eq : v ] ) to ( [ eq : iv ] ) implies that changes sign in switching from outgoing to ingoing coordinates . as a result we obtain a consistent value for , with the ( ) sign holding for outgoing ( ingoing ) coordinates . in the absence of spherical symmetry , the surface areacoordinate used in the bondi - sachs formalism has a gauge ambiguity associated with the changes in ray labels , under which it transforms as a scalar density . 
on any null hypersurface with a preferred compact spacelike slice ,this coordinate freedom in may be fixed by requiring that on .this then determines a unique foliation on either the ingoing or outgoing null hypersurface emanating from .cauchy - characteristic matching can be used to replace artificial boundary conditions which are otherwise necessary at the outer boundary of a finite cauchy domain .the exterior characteristic evolution can then be extended to null infinity to form a globally well - posed initial value problem . in tests of nonlinear 3-dimensional scalar waves ,cauchy - characteristic matching dramatically outperforms the best available artificial boundary condition both in accuracy and computational efficiency .we now describe how this matching strategy can be used at the inner boundary of a cauchy evolution which is joined to an ingoing null evolution . on the initial cauchy hypersurface , denoted by time ,let be a ( topological ) 2-sphere forming the inner boundary of the region being evolved by cauchy evolution .let represent the future evolution of under the flow of the vector field , where is the unit vector field normal to the cauchy hypersurfaces and and are the lapse and shift .given the initial cauchy data on , boundary data must be given on the world tube in order to determine its future evolution .the cauchy hypersurfaces foliate this world tube into spheres .the boundary data is obtained by matching across to an interior null evolution based upon the null hypersurfaces emanating inward from from the spheres .the evolutions are synchronized by setting on .= 3.2 in the combination of initial null data on and initial cauchy data on determine the future evolution in their combined domain of dependence as illustrated in fig [ fig : domain ] . in order to avoiddealing with caustics we terminate the null hypersurfaces on an inner boundary whose location is determined by a trapping condition , as discussed below .this inner boundary plays a role analogous to an apparent horizon inner boundary in a pure cauchy evolution .the region inside is causally disjoint from the domain of dependence which is evolved from the initial data . in the implementation of this strategy in the model spherically symmetric problem of sec .[ sec : spherical ] , we describe in detail how data is passed back and forth across to supply an outer boundary value for the null evolution and an inner boundary value for the cauchy evolution . in the remainder of this section ,we discuss how some of the key underlying issues might be handled in the absence of symmetry . in order to ensure that an inner trapping boundary exists itis necessary to choose initial data which guarantees black hole formation .such data can be obtained from initial cauchy data on for a black hole .however , rather than extending the cauchy hypersurface inward to the apparent horizon it could instead be truncated at an initial matching surface located sufficiently far outside the apparent horizon to avoid computational problems with the cauchy evolution .the initial cauchy data would then be extended into the interior of as null data on until a trapping boundary is reached .two ingredients are essential in order to arrange this .first , must be chosen to be convex , in the sense that its outward null normals uniformly diverge and its inner null normals uniformly converge . 
given any physically reasonable matter source, the focusing theorem then guarantees that the null hypersurface emanating inward from continues to converge until reaching a caustic .second , initial null data must be found which leads to trapped surfaces on before such a caustic is encountered .the existence of trapped surfaces depends upon the divergence of the outward normals to slices of .given the appropriate choice of the existence of such null data is guaranteed by the evolution of the extended cauchy problem .however , it is not necessary to actually carry out such a cauchy evolution to determine this null data .it is the data on which is most critical in determining whether a black hole can form .this can be phrased in terms of the trapping gravity of , introduced below . in the spherically symmetric einstein - klein - gordon model( see sec .[ sec : spherical ] ) , if has sufficient trapping gravity to form a trapped surface on in the absence of scalar waves crossing then a trapped surface also forms in the presence of scalar waves . in the absence of symmetry , this suggests that given appropriate initial cauchy data for horizon formation that the simplest and perhaps physically most relevant initial null data for a black hole would correspond to no gravitational waves crossing .the initial null data on can be posed freely , i.e. it is not subject to any elliptic or algebraic constraints other than continuity requirements with the cauchy data at . in the vacuum case, the absence of gravitational waves in the null data has a natural ( although approximate ) formulation in terms of setting the ingoing null component of the weyl tensor to zero on .key to the success of this approach is the proper trapping behavior , i.e. the convergence of both sets of null vectors normal to a set of slices of located between the caustics and the matching boundary . by construction ,the ingoing null hypersurface , given by , is converging along all rays leaving the initial slice coordinatized by .( here are ingoing null coordinates . ) in order to investigate the trapping of we must determine the divergence of slices of defined by .let be tangent to the generators of , with normalization . then and .let be the outgoing normal to , normalized by .then where its contravariant components are and let be the projection tensor into the tangent space of and define and by raising and lowering indices with .its contravariant components are , , and ; and its covariant components are , , and . the outward divergence of is given by . ( the conventions are chosen so that for a slice of an outgoing null cone in minkowski space . )then a straightforward calculation yields {,a } \nonumber \\ & & -{g^{vr}\over r}(rg_{vr}g^{ab})_{,r}r_{,a}r_{,b } \nonumber \\ & & -g^{vr}r_{,b}(g^{ab}g_{va})_{,r } , \label{eq : divergence}\end{aligned}\ ] ] which is to be evaluated on after the derivatives are taken .the divergence of the generators tangent to is given by .then so that is converging in accord with our construction .a slice of is trapped if . in terms of the ingoing bondi metric variables defined in ( [ eq : vmet ] ) ,equation ( [ eq : divergence ] ) gives {,a } \nonumber \\ & - & r(r^{-1}e^{2\beta}h^{ab})_{,r}r_{,a}r_{,b } + r^2r_{,a}u^a_{,r}. 
\label{eq : diverg}\end{aligned}\ ] ] setting in equation ( [ eq : diverg ] ) gives a 2-dimensional laplace equation for the function which locates a marginally trapped surface .such a surface lies on a trapping horizon and is ( a component of ) the apparent horizon of any cauchy hypersurface which contains it . for a marginal surface to lie on an _outer _ trapping horizon its trapping gravity , defined as must be real and positive , so that .the trapping gravity generalizes the concept of the surface gravity of an event horizon to trapping horizons . in our coordinate system ,thus if the surface is marginally trapped then positive trapping gravity implies that the surface is trapped for small .we use equation ( [ eq : gravity ] ) to generalize the definition of trapping gravity to an arbitrary slice of a inwardly converging null hypersurface. then slices of positive trapping gravity tend toward trapping as decreases. however , in general , there seems to be no purely local criterion which guarantees the existence of a trapped surface before encountering a caustic as . in the special case of a spherically symmetric slice of a spherically symmetric null cone , the laplace equation for a marginally trapped slicereduces to the algebraic condition that , which is satisfied where the hypersurface becomes null .the vacuum schwarzschild metric in ingoing null coordinates ( which are equivalent to ingoing eddington - finkelstein ( ief ) coordinates ) is given by , , and .in this case , determines the location of the event horizon ( which coincides with the apparent horizon ) and the surface gravity reduces to . in the non - vacuumspherically symmetric case , determines the apparent horizon . in the absence of spherical symmetry , vanishes on a slice of the form at points for which , where we will refer to the largest slice of on which as a `` q - boundary '' , relative to .( enters here because it provides the reference for slices ) . such a sliceis everywhere trapped or marginally trapped so that the q - boundary provides a simple algebraic procedure for locating an inner boundary inside an event horizon .a q - boundary , when it exists , will always lie inside ( smaller ) or tangent to the trapping horizon .let describe the q - boundary and let ( ) be the smallest ( largest ) value of on the trapping horizon .then on the trapping horizon at and so that equation ( [ eq : divergence ] ) implies at ( and at .consequently , , with equality holding only when at .there are thus two possible strategies for positioning an inner boundary , both of which ensure that the ignored portion of spacetime can not causally effect the exterior spacetime : ( i ) use the trapping horizon , in which case the 2d elliptic equation ( [ eq : divergence ] ) must be solved on a sphere in order to determine its location ; or ( ii ) use the q - boundary which is determined by a simple algebraic condition . strategy ( i ) is similar to approaches used to locate an apparent horizon on a cauchy hypersurface .the advantage in the null cone case is that there is a natural radial coordinate defined by the coordinate system to reduce the elliptic problem to 2 angular dimensions and to define a mask for moving the excised region through the computational grid .strategy ( ii ) carries no essential computational burden since the quantities and are obtained by means of an inward radial integral as part of the evolution scheme .one merely stops the integration when the inequality defining the q - boundary is satisfied . 
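The following is a minimal sketch of how strategy (ii) could be folded into the inward radial march. It is not the authors' code; the helper signatures are assumptions, and `TrappingIndicator` merely stands in for the algebraic q-condition quoted above, whose explicit form in terms of the hypersurface quantities is not reproduced here.

```cpp
#include <functional>

// State carried along one ingoing null ray during the inward radial march.
// The metric functions integrated by the hypersurface equations are kept
// abstract here; the names are placeholders, not the paper's notation.
struct RaySliceState {
    double beta = 0.0;
    double V = 0.0;
};

// One inward integration step of the hypersurface equations (assumed signature).
using HypersurfaceStep = std::function<RaySliceState(double r, double dr, const RaySliceState&)>;

// Algebraic trapping indicator standing in for the q-condition: assumed
// positive outside the q-boundary, non-positive once the slice is trapped.
using TrappingIndicator = std::function<double(double r, const RaySliceState&)>;

// March inward from the matching world tube at r_wt; stop as soon as the
// q-condition is met and return the excision radius.  The check simply rides
// along with the radial integration that the evolution performs anyway.
double marchToQBoundary(double r_wt, double r_min, double dr, RaySliceState state,
                        const HypersurfaceStep& step, const TrappingIndicator& q) {
    for (double r = r_wt; r - dr > r_min; r -= dr) {
        state = step(r, dr, state);
        if (q(r - dr, state) <= 0.0)
            return r - dr;        // everything inside this radius is excised
    }
    return r_min;                 // no q-boundary reached before r_min
}
```

The point is only that locating the inner boundary costs no extra sweeps; whether the indicator actually becomes non-positive before a caustic forms is the geometric question raised next.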
thus strategy ( ii )is preferable unless either caustics or singularities appear before reaching the q - boundary .it is easy to choose black hole initial data so that the q - boundary and trapping horizon agree at the beginning of the evolution but whether the q - boundary will move too far inward to be useful is a critical question which would depend upon choices of lapse , shift and the geometry of the matching world tube .further research is necessary to decide if strategy ( ii ) is viable on geometric grounds in a highly asymmetric spacetime .= 3.2 in our purpose here is to present a spherically symmetric model demonstrating the feasibility of a stable global algorithm based upon three regions which cover the spacetime exterior to a single black hole .fig [ fig : spacetime ] illustrates 1-dimensional radial geometry .the innermost region is evolved using an ingoing null algorithm whose inner boundary lies at the apparent horizon and whose outer boundary lies outside the black hole at the inner boundary of a region evolved by a cauchy algorithm .data is passed between these regions using a matching procedure which is detailed below .the outer boundary of the cauchy region is handled by matching to an outgoing null evolution .the details of the outgoing null algorithm and of the cauchy evolution are not discussed since they have been presented elsewhere .we will discuss the matching conditions since they differ from those used previously due to different choices of gauge conditions .we will also present the field equations since they are important for understanding the matching procedure .the cauchy evolution is carried out in ief coordinates .the metric in this coordinate system is the set of equations used in the evolution are ',\ ] ] where the over - dot represents partial with respect to , prime denotes partial with respect to and the scalar field variables are defined by the outgoing - null metric is the outgoing hypersurface equations ( [ eq : beta ] ) - ( [ eq : v ] ) reduce to with and , and the outgoing version of the scalar wave equation is an ingoing null evolution algorithm can be obtained from the outgoing algorithm by the procedure described in sec .[ sec : null ] . given the ingoing null metric independent set of equations are the hypersurface equations and the scalar wave equation given data for on and on the world tube , defined by , and integration constants for and on evolution proceeds to null hypersurfaces , defined by , by an inward radial march along the null rays emanating inward from .the initial data consists of a schwarzschild black hole of mass which is well separated from a gaussian pulse of ( mostly ) ingoing scalar radiation .initially there is no scalar field present on the ingoing - null patch , so the initial data there is simply similarly , initially there is no scalar field on the outgoing - null patch , so we have .the values for and are determined by matching to the cauchy data at the world tube ( see below ) and integrating the hypersurface equations ( [ eq : sibeta ] ) and ( [ eq : siv ] ) . 
in the cauchy region , the scalar field is given by } , \\\phi & = & \phi \left [ { 1\over\tilde{r } } - \frac{d\left(\tilde{r}-c\right)^{d-1}}{\sigma^d}\right ] , \\\pi & = & \phi \left [ \frac{2-\tilde{\beta } } { \tilde{r}\left ( 1-\tilde{\beta}\right ) } - \frac{d\left(\tilde{r}-c\right)^{d-1}}{\sigma^d } \right],\end{aligned}\ ] ] where , , , and are scalars representing the pulse s amplitude , center , shape , and width , respectively .the geometric variables are initialized using an iterative procedure , as detailed in .since the ief coordinate system is based on ingoing null cones , it is possible to construct a simple coordinate transformation which maps the ief cauchy metric to the ingoing null metric , namely this results in the following transformations between the two metrics . } \\ \label{eq : vin } v & = & r \frac{2\tilde{\beta } - 1 } { 1- \tilde{\beta } } \\ \label{eq : tbetain } \tilde{\beta } & = & \frac{v+r}{v+2r } \\ \label{eq : tain } \tilde{a } & = & e^\beta \sqrt{v / r + 2}.\end{aligned}\ ] ] the extrinsic curvature components can be found using only the cauchy metric , note that these transformation equations are valid everywhere in the spacetime , not just at the world tube .the matching conditions at the outer world tube are more complicated .both the cauchy and characteristic systems share the same surface area coordinate but there is no universal transformation between their corresponding time coordinates .however , we can construct a coordinate transformation which is valid everywhere on the world tube . to do this , we start with a general , differential coordinate transformation , whose unknown function is to be determined from the matching conditions : to keep the notation simpler , we will write the cauchy metric as inverting ( [ eq : gentr ] ) and substituting , we get on the world tube , we require .thus , we set . for the metric , we get the condition that the -direction be null implies that upon substitution of the ief metric functions , this determines that the matching conditions are then } \\ \label{eq : vout } v & = & r \frac{1 - 2\tilde{\beta } } { 1- \tilde{\beta } } \\ \label{eq : tbetaout } \tilde{\beta } & = & \frac{v - r}{v-2r } \\ \label{eq : taout } \tilde{a } & = & e^\beta \sqrt{2 - v / r}.\end{aligned}\ ] ] as is typical with finite difference calculations , the continuum functions are discretized spatially and placed on grids with points . in our case we have three regions .the inner - null variables are placed on grids with points , the cauchy variables on grids with points , and the outer - null variables on grids with points . on each grid ,the spatial index runs from to , or , respectively , with representing the smallest value .= 3.2 in in addition to the spatial discretization , each function needs two or more time levels . while both the cauchy and null evolution schemes use only two time levels , we keep an extra level in each to facilitate the matching . 
fig [ fig : inmatch ] shows how the finite difference grids match at the inner world tube . the two grids are aligned in both and . this means no interpolations are necessary . the world tube is at on the cauchy grid and on the ingoing null grid . the cauchy variables need boundary values on time level at . the metric values come from the null variables at level , with , using the transformation equations ( [ eq : tbetain ] ) and ( [ eq : tain ] ) . these relations are algebraic and straightforward to implement . boundary values for the extrinsic curvature components come from equations ( [ eq : kttdef ] ) and ( [ eq : krrdef ] ) . is computed algebraically and is computed using second - order , centered - in - time , forward - in - space derivatives in the cauchy grid . the transformation of the scalar field requires transforming derivatives between the two coordinate patches . at the world tube , the relationships among the derivatives are these lead to the following equations for and : the null variables need boundary values at , . the metric values come from the cauchy variables at level , with , using ( [ eq : betain ] ) and ( [ eq : vin ] ) . the evolution equation for at the world tube is . the situation at the outer world tube is shown in fig [ fig : outmatch ] . here , the grids align in space at two values of but in time only at the world tube . the cauchy metric boundary values at come directly from the null variables at using the transformation equations ( [ eq : tbetaout ] ) and ( [ eq : taout ] ) . since the grids do not align in time as they do at the inner world tube , we use a different procedure for the scalar field boundary values . we use the derivative transformation to get an evolution equation for at the world tube . we then set and using their definitions ( [ eq : defphi ] ) and backward second - order derivatives . the boundary values for the null variables must be interpolated in time using the cauchy variables at and . given values for a function at time levels and , we can get its value at the beginning of the null cone using notice that this expression requires a value for at level , something that will not be known until the next time step . we have found it sufficient to extrapolate from the previous time levels using thus , to get values for and we use the matching conditions ( [ eq : betaout ] ) and ( [ eq : vout ] ) along with the above interpolation . for the scalar field , we interpolate the value of from the cauchy grid , using . the apparent horizon is found on the ingoing null cones using the apparent horizon equation , which reduces simply to . when the scalar field passes into the black hole , the horizon grows outward and we simply stop evolving the grid points that are now inside ; a sketch of this bookkeeping is given below .
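A sketch of this excision step, under the assumption that the marginally trapped radius has already been located on the current ingoing null cone (the algebraic horizon condition itself is not reproduced here); this illustrates the bookkeeping only and is not the authors' implementation.

```cpp
#include <cstddef>
#include <vector>

// Radial grid on one ingoing null cone.  Points with index < firstActive lie
// inside the apparent horizon and are no longer evolved.
struct IngoingNullGrid {
    std::vector<double> r;        // radial coordinate of each grid point (increasing outward)
    std::size_t firstActive = 0;  // innermost point that is still being evolved
};

// Once the horizon radius r_ah has been located on the current null cone,
// advance the excision mask so that points now inside the horizon are dropped.
// In this spherically symmetric setup the horizon only grows outward, so the
// mask only ever advances and no point is ever reactivated or re-gridded.
void exciseInsideHorizon(IngoingNullGrid& grid, double r_ah) {
    while (grid.firstActive + 1 < grid.r.size() &&
           grid.r[grid.firstActive] < r_ah) {
        ++grid.firstActive;       // this point is now inside the hole: stop evolving it
    }
}
```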
to evaluate the performance of this approach ,we compare it to a second order accurate , purely cauchy evolution in ief coordinates , as presented in .for the comparison shown here , we place the outer boundary of the cauchy evolution at and evolve to to prevent any outer boundary effects from influencing the comparison .the scalar field pulse is centered at and has a mass of .fig [ fig : converg ] shows the mutual convergence between the cauchy variables in the two codes .this demonstrates that the two programs are solving the same problem , and provides evidence that the matching approach generates the correct spacetime .= 3.2 in the reason that the convergence rate appears to drop to first order when the scalar field hits the horizon is an artifact arising from the motion of the horizon .as mass falls into the hole , there is a critical amount which causes the horizon to move out by one grid point .if at the coarsest resolution the horizon moves a distance then on the next finer grid it only moves by , and so on .thus , at different resolutions , the black holes have slightly different locations .the resulting shift in the location of the inner boundary causes convergence between successively finer numerical solutions to drop from second order to first order .this is an unavoidable diagnostic effect due to the comparison of numerical solutions .we believe that the numerical solution would converge at second order to an exact solution of the physical problem .no exact solutions are known to use for a check of this , but for a weak scalar field the horizon does not move and we do measure second order convergence throughout the evolution .further , we see the same convergence order drop for strong fields in cauchy - only or null - only evolutions , and thus are certain it is not due to the matching procedure .our work shows that the matching approach provides as good a solution to the black hole excision problem in spherical symmetry as previous treatments .it also has some advantages over the pure cauchy approach , namely , it is computationally more efficient ( fewer variables ) , and is much easier to implement .we achieved a stable evolution simply by transforming the outgoing - null evolution scheme to work on ingoing null cones , and implementing it .achieving stability with a purely cauchy scheme in the region of the apparent horizon is trickier , involving much trial and error in choosing difference schemes .it should be noted , however , that implementing the matching may be tricky , especially in higher dimensions .whether it is easier than implementing cauchy differencing near the horizon remains to be seen .also , using the ingoing null formulation , we have achieved the stable evolution of a schwarzschild black hole in 3-dimensions ( the details will be presented elsewhere ) and are working on rotating and moving black holes .long term stable evolution of a 3d black hole has yet to be demonstrated with a cauchy evolution .this work has been supported by nsf phy 9510895 to the university of pittsburgh and by the binary black hole grand challenge alliance , nsf phy / asc 9318152 ( arpa supplemented ) .computer time for this project has been provided by the pittsburgh supercomputing center under grant phy860023p .we thank r. a. isaacson for helpful comments on the manuscript .j. thornburg , class .quantum grav . * 4 * , 1119 ( 1987 ) .e. seidel and w. suen .* 69 * , 1845 ( 1992 ) .m. a. scheel _et al_. , phys .d * 51 * , 4208 ( 1995 ) .m. a. scheel _et al_. 
, phys .d * 51 * , 4236 ( 1995 ) .r. l. marsa and m. w. choptuik , phys .d * 54 * , 4929 ( 1996 ) .j. thornburg , phys .d * 54 * , 4899 ( 1996 ) g. cook and j. w. york , phys .d * 41 * , 1077 ( 1990 ) .k. p. tod , class .quantum grav .* 8 * , l115 ( 1991 ) .t. nakamura , y. kojima and k. oohara , phys . lett . *106a * , 235 ( 1984 ) .a. j. kemball and n. t. bishop , class . quantum grav . * 8 * , 1361 ( 1991 ) .m. f. huq , s. a. klasky , m. w. choptuik and r. a. matzner , ( unpublished ) .p. anninos , k. camarda , j. libson , j. mass , e. seidel and w. suen , `` finding apparent horizons in dynamic 3d numerical spacetimes '' , gr - qc 9609059 .t. w. baumgarte , g. b. cook , m. a. scheel , s. l. shapiro and s. a. teukolsky , phys .d * 54 * , 4849 ( 1996 ) .robert m. wald , general relativity .robert m. wald and vivek iyer , phys .d * 44 * , r3719 ( 1991 ) .s. a. hayward , phys .d * 49 * 6467 ( 1994 ) .g. b. cook , m. w. choptuik , m. r. dubal , s. klasky , r. a. matzner and s. r. oliveira , phys .d , * 47 * , 1471 ( 1993 ) .a. abrahams , a. anderson , y. choquet - bruhat and j. w. york , phys .lett . * 75 * , 3377 ( 1995 ) j. anderson , j. comput . phys . * 75 * , 288 ( 1988 ) . n. t. bishop , in approaches to numerical relativity ( 1992 ) . n. t. bishop , class. quantum grav .* 10 * , 333 ( 1993 ) .clarke and r.a .dinverno , class .quantum grav . * 11 * , 1463 ( 1994 ) .clarke and r.a .dinverno , and j.a .vickers , phys .d * 52 * , 6863 ( 1995 ) .m. r. dubal , a. dinverno and c.j.s .clarke , phys .d * 52 * , 6868 ( 1995 ) r. gmez , p. laguna , p. papadopoulos and j. winicour .d * 54 * , 4719 ( 1996 ) . n. t. bishop_ et al_. , phys .lett . * 76 * , 4303 ( 1996 ) . n. t. bishop _et al_. , submitted to j. comput .p. anninos et .al . , phys .d * 51 * 5562 ( 1995 ) .h. bondi , _ et al_. , proc .a * 269 * , 21 ( 1962 ) .r. sachs , proc .a * 270 * , 103 ( 1962 ) .j. winicour , j. math . phys .1193 ( 1983 ) . j. winicour , j. math . phys . * 25 * , 2506 ( 1984 ) .r. gmez and j. winicour , j. math . phys . * 33 * , 1445 ( 1992 ) .
we present a new method for treating the inner cauchy boundary of a black hole spacetime by matching to a characteristic evolution . we discuss the advantages and disadvantages of such a scheme relative to cauchy - only approaches . a prototype code , for the spherically symmetric collapse of a self - gravitating scalar field , shows that matching performs at least as well as other approaches to handling the inner boundary .
when the first microprocessor was released , its memory operations were relatively fast compared to the corresponding arithmetic operations . since then , microprocessors have been trending strongly in the other direction , with today s load and store operations being several orders of magnitude slower than arithmetic operations . this so - called memory wall has only been exacerbated by the coming of multi - core microprocessors . the added complexity of trying to synchronize memory operations and , more importantly , cache contents between cores can tremendously slow down performance if not handled intelligently . in this paper , we will discuss variations of the standard moesi cache coherence scheme that allow a cache to either update or invalidate during a write request , depending on the situation . + the most common and widely used state - based coherence scheme in multi - core machines is the moesi scheme . it consists of the following five states : + _ ( m)odified _ - the cache block is the sole owner of dirty data . + _ ( o)wned _ - the cache block owns the dirty data , but there are other sharers . a cache with a block in the o state processes requests for that block from other cores . + _ ( e)xclusive _ - the cache is the sole owner of clean data . + _ ( s)hared _ - the cache is one of several possessors of a block , but it is not the owner and its data is clean . + _ ( i)nvalid _ - the cache block does not hold valid data . + in general , most machines will use an invalidate protocol with moesi . that is , when two caches contain blocks with the same tag , a write to one cache causes an invalidation signal to be sent to the other cache . a cache will send out this invalidate signal unless it knows it is the sole owner of the data , as in the m or e state . in the case of the o , s or i state , the cache will generate an invalidate signal that tells all other caches to set their copies of the data to i. + invalidate schemes can be thought of as a reactive approach to cache coherence . a cache will only receive modified data from another cache if it asks for it . for a more proactive approach , one would look to an update scheme . + an update signal is sent with data in the same scenarios where an invalidate scheme would send an invalidate signal , but rather than set their blocks to i , the other cores replace their old data with the block s new value and set it to the s state . both schemes have their advantages and disadvantages . it is good to be proactive and use an update scheme if you know that a block written to by one core will soon be read by another core , but updates can also generate a lot of unnecessary bus traffic . + meanwhile , invalidate schemes will avoid this bus traffic up front , but may still generate it later if a core needs to read a block that has been invalidated . like most things , _ it is possible that a good answer lies somewhere in between _ . below , we propose hybrid schemes that switch between invalidating and updating depending on the cores recent behavior . + a fair amount of research was done on the advantages and disadvantages of updating versus invalidating in the mid-80s . since then , most research has gone towards other aspects of coherence , but many of these papers present a reasonable starting place .
+ a method called the rb protocol was proposed by rudolph and segall for write - through caches . the scheme updates all other cores on the write - through by default , but if two writes occur back to back , the data in all other cores is invalidated instead . this likely saved traffic for write - through machines , but as most machines today have write - back caches , updating on every write would create an excessive amount of extra bus traffic . + karlin , manasse , rudolph and sleator would later propose a scheme called competitive snooping , which relies on amortized analysis to allow updates to occur so long as there is enough allotted cost for them . this cost was related to the amount of time that would have been lost to cache misses if invalidation had occurred instead . while interesting , this scheme would likely also struggle on write - back machines . as we will show later , it is much better to invalidate by default and update when necessary . + while both of the above methods relied mainly on the patterns of their own cores , archibald proposed a scheme that takes into account the actions of other cores . once again , it updated by default , but if any core made three writes to a single location without any other core accessing that location , invalidation would occur instead . we also see a potential benefit of hybrid schemes in various fields such as large - scale systems with shared memory , memory - optimized protocols , and others . our proposed schemes all begin by invalidating first , then allow updates once certain criteria have been met . they also heavily take into account the actions of other cores on the network . for our research , we decided to implement and compare several different schemes for performance : + _ invalidate - only _ - this is the basic scheme that is used by many multicore systems . when a cache writes to a block in the o , s or i state , it sends an invalidate signal to the network . all other cores that receive this signal invalidate their copies of the block . + _ update - only _ - the opposite of the invalidate - only scheme : caches writing to a block in the o , s or i state send an update signal with data to the network . all other cores that receive this signal update their copies with the correct value and set themselves to s. + _ threshold _ - this is the first of our proposed hybrid schemes that we implemented ourselves . in this scheme , each cache block carries with it an associated counter that is used to determine whether updates or invalidates should occur upon a write . it is defined by the following three rules : + 1 . upon entry to the cache from main memory , the counter is initialized to zero . 2 . whenever a read request is seen by a cache and it contains a valid block with a matching address , that block s counter is increased by one . 3 . after a block is successfully written to , its counter decreases by one . when we write to a block , we check the counter value against the threshold . if the counter is above or equal to the threshold , we send an update signal to the network ; otherwise , we send an invalidate signal . the logic behind this scheme is two - fold . when we sense multiple reads to a block , we increase the counter and aim to update rather than invalidate . when we sense more writes , we have a lower counter and invalidate the other copies instead . + _ adapted - moesi _ - this scheme is the same as the invalidate - only scheme except that when writing to a block that is in the o state , we send an update signal to the network rather than an invalidate signal . invalidation still occurs when writing to a block in the s or i state . as we will discuss later , the threshold scheme works best with a threshold of one , and when a block s counter reaches one its state is almost always o , so this scheme attempts to approximate the effects of the threshold scheme without the extra hardware . + _ number of sharers _ - our final scheme is an alternate version of the threshold scheme . rather than keep track of read and write requests to a memory location , whether or not to do an update is determined by the number of sharers any given data block has . if that number is above or equal to a certain minimum number of sharers , an update will occur in place of an invalidate . this is particularly relevant due to its ease of implementation in directory schemes , whose popularity is on the rise in highly parallel machines . a sketch of the write - decision logic for these schemes is given below .
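The following is a minimal sketch of the write-hit decision implied by these descriptions. It is an illustration, not the authors' simulator code; the helper names, the writer's resulting state, and the exact bookkeeping of the counter are assumptions.

```cpp
enum class State { M, O, E, S, I };
enum class WriteAction { None, Invalidate, Update };
enum class Policy { InvalidateOnly, UpdateOnly, Threshold, AdaptedMoesi, NumSharers };

struct CacheBlock {
    State state = State::I;
    int   counter = 0;   // threshold scheme: ++ on observed read requests, -- on own writes
};

// Decide what to broadcast when this cache writes to a block it already holds.
// In the M or E state the cache is the sole owner, so nothing is broadcast.
WriteAction onWriteHit(Policy policy, CacheBlock& blk,
                       int threshold, int sharers, int minSharers) {
    if (blk.state == State::M || blk.state == State::E) {
        blk.state = State::M;
        return WriteAction::None;
    }

    WriteAction act = WriteAction::Invalidate;       // every hybrid starts from invalidation
    switch (policy) {
        case Policy::InvalidateOnly: act = WriteAction::Invalidate; break;
        case Policy::UpdateOnly:     act = WriteAction::Update;     break;
        case Policy::Threshold:      // update once enough reads have been observed
            act = (blk.counter >= threshold) ? WriteAction::Update : WriteAction::Invalidate;
            break;
        case Policy::AdaptedMoesi:   // update only when the block is owned (O)
            act = (blk.state == State::O) ? WriteAction::Update : WriteAction::Invalidate;
            break;
        case Policy::NumSharers:     // update when enough other caches share the block
            act = (sharers >= minSharers) ? WriteAction::Update : WriteAction::Invalidate;
            break;
    }
    if (policy == Policy::Threshold)
        --blk.counter;                               // rule 3: a write lowers the counter
    // Assumed: after an update the writer keeps dirty ownership (O), after an
    // invalidate it becomes the sole dirty owner (M); the paper does not spell this out.
    blk.state = (act == WriteAction::Update) ? State::O : State::M;
    return act;
}
```

In the simulator described next, the same decision would also determine whether the access is tallied as an invalidate or an update in the per-core statistics.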
in order to simulate each of these different schemes , our team developed a simple cache simulating program in c++ . the program takes as input a list of loads and stores , with each string in the list containing a load / store identifier , a core number , and an address . when run with one of these inputs , the program simulates the operation of anywhere from 1 to 16 separate caches under the standard moesi protocol . during the run , it keeps track of the number of reads , writes , read requests , write requests ( invalidates ) and update requests at each core . since our program simulates the scheme functionality independently of timing , we use the total number of read requests , write requests and update requests as our metric for performance . the total number of requests is proportional to the amount of traffic that would exist on the network and therefore is an acceptable means of judging performance . we chose to develop our own simulator mainly for speed of simulation and ease of programming . doing so gave us the freedom to keep track of whatever metrics we liked , while also being able to easily add in various different versions of the coherence scheme . other simulators like multi2sim , which is discussed in the next section , proved to be incredibly difficult to make changes to and were significantly slower due to all of the additional work that goes into a full timing simulation . ultimately , it was decided that timing simulation was less important than functional simulation , since timing varies so greatly from machine to machine . + our simulator can simulate anywhere from 2 to 16 caches at once . the simulator only uses one level of caches ; beyond the first level , all caches are connected to main memory . each cache contains 64 sets with 4 blocks in each set . each dataset that we generated to run on the simulator contains roughly five million loads / stores , so the metric used in this paper will be the total number of read requests , invalidates and updates on all cores per five million instructions . an example of the trace format and the counting loop is sketched below .
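For concreteness, here is a sketch of how such a trace could be read and tallied. The textual layout of each record ('L'/'S', core id, hex address) is an assumption; the text only states that each entry carries a load/store identifier, a core number, and an address, and the coherence engine itself is omitted.

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Per-core tallies; readRequests / invalidates / updates would be filled in by
// the coherence engine (e.g. the onWriteHit sketch above) and form the metric.
struct CoreStats {
    std::uint64_t loads = 0, stores = 0;
    std::uint64_t readRequests = 0, invalidates = 0, updates = 0;
};

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: sim <tracefile>\n"; return 1; }
    std::ifstream trace(argv[1]);
    std::vector<CoreStats> stats(16);             // at most 16 cores

    std::string line;
    while (std::getline(trace, line)) {
        std::istringstream in(line);
        char op;                                   // 'L' = load, 'S' = store (assumed encoding)
        int core;
        std::uint64_t addr;
        if (!(in >> op >> core >> std::hex >> addr) || core < 0 || core >= 16)
            continue;                              // skip malformed records

        if (op == 'L') ++stats[core].loads; else ++stats[core].stores;
        // Here the access would be fed to the per-core MOESI state machine,
        // which decides whether a read request, invalidate or update goes on
        // the bus and increments the corresponding counters.
    }

    std::uint64_t total = 0;
    for (const auto& s : stats)
        total += s.readRequests + s.invalidates + s.updates;
    std::cout << "total bus transactions: " << total << "\n";
    return 0;
}
```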
in order to run our simulator , we needed to generate files containing lists of loads and stores to the various cores . we chose to look at a diverse array of datasets in order to gain the best possible understanding of our various schemes . we also made sure that the generated datasets are reasonably representative of their benchmarks . + each of these benchmarks was run on 2 , 4 , 8 and 16 cores . each scenario was simulated using the _ invalidate - only _ , _ update - only _ , _ threshold _ , _ adapted - moesi _ and _ number of sharers _ schemes . + we certainly wanted to include datasets corresponding to commercial benchmarks . to do this , we took advantage of the multi2sim timing simulator . while it was very difficult to implement the new hybrid schemes in the multi2sim timing simulator , we found that it was easy to adapt the simulator to generate datasets . while running a timing simulation , we had the simulator output to a file the information for five million consecutive loads and stores . we usually waited several tens of millions of instructions for the parallel programs to get warmed up before starting the output . this way , we were able to generate a more representative sample of each benchmark s performance . we generated datasets from the following four benchmarks in this way . + _ bodytrack _ - computer vision algorithm + _ dedup _ - compression of a data stream through local and global means + _ streamcluster _ - solves an online clustering problem + _ swaptions _ - uses monte carlo techniques to price a portfolio of swaptions + finally , we created a handful of pseudo - random datasets meant to represent common multicore scenarios , such as many cores sharing a lock , many cores updating an array based on an element s neighbors , and a server model . these datasets were generated with simple c++ programs . + our _ locks _ dataset established 3 shared locks between any number of cores . each core had a 10% chance of accessing a lock . when doing so , the core would write to the lock to free it if it possessed it . if it did not possess the lock , it would read from the lock and then write to take the lock if no one else possessed it . only blocks containing the locks were shared between cores ; all other data accesses were restricted to each core s own private range of addresses . + our _ arrays _ dataset represents an array that is constantly updated by comparing elements . in this scenario , an array element is read by one core , as are its neighbors above , below , to the right and to the left of it . each core traversed a row of this array , and during each cycle a core was randomly chosen to process the next element in its row .
note that in a real program , this would result in non - deterministic behavior .+ our _ pseudo - server _ dataset represents a very basic server - client model with public and private data where one core is allowed to write to shared data and each other core may only read from it .the server core can write to any block in the whole address range .the address range itself is split into two sections .the first section is public and can be read by any client core .the second section represents private space and is divided between all of the client cores which are only allowed to read from their own space .below we present results and analysis for each scheme using the various benchmarks .note that all graphs only display the total sum of all bus transactions for each scenario .detailed breakdown of how those transactions are split between read requests , invalidates and updates is provided in the appendix .+ first , we will simply look at the base _ invalidate - only _ and _ update - only _ schemes . to limit the amount of data presented in this section , only graphs for 8-core scenariosare presented , although results from scenarios with other numbers of cores will be discussed . additionally , as mentioned above , the numbers presented are bus transactions per five million memory instructions .results for the commercial and artificial workloads are shown below ( * figure 1 * ) .+ the primary point gained from this data is that , for many applications , there is a large gap between the number of transactions that occur with an update - only scheme and an invalidate - only scheme . in many workloads ,the amount of data that is heavily shared between cores is much less than the amount of data that is primarily used by one core but is occasionally accessed by others . in an update - only scheme , we are updating any core that has ever accessed the shared data , when we ideally only want to update those cores that have accessed it recently .the one exception to this pattern is the _ bodytrack _ benchmark .the difference between the two schemes is relatively small , indicating denser sharing between the caches .as we will see later , this makes this benchmark a good candidate to improve performance under a hybrid scheme ( * figure 2 * ) .+ our artificially generated benchmarks present much less variation between the two extremes .the pseudo - server benchmark , due to its unique structure , actually performs better under the update - only scheme . + another interesting note to take away is that the _ arrays _ benchmark maintains a consistent number of transactions regardless of scheme , even though the distribution of updates / invalidates is different . due to the enforced order of the memory transactions ( they happen in order on each core , although the core that may proceed in each iteration is chosen randomly ) , the benchmark never really benefits from any updates .+ in this section , we will analyze the results from running the benchmarks with the _ threshold _ scheme at several different thresholds , as well as under the _ adapted - moesi scheme _ ( * figure 3 * ) .+ for the most part , there is a much smaller gap between the number of transactions that occur with the _ invalidate - only _ scheme and the hybrid scheme .still , for those benchmarks that originally had a large gap , the _ invalidate - only _ scheme outperforms any hybrid scheme . for _ bodytrack _ , however , the hybrid schemes of _ threshold 1 _ and _ adapted - moesi _ actually outperform the other schemes . 
since the benchmark was relatively dense , and because the update and invalidate schemes both performed relatively well , having a smart way to choose whether to update or invalidate ends up improving performance .+ when it came to the value to set the _ threshold _ to , only a value of one really showed any difference from an invalidate - only scheme .the _ threshold _ of 3 was in most cases identical to running _ with invalidate - only_. + due to this result , we believed that it may be worthwhile to implement a scheme that updates when the state of the block being written to was ( o)wned . this logic stemmed from the observation that when the threshold of one was met , the block was most commonly in the o state . in practice , however , this performed not better that a _ threshold _ of one , but at times would perform significantly worse . while blocks with a counter value that met the threshold of one were often in the o state , not all blocks in the o statewould necessarily have a threshold value of one ( * figure 4 * ) . + it is somewhat difficult to tell because of the scale of the graph , but the _ locks _ benchmark performed slightly worse with the _ threshold _ scheme than it did with the _ invalidate - only _ scheme , while the server benchmark did slightly better .the arrays benchmark still did not see any change . + the server benchmark is interesting because it was the only one to do better under the _ update - only _ scheme . in this case , the _ threshold _ and _ adapted - moesi _ schemes did better than always invalidating , but worse than always updating . while these hybrid schemes will not necessarily be the best possible scheme for each benchmark , they may provide a decent compromise between schemes that perform best always invalidating and those that perform best always updating .+ finally , we will address the results gained from running each benchmark under the _ number of sharers _ scheme ( * figure 5 * ) .+ the _ number of sharers _scheme actually performs relatively well in most cases . like the _ threshold _ scheme , it performs better on the _ bodytrack _ benchmark than either always updating or always invalidating .interestingly , the _ swaptions _ benchmark also sees improvement . unlike the _ threshold _ scheme , this scheme has the benefit of always knowing exactly how many other caches share data with a cache that is being written to , and this seems to be reflected as an increase in performance on some benchmarks .+ on other benchmarks , specifically _ streamcluster _ , this scheme seems to perform worse .because of how the updating works , the only way for a core not to become a sharer again is to be evicted from the cache , since it will never be invalidated once updates start happening .if a core does nt access a block regularly but also does nt evict it often enough , the scheme may update when it does nt need to .this effect is reflected in the poor performance of the _ streamcluster _ benchmark ( * figure 6 * ) .+ finally , the results for the _ number of sharers _ scheme on the artificial benchmarks look very similar to the _ threshold scheme _ , except the results are more exaggerated . it does worse on the _ locks _ benchmark but better on the _ server _ benchmark .because of the factors discussed above , this scheme seems to be more of a win - more / lose - more scheme than the _ threshold _ scheme . 
if a benchmark benefitted from the _ threshold _ scheme relative to the _ invalidate - only _ scheme , it benefits more with the number of sharers scheme .if it did worse with _ threshold _ , it does even worse with number of sharers .+ the minimum number of sharers required for updates to occur seemed to be best set around half of the number of cores .if it was too little , such as two sharers in the case of eight cores , then too many updates occurred .when the required number of sharers got above half , the performance usually stagnated at a constant value , since anything that is shared between half of the cores is generally shared between almost all of them .in this final section of the paper , we will discuss what conclusions can be drawn from the above analyzed data , what additional considerations need to be taken into account when judging the results , and suggest further research that can be done in this area .+ there certainly exist examples of benchmarks that perform better with either an _ invalidate - only _ scheme or an _ update - only _ scheme . in some instances , such as the bodytrack benchmark , there exist hybrids that perform better than either _ invalidate - only _ or _ update - only_. in other instances , there are hybrid schemes that will perform better than one of _ invalidate - only _ or _ update - only _ but worse than the other .+ when considering different threshold values for the _ threshold _ scheme , a value of 1 provided the most dramatic result .high threshold values functioned almost identically to _ invalidate - only _ schemes . employing the _ threshold _ scheme with a value of one resulted in the lowest number of transactions on some benchmarks , while providing a reasonable compromise on others .+ the adapted - moesi scheme did not perform as well as expected , as it led to more bus transactions than the threshold scheme in every scenario .+ finally , the _ number of sharers _scheme performed reasonably well , especially when the required number of sharers needed to perform an update was around half the number of cores .however , it varied more from the average than the threshold scheme did .because of this , the threshold scheme seems to be the correct choice for a scheme that will provide the optimal compromise between benchmarks that perform best with more updates and those that perform best with more invalidates .+ our simulator did not take timing into account , as we were only concerned with counting the total number of transactions . since the timing would vary from machine to machine, metrics such as ipc would be less informative than the total number of transactions . in a real machine, the timing of updates and invalidates plays an important role .updating results in longer stores but potentially much faster loads , while invalidation can do the reverse .+ we also did not consider hardware cost when evaluating the various schemes .updating on its own requires more hardware since more complex transactions must be sent over the bus . 
the _ threshold _ scheme requires substantial extra hardware , since each cache block must contain its own counter . the _ adapted - moesi _ scheme requires virtually no extra hardware . the _ number of sharers _ scheme requires some sort of centralized index of the number of sharers on all data blocks in all caches . this can be easily accomplished by the directory in any cache coherence protocol that uses one . + while the _ adapted - moesi _ scheme was meant to emulate a _ threshold _ scheme with a threshold value of one using less hardware , it ultimately failed in that endeavor . still , there is certainly a way to get the same effect with significantly less hardware . + while we chose not to concern ourselves with the timing effects of the various schemes , they would certainly be interesting to address . + finally , since our simulator used a snoopy protocol combined with _ moesi _ , it would be interesting to see how each of these schemes interacts with a directory - based protocol . it would be especially interesting for the _ number of sharers _ scheme , as that scheme would be so easy to implement in a directory - based protocol .
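to make the decision rules compared in this paper concrete , the following is a minimal sketch of how each scheme could choose between updating and invalidating on a processor write to a shared block . the class , field names and the exact event counted by the per - block counter are our own assumptions for illustration , not the simulator's actual interfaces .

```python
from dataclasses import dataclass

@dataclass
class CacheBlock:
    state: str = "M"       # MOESI state of the writing core's copy
    counter: int = 0       # assumed per-block counter used by the threshold scheme
    num_sharers: int = 0   # number of other caches currently holding a copy

def choose_write_action(block, scheme, threshold=1, min_sharers=4):
    """Return "update" or "invalidate" for a processor write to a shared block."""
    if scheme == "invalidate-only":
        return "invalidate"
    if scheme == "update-only":
        return "update"
    if scheme == "threshold":
        # update only once the per-block counter has reached the threshold
        return "update" if block.counter >= threshold else "invalidate"
    if scheme == "adapted-moesi":
        # cheaper proxy: update only when the writer holds the block Owned (O)
        return "update" if block.state == "O" else "invalidate"
    if scheme == "number-of-sharers":
        # update only when enough other caches currently share the block
        return "update" if block.num_sharers >= min_sharers else "invalidate"
    raise ValueError(f"unknown scheme: {scheme}")

# example: a block in the O state with one counted event and five sharers
blk = CacheBlock(state="O", counter=1, num_sharers=5)
for s in ("invalidate-only", "threshold", "adapted-moesi", "number-of-sharers"):
    print(s, "->", choose_write_action(blk, s))
```

the sketch also mirrors the hardware - cost discussion above : the _ threshold _ scheme needs the per - block counter , the _ adapted - moesi _ scheme needs only the existing moesi state , and the _ number of sharers _ scheme needs a global count of sharers .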
_ in general when considering cache coherence , write - invalidate schemes are the default . these schemes invalidate all other copies of a data block during a write . in this paper we propose several hybrid schemes that will switch between updating and invalidating on processor writes at runtime , depending on program conditions . we created our own cache simulator on which we could implement our schemes , and generated data sets both from commercial benchmarks and through artificial methods to run on the simulator . we analyze the results of running the benchmarks with the various schemes , and suggest further research that can be done in this area . _
with each domain and material parameter , an infinite number of tensors , called the generalized polarization tensors ( gpts ) , is associated .the concept of gpts was introduced in .the gpts contain significant information on the shape of the domain .it occurs in several interesting contexts , in particular , in low - frequency scattering , asymptotic models of dilute composites ( see and ) , in invisibility cloaking in the quasi - static regime and in potential theory related to certain questions arising in hydrodynamics .another important use of this concept is for imaging diametrically small conductivity inclusions from boundary or multistatic response measurements .multistatic response measurements are obtained using arrays of point source transmitters and receivers .this measurement configuration gives the so - called multistatic response matrix ( msr ) , which measures the change in potential field due to a conductivity inclusion .in fact , the gpts are the basic building blocks for the asymptotic expansions of the perturbations of the msr matrix due to the presence of small conductivity inclusions inside a conductor .they can be reconstructed from the multi - static response ( msr ) matrix by solving a linear system .the system has the remarkable property that low order generalized polarization tensors are not affected by the error caused by the instability of higher orders in the presence of measurement noise . based on the asymptotic expansion , efficient and direct ( non - iterative ) algorithms to determine the location and some geometric features of the inclusions were proposed .we refer to and the references therein for recent developments of this theory .an efficient numerical code for computing the gpts is described in . in , we have analyzed the stability and the resolving order of gpt in a circular full angle of view setting with coincident sources and receivers , and developed efficient algorithms for target identification from a dictionary by matching the contracted gpts ( cgpts ) .the cgpts are particular linear combinations of the gpts ( called harmonic combinations ) and were first introduced in . 
as a consequence , explicit relations between the cgpt of scaled , rotated and translated objects have been established in , which strongly suggest that the gpts can also be used for tracking the location and the orientation of a mobile object . one should have in mind that , in real applications , one would like to localize the target and reconstruct its orientation directly from the msr data without reconstructing the gpts . in this paper we apply an extended kalman filter to track both the location and the orientation of a mobile target directly from msr measurements . the extended kalman filter ( ekf ) is a generalization of the kalman filter ( kf ) to nonlinear dynamical systems . it is robust with respect to noise and computationally inexpensive , and is therefore well suited for real - time applications such as tracking . target tracking is an important task in sonar and radar imaging , security technologies , autonomous vehicles , robotics , and bio - robotics , see , for instance , . an example in bio - robotics is the weakly electric fish which has the faculty to probe an exterior target with its electric dipole and multiple sensors distributed on the skin . the fish usually swims around the target to acquire information . the use of kalman - type filtering for target tracking is quite standard , see , for instance , . however , to the best of our knowledge , this is the first time that tracking of the orientation of a target has been provided . moreover , we analyze the ill - posed character of both the location and orientation tracking in the case of limited - view data . in practice , it is quite realistic to have the sources / receivers cover only a limited angle of view . in this case , the reconstruction of the gpts becomes more ill - posed than in the full - view case . it is the aim of this paper to provide a fast algorithm for tracking both the location and the orientation of a mobile target , and to precisely analyze the stability of the inverse problem in the limited - view setting . the paper is organized as follows . in section [ sec : cond_prob_cgpt2 ] we recall the conductivity problem and the linear system relating the cgpts with the msr data , and provide a stability result in the full angle of view setting . in section [ sec : tracking_of_mobile_target ] we present a gpt - based location and orientation tracking algorithm using an extended kalman filter and show the numerical results in the full - view setting . in section [ sec : limited_angle_view ] we analyze the stability of the cgpt reconstruction in the limited - view setting and also test the performance of the tracking algorithm . the paper ends with a few concluding remarks . an appendix gives a brief review of the extended kalman filter . we consider the two - dimensional conductivity problem . let be a bounded -domain of characteristic size of order and centered at the origin . then is an inclusion of characteristic size of order and centered at . we denote by its conductivity , and its contrast . in the circular setting , coincident sources / receivers are evenly spaced on the circle of radius and centered at the origin between the angular range ] the location and the orientation of a target .
where is the rotation by . let be the cgpt of , and be the cgpt of . then the equation becomes : where is the truncation error , and the measurement noise at time . the objective of _ tracking _ is to estimate the target s location and orientation from the msr data stream . we emphasize that this information is contained in the first two orders of cgpts as shown in the previous paper . precisely , let , and , then the following relations ( when well defined ) exist between the cgpt of and : hence when the linear system is solvable , one can estimate by solving and accumulating and . however , such an algorithm will propagate the error over time , since the noise present in the data is not properly taken into account here . in the following we develop a cgpt - based tracking algorithm using the extended kalman filter , which handles the noise correctly . we recall first the definition of complex cgpt , with which a simple relation between and can be established . let . the complex cgpts are defined by where denotes the hermitian transpose . therefore , we have where the matrix of dimension over the complex field is defined by it is worth mentioning that and are complex matrices of dimension . to recover the cgpt from the complex cgpts , we simply use the relations where are the real and imaginary part of a complex number , respectively . for two targets satisfying , the following relationships between their complex cgpt hold : [ eq : ccgpt_tsr ] where is an upper triangular matrix with the -th entry given by linear operator : now one can find explicitly a linear operator ( the underlying scalar field is ) which depends only on , such that , and the equation becomes for doing so , we set , where is given by ( [ eq : def_u_mat ] ) . then , a straightforward computation using , , and shows that where are defined in ( [ eq : msr_cgpt_linsys_coeffwise ] ) . therefore , we get the operator : the ekf is a generalization of the kf to nonlinear dynamical systems . unlike the kf , which is an optimal estimator for linear systems with gaussian noise , the ekf is no longer optimal , but it remains robust with respect to noise and computationally inexpensive , and is therefore well suited for real - time applications such as tracking . we establish here the _ system state _ and the _ observation _ equations which are fundamental to the ekf , and refer readers to appendix [ sec : extend - kalm - filt ] for its algorithmic details . we assume that the position of the target is subject to an external driving force that has the form of a white noise .
in other words , the velocity of the target is given in terms of a two - dimensional brownian motion and its position is given in terms of the integral of this brownian motion : the orientation of the target is subject to random fluctuations and its angular velocity is given in terms of an independent white noise , so that the orientation is given in terms of a one - dimensional brownian motion : we observe the target at discrete times , , with time step . we denote , , and . they obey the recursive relations since the increments of the brownian motions are independent of each other , the vectors given by are independent and identically distributed with the multivariate normal distribution with mean zero and covariance matrix given by the evolution of the state vector takes the form the observation made at time is the msr matrix given by , where the system state is implicitly included in the operator . we suppose that the truncation error is small compared to the measurement noise so that it can be dropped in , and that the gaussian white noises at different times are mutually independent . we emphasize that the velocity vector of the target does not contribute to , which can be seen from . to highlight the dependence upon , we introduce a function which is nonlinear in , and takes as a parameter , such that then together with we get the following _ system state _ and _ observation _ equations : [ eq : sys_state_obs_eq ] note that is linear , so in order to apply ekf on , we only need to linearize , or in other words , to calculate the partial derivatives of with respect to . clearly , the operator contains only the information concerning the acquisition system and does not depend on . so by , we have while the calculation for is straightforward using . we have where the derivatives are found by the chain rule : and . the -th entry of the matrix is given by the derivatives and are calculated in the same way .
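before turning to the numerical experiments , the following is a minimal sketch of one standard discretization of this motion model : velocity is a brownian motion , position is its integral , and orientation is an independent brownian motion . the state ordering , parameter names and step size are our own choices ; the paper's covariance matrix is the one given ( in stripped form ) above .

```python
import numpy as np

def simulate_state(T=10.0, dt=0.1, sigma_v=1.0, sigma_theta=0.2, x0=None, rng=None):
    """Simulate the motion model described above. State (our ordering):
    [px, py, vx, vy, theta]; velocity is a 2-D Brownian motion, position its
    integral, and orientation an independent 1-D Brownian motion."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)

    F = np.eye(5)
    F[0, 2] = F[1, 3] = dt                      # position integrates velocity

    # exact one-step covariance of (integrated BM, BM) per axis,
    # plus the independent orientation random walk
    q = sigma_v**2 * np.array([[dt**3 / 3, dt**2 / 2],
                               [dt**2 / 2, dt]])
    Q = np.zeros((5, 5))
    Q[np.ix_([0, 2], [0, 2])] = q               # x-axis position/velocity block
    Q[np.ix_([1, 3], [1, 3])] = q               # y-axis position/velocity block
    Q[4, 4] = sigma_theta**2 * dt               # orientation random walk

    x = np.zeros(5) if x0 is None else np.asarray(x0, dtype=float)
    path = [x.copy()]
    L = np.linalg.cholesky(Q)
    for _ in range(n):
        x = F @ x + L @ rng.standard_normal(5)  # one step of the recursion
        path.append(x.copy())
    return F, Q, np.array(path)
```

the 2 x 2 blocks of the process - noise covariance are the exact one - step covariance of an integrated brownian motion and its derivative , which is why the position - velocity cross term dt^2 / 2 appears .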
here we show the performance of the ekf in a full angle of view setting with the shape a as target , which has diameter 10 and is centered at the origin . the path is simulated according to the model during a period of 10 seconds ( ) , with parameters , and the initial state . we make sure that the target is always included inside the measurement circle on which sources / receivers are fixed , see fig . [ fig : target_path ] . the data stream is generated by first calculating the msr matrix corresponding to each then adding a white noise . suppose that the cgpt of is correctly determined ( for instance , by identifying the target in a dictionary ) . then we use the first two orders of cgpt of in , and take as initial guess of for the ekf . we add and of noise to the data , and show the results of tracking in fig . [ fig : tracking_big_small_target ] ( a ) ( c ) and ( e ) . we see that the ekf can find the true system state , despite the poor initial guess , and the tracking precision decays as the measurement noise level gets higher . the same experiment with a small target ( of the same shape ) of diameter 1 is repeated in fig . [ fig : tracking_big_small_target ] ( b ) ( d ) and ( f ) , where the tracking of position remains correct , while that of orientation fails when the noise level is high . such a result is in accordance with physical intuition . in fact , the position of a small target can be easily localized in the far field , while its orientation can be correctly determined only in the near field . ( figure caption : while the initial guess given to the ekf is . the crosses indicate the position of sources / receivers , while the circle and the triangle indicate the starting and the final position of the target , respectively ; in blue is the true trajectory and in red the estimated one . ) in this section we study the stability of the cgpt reconstruction and tracking problems in the case , always under the condition that , _ i.e. _ , the number of sources / receivers is two times larger than the highest order of cgpts to be reconstructed . unlike in the full - view case , here is no longer orthogonal in general , nonetheless one can still establish the svd of similarly to proposition [ prop : full - angle - view - svd ] . consider the concentric and limited - view setting with , and suppose that is of maximal rank . let be the -th largest eigenvalue of the matrix and let be the associated orthonormal eigenvector . then the -th singular value of the operator is , with the associated left singular vector the matrix . in particular , the condition number of the operator is with being the condition number of the matrix . we first note that for any matrices we have : taking , and , we get where is the kronecker s symbol , which implies that is the -th singular value of . we denote by the maximal and the minimal singular values of a matrix , then and the condition number of is therefore bounded by . we denote by the vector space of functions of the form with , and the subspace of such that . functions of can be written as with . observe that taking discrete samples of at is nothing but applying the matrix to a coefficient vector . we have the following result .
for any ,the matrix is of maximal rank .multiplying in by , and using the fact that , we have where for , and for .the vectors are linearly independent since they are the first rows of a vandermonde matrix .therefore , for implies that for all , which means that is of maximal rank .consequently , for arbitrary range , a sufficient condition to uniquely determine the cgpts of order up to is to have sources / receivers .we denote by the dirichlet kernel of order : we state without proof the following well known result about .[ lem : vk_innerprod_sum ] the functions is an orthogonal basis of . for any ,the following identity holds : where denotes the complex conjugate .in particular , we have for [ lem : vk_interpolation ] given a set of different points , there exist interpolation kernels for , such that : when the number of points is odd , it is well known that takes the form when is even , by a result established in it is easy to see that in both cases belongs to .now we can find explicitly a left inverse for .[ prop : expl - left - inverse ] under the same condition as in lemma [ lem : vk_interpolation ] , we denote by the interpolation kernel and define the matrix as then . in particular , if is odd , the matrix can be calculated as given , and the associated function defined by , we have for . using and, we find that and therefore , .observe that , and all belong to , so when is odd , we easily deduce using . in general , the left inverse in is not the pseudo - inverse of , and by definition , we have if is symmetric . if is the orthogonal projection of onto , _i.e. _ , then , .therefore , is the pseudo - inverse of if and only if the interpolation kernel satisfies : proposition [ prop : expl - left - inverse ] can be used in the noiseless limited - view case to reconstruct the cgpt matrix from the msr measurements . in fact , from ( [ eq : op_l ] ) it immediately follows that this shows that in the noiseless case , the limited - view aspect has no effect on the reconstruction of the gpts , and consequently on the location and orientation tracking . in the presence of noise , the effect , as will be shown in the next subsection , is dramatic .a small amount of measurement noise significantly changes the performance of our algorithm unless the arrays of receivers and transmitters offer a directional diversity , see fig .[ fig : target_path_lim_aov ] .we undertake a numerical study to illustrate the ill - posedness of the linear system in the case of limited - view data .[ fig : svd_ctc_dctcd_gamma ] shows the distribution of eigenvalues of the matrix and at different values of with and . in fig .[ fig : svd_ctc_dctcd_cond ] , we calculate the condition number of and ( which is equal to that of by ) for different orders . from these results , we see clearly the effect of the limited - view aspect .first , the tail of tiny eigenvalues in fig .[ fig : svd_ctc_dctcd_gamma].(a ) suggests that the matrix is numerically singular , despite the fact that is of maximal rank . secondly ,both and rapidly become extremely ill - conditioned as increases , so the maximum resolving order of cgpts is very limited . 
furthermore , this limit is intrinsic to the angle of view and can not be improved by increasing the number of sources / receivers , see fig . [ fig : svd_ctc_dctcd_cond ] ( c ) and ( d ) . the analysis above suggests that the least - squares problem is not adapted to the cgpt reconstruction in a limited - view setting . actually , the truncation error or the measurement noise will be amplified by the tiny singular values of , and yields an extremely unstable reconstruction of high - order cgpts , _ e.g. _ , . instead , in order to reconstruct the cgpts from the msr data , we use tikhonov regularization and propose to solve with a small regularization constant . it is well known that the effect of the regularization term is to truncate those singular values of smaller than , which consequently stabilizes the solution . the optimal choice of depends on the noise level , and here we determine it from the range ] . the optimal estimator ( in the least - squares sense ) of the system state given the observations is the conditional expectation . since the joint vector is gaussian , the conditional expectation is a linear combination of , which can be written in terms of and only . the purpose of the kf is to calculate from and . we summarize the algorithm in the following . * initialization : * , p_{0|0} = cov(x_0) . * prediction : * * update : * to apply the kf algorithm the covariance matrices must be known . consider now a nonlinear dynamical system : where are the same as in the kf , while the functions are nonlinear and differentiable . nothing can be said in general about the conditional distribution due to the nonlinearity . the ekf calculates an approximation of the conditional expectation by an appropriate linearization of the state transition and observation models , which makes the general scheme of the kf still applicable . however , the resulting algorithm is no longer optimal in the least - squares sense due to the approximation . let , be the partial derivatives of ( with respect to the system state and the process noise ) evaluated at , and let be the partial derivatives of ( with respect to the system state and the observation noise ) evaluated at . the ekf algorithm is summarized below .
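since the equations of the algorithm did not survive in this version of the text , the following is a minimal , generic sketch of one ekf iteration consistent with the description above ( the notation , function names and the use of python are ours , not the paper's ) .

```python
import numpy as np

def ekf_step(x_est, P, y, f, h, F_jac, H_jac, Q, R):
    """One generic extended-Kalman-filter iteration (a sketch, not the paper's
    exact notation): predict with the state model f, then correct with the
    observation y through the measurement model h, linearized via F_jac, H_jac."""
    # prediction: propagate the estimate and its covariance through the model
    x_pred = f(x_est)
    F = F_jac(x_est)
    P_pred = F @ P @ F.T + Q

    # update: correct the prediction with the new observation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```

in the tracking problem above , the state transition is the linear map given earlier ( so its jacobian is that matrix itself ) , while the observation map sends the state to the msr matrix and its jacobian is assembled from the chain - rule derivatives described in the text .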
in this paper we apply an extended kalman filter to track both the location and the orientation of a mobile target from multistatic response measurements . we also analyze the effect of the limited - view aspect on the stability and the efficiency of our tracking approach . our algorithm is based on the use of the generalized polarization tensors , which can be reconstructed from the multistatic response measurements by solving a linear system . the system has the remarkable property that low order generalized polarization tensors are not affected by the error caused by the instability of higher orders in the presence of measurement noise . mathematics subject classification ( msc2000 ) : 35r30 , 35b30 keywords : generalized polarization tensors , target tracking , extended kalman filter , position and orientation tracking , limited - view data , instability
here are some common claims . ( 1 ) the founders of infinitesimal calculus were working in a vacuum caused by an absence of a satisfactory number system . ( 2 ) the incoherence of infinitesimals was effectively criticized by berkeley as so much hazy metaphysical mysticism . ( 3 ) d'alembert s visionary anticipation of the rigorisation of analysis was ahead of his time . ( 4 ) cauchy took first steps toward replacing infinitesimals by rigor and epsilontics , in particular giving a modern definition of continuity . ( 5 ) cauchy s false 1821 version of the `` sum theorem '' was corrected by him in 1853 by adding the hypothesis of uniform convergence . ( 6 ) weierstrass finally rigorized analysis and thereby eliminated infinitesimals from mathematics . ( 7 ) dedekind discovered the `` essence of continuity '' , which is captured by his cuts . ( 8 ) one of the spectacular successes of rigorous analysis was the mathematical justification of dirac s `` delta functions '' . ( 9 ) robinson developed a new theory of infinitesimals in the 1960s , but his approach has little to do with historical infinitesimals . ( 10 ) lakatos pursued an ideological agenda of kuhnian relativism and fallibilism , inapplicable to mathematics . each of the above ten claims is in error , as we argue in the next ten sections ( cf . crowe ) . the historical fact of the dominance of the view of analysis as being based on the real numbers to the exclusion of infinitesimals is beyond dispute . one could ask oneself why this historical fact is so ; some authors have criticized mathematicians for adhering to an approach that others consider less appropriate . in the present text , we will _ not _ be concerned with either of these issues . rather , we will be concerned with another issue , namely , why it is that traditional historical scholarship has been inadequate in indicating that alternative views have been around . we will also be concerned with documenting instances of tendentious interpretation and the attendant distortion in traditional evaluation of key figures from mathematical history . felix klein clearly acknowledged the existence of a parallel , infinitesimal approach to foundations . having outlined the developments in real analysis associated with weierstrass and his followers , klein pointed out in 1908 that the scientific mathematics of today is built upon the series of developments which we have been outlining . but an essentially different conception of infinitesimal calculus has been running parallel with this [ conception ] through the centuries ( klein ) .
such a different conception , according to klein , `` harks back to old metaphysical speculations concerning the structure of the continuum according to which this was made up of [ ] infinitely small parts '' ( ibid . ) . the pair of parallel conceptions of analysis is illustrated in figure [ 31 ] ( the b - continuum , drawn as a line above , is sent by the map labelled `` st '' to the a - continuum , drawn as a line below ) . a comprehensive re - evaluation of the history of infinitesimal calculus and analysis was initiated by katz & katz in , , and . briefly , a philosophical disposition characterized by a preference for a sparse ontology has dominated the historiography of mathematics for the past 140 years , resulting in a systematic distortion in the interpretation of the historical development of mathematics from stevin ( see ) to cauchy ( see and borovik & katz ) and beyond . taken to its logical conclusion , such distortion can assume comical proportions . thus , newton s eventual successor in the lucasian chair of mathematics , stephen hawking , comments that cauchy was particularly concerned to banish infinitesimals ( hawking ) , yet _ on the very same page _ 639 , hawking quotes cauchy s _ infinitesimal _ definition of continuity in the following terms : the function remains continuous with respect to between the given bounds , if , between these bounds , an infinitely small increment in the variable always produces an infinitely small increment in the function itself ( ibid ) . did cauchy banish infinitesimals , or did he exploit them to define a seminal new notion of continuity ? similarly , historian j.
gray lists _ continuity _ among concepts cauchy allegedly defined using careful , if not altogether unambiguous , * limiting * arguments ( gray ) [ emphasis added authors ] , whereas in reality _ limits _ appear in cauchy s definition only in the sense of the _ endpoints _ of the domain of definition ( see , for a more detailed discussion ) . commenting on ` whig ' re - writing of mathematical history , related comments by grattan - guinness may be found in the main text at footnote [ muddle ] . ] p. mancosu observed that the literature on infinity is replete with such ` whig ' history . praise and blame are passed depending on whether or not an author might have anticipated cantor and naturally this leads to a completely anachronistic reading of many of the medieval and later contributions . the anachronistic idea of the history of analysis as a relentless march toward the yawning heights of epsilontics is , similarly , our target in the present text . we outline some of the main distortions , focusing on the philosophical bias which led to them . the outline serves as a program for further investigation . were the founders of infinitesimal calculus working in a vacuum caused by an absence of a satisfactory number system ? a century before newton and leibniz , simon stevin ( stevinus ) sought to break with an ancient greek heritage of working exclusively with relations among natural numbers , ; thus , was not a number , nor were the fractions : ratios were relations , not numbers . consequently , stevin had to spend time arguing that the unit ( ) _ was _ a number . ] and developed an approach capable of representing both `` discrete '' number ( ) composed of units ( ) and continuous magnitude ( ) of geometric origin . `` instances of discrete quantities are number and speech ; of continuous , lines , surfaces , solids , and besides these , time and place '' .
] according to van der waerden , stevin s general notion of a real number was accepted , tacitly or explicitly , by all later scientists . d. fearnley - sander wrote that the modern concept of real number [ ... ] was essentially achieved by simon stevin , around 1600 , and was thoroughly assimilated into mathematics in the following two centuries . d. fowler points out that stevin [ ... ] was a thorough - going arithmetizer : he published , in 1585 , the first popularization of decimal fractions in the west [ ... ] ; in 1594 , he described an algorithm for finding the decimal expansion of the root of any polynomial , the same algorithm we find later in cauchy s proof of the intermediate value theorem .
fowler emphasizes that important foundational work was yet to be done by dedekind , who proved that the field operations and other arithmetic operations extend from to ( see section [ dede ] ) . meanwhile , stevin s decimals stimulated the emergence of power series ( see below ) and other developments . we will discuss stevin s contribution to the intermediate value theorem in subsection [ ivt ] below . in 1585 , stevin defined decimals in _ la disme _ as follows : decimal numbers are a kind of arithmetic based on the idea of the progression of tens , making use of the arabic numerals in which any number may be written and by which all computations that are met in business may be performed by integers alone without the aid of fraction . ( _ la disme _ , on decimal fractions , tr . by v. sanford , in smith ) . by `` numbers met in business '' stevin meant finite decimals , but see stevin s comments on extending the process _ ad infinitum _ in main text at footnote [ ad2 ] .
] and by `` computations '' he meant addition , subtraction , multiplication , division and extraction of square roots on finite decimals . stevin argued that numbers , like the more familiar continuous magnitudes , can be divided indefinitely , and used a water metaphor to illustrate such an analogy : as to a continuous body of water corresponds a continuous wetness , so to a continuous magnitude corresponds a continuous number . likewise , as the continuous body of water is subject to the same division and separation as the water , so the continuous number is subject to the same division and separation as its magnitude , in such a way that these two quantities can not be distinguished by continuity and discontinuity ( stevin , 1585 , see ; quoted in a. malet ) . stevin argued for equal rights in his system for rational and irrational numbers . he was critical of the complications in euclid ( * ? ? ? * book x ) , and was able to show that adopting the arithmetic as a way of dealing with those theorems made many of them easy to understand and easy to prove .
in his _ arithmetique _ , stevin proposed to represent all numbers systematically in decimal notation . ehrlich notes that stevin s viewpoint soon led to , and was implicit in , the analytic geometry of rené descartes ( 1596 - 1650 ) , and was made explicit by john wallis ( 1616 - 1703 ) and isaac newton ( 1643 - 1727 ) in their arithmetizations thereof ( ehrlich ) . stevin s text _ la disme _ on decimal notation was translated into english in 1608 by robert norton ( cf . cajori ) . the translation contains the first occurrence of the word `` decimal '' in english ; the word would be employed by newton in a crucial passage 63 years later . see footnote [ new2 ] . ] wallis recognized the importance of unending decimal expressions in the following terms : now though the proportion can not be accurately expressed in absolute numbers : yet by continued approximation it may ; so as to approach nearer to it , than any difference assignable ( wallis s algebra , p. 317 , cited in crossley ) . similarly , newton exploits a power series expansion to calculate detailed decimal approximations to for ( newton ) . ] by the time of newton s _ annus mirabilis _ , the idea of unending decimal representation was well established . historian v. katz calls attention to `` newton s analogy of power series to infinite decimal expansions of numbers '' ( v. katz ) .
newton expressed such an analogy in the following passage : since the operations of computing in numbers and with variables are closely similar i am amazed that it has occurred to no one ( if you except n.
mercator with his quadrature of the hyperbola ) to fit the doctrine recently established for decimal numbers in similar fashion to variables , especially since the way is then open to more striking consequences .for since this doctrine in species has the same relationship to algebra that the doctrine in decimal numbers has to common arithmetic , its operations of addition , subtraction , multiplication , division and root - extraction may easily be learnt from the latter s provided the reader be skilled in each , both arithmetic and algebra , and appreciate the correspondence between decimal numbers and algebraic terms continued to infinity and just as the advantage of decimals consists in this , that when all fractions and roots have been reduced to them they take on in a certain measure the nature of integers , so it is the advantage of infinite variable - sequences that classes of more complicated terms ( such as fractions whose denominators are complex quantities , the roots of complex quantities and the roots of affected equations ) may be reduced to the class of simple ones : that is , to infinite series of fractions having simple numerators and denominators and without the all but insuperable encumbrances which beset the others ( newton 1671 , ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
in this remarkable passage dating from 1671 , newton explicitly names infinite decimals as the source of inspiration for the new idea of infinite series . see footnote [ new1 ] . ] the passage shows that newton had an adequate number system for doing calculus and real analysis two centuries before the triumvirate . in 1742 , john marsh first used an abbreviated notation for repeating decimals ( marsh , cf . cajori ) . euler exploits unending decimals in his _ elements of algebra _ in 1765 , as when he sums an infinite series and concludes this could be compared with peirce s remarks . over a century ago , charles sanders peirce wrote with reference to : `` although the difference , being infinitesimal , is less than any number [ one ] can express [ , ] the difference exists all the same , and sometimes takes a quite easily intelligible form '' ( peirce ; see also s. levy ) . levy mentions peirce s proposal of an alternative notation for `` equality up to an infinitesimal '' . the notation peirce proposes is the usual equality sign with a dot over it , like this : `` '' . see also main text at footnote [ per2 ] . ] that ( euler ) . in the context of his decimal representation , stevin developed numerical methods for finding roots of polynomial equations . he described an algorithm equivalent to finding zeros of polynomials ( see crossley ) . this occurs in a corollary to problem 77 ( more precisely , lxxvii ) in ( stevin ) . here stevin describes such an argument in the context of finding a root of the cubic equation ( which he expresses as a proportion to conform to the contemporary custom ) . here the whimsical coefficient seems to have been chosen to emphasize the fact that the method is completely general ; stevin notes furthermore that numerous additional examples can be given . a textual discussion of the method may be found in struik . centuries later , cauchy would prove the intermediate value theorem ( ivt ) for a continuous function on an interval by a divide - and - conquer algorithm . cauchy subdivided into equal subintervals , and recursively picked a subinterval where the values of have opposite signs at the endpoints ( cauchy ( * ? ? ? * note iii , p. 462 ) ) . to elaborate on stevin s argument following ( * ? ? ? * , p. 475 - 476 ) , note that stevin similarly described a divide - and - conquer algorithm . stevin subdivides the interval into _ ten _ equal parts , resulting in a gain of a new decimal digit of the solution at every iteration of his procedure . stevin explicitly speaks of continuing the iteration _ ad infinitum _ : cf . footnote [ ad1 ] . ] et procedant ainsi infiniment , lon approche infiniment plus pres au requis [ that is , and proceeding thus infinitely , one approaches infinitely closer to the required value ] ( stevin ( * ? ? ? * , last 3 lines ) ) .
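purely as an illustration , the following is a minimal sketch of such a digit - by - digit divide - and - conquer search ; the sign - change test , the interval and the cube - root example are ours , whereas stevin works through a specific cubic written as a proportion .

```python
def stevin_root(f, lo, hi, digits=8):
    """Digit-by-digit divide-and-conquer search: at each pass the current
    interval is cut into ten equal parts and the sub-interval on which f
    changes sign is kept, so each pass fixes one more decimal digit."""
    assert f(lo) * f(hi) <= 0, "f must change sign on [lo, hi]"
    for _ in range(digits):
        step = (hi - lo) / 10.0
        for k in range(10):
            a, b = lo + k * step, lo + (k + 1) * step
            if f(a) * f(b) <= 0:   # sign change: a root lies in [a, b]
                lo, hi = a, b
                break
    return lo

# illustrative example (ours, not Stevin's): the cube root of 2 on [1, 2]
print(stevin_root(lambda x: x**3 - 2.0, 1.0, 2.0))   # ~1.25992104
```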
who needs _ existence _ proofs for the real numbers , when stevin gives a procedure seeking to produce an explicit decimal representation of the solution ? the ivt for polynomials would resurface in lagrange before being generalized by cauchy to the newly introduced class of continuous functions . see further in footnote [ lag2 ] . ] one frequently hears sentiments to the effect that the pre - triumvirate mathematicians did not and could not have provided rigorous proofs , since the real number system was not even built yet . such an attitude is anachronistic . it overemphasizes the significance of the triumvirate project in an inappropriate context . stevin is concerned with constructing an algorithm , whereas cantor is concerned with developing a foundational framework based upon the existence of the _ totality _ of the real numbers , as well as their power sets , etc . the latter project involves a number of non - constructive ingredients , including the axiom of infinity and the law of excluded middle . but none of it is needed for stevin s procedure , because he is not seeking to re - define `` number '' in terms of alternative ( supposedly less troublesome ) mathematical objects . why do many historians and mathematicians of today emphasize the great triumvirate s approach to proofs of the existence of real numbers , at the expense , and almost to the exclusion , of stevin s approach ? can this be understood in the context of the ideological foundational battles raging at the end of the 19th and the beginning of the 20th century ? these questions merit further scrutiny . was berkeley s criticism of infinitesimals as so much hazy metaphysical mysticism either effective or coherent ? d. sherry dissects berkeley s criticism of infinitesimal calculus into its metaphysical and logical components , as detailed below . the _ logical criticism _ is the one about the disappearing . here we have a ghost : , but also a departed quantity : ( in other words eating your cake : and having it , too : ) . thus , berkeley s _ logical criticism _ of the calculus is that the evanescent increment is first assumed to be non - zero to set up an algebraic expression , and then _ treated as zero _ in _ discarding _ the terms that contained that increment when the increment is said to vanish . the argument in _ the analyst _ relies upon the determination of the tangent to a parabola due to apollonius of perga ( * ? ? ? * book i , theorem 33 ) ( see andersen ) . ] the fact is that berkeley s logical criticism is easily answered within a conceptual framework available to the founders of the calculus . namely , the _ rebuttal _ of the logical criticism is that the evanescent increment is not _ treated as zero _ , but , rather , merely _ discarded _ through an application of leibniz s _ law of homogeneity _ ( see leibniz and bos ) , which would stipulate , for instance , that here we chose the sign which was already used by leibniz where we would use an equality sign today ( see mcclenon ) . the law is closely related to the earlier notion of _ adequality _ found in fermat . adequality is the relation of being infinitely close , or being equal `` up to '' an infinitesimal . fermat exploited adequality when he sought a maximum of an expression by evaluating the expression at and at , and forming the difference .
in modern notationthis would appear as ( note that fermat did not use the function notation ) .huygens already interpreted the `` '' occurring in this expression in the method of adequality , as an infinitesimal .is an `` infinitely small quantity '' ( see huygens ) .see also andr weil , . ] ultimately , the heuristic concepts of adequality ( fermat ) and law of homogeneity ( leibniz ) were implemented in terms of the _ standard part _ function ( see figure [ tamar ] ) . in a passage typical of post - weierstrassian scholarship , kleiner and movshovitz - hadar note that_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ fermat s method was severely criticized by some of his contemporaries .they objected to his introduction and subsequent suppression of the mysterious .dividing by meant regarding it as not zero . discarding implied treating it as zero .this is inadmissible , * they rightly claimed . * in a somewhat different context , but * with equal justification * , berkeley in the 18th century would refer to such s as ` the ghosts of departed quantities ' [ emphasis added authors ] ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ however , fermat scholar p. 
strmholm already pointed out in 1968 that in fermat s main method of adequality , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ there was never [ ] any question of the variation being put equal to zero .the words fermat used to express the process of suppressing terms containing was _ `` elido '' _ , _ `` deleo '' _ , and _`` expungo '' _ , and in french _`` iefface '' _ and _ `` ite''_. we can hardly believe that a sane man wishing to express his meaning and searching for words , would constantly hit upon such tortuous ways of imparting the simple fact that the terms vanished because was zero ( strmholm ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ thus , fermat planted the seeds of the answer to the logical criticism of the infinitesimal , a century before george berkeley ever lifted up his pen to write _ the analyst_. the existence of two separate binary relations , one `` equality '' and the other , `` equality up to an infinitesimal '' , was already known to leibniz ( see e. 
knobloch and katz and sherry for more details ) .berkeley s _ metaphysical criticism _ targets the absence of any empirical referent for `` infinitesimal '' .the metaphysical criticism has its roots in empiricist dogma that every meaningful expression or symbol must correspond to an empirical entity .ironically , berkeley accepts many expressions lacking an empirical referent , such as ` force ' , ` number ' , or ` grace ' , on the grounds that they have pragmatic value .it is a glaring inconsistency on berkeley s part not to have accepted infinitesimal " on the same grounds ( see sherry ) .it is even more ironic that over the centuries , mathematicians were mainly unhappy with the logical aspect , but their criticisms mainly targeted what they perceived as the metaphysical / mystical aspect . thus , cantor attacked infinitesimals as being abominations " ( see ehrlich ) ; r. courant described them as mystical " , `` hazy fog '' , etc .e. t. bell went as far as describing infinitesimals as having been * _ slain _ , * _ scalped _ , and * _ disposed of _ by the cleric of cloyne ( see figure [ uccello ] ) .generally speaking , one does not _ slay _ either scientific concepts or scientific entities . bellicose language of this sort is a sign of commitments that are both emotional and ideological .were dalembert s mathematical anticipations ahead or behind his time ?one aspect of dalembert s vision as expressed in his article for the _ encyclopedie _ on _ irrational _ numbers , is that irrational numbers _ do not exist_. here dalembert uses terms such as surds " which had already been rejected by simon stevin two centuries earlier ( see section [ stev ] ) . from this point of view, dalembert is not a pioneer of the rigorisation of analysis in the 19th century , but on the contrary , represents a throwback to the 16th century .dalembert s attitude toward irrational numbers sheds light on the errors in his proof of the fundamental theorem of algebra ; indeed , in the anemic number system envisioned by dalembert , numerous polynomials surely fail to have roots .dalembert used the term charlatanerie " to describe infinitesimals in his article _ diffrentiel _ .anti - infinitesimal vitriol is what endears him to triumvirate scholars , for his allegedly visionary remarks concerning the centrality of the limit concept fall short of what is already found in newton.see pourciau who argues that newton possessed a clear kinetic conception of limit ( sinaceur and barany argue that cauchy s notion of limit was kinetic , rather than a precursor of a weierstrassian notion of limit ) .pourciau cites newton s lucid statement to the effect that `` those ultimate ratios are not actually ratios of ultimate quantities , but limits which they can approach so closely that their difference is less than any given quantity '' ( newton , 1946 and 1999 ) . the same point , and the same passage from newton , appeared a century earlier in russell ( * ? ? ?* item 316 , p. 338 - 339 ) . 
]he never went beyond a kinetic notion of limit , so as to obtain the epsilontic version popularized in the 1870s .dalembert was particularly bothered by the characterisation of infinitesimals he found in the geometers " .he does not explain who these geometers are , but the characterisation he is objecting to can be already found in leibniz and newton .namely , the geometers used to describe infinitesimals as what remains _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ not before you pass to the limit , nor after , but at the very moment of passage to the limit " ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in the context of a modern theory of infinitesimals such as the hyperreals ( see appendix [ rival2 ] ) , one could explain the matter in the following terms .we decompose the procedure of taking the limit of , say , a sequence into two stages : 1 . evaluating the sequence at an infinite hypernaturalsee main text at footnote [ hyper2 ] . ] , to obtain the hyperreal ; and 2 .taking its standard part .thus is adequal to , or ( see formula ) , so that we have . in this sense , the infinitesimals exist at the moment " of taking the limit , namely _ between _ the stages ( i ) and ( ii ) .felscher describes dalembert as `` one of the mathematicians representing the heroic age of calculus '' .felscher buttresses his claim by a lengthy quotation concerning the definition of the limit concept , from the article _ limite _ from the _ encyclopdie ou dictionnaire raisonn des sciences , des arts et des mtiers _ : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ on dit quune grandeur est la limite dune autre grandeur , quand la seconde peut approcher de la premire plus prs que dune grandeur donne , si petite quon la puisse supposer , sans pourtant que la grandeur qui approche , puisse jamais surpasser la grandeur do nt elle approche ; ensorte que la diffrence dune pareille quantit sa limite est absolument inassignable ( encyclopdie , volume 9 from 1765 , page 542 ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ one recognizes here a kinetic definition of limit already exploited by newton . on pourciaus analysis . ] even if we do attribute visionary status to this passage as many historians seem to , the fact remains that dalembert did nt write it .felscher overlooked the fact that the article _ limite _ was written by two authors . in reality , the above passage defining the concept of limit " ( as well as the two propositions on limits ) did not originate with dalembert , but rather with the encyclopedist jean - baptiste de la chapelle .de la chapelle was recruited by dalembert to write 270 articles for the _ encyclopdie_. the section of the article containing these items is signed ( e ) ( at bottom of first column of page 542 ) , known to be de la chapelle s `` signature '' in the _encyclopedie_. felscher had already committed a similar error of attributing de la chapelle s work to dalembert , in his 1979 work .note that robinson similarly misattributes this passage to dalembert .did cauchy take first steps toward replacing infinitesimals by rigor , and did he give an epsilontic definition of continuity ?a claim to the effect that cauchy was a fore - runner of the epsilontisation of analysis is routinely recycled in history books and textbooks . to put such claims in historical perspective, it may be useful to recall grattan - guinness s articulation of a historical reconstruction project in the name of h. freudenthal , in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ it is mere feedback - style ahistory to read cauchy ( and contemporaries such as bernard bolzano ) as if they had read weierstrass already . on the contrary ,their own pre - weierstrassian muddlesgrattan - guinness s term `` muddle '' refers to an irreducible ambiguity of historical mathematics such as cauchy s sum theorem of 1821 .see footnote [ gg1 ] for a related comment by mancosu . ]need historical reconstruction . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ it is often claimed that cauchy gave an allegedly `` modern '' , meaning epsilon - delta , definition of continuity .such claims are anachronistic . in reality ,cauchy s definition is an infinitesimal one .his definition of the continuity of takes the following form : an infinitesimal -increment gives rise to an infinitesimal -increment ( see ) . the widespead misconception that cauchy gave an epsilontic definition of continuityis analyzed in detail in .cauchy s primary notion is that of a _ variable quantity_. the meaning he attached to the latter term in his _cours danalyse _ in 1821 is generally agreed to have been a sequence of discrete values .he defines infinitesimals in terms of variable quantities , by specifying that a variable quantity tending to zero _ becomes _ an infinitesimal .he similarly defines limits in terms of variable quantities in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ lorsque les valeurs successivement attribues une mme variable sapproche indfiniment dune valeur fixe de manire finir par en diffrer aussi peu que lon voudra cette dernire est appele limite de toutes les autres ( cauchy ) . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cauchy s definition is patently a kinetic , not an epsilontic , definition of limit , similar to newton s .. 
] while epsilontic - style formulations do occasionally appear in cauchy ( though without bolzano s proper attention to the order of the quantifiers ) , they are not presented as definitions but rather as consequences , and at any rate never appear in the context of the property of the continuity of functions .thus , grabiner s famous essay _ who gave you the epsilon ?cauchy and the origins of rigorous calculus _ cites a form of an epsilon - delta quantifier technique used by cauchy in a proof : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let be two very small numbers ; the first is chosen so that for all numerical [ i.e. , absolute ] values of less than , and for any value of included [ in the interval of definition ] , the ratio will always be greater than and less than ( grabiner citing cauchy ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ grabiner describes such an epsilon - delta technique as `` the algebra of inequalities '' .the thrust of her argument is that cauchy sought to establish a foundation for analysis based on the algebra of inequalities .is this borne out by the evidence she presents ?let us consider grabiner s evidence : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cauchy gave essentially the 
modern definition of continuous function , saying that the function is continuous on a given interval if for each in that interval the numerical [ i.e. , absolute ] value of the difference decreases indefinitely with ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ is this `` essentially the modern definition of continuity '' , as grabiner claims ? hardly so .cauchy s definition is a blend of a kinetic ( rather than epsilontic ) and an infinitesimal approach .grabiner fails to mention three essential items : * cauchy prefaces the definition she cited , by describing his as an _ infinitely small increment _ : + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ si , en partant dune valeur de on attribue la variable un _ accroissement infiniment petit _ ( cauchy ) [ emphasis added the authors ] ; _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * cauchy follows this definition by another , _ italicized _ , definition , where both and the difference are described as being _infinitesimal _ :if the former is infinitesimal , then so is the latter ; * infinitesimals provide a method for calculating limits , whereas epsilon , delta methods require the answer in advance ( see madison and stroyan ) .the advantage of infinitesimal definitions , such as those found in cauchy , is their covariant nature ( cf .lutz et al . ) . 
whereas in the epsilontic approach one needs to work one s way backwards from the value of the limit , in the infinitesimal approach one can proceed from the original expression , simplify it , and eventually arrive at the value of the limit .this indicates that the two approaches work in opposite directions .the infinitesimal calculation goes with the natural flow of our reasoning , whereas the epsilontic one goes in the opposite direction .notice , for example , that delta corresponds to the independent variable even though the value of delta depends on our choice of epsilon , which corresponds to the _ dependent _ variable .the infinitesimal calculation , in contrast , begins with the the independent variable and computes from that the value of the dependent variable .did cauchy exploit epsilon - delta techniques in building foundations for analysis ?let us examine grabiner s evidence .she claims that , in cauchy s proof of the intermediate value theorem ( ivt ) , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we have the algebra of inequalities providing a technique which cauchy transformed from a tool of approximation to a tool of rigor ( grabiner ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ yet grabiner s treatment of cauchy s proof of the ivt in page offers no evidence that cauchy employed an epsilon - delta technique.grabiner further attributes to lagrange the polynomial case of cauchy s divide - and - conquer argument in the proof of the ivt , whereas we saw in subsection [ ivt ] that stevin did this two centuries before lagrange ( see main text at footnote [ lag1 ] ) . ]an examination of cauchy s proof in his _ note iii_ reveals that , on the contrary , it is closely tied to cauchy s infinitesimal definition of continuity .thus , cauchy constructs an increasing sequence and a decreasing sequence , denoted respectively and ( cauchy ) with a common limit , such that has opposite sign at the corresponding pairs of points .cauchy concludes that the values of at the respective sequences converge to a common limit .being both nonpositive and nonnegative , the common limit must vanish .koetsier speculates that cauchy may have hit upon his concept of continuity by analyzing his proof of the ivt ( perhaps in the case of polynomials ) .the evidence is compelling : even though cauchy does not mention infinitesimals in his _ note iii _ , and are recognizably variable quantities differing by an infinitesimal from the constant quantity . by cauchy s definition of continuity , and must similarly differ from by an infinitesimal .contrary to grabiner s claim , a close examination of cauchy s proof of the ivt reveals no trace of epsilon - delta .following koetsier s hypothesis , it is reasonable to place it , rather , in the infinitesimal strand of the development of analysis , rather than the epsilontic strand . 
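The divide-and-conquer argument described above is straightforward to state algorithmically. The sketch below is our own illustration: with ten subdivisions per pass it behaves like Stevin's procedure, producing one new decimal digit of the root at each pass, while any other fixed number of subdivisions gives the variant used in Cauchy's note iii. Since the coefficients of Stevin's cubic are not legible in this copy, an illustrative cubic of our own choosing is used instead.

\begin{verbatim}
# Divide-and-conquer root search on [a, b], assuming f(a) and f(b) have
# opposite signs.  subdivisions=10 mimics Stevin (one new decimal digit per
# pass); other values give the Cauchy-style variant.
def ivt_root(f, a, b, subdivisions=10, passes=6):
    for _ in range(passes):
        step = (b - a) / subdivisions
        for k in range(subdivisions):
            left, right = a + k*step, a + (k + 1)*step
            if f(left) == 0:
                return left
            if f(left) * f(right) < 0:   # sign change: keep this subinterval
                a, b = left, right
                break
    return (a + b) / 2   # interval width is now (b - a)/subdivisions**passes

# illustrative cubic, not Stevin's: the root of t**3 - 2*t - 5 = 0 near 2.09
print(ivt_root(lambda t: t**3 - 2*t - 5, 2.0, 3.0))
\end{verbatim}

Each pass shrinks the bracketing interval by the chosen factor, and the two endpoints play the roles of Cauchy's increasing and decreasing sequences at which the function takes opposite signs.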
after constructing the lower and upper sequences ,cauchy does write that the values of the latter `` finiront par differer de ces premiers valeurs aussi peu que lon voudra '' .that may sound a little bit epsilon / delta .meanwhile , leibniz uses language similar to cauchy s : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ whenever it is said that a certain infinite series of numbers has a sum , i am of the opinion that all that is being said is that any finite series with the same rule has a sum , and that the error always diminishes as the series increases , so that it becomes as small as we would like [ ut fiat tam parvus quam velimus " ] ( leibniz ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cauchy used epsilontics if and only if leibniz did , over a century before him .the exaggerated claims of a cauchy provenance for epsilontics found in triumvirate literature go hand - in - hand with neglect of his visionary role in the development of infinitesimals at the end of the 19th century . in 1902 , e. borel elaborated on paul du bois - reymond s theory of rates of growth , and outlined a general `` theory of increase '' of functions , as a way of implementing an infinitesimal - enriched continuum . in this text ,borel specifically traced the lineage of such ideas to a 1829 text of cauchy s on the rates of growth of functions ( see fisher for details ) . in 1966 , a. 
robinson pointed out that _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ following cauchy s idea that an infinitely small or infinitely large quantity is associated with the behavior of a function , as tends to a finite value or to infinity , du bois - reymond produced an elaborate theory of orders of magnitude for the asymptotic behavior of functions stolz tried to develop also a theory of arithmetical operations for such entities ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ robinson traces the chain of influences further , in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ it seems likely that skolem s idea to represent infinitely large natural numbers by number - theoretic functions which tend to infinity ( skolem [ 1934]),the reference is to skolem s 1934 work .the evolution of modern infinitesimals is traced in more detail in table [ heuristic ] and in borovik et al . . ] also is related to the earlier ideas of cauchy and du bois - reymond . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ one of cantor s _ btes noires _ was the neo - kantian philosopher hermann cohen ( 18421918 ) ( see also mormann ) , whose fascination with infinitesimals elicited fierce criticism by both cantor and b. russell . yet at the end of the day , a. fraenkel ( of zermelo fraenkel fame ) wrote : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ my former student abraham robinson had succeeded in saving the honour of infinitesimals - although in quite a different way than cohen and his school had imagined ( fraenkel ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _was cauchy s 1821 sum theorem " false , and what did he add in 1853 ? as discussed in section [ did ] , cauchy s definition of continuity is explicitly stated in terms of infinitesimals : `` an infinitesimal -increment gives rise to an infinitesimal -increment '' .boyer declares that cauchy s 1821 definition is to be interpreted " in the framework of the usual limits " , at a point of an archimedean continuum .traditional historians typically follow boyer s lead .but when it comes to cauchy s 1853 modification of the hypothesis of the sum theorem " in ( cauchy ) , some historians declare that it is to be interpreted as adding the hypothesis of uniform convergence " ( see e.g. 
, ltzen ) .are boyer and ltzen compatible ?note that an epsilontic definition ( in the context of an archimedean continuum ) of the uniform convergence of a sequence to necessarily involves a _ pair _ of variables ( where ranges through the domain of and ranges through ) , rather than a single variable : we need a formula of the sort ( prefaced by the usual clause `` '' ) .now cauchy s 1853 modification of the hypothesis is stated in terms of a _ single _variable , rather than a pair of variables .namely , cauchy specified that the condition of convergence should hold always " .the meaning of the term `` always '' becomes apparent only in the course of the proof , when cauchy gives an explicit example of evaluating at an infinitesimal generated by the sequence .thus the term `` always '' involves adding extra values of at which the convergence condition must be satisfied ( see brting and katz & katz ) .cauchy s approach is based on two assumptions which can be stated in modern terminology as follows : 1 .when you have a closed expression for a function , then its values at `` variable quantities '' ( such as ) are calculated by using the same closed expression as at real values ; 2 . to evaluate a function at a variable quantity generated by a sequence , one evaluates term - by - term .cauchy s strengthened condition amounts to requiring the error to become infinitesimal : which in the case of given by translates into the requirement that tends to zero .an epsilontic interpretation ( in the context of an archimedean continuum ) of cauchy s 1821 and 1853 texts is untenable , as it necessitates a _ pair _ of variables as in , where cauchy only used a _single one _ ,namely , but one drawn from a `` thicker '' continuum including infinitesimals .namely , cauchy draws the points to be evaluated at from an infinitesimal - enriched continuum .we will refer to an infinitesimal - enriched continuum as a _ bernoullian continuum _ , or a `` b - continuum '' for short , in an allusion to johann bernouilli.bernoulli was the first to use infinitesimals in a systematic fashion as a foundational concept , leibniz himself having employed both a syncategorematic and a true infinitesimal approach .the pair of approaches in leibniz are discussed by bos ( * ? ? ?* item 4.2 , p. 55 ) ; see also appendix [ rival2 ] , footnote [ ber2 ] . ] a null sequence such as becomes " an infinitesimal , in cauchy s terminology .evaluating at points of a bernoullian continuum makes it possible to express uniform convergence in terms of a single variable rather than a pair .once one acknowledges that there are _ two _ variables in the traditional epsilontic definition of uniform continuity and uniform convergence , it becomes untenable to argue that the condition cauchy introduced was epsilontic uniform convergence . a historian who describes cauchy s condition as uniform convergence , must acknowledge that the definition involves an infinitesimal - enriched continuum , at variance with boyer s interpretation . in appendix [ rival2 ] ,subsection [ a1 ] we present a parallel distinction between continuity and uniform continuity , where a similar distinction in terms of the number of variables is made .did weierstrass succeed in eliminating infinitesimals from mathematics ? 
\[\begin{tabular}{|p{.4in}||p{1.45in}|p{2.7in}|}
\hline
years & author & contribution \\ \hline\hline
1821 & cauchy & infinitesimal definition of continuity \\ \hline
1827 & cauchy & infinitesimal delta function \\ \hline
1829 & cauchy & defined ``order of infinitesimal'' in terms of rates of growth of functions \\ \hline
1852 & bj\"orling & dealt with convergence at points ``indefinitely close'' to the limit \\ \hline
1853 & cauchy & clarified hypothesis of ``sum theorem'' by requiring convergence at infinitesimal points \\ \hline
1870--1900 & stolz, du~bois-reymond, veronese, and others & infinitesimal-enriched number systems defined in terms of rates of growth of functions \\ \hline
1902 & emile borel & elaboration of du bois-reymond's system \\ \hline
1910 & g.~h.~hardy & provided a firm foundation for du bois-reymond's orders of infinity \\ \hline
1926 & artin--schreier & theory of real closed fields \\ \hline
1930 & tarski & existence of ultrafilters \\ \hline
1934 & skolem & nonstandard model of arithmetic \\ \hline
1948 & edwin hewitt & ultrapower construction of hyperreals \\ \hline
1955 & {\l}o\'s & proved {\l}o\'s's theorem foreshadowing the transfer principle \\ \hline
1961, 1966 & abraham robinson & non-standard analysis \\ \hline
1977 & edward nelson & internal set theory \\ \hline
\end{tabular}\]
the persistent idea that infinitesimals have been ``eliminated'' by the great triumvirate of cantor, dedekind, and weierstrass was soundly refuted by ehrlich. ehrlich documents a rich and uninterrupted chain of work on non-archimedean systems, or what we would call a bernoullian continuum. some key developments in this chain are listed in table [heuristic] (see for more details). the elimination claim can only be understood as an oversimplification by weierstrass's followers, who wish to view the history of analysis as a triumphant march toward the radiant future of weierstrassian epsilontics. such a view of history is rejected by h.
putnam who comments on the introduction of the methods of the differential and integral calculus by newton and leibniz in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if the epsilon - delta methods had not been discovered , then infinitesimals would have been postulated entities ( just as ` imaginary ' numbers were for a long time ) .indeed , this approach to the calculus enlarging the real number system is just as consistent as the standard approach , as we know today from the work of abraham robinson [ ] if the calculus had not been ` justified ' weierstrass style , it would have been ` justified ' anyway ( putnam ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in short , there is a cognitive bias inherent in a postulation in an inevitable outcome in the evolution of a scientific discipline .the study of cognitive bias has its historical roots in francis bacon s proposed classification of what he called _( a latin plural ) of several kinds .he described these as things which obstructed the path of correct scientific reasoning .of particular interest to us are his _ idola fori _ ( `` illusions of the marketplace '' : due to confusions in the use of language and taking some words in science to have meaning different from their common usage ) ; and _ idola theatri _ ( `` illusions of the theater '' : the following of academic dogma and not asking questions about the world ) , see bacon . completeness , continuity , continuum , dedekind `` gaps '' : these are terms whose common meaning is frequently conflated with their technical meaning . 
in our experience , explaining infinitesimal - enriched extensions of the reals to an epsilontically trained mathematician typically elicits a puzzled reaction on her part : `` but are nt the real numbers already _ complete _ by virtue of having filled in all the _ gaps _ already ? '' this question presupposes an academic dogma , viz ., that there is a single coherent conception of the continuum , and it is a complete , archimedean ordered field .this dogma has recently been challenged .numerous possible conceptions of the continuum range from s. feferman s predicative conception of the continuum , to f. william lawvere s and j. bell s conception in terms of an intuitionistic topos , , . to illustrate the variety of possible conceptions of the continuum , note that traditionally , mathematicians have considered at least two different types of continua .these are archimedean continua , or a - continua for short , and infinitesimal - enriched ( bernoulli ) continua , or b - continua for short . neither an a - continuum nor a b - continuum corresponds to a unique mathematical structure ( see table [ continuity ] ) .thus , we have two distinct implementations of an a - continuum : * the real numbers ( or stevin numbers),see section [ stev ] . ] in the context of classical logic ( incorporating the law of excluded middle ) ; * brouwer s continuum built from `` free - choice sequences '' , in the context of intuitionistic logic . { | p{.9 in } || p{1 in } | p{.9 in } | p{.5 in } | p{.75 in } | p{.5 in } | } \hline & archimedean & bernoullian \\classical & stevin 's \mbox{continuum} & robinson 's continuum \\\hline intuitionistic & brouwer 's continuum & lawvere 's continuum \\ \hline \end{tabular}\ ] ] john l. bell describes a distinction within the class of an infinitesimal - enriched b - continuum , in the following terms .historically , there were two main approaches to such an enriched continuum , one by leibniz , and one by b. nieuwentijt , who favored nilpotent ( nilsquare ) infinitesimals whose squares are zero .mancosu s discussion of nieuwentijt in is the only one to date to provide a contextual understanding of nieuwentijt s thought ( see also mancosu and vailati ) .j. bell notes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ leibnizian infinitesimals ( differentials ) are realized in [ a. robinson s ] nonstandard analysis , more precisely , the hewitt - o - robinson continuum ; see appendix [ rival2 ] . ] and nilsquare infinitesimals in [ lawvere s ] smooth infinitesimal analysis ( bell ) . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the latter theory relies on intuitionistic logic.lawveres infinitesimals rely on a category - theoretic framework grounded in intuitionistic logic ( see j. bell ) . ]an implementation of an infinitesimal - enriched continuum was developed by p. giordano ( see ) , combining elements of both a classical and an intuitionistic continuum .the weirstrassian continuum is but a single member of a diverse family of concepts .did dedekind discover the essence of continuity " , and is such essence captured by his cuts ? in dedekind s approach , the `` essence '' of continuity amounts to the numerical assertion that two non - rational numbers should be equal if and only if they define the same dedekind cut on the rationals .dedekind formulated his `` essence of continuity '' in the context of the geometric line in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if all points of the straight line fall into two classes such that every point of the first class lies to the left of every point of the second class , then there exists one and only one point which produces this division of all points into two classes , this severing of the straight line into two portions ( dedekind ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we will refer to this essence as the _ geometric essence of continuity _ .. thus , defining infinitesimals as elements of violating the traditional archimedean property , we can start with the cut of into positive and negative elements , and then modify this cut by assigning all infinitesimals to , say , the negative side .such a cut does not correspond to an element of . 
]dedekind goes on to comment on the epistemological status of this statement of the essence of continuity : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ ]i may say that i am glad if every one finds the above principle so obvious and so in harmony with his own ideas of a line ; for i am utterly unable to adduce any proof of its correctness , nor has any one the power ( ibid . ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ having enriched the domain of rationals by adding irrationals , numbers defined completely by cuts not produced by a rational , dedekind observes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ from now on , therefore , to every definite cut there corresponds a definite rational or irrational number , and we regard two numbers as different or unequal always and only when they correspond to essentially different cuts ( dedekind ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ by now , dedekind postulates that two numbers should be equal `` always and only '' [ i.e. , if and only if ] they define identical cuts on the _ rational _ numbers .thus , dedekind postulates that there should be `` one and only one '' number which produces such a division .dedekind clearly presents this as an exact arithmetic analogue to the geometric essence of continuity .we will refer to such a postulate as the _ rational essence of continuity_. dedekind s postulation of rational essence is not accompanied by epistemological worries as was his geometric essence a few pages earlier . 
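To spell out, in advance, the suppression of infinitesimals taken up in the next paragraph, the following elementary observation may help; it is stated in our own notation and assumes an ordered field extending the rationals that contains a positive infinitesimal $\epsilon$.

\[
L=\{q\in\mathbb{Q}:q<\sqrt{2}\},\qquad U=\{q\in\mathbb{Q}:q>\sqrt{2}\}.
\]
A rational $q$ with $\sqrt{2}<q<\sqrt{2}+\epsilon$ would make $q-\sqrt{2}$ a positive rational smaller than $\epsilon$, which is impossible, since every positive rational exceeds every infinitesimal. Hence $\sqrt{2}$ and $\sqrt{2}+\epsilon$ determine the same cut $(L,U)$ of the rationals, and the postulate of ``one and only one'' number per cut forces their identification.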
yet, rational essence entails a suppression of infinitesimals : a pair of distinct non - rational numbers can define the same dedekind cut on , such as and with infinitesimal ; but one can not have such a pair if one postulates the rational essence of continuity , as dedekind does .dedekind s technical work on the foundations of analysis has been justly celebrated ( see d. fowler ) .whereas everyone before dedekind had _ assumed _ that operations such as powers , roots , and logarithms can be performed , he was the first to show how these operations can be defined , and shown to be coherent , in the realm of the real numbers ( see dedekind ) .meanwhile , the nature of his interpretive speculations about what does or does not constitute the essence " of continuity , is a separate issue .for over a hundred years now , many mathematicians have been making the assumption that space conforms to dedekind s idea of the essence of continuity " , which in arithmetic translates into the numerical assertion that two numbers should be equal if and only if they define the same dedekind cut on the rationals . such an assumption rules out infinitesimals . in the context of the hyperreal number system ,it amounts to an application of the standard part function ( see appendix [ rival2 ] ) , which forces the collapse of the entire halo ( cluster of infinitely close , or adequal , points ) to a single point .the formal / axiomatic transformation of mathematics accomplished at the end of the 19th century created a specific foundational framework for analysis .weierstrass s followers raised a philosophical prejudice against infinitesimals to the status of an axiom .dedekind s `` essence of continuity '' was , in essence , a way of steamrolling infinitesimals out of existence . in 1977 , e. nelson created a set - theoretic framework ( enriching zfc ) where the usual construction of the reals produces a number system containing entities that behave like infinitesimals .thus , the elimination thereof was not the only way to achieve rigor in analysis as advertized at the time , but rather a decision to develop analysis in just one specific way .a prevailing sentiment today is that one of the spectacular successes of the rigorous analysis was the justification of delta functions , originally introduced informally by to p. dirac ( 19021984 ) , in terms of distribution theory . butwas it originally introduced informally by dirac ?in fact , fourier and cauchy exploited the `` dirac '' delta function over a century earlier .cauchy defined such functions in terms of infinitesimals ( see ltzen and laugwitz ) .a function of the type generally attributed to dirac was specifically described by cauchy in 1827 in terms of infinitesimals .more specifically , cauchy uses a unit - impulse , infinitely tall , infinitely narrow delta function , as an integral kernel .thus , in 1827 , cauchy used infinitesimals in his definition of a `` dirac '' delta function . 
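Before turning to Cauchy's own formulation (quoted in the next paragraph; the displayed formula has not survived in this copy), here is a numerical sketch, in our own notation, of the behavior at issue: the Cauchy kernel with a very small (in Cauchy's setting, infinitesimal) scale parameter acts on a test function essentially as evaluation at the origin. The test function, the interval of integration, and all parameter values are illustrative assumptions.

\begin{verbatim}
# The Cauchy kernel (1/pi)*eps/(eps**2 + x**2) integrated against a test
# function f over [-1, 1]: as the scale parameter eps shrinks, the result
# approaches f(0).  All choices here are illustrative, not Cauchy's values.
import math

def cauchy_kernel(x, eps):
    return (1.0 / math.pi) * eps / (eps**2 + x**2)

def smear(f, eps, half_width=1.0, n=200000):
    h = 2.0 * half_width / n          # midpoint rule on [-half_width, half_width]
    total = 0.0
    for k in range(n):
        x = -half_width + (k + 0.5) * h
        total += f(x) * cauchy_kernel(x, eps) * h
    return total

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, smear(math.cos, eps))  # tends to cos(0) = 1 as eps shrinks
\end{verbatim}

With the scale parameter taken literally infinitesimal rather than merely small, the output differs from f(0) only by an infinitesimal, which is the content of the formula quoted below.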
here cauchy uses infinitesimals and , where is , in modern terms , the `` scale parameter '' of the `` cauchy distribution '' , whereas gives the size of the interval of integration . cauchy wrote : `` moreover one finds , denoting by , two infinitely small numbers , '' ( see cauchy ) . such a formula extracts the value of a function at a point by integrating against a delta function defined in terms of an infinitesimal parameter ( see and laugwitz ) . the expression ( for real ) is known as the _ cauchy distribution _ in probability theory . the function is called the probability density function . a cauchy distribution with an infinitesimal scale parameter produces a function with dirac - delta function behavior , exploited by cauchy already in 1827 in work on fourier series and evaluation of singular integrals . is there continuity between historical infinitesimals and robinson's theory ? historically , infinitesimals have often been represented by null sequences . thus , cauchy speaks of a variable quantity as becoming an infinitesimal in 1821 , and his variable quantities from that year are generally understood to be sequences of discrete values ( on the other hand , in his 1823 he used continuous variable quantities ) . infinitesimal - enriched fields can in fact be obtained from sequences , by means of an ultrapower construction , where a null sequence generates an infinitesimal . such an approach was popularized by luxemburg in 1962 , and is based on the work by e. hewitt from 1948 . even in robinson's approach based on the compactness theorem , a null sequence is present , though well - hidden , namely in the countable collection of axioms . thus , null sequences provide both a cognitive and a technical link between historical infinitesimals thought of as variable quantities taking discrete values , on the one hand , and modern infinitesimals , on the other ( see katz & tall ) . leibniz's heuristic _ law of continuity _ was implemented mathematically as łoś's theorem and later as the _ transfer principle _ over the hyperreals ( see appendix [ rival2 ] ) , while leibniz's heuristic law of homogeneity ( see leibniz , and bos ) was implemented mathematically as the standard part function ( see katz and sherry ) . does lakatos's defense of the infinitesimalist tradition rest upon an ideological agenda of kuhnian relativism and fallibilism , inapplicable to mathematics ? g.
schubring summarizes fallibilism as an `` enthusiasm for revising traditional beliefs in the history of science and reinterpreting the discipline from a theoretical , epistemological perspective generated by thomas kuhn's ( 1962 ) work on the structure of scientific revolutions . applying popper's favorite keyword of fallibilism , the statements of earlier scientists that historiography had declared to be false were particularly attractive objects for such an epistemologically guided revision '' ( schubring ) . schubring then takes on lakatos in the following terms : `` the philosopher imre lakatos ( 1922 - 1972 ) was responsible
for introducing these new approaches into the history of mathematics . one of the examples he analyzed and published in 1966 received a great deal of attention : cauchy's theorem and the problem of uniform convergence . lakatos refines robinson's approach by claiming that cauchy's theorem had also been correct at the time , because he had been working with infinitesimals '' ( ibid . ) . however , schubring's summary of the philosophical underpinnings of lakatos's interpretation of cauchy's sum theorem is not followed up by an analysis of lakatos's position ( see ) . it is as if schubring felt that labels of `` kuhnianism '' and `` fallibilism '' are sufficient grounds for dismissing a scholar . schubring proceeds similarly to dismiss laugwitz's reading of cauchy as `` solipsistic '' . schubring accuses laugwitz of interpreting cauchy's conceptions as `` some hermetic closure of a _ private _ mathematics '' ( schubring ) [ emphasis in the original ; the authors ] , as well as being `` highly anomalous or isolated '' . the fact is that laugwitz is interpreting cauchy's words according to their plain meaning ( see ) , as revealed by looking , as kuhn would suggest , at the context in which they occur . the context strongly recommends taking cauchy's infinitesimals at face value , rather than treating them as a sop to the management . the burden of proof falls upon schubring to explain why the triumvirate interpretation of cauchy is not `` solipsistic '' , `` hermetic '' , or `` anomalous '' . the latter three modifiers could be more appropriately applied to schubring's own interpretation of cauchy's infinitesimals as allegedly involving a _ compromise _ with rigor , allegedly due to _ tensions _ with the management of the _ ecole polytechnique_.
schubring's interpretation is based on cauchy's use of the term _ concilier _ in cauchy's comment on the first page of his _ avertissement _ : `` mon but principal a été de concilier la rigueur , dont je m'étais fait une loi dans mon _ cours d'analyse _ , avec la simplicité qui résulte de la considération directe des quantités infiniment petites '' ( cauchy ) , that is , `` my main goal has been to reconcile rigor , which i had made a law for myself in my _ cours d'analyse _ , with the simplicity that results from the direct consideration of infinitely small quantities '' . let us examine schubring's logic of conciliation . a careful reading of cauchy's _ avertissement _ in its entirety reveals that cauchy is referring to an altogether different source of tension , namely his rejection of some of the procedures in lagrange's _ mécanique analytique _ as unrigorous , such as lagrange's principle of the `` generality of algebra '' . while rejecting the `` generality of algebra '' and lagrange's flawed method of power series , cauchy was able , as it were , to sift the chaff from the grain , and retain the infinitesimals endorsed in the 1811 edition of the _ mécanique analytique_. indeed , lagrange opens his treatise with an unequivocal endorsement of infinitesimals .
referring to the system of infinitesimal calculus , lagrange writes : `` lorsqu'on a bien conçu l'esprit de ce système , et qu'on s'est convaincu de l'exactitude de ses résultats par la méthode géométrique des premières et dernières raisons , ou par la méthode analytique des fonctions dérivées , on peut employer les infiniment petits comme un instrument sûr et commode pour abréger et simplifier les démonstrations '' ( lagrange ) ; that is , `` once one has properly conceived the spirit of this system , and has convinced oneself of the correctness of its results by means of the geometric method of the prime and ultimate ratios , or by means of the analytic method of derivatives , one can then exploit the infinitely small as a reliable and convenient tool so as to shorten and simplify proofs '' ( lagrange ) .
lagrange describes infinitesimals as dear to a scientist , being reliable and convenient . in his _ avertissement _ , cauchy retains the infinitesimals that were also dear to lagrange , while criticizing lagrange's `` generality of algebra '' ( see for details ) . it's useful here to evoke the use of the term `` concilier '' by cauchy's teacher lacroix . gilain quotes lacroix in 1797 to the effect that `` lorsqu'on veut concilier la rapidité de l'exposition avec l'exactitude dans le langage , la clarté dans les principes , [ ... ] , je pense qu'il convient d'employer la méthode des limites '' ( p. xxiv , footnote 20 ) , that is , `` when one wants to reconcile rapidity of exposition with exactness of language and clarity of principles , [ ... ] , i think it is appropriate to employ the method of limits '' . here lacroix , like cauchy , employs `` concilier '' , but in the context of discussing the _ limit _ notion .
would schubring's logic of conciliation dictate that lacroix developed a compromise notion of limit , similarly with the sole purpose of accommodating the management of the _ ecole _ ? why are lakatos and laugwitz demonized , rather than analyzed , by schubring ? we suggest that the act of contemplating for a moment the idea that cauchy's infinitesimals can be taken at face value is unthinkable to a triumvirate historian , as it would undermine the epsilontic cauchy - weierstrass tale that the received historiography is erected upon . the failure to appreciate the robinson - lakatos - laugwitz interpretation , according to which infinitesimals are mainstream analysis from cauchy onwards , is symptomatic of a narrow archimedean - continuum vision . this appendix summarizes a 20th century implementation of an alternative to an archimedean continuum , namely an infinitesimal - enriched continuum . such a continuum is not to be confused with incipient notions of such a continuum found in earlier centuries in the work of fermat , leibniz , euler , cauchy , and others . johann bernoulli was one of the first to exploit infinitesimals in a systematic fashion as a foundational tool in the calculus . see footnote [ ber1 ] for a comparison with leibniz . we will therefore refer to such a continuum as a bernoullian continuum , or b - continuum for short . let us start with some basic facts and definitions . let $\mathbb{R}$ be the field of real numbers , and let $\mathcal{F}$ be a fixed nonprincipal ultrafilter on $\mathbb{N}$ ( the existence of such was established by tarski ) . the relation defined by $(r_n)\sim(s_n)\leftrightarrow\{n\in\mathbb{N}:r_n=s_n\}\in\mathcal{F}$ is an equivalence relation on the set $\mathbb{R}^{\mathbb{N}}$ . the set of hyperreals , or the b - continuum for short , is the quotient set of $\mathbb{R}^{\mathbb{N}}$ by this relation . addition , multiplication and order of hyperreals are defined by $$[(r_n)]+[(s_n)]=_{\text{def}}[(r_n+s_n)],\qquad [(r_n)]\cdot[(s_n)]=_{\text{def}}[(r_n\cdot s_n)],$$ $$[(r_n)]\prec[(s_n)]\leftrightarrow_{\text{def}}\{n\in\mathbb{N}:r_n<s_n\}\in\mathcal{F}.$$ the standard real number is identified with the equivalence class of the constant sequence . the set of hypernaturals can be represented as a disjoint union , where one part is just a copy of the usual natural numbers , and the other consists of infinite ( sometimes called `` unlimited '' ) hypernaturals ; each element of the latter is greater than every usual natural number . the hyperreal field is a non - archimedean , real closed field . the set of infinitesimal hyperreals is defined by the condition , where stands for the absolute value , which is defined as in any ordered field . we say that is infinitely close to , and write , if and only if . to give some examples , the sequence represents a positive infinitesimal , and finally , the sequence represents a nonzero infinitesimal . a sequence maps to zero in the quotient if and only if one has , where , as above , is a fixed nonprincipal ultrafilter on . [ figure [ helpful ] : a commutative diagram relating $\mathbb{Q}$ , $\mathbb{R}$ , the finite hyperreals , and the standard part map st . ] to obtain a full hyperreal field , we replace by in the construction , and form a similar quotient . we wish to emphasize the analogy with the formula defining the a - continuum . we can treat both and as subsets of . note that , while the leftmost vertical arrow in figure [ helpful ] is surjective , we have . a more detailed discussion of the ultrapower construction can be found in m.
davis , and in gordon , kusraev & kutateladze . see also błaszczyk for some philosophical implications . more advanced properties of the hyperreals such as saturation were proved later ( see keisler for a historical outline ) . a helpful `` semicolon '' notation for presenting an extended decimal expansion of a hyperreal was described by a. h. lightstone . see also p. roquette for infinitesimal reminiscences . a discussion of infinitesimal optics is in k. stroyan , j. keisler , d. tall , and l. magnani and r. dossena . edward nelson in 1977 proposed an axiomatic theory parallel to robinson's theory . a related theory was proposed by hrbaček ( who submitted a few months earlier and published a few months later than nelson ) . another axiomatic approach was proposed by benci and di nasso . as ehrlich ( theorem 20 ) showed , the ordered field underlying a maximal ( i.e. , _ on_-saturated ) hyperreal field is isomorphic to j. h. conway's ordered field no , an ordered field ehrlich describes as the _ absolute arithmetic continuum_. infinitesimals can be constructed out of integers ( see borovik , jin , and katz ) . they can also be constructed by refining cantor's equivalence relation among cauchy sequences ( see giordano & katz ) . a recent book by terence tao contains a discussion of the hyperreals . the use of the b - continuum as an aid in teaching calculus has been examined by tall ; ely ; katz and tall ( see also ) . these texts deal with a `` naturally occurring '' , or `` heuristic '' , infinitesimal entity and its role in calculus pedagogy . see footnote [ per1 ] for peirce's take on this . applications of the b - continuum range from the boltzmann equation ( see l. arkeryd ) , to modeling of timed systems in computer science ( see h. rust ) , brownian motion , economics ( see r. anderson ) , mathematical physics ( see albeverio _ et al . _ ) , etc . we are grateful to m. barany , l. corry , n. guicciardini , and v. katz for helpful comments . albeverio , s. ; høegh - krohn , r. ; fenstad , j. ; lindstrøm , t. : nonstandard methods in stochastic analysis and mathematical physics . _ pure and applied mathematics _ , * 122*. academic press , inc . , orlando , fl , 1986 . bråting , k. : a new look at e. g. björling and the cauchy sum theorem . _ archive for history of exact sciences _ * 61 * ( 2007 ) , no . 5 , 519 - 535 . cajori , a history of mathematical notations , volumes 1 - 2 . the open court publishing company , la salle , illinois , 1928/1929 . reprinted by dover , 1993 . cauchy , a. l. ( 1853 ) note sur les séries convergentes dont les divers termes sont des fonctions continues d'une variable réelle ou imaginaire , entre des limites données . in _ oeuvres complètes _ , series 1 , vol . 12 , pp . paris : gauthier - villars , 1900 . crowe , m. : ten misconceptions about mathematics and its history . history and philosophy of modern mathematics ( minneapolis , mn , 1985 ) , 260 - 277 , minnesota stud . philos . , xi , univ . minnesota press , minneapolis , mn , 1988 . dauben , j. : arguments , logic and proof : mathematics , logic and the infinite . history of mathematics and education : ideas and experiences ( essen , 1992 ) , 113 - 148 , _ stud . _ , * 11 * , vandenhoeck & ruprecht , göttingen , 1996 . dedekind , r. : essays on the theory of numbers . i : continuity and irrational numbers . ii : the nature and meaning of numbers . authorized translation by wooster woodruff beman . dover publications , inc . , new york , 1963 . ehrlich , p. : continuity . in encyclopedia of philosophy , 2nd edition ( donald m.
borchert , editor ) ,macmillan reference usa , farmington hills , mi , 2005 ( the online version contains some slight improvements ) , pp .489517 .ehrlich , p. : the rise of non - archimedean mathematics and the roots of a misconception .i. the emergence of non - archimedean systems of magnitudes ._ archive for history of exact sciences _ * 60 * ( 2006 ) , no . 1 , 1121 . feferman , s. : conceptions of the continuum [ le continu mathmatique .nouvelles conceptions , nouveaux enjeux ] ._ intellectica _ * 51 * ( 2009 ) 169 - 189 .see also http://math.stanford.edu/~feferman/papers/conceptcontin.pdf gordon , e. i. ; kusraev , a. g. ; kutateladze , s. s. : infinitesimal analysis .updated and revised translation of the 2001 russian original .translated by kutateladze ._ mathematics and its applications _ , * 544*. kluwer academic publishers , dordrecht , 2002 .katz , m. ; tall , d. : the tension between intuitive infinitesimals and formal mathematical analysis .chapter in : bharath sriraman , editor .crossroads in the history of mathematics and mathematics education . _the montana mathematics enthusiast monographs in mathematics education _ * 12 * , information age publishing , inc . ,charlotte , nc , 2011 .see klein , felix : elementary mathematics from an advanced standpoint .i. arithmetic , algebra , analysis .translation by e. r. hedrick and c. a. noble [ macmillan , new york , 1932 ] from the third german edition [ springer , berlin , 1924 ] . originally published as elementarmathematik vom hheren standpunkte aus ( leipzig , 1908 ) .koetsier , t. : lakatos , lakoff and nez : towards a satisfactory definition of continuity . in explanation and proof in mathematics . philosophical and educational perspectives . edited by g. hanna , h. jahnke , and h. pulte .springer , 2009 .lakatos , i. : cauchy and the continuum : the significance of nonstandard analysis for the history and philosophy of mathematics . _ the mathematical intelligencer _ * 1 * ( 1978 ) , no .3 , 151161 ( paper originally presented in 1966 ) .laugwitz , d. : on the historical developement of infinitesimal mathematics .the conceptual thinking of cauchy . translated from the german by abe shenitzer with the editorial assistance of hardy grant . _ american mathematical monthly _ * 104 * ( 1997 ) , no . 7 , 654663 .lawvere , f. w. : toward the description in a smooth topos of the dynamically possible motions and deformations of a continuous body .third colloquium on categories ( amiens , 1980 ) , part i. _ cahiers topologie gom .* 21 * ( 1980 ) , no . 4 , 377392 .lutz , r. ; albuquerque , l. : modern infinitesimals as a tool to match intuitive and formal reasoning in analysis .logic and mathematical reasoning ( mexico city , 1997 ) ._ synthese _ * 134 * ( 2003 ) , no . 1 - 2 , 325351 . luxemburg , w. : nonstandard analysis . lectures on a. robinson s theory of infinitesimals and infinitely large numbers .pasadena : mathematics department , california institute of technology second corrected ed . , 1964 .newton , i. 1946 .sir isaac newton s mathematical principles of natural philosophy and his system of the world , a revision by f. cajori of a. motte s 1729 translation .berkeley : univ . of california press .newton , i. 1999 .the principia : mathematical principles of natural philosophy , translated by i. b. cohen & a. whitman , preceded by a guide to newton s principia by i. b. cohen .berkeley : univ . of california press .schubring , g. 
: conflicts between generalization , rigor , and intuition .number concepts underlying the development of analysis in 1719th century france and germany .sources and studies in the history of mathematics and physical sciences .springer - verlag , new york , 2005 .skolem , th .: ber die nicht - charakterisierbarkeit der zahlenreihe mittels endlich oder abzhlbar unendlich vieler aussagen mit ausschliesslich zahlenvariablen ._ fundamenta mathematicae _ * 23 * , 150 - 161 ( 1934 ) .stevin , simon : the principal works of simon stevin . vols .iia , iib : mathematics . edited by d. j. struik . c. v. swets & zeitlinger , amsterdam 1958 .iia : v+pp .1455 ( 1 plate ) .iib : 1958 iv+pp .459976 .strmholm , p. : fermat s methods of maxima and minima and of tangents .a reconstruction . _ archive for history of exact sciences _ * 5 * ( 1968 ) , no . 1 , 4769 .stroyan , k. : uniform continuity and rates of growth of meromorphic functions .contributions to non - standard analysis ( sympos . ,oberwolfach , 1970 ) , pp ._ studies in logic and foundations of mathematics _ , vol .69 , north - holland , amsterdam , 1972 .tall , d. : the psychology of advanced mathematical thinking , in _ advanced mathematical thinking_. edited by d. o. tall , mathematics education library , 11 .kluwer academic publishers group , dordrecht , 1991 .whiteside , d. t. ( ed . ) : the mathematical papers of isaac newton .iii : 16701673 . edited by d. t. whiteside , with the assistance in publication of m. a. hoskin and a. prag , cambridge university press , london - new york , 1969. * piotr baszczyk * is professor at the institute of mathematics , pedagogical university ( cracow , poland ) .he obtained degrees in mathematics ( 1986 ) and philosophy ( 1994 ) from jagiellonian university ( cracow , poland ) , and a phd in ontology ( 2002 ) from jagiellonian university .he authored _ philosophical analysis of richard dedekind s memoir _ stetigkeit und irrationale zahlen ( 2008 , habilitationsschrift ) .his research interest is in the idea of continuum and continuity from euclid to modern times . *mikhail g. katz * is professor of mathematics at bar ilan university , ramat gan , israel .two of his joint studies with karin katz were published in _ foundations of science _ : a burgessian critique of nominalistic tendencies in contemporary mathematics and its historiography " and stevin numbers and reality " , online respectively at a joint study with a. borovik and r. jin entitled `` an integer construction of infinitesimals : toward a theory of eudoxus hyperreals '' is due to appear in _ notre dame journal of formal logic _ * 53 * ( 2012 ) , no .4 . a joint study with david sherry entitled `` leibniz s infinitesimals :their fictionality , their modern implementations , and their foes from berkeley to russell and beyond '' is due to appear in _
the widespread idea that infinitesimals were `` eliminated '' by the `` great triumvirate '' of cantor , dedekind , and weierstrass is refuted by an uninterrupted chain of work on infinitesimal - enriched number systems . the elimination claim is an oversimplification created by triumvirate followers , who tend to view the history of analysis as a pre - ordained march toward the radiant future of weierstrassian epsilontics . in the present text , we document distortions of the history of analysis stemming from the triumvirate ideology of ontological minimalism , which identified the continuum with a single number system . such anachronistic distortions characterize the received interpretation of stevin , leibniz , d'alembert , cauchy , and others .
in addition to its intrinsic interest as one of nature's oldest and most important processes , photo - energy conversion is of great practical interest given society's pressing need to reduce reliance on fossil fuels by exploiting alternative energy production . photosynthesis maintains the planet's oxygen and carbon cycles in equilibrium and efficiently converts sunlight , while the possibility of its _ in vivo _ study provides a fascinating window into the aggregate effect of millions of years of natural selection . among the most widespread photosynthetic systems are purple bacteria _ rsp . photometricum _ , which manage to sustain their metabolism even under dim light conditions within ponds , lagoons and streams . they absorb light through antenna structures in the biomolecular light harvesting complex 2 ( lh2 ) , and transfer the electronic excitation along the membrane to light harvesting complexes 1 ( lh1 ) , which each contain a reaction center ( rc ) complex . if charge carriers are available ( i.e. the rc is in an open state ) , then the resulting reactions will feed the bacterial metabolism . it was recently observed that the photosynthetic membranes in _ rsp . photometricum _ adapt to the light intensity conditions under which they grow . illuminated under high light intensity ( hli ) ( / m , where is the growing light intensity ) , membranes grow with a ratio of antenna - core complexes ( i.e. stoichiometry ) lh2/lh1 of 3.5 - 4 . for low light intensity ( lli ) ( / m ) , this ratio increases to 7 - 9 . the features that reveal this unexpected change in the ratio of harvesting complexes in bacteria grown under hli and lli are shown in fig.[membs](a ) and fig.[membs](b ) , respectively . here we present a quantitative theory to explain this adaptation in terms of a dynamical interplay between excitation kinetics and reaction - center dynamics . in particular , the paper lays out the model , its motivation and implications , in a progressive manner in order to facilitate understanding . although our model treats the excitation transport as a noisy , classical process , we stress that the underlying quantities being transported are quantum mechanical many - body excitations . the membrane architecture effectively acts as a background network which loosely coordinates the entire process . table [ rates ] summarizes the relevant biomolecular complexes in purple bacteria _ rsp . photometricum _ , together with the timescales governing the excitation kinetics and reaction center dynamics ( transfer rates , dissipation rate , and normalized light intensity rate ) . each lh2 can absorb light with wavelengths 839 to 846 nm , while lh1 absorbs maximally at 883 nm . the lh1 forms an ellipse which completely surrounds the reaction center ( rc ) complex . within the rc , a dimer of bacterio - chlorophylls ( bchls ) known as the special pair p can be excited . the excitation ( p ) induces ionization ( p ) of the special pair , and hence metabolism . the initial photon absorption is proportional to the complex cross - sections , which have been calculated for lh1 and lh2 complexes . with incident photons of wavelength , an 18 w / m light intensity yields a photon absorption rate for circular lh1 complexes in _ rb . sphaeroides _ given by , where is the lh1 absorption cross section .
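as a rough illustration of how these capture rates scale with intensity and cross section , the following sketch ( in python ) computes a per - complex photon capture rate from an assumed absorption cross section , and a vesicle - wide absorption rate from assumed complex numbers ; the numerical values of the cross sections , wavelength and complex counts are placeholders chosen for illustration , not the measured figures for _ rsp . photometricum _ or _ rb . sphaeroides _ .

```python
# Sketch: photon capture rates from absorption cross sections (illustrative values only).
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m / s)

def capture_rate(intensity_w_m2, wavelength_m, cross_section_m2):
    """Photon capture rate (1/s) of a single complex with the given
    absorption cross section under the given light intensity."""
    photon_flux = intensity_w_m2 * wavelength_m / (H * C)   # photons / (m^2 s)
    return photon_flux * cross_section_m2

# Placeholder parameters (assumptions for illustration only).
wavelength = 850e-9      # m, near the LH2 absorption band
sigma_lh1 = 1.0e-19      # m^2, hypothetical LH1 cross section
sigma_lh2 = 0.6e-19      # m^2, hypothetical LH2 cross section
intensity = 18.0         # W / m^2, the intensity quoted in the text

gamma_lh1 = capture_rate(intensity, wavelength, sigma_lh1)
gamma_lh2 = capture_rate(intensity, wavelength, sigma_lh2)

# Vesicle-wide absorption rate for hypothetical complex counts N1 (LH1) and N2 (LH2).
N1, N2 = 10, 40
gamma_M = N1 * gamma_lh1 + N2 * gamma_lh2
print(f"per-LH1: {gamma_lh1:.2f}/s  per-LH2: {gamma_lh2:.2f}/s  vesicle: {gamma_M:.1f}/s")
```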
for lh2 complexes , the corresponding photon capture rate is . extension to other intensity regimes is straightforward , by normalizing to unit light intensity . the rate of photon absorption normalized to 1 w / m intensity will be for an individual lh1 , and for individual lh2 complexes . the complete vesicle , containing several hundreds of complexes , will have an absorption rate , where is the number of lh1 ( lh2 ) complexes in the vesicle and is the light intensity . the number of rc complexes is therefore also equal to . excitation transfer occurs through induced dipole transfer among bchl singlet transitions . the common inter - complex bchl distances of 20 - 100 cause excitation transfer to arise through the coulomb interaction on the picosecond time - scale , while vibrational dephasing destroys coherences within a few hundred femtoseconds . the coulomb interaction de - excites an initially excited electron in the donor complex while simultaneously exciting an electron in the acceptor complex . as dephasing occurs , the donor and acceptor phases become uncorrelated . transfer rate measurements from pump - probe experiments agree with generalized förster calculated rates , assuming intra - complex delocalization . lh2 → lh2 transfer has not been measured experimentally , although an estimate of ps has been calculated . lh2 → lh1 transfer has been measured for _ r. sphaeroides _ as . due to formation of excitonic states , back - transfer lh1 → lh2 is enhanced as compared to the canonical equilibrium rate for a two - level system , up to a value of . the lh1 → lh1 mean transfer time has not been measured , but a generalized förster calculation has reported an estimated mean time of 20 ps . lh1 → rc transfer occurs due to ring symmetry breaking through the second and third lowest - lying exciton states , as suggested by agreement with the experimental transfer time of 35 - 37 ps at 77 k . increased spectral overlap at room temperature improves the transfer time to ps , as proposed by . a photo - protective design makes the back - transfer from an rc's fully populated lowest exciton state to higher - lying lh1 states occur in a calculated time of .1 ps , close to the experimentally measured 7 - 9 ps estimated from decay kinetics after rc excitation . the first electron transfer step occurs in the rc within ps , and is used for quinol production . fluorescence , inter - system crossing , internal conversion and further dissipation mechanisms have been included within an effective single lifetime of 1 ns . due to the small absorption rates in , two excitations will only rarely occupy a single harvesting structure , hence it is sufficient to include the ground and one - exciton states for each harvesting complex . we now introduce the theoretical framework that we use to describe the excitation transfer , built around the experimental and theoretical parameters just outlined . in the first part of the paper our calculations are all numerical ; however , we turn to an analytic treatment in the latter part of the paper . we start by considering a collective state with sites resulting from lh1s , lh2s and hence rc complexes in the vesicle , in terms of a set of states having the form in which any complex can be excited or unexcited , and a maximum of excitations can exist in the membrane . if only excitation kinetics are of interest , and only two states ( i.e.
excited and unexcited ) per complex are assumed , the set of possible states has elements . we introduce a vector in which each element describes the probability of occupation of a collective state comprising several excitations . its time evolution obeys a master equation , in which is the transition rate from a site to a site . since the transfer rates do not depend on time , this yields a formal solution . small absorption rates lead to single - excitation dynamics in the whole membrane , reducing the size of to the total number of sites . the probability to have one excitation at a given complex initially is proportional to its absorption cross section , and can be written as , where the subsets correspond to the lh1s , the lh2s and the rcs respectively . our interest lies in , which is the normalized probability to find an excitation at a complex , given that at least one excitation resides in the network . in order to appreciate the effects that network architecture might have on the model's dynamics , we start our analysis by studying different arrangements of complexes in small model networks , focusing on architectures which have the same amount of lh1 , lh2 and rcs , as shown in the top panel of fig.[archs0](a ) , ( b ) and ( c ) . the bottom panels , fig.[archs0](d)-(f ) , show the values for rc , lh1 and lh2 complexes , respectively . fig.[archs0](d ) shows that the highest rc population is obtained in configuration ( c ) , followed by configurations ( a ) and ( b ) , whose ordering relies on the connectedness of lh1s to antenna complexes . clustering of lh1s will limit the number of links to lh2 complexes , and reduce the probability of rc ionization . for completeness , the probability of occupation in lh1 and lh2 complexes ( figs.[archs0](e ) and ( f ) , respectively ) shows that increased rc occupation benefits from a population imbalance in which lh1 occupation is enhanced and lh2 occupation is reduced . as connections among antenna complexes become more favored , the probability of finding an excitation on antenna complexes will become smaller , while the probability of finding excitations in rcs is enhanced . this discussion of simple network architectures provides us with a simple platform for testing the notion of energy funneling , which is a phenomenon that is commonly claimed to arise in such photosynthetic structures . we start with a minimal configuration corresponding to a basic photosynthetic unit : one lh2 , one lh1 and its rc . figure [ green](a ) shows that excitations will mostly be found in the lh1 complex , followed by occurrences at the lh2 and lastly at the rc . figure [ green](b ) clearly shows the different excitation kinetics which arise when the rc is initially unable to start the electron transfer ; the rc population then increases with respect to that of the lh2 . this confirms that the energy funneling concept is valid for these small networks , i.e. excitations have a preference to visit the rc as compared to being transferred to the light - harvesting complexes . however , in natural scenarios involving entire chromatophores with many complexes , we will show that energy funneling is not as important , due to the increased number of available states provided by all the lh2s surrounding a core complex . given the large state - space associated with such multiple complexes , our subsequent model analysis will be based on a discrete - time random walk for excitation hopping between neighboring complexes .
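to make the single - excitation master - equation picture concrete , the fragment below propagates an occupation - probability vector on a toy three - site chain ( lh2 , lh1 , rc ) ; the rate matrix , its entries and the initial condition are illustrative assumptions loosely inspired by the timescales quoted above , and the toy network is not one of the membrane architectures studied in the text .

```python
import numpy as np
from scipy.linalg import expm

# Sketch: single-excitation master equation d(rho)/dt = M @ rho on a toy
# three-site chain LH2 -> LH1 -> RC.  All rates (1/ps) are assumed values.
t21 = 1.0 / 3.3     # LH2 -> LH1 forward transfer
t12 = 1.0 / 15.5    # LH1 -> LH2 back-transfer (assumed)
t1r = 1.0 / 25.0    # LH1 -> RC transfer
tr1 = 1.0 / 8.0     # RC -> LH1 back-transfer
et  = 1.0 / 3.0     # charge separation at the RC special pair
kd  = 1.0 / 1000.0  # dissipation, ~1 ns effective lifetime

# M[i, j] is the rate from site j to site i; diagonals collect the total outflow.
M = np.array([
    [-(t21 + kd),            t12,                 0.0          ],   # site 0: LH2
    [  t21,        -(t12 + t1r + kd),             tr1          ],   # site 1: LH1
    [  0.0,                  t1r,        -(tr1 + et + kd)      ],   # site 2: RC
])

rho0 = np.array([0.6, 0.4, 0.0])     # initial occupation ~ absorption cross sections
for t_ps in (1.0, 10.0, 100.0):
    rho = expm(M * t_ps) @ rho0
    print(t_ps, "ps:", (rho / rho.sum()).round(3))   # occupation conditioned on survival
```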
in particular , we use a monte carlo method to simulate the events of excitation transfer , the photon absorption and dissipation , and the rc electron transfer . we have checked that our monte carlo simulations accurately reproduce the results of the population - based calculations described above , as can be seen from figs.[green](a ) and ( b ) . [ figure [ green ] : probability for finding the excitation at an lh2 ( dashed ) , lh1 ( dotted ) or at an rc ( continuous ) , for ( a ) and ( b ) ; crosses are the results from the monte carlo simulation . ] we now turn to discuss the application of the model to the empirical biological structures of interest , built from the three types of complex ( lh1 , k=1 ; lh2 , k=2 ; rc , k=3 ) . in particular , we have carried out extensive simulations to investigate the role of the following quantities in the complete chromatophore vesicles : * the adjacency geometry of lh1s and lh2s . the lh2s are more abundant than lh1s and both complexes tend to form clusters , while lh2s are also generally found surrounding the lh1s . * the average time an excitation spends in a complex of type k . * the probability of finding an excitation in a complex of type k . * the dissipation , which measures the probability for excitations to dissipate at site ; the probability of dissipation in core or antenna complexes can be obtained by adding over all sites belonging to a given complex type . * the sum over all complexes of the dissipation probability , which gives the probability for an excitation to be dissipated . the efficiency of the membrane is the probability of using an excitation in any rc , i.e. . [ figure [ dissmembranes ] : dissipation probability across the ( a ) lli and ( b ) hli membranes ; excitations were absorbed by the membrane with rate . ] figure [ dissmembranes](a ) shows that the membrane grown under low light intensity ( lli ) has highly dissipative clusters of lh2s , in contrast to the uniform dissipation in the high light intensity ( hli ) membrane ( see fig.[dissmembranes](b ) ) . this result is supported by a tendency for excitations to reside longer in lh2 complexes far from core centers ( not shown ) , justifying the view of lh2 clusters as excitation reservoirs . however , for lli and hli , the dissipation in lh1 complexes is indistinguishable . in table [ table1 ] we show the observables obtained using our numerical simulations . these show that : 1 . funneling of excitations : * the widely held view of the funneling of excitations to lh1 complexes turns out to be a small network effect , which by no means reflects the behavior over the complete chromatophore . instead , we find that excitations reside mostly in lh2 complexes . * since a few lh2s surround each lh1 , the mean residence times in all complexes are very similar . 2 . dissipation and performance : * excitations are dissipated more efficiently in individual lh1 complexes , since . * dissipation in a given complex type depends primarily on its relative abundance , since . * hli membranes are more efficient than lli membranes . [ table [ table1 ] : residence time ( in picoseconds ) , dissipation , residence probability , and unitary dissipation per complex , for lh1 and lh2 complexes respectively ; stoichiometry and efficiency are also shown . ]
for the present discussion , the most important finding from our simulations is that the adaptation of purple bacteria does _ not _ lie in the single excitation kinetics . in particular , lli membranes are seen to reduce their efficiency globally at the point where photons are becoming scarcer , hence the answer to adaptation must lie in some more fundamental trade - off ( as we will later show explicitly ) . due to the dissimilar timescales between millisecond absorption and nanosecond dissipation , multiple excitation dynamics are also unlikely to occur within a membrane . however , we note that simulations involving multiple excitations that include blockade ( fig.[multiple](a ) ) , in which two excitations cannot occupy the same site , do not show an appreciable lowering of the efficiency for up to thirty excitations . we find that annihilation ( fig.[multiple](b ) ) , in which two excitations annihilate when they occupy the same site at the same time , diminishes the membrane's performance equally in both hli and lli membranes . [ figure [ multiple ] : efficiency under ( a ) blockade and ( b ) annihilation of multiple excitations ; corresponds to the initial number of excitations in each realization . ] [ figure [ rccycle ] : the rc cycle of special pair oxidation together with formation of quinol ; there is a dead time on the millisecond time - scale before a new quinone becomes available . ] our findings above show that the explanation for the observed architecture adaptations ( hli and lli ) neither lies in the frequently quoted side - effect of multiple excitations , nor in the excitation dynamics alone . instead , as we now explain , the answer as to how adaptation can prefer the empirically observed hli and lli structures under different illumination conditions lies in the _ interplay _ between the excitation kinetics and reaction - center cycling dynamics . by virtue of quinone - quinol and cytochrome charge carriers , the rc dynamics features a ` dead ' ( or equivalently ` busy ' ) time interval during which quinol is produced and removed , and then a new quinone becomes available . a single oxidation will produce in the reaction , and a second oxidation will produce quinol in the reaction . once quinol is produced , it leaves the rc and a new quinone becomes attached . the cycle is depicted in fig . [ rccycle ] , and is described in the simulation algorithm by closing an rc for a time after two excitations form quinol . this rc cycling time implies that at any given time , not all rcs are available for turning the electronic excitation into a useful charge separation . therefore , the number of useful rcs decreases with increasing . too many excitations will rapidly close rcs , implying that any subsequently available nearby excitation will tend to wander along the membrane and eventually be dissipated , hence reducing . for the configurations resembling the empirical architectures ( fig.[membs ] ) , this effect is shown as a function of in fig . [ etatau](a ) , yielding a wide range of rc - cycling times at which the lli membrane is more efficient than the hli one . interestingly , this range corresponds to the measured time - scale for of milliseconds , and supports the suggestion that bacteria improve their performance in lli conditions by enhancing quinone - quinol charge carrier dynamics as opposed to manipulating exciton transfer .
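the feedback between excitation arrival and the rc dead time can be mocked up with a deliberately stripped - down stochastic simulation . the sketch below is a hypothetical toy , not the membrane monte carlo of the text : it pools all rcs together , draws poissonian photon arrivals , lets each excitation either dissipate or be trapped at a rate proportional to the fraction of open rcs , and closes one rc for a cycling time after every second trapping ; all parameter values are illustrative assumptions .

```python
import random

def efficiency(tau_ms, n_rc=40, gamma_abs=3000.0, k_trap0=1.0 / 100e-12,
               k_diss=1.0 / 1e-9, t_total=2.0, seed=1):
    """Toy estimate of the fraction of absorbed excitations that ionize an RC.
    All parameter values are illustrative assumptions, not fitted quantities."""
    rng = random.Random(seed)
    tau = tau_ms * 1e-3          # RC cycling (dead) time in seconds
    pending = 0                  # trapped excitations waiting for a partner
    reopen_times = []            # times at which closed RCs become available again
    t, absorbed, trapped = 0.0, 0, 0
    while t < t_total:
        t += rng.expovariate(gamma_abs)                  # next photon arrival
        reopen_times = [tr for tr in reopen_times if tr > t]
        open_rcs = n_rc - len(reopen_times)
        absorbed += 1
        k_trap = k_trap0 * open_rcs / n_rc               # trapping slows as RCs close
        if open_rcs > 0 and rng.random() < k_trap / (k_trap + k_diss):
            trapped += 1
            pending += 1
            if pending == 2:                             # two ionizations -> quinol
                pending = 0
                reopen_times.append(t + tau)             # that RC is now closed
    return trapped / max(absorbed, 1)

for tau_ms in (0.01, 1.0, 10.0, 50.0):
    print(f"tau = {tau_ms:5.2f} ms  ->  efficiency ~ {efficiency(tau_ms):.3f}")
```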
a recent proposal showed numerically that the formation of lh2 para - crystalline domains produces a clustering trend of lh1 complexes with enhanced quinone availability , a fact that would reduce the rc cycling time . however , the crossover of efficiency at ms implies that even if no enhanced rc - cycling occurs , the hli membrane will be less efficient than the lli membrane on the observed time - scale . the explanation is quantitatively related to the number of open rcs . figs . [ etatau](b ) , ( c ) and ( d ) present the distribution of open rcs , for both hli and lli membranes and for the times shown with arrows in fig.[etatau](a ) . when the rc - cycling is of no importance ( fig . [ etatau](b ) ) almost all rcs remain open , thereby making the hli membrane more efficient than lli , since having more ( open ) rcs induces a higher probability for special pair oxidation . near the crossover in fig . [ etatau ] , both membranes have distributions centered around the same value ( fig . [ etatau](c ) ) , indicating that although more rcs are present in hli vesicles , they are more frequently closed due to the tenfold light intensity difference , as compared to lli conditions . higher values of ( fig . [ etatau](d ) ) present distributions in which the lli membrane has more open rcs , in order to yield a better performance when photons are scarcer . note that the distributions become wider when rc cycling is increased , reflecting the mean - variance correspondence of the poissonian statistics used for the simulation of . therefore the trade - off between rc - cycling , the actual number of rcs and the light intensity determines the number of open rcs and hence the performance of a given photosynthetic vesicle architecture ( i.e. hli versus lli ) . guided by the monte carlo numerical results , we develop in sec . [ analyt ] an analytical model ( continuous lines in fig.[etatau ] ) that supports this discussion .
[ figure [ etatau ] : ( a ) efficiency of hli ( diamonds ) and lli ( crosses ) grown membranes , as a function of the rc - cycling time ; continuous lines give the result of the analytical model . ( b ) , ( c ) and ( d ) show the distributions of the number of open rcs for the times shown with arrows in the main plot , for hli ( filled bars ) and lli ( white bars ) . ] for completeness , we now quantify the effect of incident light intensity variations relative to the light intensity during growth , with both membranes having . the externally applied light intensity , which corresponds to the ratio between the actual ( ) and growth ( ) light intensities , is varied in fig . [ etai](a ) . the lli membrane performance starts to diminish well beyond the growth light intensity , while the hli adaptation starts diminishing just above , due to increased dissipation . the crossover in efficiency at results from the quite different behaviors of the membranes as the light intensity increases . in particular , in lli membranes excess photons are readily used for bacterial metabolism , while hli membranes exploit dissipation in order to limit the number of processed excitations . figs . [ etai](b ) , ( c ) and ( d ) verify that the performance of the membranes depends heavily on the number of open rcs . for instance , membranes subject to low excitation intensity ( fig . [ etai](b ) ) behave similarly to what is expected for fast rc cycling times ( fig . [ etatau](a ) ) . the complete distributions , both for hli and lli conditions , shift to lower values with increased intensity in the same manner as that observed with . even though these adaptations show such distinct features in the experimentally relevant regimes for the rc - cycling time and illumination intensity magnitude , figs.[etatau](c ) and ( d ) show that the distributions of open rcs actually overlap . despite the fact that the adaptations arise under different environmental conditions , the resulting dynamics of the membranes are quite similar . note that within this parameter subspace of and , the lli membrane may have a larger number of open rcs than the hli adaptation . in such a case , the lli membrane will perform better than hli with respect to rc ionization .
in a given lifetime , an excitation will find ( depending on and ) a number of available rcs , which we refer to as the _ effective stoichiometry _ ; this is different from the actual number reported by atomic force microscopy . [ figure [ etai ] : ( a ) efficiency of hli ( diamonds , / m ) and lli ( crosses , / m ) membranes , as a function of incident light intensity ; continuous lines give the result of the analytical model . panels ( b ) , ( c ) and ( d ) show the distribution of the number of open rcs for light intensities corresponding to arrows in the main plot ; hli are shaded bars and lli are white bars . ] [ figure [ fig23.2 ] : the efficiency difference is presented for the following membrane configurations : ( b ) _ rsp . photometricum _ bacteria ; ( c ) _ rs . palustris _ bacteria ; ( d ) a completely unclustered vesicle ; ( e ) a fully clustered vesicle . the symbols are crosses , circles , diamonds and boxes , respectively . ] the empirical afm investigations show two main features which highlight the architectural change in the membrane as a result of the purple bacteria's adaptation : the stoichiometry variation and the trend in clustering . fig . [ fig23.2 ] shows the importance of the arrangement of complexes by comparing architectures ( b ) , ( c ) , ( d ) and ( e ) , all of which have a stoichiometry which is consistent with lli vesicles . fig . [ fig23.2](a ) shows the difference between a given membrane's efficiency and the mean of all the membranes , i.e. . the more clustered the rcs , the lower the efficiency in the short domain . as rc cycling is increased , ( b ) becomes the least efficient while all other configurations perform almost equally . the explanation comes from the importance of the number of open rcs : as gets larger , many rcs will close and the situation becomes critical at , where decreases rapidly . configurations ( c ) , ( d ) and ( e ) all have the same number of rcs ( i.e. 44 ) , and the distribution of open rcs is almost the same in each case for any fixed rc cycling time . by contrast , ( b ) has fewer rcs ( i.e.
36 ) .therefore when is small , sparser rcs and exciton kinetics imply that the membrane architecture ( b ) will have better efficiency than ( c ) and ( e ) .the effect of the arrangement itself is lost due to slower rc dynamics , and the figure of merit that determines efficiency is the number of open rcs , which is lower for ( b ) . to summarize so far , we find that the arrangement of complexes changes slightly the efficiency of the membranes when no rc dynamics is included but with rc dynamics , the most important feature is the number of open rcs which is smaller for ( b ) .the nearly equal efficiency over the millisecond domain , emphasizes the relative insensitivity to the complexes geometrical arrangement .the slower the rc cycling , the more evenly available rcs will be dispersed in clustered configurations , resembling the behavior of sparse rc membranes .incoming excitations in clustered configurations may quickly reach a cluster bordering closed rcs , but must then explore further in order to generate a charge separation .although the longer rc closing times make membranes more prone to dissipation and decreased efficiency , it also makes the architecture less relevant for the overall dynamics . _the relevant network architecture instead becomes the dynamical one including sparse open rcs , not the static geometrical one involving the actual clustered rcs ._ the inner rcs in clusters are able to accept excitations as cycling times increase , and hence the rcs overall are used more evenly .this implies that there is little effect of the actual configuration , and explains the closeness of efficiencies for different arrangements in the millisecond range .within a typical fluorescence lifetime of 1 ns , a single excitation has travelled hundreds of sites and explored the available rcs globally . the actual arrangement or architecture of the complexesseems not to influence the excitation s fate , since the light intensity and rc cycling determine the number of open rcs and the availability for p oxidation .this implies that the full numerical analysis of the excitation kinetics , while technically more accurate , may represent excessive computational effort either due to the size of the state space within the master equation approach , or the number of runs required for ensemble averages with the stochastic method .in addition , within neither numerical approach is it possible to deduce the direct functional dependence of the efficiency on the parameters describing the underlying processes . to address these issues , we present here an alternative rate model which is inspired by the findings of the numerical simulations , but which ( 1 ) globally describes the excitation dynamics and rc cycling , ( 2 ) leads to analytical expressions for the efficiency of the membrane and the rate of quinol production , and ( 3 ) sheds light on the trade - off between rc - cycling and exciton dynamics .we start with the observation that absorbed excitations are transferred to rcs , and finally ionize special pairs or are dissipated . at any given time excitations will be present in the membrane .the rate at which they are absorbed is .excitations reduce quinone in the membrane at rcs due to p oxidation at a rate , or dissipate at a rate .both processes imply that excitations leave the membrane at a rate . 
here is the inverse of the mean time at which an excitation yields a charge separation at rcs when starting from any given complex , and it depends on the current number of open rcs given by .the rc cycling dynamics depend on the rate at which rcs close , i.e. where the 1/2 factor accounts for the need for two excitations to produce quinol and close the rc .the rcs open at a rate , proportional to the current number of closed rcs given by .hence the rc - excitation dynamics can be represented by two nonlinear coupled differential equations : in the stationary state , the number of absorbed excitations in a time interval and the number of excitations used to produce quinol , are given by yielding an expression for the steady - state efficiency : these equations can be solved in the stationary state for and , algebraically or numerically , if the functional dependence of is given .it is zero when all rcs are closed , and a maximum when all are open . making the functional dependence explicit , , fig .[ eta3d](a ) presents the relevant functional form for hli and lli membranes together with a linear and a quadratic fit .the dependence on the rate of quinone reduction requires quantification of the number of open rcs , with a notation where the fitting parameter comprises duplets where first and second components relate to the hli and lli membranes being studied .figure [ eta3d](a ) shows that favors a quadratic dependence of the form a linear fit smears out the apparent power - law behavior with fit value , in close agreement with the hli and lli membranes which have 67 and 36 rcs respectively .the linear fit can used to generate an analytical expression for as follows : we found no analytical solution for in the case where has a power law dependence . in the limit of fast rc cycling - time ( ) , has the simple form .if all transfer paths are summarized by , this solution illustrates that if the transfer - p reduction time is less than a tenth of the dissipation time , not including rc cycling . as can be seen in figs .[ etatau ] and [ etai ] , the analytical solution is in good quantitative agreement with the numerical stochastic simulation , and provides support for the assumptions that we have made .moreover , our theory shows directly that the efficiency is driven by the interplay between the rc cycling time and light intensity .figure [ eta3d](b ) shows up an entire region of parameter space where lli membranes are better than hli in terms of causing p ionization , even though the actual number of rcs that they have is smaller . in view of these results, it is interesting to note how clever nature has been in tinkering with the efficiency of lli vesicles and the dissipative behavior of hli adaptation , in order to meet the needs of bacteria subject to the illumination conditions of the growing environment . 
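the stationary state of the two coupled rate equations above is straightforward to obtain numerically . the following sketch does so for a minimal version of the model ; every number , and the quadratic form assumed for the oxidation rate , is an illustrative placeholder rather than one of the fitted values quoted in the text , so only the qualitative behaviour ( efficiency dropping as rcs close ) is meaningful .

```python
import numpy as np
from scipy.optimize import fsolve

# illustrative placeholders -- not the fitted values of the paper
N_RC    = 44        # total number of reaction centres
gamma_A = 1.0e-2    # absorption rate (excitations per ns)
gamma_D = 1.0       # dissipation rate (1/ns), i.e. ~1 ns fluorescence lifetime
tau     = 3.0e6     # RC cycling time in ns (millisecond range)
g0      = 2.0e-3    # prefactor of the oxidation rate (1/ns)

def gamma_t(n_open):
    """Special-pair oxidation rate; assumed quadratic in the number of open RCs."""
    return g0 * (n_open / N_RC) ** 2

def stationary(y):
    n_E, n_closed = y
    n_open = N_RC - n_closed
    return [gamma_A - (gamma_t(n_open) + gamma_D) * n_E,        # d n_E / dt = 0
            0.5 * gamma_t(n_open) * n_E - n_closed / tau]       # d n_closed / dt = 0

n_E, n_closed = fsolve(stationary, x0=[gamma_A / gamma_D, 0.5 * N_RC])
eta = gamma_t(N_RC - n_closed) * n_E / gamma_A     # steady-state efficiency
W   = 0.5 * gamma_t(N_RC - n_closed) * n_E         # quinol production rate
print(f"open RCs: {N_RC - n_closed:.1f},  efficiency: {eta:.3f},  quinol rate: {W:.2e} /ns")
```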
[ figure [ eta3d ] : ( a ) rate of an rc for hli ( diamonds ) and lli ( crosses ) membranes , together with a quadratic ( dashed line ) and linear ( continuous ) dependence on the number of closed rcs . the fitting parameters for are , ns ; and for , , , and , for hli and lli membranes respectively . ( b ) as a function of and , obtained from the complete analytical solution for lli ( white ) and hli ( grey ) membranes . ]

photosynthetic membranes must provide enough energy to fulfill the metabolic requirements of the living bacteria . in order to quantify the quinol output of the vesicle , we calculate the quinol rate , which depends directly on the excitations that ionize rcs . the factor accounts for the requirement of two ionizations to form a single quinol molecule . fig . [ metw](a ) shows the quinol rate as a function of rc cycling time , when membranes are illuminated in their respective growth conditions . if rc cycling is not included ( ) , the tenfold quinol output difference suggests that the hli membrane could increase the cytoplasmic ph to dangerously high levels , or that the lli membrane could starve the bacteria . however , the bacteria manage to survive in both these conditions , and below we explain why . in the regime of millisecond rc cycling , the quinol rate in hli conditions decreases , which is explained by the dissipation enhancement caused by having only very few open rcs . such behavior in lli conditions appears only after several tens of milliseconds . the fact that no crossover occurs in the quinol rate for these two membranes suggests that different cycling times generate this effect . the arrows in fig . [ metw](a ) correspond to times where a similar quinol rate is produced in both membranes , in complete accordance with numerical studies where enhanced quinone diffusion lessens rc cycling times in the lli adaptation . although these membranes were grown under continuous illumination , the adaptations themselves are a product of millions of years of evolution . using rc cycling times that preserve the quinol rate in both adaptations , different behaviors emerge when the illumination intensity is varied ( see fig . [ metw ] ) . the increased illumination is readily used by the lli adaptation , in order to profit from excess excitations in an otherwise low - productivity regime . on the other hand , the hli membrane keeps the quinol rate constant , thereby avoiding the risk of ph imbalance in the event that the light intensity suddenly increases . we stress that the number of rcs synthesized does not directly reflect the number of available states of ionization in the membrane . lli synthesizes a small number of rcs in order to enhance quinone diffusion , such that excess light intensity is utilized by the majority of special pairs . in hli , the synthesis of more lh1-rc complexes slows down rc - cycling , which ensures that many of these rcs are unavailable and hence a steady quinol supply is maintained independent of any excitation increase . the very good agreement between our analytic results and the stochastic simulations yields additional physical insight concerning the stoichiometries found experimentally in _ rsp .
photometricum_ . in particular , the vesicles studied repeatedly exhibit the same stoichiometries , for hli , and for lli membranes . interestingly , neither smaller nor intermediate values are found .

[ figure : quinol rate in hli ( diamonds , / m ) and lli ( crosses , / m ) grown membranes , as a function of rc cycling time . the times shown with arrows are used in ( b ) , where the rate is presented as a function of incident intensity . ]

we now derive an approximate expression for the quinol production rate in terms of the environmental growth conditions and the responsiveness of purple bacteria through stoichiometry adaptation . following refs . , the area of chromatophores in different light intensities can be assumed comparable . initially , absorption occurs with lh1 complexes of area , and lh2 complexes of area , which fill a fraction of the total vesicle area . this surface occupancy can be rearranged in terms of the number of rcs and the stoichiometry , yielding the expression . the fraction has been shown to vary among adaptations , since lli have a greater occupancy than hli membranes due to para - crystalline lh2 domains in lli . accordingly , the absorption rate can be cast as . following photon absorption , the quinol production rate depends on the number of excitations within the membrane in the stationary state , and on the details of transfer through the rate . the assumed linear dependence requires knowledge of and the stationary - state solution for the mean number of closed rcs through . the stationary state in eq . [ ne ] yields such that the mean number of closed rcs is simply . the rate is the rate at which excitations oxidize any special pair when all rcs are open . the time to reach an rc essentially depends on the number of rcs and hence the stoichiometry . must be zero when no rcs are present ( ) and takes a given value when the membrane is made of only lh1s ( =0 ) . also , the rc cycling time is expected to vary somewhat with adaptations due to quinone diffusion , which is supported in our analysis by the condition of bounded metabolic demands as presented in fig . [ metw](a ) . the assumed linear dependence of eq . [ linear ] is cast explicitly as a function of and . with the number of open rcs from in the steady state , we have which can be solved for where . to determine , we employ a linear interpolation using the values highlighted by arrows in fig . [ metw](b ) . the requirements on are fulfilled by a fit , which is supported by our calculations for configurations of different stoichiometries ( see fig . [ spred](a ) ) . the filling fraction is assumed linear according to the experimentally found values for hli ( ) and lli ( ) . the resulting expression is presented in fig . [ spred](b ) . in the high stoichiometry / high intensity regime , the high excitation number would dangerously increase the cytoplasmic ph . the contours of constant quinol production rate shown in fig . [ spred](c ) show that only in a very small intensity range will bacteria adapt with stoichiometries which are different from those experimentally observed in _ rsp . ( and ) .
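the chain of substitutions just described ( surface occupancy , number of rcs , absorption rate , stationary number of open rcs , quinol rate ) can be assembled into a single function of stoichiometry and intensity . the sketch below does this with made - up constants and assumed functional forms for the occupancy and for the transfer rate , so it reproduces only the qualitative shape of the contours in fig . [ spred](c ) , not the quantitative predictions .

```python
import numpy as np
from scipy.optimize import brentq

# assumed placeholder constants; the paper's fitted values are not reproduced here
A_vesicle    = 1.0            # total vesicle area (arbitrary units)
A_LH1, A_LH2 = 0.05, 0.02     # areas of an LH1-RC and an LH2 complex
gamma_D      = 1.0            # dissipation rate (1/ns)
tau          = 3.0e6          # RC cycling time (ns)
sigma_I      = 1.0e-5         # absorbed excitations per complex, per unit intensity
g_lh1        = 5.0e-3         # transfer rate for a membrane of LH1s only (s -> 0)

def occupancy(s):
    # filling fraction, interpolated linearly between assumed HLI and LLI values
    return np.interp(s, [2.0, 6.0], [0.75, 0.85])

def n_rc(s):
    # from f * A_vesicle = N_RC * (A_LH1 + s * A_LH2)
    return occupancy(s) * A_vesicle / (A_LH1 + s * A_LH2)

def quinol_rate(s, intensity):
    N = n_rc(s)
    gamma_A = sigma_I * intensity * N * (1.0 + s)     # absorption grows with antenna size
    g_open  = g_lh1 / (1.0 + s)                       # slower P oxidation at larger s (assumed)

    def balance(n_closed):                            # stationary condition for closed RCs
        g_t = g_open * (N - n_closed) / N             # linear dependence on open RCs
        n_E = gamma_A / (g_t + gamma_D)
        return 0.5 * g_t * n_E - n_closed / tau

    n_closed = brentq(balance, 0.0, N)
    g_t = g_open * (N - n_closed) / N
    return 0.5 * g_t * gamma_A / (g_t + gamma_D)      # quinol production rate

for s in (2.0, 4.0, 6.0):
    rates = [quinol_rate(s, I) for I in (10.0, 30.0, 100.0)]
    print(s, [f"{w:.2e}" for w in rates])
```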
as emphasized in ref . , membranes with =6 or =2 were not observed , which is consistent with our model . more generally , our results predict a great sensitivity of the stoichiometry ratio for 30 - 40 w / m , below which membranes rapidly build up the number of antenna lh2 complexes . very recently , membranes were grown with 30 w / m and an experimental stoichiometry of 4.8 was found . the contour of 2200 s predicts a value for the stoichiometry of 4.72 at such light intensities . this agreement is quite remarkable , since a simple linear model would wrongly predict . our theory's full range of predicted behaviors as a function of light intensity and stoichiometry awaits future experimental verification .

[ figure [ spred ] : ( a ) dependence on stoichiometry of the membrane . ( b ) as a function of stoichiometry variation and illumination intensity . ( c ) quinol rate contours of s in black , blue , red and pink , respectively . ]

we have shown that excitation dynamics alone can not explain the empirically observed adaptation of light - harvesting membranes . instead , we have presented a quantitative model which strongly suggests that chromatic adaptation results from the interplay between excitation kinetics and rc charge - carrier quinone - quinol dynamics . specifically , the trade - off between light intensity and rc cycling dynamics induces the lli adaptation in order to efficiently promote p oxidation due to the high number of open rcs . by contrast , the hli membrane remains less efficient in order to provide the bacteria with a relatively steady metabolic nutrient supply . this successful demonstration of the interplay between excitation transfer and rc trapping highlights the important middle ground which photosynthesis seems to occupy between the fast dynamical excitation regime in which quantum - mechanical calculations are most relevant , and the purely classical regime of the millisecond timescale bottleneck in complete membranes . we hope our work will encourage further study of the implications for photosynthesis of this fascinating transition regime between quantum and classical behaviors . on a more practical level , we hope that our study may help guide the design of more efficient solar micropanels mimicking natural designs .

this research was supported by banco de la república ( colombia ) and proyecto semilla ( 2010 - 2011 ) , facultad de ciencias , universidad de los andes .

residence time : in the stochastic simulations , is the residence time of an excitation in the realization at complex : where is the number of times complex has been visited in all the stochastic realizations .

dissipation : the dissipation measures the probability for excitations to dissipate at site , and can be obtained formally from where are the number of excitations dissipated at site and is the total number of absorbed excitations .
residence probability : in practice , within the stochastic simulations , for a given realization an excitation will spend a time in lh1s , a time in lh2s and a time in rcs . the residence probability in complex type is then the ratio in which the total time of all realizations spent in that complex type is the numerator , while the denominator is the total time during which the excitations were within the membrane .

scheuring s and sturgis j 2005 _ science _ * 309 * 484
johnson f s 1970 _ biological conservation _ * 2 * 83
janzen h h 2004 _ agriculture , ecosystems and environment _ * 104 * 399
knox r s 1977 _ primary processes of photosynthesis _ , elsevier - north holland , p 55
pullerits t and sundstrom v 1996 _ acc . chem . res . _ * 29 * 381
fleming g r and van grondelle r 1997 _ curr . biol . _ * 7 * 738
pfenning n 1978 _ the photosynthetic bacteria _ ( new york : plenum publishing corporation ) p 3
fassioli f , olaya - castro a , scheuring s , sturgis j n and johnson n f 2009 _ biophysical jour . _ * 97 * 2464
jang s , newton m d and silbey r j 2004 _ phys . rev . lett . _ * 92 * 218301
francke c and amesz j 1995 _ photosynthetic res . _ * 46 * 347
geyer t and heims v 2006 _ biophys . j. _ * 91 * 927
bahatyrova s , freese r n , siebert c a , olsen j d , van der werf k o , van grondelle r , niederman r a , bullough o a , otto c and hunter n c 2004 _ nature _ * 430 * 1058
lee h , cheng y and fleming g 2007 _ science _ * 316 * 1462
engel g s , calhoun t r , read e l , ahn t , mancal t , cheng y , blankenship r and fleming g r 2007 _ nature _ * 446 * 782
hess s , chachisvilis m , timpmann k , jones m , fowler g , hunter c and sundstrom v 1995 _ proc . natl . acad . sci . usa _ * 92 * 12333
ritz t , park s and schulten k 2001 _ j. phys . chem . b _ * 105 * 8259
damjanovic a , ritz t and schulten k 2000 _ int . j. quantum chem . _ * 77 * 139
bergstrom h , van grondelle r and sundstrom v 1989 _ febs lett . _ * 250 * 503
visscher k , bergstrom h , sundstrom v , hunter n c and van grondelle r 1989 _ photosyn . res . _ * 22 * 211
van grondelle r , dekker j , gillbro t and sundstrom v 1994 _ biochim . biophys . acta _ * 1187 * 1
timpmann k , zhang f , freiberg a and sundstrom v 1993 _ biochim . biophys . acta _ * 1183 * 185
photosynthesis is arguably the fundamental process of life , since it enables energy from the sun to enter the food - chain on earth . it is a remarkable non - equilibrium process in which photons are converted to many - body excitations which traverse a complex biomolecular membrane , getting captured and fueling chemical reactions within a reaction - center in order to produce nutrients . the precise nature of these dynamical processes which lie at the interface between quantum and classical behaviour , and involve both noise and coordination are still being explored . here we focus on a striking recent empirical finding concerning an illumination - driven transition in the biomolecular membrane architecture of _ rsp . photometricum _ purple bacteria . using stochastic realisations to describe a hopping rate model for excitation transfer , we show numerically and analytically that this surprising shift in preferred architectures can be traced to the interplay between the excitation kinetics and the reaction center dynamics . the net effect is that the bacteria profit from efficient metabolism at low illumination intensities while using dissipation to avoid an oversupply of energy at high illumination intensities .
in this note , we show that the main result proposed in , i.e. a sufficient condition for excluding the presence of homoclinic and heteroclinic orbits , is not correct . hence this can not lead to their conjecture of a fourth kind of chaos in 3d polynomial ode systems characterized by the non - existence of homoclinic and heteroclinic orbits . moreover , we remark that the conjecture itself can not be correct , as explained below in detail . the main result of is stated in their theorem 1 , which excludes the existence of a bounded trajectory for in any dynamical system characterized by a vector field with at least one lower bounded component , and from their proof it follows that this occurs independently of the existence of one or more equilibria in the system . we notice that the proof they give also implies the _ non - existence of any closed orbit , i.e. limit cycle_. moreover , gives an example , in eq . ( 3 ) , satisfying the assumption of theorem 1 and showing a chaotic attractor illustrated in fig . 1 . from this finding , they conjecture the existence of a new type of chaos . however , given a system with a chaotic attractor ( and let us assume that this is the case shown in their example ; indeed , from fig . [ fig1 ] of this note it can be assumed that the first return map on a suitable two - dimensional surface , as qualitatively shown by the red line , leads to a two - dimensional map in a chaotic regime ) , then we have , by any definition of a chaotic system ( see , e.g. , , , , and ) , _ the existence of infinitely many unstable limit cycles which densely cover the observed chaotic set . moreover , infinitely many homoclinic and heteroclinic orbits exist connecting these unstable limit cycles , which also are dense in the chaotic attractor_. it follows that it is not possible to identify a chaotic system characterized by the non - existence of homoclinic and heteroclinic orbits .

[ figure [ fig1 ] : plane projection of the attractor obtained from system ( 3 ) in for , and . the red line indicates a suitable plane for the return map . ]

the paper suggests that the presence of chaos is subordinated to the existence of either homoclinic orbits _ of equilibria _ or heteroclinic orbits _ of equilibria _ , which is not correct . indeed , the target of identifying a chaotic system characterized by the non - existence of homoclinic and heteroclinic orbits _ to one or more equilibria _ may be correct . however , in our opinion this is not interesting , as it is well known that chaotic attractors may exist also when neither homoclinic nor heteroclinic orbits _ to equilibria _ are present , as is the case in the classical lorenz system , see , e.g. , . we should emphasize that the existence of a homoclinic orbit of an equilibrium ( or heteroclinic connections between two equilibria ) , under other suitable assumptions , is relevant from a theoretical point of view , as it allows one to rigorously prove the existence of chaos and also its persistence under perturbations when the homoclinic ( heteroclinic ) orbit no longer exists . in the following , we underline the presence of an inaccuracy in the proof of theorem 1 in . regarding the proof of theorem 1 in , an incorrect part is related to the unboundedness of the trajectories . consider a vector field belonging to class , the state variable of the system , and the time .
assuming the existence of an , such that for at least one we have that , , then ( as pointed out by the authors in eq . ( 2 ) of their paper ) we have to consider the inequality for , which implies that , given a homoclinic or heteroclinic orbit , we have , which is compatible with the existence of the orbit itself . then the authors state that the same result holds also for , getting divergence of any orbit for . however , this is not correct , as for we have in place of ( by a simple integration from to , assuming , and by basic properties of definite integrals , one obtains for the inequality , while for one obtains the inequality ) . inequality implies that , which does not exclude the existence of homoclinic ( or heteroclinic ) orbits . in this note we provide some arguments showing that the statement of theorem 1 in is not correct . moreover , we show the presence of a mistake related to the backward integration in the proof of the same theorem . the authors are grateful to g. i. bischi and l. cerboni baiardi for helpful comments on this note . the usual caveats apply .
the present note refers to a result proposed in , and shows that the theorem therein is not correct . we explain that a proof of that theorem can not be given , as the statement is not correct , and we underline a mistake occurring in their proof . since this note is supplementary to , the reader should consult this paper for further explanations of the matter and the symbols used . * keywords * : homoclinic orbit , heteroclinic orbit , chaos . * chinese library classification * * 2010 mathematics subject classification *
recent progress in modelling pedestrian dynamics is remarkable and many valuable results are obtained by using different models , such as the social force model and the floor field model . the former model is based on a system of coupled differential equations which has to be solved e.g. by using a molecular dynamics approach similar to the study of granular matter .pedestrian interactions are modelled via long - ranged repulsive forces . in the latter modeltwo kinds of floor fields , i.e. , a static and a dynamic one , are introduced to translate a long - ranged spatial interaction into an attractive local interaction , but with memory , similar to the phenomenon of chemotaxis in biology .it is interesting that , even though these two models employ different rules for pedestrian dynamics , they share many properties including lane formation , oscillations of the direction at bottlenecks , and the so - called faster - is - slower effect .although these are important basics for pedestrian modelling , there are still many things to be done in order to apply the models to more practical situations such as evacuation from a building with complex geometry . in this paper , we will propose a method to construct the static floor field for complex rooms of _ arbitrary _ geometry .the static floor field is an important ingredient of the model and has to be specified before the simulations .moreover , the effect of walls and contraction at a wide exit will be taken into account which enables us to obtain realistic behavior in evacuation simulations even for the case of panic situations .this paper is organized as follows . in sec .[ humandata ] , we cite experimental data of evacuations to illustrate the strategy of people in panic situations .then an extended floor field model is introduced in sec .[ camodel ] including a method of constructing the floor field and wall potentials . 
in sec .[ simulation ] results of simulations for various configurations of a room are investigated and concluding discussions are given in sec .first we discuss the different kinds of human behavior in panic situations .people in a room try to evacuate in case of fire with their own strategy .the strategies of the evacuation are well studied up to now , and we cite an example of an experiment of evacuation that was conducted in a large supermarket in japan .fire alarms and false smoke were set suddenly in the experiment , and after people had escaped from the building they have been interviewed about their choice of escape routes etc .data from more than 300 people were collected .the following list shows the statistics of the answers given : 1 .i escaped according to the signs and instructions , and also broadcast or guide by shopgirls ( 46.7% ) .i chose the opposite direction to the smoking area to escape from the fire as soon as possible ( 26.3% ) .i used the door because it was the nearest one ( 16.7% ) .i just followed the other persons ( 3.0% ) .i avoided the direction where many other persons go ( 3.0% ) .there was a big window near the door and you could see outside .it was the most `` bright '' door , so i used it ( 2.3% ) .i chose the door which i m used to ( 1.7% ) .we see that very different , sometimes even contradictory , choices were made indicating the complexity of an evacuation problem .if we assume that there are no signs and no guidance by broadcasts as well as no information about the location of the fire , then according to the questionnaires , people will try to evacuate by relying on both one s memory of the route to the nearest door and other people s behavior .this competition between collective and individual behavior is essential for modelling evacuation phenomena .it is included in the _ static and dynamic floor fields _ of our model that we have introduced in previous papers .in this section we will summarize the update rules of an extended floor field model for modelling panic behavior of people evacuating from a room .the space is discretized into cells of size which can either be empty or occupied by one pedestrian ( _ hard - core - exclusion _ ) .each pedestrian can move to one of the unoccupied next - neighbor cells ( or stay at the present cell ) at each discrete time step according to certain transition probabilities ( fig .[ figmove ] ) as explained below in sec .[ subrules ] . for the case of evacuation processes ,the _ static floor field _ describes the shortest distance to an exit door .the field strength is set inversely proportional to the distance from the door .the _ dynamic floor field _ is a _ virtual trace _ left by the pedestrians similar to the pheromone in chemotaxis .it has its own dynamics , namely diffusion and decay , which leads to broadening , dilution and finally vanishing of the trace . at for all sites of the lattice the dynamic field is zero , i.e. , .whenever a particle jumps from site to one of the neighboring cells , at the origin cell is increased by one .the model is able to reproduce various fundamental phenomena , such as lane formation in a corridor , herding and oscillation at a bottleneck .this is an indispensable property for any reliable model of pedestrian dynamics , especially for discussing safety issues. the update rules of our ca have the following structure : 1 .the dynamic floor field is modified according to its diffusion and decay rules , controlled by the parameters and . 
in each time step of the simulation each single boson of the whole dynamic field decays with probability and diffuses with probability to one of its neighboring cells . 2 . for each pedestrian , the transition probabilities for a move to an unoccupied neighbor cell are determined by the two floor fields and one's inertia ( fig . [ figmove ] ) . the values of the fields ( dynamic ) and ( static ) are weighted with two sensitivity parameters and : with the normalization . here represents the inertia effect , given by for the direction of one's motion in the previous time step , and for other cells , where is the sensitivity parameter . , newly introduced in this paper , is the wall potential , which is explained below . in ( [ trap ] ) we do not take into account the obstacle cells ( walls etc . ) as well as occupied cells . each pedestrian chooses randomly a target cell based on the transition probabilities determined by ( [ formula ] ) . whenever two or more pedestrians attempt to move to the same target cell , the movement of _ all _ involved particles is denied with probability . the friction parameter controls the resolution of conflicts in clogging situations . both cooperative and competitive behavior at a bottleneck are well described by adjusting . these constants control diffusion and decay of the dynamic floor field . they reflect the randomness of people's movement and the visible range of a person , respectively . if the room is full of smoke , then takes a large value due to the reduced visibility . through diffusion and decay the trace is broadened , diluted and finally vanishes after some time . these parameters specify the wall potential . pedestrians tend to avoid walking close to walls and obstacles . is the maximum distance at which people feel the walls . it reflects one's range of sight or so - called personal space . is the sensitivity to the walls , and the ratio reflects to which degree deviations from the shortest route ( which is determined by the static floor field ) are accepted in order to avoid the walls . we focus on measuring the total evacuation time by changing the parameters and the configuration of the room , such as the width , position and number of doors and obstacles . in all simulations we put , and , and von neumann neighborhoods are used in eq . ( [ trap ] ) for simplicity . the size of the room is set to 100 by 100 cells . in the previous papers , the influence of the two floor fields on the total evacuation time has been studied in detail . here , the effects of inertia and wall potentials are investigated for concave rooms with some obstacles by using the dijkstra metric . pedestrians try to keep their preferred velocity and direction as long as possible . this is taken into account by adjusting the parameter . in fig . [ figinertia ] , total evacuation times from a room without any obstacles are shown as a function of in the cases and . we see that the time is monotonically increasing in the case , because any perturbation from other people becomes large if increases , which causes deviations from the minimum route . the introduction of inertia effects , however , changes this property qualitatively , as seen in fig . [ figinertia ] . the _ minimum _ time appears around in the case . this is well explained by taking into account the physical meanings of and . if becomes large , people become less flexible and all of them try to keep their own minimum route to the exit according to the static floor field regardless of congestion .
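stepping back to the update rules defined above , the sketch below implements the core of one update step : the weighted transition probabilities and the friction rule for conflicting target cells . the exponential weighting of the wall potential and of the inertia bonus is an assumed functional form ( the exact way they enter eq . ( [ formula ] ) is not reproduced here ) , the parameter values are arbitrary , and the diffusion / decay step of the dynamic field is omitted .

```python
import numpy as np

rng = np.random.default_rng(0)

k_S, k_D, k_I, k_W = 2.0, 1.0, 0.5, 1.0   # sensitivity parameters (arbitrary values)
mu = 0.3                                   # friction parameter

def transition_probs(cell, S, D, W, prev_dir, occupied, obstacles):
    """Probabilities of the von Neumann moves (stay, up, down, left, right) from `cell`.

    S, D, W are the static field, dynamic field and wall potential as 2d arrays;
    the grid is assumed to be padded with obstacle cells along its border."""
    x, y = cell
    moves = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    w = np.zeros(len(moves))
    for k, (dx, dy) in enumerate(moves):
        nx, ny = x + dx, y + dy
        if obstacles[nx, ny] or (occupied[nx, ny] and (dx, dy) != (0, 0)):
            continue                                      # forbidden target cell
        inertia = 1.0 if (dx, dy) == tuple(prev_dir) else 0.0
        w[k] = np.exp(k_S * S[nx, ny] + k_D * D[nx, ny]
                      + k_I * inertia - k_W * W[nx, ny])  # assumed combination of weights
    total = w.sum()
    return (w / total, moves) if total > 0 else (None, moves)

def resolve_conflicts(targets):
    """targets: dict pedestrian_id -> chosen target cell; apply the friction rule."""
    moved, by_cell = {}, {}
    for ped, cell in targets.items():
        by_cell.setdefault(cell, []).append(ped)
    for cell, peds in by_cell.items():
        if len(peds) == 1:
            moved[peds[0]] = cell
        elif rng.random() >= mu:                    # with probability 1 - mu one of them moves
            moved[int(rng.choice(peds))] = cell     # the rest stay where they are
        # with probability mu the movement of all involved pedestrians is denied
    return moved
```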
by increasing , one begins to feel the disturbance from other people through the dynamic floor field . this perturbation makes one flexible and hence helps to avoid congestion . a large again works as a strong perturbation , as in the case of , which largely diverts people from the shortest route . thus we have the minimum time at a certain magnitude of , which will depend on the value of and .

[ figure [ figinertia ] : sensitivity to the dynamic floor field in the dependence of . the room is a simple square without obstacles and 50 simulations are averaged for each data point . parameters are and . ]

if the width of an exit becomes large , a more careful treatment is needed in the calculation of the static floor field . people tend to rush to the center of the exit to avoid the walls . thus one should introduce an effective width of the exit by neglecting certain cells at each of its ends . we call this effect _ contraction _ in this paper , due to its similarity with the contraction effect in hydrodynamics , where fluid running through an orifice forms a jet with a smaller diameter than that of the orifice immediately after the fluid goes out of it . the shortest distance from a cell in the room to one of the exit cells is calculated by using the dijkstra metric , but only those exit cells near the center of the door are taken into account owing to the contraction . then we take the minimum of those shortest distances and use it as the value of the static floor field at the cell . here we define the ratio of contraction of an exit as , where is the true width of the exit and is the effective width . if , i.e. , there is no contraction , we see the artifact of two crowds near the edges of the exit ( fig . [ figcontra](a ) ) . introducing the contraction makes the evacuation behavior more realistic ( fig . [ figcontra](b ) ) .

[ figure [ figcontra ] : snapshots ( a ) and ( b ) , see text . ]

let us investigate the effect of the position of obstacles on the total evacuation time .
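before turning to the obstacle study , here is a sketch of the static - field construction with contraction just described . on a von neumann lattice with unit step costs the dijkstra metric reduces to a breadth - first search from the effective exit cells ; the inverse - distance normalisation of the field and the default contraction ratio are choices made here only for illustration .

```python
from collections import deque
import numpy as np

def static_floor_field(obstacles, exit_cells, contraction=0.8):
    """Static floor field from a multi-source BFS over the effective exit cells.

    obstacles  : 2d boolean array, True for walls and obstacles
    exit_cells : list of (x, y) door cells, assumed ordered along the door
    contraction: ratio of effective to true exit width
    """
    n_eff = max(1, int(round(contraction * len(exit_cells))))
    start = (len(exit_cells) - n_eff) // 2
    effective = exit_cells[start:start + n_eff]       # keep only the central door cells

    dist = np.full(obstacles.shape, np.inf)
    queue = deque()
    for cell in effective:
        dist[cell] = 0.0
        queue.append(cell)
    while queue:                                      # unit-cost Dijkstra == BFS
        x, y = queue.popleft()
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < obstacles.shape[0] and 0 <= ny < obstacles.shape[1]
                    and not obstacles[nx, ny] and np.isinf(dist[nx, ny])):
                dist[nx, ny] = dist[x, y] + 1.0
                queue.append((nx, ny))

    # field strength taken inversely proportional to the distance to the door
    return np.where(np.isfinite(dist), 1.0 / (1.0 + dist), 0.0)

# small usage example: a 10 x 12 room with a 6-cell exit on the upper wall
room = np.zeros((10, 12), dtype=bool)
room[0, :] = room[-1, :] = room[:, 0] = room[:, -1] = True   # walls
door = [(0, c) for c in range(3, 9)]
for _, c in door:
    room[0, c] = False                                       # open the door cells
S = static_floor_field(room, door, contraction=2 / 3)
```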
in fig . [ figobst ] , we set up two rooms such that the total area of obstacles in both rooms is the same . however , it is important to notice that the maximum length to the exit is different for these rooms . the maximum length in room ( b ) is 124.6 , which is longer than that in ( a ) ( 115.5 ) . this difference affects the static floor field and hence the dynamics of people . the average total evacuation time in case ( a ) is 357.48 time steps , while in ( b ) it is 379.22 time steps . this implies that , even though the area of obstacles is the same , their positions in the room will affect the evacuation dynamics considerably through the static floor field . it is worth mentioning that this simulation is different from the column problem we have studied previously . the existence of a small column in front of an exit does not change the static floor field so much , but works as a simple obstacle which divides the flow of evacuating people .

[ figure [ figobst ] : rooms ( a ) and ( b ) ; the initial density of the room is . ]

finally , we study the effect of the width of an exit as well as the total number of doors in a room . we compare a room with an exit of size 10 to one with two exits of size 5 ( fig . [ figexit ] ) . although the total width of the exits is the same , the evacuation dynamics is different . the total evacuation time is 275 time steps on average for the case of one exit , but 245 for two exits ( fig . [ figexit](a ) ) . if the two doors are set at opposite walls , the evacuation time is further improved to 220 time steps ( fig . [ figexit](b ) ) . this is similar to the effect studied in sec . [ posob ] , because the minimum length in the case of fig . [ figexit](b ) becomes 68.2 while it is 104.1 in ( a ) .

[ figure [ figexit ] : rooms ( a ) and ( b ) , see text . ]

in this paper we have discussed the main properties of the floor field cellular automaton model for pedestrian dynamics . a method for the calculation of the static floor field and the introduction of wall potentials have been proposed . both extensions improve the realism of evacuation simulations . it is also important to take into account the contraction in the case of wide exits . the existence of a minimal evacuation time in the case of finite inertia is found , which shows the importance of one's flexibility to respond to the other people's behavior in the case of an evacuation . finally , it is shown that the position of obstacles in a room will affect the evacuation dynamics through the static floor field . thus , in architectural planning it is important to carefully consider a suitable position of obstacles in a room as well as of the exits . this work was supported in part by the ryukoku fellowship 2002 ( k. nishinari ) . a. kirchner , h. klüpfel , k. nishinari , a. schadschneider , and m. schreckenberg , `` simulation of competitive egress behavior : comparison with aircraft evacuation data , '' physica a , vol . 324 , p . 691 ( 2003 ) .
the floor field model , which is a cellular automaton model for studying evacuation dynamics , is investigated and extended . a method for calculating the static floor field , which describes the shortest distance to an exit door , in an arbitrary geometry of rooms is presented . the wall potential and contraction effect at a wide exit are also proposed in order to obtain realistic behavior near corners and bottlenecks . these extensions are important for evacuation simulations , especially in the case of panics .
theoretical physics endeavors to produce models that predict observed physical phenomena . though sometimes the challenge is to develop a mathematical language describing the system of interest , often ( especially in the study of complex systems ) one can write down the exact dynamics and gain little insight the resulting expressions are too cumbersome , too messy , or too ill - conditioned to be useful . by exploiting symmetries and other characteristics of the particular system , one may find a simpler equivalent description of the dynamics .this description may be further simplified by using approximation techniques , e.g. , asymptotic limits and small - parameter expansions .much of the art of the field lies in finding and choosing ad hoc methods for deriving these simpler models ; however , more systematic methods are clearly desirable .control theorists have developed a variety of _ model reduction _ techniques that systematically produce simple models of complex systems .these notes will describe _ balanced truncation _ , a model reduction technique for linear systems which is readily available in a variety of formats ( e.g. matlab ) .balanced truncation has recently been applied in physics contexts , and the results suggest it will prove a useful tool for treating large systems in both classical and quantum settings . much of this discussion comes directly from dullerud and paganini . we will present the important concepts and results necessary to apply balanced truncation , omitting both the proofs and the algorithms .we refer the reader to and for a more complete mathematical discussion , and to matlab toolboxes and their documentation for the computational methods . in section [ sec : inout ]we will describe the input - output paradigm of control theory , and introduce state - space models , the class of systems treatable by balanced truncation .given an arbitrary state - space model , we will characterize the smallest state - space model with identical input - output characteristics in section [ sec : minreal ] , and in section [ sec : baltrunc ] we will show how balanced truncation is used to find smaller models with controlled approximation errors .in many physics settings one is more concerned with the macroscopic behavior of a large system , and less concerned with the system s microscopic details . as an admittedly contrived example , consider a pendulum in a plane at whose free end is a tank partially filled with a classical fluid ( see fig .[ fig : pendulum ] ) .suppose that at time the system is at rest in its stable equilibrium , and our only method of disturbing the system is to exert a time - varying torque at the pivot .suppose further that we are concerned only with the time evolution of pendulum s angle not with the distribution of the fluid , given by some high - dimensional variable .] an exact model including the full fluid state would give the system s exact response to a driving torque , but would be quite impractical . as we only wish to describe the angular response to the driving torque ( a mapping from one degree of freedom to another ) onesuspects a lower - dimensional model might suffice .for example , one might try treating the fluid as a point mass attached to the pendulum by a non - linear spring .systems of this sort are naturally phrased in a control theory language . 
in the typical control scenario , a time - varying input drives a system with state giving output signal , and the system dynamics have the form such systems are depicted by a block diagram as shown in fig . [ fig : blockdiagram ] . in the example of the pendulum , the system s state is given by so as to describe the evolution with the first - order dynamics ( [ eqn : dynamics ] ) . the input to the system is , and the system s output is . together with some initial condition ( typically ) , the functions and in ( [ eqn : dynamics ] ) define an _ input - output map _ taking to . since it is this relation with which we are concerned , rather than the system s internal dynamics , a theoretical model for the system will suffice if it describes . given some system of the form ( [ eqn : dynamics ] ) , model reduction aims to produce simpler models ( i.e. models with a lower - dimensional state ) that approximate the original input - output map . we will consider models of the form where , , and , and , , and are time - independent real matrices of sizes , , and respectively . ( this entire discussion also holds for complex - valued systems . ) such linear models are called _ state - space models _ , and the _ order _ of a model is , the dimension of the state . for compactness , the model with matrices , , and is denoted by ( this notation should not be confused with the matrix with blocks , , and . ) often the output will only depend on the system state , i.e. . consider the dynamics ( [ eqn : linear ] ) under a change - of - basis , where is invertible but need not be unitary . the dynamics may then be written as thus changing the basis of the state space defines a mapping on state - space models given by ( note that is unchanged . ) we will call such maps _ similarity transformations_. because these transformations are merely a rewriting of the system dynamics , the input - output map remains the same . as a given state - space model is not a unique description of an input - output map , in the next section we ask : what is the lowest - order model with the same as the given model ? after finding a lowest - order exact model , we will use balanced truncation to find lower - order models approximating . this procedure is summarized in fig . [ fig : procedure ] .

[ figure [ fig : procedure ] : given a state - space model , we first reduce the system to the lowest order with the same input - output map , and then reduce to an approximate model of order . ]

to find state - space models of lowest order with the exact of a given model , we will split the problem into two parts : _ controllability _ and _ observability_. we will then combine these ideas to find _ minimal realizations_. assume the system is initially in the state ; given a time , the _ controllable _ states are those for which there is an input signal yielding . the dynamics ( [ eqn : linear ] ) can be integrated to yield which gives a linear map . the controllable states form the image of this map . since the map is linear , the image is a subspace of the state - space , called the _ controllable subspace_. we denote this subspace by , as controllability only depends on the matrices and . using properties of the matrix exponential it can be shown that the controllable subspace is the image of the _ controllability matrix _ : . thus we see that is independent of .
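the controllability matrix is easy to form and to test numerically . the sketch below builds it for a small random placeholder system ( not a physical model ) and also shows a hand - made uncontrollable example in which one state direction is decoupled from the input .

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 1
A = rng.standard_normal((n, n))     # placeholder system matrices
B = rng.standard_normal((n, m))

def controllability_matrix(A, B):
    blocks, M = [B], B
    for _ in range(A.shape[0] - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)                   # [B, AB, A^2 B, ..., A^{n-1} B]

R = controllability_matrix(A, B)
print("rank of the controllability matrix:", np.linalg.matrix_rank(R))   # n for a controllable pair

# an uncontrollable example: the third state never feels the input
A2 = np.diag([-1.0, -2.0, -3.0])
B2 = np.array([[1.0], [1.0], [0.0]])
print("rank for the decoupled system     :", np.linalg.matrix_rank(controllability_matrix(A2, B2)))
```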
it can also be shown that the controllable subspace is the image of the _ controllability gramian _ , an matrix given by and the orthogonal subspace of uncontrollable states is given by the controllability gramian s kernel . if ( i.e. there exists an input signal to prepare any state ) then we say that is a _ controllable pair_. it can be shown that given any and , we can find a similarity transformation such that the transformed matrices have the block structure
\[
\widetilde{b} = t b = \begin{bmatrix} \widetilde{b}_{1} \\ 0 \end{bmatrix} ,
\]
with a controllable pair . writing the state vector as corresponding to this block structure , we have because , these dynamics yield for all time . thus the dynamics for reduce to because is a controllable pair , we may choose an input to prepare any state , and thus the transformed controllable subspace is given by the states of the form . the orthogonal subspace , given by states of the form , is irrelevant to the input - output map since no input can affect these states . we now consider another problem with a similar structure . suppose the system is in some initial state and for all time . based on the output for , can we uniquely identify ? integrating the dynamics ( [ eqn : linear ] ) yields which gives a linear map ( where by we mean the output signal for ) . suppose two initial states and give the same . as is linear , the initial state must give . we call initial states giving output _ unobservable _ , since any unobservable state may be added to any other initial state without changing the output . the unobservable states form the kernel of ; as the map is linear these states form a subspace , called the _ unobservable subspace _ and denoted by . it can be shown that the unobservable subspace is given by the kernel of the _ observability matrix _ : . thus is independent of . it can also be shown that is given by the kernel of the _ observability gramian _ and the orthogonal subspace of observable states is given by the observability gramian s image . if , then the entire space is observable , and we say that is an _ observable pair_. as the observable states are given by the image of ( [ eqn : obs_gram ] ) and the controllable states are given by the image of ( [ eqn : con_gram ] ) , is an observable pair if and only if is a controllable pair . it can be shown that given any and , we can find a similarity transformation such that the transformed matrices have the block structure
\[
\widetilde{c} = c t^{-1} = \begin{bmatrix} \widetilde{c}_{1} & 0 \end{bmatrix} ,
\]
with an observable pair . writing the state vector as corresponding to the block structure , we have thus the time evolution of is never affected by , and the output signal only depends on . because is an observable pair , we can uniquely identify an initial state based on . the transformed unobservable subspace is given by the states of the form , and is irrelevant to the input - output map since no output can be affected by these states . the notions of controllability and observability give us a means of deciding whether a state affects the system s input - output map : if a state is unobservable , it does not affect the output , and if a state is uncontrollable , it is unaffected by the input .
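numerically , the gramians are obtained by solving lyapunov equations in the infinite - horizon limit ( which presupposes a stable state matrix , as assumed from the next section on ; the finite - horizon versions of the text can be obtained by direct quadrature of the defining integrals ) , and the observability matrix follows from the controllability matrix by duality . the system below is again a random stable placeholder .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, m, p = 6, 2, 1
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)   # shift the spectrum to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

def ctrb(A, B):
    blocks, M = [B], B
    for _ in range(A.shape[0] - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

def obsv(A, C):
    return ctrb(A.T, C.T).T          # duality: stacks C, CA, ..., C A^{n-1} vertically

# infinite-horizon gramians solve Lyapunov equations:
Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A^T + B B^T = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A^T Wo + Wo A + C^T C = 0

print("controllable pair :", np.linalg.matrix_rank(ctrb(A, B)) == n)
print("observable pair   :", np.linalg.matrix_rank(obsv(A, C)) == n)
print("gramians positive definite:",
      np.linalg.eigvalsh(Wc).min() > 0 and np.linalg.eigvalsh(Wo).min() > 0)
```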
only those states that are both controllable and observable are of relevance .we say that a state - space model given by matrices is a _minimal realization _ if no lower - order model gives the same input - output map . the intuition above can be made precise as follows : it can be shown that a model is a minimal realization if and only if all states are both controllable and observable , i.e. is a controllable pair and is an observable pair . given a state - space model given by , , and we may find a minimal realization by isolating only those dimensions which are both controllable and observable . to do so , we perform a _ kalman decomposition _ , which simultaneously performs the transformations ( [ eqn : con_separation ] ) and ( [ eqn : obs_separation ] ) as follows . for any state - space modelthere exists a similarity transformation such that the matrices of transformed model have the block structure and , writing the state vector as corresponding to the block structure , the controllable states are of the form and the observable states are of the form .thus only states of the form are both controllable and observable . eliminating all but the states yields the state - space model since we have only eliminated states irrelevant to the input - output map, this reduced system has the exact same as the original system .further , since states of the form were both controllable and observable , is a controllable pair and is an observable pair ; thus this model is a minimal realization .sometimes the uncontrollable and unobservable states are obvious from the form of a physical system s dynamics , e.g. some degrees of freedom are uncoupled , or some symmetry can be exploited .in other circumstances , the physics may not make it clear which states can be eliminated without affecting the input - output map . especially in the latter situation , an algorithmic method such as the kalman decomposition ( available in matlab )can be quite advantageous .nonetheless , one feels intuitively that one should always be able to find a minimal realization analytically if one is sufficiently clever .however , we do not expect in general that standard analytic methods will be useful in seeking lower - order approximations , as we will do in the next section .to apply balanced truncation , we will first assume that we have reduced the model to a minimal realization .we make the further assumption that the resulting system is exponentially stable , i.e. all eigenvalues of have strictly negative real part .( various methods exist to extend these methods to unstable systems , e.g. , , but we assume stability to prove the standard result . ) in the previous section we distinguished between states that were observable and unobservable ; we now wish to quantify the observability of the observable states .to do so , consider the output signal for that results from the initial state when there is no input .this signal is given by ( [ eqn : output ] ) .using the norm for the signal , we have where is the observability gramian given by ( [ eqn : obs_gram ] ) with . recall that the kernel of is independent of , so choosing will not change which states are observable and which states are unobservable .( system stability is required to ensure convergence of the integral in this limit . 
) scaling the norm - squared of the output by the norm - squared of the initial state yields which quantifies the observability of the states in the direction of .from ( [ eqn : obs_norm ] ) we see that the observability gramian is hermitian and positive semidefinite .as we have a minimal realization , all states are observable ( ) , and so is strictly positive definite .thus in geometric terms analogous to the moment - of - inertia tensor , defines an `` observability ellipsoid '' in the state space , with the longest principal axes along the most observable directions ( see fig . [fig : ellipsoid ] ) .a similarity transformation given by transforms the observability gramian by .as need not be unitary , this transformation may rescale the ellipsoid s axes as well as rotate them . ]we quantify controllability in a similar fashion .suppose that at a time well in the past ( ) the system is in the state , and some input drives the system for , yielding a final state . as we have a minimal realization , all states are controllable and therefore can be prepared in this fashion .for each state there is a minimum signal size required to yield .the smaller this minimum signal , the more sensitive this state is to the input signal . thus states with a smaller are said to be more controllable .consider the controllability gramian given by ( [ eqn : con_gram ] ) with .just as , is hermitian and positive semidefinite , and because we have assumed a minimal realization , is strictly positive definite and therefore invertible .it can be shown that it follows that which quantifies the controllability of the states in the direction of .just as with , defines a `` controllability ellipsoid '' in state space , with the longest principal axes along the most controllable directions .a similarity transformation given by transforms the controllability gramian by .we have thus found that the gramians give us a useful measure of a state s observability and controllability .one might ask whether the observability and controllability matrices of ( [ eqn : obs_mat ] ) and ( [ eqn : con_mat ] ) could serve a similar purpose , but in fact they are only useful for determining _ whether _ a given state is observable / controllable .one might also ask if is necessary we could choose finite , and quantify controllability and observability on a finite time horizon .however , finite does not lead to the error bound in the approximations of the next section , which is the main result of these notes . 
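the ellipsoid picture above can be made concrete with an eigendecomposition of the two gramians : the eigenvectors are the principal axes and the eigenvalues their lengths , so the most and least observable ( or controllable ) unit directions are immediate . the sketch uses a random stable placeholder system and the infinite - horizon gramians , as in the limit just discussed .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)   # stable placeholder system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability gramian (infinite horizon)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian  (infinite horizon)

# observability ellipsoid: x0^T Wo x0 is the output energy of the free decay from x0
evals_o, evecs_o = np.linalg.eigh(Wo)
most_obs, least_obs = evecs_o[:, -1], evecs_o[:, 0]

# controllability: x^T Wc^{-1} x is the minimum input energy needed to reach x
evals_c, evecs_c = np.linalg.eigh(Wc)
most_ctrl = evecs_c[:, -1]                       # cheapest unit direction to excite

print("output energy, most / least observable direction:",
      most_obs @ Wo @ most_obs, least_obs @ Wo @ least_obs)
print("min input energy to reach the most controllable direction:",
      most_ctrl @ np.linalg.solve(Wc, most_ctrl))
```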
with the above quantification of observability and controllability, one might be tempted to prescribe some algorithm like eliminating the least observable or least controllable dimensions in the state space to yield a lower - order approximate model .however , such an approach would not necessarily be successful .suppose , for example , that the least observable states were in the direction of the unit vector , but that states in this direction were extremely controllable .thus a small signal might lead to the internal state with large .though this state is the least observable , might be sufficiently large that the resulting output signal is non - negligible .instead , we wish to use observability and controllability to yield a single measure of a state s importance to the input - output map .it can be shown that given two positive definite square matrices and of the same size , there exists an invertible such that where is diagonal with positive real diagonal entries .such a similarity transformation is called a _ balancing transformation_. geometrically , balancing transforms the observability and controllability ellipsoids so that they are identical and their principal axes lie on the coordinate axes of the state space ( see fig . [fig : balancing ] ) . transforms the observability and controllability ellipsoids to an identical ellipsoid aligned with principle axes along the coordinate axes.[fig : balancing ] ] the resulting is unique up to permutation of the diagonal elements , so we may choose yielding the transformed gramians \ ] ] with .the are called _ hankel singular values _ ( hsvs ) ; in this transformed system is the quantitative measure of both the observability and controllability of the unit basis vector .thus the basis vectors have been sorted in order of relevance to the input - output map .once the system is balanced , we may truncate the state - space dimensions with low hsvs to yield lower - order approximate models .intuitively , the smaller the hsvs corresponding to truncated dimensions , the better the approximation .we will now make this idea precise .let the balanced state - space model with sorted hsvs be given by matrices , , and .choosing some such that is strictly greater than , we write the state vector where gives the first coordinates , and gives the last coordinates .we then write the state - space model in the corresponding block structure and truncate the least significant dimensions of the model , yielding the order model this procedure is called _ balanced truncation_. 
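the balancing transformation can be computed from the two gramians with the standard square - root recipe ( cholesky factors of the gramians followed by an svd ) ; the sketch below does this and truncates to order r . it is a bare - bones stand - in for library routines ( e.g. balred in the python - control package ) and assumes a stable , minimal model . in the balanced coordinates both transformed gramians equal the diagonal matrix of hankel singular values , which is easy to verify by recomputing them for the returned realisation .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable, minimal model to order r."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)       # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)     # observability gramian

    S = cholesky(Wc, lower=True)                      # Wc = S S^T
    R = cholesky(Wo, lower=False)                     # Wo = R^T R
    U, sigma, Vt = svd(R @ S)                         # sigma = Hankel singular values
    V = Vt.T

    T     = np.diag(sigma ** -0.5) @ U.T @ R          # x -> balanced coordinates
    T_inv = S @ V @ np.diag(sigma ** -0.5)            # balanced coordinates -> x
    Ab, Bb, Cb = T @ A @ T_inv, T @ B, C @ T_inv      # balanced realisation

    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], sigma    # keep the r largest-HSV directions

# small random stable example
rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print("Hankel singular values:", np.round(hsv, 5))
```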
we now give bounds on the resulting approximation error . let be some input signal on that is finite with respect to the norm
\[
\label{eqn:l_2_norm}
\| u \|_{2} = \left( \int_{0}^{\infty} \| u(t) \|^{2} \, dt \right)^{1/2} ,
\]
and let and be the resulting output signals of the original and truncated systems respectively . stability of the original system ( [ eqn : original ] ) and the strict inequality guarantees stability of the truncated system ( [ eqn : truncated ] ) ; since both systems are stable , the output signals and are also finite with respect to the norm ( [ eqn : l_2_norm ] ) , as is the error signal . now , letting be the _ distinct _ hsvs of the truncated dimensions , it can be shown that thus we have both an upper and a lower bound on the worst - case errors resulting from balanced truncation . given the hankel singular values of a system , we may truncate to a system of desired order and bound the resulting error , or we may choose the order of truncation based on the maximum tolerable error . the advantages of balanced truncation as a method of approximating the input - output map are two - fold . first , the error is bounded as above , in contrast to more traditional approximations where errors are estimated by the order of some small parameter , e.g. . second , the algorithmic process is efficient . balanced truncation does not necessarily yield the _ optimal _ order- approximation in the sense of the error bound ( [ eqn : bound ] ) , but finding the optimal approximation may be computationally difficult . for single - input single - output systems ( i.e. , ) , the error bound may also be described in terms of frequency responses . given a sinusoidal input , a state - space model will give an output with the same frequency and a frequency - dependent amplitude and phase shift : let and be the amplitude and phase shifts for the exact system , and let and be the shifts for the approximate system . writing the exact and approximate shifts in complex form , we have thus , even though sinusoidal inputs and outputs are unbounded in the norm ( [ eqn : l_2_norm ] ) , the error is controlled in this fashion . according to , balanced realizations first appeared in the control literature in 1981 , and the proof of the error bound on truncated models first appeared in 1984 . until recently , however , the physics community has made little use of this powerful tool . we believe that balanced truncation will be of particular value when building simulations of and theoretical models for the evolution of macroscopic quantities in large complex systems , and it is hoped that these notes will be helpful in such studies .
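as a closing numerical check of the bound discussed above , the frequency responses of the full and truncated models can be sampled and compared ; the sketch reuses the balanced_truncation helper from the previous sketch ( so it is not self - contained ) and a random single - input , single - output placeholder system .

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
n = 10
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)     # stable placeholder system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = np.zeros((1, 1))

r = 4
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)   # helper defined in the previous sketch

w = np.logspace(-2, 3, 500)
_, G  = signal.freqresp((A, B, C, D), w)            # frequency response of the full model
_, Gr = signal.freqresp((Ar, Br, Cr, D), w)         # frequency response of the truncation

worst = np.max(np.abs(G - Gr))           # sampled approximation of the sup over frequency
bound = 2.0 * np.sum(hsv[r:])            # twice the sum of the truncated HSVs (distinct for a random model)
print(f"max |G - Gr| = {worst:.3e}  <=  2 * sum(truncated HSVs) = {bound:.3e}")
```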
balanced truncation , a technique from robust control theory , is a systematic method for producing simple approximate models of complex linear systems . this technique may have significant applications in physics , particularly in the study of large classical and quantum systems . these notes summarize the concepts and results necessary to apply balanced truncation .
during the phanerozoic ( 540mya present ) , the diversity of the biosphere ( total number of species , also known as biodiversity ) has increased dramatically .the trend is most clear for intermediate taxonomic levels ( families and orders ) , as fossil species data is too incomplete and higher taxonomic levels ( phylum and class diversity ) have been fairly constant since the paleozoic .a recent review is given by benton(2001)benton01 .the most completely documented diversity trend is amongst marine animals , which exhibits a plateau during the paleozoic ( 540 - 300mya ) , followed by an accelerating diversity curve since the end of the permian .the corresponding trend amongst continental , or land animals is characterised by a clear exponential growth since the first species colonised dry land during the ordovician .benton argues that the terrestrial trend is more characteristic than the marine trend , owing to the far greater diversity shown amongst land animals , even though the marine fossil record is more complete .similar trends have been reported for plants .bedau _ et al._(1998)bedau - etal98 introduced a couple of measures to capture the amount of adaptation happening in a general evolutionary system .the basic idea is to compare the dynamics of the system with a neutral shadow system in which adaptation is destroyed by randomly mixing adaptive benefits amongst the components of the system ( think of the effects ultra - marxism might have on an economy ! ) .the amount of adaptive activity ( numbers of each component in excess of the shadow model integrated over time ) and adaptive creativity ( numbers of speciations per unit time exceeding a threshold of activity ) is measured .bedau has also introduced a general neutral shadow model that obviates the need to generate one on a case by case basis . using these measures , it is possible to distinguish 3 classes of activity : 1 .unadaptive evolution , when the mutation rate is so high that organisms have insufficient time to have their adaption tested before being killed off by another mutation 2 . adapted but uncreative evolution , when species are highly adapted , but mutation is so low that ecosystems remain in perpetual equilibrium 3 . creative , adaptive evolution ,when new species continuously enter the system , and undergo natural selection the biosphere appears to be generating open ended novelty not only is it creative , but it is _ unboundedly _ creative .evidence for this exists in the form of the intricate variety of mechanisms with which different organisms interact with each other and the environment , and also in the sheer diversity of species on the planet . 
whilst there is no clear trend to increasing organismal complexity, there is the clear trend to increasing diversity mentioned above , which is likely to be correlated with ecosystem complexity .bedau takes diversity as a third evolutionary measure , and distinguishes between _ bounded _ and _ unbounded creative _ evolution , according to whether diversity is bounded or not .all artificial evolutionary systems examined to date have , when creative , exhibited bounded behaviour .this was also the case of the _ _ model .bedau has laid down a challenge to the artificial life community to create an unbounded , creative evolutionary system .the heart of the idea of unbounded creative evolutionary activity is the creation and storage of information .the natural measure of this process is _ information based complexity _ , which is defined in the most general form in .the notion , drawing upon shannon entropy and kolmogorov complexity is as follows : a language , is a countable set of possible descriptions , and a map .we say that have the same meaning iff .denote the length of as and .the information content ( or _ complexity _ ) of a description is given by : in the usual case where the interpreter ( which defines ) only examines a finite number of symbols to determine a string s meaning , is bounded above by where is the size of the alphabet .this is equivalent to the notion of _ prefix codes _ in algorithmic information theory .now consider how one might measure the complexity of an ecosystem .diversity is like a count of the number of parts of a system it is similar to measuring the complexity of a motor car by counting the number of parts that make it up .but then a junkyard of car parts has the same complexity as the car that might be built from the parts . in the case of ecosystems ,we expect the interactions between species to be essential information that should be recorded in the complexity measure .but a simple naive counting of food web connections is also problematic , since how do we know which connections are significant to a functioning ecology ? to put the matter on a more systematic footing , consider a tolerance such that are considered identical if .now two different population dynamics and , where can be considered identical ( i.e. iff and may diverge exponentially in time , and that a better definition of equivalence would also require similarity of the attractor sets as well . the results derived here would only be a lower bound of the ecosystem complexity under this more refined definition of equivalence . ] at this point for the sake of concreteness , let us consider lotka - volterra dynamics : where refers to elementwise multiplication , is the net population growth rate and is the matrix of interspecific interaction terms . over evolutionary time , the growth coefficients , the self - interaction coefficients and the interspecific interaction coefficients form particular statistical distributions and repectively . since inequality ( [ defeq ] )must hold over all of the positive cone , it must hold for population density vectors and . in which case eq .( [ defeq ] ) can be broken into independent component conditions on and can be written : ) .the term for the growth coefficients is given by : where , and is the ecosystem diversity . 
the complexity term for the interaction terms is given by if is chosen very small , the total ecosystem complexity is proportional to .this is because the zeros of the interaction matrix are encoding information .however , if , then ( [ c_beta ] ) becomes : this gives flesh to our intuitive notion that complexity should somehow be proportional to the number of connections making up the food web .empirically , lotka - volterra dynamics has been shown to exhibit an inverse relationship between connectivity and diversity .may(1972)may72 demonstrated this relationship in connection with dynamical stability .however , it seems unlikely that an ecology undergoing evolution is often stable .if this result holds more generally , it implies that complexity is directly proportional to diversity , so that diversity indeed is a good proxy for ecosystem complexity .although earlier foodweb studies demonstrated this hyperbolic diversity - connectivity relationship , more recently collected data suggests a relationship of the form , with .30.4 .if complexity indeeds scales superlinearly with diversity as suggested by latter data , then a system displaying open - ended diversity growth is indeed growing in complexity , however a system displaying bounded diversity growth may still be growing in complexity ._ _ is an evolutionary ecology , and is to my knowledge the first published account of population dynamics being linked to an evolutionary algorithm .the next model to be developed in this genre is webworld , which features a more realistic ecological dynamics , and handles resource flow issues better .other models in this genre have appeared recently ._ _ is also the name of a software package used for implementing this model , as well as other models .the software is available from .the model consists of lotka - volterra ecology : is the population density vector , the growth rates ( net births - deaths in absence of competition ) , the interaction matrix , the ( species specific ) mutation rates and the migration rate . 
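As a small illustration of the Lotka-Volterra ecology written above (with '*' denoting elementwise multiplication), the sketch below advances the population densities with a simple Euler step. The number of species, the random coefficients, and the step size are arbitrary assumptions, and the mutation and migration terms of the full model are omitted here.

```python
import numpy as np

def lv_step(n, r, beta, dt=0.1):
    # dn/dt = n * (r + beta @ n), '*' elementwise
    return n + dt * n * (r + beta @ n)

rng = np.random.default_rng(1)
nsp = 5                                       # number of species (assumed)
r = rng.uniform(-0.1, 0.1, nsp)               # net growth rates
beta = rng.uniform(-0.01, 0.0, (nsp, nsp))    # interspecific interaction terms
np.fill_diagonal(beta, -0.02)                 # negative self-interaction (self-limiting)
n = np.full(nsp, 10.0)                        # initial population densities

for _ in range(1000):
    n = np.maximum(lv_step(n, r, beta), 0.0)  # clip negative densities to zero
print("final densities:", n)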
in the panmictic case ,the term is left out , and refers to total populations , rather than population densities .the mutation operator randomly adds new species into the system with phenotypic parameters ( , , and ) varied randomly from their parent species .a precise documentation of the mutation operator can be found in the _ _ technical report .the vector has integral valued components in assigning a real valued vector to it , the values are rounded up _randomly _ with probability equal to the fractional part .for instance , the value 2.3 has a 30% probability of being rounded up to 3 , and a 70% probability of being rounded down .negative values are converted to zero .if a species population falls to zero , it is considered extinct , and is removed from the system .it should be pointed out that this is a distinctly different mechanism than the threshold method usually employed to determine extinction , but is believed to be largely equivalent .diversity is then simply the number of species with , and connectivity is the proportion of interspecific connections out of all possible connections : spatial _ _is implemented as a spatial grid , with the term being replaced by the usual 5-point stencil .a _ specialist _ is a species that only depends on a restricted range of food sources , as opposed to a _ generalist _ which might depend on many food sources .a specialist has fewer incoming predator - prey links in the food web than does a generalist .much evolutionary variety is expressed in sophisticated defence mechanisms that serve to suppress outgoing predator - prey links . in this context, i will use the term _ specialist _ in a more general sense to refer to species with a small number of food web links . in order for the panmictic ecolab model to generate an increasing diversity trend, a corresponding specialisation trend must also be present ( which it is nt in the case of the usual mutation operator ) .interestingly , specialisation is usually considered to be the default mode of evolution .generalists only exist because they happen to be more robust against environmental perturbation .this experiment involves modifying the mutation operator to bias it towards removing interaction terms .the usual _ _ mutation operator operator adds or removes connections according to , where is a uniform random variate ( _ _ technical report)ecolab - tech - report . in this experiment , a new experimental parameter ( ` gen_bias ` )is introduced such that and the number of connections added or deleted is given by . by specifying a very negative value of , the mutation operator will tend to produce specialists more often than generalists .the code for this experiment is released as _= = = = ( -1,-.25)(5,3 ) ( 0,0)(0,3 ) ( 0,0)(5,0 ) ( 0,0)(3,3 ) ( 2,0)(2,3 ) ( 2,1)(3,1 ) ( 3,1)max ( 0,1.75)(2,2)(5,3 ) ( -.5,1.5) ( 2.5,-.5) a typical run with is shown in figs . [ diversity][connectivity ] .as described in , activity is weighted by the population density , not just presence of a particular species .the results show unbounded creative evolutionary activity ( class 4 behaviour ) . as can be seen from fig .[ connectivity ] , the system remains close to the hyperbolic critical surface , yet the dynamic balance has been removed by the specialisation trend . 
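The probabilistic rounding, extinction, diversity, and connectivity bookkeeping described above can be sketched as follows; the helper names and random-number handling are illustrative and are not taken from the ecolab sources.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_round(x):
    """Round up with probability equal to the fractional part; clip negatives to zero."""
    x = np.maximum(x, 0.0)
    frac, whole = np.modf(x)
    return whole.astype(int) + (rng.random(x.shape) < frac)

def diversity(n):
    # number of species with nonzero population (extinct species are removed)
    return int(np.count_nonzero(n))

def connectivity(beta, n):
    """Fraction of realized interspecific links among the surviving species."""
    alive = n > 0
    d = int(alive.sum())
    if d < 2:
        return 0.0
    sub = beta[np.ix_(alive, alive)]
    offdiag = sub[~np.eye(d, dtype=bool)]
    return np.count_nonzero(offdiag) / offdiag.size

print(stochastic_round(np.array([2.3, -0.4, 5.0])))   # e.g. [2 or 3, 0, 5]
```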
if we assume that , then unbounded diversity growth can only happen if vanishes at least as fast as ( see fig .[ div - conn ] ) .an ecosystem consisting entirely of specialists has a constant number of foodweb links per species , or .the presence of generalists in the ecosystem damps the growth in diversity , and unbounded growth is only possible if the proportion of generalists continually diminishes over time .in , i suggested that one possible explanation for the diversity growth since the end of the permian was the breakup of the supercontinent pangaea . a simple estimate given in that paper indicated that the effect might account for a diversity growth of about 3.5 times that existing during the permian .this was remarkably similar to the growth reported by , however it is worth noting that benton s data referred to families , not species .it is expected that the numbers of species _ per family _ also increased during that time . furthermore ,when continental organism are included , familial diversity today is more like 5 times the diversity during the permian .unbeknownst to me at the time , vallentine(1973)vallentine73 had proposed essentially the same theory , called _ biogeographic provincialism _ ( the notion that the number of biological provinces is increased through rearrangement of the continents ) .the idea received some serious support by signor(1990)signor90 , although in a later review he was less enthusiastic .tiffney and niklas(1990)tiffney - niklas90 examined plant diversity in the northern hemisphere and concluded that plant diversity correlated more with the land area of lowlands and uplands , rather than continental breakup .benton(1990)benton90 is characteristically sceptical of biogeographic provicialism as an explanation of the diversity trend through the phanerozoic .biogeography theory depends on an assumed dynamic balance between speciation and extinction , which appears to be contradicted by the fossil data for continental animals , which shows a strong exponential increase in diversity through the phanerozoic . since the __ model has this dynamic balance between speciation and extinction when the dynamics self - organise to the critical surface , i experimented with the spatial version of _ _ reported in .the maximum migration rate was swept up and down exponentially in time according to , _i.e. _ with a time constant of about 9500 timesteps , by scaling by 0.9 every 1000 timesteps ( and then inverting the scaling factor every 174,000 timesteps ) .it is a little hard to relate_ _ figures to biological evolution .the maximum growth rate in _ _ is , so the doubling time for the fastest organism in the ecosystem is around 100 timesteps .this might correspond to a year or so of real time .so migration rates are being forced much faster than is typical in the real world . however , in ecolab we also tend run the mutation rate quite high , with adaptive speciations happening every 1000 timesteps or so .if the mutation rate is too high , natural selection has no chance to weed out non - adaptive species , if too low , too much computing resource is need to obtain interesting dynamics . in practice ,the mutation rate is set about 2 orders of magnitude less than the critical amount needed for adaptation . in terms of speciation rates, the migration rate time constant might correspond to something of the order of years , instead of the 10 years or so one gets from considerations of doubling times .this code is released as ecolab 3.5 . 
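A sketch of the migration-rate forcing described above, in which the maximum migration rate is multiplied by 0.9 every 1000 timesteps and the scaling factor is inverted every 174,000 timesteps so that the rate sweeps down and back up exponentially; the initial value and loop bounds are assumptions for illustration.

```python
def migration_schedule(t_max=18_000_000, mig0=0.01):
    """Yield the maximum migration rate at every timestep of the sweep."""
    mig, factor = mig0, 0.9
    for t in range(t_max):
        if t > 0 and t % 174_000 == 0:
            factor = 1.0 / factor        # reverse the sweep direction
        if t > 0 and t % 1000 == 0:
            mig *= factor                # exponential scaling every 1000 steps
        yield mig

# first few kilo-steps of the schedule
sched = migration_schedule()
vals = [next(sched) for _ in range(5001)]
print(vals[0], vals[1000], vals[2000], vals[5000])
```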
due to a design flaw , performance of this code scales poorly with diversity , unless the code is run in parallel with one cell per execution thread . for this experiment ,the runs took place on a spatial grid , on a four processor parallel computer supplied by the australian centre for advanced computing and communications , apart from one run of a grid on a 9 processor system .work is currently underway to implement a spatial version of the _ _ 4.x code , which does not suffer from this performance problem .the results of a typical run is shown in figure [ mig - sweep ] .the run started with a maximum migration rate of 0.01 at the bottom right hand corner of the figure and swept down to before increasing .the migration rate was swept back and forwards 5 times over the 18 million time steps in the run .= the first thing to note was that the expected response of diversity to migration rate was not there .we would expect a response of the form , with in the case , and varying smoothly between 1 for the infinite migration ( panmictic ) case and 2 for zero migration .these results tentatively indicate that possibly does not vary smoothly at all , but is nearly constant for most values of .this needs to be resolved with further study .the second thing to note is the completely unexpected `` resonance '' at about .it is not peculiar statistical aberration , since the same result was obtained with completely different random number seeds , and fixing the migration rate at the resonance value produces an exponential growth in diversity ( figure [ resonance ] ) .= 3 more tests were performed to determine if this result is an artifact of discretisation , or a feature of the dynamics .the first involved changing the grid to a grid , which did not affect the location of the resonance .the second involved scaling all parameters in the model ( , , ) by 0.1 , which is equivalent to changing the timescale .if the effect was purely due to dynamics , one would expect the resonance to shift one order of magnitude higher on the scale , however little qualitative different was observed .the third test involved performing the migration operator every 1000 timesteps , instead of 100 .this did change the resonance value by 1 order of magnitude , ruling out certain classes of software faults .the choice of diversity as a proxy measure for ecosystem complexity is a good choice .complexity is obviously constrained by diversity , so that bounded diversity dynamics also implies bounded complexity dynamics .however , in the case of evolutionary lotka - volterra dynamics , the system will tend to self - organise to a critical surface where speciation is balanced by extinction .this surface defines the maximum allowed complexity for a given diversity value , which turns out to be proportional to the diversity .the analysis presented in this paper could be extended to other evolutionary ecologies as well . whilst there is still debate about whether the biosphere is exhibiting unbounded complexity growth ,i am persuaded by bentons(2001)benton01 argument that the growth is nothing short of spectacular . in this paperi examined two possible mechanisms for diversity growth _ specialisation _ which proves capable of delivering unbounded creative evolution in _ _ , and _biogeographic provincialism_. 
whilst i was only expecting biogeographic changes to deliver a modest impact on diversity , _ _ delivered a unexpected result of a `` resonance '' , where if the migration rate was tuned to this value , diversity grew exponentially .i would like to thank the _australian centre for adavanced computing and communications _ for the computing resources needed to carry out this project .bedau , m. a. ; snyder , e. ; and packard , n. h. 1998 .a classification of long - term evolutionary dynamics . in adami , c. ; belew , r. ; kitano , h. ; and taylor , c. , eds ., _ artificial life vi _ , 228237 .cambridge , mass .: mit press .channon , a. 2001 .passing the alife test : activity statistics classify evolution in geb as unbounded . in kelemen , j. , and sosk , p. , eds ., _ advances in artificial life _ , volume 2159 of _ lecture notes in computer science _ , 417 .berlin : springer .rechtsteiner , a. , and bedau , m. a. 1999 . a genetic neutral model for quantitative comparison of genotypic evolutionary activity .in floreano , d. ; nicoud , j .- d . ; and mondada , f. , eds . , _ advances in artificial life _ , volume 1674 of _ lecture notes in computer science _ , 109 .berlin : springer .standish , r. 1998 .cellular ecolab . in standish ,r. ; henry , b. ; watt , s. ; marks , r. ; stocker , r. ; green , d. ; keen , s. ; and bossomaier , t. , eds . , _ complex systems 98 complexity between the ecos : from ecology to economics_. http://life.csu.edu.au/complex : complexity onlinealso in _ complexity international _, * 6*. tiffney , b. h. , and niklas , k. j. 1990 .continental area , dispersion , latitudinal distribution and topographic variety : a test of correlation with terrestrial plant diversity . in allmon , w. , and norris , r. d. , eds ., _ biotic and abiotic factors in evolution_. chicago : univ .chicago press .
bedau has developed a general set of evolutionary statistics that quantify the adaptive component of evolutionary processes . on the basis of these measures , he has proposed a set of 4 classes of evolutionary system . all artificial life systems examined so far fall into the first 3 classes , whereas the biosphere , and possibly the human economy , belong to the 4th class . the challenge to the artificial life community is to identify exactly what the difference is between these natural evolutionary systems and existing artificial life systems . at alife vii , i presented a study using an artificial evolutionary ecology called _ _ . bedau's statistics captured the qualitative behaviour of the model . _ _ exhibited behaviour from the first 3 classes , but not class 4 , which is characterised by unbounded growth in diversity . _ _ exhibits a critical surface given by an inverse relationship between connectivity and diversity , above which the model cannot tarry long . thus in order to get an unbounded diversity increase , there needs to be a corresponding connectivity - reducing ( or food web pruning ) process . this paper reexamines this question in light of two possible processes that reduce ecosystem connectivity : a tendency towards specialisation , and an increase in biogeographic zones through continental drift .
the random domino automaton , proposed in , is a stochastic cellular automaton with avalanches .it was introduced as a toy model of earthquakes , but can be also regarded as an substantial extension of 1-d forest - fire model proposed by drossel and schwabl . the remarkable feature of the rda is the explicit one - to - one relation between details of the dynamical rules of the automaton ( represented by rebound parameters defined in cited article and also below ) and the produced stationary distribution of clusters of size , which implies distribution of avalanches . it is already shown how to reconstruct details of the `` microscopic '' dynamics from the observed `` macroscopic '' behaviour of the system . as a field of application of rda we studied a possibility of constructing the ito equation from a given time series and - in a broader sense - applicability of ito equation as a model of natural phenomena .for rda - which plays a role of a fully controlled stochastic natural phenomenon - the relevant ito equation can be constructed in two ways : derived directly from equations and by histogram method from generated time series .then these two results are compared and investigated in .note that the set of equations of the rda in a special limit case reduces to the recurrence , which leads to known integer sequence - the motzkin numbers , which establishes a new , remarkable link between the combinatorial object and the stochastic cellular automaton . in the present paper a finite version of random domino automaton is investigated .the mathematical formulation in finite case is precise and the presented results clarify which formulas are exact and allow to estimate approximations we impose in infinite case presented in .we also show , that equations of finite rda can reproduce results of , when size of the system is increasing and distributions satisfy an additional assumption ( for big ) . on the other hand , a time evolution of finite rda can exhibit a periodic - like behaviour ( the assumption for big is violated ) , which is a novel property .thus , based on the same microscopic rules , depending on a choice of parameters of the model , a wide range of properties is possible to obtain . in particular ,such behaviour is interesting in the context of recurrence parameters of earthquakes ( see e.g. ) . for other simple periodic - like models ,see .the finite case makes an opportunity to employ markov chains techniques to analyse rda .investigating the automaton in markov chains framework we arrive at several novel conclusions , in particular related to expected waiting times for some specified behaviour .this article completes and substantially extends previous studies of rda on the level of mathematical structure .we analyse properties of the automaton related to time evolution and others , as a preparation for further prospective comparisons with natural phenomena , including earthquakes .a matter of adjusting the model to the real data is left for the forthcoming paper .the plan of the article is as follows . mimicking in section [ sec : def ] we define the finite rda . in section [ sec : equations ] we derive respective equations for finite rda . in section [ sec : cases ] we will specify them for some chosen cases . in section [ sec : markov ] we will shortly describe markov chains setting and describe time aspects of frda .several examples are presented in section [ sec : examples ] .the last section [ sec : conclusions ] contain conclusions and remarks . 
in the appendixwe show non existence of exact equations for rda as well as present supplementary formulas and table [ tab : statesn10 ] displaying all states of rda of size .the rules for finite random domino automaton are the same as in . we assume : + - space is 1-dimensional and discrete consists of cells ; + - periodic boundary conditions ( the last cell is adjacent to the first one ) ; + - cell may be in one of two states : empty or occupied by a single ball ; + - time is discrete and in each time step an incoming ball hits one arbitrarily chosen cell ( each cell is equally probable ) .the state of the automaton changes according to the following rule : + if the chosen cell is empty it becomes occupied with probability ; with probability the incoming ball is rebounded and the state remains unchanged ; + if the chosen cell is occupied , the incoming ball provokes an avalanche with probability ( it removes balls from hit cell and from all adjacent cells ) ; with probability the incoming ball is rebounded and the state remains unchanged .the parameter is assumed to be a constant but the parameter is allowed to be a function of size of the hit cluster .the way in which the probability of removing a cluster depends on its size strongly influences evolution of the system and leads to various interesting properties , as presented in the following sections .we note in advance that in fact there is only one effective parameter which affects properties of the automaton .changing of and proportionally in a sense corresponds to a rescaling of time unit .a diagram shown below presents an automaton of size , with three clusters ( of size and ) in time .an incoming ball provokes an relaxation of the size * two * , thus in time there are two clusters ( of size and ) .l c|c|c|c|c|c|c|c|c|c|c|c|c|c & & + time & & & & & & & & & & & & & & + + time & & & & & & & & & & & & & & + & & & + + denote by the number of clusters of length , and by the number of empty clusters of length . due to periodic boundary conditions , the number of clusters is equal to the number of empty clusters in the lattice if two cases are excluded - when the lattice is full ( single cluster of size ) and when the lattice is empty ( single empty cluster of size ) .hence for we have the density of the system is defined as in this article we investigate a stationary state of the automaton and hence the variables and others are expected values and do not depend of time .in this section we derive equations describing stationary state of finite rda .the general idea of the reasoning presented below is : the gain and loss terms balance one another .the density may increase only if an empty cell becomes occupied , and the gain per one time step is .it happens with probability .density losses are realized by avalanches and may be of various size .the effective loss is a product of the size of the avalanche and probability of its appearance .any size contribute , hence the balance of reads we emphasise , the above result is exact no correlations were neglected .its form is directly analogous to the respective formula in .* gain . * a new cluster( can be of size only ) can be created in the interior of empty cluster of size . if the empty cluster is of size , then each cell is in interior .summing up contributions for all empty clusters , the probability is which can take a form ( for ) * loss . 
* two wayscontribute : joining a cluster with another one and removing a cluster due to avalanche .joining of two clusters can occur if there exists an empty cluster of length between them .the exception is when the empty -cluster is the only one empty cluster , and the system consists of a single cluster of length .hence , the probability of joining two clusters is the probability of avalanche is just gathering these terms one obtains equation for balance of the total number of clusters again we emphasise that the above result is exact no correlations were neglected .finite size of the system reflects in the appearance of instead of in the respective formula in .* there are two modes .+ ( a ) enlarging - an empty cluster on the edge of an -cluster becomes occupied .there are two such empty clusters except for the case when system contains a single cluster of length .hence , the respective rates are ( b ) relaxation rate for any is given by * gain .* again , there are two modes . + ( a ) enlarging . for ,there are following rates depending on the size of the cluster where is a probability that the size of empty cluster adjacent to the -cluster is bigger than .it is clear that formula does not have a factor , because there is only one empty cluster ( of size ) .\(b ) merger of two clusters up to the cluster of size .two clusters : one of size and the other of size will be combined if the ball fills an empty cell between them . probability is proportional to the number of empty -clusters between -cluster and -cluster , where is a probability of such merger .for there is a single cluster in the lattice ( there are no two clusters to merge ) - filling the gap between ends of -cluster is already considered in ( a ) . gathering the terms , one obtains where .the last equation has simple explanation .the state with all cells being occupied ( corresponding to ) can be achieved only from the state with a single empty cell ( corresponding to ) with probability . on the other hand ,the automaton leaves the state with all cells being occupied with probability .note that equations and are exact .correlations in the systems reflect in appearing of multipliers and .their values depends on possible configurations of states of the automaton . as shown in the appendix , for exact formulas for and as functions of do not exist .hence , it is necessary to propose approximated formulas .a mean field type approximation for is for a given cluster of size , the probability of appearance of an empty cluster of size is calculated as proportional to the number of empty -clusters divided by the sum of the numbers of all empty clusters with size not exceeding , because there is no room for larger .when merger of two clusters up to a cluster of size is considered , the room denoted by is of size and the room denoted by is of size - see a diagram below . hence a mean field type approximation for is of the form it is also instructive to consider another approximation section [ sec : examples ] contains quantitative estimation of proposed approximations . comparison of this approximation with exact results for small sizes is discussed in section [ sec : conclusions ] . in the paper an assumption of independence of clusters was considered . to have it adequate, it is required that there are no limitations in space , like those encountered when formulas and were considered . for systems that are big enough , i.e. 
, when , an empty cluster adjacent to a given -cluster can be of any size , and thus this is consistent with the requirement that when , which is required to have moments of the convergent .similarly , these formulas substituted into - give the respective set of equations considered in .the same reasoning can be applied to balance equations .the form of equation is left unchanged under the limit . for equation , , and it becomes of the form presented in .for fixed form of rebound parameters equations describing the automaton can be written in more specific form .this is the case for balance equations and , as well as for formulas for average cluster size and average avalanche size we emphasize , these formulas are exact correlations are encountered .we consider three special cases investigated in detail and illustrated by examples below. for and equation is of the form and equation also formulas for and are simplified only a little .equation is of the form hence the density is given by remarkably neat ( end exact ) formula note that there is no dependence on the size of the system ; for it remains the same .equation can be written as where we use equations and .hence the formula for is of the form in direct analogy with in case .thus , plays the role of , as indicated also in balance of equation .the formula for is the average cluster size is given by the average avalanche size is equal to the average cluster size because each cluster has the same probability to be removed from the lattice .the above formulas are exact ( include correlations ) and have good thermodynamic limit ( ) .note also that variables and depend on single parameter .formulas with dependence on can be rewritten as functions of density .equation is of the form equation can be written as where equation is used , namely . the average cluster size and the average avalanche size where note that also these formulas are exact ..states for the size of the lattice . [ cols="^,^,^,^",options="header " , ]the author would like to express his gratitude to professors zbigniew czechowski , adam doliwa and maciej wojtkowski for inspiring comments and discussions .m. biaecki and z. czechowski . on a simple stochastic cellular automaton with avalanches : simulation and analytical results . in v.de rubeis , z. czechowski , and r. teisseyre , editors , _ synchronization and triggering : from fracture to earthquake processes _ , pages 6375 .springer , 2010 .
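For concreteness, the update rule of the finite random domino automaton defined at the beginning of this section can be sketched as follows: a ball hits a uniformly chosen cell of a periodic lattice; an empty cell becomes occupied with probability nu, while an occupied cell triggers an avalanche with probability mu(k) that removes the whole cluster of size k containing it. The lattice size, the rebound parameters, and the cluster search below are illustrative choices, not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def cluster_of(state, i):
    """Indices of the occupied cluster containing cell i on a periodic lattice."""
    N = state.size
    idx = [i]
    j = (i - 1) % N
    while state[j] and j != i:
        idx.append(j); j = (j - 1) % N
    j = (i + 1) % N
    while state[j] and j != i and j not in idx:
        idx.append(j); j = (j + 1) % N
    return idx

def rda_step(state, nu, mu):
    N = state.size
    i = rng.integers(N)                       # each cell equally probable
    if not state[i]:
        if rng.random() < nu:
            state[i] = True                   # the ball sticks
    else:
        cluster = cluster_of(state, i)
        if rng.random() < mu(len(cluster)):
            state[cluster] = False            # avalanche removes the cluster
    return state

N = 30
state = np.zeros(N, dtype=bool)
nu, mu = 0.6, lambda k: 0.1 / k               # e.g. mu(k) proportional to 1/k (assumed)
for _ in range(10_000):
    state = rda_step(state, nu, mu)
print("occupied cells:", int(state.sum()), "of", N)
```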
a finite version of the random domino automaton ( frda ) - recently proposed in as a toy model of earthquakes - is investigated . the respective set of equations describing the stationary state of the frda is derived and compared with the infinite case . it is shown that for systems of large size these equations coincide with the rda equations . we demonstrate the non - existence of exact equations for size and propose appropriate approximations , the quality of which is studied in examples obtained within the markov chains framework . we derive several exact formulas describing properties of the automaton , including time aspects . in particular , a way to achieve quasi - periodic behaviour of the rda is presented . thus , based on the same microscopic rule - which produces exponential and inverse - power like distributions - we extend the applicability of the model to quasi - periodic phenomena .
we consider the de - convolution problem in the case of signals where the convolution kernel is space invariant . in that casethe observed signal , is expressible as where denotes the true signal .the approximation of the integral operator via an elementary rectangle formula over an equispaced grid with nodes leads to a linear system with equations .when imposing proper boundary conditions , the related undetermined linear system becomes square and invertible and fast filter algorithms of tikhonov type can be employed . when talking of fast algorithms , given the size of the related matrices , we mean an algorithm involving a constant number ( independent of ) of fast trigonometric transforms ( fourier , sine , cosine , hartley transforms ) so that the overall cost is given by arithmetic operations .for instance , when dealing with periodic boundary conditions , we obtain circulant matrices which are diagonalizable by using the celebrated fast fourier transform ( fft ) . unfortunately such boundary conditions are not always satisfactory from the viewpoint of the reconstruction quality . in fact , if the original signal is not periodic the presence of ringing effects given by the periodic boundary conditions spoils the precision of the reconstruction .more accurate models are described by the reflective and antireflective boundary conditions , where the continuity of the signal and of its derivative are imposed , respectively .however the fast algorithms are applicable in this context only when symmetric point spread functions ( psfs ) are taken into consideration .the psf represents the blur of a single pixel in the original signal .therefore , since it is reasonable to expect that the global light intensity is preserved , the psf is nothing else that a global mask having nonnegative entries and total sum equal to ( conservation law ) .often in several application such a psf is symmetric and consequently the symbol associated to its mask is an even function .usually the antireflective boundary conditions lead to better reconstructions since linear signals are reconstructed exactly , while the periodic boundary conditions approximate badly a linear function by a discontinuous one and the reflective ones by a piece - wise linear function : in both the latter case gibbs phenomena ( called ringing effects ) are observed which are especially pronounced for periodic boundary conditions .the evidence of such fact is observed in several papers in the literature , e.g. . such good behavior of the antireflective boundary conditions comes directly from their definition , since the continuity of the first derivative of the signal was automatically imposed . from an algebraic viewpoint ,the latter property can be derived from the spectral decomposition of the coefficient matrix in the associated linear system .indeed , when considering a symmetric psf and antireflective boundary conditions , the linear system is represented by a matrix whose eigenvalues equal to ( the normalization condition of the psf coming from the conservation law ) are associated to an eigenvector basis spanning all linear functions sampled over a uniform grid with nodes , see . 
in , such a remark has been the starting point for defining and analyzing the antireflective transform and for designing fast algorithms for the spectral filtering of blurred and noisy signals .this algebraic interpretation is useful because it can be used for proposing generalizations that preserve the possibility of defining fast algorithms , while increasing the expected reconstruction quality especially when smooth or piece - wise smooth signals are considered . in this paper ,starting from the previous algebraic interpretation , we define higher order boundary conditions .this can be obtained by algebraically imposing that the spanning of quadratic or cubic polynomials over a proper uniform gridding are eigenvectors related to the normalized eigenvalue .our proposal improves the antireflective model when the true signal is regular enough close to the boundary .moreover , an important property of the proposed approach is that it allows to define fast algorithms also in the case of nonsymmetric psfs ( such as the blurring caused by motion ) .we note that reflective and antireflective boundary conditions can resort to fast transforms only in the case of symmetric boundary conditions , while in the case of nonsymmetric psf we have fast transforms only for periodic boundary conditions which usually provide poor restorations for nonperiodic signals . in general ,if some information on the low frequencies of the signal to be reconstructed are available , it is sufficient to impose such sampled components as eigenvectors of the blurring operator related to the eigenvalue ( we recall that the global spectrum will have as spectral radius ) . in such a way these componentwill be maintained exactly by the filtering algorithms since they cut only the spectral components related to small eigenvalues ( somehow close to zero ) which are presumed to be essentially associated to the noise . in reality, the noise by its random nature of its entries will be decomposable essentially in high frequencies while the true signal is supposed to be approximated in the complementary subspace of low frequencies .therefore , when applying filtering algorithms , if the blurring operator has non - negligible eigenvalues associated only to low frequencies ( for instance low degree polynomials ) , then the reconstruction of the signal will be reasonably good while the noise will be efficiently reduced . given this general context , the present note is aimed to define spectral decomposition of the blurring matrix such that the related transform given by the eigenvectors is fast , the conditioning of the transform is moderate ( for such an issue in connection with the antireflective transform see ) , and the low frequencies are associated only to non - negligible eigenvalues . the organization of the paper is as follows . section [ sec : bcs ] we introduce the deblurring problem investigating the spectral decomposition of the coefficient matrix for the different kinds of boundary conditions . in section [ sec : hord ] we define higher order boundary conditions starting from the spectral decomposition of the antireflective matrix .such transforms are used in tikhonov - like procedures in section [ sec : tik ] .section [ sec : numexp ] deals with a selection of numerical tests on the de - convolution of blurred and noisy signals and images . in section[ sec : md ] the proposals are extended to a multi - dimensional setting . 
finally section [ sec : concl ] is devoted to concluding remarks .in this section we introduce the objects of our analysis and we revisit the spectral decomposition of blurring matrices in the case of periodic , reflective , and antireflective boundary conditions .let be the true signal and the set of indexes in the field of view . given a psf , with ,we can associate to the psf the symbol [ [ periodic - boundary - conditions ] ] periodic boundary conditions + + + + + + + + + + + + + + + + + + + + + + + + + + + + are defined imposing for .the blurring matrix associated to periodic boundary conditions is diagonalized by the fourier matrix more precisely , the blurring matrix is where , for .we note that the eigenvalues can be easy computed by /[f^{(n)}{\vec{e}}_1]_i ] . [ [ antireflective - boundary - conditions-1 ] ] antireflective boundary conditions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + are defined imposing ( see ) for .let be the sine transform matrix of order with entries the antireflective transform of order can be defined by the matrix ( see ) ,\ ] ] where for and where the permutation matrix has nontrivial entries , .we note that ; moreover is often called flip matrix. if the psf is symmetric and , the spectral decomposition of the coefficient matrix in the case of antireflective boundary conditions is with defined as for and .the eigenvalues of can be computed in real operations resorting to the discrete sine transform ( see ) .concerning the inverse antireflective transform , in we have given its expression and the resulting form is analogous to that of the direct transform . as a matter of fact ,given an algorithm for the direct transform , a procedure for computing the inverse transform needs only to have a fast way for multiply by a vector .[ rem_sc ] observe that is the subspace spanned by equi - spaced samplings of linear functions . in that caseits linear complement is given by , with , .unfortunately such a linear complement is not orthogonal and consequently the related transform can not be unitary , as long as we maintain such a trigonometric basis useful for the fast computations .up to standard normalization factors this choice leads to the antireflective transform .[ rem_eig ] implementing filtering methods , like tikhonov , is about fully preserved since the associated eigenvalues are .starting from remarks [ rem_sc ] and [ rem_eig ] , we define higher order boundary conditions which represent the main contribution of this work .the approach in section [ sec : bcs ] defines accurate boundary conditions imposing a prescribed regularity to the true signal .the study of the spectral decomposition of the associated coefficient matrices is a subsequent step for defining fast and stable filtering methods . in this section ,we define higher order boundary conditions starting from the eigenspace , i.e. , the signal components , that we wish to preserve .we start by imposing to preserve and by suggesting other choices for . by the way, the request of giving fast algorithms suggests the use of a cosine or exponential basis in place of that of sine functions , both for the direct and inverse transforms . 
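As an illustration of the periodic case reviewed above, the sketch below builds the eigenvalues of the circulant blurring matrix as the FFT of its first column and applies blur and a naive (noise-free) deblurring entirely with FFTs. The Gaussian PSF and the test signal are assumptions made only for this example.

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n)
signal = np.where((x > 0.3) & (x < 0.6), 1.0, 0.2)   # piecewise-constant test signal

# centered, normalized Gaussian PSF (total sum 1: conservation of light)
t = np.arange(-8, 9)
psf = np.exp(-t**2 / 2.0)
psf /= psf.sum()

# first column of the circulant matrix: the PSF circularly shifted so that
# its center sits at index 0
col = np.zeros(n)
col[t % n] = psf

eigvals = np.fft.fft(col)                 # eigenvalues of the circulant blurring matrix
blurred = np.real(np.fft.ifft(eigvals * np.fft.fft(signal)))

# naive spectral inversion (unstable in the presence of noise; shown only to
# check that the FFT diagonalization is exact)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / eigvals))
print("max reconstruction error:", np.max(np.abs(recovered - signal)))
```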
to preserve polynomials of low degree and at the same time to resort to fast trigonometric transforms, we need a transform with a structure analogous to .therefore we need the cosine transform and the fourier matrix of order .we define and , explicitly and for .we note that the first column of and of are a sampling of the constant function .hence the span of the columns of or has a nontrivial intersection with .accordingly , we choose the two vectors for completing these trigonometric basis as a uniform sampling of a quadratic function instead of a linear function .more precisely , instead of we consider , where is a uniform sampling of a quadratic function in an interval that will be fixed later .the interval and the sampling grid for the basis functions of our transform are fixed according to the following remark .up to normalization , the column of is , where , for . extending the sampling grid such that the frequency is extended by continuity , we add the grid points and .since for all , we obtain exactly the two zero vectors in the first and the last row of , i.e. , the column of is the column of extended in and , for .the first and the last column of are the sampling of linear functions at the same equispaced points ] , . the fast transform associated to and can be defined as follows : ,\ ] ] with =\sqrt{\frac{2-\delta_{j,1}}{n-2}}\cos((j-1)a) ] since , for .it remains to define the eigenvalues associated to . since we want to preserve , similarly to what was done for in the case of the antireflective boundary conditions , we associate to and the eigenvalue .concerning the other frequencies , since they are defined by the cosine transform , we consider the eigenvalues of the reflective matrix in , but of order . in conclusion , for the case of a symmetric psf, we define a new blurring matrix using the following spectral decomposition where , for , and .we note that , while the eigenvalues , for , are the same of of order and hence they can be computed in by a discrete cosine transform .the product of by a vector can be computed mainly resorting to the inverse discrete cosine transform .the inverse of will be studied in subsection [ sec : sherm ] , where we will show that the product of by a vector can be computed mainly resorting to a discrete cosine transform .therefore the spectral decomposition can be used to define fast filtering methods in the case of symmetric psfs .moreover , we expect an improved restoration with respect to the antireflective model since preserves uniform samplings of quadratic functions while preserves only uniform samplings of linear functions .in the case of nonsymmetric psf we can use the exponential basis . up to normalization, the column of is , where and , for . extending the grid by continuity , we add and . with this extended grid, we can define the basis functions as a points uniform sampling of the interval \ , = \ , \left[-2\pi/(n-2 ) , \ ; 2\pi\right],\ ] ] where the grid points are we note that the interval and the grid points in the nonsymmetric case are different with respect to the symmetric case ( compare with and with ) .therefore , defining , where {i+1 } = ( b - x_i)^2 ] and =\exp({\mathrm{i}}(j-1)b)/\sqrt{n-2}=1/\sqrt{n-2} ] , by direct computation and . therefore , can be computed in by a trigonometric transform .for the implementation it can be explicitly computed and inserted into the code . 
given and , the sherman - morrison - woodbury formula is : it can be very useful for computing the inverse of when , taking into account the possible instability . applying the formulato we obtain \left(i+\left [ \begin{array}{ccc } 0 & { \vec{c}}_a^t & 0\\ 0 & { \vec{c}}_b^t & 0 \end{array } \right]\widetilde{t}_x^{-1}[{\vec{e}}_1 \ , | \ , { \vec{e}}_n ] \right)^{-1 } \left [ \begin{array}{ccc } 0 & { \vec{c}}_a^t & 0\\ 0 & { \vec{c}}_b^t & 0 \end{array } \right]\widetilde{t}_x^{-1}\\ & = & \widetilde{t}_x^{-1}-\left [ \begin{array}{cc } \alpha & 0\\ { \vec{v } } & j{\vec{v } } \\0 & \alpha \end{array } \right ] \left [ \begin{array}{cc } 1+{\vec{c}}_a^t{\vec{v } } & { \vec{c}}_a^tj{\vec{v}}\\ { \vec{c}}_b^t{\vec{v } } & 1+{\vec{c}}_b^tj{\vec{v } } \end{array } \right]^{-1 } \left [ \begin{array}{ccc } { \vec{c}}_a^t{\vec{v } } & { \vec{c}}_a^tx & { \vec{c}}_a^tj{\vec{v}}\\ { \vec{c}}_b^t{\vec{v } } & { \vec{c}}_b^tx & { \vec{c}}_b^tj{\vec{v } } \end{array } \right].\end{aligned}\ ] ] we note that and can be computed in and moreover they can be explicitly computed and inserted into the implementation like done for the vector . in this waythe matrix vector product for requires a fast discrete trigonometric transform of plus few lower order operations between vectors . from theorem [ th : inv ], it follows that the product of and , by a vector can be computed in and hence they are fast transforms .we consider the tikhonov regularization , where the regularized solution is computed as the solution of the following minimization problem where , is the properly chosen regularization parameter , is the observed signal , is the coefficient matrix and is a matrix such that ( see ) .the matrix is usually the identity matrix or an approximation of partial derivatives .it is convenient to define using the same boundary conditions of in order to obtain fast algorithms .for instance , equal to the laplacian with antireflective boundary conditions is .\ ] ] we note that .however , because .we have , for and defined according to ( note that ) . using the approach in section [ sec : hord ] , for high order boundary conditions the laplacian matrix can be defined similarly by where the grid points and are defined according to and respectively . the minimization problem is equivalent to the normal equations regarding the antireflective algebra , in it was observed that the transposition operation destroys the algebra structure and leads to worse restorations with respect to reflective boundary conditions . to overcome this problem , in the authors proposed the reblurring which replaces the transposition with the correlation operationmoreover , it was shown that the latter is equivalent to compute the solution of a discrete problem obtained by a proper discretization of a continuous regularized problem .practically , we replace with obtained imposing the same boundary conditions to the coefficient matrix arising from the psf rotated by 180 degrees .the matrices defined in , , and can be denoted by , since they are univocally defined by the function when the transform is fixed . with this notation since the rotation of the psf exchange with in , which corresponds to take . therefore , in the case of periodic boundary conditions , but this is not true in general for the other boundary conditions .if the psf is symmetric then for every boundary conditions . in the following we use the reblurring approach andhence we replace with if we use the same boundary conditions for and , the can be written as . 
in it is proved that for antireflective boundary conditions defines a regularization method when and the psf is symmetric . if the spectral decomposition of is , where , and , then the spectral filter solution in is given by . a widely used method for estimating the regularization parameter is the generalized cross validation ( gcv ) . for the method in , gcv determines the regularizing parameter that minimizes the gcv functional where is defined in . for ( periodic boundary conditions ) and ( reflective boundary conditions with a symmetric psf ) , the equation is exactly equation . in this case , in it is proven that where and , for equal to and , respectively . here we have because is not unitary , but it is `` close '' to a unitary matrix since it is a rank four correction of a unitary matrix . for the estimation of the svd of ( antireflective boundary conditions case ) see . therefore , we compute the regularization parameter by minimizing , more precisely , the same functional as in . we present some signal deblurring problems . the restorations are obtained by employing tikhonov regularization using with the smoothing operator equal to the laplacian . the code is implemented in matlab 7.0 . in the first example the observed signal is affected by a gaussian blur and of gaussian noise . the true and observed signals are shown in figure [ fig : signal-1dsimm ] . we consider a low level of noise because in such a case the restoration error is mainly due to the error of the boundary conditions model . since the psf is symmetric , we compare our blurring matrix with reflective and antireflective boundary conditions . let be the true signal ; the relative restoration error ( rre ) is plotted in figure [ fig : rre1dsimm ] . in this figure it is evident that provides restorations with a lower rre with respect to antireflective boundary conditions , which are already known to be more precise than reflective boundary conditions . moreover , the rre curve , varying the regularization parameter , is flatter than for the other boundary conditions . this allows a better estimation of the regularization parameter using the gcv . the value that gives the minimum of the gcv functional in is reported in figure [ fig : rre1dsimm ] with a ` * ' . it is evident that in the case of the rre obtained with is closer to the minimum than for antireflective boundary conditions . the minimum rre is for , while it is for the antireflective boundary conditions . moreover , for we obtain , which gives an rre equal to , while for antireflective boundary conditions gives an rre equal to . the quality of the restoration is also confirmed by the visual evidence of the restored signals . in figure [ fig : rest-1dsimm ] we show the restored signal corresponding to , which is the value of the regularization parameter corresponding to the minimum rre , and to . we note that gives a better restoration , especially for preserving jumps in the signal . on the other hand , this implies a slight loss of smoothness in the restored signal . finally , using our proposal with gives a good enough restoration , while this is not true for the antireflective boundary conditions .
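A hedged sketch of the Tikhonov-plus-GCV procedure in the simplest setting, periodic boundary conditions, where the blurring matrix and the Laplacian regularizer are simultaneously diagonalized by the FFT, so the filter factors and the GCV functional are evaluated frequency by frequency. The PSF, noise level, and grid of regularization parameters below are illustrative assumptions; the experiments in the paper use the antireflective and higher-order transforms instead of the Fourier one.

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n)
signal = np.where((x > 0.3) & (x < 0.6), 1.0, 0.2)   # test signal (assumed)
t = np.arange(-8, 9)
psf = np.exp(-t**2 / 2.0); psf /= psf.sum()
col = np.zeros(n); col[t % n] = psf
eig_a = np.fft.fft(col)                               # blur eigenvalues
lap = np.zeros(n); lap[[0, 1, -1]] = [2.0, -1.0, -1.0]
eig_l = np.fft.fft(lap)                               # circulant Laplacian eigenvalues

rng = np.random.default_rng(4)
b = np.real(np.fft.ifft(eig_a * np.fft.fft(signal))) + 1e-3 * rng.standard_normal(n)

def tikhonov(mu):
    # x_mu = argmin ||A x - b||^2 + mu ||L x||^2, all matrices circulant
    filt = np.conj(eig_a) / (np.abs(eig_a) ** 2 + mu * np.abs(eig_l) ** 2)
    return np.real(np.fft.ifft(filt * np.fft.fft(b))), filt * eig_a

def gcv(mu):
    _, h = tikhonov(mu)                    # h: eigenvalues of the influence matrix
    resid = (1 - h) * np.fft.fft(b)        # residual in the Fourier domain
    return (np.sum(np.abs(resid) ** 2) / n) / (np.real(np.sum(1 - h)) / n) ** 2

mus = np.logspace(-10, 0, 60)
mu_gcv = min(mus, key=gcv)
restored, _ = tikhonov(mu_gcv)
print("gcv-selected mu:", mu_gcv,
      " rre:", np.linalg.norm(restored - signal) / np.linalg.norm(signal))
```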
moreover ,higher order boundary conditions can be constructed , even if the numerical results show that for image deblurring problems this approach does not give substantial improvements .the introduced fast transforms was applied in connection with tikhonov regularization and the reblurring approach .however , they could be useful also for more sophisticated regularization methods like total variation for instance .the analysis of the tikhonov regularization in section [ sec : tik ] is useful also for the antireflective boundary conditions .indeed , it was not previously considered in the literature the case of and the choice of the regularization parameter using the gcv .since the proposed transforms are not orthogonal , they were applied in connection with the reblurring approach , but the theoretical analysis of the regularizing properties of such approach exists only in the case of antireflective boundary conditions and symmetric kernel ( see ) .therefore , a more detailed analysis , especially in the multidimensional case with a nonsymmetric kernel , should be considered in the future .biblio a. aric , m. donatelli , j. nagy , and s. serra capizzano , the anti - reflective transform and regularization by filtering , numerical linear algebra in signals , systems , and control ., in lecture notes in electrical engineering , edited by s. bhattacharyya , r. chan , v. olshevsky , a. routray , and p. van dooren , springer verlag , in press .a. aric , m. donatelli , and s. serra - capizzano , spectral analysis of the anti - reflective algebra , linear algebra appl . , 428 , 657675 ( 2008 ) .m. christiansen and m. hanke , deblurring methods using antireflective boundary conditions , siam j. sci .comput . , 30 , 855872 ( 2008 ) . m. donatelli , c. estatico , a. martinelli , and s. serra capizzano , improved image deblurring with anti - reflective boundary conditions and re - blurring , inverse problems , 22 , 20352053 ( 2006 ) .m. donatelli and m. hanke , on the condition number of the antireflective transform , manuscript ( 2008 ) .m. donatelli and s. serra capizzano , anti - reflective boundary conditions and re - blurring , inverse problems , 21 , 169182 ( 2005 ) .h. w. engl , m. hanke , and a. neubauer , regularization of inverse problems , kluwer , dordrecht ( 1996 ) . g. golub , m. health , and g. wahba , generalized cross - validation as a method for choosing good ridge parameter , technometrics , 21 , 215223 ( 1979 ) .g. h. golub and c. f. van loan , matrix computations , third edition , the johns hopkins university press , baltimore ( 1996 ) .p. c. hansen , j. g. nagy , and d. p. oleary , deblurring images : matrices , spectra , and filtering , siam , philadelphia , pa ( 2006 ) .m. ng , r. h. chan , and w. c. tang , a fast algorithm for deblurring models with neumann boundary conditions , siam j. sci .comput . , 21 , 851866 ( 1999 ) .l. perrone , kronecker product approximations for image restoration with anti - reflectiveboundary conditions , numer .linear algebra appl . , 13(1),122 ( 2006 ) .s. serra capizzano , a note on anti - reflective boundary conditions and fast deblurring models , siam j. sci .25(3 ) , 13071325 ( 2003 ) .
we study strategies for increasing the precision of blurring models while keeping the complexity of the related numerical linear algebra procedures ( matrix - vector products , linear system solution , computation of eigenvalues , etc . ) of the same order as that of the celebrated fast fourier transform . the key idea is the choice of a suitable functional basis for representing signals and images . starting from an analysis of the spectral decomposition of blurring matrices associated with the antireflective boundary conditions introduced in [ s. serra capizzano , siam j. sci. comput . , 25(3 ) , pp . 1307 - 1325 ] , we extend the model so as to preserve polynomials of higher degree and to retain fast computations also in the nonsymmetric case . we apply the proposed model to tikhonov regularization with smoothing norms and to the generalized cross validation for choosing the regularization parameter . a selection of numerical experiments shows the effectiveness of the proposed techniques .
many safety - critical systems implement complex algorithms and feedback laws that control the interaction of physical devices with their environments . examples of such systems are abundant in aerospace , automotive , and medical applications . the range of theoretical and practical issues that arise in the analysis , design , and implementation of safety - critical software systems is extensive ; see , e.g. , , , and . while safety - critical software must satisfy various resource allocation , timing , scheduling , and fault tolerance constraints , the foremost requirement is that it must be free of run - time errors . formal verification methods are model - based techniques for proving or disproving that a mathematical model of a software ( or hardware ) system satisfies a given specification , i.e. , a mathematical expression of a desired behavior . the approach adopted in this paper falls under this category as well . herein , we briefly review model checking and abstract interpretation . model checking : in model checking the system is modeled as a finite state transition system and the specifications are expressed in some form of logic formulae , e.g. , temporal or propositional logic . the verification problem then reduces to a graph search , and symbolic algorithms are used to perform an exhaustive exploration of all possible states . model checking has proven to be a powerful technique for verification of circuits , security and communication protocols , and stochastic processes . nevertheless , when the program has non - integer variables , or when the state space is continuous , model checking is not directly applicable . in such cases , combinations of various abstraction techniques and model checking have been proposed ; scalability , however , remains a challenge . abstract interpretation : abstract interpretation is a theory for formal approximation of the operational semantics of computer programs in a systematic way . construction of abstract models involves abstraction of domains ( typically a combination of sign , interval , polyhedral , and congruence abstractions of sets of data ) and of functions . a system of fixed - point equations is then generated by symbolic forward / backward executions of the abstract model . an iterative equation - solving procedure , e.g. , newton's method , is used for solving the nonlinear system of equations , the solution of which results in an inductive invariant assertion , which is then used for checking the specifications .
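To make the fixed-point view of abstract interpretation above concrete, here is a small, self-contained Python sketch of an interval-domain analysis of a toy counter loop, with a widening step that forces finite convergence. The loop, the thresholds, and the widening delay are invented for illustration; this is not code from any of the cited analyzers.

```python
# Interval abstract interpretation of:   x = 0; while x < 100: x = x + 3
# The abstract state is an interval (lo, hi) over-approximating all reachable values of x.

NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):                      # least upper bound of two intervals (None = empty set)
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):                 # interval widening: unstable bounds jump to +/- infinity
    if old is None: return new
    lo = old[0] if new[0] >= old[0] else NEG_INF
    hi = old[1] if new[1] <= old[1] else POS_INF
    return (lo, hi)

def transfer(inv):                   # abstract effect of one iteration: assume x < 100, then x += 3
    if inv is None: return None
    lo, hi = inv
    hi = min(hi, 99)                 # meet with the loop guard x <= 99 (integer variable)
    if lo > hi: return None          # guard unsatisfiable: no state passes through
    return (lo + 3, hi + 3)

inv = (0, 0)                         # initial abstract state: x = 0
for step in range(1000):
    new = join(inv, transfer(inv))   # one step of the fixed-point iteration
    new = widen(inv, new) if step >= 3 else new   # delay widening a few steps for precision
    if new == inv:
        break
    inv = new

print("inductive interval for x at the loop head:", inv)   # e.g. (0, inf)
# A narrowing pass, join((0, 0), transfer(inv)), would refine the bound to a finite value (here 102).
```

The widening step is what guarantees termination of the analysis; the price is the conservatism mentioned in the text, which the subsequent narrowing pass only partially recovers.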
in practice , to guarantee finite convergence of the iterates , widening ( outer approximation ) operators are used to estimate the solution , followed by narrowing ( inner approximation ) to improve the estimate . this compromise can be a source of conservatism in the analysis . nevertheless , these methods have been used in practice for verification of limited properties of embedded software of commercial aircraft . alternative formal methods can be found in the computer science literature , mostly under deductive verification , type inference , and data flow analysis . these methods share extensive similarities in that a notion of program abstraction and symbolic execution or constraint propagation is present in all of them . further details and discussions of the methodologies can be found in , and . while software analysis has been the subject of an extensive body of research in computer science , treatment of the topic in the control systems community has been less systematic . the relevant results in the systems and control literature can be found in the field of hybrid systems . much of the available techniques for safety verification of hybrid systems are explicitly or implicitly based on computation of the reachable sets , either exactly or approximately . these include , but are not limited to , techniques based on quantifier elimination , ellipsoidal calculus , and mathematical programming . alternative approaches aim at establishing properties of hybrid systems through barrier certificates , numerical computation of lyapunov functions , or the combined use of bisimulation mechanisms and lyapunov techniques . inspired by the concept of lyapunov functions in stability analysis of nonlinear dynamical systems , in this paper we propose lyapunov invariants for the analysis of computer programs . while lyapunov functions and similar concepts have been used in verification of stability or temporal properties of system - level descriptions of hybrid systems , , , to the best of our knowledge this paper is the first to present a systematic framework based on lyapunov invariance and convex optimization for verification of a broad range of code - level specifications for computer programs . accordingly , it is in the systematic integration of new ideas and some well - known tools within a unified software analysis framework that we see the main contribution of our work , and not in carrying through the proofs of the underlying theorems and propositions . the introduction and development of such a framework provides an opportunity for the field of control to systematically address a problem of great practical significance and interest to both the computer science and engineering communities . the framework can be summarized as follows : 1 . dynamical system interpretation and modeling ( section [ modeling ] ) . we introduce generic dynamical system representations of programs , along with specific modeling languages which include mixed - integer linear models ( milm ) , graph models , and mil - over - graph hybrid models ( mil - ghm ) . 2 . lyapunov invariants as behavior certificates for computer programs ( section [ chapter : lyapunovinvs ] ) . analogous to a lyapunov function , a lyapunov invariant is a real - valued function of the program variables , and satisfies a difference inequality along the trace of the program . it is shown that such functions can be formulated for verification of various specifications . 3 .
a computational procedure for finding the lyapunov invariants ( section [ chapter : computation ] ) . the procedure is standard and constitutes these steps : ( i ) restricting the search space to a linear subspace .( ii ) using convex relaxation techniques to formulate the search problem as a convex optimization problem , e.g. , a linear program , semidefinite program , or a sos program . (iii ) using convex optimization software for numerical computation of the certificates. interpret computer programs as discrete - time dynamical systems and introduce generic models that formalize this interpretation .we then introduce milms , graph models , and mil - ghms as structured cases of the generic models .the specific modeling languages are used for computational purposes. we will consider generic models defined by a finite state space set with selected subsets of initial states , and of terminal states , and by a set - valued state transition function , such that we denote such dynamical systems by [ concreterepdef]the dynamical system is a -representation of a computer program if the set of all sequences that can be generated by is equal to the set of all sequences of elements from satisfying the uncertainty in allows for dependence of the program on different initial conditions , and the uncertainty in models dependence on parameters , as well as the ability to respond to real - time inputs .[ integerdiv - ex]*integer division * ( adopted from ) : the functionality of program 1 is to compute the result of the integer division of ( dividend ) by ( divisor ) .a -representation of the program is displayed alongside .note that if and then the program never exits the while loop and the value of keeps increasing , eventually leading to either an overflow or an erroneous answer .the program terminates if and are positive . {cl}% \begin{tabular } [ c]{|l|}\hline \\\hline \end{tabular } & \hspace*{-0.05in}% \begin{tabular } [ c]{|l|}\hline \\\hline \end{tabular } \end{array } \\\text{{program 1 : the integer division program ( left ) and its dynamical system model ( right)\vspace*{-0.2in}}}%\end{gathered}\ ] ] in a -representation , the elements of the state space belong to a finite subset of the set of rational numbers that can be represented by a fixed number of bits in a specific arithmetic framework , e.g. , fixed - point or floating - point arithmetic . when the elements of are non - integers , due to the quantization effects , the set - valued map often defines very complicated dependencies between the elements of even for simple programs involving only elementary arithmetic operations .an abstract model over - approximates the behavior set in the interest of tractability .the drawbacks are conservatism of the analysis and ( potentially ) undecidability .nevertheless , abstractions in the form of formal over - approximations make it possible to formulate computationally tractable , sufficient conditions for a verification problem that would otherwise be intractable .[ def : abst]given a program and its -representation , we say that is an -representation , i.e. , an _ abstraction _ of , if , , and for all and the following condition holds: thus , every trajectory of the actual program is also a trajectory of the abstract model .the definition of is slightly more subtle . 
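To make the dynamical-system reading of the integer division example above concrete, the following Python sketch models Program 1 in the spirit of the generic representation just described: the state bundles the program variables with a location label, a transition map plays the role of the set-valued update, and termination means reaching the terminal set. The encoding of locations and the simulation loop are our own illustrative choices, not notation from the paper.

```python
from typing import NamedTuple, Optional

class State(NamedTuple):
    loc: str          # program location: "loop" or "done"
    dd: int           # dividend
    dr: int           # divisor
    q: int            # quotient accumulated so far
    r: int            # running remainder

def transition(s: State) -> Optional[State]:
    """One step of the integer-division program viewed as a map on the state space."""
    if s.loc == "done":
        return None                      # terminal states have no successor
    if s.r >= s.dr:                      # while (r >= dr) { r = r - dr; q = q + 1; }
        return State("loop", s.dd, s.dr, s.q + 1, s.r - s.dr)
    return State("done", s.dd, s.dr, s.q, s.r)

def run(dd: int, dr: int, max_steps: int = 10_000):
    s = State("loop", dd, dr, 0, dd)     # initial set: q = 0, r = dd
    for _ in range(max_steps):
        nxt = transition(s)
        if nxt is None:
            return s                     # reached the terminal set
        s = nxt
    return None                          # no termination within the step budget

if __name__ == "__main__":
    print(run(17, 5))   # State(loc='done', dd=17, dr=5, q=3, r=2)
    print(run(17, 0))   # None: with a nonpositive divisor the loop never reaches a terminal state
```

The second call illustrates the failure mode noted for Program 1: with a nonpositive divisor the quotient grows without bound, which is exactly the kind of behavior the unreachability and finite-time-termination certificates are meant to exclude.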
for proving finite - time termination ( ftt ) , we need to be able to infer that if all the trajectories of eventually enter then all trajectories of will eventually enter it is tempting to require that , however , this may not be possible as is often a discrete set , while is dense in the domain of real numbers .the definition of as in ( [ terminalabstract ] ) resolves this issue .construction of from involves abstraction of each of the elements in a way that is consistent with definition [ def : abst ] .abstraction of the state space often involves replacing the domain of _ floats _ or integers or a combination of these by the domain of real numbers .abstraction of or often involves a combination of domain abstractions and abstraction of functions that define these sets .semialgebraic set - valued abstractions of some commonly - used nonlinearities are presented in appendix i. interested readers may refer to for more examples including abstractions of fixed - point and floating point operations .specific modeling languages are particularly useful for automating the proof process in a computational framework . here , three specific modeling languages are proposed : _ mixed - integer linear models ( milm ) , _ _ graph models _ , and _ mixed - integer linear over graph hybrid models ( mil - ghm ) ._ proposing milms for software modeling and analysis is motivated by the observation that by imposing linear equality constraints on boolean and continuous variables over a quasi - hypercube , one can obtain a relatively compact representation of arbitrary piecewise affine functions defined over compact polytopic subsets of euclidean spaces ( proposition [ milm - prop ] ) . the earliest reference to the statement of universality of milms appears to be , in which a constructive proof is given for the one - dimensional case . a constructive proof for the general case is given in .[ milm - prop]*universality of mixed - integer linear models .* let be a piecewise affine map with a closed graph , defined on a compact state space ^{n}, ] .2 . 
letting the state transition function is defined by two matrices and of dimensions -by- and -by- respectively , according to : {cccc}% \hspace*{-0.04in}\vspace*{0.04in}x\hspace*{-0.02 in } & \hspace*{-0.02in}% w\hspace*{-0.02 in } & \hspace*{-0.02in}v\hspace*{-0.02 in } & \hspace * { -0.02in}1\hspace*{-0.04in}% \end{array } ] ^{^{t}}~|~~h[% \begin{array } [ c]{cccc}% \hspace*{-0.04in}\vspace*{0.04in}x\hspace*{-0.02 in } & \hspace*{-0.02in}% w\hspace*{-0.02 in } & \hspace*{-0.02in}v\hspace*{-0.02 in } & \hspace * { -0.02in}1\hspace*{-0.04in}% \end{array } ] ^{^{t}}=0,\text { } \left ( w , v\right ) \in\left [ -1,1\right ] ^{n_{w}}% \times\left\ { -1,1\right\ } ^{n_{v}}\right\ } .\vspace*{-0.2in}\label{milm1}%\ ] ] 3 .the set of initial conditions is defined via either of the following : 1 .if is finite with a small cardinality , then it can be conveniently specified by its elements .we will see in section [ chapter : computation ] that per each element of one constraint needs to be included in the set of constraints of the optimization problem associated with the verification task .if is not finite , or is too large , an abstraction of can be specified by a matrix which defines a union of compact polytopes in the following way:{cccc}% \hspace*{-0.04in}\vspace*{0.04in}x\hspace*{-0.02 in } & \hspace*{-0.02in}% w\hspace*{-0.02 in } & \hspace*{-0.02in}v\hspace*{-0.02 in } & \hspace * { -0.02in}1\hspace*{-0.04in}% \end{array } ] ^{^{t}}=0,~\left ( w , v\right ) \in\left [ -1,1\right ] ^{n_{w}}\times\left\ { -1,1\right\ } ^{n_{v}}\}.\vspace*{-0.1in}\label{milm2}%\ ] ] 4 .the set of terminal states is defined by{cccc}% \hspace*{-0.04in}\vspace*{0.04in}x\hspace*{-0.02 in } & \hspace*{-0.02in}% w\hspace*{-0.02 in } & \hspace*{-0.02in}v\hspace*{-0.02 in } & \hspace * { -0.02in}1\hspace*{-0.04in}% \end{array } ] ^{^{t}}\neq0,~\forall w\in\left [ -1,1\right ] ^{n_{w}},~\forall v\in\left\ { -1,1\right\ } ^{n_{v}}\}.\vspace*{-0.2in}\label{milm3}%\ ] ] therefore , is well defined .a compact description of a milm of a program is either of the form or of the form .the milms can represent a broad range of computer programs of interest in control applications , including but not limited to control programs of gain scheduled linear systems in embedded applications .in addition , generalization of the model to programs with piecewise affine dynamics subject to quadratic constraints is straightforward .a milm of an abstraction of the integerdivision program ( program 1 : section [ sec : genrep]) with all the integer variables replaced with real variables , is given by where{lll}%h_{0}= & h= & f=\\ \left [ \begin{array } [ c]{rrrrrrrr}% \vspace*{-0.42 in } & & & & & & & \\ 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0\vspace*{-0.12in}\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\vspace*{-0.12in}\\ 0 & -2 & 0 & 0 & 0 & 1 & 0 & 1\vspace*{-0.12in}\\ -2 & 0 & 0 & 0 & 0 & 0 & 1 & 1\vspace*{-0.07in}% \end{array } \right ] , & \left [ \hspace*{-0.05in}% \begin{array } [ c]{rrrrrrrr}% \vspace*{-0.42 in } & & & & & & & \\ 0 & 2 & 0 & -2 & 1 & 0 & 0 & 1\vspace*{-0.12in}\\ 0 & -2 & 0 & 0 & 0 & 1 & 0 & 1\vspace*{-0.12in}\\ -2 & 0 & 0 & 0 & 0 & 0 & 1 & 1\vspace*{-0.07in}% \end{array } \hspace*{-0.05in}\right ] , & \left [ \begin{array } [ c]{rrrrrrrr}% \vspace*{-0.42 in } & & & & & & & \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\vspace*{-0.12in}\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\vspace*{-0.12in}\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1/m\vspace*{-0.12in}\\ 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0\vspace*{-0.07in}% \end{array } \right ] \end{array}\ ] ] here , is a scaling parameter 
used for bringing all the variables within the interval .\vspace*{-0.05in} ] and is a polynomial function and is a semialgebraic set if is a deterministic map , we drop and define .a set of _ passport _ labels assigned to all edges , where is a semialgebraic set . a state transition along edge is possible if and only if 6 .a set of semialgebraic invariant sets are assigned to every node on the graph , such that equivalently , a state satisfying is unreachable .therefore , a graph model is a well - defined specific case of the generic model with and defined as: conceptually similar models have been reported in for software verification , and in for modeling and verification of hybrid systems .interested readers may consult for further details regarding treatment of graph models with time - varying state - dependent transitions labels which arise in modeling operations with arrays .* * remarks** 1 .the invariant set of node contains all the available information about the initial conditions of the program variables : 2 .multiple edges between nodes enable modeling of logical `` or '' or `` xor '' type conditional transitions .this allows for modeling systems with nondeterministic discrete transitions. 3 .the transition label may represent a simple update rule which depends on the real - time input .for instance , if and , ] in other cases , may represent an abstraction of a nonlinearity .for instance , the assignment can be abstracted by ( see eqn .( [ sinabst ] ) in appendix i). we proceed , we introduce the following notation : given a semialgebraic set and a polynomial function we denote by the set : [ [ construction - of - simple - invariant - setssecconstabst ] ] construction of simple invariant sets[sec : constabst ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + simple invariant sets can be included in the model if they are readily available or easily computable .even trivial invariants can simplify the analysis and improve the chances of finding stronger invariants via numerical optimization . 1 .simple invariant sets may be provided by the programmer .these can be trivial sets representing simple algebraic relations between variables , or they can be more complicated relationships that reflect the programmer s knowledge about the functionality and behavior of the program .2 . invariant propagation : assuming that are deterministic and invertible , the set is an invariant set for node furthermore , if the invariant sets are strict subsets of for all then ( [ constraint prop ] ) can be improved .specifically , the set is an invariant set for node note that it is sufficient that the restriction of to the lower dimensional spaces in the domains of and be invertible .3 . preserving equality constraints : simple assignments of the form result in invariant sets of the form at node provided that does not simultaneously update formally , let be such that is non - zero for at most one element and that is independent of then , the following set is an invariant set at node x=0\right\ } \vspace*{-0.15in}%\ ] ] the mil - ghms are graph models in which the effects of several lines and/or _ functions _ of code are compactly represented via a milm . 
as a result ,the graphs in such models have edges ( possibly self - edges ) that are labeled with matrices and corresponding to a milm as the transition and passport labels .such models combine the flexibility provided by graph models and the compactness of milms .an example is presented in section [ sec : casestudy ] .the specification that can be verified in our framework can generically be described as unreachability and finite - time termination .[ unreachability]a program is said to satisfy the unreachability property with respect to a subset if for every trajectory of ( [ softa1 ] ) , and every does not belong to a program is said to _ terminate in finite time _if every solution of ( [ softa1 ] ) satisfies for some several critical specifications associated with runtime errors are special cases of unreachability .absence of overflow can be characterized as a special case of unreachability by defining: where is the overflow limit for variable an out - of - bounds array indexing error occurs when a variable exceeding the length of an array , references an element of the array .assuming that is the corresponding integer index and is the array length , one must verify that does not exceed at location where referencing occurs .this can be accomplished by defining over a graph model and proving that is unreachable .this is also similar to assertion checking defined next .an _ assertion _ is a mathematical expression whose validity at a specific location in the code must be verified .it usually indicates the programmer s expectation from the behavior of the program .we consider _ assertions _ that are in the form of semialgebraic set memberships .using graph models , this is done as follows:{llccl}% \text{at location } i : & \text{assert } x\in a_{i } & \rightarrow & \text{define } & x_{-}=\left\ { \left ( i , x\right ) \in x~|~x\in x\backslash a_{i}\right\ } , \\[-0.05in]% \text{at location } i : & \text{assert } x\notin a_{i } & \rightarrow & \text{define } & x_{-}=\left\ { \left ( i , x\right ) \in x~|~x\in a_{i}\right\ } .\end{array } \vspace*{-0.1in}%\ ] ] in particular , safety assertions for division - by - zero or taking the square root ( or logarithm ) of positive variables are standard and must be automatically included in numerical programs ( cf .[ sec : bc ] , table [ table ii ] ) .a program invariant is a property that holds throughout the execution of the program .the property indicates that the variables reside in a semialgebraic subset .essentially , any method that is used for verifying unreachability of a subset can be applied for verifying invariance of by defining and vice versa . for mathematical correctness, we must show that if an -representation of a program satisfies the unreachability and ftt specifications , then so does the -representation , i.e. , the actual program .this is established in the following proposition .the proof is omitted for brevity but can be found in .[ abstraction]let be an -representation of program with -representation let and be such that assume that the unreachability property w.r.t . has been verified for .then , satisfies the unreachability property w.r.t . 
moreover , if the ftt property holds for , then terminates in finite time .since we are not concerned with undecidability issues , and in light of proposition [ abstraction ] , we will not differentiate between abstract or concrete representations in the remainder of this paper .analogous to a lyapunov function , a lyapunov invariant is a real - valued function of the program variables satisfying a _ difference inequality _ along the execution trace .[ def : lyapinv]a -lyapunov invariant _ _ for is a function such that where .thus , a lyapunov invariant satisfies the _ difference inequality _ ( [ softa2 ] ) along the trajectories of until they reach a terminal state .it follows from definition [ def : lyapinv ] that a lyapunov invariant is not necessarily nonnegative , or bounded from below , and in general it need not be monotonically decreasing .while the zero level set of defines an invariant set in the sense that implies , for all monotonicity depends on and the initial condition .for instance , if then ( [ softa2 ] ) implies that along the trajectories of however , may not be monotonic if though it will be monotonic for furthermore , the level sets of a lyapunov invariant need not be bounded closed curves .proposition [ prop : milmlyap ] ( to follow ) formalizes the interpretation of definition [ def : lyapinv ] for the specific modeling languages .natural lyapunov invariants for graph models are functions of the form which assign a polynomial lyapunov function to every node on the graph [ prop : milmlyap]let and properly labeled graph be the mil and graph models for a computer program the function ^{n}\mapsto\mathbb{r} i x\in x_{a}% \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array }x_{i-}:=\left\ { x\in\mathbb{r}^{n}~|~x\in \mathbb{r}^{n}\backslash x_{a}\right\ } i x\notin x_{a}% \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array } x_{i-}:=\left\ { x\in\mathbb{r}^{n}~|~x\in x_{a}\right\ } ix_{o}% \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array } x_{i-}:=\left\ { x\in\mathbb{r}^{n}~|~x_{o}% = 0\right\ } i\sqrt[2k]{x_{o } } % \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array }x_{i-}:=\left\ { x\in\mathbb{r}^{n}~|~x_{o}% < 0\right\ } i\log\left ( x_{o}\right ) % \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array } x_{i-}:=\left\ { x\in\mathbb{r}^{n}~|~x_{o}% \leq0\right\ } i% \begin{array } [ c]{c}% \vspace*{-0.12in}\\ \rightarrow\\ \vspace*{-0.12in}% \end{array } x_{i-}:=r^{n} ] consider the following program note that can be zero right after the assignment however , at location , can not be zero and division - by - zero will not occur .the graph model of an abstraction of program 3 is shown next to the program and is defined by the following elements : and . ] denote the set of sos polynomials in variables , i.e. 
the set of polynomials that can be represented as where is the polynomial ring of variables with real coefficients .then , a sufficient condition for ( [ semialgformulae ] ) is that there exist sos polynomials ] the second is to consider each of the different possibilities ( one for each vertex of ) separately .this approach can be useful if is small , and is otherwise impractical .more sophisticated schemes can be developed based on hierarchical linear programming relaxations of binary integer programs .a linear parameterization of the subspace of polynomial functionals with total degree less than or equal to is given by: where is a vector of length consisting of all monomials of degree less than or equal to in variables a linear parametrization of lyapunov invariants for graph models is defined according to ( [ nodewiselyap ] ) , where for every we have where is a selected degree bound for depending on the dynamics of the model , the degree bounds and the convex relaxation technique , the corresponding optimization problem will become a linear , semidefinite , or sos optimization problem .we present generic conditions for verification over graph models using sos programming .although lmi conditions for verification of _ linear graph models _ using quadratic invariants and the -procedure for relaxation of non - convex constraints can be formulated , we do not present them here due to space limitations .such formulations are presented in the extended report , along with executable matlab code in .the following theorem follows from corollary [ safetygraphcor1 ] .[ thm : graphnodepoly]*optimization - based graph model verification . *consider a program , and its graph model let be given by ( [ nodewiselyap ] ) , where then , the functions define a lyapunov invariant for if for all we have: \text { subject to } \left ( x , w\right ) \in\left ( \left ( x_{i}\cap\pi_{ji}^{k}\right ) \times \lbrack-1,1]^{n_{w}}\right ) \cap s_{ji}^{k}\vspace*{-0.2in}\label{d1d1}%\ ] ] furthermore , satisfies the unreachability property w.r.t .the collection of sets if there exist such that \text { subject to } x\in x_{\emptyset}\vspace*{-0.2in}\label{d1d0}\\ \sigma_{i}\left ( x\right ) -\varepsilon_{i } & \in\sigma\left [ x\right ] \text { subject to } x\in x_{i}\cap x_{i-},\ i\in\mathcal{n}\backslash\left\ { \emptyset\right\ } \vspace*{-0.25in}\label{d1d3}%\end{aligned}\]] as discussed in section [ section : sos - relaxation ] , the sos relaxation techniques can be applied for formulating the search problem for functions satisfying ( [ d1d1])([d1d3 ] ) as a convex optimization problem .for instance , if ^{n_{w}% } \right ) \cap s_{ji}^{k}=\left\ { \left ( x , w\right ) ~|~f_{p}\left ( x , w\right ) \geq0,\text { } h_{l}\left ( x , w\right ) = 0\right\ } , \vspace * { -0.2in}%\ ] ] then , ( [ d1d1 ] ) can be formulated as an sos optimization problem of the following form : , \text { s.t . 
}\tau_{p}% , \tau_{pq}\in\sigma\left [ x , w\right ] .\vspace*{-0.2in}%\ ] ] software packages such as sostools or yalmip can then be used for formulating the sos optimization problems as semidefinite programs .in this section we apply the framework to the analysis of program 4 displayed below.program 4 takes two positive integers ] as the input and returns their greatest common divisor by implementing the euclidean division algorithm .note that the main function in program 4 uses the integerdivision program ( program 1 ) .a global model can be constructed by embedding the dynamics of the integerdivision program within the dynamics of main .a labeled graph model is shown alongside the text of the program .this model has a state space ^{7}, ] is an element of the hypercube ^{7}. \text{property} \mathrm{q}\geq\mathrm{0} \mathrm{y}\geq\mathrm{1} \mathrm{\mathrm{dr}}\geq\mathrm{\mathrm{1}} \mathrm{\mathrm{rem}}% \geq\mathrm{0} \mathrm{\mathrm{dd}}\geq\mathrm{\mathrm{1}} \mathrm{x}\geq\mathrm{1} \mathrm{r}\geq\mathrm{0} \text{proven in round} \mathrm{\mathrm{i}} \mathrm{\mathrm{i}} \mathrm{\mathrm{i}} \mathrm{\mathrm{i}} \mathrm{\mathrm{ii}} \mathrm{\mathrm{ii}} \mathrm{\mathrm{ii}} \sigma_{\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = -\mathrm{q} \mathrm{1}-\mathrm{y} \mathrm{1}-\mathrm{dr} -\mathrm{\mathrm{rem}} \mathrm{1}-\mathrm{dd} \mathrm{1}-\mathrm{x} -\mathrm{r} \left( \theta_{\mathrm{f}2\mathrm{f}2}^{1},\mu_{\mathrm{f}2\mathrm{f}2}% ^{1}\right ) \left ( 1,1\right ) \left ( 1,0\right ) \left ( 1,0\right ) \left ( 1,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( \theta_{\mathrm{f}2\mathrm{f}2}^{2},\mu_{\mathrm{f}2\mathrm{f}2}% ^{2}\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) \left ( 0,0\right ) ] . by applying theorem [ thm : graphnodepoly ] and sos programming using yalmip ,the following invariants are found and different rounding schemes lead to different invariants .note that rounding is not essential . ]( after post - processing , rounding the coefficients , and reverifying ) : {llll}% \sigma_{1\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = 0.4\left ( \mathrm{y}% -\mathrm{m}\right ) \left ( 2+\mathrm{m}-\mathrm{r}\right ) & & & \sigma_{2\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = \left ( \mathrm{q}% \times\mathrm{y}+\mathrm{r}\right ) ^{2}-\mathrm{m}^{2}\\ \sigma_{3\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = \left ( \mathrm{q}% + \mathrm{r}\right ) ^{2}-\mathrm{m}^{2 } & & & \sigma_{4\mathrm{f}_{2}% } \left ( \mathrm{x}\right ) = 0.1\left ( \mathrm{y}-\mathrm{m}+5\mathrm{y}% \times\mathrm{m}+\mathrm{y}^{2}-6\mathrm{m}^{2}\right ) \\\sigma_{5\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = \mathrm{y}+\mathrm{r}% -2\mathrm{m}+\mathrm{y}\times\mathrm{m}-\mathrm{m}^{{\normalsize 2 } } & & & \sigma_{6\mathrm{f}_{2}}\left ( \mathrm{x}\right ) = \mathrm{r}\times \mathrm{y}+\mathrm{y}-\mathrm{m}^{2}-\mathrm{m}% \end{array}\ ] ] the properties proven by these invariants are summarized in the table iii .the specifications that the program terminates and that ^{7} ] could not be established in one shot , at least when trying polynomials of degree for instance , certifies boundedness of all the variables except while and which certify boundedness of all variables including do not certify ftt .furthermore , boundedness of some of the variables is established in round ii , relying on boundedness properties proven in round i. 
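The invariants reported above were found by SOS programming in YALMIP. As a much smaller, self-contained illustration of the same idea, namely searching for a certificate that decreases along the program dynamics and then checking the difference inequality, the sketch below computes a quadratic Lyapunov invariant for a toy linear loop using SciPy's discrete Lyapunov solver. The loop, the matrix, and the decrease rate are invented for illustration; the actual case study relies on the SOS machinery described above rather than on this shortcut, which only applies to linear updates.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy loop body: while x'x > 1e-4:  x = A x   (a Schur-stable linear update)
A = np.array([[0.9, 0.1],
              [-0.2, 0.8]])

# Solve A' P A - P = -I, so that V(x) = x' P x strictly decreases along the loop.
P = solve_discrete_lyapunov(A.T, np.eye(2))

# Check the Lyapunov difference inequality V(Ax) - V(x) + |x|^2 <= 0 on random samples.
rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(10_000):
    x = rng.standard_normal(2)
    dV = x @ A.T @ P @ A @ x - x @ P @ x
    worst = max(worst, dV + x @ x)       # should stay at zero up to round-off
print("P =\n", P)
print("max of V(Ax) - V(x) + |x|^2 over samples:", worst)

# In the spirit of the iteration bounds tabulated above, V(x0) divided by the minimum
# guaranteed decrease per step gives an upper bound on the number of loop iterations.
```

For nonlinear or mixed-integer program models the same search is carried out over a parameterized family of candidate invariants, with the S-procedure or SOS relaxations replacing the exact Lyapunov equation used here.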
given ( which is found in round i ) , second round verification can be done by searching for a strictly positive polynomial and a nonnegative polynomial satisfying: where the inequality ( [ eq : roundii ] ) is further subject to boundedness properties established in round i , as well as the usual passport conditions and basic invariant set conditions .{|r|c|c|c|c|}\hline invariant & & & & \\\hline & & & & \\\hline & & & & \\\hline \multicolumn{1}{|l|}{round i : } & & & & \\\hline \multicolumn{1}{|l|}{round ii: } & & & & \\\hline certificate for ftt & no & no & no & yes , \\\hline \end{tabular}\end{gathered}\ ] ] in conclusion , or in conjunction with or prove finite - time termination of the algorithm , as well as boundedness of all variables within ] for any the provable bound on the number of iterations certified by and is ( corollary [ safetygraphcor1]) if we settle for more conservative specifications , e.g. , ^{7} ] and sufficiently large then it is possible to prove the properties in one shot .we show this in the next section . for comparison , we also constructed the mil - gh model associated with the reduced graph in figure [ fig : redmodel ] .the corresponding matrices are omitted for brevity , but details of the model along with executable matlab verification codes can be found in .the verification theorem used in this analysis is an extension of theorem [ milp_correctness_theorem ] to analysis of mil - ghm for specific numerical values of though it is certainly possible to perform this modeling and analysis exercise for parametric bounded values of the analysis using the mil - ghm is in general more conservative than sos optimization over the graph model presented earlier .this can be attributed to the type of relaxations proposed ( similar to those used in lemma [ milp_invariance_lmi ] ) for analysis of milms and mil - ghms .the benefits are simplified analysis at a typically much less computational cost .the certificate obtained in this way is a single quadratic function ( for each numerical value of ) , establishing a bound satisfying table iv summarizes the results of this analysis which were performed using both sedumi 1_3 and lmilab solvers .{|r|c|c|c|c|c|}\hline & & & & & \\\hline solver : lmilab \cite{gahinetlmilab } : & & & & & \\\hline solver : sedumi \cite{strum1999 } : & & & & & nan\\\hline & & & & & \\\hline & & & & & \\\hline upperbound on iterations & e & e & e & e & e\\\hline \end{tabular}\end{gathered}\ ] ] the preceding results were obtained by analysis of a global model which was constructed by embedding the internal dynamics of the program s functions within the global dynamics of the main function .in contrast , the idea in _ modular analysis _ is to model software as the interconnection of the program s `` building blocks '' or `` modules '' , i.e. , functions that interact via a set of _ global _ variables .the dynamics of the functions are then abstracted via input / output behavioral models , typically constituting equality and/or inequality constraints relating the input and output variables . in our framework , the invariant sets of the terminal nodes of the modules ( e.g. 
, the set associated with the terminal node in program 4 ) provide such i / o model .thus , richer characterization of the invariant sets of the terminal nodes of the modules are desirable .correctness of each sub - module must be established separately , while correctness of the entire program will be established by verifying the unreachability and termination properties w.r.t .the global variables , as well as verifying that a terminal global state will be reached in finite - time .this way , the program variables that are _ private _ to each function are abstracted away from the global dynamics .this approach has the potential to greatly simplify the analysis and improve the scalability of the proposed framework as analysis of large size computer programs is undertaken . in this section ,we apply the framework to modular analysis of program 4 .detailed analysis of the advantages in terms of improving scalability , and the limitations in terms of conservatism the analysis is an important and interesting direction of future research .the first step is to establish correctness of the integerdivision module , for which we obtain the function is a -invariant proving boundedness of the state variables of integerdivision .subject to boundedness , we obtain the function which is a -invariant proving termination of integerdivision .the invariant set of node can thus be characterized by ^{4}~|~\mathrm{r\leq dr-1}\right\ } \vspace*{-0.15in}%\ ] ] the next step is construction of a global model .given , the assignment at : can be abstracted by , \text{~}\mathrm{w}\leq\mathrm{y-1,}\vspace*{-0.2in}%\ ] ] allowing for construction of a global model with variables and an external state - dependent input \mathrm{,} ]we took a systems - theoretic approach to software analysis , and presented a framework based on convex optimization of lyapunov invariants for verification of a range of important specifications for software systems , including finite - time termination and absence of run - time errors such as overflow , out - of - bounds array indexing , division - by - zero , and user - defined program assertions .the verification problem is reduced to solving a numerical optimization problem , which when feasible , results in a certificate for the desired specification .the novelty of the framework , and consequently , the main contributions of this paper are in the systematic transfer of lyapunov functions and the associated computational techniques from control systems to software analysis . the presented work can be extended in several directions .these include understanding the limitations of modular analysis of programs , perturbation analysis of the lyapunov certificates to quantify robustness with respect to round - off errors , extension to systems with software in closed loop with hardware , and adaptation of the framework to specific classes of software .1 . 
trigonometric functions : abstraction of trigonometric functions can be obtained by approximation of the taylor series expansion followed by representation of the absolute error by a static bounded uncertainty .for instance , an abstraction of the function can be constructed as follows:{|l|l|l|}\hline abstraction of & & \\\hline & & \\\hline & & \\\hline \end{tabular}\ ] ] abstraction of is similar .it is also possible to obtain piecewise linear abstractions by first approximating the function by a piece - wise linear ( pwl ) function and then representing the absolute error by a bounded uncertainty .section [ section : specmodels ] ( proposition [ milm - prop ] ) establishes universality of representation of generic pwl functions via binary and continuous variables and an algorithmic construction can be found in .for instance , if ] in the following way : \times\left\ { -1,1\right\ } \right\ } \vspace*{-0.25in}%\ ] ] more on the systematic construction of function abstractions including those related to floating - point , fixed - point , or modulo arithmetic can be found in the report .[ of proposition [ ftt2]]note that ( [ softa2a1])([softa2a3 ] ) imply that is negative - definite along the trajectories of except possibly for which can be zero when let be any solution of since is uniformly bounded on , we have: now , assume that there exists a sequence of elements from satisfying ( [ softa1 ] ) , but not reaching a terminal state in finite time .that is , then , it can be verified that if where is given by ( [ bnd on no .itrn . ] ) , we must have : which contradicts boundedness of [ of theorem [ bddness]]assume that has a solution where and let first , we claim that if we have and if have and hence the claim .now , consider the case since is monotonically decreasing along solutions of we must have: which contradicts ( [ inf g than sup]) note that if and then ( [ inf l than sup ] ) holds as a strict inequality and we can replace ( [ inf g than sup ] ) with its non - strict version .next , consider case for which , need not be monotonic along the trajectories .partition into two subsets and such that and now , assume that has a solution where and since and we have therefore, which contradicts ( [ inf g than sup]) next , assume that has a solution where and in this case , regardless of the value of we must have implying that and hence , contradicting ( [ inf g than zero]) note that if and either or then ( [ inf g than zero ] ) can be replaced with its non - strict version .finally , consider case .due to ( [ sup l than zero ] ) , is strictly monotonically decreasing along the solutions of the rest of the argument is similar to the case .[ of corollary [ bddness and ftt]]it follows from ( [ three3 ] ) and the definition of that: it then follows from ( [ o69 ] ) and ( [ one1 ] ) that: hence , the first statement of the corollary follows from theorem [ bddness ] .the upperbound on the number of iterations follows from proposition [ ftt2 ] and the fact that [ of corollary [ safetygraphcor1]]the unreachability property follows directly from theorem [ bddness ] .the finite time termination property holds because it follows from ( [ arcwiselyap ] ) , ( [ sgc1 ] ) and ( [ multiplicativetheta ] ) along with proposition [ ftt2 ] , that the maximum number of iterations around every simple cycle is finite . the upperbound on the number of iterations is the sum of the maximum number of iterations over every simple cycle . 
[ of lemma [ mipl_invariance_lemma]]define where ^{n}, ] recall that and that for all satisfying there holds : it follows from proposition [ prop : milmlyap ] that ( [ softa2 ] ) holds if: ^{n+n_{w}},\text { } l_{4}x_{e}\in\left\ { -1,1\right\ } ^{n_{v}}.\vspace * { -0.1in}\label{milp_1}%\ ] ] recall from the -procedure ( ( [ needs - s - procedure ] ) and ( [ s - procedure - sufficient ] ) ) that the assertion ^{n}$ ] holds if there exists nonnegative constants such that similarly , the assertion holds if there exists a diagonal matrix such that applying these relaxations to ( [ milp_1 ] ) , we obtain sufficient conditions for ( [ milp_1 ] ) to hold: together with the above condition is equivalent to the lmis in lemma [ milp_invariance_lmi ] .99 r. alur , c. courcoubetis , n. halbwachs , t. a. henzinger , p .- h .ho x. nicollin , a. oliviero , j. sifakis , and s. yovine . the algorithmic analysis of hybrid systems , _ theoretical computer science _ , vol .138 , pp . 334 , 1995 .b. blanchet , p. cousot , r. cousot , j. feret , l. mauborgne , a. min , d. monniaux , and x. rival .design and implementation of a special - purpose static program analyzer for safety - critical real - time embedded software .lncs v. 2566 , pp .85108 , springer - verlag , 2002 .e. m. clarke , o. grumberg , h. hiraishi , s. jha , d.e .long , k.l .mcmillan , and l.a .verification of the future - bus+cache coherence protocol . in _formal methods in system design _ , 6(2):217232 , 1995 .p. cousot , and r. cousot .abstract interpretation : a unified lattice model for static analysis of programs by construction or approximation of fixpoints . in _4th acm sigplan - sigact symposium on principles of programming languages _ , pages 238252 , 1977 .g. naumovich , l. a. clarke , and l. j. osterweil .verification of communication protocols using data flow analysis . in _ proc .4-th acm sigsoft symposium on the foundation of software engineering _ ,pages 93105 , 1996 .nesterov , h. wolkowicz , and y. ye .semidefinite programming relaxations of nonconvex quadratic optimization . in _handbook of semidefinite programming : theory , algorithms , and applications_. dordrecht , kluwer academic press , pp .361419 , 2000 .p. a. parrilo .minimizing polynomial functions . in _algorithmic and quantitative real algebraic geometry_. dimacs series in discrete mathematics and theoretical computer science , v. 60 , pp .83 - 100 , 2003 .m. roozbehani , a. megretski and , e. feron .optimization of lyapunov invariants in analysis of software systems , available at http://web.mit.edu/mardavij/www/publications.html also available at http://arxive.org m. roozbehani , a. megretski , e. frazzoli , and e. feron .distributed lyapunov functions in analysis of graph models of software . in hybrid systems :computation and control , springer lncs 4981 , pp 443 - 456 , 2008 .
the paper proposes a control - theoretic framework for verification of numerical software systems , and puts forward software verification as an important application of control and systems theory . the idea is to transfer lyapunov functions and the associated computational techniques from control systems analysis and convex optimization to the verification of various software safety and performance specifications . these include , but are not limited to , absence of overflow , absence of division - by - zero , termination in finite time , presence of dead - code , and certain user - specified assertions . central to this framework are lyapunov invariants . these are properly constructed functions of the program variables that satisfy certain properties , resembling those of lyapunov functions , along the execution trace . the search for the invariants can be formulated as a convex optimization problem . if the associated optimization problem is feasible , the result is a certificate for the specification . keywords : software verification , lyapunov invariants , convex optimization .
is well known that huffman coding yields a prefix code minimizing expected length for a known finite probability mass function .less well known are the many variants of this algorithm that have been proposed for related problems .for example , in his doctoral dissertation , humblet discussed two problems in queueing that have nonlinear terms to minimize .these problems , and many others , can be reduced to a certain family of generalizations of the huffman problem introduced by campbell in . in all such source coding problems, a source emits symbols drawn from the alphabet , where is an integer ( or possibly infinity ) .symbol has probability , thus defining probability mass function .we assume without loss of generality that for every , and that for every ( ) .the source symbols are coded into codewords composed of symbols of the -ary alphabet , most often the binary alphabet , . the codeword corresponding to symbol has length , thus defining length distribution .finding values for is sufficient to find a corresponding code .huffman coding minimizes .campbell s formulation adds a continuous ( strictly ) monotonic increasing _ _ cost function . the value to minimize is then campbell called ( [ campcost ] )the `` mean length for the cost function '' ; for brevity , we refer to it , or any value to minimize , as the _ _ penalty .penalties of the form ( [ campcost ] ) are called _ _ quasiarithmetic or _ _ quasilinear ; we use the former term in order to avoid confusion with the more common use of the latter term in convex optimization theory . notethat such problems can be mathematically described if we make the natural coding constraints explicit : the integer constraint , , and the kraft ( mcmillan ) inequality , given these constraints , examples of in ( [ campcost ] ) include a quadratic cost function useful in minimizing delay due to queueing and transmission , for nonnegative and , and an exponential cost function useful in minimizing probability of buffer overflow , for positive .these and other examples are reviewed in the next section .campbell noted certain properties for convex , such as those examples above , and others for concave .strictly concave penalize shorter codewords more harshly than the linear function and penalize longer codewords less harshly .conversely , strictly convex penalize longer codewords more harshly than the linear function and penalize shorter codewords less harshly .convex need not yield convex , although is clearly convex if and only if is .note that one can map decreasing to a corresponding increasing function without changing the value of ( e.g. , for ) .thus the restriction to increasing can be trivially relaxed. we can generalize by using a two - argument cost function instead of , as in ( [ cost ] ) , and adding to its range .we usually choose functions with the following property : a cost function and its associated penalty are _ _ differentially monotonic if , for every , whenever is finite and , .[ difmon ] this property means that the contribution to the penalty of an bit in a codeword will be greater if the corresponding event is more likely .clearly any will be differentially monotonic .this restriction on the generalization will aid in finding algorithms for coding such cost functions , which we denote as _ _ generalized quasiarithmetic penalties : let \rightarrow \rp \cup \{\infty\} ] for some .( let to show the following argument is valid over all . 
)we assume is also real analytic ( with respect to interval ) .thus all derivatives of the function and its inverse are bounded .given real analytic cost function and its real analytic inverse , is subtranslatory if and only if , for all positive and all positive summing to , where is the derivative of .first note that , since all values are positive , inequality ( [ transsuff ] ) is equivalent to we show that , when ( [ theq ] ) is true everywhere , is subtranslatory , and then we show the converse .let .using power expansions of the form on and , step ( a ) is due to ( [ theq ] ) , step ( b ) due to the power expansion on , step ( c ) due to the power expansion on , and step ( d ) due to the power expansion on ( where the bounded derivative of allows for the asymptotic term to be brought outside the function ) .next , evoke the above inequality times : taking , thus , the fact of ( [ transsuff ] ) is sufficient to know that the penalty is subtranslatory . to prove the converse ,suppose for some valid and .because is analytic , continuity implies that there exist and such that for all .the chain of inequalities above reverse in this range with the additional multiplicative constant . thus ( [ phiphi ] ) becomes for , and ( [ epsineq ] ) becomes , for any , which , taking , similarly leads to and thus the subtranslatory property fails and the converse is proved .therefore , for satisfying ( [ transsuff ] ) , we have the bounds of ( [ fbound ] ) for the optimum solution .note that the right - hand side of ( [ transsuff ] ) may also be written ; thus ( [ transsuff ] ) indicates that the average derivative of at the codeword length values is at most the derivative of at the value of the penalty for those length values .the linear and exponential penalties satisfy these equivalent inequalities with equality .another family of cost functions that satisfies the subtranslatory property is for fixed , which corresponds to proving this involves noting that lyapunov s inequality for moments of a random variable yields which leads to which , because , is the inequality we desire . another subtranslatory penalty is the quadratic quasiarithmetic penalty of ( [ quadratic ] ) , in which for .this has already been shown for ; when , we achieve the desired inequality through algebra : we thus have an important property that holds for several cases of interest . one might be tempted to conclude that every or every convex and/or concave is subtranslatory .however , this is easily disproved .consider convex .using cardano s formula , it is easily seen that ( [ transsuff ] ) does not hold for and .the subtranslatory test also fails for .thus we must test any given penalty for the subtranslatory property in order to use the redundancy bounds . because all costs are positive, the redundancy bounds that are a result of a subtranslatory penalty extend to infinite alphabet codes in a straightforward manner .these bounds thus show that a code with finite penalty exists if and only if the generalized entropy is finite , a property we extend to nonsubtranslatory penalties in the next subsection .however , one must be careful regarding the meaning of an `` optimal code '' when there are an infinite number of possible codes satisfying the kraft inequality with equality .must there exist an optimal code , or can there be an infinite sequence of codes of decreasing penalty without a code achieving the limit penalty value ? 
fortunately , the answer is the former , as the existence results of linder , tarokh , and zeger in can be extended to quasiarithmetic penalties .consider continuous strictly monotonic ( as proposed by campbell ) and such that is finite .consider , for an arbitrary , optimizing for with weights ( we call the entries to this distribution `` weights '' because they do not necessarily add up to . ) denote the optimal code a __ truncated code , one with codeword lengths thus , for convenience , for .these lengths are also optimal for , the distribution of normalized weights .following , we say that a sequence of codeword length distributions _ _ converges to an infinite prefix code with codeword lengths if , for each , the length in each distribution in the sequence is eventually ( i.e. , if each sequence converges to ) .given quasiarithmetic increasing and such that is finite , the following hold : 1 .there exists a sequence of truncated codeword lengths that converges to optimal codeword lengths for ; thus the infimum is achievable .2 . any optimal code for must satisfy the kraft inequality with equality .because here we are concerned only with cases in which the first length is at least , we may restrict ourselves to the domain .recall then there exists near - optimal such that and thus , for any integer , , using this to approximate the behavior of a minimizing , we have yielding an upper bound on terms for all .this implies thus , for any , the sequence is bounded for all , and thus has a finite set of values ( including ) .it is shown in that this sufficies for the desired convergence , but for completeness a slightly altered proof follows .because each sequence has a finite set of values , every infinite indexed subsequence for a given has a convergent subsequence .an inductive argument implies that , for any , there exists a subsequence indexed by such that converges for all , where is a subsequence of for .codeword length distributions ( which we call ) thus converge to the codeword lengths of an infinite code with codeword lengths .clearly each codeword length distribution satisfies the kraft inequality .the limit does as well then ; were it exceeded , we could find such that and thus such that causing a contradiction .we now show that is optimal .let be the codeword lengths of an arbitrary prefix code .for every , there is a such that for any if . due to the optimality of each , for all : and , taking , , leading directly to and the optimality of .suppose the kraft inequality is not satisfied with equality for optimal codeword lengths .we can then produce a strictly superior code .there is a such that .consider code .this code satisfies the kraft inequality and has penalty .thus is not optimal .therefore the kraft inequality must be satisfied with equality for optimal infinite codes .note that this theorem holds not just for subtranslatory penalties , but for any quasiarithmetic penalty .recall the definition of ( [ entropy ] ) , for .if is finite and either is subtranslatory or ( which includes all concave and all polynomial ) , then the coding problem of ( [ ccstar ] ) , has a minimizing resulting in a finite value for . if is subtranslatory , then . 
if , then there are such that for all .then so and the infimum , which we know to also be a minimum , is finite .we now examine algorithms for finding minimum penalty codes for convex cases with finite alphabets .we first present a notation for codes based on an approach of larmore .this notation is an alternative to the well known code tree notation , e.g. , , and it will be the basis for an algorithm to solve the generalized quasiarithmetic ( and thus campbell s quasiarithmetic ) convex coding problem . in the literature nodeset notation is generally used for binary alphabets , not for general alphabet coding . although we briefly sketch how to adapt this technique to general output alphabet coding at the end of subsection [ refine ] , an approach fully explained in , until then we concentrate on the binary case ( ) ._ the key idea : _ each node represents both the share of the penalty ( weight ) and the share of the kraft sum ( width ) assumed for the bit of the codeword .if we show that total weight is an increasing function of the penalty and show a one - to - one correspondence between optimal nodesets and optimal codes , we can reduce the problem to an efficiently solvable problem , the coin collector s problem . in order to do this ,we first assume bounds on the maximum codeword length of possible solutions , e.g. , the maximum unary codeword length of .alternatively , bounds might be explicit in the definition of the problem .consider for example the length - limited coding problems of ( [ llhuff ] ) and ( [ expll ] ) , upper bounded by .a third possibility is that maximum length may be implicit in some property of the set of optimal solutions ; we explore this in subsection [ refine ] .we therefore restrict ourselves to codes with codewords , none of which has greater length than , where ] , 2 . = nodes in with indices in ] and \times [ 1 , l_\mid-1] ] .thus we need merely to recompute which nodes are in and in . because is a subset of , and .given their respective widths , is a minimal weight subset of \times [ 1,l_{\mid}-1] ] .the nodes at each level of and may be found by recursive calls to the algorithm . in doing so, we use only space .time complexity , however , remains the same ; we replace one run of an algorithm on nodes with a series of runs , first one on nodes , then two on an average of at most nodes each , then four on , and so forth . formalizing this analysis : the above recursive algorithm for generalized quasiarithmetic convex coding has time complexity . as indicated , this recurrence relation is considered and proved in , but we analyze it here for completeness . to find the time complexity , set up the following recurrence relation : let be the worst case time to find the minimal weight subset of \times [ 1,l]$ ] ( of a given width ) , assuming the subset is monotonic .then there exist constants and such that , if we define and , and we let an adversary choose the corresponding , where is the base case. then , where is any function satisfying the recurrence which does .thus , the time complexity is .the overall complexity is space and time considering only flat classes , in general , as in table [ complexity ] .however , the assumption of distinct puts an undesirable restriction on our input . 
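The nodeset/coin-collector reduction described above is solved by the package-merge algorithm. The following self-contained Python sketch implements the standard binary, length-limited, linear-penalty version: coins of width 2^{-l} and weight p_i are packaged level by level, and the 2(n-1) cheapest width-1/2 items give the optimal lengths. For a generalized quasiarithmetic convex penalty only the coin weights change, to the per-bit cost increments discussed above; the function name and the example probabilities are our own.

```python
def package_merge_lengths(p, L):
    """Optimal codeword lengths for the linear penalty subject to l_i <= L (binary code).
    p: positive symbol weights; requires 2**L >= len(p)."""
    n = len(p)
    assert 2 ** L >= n, "no prefix code with maximum length L exists"
    packages = []                        # packages promoted from the level below
    for level in range(L, 1, -1):
        coins = [(w, [(i, level)]) for i, w in enumerate(p)]
        items = sorted(coins + packages, key=lambda t: t[0])
        # pair adjacent items; each pair becomes one package of twice the width
        packages = [(items[k][0] + items[k + 1][0], items[k][1] + items[k + 1][1])
                    for k in range(0, len(items) - 1, 2)]
    # level 1: every item has width 1/2, so total width n - 1 means selecting 2(n - 1) items
    items = sorted([(w, [(i, 1)]) for i, w in enumerate(p)] + packages,
                   key=lambda t: t[0])
    lengths = [0] * n
    for _, bundle in items[: 2 * (n - 1)]:
        for i, _level in bundle:
            lengths[i] += 1              # length of symbol i = number of its coins selected
    return lengths

if __name__ == "__main__":
    p = [0.5, 0.25, 0.125, 0.125]
    print(package_merge_lengths(p, 3))   # -> [1, 2, 3, 3], same as unconstrained Huffman coding
    print(package_merge_lengths(p, 2))   # -> [2, 2, 2, 2], lengths capped at 2
```

Since Python's sort is stable, ties are broken by the order in which the coins are listed, which is in the spirit of the deterministic tie-breaking scheme discussed next.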
in their original algorithm from ,larmore and hirschberg suggest modifying the probabilities slightly to make them distinct , but this is unnecessarily inelegant , as the resulting algorithm has the drawbacks of possibly being slightly nonoptimal and being nondeterministic ; that is , different implementations of the algorithm could result in the same input yielding different outputs .a deterministic variant of this approach could involve modifications by multiples of a suitably small variable to make identical values distinct . in , another method of tie - breakingis presented for alphabetic length - limited codes . here , we present a simpler alternative analogous to this approach , one which is both deterministic and applicable to all differentially monotonic instances .recall that is a nonincreasing vector .thus items of a given width are sorted for use in the package - merge algorithm ; use this order for ties .for example , if we use the nodes in fig .[ nodesetnum ] , with probability , then nodes and are the first to be paired , the tie between and broken by order .thus , at any step , all identical - width items in one package have adjacent indices .recall that packages of items will be either in the final nodeset or absent from it as a whole .this scheme then prevents any of the nonmonotonicity that identical might bring about . in order to ensure that the algorithm is fully deterministic whether or not the linear - space version is used the manner in which packages and single items are merged must also be taken into account .we choose to merge nonmerged items before merged items in the case of ties , in a similar manner to the two - queue bottom - merge method of huffman coding .thus , in our example , the node is chosen whereas the package of items and is not .this leads to the optimal length vector , rather than or , which are also optimal . as in bottom - merge huffman coding ,the code with the minimum reverse lexicographical order among optimal codes is the one produced .this is also the case if we use the position of the `` last '' node in a package ( in terms of the value of ) in order to choose those with lower values , as in .however , the above approach , which is easily shown to be equivalent via induction , eliminates the need for keeping track of the maximum value of for each package . in this caseusing a bottom - merge - like coding method has an additional benefit : we no longer need assume that all to assure that the nodeset is a valid code . in finding optimal binary codes , of course , it is best to ignore an item with .however , consider nonbinary output alphabets , that is , .as in huffman coding for such alphabets , we must add `` dummy '' values of to assure that the optimal code has the kraft inequality satisfied with equality , an assumption underlying both the huffman algorithm and ours .the number of dummy values needed is where and where the dummy values each consist of nodes , each node with the proper width and with weight . 
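the count of dummy values can be written compactly; the helper below is our own formulation of the standard d-ary padding rule and may differ in form from the paper's exact expression.

```python
def num_dummy_items(n, D):
    # pad the n items with zero-weight dummies so that the padded count n'
    # satisfies (n' - 1) mod (D - 1) == 0, the usual condition for a full
    # D-ary code tree; for D = 2 this is always 0.
    return (1 - n) % (D - 1)
```

each dummy would then be expanded, as described above, into nodes of the appropriate widths and zero weight before the package-merge step is run.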
with this preprocessing step ,finding an optimal code should proceed similarly to the binary case , with adjustments made for both the package - merge algorithm and the overall coding algorithm due to the formulation of the kraft inequality and maximum length .a complete algorithm is available , with proof of correctness , in .note that we have assumed for all variations of this algorithm that we knew a maximum bound for length , although in the overall complexity analysis for binary coding we assumed this was ( except for flat classes ) .we now explore a method for finding better upper bounds and thus a more efficient algorithm .first we present a definition due to larmore : consider penalty functions and .we say that is _ _ flatter than if , for probabilities and and positive integers and where , ( where , again , ) .a consequence of the convex hull theorem of is that , given flatter than , for any , there exist -optimal and -optimal such that is greater lexicographically than ( again , with lengths sorted largest to smallest ) .this explains why the word `` flatter '' is used .thus , for penalties flatter than the linear penalty , we can obtain a useful upper bound , reducing complexity .all convex quasiarithmetic penalties are flatter than the linear penalty .( there are some generalized quasiarithmetic convex coding penalties that are not flatter than the linear penalty e.g. , and some flatter penalties that are not campbell / quasiarithmetic e.g. , so no other similarly straightforward relation exists . ) for most penalties we have considered , then , we can use the upper bounds in or the results of a pre - algorithmic huffman coding of the symbols to find an upper bound on codeword length . a problem in which pre - algorithmic huffman coding would be useful is delay coding , in which the quadratic penalty ( [ quadratic ] ) is solved for values of and .in this application , only one traditional huffman coding would be necessary to find an upper bound for all quadratic cases . with other problems, we might wish to instead use a mathematically derived upper bound . 
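one illustrative reading of the "pre-algorithmic huffman coding" idea is sketched below: run an ordinary (linear-penalty) binary huffman pass and take the longest codeword as the length bound handed to package-merge. ties between equal weights can change which optimal tree is built, so this is a heuristic sketch rather than the paper's prescription; a cautious implementation might prefer the bottom-merge variant discussed earlier.

```python
import heapq

def huffman_max_length(p):
    # each heap entry is (subtree weight, depth of its deepest leaf); merging
    # two subtrees pushes every leaf one level down, so the deepest-leaf
    # depth of the merged subtree is max(d1, d2) + 1.
    heap = [(w, 0) for w in p]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, d1 = heapq.heappop(heap)
        w2, d2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, max(d1, d2) + 1))
    return heap[0][1]
```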
using the maximum unary codeword length of and techniques involving the golden mean, , buro in gives the upper limit of length for a ( standard ) binary huffman codeword as which would thus be an upper limit on codeword length for the minimal optimal code obtained using any flatter penalty function , such as a convex quasiarithmetic function .this may be used to reduce complexity , especially in a case in which we encounter a flat class of problem inputs .in addition to this , one can improve this algorithm by adapting the binary length - limited huffman coding techniques of moffat ( with others ) in .we do not explore these , however , as these can not improve asymptotic results with the exception of a few special cases .other approaches to length - limited huffman coding with improved algorithmic complexity are not suited for extension to nonlinear penalties .with a similar approach to that taken by shannon for shannon entropy and campbell for rnyi entropy , one can show redundancy bounds and related properties for optimal codes using campbell s quasiarithmetic penalties and generalized entropies .for convex quasiarithmetic costs , building upon and refining larmore and hirschberg s methods , one can construct efficient algorithms for finding an optimal code .such algorithms can be readily extended to the generalized quasiarithmetic convex class of penalties , as well as to the delay penalty , the latter of which results in more quickly finding an optimal code for delay channels .one might ask whether the aforementioned properties can be extended ; for example , can improved redundancy bounds similar to be found ? it is an intriguing question , albeit one that seems rather difficult to answer given that such general penalties lack a huffman coding tree structure .in addition , although we know that optimal codes for infinite alphabets exist given the aforementioned conditions , we do not know how to find them .this , as with many infinite alphabet coding problems , remains open . it would also be interesting if the algorithms could be extended to other penalties , especially since complex models of queueing can lead to other penalties aside from the delay penalty mentioned here .also , note that the monotonicity property of the examples we consider implies that the resulting optimal code can be alphabetic , that is , lexicographically ordered by item number .if we desire items to be in a lexicographical order different from that of probability , however , the alphabetic and nonalphabetic cases can have different solutions .this was discussed for the length - limited penalty in ; it might be of interest to generalize it to other penalties using similar techniques and to prove properties of alphabetic codes for such penalties .the author wishes to thank thomas cover , john gill , and andrew brzezinski for feedback on this paper and research , as well as the anonymous reviewers for their numerous suggestions for improvement .discussions and comments from the stanford information theory group and benjamin van roy are also greatly appreciated , as is encouragement from brian marcus . 
herewe illustrate and prove the correctness of a recursive version of package - merge algorithm for solving the coin collector s problem .this algorithm was first presented in , which also has a linear - time iterative implementation .restating the coin collector s problem : in our notation , we use to denote both the index of a coin and the coin itself , and to represent the items along with their weights and widths . the optimal solution , a function of total width and items , is denoted .note that we assume the solution exists but might not be unique . in the case of distinct solutions , tie resolution for minimizing arguments may for now be arbitrary or rule - based ; we clarify this in subsection [ linear ] .a modified version of the algorithm considers the case where a solution might not exist , but this is not needed here . because a solution exists , assuming , for some unique odd and .( note that need not be an integer . if , and are not defined . ) _ case 2b . , , and : _ create , a new item with weight and width .this new item is thus a combined item , or _ _ package , formed by combining items and .let ( the optimization of the packaged version ) .if , then ; otherwise , .we show that the package - merge algorithm produces an optimal solution via induction on the depth of the recursion .the basis is trivially correct , and each inductive case reduces the number of items by one .the inductive hypothesis on and is that the algorithm is correct for any problem instance that requires fewer recursive calls than instance . if and , or if , then there is no solution to the problem , contrary to our assumption . thus all feasible cases are covered by those given in the procedure .case 1 indicates that the solution must contain an odd number of elements of width .these must include the minimum weight item in , since otherwise we could substitute one of the items with this `` first '' item and achieve improvement .case 2 indicates that the solution must contain an even number of elements of width .if this number is , neither nor is in the solution. if it is not , then they both are .if , the number is , and we have case 2a . if not , we may `` package '' the items , considering the replaced package as one item , as in case 2b . thus the inductive hypothesis holds and the algorithm is correct .[ pm ] presents a simple example of this algorithm at work , finding minimum total weight items of total width ( or , in binary , ) . in the figure , item width represents numeric width and item area represents numeric weightinitially , as shown in the top row , the minimum weight item with width is put into the solution set .then , the remaining minimum width items are packaged into a merged item of width ( ) .finally , the minimum weight item / package with width is added to complete the solution set , which is now of weight .the remaining packaged item is left out in this case ; when the algorithm is used for coding , several items are usually left out of the optimal set .a. rnyi , _ a diary on information theory_.1em plus 0.5em minus 0.4emnew york , ny : john wiley & sons inc . ,1987 , original publication : _ napl az informcielmletrl _ , gondolat , budapest , hungary , 1976 .j. aczl , `` on shannon s inequality , optimal coding , and characterizations of shannon s and rnyi s entropies , '' in _ symposia mathematica _ , vol .15.1em plus 0.5em minus 0.4em new york , ny : academic press , 1973 , pp . 153179 .j. katajainen , a. moffat , and a. 
turpin , `` a fast and space - economical algorithm for length - limited coding , '' in _ proceedings of the international symposium on algorithms and computation _ , dec . 1995 , pp . 12 - 21 . a. aggarwal , b. schieber , and t. tokuyama , `` finding a minimum - weight k - link path on graphs with the concave monge property and applications , '' _ discrete and computational geometry _ , 263 - 280 , 1994 .
huffman coding finds a prefix code that minimizes mean codeword length for a given probability distribution over a finite number of items . campbell generalized the huffman problem to a family of problems in which the goal is to minimize not mean codeword length but rather a generalized mean of the form , where denotes the length of the codeword , denotes the corresponding probability , and is a monotonically increasing cost function . such generalized means also known as quasiarithmetic or quasilinear means have a number of diverse applications , including applications in queueing . several quasiarithmetic - mean problems have novel simple redundancy bounds in terms of a generalized entropy . a related property involves the existence of optimal codes : for `` well - behaved '' cost functions , optimal codes always exist for ( possibly infinite - alphabet ) sources having finite generalized entropy . solving finite instances of such problems is done by generalizing an algorithm for finding length - limited binary codes to a new algorithm for finding optimal binary codes for any quasiarithmetic mean with a convex cost function . this algorithm can be performed using quadratic time and linear space , and can be extended to other penalty functions , some of which are solvable with similar space and time complexity , and others of which are solvable with slightly greater complexity . this reduces the computational complexity of a problem involving minimum delay in a queue , allows combinations of previously considered problems to be optimized , and greatly expands the space of problems solvable in quadratic time and linear space . the algorithm can be extended for purposes such as breaking ties among possibly different optimal codes , as with bottom - merge huffman coding . optimal prefix code , huffman algorithm , generalized entropies , generalized means , quasiarithmetic means , queueing .
astronomy is increasingly becoming a computationally intensive field due to the ever larger datasets delivered by observational efforts to map ever larger volumes and provide ever finer details of the universe . in consequence ,conventional methods are often inadequate , requiring the development of new data reduction techniques .the x/-ray spectrometer , aboard the observatory , perfectly illustrates this trend .the telescope is dedicated to the analysis of both point - sources and diffuse emissions , with a high energy resolution .its imaging capabilities rely on a coded - mask aperture and a specific observation strategy based on a dithering procedure .after several years of operation , it also becomes important to be able to handle simultaneously all the data , in order , for example , to get a global view of the galaxy emission and to determine the contribution of the various emission components .the sky imaging with is not direct .the standard data analysis consists in adjusting a model of the sky and instrumental background to the data through a chi - square function minimization or a likelihood function maximization .the related system of equations is then solved for the intensities of both sources and background .the corresponding sky images are very incomplete and contain only the intensities of some selected sky sources but not the intensities in all the pixels of the image .hence , images obtained by processing small subsets of data simultaneously can not always be combined together ( co - added ) to produce a single image .instead , in order to retrieve the low signal - to - noise ratio sources or to map the low surface brightness `` diffuse '' emissions , one has to process simultaneously several years of data and consequently to solve a system of equations of large dimension .grouping all the data containing a signal related to a given source of the sky allows to maximize the amount of information on this specific source and to enhance the contrast between the sky and the background .ideally , the system of equations that connects the data to the sky model ( where the unknown parameters are the pixels intensities ) should be solved for both source intensities and variability timescales .this problem , along with the description and treatment of sources variability , is the subject of another paper .it is mandatory , for example when studying large - scale and weak structures in the sky , to be able to process large amounts of data simultaneously .the spatial ( position ) and temporal ( variability ) description of sources leads to the determination of several tens of thousands of parameters , if years of data are processed at the same time .consequently , without any optimization , the systems to be solved can exceed rapidly the capacities of most conventional machines . 
in this paperwe describe a technique for handling such large datasets .is a spectrometer provided with an imaging system sensitive both to point - sources and extended source / diffuse emission .the instrument characteristics and performance can be found in and .data are collected thanks to 19 high purity ge detectors illuminated by the sky through a coded - mask .the resulting field - of - view ( fov ) is and the energy ranges from 20 kev to 8 mev .the instrument can locate intense sources with an accuracy of a few arc minutes .the coded mask consists of elements which are opaque ( made of tungsten ) or transparent to the radiation .photons coming from a certain direction cast a shadow of the mask onto the detectors plane .the shadowgram depends on the direction of the source ( figure [ fig : shadowgram ] ) .the recorded counts rate in each detector of the camera is the sum of the contribution from all the sources in the fov .the deconvolution consists of solving a system of equation which relates a sky model to the data through a transfer function . in the case of, the imaging properties rely on the coded aperture , but also on a specific observing strategy : the dithering .the reconstruction of all the pixels of the sky image enclosed in the fov is not possible from a single exposure .indeed , dividing the sky into pixels ( the angular resolution ) , we obtain , for a fov, unknowns . however , a single exposure contains only 19 data values which are the number of observed counts in the 19 ge detectors and does not permit us to obtain the parameters necessary to determine the model of the sky and background .the related system of equations is thus undetermined .the dithering observation technique aims to overcome this difficulty . by introducing multiple exposures for a given field that are shifted by an offset that is small compared to the size of the fov , it is possible to increase the number of equations , by grouping exposures , until the system becomes determined and thus solvable .an appropriate dithering strategy has been used where the spacecraft continuously follows a dithering pattern throughout an observation . in general, the pointing direction varies around a target by steps of within a five - by - five square or a seven - point hexagonal pattern .a pointing ( exposure ) lasts between 30 and 60 minutes .thus , the dithering allows to construct a solvable system of equations .however , in addition to the variable instrumental background , sources are also variable on various timescales ranging from hours ( roughly the duration of an exposure ) to years .this is not a major problem at high energy ( e 100 kev ) , since there are only few emitting sources , whose intensities are rather stable in time with respect to the statistics . at lower energies ( e 100 kev ) and in most cases , the source intensities vary during the time spanned by the all the exposures .the chi - square , of the associated least - square problem , for this group can be relatively high , if sources intensity variations are not taken into account . in spite of this, it is possible to include a model of the source intensity variations in the formulation of the problem and to re - optimize the system of equations accordingly . 
nevertheless , including sources variability in the system of equations increases the number of unknowns to determine ( [ sec : material : foundation : problem ] ) since intensities , in each `` time - bin '' ( a segment of time where the intensity of a given source does not change statistically ) , are to be determined simultaneously along with the parameters which model the instrumental background .it is impossible from a single exposure ( 19 data values ) to obtain the sky image in the fov ; only a coarse image containing at most 19 sources can be obtained .this coarse image is under - sampled and contains information on only 19 pixels ( there is no information on the other pixels ) .hence , images can not always be combined together ( co - added ) to produce a single image .furthermore , crowded regions like the galactic center contain hundreds of sources and thus a single exposure can not provide the amount of information needed , even to build only a coarse image .the grouping of the exposures , by selecting all those that contain signal on a specific sky target , can provide the necessary information .the fov spanned by these exposures is large ( to ) and contains numerous sources .the signal ( counts and energies ) recorded by the camera on the 19 ge detectors is composed of contributions from each source in the fov plus the background . for sources located in the field of view, the data obtained from detector during an exposure ( pointing ) , for a given energy band , can be expressed by the relation : where is the response of the instrument for source ( function of the source direction relative to the telescope pointing axis ) , is the flux of source during pointing and the background both recorded during the pointing for detector . are the measurement errors on the data , they are assumed to have zero mean , to be independent and normally distributed with a known variance ( ) ] becoming a vector of length ) .the matrix ( eq .[ eqn : h0 ] ) is to be modified accordingly . when expanding matrix , column expanded in new columns , hence the number of intensities ( unknowns ) increases .schematically ( ) is mapped into a matrix ( ) , being the sum of all sources intervals ( ) , that is the number of ( the index j=0 correspond to the background ) .matrix is related to the background while is related to the sources response .parameters and are related to background and source intensity variations with the time ( number of exposures ) .box i illustrates schematically how the matrix is derived from the matrix . finally , the relation between the data and the sky model , similarly as in eq .[ eqn : h0 ] , is physically , corresponds to the transfer function or matrix , to the data and to the unknown intensities ( sources plus background ) to be determined ( a vector of length n ) .taking into account the variability of sources and instrumental background increases the size of the system of equation and the number of unknowns , but also increases the sparsity of the matrix related to the system of equations , which means that the underlying matrices have very few non - zero entries . 
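a minimal scipy sketch of the column expansion described above is given here. the argument names and the data layout (one row per detector/pointing pair, one base column per sky or background component, and a list of segment-opening pointings per component) are illustrative assumptions, not the actual pipeline structures.

```python
import numpy as np
from scipy import sparse

def expand_time_bins(H, pointing_of_row, bin_starts):
    # H                 : base transfer matrix, one column per component
    # pointing_of_row[r]: exposure (pointing) index of data row r
    # bin_starts[j]     : pointing indices opening each time-bin of component j
    H = sparse.csc_matrix(H)
    pointing_of_row = np.asarray(pointing_of_row)
    new_cols = []
    for j in range(H.shape[1]):
        base = H.getcol(j).toarray().ravel()
        edges = list(bin_starts[j]) + [pointing_of_row.max() + 1]
        for b in range(len(edges) - 1):
            in_bin = (pointing_of_row >= edges[b]) & (pointing_of_row < edges[b + 1])
            new_cols.append(sparse.csc_matrix((base * in_bin)[:, None]))
    return sparse.hstack(new_cols, format="csr")   # the expanded matrix H'
```

the expanded matrix then has one column per (component, time-bin) pair, which is why the number of unknowns grows with the assumed variability timescales while the matrix itself becomes sparser.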
in our application ,the matrix is sparse , thus matrix is even sparser .objective methods to construct the matrix from are described in .to give an idea , for the dataset which corresponds to ( years of data , the number of free parameters to be determined are between and depending on the energy band considered and hypotheses made on sources and background variability timescale ( [ sec : material : material ] ) .the material is related to the analysis of data accumulated between 2003 and 2009 with the spectrometer .the astrophysical application is the study of diffuse emission of the galaxy .the details can be found in .the goal is to disentangle the `` diffuse '' emission ( modeled with 3 spatially extended components ) from the point - sources emission and instrumental background .we need to sample these large - scale structures efficiently over the entire sky and consequently use the maximum amount of data simultaneously , since a single exposure covers only one - hundredth of the total sky area .the datasets consist of 38699 exposures that yield data points . in most cases considered here ,the background intensity is considered to be quite stable on a hours timescale , which corresponds to unknowns .a. the highest energy bands ( kev ) are less problematic in terms of number of parameters to determine , as illustrated by the 200 - 600 kev band .the sky model contains only 29 sources which are essentially not variable in time ( given the instrument sensitivity ) .the number of unknowns is .b. the lowest energy bands ( kev ) are more problematic .we use the 25 - 50 kev band .the sky model contains 257 sources variable on different timescales .when the background intensity is assumed to vary on hours timescale , intensity are to be determined .+ in some configurations , essentially used to assess the results , background intensity and/or strongest variable sources vary on the exposure timescale , and the number of unknowns could be as high as to .nevertheless , the matrices associated with these problems remain relatively structured . c. to avoid excessively structured matrices , we generate also matrices , with a variable number of columns , the number of segments for a particular source being a random number between 1 and .this results in a different number of parameters .another astrophysical application is the study of a particular source or sky region , here the crowded central region of the galaxy . in this case , it is possible to use a smaller number of exposures .we use 7147 exposures which cover a sky region of radius around the galactic center .we measure the intensity variations of a set of 132 sources .the number of parameters to determine is relatively small .details can be found in . a second matrix , used for verification purposes , has .it corresponds to the case where some sources are forced to vary on shorter timescales .the material consists of rectangular matrices and symmetric square matrices ( ) related to the above physical problems ( [ sec : material : foundation : problem ] ) .the characteristics of some of these matrices are described in table [ table : sparsity ] .the system we use in the experiments consists of an intel i7 - 3517u processor with 8 gb main memory .we ran the experiments on a single core , although our algorithms are amenable to parallelism . 
[ table : sparsity ] the mathematical problem described in section [ sec : material : foundation : problem ] and developed in [ sec : theory : methods : lsq ] requires the solution of several algebraic equations .first , if the chi - square statistic is used , a linear least - squares problem has to be solved to estimate the parameters of the model .second , elements ( entries ) of the inverse of a matrix have to be computed in order to determine the error bars ( variances of these parameters ) .third , in some cases , the parameters are adjusted to the data through a multi - component algorithm based on likelihood tests ( poisson statistics ) ; this problem leads to a non - linear system of equations ( [ app : two : mle ] ) .these three problems can be reduced to solving a linear system with a square matrix : a linear least - squares problem can be transformed into a square system by use of the normal equations ] ( and ) . similarly , computing entries of the inverse of a matrix amounts to solving many linear systems , as described in detail in section [ sec : theory : mumpsinv ] . for the above mentioned non - linear problem, we chose a newton - type method ; this involves solving several linear systems as well .our problems are large , but sparse ( cf .table [ table : sparsity ] ) , which justifies the use of sparse linear algebra techniques . in section [ sec : theory : largesystem ] , we describe how we selected a method suitable for our application . the system is , in most cases , overdetermined ( there are more equations - or measures here - than unknowns ) , therefore there is ( generally ) no exact solution , but a `` best '' solution , motivated by statistical reason , obtained by minimizing the following merit function , which is the chi - square : ^ 2\ ] ] is vector of length representing the data , ] is a diagonal matrix of order whose diagonal is ( , , ) . as for the lsq case , the variance of the solution is obtained thanks to the inversion of the hessian matrix ( note that in the limit , the likelihood ( ) and chi - square ( ) hessian matrices are the same ) . a guess solution to this non - linear optimization problem is the lsq solution .the fitting algorithm , based on the likelihood test statistic , is a non - linear optimization problem . to optimize a non - linear problem ,potentially with bound constraints , a newton type method , known for its efficiency and reliability can be used , as we already have a solver for large sparse systems at hand . a software package for large - scale non - linear optimization such as ipopt ( interior point optimizer ) can be used .ipopt uses a linear solver such as mumps or ma57 as a core algorithm . for more details on this large - scale non - linear algorithm ,see .a few similar software packages for large - scale non - linear optimization exist , among them lancelot , minos and snopt .sometimes the `` empty - field '' or `` uniformity map '' has to be computed with the solution . in order to preserve the linearity of the problem , we have adopted the algorithm described below .we consider that if the solution is known , coming back to the detector and pointing number in the above formula is the counts due to the sources , assumed to be known . is the background contribution , is assumed to be known and is to be estimated . 
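as a small illustration of the least-squares route described above, the sketch below forms the normal equations a x = b with a = h^t w h and w = diag(1/sigma^2), solves for the intensities, and extracts the diagonal of a^{-1} as their variances. a scipy superlu factorization stands in here for mumps, and the diagonal is obtained by one solve per entry, which is precisely the naive approach that the selected-inverse feature discussed in this paper is meant to avoid.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def lsq_solution_and_variance(H, y, sigma):
    # normal equations: A = H^T W H, b = H^T W y, with W = diag(1/sigma^2)
    W = sparse.diags(1.0 / sigma**2)
    A = (H.T @ W @ H).tocsc()
    b = H.T @ (y / sigma**2)
    lu = splu(A)                 # sparse LU factorization (stand-in for MUMPS)
    x = lu.solve(b)              # fitted intensities
    n = A.shape[0]
    var = np.empty(n)
    e = np.zeros(n)
    for j in range(n):           # one solve per requested diagonal entry of A^{-1}
        e[j] = 1.0
        var[j] = lu.solve(e)[j]
        e[j] = 0.0
    return x, var
```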
at this stage , using the model of the sky described by [ eqn : a4 ] , a rough estimate of the pattern is .for the lsq statistic , we wish to minimize the following quantities for each of the working detectors , the lsq solution is for the mle statistic , we do not have to preserve the linearity of the problem and hence the computation of the improved `` empty - field '' pattern can be done during the non - linear optimization process . on another side ,the algorithm is simplified if we proceed similarly as in the lsq case .then , we wish to maximize the following quantities for each of the working detectors , \right)\\ & \mathrm{~for~d=1, ... ,n_p } \end{split}\ ] ] the mle solution is one should mention that it is possible to compute , similarly , an `` empty - field '' pattern on some restricted time interval instead of the whole dataset ; the best `` empty field '' for pointing intervals to is then , a sub - optimal algorithm to obtain both the sources and the background fluxes , as well as the improved `` empty - field '' pattern is described in algorithm [ table : algo_lsq_iterback ] .we start with an approximation and apply some iterative refinement . in practice, the algorithm converges in a few iterations . , compute the structure of the hessian ( or ) compute lsq or mle solution compute a new approximation of by minimizing again lsq or maximizing mle statistics update ( the first columns of and update the new hessian matrix ( sec .[ sec : calculation : hess ] ) ) if stops decreasing or the likelihood function stops increasing then go to step 8 compute at the solution ( if not already done ) and the diagonal of to obtain the uncertainties on the solution 99 amestoy , p. r. , duff , i. s. , koster , j. & lexcellent , j .- y ., 2001 , `` a fully asynchronous multifrontal solver using distributed dynamic scheduling '' , siam journal of matrix analysis and applications 23 ( 1 ) : 1541 .amestoy , p. r. , guermouche , a. , lexcellent , j .- y .& pralet , s. , 2006 , `` hybrid scheduling for the parallel solution of linear systems '' , parallel computing 32 ( 2 ) : 136156 .amestoy , p. r. , duff , i. s. , lexcellent , j .- y . , robert , y. , rouet , f .- h . & uar , b. , 2012 , `` on computing inverse entries of a sparse matrix in an out - of - core environment '' , siam journal on scientific computing 34 ( 4 ) : 1975 - 1999 .anderson , e. , bai , z. , dongarra , j. , greenbaum , a. , mckenney , a. , du croz , j. , hammerling , s. , demmel , j. , bischof , c & and sorensen , d. , 1990 , `` lapack : a portable linear algebra library for high - performance computers '' , proceedings of the 1990 acm / ieee conference on supercomputing , 211 .bai , z. , demmel , j. , dongarra , j. , ruhe , a. & van der vorst , h. 2000 , `` templates for the solution of eigenvalue problems : a practical guide '' , siam , philadelphia , 2000 .bobin j. , starck j .- l . & ottensamer r. , 2008 , ieee sel .signal proc ., 2 , 718 bouchet , l. , roques , j. p. & jourdain , e. , 2010 , , 720 , 177 .bouchet , l. , strong , a. , porter , t.a , et al ., 2011 , , 739 , 29 .bouchet , l. , amestoy , p. r. , buttari , a. , rouet , f .- h . & chauvin , m. , 2013 , , in press ( http://dx.doi.org/10.1051/0004-6361/201219605 ) campbell , y. e. & davis , t. a. , 1995 , `` computing the sparse inverse subset : an inverse multifrontal approach '' , tech . rep .tr-95 - 021 , cise dept ., univ . of florida .cash , w. , 1979 , , 228 , 939 .conn , a. r. , gould , i. m. , & toint , p. l. 
, 1996 , `` numerical experiments with the lancelot package ( release a ) for large - scale nonlinear optimization '' , mathematical programming , 73(1 ) , 73110 .davis , t. a. , 2005 , `` user guide for ldl , a concise sparse cholesky package '' , tech ., cise dept ., univ . of florida .dubath , p. , kndlseder , j. , skinner , g. k. , et al . , 2005 , , 357 , 420 .duff , i. s. , erisman , a. m. , gear , c. w. & reid , j. k. , 1988 , `` sparsity structure and gaussian elimination '' , acm signum newsletter 23 ( 2 ) , 2 - 8 .duff , i. s. , erisman a. m. & reid , j. k. , 1989 , `` direct methods for sparse matrices '' .oxford university press .duff , i. s. & reid , j. k. , 2004 , `` ma57 - a code for the solution of indefinite sparse symmetric linear systems '' , acm transactions on mathematical software , 30 , 118 .gill , p. e. , murray , w. , & saunders , m. a. , 1997 , `` snopt : an sqp algorithm for large - scale constrained optimization '' .technical report sol97 - 3 , department of eesor , stanford university , stanford , california 94305 , usa , 1997 .hastings , w. k. , 1970 , biometrika 57 ( 1 ) : 97109 .hoffman , a. j. , martin , m. s. & rose , d. j. , 1973 , `` complexity bounds for regular finite difference and finite element grids '' , siam journal on numerical analysis 10 ( 2 ) : 364369 .jensen , p. l. , clausen , k. , cassi , c. , et al ., 2003 , , 411 , l7 .kirkpatrick , s. , gelatt , c. d. & vecchi , m. p. , 1983, science , new series , 220 ( 4598 ) , 671680 .liu , j. w. h. , 1990 , `` the role of elimination trees in sparse factorization '' , siam journal on matrix analysis and applications , 11 ( 1 ) : 134172 .metropolis , n. , rosenbluth , a.w . ,rosenbluth , m.n . ,teller , a.h . & teller , e. , 1953 ) , journal of chemical physics 21 ( 6 ) : 10871092 .murtagh , b. a. , & saunders , m. a. , 1982 , `` a projected lagrangian algorithm and its implementation for sparse nonlinear constraints '' , mathematical programming study 16 ( constrained optimization ) , 84 - 117 .neal , r. m. , 1993 , `` probabilistic inference using markov chain monte carlo methods '' , technical report crg - tr-93 - 1 , department of computer science , university of toronto .nelder , j. , & mead , r. , 1965 , computer journal , vol .7 , no 4 , 308313 .puglisi , c. , 1993 , `` qr factorization of large sparse matrix overdetermined and square matrices with the multifrontal method in a multiprocessing environment '' , phd thesis , institut national polytechnique de toulouse , toulouse , france .roques , j. p. , schanne , s. , von kienlin , a. , et al . , 2003 , , 411 , l91 .saad , y. , 1996 , `` iterative methods for sparse linear systems '' : pws publications .slavova , t. , 2009 , `` parallel triangular solution in an out - of - core multifrontal approach for solving large sparse linear systems '' , phd thesis , institut national polytechnique de toulouse , france .stewart , g. w. , 1998 , `` matrix algorithms'',siam press , pennslyvania , 1998 .takahashi , k. , fagan , j. , and chen , m.s ., 1973 , `` formation of a sparse bus impedance matrix and its application to short circuit study '' , in power industry computer applications conference , 6369 .tang , j. , & saad , a. , 2009 , `` a probing method for computing the diagonal of the matrix inverse '' , tech .report umsi-2010 - 42 , minesota supercomputer institute , university of minnesota , minneapolis , mn , 2009 .ubertini , p. , lebrun , f. , di cocco , g. , et al . , 2003 , , 411 , l131 .vedrenne , g. , roques , j. p. , schonfelder , v. 
, et al . ,2003 , , 411 , l63 .wchter , a. and biegler . , l. t. , 2006 , mathematical programming , 106(1),2557 .wiaux y. , jacques l. , puy g.m scaife a. m. m. , vandergheynst p , 2009 , , 395 , 1733* x/-ray spectrometer data analysis * large astronomical data sets arising from the simultaneous analysis of years of data .* resolution of a large sparse system of equations ; solution and its variance . *the multifrontal massively parallel solver ( mumps ) to solve the equations . *mumps feature to compute selected inverse entries ( variance of the solution , ) .
nowadays , analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge , especially for long cumulated observation times . the x/-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal - to - noise ratio of the weakest sources . in this context , the conventional methods for data reduction are inefficient and sometimes not feasible at all . processing several years of data simultaneously requires computing not only the solution of a large system of equations , but also the associated uncertainties . we aim at reducing the computation time and the memory usage . since the transfer function is sparse , we have used some popular methods for the solution of large sparse linear systems ; we briefly review these methods . we use the multifrontal massively parallel solver ( mumps ) to compute the solution of the system of equations . we also need to compute the variance of the solution , which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system . this can be achieved through one of the latest features of the mumps software that has been partly motivated by this work . in this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously , such as the study of the entire emission of the galaxy . we used these algorithms to solve the large sparse systems arising from data processing and to obtain both their solutions and the associated variances . in conclusion , thanks to these newly developed tools , processing large datasets arising from is now feasible with both a reasonable execution time and a low memory usage . methods : data analysis , methods : numerical , techniques : imaging spectroscopy , techniques : miscellaneous , gamma - rays : general
humans participate on a daily basis in a large number of distinct activities , from electronic communication , such as sending emails or browsing the web , to initiating financial transactions or engaging in entertainment and sports .given the number of factors that determine the timing of each action , ranging from work and sleep patterns to resource availability , it appears impossible to seek regularities in the apparently random human activity patterns , apart from the obvious daily and seasonal periodicities .therefore , in contrast with the accurate predictive tools common in physical sciences , forecasting human and social patterns remains a difficult and often elusive goal .yet , the need to understand the timing of human actions is increasingly important . indeed , uncovering the laws governing human dynamics in a quantitative manneris of major scientific interest , requiring us to address the factors that determine the timing of human actions .but these questions are driven by applications as well : most human actions have a strong impact on resource allocation , from phone line availability and bandwidth allocation in the case of internet or web use , all the way to the design of physical space for retail or service oriented institutions . despite these fundamental and practical driving forces ,our understanding of the timing of human initiated actions is rather limited at present . to be sure, the interest in addressing the timing of events in human dynamics is not new : it has a long history in the mathematical literature , leading to the development of some of the key concepts in probability theory , and has reemerged at the beginning of the century as the design problems surrounding the phone system required a quantitative understanding of the call patterns of individuals .but most current models of human activity assume that human actions are performed at constant rate , meaning that a user has a fixed probability to engage in a specific action within a given time interval .these models approximate the timing of human actions with a poisson process , in which the time interval between two consecutive actions by the same individual , called the waiting or inter - event time , follows an exponential distribution .poisson processes are at the heart of the celebrated erlang formula , predicting the number of phone lines required in an institution , and they represent the basic approximation in the design of most currently used internet protocols and routers . yet, the availability of large datasets recording selected human activity patterns increasingly question the validity of the poisson approximation . 
indeed ,an increasing number of recent measurements indicate that the timing of many human actions systematically deviate from the poisson prediction , the waiting or inter - event times being better approximated by a heavy tailed or pareto distribution .the difference between a poisson and a heavy tailed behavior is striking : the exponential decay of a poisson distribution forces the consecutive events to follow each other at relatively regular time intervals and forbids very long waiting times .in contrast , the slowly decaying heavy tailed processes allow for very long periods of inactivity that separate bursts of intensive activity .we have recently proposed that the bursty nature of human dynamics is a consequence of a queuing process driven by human decision making : whenever an individual is presented with multiple tasks and chooses among them based on some perceived priority parameter , the waiting time of the various tasks will be pareto distributed .in contrast , first - come - first - serve and random task execution , common in most service oriented or computer driven environments , lead to a uniform poisson - like dynamics .yet , this work has generated just as many questions as it resolved .what are the different classes of processes that are relevant for human dynamics ? what determines the scaling exponents ?do we have discrete universality classes ( and if so how many ) as in critical phenomena , or the exponents characterizing the heavy tails can take up arbitrary values , as it is the case in network theory ? is human dynamics always heavy tailed ?in this paper we aim to address some of these questions by studying the different universality classes that can appear as a result of the queuing of human activities .we first review , in sect .[ sec : poisson ] , the frequently used poisson approximation , which predicts an exponential distribution of interevent times . in sect .[ sec : empirical ] we present evidence that the interevent time probability density function ( pdf ) of many human activities is characterized by the power law tail in sect .[ sec : model ] we discuss the general characteristics of the queueing models that govern how humans time their various activities . in sects .[ sec : cobham]-[sec : barabasi ] we study two classes of queuing models designed to capture human activity patterns .we find that restrictions on the queue length play an important role in determining the scaling of the queuing process , allowing us to document the existence of two distinct universality classes , one characterized by ( sect .[ sec : cobham ] ) and the other by ( sect .[ sec : barabasi ] ) . in sect .[ sec : wait_interevent ] we discuss the relationship between interevent and waiting times . finally , in sec .[ sec : discussion ] we discuss the applicability of these models to explain the empirical data , as well as outline future challenges in modeling human dynamics .consider an activity performed with some regularity , such as sending emails , placing phone calls , visiting a library , or browsing the web . we can keep track of this activity by recording the timing of each event , for example the time each emailis sent by an individual .the time between two consecutive events we call _ the interevent time _ for the monitored activity and will be denoted by . 
giventhat the interevent time can be explicitly measured for selected activities , it serves as a test of our ability to understand and model human dynamics : proper models should be able to capture its statistical properties . .the horizontal axis denotes time , each vertical line corresponding to an individual event .note that the interevent times are comparable to each other , long delays being virtually absent . *( b ) * the absence of long delays is visible on the plot showing the delay times for 1,000 consecutive events , the size of each vertical line corresponding to the gaps seen in ( a ) . *( c ) * the probability of finding exactly events within a fixed time interval is , which predicts that for a poisson process the inter - event time distribution follows , shown on a log - linear plot in ( c ) for the events displayed in ( a , b ) . *( d ) * the succession of events for a heavy tailed distribution . * ( e ) * the waiting time of 1,000 consecutive events , where the mean event time was chosen to coincide with the mean event time of the poisson process shown in ( a - c ) .note the large spikes in the plot , corresponding to very long delay times .( b ) and ( e ) have the same vertical scale , allowing to compare the regularity of a poisson process with the bursty nature of the heavy tailed process . *( f ) * delay time distribution for the heavy tailed process shown in ( d , e ) , appearing as a straight line with slope -2 on a log - log plot .the signal shown in ( d - f ) was generated using in the stochastic priority list model discussed in appendix [ sec : preferential].,width=307 ] the most primitive model of human activity would assume that human actions are fundamentally periodic , with a period determined by the daily sleep patterns . yet , while certain periodicity is certainly present , the timing of most human actions are highly stochastic .indeed , periodic models are hopeless in capturing the time we check out a book from the library , beyond telling us that it should be within the library s operation hours .the first and still most widely used stochastic model of human activity assumes that the tasks are executed independently from each other at a constant rate , so that the time resolved activity of an individual is well approximated by a poisson process . in this casethe probability density function ( pdf ) of the recorded interevent times has the exponential form in practice this means that the predicted activity pattern , while stochastic , will display some regularity in time , events following each other on average at intervals .indeed , given that for a poisson process is finite , very long waiting times ( _ i.e. _ large temporal gaps in the sequence of events ) are exponentially rare .this is illustrated in fig .[ fig2]a , where we show a sequence of events generated by a poisson process , appearing uniformly distributed in time ( but not periodic ) .the poisson process was originally introduced by poisson in his major work applying probability concepts to the administration of justice .today it is widely used to quantify the consequences of human actions , such as modeling traffic flow patterns or accident frequencies , and is commercially used in call center staffing , inventory control , or to estimate the number of congestion caused blocked calls in mobile communications .it has been established as a basic model of human activity patterns at a time when data collection capabilities on human behavior were rather limited . 
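the contrast between the two processes shown in the figure can be reproduced with a few lines of numpy; the rate, exponent and cutoff values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_events(rate, n):
    # exponential interevent times: fairly regular gaps, long delays
    # exponentially rare
    return np.cumsum(rng.exponential(1.0 / rate, size=n))

def bursty_events(alpha, tau0, n):
    # Pareto interevent times, P(tau) ~ tau**(-alpha) for tau > tau0
    # (alpha > 1): bursts of activity separated by very long gaps
    u = rng.random(n)
    return np.cumsum(tau0 * u ** (-1.0 / (alpha - 1.0)))
```

histogramming the successive differences of the two event-time sequences on log-linear and log-log axes reproduces, respectively, the exponential decay and the straight power-law tail described above.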
in the past few years , however , thanks to detailed computer based data collection methods , there is increasing evidence that the poisson approximation fails to capture the timing of many human actions . scaling .( d ) the interevent time distribution between two consecutive transactions made by a stock broker .the distribution follows a power - law with the exponential cut - off .( e - g ) the distribution of the exponents ( ) characterizing the interevent time distribution of users browsing the web portal ( e ) , individual loans from the library ( f ) and the emails sent by different individuals ( g ) .the exponent was determined only for users whose total activity levels exceeded certain thresholds , the values used being 15 web visits ( e ) , 15 emails ( f ) and 10 books ( g ) .( h , l ) we numerically generate for 10,000 individuals interevent time distributions following a power - law with exponent .the distribution of the measured exponents follows a normal distribution similar to the distribution observed in ( e - g ) .if we double the time window of the simulation ( h ) the deviation around the average becomes much smaller ( l ) .( i - k ) the distribution of the number of events in the studied systems : number of html hits for each user ( i ) , the number of books checked out by each user ( j ) and the number of emails sent by different individuals ( k ) , indicating that the overall activity patterns of individuals is also heavy tailed ., width=307 ]evidence that non - poisson activity patterns characterize human activity has first emerged in computer communications , where the timing of many human driven events is automatically recorded . for example , measurements capturing the distribution of the time differences between consecutive instant messages sent by individuals during online chats have found evidence of heavy tailed statistics .professional tasks , such as the timing of job submissions on a supercomputer , directory listings and file transfers ( ftp requests ) initiated by individual users were also reported to display non - poisson features .similar patterns emerge in economic transactions , in the number of hourly trades in a given security or the time interval distribution between individual trades in currency futures . finally , heavy tailed distributions characterize entertainment related events , such as the time intervals between consecutive online games played by users . note , however , that while these datasets provide clear evidence for non - poisson human activity patterns , most of them do not resolve individual human behavior , but capture only the aggregated behavior of a large number of users . for example , the dataset recording the timing of the job submissions looks at the timing of _ all jobs _ submitted to a computer , by any user .thus for these measurements the interevent time does not characterize a _user but rather a _ population _ of users .given the extensive evidence that the activity distribution of the individuals in a population is heavy tailed , these measurements have difficulty capturing the origin of the observed heavy tailed patterns .for example , while most people send only a few emails per day , a few send a very large number on a daily basis .if the activity pattern of a large number of users is simultaneously captured , it is not clear where the observed heavy tails come from : are they rooted in the activity of a single individual , or rather in the heavy tailed distribution of user activities ? 
therefore , when it comes to our quest to understand human dynamics , datasets that capture the long term activity pattern of a _ single _ individual are of particular value. to our best knowledge only three papers have taken this approach , capturing the timing of printing jobs submitted by users , the email activity patterns of individual email users and the browsing pattern of users visiting a major web portal .these measurements offer direct evidence that the heavy tailed activity patterns emerge at the level of a _ single _ individual , and are not a consequence of the heterogeneous distribution of user activity . despite this evidence ,a number of questions remain unresolved : is there a single scaling exponent characterizing all users , or rather each user has its own exponent ?what is the range of these exponents ?next we aim to address these questions through the study of six datasets , each capturing individual human activity patterns of different nature .first we describe the datasets and the collection methods , followed by a quantitative characterization of the observed human activity patterns . _ web browsing : _ automatically assigned cookies allow us to reconstruct the browsing history of approximately 250,000 unique visitors of the largest hungarian news and entertainment portal ( origo.hu ) , which provides online news and magazines , community pages , software downloads , free email and search engine , capturing 40% of all internal web traffic in hungary .the portal receives 6,500,000 html hits on a typical workday .we used the log files of the portal to collect the visitation pattern of each visitor between 11/08/02 and 12/08/02 , recording with second resolution the timing of each download by each visitor .the interevent time , , was defined as the time interval between consecutive page downloads ( clicks ) by the same visitor . _email activity patterns : _ this dataset contains the email exchange between individuals in a university environment , capturing the sender , recipient and the time of each email sent during a three and six month period by 3,188 and 9,665 users , respectively .we focused here on the data collected by eckmann , which records 129,135 emails with second resolution .the interevent time corresponds to the time between two consecutive emails sent by the same user . _ library loans : _ the data contains the time with second resolution at which books or periodicals were checked out from the library by the faculty at university of notre dame during a three year period .the number of unique individuals in the dataset is 2,247 , together participating in a total of 48,409 transactions .the interevent time corresponds to the time difference between consecutive books or periodicals checked out by the same patron ._ trade transactions : _ a dataset recording all transactions ( buy / sell ) initiated by a stock broker at a central european bank between 06/99 and 5/03 helps us quantify the professional activity of a single individual , giving a glimpse on the human activity patterns driving economic phenomena . in a typical daythe first transactions start at 7 am and end at 7 pm and the average number of transactions initiated by the dealer in one day is around 10 , resulting in a total of 54,374 transactions .the interevent time represents the time between two consecutive transactions by the broker .the gap between the last transaction at the end of one day and the first transaction at the beginning of the next trading day was ignored . 
_ the correspondence patterns of einstein , darwin and freud : _ we start from a record containing the sender , recipient and the date of each letter sent or received by the three scientists during their lifetime .the databases used in our study were provided by the darwin correspondence project ( http://www.lib.cam.ac.uk/departments/darwin/ ) , the einstein papers project ( http://www.einstein.caltech.edu/ ) and the freud museum of london ( http://www.freud.org.uk ) .each dataset contains the information about each sent / received letter in the following format : sender , recipient , date , where either the sender or the recipient is einstein , darwin or freud .the darwin dataset contained a record of a total of 7,591 letters sent and 6,530 letters received by darwin ( a total of 14,121 letters ) .similarly , the einstein database contained 14,512 letters sent and 16,289 letters received ( total of 30,801 ) . for freudwe have 3183 ( 2675 ) sent ( received ) leters . note that 1,541 letters in the darwin database and 1,861 letters in the einstein database were not dated or were assigned only potential time intervals spanning days or months .we discarded these letters from the dataset .furthermore , the dataset is naturally incomplete , as not all letters written or received by these scientists were preserved . yet , assuming that letters are lost at a uniform rate , they should not affect our main findings .for these three datasets we do not focus on the interevent times , but rather the _ response _ or _ waiting times _ . the waiting time , , represents the time interval between the date of a letter received from a given person , and the date of the next letter from darwin , einstein or freud to him or her , _i.e. _ the time the letter waited on their desk before a response is being sent . to analyze einstein , darwin , and freud s response time we have followed the following procedure :if individual a sent a letter to einstein on date1 , we search for the next letter from einstein to individual a , sent on date2 , the response time representing the time difference , expressed in days .if there are multiple letters from einstein to the recipient , we always consider the first letter as the response , and discard the later ones .missing letters could increase the response time , the magnitude of this effect depending on the overall frequency of communication between the respective correspondence partners . yet ,if the response time follows a distribution with an exponential tail , then randomly distributed missing letters would not generate a power law waiting time : they would only shift shift the exponential waiting times to longer average values .thus the observed power law can not be attributed to data incompleteness . in the followingwe will break our discussion in three subsections , each focusing on a specific class of behavior observed in the studied individual activity patterns . in fig .[ f : fig2]a - c we show the interevent time distribution between consecutive events for a single individual for the first four studied databases : web browsing , email , and library visitation . for these datasetswe find that the interevent time distribution has a power - law tail with exponent , independent of the nature of the activity . given that for these activity patterns we collected data for thousands of users , we need to calculate the distribution of the exponent determined separatelly for each user whose activity level exceeds a certain threshold ( _ i.e. 
_ avoiding users that have too few events to allow a meaningful determination of ) . as fig . [ f : fig2]e - g shows , we find that the distribution of the exponents is peaked around .the scattering around in the measured exponents could have two different origins .first , it is possible that each user is characterized by a different scaling exponent .second , each user could have the same exponent , but given the fact that the available dataset captures only a finite time interval from one month to several months , with at best a few thousand events in this interval , there are uncertainties in our ability to determine numerically the exponent . to demonstrate that such data incompleteness could indeed explain the observed scattering , in figs .[ f : fig2]h and [ f : fig2]l we show the result of a numerical experiment , in which we generated 10,000 time series , corresponding to 10,000 independent users , the interevent time of the events for each user being taken from the same distribution . the total length in time of each time serieswas chosen to be .we then used the automatic fitting algorithm employed earlier to measure the exponents in figs .[ f : fig2]e - g to determine numerically the exponent for each user . in principle for each user we should observe the same exponent , given that the datasets were generated in an identical fashion . in practice , however , due to the finite length of the data , each numerically determined exponent is slightly different , resulting in the histogram shown in fig . [f : fig2]h . as the figure shows , even in this well controlled situationwe observe a scattering in the measured exponents , obtaining a distribution similar to the one seen in figs .[ f : fig2]e - g .the longer the time series , the sharper the distribution is ( fig .[ f : fig2]l ) , given that the exponent can be determined more accurately .the distributions obtained for the three studied datasets are not as well controlled as the one used in our simulation : while the length of the observation period is the same for each user , the activity level of the users differs widely . indeed , as we show in fig .[ f : fig2]i - k , the activity distribution of the different users , representing the number of events recorded for each user , also spans several orders of magnitude , following a fat tailed distribution .thus the degree of scattering of the measured exponent is expected to be more significant than seen in fig [ f : fig2]h and l , since we can determine the exponent accurately only for very active users , for which we have a significant number of datapoints .therefore , the obtained results are consistent with the hypothesis that each user is characterized by a scaling exponent in the vicinity of , the difference in the numerically measured exponent values being likely rooted in the finite number of events we record for each user in the datasets .this conclusion will be eventually corroborated by our modeling efforts , that indicate that the exponents characterizing human behavior take up discrete values , one of which , provide the empirically observed .b. ( a ) given two email users a and b , the response times of user a to b are the time intervals between a receiving an email from b and a sending an email to b. 
the response time distribution of user a is then computed taking into account the response times to all users he / she communicates with .the continuous line is a power law fit with exponent .( b ) given an user a , the inter - arrival times are the time intervals between the two consecutive arrivals of an email to user a , independently of the sender . the arrival time distribution of user a is obtained taking into account all the inter - arrival times for that user .the continuous line is a power law fit with exponent 0.98 .( c ) the real waiting time distribution of an email in a user s priority list , where represents the time between the time the user first sees an email and the time she sends a reply to it .the black symbol shown in the upper left corner represents the messages that were replied to right after the user has noticed it.,width=307 ] as we will see in the following sections , an important measure of the human activity patterns is the waiting time , , representing the amount of time a task waits on an individual s priority list before being executed . for the email dataset ,given that we know when a user receives an email from another user and the time it sends the next email back to her , we can determine the email s waiting or response time .therefore , we define the waiting time as the difference between the time user a receives an email from user b , and the time a sends an email to user b. in looking at this quantity we should be aware of the fact that not all emails a sends to b are direct responses to emails received from b , thus there are some false positives in the data that could be filtered out only by reading the text of each email ( which is not possible in the available datasets ) .we have measured the empirically obtained waiting time distribution in the email dataset , finding that the distribution of the response times indeed follows a power law with exponent ( fig .[ fig : arrival : response]a ) . .while for darwin and einstein the datasets provide very good statistics ( the power law regime spanning 4 orders of magnitude ) , the plot corresponding to freud s responses is not so impressive , yet still being well approximated by the power law distribution .note that while in most cases the identified reply is indeed a response to a received letter , there are exceptions as well : many of the very delayed replies represent the renewal of a long lost relationship.,width=307 ] in the case of the correspondence patterns of einstein and darwin we will focus on the response time of the authors , partly because we will see later that this has the most importance from the modeling perspective . as shown in fig .[ fig : letters ] , the probability that a letter will be replied to in days is well approximated by a power law ( [ ptaualpha ] ) with , the scaling spanning four orders of magnitude , from days to years .note that this exponent is significantly different from observed in the earlier datasets , and we will show later that modeling efforts indeed establish as a scaling exponent characterizing human dynamics . 
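A compact sketch of the pairing rule described above may help make it concrete. It assumes the raw records have already been reduced to (correspondent, timestamp) pairs for the focal person; the "first reply only" rule is approximated, and several items arriving before any reply are all matched to that same first reply, which is a simplification of the procedure described in the text.

```python
# Sketch: waiting (response) times for one focal person, pairing each received
# item with the first subsequent item sent back to the same correspondent.
import bisect
from collections import defaultdict

def response_times(received, sent):
    """received / sent: iterables of (correspondent, time); returns list of waits."""
    sent_by = defaultdict(list)
    for who, t in sent:
        sent_by[who].append(t)
    for times in sent_by.values():
        times.sort()
    waits = []
    for who, t_in in received:
        out = sent_by.get(who)
        if not out:
            continue                        # never answered within the record
        i = bisect.bisect_right(out, t_in)
        if i < len(out):
            waits.append(out[i] - t_in)     # delay until the first reply
    return waits
```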
the dataset allows us to determine the interevent times as well , representing the time interval between two consecutive letters sent by einstein , darwin or freud to any recipient .we find that the interevent time distribution is also heavy tailed , albeit the quality of scaling is not as impressive as we observe for the response time distribution .this is due to the fact that we do not know the precise time when the letter is written ( in contrast with the email , which is known with second resolution ) , but only the day on which it was mailed . given that both einstein and darwin wrote at least one letter most days , this means that long interevent times are rarely observed .furthermore , owing to the long observational period ( over 70 years ) , the overall activity pattern of the two scientists has changed significantly , going from a few letters per year to as many 400 - 800 letters / year during the later , more famous phase of their professional life .thus the interevent time , while it appears to follow a power law distribution , it is by no means stationary .more stationarity is observed , however , in the response time distribution . for the stock broker we again focus on the interevent time distribution , finding that the best fit follows with and min ( see fig .this value is between observed for the users in the first three other datasets and observed for the correspondence patterns . yet , given the scattering of the measured exponents , it is difficult to determine if this represents a standard statistical deviation from or , the two values expected by the modeling efforts ( see sects . [ sec : cobham ] and [ sec : barabasi ] ) , or it stands as evidence for a new universality class . at this point we believe that the former case is valid , something that can be decided only once data for more users will become available .the exponential cutoff is not inconsistent with the modelling efforts either : as we will show in appendix [ general ] , such cutoffs are expected to accompany all human activity patterns with .the heavy tailed nature of the observed interevent time distribution has clear visual signatures .indeed , it implies that an individual s activity pattern has a bursty character : short time intervals with intensive activity ( bursts ) are separated by long periods of no activity ( figs .1d - f ) . therefore , in contrast with the relatively uniform activity pattern predicted by the poisson process , for a heavy tailed process very dense successions of events ( bursts ) are separated by very long gaps , predicted by the slowly decaying tail of the power law distribution .this bursty activity pattern agrees with our experience of an individual s normal email usage pattern : during a single session we typically send several emails in quick succession , followed by long periods of no email activity , when we focus on other activities .the empirical evidence discussed in the previous section raises several important questions : why does the poisson process fail to capture the temporal features of human activity ? what is the origin of the observed heavy tailed activity patterns in human dynamics ? to address these questions we need to inspect closely the processes that contribute to the timing of the events in which an individual participates .most of the time humans face simultaneously several work , entertainment , and family related responsibilities . 
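Before turning to the queuing description, the burstiness contrast just described is easy to reproduce synthetically. The sketch below compares a Poisson sequence with a heavy tailed one of comparable total duration, using a log-uniform sampler (density roughly 1/tau between two bounds) as a crude stand-in for a power law with a cutoff; all parameters are illustrative.

```python
# Sketch: uniform (Poisson) versus bursty (heavy tailed) interevent statistics.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

tau_poisson = rng.exponential(scale=1.0, size=n)        # P(tau) ~ exp(-tau)
tau_bursty = 10.0 ** rng.uniform(0.0, 6.0, size=n)       # density ~ 1/tau on [1, 1e6]
tau_bursty *= tau_poisson.sum() / tau_bursty.sum()       # same total observation time

for name, tau in (("poisson", tau_poisson), ("bursty", tau_bursty)):
    cv = tau.std() / tau.mean()             # coefficient of variation, ~1 for Poisson
    frac = (tau > 10 * tau.mean()).mean()   # weight carried by the very long gaps
    print(f"{name:8s}  CV = {cv:5.2f}   P(tau > 10<tau>) = {frac:.4f}   "
          f"max/mean = {tau.max() / tau.mean():9.1f}")
```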
indeed , at any momentan individual could choose to participate in one of several tasks , ranging from shopping to sending emails , making phone calls , attending meetings or talks , going to a theater , getting tickets for a sports event , and so on . to keep track of the various responsibilities ahead of them , individuals maintain a _ to do _ or _ priority _ list , recording the upcoming tasks .while this list is occasionally written or electronically recorded , in many cases it is simply kept in memory .a priority list is a dynamic entity , since tasks are removed from it after they are executed and new tasks are added continuously .the tasks on the list compete with each other for the individual s time and attention .therefore , task management by humans is best described as a queuing process , where the queue represents the tasks on the priority list , the server is the individual which executes them and maintains the list , and some selection protocol governs the order in which the tasks are executed . to define the relevant queuing model we must clarify some key features of the underlying queuing process , ranging from the arrival and service processes to the nature of the task selection protocol , and the restrictions on the queue length . in the followingwe discuss each of these ingredients separately , placing special emphasis on their relevance to human dynamics ._ server : _ the server refers to the individual ( or agent ) that maintains the queue and executes the tasks . in queuing theory we can have one or several servers in parallel ( like checkout counters in a supermarket ) .human dynamics is a _ single server _process , capturing the fact that an individual is solely responsible for executing the tasks on his / her priority list ._ task arrival pattern : _ the arrival process specifies the statistics of the arrival of new tasks to the queue . in queuing theory it is often assumed that the arrival is a poisson process , meaning that new tasks arrive at a constant rate to the queue , randomly and independently from each other. we will use this approximation for human queues as well , assuming that tasks land at random times on the priority list .if the arrival process is not captured by a poisson distribution , it can be modeled as a renewal process with a general distribution of interarrival times .for example , our measurements indicate that the arrival time of emails follows a heavy tailed distribution , thus a detailed modeling of email based queues must take this into account. we must also keep in mind that the arrival rate of the tasks to the list is filtered by the individual , who decides which tasks to accept and place on the priority list and which to reject . in principlethe rejection of a task is also a decision process that can be modeled as a high priority short lived task ._ service process : _ the service process specifies the time it takes for a single task to be executed , such as the time necessary to write an email , explore a web page or read a book . in queuing theory the service processis often modeled as a poisson process , which means that the distribution of the time devoted to the individual tasks has the exponential form ( [ ppoisson ] ) .however , in some applications the service time may follow some general distribution .for example , the size distribution of files transmitted by email is known to be fat tailed , suggesting that the time necessary to review ( read ) them could also follow a fat tailed distribution . 
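To illustrate the difference between the two service-time assumptions just mentioned, the toy comparison below contrasts an exponential execution time with a fat tailed (Pareto) one of the same mean. The numbers are invented and only meant to show how much probability mass a fat tail places on very long executions.

```python
# Sketch: exponential versus fat-tailed service times with identical means.
import numpy as np

rng = np.random.default_rng(0)
mean_s = 2.0                   # mean execution time, arbitrary units
n = 200_000

s_exp = rng.exponential(scale=mean_s, size=n)              # Poissonian execution
s_fat = (mean_s / 3.0) * (1.0 + rng.pareto(1.5, size=n))   # tail ~ s^(-2.5), same mean

for name, s in (("exponential", s_exp), ("fat-tailed", s_fat)):
    print(f"{name:11s}  mean = {s.mean():4.2f}   "
          f"P(s > 20*mean) = {(s > 20 * mean_s).mean():.1e}")
```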
in queuing theory it is often assumed that the service time is independent of the task arrival process or the number of tasks on the priority list . while we adopt this assumption here as well , we must also keep in mind that the service time can decrease if too many tasks are in the queue , as humans may devote less time to individual tasks when they have many urgent things to do . _selection protocol or queue discipline : _ the selection protocol specifies the manner in which the tasks in the queue are selected for execution .most human initiated events require an individual to weight and prioritize different activities .for example , at the end of each activity an individual needs to decide what to do next : send an email , do some shopping or place a phone call , allocating time and resources for the chosen activity .normally individuals assign to each task a priority parameter , which allows them to compare the urgency of the different tasks on the list .the time a task waits before it is executed depends on the method the agent uses to choose the task to be executed next . in this respectthree selection protocols are particularly relevant for human dynamics : ( _ i _ ) the simplest is the first - in - first - out ( fifo ) protocol , executing the tasks in the order they were added to the list .this is common in service oriented processes , like the first - come - first - serve execution of orders in a restaurant or getting help from directory assistance and consumer support .( _ ii _ ) the second possibility is to execute the tasks in a random order , irrespective of their priority or time spent on the list .this is common , for example , in educational settings , when students are called on randomly , and in some packet routing protocols .( _ iii _ ) in most human initiated activities task selection is not random , but the individual tends to execute always the highest priority item on his / her list .the resulting execution dynamics is quite different from ( _ i _ ) and ( _ ii _ ) : high priority tasks will be executed soon after their addition to the list , while low priority items will have to wait until all higher priority tasks are cleared , forcing them to stay longer on the list . in the following we show that this selection mechanism , practiced by humans on a daily basis , is the likely source of the fat tails observed in human initiated processes . _ queue length or system capacity : _ in most queuing models the queue has an infinite capacity and the queue length can change dynamically , depending on the arrival and the execution rate of the individual tasks . in some queuing processesthere is a physical limitation on the queue length .for example , the buffers of internet routers have finite capacity , so that packets arriving while the buffer is full are systematically dropped . in human activityone could argue that , given the possibility to maintain the priority list in a written or electronic form , the length of the list has no limitations . yet ,if confronted with too many responsibilities , humans will start dropping some tasks and not accept others .furthermore , while keeping track of a long priority list is not a problem for an electronic organizer , it is well established that the immediate memory of humans has finite capacity of about seven tasks . 
in other words ,the number of priorities we can easily remember , and therefore the length of our priority list , is bounded .these considerations force us to inspect closely the difference between finite and an unbounded priority lists , and the potential consequences of the queue length on the the waiting time distribution . in this paperwe follow the hypothesis that the empirically observed heavy tailed distributions originate in the queuing process of the tasks maintained by humans , and seek appropriate models to explain and quantify this phenomenon . particularly valuable are queuing models that do not contain power law distributions as inputs , and yet generate a heavy tailed output . in the followingwe will focus on priority queues , reflecting the fact that humans most likely choose the tasks based on their priority for execution . in the empirical datasets discussed in sect [ sec : empirical ] we focused on both the interevent time and the waiting time distribution of the tasks in which humans participate . in the following two sections we focus on the _ waiting time _ of a task on the priority list rather than the interevent times . in this contexthe waiting time , , represents the time difference between the arrival of a task to the priority list and its execution , thus it is the sum of the time a task waits on the list and the time devoted to executing it . in sect .[ sec : wait_interevent ] we will return to the relationship between the empirically observed interevent times and the waiting times predicted by the discussed models .our first goal is to explore the behavior of priority queues in which there are no restrictions on the queue length .therefore , in these models an individual s priority list could contain arbitrary number of tasks . as we will show below ,such models offer a good approximation to the surface mail correspondence patterns , such as that observed in the case of einstein , darwin and freud ( see sect . [sec : letters ] ) .therefore , we will construct the models with direct reference to the the datasets discussed in sect .[ sec : empirical ] .we assume that letters arrive at rate following a poisson process with exponential arrival time distribution . replacing letters with tasks, however , provides us a more general model , in principle applicable to any human activity .the responses are written at rate , reflecting the overall time a person devotes to his correspondence .each letter is assigned a discrete priority parameter upon arrival , such that always the highest priority unanswered letter ( task ) will be always chosen for a reply .the lowest priority task will have to wait the longest before execution , and therefore it dominates the waiting time probability density for large waiting times .this model was introduced in 1964 by cobham to describe some manufacturing processes .most of the analytical work in queuing theory has concentrated on the waiting time of the lowest priority task , finding that the waiting time distribution follows where and are functions of the model parameters , the characteristic waiting time being given by where is the traffic intensity .therefore , the waiting time distribution is characterized by a power law decay with exponent , combined with an exponential cutoff . 
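A minimal event-driven sketch of this kind of unbounded priority queue is given below, assuming Poisson arrivals at rate lam, exponential service at rate mu, continuous random priorities, and always serving the highest-priority waiting task; near traffic intensity lam/mu close to 1 the recorded waiting times should develop the heavy tail with the exponential cutoff discussed above. The parameter values are illustrative and the code is a sketch, not the simulation used for the paper's figures.

```python
# Sketch: unbounded priority queue with Poisson arrivals and exponential service.
import heapq
import numpy as np

def priority_queue_waits(lam=0.95, mu=1.0, n_arrivals=200_000, seed=0):
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n_arrivals))
    waiting = []                            # max-heap: (-priority, arrival_time)
    waits = []
    t, i = 0.0, 0
    while i < n_arrivals or waiting:
        if not waiting:
            t = arrivals[i]                 # server idle: jump to the next arrival
        while i < n_arrivals and arrivals[i] <= t:
            heapq.heappush(waiting, (-rng.random(), arrivals[i]))
            i += 1
        _, t_arr = heapq.heappop(waiting)   # serve the highest-priority task
        waits.append(t - t_arr)             # time spent waiting on the list
        t += rng.exponential(1.0 / mu)      # service duration
    return np.array(waits)

w = priority_queue_waits()
print("mean wait:", round(w.mean(), 1),
      "  99.9th percentile:", round(np.quantile(w, 0.999), 1))
```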
with continuous priorities .the numerical simulations were performed as follows : at each step we generate an arrival and service time from an exponential distribution with rate and , respectively .if or there are no tasks in the queue then we add a new task to the queue , with a priority ] .the simulations show that in the limit the probability that a task spends time on the list has a power law tail with exponent ( fig .[ fig : l2]a ) . in the limit follows an exponential distribution ( fig .[ fig : l2]a ) , as expected for the random selection protocol . as the typical length of the priority list differs from individual to individual , it is important for the tail of to be independent of .numerical simulations indicate that this is indeed the case : changes in do not affect the scaling of .the fact that the scaling holds for as well indicates that it is not necessary to have a long priority list : even if an individual balances only two tasks at the same time , a bursty heavy tailed interevent dynamics will emerge .next we focus on the case , for which the model can be solved exactly , providing important insights into its scaling behavior that can be generalized for arbitrary values as well . for and a uniform new task priority distribution function , , in , as obtained from ( [ ptau ] ) ( lines ) and numerical simulations ( symbols ) , for ( squares ) , ( diamonds ) and ( triangles ) .the inset shows the fraction of tasks with waiting time , as obtained from ( [ ptau ] ) ( lines ) and numerical simulations ( symbols ) .( b ) average waiting time of executed tasks vs the list size as obtained from ( [ tauavel ] ) ( lines ) and numerical simulations ( symbols ) , for ( squares ) , ( circles ) and ( diamonds).,width=307 ] for the waiting time distribution can be determined exactly ( see appendix [ app : l2 ] ) , obtaining \ , & \tau_w>1 \end{array } \right .\label{ptau}\ ] ] independent of from which the task priorities are selected . in the limit from ( [ ptau ] ) follows that _ i.e. _ decays exponentially , in agreement with the numerical results ( fig .[ fig : l2]a ) .this limit corresponds to the random selection protocol , where a task is selected with probability in each step . in the limitwe obtain in this case almost all tasks have a waiting time , being executed as soon as they were added to the priority list .the waiting time of tasks that are not selected in the first step follows a power law distribution , decaying with .this behavior is illustrated in fig .[ fig : l2]a by a direct plot of in ( [ ptau ] ) for a uniform distribution in . for distribution has an exponential cutoff , which can be derived from ( [ ptau ] ) after taking the limit with fixed , resulting in where when we obtain that and , therefore , the exponential cutoff is shifted to higher values , while the power law behavior becomes more prominent .the curve systematically shifts , however , to lower values for , indicating that the power law applies to a vanishing task fraction ( see fig .[ fig : l2]a and ( [ ptau00 ] ) ) . in turn , when , corroborated by the direct plot of as a function of ( see inset of fig . [fig : l2]a ) . based on the results discussed above, the overall behavior of the model with a uniform priority distribution can be summarized as follows . for ,corresponding to the case when _ always _ the highest priority task is removed , the model does not have a stationary state .indeed , each time the highest priority task is executed , there is a task with smaller priority left on the list . 
with probability the newly added taskwill have a priority larger than , and will be executed immediately . with probability ,however , the new task will have a smaller priority , in which case the older task will be executed , and the new task will become the ` resident ' one , with a smaller priority . for a long periodall new tasks will be executed right away , until an another task arrives with probability that again pushes the non - executed priority to a smaller value .thus with time the priority of the lowest priority task will converge to zero , , and thus with a probability converging to one the new task will be immediately executed .this convergence of to zero implies that for the model does not have a stationary state .a stationary state develops , however , for any , as in this case there is always a finite chance that the lowest priority tasks will also be executed , thus the value of will be reset , and will converge to some .this qualitative description applies for arbitrary values .( a ) and ( b ) and different values of ( see legend ) .the inset in ( b ) shows the exponent for different ( points ) , indicating that for ( continuous line ) .( c ) rescaled plot of the waiting time distribution for .similar plots are obtained for larger vales of ( data not shown).,width=307 ] to quantify this qualitative picture we studied numerically the case assuming that is uniformly distributed in the interval . to investigatehow fast the system approaches the stationary state we compute the average priority of the lowest priority task in the queue , ( see fig .[ fig : scaling]a , b ) since it represents a lower bound for the average of any other priorities on the list .we find that for any values decreases exponentially up to a time scale , when it reaches a stationary value .the numerical simulations indicate that ^{\theta_l}\ . \label{xmin}\ ] ] for can calculate exactly , obtaining \ , \label{xminl2}\end{aligned}\ ] ] and therefore .for we determined from the best data collapse , obtaining the values shown in the inset of fig .[ fig : scaling]b , indicating that where is the value of for .these results support our qualitative discussion , indicating that for all and values the system reaches a stationary state .finally we measured the waiting time distribution after the system has reached the stationary state .the results for are shown in fig .[ fig : scaling]c , and similar results were obtained for other values .the data collapse of the numerically obtained indicates that when and , where in the limit .the simulations indicate that the model s behavior for is qualitatively similar to the behavior derived exactly for , but different scaling parameters characterize the scaling functions . for any , however , the waiting times scale as , _i.e. _ we have . asthe results in the previous subsections show , the model proposed to account for the universality class has some apparent problems .indeed , for truly deterministic execution ( ) the model does not have a stationary state .the problem was cured by introducing a random task execution ( ) , which leads to stationarity . 
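A direct numerical sketch of the fixed-length list just discussed is given below: the list always holds L tasks, the highest-priority one is executed with probability p and a random one otherwise, and the freed slot is refilled with a fresh uniform priority. For p close to 1 most tasks should leave at the first step while the remaining ones develop the slowly decaying tail; the parameter values are illustrative.

```python
# Sketch: fixed-length priority list with probabilistic highest-priority selection.
import random
from collections import Counter

def fixed_length_waits(L=2, p=0.999, steps=1_000_000, seed=1):
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(L)]    # (priority, arrival_step)
    waits = Counter()
    for t in range(1, steps + 1):
        if rng.random() < p:
            idx = max(range(L), key=lambda i: tasks[i][0])   # highest priority
        else:
            idx = rng.randrange(L)                            # random pick
        waits[t - tasks[idx][1]] += 1
        tasks[idx] = (rng.random(), t)                        # refill the freed slot
    return waits

w = fixed_length_waits()
total = sum(w.values())
print("P(tau_w = 1) =", round(w[1] / total, 3))              # immediate executions
for tau in (10, 100, 1000):
    print(f"P(tau_w = {tau}) = {w.get(tau, 0) / total:.2e}")
```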
In this case, however, a parameter-dependent fraction of the tasks is executed immediately, and only the remaining long-lived tasks follow a power law. As the probability of random execution converges to zero, the fraction of tasks executed immediately grows, developing a significant gap between the power-law regime and the tasks executed with the minimal waiting time. Is this behavior realistic, or is it an artifact of the model? A first comparison with the empirical data would suggest that it is indeed an artifact, as the measurements shown in fig. [f:fig2] and [fig:arrival:response] do not provide evidence of a large number of tasks that are executed immediately. However, when inspecting the measurement results we should keep in mind that they represent the interevent times, and not the waiting times. Even in the cases when the waiting time can be measured directly, like in the email or mail based correspondence, there is some ambiguity about the real waiting time. Indeed, in the email data, for example, we have measured as waiting time the time difference between the arrival of an email and the response sent to it. While this offers an excellent approximation, from an individual's or a priority queue's perspective it is not the real waiting time. Indeed, consider the situation when an email arrives at 9:00 am, and the recipient does not check her email until 11:56 am, at which point she replies to the email immediately. From the perspective of her priority list the waiting time was less than a minute, as she replied as soon as she saw the email. In our dataset, however, the waiting time will be 3 hours and 56 minutes. Thus the way we measured the waiting times cannot identify the true waiting time of a task on a user's priority list. The email dataset allows us, however, to get a much better approximation of the real waiting times than we did before. Indeed, for an email received by user A we record the time it arrives, and then the time of the first email sent by user A to any other user _after_ the arrival of the selected email. It is from this moment that we start measuring the waiting time for the email. Thus if user A replies at some later time, we measure the email's waiting time from this first post-arrival activity, instead of from the arrival time considered in fig. [fig:arrival:response]a. The results, shown in fig. [fig:arrival:response]c, display the same power-law scaling we have seen in fig. [fig:arrival:response]a, but in addition there is a prominent peak at the shortest waiting times, corresponding to emails responded to immediately. Note that the peak's magnitude is orders of magnitude larger than the probabilities displayed by the large waiting times. This result suggests that what we could have easily considered a model artifact in fact captures a common feature of email communications: a high fraction of our emails is responded to immediately, right after our first chance to read them, as predicted by the priority model discussed in this section. Are there models that can produce this universality class without the high fraction of items executed immediately? While we have failed to come up with any examples, we believe that developing such models could be quite valuable. As we discussed above, the empirical measurements provide either the interevent time distribution (sects.
[sec : alpha1 ] and [ sec : broker ] ) or the waiting time distribution ( sect .[ sec : letters ] ) of the measured human activity patterns .in contrast the model predicts only the waiting time of a task on an individual s priority list .what is the relationship between the observed interevent times and the predicted waiting times ?the basic thesis of our paper is that the waiting times the various tasks experience on an individual s priority list is responsible for the heavy tailed distributions seen in the interevent times as well .the purpose of this section is to discuss the relationship between the two quantities .the model predictions , that the waiting time distribution of the tasks follows a power law , is directly supported by one dataset in each universality class : the email data and the correspondence data . as discussed in sect .[ sec : empirical ] , we have measured the waiting time distribution for both datasets , finding that the distribution of the response times indeed follows a power law with exponent ( email ) and ( correspondence mail ) as predicted by the models . therefore , the direct measurement of the waiting times are likely rooted in the fat tailed response time distribution . for the other three datasets , however , such as web browsing , library visits and stock purchases , we can not determine the waiting time of the individual events , as we do not know when a given task is added to the individual s priority list . to explore the broader relationship between the waiting times and the interevent timeswe must remind ourselves that while during the measurements we are focusing on a specific task ( like email ) , the models assume the knowledge of _ all _ tasks that an individual is involved in .thus the empirical measurements offer only a selected subset of an individual s activity pattern . to see the relationship between and next we discuss two different approaches ._ queueing of different task categories : _ the first approach acknowledges the fact that tasks are grouped in different categories of priorities : we often do not keep in mind specific emails to be answered , but rather remember that we need to check our email and answer whatever needs attention .similarly , we may remember a few things that we need to shop for , but our priority list would often contain only one item : go to the supermarket . when we monitor different human activity patterns , we see the repetitive execution of these categories , like going to the library , or doing emails , or browsing the web . given this , one possible modification of the discussed models would assume that the tasks we monitor correspond to specific activity categories , and when we are done with one of them , we do not remove it from the list , but we just add it back with some changed priority .that is , checking our email does not mean that we deleted email activity from our priority list , but only that next has some different priority .if we monitor only one kind of activity , then a proper model would be the following : we have tasks , each assigned a given priority .after a task is executed , it will be reinserted in the queue with a new priority chosen from the same distribution .if we now monitor the time at which the different tasks exit the list , we will find that the interevent times for the _ monitored _ tasks correspond exactly to the waiting time of that task on the list .note that this conceptual model would work even if the tasks are not immediately reinserted , but after some delay . 
Indeed, in this case the interevent time will be the waiting time plus the re-insertion delay, and as long as the distribution from which this delay is drawn is bounded, the tail of the interevent time distribution will be dominated by the waiting time statistics. _Interaction between individuals:_ The timing of specific emails also depends on the interaction between the individuals involved in an email based communication. Indeed, if user A gets an email from user B, she will put the email on her priority list, and answer it when she gets to it. Thus the timing of the response depends on two parameters: the receipt time of the email, and the waiting time on the priority list. Consider two email users, A and B, that are involved in an email based conversation. We assume that A sends an email to B as a response to an email B sent to A, and vice versa. Thus the interevent time between two consecutive emails sent by user A to user B is given by the sum of the waiting time the email experienced on user A's queue and the waiting time of the response of user B to A's email. If both users prioritize their tasks, then they both display the same waiting time distribution. In this case the interevent time distribution, which is what we observe empirically if we study only the activity pattern of user A, follows the same law as well. Thus the fact that users communicate with each other turns the waiting times into observable interevent times. In summary, the discussed mechanisms indicate that the waiting time distribution of the tasks could in fact drive the interevent time distribution, and that the waiting time and the interevent time distributions should decay with the same scaling exponent. In reality, of course, the interplay between the two quantities can be more complex than discussed here, and perhaps an even better mapping between the two measures could be found for selected activities. But these two mechanisms indicate that if the waiting time distribution is heavy tailed, we should expect the interevent time distribution to be affected by it as well. _Universality classes:_ As summarized in the introduction, the main goal of the present paper was to discuss the potential origin of the heavy tailed interevent times observed in human dynamics. To start, we provided evidence that in five distinct processes, each capturing a different human activity, the interevent time distribution for individual users follows a power law. Our fundamental hypothesis is that the observed interevent time distributions are rooted in the mechanisms that humans use to decide when to execute the tasks on their priority list. To support this hypothesis we studied a family of queuing models that assume that each task to be executed by an individual waits some time on the individual's priority list, and we showed that queuing can indeed generate power-law waiting time distributions. We find that a model that allows the queue length to fluctuate leads to a waiting time exponent of 3/2, while a model for which the queue length is fixed displays an exponent of 1. These results indicate that human dynamics is described by at least two universality classes, characterized by empirically distinguishable exponents. Note that while we have classified the models based on the limitations on the queue length, we cannot exclude the existence of models with fixed queue length that scale with exponent 3/2, or models with fluctuating length that display scaling with exponent 1, or some other exponents.
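Before turning to the comparison with the empirical data, the task re-insertion mechanism described in the previous section can be made concrete with a few lines: the same fixed-length list is kept, but the executed task is put back with a fresh priority, and only the execution times of one labelled activity (say, "email") are recorded, so its interevent times coincide with its waiting times on the list. The parameters and the labelling are illustrative.

```python
# Sketch: monitoring a single re-inserted activity on a fixed-length priority list.
import random

def monitored_interevent_times(L=5, p=0.999, steps=500_000, monitored=0, seed=2):
    rng = random.Random(seed)
    priority = [rng.random() for _ in range(L)]
    last_exec, gaps = 0, []
    for t in range(1, steps + 1):
        if rng.random() < p:
            idx = max(range(L), key=priority.__getitem__)    # highest priority
        else:
            idx = rng.randrange(L)
        if idx == monitored:
            gaps.append(t - last_exec)     # interevent time of the watched activity
            last_exec = t
        priority[idx] = rng.random()       # re-insert with a fresh priority
    return gaps

gaps = monitored_interevent_times()
print("executions:", len(gaps), "  longest gap:", max(gaps))
```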
in comparing these results with the empirical data, we find that email and phone communication , web surfing and library visitation belong to the universality class .the correspondence patterns of einstein , darwin and freud offer convincing evidence for the relevance of the exponent , and the related universality class , for human dynamics .in contrast the fourth process , capturing a stock broker s activity , shows .given , however , that we have data only for a single user , this value is in principle consistent with the scattering of the exponents from user to user , thus we can not take it as evidence for a new universality class .one issue still remains without a satisfactory answer : why does email and surface mail ( einstein , darwin and freud datasets ) belong to different universality classes ?we can comprehend why should the mail correspondence belong to the class : letters likely pile on the correspondent s desk until they are answered , the desk serving as an external memory , thus we do not require to remember them all .but the same argument could be used to explain the scaling of email communications as well , given that unanswered emails will stay in our mailbox until we delete them ( which is one kind of task execution ) .therefore one could argue that email based communication should also belong to the universality class , in contrast with the empirical evidence , that clearly shows .some difficulty in comparing the empirical data with the model predictions is rooted in the fact that the models predict the waiting times , while for many real systems only the interevent times can be measured .it is encouraging , however , that for the email and the surface mail based commnunication we were able to determine directly the waiting times as well , and the exponents agreed with those determined from the interevent times .in addition we argued that in a series of processes the waiting time distribution determines the interevent time distribution as well ( see sect .[ sec : wait_interevent ] ) .this argument closes the loop of the paper s logic , establishing the relevance of the discussed queueing models to the datasets for which only interevent times could be measured .we do not feel , however , that this argument is complete , and probably future work will strengthen this link . in this respecttwo directions are particularly promising .first , designing queueing models that can directly predict the observed interevent times as well would be a major advance .second , establishing a more general link between the waiting time and interevent times could also be of significant value .the results discussed in this paper leave a number of issues unresolved . in the followingwe will discuss some of these , outlining how answering them could further our understanding of the statistical mechanics of human driven processes . _ tuning the universality class : _ as we discussed above , the discussed models provide evidence for two distinct universality classes in human dynamics , with distinguishable exponents .the question is , are there other universality classes , characterized by exponents different from 1 and 3/2 ?if other universality classes do exist , it would be valuable not only to find empirical support for them , but also to identify classes of models that are capable of predicting the new exponents . 
in searching for new exponentswe need to explore several different directions .first , if one inserts some power law process into the queuing model , that could tune the obtained waiting time distribution , and the scaling exponents .there are different ways to achieve this .one method , discussed in appendix [ sec : preferential ] , is based on the hypothesis that while we always attempt to select the highest priority task , circumstances or resource availability may not allow us to achieve this .for example , our highest priority may be to get cash from the bank , but we can not execute this task when the bank is closed , moving on to some lower priority task .one way to account for this is to use a probabilistic selection protocol , assuming that the probability to choose a task with priority for execution in a unit time is , where is a parameter that interpolates between the random choice limit ( ii ) ( , ) and the deterministic case , when always the highest priority item is chosen for execution ( iii ) ( , ) .as shown in the appendix [ sec : preferential ] , the exponent will depend on as at this moment we do not have evidence that such preferential selection process acts in human dynamics .however , detailed datasets and proper measurement tools might help up decide this by measuring the function directly , capturing the selection protocol .such measurements were possible for complex networks , where a similar function drives the preferential attachment process .as we discussed above , the main goal of this paper was to demonstrate that the queuing of the tasks on an individual s priority list can explain the heavy tailed distributions observed in human activity patterns . to achieve this , we focused on models with poisson inputs , meaning that both the arrival time and the execution time are bounded . in some situations ,however , the input distributions can be themselves heavy tailed .this could have two origins : ( _ i _ ) heavy tailed arrival time distribution : as we show in fig .[ fig : arrival : response]b , there is direct evidence for this in the email communication datasets : we find that the interevent time of arriving emails can be roughly approximated with a power law with exponent .( _ ii _ ) the execution time could also be heavy tailed , describing the situation when most tasks are executed very rapidly , while a few tasks require a very long execution time .evidence for this again comes from the email system : the file sizes transmitted by email are known to follow a heavy tailed distribution .therefore , if we read every line of an email , in principle the execution time should also be heavy tailed ( _ i.e. _ the time we actually take to work on the response , including reading the original email ) .note , however , that measurements failed to establish a correlation between email size and the response time .it is not particularly surprising that both ( _ i _ ) and ( _ ii _ ) would significantly impact the waiting time distribution , generating a heavy tailed distribution for the waiting times even when the behavior of the model otherwise would be exponential , or change the exponent , thus altering the model s universality class .some aspects of this problem were addressed recently by blanchard and hongler . however , to illustrate the impact of the heavy tailed inputs in appendix [ app : service ] we study the model of sect .[ sec : cobham ] with a heavy tailed service time distribution with . 
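The probabilistic selection protocol described earlier in this discussion is also straightforward to simulate. In the sketch below a task of priority x is chosen with probability proportional to x**gamma, so gamma = 0 recovers the random protocol and large gamma approaches the deterministic one; the tail of the measured waiting times should fatten as gamma grows, in line with the gamma-dependent exponent referenced above (whose explicit form is not reproduced here). The parameter values are illustrative.

```python
# Sketch: selection proportional to priority**gamma on a short fixed-length list.
import random
from collections import Counter

def gamma_protocol_waits(L=2, gamma=5.0, steps=1_000_000, seed=3):
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(L)]    # (priority, arrival_step)
    waits = Counter()
    for t in range(1, steps + 1):
        weights = [x ** gamma for x, _ in tasks]
        idx = rng.choices(range(L), weights=weights)[0]
        waits[t - tasks[idx][1]] += 1
        tasks[idx] = (rng.random(), t)
    return waits

for g in (0.5, 1.0, 5.0):
    w = gamma_protocol_waits(gamma=g)
    tail = sum(c for tau, c in w.items() if tau > 100) / sum(w.values())
    print(f"gamma = {g:3.1f}   P(tau_w > 100) = {tail:.2e}")
```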
finally , could the power law distributed arrival and execution times serve as the proper explanation for the observed heavy tailed interevent time distribution in human dynamics ? note that in a number of systems we observe heavy tailed distributed events without evidence for power law inputs . for example, the timing of the library visits or stock purchases by brokers does not appear to be driven by any known power law inputs , and they have negligible execution time compared with the average observed interevent times .similarly , the beginning of online games or instant messages is not driven by file sizes either , but only by the time availability for playing a game or sending a message , which is mostly a priority driven issue .therefore , while it is important to understand the impact of power law inputs on the scaling properties of various models , attempts to explain the waiting times solely based on the heavy tailed inputs only delegate the problem to an earlier cause ( the origin of the power law input ) . _potential model extensions : _ guided by the desire of constructing the simplest models that capture the essence of task execution , we have neglected many processes that are obviously present in human dynamics .for example , we assumed that the priority of the tasks is assigned at the moment the task was added to an individual s priority list , and remains unchanged for the rest of the queuing process .in reality the priorities themselves can change in time .for example , many tasks have deadlines , and one could assume that a task s priority diverges as the deadline is approaches . even in the absence of a clear deadlinesome priorities may incease in time , others may decrease .sometimes external factors change suddenly a task s priority for example , the priority of watering the lawn suddenly diminishes if it starts raining .the possibility of dropping tasks , either by not allowing them on the queue , or by simply deleting them from the queue , could also affect the waiting time distributions .tasks could be dropped if they were not executed for a considerable time interval , and thus become irrelevant , or when the individual is very busy , or some may be simply forgotten .obviously , the precise impact on the waiting time distribution will depend on the implementation of the task dropping conditions .it is important to understand if any or all of these processes could change the universality class of the waiting time distribution ._ model limitations : _ the studied datasets do not capture all tasks an individual is involved in , but only the timing of selected activities , like sending emails or borrowing books from the library . yet , we must consider the fact that between any two recorded events individuals participate in many other non - recorded activities .for example , if we find that an individual clicks on a new document every few seconds , likely he / she is fully concentrating on web browsing . however , when we notice a break of hours or days between two consecutive clicks , it is clear that in the meantime the individual was involved in a myriad of other activities that were not visible to us .the queuing models discussed here were designed to take into consideration all human activities , as we assume that the priority list of an individual contains all tasks the person is involved in .currently an understanding of the interplay between the _ recorded _ and the _ invisible _ activities is still lacking . 
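Returning to the deadline extension suggested under the potential model extensions above, the sketch below lets each task's effective priority grow as its (randomly assigned) deadline approaches, so that no task can be postponed forever. This is not a model from the paper, only a speculative variation of the fixed-length queue with invented parameters.

```python
# Sketch: deadline-driven ("aging") priorities on a fixed-length list.
import random
from collections import Counter

def deadline_waits(L=10, steps=500_000, max_deadline=10_000, seed=4):
    rng = random.Random(seed)

    def new_task(now):
        # (base priority, arrival step, deadline step); deadline drawn at random
        return (rng.random(), now, now + rng.randint(1, max_deadline))

    def effective(task, now):
        x, _, deadline = task
        return x / max(deadline - now, 1)   # grows steeply as the deadline nears

    tasks = [new_task(0) for _ in range(L)]
    waits = Counter()
    for t in range(1, steps + 1):
        idx = max(range(L), key=lambda i: effective(tasks[i], t))
        waits[t - tasks[idx][1]] += 1
        tasks[idx] = new_task(t)
    return waits

w = deadline_waits()
print("longest observed wait:", max(w), " (bounded roughly by the deadline scale)")
```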
_Task optimization:_ The order in which we execute different tasks is often driven by optimization: we try to minimize the total time, or some cost function. This is particularly relevant if the execution times depend on the order in which the tasks are executed. For example, executing a certain task might often be faster if we execute some preparatory tasks before it, and not in the inverse order. In principle optimization could be incorporated into the studied models by assuming that it determines the priority of the tasks. Optimization raises several important questions for future work: how should we model optimization driven queueing processes? Can they also lead to power laws, and if so, will they result in new universality classes? _Correlations:_ So far we have focused on the origin of the various distributions observed in human dynamics. Distributions offer little information, however, about potential correlations present in the observed time series. Such correlations were documented in ref. , where the correlation function of the interevent time series for printing job arrivals was observed to decay as a power law. Are such temporal correlations present in other systems as well? What is their origin? Can the queuing models predict such correlations? Answers to these questions could not only help us better understand human dynamics, but could also aid in distinguishing the various models from each other. _Network effects:_ In searching for the explanation of the observed heavy tailed human activity patterns we limited our study to the properties of single queues. In reality none of our actions are performed independently: most of our daily activity is embedded in a web of actions of other individuals. Indeed, the timing of an email sent to user A may depend on the time we receive an email from user B. An important future goal is to understand how the various human activities and their timing are affected by the fact that the individuals are embedded in a network environment. _Non-human activity patterns:_ Heavy tailed interevent time distributions do not occur only in human activity, but emerge in many natural and technological systems.
For example, Omori's law for earthquakes records heavy tailed interevent times between consecutive seismic events; measurements indicate that the fishing patterns of seabirds also display heavy tailed statistics; plasticity patterns and avalanches in lungs show similar power-law interevent times. While a series of models have been proposed to capture some of these processes individually, there is also a possibility that some of these modeling frameworks can be reduced to various queuing processes. Some of the studied queuing models show a close relationship to several models designed to capture self-organized criticality. Could the mechanisms be similar at some fundamental level? Even if such a higher degree of universality is absent, understanding the mechanisms and queuing processes that drive human dynamics could help us better understand other natural phenomena as well, from the timing of chemical reactions in a cell to the temporal records of major economic events, or the timing of events in manufacturing processes, supply chains or panic. As we discussed in sect. [sec:discussion], one possible modification of the priority model introduced and studied in sect. [sec:barabasi] involves the assumption that we do not always choose the highest priority task for execution, but rather that the tasks are chosen stochastically, with a probability that is an increasing function of their priority. That is, the probability to choose a task with priority x for execution in a unit time is \Pi(x) \propto x^{\gamma}, where \gamma is a predefined parameter of the model. This parameter allows us to interpolate between the random choice limit (ii) (\gamma = 0) and the deterministic case, when always the highest priority item is chosen for execution (iii) (\gamma \to \infty). Note that this parameterization captures the scaling of the model discussed in sect. [sec:barabasi] only in the \gamma = 0 and \gamma \to \infty limits, but not for intermediate values. That is, the two limits of this model map into the extreme limits of the model introduced in sect. [sec:barabasi], but the intermediate parameter values do not map into each other. The probability that a task with priority x waits a time interval \tau_w before execution is the geometric law (1 - \Pi(x))^{\tau_w - 1} \Pi(x). The average waiting time of a task with priority x is obtained by averaging \tau_w weighted with this distribution, providing an average waiting time inversely proportional to \Pi(x), _i.e._ the higher an item's priority, the shorter is the average time it waits before execution. To calculate P(\tau_w) we use the fact that the priorities are chosen from the distribution \rho(x); averaging over x provides the relationship ([alphagamma]) between the waiting time exponent and \gamma, indicating that with changing \gamma we can continuously tune the waiting time exponent as well. In the \gamma \to \infty limit, which converges to the strictly priority based deterministic choice (p = 1) of the model, eq. ([pt]) predicts a waiting time exponent equal to 1, in agreement with the numerical results (fig 3a), as well as with the empirical data on the email interarrival times (fig 2a). In the opposite (\gamma = 0) limit the average waiting time is independent of x, thus P(\tau_w) converges to an exponential distribution, as shown in fig. 3b. The apparent dependence of P(\tau_w) on the distribution \rho(x) from which the agent chooses the priorities may appear to represent a potential problem, as assigning priorities is a subjective process, each individual being characterized by his or her own distribution.
according to eq .( [ pt ] ) , however , in the limit is independent of .indeed , in the deterministic limit the uniform can be transformed into an arbitrary with a parameter change , without altering the order in which the tasks are executed .this insensitivity of the tail to explains why , despite the diversity of human actions , encompassing both professional and personal priorities , most decision driven processes develop a heavy tail .consider the model discussed in sect . [sec : barabasi ] with .the task that has been just selected and its priority has been reassigned will be called the new task , while the other task will be called the old task .let and be the priority probability density function ( pdf ) and distribution function of the new tasks , which are given . in turn ,let and be the priority pdf and distribution function of the old task in the -th step . at the -th stepthere are two tasks on the list , their priorities being distributed according to and , respectively . after selecting one task the old task will have the distribution function where + ( 1-p)\frac{1}{2 } \label{q}\ ] ] is the probability that the new task is selected given the old task has priority , and + ( 1-p)\frac{1}{2 } \label{q1}\ ] ] is the probability that the old task is selected given the new task has priority . in the stationary state , , thus from ( [ rtrt ] ) we obtain \ . \label{rx}\ ] ] next we turn our attention to the waiting time distribution .consider a task with priority that has just been added to the queue .the selection of this task is independent from one step to the other .therefore , the probability that it waits steps is given by the product of the probability that it is not selected in the first steps and that it is selected in the -th step .the probability that it is not selected in the first step is , while the probability that it is not selected in the subsequent steps is . integrating over the new task s possible priorities we obtain \ , & \tau_w=1\\ \\% \displaystyle \int_0^\infty dr(x ) \tilde{q}(x ) \left [ 1 - q(x ) \right ] q(x)^{\tau_w-2}\ , & \tau_w>1\\ \end{array } \right .\label{ptau0}\ ] ] using ( [ q])-([rx ] ) and integrating ( [ ptau0 ] ) we finally obtain \ , & \tau_w>1 \end{array } \right .\label{ptau2}\ ] ] note that is independent of the pdf from which the tasks are selected .indeed , what matters for task selection is their relative order with respect to other tasks , resulting that all dependences in ( [ q])-([rx ] ) and ( [ ptau0 ] ) appears via .in sect . [ sec : barabasi ] we focused on a model with fixed queue length , demonstrating that it belongs to a new universality class with .next we derive a series of results that apply to any queuing model that has a _finite queue length _ , and is characterized by an _ arbitrary task selection protocol _ . in each time step there are tasks in the queue and one of them is executed . therefore where is the waiting time of the task executed at the -th step and , , is the time interval that task , that is still active at the -th step , has already spent on the queue .the first term in the l.h.s . of ( [ tault ] ) corresponds to the sum of the waiting times experienced by the tasks that were executed in the steps since the beginning of the queue , while the second term describes the sum of the waiting times of the tasks that are still on the queue after the step .given that in each time step each of the tasks experience one time step delay , the sum on the l.h.s .should equal . 
from ( [ tault ] )it follows that if all active tasks have a chance to be executed sooner or later , like the case for the model studied in sects .[ sec : barabasi ] in the regime , we have and the last term in ( [ tauave ] ) vanishes when .in contrast , for the numerical simulations indicate that after some transient time the most recently added task is always executed , while tasks remain indefinitely in the queue . in this case in the limit andthe last term in ( [ tauave ] ) is of the order of .based on these arguments we conjecture that the average waiting time of executed tasks is given by which is corroborated by numerical simulations ( see fig .[ fig : l2]b ) .it is important to note that the equality in ( [ tauave ] ) is independent of the selection protocol , allowing us to reach conclusions that apply beyond the model discussed in sect .[ sec : barabasi ] . from ( [ tauave ] )we obtain from this constraint follows that must decay faster than when , otherwise would not be bounded .indeed , it is easy to see that for any the average waiting time diverges for eq .( [ e : eq1 ] ) .thus , when , we must either have or where and when , where is a constant . that is , each time an exponent is observed ( as it is for the empirical data discussed in sect . [ sec : empirical ] ) , an exponential cutoff must accompany the scaling .for example , for the model discussed above with and we have and decays exponentially ( [ ptau00 ] ) , in line with the constraint discussed above .a basic difference between the models discussed in sect .[ sec : cobham ] and sects .[ sec : barabasi ] is the capacity of the queue .our results indicate that the model without limitation on the queue length displays , rooted in the fluctuations of the queue length .in contrast , the model with fixed queue length ( sect . [ sec : barabasi ] ) has , rooted in the queuing of the low priority tasks on the priority list .if indeed the limitation in the queue length plays an important role , we should be able to develop a model that can display a transition from the to the universality class as we limit the fluctuations in the queue length . in this sectionwe study such a model , interpolating between the two observed scaling regimes .we start from the model discussed in sect .[ sec : cobham ] , and impose on it a maximum queue length .this can be achieved by altering the arrival rate of the tasks : when there are tasks in the queue no new tasks will be accepted until at least one of the tasks is executed .mathematically this implies that the arrival rate depends on the queue length as in the stationary state the queue length distribution satisfies the balance equation where from ( [ llbalance ] ) we obtain the queue length distribution as suggesting the existence of three scaling regions ._ subcritical regime _, : if the arrival rate of the tasks is much smaller than the execution rate , the fact that the queue length has an upper bound has little significance , since will rarely reach its upper bound , but will fluctuate in the vicinity of .this regime can be reached either for and fixed or for and .therefore , in this case the waiting time distribution is well approximated by that of the model with an unlimited queue length , displaying the scaling predicted by eq .( [ w0 ] ) , _ i.e. _ either exponential , or a power law with , coupled with an exponential cutoff ( see fig .[ f : fig6]a ). _ critical regime _ : for we observe an interesting interplay between the queue length and . 
normally in this critical regime should follow a random walk with the return time probability density scaling with exponent .however , the limitation imposed on the queue length limits the power law waiting time distribution predicted by eq .( [ w0 ] ) , introducing a cutoff ( see fig .[ f : fig6]b ) .indeed having the number of tasks in the queue limited allows each task to be executed in a finite time ._ supercritical regime _ : when from ( [ ll ] ) follows that _ i.e. _ with probability almost one the queue is filled .thus , in the supercritical regime new tasks are added to the queue immediately after a task is executed .if we take the number of executed tasks as a new reference time then this model corresponds to the one discussed in sect .[ sec : barabasi ] , displaying , as supported by the numerical simulations ( see fig .[ f : fig6]a ) . , with a maximum queue length .the waiting time distribution is plotted for three values : ( circles ) , ( squares ) and ( diamonds ) .the data has been rescaled to emphasize the scaling behavior , where . in the insetwe plot the waiting time for , showing the crossover to the model discussed in sect .[ sec : barabasi ] in the limit and fixed.,width=307 ]in this appendix we study the model discussed in sec . [sec : cobham ] with a heavy tailed service time distribution with . in this caseit has been shown that this result is a consequence of the generalized limit theorem for heavy tailed distributions . let us focus on a selected task and assume that tasks need to be executed before it .therefore , the selected task s waiting time is given by where is the service time of the -th task executed before the given task . equation ( [ twm ] ) represents the sum of independent and identically distributed random variables , with pdf , which is known to follow a pdf with the same heavy tail , and thus resulting in ( [ wbeta ] ) .hence , in this case the heavy tail in the waiting time distribution is a consequence of the heavy tails in the service time distribution .we wish to thank diana kormos buchwald and tilman sauer for providing us the dataset capturing the einstein correspondence , and for helping us understand many aspects of einstein s life and correspondence patterns .similarly , we wish to thank alison m pearn from the darwin correspondence project for providing us the record of darwin s communications and thurston miller for providing us the library visit dataset .we have benefited from useful discussions with l.a.n .amaral and d. stouffer .b. , a. v. and z. d. are supported by nsf itr 0426737 , nsf act / sger 0441089 awards and by an award from the james s. mcdonnell foundation .is supported by fct ( portugal ) grant no .sfrh / bd/14168/2003 .i.k . wish to thank b. jank and the institute for theoretical sciences at university of notre dame for their hospitality during this collaboration , as well as the national office of research and technology in hungary for support .
the dynamics of many social , technological and economic phenomena are driven by individual human actions , turning the quantitative understanding of human behavior into a central question of modern science . current models of human dynamics , used from risk assessment to communications , assume that human actions are randomly distributed in time and thus well approximated by poisson processes . here we provide direct evidence that for five human activity patterns , such as email and letter based communications , web browsing , library visits and stock trading , the timing of individual human actions follows non - poisson statistics , characterized by bursts of rapidly occurring events separated by long periods of inactivity . we show that the bursty nature of human behavior is a consequence of a decision based queuing process : when individuals execute tasks based on some perceived priority , the timing of the tasks will be heavy tailed , most tasks being rapidly executed , while a few experience very long waiting times . in contrast , priority blind execution is well approximated by uniform interevent statistics . we discuss two queueing models that capture human activity . the first model assumes that there are no limitations on the number of tasks an individual can handle at any time , predicting that the waiting time of the individual tasks follows a heavy tailed distribution with . the second model imposes limitations on the queue length , resulting in a heavy tailed waiting time distribution characterized by . we provide empirical evidence supporting the relevance of these two models to human activity patterns , showing that while emails , web browsing and library visitation display , the surface mail based communication belongs to the universality class . finally , we discuss possible extensions of the proposed queueing models and outline some future challenges in exploring the statistical mechanisms of human dynamics . these findings have important implications not only for our quantitative understanding of human activity patterns , but also for resource management and service allocation in both communications and retail .
since the seminal papers of barabsi and albert , in which the authors showed that many socio - technical and natural systems have a very non - trivial , complex network structure , a large number of contributions were addressed , in which the methodology of complex networks was also used to study economic and financial systems .various issues related to data analysis and modelling of financial and economic networks were also discussed at subsequent polish symposia on econo- and sociophysics ( called in polish fens ) ( e.g. see ) . in this paperwe deal with the issue of spanning trees of the world trade web ( wtw ) , which was presented at the 7th fens in lublin . in the most general form, wtw is defined as the network of world - trade relations , where countries are represented by nodes and directed weighted links connecting them represent money flows from one country to another . in the last years , many stylized facts about wtw were reported , which , in accordance with the stage of development of the science of networks , were correspondingly related to : binary representation of this network , its weighted version , multinetwork ( commodity - specific ) properties , the inherent community structure , and even fractal properties .abundance of analyses and revealed stylized facts became the basis for theoretical models of wtw .and although , at present , the literature on wtw is quite extensive , the problem of spanning trees for this network appears there rather marginally ( the few examples in this regard are ) . the aim of this contribution is to address the maximum weight spanning trees for wtw in a more systematic way than in the previous works on this topic .the paper is organized as follows . in sect .[ sec2 ] , we present the real data sets and introduce basic concepts and definitions that are in use throughout this article . in sect .[ sec3 ] , we discuss results of real - data analysis . in sect .[ sec4 ] , spanning trees of real wtw are compared with the corresponding spanning trees for synthetic , fully connected networks , in which connection weights are calculated according to the gravity model of trade .we summarize our results in sect .results described in this paper are based on the trade data collected by gleditsch that contain , for each world country in the period 1950 - 2000 , the detailed list of bilateral import and export volumes .the data are employed to build a sequence of symmetric matrices , , corresponding to snapshots of weighted trade networks in the consecutive years , .each entry , , in any single matrix , , represents the average trade volume between and in a given year . to be precise , is calculated as follows : where refers to the volume of export from to , and stands for the volume of import from to . from the point of view of an external observer, these two values should be the same .however , due to differences in reporting procedures between countries , there are often small deviations between and .the same applies to trade in the opposite direction , thus justifying eq .( [ eq1 ] ) .( for a detailed discussion about symmetry issues between and see e.g. . 
) in this paper , maximum weight spanning trees for wtw , which are characterized by a sequence of matrices , , with entries , , are obtained by using the prim s algorithm to graphs with connection weights equal to .apart from trade matrices , we also use several other quantities that make description of structural properties of wtw easier .in particular , to characterize trade performance of a country we define the so - called strength , , of the corresponding node .the quantity is calculated as the total weight of all connections that are attached to the node and it represents the total export ( or import ) of the considered country : where is the number of countries participating in the international trade in a given year .the total sum of the connections weights in wtw is defined as : and the total number of such connections is given by : where in the maximum weight spanning trees , the corresponding quantities : , , and are defined in a similar manner , but using the entries of the matrix instead of .all the data used in this study are given in millions of current year u.s .the same applies to the trading countries gdp ( gross domestic product ) values , .finally , the distance between countries , , is the distance between their capitals , and it is given in kilometers .in fig . [ fig1mst ] , maximum weight spanning trees for wtw in 1960 and 2000 are shown , which are obtained from real data . by analysing this figureone can understand , how geographical , political and historical conditions influence the global trade . in the spanning trees ,the thickness of edges and the size of nodes reflect the corresponding bilateral trade volume , , and the strength of the country , , respectively .the above means , that the bigger node is , the more significant is the role of the country in the international trade . in simple words ,large nodes represent the stronger world economies .such nodes ( countries ) usually have a higher number of nearest neighbours ( star - like nodes ) , what distinguishes them from the less significant economies ( leaf - like nodes with only one edge ) .it is clear from fig .[ fig1mst ] that for the past 50 years the united states of america ( usa ) and germany ( ger , as the federal republic of germany before 1990 ) have always played dominant roles in the international trade . at the same time , the economic importance of the other countries participating in trade changed ( grown or decreased ) . in particular , in 1960( see fig .[ fig1mst ] a ) , among the most influential economies in the world were also : france ( fra , whose position was strong because of its colonies ) and russia ( rus , then as the soviet union , whose leading role was determined by the existence of the so - called eastern bloc along with a number of socialist states elsewhere in the world ) . in general , in the sixties , the structure of the international trade network was strongly dependent on the political zones of influence , which established after the second world war and were reflected in the opposing economic organizations such as the council for mutual economic assistance ( comecon , 1949 - 1991 , under the leadership of the soviet union , blue nodes ) and the organisation for european economic co - operation ( oeec , 1948 - 1961 , bringing together the countries of western europe , which in 1961 was transformed into the organisation for economic co - operation and development , oecd , red nodes ) . 
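as a concrete illustration of the quantities just defined, the sketch below assembles a symmetric trade matrix from reported export and import tables, extracts its maximum weight spanning tree with a Prim-style greedy construction, and computes node strengths and the fraction of total trade retained by the tree. since eq. (1) is not reproduced explicitly in the extracted text, the particular symmetrization used here (averaging each flow with its mirror report and summing the two directions) is an assumption, and the four-country data are synthetic, not the Gleditsch data set.

```python
import numpy as np

def symmetric_trade_matrix(export, imp):
    # export[i, j]: export reported by i toward j; imp[i, j]: import reported by i from j.
    # each directed flow i -> j is averaged with the mirror report (j's import from i),
    # and the two directions are then summed to give a symmetric trade volume.
    flow = 0.5 * (export + imp.T)
    w = flow + flow.T
    np.fill_diagonal(w, 0.0)
    return w

def max_weight_spanning_tree(w):
    # Prim-style greedy growth, always adding the heaviest edge leaving the tree;
    # equivalent to running an ordinary minimum-spanning-tree algorithm on, e.g., 1/w.
    n = w.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = max(((i, j, w[i, j]) for i in in_tree for j in range(n)
                    if j not in in_tree and w[i, j] > 0), key=lambda e: e[2])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# toy example with four "countries" and slightly inconsistent mirror reports
rng = np.random.default_rng(0)
export = rng.uniform(0.0, 10.0, size=(4, 4))
imp = export.T * rng.uniform(0.9, 1.1, size=(4, 4))
w = symmetric_trade_matrix(export, imp)
tree = max_weight_spanning_tree(w)
strength = w.sum(axis=1)                                # s_i: total trade of country i
print("strengths:", np.round(strength, 1))
print("tree edges:", [(i, j, round(v, 1)) for i, j, v in tree])
print("fraction of trade in the tree:", round(sum(e[2] for e in tree) / (w.sum() / 2.0), 2))
```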
in a similar manner , when analysing the spanning tree of wtw in 2000 ( see fig .[ fig1mst ] b ) , it becomes apparent that the backbone of wtw , which consists of star - like nodes , is created by the group of eight ( g8 ) member countries , i.e. the united states ( usa ) , canada ( can ) , japan ( jpn ) , germany ( ger ) , france ( fra ) , the united kingdom ( ukg ) , italy ( ita ) , russia ( rus ) ( red subtree in fig . [ fig1mst ] b ) .it was assumed that these countries represent the most developed economies in the world , with the largest gdp values and with the highest national wealths .it was not exactly the truth because g8 did not include china , which already were one of the fastest growing economy .therefore , in order to increase the representativeness of the group , since 2005 , there were meetings known as the g8 + 5 , with representatives from china ( chn ) , brazil , mexico , india ( ind ) , and south africa ( saf ) .the importance of these countries in the world trade network topology is evident and is further discussed in subsequent sections of this paper .wtw is a densely connected network . in ref . , the authors argued that probability distribution of the weights of edges , , in this network is a log - normal distribution .they have shown that although the tail of the distribution is fat with significant fluctuations , when it is shown in a double logarithmic scale it reveals a clear curvature instead of a linear behaviour .therefore , it can not be interpreted as a power - law .the similar findings were reported in subsequent studies , see e.g. . in this section, we report on weight and strength statistics of the maximum weight spanning trees for wtw in the period 1950 - 2000 .the cumulative versions of the corresponding distributions , and , are shown in fig .and although , the functional forms of the two distributions are questionable , it seems that the tail of the strength distribution can be described by a power - law , , with the time - independent characteristic exponent .the exponent is close to the exponent of the pareto distribution , which is common to many other wealth distributions , including , for example , the distribution of gdp values of all countries in the world ( see e.g. fig . 1 in ) .for small values of and the considered distributions are almost identical .this is due to the fact that in this range , the two distributions describe leaves of the spanning tree , for which the following equality holds : , i.e. each leaf has only one trade channel . the less obvious conclusion drawn from the observed compatibility of distributions is that the strength of the nodes are positively correlated with weights of the attached edges ( i.e. less developed economies have lower trade volumes ) . in fig .[ fig3 ] , the relative strength of each country , ( i.e. divided by the strength of the whole network ) , is shown in relation to the corresponding strength , , in the spanning tree .the figure shows that nodes which represent different countries in the spanning tree can be roughly divided into two groups .the first group includes mainly those countries that are in the tree as leaves .they mostly have small gdp and , respectively , low trade performance ( strength ) .the second and much less numerous group includes countries forming the skeleton of the tree .in the tree , they are usually represented by the star - like nodes .the first group , is characterized by the linear scaling relation : . 
in the second group ,the relation between and is not so obvious .however , if one wants to describe it as a linear relation , as in the first group , then the proportionality constant for the first group would be much smaller than for the second group .this indicates that , when creating the spanning tree , nodes of the first group lose more edges than nodes belonging to the second group .the reason may be that countries from different groups perform different functions in wtw .this makes the internal structure of wtw , which manifests itself in the maximum weight spanning tree , an interesting object to study .( solid squares ) and ii ) the percentage of the number of all trade channels which are included in the tree ( open circles ) . in the inset, the number of world countries vs. time is shown.,title="fig:",width=302 ] , eq .( [ ratio ] ) , between the average connection weight in the spanning tree , , and the corresponding average weight in the original wtw , .the figure shows data for real wtw and for two synthetic networks obtained from the gravity model of trade , eq .( [ gravity ] ) , with two different values of the distance coefficient : and .,title="fig:",width=266 ] to examine , how the maximum weight spanning tree for the trade network has changed over time , we analysed time dependence of the following quantities : and , which describe respectively : what percentage of the global trade volume is covered by the tree , and what percentage of the number of all trade channels is actually included in the tree . fig . [ fig4 ] shows that over the years 1950 - 2000 , these parameters decrease monotonically .in particular , the total trade within the tree , , which in the early fifties accounted for approximately of the global trade , , in the late nineties accounted for only of .similarly , in the early fifties , the number of trade channels in the tree , was approximately of all trade channels , , and in the late nineties , it was just , thus indicating that in the analysed period of time many new trade connections emerged . at first glance ,[ fig4 ] may indicate the declining role of the maximum weight spanning tree .on the other hand , however , when dividing by one gets the monotonically increasing ratio ( see fig .[ fig5 ] ) : where is the average connection weight in the spanning tree and stands for the average weight in the original trade network .this , in turn , points to a completely different conclusion : although the number of connections in wtw grows over time , these connections are not too significant . over the past 50 years, the ratio grow linearly in time , indicating the increasing , not decreasing , role of the tree and proving that the tree can really be regarded as the backbone of wtw .to complete our study on maximum weight spanning trees for wtw , we have investigated whether the famous gravity model of trade , which is the basic macroeconomic model of the international trade , can be used to reproduce the spanning trees of real networks . the gravity model of tradewas first proposed in 1962 by jan tinbergen , the physicist and the future first nobel prize winner in economic sciences .now , the model is one of the most recognizable empirical models in economics .drawing from newton s law of gravity , the gravity model relates the expected trade volume , , between two countries , and , positively to the product of their gdp s , i.e. 
, and negatively to the geographic distance , , between them .the simplest form of the gravity equation for the bilateral trade volume is : where is a constant and is the distance coefficient , which is obtained from the real data analysis and which was recently identified as being the fractal dimension of the trade system . as defined by eq .( [ gravity ] ) , in the gravity model of trade , one assumes that each country has a trade connection with any other country ( i.e. is non - zero for all pairs of countries ) . by this , synthetic trade networks ( whose spanning trees we want to study ) , which would have been constructed under the gravity law of trade , would be fully connected graphs .this is quite unrealistic .therefore , to make our study more reliable , we have decided to analyse only those trade channels , which are realized in real networks .more precisely : the studied synthetic networks have the same binary structure of trade connections as real wtw , but weights of these connections are calculated according to eq .( [ gravity ] ) .the above description means that , when studying gravity - based synthetic trade networks we employed the trading countries gdp values , , and distances between their capitals , , to built a sequence of matrices , whose entries were given by : where was used , the distance coefficient was assumed to be time - independent and equal to or , and was given by eq .( [ deltawij ] ) . to justify the value of ,we would like to note that the precise value of this parameter is irrelevant in this study : does not affect structure of the tree , nor the ratios : , , and .( symbols for the global trade , and , and the number of all connections , and , in synthetic and real trade networks are the same . )when it comes to the distance coefficient , the values of are meaningful in the sense that : means no dependence on the distance , while is the average value of this parameter in the period 1950 - 2000 .finally , maximum weight spanning trees for synthetic wtw were obtained by using the prim s algorithm to graphs with connection weights equal to . in fig .[ fig6mst ] , maximum weight spanning trees for synthetic gravity - like trade networks in 2000 are shown for two different values of the distance coefficient and .the two trees shown are very different from each other . when , the tree has the form of a star , in which all countries are connected with the strongest economy in the world , i.e. the united states of america ( usa ) .however , when the role of distance in trade is taken into account by using , then the spanning tree takes the form , which is very similar to the one which is shown in fig .[ fig1mst ] b. the remarkable difference between the two trees , fig .[ fig1mst ] b and fig .[ fig6mst ] b , is for connections between asian countries . in the synthetic wtw , china ( chn )is seen as an economic power which dominates asian trade network . in the real spanning tree ,china s economic importance resulting from the reported trade volumes is much smaller .this difference may be due to the fact that the real network had no time to adapt to new external conditions : trade is not keeping pace with the economic development of new economic powers such as china and india ( ind ) .this phenomenon can be compared to the phenomena of magnetic hysteresis , consisting in the fact that system s memory effects slow down the process of reaching the thermodynamic equilibrium . 
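a hedged sketch of how the gravity-based synthetic weights described above can be assembled is given below; the GDP values, distances and normalization constant are placeholders, and the resulting matrix can be passed to the same spanning-tree routine used for the real data (for instance the Prim-style sketch shown earlier).

```python
import numpy as np

def gravity_weights(gdp, dist, adjacency, gamma=1.0, norm=1.0):
    # w_ij = norm * gdp_i * gdp_j / dist_ij**gamma on existing trade channels only;
    # gamma = 0 switches the distance dependence off, and norm only sets the overall
    # scale (it drops out of ratios such as W^T / W and of the tree topology).
    gdp = np.asarray(gdp, dtype=float)
    dist = np.asarray(dist, dtype=float)
    w = norm * np.outer(gdp, gdp) / dist**gamma
    w *= np.asarray(adjacency, dtype=float)
    np.fill_diagonal(w, 0.0)
    return w

gdp = [9800.0, 1900.0, 1300.0, 480.0]                 # toy GDP values
dist = np.array([[1.0, 6000.0, 7000.0, 11000.0],
                 [6000.0, 1.0, 1000.0, 9000.0],
                 [7000.0, 1000.0, 1.0, 8000.0],
                 [11000.0, 9000.0, 8000.0, 1.0]])     # km; the diagonal value is irrelevant
adjacency = 1 - np.eye(4, dtype=int)                  # here: all trade channels present
for gamma in (0.0, 1.0):
    w = gravity_weights(gdp, dist, adjacency, gamma=gamma)
    print(f"gamma = {gamma}:")
    print(np.round(w / w.max(), 3))
```

with gamma = 0 the weights are set by the GDP products alone, so the resulting spanning tree collapses onto a star around the largest economy, whereas gamma > 0 favours nearby pairs, in line with the two synthetic trees discussed above.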
in fig .[ fig7 ] , time dependence of the global trade volume which is covered by the synthetic tree , , as compared with the total trade of the whole synthetic wtw , , is shown for two values of and .surprisingly , the results for are much more similar to the corresponding results obtained for the real network ( cf . fig . [ fig4 ] ) . on the other hand , after dividing the obtained values of by ( which are the same as in the real wtw ) , for both values of one gets the ratios which are very similar to those obtained in the analysis of real networks .in this paper , we have studied some statistical features of the weighted international - trade network by means of maximum weight spanning trees .we have discussed the role of large - sized countries in the network s backbone explaining some geographical , political and historical conditions influencing the structure of this backbone .we have compared the topological properties of this backbone to the analogous one created from the gravity model of trade .we show that the model correctly reproduces the backbone of the real - world economics .we have suggested how memory effects in the trade system may influence the obtained results .the work has been supported from the national science centre in poland ( grant no .2012/05/e / st2/02300 ) .barabsi , r. albert , _ emergence of scaling in random networks _ , science * 286 * , 509 ( 1999 ) .r. albert , a .-barabsi , _ statistical mechanics of complex networks _ , rev .phys . * 74 * , 47 ( 2002 ) .f. schweitzer , g. fagiolo , d. sornette , f. vega - redondo , a. vespignani , d.r .white , _ economic networks : the new challenges _ , science * 325 * , 422 ( 2009 ) .s. gworek , j. kwapie , s. drod , _ sign and amplitude representation of the forex networks _ , acta .phys . pol .a * 117*(4 ) , 681 ( 2010 ) .j. mikiewicz , _ analysis of time series correlation . the choice of distance metrics and network structure _ , acta .phys . pol .a * 121*(2 ) , b-89 ( 2012 ) .j. mikiewicz , _ network analysis of correlation strength between the most developed countries _ , acta phys . pol .a * 123*(3 ) , 589 ( 2013 ) .a. sienkiewicz , t. gubiec , r. kutner , z.r .struzik , _ dynamic structural and topological phase transitions on the warsaw stock exchange : a phenomenological approach _ , acta .phys . pol .a * 123*(3 ) , 604 ( 2013 ) .x. li , y.y .jin , g. chen , _ complexity and synchronization of the world trade web _ , physica a * 328 * , 287 ( 2003 ) . m.a .serrano , m. bogu , _ topology of the world trade web _ ,e * 68 * , 015101(r ) ( 2003 ) .d. garlaschelli , m.i .loffredo , _ fitness - dependent topological properties of the world trade web _ , phys .93 * , 188701 ( 2004 ) .serrano , m. bogu , a. vespignani , _ patterns of dominant flows in the world trade web _ , j. econ .* 2 * , 111 ( 2007 ) .k. bhattacharya , g. mukherjee , j. sarmaki , k. kaski , s. manna , _ the international trade network : weighted network analysis and modelling _ , j. stat .mech . : theory exp .p02002 ( 2008 ) .g. fagiolo , j. reyes , s. schiavo , _ world - trade web : topological properties , dynamics , and evolution _ ,e * 79 * , 036115 ( 2009 ) .m. barigozzi , g. fagiolo , d. garlaschelli , _ multinetwork of international trade : a commodity - specific analysis _ ,e * 81 * , 046104 ( 2010 ) .i. tzekina , k. danthi , d.n .rockmore , _ evolution of community structure in the world trade web _ ,j. b * 63*(4 ) , 541 ( 2008 ) .m. karpiarz , p. fronczak , a. 
fronczak , _ international trade network : fractal properties and globalization puzzle _ ,arxiv:1409.5963 [ physics.soc-ph ] .a. fronczak , p. fronczak , _ statistical mechanics of the international trade network _ ,e * 85 * , 056113 ( 2012 ) .a. fronczak , _ structural hamiltonian of the international trade network _ , acta phys .. suppl . * 5*(1 ) , 31 ( 2012 ) .r. mastrandrea , t. squartini , g. fagiolo , d. garlaschelli , _ enhanced reconstruction of weighted networks from strengths and degrees _ , new j. phys . * 16 * , 043022 ( 2014 ) .cha , j.w .lee , d .- s .lee , _ patterns of international trade and a nation s wealth _ , j. korean phys. soc . * 56*(3 ) , 998 ( 2010 ) .maeng , h.w .choi , j.w .lee , _ complex networks and minimal spanning trees in international trade network _ , int . j. mod. ser . * 16 * , 51 ( 2012 ) .lee , s.e .maeng , g .-lee , e.s .cho , _ applications of complex networks an analysis of world trade network _ , j. phys .* 410 * , 012063 ( 2013 ) .http://privatewww.essex.ac.uk//exptradegdp.html g. fagiolo , _ directed or undirected ?a new index to check for directionality relations in socio - economic networks _ , econ .3(34 ) , 1 ( 2006 ) .a. heston , r. summers , b. aten , _ world trade version 6.1 _ ( university of pennsylvania , 2002 ) .http://privatewww.essex.ac.uk//data-5.html a.v .deardorff , in _ the regionalization of the world economy _ , edited by j. a. frankel ( university of chicago press , chicago , 1998 ) , _ determinants of bilateral trade : does gravity work in a neoclassical world?_. j.e .anderson , _ a theoretical foundation for the gravity equation _ , amer .econ . rev . *69 * , 106 ( 1979 ) .j. h. bergstrand , _ the gravity equation in international trade : some microeconomic foundations and empirical evidence _ , rev .stat . * 67 * , 474 ( 1985 ) .
in this paper , we investigate the statistical features of the weighted international - trade network . by finding the maximum weight spanning trees for this network we extract the truly relevant connections forming the network 's backbone . we discuss the role of large - sized countries ( strongest economies ) in the tree . finally , we compare the topological properties of this backbone to the maximum weight spanning trees obtained from the gravity model of trade . we show that the model correctly reproduces the backbone of the real - world economy .
the outer solar convection zone extends over 29 % of the solar radius and contains about 2 % of the mass of the sun ( christensen - dalsgaard , gough & thompson 1991 ; kosovichev & fedorova 1991 ) . within most of this region ,energy transport is dominated by convection , leading to a temperature gradient which only deviates slightly from being adiabatic . in particular ,the structure of the convection zone is essentially independent of the local value of the opacity .furthermore , matter is mixed on a time scale of months and hence the composition may be assumed to be uniform . in earlier phases of solar evolution ,the convection zone has extended considerably more deeply : thus it is normally assumed that the sun was fully convective before arriving on the main sequence , justifying the assumption that the early sun was chemically homogeneous .motion induced by convection is likely to extend beyond the boundaries of the convection zone .this can be observed in the solar atmosphere and has a significant effect on the atmospheric structure .penetration beneath the lower boundary of the convection zone can only be inferred indirectly , but is potentially far more important for overall solar structure and evolution .it may affect the temperature stratification in the upper parts of the radiative interior and cause mixing and transport of angular momentum , either through direct motion in the form of penetrating convective plumes or through convectively induced gravity waves ( schatzman , these proceedings ; zahn , these proceedings ) .clear evidence for such mixing is provided by the solar surface abundances of lithium and beryllium which are considerably reduced ( by factors of about 140 and 2 , respectively ; anders & grevesse 1989 ) , relative to the initial composition of the solar system ; this indicates that matter has been mixed to a temperature considerably higher than the maximum temperature at the base of the convectively unstable region during the main - sequence life of the sun . herei am concerned with the effects of convection on the spherically symmetric stellar structure and evolution , and how these effects can be investigated through observations of solar oscillations .many of these issues will be addressed in more detail in later papers in the present volume ; however , i hope to provide a general framework , as well as a basic impression of the data now available for testing the solar models .gough & weiss ( 1976 ) pointed out that the properties of the convection zone is essentially controlled by the thin , substantially superadiabatic region at its top .the integral of the superadiabatic gradient over this region determines the adiabat of the nearly adiabatic part of the convection zone and hence its overall structure .provided the treatment of the superadiabatic region is adjusted , by varying suitable parameters , such as to yield the same adiabat , the overall structure is insensitive to the details of that treatment . 
herei consider two simple parametrized treatments of convection .one is the mixing - length theory of bhm - vitense ( 1958 ; in the following mlt ) , with a mixing length proportional to the pressure scale height .the second is the formulation by canuto & mazzitelli ( 1991 ; cm ) , with a characteristic scale related to the distance to the top of the convection zone .a potentially more realistic description of the superadiabatic region can in principle be based on appropriate averages of numerical solutions of the time - dependent hydrodynamical equations of convection .i shall consider results of simulations carried out by nordlund , stein and trampedach ( stein & nordlund 1989 ; nordlund , these proceedings ; trampedach , these proceedings ) . unlike the simple formulations, this does not contain explicitly adjustable parameters ; hence it provides a prediction of the adiabat .an overview of the structure of the solar convection zone is provided by fig .[ gwcon ] , in a form originally introduced by gough & weiss ( 1976 ) .this is based mostly on model s of christensen - dalsgaard ( 1996 ) ; the model was computed with the opal opacity ( iglesias , rogers & wilson 1992 ) and equation of state ( rogers , swenson & iglesias 1996 ) and included settling and diffusion of helium and heavy elements , using coefficients from michaud & proffitt ( 1993 ) .convection was treated using the mlt . in addition, the figure shows superadiabatic gradients obtained with the calibrated cm formulation and the hydrodynamical simulations .it is evident that in all cases the region of substantially superadiabatic convection is restricted to the outer few hundred kilometres of the convection zone .as illustrated by fig .[ gwcon ] the region of significant superadiabaticity is extremely thin , compared with the extent of the solar convection zone .thus the detailed structure of this region matters little insofar as the overall structure of the star is concerned .however , it provides the transition between the stellar atmosphere and the almost adiabatic bulk of the convection zone .the structure of the atmosphere can be found observationally , in terms of semi - empirical atmospheric models .thus the integral over the superadiabatic gradient , which determines the change in specific entropy between the atmosphere and the interior of the convection zone , essentially fixes the adiabat of the adiabatic part of the convection zone .this , together with the equation of state and composition , largely determines the structure of the convection zone .the structure of the upper parts of the convection zone is also affected by the dynamical effects of convection , generally represented as a _ turbulent pressure _ ( see rosenthal , these proceedings ; antia & basu , these proceedings ) .these effects are often neglected in calculations of stellar models , however .to illustrate the properties of the convection zone it is instructive to consider a highly simplified model .i assume the equation of state for a fully ionized perfect gas ; then the adiabatic relation between pressure and density can be written as p = k ^ , [ prho ] where and may be assumed to be constant .neglecting also the mass contained in the convection zone the equation of hydrostatic support is = - g m r^2 , [ hydro ] where is distance to the centre of the star , is the mass of the star and is the gravitational constant . 
from equations ( [ prho ] ) and ( [ hydro ] ) one obtains g m ( 1 r - 1 r _ * ) = k^1/ - 1 , [ rsol ] where is the pressure at a point near the top of the convection zone and is the radius at this point , being the surface radius of the star. conditions at the base of the convection zone are determined by the transition to convective stability , where matching to the radiative interior fixes the radius and pressure at the convection - zone base .the condition of marginal convective instability is = , [ constab ] where is the radiation density constant , is the speed of light , is temperature , is luminosity , is opacity , and we neglected again the mass in the convection zone ; also .this condition , together with equation ( [ prho ] ) , the equation of state and the dependence of on and , determines the relation between and .it is most simply analyzed by considering the response of the model to a change in , keeping the other parameters of the model , including mass and composition , fixed .as confirmed by numerical computations , changes in the convective envelope and outer part of the radiative interior have little effect on the energy - generating core ; thus is largely unchanged and so therefore , according to equation ( [ constab ] ) , is . using the ideal gas law and equation ( [ prho ] )we therefore obtain 0 ( p t^4 ) = _ t - 4 k + 1 , where and ; thus - 4 - _ t ( 4 - _ t ) ( - 1 ) - ( _ p + 1 ) k .[ delpcz ] at the base of the solar convection zone , and .thus , using , we find that .we may now use equation ( [ rsol ] ) to find the resulting change in . assuming that and , we have that g m ( 1 - 1 r ) k^1/ - 1 ^1 - 1/ . [ rcz ] the change in evidently causes a change in ; assuming that the hydrostatic structure of the interior , defined by , changes little up to the base of the convection zone , - h_p , [ delrcz ] where is the pressure scale height evaluated at the base of the convection zone . from equation ( [ rcz ] )it therefore follows that r = - ( r ) ^2 h_p + r ( k^1/ ^1 - 1/ ) , [ delrs ] where is the depth of the convection zone . using again solar values , , , , and the relation obtained above between and ,we find , separating the contributions from the two terms in equation ( [ delrs ] ) 0.50 k - 0.26 k = 0.24 k , [ delrsnum ] and hence -0.02 k .[ deldcznum ] it is remarkable , and perhaps surprising , that in the solar case the depth of the convection zone appears to be virtually insensitive to changes in the adiabat of the convection zone , the change in surface radius resulting from the change in the radius at the base of the convection zone . 
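the closed-form relation derived above can be checked numerically with a few lines of code. the sketch below keeps the base of the convection zone (r_cz, p_cz) fixed and varies the adiabatic constant K; the adopted values of p_cz, the base radius and Gamma are rough illustrative numbers, not those of a calibrated solar model, and the full sensitivity quoted in the text also contains the response of the base conditions themselves.

```python
import numpy as np

G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.96e10          # cgs units

def surface_radius(K, r_cz, p_cz, gamma=5.0 / 3.0, mass=M_sun):
    # surface radius of the toy adiabatic envelope p = K * rho**gamma in hydrostatic
    # equilibrium, neglecting the envelope mass and taking the surface pressure as zero
    term = (gamma / (gamma - 1.0)) * K**(1.0 / gamma) * p_cz**(1.0 - 1.0 / gamma)
    return 1.0 / (1.0 / r_cz - term / (G * mass))

def calibrate_K(r_s, r_cz, p_cz, gamma=5.0 / 3.0, mass=M_sun):
    # invert the same relation for the K that places the surface at radius r_s
    rhs = G * mass * (1.0 / r_cz - 1.0 / r_s) * (gamma - 1.0) / gamma
    return (rhs / p_cz**(1.0 - 1.0 / gamma))**gamma

# illustrative (not calibrated) numbers for the base of the solar convection zone
r_cz, p_cz = 0.71 * R_sun, 5.0e13                      # cm, dyn cm^-2
K0 = calibrate_K(R_sun, r_cz, p_cz)
for dlnK in (0.01, -0.01):
    rs = surface_radius(K0 * np.exp(dlnK), r_cz, p_cz)
    print(f"delta ln K = {dlnK:+.2f}  ->  delta r_s / r_s = {(rs - R_sun) / R_sun:+.4f}")
```

with these numbers a one per cent change in K shifts the surface radius by roughly a quarter of a per cent, of the same order of magnitude as the coefficient obtained above.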
as discussed below , this is confirmed by numerical results for solar models .note also that is approximately proportional to depth ; thus it is likely that the relative magnitude of the two terms in equation ( [ delrs ] ) , and hence the sign of the relation in equation ( [ delrsnum ] ) will be roughly the same for other stars , at least as long as the opacity derivatives do not change substantially .the dependence of on is used to calibrate solar models to have the observed radius , by adjusting .this might most simply be achieved by assuming a discontinuity in and at the top of the convection zone such that attains the correct value ( schwarzschild 1958 ) .however , more commonly a simplified physical description of convection is used , generally containing a parameter which can be adjusted to ensure the correct radius ; this might allow a safer extrapolation from the solar case , where such calibration can be made , to models of other stars where this is rarely possible .the main features of this calibration can be illustrated by noting that in the deeper part of the convection zone where matter is essentially fully ionized , is related to the specific entropy by s c_p k , [ entropy ] choosing the zero - point of entropy appropriately , where is the specific heat at constant pressure , and was set to .the value of in adiabatic part of the convection zone is related to the photospheric value by , where s = _ ^p^* c_p ( - ) p ; [ deltas ] here is a suitable point in the convection zone , such that . assuming that the atmospheric structure is approximately unchanged, the change in is obtained from the change in as k .[ delk ] for a given energy flux , is determined by the efficacy of convection in the superadiabatic region : if convective transport becomes more efficient , the superadiabatic gradient is reduced , and so therefore is and hence , which according to equation ( [ delrsnum ] ) leads to a smaller radius of the model .several formulations of convection , including the commonly used mixing - length theory , determine the convective efficacy in terms of a characteristic scale , such as the size or mean free path of a convective element .this is often parametrized as a multiple of a typical length scale in the model , such as the local pressure scale height or the distance to the boundary of the convection zone . in the limit of efficient convection , relevant to the larger part of the solar convection zone, the convective flux then satisfies .it follows that if the luminosity , and hence approximately the convective flux , is kept fixed , .according to equations ( [ deltas ] ) and ( [ delk ] ) we therefore have that k -2 s c_p - , where the last equality used solar values for ; hence , from equation ( [ delrsnum ] ) , r -0.24 .[ delrsal ] to illustrate the behaviour of convective envelopes discussed in sections 2.1 and 2.2 , i have calculated three static models , based on the composition profile of model s ( christensen - dalsgaard 1996 ) .some properties of the models are summarized in table 1 .models 1 and 2 have been calibrated to solar radius and luminosity by adjusting the convective efficacy and scaling the hydrogen abundance , as a function of mass , by a suitable factor ( christensen - dalsgaard & thompson 1991 ) . 
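the calibration procedure described above can be mimicked with the same toy envelope. purely for illustration the adiabatic constant is assumed to scale as K proportional to alpha**-2 in the efficient-convection limit (the actual relation in the text involves the entropy jump and is only approximately of this form), and a bisection on the mixing-length parameter recovers the target radius; all reference values are placeholders, not a real stellar-evolution calculation.

```python
G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.96e10          # cgs units
r_cz, p_cz, gamma = 0.71 * R_sun, 5.0e13, 5.0 / 3.0    # same toy base as before

def surface_radius(K):
    term = (gamma / (gamma - 1.0)) * K**(1.0 / gamma) * p_cz**(1.0 - 1.0 / gamma)
    return 1.0 / (1.0 / r_cz - term / (G * M_sun))

def calibrate_K(r_s):
    rhs = G * M_sun * (1.0 / r_cz - 1.0 / r_s) * (gamma - 1.0) / gamma
    return (rhs / p_cz**(1.0 - 1.0 / gamma))**gamma

def radius_of_alpha(alpha, K_ref, alpha_ref=1.8):
    # crude stand-in for a stellar-evolution code: more efficient convection
    # (larger alpha) lowers the adiabat and hence the surface radius
    return surface_radius(K_ref * (alpha / alpha_ref)**-2.0)

K_ref = calibrate_K(1.02 * R_sun)        # a deliberately "wrong" starting model
lo, hi = 1.0, 3.0                        # bracket for the mixing-length parameter
for _ in range(60):                      # bisection on the model radius
    mid = 0.5 * (lo + hi)
    if radius_of_alpha(mid, K_ref) > R_sun:
        lo = mid                         # radius too large -> convection must be more efficient
    else:
        hi = mid
alpha_cal = 0.5 * (lo + hi)
print("calibrated alpha:", round(alpha_cal, 4))
print("relative radius error:", (radius_of_alpha(alpha_cal, K_ref) - R_sun) / R_sun)
```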
in all cases ,the envelope hydrogen abundance by mass is .model 1 used a version of the canuto & mazzitelli ( 1991 ) convection formulation , with mixing - length proportional to the distance to the convection - zone boundary , but including a parameter which allows calibration to the precise solar radius .model 2 used the bhm - vitense mixing - length treatment , with a mixing length proportional to the pressure scale height .model 3 is also based on the bhm - vitense formulation , but choosing a different mixing length ; the composition is the same as for model 2 .for these models turbulent pressure was ignored .in addition , the table lists an envelope model ( model 4 ) obtained by matching to a hydrodynamical simulation , in the manner of trampedach ( these proceedings ) ; convection was treated using mlt , with and a factor in the treatment of turbulent pressure adjusted to obtain a continuous match of pressure and density in the deepest part of the simulation .thus the adiabat of the deep convection zone is fixed by the properties of the simulation . for this modelthe hydrogen abundance is ..[tab_1 ] properties of static solar models treating convection with the canuto & mazzitelli formulation ( cm ) or the bhm - vitense mixing - length formulation ( mlt ) , as well as an envelope matched continuously to results of hydrodynamical simulation ( sim ) . the radius and of the convection zoneare given in units of the solar radius .for the remaining notation , see text .[ cols="^,^,^,^,^,^,^,^ " , ] -0.5truecm -0.5truecm figure [ superad ] shows the superadiabatic gradient and the integrated entropy change , calculated from equation ( [ deltas ] ) .it is evident that the cm formulation leads to a much higher and sharper peak in , with a corresponding strongly confined change in . however , with the calibration to solar radius , the value of the entropy in the adiabatic part of the convection zone , and hence the properties of the interior of the model , are essentially the same in models 1 and 2 , as shown in more detail by the results presented in table 1 .on the other hand , the decrease in in model 3 , relative to model 2 , causes an increase in , a corresponding increase in and hence an increase in the radius of the model .the magnitude of the relative change in , 2.3 % , is quite close to what is predicted by the simple approximation ( [ delrsal ] ) .also , it should be noticed that the depth of the convection zone has changed little , in accordance with equation ( [ deldcznum ] ) .the results for the hydrodynamical simulation , matched to an envelope model , can not be interpreted as simply in terms of the results of sections 2.2 and 2.3 .equation ( [ rcz ] ) still holds ; however , since for the envelope model is fixed , the relation defines the change in the depth of the convection zone , as shown in table 1 . 
in this caseone can not assume that the interior is unaffected by the change in , as was the case for complete models , and hence equation ( [ delrcz ] ) is no longer valid .also , because of the difference in composition between models 2 and 4 , say , there is no longer a simple connection between the changes in and .these questions deserve more careful study than can be attempted here , not least in connection with the calibration , described by trampedach ( these proceedings ) , of the mixing length on the basis of convection simulations .it should be noticed also that the matched envelope predicts a convection zone extending somewhat more deeply than in the calibrated models , as a result of the different value of .nevertheless , given the fact that no attempt was made in the simulation to match the solar adiabat , it is encouraging that the change in is relatively modest .comparisons such as the one attempted here are undoubtedly important tests of simulations of solar convection .the approximation used in equation ( [ prho ] ) is only valid if is constant .however , to the extent that the stratification can be assumed to be adiabatic , and are related by = _ 1 ( p ) _ s , [ prhoad ] where the dependence of on , and composition is determined by the equation of state of the gas .it follows that the structure of the adiabatic part of the convection zone is entirely specified by the equation of state , the composition and the actual value of the specific entropy .this property makes the adiabatic part of the convection zone a valuable tool for investigations of the equation of state of stellar material ( christensen - dalsgaard & dppen 1992 ) and forms the basis for helioseismic determinations of the solar envelope helium abundance ( kosovichev 1992 ) .it is evident that these analyses are possible only to within the accuracy of equation ( [ prhoad ] ) .since the thermodynamic effects under consideration are minute , this imposes severe constraints on the superadiabatic gradient .the behaviour for three different treatments of convection , in relation to the location of the dominant ionization zones , was illustrated in fig .[ gwcon ] . in the hydrogen and , at least for mlt , the first helium ionization zone , which is comparable with the effects introduced by current uncertainties in the equation of state .thus investigations of the equation of state have generally been concentrated on the second helium ionization zone , where is at most around in the mlt model . 
according to the cm formulation , where convective efficiency increases rapidly with depth, the allowable range might include also the first helium ionization zone .the hydrodynamical simulations do not extend sufficiently deeply to reach the second helium ionization zone but appear , from fig .[ gwcon ] , to yield a superadiabatic gradient intermediate between mlt and cm .these substantial differences indicate that the uncertainty in the treatment of convection might have a significant influence on the degree of adiabaticity even in the second helium ionization zone .these issues need further investigation , before very detailed tests of the equation of state and/or precise determination of the helium abundance can be carried out .i finally note that turbulent pressure might also influence tests of the thermodynamic properties of the solar plasma .for example , the ratio between the turbulent and total pressure at the upper edge of the second helium ionization zone in the mlt model 2 can be estimated as approximately .this could have significant effects on the relation between , , and composition inferred from helioseismic analyses .extensive reviews on solar oscillations and their application to helioseismology were given , , by gough & toomre ( 1991 ) , christensen - dalsgaard & berthomieu ( 1991 ) , gough & thompson ( 1991 ) and gough ( 1993 ) .since i consider only the spherically symmetric structure , the oscillation frequencies depend on the degree and radial order alone .i shall assume the adiabatic approximation , neglecting the energy gain or loss of the oscillations .this is an excellent approximation in almost the entire sun but breaks down in the near - surface region where the thermal time scale becomes comparable with the oscillation period . as discussed in detail by rosenthal ( these proceedings ) this region gives rise to other uncertainties in the computation of the frequencies , arising from the physics of the model and the oscillations ; thus in any case the presence of errors in the computed frequencies , arising from the near - surface region , must be kept in mind .the observed frequencies correspond mostly to acoustic modes .these are trapped between an upper turning point just below the photosphere and a lower turning point , at a distance from the centre determined by = l + 1/2 , [ lowturn ] where is the adiabatic sound speed .it follows from equation ( [ lowturn ] ) that high - degree modes are trapped near the solar surface whereas low - degree modes penetrate into the solar core .-0.3 cm perturbation analysis of the oscillation equations ( christensen - dalsgaard & berthomieu 1991 ; see also rosenthal , these proceedings ) shows that the near - surface effects cause changes in the frequencies of the form _nl^(ns ) = ( _ nl ) q_nl , [ delomns ] for modes of low or moderate degree . here , where is the mode energy , normalized by the squared surface amplitude , and is the energy of a radial ( ) mode , interpolated to .thus the behaviour of reflects the variation of the turning - point radius with the degree of the mode : high - degree modes involve a smaller part of the sun than do low - degree modes and therefore have smaller normalized energy and , hence according to equation ( [ delomns ] ) making their frequencies more susceptible to the near - surface effects . 
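equation ([lowturn]) is easily turned into a small routine that locates the lower turning point of a mode of given frequency and degree. the sound-speed profile below is a crude monotonic toy, not model S, so the numbers only illustrate the qualitative trend that low-degree modes penetrate deeply while high-degree modes are confined near the surface.

```python
import numpy as np

R_sun = 6.96e10                           # cm

def turning_point(nu_mhz, ell, r, c):
    # lower turning point r_t defined by c(r_t)/r_t = omega/(ell + 1/2);
    # r and c are arrays describing a (monotonic) sound-speed profile
    omega = 2.0 * np.pi * nu_mhz * 1e-3   # rad/s
    target = omega / (ell + 0.5)
    f = c / r - target                    # decreases outward for this profile
    idx = np.where(f <= 0.0)[0][0]        # first radius where c/r drops below the target
    r0, r1, f0, f1 = r[idx - 1], r[idx], f[idx - 1], f[idx]
    return r0 + (r1 - r0) * f0 / (f0 - f1)   # linear interpolation between grid points

# crude toy profile: a few hundred km/s near the centre, tens of km/s near the surface
r = np.linspace(0.05, 0.999, 2000) * R_sun
c = 5.0e7 * np.sqrt(1.0 - r / (1.0005 * R_sun))

for ell in (1, 20, 100, 500):
    rt = turning_point(3.0, ell, r, c)    # a 3 mHz mode
    print(f"l = {ell:4d}:  r_t / R = {rt / R_sun:.3f}")
```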
is a function of frequency which depends on the physics of the near - surface region ; it may be shown that if the errors in the calculation are confined extremely close to the surface , is a slowly varying function of which is small at low frequency .equation ( [ delomns ] ) motivates analyzing frequency differences in terms of .this scaling effectively reduces the frequency change resulting from near - surface effects to the equivalent change for a radial mode , by taking out the dependence on the penetration depth .thus if differences in structure were confined exclusively to the near - surface region , we might expect to depend on frequency alone , for modes of low or moderate for which the motion in the surface layers is almost radial .these principles may be illustrated by comparing models 1 and 2 computed with the cm and mlt treatments of convection and , according to fig .[ superad ] , differing only very near the surface .figure [ condiff]a shows differences , at fixed mass , between these models ; as discussed by christensen - dalsgaard & thompson ( 1996 ) the effects of near - surface model changes are most naturally represented in terms of such differences .it is evident that the change in the model is essentially confined to the superadiabatic region where the temperature gradients differ substantially .unscaled and scaled frequency differences between these two models are shown in panels ( b ) and ( c ) of fig .[ condiff ] .the unscaled differences show a fairly substantial dependence on degree which is largely suppressed by the scaling , except at high degree .-0.7 cm it is convenient to relate frequency differences between two models , or between the sun and a model , to the corresponding differences in structure .this is generally done on the assumption that the differences are sufficiently small that the relation is linear .asymptotic theory for acoustic modes then shows that ( christensen - dalsgaard , gough & prez hernndez 1988 ) s_nl _ nl _ nl = ( _ nl l+1/2 ) + ( _ nl ) , [ diffduv ] where is closely related to the scaling introduced in equation ( [ delomns ] ) , ( w ) = _ ^r ( 1-c^2 ) ^-1/2 _ r ccrc , [ diffduv - h ] and the function contains contributions from the near - surface region , including the uncertain aspects of the physics . in equation ( [ diffduv - h ] )the difference is evaluated at fixed radius .figure [ obsdiff]a shows differences between the first year s observations with the lowl instrument ( tomczyk 1995 ; tomczyk , schou & thompson 1996 ) and model s of christensen - dalsgaard ( 1996 ) , scaled as suggested by equation ( [ delomns ] ) . 
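the two-term fit of the form of eq. ([diffduv]) can be set up in practice as follows: the scaled differences are modelled as a function of omega/(l + 1/2) plus a function of omega, here represented by piecewise-constant bins and determined by linear least squares on synthetic data. this is only a schematic stand-in for the spline representations normally used, and the split is defined only up to an additive constant shared between the two terms.

```python
import numpy as np

def bin_index(x, n_bins):
    # map values to n_bins equal-width bins spanning their range
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)

def separate_h1_h2(nu, ell, y, n_bins=12):
    # y ~ H1(w) + H2(nu) with w = nu/(ell + 1/2); both functions are represented
    # by n_bins piecewise-constant values found by least squares
    w = nu / (ell + 0.5)
    i1, i2 = bin_index(np.log(w), n_bins), bin_index(nu, n_bins)
    A = np.zeros((y.size, 2 * n_bins))
    A[np.arange(y.size), i1] = 1.0
    A[np.arange(y.size), n_bins + i2] = 1.0
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef[:n_bins], coef[n_bins:], np.sqrt(np.mean(resid**2))

# synthetic "differences": a known degree-dependent term plus a frequency-dependent term
rng = np.random.default_rng(2)
ell = rng.integers(1, 150, size=4000)
nu = rng.uniform(1.5, 4.5, size=4000)                  # mHz
y = 0.02 * np.log(nu / (ell + 0.5)) + 0.01 * (nu - 3.0)**2
y += rng.normal(0.0, 1e-4, size=y.size)
h1, h2, rms = separate_h1_h2(nu, ell, y)
print("rms residual of the two-term fit:", rms)
print("H2 (frequency term), bin values minus their mean:", np.round(h2 - h2.mean(), 4))
```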
clearly the frequency differences depend primarily on frequency .this suggests that the differences between the sun and the model are dominated by the near - surface effects .indeed , the shape and magnitude of the differences are rather similar to the differences , shown in fig .[ condiff ] , between the cm and mlt models .this might suggest that the cm formulation may be a more accurate representation of the uppermost layers of the solar convection zone ( patern 1993 ) ; however , as discussed by rosenthal ( these proceedings ) this conclusion may be premature , given the other potential near - surface contributions to the frequency differences .equation ( [ diffduv ] ) indicates that the relative frequency differences can be separated into a contribution depending on frequency and a contribution depending on or , equivalently , the turning - point radius .indeed , by fitting this relation to relative differences corresponding to those shown in fig .[ obsdiff]a and subtracting the component corresponding to one obtains the residuals shown in fig .[ obsdiff]b which are clearly predominantly a function of .the most striking feature is the rapid variation for modes whose turning points are near , , close to the base of the convection zone .according to equation ( [ diffduv - h ] ) this suggests that there is a comparatively large component of the sound - speed difference at this point ; as we shall see in the next section , this is in fact the case .to investigate the causes of the residual frequency differences shown in fig .[ obsdiff]b a more careful analysis is required . in particular , it is preferable to move beyond the asymptotic approximation in equation ( [ diffduv ] ) . from general properties of the oscillation equationsone may express small differences in adiabatic frequency linearly in terms of differences in two variables characterizing the model , and ( gough & thompson 1991 ) .the actual differences between the observed frequencies and adiabatic frequencies of a model must also reflect the nonadiabatic effects on the frequencies and the inadequacies in the modelling of the near - surface region . as a result, the frequency differences can be expressed as = _ 0^r r + q_nl^-1 ( _ nl ) + _ nl , [ numdiff ] being the observational error ; here the kernels and are determined from the eigenfunctions in the model , while the penultimate term arises from the neglected physics in the near - surface region .-0.3 cm equation ( [ numdiff ] ) forms the basis for inverse analyses to infer properties of and .here i consider the so - called subtractive optimally localized averages ( sola ) technique , first introduced by pijpers & thompson ( 1992 ) ; details of the implementation were provided by basu ( 1996a ) .the principle is to construct linear combinations of equation ( [ numdiff ] ) with weights chosen such that the _ averaging kernel _ _c^2 , ( r_0 , r ) = _ nl c_nl ( r_0 ) k_c^2 ,^nl(r ) [ avker ] is a function localized near , whereas the remaining terms on the right - hand side of equation ( [ numdiff ] ) are minimized . in particular , the contribution from the near - surface problems , as given by the term in ,can largely be eliminated . to the extent that this procedure is successful, we obtain a localized estimate of , ( r_0 ) = _ nl c_nl ( r_0 ) _ nl _ nl _ 0^r _ c^2 , ( r_0 , r ) _ r c^2 c^2(r ) r .[ delcinv ] an estimate of the standard error in the result can be obtained from the observational standard deviations of . 
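a minimal numerical sketch of the SOLA construction is given below: coefficients are chosen so that the averaging kernel resembles a Gaussian target centred at r_0 while the propagated data errors are damped. only a single (sound-speed) kernel per mode is retained, the density cross-term and the surface term of eq. ([numdiff]) are ignored, the unit-integral constraint is imposed by simple rescaling rather than by a Lagrange multiplier, and both the mode kernels and the underlying delta c^2 / c^2 profile are synthetic; the inferred values are localized averages and are therefore smoothed by the finite width of the averaging kernel.

```python
import numpy as np

def sola_estimate(r, kernels, data, errors, r0, width=0.12, mu=1e4):
    dr = r[1] - r[0]
    target = np.exp(-((r - r0) / width)**2)
    target /= target.sum() * dr                          # unit-integral target
    A = kernels @ kernels.T * dr + mu * np.diag(errors**2)
    c = np.linalg.solve(A, kernels @ target * dr)
    c /= (c @ kernels).sum() * dr                        # unit-integral averaging kernel
    err = np.sqrt(np.sum((c * errors)**2))               # propagated data error
    return c @ data, err

# toy forward problem: Gaussian mode kernels and a known delta c^2 / c^2 profile
rng = np.random.default_rng(3)
r = np.linspace(0.0, 1.0, 400)
dr = r[1] - r[0]
centres = np.linspace(0.1, 0.95, 60)
kernels = np.array([np.exp(-((r - a) / 0.1)**2) for a in centres])
kernels /= kernels.sum(axis=1, keepdims=True) * dr
f_true = 0.01 * np.exp(-((r - 0.68) / 0.2)**2)           # broad enhancement below the convection zone
errors = np.full(len(centres), 1e-4)
data = kernels @ f_true * dr + rng.normal(0.0, errors)

for r0 in (0.3, 0.68, 0.9):
    est, err = sola_estimate(r, kernels, data, errors, r0)
    true = f_true[np.abs(r - r0).argmin()]
    print(f"r0 = {r0:.2f}:  inferred {est:+.4f} +/- {err:.4f}   (true, unsmoothed {true:+.4f})")
```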
-0.2cm figure [ fig : delcinv ] shows the difference in squared sound speed resulting from an application of this procedure to the frequency differences illustrated in fig .[ obsdiff]a ( basu 1996b ) .it is evident that there is indeed a sharp feature in the sound - speed difference just below the convection zone ; this is responsible for the behaviour of the residual frequency differences around in fig .[ obsdiff]b .a second substantial feature is the dip in around , , at the edge of the nuclear - burning core , apparently followed by a rise in the deeper parts of the core . in the convection zonethe difference is relatively small , although with a rise in magnitude towards the surface ; this might be caused by errors in the equation of state or possibly by residual effects of the near - surface problems . to discuss the possible causes for the sound - speed differences in the radiative interiori note that , according to the ideal gas law , c^2 = _ 1 p , [ capprox ] where is boltzmann s constant , is the mean molecular weight and is the atomic mass unit .thus must reflect a difference in between the sun and the model . with this in mind , it is striking that the two regions of dramatic variation in coincide with regions where the composition , and hence , varies strongly .this is illustrated by the hydrogen profile shown in fig .[ xprof ] : beneath the convection zone the accumulation of helium settling out of the convection zone causes a sharp gradient in , whereas hydrogen burning , with an additional small contribution from helium settling , leads to a strong variation of in the core . in both cases , the difference between the solar and model sound speed could be reduced by smoothing the composition profile : this would increase , reduce and hence increase just below the convection zone and similarly reduce and at the edge of the core , with a corresponding increase in the inner core . to test this possibility , bruntt ( 1996 ) adjusted the hydrogen profile in model s in such a way as to approximate the sound - speed difference shown in fig .[ fig : delcinv ] .the profiles were constrained to correspond to the same total amount of hydrogen as for model s , to within 0.5 % , and to give the observed solar luminosity ; however , no assumptions were made about possible physical mechanisms which might cause the redistribution of hydrogen .the resulting profile is shown in fig .[ xprof ] as a dashed line , and the difference in sound speed between the modified model and model s was shown as the solid line in fig .[ fig : delcinv ] .evidently the change in composition has reproduced much of the sound - speed difference between the sun and the model .evidence for a smoother composition profile just below the convection zone was also found from inverse analysis by antia & chitre ( 1996 ) .such changes in composition are not implausible .indeed , as mentioned in the introduction , the depletion of lithium and beryllium demonstrates that mixing in the sun well below the convection zone must have taken place at some stage during solar main - sequence evolution .computations with rotationally - induced mixing accounting for the lithium depletion ( chaboyer 1995 ; richard 1996 ; see also zahn , these proceedings ) have largely succeeded in eliminating the bump in the sound - speed difference just below the convection zone ( gough 1996 ) . 
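the composition argument above can be quantified in a rough way: at fixed temperature and Gamma_1, a local increase of the hydrogen abundance lowers the mean molecular weight and hence raises c^2. the sketch below uses the standard fully ionized approximation for mu and ignores the accompanying change in temperature, so it only isolates the sign and order of magnitude of the effect; the abundance values are illustrative.

```python
def mean_molecular_weight(x, z):
    # fully ionized mixture, using the standard approximation
    # 1/mu = 2X + 3Y/4 + Z/2 (metals treated as (1 + Z_i)/A_i ~ 1/2)
    y = 1.0 - x - z
    return 1.0 / (2.0 * x + 0.75 * y + 0.5 * z)

def sound_speed_change_from_mixing(x_old, x_new, z=0.018):
    # fractional change of c^2 at fixed T and Gamma_1 implied by a change of the
    # hydrogen abundance, via c^2 ~ Gamma_1 k_B T / (mu m_u)
    return mean_molecular_weight(x_old, z) / mean_molecular_weight(x_new, z) - 1.0

# smoothing the settling-induced gradient just below the convection zone:
# raising X locally from 0.70 to 0.71 lowers mu and therefore raises c^2
print(f"delta c^2 / c^2 ~ {sound_speed_change_from_mixing(0.70, 0.71):+.3%}")
```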
mixing would also be caused by convective penetration into the stable region ; even a very small fraction of convective eddies penetrating to a substantial distance could cause appreciable mixing , with little effect on the temperature structure .it has furthermore been suggested that gravity waves excited by convective penetration might lead to mixing ( montalbn 1994 ; montalbn & schatzman 1996 ; see also schatzman , these proceedings ) .it should be noted that the lithium depletion and change in the hydrogen profile are not automatically related : thus strong mixing in the early phases of solar evolution , as might have been caused by rotation , could have depleted lithium with little effect on the present hydrogen profile . in this sense , the information obtained from lithium and from the sound - speed inversion is complementary .unfortunately , mixing is not the only mechanism that might account for the sound - speed results .early mass loss ( guzik & cox 1995 ) might also reduce the lithium and beryllium abundances , and in addition improve the agreement with the inferred solar sound speed just below the convection zone ( gough 1996 ) . furthermore , it is obviously possible to change the sound speed by changing the temperature profile .this requires modifications to the opacity such that the condition of radiative energy transport is satisfied .tripathy , basu & christensen - dalsgaard ( 1996 ) showed that the inferred sound - speed difference in fig .[ fig : delcinv ] can be reproduced by means of a suitably chosen opacity modification of only a few per cent . thus independent information and ,if possible , tighter constraints on the opacity are required to separate the different possible causes for the remaining differences between the sun and solar models .the results of inversion for the sound - speed difference , such as those shown in fig . 
[ fig : delcinv ] , indicate a strikingly close agreement between the solar sound speed and that of normal solar models .this has little implication for the dynamics of the upper parts of the convection zone which have been adjusted , by calibration of the mixing length , to produce a model with the correct radius ; however , it does indicate that conditions at and below the base of the convection zone are not vastly different from those obtained from normal stellar modelling ( see also roxburgh , these proceedings ) .nonetheless , the most dramatic sound - speed difference does in fact occur in this region .although various explanations are possible , the most plausible of these is perhaps mixing induced by rotational instability , direct convective penetration or gravity waves .however , it is important to stress that , despite its great power , helioseismology can not on its own provide a full investigation of the problems of mixing in stellar interiors .this requires a combination of a better physical understanding of the mixing processes , data from other stars on , for example , the lithium depletion ( michaud & charbonneau 1991 ) , and tighter constraints on other aspects of the physics of the radiative interior , such as the opacity .it is encouraging that the hydrodynamical simulations discussed here result in a structure of the adiabatic part of the convection zone fairly close to that obtained in calibrated parametrized models .this offers some hope that such simulations might be used to provide a firmer extrapolation to other stars than the commonly used assumption of a constant mixing - length parameter ( ludwig , freytag & steffen , these proceedings ; trampedach , these proceedings ) .tests of such extrapolations might be provided by well - observed binary stars , such as the cen system ( edmonds 1992 ; fernandes & neuforge 1995 ) .observations of solarlike oscillations in these or other stars might clearly be extremely valuable in constraining the models , in terms of the properties of convection or other aspects of the structure .when such data are available , the meaning of the `` s '' in the title of subsequent conferences in this series might be subtly changed .i am grateful to hans bruntt for permission to show the results of the modified hydrogen profile in figs [ fig : delcinv ] and [ xprof ] , to regner trampedach for the matched envelope and hydrodynamical simulation and to mario monteiro for the implementation of the cm formalism .colin rosenthal is thanked for comments on an earlier version of the paper .this work was supported by the danish national research foundation through its establishment of the theoretical astrophysics center .anders e. & grevesse n. , 1989 ._ geochim . cosmochim. acta _ * 53 * , 197 antia h. m. & chitre s. m. , 1996 .submitted to _apj_. bhm - vitense e. , 1958 ._ z. astrophys . _ * 46 * , 108 basu s. , christensen - dalsgaard j. , prez hernndez f. & thompson m. j. , 1996a . + _ mnras _ * 280 * , 651 basu s. , christensen - dalsgaard j. , schou j. , thompson m. j. & tomczyk s. , 1996b . _ bull . astron .india _ * 24 * , 147 bruntt h. , 1996 . _batchelor thesis _ ,aarhus university .canuto v. m. & mazzitelli i. , 1991 ._ apj _ * 370 * , 295 chaboyer b. , demarque p. , guenther d. b. & pinsonneault m. h. , 1995 ._ apj _ * 446 * , 435 christensen - dalsgaard j. & dppen w. , 1992 ._ a&ar _ * 4 * , 267 .christensen - dalsgaard j. & berthomieu g. , 1991 . in _ solar interior and atmosphere _ , eds cox a. n. , livingston w. c. & matthews m. 
, space science series , university of arizona press , p. 401christensen - dalsgaard j. & thompson m. j. , 1991 ._ apj _ * 367 * , 666 christensen - dalsgaard j. & thompson m. j. , 1996 ._ mnras _ , in the press .christensen - dalsgaard j. , dppen w. , ajukov s. v. , 1996 ._ science _ * 272 * , 1286 christensen - dalsgaard j. , gough d. o. & prez hernndez f. , 1988 . _ mnras _ * 235 * , 875 christensen - dalsgaard j. , gough d. o. & thompson m. j. , 1991 . _apj _ * 378 * , 413 edmonds p. , cram l. , demarque p. , guenther d. b. & pinsonneault m. h. , 1992 ._ apj _ * 394 * , 313 fernandes j. & neuforge c. , 1995 ._ a&a _ * 295 * , 678 gough d. o. , 1993 . in _astrophysical fluid dynamics , les houches session xlvii _ , eds zahn j .- p . & zinn - justin j. , elsevier , amsterdam , 399 gough d. o. & thompson m. j. , 1991 . in _ solar interior and atmosphere _ , eds cox a. n. , livingston w. c. & matthews m. , space science series , university of arizona press , p. 519gough d. o. & toomre j. , 1991 ._ ara&a _ * 29 * , 627 gough d. o. & weiss n. o. , 1976 . _ mnras _ * 176 * , 589 gough d. o. , kosovichev a. g. , toomre j. , 1996 ._ science _ * 272 * , 1296 guzik j. a. & cox a. n. , 1995 ._ apj _ * 448 * , 905 iglesias c. a. , rogers f. j. & wilson b. g. , 1992 ._ apj _ * 397 * , 717 kosovichev a. g. & fedorova a. v. , 1991 .* 68 * , 1015 ( english translation : _ sov ._ * 35 * , 507 ) kosovichev a. g. , , 1992 ._ mnras _ * 259 * , 536 michaud g. & charbonneau p. , 1991 ._ space sci .rev . _ * 57 * , 1 michaud g. & proffitt c. r. , 1993 . in _ proc .iau colloq . 137: inside the stars _ , eds baglin a. & weiss w. w. , _ asp conf ._ * 40 * , 246 montalbn j. , 1994 ._ a&a _ * 281 * , 421 montalbn j. & schatzman e. , 1996. _ a&a _ * 305 * , 513 patern l. , ventura r. , canuto v. m. & mazzitelli i. , 1993. _ apj _ * 402 * , 733 pijpers f. p. & thompson m. j. , 1992 ._ a&a _ * 262 * , l33 richard o. , vauclair s. , charbonnel c. & dziembowski w. a. , 1996 ._ a&a _ * 312 * , 1000 rogers f. j. , swenson f. j. & iglesias c. a. , 1996 . _apj _ * 456 * , 902 schwarzschild m. , 1958 ._ structure and evolution of the stars _ , princeton university press , princeton , new jersey .stein r. f. & nordlund . , 1989 ._ apj _ * 342 * , l95 tomczyk s. , schou j. & thompson m. j. , 1996 .india _ * 24 * , 245 tomczyk s. , , 1995 . _ solar phys . _* 159 * , 1 tripathy s. c. , basu s. & christensen - dalsgaard j. , 1996 . in _ sounding solar and stellar interiors .iau symposium no 181 , poster volume _ , eds provost j. & schmider f. x. , in the press .
the overall framework for the study of solar convection and oscillations is the spherically symmetric component of solar structure . i discuss those properties of the solar interior which depend on convection and other possible hydrodynamical motion and the increasingly detailed information about the structure which is provided by helioseismic data . the most basic dependence of solar models on convection is the calibration to fix the solar radius . the dominant causes for differences in oscillation frequencies between the sun and solar models seem to be located near the top of the convection zone . however , there is also evidence for possible weak mixing below the convection zone and perhaps in the solar core . the former , at least , might be induced by penetration of convective motion into the stable layers below . solar structure , convection , helioseismology
relationships between information theory and statistical physics have been widely recognized in the last few decades , from a wide spectrum of aspects .these include conceptual aspects , of parallelisms and analogies between theoretical principles in the two disciplines , as well as technical aspects , of mapping between mathematical formalisms in both fields and borrowing analysis techniques from one field to the other .one example of such a mapping , is between the paradigm of random codes for channel coding and certain models of magnetic materials , most notably , ising models and spin glass models ( see , e.g. , ,,, , and many references therein ) .today , it is quite widely believed that research in the intersection between information theory and statistical physics may have the potential of fertilizing both disciplines .this paper is more related to the former aspect mentioned above , namely , the relationships between the two areas in the conceptual level .however , it has also ingredients from the second aspect . in particular , let us consider two questions in the two fields , which at first glance , may seem completely unrelated , but will nevertheless turn out later to be very related. these are special cases of more general questions that we study later in this paper .the first is a simple question in statistical mechanics , and it is about a certain extension of a model described in ( * ? ? ?* , problem 13 ) : consider a one dimensional chain of connected elements ( e.g. , monomers or whatever basic units that form a polymer chain ) , arranged along a straight line ( see fig .[ chain ] ) , and residing in thermal equilibrium at fixed temperature .the are two types of elements , which will be referred to as type ` 0 ' and type ` 1 ' . the number of elements of each type ( with being either ` 0 ' or ` 1 ' ) is given by , where ( and so , ) .each element of each type may be in one of two different states , labeled by , where also takes on the values ` 0 ' and ` 1 ' .the length and the internal energy of an element of type at state are given by and ( independently of ) , respectively .a contracting force is applied to one edge of the chain while the other edge is fixed .what is the minimum amount of mechanical work that must be carried out by this force , along an isothermal process at temperature , in order to shrink the chain from its original length ( when no force was applied ) into a shorter length , , where is a given constant ?the second question is in information theory . in particular, it is the classical problem of lossy source coding , and some of the notation here will deliberately be chosen to be the same as before : an information source emits a string of independent symbols , , where each may either be ` 0 ' or ` 1 ' , with probabilities and , respectively .a lossy source encoder maps the source string , , into a shorter ( compressed ) representation of average length , where is the coding rate ( compression ratio ) , and the compatible decoder maps this compressed representation into a reproduction string , , where each is again , either ` 0 ' or ` 1 ' . the fidelity of the reproduction is measured in terms of a certain distortion ( or distance ) function , , which should be as small as possible , so that would be as ` close ' as possible to .is required to be strictly identical to , in which case . however , in some applications , one might be willing to trade off between compression and fidelity , i.e. 
, slightly increase the distortion at the benefit of reducing the compression ratio . ] in the limit of large , what is the minimum coding rate for which there exists an encoder and decoder such that the average distortion , , would not exceed ?it turns out , as we shall see in the sequel , that the two questions have intimately related answers . in particular , the minimum amount of work , in the first question , is related to ( a.k.a .the _ rate distortion function _ ) , of the second question , according to provided that the hamiltonian , , in the former problem , is given by where is boltzmann s constant , and is the relative frequency ( or the empirical probability ) of the symbol in the reproduction sequence , pertaining to an optimum lossy encoder decoder with average per symbol distortion ( for large ) .moreover , the minimum amount of work , which is simply the free energy difference between the final equilibrium state and the initial state of the chain , is achieved by a reversible process , where the compressing force grows very slowly from zero , at the beginning of the process , up to a final level of where is the derivative of ( see fig . [ rd ] ) .thus , physical compression is strongly related to data compression , and the fundamental physical limit on the minimum required work is intimately connected to the fundamental information theoretic limit of the minimum required coding rate .this link between the the physical model and the lossy source coding problem is obtained from a large deviations perspective .the exact details will be seen later on , but in a nutshell , the idea is this : on the one hand , it is possible to represent as the large deviations rate function of a certain rare event , but on the other hand , this large deviations rate function , involves the use of the legendre transform , which is a pivotal concept in thermodynamics and statistical mechanics . moreover , since this legendre transform is applied to the ( logarithm of the ) moment generating function ( of the distortion variable ) , which in turn , has the form a partition function , this paves the way to the above described analogy .the legendre transform is associated with the optimization across a certain parameter , which can be interpreted as either inverse temperature ( as was done , for example , in ,,, ) or as a ( generalized ) force , as proposed here .the interpretation of this parameter as force is somewhat more solid , for reasons that will become apparent later .one application of this analogy , between the two models , is a parametric representation of the rate distortion function as an integral of the minimum mean square error ( mmse ) in a certain bayesian estimation problem , which is obtained in analogy to a certain variant of the fluctuation dissipation theorem .this representation opens the door for derivation of upper and lower bounds on the rate distortion function via bounds on the mmse , as was demonstrated in a companion paper .another possible application is demonstrated in the present paper : when the setup is extended to allow information sources with memory ( non i.i.d .processes ) , then the analogous physical model consists of interactions between the various particles . when these interactions are sufficiently strong ( and with high enough dimension ) , then the system exhibits phase transitions . 
in the information theoretic domain , these phase transitions mean irregularities and threshold effects in the behavior of the relevant information theoretic function , in this case , the rate distortion function .thus , analysis tools and physical insights are ` imported ' from statistical mechanics to information theory .a particular model example for this is worked out in section 4 .the outline of the paper is as follows . in section 2 ,we provide some relevant background in information theory , which may safely be skipped by readers that possess this background . in section 3 ,we establish the analogy between lossy source coding and the above described physical model , and discuss it in detail . in section 4, we demonstrate the analysis for a system with memory , as explained in the previous paragraph .finally , in section 5 we summarize and conclude .one of the most elementary roles of information theory is to provide fundamental performance limits pertaining to certain tasks of information processing , such as data compression , error correction coding , encryption , data hiding , prediction , and detection / estimation of signals and/or parameters from noisy observations , just to name a few ( see e.g. , ) . in this paper , our focus is on the first item mentioned data compression , a.k.a . _ source coding _, where the mission is to convert a piece of information ( say , a long file ) , henceforth referred to as the _ source data _ , into a shorter ( normally , binary ) representation , which enables either perfect recovery of the original information , as in the case of _ lossless compression _ , or non perfect recovery , where the level of reconstruction errors ( or distortion ) should remain within pre - specified limits , which is the case of _ lossy data compression_. 
lossless compression is possible whenever the statistical characterization of the source data inherently exhibits some level of _ redundancy _ that can be exploited by the compression scheme , for example , a binary file , where the relative frequency of 1 s is much larger than that of the 0 s , or when there is a strong statistical dependence between consecutive bits .these types of redundancy exist , more often than not , in real life situations .if some level of errors and distortion are allowed , as in the lossy case , then compression can be made even more aggressive .the choice between lossless and lossy data compression depends on the application and the type of data to be compressed .for example , when it comes to sensitive information , like bank account information , or a piece of important text , then one may not tolerate any reconstruction errors at all .on the other hand , images and audio / video files , may suffer some degree of harmless reconstruction errors ( which may be unnoticeable to the human eye or ear , if designed cleverly ) and thus allow stronger compression , which would be very welcome , since images and video files are typically enormously large .the _ compression ratio _ , or the _ coding rate _, denoted , is defined as the ( average ) ratio between the length of the compressed file ( in bits ) and the length of the original file .the basic role of information theory , in the context of lossless / lossy source coding , is to characterize the fundamental limits of compression : for a given statistical characterization of the source data , normally modeled by a certain random process , what is the minimum achievable compression ratio as a function of the allowed average distortion , denoted , which is defined with respect to some distortion function that measures the degree of proximity between the source data and the recovered data .the characterization of this minimum achievable for a given , denoted as a function , is called the _ rate distortion function _ of the source with respect to the prescribed distortion function .for the lossless case , of course , .another important question is how , in principle , one may achieve ( or at least approach ) this fundamental limit of optimum performance , ? in this context , there is a big gap between lossy compression and lossless compression .while for the lossless case , there are many practical algorithms ( most notably , adaptive huffman codes , lempel ziv codes , arithmetic codes , and more ) , in the lossy case , there is unfortunately , no constructive practical scheme whose performance comes close to .the simplest non trivial model of an information source is that of an i.i.d.process , a.k.a .a _ discrete memoryless source _ ( dms ) , where the source symbols , , take on values in a common finite set ( alphabet ) , they are statistically independent , and they are all drawn from the same probability mass function , denoted by . the source string is compressed into a binary representation depends on , the code should be designed such that the running bit - stream ( formed by concatenating compressed strings corresponding to successive from the source ) could be uniquely parsed in the correct manner and then decoded . to this end , the lengths must be collectively large enough so as to satisfy the kraft inequality . the details can be found , for example , in .] of length ( which may or may not depend on ) , whose average is , and the compression ratio is . 
in the decoding ( or decompression )process , the compressed representation is mapped into a reproduction string , where each , , takes on values in the _ reproduction alphabet _ ( which is typically either equal to or to a subset of , but this is not necessary ) .the fidelity of the reconstruction string relative to the original source string is measured by a certain distortion function , where the function is defined additively as , being a function from to the non negative reals .the average distortion per symbol is .as said , is defined ( in general ) as the infimum of all rates for which there exist a sufficiently large and an encoder decoder pair for , such that the average distortion per symbol would not exceed . in the case of a dms ,an elementary coding theorem of information theory asserts that admits the following formula where is a random variable that represents a single source symbol ( i.e. , it is governed by ) , is the mutual information between and , i.e. , being the marginal distribution of , which is associated with a given conditional distribution , and the minimum is over all these conditional probability distributions for which for , must be equal to with probability one ( unless also for some ) , and then the shannon entropy of , as expected . as mentioned earlier, there are concrete compression algorithms that come close to for large . for , however , the proof of achievability of is non constructive .the idea for proving the existence of a sequence of codes ( indexed by ) whose performance approach as , is based on the notion of _ random coding _ : if we can define , for each , an ensemble of codes of ( fixed ) rate , for which the average per symbol distortion ( across both the randomness of and the randomness of the code ) is asymptotically less than or equal to , then there must exist at least one sequence of codes in that ensemble , with this property .the idea of random coding is useful because if the ensemble of codes is chosen wisely , the average ensemble performance is surprisingly easy to derive ( in contrast to the performance of a specific code ) and proven to meet in the limit of large . for a given ,consider the following ensemble of codes : let denote the conditional probability matrix that achieves and let denote the corresponding marginal distribution of .consider now a random selection of reproduction strings , , each of length , where each , , is drawn independently ( of all other reproduction strings ) , according to this randomly chosen code is generated only once and then revealed to the decoder . upon observing an incoming source string , the encoder seeks the first reproduction string that achieves , and then transmits its index using bits , or equivalently , _ nats_. has the obvious interpretation of the number of bits needed to specify a number between and , the natural base logarithm is often mathematically more convenient to work with . the quantity can also be thought of as the description length , but in different units , called nats , rather than bits , where the conversion is according to nat bits . ]if no such codeword exists , which is referred to as the event of _ encoding failure _ , the encoder sends an arbitrary sequence of nats , say , the all zero sequence .the decoder receives the index and simply outputs the corresponding reproduction string . 
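as an aside , the single - letter minimization defining the rate distortion function above can be evaluated numerically by the classical blahut - arimoto iteration ; a minimal sketch for one point of the curve , parametrized by a fixed slope parameter s <= 0 ( the source distribution and distortion matrix are illustrative inputs ) :

import numpy as np

def blahut_arimoto(p, d, s, n_iter=500):
    """One (D, R) point of the rate-distortion curve of a memoryless source.
    p : (nx,) source probabilities, d : (nx, nxhat) distortion matrix, s <= 0."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])   # reproduction marginal, start uniform
    for _ in range(n_iter):
        w = q[None, :] * np.exp(s * d)          # tilted test channel p(xhat | x)
        w /= w.sum(axis=1, keepdims=True)
        q = p @ w                               # re-estimate the reproduction marginal
    D = np.sum(p[:, None] * w * d)
    R = np.sum(p[:, None] * w * np.log(w / q[None, :]))   # mutual information, nats
    return D, R

sweeping s from values near zero to large negative values traces out the whole curve ; for a bernoulli source with hamming distortion the output can be checked against the well known closed form , the binary entropy of the source minus the binary entropy of the distortion level .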
obviously , the per symbol distortion would be less than whenever the encoder does not fail , and so , the main point of the proof is to show that the probability of failure ( across the randomness of and the ensemble of codes ) is vanishingly small for large , provided that is slightly larger than ( but can be arbitrarily close to ) , i.e. , for an arbitrarily small .the idea is that for any source string that is _ typical _ to ( i.e. , the empirical relative frequency of each symbol in is close to its probability ) , one can show ( see , e.g. , ) that the probability that a single , randomly selected reproduction string would satisfy , decays exponentially as ] , the number of trials is much larger ( by a factor of ) than the reciprocal of the probability of a single ` success ' , $ ] , and so , the probability of obtaining at least one such success ( which is case where the encoder succeeds ) tends to unity as .we took the liberty of assuming that source string is typical to because the probability of seeing a non typical string is vanishingly small . from the foregoing discussion, we see that has the additional interpretation of the exponential rate of the probability of the event , where is a given string typical to and is randomly drawn i.i.d . under .consider the following chain of equalities and inequalities for bounding the probability of this event from above .letting be a parameter taking an arbitrary non positive value , we have : \right\}\right>\nonumber\\ & = & e^{-nsd}\left<\prod_{i=1}^n e^{sd(x_i,{\hat{x}}_i)}\right>\nonumber\\ & = & e^{-nsd}\prod_{i=1}^n\left < e^{sd(x_i,{\hat{x}}_i)}\right>\nonumber\\ & = & e^{-nsd}\prod_{x\in{{\cal x}}}\prod_{i:~x_i = x}\left < e^{sd(x,{\hat{x}}_i)}\right>\nonumber\\ & = & e^{-nsd}\prod_{x\in{{\cal x}}}\left[\left < e^{sd(x,{\hat{x}})}\right>\right]^{np(x)}\nonumber\\ & = & e^{-ni(d , s)}\end{aligned}\ ] ] where is defined as \right\}.\ ] ] the tightest upper bound is obtained by minimizing it over the range , which is equivalent to maximizing in that range .i.e. , the tightest upper bound of this form is , where ( the chernoff bound ) . while this is merely an upper bound , the methods of large deviations theory ( see , e.g. , ) can readily be used to establish the fact that the bound is tight in the exponential sense , namely , it is the correct asymptotic exponential decay rate of .accordingly , is called the _ large deviations rate function _ of this event . combining this with the foregoing discussion, it follows that , which means that an alternative expression of is given by .\ ] ] interestingly , the same expression was obtained in ( * ? ? ?* corollary 4.2.3 ) using completely different considerations ( see also ) . in this paper, however , we will also concern ourselves , more generally , with the rate distortion function , , pertaining to a given reproduction distribution , which may not necessarily be the optimum one , .this function is defined similarly as in eq .( [ rdc ] ) , but with the additional constraint that the marginal distribution that represents the reproduction would agree with the given , i.e. , . 
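the legendre - transform expression just obtained is equally easy to evaluate for an arbitrary fixed reproduction distribution q ( the case taken up next ) ; a sketch , with a crude grid over s <= 0 ( the range of the grid is an illustrative truncation ) standing in for the one - dimensional convex optimization :

import numpy as np

def rate_from_legendre(p, q, d, D, s_grid=np.linspace(-50.0, 0.0, 5001)):
    """R_q(D) = max over s<=0 of [ s*D - sum_x p(x) ln sum_xhat q(xhat) e^{s d(x,xhat)} ]."""
    lmgf = np.array([np.sum(p * np.log(np.exp(s * d) @ q)) for s in s_grid])
    return np.max(s_grid * D - lmgf)

the maximizing s plays the role of the ( rescaled ) contracting force in the analogy developed next .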
by using the same large deviations arguments as above , but for an arbitrary random coding distribution , one readily observes that is of the same form as in eq .( [ ldrdo ] ) , except that is replaced by the given ( see also ) .this expression will now be used as a bridge to the realm of equilibrium statistical mechanics .consider the parametric representation of the rate distortion function , with respect to a given reproduction distribution : .\ ] ] the expression in the inner brackets , can be thought of as the partition function of a single particle of `` type '' , which is defined as follows . assuming a certain fixed temperature , consider the hamiltonian imagine now that this particle may be in various states , indexed by . when a particle of type lies in state internal energy is , as defined above , and its length is .next assume that instead of working with the parameter , we rescale and redefine the free parameter as , where .then , has the physical meaning of a force that is conjugate to the length .this force is stretching for and contracting for . with a slight abuse of notation , the gibbs partition function ( * ? ? ?* section 4.8 ) pertaining to a single particle of type is then given by \right\},\ ] ] and accordingly , is the gibbs free energy per particle of type .thus , is the average per particle gibbs free energy ( or the gibbs free energy density ) pertaining to a system with a total of non interacting particles , from different types , where the number of particles of type is , .the helmholtz free energy per particle is then given by the legendre transform .\ ] ] however , for ( which is the interesting range , where ) , the maximizing is always non positive , and so , .\ ] ] invoking now eq .( [ ldrd ] ) , we readily identify that which supports the analogy between the lossy data compression problem and the behavior of the statistical mechanical model of the kind described in the third paragraph of the introduction : according to this model , the physical system under discussion is a long chain with a total of elements , which is composed of different types of shorter chains ( indexed by ) , where the number of elements in the short chain of type is , and where each element of each chain can be in various states , indexed by . in each state ,the internal energy and the length of each element are and , as described above .the total length of the chain , when no force is applied , is therefore . upon applying a contracting force ,states of shorter length become more probable , and the chain shrinks to the length of , where is related to according to the legendre relation is concave and is convex , the inverse legendre transform holds as well , and so , there is one to one correspondence between and . 
]( [ legendre ] ) between and , which is given by where and are , respectively , the derivatives of and relative to .the inverse relation is , of course , where is the derivative of .since is proportional to the free energy , where the system is held in equilibrium at length , it also means the minimum amount of work required in order to shrink the system from length to length , and this minimum is obtained by a reversible process of slow increase in , starting from zero and ending at the final value given by eq .( [ fl ] ) .+ _ discussion _+ this analogy between the lossy source coding problem and the statistical mechanical model of a chain , may suggest that physical insights may shed light on lossy source coding and vice versa .we learn , for example , that the contribution of each source symbol to the distortion , , is analogous to the length contributed by the chain of type when the contracting force is applied .we have also learned that the local slope of is proportional to a force , which must increase as the chain is contracted more and more aggressively , and near , it normally tends to infinity , as in most cases .this slope parameter also plays a pivotal role in theory and practice of lossy source coding : on the theoretical side , it gives rise to a variety of parametric representations of the rate distortion function , , some of which support the derivation of important , non trivial bounds . on the more practical side ,often data compression schemes are designed by optimizing an objective function with the structure of thus plays the role of a lagrange multiplier .this lagrange multiplier is now understood to act like a physical force , which can be ` tuned ' to the desired trade off between rate and distortion . as yet another example , the convexity of the rate distortion function can be understood from a physical point of view , as the helmholtz free energy is also convex , a fact which has a physical explanation ( related to the fluctuation dissipation theorem ) , in addition to the mathematical one . at this point ,two technical comments are in order : 1 .we emphasized the fact that the reproduction distribution is fixed . for a given target value of ,one may , of course , have the freedom to select the optimum distribution that minimizes , which would yield the rate distortion function , , and so , in principle , all the foregoing discussion applies to as well .some caution , however , must be exercised here , because in general , the optimum may depend on ( or equivalently , on or ) , which means , that in the analogous physical model , the internal energy depends on the force ( in addition to the linear dependence of the term ) .this kind of dependence does not support the above described analogy in a natural manner .this is the reason that we have defined the rate distortion problem for a fixed , as it avoids this problem .thus , even if we pick the optimum for a given target distortion level , then this must be kept unaltered throughout the entire process of increasing from zero to its final value , given by ( [ fl ] ) , although may be sub optimum for all intermediate distortion values that are met along the way from to .an alternative interpretation of the parameter , in the partition function , could be the ( negative ) inverse temperature , as was suggested in ( see also ) . 
in this case, would be the internal energy of an element of type at state and , which does not include a power of , could be understood as being proportional to the degeneracy ( in some coarse graining process ) . in this case , the distortion would have the meaning of internal energy , and since no mechanical work is involved , this would also be the heat absorbed in the system , whereas would be related to the entropy of the system .the legendre transform , in this case , is the one pertaining to the passage between the microcanonical ensemble and the canonical one .the advantage of the interpretation of ( or ) as force , as proposed here , is that it lends itself naturally to a more general case , where there is more than one fidelity criterion .for example , suppose there are two fidelity criteria , with distortion functions and . here , there would be two conjugate forces , and , respectively ( for example , a mechanical force and a magnetic force ) , and the physical analogy carries over .on the other hand , this would not work naturally with the temperature interpretation approach since there is only one temperature parameter in physics .we end this section by providing a representation of and in an integral form , which follows as a simple consequence of its representation as the legendre transform of , as in eq .( [ ldrd ] ) .since the maximization problem in ( [ ldrd ] ) is a convex problem ( is convex in ) , the minimizing for a given is obtained by taking the derivative of the r.h.s ., which leads to this equation yields the distortion level for a given value of the minimizing in eq .( [ ldrd ] ) .let us then denote which means that taking the derivative of ( [ ds ] ) , we readily obtain \nonumber\\ & = & \sum_{x\in{{\cal x}}}p(x ) \left[\frac{\sum_{{\hat{x}}\in\hat{{{\cal x}}}}q({\hat{x}})d^2(x,{\hat{x}})e^{sd(x,{\hat{x } } ) } } { \sum_{{\hat{x}}\in\hat{{{\cal x}}}}q({\hat{x}})e^{sd(x,{\hat{x}})}}-\right.\nonumber\\ & & \left.\left(\frac{\sum_{{\hat{x}}\in\hat{{{\cal x}}}}q({\hat{x}})d(x,{\hat{x}})e^{sd(x,{\hat{x } } ) } } { \sum_{{\hat{x}}\in\hat{{{\cal x}}}}q({\hat{x}})e^{sd(x,{\hat{x}})}}\right)^2\right]\nonumber\\ & = & \sum_{x\in{{\cal x}}}p(x)\cdot\mbox{var}_s\{d(x,{\hat{x}})|x\}\nonumber\\ & \equiv&\mbox{mmse}_s\{d(x,{\hat{x}})|x\},\end{aligned}\ ] ] where is the variance of w.r.t.the conditional probability distribution the last line of eq .( [ derds ] ) means that the expectation of w.r.t . is exactly the mmse of estimating based on the ` observation ' using the conditional mean of given as an estimator . differentiating both sides of eq .( [ rds ] ) , we get or , equivalently , and in , this representation was studied extensively and was found quite useful . 
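the identity just derived , stating that the derivative of the distortion with respect to s equals the average conditional variance ( the mmse ) , is straightforward to verify numerically ; a sketch with small illustrative alphabets :

import numpy as np

def tilted_channel(p, q, d, s):
    """p_s(xhat | x) proportional to q(xhat) * exp(s * d(x, xhat))."""
    w = q[None, :] * np.exp(s * d)
    return w / w.sum(axis=1, keepdims=True)

def distortion(p, q, d, s):
    return np.sum(p[:, None] * tilted_channel(p, q, d, s) * d)

def avg_conditional_variance(p, q, d, s):
    w = tilted_channel(p, q, d, s)
    m1 = np.sum(w * d, axis=1)
    m2 = np.sum(w * d ** 2, axis=1)
    return np.sum(p * (m2 - m1 ** 2))          # the mmse / 'fluctuation' term

p = np.array([0.7, 0.3])
q = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
s, eps = -2.0, 1e-5
dD_ds = (distortion(p, q, d, s + eps) - distortion(p, q, d, s - eps)) / (2 * eps)
print(dD_ds, avg_conditional_variance(p, q, d, s))   # the two numbers agree

this is the discrete analogue of the fluctuation - dissipation relation referred to below : the response of the mean distortion ( length ) to a small change in s ( force ) is given by the variance of the same quantity .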
in particular , simple bounds on the mmse were shown to yield non trivial bounds on the rate distortion function in some cases where an exact closed form expression is unavailable .the physical analogue of this representation is the fluctuation dissipation theorem , where the conditional variance , or equivalently the mmse , plays the role of the fluctuation , which describes the sensitivity , or the linear response , of the length of the system to a small perturbation in the contracting force .if is interpreted as the negative inverse temperature , as was mentioned before , then the mmse is related to the specific heat of the system .the theoretical framework established in the previous section extends , in principle , to information sources with memory ( non i.i.d .sources ) , with a natural correspondence to a physical system of interacting particles .while the rate distortion function for a general source with memory is unknown , the maximum rate achievable by random coding can still be derived in many cases of interest . unlike the case of the memoryless source , where the best random coding distribution is memoryless as well ,when the source exhibits memory , there is no apparent reason to believe that good random coding distributions should remain memoryless either , but it is not known what the form of the optimum random coding distribution is .for example , there is no theorem that asserts that the optimum random coding distribution for a markov source is markov too .one can , however examine various forms of the random coding distributions and compare them .intuitively , the stronger is the memory of the source , the stronger should be the memory of the random coding distribution . in this section ,we demonstrate one family of random coding distributions , with a very strong memory , which is inspired by the curie weiss model of spin arrays , that possesses long range interactions .consider the random coding distribution where , and are parameters , and is the appropriate normalization constant .using the identity , we can represent as a mixture of i.i.d .distributions as follows : where is the memoryless source ^n}\ ] ] and the weighting function is given by \right]\right\}.\ ] ] next , we repeat the earlier derivation for each individually : where is a short hand notation for , which is well defined from the previous section since is an i.i.d . distribution . at this point ,two observations are in order : first , we observe that a separate large deviations analysis for each i.i.d .component is better than applying a similar analysis directly to itself , without the decomposition , since it allows a different optimum choice of for each , rather than one optimization of that compromises all values of .moreover , since the upper bound is exponentially tight for each , then the corresponding mixture of bounds is also exponentially tight .the second observation is that since is i.i.d . , depends on the source only via the marginal distribution of a single symbol , which is assumed here to be independent of .a saddle point analysis gives rise to the following expression for , the random coding rate distortion function pertaining to , which is the large deviations rate function : +r_{\theta}(d)\right\ } + \phi(b , j)\ ] ] where we next have a closer look at , assuming , and using the hamming distortion function , i.e. 
, since we readily obtain \nonumber\\ & & + \ln\cosh(b+\theta).\end{aligned}\ ] ] on substituting this expression back into the expression of , we obtain the formula \right\}\right ) + \phi(b , j),\end{aligned}\ ] ] which requires merely optimization over two parameters .in fact , the maximization over , for a given , can be carried out in closed form , as it boils down to the solution of a quadratic equation .specifically , for a symmetric source ( ) , the optimum value of is given by -\ln[2(1-d)],\ ] ] where the details of the derivation of this expression are omitted as they are straightforward .as the curie weiss model is well known to exhibit phase transitions ( see , e.g. , , ) , it is expected that , under this model , would consist of phase transitions as well . at the very least , the last term is definitely subjected to phase transitions in ( the magnetic field ) and ( the coupling parameter ) . the first term , that contains the minimization over , is somewhat more tricky to analyze in closed form .in essence , considering as a function of , substituting it back into the expression of , and finally , differentiating w.r.t . and equating to zero ( in order to minimize ) , then it turns out that the ( internal ) derivative of w.r.t . is multiplied by a vanishing expression ( by the very definition of as a solution to the aforementioned quadratic equation ) .the final result of this manipulation is that the minimizing should be a solution to the equation this is a certain ( rather complicated ) variant of the well known magnetization equation in the mean field model , , which is well known to exhibit a first order phase transition in whenever .it is therefore reasonable to expect that the former equation in , which is more general , will also have phase transitions , at least in some cases .in this paper , we have drawn a conceptually simple analogy between lossy compression of memoryless sources and statistical mechanics of a system of non interacting particles . beyond the belief that this analogy may be interesting on its own right, we have demonstrated its usefulness in several levels . in particular , in the last section , we have observed that the analogy between the information theoretic model and the physical model is not merely on the pure conceptual level , but moreover , analysis tools from statistical mechanics can be harnessed for deriving information theoretic functions .moreover , physical insights concerning phase transitions , in systems with strong interactions , can be ` imported ' for the understanding possible irregularities in these functions , in this case , non smooth dependence on and .
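the equation for the optimal mixture parameter quoted in section 4 is a variant of the classical mean - field magnetization equation ; as an illustration of the phase - transition mechanism invoked there and in the concluding remarks , the sketch below solves the classical form m = tanh(J m + b) by damped fixed - point iteration ( for J > 1 and small b two competing solutions coexist , and the thermodynamically selected branch jumps as b changes sign ) :

import numpy as np

def mean_field_magnetization(J, b, m0, damping=0.5, tol=1e-12, max_iter=100000):
    """Solve m = tanh(J*m + b) by damped fixed-point iteration."""
    m = m0
    for _ in range(max_iter):
        m_new = (1.0 - damping) * m + damping * np.tanh(J * m + b)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

print(mean_field_magnetization(1.5, 0.01, m0=+0.9))   # magnetized 'up' branch
print(mean_field_magnetization(1.5, 0.01, m0=-0.9))   # competing 'down' branch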
we draw a certain analogy between the classical information theoretic problem of lossy data compression ( source coding ) of memoryless information sources and the statistical mechanical behavior of a certain model of a chain of connected particles ( e.g. , a polymer ) that is subjected to a contracting force . the free energy difference pertaining to such a contraction turns out to be proportional to the rate distortion function in the analogous data compression model , and the contracting force is proportional to the derivative of this function . beyond the fact that this analogy may be interesting in its own right , it may provide a physical perspective on the behavior of optimum schemes for lossy data compression ( and perhaps also , an information theoretic perspective on certain physical system models ) . moreover , it triggers the derivation of lossy compression performance for systems with memory , using analysis tools and insights from statistical mechanics .
we consider a geometric , non - local pde to model the shape evolution of mm - sized grains typically formed in shallow tropical seas , called _ ooids _ in geosciences . shape evolution of particles is widely investigated both in the mathematical and in the geoscientific literature ( e.g. and the citations therein ) . most of the treated models are local ones , i.e. the evolution is determined by some pointwise law ; for instance , the _ curve - shortening flow _ is a good example of such a model in two spatial dimensions . for a strictly inward flow a steady state solution can be considered via some rescaling ( like fixing the area or the arc - length of the curve ) . another way of investigating some particular shapes is to track the flow in the backward infinite time limit . in case the direction of the flow is not prescribed _ a - priori _ , i.e. either shrinkage or growth may take effect during the evolution , one may inquire about the existence ( and some properties ) of a bounded ( finite ) _ steady state _ shape . regarding the proposed model we address such a question . the model investigated in this paper captures the three crucial physical effects of ooid growth : a chemical process leading to radial accumulation of material , abrasion of the grain due to collisions with the seabed and finally sliding ( friction ) , which takes effect at shallow shores , a landform widely regarded as the principal venue for ooid formation . while the velocity of growth is independent of the size of the particle , abrasion and friction are both governed by mass - dependent laws . hence the quantity of material in the particle can not be omitted from a realistic model ; it should be a _ non - local _ one . in this paper we focus on two spatial dimensions , where non - locality manifests itself as an area - dependent speed of contraction . this paper is devoted to the rigorous investigation of the existence and uniqueness of steady state solutions of the model , establishing the ground for further work aiming to compare model predictions against observable shapes in nature . shape evolution might be interpreted as a process that moves any point of a closed , non - self - intersecting curve embedded in to the normal direction with a velocity depending on some physical features of the environment . in our model the evolution of is defined via where is the ( time dependent ) area enclosed by and the subscript refers to differentiation with respect to time . and stand for the curvature and the unit ( inward directed ) normal of the curve at time , respectively . without loss of generality we assume the curve possesses a unique maximal diameter ( line between points p and p in fig . [ fig:01 ] . ) , which is designated to be the axis of an orthogonal basis located at the middle point of the pp segment . denotes the angle between the -direction and the local tangent to the curve . , and are positive real parameters of the problem associated with the physical environment , and they are assumed to be time - independent during the evolution . the three key physical components of the proposed model can be easily identified : in the brackets the first , negative term stands for the _ growing _ of the particle , the second term describes _ abrasion _ , assumed to be a curvature - driven process , and finally the affine term is associated with _ friction _ .
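a direct way to experiment with such a flow numerically is to discretize the curve as a polygon and move every vertex along the inward normal ; in the sketch below the concrete combination of growth , curvature - driven abrasion and friction ( and its parameters ) is deliberately left as a user - supplied callable , since only its general structure is used here :

import numpy as np

def evolve_polygon(xy, speed, dt=1e-4, n_steps=2000):
    """Explicit front tracking of a closed curve under a normal-velocity law.
    xy    : (n, 2) vertices of a closed, counter-clockwise polygon
    speed : callable speed(kappa, area, phi) -> inward normal velocity per vertex."""
    for _ in range(n_steps):
        prev = np.roll(xy, 1, axis=0)
        nxt = np.roll(xy, -1, axis=0)
        chord = nxt - prev
        tangent = chord / np.linalg.norm(chord, axis=1, keepdims=True)
        inward = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
        phi = np.arctan2(tangent[:, 1], tangent[:, 0])        # tangent inclination
        # signed curvature from three consecutive vertices (circumscribed circle)
        a, b = xy - prev, nxt - xy
        cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
        kappa = 2.0 * cross / (np.linalg.norm(a, axis=1)
                               * np.linalg.norm(b, axis=1)
                               * np.linalg.norm(chord, axis=1))
        # enclosed (time-dependent) area via the shoelace formula
        area = 0.5 * np.sum(xy[:, 0] * nxt[:, 1] - nxt[:, 0] * xy[:, 1])
        xy = xy + dt * speed(kappa, area, phi)[:, None] * inward
    return xy

explicit stepping of this kind is only conditionally stable , so the time step has to shrink with the mesh size ; the point of the sketch is merely that the enclosed area enters the speed law globally , which is the non - local feature emphasized above .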
as the right - hand side of eq . ( [ eq : geom_pde ] ) consists of both positive and negative terms , a natural question arises : is there any steady state solution of the flow ? specifically , we seek shapes that fulfill along the whole curve . note that the steady state shape is independent of , as it scales solely the time , and it can not be reconstructed by pure observation of . in further work we intend to investigate whether the parameter pair can be reproduced just from the observation of a steady state shape ( in case it exists ) . in this paper we demonstrate that among smooth , convex curves any steady state shape must possess d2 symmetry and that for a given parameter pair this shape is unique , thus the answer to the question is positive . any smooth , convex , steady state solution of eq . ( [ eq : geom_pde ] ) with positive parameters ( and ) embedded in possesses d2 symmetry . [ fig . [ fig:01 ] : a ) the curve with a maximal diameter pp ; b ) the curve segment used in the proof ] smooth , convex , steady state solutions of eq . ( [ eq : geom_pde ] ) are uniquely determined by and , and for any positive values of the two parameters there exists such a curve . we prove the first proposition in section 3 , where we assume that is known _ a - priori _ ; this case is referred to as the _ local equation _ to distinguish it from the general , _ non - local equation _ . section 4 . is devoted to proving proposition 2.2 via a bijective mapping between the parameter spaces of the local and non - local equations . for a moment let us assume that the area of the invariant curve is known _ a - priori _ . as we investigate smooth , closed curves without self - intersections , the derivation can be substantially simplified ( without loss of generality ) by considering solely the curve segment between the leftmost point p and the one that possesses a horizontal tangent and a positive coordinate . this latter is point q. in order to simplify the derivations we use several parametrizations of the curve segment in the sequel : the _ natural _ parametrization with respect to the arc length , the parametrization with respect to the coordinate and finally the parametrization with respect to the inclination of the tangent of the curve . in this section we employ the parametrization of with respect to , and refers to the first derivative with respect to . by this parametrization equation ( [ eq : steady_state ] ) can be written as where there is a triple of parameters ( , , ) , all of them assumed to be fixed . as both and are multiplied by , a convenient notation is defined via and , which renders eq . ( [ eq : local_01 ] ) to we aim to rewrite this equation to make it solely depend on and its derivatives . this step is similar to the case of the curve shortening flow : there , an equation depending solely on the curvature reveals important features ; here the form with provides the most convenient choice . ( nonetheless , an ode with as the unknown function can be obtained as well . ) for a moment we reconsider the natural parametrization of the curve . as we investigate two arbitrarily close points along the curve , by the chain rule we derive the following expression between and ( using the fact that the derivative of the slope with respect to the arc length equals the curvature ) : note that the negative sign relates to the fact that , by definition , is decreasing between points p and q ( fig . [ fig:01 ] b ) ) .
in the virtue ( [ eq : de_kappa_gamma ] ) eq .( [ eq : local_02 ] ) takes the following form , which is a first order , nonlinear ode : from now on this equation is called _local_. there exist a closed - form solution for the local equation : where and the error function is given by its usual definition , .formal substitution verifies that this expression up to the arbitrary constant solves the local equation . in this textwe focus on smooth curves with at point p , which restricts .( it means opens the gate for non - smooth , steady state shapes with two or more vertices in case the equation is assumed to apply on smooth segments of a piecewise smooth curve . )substitution of the solution in eq .( [ eq : gamma(delta ) ] ) into the right - hand - side of ( [ eq : de_kappa_gamma ] ) yields this is the unique solution of the local equation , it can be demonstrated by a routine technique ( i.e. demonstrating contradiction from the assumption about another solution which is not given by ( [ eq : kappa(delta ) ] ) ) .detailed investigation of the properties of is needed for further development as these govern properties of the steady state solutions .first we carry out a convenient change of parameters via whence the solution ( keeping ) in ( [ eq : kappa(delta ) ] ) can be reformulated as the following properties of can be settled ( proof is provided in the appendix ) : 1 . is real ( ) . is continuous . is positive and equals .4 . has a maximum at . as is exactly one point , denoted to , where vanishes and solely depends on .there is no local extrema for between , thus it is monotonic in this range . to realize a steady state shape we need itself . by the virtue of ( [ eq : de_kappa_gamma ] ) since is monotonic decreasing in $ ] , the area below the solution function determines . in other words and ( i.e. and ) determine a unique steady state solution of eq .( [ eq : local_02 ] ) , we aim to determine the parameter range , where the curve associated with the solution in smooth . apparently ,if the area under between exceeds 1 , then we can construct a smooth shape : at the unique the area below equals 1 , i.e. this corresponds to point q with a tangent parallel to the axis . for connection between and is one to one , thus we can draw the _ physical realization_. for cases , at which the area below is smaller than 1 , the physical shapes are non - smooth ( in fact , they become concave as the curvature flips sign above and there is no other zero for ) . as we have seen , depends solely on and for fixed the value of is fixed , too .this leads to the conclusion that for any fixed there exist a critical value at which for further convenience for a fixed we define the set it follows , that for the integral on the left - hand - side of ( [ eq_kappa_int_1 ] ) is smaller than one which means the associated curve can not have a horizontal tangent at any point . having assumed convex , smooth curves this parameter - range is not in our interest . in case the integral on the left - hand side is bigger than 1 thus the shape can be realized . as provides a possible parametrization of the curve segment , the unique closed form solution in ( [ eq : gamma(delta ) ] ) can be realized as a unique curve in .finally we prove uniqueness for itself .so far we know that for a proper and the curve segment is uniquely determined .note , that has vertical tangent at p and horizontal tangent at q. 
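turning the solution back into a planar curve only uses the elementary relations ds = |dphi| / kappa , dx = cos(phi) ds , dy = sin(phi) ds along the segment ; a sketch , in which kappa_of_phi stands for the closed - form curvature above , re - expressed as a ( vectorized ) function of the tangent inclination :

import numpy as np

def segment_from_curvature(kappa_of_phi, phi_start, phi_end, n=2000):
    """Reconstruct a convex arc from its (positive) curvature as a function of tangent angle."""
    phi = np.linspace(phi_start, phi_end, n)
    ds = np.abs(np.gradient(phi)) / kappa_of_phi(phi)
    x = np.cumsum(np.cos(phi) * ds)
    y = np.cumsum(np.sin(phi) * ds)
    return x - x[0], y - y[0]

reflecting the resulting segment in the two coordinate axes then assembles the full d2 - symmetric candidate curve , with the vertical tangent at p and the horizontal tangent at q matching automatically .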
as we consider smooth shapes the only way to glue the curve - segments to form a closed , non - intersecting curve is reflection along the and axes .it clearly hints to that a smooth steady state shape must possess d2 symmetry .it is worthy to remark that for we have implying that in this case the steady state shape is a circle .as the term of friction ( the one with parameter ) represents an affine flow , a first intuition says that the steady state curve should be an ellipse . in the appendixwe show that this is not the case , ellipses are not steady state as long as is strictly positive .solution of the local equation establish a solution for the non - local case ( eq . [ eq : steady_state ] ) as well . to see this ,let us fix the two parameters , and , follow the lines in this section to obtain a steady state solution . in case thereexists such a solution , measure the area enclosed by the curve .it simply leads to the parameters of the non - local equation via and . in the other way round ,if one knows a steady state solution of the non - local equation , calculation of the parameters in the local is straightforward .these observations imply that a smooth solution of the non - local case must possess d2 symmetry as well and it completes the proof . in the next sectionwe investigate the connection between the local and non - local models via the relations between their parameters .we turn to investigate steady state solutions of the non - local equation ( [ eq : steady_state ] ) . as we found that they possess d2 symmetry , we keep investigating a curve segment ( c.f . figure 1 . ) . to investigate uniqueness of solutions in ( [ eq : steady_state ] ) let us assign and if they result in an identical curve as a steady state solution of the proper model ( i.e. ( i.e. it fulfills [ eq : local_03 ] ) in case and ( [ eq : steady_state ] ) in case ) .in this sense we can talk about a _mapping between the parameter spaces_. observe , that holds , implying that is invariant under the above mentioned map . in order to facilitate this observation , instead of and use as one of the parameters in the problem .based on the previous section , in the local model only can result in a smooth curve .let us formally define the map at a fixed value of as : our program is to show that is injective and surjective , thus it is bijective implying smooth solutions of the non - local equation are unique as we had uniqueness of solutions for eq .( [ eq : local_03 ] ) . as we have seen , results in a smooth curve enclosing some positive area in .based on our construction , can be readily computed .it means , injectivity of hangs on strict monotonicity of the function over . to prove this ,let us consider two smooth solutions ( at a fixed value of ) of the local equation in ( [ eq : local_03 ] ) identified by the letters and .let us relate their parameters via where without loss of generality . by the virtue of eq .( [ eq : kappa2(delta ) ] ) is is clear , that not just the parameters , but the functions behind the steady state curve segments and are related as and at a fixed .a ) depicts the two segments and denote the arbitrary point - pair with a fix , which is used to determine the relation between the areas under the curve - segments .b ) the graphs of curvature functions for and , respectively.,scaledwidth=100.0% ] we choose two points along and ( fig .[ fig:02 ] . ) , one for each , such way that their tangent direction , is identical , the will refer to any quantity evaluated at these points ( e.g. 
is the parameter of the curve at the chosen point along ) .as is monotonic along , the position of the two points is well - defined . as we have seen in the previous section and related via ( [ eq : gamma_from_kappa ] ) , thus for our two curves we see , that must hold , which by eq .( [ eq : kappa_connection ] ) implies . by the properties of and ( [ eq : kappa_connection ] ) it is easy to see , that the curvatures at the chosen point - pair must fulfill because their parameter fulfill . from this observation and the positivity of all the involved quantitieswe conclude , that we switch to the parametrization of respect to the tangent direction . based on eq .( [ eq : de_kappa_gamma ] ) we see , that the area under can be computed as as we have demonstrated in ( [ eq : kappa_connection3 ] ) , the argument of the integral in the rhs of ( [ eq : area ] ) is smaller for than for , and this holds for any , whence we conclude finally we apply ( [ eq : c_connection ] ) to obtain as a steady state curve possesses d2 symmetry holds so we are left with the conclusion that which is exactly the monotonicity of the function .this proves that is injective , as different elements in can not be mapped to an identical value in .it is also worthy to note , that for all the area is obviously positive thus is a positive , monotonic , continuous function . to prove surjectivity we has to investigate the limits of as is varied .first we turn to investigate the limit as ( is still fixed ) . from the previous section we know , that the curvature at point p ( ) is maximal along and .curvature of any planar curve is the reciprocal of the radius of its osculating circle .it provides an estimate on the area under the curve via . in a similar way we use the fact , that the curvature is minimal at point q ( and there as well ) to obtain the following inequality : as both the lower and the upper expression in the above inequality approach as we conclude we investigate the limit .as is finite it is enough to investigate the area in the limit .we consider the already used identity between the curvature and and arch length .taking again the parametrization respect to we write where is the arch length between point p and the point with tangent inclination .as at the curvature at point q vanish we conclude , that thus the curve is unbounded .as the area under can be computed from the arc length ( is finite ! ) we conclude which provides the required limit as it means , the range of is indeed and based on the injectivity part of the proof the preimage is precisely . as f is injective and surjectivewe conclude that it must be _ one - to - one and onto_. this means , the global equation has a unique solution among smooth curves for any positive and . practical application of the results presented here and comparison of predicted steady state curves against observable shapes in nature will be carried out in a separate paper .i thank gbor domokos for his idea to investigate the model presented in the paper and the fruitful discussions about ooids .due to the dependence of the friction term ( the one multiplied by ) for our natural expectation is to have ellipses as invariant shapes .we show that this is not the case .first let us investigate the case when holds , thus for all point of .it implies _ circles _ are the only steady state shapes for . 
for the general case ( ) we use proof by contradiction .we assume an ellipse with semi axes is steady state .we parametrize the ( in this case elliptic ) arch between points a and p in the well - known way where is the parameter , , and .the curvature of the parametrically defined curve is given by with and denoting the first and second derivatives respect to .considering that the area of an ellipse is and we obtain , that the expression in eq .( [ eq : geom_pde ] ) can be written as at the endpoint of the major axis and . with this ins hand , after simplification we obtain in a very similar manner we substitute , and using the value for from eq .( [ eq : ell_c1 ] ) we obtain : finally , we take a third value of to demonstrate , that with the derived constants and the equation is not satisfied .for example , after substitution of , and into eq .( [ eq : ellipse ] ) we obtain the left side of this equation is not identically zero , with truncating around one can show , that only makes it vanish .we found that among ellipses only the circle is a possible steady state candidate , which happens at , as we have already demonstrated .in section 3 we listed several properties of . the proofsare provided here , a graph of a typical function ( given explicitly in ( [ eq : kappa2(delta ) ] ) ) is provided in figure [ fig:03 ] .below . 1 . is real ( ) .+ proof : since for any with , must be real . is continuous .+ proof : for all and continuity of implies the statement . is positive and equals .+ proof : since and , . has a maximum at .+ proof : substituting into the first and second derivatives of reveals , that and which indicates a maximum . as .+ proof : we use the polynomial expansion of the function . since a limit of a fraction of two polynomialsis determined by the highest powers of the polynomials , we write 6 .there is exactly one point , denoted to , where vanishes and solely depends on .+ proof : by the derivative of the function in eq .( [ eq : kappa2(delta ) ] ) ) can be also given by at any point , where vanishes this form can be arranged as where and are real valued , monotonic increasing functions with and .algebraic manipulations reveal that for all positive values holds , which implies there is one , and only one point , where .this is exactly the point , , where .observe , that is uniquely determined by .there is no local extrema for between , thus it is monotonic in this range .+ proof : after simple algebraic manipulations the derivative of ( [ eq : kappa(y)_sf ] ) can be written as based on points 1 .and 6 . above both terms in eq .( [ eq : kappa2(delta)_deriv ] ) are negative as long as , which implies a lack of local extrema . , _ formal asymptotic expansions for symmetric ancient ovals in mean curvature flow _ , networks and heterogeneous media , 8:18 , 2013 , _ the shape of pebbles _ , mathematical geology , 9:113122 , 1977 , _ the evolution of pebble size and shape in space and time _ , proc .a. , 468:3059-3079 , 2013 , doi:10.1098/rspa.2011.0562 , _ classification of compact ancient solutions to the curve shortening flow _, j. differential geom . ,84(3):455464 , 2010 , _ shortening embedded curves _ , the annals of mathematics ., 129:71 - 111 , 1989 , _ self - similar solutions to the curve shortening flow _ , trans ., 364:52855309 , 2012
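the curvature used in the ellipse computation follows from the standard parametric formula (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2). the short symbolic sketch below recovers the familiar closed form a*b / (a^2 sin^2 t + b^2 cos^2 t)^(3/2) and evaluates it at the axis endpoints and at one further sample point (taken here as t = pi/4 purely for illustration); it does not reproduce the steady state equation itself, which is omitted from the extracted text.

    import sympy as sp

    t, a, b = sp.symbols('t a b', positive=True)
    x, y = a * sp.cos(t), b * sp.sin(t)              # parametrization of the elliptic arc
    xp, yp = sp.diff(x, t), sp.diff(y, t)
    xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)
    kappa = sp.simplify((xp * ypp - yp * xpp) / (xp**2 + yp**2) ** sp.Rational(3, 2))
    print(kappa)   # simplifies to a*b/(a**2*sin(t)**2 + b**2*cos(t)**2)**(3/2)
    # curvature at the two axis endpoints and at a third sample point
    print([sp.simplify(kappa.subs(t, v)) for v in (0, sp.pi / 2, sp.pi / 4)])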
we investigate steady state solutions of a nonlocal geometric pde that serves as a simple model of the simultaneous contraction and growth of grains called ooids in geosciences . as the main result of the paper we demonstrate that the parameters associated with the physical environment determine a unique steady state solution of the equation among smooth , convex curves embedded in . it is also revealed that any steady state solution possesses d2 symmetry . nonlocal pde , shape evolution , uniqueness of steady state solutions . 35q86 , 35b06 , 34a26
the basal ganglia are critical brain structures for behavioral control , whose organization has been highly conserved during vertebrate evolution . altered activity of the basal ganglia underlies a wide range of human neurological and psychiatric disorders , but the specific computations normally performed by these circuits remain elusive .the largest component of the basal ganglia is the striatum , which appears to have a key role in adaptive decision - making based on reinforcement history , and in behavioral timing on scales from tenths of seconds to tens of seconds .the great majority ( ) of striatal neurons are gabaergic medium spiny neurons ( msns ) , which project to other basal ganglia structures but also make local collateral connections within striatum .these local connections were proposed in early theories to achieve action selection through strong winner - take - all lateral inhibition , but this idea fell out of favor once it became clear that msn connections are actually sparse ( nearby connection probabilities ) , unidirectional and relatively weak . nonetheless , striatal networks are intrinsically capable of generating sequential patterns of cell activation , even in brain slice preparations without time - varying external inputs . following previous experimental evidence that collateral inhibition can help organize msn firing ,an important recent set of modeling studies argued that the sparse connections between msns , though individually weak , can collectively mediate sequential switching between cell assemblies .it was further hypothesized that these connections may even be optimally configured for this purpose .this proposal is of high potential significance , since sequential dynamics may be central to the striatum s functional role in the organization and timing of behavioral output . in their work , ponzi and wickens used conductance - based model neurons ( with persistent and currents ) , in proximity to a bifurcation from a stable fixed point to a tonic firing regime .we show here that networks based on simpler leaky integrate - and - fire ( lif ) neurons can also exhibit sequences of cell assembly activation .this simpler model , together with a novel measure of structured bursting , allows us to more clearly identify the critical parameters needed to observe dynamics resembling that of the striatal msn network . among other results ,we show that the duration of gabaergic post - synaptic currents is crucial for the network ability to discriminate different input patterns .a reduction of the post - synaptic time scale , analogous to that observed for ipscs of msns in mouse models of huntington s disease ( hd ) , leads in our model to alteration of single neuron and population dynamics typical of striatal dynamics in symptomatic hd mice .finally , we qualitatively replicate the observed response of striatal networks in brain slices to altered excitatory drive and to reduction of gabaergic transmission between axon collaterals of striatal neurons .the latter effect can be induced by dopamine loss , therefore our results may help generate new insights into the aberrant activity patterns observed in parkinson s disease ( pd ) .the model is composed of leaky integrate - and - fire ( lif ) inhibitory neurons , with each neuron receiving inputs from a randomly selected 5 of the other neurons ( i.e. 
a directed erds - renyi graph with constant in - degree , where ) .the inhibitory post - synaptic potentials ( psps ) are schematized as -functions characterized by a decay time and a peak amplitude .in addition , each neuron is subject to an excitatory input current mimicking the cortical and thalamic inputs received by the striatal network . in order to obtain firing periods of any durationthe excitatory drives are tuned to drive the neurons in proximity of the supercritical bifurcation between the quiescent and the firing state , similarly to .furthermore , our model is integrated exactly between a spike emission and the successive one by rewriting the time evolution of the network as an event - driven map ( for more details see methods ) . since we will compare most of our findings with the results reported in a previous series of papers by ponzi and wickens ( pw ) it is important to stress the similarities and differences between the two models .the model employed by pw is a two dimensional conductance - based model with a potassium and a sodium channel , our model is simply a current based lif model .the parameters of the pw model are chosen so that the cell is in proximity of a saddle - node on invariant circle ( snic ) bifurcation to guarantee a smooth increase of the firing period when passing from the quiescent to the supra - threshold regime , without a sudden jump in the firing rate .similarly , in our simulations the parameters of the lif model are chosen in proximity of the bifurcation from silent regime to tonic firing . in the pw modelthe psps are assumed to be exponentially decaying , in our case we considered -functions . in particular , we are interested in selecting model parameters for which uniformly distributed inputs , where ] ms , corresponding to a firing rate of 8.33 hz not far from the average firing rate of the networks ( namely , hz ) .thus these neurons can be considered as displaying a typical activity in both regimes . as expected , the dynamics of the two neurons is quite different , as evident from the presented in fig .[ fig : isi_distribut](a ) and ( b ) . in both casesone observes a long tailed exponential decay of corresponding to a poissonian like behaviour . however the decay rate is slower for ms with respect to ms , namely hz versus hz .interestingly , the main macroscopic differences between the two distributions arises at short time intervals . for ms , ( see fig . [fig : isi_distribut](b ) ) an isolated and extremely narrow peak appears at isi .this first peak corresponds to the supra - threshold tonic - firing of the isolated neuron , as reported above .after this first peak , a gap is clearly visible in the followed by an exponential tail .the origin of the gap resides in the fact that isi , because if the neuron is firing tonically with its period isi and receives a single psp , the membrane potential has time to decay almost to the reset value before the next spike emission .thus a single psp will delay the next firing event by a fixed amount corresponding to the gap in fig . 
[fig : isi_distribut](b ) .indeed one can estimate analytically this delay due to the arrival of a single -pulse , in the present case this gives isi = 15.45 ms , in very good agreement with the results in fig .[ fig : isi_distribut](b ) .no further gaps are discernible in the distribution , because it is highly improbable that the neuron will receive two ( or more ) psps exactly at the same moment at reset , as required to observe further gaps .the reception of more psps during the ramp up phase will give rise to the exponential tail in the .in this case the contribution to the comes essentially from this exponential tail , while the isolated peak at isi has a negligible contribution . on the other hand , if , as in the case reported in fig .[ fig : isi_distribut](a ) , does not show anymore a gap , but instead a continuous distribution of values .this because now the inhibitory effects of the received psps sum up leading to a continuous range of delayed firing times of the neuron .the presence of this peak of finite width at short in the plus the exponentially decaying tail are at the origin of the observed . in fig .[ fig : isi_distribut ] ( e ) and [ fig : isi_distribut ] ( f ) the distributions of the coefficient are also displayed for the considered neurons as black lines with symbols .these distributions clearly confirm that the dynamics are bursting for the longer synaptic time scale and essentially poissonian for the shorter one .we would like to understand whether it is possible to reproduce similar distributions of the isis by considering an isolated cell receiving poissonian distributed inhibitory inputs . in order to verify this, we simulate a single cell receiving uncorrelated spike trains at a rate , or equivalently , a single poissonian spike train with rate . here , is the average firing rate of a single neuron in the original network .the corresponding are plotted in fig .[ fig : isi_distribut ] ( c ) and [ fig : isi_distribut ] ( d ) , for ms and 2 ms , respectively .there is a remarkable similarity between the reconstructed isi distributions and the real ones ( shown in fig .[ fig : isi_distribut](a ) and ( b ) ) , in particular at short isis .also the distributions of the for the reconstructed dynamics are similar to the original ones , as shown in fig .[ fig : isi_distribut ] ( e ) and [ fig : isi_distribut ] ( f ) .altogether , these results demonstrate that the bursting activity of inhibitory coupled cells is not a consequence of complex correlations among the incoming spike trains , but rather a characteristic related to intrinsic properties of the single neuron : namely , its tonic firing period , the synaptic strength , and the post - synaptic time decay . the fundamental role played by long synaptic time in inducing bursting activityhas been reported also in a study of a single lif neuron below threshold subject to poissonian trains of exponentially decaying psps .obviously this analysis can not explain collective effects , like the non trivial dependence of the number of active cells on the synaptic strength , discussed in the previous sub - section , or the emergence of correlations and anti - correlations among neural assemblies ( measured by ) due to the covarying of the firing rates in the network , as seen in the striatum slices and shown in fig .[ fig : ponzibenchmark ] ( c ) for our model . 
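the single-cell reconstruction described above is easy to sketch numerically: a lif neuron with a constant suprathreshold drive receives inhibitory alpha-function psps at poisson times, and the isi statistics are collected. the parameter values below are placeholders (they merely put the isolated cell slightly above threshold) and are not those used in the paper.

    import numpy as np

    def poisson_driven_lif(rate_in=50.0, tau_alpha=10.0, g_peak=0.01, drive=1.2,
                           tau_m=10.0, v_th=1.0, v_reset=0.0, t_max=2.0e5, dt=0.05, seed=0):
        # single LIF cell with constant suprathreshold drive, receiving inhibitory
        # alpha-function PSPs from one Poisson train (rate_in in Hz, times in ms)
        rng = np.random.default_rng(seed)
        lam = rate_in / 1000.0                        # expected arrivals per ms
        v, g, y = v_reset, 0.0, 0.0
        last, isis = None, []
        for step in range(int(t_max / dt)):
            if rng.random() < lam * dt:
                y += np.e                             # kick so that each PSP kernel peaks at 1
            dg, dy = (y - g) / tau_alpha, -y / tau_alpha
            g, y = g + dt * dg, y + dt * dy           # alpha function as two coupled first-order ODEs
            v += dt * ((drive - v) / tau_m - g_peak * g)
            if v >= v_th:
                t = step * dt
                if last is not None:
                    isis.append(t - last)
                last, v = t, v_reset
        isis = np.array(isis)
        return isis, isis.std() / isis.mean()

    isis, cv = poisson_driven_lif()
    print("mean ISI (ms):", isis.mean(), " CV:", cv)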
to better investigate the influence of on the collective properties of the network we report in fig .s5(a ) and ( b ) the averaged cv , , and for ] and a perturbed stimulation , where the stimulation currents differ only over a fraction of currents ( which are randomly chosen from the same distribution as the control stimuli ) .we measure the differences of the responses to the control and to the perturbed stimulations by measuring , over an observation window , the dissimilarity metric , defined in methods . the time averaged dissimilarity metric is reported as a function of in fig .[ fig : pattern_sep ] for two different values .it is clear that for any -value the network with longer synaptic response always discriminates better between the two different stimuli than the one with shorter psp decay .we have also verified that the metric is robust to the modification of the observation times , this is verified because the dissimilarity rapidly reaches a steady value ( as shown in fig .s7(a ) and ( b ) ) . of inputs differing from the control input , for the values of ( black circles ) and ( red squares ) with two different observation windows ( solid line ) and ( dashed line ) .other parameters used : , mv .remaining parameters as in fig .[ fig : ponzibenchmark].,scaledwidth=47.0% ] in order to better characterize the computational capability of the network and the influence due to the different duration of the psps , we measure the complexity of the output signals as recently suggested in . in particular , we have examined the response of the network to a sequence of three stimuli , each being a constant vector of randomly chosen currents .the three different stimuli are consecutively presented to the network for a time period , and the stimulation sequence is repeated for the whole experiment duration .the output of the network can be represented by the instantaneous firing rates of the neurons measured over a time window ms , this is a high dimensional signal , where each dimension is represented by the activity of a single neuron .the complexity of the output signals can be estimated by measuring how many dimensions are explored in the phase space , more stationary are the firing rates less variables are required to reconstruct the whole output signal .a principal component analysis ( pca ) performed over observations of the firing rates reveals that for ms the 80% of the variance is recovered already with a projection over a two dimensional sub - space ( red bars in fig .[ fig : pca_200 ] ( a ) ) . on the other hand , for a higher number of principal components is required to reconstruct the dynamical evolution ( black bars in fig .[ fig : pca_200 ] ( a ) ) , thus suggesting higher computational capability of the system with longer psps .these results are confirmed by analyzing the projections of the firing rates in the subspace spanned by the first three principal components shown in fig .[ fig : pca_200 ] ( b ) and ( c ) for ms and ms , respectively .the responses to the three different stimuli can be effectively discriminated by both networks , since they lie in different parts of the phase space . 
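the variance analysis reported above is a plain pca of the collected firing-rate vectors; a compact sketch, with surrogate data standing in for the simulated rates, is given below.

    import numpy as np

    def pca_variance(rates, n_components=3):
        # rates: (observations, neurons) matrix of instantaneous firing rates
        centered = rates - rates.mean(axis=0, keepdims=True)
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        var = s ** 2 / (rates.shape[0] - 1)           # variance along each principal component
        return var[:n_components].sum() / var.sum(), vt[:n_components]

    # surrogate data: responses hovering around three noisy "fixed points"
    rng = np.random.default_rng(1)
    centers = rng.random((3, 90))
    rates = np.vstack([c + 0.01 * rng.standard_normal((200, 90)) for c in centers])
    frac, components = pca_variance(rates)
    print("fraction of variance in the first 3 components:", round(frac, 3))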
however , the response to the three stimuli correspond essentially to three fixed points for ms , while trajectories evolving in a higher dimension are associated to each constant stimulus for ms .these analyses confirm that the network parameters selected by employing the maximal criterion also result in a reproducible response to different stimuli , as well as in an effective discrimination between different inputs . in a recent work ponzi and wickens noticed that in their model the striatally relevant regimes correspond to marginally stable dynamical evolution . in the supporting information text s1we devote the sub - section _ linear stability analysis _ to the investigation of this specific point , our conclusion is that for our model the striatally relevant regimes are definitely chaotic , but located in proximity of a transition to linearly stable dynamics .however for inhibitory networks it is known that even linearly stable networks can display erratic dynamics ( resembling chaos ) due to finite amplitude perturbations .this suggests that the usual linear stability analysis , corresponding to the estimation of the maximal lyapunov exponent , is unable to distinguish between regular and irregular evolution , at least for the studied inhibitory networks .= 20 ms and = 2 ms , respectively .projection of the neuronal response along the first three principal components for b ) = 20 ms and c ) = 2 ms .each point in the graph correspond to a different time of observation .the three colors denote the response to the three different inputs , which are constant stimulation currents randomly taken as ] for , the experiment is then performed as explained in the text.,title="fig:",scaledwidth=32.0% ] = 20 ms and = 2 ms , respectively .projection of the neuronal response along the first three principal components for b ) = 20 ms and c ) = 2 ms .each point in the graph correspond to a different time of observation .the three colors denote the response to the three different inputs , which are constant stimulation currents randomly taken as ] mv and in enhancing , after 20 seconds , the system excitability to the range of values ] . for one gets ms , which is consistent with the psps duration and decay values reported in literature for inhibitory transmission among spiny neurons our model does not take in account the depolarizing effects of inhibitory psps for .the gaba neurotransmitter has a depolarizing effect in mature projection spiny neurons , however this depolarization does not lead to a direct excitation of the spiny neurons .therefore our model can be considered as an effective model encompassing only the polarizing effects of the psps for .this is the reason why we have assumed that the membrane potential varies in the range ] .the stm is constructed by calculating the firing rates of the neurons at regular time intervals ms . at each time the rates are measured by counting the number of spikes emitted in a window , starting at the considered time .notice that the time resolution here used is higher with respect to that employed for the cross - correlation matrix , since we are interested in the response of the network to a stimulus presentation evaluated on a limited time window .the firing rates can be represented as a state vector with . for an experiment of duration ,we have state vectors representing the network evolution ( denotes the integer part ) . 
in order to measure the similarity of two states at different times and ,we have calculated the normalized scalar product for all possible pairs during the time experiment .this gives rise to a matrix called the state transition matrix . in the case of the numerical experiment with two inputs reported in the section _results _ , the obtained stm has a periodic structure of period with high correlated blocks followed by low correlated ones ( see figs .s6(b ) and ( e ) for the complete stm ) . in fig[ fig : sequantialswitching ] ( b ) is reported a coarse grained version of the entire stm obtained by taking a block from the stm , where the time origin corresponds to the onset of one of the two stimuli .the block is then averaged over subsequent windows of duration , whose origin is shifted each time by .more precisely the averaged stm is obtained as follows : in a similar manner , we can define a dissimilarity metric to distinguish between the response of the network to two different inputs .we define a control input ] . for the modified input we register another sequence of state vectors on the same time interval , with the same resolution .the instantaneous dissimilarity between the response to the control and perturbed stimuli is defined as : its average in time is simply given by .we have verified that the average is essentially not modified if the instantaneous dissimilarities are evaluated by considering the state vectors and taken at different times within the interval $ ] and not at the same time as done in . following a metric of the ability of the network to distinguish between two different inputs can be defined in terms of the stm . in particular , let us consider the stm obtained for two inputs to , each presented for a time lag . in order to define authors in have considered the correlations of the state vector taken at a generic time with all the other configurations , with reference to eq .this amounts to examine the elements of the stm . by defining ( ) as the average of over all the times associated to the presentation of the stimulus ( ) ,a distinguishablity metric between the two inputs can be finally defined as in order to take in account the single neuron variability and the number of active neurons involved in the network dynamics we have modified by multiplying this quantity by the fraction of active neurons and the average coefficient of variation , as follows the above metric is reported in figs .[ fig : q0andcv](c),(d ) and fig .[ fig : cv1_cv2 ] ( a ) . in order to obtain a synchronized event transition matrix ( setm ) , we first coarse grain the raster plot of the network .this is done by considering a series of windows of duration ms covering the entire raster plot .a bursting event is identified whenever a neuron fires 3 or more spikes within the considered window . to signal the burst occurrence a dotis drawn at the beginning of the window . from this coarse grained raster plotwe obtain the network bursting rate ( nbr ) by measuring the fraction of neurons that are bursting within the considered window . when this fraction is larger or equal to the average nbr plus two standard deviations , a synchronized event is identified .each synchronized event is encoded in the synchronous event vector , a dimensional binary vector where the -th entry is 1 if the -th neuron participated in the synchronized event and zero otherwise . 
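both the state transition matrix and the dissimilarity measure are built from normalized scalar products of state vectors. a minimal sketch follows; the instantaneous dissimilarity is written here as one minus the normalized scalar product of the two responses at equal times, which should be read as an assumption since the exact expression is not reproduced in the extracted text.

    import numpy as np

    def state_transition_matrix(rates):
        # rates: (T, N) array, row k is the firing-rate state vector at time t_k
        norms = np.linalg.norm(rates, axis=1, keepdims=True)
        unit = rates / np.where(norms > 0, norms, 1.0)
        return unit @ unit.T                          # normalized scalar product of all state pairs

    def dissimilarity(rates_control, rates_perturbed):
        # instantaneous dissimilarity between the responses to the control and the perturbed
        # stimulation (assumed form: 1 - normalized scalar product at the same time)
        num = np.sum(rates_control * rates_perturbed, axis=1)
        den = np.linalg.norm(rates_control, axis=1) * np.linalg.norm(rates_perturbed, axis=1)
        return 1.0 - num / np.where(den > 0, den, 1.0)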
to measure the similarity between two synchronous events , we make use of the normalized scalar product between all the pairs of vectors obtained at the different times and in which a synchronized event occurred .this represents the element of the setm . in the sub - section _ discriminative and computationalcapability _ , a principal component analysis ( pca ) is performed by collecting state vectors , measured at regular intervals for a time interval , then by estimating the covariance matrix associated to these state vectors .similarly , in the sub - section _ physiological relevance for biological networks under different experimental conditions _ the pca is computed by collecting the synchronous event vectors , and the covariance matrix calculated from this set of vectors .the principal components are the eigenvectors of theses matrices , ordered from the largest to the smallest eigenvalue .each eigenvalue represents the variance of the original data along the corresponding eigendirection .a reconstruction of the original data set can be obtained by projecting the state vectors along a limited number of principal eigenvectors , obviously by employing the first eigenvectors will allow to have a more faithful reconstruction .the _ k - means _ algorithm is a widespread mining technique in which data points of dimension are organized in clusters as follows . as a first step a number of clustersis defined a - priori , then from a sub - set of the data samples are chosen randomly . from each sub -set a centroid is defined in the -dimensional space . at a second step ,the remaining data are assigned to the closest centroid according to a distance measure .after the process is completed , a new set of centroids can be defined by employing the data assigned to each cluster .the procedure is repeated until the position of the centroids converge to their asymptotic value .+ an unbiased way to define a partition of the data can be obtained by finding the optimal cluster division .the optimal number of clusters can be found by maximizing the following cost function , termed _ modularity _ : where , is the matrix to be clusterized , the normalization factor is ; accounts for the matrix element associated to the _ null model _ ; denotes the cluster to which the -th element of the matrix belongs to , and is the kronecker delta . in other terms , the sum appearing in eq .( [ eq : mod ] ) is restricted to elements belonging to the same cluster . in our specific study, is the _ similarity matrix _ corresponding to the setm previously introduced .furthermore , the elements of the matrix are given by , where , these correspond to the expected value of the similarity for two randomly chosen elements .if two elements are similar than expected by chance , this implies that , and more similar they are larger is their contribution to the modularity .hence they are likely to belong to the same cluster .the problem of modularity optimization is np - hard , nevertheless some heuristic algorithms have been developed for finding local solutions based on greedy algorithms . in particular , we make use of the algorithm introduced for connectivity matrices in , which can be straightforwardly extended to similarity matrices by considering the similarity between two elements , as the weight of the link between them . 
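the modularity used to select the optimal partition can be written down in a few lines for a similarity matrix; the null-model term below is the usual configuration-model form s_i s_j / (2m), which matches the description above but is stated here as an assumption about the omitted formula.

    import numpy as np

    def modularity(S, labels):
        # S: symmetric similarity matrix (e.g. the SETM); labels: cluster index per element
        strength = S.sum(axis=1)
        two_m = S.sum()
        Q = 0.0
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            observed = S[np.ix_(idx, idx)].sum()
            expected = np.outer(strength[idx], strength[idx]).sum() / two_m
            Q += observed - expected                  # only pairs inside the same cluster count
        return Q / two_m

    # toy check: two blocks of mutually similar synchronized events
    S = np.block([[np.full((3, 3), 0.9), np.full((3, 2), 0.1)],
                  [np.full((2, 3), 0.1), np.full((2, 2), 0.9)]])
    print(modularity(S, np.array([0, 0, 0, 1, 1])))   # positive for the correct split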
the optimal partition technique is used in the sub - section _ physiological relevance for biological networks under different experimental conditions _, where it is applied to the similarity matrix where the distance matrix . here is the vector defining the synchronized event projected in the first principal components , which accounts for the 80% of the variance .the authors had useful interactions with robert schmidt at an early stage of the project . d.a .-a.t . also acknowledge helpful discussions with alain barrat , yehezkel ben - ari , demian battaglia , stephen coombes , rosa cossart , diego garlaschelli , stefano luccioli , mel machmahon , rossana mastrandea , ruben moreno - bote , and viktor jirsa .this work has been partially supported by the european commission under the program marie curie network for initial training " , through the project n. 289146 , neural engineering transformative technologies ( nett )" ( d.a .- g . and a.t ) , by the a grant ( no .anr-11-idex-0001 - 02 ) funded by the french government `` program investissements davenir '' ( a.t . ) , by us national institutes of health awards ns078435 , mh101697 , da032259 and ns094375 ( j.b . ) , and by departamento adminsitrativo de ciencia tecnologia e innovacion - colciencias " through the program doctorados en el exterior - 2013 " ( d.a .- g . ) .stephenson - jones m , samuelsson e , ericsson j , robertson b , grillner s. evolutionary conservation of the basal ganglia as a common vertebrate mechanism for action selection . current biology .2011;21(13):10811091 .west mj , stergaard k , andreassen oa , finsen b. estimation of the number of somatostatin neurons in the striatum : an in situ hybridization study using the optical fractionator method .journal of comparative neurology .1996;370(1):1122 . oorschot de .total number of neurons in the neostriatal , pallidal , subthalamic , and substantia nigral nuclei of the rat basal ganglia : a stereological study using the cavalieri and optical disector methods .journal of comparative neurology .1996;366(4):580599 .taverna s , van dongen yc , groenewegen hj , pennartz cm .direct physiological evidence for synaptic connectivity between medium - sized spiny neurons in rat nucleus accumbens in situ .journal of neurophysiology .2004;91(3):11111121 .carrillo - reid l , tecuapetla f , ibez - sandoval o , hernndez - cruz a , galarraga e , bargas j. activation of the cholinergic system endows compositional properties to striatal cell assemblies .journal of neurophysiology .2009;101(2):737749 .guzmn jn , hernndez a , galarraga e , tapia d , laville a , vergara r , et al .dopaminergic modulation of axon collaterals interconnecting spiny neurons of the rat striatum .the journal of neuroscience .2003;23(26):89318940 .carrillo - reid l , hernndez - lpez s , tapia d , galarraga e , bargas j. dopaminergic modulation of the striatal microcircuit : receptor - specific configuration of cell assemblies .the journal of neuroscience .2011;31(42):1497214983 .miller br , walker ag , shah as , barton sj , rebec gv .dysregulated information processing by medium spiny neurons in striatum of freely behaving mouse models of huntington s disease .journal of neurophysiology .2008;100(4):22052216 .lpez - huerta vg , carrillo - reid l , galarraga e , tapia d , fiordelisio t , drucker - colin r , et al . 
the balance of striatal feedback transmission is disrupted in a model of parkinsonism .the journal of neuroscience .2013;33(11):49644975 .benettin g , galgani l , giorgilli a , strelcyn jm .lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems ; a method for computing all of them .part 1 : theory .1980;15(1):920 .vergara r , rick c , hernndez - lpez s , laville j , guzman j , galarraga e , et al .spontaneous voltage oscillations in striatal projection neurons in a rat corticostriatal slice . the journal of physiology .2003;553(1):169182 .jidar o , carrillo - reid l , hernndez a , drucker - coln r , bargas j , hernndez - cruz a. dynamics of the parkinsonian striatal microcircuit : entrainment into a dominant network state .the journal of neuroscience .2010;30(34):1132611336 .tecuapetla f , carrillo - reid l , bargas j , galarraga e. dopaminergic modulation of short - term synaptic plasticity at striatal inhibitory synapses .proceedings of the national academy of sciences . 2007;104(24):1025810263 .plenz d , kitai st . up and down states in striatal medium spiny neurons simultaneously recorded with spontaneous activity in fast - spiking interneurons studied in cortex substantia nigra organotypic cultures . the journal of neuroscience .1998;18(1):266283 .klapstein gj , fisher rs , zanjani h , cepeda c , jokel es , chesselet mf , et al .electrophysiological and morphological changes in striatal spiny neurons in r6/2 huntington s disease transgenic mice .journal of neurophysiology .
striatal projection neurons form a sparsely - connected inhibitory network , and this arrangement may be essential for the appropriate temporal organization of behavior . here we show that a simplified , sparse inhibitory network of leaky - integrate - and - fire neurons can reproduce some key features of striatal population activity , as observed in brain slices [ _ carrillo - reid et al . , j. neurophysiology * 99 * ( 2008 ) 14351450 _ ] . in particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti - correlated cell assemblies with time - varying activity of individual cells . we find that under these conditions the network displays an input - specific sequence of cell assembly switching , that effectively discriminates similar inputs . our results support the proposal [ _ ponzi and wickens , plos comp biol * 9 * ( 2013 ) e1002954 _ ] that gabaergic connections between striatal projection neurons allow stimulus - selective , temporally - extended sequential activation of cell assemblies . furthermore , we help to show how altered intrastriatal gabaergic signaling may produce aberrant network - level information processing in disorders such as parkinson s and huntington s diseases . + + + + + + * author summary * + neuronal networks that are loosely coupled by inhibitory connections can exhibit potentially useful properties . these include the ability to produce slowly - changing activity patterns , that could be important for organizing actions and thoughts over time . the striatum is a major brain structure that is critical for appropriately timing behavior to receive rewards . striatal projection neurons have loose inhibitory interconnections , and here we show that even a highly simplified model of this striatal network is capable of producing slowly - changing activity sequences . we examine some key parameters important for producing these dynamics , and help explain how changes in striatal connectivity may contribute to serious human disorders including parkinson s and huntington s diseases .
power allocation problem on interference channels is modeled in game theoretic framework and has been widely studied - .most of the existing literature considered parallel gaussian interference channels .nash equilibrium ( ne ) and pareto optimal points are the main solutions obtained for the power allocation games .while each user aiming to maximize its rate of transmission , for single antenna systems , ne is obtained in under certain conditions on the channel gains that also guarantee uniqueness . under these conditionsthe water - filling mapping is contraction map .these results are extended to multi - antenna systems in . in the presence of multiple ne ,an algorithm is proposed in to find a ne that minimizes the total interference at all users among the ne .an online algorithm to reach a ne for parallel gaussian channels is presented in when the channel gains are fixed but not known to the users . its convergence is also proved . the power allocation problem on parallel gaussian interference channels that minimize the total power subject to rate constraints for each useris considered in , , and .ne is obtained under certain sufficient conditions in .sequential and simultaneous iterative water - filling algorithms are proposed in to find a ne .sufficient conditions for convergence of these algorithms are also studied .pareto optimal solutions are obtained by a decentralized iterative algorithm in assuming finite number of power levels for each user .in we consider a gaussian interference channel with fast fading channel gains whose distributions are known to all the users .we consider power allocation in a non - game - theoretic framework , and provide other references for such a set up . in , we have proposed a centralized algorithm for finding the pareto points that maximize the average sum rate , when the receivers have knowledge of all the channel gains and decode the messages from strong and very strong interferers instead of treating them as noise as done in all the above references . in this paper , we consider a stochastic game over additive gaussian interference channels , where the users want to maximize their long term average rate and have long term average power constraints ( for potential advantages of this over one shot optimization considered in the above references , see , ) . for this systemwe obtain existence of a ne and also develop a heuristic algorithm to find a ne under more general channel conditions for the complete information game .we also consider the much more realistic situation when a user knows only its own channel gains , whereas the above mentioned literature considers the problem when each user knows all the channel gains in the system .we consider two different partial information games . in the first partial information game , each transmitter is assumed to have knowledge of the channel gains of the links that are incident on its corresponding receiver from all the transmitters .later , in the other game , we assume that each transmitter has knowledge of its direct link channel gain only . for both the partial information games ,we find a ne using the heuristic algorithm developed in the paper . 
in each partial information game, we also present a lower bound on the average rate of each user at any nash equilibrium .this lower bound can be obtained by a user using a water - filling like , easy to compute power allocation , that can be evaluated with the knowledge of the distribution of its own channel gains and of the average power constraints of all the users .we present a distributed algorithm to compute pareto optimal and nash bargaining solutions for all the three proposed games .we obtain pareto optimal points by maximizing the weighted sum of the uitlities ( rates of transmission ) of the all users . throughout , each user requires the knowledge of the channel statics and the power policies of other users .later we relax this assumption and use bayesian learning to compute -nash equilibrium of the game in which only direct link channel gain is known at the corresponding transmitter .but in this case , we consider finite strategy set , i.e. , finite power levels rather than a continuum of powers considered before .the paper is organized as follows . in section [ sys_model ] ,we present the system model and the three stochastic game formulations .section [ one ] reformulates the complete information stochastic game as an affine variational inequality problem . in section [ gen_vi ], we propose the heuristic algorithm to solve the formulated variational inequality under more general conditions . in section[ incomplete ] we use this algorithm to obtain a ne when users have only partial information about the channel gains .pareto optimal and nash bargaining solutions are discussed in sections [ pareto ] , [ nb ] respectively and finally we apply bayesian learning in section [ bl ] .we present numerical examples in section [ ne ] .section [ concl ] concludes the paper .we consider a gaussian wireless channel being shared by transmitter - receiver pairs . the time axis is slotted and all users slots are synchronized .the channel gains of each transmit - receive pair are constant during a slot and change independently from slot to slot .these assumptions are usually made for this system , .let be the random variable that represents channel gain from transmitter to receiver ( for transmitter , receiver is the intended receiver ) in slot .the direct channel power gains and the cross channel power gains where , and are arbitrary positive integers .we assume that , is an sequence with distribution where if and if and and are probability distributions on and respectively .we also assume that these sequences are independent of each other .we denote by and its realization vector by which takes values in , the set of all possible channel states .the distribution of is denoted by .we call the channel gains from all the transmitters to the receiver an incident gain of user and denote by and its realization vector by which takes values in , the set of all possible incident channel gains .the distribution of is denoted by .each user aims to operate at a power allocation that maximizes its long term average rate under an average power constraint . 
since their transmissions interfere with each other , affecting their transmission rates , we model this scenario as a stochastic game .we first assume complete channel knowledge at all transmitters and receivers .if user uses power in slot , it gets rate , where and is a constant that depends on the modulation and coding used by transmitter and we assume for all .the aim of each user is to choose a power policy to maximize its long term average rate ,\ ] ] subject to average power constraint \leq \overline{p}_i , \text { for each } i,\label{avg_c}\ ] ] where denotes the power policies of all users except .we denote this game by .we next assume that the transmitter - receiver pair has knowledge of its incident gains only. then the rate of user is \right],\ ] ] where depends only on and denotes expectation with respect to the distribution of .each user maximizes its rate subject to ( [ avg_c ] ) .we denote this game by .we also consider a game assuming that each transmitter - receiver pair knows only its direct link gain .this is the most realistic assumption since each receiver can estimate and feed it back to transmitter . in this case , the rate of user is given by \right ] , \label{r_d}\ ] ] where is a function of only . here, denotes the channel gains of all other links in the interference channel except . in this game, each user maximizes its rate ( [ r_d ] ) under the average power constraint ( [ avg_c ] ) .we denote this game by .we address these problems as stochastic games with the set of feasible power policies of user denoted by and its utility by .let .we limit ourselves to stationary policies , i.e. , the power policy for every user in slot depends only on the channel state and not on . in fact now we can rewrite the optimization problem in to find policy such that ] for all .we express power policy of user by , where transmitter transmits in channel state with power .we denote the power profile of all users by . in the rest of the paper , we prove existence of a nash equilibrium for each of these games and provide algorithms to compute it .we denote our game by , where ] .a point is a nash equilibrium ( ne ) of game if for each user we now state debreu - glicksberg - fan theorem ( , page no . 69 ) on the existence of a pure strategy ne .[ dgf ] given a non - cooperative game , if every strategy set is compact and convex , is a continuous function in the profile of strategies and quasi - concave in , then the game has atleast one pure - strategy nash equilibrium .existence of a pure ne for the strategic games and follows from theorem [ dgf ] , since in our game is a continuous function in the profile of strategies and concave in for and .also , is compact and convex for each . the best - response of user is a function such that maximizes , subject to .a nash equilibrium is a fixed point of the best - response function . in the following weprovide algorithms to obtain this fixed point for . 
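the long term average rate of a user, with interference from the other users treated as noise and the noise power normalized to one (as in the lower-bound expressions later in the text), can be estimated by averaging over the channel states. the sketch below uses the gain convention |h_ij|^2 = power gain from transmitter j to receiver i and the natural logarithm; the state list, policies and numbers are illustrative only.

    import numpy as np

    def average_rate(i, policies, states, probs):
        # states: list of gain matrices H with H[r, t] = |h_{rt}|^2 (transmitter t -> receiver r)
        # policies[j](H): power used by user j in channel state H; probs: state probabilities
        rate = 0.0
        for H, prob in zip(states, probs):
            p = np.array([policies[j](H) for j in range(H.shape[0])])
            interference = 1.0 + sum(H[i, j] * p[j] for j in range(len(p)) if j != i)
            rate += prob * np.log(1.0 + H[i, i] * p[i] / interference)
        return rate

    # toy usage: two users, two equiprobable states, constant-power policies
    H1 = np.array([[1.0, 0.2], [0.3, 0.8]]); H2 = np.array([[0.5, 0.1], [0.1, 1.2]])
    const = [lambda H: 1.0, lambda H: 0.5]
    print(average_rate(0, const, [H1, H2], [0.5, 0.5]))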
in section[ incomplete ] we will consider and .given other users power profile , we use lagrange method to evaluate the best response of user .the lagrangian function is defined by ).\ ] ] to maximize , we solve for such that for each .thus , the component of the best response of user , corresponding to channel state is given by where is chosen such that the average power constraint is satisfied .it is easy to observe that the best - response of user to a given strategy of other users is water - filling on where for this reason , we represent the best - response of user by .the notation used for the overall best - response , where and is as defined in ( [ wf ] ) .we use .it is observed in that the best - response is also the solution of the optimization problem as a result we can interpret the best - response as the projection of on to .we denote the projection of on to by .we consider ( [ opt_proj ] ) , as a game in which every user minimizes its cost function with strategy set of user being .we denote this game by .this game has the same set of nes as because the best responses of these two games are equal .the theory of variational inequalities offers various algorithms to find ne of a given game . a variational inequality problem denoted by defined as follows .let be a closed and convex set , and .the variational inequality problem is defined as the problem of finding such that we say that is * monotone if * strictly monotone if * strongly monotone if there exists an such that .we reformulate the nash equilibrium problem at hand to an affine variational inequality problem .we now formulate the variational inequality problem corresponding to the game .we note that ( [ opt_proj ] ) is a convex optimization problem . the necessary and sufficient condition for to be solution of the convex optimization problem ( , page 210 ) where is a convex function and is a convex set , is thus , given , we need for user such that for all . we can rewrite it more compactly as , where is a -length block vector with , the cardinality of , each block , is of length and is defined by and is the block diagonal matrix with each block defined by {ij } = \begin{cases } 0 & \text { if } i = j , \\\frac{\vert h_{ij } \vert^2}{\vert h_{ii } \vert^2 } , & \text { else . } \end{cases}\ ] ] the characterization of nash equilibrium in ( [ cond_1 ] ) corresponds to solving for in the affine variational inequality problem , where . in , we presented an algorithm to compute ne when is positive semidefinite . in , , we proved that being positive semidefinite is a weaker sufficient condition than the existing condition in .in this section we aim to find a ne even if is not positive semidefinite . for this, we present a heuristic to solve the in general .we base our heuristic algorithm on the fact that a power allocation is a solution of if and only if for any .we prove this fact using a property of projection on a convex set that can be stated as follows ( ) : [ lemma_proj ] let be a convex set .the projection of , is the unique element in such that let satisfy ( [ fp_eq ] ) for some . by the property of projection ( [ prop_proj ] ) , we have for all . using ( [ fp_eq ] ) in ( [ prop_fp ] ) , we have since , we have thus solves the .conversely , let be a solution of the . then we have relation ( [ vi_form ] ) , which can be rewritten as for any .comparing with ( [ prop_proj ] ) , from lemma [ lemma_proj ] we have that ( [ fp_eq ] ) holds .thus , is a fixed point of the mapping for any . 
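the water-filling best response described above can be computed with a bisection on the water level so that the average power constraint is met with equality. a minimal sketch (function and variable names are illustrative):

    import numpy as np

    def water_fill(direct, interference, probs, p_avg, tol=1e-9):
        # direct[h] = |h_ii|^2, interference[h] = 1 + sum_{j != i} |h_ij|^2 p_j(h)
        # probs[h] = state probability; returns p_i(h) = max(0, mu - interference/direct)
        base = interference / direct
        def used(mu):
            return np.sum(probs * np.maximum(0.0, mu - base))
        lo, hi = 0.0, base.max() + p_avg / probs.min()   # bracket for the water level mu
        while hi - lo > tol:
            mu = 0.5 * (lo + hi)
            lo, hi = (mu, hi) if used(mu) < p_avg else (lo, mu)
        return np.maximum(0.0, 0.5 * (lo + hi) - base)

    # toy example: two equiprobable channel states
    p = water_fill(np.array([1.0, 0.2]), np.array([1.0, 1.5]), np.array([0.5, 0.5]), p_avg=1.0)
    print(p, "average power:", np.dot([0.5, 0.5], p))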
we can interpret the mapping as a better response mapping for the optimization ( [ opt_proj ] ) .consider a fixed point of the better response .then is a solution of the variational inequality ( [ vi_complete ] ) .this implies that , given , is a local optimum of ( [ opt_proj ] ) for all . since the optimization ( [ opt_proj ] ) is convex , is also a global optimum .thus given , is best response for all , and hence a fixed point of the better response function is also a ne . to find a fixed point of ,we reformulate the variational inequality problem as a non - convex optimization problem the feasible region of , can be written as a cartesian product of , for each , as the constraints of each user are decoupled in power variables . as a result ,we can split the projection into multiple projections for each , i.e. , . for each user , the projection operation takes the form where is chosen such that the average power constraint is satisfied . using ( [ pro_form ] ) ,we rewrite the objective function in ( [ s_obj ] ) with as at a ne , the left side of equation ( [ sim_obj ] ) is zero and hence each minimum term on the right side of the equation must be zero as well .this happens , only if , for each , here , the lagrange multiplier can be negative , as the projection satisfies the average power constraint with equality . at a ne user will not transmit if the ratio of total interference plus noise to the direct link gain is more than some threshold .we now propose a heuristic algorithm to find an optimizer of ( [ s_obj ] ) .this algorithm consists of two phases .phase 1 attempts to find a better power allocation , using picard iterations with the mapping , that is close to a ne .we use phase 1 in algorithm [ s_min ] to get a good initial point for the steepest descent algorithm of phase 2. we will show in section [ ne ] that it indeed provides a good initial point for phase 2 . in phase 2, using the estimate obtained from phase 1 as the initial point , the algorithm runs the steepest descent method to find a ne .it is possible that the steepest descent algorithm may stop at a local minimum which is not a ne .this is because of the non - convex nature of the optimization problem .if the steepest descent method in phase 2 terminates at a local minimum which is not a ne , we again invoke phase 1 with this local minimum as the initial point and then go over to phase 2 .we present the complete algorithm below as algorithm [ s_min ] .fix and a positive integer max : initialization phase + initialize for all . go to phase 2 .+ : optimization phase + initialize , for each , = steepest_descent( ) where , , till go to phase 1 with .+ where evaluate using derivative approximation return the partial information games , unlike in complete information game , we can not write the problem of finding a ne as an affine variational inequality , because the best response is not water - filling and should be evaluated numerically .but we can still formulate the problem of finding a ne as a non - affine vi . in this section , we show that we can use algorithm [ s_min ] to find a ne even for these games .we first consider the game and find its ne using algorithm [ s_min ] .we write the variational inequality formulation of the ne problem . 
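a stripped-down skeleton of the two-phase procedure is sketched below: phase 1 iterates the (better-)response map to obtain a starting point, and phase 2 performs a finite-difference steepest descent on the fixed-point residual, restarting phase 1 if the descent stalls away from a zero. step-size selection, feasibility bookkeeping and stopping rules are simplified with respect to algorithm [s_min], so this is only an outline of the structure.

    import numpy as np

    def phase1(p, response, n_iter=50):
        # Picard iterations of the response map; used only to find a good initial point
        for _ in range(n_iter):
            p = response(p)
        return p

    def residual(p, response):
        return np.linalg.norm(p - response(p))        # vanishes exactly at a fixed point / NE

    def find_ne(p0, response, eps=1e-6, delta=1e-4, step=1e-2, outer=20, inner=300):
        p = p0
        for _ in range(outer):
            p = phase1(p, response)
            for _ in range(inner):                    # phase 2: descent on the residual
                f = residual(p, response)
                if f < eps:
                    return p                          # Nash equilibrium up to tolerance
                grad = np.zeros_like(p)
                for k in range(p.size):               # forward-difference gradient estimate
                    e = np.zeros_like(p); e[k] = delta
                    grad[k] = (residual(p + e, response) - f) / delta
                p = p - step * grad
            # descent ended at a local minimum that is not a NE: return to phase 1
        return p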
for user ,the optimization at hand is where $ ] .the necessary and sufficient optimality conditions for the convex optimization problem ( [ in_pr ] ) are where is the gradient of with respect to power variables of user .then is a ne if and only if ( [ ns_in ] ) is satisfied for all .we can write the inequalities in ( [ ns_in ] ) as where . equation ( [ vi_in ] ) is the required variational inequality characterization .a solution of the variational inequality is a fixed point of the mapping , for any .we use algorithm [ s_min ] , to find a fixed point of by replacing in algorithm [ s_min ] with . in this subsection , we interpret as a better response for each user . for this , consider the optimization problem ( [ in_pr ] ) . for this , using the gradient projection method , the update rule for power variables of user is the gradient projection method ensures that for a given , therefore , we can interpret as a better response to than . as the feasible space , we can combine the update rules of all users and write thus , the phase of algorithm [ s_min ] is the iterated better response algorithm .consider a fixed point of the better response .then is a solution of the variational inequality [ vi_in ] .this implies that , given , is a local optimum of ( [ in_pr ] ) for all .since the optimization ( [ in_pr ] ) is convex , is also a global optimum .thus given , is best response for all , and hence a fixed point of the better response function is also a ne .this gives further justification for phase 1 of algorithm [ s_min ] .indeed we will show in the next section that in such a case phase 1 often provides a ne for and ( for which also phase 1 provides a better response dynamics ; see section [ direct ] below ) .we now consider the game where each user has knowledge of only the corresponding direct link gain . in this casealso we can formulate the variational inequality characterization .the variational inequality becomes where , .\label{rate_direct}\ ] ] we use algorithm [ s_min ] to solve the variational inequality ( [ vi_d ] ) by finding fixed points of .also , one can show that as for , provides a better response strategy . in this subsection, we derive a lower bound on the average rate of each user at any ne .this lower bound can be achieved at a water - filling like power allocation that can be computed with knowledge of only its own channel gain distribution and the average power constraint of all the users . to compute a ne using algorithm [ s_min ] , each user needs to communicate its power variables to the other users in every iteration and should have knowledge of the distribution of the channel gains of all the users .if any transmitter fails to receive power variables from other users , it can operate at the water - filling like power allocation that attains at least the lower bound derived in this section .other users can compute the ne of the game that is obtained by removing the user that failed to receive the power variables , but treating the interference from this user as a constant , fixed by its water - filling like power allocation .we now derive the lower bound . 
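before turning to the lower bound, note that the gradient-projection better response above needs the euclidean projection onto the average-power feasible set { p >= 0 : sum_h pi(h) p(h) <= p_bar }; a short sketch of that projection (obtained by bisection on the kkt multiplier) is given below, with illustrative names.

    import numpy as np

    def project_power(q, probs, p_avg, tol=1e-10):
        # Euclidean projection of q onto {p >= 0 : sum_h probs[h] * p[h] <= p_avg}
        p = np.maximum(q, 0.0)
        if np.dot(probs, p) <= p_avg:
            return p                                  # constraint inactive, nothing to do
        lo, hi = 0.0, (q / probs).max()               # bracket for the multiplier mu
        while hi - lo > tol:
            mu = 0.5 * (lo + hi)
            if np.dot(probs, np.maximum(q - mu * probs, 0.0)) > p_avg:
                lo = mu
            else:
                hi = mu
        return np.maximum(q - 0.5 * (lo + hi) * probs, 0.0)

    def better_response(p_i, grad_rate, probs, p_avg, step=0.1):
        # one gradient-projection step for user i in the partial-information game
        return project_power(p_i + step * grad_rate(p_i), probs, p_avg)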
in the computation of ne, each user is required to know the power profile of all other users .we now give a lower bound on the utility of user that does not depend on other users power profiles .we can easily prove that the function inside the expectation in is a convex function of for fixed using the fact that ( ) a function is convex if and only if for all and is such that .then by jensen s inequality to the inner expectation in , \nonumber \\ & \geq & \sum_{h_i \in \mathcal{i}}\pi(h_i ) \text{log } \left(1 + \frac { |h_{ii}|^2 p_i({\bf h}_i)}{1 +\sum_{j \neq i}|h_{ij}|^2\mathbb{e}[p_j(h_j)]}\right)\nonumber \\ & = & \sum_{h_i \in \mathcal{i}}\pi(h_i ) \text{log }\left(1 + \frac { |h_{ii}|^2 p_i({\bf h}_i)}{1 + \sum_{j \neq i}|h_{ij}|^2\overline{p_j}}\right ) . \label{in_lb}\end{aligned}\ ] ] the above lower bound of does not depend on the power profile of users other than .we can choose a power allocation of user that maximizes .it is the water - filling solution given by let be a ne , and let be the maximizer for the lower bound . then , , in particular for , .but , .therefore , .but , in general it may not hold that .we can also derive a lower bound on using convexity and jensen s inequality as in ( [ in_lb ] ) . in the case of , we have \overline{p_j}}\right).\ ] ] the optimal solution for maximizing the lower bound is the water - filling solution \overline{p_j}}{|h_{ii}|^2 } \right\}.\ ] ]in this section , we consider pareto optimal solutions to the game .a power allocation is pareto optimal if there does not exist a power allocation such that for all with at least one strict inequality .it is well - known that the solution of optimization problem , with , is pareto optimal .thus , since is compact and are continuous , a pareto point exists for our problem .we apply the weighted - sum optimization ( [ ws_pareto ] ) to the game to find a pareto - optimal power allocation .to solve the non - convex optimization problem in a distributed way , we employ augmented lagrangian method and solve for the stationary points using the algorithm in .we present the resulting algorithm to find the pareto power allocation in algorithm [ algo_pareto ] . define the augmented lagrangian as initialize for all . break fix initialize .player updates his power variables as + choose as . till for each .return we denote the gradient of with respect to power variables of player by . in algorithm[ algo_pareto ] , the step sizes are chosen sufficiently small .convergence of the steepest ascent function in algorithm [ algo_pareto ] is proved in . in a similar way, we can find pareto optimal points for partial information games and by solving the optimization ( [ ws_pareto ] ) with replaced by and respectively .we can extend the algorithm to compute pareto optimal power allocation for games and by appropriately redefining the augmented lagrangian as for game and for game . since this is a nonconvex optimization problem , algorithm [ algo_pareto ] converges to a local pareto point ( ) depending on the initial power allocation .we can get better local pareto points by initializing the algorithm from different power allocations and choosing the pareto point which gives the best sum rate among the ones obtained .we consider this in our illustrative examples .in general , pareto optimal points do not guarantee fairness among users , i.e. 
, algorithm [ algo_pareto ] may converge to a pareto point such that a particular user at the pareto point receives higher rate while another user at the same pareto point receives arbitrarily small rate . in this sectionwe consider nash bargaining solutions which are also pareto optimal solutions but guarantee fairness . in nash bargaining, we specify a disagreement outcome that specifies utility of each user that it receives by playing the disagreement strategy if the utility received in the bargaining outcome is less than that received in the disagreement outcome for any user .thus , by choosing the disagreement outcomes appropriately , the users can ensure certain fairness .the nash bargaining solutions are pareto optimal and also satisfy certain natural axioms .it is shown in that for a two player game , there exists a unique bargaining solution ( if the feasible region is nonempty ) that satisfies the axioms stated above and it is given by the solution of the optimization problem for an n - user nash bargaining problem , this result can be extended and the solution of an n - user bargaining problem is the solution of the optimization problem a nash bargaining solution is also related to proportional fairness , another fairness concept commonly used in communication literature .a utility vector is said to be _ proportionally fair _ if for any other feasible vector , for each , the aggregate proportional change is non - positive . if the set is convex , then nash bargaining and proportional fairness are equivalent .a major issue in finding a solution of a bargaining problem is choosing the disagreement outcome .it is more common to consider an equilibrium point as a disagreement outcome . in our problemwe can consider the utility vector at a ne as the disagreement outcome .we can also choose for each .for our numerical evaluations we have chosen the disagreement outcome to be a zero vector . to find the bargaining solution , i.e. , to solve the optimization problem ( [ nb_n ] ), we use the algorithm of section [ pareto ] used to find a pareto optimal point but with the objective function in section [ ne ] , we present a nash bargaining solution for the numerical examples we consider and observe that the nash bargaining solution obtained is a pareto optimal point which provides fairness among the users .in section [ gen_vi ] , we discussed a heuristic algorithm to find a ne when the matrix is not positive semidefinite . even though the heuristic can be used to compute a ne , we do not have a proof of its convergence to a ne . in this section ,we use bayesian learning that guarantees convergence to an -nash equilibrium of the partial information game where can be chosen arbitrarily small .we first define an -nash equilibrium .a point is an -nash equilibrium ( -ne ) of game if for each user bayesian learning to find a ne for finite games is introduced in . in finite games, there are a finite number of users and the strategy set of user , is finite for all . let the probability distribution on be the strategy of user , i.e. , for each , denotes the probability that user plays the action under the strategy . in the model considered in ,a static game is played repeatedly for an infinite horizon and users update their strategies each time the game is played .no user has knowledge about the opponents strategy . 
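a toy example may help contrast the weighted - sum pareto point with the nash bargaining solution for a zero disagreement outcome . the rate region in the sketch below is an assumption chosen only to make the fairness difference visible ; it is not derived from the interference - channel game .

```python
import numpy as np

# toy convex utility region: feasible rate pairs (r1, r2) with r1 + 2*r2 <= 2, r >= 0,
# and disagreement outcome d = (0, 0) as chosen for the numerical examples in the text
r1 = np.linspace(1e-3, 2.0 - 1e-3, 2000)
r2 = (2.0 - r1) / 2.0                        # Pareto boundary of the toy region

# equal-weight Pareto point: maximize r1 + r2 on the boundary
i_ws = np.argmax(r1 + r2)

# Nash bargaining solution: maximize log(r1) + log(r2), i.e. the product of gains over d
i_nb = np.argmax(np.log(r1) + np.log(r2))

print("weighted-sum Pareto point :", (round(r1[i_ws], 3), round(r2[i_ws], 3)))  # corner
print("Nash bargaining solution  :", (round(r1[i_nb], 3), round(r2[i_nb], 3)))  # fairer split
```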
buteach user is provided with the actions chosen by the opponents in a time slot at the end of that time slot .let be the action chosen by user in time slot .in bayesian learning , each user has a belief about strategy of user , for each with .each user , after every time slot , finds the posterior belief on the opponents strategies from the prior beliefs using bayesian update rule , after observing the opponents actions in time slot . after finding the posterior beliefs , each user chooses a strategy that maximizes its utility , assuming that all the opponents follow their beliefs . following this procedure ,it is shown in that the posterior beliefs of all players converge to a -ne .we adapt this procedure to find a -ne of our game . for this , we consider finite power levels at which a user can transmit .let be the set of power levels at which user can transmit .these power levels can be different for different users . then the strategy set of user with average power constraint is where is the probability of occurrence of channel state .the strategy set of each user is finite . to use the traditional bayesian learning ,each user needs to know the action chosen by the other users .in general , a transmitter can not observe powers used by the other transmitters .the purpose of knowing the actions of other users in bayesian learning is to learn the strategies of other players and each user finds its best response with respect to the learned strategies of other users . in our problem , for each user to find its best response to a given strategy of the other users , it is enough to know the interference level that is experienced by its corresponding receiver . as the receiver can feedback the interference it has seen in a slot by the end of that slot to its transmitter , it is enough to learn the distribution of the interference rather than the strategy of the other users .hence , each user has belief on the distribution of the interference rather than having a belief on the strategies of opponents .each user updates its belief on the distribution on the interference using the bayesian update with the help of feedback from its receiver. let denote the set of possible interference levels for user .we denote the belief of user about the distribution of interference experienced at its receiver by . with respect to this belief , user finds the best response and chooses an action according to the best response .please note that is a probability mass function on where as is a probability mass function on .after every time slot user updates its belief based on the feedback received from its receiver using laplace estimator where is the number of time instances that the interference level occurred up to time , is the cardinality of , and is any positive integer .the laplace estimator ( [ lap_est ] ) uses bayesian update and guarantees absolute continuity condition that is necessary for convergence .thus , as each user plays its best response with respect to , the strategies converge to an -ne .we will use this algorithm on some examples in the next section .in this section we compare the sum rate achieved at a nash equilibrium and a pareto optimal point obtained by the algorithms provided above . in all our numerical computations we choose in computations of pareto points .in all the examples considered below , we have chosen with the step size in the steepest descent method and updated after iterations as .we choose a 3-user interference channel for examples 1 and 2 below . 
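the belief update amounts to an additive - smoothing ( laplace ) estimate of the interference distribution from the receiver feedback , as sketched below . the smoothing constant , the set of interference levels and the true distribution are assumptions for illustration ; the point is that every level keeps strictly positive probability , which is the absolute continuity condition needed for convergence .

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical set of interference levels a receiver can observe, with an unknown
# true distribution that the transmitter tries to learn from per-slot feedback
levels = np.array([0.0, 0.5, 1.0, 2.0])
true_dist = np.array([0.1, 0.4, 0.3, 0.2])

a = 1                                  # smoothing constant (any positive integer)
counts = np.zeros(len(levels))
T = 2000

for t in range(1, T + 1):
    obs = rng.choice(len(levels), p=true_dist)      # interference level fed back
    counts[obs] += 1
    belief = (counts + a) / (t + a * len(levels))   # Laplace (additive-smoothing) estimate

print("true distribution    :", true_dist)
print("belief after", T, "slots:", np.round(belief, 3))
```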
for example 1 , we take and .we assume that all elements of occur with equal probability , i.e. , with probability 0.5 .now , the matrix is not positive definite .thus , the algorithm in may not converge to a ne for .algorithm [ s_min ] converges to a ne not only for but also for and .we compare the sum rates for the ne under different assumptions in figure [ 4_plot ] .we have also computed that maximizes the corresponding lower bounds ( [ in_lb ] ) , evaluated the sum rate and compared to the sum rate at a ne .the sum rates at nash equilibria for and are close .this is because the values of the cross link channel gains are close and hence knowing the cross link channel gains has less impact . in example 2 , we take and .we assume that all elements of , and occur with equal probability .we compare the sum rates for the ne obtained by algorithm [ s_min ] in figure [ 4_plot2 ] .now we see significant differences in the sum rates . for this example, we compare the rates of each user at a pareto point and nash bargaining for games , , and in tables [ tab : g_a ] , [ tab : g_i ] , and [ tab : g_d ] respectively . from these tableswe can observe that the pareto optimal points yield better sum rate than at the ne .it can also be seen that the nash bargaining solutions provide more fairness than the pareto points .we consider a 2-user interference channel in example 3 .we take and .we assume that all elements of occur with equal probability for user 1 , and that the distributions of direct and cross link channel gains are identical for user 2 and are given by . in this example also , we use algorithm [ s_min ] to find ne for the different cases , and also obtain the lower bound for the partial information cases .we compare the sum rates for the ne in figure [ plot3 ] .we further elaborate on the usefulness of phase 1 in algorithm [ s_min ] .we quantify the closeness of to a ne by .if is a ne then , and for two different power allocations and we say that is closer to a ne than if .we now verify that the fixed point iterations in phase 1 of algorithm [ s_min ] take us closer to a ne starting from any randomly chosen feasible power allocation .for this , we have randomly generated feasible initial power allocations and run phase 1 for iterations for each randomly chosen initial power allocation , and compared the values of . in the following ,we compare the mean , over the 100 initial points chosen , of the values of immediately after random generation of feasible power allocations , to those after running phase 1 .we summarize the comparison of mean value of before and after phase 1 of algorithm [ s_min ] , in tables [ 4_table1 ] , [ 4_table2 ] and [ 4_table3 ] for examples 1 , 2 and 3 respectively .the first column of the table indicates the constrained average transmit snr in db .the second and the third columns correspond to the power allocation game with complete channel knowledge , .the fourth and the fifth columns correspond to the power allocation game with knowledge of the incident channel gains , . 
the sixth and the seventh columns correspond to the power allocation game with direct link channel knowledge , .the second , fourth and sixth columns indicate the mean of before running phase 1 , where is a randomly generated feasible power allocation .the mean value is evaluated over samples of different random feasible power allocations .the third , fifth and seventh columns indicate the mean value of after running phase 1 in algorithm [ s_min ] for the same random feasible power allocations .it can be seen from the tables that running phase 1 prior to phase 2 reduces the value of when compared with a randomly generated feasible power allocation .thus , the power allocation after running phase 1 will be a good choice of power allocation to start the steepest descent in phase 2 .it can also be seen that for all the three examples , for and , phase 1 itself converges to the ne , whereas for phase 1 may not converge . at snr of 20db , for , algorithm [ s_min ] converged in one iteration of phase 1 and phase 2 for examples 1 and 3 .for example 2 , algorithm [ s_min ] converged after phase 1 in the second iteration of phase 1 and phase 2 .phase 2 converged to a local optimum in about 200 iterations in example 1 , about 400 iterations for example 3 and about 250 iterations in example 2 .we have run algorithm [ s_min ] on many more examples and found that for and , phase 1 itself converged to the ne .we illustrate bayesian learning for example 2 with and for each .we assume that all elements of , and occur with equal probability .each user transmits data at rate given by ( [ rate_direct ] ) with a power level between and which is a multiple of .each user has a belief on the distribution of interference experienced by its receiver and uses bayesian learning to find a ne of .we tabulate these rates in table [ table4 ] for all the three users at a -ne computed via the bayesian learning algorithm , and we also compare the sum rates at ne obtained using variational inequality approach and bayesian learning in figure [ plot_baye ] . it can be seen from figure [ plot_baye ] that the sum rates achieved via the vi heuristic and using bayesian learning are very close . in our simulations for example 2 , at transmit snr of 15db , to find a ne of using variational inequality approach requires about iterations in phase .the overall run time of the vi based heuristic algorithm is about seconds for and seconds for on an _ i5 - 2400 _ processor with clock speed .bayesian learning converges to a ne in about iterations and the run time for the program on the same processor is about seconds . even though bayesian learning requires a larger number of iterations to converge, its per iteration complexity is less which reduces run time . but this run time increases significantly if we increase the number of feasible power levels .using vi approach and bayesian learning for example 2.,width=377,height=188 ]we have considered a channel shared by multiple transmitter - receiver pairs causing interference to one another .we formulated stochastic games for this system in which transmitter - receiver pairs may or may not have information about other pairs channel gains . exploiting variational inequalities , we presented a heuristic algorithm that obtains a ne in the various examples studied , quite efficiently . 
in the games with partial information , we presented a lower bound on the utility of each user at any ne .a utility of at least this lower bound can be attained by a user using a water - filling like power allocation , that can be computed with the knowledge of the distribution of its own channel gains and of the average power constraints of all the users .this power allocation is especially useful when any transmitter fails to receive the power variables from the other transmitters that are required for it to compute its ne power allocation . in all the games ,i.e. , , and , we also provide algorithms to compute the pareto points and nash bargaining solutions which yield better sum rate than the ne .the nash bargaining solutions are fairer to users than the pareto points .bayesian learning has been used to compute ne for general channel conditions .it is observed that , even though bayesian learning takes more iterations to compute ne than the heuristic , it requires less information about the other users and their strategies .but to use bayesian learning , we quantize the power levels and it is the price we pay for not having more information .this work is partially supported by funding from anrc . 9 g. scutari , d. p. palomar , s. barbarossa , `` optimal linear precoding strategies for wideband non - cooperative systems based on game theory - part ii : algorithms , '' _ ieee trans on signal processing _ , vol.56 , no.3 , pp .1250 - 1267 , march 2008 .x. lin , tat - ming lok , `` learning equilibrium play for stochastic parallel gaussian interference channels , '' available at http://arxiv.org/abs/1103.3782 . l. rose , s. m. perlaza , c. j. le martret , and m. debbah , `` achieving pareto optimal equilibria in energy efficient clustered ad hoc networks , '' _ proc .of international conference on communications _ , budapest , hungary , 2013 .k. w. shum , k .- k .leung , c. w. sung , `` convergence of iterative waterfilling algorithm for gaussian interference channels , '' _ ieee journal on selected areas in comm . , _vol.25 , no.6 , pp . 1091 - 1100 , august 2007 .g. scutari , f. facchinei , j. s. pang , l. lampariello , `` equilibrium selection in power control games on the interference channel , '' _ proceedings of ieee infocom _ ,pp 675 - 683 , march 2012 .m. bennis , m. le treust , s. lasaulce , m. debbah , and j. lilleberg , `` spectrum sharing games on the interference channel , '' _ ieee international conference on game theory for networks _ , turkey , 2009 . g. scutari , d. p. palomar , s. barbarossa , `` the mimo iterative waterfilling algorithm , '' _ ieee trans on signal processing _ , vol .57 , no.5 , may 2009 .g. scutari , d. p. palomar , s. barbarossa , `` asynchronous iterative water - filling for gaussian frequency - selective interference channels '' , _ ieee trans on information theory _ , vol.54 , no.7 ,july 2008 .l. rose , s. m. perlaza , m. debbah , `` on the nash equilibria in decentralized parallel interference channels , '' _ proc .of international conference on communications _ , kyoto , 2011 .l. rose , s. m. perlaza , m. debbah , and c. j. le martret , `` distributed power allocation with sinr constraints using trial and error learning , '' _ proc .of ieee wireless communications and networking conference _ , paris , france , april 2012 .j. s. pang , g. scutari , f. facchinei , and c. wang , `` distributed power allocation with rate constraints in gaussian parallel interference channels , '' _ ieee trans on information theory _ , vol .54 , no . 
8 , pp .3471 - 3489 , august 2008 . k. a. chaitanya , u. mukherji and v. sharma , `` power allocation for interference channel , '' _ proc . of national conference on communications, new delhi , 2013 .a. j. goldsmith and pravin p. varaiya , `` capacity of fading channels with channel side information , '' _ ieee trans on information theory _ ,vol.43 , pp.1986 - 1992 , november 1997 .h. n. raghava and v. sharma , `` diversity - multiplexing trade - off for channels with feedback , '' _ proc . of 43rd annual allerton conference on communications , control , and computing _ ,k. a. chaitanya , u. mukherji , and v. sharma , `` algorithms for stochastic games on interference channels , '' _ proc . of national conference on communications _ , mumbai , 2015 .z. han , d. niyato , w. saad , t. basar and a. hjorungnes , `` game theory in wireless and communication networks , '' _ cambridge university press _ , 2012 .d. p. bertsekas and j. n. tsitsiklis , `` parallel and distributed computation : numerical methods , '' _ athena scientific _ , 1997 .h. minc , `` nonnegative matrices , '' john wiley sons , new york , 1988 .f. facchinei and j. s. pang , `` finite - dimensional variational inequalities and complementarity problems , '' springer , 2003 .d. conforti and r. musmanno , `` parallel algorithm for unconstrained optimization based on decomposition techniques , '' _ journal of optimization theory and applications _ ,vol.95 , no.3 , december 1997 . s. boyd and l. vandenberghe , `` convex optimization , '' cambridge university press , 2004. k. miettinen , `` nonlinear multiobjective optimization , '' kluwer academic publishers , 1999 .r. a. horn and c. r. johnson , `` matrix analysis , '' cambridge university press , 1985 .d. g. luenberger and yinyu ye , `` linear and nonlinear programming , '' edition , springer , 2008 .j. nash , `` the bargaining problem , '' _ econometrica _ , 18:155 - 162 , 1950 .f. kelly , a. maulloo , and d. tan , `` rate control for communication networks : shadow prices , proportional fairness and stability , '' _ journal of the operations research society _ ,237 - 252 , march , 1998 .h. boche , and m. schubert , `` nash bargaining and proportional fairness for wireless systems , '' _ ieee / acm transactions on networking _ , vol .1453 - 1466 , october , 2009 .e. kalai , and e. lehrer , `` rational learning leads to nash equilibrium , '' _ econometrica _ , vol .5 , pp . 1019 - 1045 , 1993 . b. m. jedynak , and s. khudanpur , `` maximum likelihood set for estimating a probability mass function , '' _ neural computation _1508 - 1530 , 2005 .
we consider a gaussian interference channel with independent direct and cross link channel gains , each of which is independent and identically distributed across time . each transmitter - receiver user pair aims to maximize its long - term average transmission rate subject to an average power constraint . we formulate a stochastic game for this system in three different scenarios . first , we assume that each user knows all direct and cross link channel gains . later , we assume that each user knows channel gains of only the links that are incident on its receiver . lastly , we assume that each user knows only its own direct link channel gain . in all cases , we formulate the problem of finding a nash equilibrium ( ne ) as a variational inequality ( vi ) problem . we present a novel heuristic for solving a vi . we use this heuristic to solve for a ne of power allocation games with partial information . we also present a lower bound on the utility for each user at any ne in the case of the games with partial information . we obtain this lower bound using a water - filling like power allocation that requires only knowledge of the distribution of a user s own channel gains and average power constraints of all the users . we also provide a distributed algorithm to compute pareto optimal solutions for the proposed games . finally , we use bayesian learning to obtain an algorithm that converges to an -nash equilibrium for the incomplete information game with direct link channel gain knowledge only without requiring the knowledge of the power policies of the other users . interference channel , stochastic game , nash equilibrium , distributed algorithms , variational inequality .
we consider a reinforcement learning agent that takes sequential actions within an uncertain environment with an aim to maximize cumulative reward .we model the environment as a markov decision process ( mdp ) whose dynamics are not fully known to the agent .the agent can learn to improve future performance by exploring poorly - understood states and actions , but might improve its short - term rewards through a policy which exploits its existing knowledge .efficient reinforcement learning balances exploration with exploitation to earn high cumulative reward .the vast majority of efficient reinforcement learning has focused upon the _ tabula rasa _ setting , where little prior knowledge is available about the environment beyond its state and action spaces . in thissetting several algorithms have been designed to attain sample complexity polynomial in the number of states and actions .stronger bounds on regret , the difference between an agent s cumulative reward and that of the optimal controller , have also been developed .the strongest results of this kind establish regret for particular algorithms which is close to the lower bound .however , in many setting of interest , due to the curse of dimensionality , and can be so enormous that even this level of regret is unacceptable . in many practical problems the agent _will _ have some prior understanding of the environment beyond _tabula rasa_. for example , in a large production line with machines in sequence each with possible states , we may know that over a single time - step each machine can only be influenced by its direct neighbors .such simple observations can reduce the dimensionality of the learning problem exponentially , but can not easily be exploited by a _tabula rasa _ algorithm .factored mdps ( fmdps ) , whose transitions can be represented by a dynamic bayesian network ( dbn ) , are one effective way to represent these structured mdps compactly .several algorithms have been developed that exploit the known dbn structure to achieve sample complexity polynomial in the _ parameters _ of the fmdp , which may be exponentially smaller than or .however , these polynomial bounds include several high order terms .we present two algorithms , ucrl - factored and psrl , with the first near - optimal regret bounds for factored mdps .ucrl - factored is an optimistic algorithm that modifies the confidence sets of ucrl2 to take advantage of the network structure .psrl is motivated by the old heuristic of thompson sampling and has been previously shown to be efficient in non - factored mdps .these algorithms are descibed fully in section [ sec : algos ] .both algorithms make use of approximate fmdp planner in internal steps . however , even where an fmdp can be represented concisely , solving for the optimal policy may take exponentially long in the most general case .our focus in this paper is upon the statistical aspect of the learning problem and like earlier discussions we do not specify which computational methods are used .our results serve as a reduction of the reinforcement learning problem to finding an approximate solution for a given fmdp . in many cases of interest , effective approximate planning methods for fmdps do exist . investigating and extending these methodsare an ongoing subject of research .we consider the problem of learning to optimize a random finite horizon mdp in repeated finite episodes of interaction . 
is the state space , is the action space , is the reward distibution over in state with action , is the transition probability over from state with action , is the time horizon , and the initial state distribution .we define the mdp and all other random variables we will consider with respect to a probability space .a deterministic policy is a function mapping each state and to an action . for each mdp and policy , we define a value function ,\ ] ] where denotes the expected reward realized when action is selected while in state , and the subscripts of the expectation operator indicate that , and for .a policy is optimal for the mdp if for all and .we will associate with each mdp a policy that is optimal for .the reinforcement learning agent interacts with the mdp over episodes that begin at times , . at each time , the agent selects an action , observes a scalar reward , and then transitions to .let denote the history of observations made _ prior _ to time .a reinforcement learning algorithm is a deterministic sequence of functions , each mapping to a probability distribution over policies which the agent will employ during the episode .we define the regret incurred by a reinforcement learning algorithm up to time to be : where denotes regret over the episode , defined with respect to the mdp by with and .note that regret is not deterministic since it can depend on the random mdp , the algorithm s internal random sampling and , through the history , on previous random transitions and random rewards .we will assess and compare algorithm performance in terms of regret and its expectation .intuitively a factored mdp is an mdp whose rewards and transitions exhibit some conditional independence structure . to formalize this definitionwe must introduce some more notation common to the literature . for any subset of indices let us define the scope set := \bigotimes\limits_{i \in z } { \mathcal{x}}_i ] to be the value of the variables with indices . for singleton sets we will write ] in the natural way .let be the set of functions mapping elements of a finite set to probability mass functions over a finite set . will denote the set of functions mapping elements of a finite set to -sub - gaussian probability measures over with mean bounded in ] such that , = \sum_{i=1}^l { \mathds{e}}\big [ r_i \big]\ ] ] for is equal to with each ) ] such that , \ \bigg\vert \x[z_i ] \right)\ ] ] a factored mdp ( fmdp ) is then defined to be an mdp with both factored rewards and factored transitions . writing a fmdp is fully characterized by the tuple where and are the scopes for the reward and transition functions respectively in for .we assume that the size of all scopes and factors so that the domains of and are of size at most .our first result shows that we can bound the expected regret of psrl .[ thm : reg psrl ] let be factored with graph structure . 
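the factored transition structure can be made concrete with a small lookup - table representation in which each next - state factor depends only on its scope , as sketched below . the state / action layout , the scopes and the random conditional tables are illustrative assumptions , not a particular benchmark fmdp .

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# toy factored MDP: state s = (s1, s2, s3), each factor in {0, 1}, one binary action;
# x = (s, a) is indexed as (s1, s2, s3, a).  Each next-state factor s'_j depends only
# on a small scope Z_j of x, as in the DBN representation described above.
factor_sizes = [2, 2, 2]                  # |S_j|
scopes = [[0, 3], [0, 1, 3], [1, 2, 3]]   # Z_j^P: indices of x each factor depends on

def random_factor_table(scope, out_size):
    """Conditional distribution P_j(s'_j | x[Z_j]) stored as a lookup table."""
    table = {}
    for key in product(*[range(2) for _ in scope]):
        p = rng.random(out_size) + 0.1
        table[key] = p / p.sum()
    return table

P = [random_factor_table(z, n) for z, n in zip(scopes, factor_sizes)]

def transition_prob(x, s_next):
    """P(s' | x) = prod_j P_j(s'_j | x[Z_j])."""
    prob = 1.0
    for j, (z, table) in enumerate(zip(scopes, P)):
        prob *= table[tuple(x[i] for i in z)][s_next[j]]
    return prob

def sample_next_state(x):
    return tuple(int(rng.choice(n, p=P[j][tuple(x[i] for i in scopes[j])]))
                 for j, n in enumerate(factor_sizes))

x = (1, 0, 1, 0)                          # current (state, action)
print("sampled next state:", sample_next_state(x))
print("P(next = (1,1,0) | x) =", round(transition_prob(x, (1, 1, 0)), 4))
```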
if is the distribution of and is the span of the optimal value function then we can bound the regret of psrl : \hspace{-3 mm } & \le & \hspace{-2 mm } \sum_{i=1}^l \left\ { 5\tau c |{\mathcal{x}}[z^r_i]| + 12\sigma\sqrt{| { \mathcal{x}}[z^r_i]|t \log\left(4 l | { \mathcal{x}}[z^r_i ] | k t\right ) } \right\ } \nonumber + 2 \sqrt{t } \\ & & \hspace{-36 mm } + 4 + { \mathds{e}}[\psi ] \left ( 1 + \frac{4}{t-4 } \right ) \sum_{j=1}^m \left\ { 5\tau |{\mathcal{x}}[z^p_j]| + 12\sqrt{| { \mathcal{x}}[z^p_j]| |{\mathcal{s}}_j | t \log\left(4 m | { \mathcal{x}}[z^p_j ] | k t\right ) } \right\}\end{aligned}\ ] ] we have a similar result for ucrl - factored that holds with high probability . [ thm : reg ucrl - factored ] let be factored with graph structure .if is the diameter of , then for any can bound the regret of ucrl - factored : | + 12\sigma\sqrt { | { \mathcal{x}}[z^r_i]| t \log\left(12 l | { \mathcal{x}}[z^r_i ] | k t / \delta\right ) } \right\ } \nonumber + 2\sqrt{t}\\ & & \hspace{-37 mm } + cd\sqrt{2 t \log(6/\delta ) } + \cd \sum_{j=1}^m \left\ { 5\tau |{\mathcal{x}}[z^p_j]| + 12\sqrt{| { \mathcal{x}}[z^p_j]| |{\mathcal{s}}_j | t \log\left(12 m | { \mathcal{x}}[z^p_j ] | k t / \delta \right ) } \right\}\end{aligned}\ ] ] with probability at least both algorithms give bounds| |s_j| t}\right) ] for psrl and scaled diameter for ucrl - factored .the span of an mdp is the maximum difference in value of any two states under the optimal policy .the diameter of an mdp is the maximum number of expected timesteps to get between any two states .psrl s bounds are tighter since and may be exponentially smaller . however , ucrl - factored has stronger probabilistic guarantees than psrl since its bounds hold with high probability for any mdp not just in expectation .there is an optimistic algorithm regal which formally replaces the ucrl2 with and retains the high probability guarantees .an analogous extension to regal - factored is possible , however , no practical implementation of that algorithm exists even with an fmdp planner. the algebra in theorems [ thm : reg psrl ] and [ thm : reg ucrl - factored ] can be overwhelming . for clarity, we present a symmetric problem instance for which we can produce a cleaner single - term upper bound .let be shorthand for the simple graph structure with , , and for and , we will write . [ cor : reg psrl ] if is the distribution of with structure then we can bound the regret of psrl : \le 15 m \tau \sqrt{j k t \log(2mj t)}\ ] ] [ cor : reg ucrl - factored ] for any mdp with structure we can bound the regret of ucrl - factored : with probability at least . both algorithms satisfy bounds of which is exponentially tighter than can be obtained by any -naive algorithm .for a factored mdp with independent components with states and actions the bound is close to the lower bound and so the bound is near optimal .the corollaries follow directly from theorems [ thm : reg psrl ] and [ thm : reg ucrl - factored ] as shown in appendix [ sec : clean symmetric bounds ] .our analysis will rely upon the construction of confidence sets based around the empirical estimates for the underlying reward and transition functions .the confidence sets are constructed to contain the true mdp with high probability .this technique is common to the literature , but we will exploit the additional graph structure to sharpen the bounds . consider a family of functions which takes to a probability distribution over . 
we will write unless we wish to stress a particular -algebra .let be a finite set , and let be a measurable space .the _ width _ of a set at with respect to a norm is our confidence set sequence is initialized with a set .we adapt our confidence set to the observations which are drawn from the true function at measurement points so that . each confidence setis then centered around an empirical estimate at time , defined by where is the number of time appears in and is the probability mass function over that assigns all probability to the outcome .our sequence of confidence sets depends on our choice of norm and a non - decreasing sequence . for each ,the confidence set is defined by : where is shorthand for and we interpret as a null constraint .the following result shows that we can bound the sum of confidence widths through time .[ thm : widths ] for all finite sets , measurable spaces , function classes with uniformly bounded widths and non - decreasing sequences : the proof follows from elementary counting arguments on and the pigeonhole principle .a full derivation is given in appendix [ sec : widths ] .with our notation established , we are now able to introduce our algorithms for efficient learning in factored mdps . psrl and ucrl - factoredproceed in episodes of fixed policies . at the start of the episodethey produce a candidate mdp and then proceed with the policy which is optimal for . in psrl, is generated by a sample from the posterior for , whereas ucrl - factored chooses optimistically from the confidence set .both algorithms require prior knowledge of the graphical structure and an approximate planner for fmdps. we will write for a planner which returns -optimal policy for . we will write for a planner which returns an -optimal policy for most optimistic realization from a family of mdps .given it is possible to obtain through extended value iteration , although this might become computationally intractable . 
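a sketch of the per - factor bookkeeping behind these confidence sets is given below : visit counts , empirical conditional distributions and an l1 radius shrinking like 1/sqrt(n) . the class name and the exact constant in the radius are assumptions standing in for the non - decreasing sequence that defines the set in the text .

```python
import numpy as np
from collections import defaultdict

class FactorEstimator:
    """Empirical estimate and L1 confidence radius for one factored component."""
    def __init__(self, out_size, delta=0.05):
        self.out_size = out_size
        self.delta = delta
        self.counts = defaultdict(lambda: np.zeros(out_size))

    def update(self, key, outcome):
        self.counts[key][outcome] += 1

    def empirical(self, key):
        n = self.counts[key].sum()
        if n == 0:
            return np.full(self.out_size, 1.0 / self.out_size)   # no data yet
        return self.counts[key] / n

    def l1_radius(self, key):
        """Weissman-style radius ~ sqrt(2 |S| log(2/delta) / n)."""
        n = max(1.0, self.counts[key].sum())
        return np.sqrt(2.0 * self.out_size * np.log(2.0 / self.delta) / n)

# usage with synthetic observations of one measurement point
rng = np.random.default_rng(3)
true_p = np.array([0.7, 0.2, 0.1])
est = FactorEstimator(out_size=3)
for _ in range(500):
    est.update(key=("x",), outcome=rng.choice(3, p=true_p))

print("empirical estimate:", np.round(est.empirical(("x",)), 3))
print("L1 radius         :", round(est.l1_radius(("x",)), 3))
print("actual L1 error   :", round(np.abs(est.empirical(("x",)) - true_p).sum(), 3))
```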
psrl remains identical to earlier treatment provided is encoded in the prior .ucrl - factored is a modification to ucrl2 that can exploit the graph and episodic structure of .we write and as shorthand for these confidence sets | , x^{t-1}_1[z^r_i],d_t^{r_i}) ] generated from initial sets ,{\mathds{r}}} ] .we should note that ucrl2 was designed to obtain regret bounds even in mdps without episodic reset .this is accomplished by imposing artificial episodes which end whenever the number of visits to a state - action pair is doubled .it is simple to extend ucrl - factored s guarantees to this setting using this same strategy .this will not work for psrl since our current analysis requires that the episode length is independent of the sampled mdp .nevertheless , there has been good empirical performance using this method for mdps without episodic reset in simulation .* input : * prior encoding , * input : * graph structure , confidence , our common analysis of psrl and ucrl - factored we will let refer generally to either the sampled mdp used in psrl or the optimistic mdp chosen from with associated policy ) .we introduce the bellman operator , which for any mdp , stationary policy and value function , is defined by this returns the expected value of state where we follow the policy under the laws of , for one time step .we will streamline our discussion of and by simply writing in place of or and in place of or where appropriate ; for example .we will also write .we now break down the regret by adding and subtracting the _ imagined _ near optimal reward of policy , which is known to the agent . for clarity of analysiswe consider only the case of but this changes nothing for our consideration of finite . relates the optimal rewards of the mdp to those near optimal for .we can bound this difference by the planning accuracy for psrl in expectation , since and are equal in law , and for ucrl - factored in high probability by optimism .we decompose the first term through repeated application of dynamic programming : where is a martingale difference bounded by , the span of . 
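for concreteness , the episodic structure of psrl can be sketched on a small tabular mdp with dirichlet posteriors over the transition probabilities and known rewards ; the factored version replaces the single dirichlet per state - action pair with one per factor and scope assignment . the sizes , horizon and the synthetic true mdp below are assumptions .

```python
import numpy as np

rng = np.random.default_rng(4)

S, A, tau = 4, 2, 5
true_P = rng.dirichlet(np.ones(S), size=(S, A))   # unknown to the agent
R = rng.random((S, A))                            # rewards in [0, 1], assumed known

alpha = np.ones((S, A, S))                        # Dirichlet posterior parameters

def plan(P):
    """Finite-horizon backward induction; returns policy[t, s]."""
    V = np.zeros(S)
    policy = np.zeros((tau, S), dtype=int)
    for t in reversed(range(tau)):
        Q = R + P @ V                             # Q[s, a]
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

total_reward = 0.0
for episode in range(200):
    # sample an MDP from the posterior, plan for it, then act with that policy
    P_sample = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)] for s in range(S)])
    policy = plan(P_sample)
    s = 0                                         # fixed initial state
    for t in range(tau):
        a = policy[t, s]
        s_next = rng.choice(S, p=true_P[s, a])
        total_reward += R[s, a]
        alpha[s, a, s_next] += 1                  # posterior update from the observed transition
        s = s_next

print("average per-episode reward:", round(total_reward / 200, 3))
```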
forucrl - factored we can use optimism to say that and apply the azuma - hoeffding inequality to say that : the remaining term is the one step bellman error of the imagined mdp .crucially this term only depends on states and actions which are actually observed .we can now use the hlder inequality to bound we aim to exploit the graphical structure to create more efficient confidence sets .it is clear from that we may upper bound the deviations of factor - by - factor using the triangle inequality .our next result , lemma [ lem : factor bound ] , shows we can also do this for the transition functions and .this is the key result that allows us to build confidence sets around each factor rather than as a whole .[ lem : factor bound ] let the transition function class be factored over and with scopes .then , for any we may bound their l1 distance by the sum of the differences of their factorizations : ) - \tilde{p}_i(x[z_i ] ) \|_1\ ] ] we begin with the simple claim that for any ] , where can be verified case by case .we now consider the probability distributions over and over .we let be the joint probability distribution over .using the claim above we bound the l1 deviation by the deviations of their factors : we conclude the proof by applying this times to the factored transitions and .we now want to show that the true mdp lies within with high probability .note that posterior sampling will also allow us to then say that the sampled is within with high probability too . in order to show this , we first present a concentration result for the l1 deviation of empirical probabilities . [lem : weissman ] for all finite sets , finite sets , function classes then for any , the deviation the true distribution to the empirical estimate after samples is bounded : this is a relaxation of the result proved by weissman .lemma [ lem : weissman ] ensures that for any .we then define with | k^2) ] for all : \le { \mathds{p}}(a)^{-1 } { \mathds{e}}[\psi ] \le \left ( 1 - \frac{4 \delta}{k^2 } \right)^{-1 } { \mathds{e } } [ \psi ] = \left(1 + \frac{4\delta}{k^2 - 4\delta } \right ) { \mathds{e } } [ \psi ] \le \left(1 + \frac{4\delta}{1 - 4\delta } \right ) { \mathds{e } } [ \psi ] .\ ] ] plugging in and and setting completes the proof of theorem [ thm : reg psrl ] .the analysis of ucrl - factored and theorem [ thm : reg ucrl - factored ] follows similarly from and .corollaries [ cor : reg psrl ] and [ cor : reg ucrl - factored ] follow from substituting the structure and upper bounding the constant and logarithmic terms .this is presented in detail in appendix [ sec : clean symmetric bounds ] .we present the first algorithms with near - optimal regret bounds in factored mdps .many practical problems for reinforcement learning will have extremely large state and action spaces , this allows us to obtain meaningful performance guarantees even in previously intractably large systems .however , our analysis leaves several important questions unaddressed .first , we assume access to an approximate fmdp planner that may be computationally prohibitive in practice .second , we assume that the graph structure is known a priori but there are other algorithms that seek to learn this from experience . 
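the factor bound lemma is straightforward to check numerically : draw two factored distributions , form the joint product distributions and compare the joint l1 distance with the sum of the per - factor l1 distances , as in the sketch below . the factor sizes are arbitrary .

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

sizes = [2, 3, 2]                        # |S_j| for each factor

def random_factors():
    fs = []
    for n in sizes:
        p = rng.random(n) + 0.05
        fs.append(p / p.sum())
    return fs

P = random_factors()
Q = random_factors()

# joint product distributions over the full space
joint_P = np.array([np.prod([P[j][k] for j, k in enumerate(key)])
                    for key in product(*[range(n) for n in sizes])])
joint_Q = np.array([np.prod([Q[j][k] for j, k in enumerate(key)])
                    for key in product(*[range(n) for n in sizes])])

lhs = np.abs(joint_P - joint_Q).sum()
rhs = sum(np.abs(P[j] - Q[j]).sum() for j in range(len(sizes)))
print("joint L1 distance      :", round(lhs, 4))
print("sum of factor distances:", round(rhs, 4))
print("bound holds:", lhs <= rhs + 1e-12)
```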
finally , we might consider dimensionality reduction in large mdps more generally , where either the rewards , transitions or optimal value function are known to belong in some function class to obtain bounds that depend on the dimensionality of .osband is supported by stanford graduate fellowships courtesy of paccar inc .this work was supported in part by award cmmi-0968707 from the national science foundation .we present elementary arguments which culminate in a proof of theorem [ thm : widths ] .[ lem : rad ] for all finite sets and any : where .let be the largest subsequence of such that \\forall i ] and so we must conclude that for all .this means that forms a subsequence of unique elements in , the total length of which must be bounded by .we now provide a corollary of this result which allows for episodic delays in updating visit counts .we imagine that we will only update our counts every steps .[ cor : rad ep ] let us associate times within episodes of length , for and . for all finite sets and any : where is the -fold composition of acting on . by an argument of visiting times similar to lemma [ lem : rad ]we can see that the worst case scenario for the episodic case is to visit each exactly times before the start of an episode , and then spend the entirety of the following episode within the state .here we have upper bounded by and by to complete our result .it will be useful to define notion of radius for each confidence set at each , by the triangle inequality , we have for all .[ lem : large rad ] let us write for and associate times within episodes of length , for and .for all finite sets , measurable spaces , function classes , non - decreasing sequences , any and : by construction of and noting that is non - decreasing in , we can say that for all so that now let be the -inverse of such that .applying corollary [ cor : rad ep ] to our expression times repeatedly we can say : where denotes the composition of -times acting on . if we take to be the lowest integer such that then , so that the whole expression is bounded by . note that for all , , if we write then , which completes the proof . using these results we are finally able to complete our proof of theorem [ thm : widths ]we first note that , via the triangle inequality .we streamline our notation by letting . reordering the sequence such we have that : we can see that . from lemma [ lem :large rad ] this means that , so that .this means that .therefore , which completes the proof of theorem [ thm : widths ] .we now provide concrete clean upper bounds for theorems [ thm : reg psrl ] and [ thm : reg ucrl - factored ] in the simple symmetric case , , and for all suitable and write . for a non - trivial problem setting we assume that , , . from section [ sec : bounds ] we have that & \le & 4 + 2 \sqrt{t } + m \left\ { 4(\tau j + 1 ) + 4\sqrt{8 \log(4mj t^2/\tau ) j t } \right\ } \\ & & + \ { \mathds{e}}[\psi]\left(1 + \frac{4}{t-4}\right ) m \left\ { 4(\tau j + 1 ) + 4\sqrt{8 k \log(4mj t^2 / \tau ) j t } \right\}\end{aligned}\ ] ] through looking at the constant term we know that the bounds are trivially satisfied for all , from here we can certainly upper bound . 
from here we can say that : & \le & \left\ { 4 + 4m\left(1 + \frac{14}{13 } { \mathds{e}}[\psi]\right)(\tau j + 1 ) \right\ } \\ & & + \sqrt{t } \left\ { 2 + 4\sqrt{8j \log(4mj t^2/\tau ) } + 4\sqrt{8 jk \log(4mj t^2/\tau ) } \frac{14}{13 } { \mathds{e}}[\psi ] \right\ } \\ & \le & 5 \left ( 1 + { \mathds{e}}[\psi ] \right ) m \tau j + \sqrt{t } \left\{ 12\sqrt{j\log(2mj t ) } + 12{\mathds{e}}[\psi ] \sqrt{jk\log(2mj t ) } \right\ } \\ & \le & 5 \left ( 1 + { \mathds{e}}[\psi ] \right ) m \tau j + 12m\left ( 1 + { \mathds{e } } [ \psi ] \sqrt{k } \right ) \sqrt{j t\log(2mj t ) } \\ & \le & \min(5 m \tau^2 j , t ) + 12 m \tau \sqrt{j k t \log(2mj t ) } \\ & \le & 15 m \tau \sqrt{j k t \log(2mj t)}\end{aligned}\ ] ] where in the last steps we have used that and .we now repeat a similar procedure of upper bounds for ucrl - factored , immediately replicating by in our analysis to say that with probability : where in the last step we used a similar argument
any reinforcement learning algorithm that applies to all markov decision processes ( mdps ) will suffer regret on some mdp , where is the elapsed time and and are the cardinalities of the state and action spaces . this implies time to guarantee a near - optimal policy . in many settings of practical interest , due to the curse of dimensionality , and can be so enormous that this learning time is unacceptable . we establish that , if the system is known to be a _ factored _ mdp , it is possible to achieve regret that scales polynomially in the number of _ parameters _ encoding the factored mdp , which may be exponentially smaller than or . we provide two algorithms that satisfy near - optimal regret bounds in this context : posterior sampling reinforcement learning ( psrl ) and an upper confidence bound algorithm ( ucrl - factored ) .
experimental techniques for determining the partial specific volume and partial specific adiabatic compressibility of proteins in solution have provided key insight into structural and catalytic events .the application of these methods has resulted in a broad base of knowledge about solvation effects , ligand binding and dissociation , the influence of protein domains on catalytic events , and protein folding pathways. theoretical methods offer the potential to link specific events to the thermodynamic observables that experimentalists measure in the course of their research . in particular , molecular dynamics ( md )can provide a powerful approach to correlating the molecular trajectories of proteins that may give rise to an experimentally measured molecular volume or change thereof .the state - of - the - art method to date for determining the apparent volume of a protein is the method of `` accessible surface area'', a method that employs a spherically - accessible surface integration of the protein s crystal structure .this method is attractive in that it is not computationally demanding ( and is thus accessible to the typical computational equipment of an experimental lab ) , however it calculates volumes that can be significantly different from those measured in solution. prior research on the volume of small molecules has been performed by our group and the methodology presented here is an application of those techniques to protein systems .this work demonstrates the validity of the md approach toward determining the apparent molecular volume of globular proteins .the apparent molecular volume of a protein is calculated as : the namd molecular dynamics package uses a langevin - hoover hybrid method where a piston is coupled to the equations of motion for a particle in the isothermal - isobaric ( npt)ensemble .a simulated annealing algorithm was implemented to perform a global energy minimization after making single amino - acid substitutions .the protein and water system is brought to a higher energy state and allowed to randomly walk through phase space as the system is cooled over a specific temperature schedule .post - simulation analysis consisted of calculating the correlation time of the volume signal using a block - averaging method in order to calculate an unbiased error in the volume , as well as structural analysis of the equilibrium structures ( rmsd , ramachandran plots ) versus their crystal structures .preparatory simulation stages ( namd ) consisted of constructing the solvated protein system and local energy minimization , followed by heating and equilibration steps .initial myoglobin structural coordinates were obtained from the rcsb protein data bank ( crystal structure entry 1dwr4 ) , solvated with 10,796 tip3p water molecules and then minimized by the method of conjugate gradients .the protein was not mutated in any way .the system was then heated to 300 k over a period of 1.2 ns , followed by equilibration in the npt ensemble for 0.2 ns .a production npt ( 300 k/1 atm/2.0 fs timestep ) run of 10.0 ns resulted in an average volume for the protein - water system of 341,875 ; simulating the bulk water alone over a trajectory of equal time duration yielded an average system volume of 319,775 .the difference of these values gives an apparent molecular volume of wild - type myoglobin corresponding to 22,100 0.747 / g. 
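the conversion from the two average box volumes to the partial specific volume is a one - line calculation ; the sketch below reproduces the quoted 0.747 cm^3/g from the volumes in the text , assuming a molar mass of roughly 17.8 kda for holo - myoglobin ( our assumption , not stated in the text ) .

```python
N_A = 6.02214076e23        # Avogadro's number, 1/mol
A3_TO_CM3 = 1.0e-24        # cubic angstrom to cm^3

# average box volumes from the two NPT trajectories (values quoted in the text)
V_protein_water = 341_875.0   # A^3, protein plus 10,796 TIP3P waters
V_bulk_water    = 319_775.0   # A^3, the same number of waters alone

M_protein = 17_820.0          # g/mol, approximate holo-myoglobin molar mass (assumed)

dV = V_protein_water - V_bulk_water          # apparent volume per molecule, A^3
v_spec = dV * A3_TO_CM3 * N_A / M_protein    # partial specific volume, cm^3/g

print("apparent molecular volume:", dV, "A^3")
print("partial specific volume : %.3f cm^3/g" % v_spec)
```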
the correlation times of both volume signals were determined by block - averaging and the signals were then uncorrelated to provide an unbiased volume error of 0.001 / g. this computationally determined apparent volume of 0.747 0.001 / g agrees precisely with experimentally reported sound velocity measurements and is within the experimental error of that study .the equilibrium protein structure in solution was aligned and compared with the crystal structure of myoglobin and while the overall rmsd was minimal , a local region of amino acids near gly80 was found to be displaced by 6.0 due to solvation ./ g.,width=3 ] the halozyme of e. coli aspartate aminotransferase ( rcsb entry 1asm) a large dimer consisting of identical 404 amino acid subunits complete with lys258-bound pyridoxal-5-phosphate cofactors , was simulated in the npt ensemble with 58,361 water molecules .both native aspat and it s val39 mutant were simulated in order to compare molar volumes .an average apparent volume of 0.733 / g was calculated at the end of a 0.5 ns run , a result that is in good agreement with the experimentally measured value of 0.731 / g. work is in progress to obtain a longer timescale trajectory and calculate the associated volumetric error . a series of single - point mutations at val39 have been investigated experimentally to determine their effect on compressibility and volume .val39 was chosen due to its proximity to the binding site and was theorized to serve as a gating amino acid influencing substrate specificity .a positive linear correlation between adiabatic compressibility and apparent volume when bulky side chains were introduced suggested that these effects were due to the increased flexibility of the protein and an increase in cavity size caused by these mutations .the largest effect on protein volume was observed for the v39 g variant causing a shift from 0.731 / g to 0.696 / g. this single - point mutation induced a large conformational change in the dimer and an associated change in the apparent volume by -0.035 / g , one of the largest molecular volume changes observed due to a single residue mutation relative to the native protein . since the crystallographic structure of the v39 g mutant has not yet been elucidated ,manual alteration of the val39 side chain of the 1asm crystal structure was performed to give the v39 g initial configuration .a simulated annealing algorithm was developed and performed on the mutant to help facilitate the adoption of its new equilibrium conformation prior to performing production npt runs .the application of molecular dynamics for studying the volume of globular proteins can accurately model experimental data . the methods presented can offer insight into structural protein studies since conformational changes can be examined and correlated with experimentally observed volume changes in solution .the volumetric contribution of various regions of a protein ( including the more difficult case of a single - point mutation ) can be elucidated through use of this practical and consistent methodology .work is currently in progress to resolve the separate coulombic and van der waals contributions to the apparent volume .hybrid monte carlo ( hmc ) methods are also being developed to more efficiently explore the phase space of folding intermediates and to allow the use of additional potential energy terms .computations were performed at the usf research computing center where nsf - funded computational resources ( under grant no . 
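the block - averaging estimate of the error of the mean can be illustrated on a synthetic correlated signal : the apparent error grows with block size until the blocks decorrelate , and the plateau value gives the unbiased error . the ar(1 ) surrogate signal and its parameters below are assumptions standing in for the md volume trace .

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic correlated "volume" signal (AR(1) process standing in for the MD trace)
n, rho = 50_000, 0.98
noise = rng.normal(size=n)
signal = np.empty(n)
signal[0] = noise[0]
for t in range(1, n):
    signal[t] = rho * signal[t - 1] + np.sqrt(1 - rho**2) * noise[t]
signal = 340_000.0 + 50.0 * signal            # arbitrary mean and scale

def block_error(x, block_size):
    """Standard error of the mean from non-overlapping block averages."""
    n_blocks = len(x) // block_size
    blocks = x[:n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n_blocks)

# the estimate rises with block size and plateaus once blocks are uncorrelated
for b in [1, 10, 50, 200, 1000, 5000]:
    print(f"block size {b:5d}: error of mean = {block_error(signal, b):8.3f}")
```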
che-0722887 ) were greatly appreciated .the authors acknowledge funding from the national science foundation ( grant no .che-0312834 ) .the authors also thank the space foundation ( basic and applied research ) for partial support .
molecular dynamics simulations of myoglobin and aspartate aminotransferase , with explicit solvent , are shown to accurately reproduce the experimentally measured molar volumes . single amino - acid substitution at val39 of aspartate aminotransferase is known to produce large volumetric changes in the enzyme , and this effect is demonstrated in simulation as well . this molecular dynamics approach , while more computationally expensive that extant computational methods of determining the apparent volume of biological systems , is quite feasible with modern computer hardware and is shown to yield accurate volumetric data with as little as several nanoseconds of dynamics .
experiments at the large hadron collider ( lhc ) have already started testing many models of particle physics beyond the standard model ( sm ) , and particular attention is being paid to the minimal supersymmetric sm ( mssm ) and to other scenarios involving softly - broken supersymmetry ( susy ) . in the last few years , parameter inference methodologieshave been developed , applying both frequentist and bayesian statistics ( see e.g. , ) . while the efficiency of markov chain monte carlo ( mcmc ) techniques has allowed for a full exploration of multidimensional models ,the likelihood function from present data is multimodal with many narrow features , making the exploration task with conventional mcmc methods challenging .a powerful alternative to classical mcmc has emerged in the form of nested sampling , a monte carlo method whose primary aim is the efficient calculation of the bayesian evidence , or model likelihood . as a by - product , the algorithm also produces samples from the posterior distribution .those same samples can also be used to estimate the profile likelihood .multinest , a publicly available implementation of the nested sampling algorithm , has been shown to reduce the computational cost of performing bayesian analysis typically by two orders of magnitude as compared with basic mcmc techniques .multinest has been integrated in the ` superbayes ` code for fast and efficient exploration of susy models .having implemented sophisticated statistical and scanning methods , several groups have turned their attention to evaluating the sensitivity to the choice of priors and of scanning algorithms .those analyses indicate that current constraints are not strong enough to dominate the bayesian posterior and that the choice of prior does influence the resulting inference .while confidence intervals derived from the profile likelihood or a chi - square have no formal dependence on a prior , there is a sampling artifact when the contours are extracted from samples produced from bayesian sampling schemes , such as mcmc or multinest .given the sensitivity to priors and the differences between the intervals obtained from different methods , it is natural to seek out a quantitative assessment of their performance , namely their _ coverage _ : the probability that an interval will contain ( cover ) the true value of a parameter .the defining property of a 95% confidence interval is that the procedure adopted for its estimation should produce intervals that cover the true value 95% of the time ; thus , it is reasonable to check if the procedures have the properties they claim .while bayesian techniques are not designed with coverage as a goal , it is still meaningful to investigate their coverage properties .moreover , the frequentist intervals obtained from the profile likelihood or chi - square functions are based on asymptotic approximations and are not guaranteed to have the claimed coverage properties .here we report on recent studies investigating the coverage properties of both bayesian and frequentist procedures commonly used in the literature .we also highlight the numerical and sampling challenges that have to be met in order to obtain a sufficienlty high - resolution mapping of the profile likelihood when adopting bayesian algorithms ( which are typically designed to map out the posterior mass , instead ) . 
for the sake of example, we consider in the following the so - called msugra or constrained minimal supersymmetric standard model ( cmssm ) , a model with fairly strong universality assumptions regarding the susy breaking parameters , which reduce the number of free parameters to be estimated to just five , denoted by the symbol : common scalar ( ) , gaugino ( ) and tri linear ( ) mass parameters ( all specified at the gut scale ) plus the ratio of higgs vacuum expectation values and , where is the higgs / higgsino mass parameter whose square is computed from the conditions of radiative electroweak symmetry breaking ( ewsb ) .coverage studies require extensive computational expenditure , which would be unfeasible with standard analysis techniques . therefore , in ref . a class of machine learning devices called artificial neural networks ( anns ) was used to approximate the most computationally intensive sections of the analysis pipeline .inference on the parameters of interest requires relating them to observable quantities , such as the sparticle mass spectrum at the lhc , denoted by , over which the likelihood is defined .this is achieved by evolving numerically the renormalization group equations ( rges ) using publicly available codes , which is however a computationally intensive procedure .one can view the rges simply as a mapping from , and attempt to engineer a computationally efficient representation of the function . in ,an adequate solution was provided by a three - layer perceptron , a type of feed - forward neural network consisting of an input layer ( identified with ) , a hidden layer and an output layer ( identified with the value of that we are trying to approximate ) . the weight and biases defining the network were determined via an appropriate training procedure , involving the minimization of a loss function ( here , the discrepancy between the value of predicted by the network and its correct value obtained by solving the rges ) defined over a set of 4000 training samples .a number of tests on the accuracy and noise of the network were performed , showing a correlation in excess of 0.9999 between the approximated value of and the value obtained by solving the rges for an independent sample .a second classification network was employed to distinguish between physical and un - physical points in parameter space ( i.e. , values of that do not lead to physically viable solutions to the rges ) .the final result of replacing the computationally expensive rges with the anns is presented in fig .[ fig : nn_comparison ] , which shows that the agreement between the two methods is excellent , within numerical noise . by using the neural network , a speed - up factor of about compared with scans using the explicit spectrum calculator was observed .comparison of bayesian posteriors obtained by solving the rges fully numerically ( black lines , giving 68% and 95% regions ) and neural networks ( blue lines and corresponding filled regions ) , from simulated atlas data .the red diamond gives the true value for the benchmark point adopted . from .,title="fig : " ] comparison of bayesian posteriors obtained by solving the rges fully numerically ( black lines , giving 68% and 95% regions ) and neural networks ( blue lines and corresponding filled regions ) , from simulated atlas data .the red diamond gives the true value for the benchmark point adopted . from .,title="fig : " ] we studied the coverage properties of intervals obtained for the so - called `` su3 '' benchmark point . 
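the surrogate can be sketched as a one - hidden - layer regression network trained by full - batch gradient descent on a mean - squared - error loss over 4000 points . the synthetic target function below stands in for the rge mapping , and the architecture and learning - rate choices are assumptions ; the network actually used in the analysis was trained with its own procedure and data .

```python
import numpy as np

rng = np.random.default_rng(7)

# inputs play the role of the model parameters theta, the target plays the role of
# one derived observable; the true function below is synthetic, not the RGEs
def true_map(theta):
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2 - 0.3 * theta[:, 0] * theta[:, 1]

n_train, n_in, n_hidden = 4000, 2, 20
X = rng.uniform(-2, 2, size=(n_train, n_in))
y = true_map(X)[:, None]

W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1));    b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)                    # hidden layer
    pred = H @ W2 + b2
    err = pred - y                              # loss = mean squared error
    # backpropagation of the squared-error gradient
    dW2 = H.T @ err / n_train; db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / n_train;  db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

X_test = rng.uniform(-2, 2, size=(500, n_in))
pred = np.tanh(X_test @ W1 + b1) @ W2 + b2
corr = np.corrcoef(pred.ravel(), true_map(X_test))[0, 1]
print("correlation between network and true mapping on test points:", round(corr, 4))
```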
to this end, we need the ability to generate pseudo-experiments with fixed at the value of the benchmark. we adopted a parabolic approximation of the log-likelihood function (as reported in ref. ), based on the measurement of edges and thresholds in the invariant mass distributions for various combinations of leptons and jets in the final state of the selected candidate susy events, assuming an integrated luminosity of 1 for atlas. note that the relationship between the sparticle masses and the directly observable mass edges is highly non-linear, so a gaussian is likely to be a poor approximation to the actual likelihood function. furthermore, these edges share several sources of systematic uncertainties, such as jet and lepton energy scale uncertainties, which are only approximately communicated in ref . finally, we introduce the additional simplification that the likelihood is also a multivariate gaussian with the same covariance structure. we constructed pseudo-experiments and analyzed them with both mcmc (using a metropolis-hastings algorithm) and multinest. altogether, our neural network mcmc runs have performed a total of likelihood evaluations, in a total computational effort of approximately cpu-minutes. we estimate that solving the rges fully numerically would have taken about 1100 cpu-years, which is at the boundary of what is feasible today, even with a massive parallel computing effort. the results are shown in fig. [ fig : mcmc_coverage ], where it can be seen that the methods have substantial over-coverage for 1-d intervals, which means that the resulting inferences are conservative. while it is difficult to unambiguously attribute the over-coverage to a specific cause, the most likely cause is the effect of boundary conditions imposed by the cmssm. when is composed of parameters of interest, , and nuisance parameters, , the profile likelihood ratio is defined as where is the conditional maximum likelihood estimate (mle) of with fixed and are the unconditional mles. when the fit is performed directly in the space of the weak-scale masses (i.e., without invoking a specific susy model and hence bypassing the mapping ), there are no boundary effects, and the distribution of (when is true) is a chi-square with a number of degrees of freedom given by the dimensionality of . since the likelihood is invariant under reparametrizations, we expect to also be distributed as a chi-square. if the boundary is such that or , then the resulting interval will be modified. more importantly, one expects the denominator since is unconstrained, which will lead to . in turn, this means more parameter points being included in any given contour, which leads to over-coverage. the impact of the boundary on the distribution of the profile likelihood ratio is not insurmountable. it is not fundamentally different from several common examples in high-energy physics where an unconstrained mle would lie outside of the physical parameter space. examples include downward fluctuations in event-counting experiments when the signal rate is bounded to be non-negative. another common example is the measurement of sines and cosines of mixing angles that are physically bounded, though an unphysical mle may lie outside the allowed region. the size of this effect is related to the probability that the mle is pushed to a physical boundary. if this probability can be estimated, it is possible to estimate a corrected threshold on .
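the boundary effect invoked above can be illustrated with a one-dimensional toy: a gaussian measurement of a parameter that is physically bounded to be non-negative, as in the event-counting example. near the boundary the constrained mle sticks at zero, the profile likelihood ratio is reduced, and intervals built from the asymptotic chi-square threshold over-cover; far from the boundary the nominal 68.3% is recovered. the toy likelihood and the threshold q_cut = 1 are assumptions for illustration, not the cmssm likelihood used in the study.

....
import numpy as np

rng = np.random.default_rng(2)

def q_mu(mu, x, sigma):
    """-2 ln of the profile likelihood ratio for a gaussian measurement x
    of a parameter mu that is physically bounded to mu >= 0."""
    mu_hat = max(x, 0.0)                      # mle pushed onto the boundary when x < 0
    return ((x - mu)**2 - (x - mu_hat)**2) / sigma**2

def coverage(mu_true, sigma=1.0, n_pseudo=20000, q_cut=1.0):
    """empirical coverage of the interval {mu : q_mu <= q_cut};
    q_cut = 1 is the nominal 68.3% chi-square(1) threshold."""
    x = rng.normal(mu_true, sigma, size=n_pseudo)
    covered = np.array([q_mu(mu_true, xi, sigma) <= q_cut for xi in x])
    return covered.mean()

if __name__ == "__main__":
    for mu_true in (0.0, 0.5, 3.0):
        # near the boundary the interval over-covers; far from it the
        # asymptotic 68.3% is recovered
        print(f"mu_true = {mu_true}: coverage = {coverage(mu_true):.3f}")
....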
for a precise threshold with guaranteed coverage, one must resort to a fully frequentist neyman construction. a similar coverage study (but without the computational advantage provided by anns) has been carried out for a few cmssm benchmark points for simulated data from future direct detection experiments. their findings indicate substantial under-coverage for the resulting intervals, especially for certain choices of bayesian priors. both works clearly show the timeliness and importance of evaluating the coverage properties of the reconstructed intervals for future data sets. coverage for various types of intervals for the cmssm parameters, from realizations, employing mcmc for the reconstruction (each pseudo-experiment is reconstructed with samples). green/circles (red/squares) is for the 68% (95%) error. from .,title="fig : " ] for highly non-gaussian problems like supersymmetric parameter determination, inference can depend strongly on whether one chooses to work with the posterior distribution (bayesian) or the profile likelihood (frequentist). there is a growing consensus that both the posterior and the profile likelihood ought to be explored in order to obtain a fuller picture of the statistical constraints from present-day and future data. this raises the question of the algorithmic solutions available to reliably explore both the posterior and the profile likelihood in the context of susy phenomenology. the profile likelihood ratio defined in eq. is an attractive choice as a test statistic, for under certain regularity conditions, wilks showed that the distribution of converges to a chi-square distribution with a number of degrees of freedom given by the dimensionality of . clearly, for any given value of , evaluation of the profile likelihood requires solving a maximisation problem in many dimensions to determine the conditional mle. while posterior samples obtained with multinest have been used to estimate the profile likelihood, the accuracy of such an estimate has been questioned. as mentioned above, evaluating profile likelihoods is much more challenging than evaluating posterior distributions. therefore, one should not expect that a vanilla setup for multinest (which is adequate for an accurate exploration of the posterior distribution) will automatically be optimal for profile likelihood evaluation. in ref .
the question of the accuracy of profile likelihood evaluation from multinest was investigated in detail .we report below the main results .the two most important parameters that control the parameter space exploration in multinest are the number of live points which determines the resolution at which the parameter space is explored and a tolerance parameter , which defines the termination criterion based on the accuracy of the evidence .generally , a larger number of live points is necessary to explore profile likelihoods accurately .moreover , setting to a smaller value results in multinest gathering a larger number of samples in the high likelihood regions ( as termination is delayed ) .this is usually not necessary for the posterior distributions , as the prior volume occupied by high likelihood regions is usually very small and therefore these regions have relatively small probability mass . for profile likelihoods , however , getting as close as possible to the true global maximum is crucial and therefore one should set to a relatively smaller value . in ref . it was found that and produce a sufficiently accurate exploration of the profile likelihood in toy models that reproduce the most important features of the cmssm parameter space . in principle , the profile likelihood does not depend on the choice of priors .however , in order to explore the parameter space using any monte carlo technique , a set of priors needs to be defined .different choices of priors will generally lead to different regions of the parameter space to be explored in greater or lesser detail , according to their posterior density . as a consequence, the resulting profile likelihoods might be slightly different , purely on numerical grounds .we can obtain more robust profile likelihoods by simply merging samples obtained from scans with different choices of bayesian priors .this does not come at a greater computational cost , given that a responsible bayesian analysis would estimate sensitivity to the choice of prior as well .the results of such a scan are shown in fig .[ fig : cmssm_profile_1d ] , which was obtained by tuning multinest with the above configuration , appropriate for an accurate profile likelihood exploration , and by merging the posterior samples from two different choices of priors ( see for details ) .this high - resolution profile likelihood scan using multinest compares favourably with the results obtained by adopting a dedicated genetic algorithm technique , although at a slightly higher computational cost ( a factor of ) .in general , an accurate profile likelihood evaluation was about an order of magnitude more computationally expensive than mapping out the bayesian posterior . , located in the focus point region ) and the best - fit point found in the stau co - annihilation region ( ) respectively .the upper and lower panel show the profile likelihood and values , respectively .green ( magenta ) horizontal lines represent the ( ) approximate confidence intervals .multinest was run with 20,000 live points and ( a configuration deemed appropriate for profile likelihood estimation ) , requiring approximately 11 million likelihood evaluations . from . 
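the recipe just described — tuning the sampler for resolution and then merging the samples obtained under different priors — leads to a simple post-processing step: bin the parameter of interest and keep, in each bin, the best likelihood found among all samples falling in it. the sketch below illustrates this binned profiling on a toy one-dimensional likelihood; the sample arrays, the likelihood and the binning are assumptions made for illustration and do not reproduce the superbayes/multinest machinery.

....
import numpy as np

def profile_from_samples(theta_of_interest, logl, bins=50, ranges=None):
    """bin the parameter of interest and keep, in each bin, the highest
    log-likelihood found among the samples falling in it; the result is
    a binned estimate of the profile log-likelihood."""
    lo, hi = ranges or (theta_of_interest.min(), theta_of_interest.max())
    edges = np.linspace(lo, hi, bins + 1)
    idx = np.clip(np.digitize(theta_of_interest, edges) - 1, 0, bins - 1)
    prof = np.full(bins, -np.inf)
    np.maximum.at(prof, idx, logl)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, prof - prof.max()        # normalise to the global best fit

# two toy "scans" with different priors: only the sampled points differ,
# the likelihood itself is prior-independent, so the chains can simply be merged
rng = np.random.default_rng(3)
def loglike(m0):                              # assumed toy likelihood, peaked at m0 = 1
    return -0.5 * ((m0 - 1.0) / 0.2)**2

scan_flat = rng.uniform(0.0, 3.0, 100000)
scan_log = np.exp(rng.uniform(np.log(0.05), np.log(3.0), 100000))
merged = np.concatenate([scan_flat, scan_log])

centers, prof = profile_from_samples(merged, loglike(merged), bins=60, ranges=(0.0, 3.0))
# points with -2*delta(ln L) <= 1 delimit the approximate 68% interval, as in the figure above
inside = centers[-2.0 * prof <= 1.0]
print("approximate 68% interval: [%.2f, %.2f]" % (inside.min(), inside.max()))
....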
as the lhc impinges on the most anticipated regions of susy parameter space, the need for statistical techniques able to cope with the complexity of susy phenomenology is greater than ever. an intense effort is underway to test the accuracy of parameter inference methods, both in the frequentist and the bayesian framework. coverage studies such as the one presented here require highly accelerated inference techniques, and neural networks have been demonstrated to provide a speed-up factor of up to with respect to conventional methods. a crucial improvement required for future coverage investigations is the ability to generate pseudo-experiments from an accurate description of the likelihood. both the representation of the likelihood function and the ability to generate pseudo-experiments are now possible with the workspace technology in roofit/roostats. we encourage future experiments to publish their likelihoods using this technology. finally, an accurate evaluation of the profile likelihood remains a numerically challenging task, much more so than the mapping out of the bayesian posterior. particular care needs to be taken in appropriately tuning bayesian algorithms targeted at the exploration of the posterior mass (rather than likelihood maximisation). we have demonstrated that the multinest algorithm can be successfully employed for approximating the profile likelihood functions, even though it was primarily designed for bayesian analyses. in particular, it is important to use a termination criterion that allows multinest to explore high-likelihood regions to sufficient resolution. _ acknowledgements : _ we would like to thank the organizers of phystat11 for a very interesting workshop. we are grateful to yashar akrami, jan conrad, joakim edsjö, louis lyons and pat scott for many useful discussions. l. moneta, k. belasco, k. cranmer, a. lazzaro, d. piparo, __ , _ the roostats project _, _ proceedings of science _ (2010), proceedings of the 13th international workshop on advanced computing and analysis techniques in physics research, india, [ http://xxx.lanl.gov/abs/arxiv:1009.1003 ].
we present recent results aiming at assessing the coverage properties of bayesian and frequentist inference methods , as applied to the reconstruction of supersymmetric parameters from simulated lhc data . we discuss the statistical challenges of the reconstruction procedure , and highlight the algorithmic difficulties of obtaining accurate profile likelihood estimates .
new publications are added to the scholarly record at an accelerating pace .this point is realized by observing the evolution of the amount of publications indexed in thomson scientific s citation database over the last fifteen years : 875,310 in 1990 ; 1,067,292 in 1995 ; 1,164,015 in 2000 , and 1,511,067 in 2005 .however , the extent of the scholarly record reaches far beyond what is indexed by thompson scientific . while thompson scientific focuses primarily on quality - driven journals ( roughly 8,700 in 2005 ) , they do not index more novel scholarly artifacts such as preprints deposited in institutional or discipline - oriented repositories , datasets , software , and simulations that are increasingly being considered scholarly communication units in their own right . while the size ( and growth ) of the scholarly record is impressive , the extent of its use is even more staggering .for instance , in november 2006 , elsevier s science direct , which provides access to articles from approximately 2,000 journals , celebrated its 1 billionth full - text download since counting started in april of 1999 . and, again , the extent of scholarly usage clearly reaches far beyond elsevier s repository .furthermore , usage events include not only full - text downloads , but also events such as requesting services from linking servers , downloading bibliographic citations , emailing abstracts , etc . to a large extent ,the effect of usage behavior on the scholarly process is a horizon that is only beginning to be understood and , if properly studied , will offer clues to the evolutionary trends of science , quantitative models of the value of scholarly artifacts , and services to support scholars .the andrew w. mellon funded mesur project at the research library of the los alamos national laboratory aims at developing metrics for assessing scholarly communication artifacts ( e.g. articles , journals , conference proceedings , etc . ) and agents ( e.g. authors , institutions , publishers , repositories , etc . ) on the basis of scholarly usage . 
in order to do this ,the mesur project makes use of a representative collection of bibliographic , citation and usage data .this data is collected from a wide variety of sources including academic publishers , secondary publishers , institutional linking servers , etc .expectations are that the collected data will eventually encompass tens of millions of bibliographic records , hundreds of millions of citations , and billions of usage events .mining such a vast data set in an efficient , performing , and flexible manner presents significant challenges regarding data representation and data access .this article presents , the owl ontology used by mesur to represent bibliographic , citation and usage data in an integrated manner .the proposed mesur ontology is practical , as opposed to all encompassing , in that it represents those artifacts and properties that , as previously shown in , are realistically available from modern scholarly information systems .this includes bibliographic data such as author , title , identifier , publication date and usage data such as the ip address of the accessing agent , the date and time of access , type of usage , etc .finally , another novel contribution of this work is the hybrid storage and access architecture in which relational database and triple store technology are combined .this is achieved by storing core data and relationships in the triple store and auxiliary data in a relational database .this design choice is driven by the need to keep the size of the triple store to a level that can realistically be handled by current technologies .the combination of the data architecture and scholarly ontology presented in this article provide the foundation for the large - scale modeling and analysis of scholarly artifacts and their usage .a semantic network ( sometimes called a multi - relational network or multi - graph ) is composed of a set of nodes ( representing heterogeneous artifacts ) connected to one another by a set of qualified , or labeled , edges . in a graphtheoretic sense , a semantic network is a directed labeled graph .because an edge is labeled , two nodes can be connected to one another by an infinite number of edges . however , in most cases , the possible interconnections between node types is constrained to a predetermined set .this predetermined set is made explicit in the semantic network s associated ontology .an ontology is generally defined as a set of abstract classes , their relationship to one another , and a collection of inference rules for deriving implicit relationships .an ontology makes no explicit reference to the actual instances of the defined abstract classes ; this is the role of the semantic network .an ontology is related to the developer s api in object oriented programming languages such as c++ and java ( minus the explicit representation of class methods / functions ) .for example , the set of relationships of an ontological class are known as the class properties and , in the object oriented lexicon , can be understood as class fields .also , a taxonomy is usually expressed in a semantic network ontology .a taxonomy of sub- and super - classes support the inheritance of class properties .for instance , if all mammals are warm blooded , then all humans are warm blooded because all humans are mammals . in an inheritance hierarchy , the warm blooded property of mammals is inherited by all sub - classes of mammal ( e.g. 
human ) .figure [ fig : ont - inst - demo ] diagrams the relationship between an ontology and its semantic network instantiation .the circles represents objects that are instances of the dash - dot pointed to abstract classes ( the squares ) .the three lower squares are subclasses of a more general top - level class ( denoted by the dashed edges ) .the horizontal edges in the ontology denote permissible property types in the instantiation and thus , corresponding horizontal labeled edges in the semantic network may exist .figure [ fig : ont - inst - demo ] does not expose the range of conceptual nuances that can be expressed by modern ontology languages and thus , only provides a rudimentary representation of the relationship between an ontology and its semantic network instantiation ., scaledwidth=30.0% ] the most popular semantic network representational framework is the resource description framework and schema , or rdf(s ) .rdf(s ) represents all nodes and edges by universal resource identifiers ( uri ) .the uri approach supports the use of namespacing such that the uri ` http://www.science.org#article ` has a different meaning , or connotation , than what may be understood by the uri ` http://www.newspaper.net#article ` .the web ontology language ( owl ) is an extension of rdf(s ) that supports a richer vocabulary ( e.g. promotes many set theoretical concepts ) .protg is perhaps the most popular application for designing owl ontologies .while owl is primarily a machine readable language , an owl ontology can be diagrammed using the unified modeling language s ( uml ) class diagrams ( i.e. entity relationship diagrams ) .modern semantic network data stores represent the relationship between two nodes by a _triple_. for instance , the triple , scaledwidth=40.0% ] states that the resource identified by knows the resource identified by , where and are nodes and ` http://xmlns.com/foaf/0.1/#knows ` is a directed labeled edge ( see figure [ fig : foaf - mini ] ) .the meaning of ` knows ` is fully defined by the uri ` http://xmlns.com/foaf/0.1/ ` .the union of instantiated foaf triples is a foaf semantic network .current platforms for storing and querying such semantic networks are called _triple stores_. many open source and proprietary triple stores currently exist .various querying languages exist as well .the role of the query language is to provide the interface to access the data contained in the triple store .this is analogous to the relationships between sql and relational databases .perhaps the most popular triple store query language is sparql .an example sparql query is .... select ?x where ( ?x foaf : knows vub : cgershen ) . .... in the above query , the ` ?x ` variable is bound to any node that is the domain of a triple with an associated predicate of ` http://xmlns.com/foaf/0.1/#knows ` and a range of ` http://homepages.vub.ac.be/#cgershen ` .thus , the above query returns all people who know ` vub : cgershen ` ( i.e. carlos gershenson ) .the ontology plays a significant role in many aspects of a semantic network .figure [ fig : full - system ] demonstrates the role of the ontology in determining which real world data is harvested , how that data is represented inside of the triple store ( semantic network ) , and finally , what queries and inferences are possible to execute . 
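the triple and query machinery sketched above can be exercised in a few lines of python with the rdflib library (an assumption made here for illustration; it is not part of the systems described in this article). the example mirrors the foaf:knows triple and the query quoted above, written in current sparql syntax with braces rather than the draft syntax used in the text.

....
from rdflib import Graph, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
VUB = Namespace("http://homepages.vub.ac.be/#")

g = Graph()
# one triple per statement: subject --foaf:knows--> object, as in the example above
g.add((URIRef("http://example.org/people#marko"), FOAF.knows, VUB.cgershen))
g.add((URIRef("http://example.org/people#johan"), FOAF.knows, VUB.cgershen))

# the query from the text: who knows vub:cgershen?
results = g.query(
    """
    SELECT ?x WHERE { ?x foaf:knows vub:cgershen . }
    """,
    initNs={"foaf": FOAF, "vub": VUB},
)
for row in results:
    print(row.x)
....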
, scaledwidth=37.5% ]in general , an ontology s classes , their relationships , and inferences are determined according to what is being modeled , for what problems that model is trying to solve , and how that model s classes can be instantiated according to real world data .thus , there were three primary requirements to the development of the mesur ontology : 1 .realistically available real world data 2 .ability to study usage behavior 3 .scalability of the triple store instantiation . without real - world data ,an ontology serves only as a conceptual tool for understanding a particular domain and , in such cases , ontologies of this nature may be very detailed in what they represent .however , for ontologies that are designed to be instantiated by real world data , the ontology is ultimately constrained by data availability .thus , the mesur ontology is constrained to bibliographic and usage data since these are the primary sources of scholarly data . in the scholarly community , while articles , journals , conference proceedings , and the like are well documented and represented in formats that lend themselves to analysis , other information , such as usage data , tends to be less explicit due to the inherent privacy issues surrounding individual usage behavior .therefore , a primary objective of the mesur project is the acquisition of large - scale usage data sets from providers world - wide .the purpose of the mesur project is to study usage behavior in the scholarly process and therefore , usage modeling is a necessary component of the mesur ontology . given both usage and bibliographic data , it will be possible to generate and validate metrics for understanding the ` value ' of all types of scholarly artifacts .currently , the scholarly community has one primary means of understanding the value of a journal and thus its authors : the isi impact factor . with a semantic network data structure that includes not only article ( and thus , journal ) citation , but also authorship , usage , and institutional relationships , new metrics that not only rank journals , but also conferences , authors , and institutions will be created and validated .finally , the proposed ontology was engineered to handle an extremely large semantic network instantiation ( on the order of 50 million articles with a corresponding 1 billion usage events ) .the mesur ontology was engineered to make a distinction between required base - relationships and those , that if needed , can be inferred from the base - relations .futhermore , due to the fact that the mesur ontology was developed to support the large - scale analysis of usage , many of the metadata properties such as article title or author name are not explicitly represented in the ontology and thus , as will be demonstrated , such data can be accessed outside the triple store by reference to a relational database .other efforts have produced and exploited scholarly ontologies , but they do not cover the needs of the mesur project for two primary reasons .first , they generally lack the integration of publication , citation and usage data , which mesur requires in order to represent and analyze these crucial stages of the public scholarly communication process .second , scalability appears to not have been a major concern when designing the ontologies and thus , instantiating them at the order of what mesur will be representing is unfeasible . 
sometimes , the ontology is too elaborate , adding complexity that rarely pays off for the simple reason that it is hard to realistically come by data to populate defined properties ( e.g. detailed author or affiliation information ) .other times , the ontology requires the storage of information that can not realistically be represented for vast data collections using current triple store technologies .several scholarly ontologies are available in the daml ontology library .while they focus on bibliographic constructs , they do not model usage events .the same is true of the semantic community web portal ontology , which , in addition maintains many detailed classes whose instantiation is unrealistic given what is recorded by modern scholarly information systems .the scholonto ontology was developed as part of an effort aimed at enabling researchers to describe and debate , via a semantic network , the contributions of a document , and its relationship to the literature .while this ontology supports the concept of a scholarly document and a scholarly agent , it focuses on formally summarizing and interactively debating claims made in documents , not on expressing the actual use of documents .moreover , support for bibliographic data is minimal whereas support for discourse constructs , not required for mesur , is very detailed .the abc ontology was primarily engineered as a common conceptual model for the interoperability of a variety of metadata ontologies from different domains .although the abc ontology is able to represent bibliographic and usage concepts by means of constructs such as artifact ( e.g. article ) , agent ( e.g. author ) , and action ( e.g. use ) , it is designed at a level of generality that does not directly support the granularity required by the mesur project .an interesting ontology - based approach was developed by the ingenta metastore project .unfortunately , again , the ingenta ontology does not support expressing usage of scholarly documents , which is a primary concern in mesur .nevertheless , the approach is inspiring because ingenta faces significant challenges regarding scalability of the ontology - based representation , storage and access of their bibliographic metadata collection , which covers approximately 17 million journal articles .however , the scale of the mesur data set is several orders of magnitude larger , calling for optimizations wherever possible .for example , given the mesur project s focus on usage , storing bibliographic properties ( author names , abstract , titles , etc . ) in the triple store , as done by ingenta , is not essential . 
as a result , in order to improve triple store query efficiency , mesur stores such data in a relational database , and the mesur ontology does not explicitly represent these literals .the principles espoused by the ontologyx ontology are inspiring .ontologyx uses _ context _ classes as the glue " for relating other classes , an approach that was adopted for the mesur ontology .for instance , the mesur ontology does not have a direct relationship between an article and its publishing journal .instead , there exists a publishing context that serves as an n - ary operator uniting a journal , the article , its publication date , its authors , and auxiliary information such as the source of the bibliographic data .the context construct is intuitive and allows for future extensions to the ontology .ontologyx also helped to determine the primary abstract classes for the mesur ontology .unfortunately , ontologyx is a proprietary ontology for which very limited public information is available , making direct adoption unfeasible for mesur . as a matter of fact, all inspiration was derived from a single powerpoint presentation from the 2005 fbrb workshop . finally , in the realm of usage datarepresentation , no ontology - based efforts were found .nevertheless , the following existing schema - driven approaches were explored and served as inspiration : the openurl contextobject approach to facilitate oai - pmh - based harvesting of scholarly usage events , the xml log standard to represent digital library logs , and the counter schema to express journal level usage statistics .the mesur project makes use of a triple store to represent and access its collected data .while the triple store is still a maturing technology , it provides many advantages over the relational database model . for one ,the network - based representation supports the use of network analysis algorithms . for the purposes of the mesur project , a network - based approach to data analysis will play a major role in quantifying the value of the scholarly artifacts contained within it .other benefits that are found with triple store technologies that are not easily reproducible within the relational database framework include ease of schema extension and ontological inferencing .a novel contribution of the presented ontology is its solution to the problem of scalability found in modern triple store technologies .while semantic networks provide a flexible medium for representing and searching knowledge , current triple store applications do not support the amount of data that can be represented at the upper limit of what is possible with modern relational database technologies .therefore , it was necessary to be selective of what information is actually modeled by the mesur ontology . for the mesur project , much of the data associated with each scholarly artifact is maintained outside the triple store in a relational database .the typical bibliographic record contains , for example , an article s identifiers ( e.g. doi , sici , etc . ) , authors , title , journal / conference / book , volume , issue , number , and page numbers .typical usage information contains , for example , the users identifier ( e.g. 
ip address ) , the time of the usage event , and a session identifier .an example of the various bibliographic and usage properties are outlined in the table [ tab : docprop ] and table [ tab : useprop ] , respectively .note that the connection between the bibliographic record and the usage event occurs through the doc_id ( bolded properties ) .the doc_id is a internally generated identifier created during the mesur project s ingestion process ..example bibliographic properties [ tab : docprop ] [ cols="<,<",options="header " , ] the two tables demonstrate how bibliographic and usage data can be easily represented in a relational database . from the relational database representation , a rdf n - triple data file can be generated .one such solution for this relational database to triple store mapping is the d2r mapper .however , note that not all data in the relational database is exported to this intermediate format . instead, only those properties that promote triple store scalability and usage research were included .thus , article titles , journal issues and volumes , names of authors , to name a few , are not explicitly represented within the triple store and thus , are not modeled by the ontology . if a particular artifact property that is not in the ontology is required for a computation , the computing algorithm references the relational database holding the complete representation the acquired data .for example , bi - directional resolution of the artifact with doc_id is depicted in figure [ fig : query - model ] where the resolving identifier is specific to the artifact ( for the sake of diagram readability , assume that is b5e1ab73 - 26b5 - 41f0-a83f - b47b4d737 from table [ tab : docprop ] and [ tab : useprop ] ) .this model is counter to what is seen in other scholarly ontologies such as the ingenta ontology .this design choice was a major factor that prompted the engineering of a new ontology for bibliographic and usage modeling ., scaledwidth=37.5% ]the mesur ontology is currently at version 2007 - 01 at ` http://www.mesur.org/schemas/2007-01/mesur ` ( abbreviated ` mesur ` ) .full html documentation of the ontology can be found at the namespace uri .the following sections will describe how bibliographic and usage data is modeled to meet the requirements of understanding large - scale usage behavior , while at the same time promoting scalability .the most general class in owl is ` owl : thing ` .the mesur ontology provides three subclasses of ` owl : thing ` .these mesur classes are ` mesur : agent ` , ` mesur : document ` , and ` mesur : context ` .this is represented in figure [ fig : uml - thing ] where an edge denotes a ` rdfs : subclassof ` relationship . 
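before turning to the individual classes, the hybrid storage design described above can be illustrated: the triple store holds only the graph structure, with the internally generated doc_id acting as the node identifier, while titles, journal names and other literals live in a relational table keyed by the same doc_id. in the sketch below, sqlite3 and rdflib stand in for whatever database and triple store are actually used, and the property names, uris and sample values are illustrative only.

....
import sqlite3
from rdflib import Graph, Namespace, URIRef

MESUR = Namespace("http://www.mesur.org/schemas/2007-01/mesur#")
DOC = Namespace("http://example.org/doc/")

# relational side: the bibliographic literals, keyed by an internal doc_id
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (doc_id TEXT PRIMARY KEY, title TEXT, journal TEXT)")
db.execute("INSERT INTO documents VALUES (?, ?, ?)",
           ("b5e1ab73", "an example article title", "an example journal"))

# triple-store side: only the graph structure, with the doc_id as node identifier
g = Graph()
g.add((DOC["b5e1ab73"], MESUR.usedBy, URIRef("http://example.org/agent/42")))

# bi-directional resolution: from a node in the graph back to its metadata ...
def metadata(doc_uri):
    doc_id = doc_uri.rsplit("/", 1)[-1]
    return db.execute("SELECT title, journal FROM documents WHERE doc_id = ?",
                      (doc_id,)).fetchone()

# ... and from a relational record to the usage edges recorded in the graph
def usage_of(doc_id):
    return list(g.objects(DOC[doc_id], MESUR.usedBy))

print(metadata(DOC["b5e1ab73"]))
print(usage_of("b5e1ab73"))
....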
,scaledwidth=25.0% ] the ` context ` classes serve as the glue " by which ` agent`s and ` document`s interact .a ` context ` is analogous to ` rdf : bag ` in that it is an n - ary operator unifying the literals and objects pointed to by its respective properties .all relationships between ` agent`s and ` document`s occurs through a particular ` context ` .however , as will be demonstrated , direct relationships can be inferred .all inferred properties are denoted by the ( i ) " notation in the following uml class diagrams .all inferred properties are superfluous relationships since there is no loss of information by excluding their instantiation ( the information is contained in other relationships ) .the algorithms for inferring them will be discussed in their respective ` context ` subsection .currently , all the mesur classes are specifications or generalizations of other classes .no holonymy / meronymy ( composite ) class definitions are used at this stage of the ontology s development .figure [ fig : full - ontology ] presents the complete taxonomy of the mesur ontology .this diagram primarily serves as a reference .each class will be discussed in the following sections . , scaledwidth=47.5% ] the ` agent ` taxonomy is diagrammed in figure [ fig : uml - agent ] .an ` agent ` can either be a ` human ` or an ` organization ` .a ` human ` is an actual individual whether that individual can be uniquely identified ( e.g. an document author ) or not ( e.g. a document user ) .the ` authored ` property is an inferred relationship and denotes that an ` agent ` authored a particular ` document ` and the ` published ` property denotes that an ` agent ` has published a ` document ` .the ` authored ` and ` published ` property can be inferred by information within the ` publishes ` context discussed later .similarly , the ` used ` property denotes that an ` agent ` has used a particular ` document ` .the ` used ` property can be inferred from the ` uses ` context .an ` organization ` is a class that is used for both bibliographic and usage provenance purposes .given that bibliographic and usage data , at the large - scale , must be harvested from multiple institutions , it is necessary to make a distinction between the various data providers . in many cases ,an ` organization ` can be both a bibliographic ( e.g. a publisher ) and a usage ( e.g. a repository ) provider .furthermore , an ` organization ` can also be an author s academic institution ( e.g. a university ) . finally , all ` agent`s can have any number of affiliations .for an ` organization ` , this is a recursive definition which allows ` a`n organization to have many affiliate ` organization`s while at the same time allowing for the ` human ` leaf nodes of an ` organization ` to be represented by the same construct .the rules governing the inference of the ` hasaffiliation ` and ` hasaffiliate ` properties are discussed in the section describing the ` affiliation ` context . ,scaledwidth=47.5% ] a ` document ` is an abstract concept of a particular scholarly product such as those depicted in figure [ fig : uml - document ] . 
,scaledwidth=47.5% ] in general , ` document ` objects are those artifacts that are written , used , and published by ` agent`s .thus , a ` document ` can be a specific article , a book , or some grouping such as a ` journal ` , conference ` proceedings ` , or an ` editedbook ` .there are two ` document ` subclasses to denote whether the ` document ` is a collection ( ` group ` ) or an individually written work ( ` unit ` ) .a ` journal ` and ` proceedings ` is an abstract concept of a collection of volumes / issues .an edition to a proceedings or journal is associated with its abstract ` group ` by the ` partof ` property .the ` authoredby ` , ` containedin ` , ` publishedby ` , and ` contains ` properties can be inferred from the ` publishes ` context .also , the ` usedby ` property can be inferred from the ` uses ` context . as previously stated , all properties from the ` agent ` and ` document ` classes that are marked by the ( i ) " notation are inferred properties .these properties can be automatically generated by inference algorithms and thus , are not required for insertion into the triple store .what this means is that inherent in the triple store is the data necessary to infer such relationships .depending on the time ( e.g. query complexity ) and space ( e.g. disk space allocation ) constraints , the inclusion of these inferred properties is determined . at any time, these properties can be inserted or removed from the triple store .the various inferred properties are determined from their respective ` context ` objects .therefore , the mesur ` owl : objectproperty ` taxonomy provides two types of object properties : ` contextproperty ` and ` inferredproperty ` ( see figure [ fig : uml - property ] ) . ,scaledwidth=32.5% ] a ` context ` class is an n - ary operator much like an ` rdf : bag ` .current triple store technology expresses tertiary relationships .that means that only three resources are related by a semantic network edge ( i.e. a subject uri , predicate uri , and object uri ) .however , many real - world relationships are the product of multiple interacting objects .it is the role of the various ` context ` classes to provide relationships for more than three uris .the ` context ` classes are represented in figure [ fig : uml - context ] . ,scaledwidth=47.5% ] the ` context ` class has two subclasses : ` event ` and ` state ` .an ` event ` is some measurement done by some provider at a particular point in time .for example , the ` publishes ` and ` uses ` events are recorded by publisher and repositories at some point in time . as a side note ,the ` hasprovider ` property of the ` event ` class is an efficient model for the representation of provenance constructs . instead of reifying every statement with provenance data ( e.g. triple was supplied by provider ) ,a single triple is provided for each ` event ` ( e.g. event was supplied by provider ) . on the other side of the ` context ` taxonomy are the ` state ` contexts .a ` state ` is some measurement that can , in some cases , occur over a span of time and are used to represent complex relationships between artifacts or as a way of attaching high - level properties ( i.e. metadata ) to an artifact .the next sections will provide a detailed description of each ` context ` class along with spaqrl queries for inferring all the aforementioned ` inferredproperty ` properties . 
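the role of a context class as an n-ary glue node can be made concrete as follows: rather than a single article-to-journal triple, a publishes node carries hasunit, hasauthor, hasgroup, hastime and hasprovider edges, from which the binary authoredby/authored properties are then inferred. the sketch below uses rdflib and modern sparql syntax; the exact property spellings are illustrative, and the inference rule is only a paraphrase of the queries given in the next sections.

....
from rdflib import Graph, Namespace, BNode, Literal, RDF

MESUR = Namespace("http://www.mesur.org/schemas/2007-01/mesur#")
EX = Namespace("http://example.org/")

g = Graph()
pub = BNode()                                 # one publishes event, acting as an n-ary relation
g.add((pub, RDF.type, MESUR.Publishes))
g.add((pub, MESUR.hasUnit, EX.article1))
g.add((pub, MESUR.hasAuthor, EX.marko))
g.add((pub, MESUR.hasGroup, EX.journalA))
g.add((pub, MESUR.hasTime, Literal("2007")))
g.add((pub, MESUR.hasProvider, EX.publisherX))  # provenance attached once per event

# inferring the binary authoredBy/authored properties from the context node,
# in the spirit of the sparql rules given below
inferred = g.query(
    """
    SELECT ?unit ?author WHERE {
        ?x a mesur:Publishes ; mesur:hasUnit ?unit ; mesur:hasAuthor ?author .
    }
    """,
    initNs={"mesur": MESUR},
)
for unit, author in list(inferred):
    g.add((unit, MESUR.authoredBy, author))
    g.add((author, MESUR.authored, unit))

print(g.serialize(format="turtle"))
....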
a `publishes ` event states , in words , that a particular bibliographic data provider has acknowledged that a set of authors have authored a unit that was published in a group by some publisher at a particular point in time .a ` publishes ` object relates a single bibliographic data provider , ` agent ` authors , a ` unit ` , an ` agent ` publisher , a ` group ` , and a publication iso-8601 date time literal .figure [ fig : publish - context ] represents a ` publishes ` context and the inferable properties ( dashed edges ) of the various associated artifacts .all inferred properties have a respective inverse relationship .note that both ` preprintarticle ` and ` book ` publishing are represented with owl restrictions ( i.e. they are not published in a ` group ` ) .the details of these restrictions can be found in the actual ontology definition . ,scaledwidth=47.5% ] the dashed edges in figure [ fig : publish - context ] denote properties that are a ` rdfs : subclassof ` the ` inferredproperty ` .for instance , the abstract triple is inferred given the results of the following sparql query , where for the sake of brevity , the ` prefix ` declarations are removed and the ` insert ` statement represents the insert of its triple argument into the triple store ..... select ?b where ( ?x rdf : type mesur : publishes ) ( ?x mesur : hasunit ?a ) ( ?x mesur : hasauthor ?b ) insert < ?a mesur : authoredby ?b > insert < ?b mesur : authored ? a > . .... to infer the ` group ` property ` contains ` and ` unit ` property ` containedin ` , the following sparql query and ` insert ` statements suffice ..... select ?b where ( ?x rdf : type mesur : publishes ) ( ?x mesur : hasunit ? a ) ( ? x mesur : hasgroup ?b ) insert < ?a mesur : containedin ?b mesur : contains ?.... finally , the ` published ` and ` publishedby ` properties are inferred by : .... select ? a ?b where ( ?x rdf : type mesur : publishes ) ( ?x mesur : haspublisher ?a ) ( ?x mesur : hasgroup ?b ) insert < ?a mesur : published ?b > insert < ?b mesur : publishedby ? a > . .... the ` uses ` context denotes a single usage event where an ` agent ` uses a ` document ` at a particular point in time .the ` uses ` context is diagrammed in figure [ fig : use - context ] .like the ` publishes ` context , the ` uses ` context is an n - ary construct . depending on the usage provider ,a session identifier and access type is recorded .a session identifier denotes the user s login session .an access type denotes , for example , whether the used ` document ` had its abstract viewed or was fully downloaded ., scaledwidth=45.0% ] the following sparql query and ` insert ` statements represent the inference of the ` usedby ` and ` used ` inverse properties of an ` article ` document and ` agent ` , respectively .also , note the last two ` insert ` statements .these statements demonstrate how ` group ` usage information can also be inferred ..... select ?c where ( ?x rdf : type mesur : uses ) ( ? xmesur : hasdocument ?a ) ( ?a rdf : type mesur : article ) ( ?x mesur : hasuser ?b ) ( ? y rdf : type mesur : publishes ) ( ?y mesur : hasunit ?a ) ( ?y mesur : hasgroup ?c ) insert < ?a mesur : usedby ?b > insert < ?b mesur : used ? a > insert < ?c mesur : usedby ?b mesur : used ?.... 
in many instances , one artifact is related to another by a particular semantic .however , in some instance , one artifact is related to another by a semantic label and a floating point weight value .furthermore , that weighted relationship may have been recorded over some period of time .the ` weightedrelationship ` state context is used to represent such relationships .the ` citation ` state context denotes a weighted citation and is a ` rdfs : subclassof ` the ` weightedrelationship ` . for ` unit ` to ` unit ` citation ,the weight value is ( or no weight property to reduce the triple store footprint ) and there are no start and end time points .however , for ` group ` to ` group ` citations , the weight of the ` citation ` represents how many times a particular ` group ` cites another over some period of time .hence , it is necessary to denote the start and end points of both the source and the sink nodes .figure [ fig : citation - context ] diagrams a ` citation ` context .furthermore , the sink and source types can be either an ` agent ` or a ` document ` , thus , ` organization ` to ` organization ` citations can be represented ., scaledwidth=47.5% ] given ` unit ` to ` unit ` citations , the ` citation ` weight between any two ` group`s can be inferred .the following example sparql query generates the ` citation ` object for citations from 2007 articles in the journal of informetrics ( issn : 1751 - 1577 ) to 2005 - 2006 articles in scientometrics ( issn : 0138 - 9130 ) .assume that the uri of the journals are their issn numbers , the date time is represented as a year instead of the lengthy iso-8601 representation , and the ` count ` command is analogous to the sql ` count ` command ( i.e. returns the number of elements returned by the variable binding ) ..... select ?x where ( ?x rdf : type mesur : citation ) ( ?x mesur : hassource ?a ) ( ?x mesur : hassink ?b ) ( ? a rdf : type mesur : article ) ( ?b rdf : type mesur : article ) ( ?y rdf : type mesur : publishes ) ( ?z rdf : type mesur : publishes ) ( ?y mesur : hastime ?t ) and ( ?t > 2004 and ?t < 2007 ) ( ? z mesur : hastime ?u ) and ? u = 2007 ( ?y mesur : hasunit ?a ) ( ?z mesur : hasunit ?b ) ( ?y mesur : hasgroup ?c ) ( ? z mesur : hasgroup ?d ) ( ? c mesur : partof urn : issn:1751 - 1577 ) ( ? d mesur : partof urn : issn:0138 - 9130 ) insert < _ 123 rdf : type mesur : citation > insert < _ 123 mesur : hassource urn : issn:1751 - 1577 > insert < _ 123 mesur : hassink urn : issn:0138 - 9130 > insert< _ 123 mesur : hasweight count(?x ) > insert < _ 123 mesur.hassourcestarttime 2007 > insert < _ 123 mesur : hassourceendtime 2007 > insert < _ 123 mesur.hassinkstarttime 2005 > insert < _ 123 mesur : hassinkendtime 2006 > ..... figure [ fig : coauthor - context ] diagrams the ` coauthor ` weighted relationship context .the weight value of this relationship denotes the number of times two authors have coauthored together over a some period of time ., scaledwidth=37.5% ] the following sparql query demonstrates how to infer the weighted ` coauthor ` relationship between the authors marko ( ` lanl : marko ` ) and herbert ( ` lanl : herbertv ` ) over all time .a time period for coauthorship counting can be inserted in a fashion similar to the ` citation ` example previous ..... 
select ?x where ( ?x rdf : type mesur : publishes ) ( ?x mesur : hasauthor lanl : marko ) ( ?x mesur : hasauthor lanl : herbertv ) insert < _ 123 rdf : type mesur : coauthor > insert< _ 123 mesur : hassource lanl : marko > insert < _ 123 mesur : hassink lanl : herbertv > insert < _ 123 mesur : hasweight count(?x ) > insert < _ 456 rdf : type mesur : coauthor > insert < _ 456 mesur : hassource lanl : herbertv > insert < _ 456 mesur : hassink lanl : marko > insert < _ 456 mesur : hasweight count(?x ) > ..... an ` affiliation ` context denotes that a particular ` human ` is affiliated with an ` organization ` or that an ` organization ` is affiliated with another ` organization ` .an ` affiliation ` can be represented as occurring over a particular period of time .an example of an ` affiliation ` state context is diagrammed in figure [ fig : affiliation - context ] ., scaledwidth=37.5% ] the ` hasaffiliate ` and ` hasaffiliation ` properties of the ` agent ` classes can be inferred by the following sparql query ..... select ?b where ( ?x rdf : type mesur : affiliation ) ( ?x mesur : hasaffiliator ?a ) ( ?x mesur : hasaffiliatee ?b ) insert < ?a mesur : hasaffiliate?b > insert < ?b mesur : hasaffiliation?a > ..... the primary objective of the mesur project is to study the relationship between usage - based value metrics ( e.g. usage impact factor ) and citation - based value metrics ( e.g. isi impact factor and the y - factor ) . the ` metric ` context allows for the explicit representation of such metrics .the ` metric ` context has both the ` numericmetric ` and ` nominalmetric ` subclasses .figure [ fig : numeric - context ] diagrams the 2007 ` impactfactor ` numeric metric context for a ` group ` .note that the ` context ` hierarchy in figure [ fig : uml - context ] does not represent the set of ` metric`s explored by the mesur project .this taxonomy will be presented in a future publication ., scaledwidth=37.5% ] the example sparql query and respective ` insert ` statements demonstrate how to calculate the 2007 impact factor for the proceedings of the joint conference on digital libraries ( jcdl issn : 1082 - 9873 ) . the 2007 impact factor for the jcdl is defined as the number of citations from any ` unit ` published in 2007 to articles in the jcdl proceedings published in either 2005 or 2006 normalized by the total number of articles published by jcdl in 2005 and 2006 ..... select ?x where ( ?x rdf : type mesur : publishes ) ( ?x mesur : hasunit ?a ) ( ?x mesur : hasgroup ?b ) ( ? b mesur : partof urn : issn:1082 - 9873 ) ( ? x mesur : hastime ?t ) and ( ?t > 2004 and ?t < 2007 ) ( ?y rdf : type mesur : citation ) ( ?y mesur : hassource ?c ) ( ?y mesur : hassink ?a ) ( ?z rdf : type mesur : publishes ) ( ?z mesur : hasunit ?c ) ( ? z mesur : hastime ?u ) and ? u = 2007 select ? y where ( ?y rdf : type mesur : publishes ) ( ?y mesur : hasgroup ?a ) ( ?a mesur : partof urn : issn:1082 - 9873 ) ( ?y mesur : hastime ?t ) and ( ?t > 2004 and ?t < 2007 ) insert < _ 123 rdf : type mesur : impactfactor > insert < _ 123 mesur : hasobject urn : issn:1082 - 9873 > insert < _ 123 mesur : hasstarttime 2007 > insert < _ 123 mesur : hasendtime 2007 > insert < _ 123 mesur : hasnumbericvalue ( count(?x ) / count(?y ) ) > ..... 
the 2007 usage impact factor for the jcdl proceedings can be calculated by using the following sparql queries and ` insert ` commands .the 2007 usage impact factor for the jcdl is defined as the number of usage events in 2007 that pertain to articles published in the jcdl proceedings in either 2005 or 2006 normalized by the total number of articles published by the jcdl in 2005 and 2006 ..... select ?x where ( ?x rdf : type mesur : uses ) ( ?x mesur : hasdocument ?a ) ( ?x mesur : hastime ?t = 2007 ( ?y rdf : type mesur : publishes ) ( ?y mesur : hasunit ?a ) ( ?y mesur : hasgroup ?c ) ( ? c mesur : partof urn : issn:1082 - 9873 ) ( ?y mesur : hastime ?u ) and ( ?u > 2004 and ?u < 2007 ) select ? y where ( ?y rdf : type mesur : publishes ) ( ?y mesur : hasgroup ?a ) ( ?a mesur : partof urn : issn:1082 - 9873 ) ( ?y mesur : hastime ?t ) and ( ?t > 2004 or ?t < 2007 ) insert < _ 123 rdf : type mesur : usageimpactfactor > insert < _ 123 mesur : hasobject urn : issn:1082 - 9873 > insert < _ 123 mesur : hasnumericvalue ( count(?x ) / count(?y ) ) > . ....as demonstrated , the presented metrics can be easily calculated using simple sparql queries .however , more complex metrics , such as those that are recursive in definition , can be computed using other semantic network algorithms . for example , the eigenvector - based y - factor can be computed in semantic networks using the grammar - based random walker framework presented in .the objective of the mesur project is to understand the space of such metrics and their application to valuing artifacts in the scholarly community .future work in this area will report the finding that are derived from such algorithms .this article presented the mesur ontology which has been engineered to provide an integrated model of bibliographic , citation , and usage aspects of the scholarly community .the ontology focuses only on that information for which large - scale real world data exists , supports usage research , and whose instantiation is scalable to an estimated 50 million articles and 1 billion usage events .a novel approach to data representation was defined that leverages both relational database and triple store technology .the mesur project was started in october of 2006 and thus , is still in its early stages of development . while a trim ontology has been presented , the effects of this ontology on load and query times is still inconclusive .future work will present benchmark results of the mesur triple store .this research is supported by a grant from the andrew w. mellon foundation .m. j. kurtz , g. eichhorn , a. accomazzi , c. s. grant , m. demleitner , and s. s. murray , `` the bibliometric properties of article readership information , '' _ journal of the american society for information science and technology _ , vol .56 , no . 2 ,pp . 111128 , 2005 .t. brody , s. harnad , and l. carr , `` earlier web usage statistics as predictors of later citation impact . ''_ journal of the american society for information science and technology _ , vol .57 , no . 8 , pp . 1060 1072 , 2006 .j. bollen , h. van de sompel , j. smith , and r. luce , `` toward alternative metrics of journal impact : a comparison of download and citation data , '' _ information processing and management _ , vol .41 , no . 6 , pp .14191440 , 2005 .[ online ] .available : http://www.arxiv.org/pdf/cs.dl/0503007 j. bollen and h. 
van de sompel , `` usage impact factor : the effects of sample characteristics on usage - based impact metrics , '' los alamos national laboratory , tech .rep . , 2006 .[ online ] .available : http://arxiv.org/abs/cs/0610154 n. f. noy , w. grosso , and m. a. musen , `` the knowledge model of protege-2000 : combining interoperability and flexibility , '' in _ international conference on knowledge engineering and knowledge management _ , juan - les - pins , france , 2000 .a. magkanaraki , g. karvounarakis , t. t. anh , v. christophides , and d. plexousakis , `` ontology storage and querying , '' cole nationale suprieure des tlcommunications , tech ., april 2002 .[ online ] .available : http://139.91.183.30:9090/rdf/publications/tr308.pdf s. staab , j. angele , s. decker , m. erdmann , a. hotho , a. maedche , h. p. schnurr , r. studer , and y. sure , `` semantic community web portals , '' in _9th international world wide web conference _ , amsterdam , netherlands , may 2000 .[ online ] .available : http://www9.org/w9cdrom/134/134.html s. b. shum , e. motta , and j. domingue , `` scholonto : an ontology - based digital library server for research documents and discourse , '' _ international journal on digital libraries _, vol . 3 , no . 3 , pp .237248 , 2000 .[ online ] .available : citeseer.ist.psu.edu/shum00scholonto.html m. a. goncalves , m. luo , r. shen , m. f. ali , and e. a. fox , `` an xml log standard and tool for digital library logging analysis , '' in _ecdl 2002 : lncs 2458 _ , m. agosti and c. thanos , eds.1em plus 0.5em minus 0.4emberlin : springer - verlag ,september 2002 , pp .. m. a. rodriguez , `` grammar - based random walkers in semantic networks , '' los alamos national laboratory , tech . rep .la - ur-06 - 7791 , 2007 .[ online ] .available : http://www.soe.ucsc.edu/~okram/papers/random-grammar.pdf
the large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. as a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. the real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which renders the scholarly process amenable to statistical analysis and computational support. we present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
the problem of finding self - consistent stellar models for galaxies is of wide interest in astrophysics .usually , once the potential - density pair ( pdp ) is formulated as a model for a galaxy , the next step is to find the corresponding distribution function ( df ) .this is one of the fundamental quantities in galactic dynamics specifying the distribution of the stars in the phase - space of positions and velocities . although the df can generally not be measured directly , there are some observationally accesible quantities that are closed related to the df : the projected density and the light - of - sight velocity , provided by photometric and kinematic observations , are examples of df s moments .thus , the formulation of a pdp with its corresponding equilibrium dfs establish a self - consistent stellar model that can be corroborated by astronomical observations . on the other hand, a fact that is usually assumed in astrophysics is that the main part of the mass of a typical spiral galaxy is concentrated in a thin disc ( ) .accordingly , the study of the gravitational potential generated by an idealized thin disc is a problem of great astrophysical relevance and so , through the years , different approaches has been used to obtain the pdp for such kind of thin disc models ( see and toomre ( ) , as examples ) .now , a simple method to obtain the pdp of thin discs of finite radius was developed by , the simplest example of disc obtained by this method being the kalnajs disc ( ) . in a previous paper ( )we use the hunter method in order to obtain an infinite family of axially symmetric finite thin discs , characterized by a well - behaved surface density , whose first member is precisely the well - known kalnajs disc ( ) and , more recently , a detailed study of the kinematics characterizing such discs was be made by .so , in order to obtain self - consistent stellar models for this family of generalized kalnajs discs , we will consider at the present paper the derivation of some two - integral dfs for the first four members of the family . now , as is stated by jeans s theorem , an equilibrium df is a function of the isolating integrals of motion that are conserved in each orbit and , as it has been shown , it is possible to find such kind of dfs for pdps such that there is a certain relationship between the mass density and the gravitational potential .the simplest case of physical interest corresponds to spherically symmetric pdps , which are described by isotropic dfs that depends on the total energy .indeed , as was be shown by , it is possible to obtain this kind of isotropic dfs by first expressing the density as a function of the potential and then solving an abel integral equation .recently , found that a similar procedure can be performed in the axially symmetric case , where the equilibrium df depends on the energy and the angular momentum about the axis of symmetry , i.e. the two classical integrals of motion .they developed a formalism that essentially combines both the eddington formulae and the expansion in order to obtain the df s even part , starting from a density that can be expressed as a function of the radial coordinate and the gravitational potential .once such even part is determined , the df s odd part can be obtained by introducing some reasonable assumptions about the mean circular velocity or using the maximum entropy principle . 
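as an illustration of the spherical isotropic case mentioned above — the density written as a function of the potential and the resulting abel integral equation inverted — the sketch below implements the eddington inversion numerically and checks it against the plummer sphere, for which (in units g = m = b = 1) the density-potential relation and the resulting df are known in closed form. the plummer model is used here only as a convenient test case and is not one of the disc models studied in this paper; the sketch also assumes that the derivative of the density with respect to the potential vanishes at zero potential, so that the boundary term of the inversion formula drops out.

....
import numpy as np

def drho_dpsi(psi):
    """plummer sphere in units g = m = b = 1: rho(psi) = (3/4 pi) psi^5."""
    return (15.0 / (4.0 * np.pi)) * psi**4

def eddington_f(eps, n=4000, h=1e-4):
    """isotropic df from the eddington inversion,
       f(eps) = 1/(sqrt(8) pi^2) d/d eps  int_0^eps (drho/dpsi) dpsi / sqrt(eps - psi),
    assuming drho/dpsi -> 0 at psi = 0 so the boundary term vanishes.
    the substitution psi = eps (1 - u^2) removes the square-root singularity."""
    def inner(e):
        u = (np.arange(n) + 0.5) / n          # midpoint rule on u in (0, 1)
        return 2.0 * np.sqrt(e) * np.mean(drho_dpsi(e * (1.0 - u**2)))
    dI = (inner(eps + h) - inner(eps - h)) / (2.0 * h)
    return dI / (np.sqrt(8.0) * np.pi**2)

if __name__ == "__main__":
    for eps in (0.2, 0.5, 0.8):
        exact = 24.0 * np.sqrt(2.0) / (7.0 * np.pi**3) * eps**3.5
        print(f"eps = {eps}: numeric = {eddington_f(eps):.6f}, exact = {exact:.6f}")
....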
on the other hand , another method appropriate for finding dfs that depend only on the jacobi integral , , for axially symmetric flat galaxy models was introduced by for the case of disc - like systems , and basically consists in expressing the df as a derivative of the surface mass density with respect to the gravitational potential . such a method does not demand solving an integral equation ; instead , it is necessary to properly express the mass density as a function of the gravitational potential , a procedure that is only possible in some cases . in this paper we will use both of the mentioned approaches in order to obtain equilibrium dfs for some generalized kalnajs discs . accordingly , the paper is organized as follows . first , in section [ sec : finteq ] , we present the fundamental aspects of the two methods that we will use in order to obtain the equilibrium dfs . then , in section [ sec : dfkal ] , we present a summary of the main aspects of the generalized kalnajs discs , in section [ sec : kal ] , and then , in sections [ sec : dfm1 ] [ sec : dfm4 ] , we derive the dfs for the first four members of the family . finally , in section [ sec : conc ] , we summarize our main results . assume that and are , respectively , the gravitational potential and the energy of a star in a stellar system . one can choose a constant such that the system has only stars of the energy and then define a relative potential and a relative energy ( ) , such that is the energy of escape from the system . both the mass density and the df ( is a position vector and is a velocity vector ) are related to through poisson s equation , where is the gravitational constant . for the case of an axially symmetric system , it is customary to use cylindrical polar coordinates , where is denoted by . as is well known , such a system admits two isolating integrals for any orbit : the component of the angular momentum about the -axis , , and the relative energy . hence , by the jeans theorem , the df of a steady - state stellar system in an axially symmetric potential can be expressed as a non - negative function of and , denoted by . such a df , which vanishes for , is related to the mass density through eq . ( [ int0 ] ) . in this subject , is usually separated into even and odd parts , and respectively , with respect to the angular momentum , where ,\ ] ] and .\ ] ] so , by using , the integral given by ( [ int0 ] ) can be expressed as ( ) d\varepsilon . \label{inta2}\ ] ] for a given mass density , this relation can be considered as the integral equation determining , while the odd part satisfies the relation \varepsilon .\label{dfgen2}\ ] ] this integral equation was first found by and then applied by to calculate the odd df for the binney model , under the assumption of having some realistic rotational laws . as is not known , we can not compute directly by eq . ( [ dfgen2 ] ) , but what we can do is to obtain the most probable distribution functions under some suitable assumptions . once is known , , and therefore , can be obtained by means of the maximum entropy principle ( see ) , and we obtain , where is the parameter depending on the total angular momentum . obviously , . also , the system is non - rotating when and maximally rotating as , i.e.
, for , it is anticlockwise and {+}(\varepsilon , l_z) ] .the parameter reflects the rotational characteristics of the system .as it was pointed out by and recently by , the implementation of integral equations ( [ inta2 ] ) and ( [ dfgen2 ] ) demands that one can express as a function of and .this holds indeed for the case of disc - like systems , which surface mass density is related to through d\varepsilon .\label{inta3}\ ] ] in order to incorporate the formalisms developed for the 3-dimensional case to deal with disc - like systems , we have to generate a pseudo - volume density ( ) , according to which must take the place of in ( [ inta2 ] ) and ( [ dfgen2 ] ) .in particular , when , the corresponding even df is ( ) . \label{jo}\end{aligned}\ ] ] another simpler method to find equilibrium dfs corresponding to axially symmetric disc - like systems , was introduced by . such formalism deal with dfs that depend onthe jacobi s integral , i.e. the energy measured in a frame rotating with constant angular velocity ( subscript denotes a quantity measured in the rotating frame ) .it is convenient to define an effective potential in such way that , if we choose a frame in which the velocity distribution is isotropic , the df will be -independent and , from ( [ inta3 ] ) , the relation between the surface mass density and the df is reduced to here , and , i.e. the relative potential and the relative energy measured in the rotating frame . moreover , if one can express as a function of , differentiating both sides of ( [ metkal ] ) with respect to , we obtain note that in this formalism it is also necessary to express the mass density as a function of the relative potential .the generalized kalnajs discs form an infinite family of axially symmetric finite thin discs ( ) .the mass surface density of each model ( labeled with the positive integer ) is given by ^{m-1/2 } , \label{densidad}\ ] ] with the constants given by where is the total mass and is the disc radius .such mass distribution generates an axially symmetric gravitational potential , that can be written as where and are the usual legendre polynomials and the legendre functions of the second kind respectively , and are constants given by where is the gravitational constant . here , and are spheroidal oblate coordinates , related to the usual cylindrical coordinates through the relations in particular , we are interested in the gravitational potential at the disc , where and , so and . if we choose in such a way that , the corresponding relative potential for the first four members will be we shall restrict our attention to these four members .the formulae showed above defines the relative potentials that will be used to calculate the dfs by the implementation of the methods sketched in section [ sec : finteq ] .given the mass surface distribution ( [ densidad ] ) and the relative potential ( [ eq:4.22 ] ) , we can easily obtain the following relation : where ^{1/2}$ ] . 
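since the constants in ( [ densidad ] ) were lost in the extraction , the short sketch below evaluates the surface density of the first four members under the usual normalization in which it integrates to the total mass m over a disc of radius a ; this normalization reproduces the standard generalized kalnajs discs , but it is an assumption here and should be checked against the original expressions .

import numpy as np

def kalnajs_sigma(R, m, M=1.0, a=1.0):
    # surface density of the generalised kalnajs disc of order m (sketch);
    # sigma_m = (2 m + 1) M / (2 pi a**2) is assumed so that the density integrates to M
    sigma_m = (2 * m + 1) * M / (2 * np.pi * a ** 2)
    u = np.clip(1.0 - (np.asarray(R, dtype=float) / a) ** 2, 0.0, None)
    return sigma_m * u ** (m - 0.5)

# check the assumed normalisation for the first four members of the family
R = np.linspace(0.0, 1.0, 200001)
for m in (1, 2, 3, 4):
    integrand = 2.0 * np.pi * R * kalnajs_sigma(R, m)
    mass = np.sum((integrand[1:] + integrand[:-1]) * np.diff(R)) / 2.0
    print(m, round(mass, 4))   # all close to 1.0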
as we are dealing with disc - like systems , it is necessary to compute the pseudo - volume density by eq . ( [ seudorho ] ) in order to perform the integral ( [ inta2 ] ) . by using the right part of eq . ( [ jo ] ) with , which is equivalent to calculating the fricke component of ( [ pseudo1 ] ) , we obtain the even part of a df which depends only on the relative energy . at this point , we may notice that this corresponds to the df formulated by when . ( note that there is a difference of constants as a result of a different definition of the relative potential and relative energy . ) to obtain a full df , we use the maximum entropy principle by means of eq . ( [ dfgen1 ] ) and the result is . as is shown in figure [ fig : dfm1a ] , determines a particular rotational state in the stellar system ( from here on we set in order to generate the graphics , without loss of generality ) . as increases , the probability of finding a star with positive increases as well . a similar result can be obtained for , when the probability of finding a star with negative decreases as decreases , and the corresponding plots would be analogous to figure [ fig : dfm1a ] , after a reflection about . we can generalize this result if we perform the analysis in a rotating frame . in the first instance , it is necessary to deal with the effective potential in order to take into account the fictitious forces . choosing conveniently , the relative potential in the rotating frame takes the form , so the corresponding mass surface density and the pseudo - volume density can be expressed as and the resulting even part of the df in the rotating frame is , which can be derived following the same procedure used to find or by the direct application of eq . ( [ metkal2 ] ) . finally , one can come back to the original frame through the relation between , and to obtain ^{-1/2}}{2 \pi a \sqrt{\omega_0 ^ 2 - \omega^2 } } , \label{dfm1b1}\ ] ] which is totally equivalent to the binney and tremaine df for the kalnajs disc . its contours , shown in figure [ fig : dfm1c ] , reveal that the probability of finding stars with is zero , while it has a maximum when ( this defines the white strip shown in the figure ) and decreases as increases . nevertheless , this is quite unrealistic , as it represents a state with and means that the system , in disagreement with the observations , behaves like a rigid solid . however , we can generate a better df if we take only the even part of ( [ dfm1b1 ] ) and use the maximum entropy principle to obtain . as is shown in figure [ fig : dfm1b ] , there is a zone of zero probability , just in the intersection of the black zones produced by , and there are also two maximum probability stripes . the variation of the parameter leads to a change of inclination of the maximum probability stripes and it is easy to see that the df would be invariant under the sign of , by the definition of the even part . furthermore , plays a role similar to that of in figure [ fig : dfm1a ] , increasing the probability of finding stars with high as increases and vice versa . working in a rotating frame we found that the relative potential is given by , \label{psi21 } \end{split}\ ] ] while the surface mass density is given by ( [ densidad ] ) when . this case is a little more complicated than the usual kalnajs disc , because the analytical solution of the pseudo - volume density can not be performed with total freedom .
we will need to operate in a rotating frame conveniently chosen in such a way that the relation between the mass surface density and the relative potential becomes simpler . from eq . ( [ psi21 ] ) it is possible to see that , if we choose the angular velocity as , the relative potential is reduced to . now , we can express the surface mass density easily in terms of the relative potential by the relation , and the integral for the pseudo - volume density can be performed to obtain . then , using eq . ( [ jo ] ) , the resulting even part of the df in the rotating frame and the full df in the original frame are and respectively , where could take the values according to ( [ omega1 ] ) and is the constant given by . moreover , using the same arguments given in section ( [ sec : dfm2 ] ) , it is convenient to take the even part of ( [ dfm2a1 ] ) and obtain a new df using eq . ( [ dfgen1 ] ) , which is given by . a more general case can be derived without the assumption ( [ omega1 ] ) if we use the kalnajs method in order to avoid the pseudo - volume density integral . here we work in terms of the spheroidal oblate coordinates to obtain more easily the relation between the mass surface density and the relative potential , which can be expressed as . now , it is possible to rewrite this expression as with and , then , as can be expressed in terms of in the form , the relation between and is . now , by using eq . ( [ metkal2 ] ) , we obtain ^{1/2},\ ] ] and the result in the original frame is ^{\frac{1}{2}}. \label{dfm2b1}\ ] ] obviously , this df is the same as ( [ dfm2a1 ] ) when the condition ( [ omega1 ] ) is satisfied . finally , by eq . ( [ dfgen1 ] ) , the resulting df with maximum entropy is given by . we can see the behavior of these dfs in figures [ fig : dfm2a ] and [ fig : dfm2b ] . in figures [ fig : dfm2a](a ) and [ fig : dfm2a](b ) we show the contours of for the two rotational states given by ( [ omega1 ] ) . such a df is maximum over a narrow diagonal strip , near the zero probability region and , similarly to the case shown in figure [ fig : dfm1b ] , the probability decreases as increases . stellar systems characterized by different are shown in figures [ fig : dfm2b](a ) and [ fig : dfm2b](b ) , where we plot the contours of ( this equals when is given by ( [ omega1 ] ) ) . in this case the df varies more rapidly as decreases , originating narrower bands . the remaining figures show the contours of and for different values of the parameter , showing a behavior similar to that of . once again , if we want to use the kalnajs method , it is necessary to derive the relation and , according to ( [ denseta ] ) , this is possible if we can invert the equation of the relative potential , which in this case is given by , in order to obtain . to solve it , we must deal with a cubic equation and with its non - trivial solutions ; fortunately , we still have as a free parameter . one can easily note that it is possible to write ( [ psi3 ] ) as with and has to be chosen as . now , by replacing ( [ denseta ] ) into ( [ psi32 ] ) , we obtain and , by using eq . ( [ metkal2 ] ) and coming back to the original frame , the result is , while the respective df with maximum entropy is . in figure [ fig : dfm3a](a ) we show the contour of , while in figures [ fig : dfm3a](b ) , [ fig : dfm3a](c ) and [ fig : dfm3a](d ) we plot the contours of for different values of . we can see that the behavior of these dfs is opposite to the previous cases : as the jacobi integral increases , the df also increases .
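the algebra in the kalnajs route above reduces to a differentiation once the surface density has been written in terms of the relative potential in the rotating frame . the sketch below does that step symbolically for a toy profile ; the 1/(2 pi) prefactor comes from integrating an isotropic planar velocity distribution , and the logistic maximum - entropy weighting at the end is one standard choice with the required even part -- both are stated here as assumptions , since the corresponding equations are stripped from the text .

import sympy as sp

eps_r, psi_r, alpha, Lz, C = sp.symbols('varepsilon_r Psi_r alpha L_z C', positive=True)

# kalnajs step : f(eps_r) = (1 / 2 pi) dSigma/dPsi_r evaluated at Psi_r = eps_r ,
# illustrated with a toy relation Sigma = C * Psi_r**(3/2)
Sigma = C * psi_r ** sp.Rational(3, 2)
f_even = sp.simplify(sp.diff(Sigma, psi_r).subs(psi_r, eps_r) / (2 * sp.pi))
print(f_even)   # 3*C*sqrt(varepsilon_r)/(4*pi)

# maximum - entropy weighting : multiplying the even part by 2 / (1 + exp(-alpha*L_z))
# leaves the even part unchanged and lets alpha interpolate between a non - rotating
# (alpha = 0) and a maximally rotating system
f_full = 2 * f_even / (1 + sp.exp(-alpha * Lz))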
as we saw in section [ sec : dfm3 ] , in order to find a df using the kalnajs method , we must find as a function of the relative potential in order to obtain . for the disc , the relative potential can be expressed as . now , although we have to deal with a quartic equation , it is possible to rewrite ( [ psir4 ] ) as ^ 2 + \kappa_4,\ ] ] where and must be chosen as . finally , by using ( [ denseta ] ) and ( [ psi32 ] ) , we find the expression ^{7/2},\ ] ] which , by means of eq . ( [ metkal2 ] ) , can be used to derive the even part of the df in the rotating frame , ^{5/2 } } { 16 \pi \kappa_1^{7/2 } \sqrt{(\sqrt{\varepsilon_r - \kappa_4 } - \kappa_3)(\varepsilon_r - \kappa_4)}}.\ ] ] therefore , the corresponding df in the original frame is given by ^{5/2}}{16 \pi \kappa_{1}^{7/2 } g(\varepsilon , l_{z})[g^{2}(\varepsilon , l_{z } ) + \kappa_{3}]},\label{dfm4a1}\ ] ] where , and the respective df with maximum entropy is given by . in figure [ fig : dfm4a](a ) we show the contour of , while in figures [ fig : dfm4a](b ) , [ fig : dfm4a](c ) and [ fig : dfm4a](d ) we plot the contours of for different values of . as we can see , the behavior is analogous to that shown in figure [ fig : dfm3a ] . we presented the derivation of two - integral equilibrium dfs for some members of the family of generalized kalnajs discs previously obtained by . such two - integral dfs were obtained , essentially , by expressing them as functionals of the jacobi integral , as sketched in the formalism developed by . now , since such a formalism demands that the surface mass density can be written as a potential - dependent function , the above procedure can only be implemented for the first four members of the family , the discs with . indeed , the procedure requires that the expression giving the relative potential as a function of the spheroidal variable can be analytically inverted in order to express the surface mass density as a function of the relative potential . so , we can do this in a simple way for the discs with to ; however , when we must solve an equation of degree larger than four , which has no analytical solution . for the first two members of the family , the discs with , we also used the method introduced recently by in order to find the even part of the df and then , by introducing the maximum entropy principle , we can determine the full df . this procedure was also used for the other three discs , starting from the kalnajs method , so defining another class of two - integral dfs . such dfs describe stellar systems with a preferred rotational state , characterized by the parameter . this paper can be considered as a natural complement to the work previously presented by and , where the pdp formulation and the kinematics , respectively , of the generalized kalnajs discs were analyzed . now , by the construction of the corresponding two - integral dfs , the first four members of this family can be considered as a set of self - consistent stellar models for axially symmetric galaxies . j. r - c . thanks _ vicerrectora acadmica _ , universidad industrial de santander , for financial support .
we present the derivation of two - integral distribution functions for the first four members of the family of generalized kalnajs discs , recently obtained by , and which represent a family of axially symmetric galaxy models with finite radius and well behaved surface mass density . in order to do this we employ several approaches that have been developed starting from the potential - density pair and , essentially using the method introduced by , we obtain some distribution functions that depend on the jacobi s integral . now , as this method demands that the mass density can be properly expressed as a function of the gravitational potential , we can do this only for the first four discs of the family . we also find another kind of distribution functions by starting with the even part of the previous distribution functions and using the maximum entropy principle in order to find the odd part and so a new distribution function , as it was pointed out by . the result is a wide variety of equilibrium states corresponding to several self - consistent finite flat galaxy models . stellar dynamics galaxies : kinematics and dynamics .
[ sec : intro ] boundary conditions are paramount in many areas of computer modeling in science . at the atomic level ,finite samples require appropriate boundary conditions in order that atoms in the interior behave as if they were part of a larger or infinite sample , or as closely to this as is possible .one example of this is the calculation of the electronic properties of covalent materials where the surface is terminated with h atoms so that all the chemical valency is satisfied . in this way the homo ( highest occupied molecular orbital ) and the lumo ( lowest unoccupied molecular orbital ) states inside the sample can be obtained that are not very different from those expected in the bulk sample . inmaterials science the electronic band structure of a sample of crystalline si could be obtained by determining the electronic properties of a finite cluster terminated with h bonds at the surface . in practicethis is rarely done , as it is more convenient to use periodic boundary conditions and hence use bloch s theorem , but this technique has been used recently in graphene nanoribbons . for most samples ,the nature of the boundary , fixed , free or periodic only alters the properties of the sample by the ratio of the number of atoms on the surface to those in the bulk .this ratio is where is the number of atoms ( _ later referred to as vertices _ ) and is the dimension .of course this ratio goes to zero in the thermodynamic limit as the size of the system and leads to the important result that properties become independent of boundary conditions for large enough systems .similar statements can be made about the mechanical and vibrational properties of systems _ except _ for isostatic networks that lie on the border of mechanical instability . in this casethe boundary conditions are important no matter how large , and special care must be taken with devising boundary conditions so that the interior atoms behave as if they were part of an infinite sample , in as much as this is possible . in figure[ fig : figure1 ] , we show a part of a scanning probe microscope ( spm ) image of a bilayer of vitreous silica which has the chemical formula sio .the sample consists of an upper layer of tetrahedra with all the apexes pointing downwards where they join a mirror image in the lower layer . in the figurewe show the triangular faces of the upper tetrahedra , which form rigid triangles with a ( red ) si atom at the center and the ( black ) o atoms at the vertices of the triangles which are freely jointed to a good approximation .we refer to these networks as _ locally isostatic _ as the number of degrees of freedom of the equilateral triangle in two dimensions is exactly balanced by the shared pinning constraints ( 2 at each of the 3 vertices , so that ) .while the bilayers are locally isostatic , so too are the projections of corner - sharing triangles which are the focus of this paper .we will use the _ berlin a _ sample as the example throughout so that we can focus on this single geometry for pedagogical purposes . because experimental samples are always finite in extent and usually have irregular boundaries , including internal regions that are either absent , or not imaged it is necessary to develop appropriate boundary conditions . 
note that the option of cutting a rectangular piece out of the experimental image is not available because of the amorphous nature of the network , which means that it is not possible for the left side to connect to the right side as with a regular crystalline network . even if this were possible , it would be unwise to discard experimental data and hence lose information . in this paper we show how boundary conditions can be applied to locally isostatic systems which are not periodic . more precisely , we show rigorously that there are various ways to add back the exact number of missing constraints at the surface , in a way that they are sufficiently uniformly distributed around the boundary that the network is guaranteed to be isostatic everywhere . there is some limited freedom in the precise way these boundary conditions are implemented , and the boundary can be general enough to include internal holes . the proof techniques used here involve showing that _ all _ subgraphs have insufficient edge density for redundancy to occur . in the appendix , we give an algorithmic description of our boundary conditions and discuss in detail how to ensure the resulting boundary is sufficiently generic . using the pebble game , we verified on a number of samples that anchored boundary conditions , in which alternating free vertices are pinned , result in a globally isostatic state . the pebble game is an integer algorithm , based on laman s theorem , which for a particular network performs a rigid region decomposition : it finds the rigid regions , the hinges between them , and the number of floppy ( zero - frequency ) modes . we have used it to confirm that locally isostatic samples such as the one in this paper are isostatic overall with anchored boundary conditions . the results of this paper imply that , under a relatively mild connectivity hypothesis , this procedure is provably correct and thus relatively robust . additionally , the necessity of running the pebble game for each individual case is avoided . figure [ fig : figure2 ] shows the _ sliding _ boundary conditions . these make use of a different , simpler kind of geometric constraint at each unpinned surface site . the global effect on the network s degrees of freedom is like that of the anchored boundary conditions , and this setup is computationally reasonable . at the same time , the proofs for this case are simpler , and generalize more easily to handle situations such as holes in the sample . in figure [ fig : figure3 ] , we show the anchored boundary conditions . we have trimmed off the surface triangles in figure [ fig : figure1 ] that are only pinned at one vertex . this makes for a more compact structure whose properties are more likely to mimic those of a larger sample , and makes our mathematical statements easier to formulate . in addition we have had to remove the 3 purple triangles at the lower right hand side in order to get an even number of unpinned surface sites . when the network is embedded in the plane , this is possible , except for very degenerate samples ( see figure [ fig : figure3 ] ) .
figure caption : the boundary sites are shown as blue discs and the 3 purple triangles at the lower left of figure [ fig : figure3 ] have been removed . the red si atoms at the centers of the triangles in figure [ fig : figure1 ] have also been removed for clarity . the boundary is formed as a smooth analytic curve by using a fourier series with 16 sine and 16 cosine terms to match the number of surface vertices , where the center for the radius is placed at the centroid of the 32 boundary vertices . note that sliding boundary conditions do not require an even number of boundary sites .
figure caption : the alternating anchored sites on the boundary are shown as blue discs and the 3 purple triangles at the lower right are removed to give an even number of unpinned surface sites . the red si atoms at the centers of the triangles in figure [ fig : figure1 ] have been suppressed for clarity .
figure caption : the subgraph used in the proof that there are no rigid subgraphs larger than a single triangle ( see lemma [ lemma : rigcomps-2 ] ) .
figure caption : two samples with the 3 purple triangles at the lower left removed to give an even number of unpinned surface sites . the anchored sites are shown as blue discs , with an even number of surface sites in both graphs . the graph at the right has an even number of surface sites in _ both _ the outer and inner boundary . the red si atoms at the centers of the triangles have been suppressed for clarity . the green line goes through the boundary triangles .
figure caption : the si atoms , shown as red discs , at the center of each triangle are emphasized in this three - coordinated network . dashed edges are shown connecting to the anchored sites .
[ sec : comb ] intuitively , the internal degrees of freedom of systems like the ones in figures [ fig : figure1 ] and [ fig : figure3 ] correspond to the corners of triangles that are not shared . this is , in essence , the content of lemma [ lemma : indep ] proved below . proving lemma [ lemma : indep ] requires ruling out the appearance of _ additional _ degrees of freedom that could arise from _ sub - structures _ that contain more constraints than degrees of freedom . the essential idea behind combinatorial rigidity is that _ generically _ all geometric constraints are visible from the topology of the structure , as typified by laman s striking result showing the sufficiency of maxwell counting in dimension . genericity means , roughly , that there is no special geometry present ; in particular , generic instances of any topology are dense in the set of all instances .
in what follows , _ we will be assuming genericity _ , and then use results similar to laman s , in that they are based on an appropriate variation of maxwell counting . our proofs have a graph - theoretic flavor , relating certain hypotheses about connectivity to hereditary maxwell - type counts . we will model the flexibility in the upper layer of vitreous silica bilayers as systems of triangles , pinned together at the corners . the joints at the corners are allowed to rotate freely . a triangle ring network is _ rigid _ if the only available motions preserving triangle shapes and the network s connectivity are rigid body motions ; it is _ isostatic _ if it is rigid , but ceases to be so once any joint is removed . these are examples of body - pin networks from rigidity theory , a -dimensional specialization of body - hinge frameworks first studied by tay and whiteley in general dimensions ( in , there is a richer combinatorial theory of `` body - multipin '' structures , introduced by whiteley ; see jackson and jordn and the references therein for an overview of the area ) . the combinatorial model is a graph that has one vertex for each triangle and an edge between two triangles if they share a corner ( figure [ fig : figure6 ] ) . since we are assuming genericity , we will identify a geometric realization with the graph from now on . in what follows , we are interested in a particular class of graphs , which we call _ triangle ring networks _ . the definition of a triangle ring is as follows : ( a ) has only vertices of degree and , and is -connected ( to disconnect a -connected graph we need to remove at least vertices ) ; ( b ) there is a simple cycle in that contains all the degree vertices , and there are at least degree vertices ; ( c ) any edge cut set ( a set of edges whose removal results in a graph with connected components ) in that disconnects a subgraph containing only degree vertices has size at least . to set up some terminology , we call the degree vertices _ boundary vertices _ and the degree vertices _ interior vertices _ . a subgraph spanning only interior vertices is an _ interior subgraph _ . the reader will want to keep in mind the specific case in which is planar with a given topological embedding and is the outer face , as is the case in our figures . this means that subgraphs strictly interior to the outer face have only interior vertices , which explains our terminology . however , as we will discuss in detail later , the setup is very general . if the sample has holes , can leave the outer boundary and return to it : provided that it is simple , all the results here still apply . a theorem of tay and whiteley gives the degree of freedom counts for networks of -dimensional bodies pinned together . generically , there are no stressed subgraphs in such a network , with graph , of bodies and pins if and only if , where and are the number of vertices and edges of the subgraph . if ( [ eq : tw - count ] ) holds for all subgraphs , the rigid subgraphs are all isostatic , and they are the subgraphs where ( [ eq : tw - count ] ) holds with equality . [ lemma : indep ] any triangle ring network satisfies ( [ eq : tw - count ] ) . suppose the contrary . then there is a vertex - induced subgraph on vertices that violates ( [ eq : tw - count ] ) . if contains a vertex of degree 1 then also violates ( [ eq : tw - count ] ) , so we may assume that has minimum degree 2 . in this case , has at most vertices of degree , since it has maximum degree .
in particular , may be disconnected from by removing at most edges .if is an interior subgraph , we get a contradiction right away . alternatively ,at least one of the degree vertices in is degree in , and so on .if exactly one is , then is not -connected . if both are , then and there are only boundary vertices . either case is a contradiction .[ corrigcomps ] the rigid subgraphs of a triangle ring network are the subgraphs containing exactly vertices of degree and every other vertex has degree .moreover , any proper rigid subgraph contains at most one boundary vertex of .the first statement is straightforward .the second follows from observing that if a rigid subgraph has two vertices on the boundary of , then can not be -connected , since all the edges detaching from are incident on a single vertex .when is planar , these rigid subgraphs are regions cut out by cycles of length in the poincar dual . more generally in the planar case , subgraphs corresponding to regions that are smaller triangle ring networks with degree vertices have degrees of freedom .now we can consider our first anchoring model , which uses _ slider pinning _slider _ constrains the motion of a point to remain on a fixed line , rigidly attached to the plane .when we talk about attaching sliders to a vertex of the graph , we choose a point on the corresponding triangle , and constrain its motion by the slider . in the results used below, this point should be chosen generically ; for example the theory does not apply if the slider is attached at a pinned corner shared by two of the triangles . since we are only attaching sliders to triangles corresponding to degree vertices in , we may always attach sliders at an unpinned triangle corner .the notion of rigidity for networks of bodies with sliders is that of being _ pinned _ : the system is completely immobilized .a network with sliders is _ pinned - isostatic _ if it is pinned , but ceases to be so if any pin or slider is removed .the equivalent of the white - whiteley counts in the presence of sliders is a theorem of katoh and tanigawa , which says that a generic slider - pinned body - pin network is independent if and only if the body - pin graph satisfies ( [ eq : tw - count ] ) and where is the number of sliders on vertices of . hereis our first anchoring procedure .[ theo : slider - pinning ] adding one slider to each degree boundary vertex of a triangle ring network gives a pinned - isostatic network .let be an arbitrary subgraph with vertices and vertices of degree at most .that ( [ eq : tw - count ] ) holds is lemma [ lemma : indep ] .the fact that the only vertices of which get a slider are vertices with degree 2 in implies that ( [ eq : pin - count ] ) is also satisfied , and , by construction .we may think of this anchoring as rigidly attaching a rigid wire to the plane then constraining the boundary vertices to move on it .provided that the wire s path is smooth and sufficiently non - degenerate , this is equivalent , for analyzing infinitesimal motions , to putting the sliders in the direction of the tangent vector at each boundary vertex .see also figure [ fig : figure2 ] . 
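the counting arguments above are easy to mechanize . the sketch below builds the body graph from a list of triangles ( each given as a 3 - tuple of corner labels , an input format assumed here purely for illustration ) , evaluates the whole - graph tay whiteley count and reports how many sliders the anchoring procedure of theorem [ theo : slider - pinning ] would attach ; checking the count on every subgraph , which lemma [ lemma : indep ] guarantees for triangle ring networks , is deliberately not attempted .

from collections import defaultdict
from itertools import combinations

def body_graph(triangles):
    # one vertex per triangle , one edge per shared (pinned) corner
    shared = defaultdict(list)
    for t, corners in enumerate(triangles):
        for c in corners:
            shared[c].append(t)
    edges = [pair for tris in shared.values() for pair in combinations(tris, 2)]
    return len(triangles), edges

def anchoring_counts(n_bodies, edges):
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    boundary = [v for v in range(n_bodies) if deg[v] == 2]   # degree-2 bodies: the boundary triangles
    internal_floppy = 3 * n_bodies - 2 * len(edges) - 3      # whole-graph count, valid when (tw-count) holds
    sliders = 3 * n_bodies - 2 * len(edges)                  # equals len(boundary) when all degrees are 2 or 3
    return boundary, internal_floppy, sliders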
figure caption : a sample modified by removing triangles to form two internal _ holes _ . the boundary sites are shown as blue discs and the 3 purple triangles at the lower left of figure [ fig : figure3 ] have been removed . the red si atoms at the centers of the triangles in figure [ fig : figure1 ] have also been removed for clarity . the green line forms a continuous _ boundary _ which goes through all the surface sites , which must be an even number . the anchored ( blue ) sites then alternate with the unpinned sites on the green boundary curve , which has to cross the bulk sample in two places to reach the two internal holes . here there are 32 boundary sites , 5 boundary sites in the upper hole and 7 in the lower hole , giving a total even number of 44 boundary sites . where these crossings take place is arbitrary , but it is important that the anchored and unpinned surface sites alternate along whatever ( green ) boundary line is drawn .
next , we consider anchoring by immobilizing ( pinning ) some points completely . combinatorially , we model pinning a triangle s corner by adding two sliders through it . since we are still using sliders , the definitions of pinned and pinned - isostatic are the same as in the previous section . the analogue of ( [ eq : pin - count ] ) when we add sliders in groups of is : , where is the number of immobilized corners . [ theo : thumbtack - pinning ] let be a triangle ring network with an even number of degree vertices on . then , following in cyclic order , pinning every other boundary vertex that is encountered results in a pinned - isostatic network . let be an arbitrary subgraph of . if at most one of the vertices of is pinned , there is nothing to do . for the moment , suppose that no vertex of degree in is pinned . let be the number of pinned vertices in . we will show that for each of the pinned vertices , there is a distinct unpinned vertex of degree or in . this implies that in , at which point we know ( [ eq : thumbtack - count ] ) holds for . to prove the claim , let be a pinned vertex of . traverse the boundary cycle from . let be the next pinned vertex of that is encountered . if the chain from to along is in , the alternating pattern provides an unpinned degree vertex that is degree in . otherwise , this path leaves , which can only happen at a vertex with degree or in . continuing the process until we return to produces at least distinct unpinned degree vertices , since each step considers a disjoint set of vertices of . now assume that does have a pinned vertex of degree . the theorem will follow if ( [ eq : thumbtack - count ] ) holds strictly for . let and be the pinned vertices in immediately preceding and following . the argument above shows that there are at least unpinned degree or vertices in on the path in between and on . since these are in , we are done . when there are an odd number of boundary vertices in , theorem [ theo : thumbtack - pinning ] does not apply . this next lemma gives a simple reduction in many cases of interest . [ theo : odd ] let be a planar triangle ring network , with the outer face . suppose that there are an odd number of boundary vertices .
if is not a single cycle , then it is possible to obtain a network with an even number of boundary vertices by removing the intersection of a facial cycle of with , unless . the connectivity requirements for a triangle ring network , combined with planarity of , imply that the intersection of and any facial cycle of is a single chain . every boundary vertex is in the interior of such a chain , so some facial cycle contributes an odd number of boundary vertices . removing the edges in changes the parity of the number of boundary vertices . so far , we have worked with networks of triangles pinned together . now we augment the model to also include bars between pairs of the triangles . we will always take the endpoints of the bars to be free corners of triangles that are boundary vertices in the underlying network . combinatorially we model this by a graph on the same vertex set as , with an edge for each bar between a pair of bodies . in this case , the tay whiteley count becomes : , where is the number of edges in and is the number of edges in spanned by the vertices of . the anchoring procedures with sliders or immobilized vertices have analogues in terms of adding bars to create an isostatic network . these boundary conditions are illustrated on the right hand side of figure [ fig : figure8ab ] . also shown in figure [ fig : figure8ab ] in the left panel is a triangular scheme involving alternating unpinned surface sites , that is equivalent to anchoring . in both cases shown here the sample is free to rotate with respect to the page .
figure caption ( figure [ fig : figure8ab ] ) : with the 3 purple triangles at the lower left removed to give an even number of unpinned surface sites . on the left , alternating surface sites are connected to one another through triangulation of first and second neighbors , with the last three connections not needed ( these would lead to redundancy ) . hence there are three additional macroscopic motions when compared to figure [ fig : figure3 ] , which can be considered as being pinned to the page rather than to the _ internal frame _ shown by blue straight lines . on the right we illustrate anchoring with additional bars which connect all unpinned surface sites , except again three are absent , to avoid redundancy , and to give the three additional macroscopic motions when compared to figure [ fig : figure3 ] .
[ theo : bars-1 ] if has boundary vertices , we obtain an isostatic framework by taking the edges of to be . consider the new bars .
by construction and lemma [ lemma : indep ]we have .corollary [ corrigcomps ] and the connectivity hypotheses imply that no rigid subgraph of has more than of its degree vertices on the boundary of .this shows that no rigid subgraph of has a bar added to it .[ theo : bars-2 ] if has boundary vertices and is even , then taking to be any isostatic bar - joint network with vertex set consisting of boundary vertices chosen in an alternating pattern around results in an isostatic network .a triangulated -gon is a simple choice for . by lemma [ lemma : indep ], we are adding enough bars to remove all the internal degrees of freedom .the desired statement then follows from theorem [ theo : thumbtack - pinning ] by observing that pinning down the boundary vertices is equivalent , geometrically , to pinning down and then identifying the boundary vertices of to the vertices of .a result of white and whiteley on `` tie downs '' , then gives : in the situation of theorems [ theo : bars-1 ] and [ theo : bars-2 ] , adding _ any _ sliders results in a pinned - isostatic network .so far , we have shown how to render a floppy triangle ring network isostatic or pinned - isostatic .it is interesting to know when adding a single extra bar or slider results in a network that is stressed over all its members .this is a somewhat subtle question when adding bars or immobilizing vertices , but it has a simple answer for the sliding boundary conditions .we say that a triangle ring network is _ irreducible _ if : ( a ) every minimal edge cut set either detaches a single vertex from or both remaining components contain more than one boundary vertex of ; ( b ) every minimal edge cut set disconnects one vertex from .[ lemma : rigcomps-2 ] a triangle ring network has no proper rigid subgraphs if and only if is irreducible .recall , from corollary [ corrigcomps ] , that a proper rigid subgraph of has exactly vertices of degree and the rest degree .thus , can be disconnected from by a cut set of size or . in the former case , corollary [ corrigcomps ]implies that exactly one of the degree vertices in is a boundary vertex of .this means that witnesses the failure of ( a ) , and is not irreducible .conversely , ( a ) implies that , for a edge cut set not disconnecting one vertex , either side is either a chain of boundary vertices or has at least vertices of degree .finally , observe that cut sets of size are minimal if and only if they disconnect an interior subgraph on one side .corollary [ corrigcomps ] then implies that there is a proper rigid component that is an interior subgraph of if and only if ( b ) fails .[ theo : stresses ] let be a triangle ring network anchored using the procedure of theorem [ theo : slider - pinning ] .adding _ any _ bar or slider to results in a network with all its members stressed if and only if is irreducible .first consider adding a slider . because is pinned - isostatic, the slider creates a unique stressed subgraph .a result of streinu - theran implies that must have been fully pinned in . since any proper subgraph has an unpinned vertex of degree or , ( [ eq : pin - count ] ) holds strictly .thus , the stressed graph is all of . was not required .it is needed only for adding bars . ]if we add a bar , there is also a unique stressed subgraph. 
this will be all of , again by the result of streinu - theran , unless both endpoints of the bar are in a common rigid subgraph .that was ruled out by assuming that is irreducible .in this paper we have demonstrated boundary conditions for locally isostatic networks that incorporate the right number of constraints at the surface so that the whole network is isostatic .these boundary conditions should be useful in numerical simulations which involve finite pieces of locally isostatic networks. the boundary can be quite complex and involve both an external boundary with internal holes .our derivation of the new boundary conditions is based on a structural characterization of graphs which capture the combinatorics of silica bilayers .this shows that the degrees of freedom are associated with unpinned triangle corners on the boundary .we then present two methods to completely immobilize a triangle ring network : by attaching the boundary to a wire rigidly attached to the plane ; and by completely immobilizing alternate vertices on the boundary . to render a triangle ring network isostatic , we also have two methods : adding bars between adjacent boundary vertices in cyclic order ; and attaching alternating boundary vertices to an auxiliary graph that functions as a rigid frame . although our definition of a triangle ring network is most easily visualized when is planar and is the outer face , the combinatorial setup is quite a bit more general .the natural setting for networks with holes is to assume planarity , and then that all the degree vertices are on disjoint facial cycles in .the key thing to note is that the cycle in our definition does not need to be facial for theorem [ theo : thumbtack - pinning ] .for example , in figure [ fig : figure7 ] , goes around the boundary of an interior face that contains degree vertices . in general , the existence of an appropriate cycle is a non - trivial question , as indicated by figure [ fig : figure7 ] ( see also figure [ fig : figure5ab ] for other examples of complex anchored boundary conditions ) .what is perhaps more striking is that theorem [ theo : slider - pinning ] still applies whether or not such a exists , provided faces in defining the holes in the sample are disjoint from the boundary and each other . 
in applying anchored boundary conditions , it is important that the complete boundary has an even number of unpinned sites ; the boundary can include internal holes , which must then be connected using the green lines shown in the various figures . this gives a practical way of setting up calculations with anchored boundary conditions in samples with complex geometries and missing areas . support by the finnish academy ( aka ) project coalesce is acknowledged by lt . we thank mark wilson and bryan chen for many useful discussions and comments . this work was initiated at the aim workshop on configuration spaces , and we thank aim for its hospitality . the algorithms [ alg : slider - pinning ] [ alg : bars-2 ] in this appendix give a procedural description of the four boundary conditions discussed in this paper , and make clear the subtle differences between them . all of the algorithms in this appendix take as input a finite part of a locally isostatic network and output a globally isostatic one that is appropriate for further study . which of the boundary conditions is most appropriate will depend on the intended application . computationally , it is convenient to work not only with the body graph , as in the main body of the paper , but also with its _ line graph _ that has as its vertices the triangle corners and as its edges the triangle sides . we denote by and the number of vertices and edges in , and and the same quantities for . vertices in are denoted by and vertices in by . we assume that there is a constant - time mapping that maps each vertex of to the associated triangle in . for each boundary vertex of , will have a unique degree vertex , which we denote by . experimentally , will always be immediately visible . it is also computable in time from . if is planar with given facial structure , then also has a natural planar embedding , and vice - versa . further , if contains no pair of facial triangles with a common edge , then is determined by . this is the case in all of our examples .
the allowed first - order motions of a triangle ring network satisfy the system we assume that maximizes the rank of ( [ eq : inf - bars ] ) , which happens for almost all choices of . by the theorems in this paper ,this rank is equal to when is a triangle ring network .now identify a set of vertices to which we will add one slider constraint . assign a vector to each .the slider constraints on the first - order motions are : to guarantee that the combined system ( [ eq : inf - bars])([eq : inf - sliders ] ) achieves its maximum rank ( for our sliding boundary condition ) , it is sufficient to pick each uniformly at random from the unit circle . to implement theorem [ theo : thumbtack - pinning ] , we could put two independent sliders at vertices of . however , it is simpler to regard ( [ eq : inf - bars ] ) as a matrix and then discard the columns corresponding to immobilized vertices . anchoring with additional bars amounts to adding edges to .thus , we describe them graph theoretically only .if the geometric constraints are desired , simply use the new graph to write down ( [ eq : inf - bars ] ) .the left panel of figure [ fig : figure8ab ] takes the graph from theorem [ theo : bars-2 ] to be a `` zig - zag triangulation '' of a polygon , which is easily seen to be isostatic .this next algorithm gives the implementation of theorem [ theo : bars-2 ] using this choice .
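the algorithm float itself did not survive the extraction , so the following is a sketch of the zig - zag choice , under the assumption that the alternating unpinned boundary triangles are supplied as a list in cyclic order ; each returned pair would then be realized as a bar between free corners of the two triangles . any triangulation of the polygon on these vertices would do equally well , since every triangulated polygon is generically isostatic .

def zigzag_bars(boundary):
    # boundary : alternating unpinned boundary triangles (body ids) in cyclic order , len >= 3
    n = len(boundary)
    bars = [(boundary[i], boundary[(i + 1) % n]) for i in range(n)]   # polygon edges
    lo, hi, advance_lo = 0, n - 1, True
    while hi - lo >= 2:
        if advance_lo:
            lo += 1
        else:
            hi -= 1
        advance_lo = not advance_lo
        if hi - lo >= 2:
            bars.append((boundary[lo], boundary[hi]))                  # zig - zag diagonal
    assert len(bars) == 2 * n - 3   # isostatic bar - joint count in the plane
    return bars

print(zigzag_bars(list(range(6))))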
finite pieces of locally isostatic networks have a large number of floppy modes because of missing constraints at the surface . here we show that by imposing suitable boundary conditions at the surface , the network can be rendered _ effectively isostatic_. we refer to these as _ anchored boundary conditions_. an important example is formed by a two - dimensional network of corner sharing triangles , which is the focus of this paper . another way of rendering such networks isostatic , is by adding an external wire along which all unpinned vertices can slide ( _ sliding boundary conditions _ ) . this approach also allows for the incorporation of boundaries associated with internal _ holes _ and complex sample geometries , which are illustrated with examples . the recent synthesis of bilayers of vitreous silica has provided impetus for this work . experimental results from the imaging of finite pieces at the atomic level needs such boundary conditions , if the observed structure is to be computer - refined so that the interior atoms have the perception of being in an infinite isostatic environment .
production of aligned arrangements of anisotropic objects is a challenging task . while for example carbon nanotubes and silicon nanowires can be conveniently grown into arrays , the alignment of these structures is often far from perfect . when oriented growth is not an option , anisotropic objects can be aligned by mechanical agitation such as shear , flow , or vibration , and sometimes a magnetic or electric field can act as the orientating agent . both in the macro scale and nano scale , most methods fail to give perfect alignment . the factors limiting the alignment are not fully understood . in a composite material , the shape and width of the orientation distribution of its building blocks translates directly into other physical properties of the material . for example , the orientation distribution of carbon nanotubes within fibre ropes has a significant effect on the mechanical properties of the ropes . theoretical studies show that the orientation distribution shape selected for simulation of a carbon nanotube film has a drastic effect on the electrical properties of the film . nevertheless , instead of carefully examining the shape of the orientation spread , most experimental studies on alignment of nano and macro scale objects have been focused on obtaining a single number , an average alignment or an order parameter . in some cases , the orientation distributions of particle assemblies have been described with gaussians , lorentzians , the combination of both , and even with squared lorentzians . typically , these functions did not fit the data perfectly but discrepancies between the data and the model were not discussed . recently , a new better - fitting function , the generalized normal distribution , was applied to carbon nanotube orientation distributions which had been measured with high statistical accuracy using synchrotron radiation . if this function could be applied also to other systems , it could be a game - changer for the study of aligned structures . in this contribution , we show that the generalized normal distribution fits previously published data for many objects from nanometre to centimetre sizes , and we can now compare the shapes of the different orientation distributions to each other by using the same set of parameters . a survey of the literature shows that one of the most common ways to define the orientation of particle assemblies from x - ray scattering experiments or the like is by calculating the hermans orientation parameter $f = ( 3 \langle \cos^2\phi \rangle - 1 ) / 2$ , where the mean - square cosine is calculated from the scattered intensity by integrating over the azimuthal angle ( see fig . [ fig : azimuth ] ) . for perfect vertical orientation $f$ = 1 , for isotropic orientation $f$ = 0 , and for perfect horizontal orientation $f$ = -0.5 . in the following analysis , we have calculated the orientation parameter for all distributions as if they were perfectly vertically centred by shifting the accordingly , in order to compare the orientation degree rather than the direction of alignment . the orientation parameter can be calculated for any orientation distribution regardless of its shape .
figure caption ( fig . [ fig : azimuth ] ) : the azimuthal angle is measured along the circle , on the right . the intensity at each is related to the probability of finding a rod oriented in this angle within the sample .
in the next step , we compare the values of hermans orientation parameter to parameters describing the shape of the orientation distribution for different experimentally observed particle assembly systems by refitting literature data .
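for completeness , a short numerical sketch of the hermans parameter computed from an azimuthal intensity profile is given below ; the sin weighting assumes fibre symmetry about the reference direction , which is the common convention , but the exact weighting in the stripped equation of the text may differ .

import numpy as np

def hermans_parameter(phi, intensity):
    # phi in radians on a uniform grid , measured from the (vertical) reference direction
    w = intensity * np.abs(np.sin(phi))
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)
    return 0.5 * (3.0 * cos2 - 1.0)

# sanity checks : a very sharp peak at phi = 0 gives f close to 1 , a flat profile gives f close to 0
phi = np.linspace(1e-4, np.pi, 4001)
print(hermans_parameter(phi, np.exp(-(np.degrees(phi) / 2.0) ** 2)))   # close to 1
print(hermans_parameter(phi, np.ones_like(phi)))                        # close to 0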
recently , it was identified that the shape of the orientation distribution of multiwalled carbon nanotube arrays ( mwcnts ) can be modelled accurately with a family of symmetric distributions that include all shapes between laplace and gaussian . this _ generalized normal distribution _ ( gnd ) , also called _ exponential power distribution _ in the literature , has the form $$ p(\phi) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)} \exp\!\left[ -\left( \frac{|\phi-\mu|}{\alpha} \right)^{\beta} \right] , \label{gnd} $$ where $\alpha$ is a scaling factor related to the width , $\beta$ is the shape parameter determining the sharpness , and $\mu$ is the mean of the distribution . $\Gamma$ denotes the gamma function . the gnd reduces to the normal distribution when $\beta$ = 2 and to the laplace distribution when $\beta$ = 1 . due to its generality , this distribution has actually been invented several times in the course of history . in diffusion studies it is known as the _ stretched exponential function _ with , and in the field of relaxation dynamics in materials , it is also called the _ kohlrausch function _ [ or _ kohlrausch - williams - watts function _ ( kww ) ] after the physicist rudolf kohlrausch who applied it in the 19th century to describe electric charge decay . in most cases in the literature , uncertainties for the data were not available , so goodness - of - fit is not reported here , but the relative likelihoods for model selection were calculated from the residual sum of squares $\mathrm{rss} = \sum_i ( y_i - f_i )^2$ . we use the akaike information criterion , $\mathrm{aic} = n \ln ( \mathrm{rss} / n ) + 2k$ , to compare the relative likelihoods , $\exp [ ( \mathrm{aic}_{\min} - \mathrm{aic}_i ) / 2 ]$ , of the gaussian , lorentzian and generalized normal distribution for the model selection . here , $k$ is the number of free parameters , $n$ the number of data points , $i$ labels one of the three models , and $\mathrm{aic}_{\min}$ is the smallest aic value obtained for the models . figure [ fig : fitcomparison ] depicts one example of a fit with a gaussian and a gnd . it should be noted that the measured orientation distribution is often merely a projection of the real one . methods which rely on counting the angles of individual particles in two - dimensional images can not capture the shape of the three - dimensional orientation distribution . but since this is the case also for many scattering methods , such as small - angle x - ray scattering , where we also see only the projection of the three - dimensional distribution , the experimental orientation distributions should be comparable to each other . the effect of projection on the shape of the orientation distribution is not dramatic : the projection of a normal distribution is a normal distribution , and the projection of a generalized normal distribution is a generalized normal distribution , only with different and , because the projection shifts the shape of the distribution closer to a gaussian .
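the model comparison described above can be reproduced with a few lines of scipy ; the sketch below fits a gaussian and a gnd to an azimuthal profile and converts the residual sums of squares into akaike relative likelihoods . the least - squares form of the aic and the free amplitude parameter ( which absorbs the normalising prefactor of eq . ( [ gnd ] ) ) are assumptions made for illustration , since the exact expressions used in the original analysis are not recoverable from the text .

import numpy as np
from scipy.optimize import curve_fit

def gnd_peak(phi, mu, alpha, beta, peak):
    # eq. ([gnd]) with its normalising prefactor absorbed into a free amplitude `peak`
    return peak * np.exp(-(np.abs(phi - mu) / alpha) ** beta)

def gauss_peak(phi, mu, sigma, peak):
    return peak * np.exp(-0.5 * ((phi - mu) / sigma) ** 2)

def relative_likelihoods(phi, y):
    aics = {}
    for name, f, p0 in [("gnd", gnd_peak, [0.0, 20.0, 1.5, y.max()]),
                        ("gaussian", gauss_peak, [0.0, 20.0, y.max()])]:
        popt, _ = curve_fit(f, phi, y, p0=p0, maxfev=20000)
        rss = np.sum((y - f(phi, *popt)) ** 2)
        aics[name] = phi.size * np.log(rss / phi.size) + 2 * len(popt)   # least-squares aic (assumed form)
    best = min(aics.values())
    return {name: np.exp((best - a) / 2) for name, a in aics.items()}

rng = np.random.default_rng(0)
phi = np.linspace(-90.0, 90.0, 181)
y = gnd_peak(phi, 0.0, 25.0, 1.4, 2.0) + rng.normal(0.0, 0.05, phi.size)
print(relative_likelihoods(phi, y))   # the gnd should come out as the preferred model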
in one of the the simulations , elongated particles with aspect ratio of 10 are dropped with random orientations onto a pile and the sticking coefficient of the particles is varied .the reported orientation distributions from this study are not exactly equivalent to the projected orientation distribution observed in many experiments , because the orientation of particles is defined solely with respect to horizontal plane , but the generalized normal distribution fits the simulated data : for non - sticking particles ( bond number bog = 0 , simulation1 ) , slightly sticking particles ( bond number bog = , simulation2 ) , and strongly sticking particles ( bond number bog = , simulation3 ) shape factors turn out to be =1.48 , 1.36 , and 1.62 , respectively .lllllllll objects & d ( m ) & & alignment & & ( ) & method & ref + cdse nanorods & 0.008 & 2.75 & rubbing & 1.83 & 31 & giwaxs & + polymer cryst .& & - & strain & - & - & xrd & + mwcnts & 0.0400.070 & - & grown & 1.371.65 & 2638 & saxs & + cellulose whiskers 1 & 1.95 & 4.1 & magnetic & 1.35 & 12 & xrd & + cellulose whiskers 2 & 7.18 & 3.2 & magnetic & 1.19 & 21 & xrd & + al platelets 1 & 10 & 0.030.05 & sediment & 1.211.55 & 19 - 28 & xrd & + al platelets 2 & 10 & 0.030.05 & pressed & 1.55.06 & 17.4.4 & xrd & + cellulose whiskers 3 & 16.1 & 6.4 & magnetic & 2.07 & 42 & xrd & + rice 1 & 1600 & 4.5 & shear & 1.59 & 20 & optical & + glass cylinders & 1900 & 3.5 & shear & 1.63 & 27 & optical & + rice 2 & 2000 & 3.4 & shear & 1.63 & 23 & optical & + rice 3 & 2800 & 2.0 & shear & 1.69 & 28 & optical & + wooden pegs & & 5.0 & shear & 1.39 & 15 & x - ray ct & + simulation 1 & - & 10 & [ cols="^ " , ] table [ tab : shape ] summarizes fit results of the generalized normal distribution to published literature data . for the majority of cases , the gnd gave the best fit ( table [ tab : aik ] ) , but close to and for large gaussian is better which is reasonable as the gaussian model has one free parameter less than the gnd model . for two casesthe lorentzian fits the best , with gnd being the second best model .it should be noted that the alignment spread in the case of cellulose whiskers might be due to orientation spread of cellulose crystallites within the whiskers rather than due to incomplete alignment in magnetic field .[ fig : hermans ] visualises the data presented in table [ tab : shape ] .the contour lines mark the hermans orientation parameter and this shows that the best orientation is found for high and low values , in the upper left corner of the graph .all of the experimental data is located in the lower right corner of the graph and there seems to be no prominent difference between nano and macro scale particles . the dashed line showing a fit to a few data points in which alignment has been obtained by shearing shows a sort of a limit to the orientation distribution shape .it is possible to achieve near to perfect alignment using these commonly used methods for alignment but then , according to this graph , we should expect the orientation distribution to be more laplace like than gaussian . 
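To make the refitting procedure concrete, here is a hedged sketch of how a shape parameter like the β ≈ 1.4–1.6 values quoted above could be extracted with a standard nonlinear least-squares fit. The synthetic profile below is only a stand-in; none of the original simulation or experimental data sets are reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def gnd(phi, mu, alpha, beta, scale):
    """Generalized normal shape with a free amplitude, for fitting un-normalised intensities."""
    return scale * beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(phi - mu) / alpha) ** beta)

rng = np.random.default_rng(0)
phi = np.linspace(-90.0, 90.0, 181)
data = gnd(phi, 0.0, 20.0, 1.4, 1.0) + rng.normal(0.0, 0.002, phi.size)   # synthetic stand-in profile

popt, pcov = curve_fit(gnd, phi, data, p0=[0.0, 15.0, 2.0, 1.0],
                       bounds=([-90.0, 1e-3, 0.5, 0.0], [90.0, 90.0, 5.0, 10.0]))
mu, alpha, beta, scale = popt
print(f"fitted shape parameter beta = {beta:.2f}")   # comes back close to 1.4 for this toy profile
```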
in the case of al plateletsintercalated with polymer , two data sets for same particle type are available .the first one of sedimented particles shows poorer alignment than a second set where the sedimented particle assembly was further compressed .the orientations of sedimented al platelets should be dominated more by cohesive forces than the sedimented and pressed platelets .effect of a moderate amount of cohesion is seen both in simulation and experiment as a decrease in and increase in .the simulation with most sticky particles results in a broad orientation distribution with increased .the carbon nanotube forests could be described effectively as very sticky granular systems .the most prominent result from cohesion is the decrease in the overal orientation degree of the particles . despite the success in fitting most of the data presented in table [ tab : shape ] , there is also one data set which could not be fitted with the gnd . in case of polymer crystallites of poly(-caprolactone) oriented under strain , the orientation of crystallites did not follow the generalized normal distribution .this is an example of a system which is composed of particles that are interconnected .this situation is very different from all the other cases presented here .the applicability of the generalized normal distribution may very well be limited only to particle assemblies which allow free movement of the particles . , marked with symbols , as a function of scale parameter .the values of hermans orientation parameter are marked with contour lines .all parameters were obtained from fits with the generalized normal distribution ( equation ( [ gnd ] ) ) to experimental data ( table [ tab : shape ] ) .the dashed line ( ) is a fit to the data from rice and cdse nanorods .for multiwalled carbon nanotubes and al platelets each data point represent a different position on one sample.,scaledwidth=50.0% ]now that we have identified the generalized normal distribution to be a feasible model for a multitude of particulate systems , we need to consider its physical meaning .there exist several theoretical models for orientation distributions of particles in different environments , and it is not clear if some of them could actually have the same shape as the generalized normal distribution .fitting a non - cyclic function to the orientation distribution as a function of azimuthal angle is not fully correct , because it can not describe all the situations between isotropically oriented and fully oriented systems .a mathematically correct model would need to have cyclic properties .next , we inspect cyclic functions found in the literature to see if they could actually reproduce the shape of the generalized normal distribution. theoretical framework for the orientation distribution shape exists for example in the case of spheroidal particles in dilute suspension under shear .the function describing the orientation of spheroids of aspect ratio is given where is the misorientation of the symmetry axis of the particle compared to the flow direction .this shape should be valid also for other centrosymmetric particles , such as cylinders and discs , but fitting this function with the generalized normal distribution did not produce satisfying results . 
the maier - saupe distribution , which can be applied to describe the orientation distributions in liquid crystals and to study the chain orientation in cholesterol - lipid systems , is a much more promising candidate for the physical background of the generalized normal distribution .a thorough examination of the maier - saupe distribution for scattering data is given by mills et al . , and they define it where is a modified bessel function of the first kind and is a parameter related to the width of the distribution . for simplicity , we have omitted the normalization factor in equation ( [ maiersaupe ] ) but it can be found in the original publication . while can be fitted to great accuracy ( but not perfectly ) with the generalized normal distribution , the shape factor , , remains above 1.73 for all parameter values of the maier - saupe distribution and hence the maier - saupe model can not be the correct model to use in the case of most of the systems presented here .a special orientation distribution has been used to simulate acoustic non - woven fibre systems consisting of cylindrical subunits . this distribution is characterized mainly by the anisotropy parameter : ^{3/2}}.\ ] ] here , and are the altitude and longitude in spherical coordinates .the term in this equation is responsible for assigning the correct propability to each when we are interested in the volume orientation distribution but we may compare the number orientation distributions to each other without this normalization such that is constant for . for ,the cylinders are more oriented along the symmetry axis .again , does not have the shape of the generalized normal distribution .for modelling of orientations of graphene layers , the projected von mises - fisher distribution has been introduced here ] represents the angle of preferred orientation , is a concentration parameter , and is the modified struve function .the von mises - fisher distribution is a directional analogue of the gaussian distribution and hence it can not reproduce the shapes of the generalized normal distribution , apart from = 2. none of the cyclic functions , , presented above are able to capture the range of distribution shapes which we found in real systems . 
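The statement that the Maier-Saupe shape cannot reach the smaller β values can be checked numerically. The sketch below generates the un-normalised Maier-Saupe shape exp(m cos²φ) (the Bessel-function normalisation is omitted, as in the text) and fits it with a generalized normal profile; subtracting the constant baseline before fitting is an assumption about how the shapes are compared.

```python
import numpy as np
from scipy.optimize import curve_fit

def gnd_peak(phi, alpha, beta, scale):
    """Generalized normal peak centred at 0; the normalising prefactor is absorbed into 'scale'."""
    return scale * np.exp(-(np.abs(phi) / alpha) ** beta)

def maier_saupe(phi_deg, m):
    """Un-normalised Maier-Saupe shape exp(m cos^2 phi)."""
    return np.exp(m * np.cos(np.radians(phi_deg)) ** 2)

phi = np.linspace(-90.0, 90.0, 361)
for m in (2.0, 5.0, 20.0):
    y = maier_saupe(phi, m)
    y = (y - y.min()) / (y.max() - y.min())          # remove the flat baseline, compare peak shapes only
    popt, _ = curve_fit(gnd_peak, phi, y, p0=[30.0, 2.0, 1.0],
                        bounds=([1.0, 0.5, 0.0], [180.0, 8.0, 10.0]))
    print(f"m = {m:5.1f}  ->  fitted beta = {popt[1]:.2f}")   # stays around 2 or above for these m,
                                                              # consistent with the bound quoted above
```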
in conclusion , despite the shortcomings due to non - cyclicity , the generalized normal distribution is at the moment the most suited function for the study of moderately aligned systems , even if it can not be used to describe systems close to isotropic alignment .here we have shown that alignment of freely moving anisotropic objects both in nano and macro scale can be described by one function , the generalized normal distribution .spread of the experimental data in fig .[ fig : hermans ] allows us to draw some general conclusions about the alignment of anisotropic objects .we observe that the projection of orientation distribution of anisotropic particles is close to the laplace distribution ( ) when very good alignment is achieved .exponential decay occurs commonly in the field physics , and the laplacian orientation distribution may be a manifestation of an underlying relaxation processes , which follow an exponential decay .moderate or poor alignment will lead to a more gaussian distribution ( ) but slightly cohesive particles may behave differently .these findings should be taken into account in future studies of materials consisting of aligned anisotropic particles .i gratefully acknowledge financial support from the german research foundation ( dfg ) via sfb 986 `` m '' , project z2 .t. brzsnyi , d. breiby , r. c. hidalgo , t. kamal , and t. kimura are thanked for giving access to the original data from their publications .i. krasnov , r. gehrke and u. handge are thanked for inspirational discussions .i thank martin mller and andreas schreyer for providing the perfect working conditions .woodruff , j. h. ; ratchford , j. b. ; goldthorpe , i. a. ; mcintyre , p. c. ; chidsey , c. e. d. vertically oriented germanium nanowires grown from gold colloids on silicon substrates and subsequent gold removal ._ nano lett . _* 2007 * , _ 7 _ , 16371642 vainio , u. ; schnoor , t. i. w. ; koyiloth vayalil , s. ; schulte , k. ; mller , m. ; lilleodden , e. the orientation distribution of vertically aligned multi - walled carbon nanotubes . _* 2014 * , _ 118 _ , 95079513 brzsnyi , t. ; szab , b. ; trs , g. ; wegner , s. ; trk , j. ; somfai , e. ; bien , t. ; stannarius , r. orientational order and alignment of elongated particles induced by shear .lett . _ * 2012 * , _ 108 _ , 228302 brzsnyi , t. ; szab , b. ; wegner , s. ; harth , k. ; trk , j. ; somfai , e. ; bien , t. ; stannarius , r. shear - induced alignment and dynamics of elongated granular particles .e _ * 2012 * , _ 86 _ , 051304 carastan , d. j. ; amurin , l. g. ; craievich , a. f. ; do carmo gonalves , m. ; demarquette , n. r. morphological evolution of oriented clay - containing block copolymer nanocomposites under elongational flow .j. _ * 2013 * , _ 49 _ , 13911405 yadav , v. ; chastaing , j .- y . ; kudrolli , a. effect of aspect ratio on the development of order in vibrated granular rods .e _ * 2013 * , _ 88 _ , 052203 song , g. ; kimura , f. ; kimura , t. ; piao , g. orientational distribution of cellulose nanocrystals in a cellulose whisker as studied by diamagnetic anisotropy . _ macromolecules _ * 2013 * , _ 46 _ , 89578963 liu , t. ; kumar , s. effect of orientation on the modulus of swnt films and fibers . _nano lett ._ * 2003 * , _ 3 _ , 647650 simoneau , l .- p . ;villeneuve , j. ; aguirre , c. m. ; martel , r. ; desjardins , p. ; rochefort , a. influence of statistical distributions on the electrical properties of disordered and aligned carbon nanotube networks . _. phys . _ * 2013 * , _ 114 _ , 114312 boden , a. 
; boerner , b. ; kusch , p. ; firkowska , i. ; reich , s. nanoplatelet size to control the alignment and thermal conductivity in coppergraphite composites ._ nano lett . _ * 2014 * , _ 14 _ , 36403644 hwang , j. ; gommans , h. h. ; ugawa , a. ; tashiro , h. ; haggenmueller , r. ; winey , k. i. ; fischer , j. e. ; tanner , d. b. ; rinzler , a. g. polarized spectroscopy of aligned single - wall carbon nanotubes . _ phys .b _ * 2000 * , _ 62 _ , r13310 wang , h. ; xu , z. ; eres , g. order in vertically aligned carbon nanotube arrays . __ * 2006 * , _ 88 _ , 213111 das , n. ch .; yang , k. ; liu , y. ; sokol , p. e. ; wang , z. ; wang , h. quantitative characterization of vertically aligned multi - walled carbon nanotube arrays using small - angle x - ray scattering . _ j. nanosci .nanotechnol . _ * 2011 * , _ 11 _ , 49955000 trottier , a. m. ; zwanziger , j. w. ; murthy , n. s. amorphous orientation and its relationship to processing stages of blended polypropylene / polyethylene fibers ._ j. appl .polymer sci . _ * 2008 * , _ 108 _ , 40474057 wang , b. n. ; bennett , r. d. ; verploegen , e. ; hart , a. ; cohen , r. e. quantitative characterization of the morphology of multiwall carbon nanotube films by small - angle x - ray scattering ._ j. phys .chem . _ * 2007 * , _ 111 _ , 58595865 hermans , j. j. ; hermans , p. h. ; vermaas , d. ; weidinger , a. quantitative evaluation of the orientation in cellulose fibres from the x - ray fibre diagram .chim . pay .b _ * 1946 * , _ 7 - 8 _ , 427447 nadarajah , s. a generalized normal distribution ._ j. appl .stat . _ * 2005 * , _ 7 _ , 685694 subbotin , m. t. on the law of frequency of errors .sbornik _ * 1923 * , _ 31 _ , 296301 nadarajah , s. acknowledgement of priority : the generalized normal distribution . _ j. appl ._ * 2006 * , _ 33 _ , 10311032 kohlrausch , r. theorie des elektrischen rckstandes in der leidener flasche . _* 1854 * , _ 167 _ , 179214 cardona , m. ; chamberlin , r. v. ; marx , w. the history of the stretched exponential function .( leipzig ) _ * 2007 * , _ 16 _ , 842845 akaike , h. a new look at the statistical model identification ._ ieee t. automat .contr . _ * 1974 * , _ 19 _ , 716723 hidalgo , r. c. ; kadau , d. ; kanzaki , t. ; herrmann , h. j. granular packings of cohesive elongated particles ._ granul . matter _ * 2012 * , _ 14 _ , 191196 breiby , d. w. ; chin , p. t. k. ; andreasen , j. w. ; grimsrud , k. a. ; di , z. ; janssen , r. a. j. biaxially oriented cdse nanorods ._ langmuir _ * 2009 * , _ 25 _ , 1097010974 kamal , t. ; shin , t. j. ; park , s .- y .uniaxial tensile deformation of poly(-caprolactone ) studied with saxs and waxs techniques using synchrotron radiation ._ macromolecules _ * 2012 * , _ 45 _ , 87528759 behr , s. ; vainio , u. ; mller , m. ; schreyer , a. ; schneider , g. large - scale parallel alignment of platelet - shaped particles through gravitational sedimentation ._ scientific reports _ * 2015 * , _ 5 _ , 9985 mueller , s. ; llewellin , e. w. ; mader , h. m. the rheology of suspensions of solid particles .* 2010 * , _ 466 _ , 12011228 mills , t. t. ; toombes , g. e. s. ; tristram - nagle , s. ; smiglies , d .- m . ;feigenson , g. w. ; nagle , j. f. order parameters and areas in fluid - phase oriented lipid membranes using wide angle x - ray scattering .. j. _ * 2008 * , _ 95 _ , 669681 schladitz , k. ; peters , s. ; reinel - bitzer , d. ; wiegmann , a. ; ohser , j. design of acoustic trim based on geometric modeling and flow simulation for non - woven . __ * 2006 * , _ 38 _ , 5666 bhlke , t. 
; langhoff , t .- a .; lin , s. ; gross , t. homogenization of the elastic properties of pyrolytic carbon based on an image processing technique ._ zamm _ * 2013 * , _ 93 _ , 313328
a major challenge in the field of nanosciences is the assembly of anisotropic nano objects into aligned structures . the way the objects are aligned determines the physical properties of the final material . in this work , we take a closer look at the shapes of the orientation distributions of aligned anisotropic nano and macro scale objects by examining previously published data . the data show that the orientation distribution shape of anisotropic objects aligned by shearing and other commonly used methods varies size - independently between laplace and gaussian , depending on the distribution width and on the cohesiveness of the particles .
this document assumes basic knowledge of the computing _ grid _ concept , which is a paradigm for the modern distributed computing and data handling . for a general introduction to grid computing the readeris referred to eg .the most common starting point for constructing a computing grid is the globus toolkit .this toolkit provides a grid api and developing libraries as well as basic grid service implementations .the nordugrid project is a common effort by the nordic countries to create a grid infrastructure , making use of the available middleware . through the european datagrid project ( edg ) the nordugrid project has had extensive experience with the globus toolkit and with deploying and using a grid testbed . during thiswe have found some shortcomings in the globus toolkit and some problems with the edg testbed architecture that we would like to address on a grid testbed in the nordic countries . in this paperwe present a proposal for a grid architecture for a production testbed at the lhc experiments .it is not the intent to define a general grid system , but rather a system specific for batch processing suitable for problems encountered in high energy physics .interactive and parallel applications have not been considered .it is our goal to deploy a grid testbed based on this architecture for the lhc data challenges in the nordic countries .we will start by describing the various components of the system .focus will be given to the components which we have developed ourselves or taken from others , but heavily modified .after the description we will give a review of the task flow and the communication between the various components .it is assumed that the reader is familiar with the major components of the globus toolkit since the nordugrid testbed architecture uses this as the foundation .an example is the underlying security model - the grid security infrastructure ( gsi ) - which is a based on the ` x`.509 certificate system and takes care of authentication and authorisation and the gris and giis of the information system ( for an explanation of the gris , giis , vo terms see and ) .one of the aims of the architecture was to make the system able to install on top of an existing globus installation .rather than modifying the globus toolkit , we have made a clear boundary between the two architectures , thus enabling them to coexist and try out this new system without destroying an already working globus installation .the _ computing element _ ( ce ) is the backend of the system .it is where the execution of the application is performed .this is a very general element which can be anything from a single processor to complex computing clusters . in our caseit is at present limited to simple pbs ( portable batch system ) clusters .one of our basic ideas about grid implementations is that it should not impose any restriction on the local site. 
therefore it should be possible to use existing computing clusters and place them on the grid with minor or no special reconfiguration .one thing that we have stressed , which should not be imposed on the site , is that the individual worker nodes ( wn s ) of the cluster can not be required to have direct access to the internet .this excludes dependence on eg .global filesystems like afs and direct download from the internet from any wn .grid services will thus only be run from a front - end machine .this scenario does not however exclude the use of local network filesystems and nfs is actually often used for cluster - wide application distribution .similar to the computing element , the _ storage element _ ( se ) is the common term for another of the basic grid resources - storage. the storage can be as simple as a standard filesystem or eg .a database .the authorisation is handled using grid credentials ( certificates ) .a se can be _ local _ or _ remote _ to a ce . here _local _ means that access is done via standard filesystem calls ( eg .open , read and close ) and is usually realised by a nfs server .a remote se is usually a stand - alone machine running eg .gridftp server with local file storage .data replication is done by services running on the se .a dedicated pluggable gridftp server is beeing developed for use on the se . at the moment a simple file access plugin exists . the main reason for this is to have a way to provide a consistent certificate - based data access to the data .at least one other grid solution to a certificate - based filesystem exists .one advantage of the gridftp approach is that it is done entirely in user space and thus is very portable .the information about replicated data is contained in the _ replica catalog _ ( rc ) .this is an entirely add - on component to the system and as such is not a requirement .a stable , robust , scalable and reliable information system is the cornerstone of any kind of grid system . without a properly working information systemit is not possible to construct a functional grid .the globus project has laid down the foundation of a grid information system with their ldap - based metacomputing directory service ( mds ) .the nordugrid information system is built upon the globus mds .the information system described below forms an integral part of the nordugrid testbed architecture . in our testbed , the nordugrid mds plays a central role : all the information related tasks , like resource - discovery , grid - monitoring , authorised user information , job status monitoring , are exclusively implemented on top of the mds .this has the advantage that all the grid information is provided through a uniform interface in an inherently scalable and distributed way due to the globus mds .moreover , it is sufficient to run a single mds service per resource in order to build the entire system . in the nordugrid testbeda resource does not need to run dozens of different ( often centralized ) services speaking different protocols : the nordugrid information system is purely globus mds built using only the ldap protocol . the design of a grid information system is always deals with questions like how to represent the grid resources ( or services ) , what kind of information should be there , what is the best structure of presenting this information to the grid users and to the grid agents ( i.e. 
brokers ) .these questions have their technical answers in the so - called ldap schema files .the globus project provides an information model together with the globus mds .we found their model unsuitable for representing computing clusters , since the globus schema is rather single machine oriented .the edg suggested a different ce model which we have evaluated . the edg s ce - based schema fits better for computing clusters .however , its practical usability was found to be questionable due to improper implementation .because of the lack of a suitable schema , nordugrid decided to design its own information model ( schema ) in order to properly represent and serve its testbed. a working information system , as part of the nordugrid architecture , has been built around the schema .nevertheless , nordugrid hopes that in the not - so - far future a usable common grid information model will emerge .we think that the experience of the nordugrid users gained with our working system will provide a useful feedback for this process .the nordugrid testbed consists of different resources ( they can be referred as services ) , located at different sites .the list of implemented grid services contains computing resources ( linux clusters operated by pbs ) , ses ( at the moment basically disk space with a gridftp server ) and rcs .the designed information model naturally maps these resources onto a ldap - based mds tree , where each resource is represented by an mds entry . in this tree, each nordugrid resource operates as a separate gris .the various resources ( gris s ) can be grouped together to form virtual organisations ( vo ) which are served by a giis ( i.e. in our present testbed configuration , the resources within a country are grouped together to form a vo ) .the structure created this way is called a _hierarchical mds tree_. the nordugrid schema is a true mirror of our architecture : it contains information about computing clusters ( nordugrid - cluster objectclass ) , storage elements ( nordugrid - se objectclass ) and replica catalogs ( nordugrid - rc objectclass ) . in fig .[ fig : mdstree ] , a part of the nordugrid mds tree is shown .the clusters provide access to different pbs queues which are described by the nordugrid - pbsqueue objectclass ( fig .[ fig : pbsqueue ] shows an example queue entry ) . under the queue entries ,the nordugrid - authuser and the nordugrid - pbsjob entries can be found grouped in two distinct sub - trees ( the branching is accomplished with the nordugrid - info - group objectclass ) .the nordugrid - authuser entry contains all the user - dependent information of a specific authorised grid user . within the user entries, the grid users can find , among other things , how many cpus are available for them in that specific queue , what is the disk space a job can consume , what is an effective queue length ( taking into account the local unix mappings ) , etc . the nordugrid - pbsjob entries ( see fig .[ fig : pbsjob ] an example ) describe the grid jobs submitted to a queue .the detailed job information includes the job s unique grid i d , the certificate subject of the job s owner and the job status . 
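For illustration only, the entries described above (clusters, queues, authorised users and jobs) can be pictured as one small branch of the tree. The attribute names and values below are hypothetical stand-ins chosen for readability, not the literal NorduGrid schema attributes.

```python
# Hypothetical, simplified picture of one branch of the MDS tree; attribute
# names and values are illustrative only, not the actual NorduGrid schema.
mds_branch = {
    "objectclass": "nordugrid-cluster",
    "name": "grid.example.org",
    "queues": [
        {
            "objectclass": "nordugrid-pbsqueue",
            "name": "short",
            "authorised-users": [   # nordugrid-authuser entries
                {"subject": "/O=Grid/O=NorduGrid/CN=Jane Doe",
                 "free-cpus": 12, "available-disk-mb": 2048},
            ],
            "jobs": [               # nordugrid-pbsjob entries
                {"grid-id": "gsiftp://grid.example.org/jobs/123456",
                 "owner": "/O=Grid/O=NorduGrid/CN=Jane Doe",
                 "status": "running"},
            ],
        }
    ],
}

def jobs_of(user_subject, branch):
    """Collect the job entries owned by a given certificate subject."""
    return [job for q in branch["queues"] for job in q["jobs"] if job["owner"] == user_subject]

print(jobs_of("/O=Grid/O=NorduGrid/CN=Jane Doe", mds_branch))
```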
the norway branch of the nordugrid mds tree , width=453 ] the nordugrid information system has been designed to be able to effectively serve the user interface ( ui ) ( job status query commands , free resource discovery utilities ) , the brokering agent ( in the present implementation it is integrated with the ui s job submission command ) and a general grid user who can access the grid information either directly with a simple ldapsearch or through an ldap - enabled grid portal .in our model , job management is handled by a single entity which we call the _ grid manager _ ( gm ) .it is the job of the gm to process user requests and prepare them for execution on the ce .it also takes care of post - processing the jobs before they leave the ce . in the globus toolkit context , the gm takes care of what is normally done by the globus job - manager backend scripts .in fact it is installed in a similar way to a standard globus jobmanager and can work perfectly together with already existing jobmanagers .authorisation and authentication is still done by the globus gatekeeper .the status of each job is recorded in a special status directory which also contains informational files needed by the gm .the grid manager uses a job i d assigned by the globus gatekeeper / jobmanager to distinguish between job requests .it starts by creating a session directory reachable by the wn and parsing the job request .the job request is passed in a resource specification language .this specifies , among other things , the list of input files needed for the job execution .the grid manager then proceeds to download files to the session directory from remote ses or copy files from local ses as specified by the job request.files can also be uploaded by the user submitting the job .once all files are in place the gm submits the job to the local scheduling system ( pbs ) .all information about job status must be retrieved through the mds .communication to the job such as cancellation is handled through the gm as a specific rsl requests .when the job leaves the wn(s ) it is the responsibility of the gm to clean up afterwards .the gm also manages the output files and uploads them to local or remote storage elements and registers uploaded files to the replica catalog .in contrast to eg . the grid implementation in the edg , the nordugrid _ user interface ( ui ) _ has significantly more responsibility .this is mainly due to the choice of having the resource broker placed at the ui rather than having it as a central service .we found that the edg implementation with a central broker , where all job requests as well as data payload had to pass through , would be a single point of failure and non - scalable .the nordugrid ui is at present command - line driven , while a web based solution is foreseen in the future .the ui is responsible for generating the user request in a _ resource specification language _( rsl ) based on the user input .the rsl we use has additional attributes to those provided by the globus toolkit .this xrsl has been enhanced to support enriched input / output capabilities and more specification of pbs requirements .all unneeded globus attributes has been deprecated .the ui matches the request or job options with the available resources as reported by the mds , and returns a list of matching resources .job options can reflect eg .: * required cpu time * required system type * required disk space * required runtime environment ( eg . 
application software ) * required memory * required data from se sin this section we describe the life of a job and expose the functions of the various components of the nordugrid architecture .an overview of the system with task flow can be seen in fig .[ fig : taskflow ] . the numbers on the figure refers to tasks which we describe below . 1 .the user interface does a filtered query against the giis and query the replica catalog to get the location of input data .based on these responses and its brokering algorithm the broker within the ui selects a remote resource .2 . the ui contacts the selected resource by submitting the xrsl file to the gatekeeper of the resource , along with the ( resolved by the broker ) physical file names .the gatekeeper does the required authentication and authorisation and passes the job request to the grid manager .4 . grid manager creates the session directory and downloads or copies files from storage elements to this directory 5 .user interface uploads input files and executables via gridftp to the session directory of the job 6 .after all files are pre - staged , the grid manager submits the job to the local resource management system ( pbs ) 7 .pbs schedules and executes the job on the worker nodes in the normal way 8 . on request ,information providers collect job , user , queue and cluster information , disk usage information and writes the information to the mds 9 .job status information is retrieved from the mds , the user interface monitors the job status by querying the information system .e - mail notification to the user during the various job stages by the grid manager is also possible . 10 . user interface may cancel and clean jobs by sending rsl request with special attribute set through the gatekeeper to the grid manager .the grid manager will then take care of stopping the job and removing the job session directory if requested 11 .when the job finishes the grid manager moves requested output results to storage elements and does registration in the replica catalog as specified by the user .the grid manager takes care of cleaning up the session directory after its lifetime has exceeded , too while the directory exists , the user interface can download the specified output files produced by the job . 12 .giis queries gris on demand based on cache timeout values in order to provide fresh enough information .the single giis in the figure really represents a whole virtual organisation hierarchy . in nordugrid testbedthe hierarchy has a central nordugrid giis which connects to country level giis s .the country level giis s either connects directly to resources or have an institutional layer in between . in the figure data and control connections are specified . in designing the system we have been very conscious about the location of data bottlenecks and places where the system will have scalability problems .the only data intensive transfers occur between the computing element and the storage elements and the user interface . in contrast to the edg there is no single point where all data has to pass through .all transfers are truly peer - to - peer .a very early prototype implementation of the architecture exists and is being tested and further developed .scalability tests still has to be performed .in order to have a functional testbed within a short time - frame , there are some areas which we have not given much attention and have taken simple but still secure and functional solutions . 
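As an example of the job request passed to the gatekeeper in step 2 of the task flow, here is a hypothetical xRSL description of the kind sketched above. The attribute names, values and URLs are illustrative assumptions based on the extended RSL described in the user-interface section, not a verbatim NorduGrid specification.

```python
# Hypothetical xRSL job request; attribute names, values and URLs are illustrative only.
job_xrsl = """&
 (executable="simulate.sh")
 (arguments="run1.conf")
 (inputFiles=("run1.conf" "")
             ("events.dat" "gsiftp://se.example.org/data/events.dat"))
 (outputFiles=("result.out" "rc://rc.example.org/collection1/result.out"))
 (cpuTime="600") (memory="512") (disk="2000")
 (runTimeEnvironment="APPS/PHYSICS/SIM-1.0")
 (notify="e jane.doe@example.org")
"""
print(job_xrsl)
```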
in the future we plan to include , for example a more advanced authorisation and accounting system . once we get experience with the system we are going to make the broker more sophisticated and make it able to perform user dependent choices such as rc location and preferred resource selection .we are very grateful for the developers of the globus project for their willingness to answer questions and providing an open development environment with access to early version of their toolkit .k. czajkowski , s. fitzgerald , i. foster , c. kesselman : _ grid information services for distributed resource sharing_. proceedings of the tenth ieee international symposium on high - performance distributed computing ( hpdc-10 ) , ieee press , august 2001 .k. czajkowski , i. foster , n. karonis , c. kesselman , s. martin , w. smith , s. tuecke : _ a resource management architecture for metacomputing systems .ipps / spdp 98 workshop on job scheduling strategies for parallel processing , pp .62 - 82 , 1998 .
this document gives an overview of a grid testbed architecture proposal for the nordugrid project . the aim of the project is to establish an inter - nordic testbed facility for the implementation of wide - area computing and data handling . the architecture is intended to define a grid system suitable for solving data - intensive problems at the large hadron collider at cern . we present the various architecture components needed for such a system , and then describe the dynamics of the system by showing the task flow .
motivated by the problem of epidemic spread in human networks , we analyze the problem of controlling the spread of a disease by distributing vaccines throughout the individuals in a contact network .the problem of controlling spreading processes in networks appear in many different settings , such as epidemiology , computer viruses , or viral marketing .the dynamic of the spread depends on both the structure of the contact network , the epidemic model and the values of the parameters associated to each individual .we model the spread using a recently proposed variant of the popular sis epidemic model in which the infection rate is allowed to vary among the set of individuals in the network . in our setting, we can modify the individual infection rates , within a feasible range , by injecting different levels of vaccination in each node .injecting a particular level of vaccination in a node has also an associated cost , which can vary from individual to individual . in this context , we propose efficient convex framework to find the optimal distribution of vaccination resources throughout the networks . the dynamic behavior of spreading processes in networks have been widely studied . in ,newman studied the epidemic thresholds on several random graphs models .pastor - satorras and vespignani studied viral propagation in power - law networks .this initial work was followed by a long list of papers aiming to study the spread in more realistic network models .boguna and pastor - satorras considered the spread of a virus in correlated networks , where the connectivity of a node is related to the connectivity of its neighbors . in ,the authors analyze spreading processes in random geometric networks .the analysis of spreading processes in arbitrary contact networks was first studied by wang et al . for the case of discrete - time dynamics . in , ganeshet al . proposed a continuous - time markov process to relate the speed of spreading with the largest eigenvalue of the adjacency matrix of the contact network .the connection between the speed of spreading and the spectral radius of the network was also found for a wide range of spreading models in .the relationship between the spectral radius of a contact network and its local structural properties were explored in .the development of strategies to control the dynamic of a spread process is a central problem in public health and network security . in , borgs et al .proposed a probabilistic analysis , based on the theory of contact processes , to characterize the optimal distribution of a fixed amount of antidote in a given contact network .in , aditya et al . proposed several heuristics to immunize individuals in a network to control virus spreading processes . in the control systems literature ,wan et al .proposed in a method to design optimal strategies to control the spread of a virus using eigenvalue sensitivity analysis ideas together with constrained optimization methods .our work is closely related to the work in and , in which a continuous - time time markov processes , called the n - intertwined model , is used to analyze and control the spread of a sis epidemic model . in this paper ,we propose a convex optimization framework to efficiently find the cost - optimal distribution of vaccination resources in an arbitrary contact network . 
in our work , we use a heterogeneous version of the n - intertwined sis model to model a spread process in a network of individuals with different rate of being infected and recovered .we assume that we can modify the rates of infection of individuals , within a feasible range , by distributing vaccines to the individuals in the network .we assume that there is a cost associated to injecting a particular amount of vaccination resources to a each individual , where the cost function can vary from individual to individual .our aim is to find the optimal distribution of vaccination resources throughout the network in order to control the spread of an initial infection at a minimal cost .we consider two version of this problem : ( _ i _ ) the _ fractional case , _ in which we are allowed to inject a fractional amount of vaccination resources in each node of the network , and ( _ ii _ ) the _ combinatorial case , _ in which we fully vaccinate a selection of individuals in the network , leaving the rest of nodes unvaccinated .the paper is organized as follows . in sectionii , we introduce our notation , as well as some background needed in our derivations . in section iii ,we formulate our problem and provide an efficient solution based on convex optimization . in section iv , we study a combinatorial version of the problem studied in section iii and provide a greedy heuristic algorithm with a quality guarantee .we include some conclusions in section v.in this section we introduce some graph - theoretical nomenclature and the dynamic spreading model under consideration .let denote an undirected graph with nodes , edges , and no self - loops .we denote by the set of nodes and by the set of undirected edges of . if we call nodes and _ adjacent _ ( or neighbors ) , which we denote by .we define the set of neighbors of a node as .the number of neighbors of is called the _ degree _ of node i , denoted by .the adjacency matrix of an undirected graph , denoted by ] .the infection probability of an individual at node at time is denoted by .let us assume , for now , that the viral spreading is characterized by two positive parameters a constant infection rate and a curing rate .hence , the n - intertwined sis model in is described by the following set of ode s : for . as proved in , the exact probability of infectionis upper bounded by its approximation .a local stability analysis of the above system of ode s around the disease - free equilibrium , for all , provides the following result : [ prop : homogeneoussisstability]consider the n - intertwined sis epidemic model in ( [ eq : mieghem ] ) .then , an initial infection converge to zero exponentially fast if the above provides a simple condition to guarantee a controlled epidemic dynamics in terms of the largest eigenvalue of the adjacency matrix . 
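A minimal numerical sketch of the homogeneous N-intertwined model and of its threshold is given below: it integrates dp_i/dt = β(1 − p_i) Σ_j A_ij p_j − δ p_i on a random contact graph and checks that the infection decays when β λ_max(A)/δ < 1. Since the displayed equations above are not fully legible here, this standard form of the model and of the threshold should be read as an assumption; the random-graph parameters are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def n_intertwined_sis(t, p, A, beta, delta):
    """dp_i/dt = beta * (1 - p_i) * sum_j A_ij p_j - delta * p_i."""
    return beta * (1.0 - p) * (A @ p) - delta * p

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.10).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric contact network, no self-loops

beta, delta = 0.08, 1.0
lam_max = np.max(np.linalg.eigvalsh(A))
print("beta * lam_max / delta =", beta * lam_max / delta)   # < 1: infection should die out

p0 = 0.3 * rng.random(n)
sol = solve_ivp(n_intertwined_sis, (0.0, 30.0), p0, args=(A, beta, delta))
print("largest infection probability at t = 30:", sol.y[:, -1].max())
```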
in the following ,we derive a similar condition when the infection parameters vary from individual to individual within the network .a direct extension of the n - intertwined model for node - specific infection and curing rates , and , is we can write the above dynamics in matrix form as where , , , and .concerning the non - homogeneous epidemic model , we have the following result : [ prop : heterogeneous sis stability condition]consider the heterogeneous n - intertwined sis epidemic model in ( [ eq : heterosis ] ) .then , if an initial infection ^{n} ] .apart from the above properties , we make the following convexity assumptions on the cost function to obtain a tractable convex framework : the vaccination cost function , , is twice differentiable and satisfies the following constrain : for ] .we observe how the cost function is convex and presents diminishing returns , since the reduction in the infection rate for a given amount of investment is greater in the low - cost range than in the high - cost range . ).,scaledwidth=45.0% ] in this subsection we propose an optimization framework to find the cost - optimal allocation of vaccines in a given contact network with adjacency matrix .in particular , we consider the following problem : _ [ problem : optimalvaccinedistribution]given a curing rate profile , , and a vaccination cost function _ for ] . in the combinatorial vaccination problem ,we restrict the resources to be in the discrete set , . for this case , we propose a greedy approach that provides an approximation to the optimal combinatorial solution .we also provide quality guarantees for this approximation algorithm in subsection [ sub : quality - guarantee ] .the combinatorial vaccination problem can be stated as follows : _ [ problem : combinatorialvaccinedistribution]given a curing rate profile , , and a vaccination cost function _ for , _ find the optimal distribution of vaccines to control the propagation of an epidemic outbreak with an asymptotic exponential decaying rate at a total minimum cost ._ the optimal distribution of vaccines in problem [ problem : combinatorialvaccinedistribution ] can be characterized by the set of individuals that are chosen to be fully immunized , i.e. , the infection rates are switched from to for .let us assume that the vaccination cost function takes the values and .these extreme values are achieved using the following affine cost function hence , the total cost of vaccination satisfies where we have defined the constants and .thus , since , the optimal allocation of vaccines that minimizes is the same as the one that maximizes . therefore , defining the vectors and , problem [ problem : combinatorialvaccinedistribution ] can be stated as the following optimization problem : the solution to this problem is combinatorial in nature . in the following subsections we provide a greedy approach that approximates the combinatorial solution , as well as a quality guarantee of our approach .| l | c || c| g | c | c |c| parameters & metric & greedy & reverse greedy & degree threshold & centrality threshold & + & &3.6298 & 3.6440&3.2892&2.4518&3.9425 + & &0.0054&0.0355&0.0422&0.1982&n / a + & & 3.0098&3.0098&2.9246&2.0092&3.1406 + & & 0.0850&0.1383&0.0774&0.2575&n / a + & & 2.1484&2.1484&2.1201&1.7369&2.1787 + & & 0.4383&0.4383&0.6278&1.0101&n / a + in this subsection , we provide a greedy algorithm that iteratively updates the set of nodes that will be ( fully ) vaccinated in order to control the spreading of an epidemic outbreak . 
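For the heterogeneous model, the decay condition used throughout this section can be phrased as the largest eigenvalue of diag(β)A − diag(δ) being negative; since the displayed formulas above are garbled, this reading is an assumption. The sketch below evaluates that quantity for two illustrative infection-rate levels (the fractional problem would then search for the cheapest β inside the allowed box that keeps it below −ε).

```python
import numpy as np

def spectral_abscissa(A, beta, delta):
    """Largest eigenvalue of diag(beta) @ A - diag(delta).  This matrix is similar,
    via diag(sqrt(beta)), to the symmetric matrix sqrt(B) A sqrt(B) - D, so the
    symmetric form is diagonalised instead."""
    B_half = np.diag(np.sqrt(beta))
    S = B_half @ A @ B_half - np.diag(delta)
    return float(np.max(np.linalg.eigvalsh(S)))

rng = np.random.default_rng(2)
n = 40
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T

delta = np.full(n, 1.0)
beta_unvacc = np.full(n, 0.25)     # infection rate of an unvaccinated individual (illustrative)
beta_vacc = np.full(n, 0.02)       # infection rate after full vaccination (illustrative)

print("no vaccination :", spectral_abscissa(A, beta_unvacc, delta))   # > 0: outbreak can grow
print("all vaccinated :", spectral_abscissa(A, beta_vacc, delta))     # < 0: exponential decay
```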
in each step of our algorithm, we denote the set of nodes that are chosen to be part of the vaccination group at .we iteratively add to this group the node that provides the most benefit per unit cost , where the benefit of vaccinating a is the increment it induces in .more formally , given a vaccination group , we define the diagonal matrix of associated infection rates as , where is the -dimensional indicator vector for the set .thus , the benefit per unit cost of adding node to is measured by the function a conventional greedy approach could be defined by the iteration with and , where this iteration is repeated until is satisfied .notice that the resulting vaccination group is feasible and satisfies the spectral condition needed to control the spreading of an epidemic outbreak . in practice , we observe that a modification of this greedy approach provides better results . in this modified version , we start with a vaccination set ( i.e. , all the individuals are vaccinated ) and iteratively remove individuals according to the iteration with , where this iteration is repeated until is satisfied .the final vaccination group is chosen to be .notice that , the resulting vaccination group is feasible and .we denote this approach the _ reverse greedy algorithm_. since our approach is heuristic for a combinatorial problem , we provide a quality guarantee via lagrange duality theory in the following subsection . using lagrange duality theory ,we provide quality guarantees for the performance of our greedy approach by computing the dual optimal .[ sdp ] given the optimization problem the primal optimal can be upper bounded by computed according to the lagrange dual which is a convex semidefinite program .notice that matrix in the semidefinite constrain can be written as , where is the unit vector in the standard basis . from , we construct the lagrangian where is kept as a domain constraint and .see section 5.9 of for further details on the lagrange dual of semidefinite constraints . using the properties of trace to simplify and decouple we get the dual objective is derived by maximizing the lagrangian with respect to the primal variables due to the decoupling in the primal optimization in can be done for each node , independently .since each node has only 2 options we can consider each case explicitly by defining it is possible to compute as a threshold function of , but for the purpose of constructing the dual it is better to use an epigraph formulation to rewrite as with the addition constraints that since the dual is a minimization and is strictly increasing in , either or must be achieved with equality , ensuring that the definition is satisfied at the optimal point . to conclude , our dual is given by minimizing subject to the domain constraint and the epigraph constraints and .this is a standard form sdp as defined in section 4.6 of .the solution is guaranteed to satisfy by weak duality , section 5.2 . theorem [ sdp ] tells us that for any optimization problem of the form we can get an accuracy certificate by solving the dual . 
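A sketch of the reverse-greedy idea is given below. The exact benefit function F and the target decay rate are not fully reproduced above, so the sketch scores each candidate by the increase in the spectral abscissa of diag(β)A − diag(δ) per unit of vaccination cost saved; that scoring rule, and all numerical parameters, are assumptions made for illustration.

```python
import numpy as np

def abscissa(A, beta, delta):
    B_half = np.diag(np.sqrt(beta))
    return float(np.max(np.linalg.eigvalsh(B_half @ A @ B_half - np.diag(delta))))

def reverse_greedy(A, beta_vacc, beta_unvacc, delta, cost, eps=0.0):
    """Start from the fully vaccinated state and repeatedly un-vaccinate the node
    whose removal costs the least increase in the spectral abscissa per unit of
    saved cost, as long as the decay condition (abscissa < -eps) still holds."""
    vaccinated = set(range(A.shape[0]))
    beta = np.array(beta_vacc, dtype=float)
    while True:
        current = abscissa(A, beta, delta)
        best, best_score = None, np.inf
        for j in sorted(vaccinated):
            trial = beta.copy()
            trial[j] = beta_unvacc[j]
            a = abscissa(A, trial, delta)
            if a < -eps:                                  # still stable without vaccinating j
                score = (a - current) / cost[j]
                if score < best_score:
                    best, best_score = j, score
        if best is None:
            break
        vaccinated.discard(best)
        beta[best] = beta_unvacc[best]
    return vaccinated, beta

rng = np.random.default_rng(3)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
beta_unvacc, beta_vacc = np.full(n, 0.3), np.full(n, 0.03)
delta, cost = np.full(n, 1.0), rng.uniform(1.0, 2.0, n)

V, beta = reverse_greedy(A, beta_vacc, beta_unvacc, delta, cost)
print(len(V), "nodes vaccinated, final abscissa =", abscissa(A, beta, delta))
```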
since we do not have a strong duality, we do not expect to be attainable ( i.e , ) .the solution to the dual gives us some insight into the primal optimizers via the threshold solution to , it appears we can deduce the primal optimizers from , but in practice for most nodes , making it impossible to determine .in some cases there are nodes that have not equal to the threshold .these nodes have their optimal action specified by and .this at least allows for a reduction of the dimension of the primal problem which due to its combinatorial form could be a very large improvement .several papers in the literature advocate for vaccination strategies based on popular centrality measures , such as the degree or eigenvector centrality . in this subsection ,we compare our greedy heuristic to vaccination strategies based on centrality measures . in our simulation , we use the adjacency matrix with 247 nodes previously used in subsection [ sub : numerical - results ] and the same values for the parameters , , and for all . in table[ table ] , we include the values of the objective function and the residual value of for each possible value of . in each case , we run the greedy algorithm and the reverse greedy algorithm ( both proposed in section [ sub : greedy - approach ] ) , as well as two previously proposed algorithms based on the degree and the eigenvalue centrality metrics . in the last column of table [ table ] , we also include the upper bound provided by theorem [ sdp ] .observe that our greedy algorithms are always within 10% of the upper bound .furthermore , the reverse greedy algorithm is outperforms the others , specially those based on centrality measures . in fig .[ fig_3 ] , we illustrate the relationship between the outcome of each algorithm and the degrees of the nodes . in the abscissae, we represent degrees in log scale , and in the ordinate we provide the fraction of nodes of a particular degree that are vaccinated in the solution generated by each algorithm .we observe how all four algorithms completely vaccinate the set nodes with degrees beyond a threshold . on the other hand , in the range of intermediate degrees, we observe that degree alone is not sufficient information to decide the vaccination level of a node .in other words , simply vaccinating nodes based on degree does not always provide the best results . in fig .[ fig_4 ] , we illustrate the relationship between the outcome of each algorithm and the eigenvector centrality . in the abscissae , we represent the cumulative fraction of nodes with centrality greater or equal to a given value being vaccinated .we observe how all four algorithms completely vaccinate the set nodes with the highest centralities .however , since the curves in this figure are not monotonically increasing , there must be cases in which lower centrality nodes are vaccinated , but other nodes with higher centrality are left unvaccinated . in other words , vaccinating higher centrality nodes does not always provide the best results .the reason neither degree nor centrality adequately capture the importance of nodes is that the eigenvectors of the matrix change when the set of vaccinated nodes change .the shift in the eigenvectors is a result of the fact that optimal vaccination strategy actually depends on the parameters and , not just the network . with this in mind, we can not expect an optimal solution to arise from an algorithm that depends only on the graph structure . 
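The degree- and centrality-based baselines used in the comparison above can be sketched as follows: the k highest-scoring nodes are fully vaccinated and the resulting decay condition is evaluated. How the published thresholds were chosen is not reproduced here, so the rule of vaccinating a fixed number of top-ranked nodes is an assumption.

```python
import numpy as np

def abscissa(A, beta, delta):
    B_half = np.diag(np.sqrt(beta))
    return float(np.max(np.linalg.eigvalsh(B_half @ A @ B_half - np.diag(delta))))

def vaccinate_top_k(scores, k, beta_vacc, beta_unvacc):
    """Fully vaccinate the k nodes with the highest score; the rest stay unvaccinated."""
    beta = np.array(beta_unvacc, dtype=float)
    top = np.argsort(scores)[::-1][:k]
    beta[top] = np.asarray(beta_vacc)[top]
    return beta

rng = np.random.default_rng(4)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
beta_unvacc, beta_vacc = np.full(n, 0.3), np.full(n, 0.03)
delta = np.full(n, 1.0)

degree = A.sum(axis=1)
centrality = np.abs(np.linalg.eigh(A)[1][:, -1])       # leading eigenvector of A

for k in (5, 10, 15):
    a_deg = abscissa(A, vaccinate_top_k(degree, k, beta_vacc, beta_unvacc), delta)
    a_cen = abscissa(A, vaccinate_top_k(centrality, k, beta_vacc, beta_unvacc), delta)
    print(f"k = {k:2d}   degree: {a_deg:+.3f}   eigenvector centrality: {a_cen:+.3f}")
```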
our algorithms work because they are greedy with respect to this effect .we have studied the problem of controlling the dynamic of the sis epidemic model in an arbitrary contact network by distributing vaccination resources throughout the network .since the spread of an epidemic outbreak is closely related to the eigenvalues of a matrix that depends on the network structure and the parameters of the model , we can formulate our control problem as a spectral optimization problem in terms of semidefinite constraints . in the fractional vaccination case , where intermediate level of vaccination are allowed ,we have proposed a convex optimization framework to efficiently find the optimal allocation of vaccines when the function representing the vaccination cost satisfies certain convexity assumptions . in the combinatorial vaccination problem , where individuals are not allowed to be partially vaccinated ,we propose a greedy approach with quality guarantees based on lagrangian duality .we illustrate our results with numerical simulations in a real online social network .
we consider the problem of controlling the propagation of an epidemic outbreak in an arbitrary contact network by distributing vaccination resources throughout the network . we analyze a networked version of the susceptible - infected - susceptible ( sis ) epidemic model in which individuals in the network present different levels of susceptibility to the epidemic . in this context , controlling the spread of an epidemic outbreak can be expressed as a spectral condition involving the eigenvalues of a matrix that depends on the network structure and the parameters of the model . we study the problem of finding the optimal distribution of vaccines throughout the network to control the spread of an outbreak . we propose a convex framework to find the cost - optimal distribution of vaccination resources when different levels of vaccination are allowed , and a greedy approach with quality guarantees for the case of all - or - nothing vaccination . we illustrate our approaches with numerical simulations on a real social network .
dana scott introduced domains more than thirty years ago as an appropriate mathematical universe for the semantics of programming languages .a domain is a partially ordered set with intrinsic notions of completeness and approximation .recently , the authors have proven the existence of a natural domain theoretic structure in probability theory and quantum mechanics . the way to understandthis structure is with the aid of the concept _ partiality . _ _ _ to illustrate , in the domain , the powerset of the natural numbers ordered by inclusion , a finite set will be partial , while the set will be total . in the domain , the domain of bit streams with the prefix order , a finite string is partial , the infinite strings are total . in the domain , the collection of compact intervals ] with rational is partial , while a one point interval \downarrow\downarrow ] and [l]{\raisebox{.4ex } { }}}}}x=\{a\in d : x\ll a\} \downarrow\downarrow ] contains a directed set with supremum for all .a poset is _ continuous _ if it has a basis .a poset is -_continuous _ if it has a countable basis ._ _ _ _ _ _ _ a _ continuous dcpo _ is a continuous poset which is also a dcpo ._ _ _ the collection of functions ordered by extension where is the cardinality of , is an -algebraic dcpo : _ * for directed , * * is a countable basis for * the least element is the unique with the next example is due to scott . _ the collection of compact intervals of the real line :a , b\in{{\mathbb r}}\ \&\ a\leq b\}\ ] ] ordered under reverse inclusion \sqsubseteq[c , d]\leftrightarrow[c , d]\subseteq[a , b]\ ] ] is an -continuous dcpo : _* for directed , , * , and * :p , q\in{{\mathbb q}}\ \&\p\leq q\} \uparrow\uparrow ] is a basis for the scott topology on a continuous dcpo .the last result also holds for continuous posets ._ a basic open set in is [l]{\raisebox{.4ex } { }}}}}[a , b]=\{x\in{{\mathbf i}\,\!{\mathbb r}}:x\subseteq(a , b)\}\ ] ] while a basic open set in is for finite . _ with the algebraic domains , we come closest to the ideal of ` finite approximation . '_ an element of a poset is _ compact _ if .a poset is _ algebraic _ if its compact elements form a basis ; it is -_algebraic _ if it has a countable basis of compact elements ._ _ _ _ _ _ _ _ the powerset of the naturals ordered by inclusion is an -algebraic dcpo : _ * for directed set , , * and * is a countable basis for the next domain is of central importance in recursion theory ( odifreddi ) . _the set of partial mappings on the naturals =\{\:f\:|\:f:{{\mathbb n}}\rightharpoonup{{\mathbb n}}\mbox { is a partial map}\}\ ] ] ordered by extension is an -algebraic dcpo : _ * for directed set ] is a countable basis for . ] .,\mu) ] is the length of a string . the upper space of a locally compact metric space with in each case , we have we have previously seen how order can implicitly capture topology . with the addition of measurement , we can also describe rates of change .we restrict ourselves to an extremely brief discussion of this . _the _ topology _ on a continuous dcpo has as a basis all sets of the form [l]{\raisebox{.4ex } { }}}}}x\:\cap\downarrow\!\!y ] interestingly , the informatic derivative on is _ equivalent _ to the classical derivative for maps despite the fact that it strictly generalizes it . _ _we now consider the domain of dimensional mixed states in their spectral order .this order makes use of a simpler domain of dimensional classical states in their bayesian order . 
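To fix the definitions of the interval domain IR and of its natural measurement in an executable form, here is a small sketch: the order is reverse inclusion, x ≪ y holds when y lies in the interior of x, directed suprema are intersections, and μ[a, b] = b − a. The convergence example at the end is an illustrative choice.

```python
import math
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Interval:
    """An element [a, b] of the interval domain IR, ordered by reverse inclusion."""
    a: float
    b: float

    def leq(self, other: "Interval") -> bool:
        """self ⊑ other  iff  other is contained in self."""
        return self.a <= other.a and other.b <= self.b

    def way_below(self, other: "Interval") -> bool:
        """self ≪ other  iff  other is contained in the interior (a, b)."""
        return self.a < other.a and other.b < self.b

    def mu(self) -> float:
        """The natural measurement: the length of the interval."""
        return self.b - self.a

def sup(directed: Iterable["Interval"]) -> Interval:
    """Supremum of a directed family of intervals: their intersection."""
    xs = list(directed)
    return Interval(max(x.a for x in xs), min(x.b for x in xs))

chain = [Interval(math.pi - 10.0 ** -k, math.pi + 10.0 ** -k) for k in range(8)]
top = sup(chain)
print(top, top.mu())        # mu tends to 0 exactly on the maximal, one-point elements
print(chain[2].way_below(chain[5]), chain[5].way_below(chain[2]))   # True, False
```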
after introducing these domains ,we show how they can be used to provide order theoretic derivations of the classical and quantum logics .natural measurements in these cases are the entropy functions of shannon and von neumann .thus , and fall right into line with the examples of the last section . despite this , these domains are not continuous .they do possess a notion of approximation , though , which we discuss in the next section ._ let .the _ classical states _ are ^n:\sum_{i=1}^nx_i=1\right\}.\ ] ] a classical state is _ pure _ when for some ; we denote such a state by . __ _ _ _ pure states are the actual states a system can be in , while general mixed states and are epistemic entities .if we know and by some means determine that outcome is not possible , our knowledge improves to where is obtained by first removing from and then renormalizing . the partial mappings which result , with dom, are called the _ bayesian projections _ and lead one directly to the following relation on classical states . _ for , for , the relation on is called the _ bayesian order . _ _ _ _ to motivate ( [ inductiverule ] ) , if , then observer knows less than observer . if something transpires which enables each observer to rule out exactly as a possible state of the system , then the first now knows , while the second knows .but since each observer s knowledge has increased by the same amount , the first must _ still _ know less than the second : _ _ the order on two states ( [ twostates ] ) is derived from the graph of shannon entropy on ( left ) as follows : ( 60,40 ) ( 0,0)(20,60)(40,0 ) ( 0,0)(1,0)50 ( 0,0)(0,1)40 ( -10,35) ( 45,5) ( 60,43 ) ( 0,40)(20,-20)(40,40 ) ( -10,45) ( 32,45) ( 17.2,0) ( 40,43 ) ( 0,40)(3,-5)20 ( 20,6.5)(3,5)20 ( -10,45) ( 32,45) ( 16.9,-1) the pictures above yield a canonical order on : [ mixing ] there is a unique partial order on which has and satisfies the mixing law \ \rightarrow\ x\sqsubseteq(1-p)x+py\sqsubseteq y\,.\ ] ] it is the bayesian order on classical two states . the _ least element _ in a posetis denoted , when it exists .a more in depth derivation of the order is in . _ _ is a dcpo with maximal elements and least element .the bayesian order can also be described in a more direct manner , the _ symmetric characterization ._ let denote the group of permutations on and denote the collection of _ monotone _ classical states ._ _ _ _ [ classicalsymmetries ] for , we have iff there is a permutation such that and for all with . thus , the bayesian order is order isomorphic to many copies of identified along their common boundaries .this fact , together with the pictures of and at representative states in figure 1 , will give the reader a good feel for the geometric nature of the bayesian order .let denote an -dimensional complex hilbert space with specified inner product _ a _ quantum state _ is a density operator , i.e. , a self - adjoint , positive , linear operator with the quantum states on are denoted ._ _ _ _ a quantum state on is _ pure _ if the set of pure states is denoted .they are in bijective correspondence with the one dimensional subspaces of _ _ _ classical states are distributions on the set of pure states by gleason s theorem , an analogous result holds for quantum states : density operators encode distributions on the set of pure states up to equivalent behavior under measurements . 
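The Bayesian order can be explored numerically. The sketch below compares monotone (descending) classical states using the ratio condition x_i y_{i+1} ≤ x_{i+1} y_i; this is one reading of the symmetric characterisation, stated here as an assumption, since the displayed inequality is not reproduced in the text above. Shannon entropy, used later as a measurement, is included to show that it decreases as states move up the order.

```python
import numpy as np

def shannon_entropy(x):
    x = np.asarray(x, dtype=float)
    nz = x[x > 0]
    return float(-(nz * np.log2(nz)).sum())

def bayesian_leq_monotone(x, y, tol=1e-12):
    """x ⊑ y for monotone (descending) classical states, using the ratio condition
    x_i * y_{i+1} <= x_{i+1} * y_i for all i (an assumed reading of the symmetric
    characterisation above)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return bool(np.all(x[:-1] * y[1:] <= x[1:] * y[:-1] + tol))

bottom = [1 / 3, 1 / 3, 1 / 3]      # the least element of the three-dimensional classical states
mid = [0.5, 0.3, 0.2]
pure = [1.0, 0.0, 0.0]              # a pure (maximal) state

for lo, hi in [(bottom, mid), (mid, pure), (bottom, pure)]:
    assert bayesian_leq_monotone(lo, hi)
    assert shannon_entropy(lo) >= shannon_entropy(hi)   # entropy decreases up the order
print("ok")
```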
_a _ quantum observable _ is a self - adjoint linear operator _ _ _ if our knowledge about the state of the system is represented by density operator , then quantum mechanics predicts the probability that a measurement of observable yields the value .it is where is the projection corresponding to eigenvalue and is its associated eigenspace in the _ spectral representation _ of ._ _ _ let be an observable on with for a quantum state on , _ for the rest of the paper, we assume that all observables have for our purposes it is enough to assume ; the set is chosen for the sake of aesthetics .intuitively , then , is an experiment on a system which yields one of different outcomes ; if our a priori knowledge about the state of the system is , then our knowledge about what the result of experiment _ will be _ is thus , determines our ability to _ predict _ the result of the experiment . _ _ _ _ so what does it mean to say that we have more information about the system when we have than when we have ?it could mean that there is an experiment which ( a ) serves as a physical realization of the knowledge each state imparts to us , and ( b ) that we have a better chance of predicting the result of from state than we do from state .formally , ( a ) means that and , which is equivalent to requiring =0 ] , where =ab - ba ] and in . _ this is called the _ spectral order _ on quantum states . _ _ is a dcpo with maximal elements and least element , where is the identity matrix .there is one case where the spectral order can be described in an elementary manner ._ as is well - known , the density operators can be represented as points on the unit ball in for example , the origin corresponds to the completely mixed state , while the points on the surface of the sphere describe the pure states .the order on then amounts to the following : iff the line from the origin to passes through . _ like the bayesian order on , the spectral order on also be characterized in terms of symmetries and projections . in its symmetric formulation ,_ unitary operators _ on take the place of permutations on , while the projective formulation of shows that each classical projection is actually the restriction of a special quantum ` projection ' with ._ _ the logics of birkhoff and von neumann consist of the propositions one can make about a physical system .each proposition takes the form `` the value of observable is contained in '' for classical systems , the logic is , while for quantum systems it is , the lattice of ( closed ) subspaces of in each case , implication of propositions is captured by inclusion , and a fundamental distinction between classical and quantum that there are pairs of quantum observables whose exact values can not be simultaneously measured at a single moment in time finds lattice theoretic expression : is distributive ; is not .we now establish the relevance of the domains and to theoretical physics : the classical and quantum logics can be _ derived _ from the bayesian and spectral orders using the _ same _ order theoretic technique ._ _ _ _ _ an element of a dcpo is _ irreducible _ when the set of irreducible elements in is written _ _ _ the order dual of a poset is written ; its order is for , the classical lattices arise as and the quantum lattices arise as it is worth pointing out that these logics consist exactly of the states traced out by the motion of a searching process on each of the respective domains . 
to illustrate , let for denote the result of first applying the bayesian projection to a state , and then reinserting a zero in place of the element removed .now , beginning with , apply one of the .this projects away a single outcome from , leaving us with a new state . for the new stateobtained , project away another single outcome ; after iterations , this process terminates with a pure state , and all the intermediate states comprise a path from to .now imagine all the possible paths from to a pure state which arise in this manner .this set of states is exactly ( see figure 2 ) .the formal notion of information content studied in measurement is broad enough in scope to capture shannon s idea from information theory , as well as von neumann s conception of entropy from quantum mechanics .shannon entropy is a measurement of type a more subtle example of a measurement on classical states is the retraction which rearranges the probabilities in a classical state into descending order .von neumann entropy is a measurement of type another natural measurement on is the map which assigns to a quantum state its spectrum rearranged into descending order .it is an important link between classical and quantum information theory . by combining the quantitative and qualitative aspects of information, we obtain a highly effective method for solving a wide range of problems in the sciences . as an example , consider the problem of _ rigorously _ proving the statement `` there is more information in the quantum than in the classical . '' _ _ the first step is to think carefully about why we say that the classical is contained in the quantum ; one reason is that for any observable , we have an isomorphism =0\}\simeq\delta^n\ ] ] between the spectral and bayesian orders .that is , each classical state can be assigned to a quantum state in such a way that _ information is conserved _ _ : _ _ _ + + + + + + .this realization , that both the qualitative _ and _ the quantitative characteristics of information are preserved in passing from the classical to the quantum , solves the problem . __ let . then 1 ._ there is an order embedding with _ _ _ 2 ._ for any , there is no order embedding with _part ( ii ) is true for any pair of measurements and .the proof is fun : if ( ii ) is false , then restricts to an injection of into , using and .but no such injection can actually exist : is infinite , is not .we have already mentioned that the domains and are not continuous . the easiest way to see why is to take note of the fact that the bayesian order on is _ degenerative _ _ :if , then using this property , it is easy to show that the only approximation of a state like is by construct an increasing sequence whose last two components are equal such that .nevertheless , these domains do possess a notion of approximation ._ _ _ _ let be a dcpo . for , we write iff for all directed sets , the _ approximations _ of are [l]{\raisebox{-.4ex } { }}}}}x:=\{y\in d : y\ll x\},\ ] ] and is called _ exact _ if [l]{\raisebox{-.4ex } { }}}}}x ] from to , it is scott continuous with for .the analogous result holds for _ _ _ an element is a _ coordinate _ if either or [l]{\raisebox{-.4ex } { }}}}}\mathrm{ir}(d).$ ] _ _ _ in the case of and , a coordinate is either a _ proposition _ or an _ approximation of a proposition ._ equivalently , a coordinate is a state on one of the lines joining to a proposition . _ _ _ _ each state is the supremum of coordinates . 
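As an aside on the entropy-conserving assignment used above, its quantitative half can be checked directly: embedding a classical state x as a density operator that is diagonal in the eigenbasis of a fixed observable leaves von Neumann entropy equal to Shannon entropy. The basis choice and function names below are illustrative only.

```python
# Sketch of the entropy-conserving embedding of classical into quantum states.
import numpy as np

def shannon_entropy(x):
    x = np.asarray(x, dtype=float)
    nz = x[x > 1e-12]
    return float(-np.sum(nz * np.log2(nz)))

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    nz = evals[evals > 1e-12]
    return float(-np.sum(nz * np.log2(nz)))

def embed(x, basis=None):
    """Send x in Delta^n to the density operator diagonal in `basis` with spectrum x."""
    x = np.asarray(x, dtype=float)
    if basis is None:
        basis = np.eye(len(x), dtype=complex)      # eigenbasis of some fixed observable
    return basis @ np.diag(x).astype(complex) @ basis.conj().T

if __name__ == "__main__":
    x = np.array([0.5, 0.3, 0.2])
    print(shannon_entropy(x), von_neumann_entropy(embed(x)))    # equal
    F = np.fft.fft(np.eye(3)) / np.sqrt(3)                      # any other orthonormal basis
    print(von_neumann_entropy(embed(x, F)))                     # still equal
```

The qualitative half, that this assignment is an order embedding of the Bayesian order into the spectral order, is part (1) of the result quoted above.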
the result above , proven in , holds for both and .we do not expect all domains to have this property , but the role of partiality in defining ` coordinate ' as either an irreducible or an _ approximation _ of an irreducible may be worth taking note of in trying to develop a general and useful set of axioms for the description of partiality . ideally , these axioms will_ _ * generalize continuous domains , * include and as examples , * aid in the description of a fundamental topology , which will be equivalent to the scott topology in the case of continuous dcpo s , and * be relatable to implicit uses of the notion in physics , such as ` dynamics ' ( i.e. , causality relations on light cones ) .the interested reader will notice that exact dcpo s definitely satisfy the first two criteria .we do not know about the other two ( or even what the last one may mean ) .nevertheless , we hope this paper will serve as a useful guide for those intent on looking .99 b. coecke . _ entropic geometry from logic . _ proceedings of mathematical foundations of programming semantics 19 , electronic notes in theoretical computer science , vol .83 , 2003 . `arxiv : quant - ph/0212065 ` b. coecke and k. martin . _ a partial order on classical and quantum states . _ oxford university computing laboratory , research report prg - rr-02 - 07 , august 2002 . ` http://web.comlab.ox.ac.uk/oucl/ publications / tr / rr-02 - 07.html `
we revisit the standard axioms of domain theory with emphasis on their relation to the concept of partiality , explain how this idea arises naturally in probability theory and quantum mechanics , and then search for a mathematical setting capable of providing a satisfactory unification of the two .
and world - wide communications are changing computing concepts as were established some years ago . in the past, information was stored in trusted hosts where data was managed . in that case , it is easy to see that information security depends on the protection of the hosts themselves .protection of hosts should be considered as a part of the operating systems design and not as an add - on .in fact , to ignore last approach opens a wide variety of security holes in modern computer systems . in any case , hosts protection is more advanced that mobile code protection at present time .distributed computing requires information to be managed in untrusted and sometimes unknown hosts in the network ; consequently , new data management models must be developed for distributed environments .some protection schemes based in the use of partial result authentication codes ( pracs ) ensures a _ perfect forward integrity _ but does not assures backward integrity . in these cases ,the mobile agent carries the keys that will be used to protect the messages .each key could be applied to protect only one message being removed from the agent data area after using .even forward integrity could be compromised .for example suppose that an agent follows a route , where both and , with and , are malicious hosts .the first hostile node in the route of the agent , that is , could provide a copy of the keys not removed from the mobile agent to the second malicious host , .the second host could use this information to counterfeit data provided by the hosts that apply these keys .this fact makes all the hosts in the subset vulnerable .the same problem will happen if the agent returns to the first malicious host later , showing that backward integrity can not be assured in any case .the term _ message _ , as applied in this paper , stands for each block of information provided by the remote hosts to either other hosts or mobile agents . the word _ field _ identifies a chunk of the message itself .we want to note some differences that can be found between mobile agents , mobile code and intelligent agents , to show in detail the problem that could be solved by using the threat described in our work : * _ mobile agents . _a mobile agent is a software object that have a code area and a data part .both code and data areas will convey from a host to another one but the execution thread will not be preserved .mobile agents could be easily implemented by using serialization in programming languages as java . *_ mobile code ._ the most important difference between mobile agents and mobile code is that the latter allows the execution thread to be preserved when the agent goes from one host to the next one . as the execution threadwill be changed in each host visited by the mobile code , to protect this area is not easy . * _ intelligent agents . 
_an intelligent agent is a software object that have the ability of process information retrieved autonomously .these agents does not require to be mobile .intelligent agents are an active field of study in _ artificial intelligence _( ai ) at present .a given agent could be classified in more than one of the groups described above .for example , a mobile agent could be programmed in a way that allows it to decide the route that it will follow by evaluating the information provided by the _ peer _ ( remote ) hosts , making it both a mobile and an intelligent agent simultaneously .the threat we propose in our work allows an agent to be protected against both malicious hosts and other agents . these hosts and agentscould try to make unauthorized modifications on either the code area or the data space of the agent or even to remove the information provided by other hosts .our goal is to protect code and data areas against both counterfeit and erasing .our technique is based in the use of standard public - key encryption algorithms also known as asymmetric ciphers instead of symmetric ciphers .we will not propose nor recommend the use of a specific cipher over others in our work .we are not developing a protection algorithm for the execution thread .the execution thread is changed by each host where the mobile code arrives . as a consequence, it can not be easily protected without using a logging system . in our opinion , encryption techniquescould not be used to provide full protection of mobile code .in this section , we will introduce the notation used in our work .this notation will be applied to describe digital signature and encryption of data in both the agent server and the peer hosts . to describe the routes followed by the agents we will define the set , where and ; this set will denote the hosts followed by the mobile agent released by a given agent server in its -th route . in this set , , where and , determines the -th host in the -th route followed by the agent sent by the server . _digital signature_ digital signature of data can be used to protect the information provided against counterfeiting and erasing by allowing , at the same time , to be read and authenticated by other hosts .it could be useful when agents are used in negotiation processes between hosts . in this case, only the private / public key pairs related with the remote hosts are used to protect the data stored in the agents allowing the information to be read and authenticated by any host or agent without a knowledge of the private - key used to digitally sign the message .we will denote the digital signature of any given message using the expression : \enspace , \ ] ] where and are the plain - text message and its signature respectively , identifies the private - key associated with the host , and stands for the digital signature algorithm applied to protect the information provided by the peer hosts against unauthorized modifications .the message can be authenticated by using the public - key associated with the -th host in the agent route . that public - key may be available to any host .the security of the host is not compromised by storing a copy of that public - key in a public - key server . 
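As a concrete illustration of this signing step: the paper deliberately does not prescribe a particular asymmetric cipher, so the sketch below uses RSA-PSS with SHA-256 from the Python `cryptography` package purely as a stand-in for the abstract F_pri / F_pub pair; key sizes, variable names, and the example message are our own choices.

```python
# Minimal sketch of the digital-signature step s = F_pri_H[M] and its check.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# key pair of the i-th host in the r-th route; the public part is published
host_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
host_pub = host_priv.public_key()

message = b"m_{i,j}^{r}: data block provided by host H_i^r to the agent"
signature = host_priv.sign(message, PSS, hashes.SHA256())      # s = F_pri_H[M]

# any other host or agent can authenticate the block with the published public key
try:
    host_pub.verify(signature, message, PSS, hashes.SHA256())
    print("signature valid: message authenticated")
except InvalidSignature:
    print("signature invalid: message was tampered with")
```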
as we will shown below , the use of _ certification authorities _ ( cas ) to authenticate the public - keys itself is highly recommended .we must apply = f_{{\rm pub}_{{\rm h}_{i}^{r}}}^{-1 } \left [ f_{{\rm pri}_{{\rm h}_{i}^{r } } } \left [ { \rm m}_{i , j}^{r } \right ] \right ] \nonumber\enspace , \ ] ] where and are the plain - text message and the digital signature of again , stands for the public - key associated with the host and identifies the digital signature authentication algorithm . _data encryption_ a mobile agent based infrastructure would require an additional security level .suppose that a server drops an agent that will retrieve information that must be covered to any host in the network except the server that has released the agent itself . in this case ,data encryption should be used to hide the information stored in the agent data area .data encryption requires the use of an additional key pair , the private / public key pair related with the agent server .encryption of data requires the use of the public - key provided by the agent server to the remote hosts as a part of the mobile agent itself ( ) , and the private - key associated with the remote host that provides information to the agent. we will show below that the private / public key pair related with the server that has released the agent can not be counterfeited without invalidating the agent itself because it is a part of the code area .we will define the encryption process as : \right ] \enspace .\label{eq : encryption}\ ] ] in this case , and are the plain - text message and its cipher - text respectively .the symbols and in equation ( [ eq : encryption ] ) stand for the public - key related to the agent server and the private - key associated with the -th host in the -th route followed by an agent dropped by the server respectively . in order to decrypt the cipher - text we must use the private - key associated with the agent server ( ) , and the public - key provided by the peer host that has encrypted the message : \right ] \label{eq : decryption } \\ & \!\!\!=\!\!\ ! & f_{{\rmpub}_{{\rm h}_{i}^{r}}}^{-1 } \left [ f_{{\rm pri}_{\rm s}}^{-1 } \left [ f_{{\rm pub}_{\rm s } } \left [ f_{{\rm pri}_{{\rm h}_{i}^{r } } } \left [ { \rm m}_{i , j}^{r } \right ] \right ] \right ] \right ] \nonumber\enspace .\end{aligned}\ ] ] the elements that appear in equation ( [ eq : decryption ] ) must be interpreted in the same way as shown in other equations in this section . 
in this case , and are , respectively , the private and the public keys for the agent server ; and are the private and the public keys for the remote host .the message that has been covered by using public - key cryptography is , that is the -th message provided by the -th host in the route followed by the agent , and the cipher - text itself is .classical protection schemes do not allow a mobile agent to protect its own code area against unauthorized modification easily .suppose that a server digitally signs the code of an agent before dropping it .this server must provide copies of the public - key used to other hosts .this public - key is required to authenticate the code area itself .this can be done easily by providing a copy of this key to a key - server or by using a ca .obviously , the agent should be instructed to get this key showing at least the address of both the server that has released it and the host that stores a copy of the key .at this moment a malicious host , let us say , could change the agent code area and sign it by using its own private / public key pair .it is not difficult to prove that this modification will not be discovered by the remote hosts if the code is changed in such a way that the agent points to the new public - key .the hostile host only needs to assure that the signed code area provided by the agent server is recovered before the agent returns to the server that has dropped it .it is easy to see that code protection could not depend on classical cryptography even if the keys used to authenticate this area are certified and are provided by external trusted authorities . we need to develop a way to link the data provided by the peer hosts with the code part of the agent at the same time. this will allow us to protect data and code areas simultaneously .other protection threats have been proposed in recent years .for example , the use of both code and data areas mess up techniques as described in . in this reference , hohl recommends the use of variable names that does not means anything .he also proposed that the code should not be modularized ( _ i.e. _ , it must be written without using subroutines ) and to choose a data representation that makes the program difficult to understand .this author proposed the use of _ variable recomposition _ techniques , conversion of _ compile - time control flow elements _ into run - time data dependent jumps and the _ insertion of dead code _ , that is code that will not be executed when the agent is running , into the agents .these techniques are based in mixing up the contents of the variables and creating new variables that contains a few bits of data of some of the original variables .these are recovered by changing the way the code of the agent handles the access to the variables .another alternative could be to develop a secure infrastructure for mobile agents . the use of _ _ encrypted functions _ _ could be obtained without any explicit knowledge of providing ] stands for the digital signature of the code area .the code area includes both the agent code and an identification number ( i d ) for the agent .therefore , this identification number is unique because it is generated from the agent server identificator , which is unique in the network ( for example the ip - address in ipv4 or ipv6 format of the agent server itself ) , and an agent number , which is unique for that server . 
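Below is a sketch of how such an identification number and the signed code area can be fingerprinted and bound to every message the agent later collects; the precise construction is developed in the following paragraphs. The paper builds this link from a CRC of the signed code area and of the server's public key; SHA-256 digests are substituted for the CRCs here, and the ID layout and placeholder byte strings are illustrative only.

```python
# Illustrative fingerprint field tying collected data to one specific agent.
import hashlib

def agent_id(server_address: str, agent_number: int) -> bytes:
    # unique network-wide: a unique server address plus a counter unique on that server
    return f"{server_address}#{agent_number:08d}".encode()

def fingerprint(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()           # stand-in for the paper's CRC

def link_field(signed_code_area: bytes, server_public_key: bytes) -> bytes:
    """Field attached to every message: binds it to this code area and this server key."""
    return fingerprint(signed_code_area) + fingerprint(server_public_key)

if __name__ == "__main__":
    code_area = b"<agent bytecode>" + agent_id("198.51.100.7", 42)
    signed_code_area = code_area + b"<signature by the agent server>"   # placeholder
    field = link_field(signed_code_area, b"<server public key bytes>")
    print(len(field), field.hex()[:32], "...")
    # a host that swaps the code area or the server key cannot reproduce `field`
    # without invalidating every message the agent already carries
```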
in our case, we will obtain .obviously , ] .it is possible to do it and , at the same time , to hide the message in such a way that only the authorized host ( the agent server ) could decrypt it .we propose to partially encrypt a digitally signed message using : \right ] \enspace ; \ ] ] assuring that the code area can not be changed by a malicious host because each one can authenticate the crc of the digital signature , provided by the agent server using ( [ eq : crc ] ) , and the public - key of that server simultaneously .this information can be verified by any host but the message can only be decrypted using the private - key of the agent server , .we propose to apply the next set of equations to check the code and data areas of the agent and decrypt the message : & \!\!\!=\!\!\ ! & f_{{\rmpub}_{{\rm h}_{i}^{r}}}^{-1 } \left [ { \rm c}_{i , j}^{r } \right ] \label{eq : certify}\\ { \rm m}_{i , j}^{r } & \!\!\!=\!\!\ ! & f_{{\rmpri}_{\rm s}}^{-1 } \left [ f_{{\rm pub}_{\rm s } } \left [ { \rm m}_{i , j}^{r } \right ] \right ] \enspace .\label{eq : addec}\end{aligned}\ ] ] the former equation ( [ eq : certify ] ) can be used by any host because the public - key related with the -th host in the -th route followed by an agent sent by the server ( ) is available publically to all the hosts that need it .this equation will be required to check data and code integrity at the same time .equation ( [ eq : addec ] ) is applied by the agent server using its own private - key , , to decrypt the information provided by the host , after checking the cipher - text by applying ( [ eq : certify ] ) .one of the main goals of our work is to provide a threat that allows mobile agents to be protected against attacks like those described in section v - a .we propose to use public - key ciphers , also known as asymmetric cryptosystems , instead of symmetric ciphers because the latter allows a simplified key management in distributed environments .public - keys can be shared between hosts in a network without requiring secure communication channels like these obtained using , for example , the _ transport layer security _ ( tls ) protocol .detailed information about the tls protocol can be found in .some important requeriments must be considered in the development of a public - key propagation infrastructure for mobile agents : * _ certification authorities ._ it is easy to see that uncertified public - keys can not be trusted .it is not a good practice to send keys directly to the servers that need them .we need a network infrastructure that allows the nodes to assure what host owns each private / public key pair .for example middleman attack , the greatest known vulnerability of public - key based ciphers , can be avoided by using a trusted third party to verify and sign the keys transmitted over the network . * _ non - interactive protocol . _ as pointed out it a security model for mobile agents should conceive protocols that require minimal interaction between the agent and the server that has sent it .the server may want to go off - line , consequently , the public - key should be provided by an independent host .our threat allows the public - key related with the agent server ( ) to be provided as a part of the agent .our threat to protect mobile agents offers some important advantages too . * _ secure communication channels are not required ._ this is a common advantage of public - key cryptography . 
as only public - keys are transmitted over the network untrusted communication channelscan be established to share the keys .these keys can not be used to falsify information or decrypt data provided to the agents . *_ we do not need to know what host owns each public - key ._ obviously , this fact is only true if different access privileges are not assigned to each agent in function of the server that has released it .if different access permissions are required cas must be used to authenticate the keys provided to the remote hosts and to assign the right access privileges to each agent server . certified public - keysare required even if different access permissions are not assigned to mobile agents in function of the server that has dropped it .in fact , each peer host must identify other hosts in the network in soon a way that it does not allows host impersonation techniques . trustedthird parties are needed to avoid well known threats like the middleman attack .we must consider that changing the public - keys carried by mobile agents the public - keys associated with the agents servers will invalidate data retrieved by the agents . if the information stored in the mobile agent data area is removed from the agent when this public - key is falsified other hosts will not have a way to determine that the agent have been modified without authorization .but this fact will be discovered by the agent server after the agent return .these keys does not require to be authenticated using a ca .changing public - key for a given agent must be avoided once they have been provided to the _ route servers _ ( rss ) .as we noted in section iii , both code and data areas can be protected against counterfeiting and erasing by malicious hosts and agents .this can be achieved by adding a field to each message retrieved by a mobile agent as presented in ( [ eq : crc ] ) .this field , , will provide information about both the code part of the agent and the public - key of its originating server .this field will be stored in such a way that it does not allow changing the code of the agent without invalidating it .each agent have its own i d to avoid the possibility of overwriting the information provided by peer hosts with old data retrieved by other agents sent by the same server .this field is stored as a part of the code area . as a consequence , \right]$ ] changeswhen new agents are dropped .even obsolete information provided to the same agent in the past can not be used to cover new data .each host must generate a field , we called , unique for each message .all these fields should be sent to the rss : where is the agent identification number mentioned above .the field must be sent to rss digitally signed by applying : \enspace , \ ] ] using the private / public key pair for the host that provides that information .any host can check each message provided by by using the fields , where and , stored in .those fields must be authenticated by using : = f_{{\rm pub}_{{\rm h}_{i}^{r}}}^{-1 } \left [ f_{{\rm pri}_{{\rm h}_{i}^{r } } } \left [ { \rm f}_{i}^{r } \right ] \right ] \enspace .\ ] ] to assure the integrity of the message each rs should send a random message to the host that wants to provide a new ( or an updated ) message .this random message must be digitally signed and returned to the rs that released it .the random message signature must be checked before the rs accepts . 
in other case , a malicious host can provide an obsolete message to the rss overwriting the mobile agent data area with old information corresponding to the false message simultaneously .in this section we classify the attacks that is possible to try against the mobile agents and other hosts in the network .we show how our protection threat allows us to protect the agents and , in some cases , even remote hosts against these attacks .the main goal of our investigation is to protect the mobile agents against both malicious hosts and other agents that can counterfeit the code part and/or data areas of the agents .these hostile agents and hosts can try to remove information carried by the agents too .as noted , these attacks could arrive from both other agents and the hosts where the agents are stored . in both cases, we will protect the agent using the same threat .as we shown in section iii the agent code area can be protected by adding two crcs to each message provided by the peer hosts .these crcs provides information that allows peer hosts to authenticate both the public - key associated with the agent server and the digital signature of the code area of the agent , that has been provided by the agent server itself .these fields must be matched with each message stored in the data area by the hosts followed in the agent trip , as appears in the set . at last, each host authenticates the code area of the agent too before running it using that digital signature previously checked . as boththe crc related with the digital signature of the code area and the crc for the public - key of the agent server can not be changed during agent trip , but the latter is unique for a given agent , data area is protected at the same time . as a consequence , the attacks described below can be avoided .the attacks against the information carried by mobile agents can be classified in three groups : ( _ i _ ) attacks trying to erase data carried by the agents ( also known as a mobile agent `` brainwashing '' in bibliography ) , ( _ ii _ ) attacks trying to falsify data provided by other hosts and ( _ iii _ ) attacks trying to uncover non - public information carried by the agents . _ removing information_ to avoid a mobile agent `` brainwash '' each host must provide information about the number of messages stored in a particular agent to the rss as we described in section iv .it is easy to see that all the information needed to protect the data area can not be stored in the agent .suppose for example that a mobile agent carries a set of signed messages in its data area .the set in ( [ eq : dasgn ] ) stands for the signed data area of the -th agent released by a server .all the messages are signed in a way that only the hosts that has provided these messages can change its contents . if the rss do not provides information about the number of messages given by each host to the agent server or , more generally to other hosts that requests it , any hostile node ( any malicious host in the network ) can remove a message , let us say where identifies the host that provides the message removed and stands for the -th message provided by that host . in this case , the set of messages carried by the agent , , is changed to without invalidating the data area . in this case , the set is the falsified data area of the agent .the same problem happens when encryption is used if the number of messages carried by a mobile agent is not provided to the rss in any way . 
in this casethe encrypted data area of the agent : can be modified by removing one of the cipher - texts provided by the remote hosts .for example , the cipher - text that corresponds to the -th message provided by the -th host in the agent route can be erased by changing the agent data area to : even if information about the number of messages carried by the agent is provided in a way that an hostile peer host can not change it , data area can be altered by overwriting it with a _ bit - copy _ of an old data area . to avoid attacks based in the techniques described abovewe propose to send information about the number of messages provided to the agent to the rss shown in the code area of the mobile agent itself .these hosts can not be changed without invalidating the agent itself .a copy of the fields , as shown in equation ( [ eq : crc ] ) , is all the information needed to manage it ; in fact , these fields are required to authenticate data provided by each host visited by the mobile agent as shown above ._ counterfeit of data_ even if data have been digitally signed or encrypted information provided can be falsified .as noted above , data area can be protected against `` brainwashing '' by storing information about the amount of messages provided by each host visited by the agent in separated hosts .we need a way to protect the information provided by peer hosts to the agent against being overwritten with old signed data provided by those hosts in the past . in section iiiwe propose to link data provided as a part of the agent , the fields in as shown in ( [ eq : crc ] ) , with each messages retrieved by the agent .if this field is not included any hostile host can change the data area of the agent as appears in ( [ eq : dasgn ] ) , overwriting a valid signed message with an old message signed by the same host using the same private / public key pair , let us say : the same problem happens with encrypted messages if the field , obtained by applying ( [ eq : crc ] ) , is not provided as a part of the messages . in this case , the encrypted data area of the mobile agent in ( [ eq : daencr ] ) can be counterfeited by changing one of the cipher - texts provided by the remote hosts , for example the message can be changed to : to avoid information provided by peer hosts to be overwritten by using a bit - copy with old signed data obtained in the same trip we propose to add another field to .this field is introduced in section iii .each host can provide a field in each message .we denoted this field as .this field is changed when the remote host wants to sign or encrypt a new message for the agent .this field will be provided to the rss and must be matched against the copies stored in by each host that wants to check data and code integrity .each host visited by the agent provides a set of fields , where to the rss as presented in ( [ eq : idmess ] ) . _cryptanalysis_ in our work we propose to protect data provided by using standard cryptographic techniques that can be attacked by using cryptanalysis .we are not making assumptions about the encryption algorithms used to cover data nor the keys length that may vary in function of the security requirements .as noted in , the fact that public - key based ciphers allows predictable patterns to survive the encryption process making this technology vulnerable to cryptanalysis is well known to cryptanalysts ; as a consequence , standard compression techniques should be applied before encryption to increase data security . 
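To summarize the bookkeeping described in this and the previous subsection, here is a toy sketch of the route-server side: each host registers one unique field per message it hands to the agent, and a later audit compares the registered fields against the agent's data area, exposing both erased messages and data overwritten with stale copies. The data structures and names are ours; in the full scheme each registered field arrives digitally signed and is accepted only after the host answers the route server's random challenge.

```python
# Toy sketch of route-server bookkeeping against erasure and replay.
from collections import defaultdict

class RouteServer:
    def __init__(self):
        # (agent_id, host_id) -> fields registered for the messages given to that agent
        self.registry = defaultdict(list)

    def register(self, agent_id, host_id, field):
        # in the full scheme the field is signed by the host and checked via a nonce challenge
        self.registry[(agent_id, host_id)].append(field)

    def audit(self, agent_id, data_area):
        """data_area: host_id -> set of fields actually carried by the agent."""
        missing = {}
        for (aid, host_id), fields in self.registry.items():
            if aid != agent_id:
                continue
            lost = [f for f in fields if f not in data_area.get(host_id, set())]
            if lost:
                missing[host_id] = lost
        return missing

if __name__ == "__main__":
    rs = RouteServer()
    for host, field in [("H1", "f-1-1"), ("H2", "f-2-1"), ("H2", "f-2-2")]:
        rs.register("S#42", host, field)
    # a hostile host drops H2's second message and replays an old one from H1
    tampered = {"H1": {"f-1-0-old"}, "H2": {"f-2-1"}}
    print(rs.audit("S#42", tampered))   # {'H1': ['f-1-1'], 'H2': ['f-2-2']}
```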
_middleman attack_ the man - in - the - middle attack is probably the greatest known vulnerability of asymmetric cryptosystems .mobile agent based infrastructures can be attacked by using a middleman attack variant . in this case, a malicious host will intercept both the public - key send to the rss by the remote host and the agent itself .this host will generate a private / public key pair to falsify data provided by that host and provide a copy of the false public - key to the rss . to avoid the attacks based on this threat the use of cas to verify and sign the keys used by the hosts to protect its own data is recommended .the mobile agents can be attacked in a way that do not require to modify either the code area or the data part of the agents .the main goal of these attacks can be to damage the agent infrastructure itself by destroying the agents or releasing new agents instead of the original one ._ removing agents_ any malicious host can remove the agents when arrive to it .there are no - way to avoid this attack against the agents but or protection threat allows other host to try to discover what host has killed the agent by requesting information about the route followed by the agent . _releasing new agents_ a hostile host can remove all the information stored in the mobile agent and change the code area .modifying the code area requires gathering a fake private / public key pair for the agent server but now it is possible because all the fields have been removed from the data area of the agent .the new private / public key pair can be used to sign a modified code area of the agent .this fact can be discovered , at least , by the server that has released the agent when it comes back .if other hosts have a copy of the public - key of the agent server , either obtained by other channels or sent in the past , these hosts can discover the unauthorized modification of the agent too .our goal is to protect mobile agents against attacks from both peer hosts and other agents .we are not trying to develop a threat to protect hosts against malicious agents .attacks against remote hosts could be initiated from both agents and hosts .the code area protection allows an agent to be protected against malicious changes that could affect how it works . 
at the same time, the code protection allows a host to be protected against _ denial of service _ ( dos ) attacks by agent cloning in the sense that the number of clones could be easily verified by using the agent identification number described above .this allows a host to protect itself by controlling the resources provided to the agents in a _ per - agent _ basis .the agent identification number allows a host to identify the number of clones of a given agent .the code protection threat proposed in our work do not permits a hostile host to change the code part of an agent provided by an agent server without invalidating it but , obviously , this host could release its own malicious agents .mobile agents are an extremely vulnerable piece of software because they are executed in untrusted environments .both code and data areas must be protected against malicious hosts and agents .the former requires techniques that does not allow a malicious host to hide the identity of the real agent owner .the latter requires information provided by remote hosts to be protected against counterfeit and erasing .the main advantages of the algorithm proposed in this paper are that : * _ secure communication channels are not required _ allowing a mobile agent to be transmitted over untrusted channels and even stored in malicious hosts where the agent will be shown as plain - text even if trusted communication channels between hosts are established . * _ both code and data areas are protected _ against counterfeit and erasing ; consequently , mobile agents are a more secure and robust platform . *_ each host could change its own information _ when required .this allows a host to update information provided permitting the development of more sophisticated agent - based applications , where negotiation between agents and hosts is required .we hope that our protection scheme allows mobile agent based infrastructures to be protected against other attacks based on threats not covered in this article or even unknown at present .if this can be achieved , our threat could be a good design principle for mobile agent based information networks .the authors would like to thank dr .agustn nieto , dr .jos manuel noriega and dr .r. osorio for reviewing the draft of the article , recommend us the use of the notation proposed by the _ american mathematical society _ ( ams ) in this work and provide us a place to work . without their many helpful comments this work would not be possible .christian f. tschudin , `` intelligent information agents agent based information discovery and management on the internet , '' in _ lecture notes in computer science ( lncs ) , springer - verlag inc . , new york , ny , usa _ , m. klusch , ed . , july 1999 , pp . 431445 .
Mobile code based computing requires protection schemes that allow digital signature and encryption of the data collected by agents in untrusted hosts. Such schemes cannot rely on keys carried inside the agent, since those keys could be stolen by hostile hosts and agents and used to counterfeit data. As a consequence, both the information and the keys must be protected so that only authorized parties, namely the host that provides a piece of information and the server that released the mobile agent, can modify (change or remove) the retrieved data. The data management model proposed in this work protects the information collected by the agents against tampering by other hosts in the information network. It does so using standard public-key cryptography, adapted to protect data in distributed environments without requiring an interactive protocol with the server that dropped the agent. Its significance lies in the fact that it is the first model supporting full-featured protection of mobile agents while still allowing a remote host to update its own information, if required, before the agent returns to its originating server. Index terms: assurance, asymmetric ciphers, cryptography, data protection, distributed networks, information retrieval, mobile agents.
massive multiple - input multiple - output ( mimo ) systems are widely regarded as a disruptive technology for next - generation ( i.e. , 5 g ) communication systems . by equipping a base station ( bs ) with an unprecedented number of antennas ( a few hundreds or a thousand ) in a centralized or distributed fashion , such a system can reduce cell interference substantially through the simplest signal processing method because the channel vector between the users and the bs becomes quasi - orthogonal . however , a large number of antennas significantly complicate the design of hardware for the implementation of massive mimo in _production_. in particular , such systems require an analog - to - digital converter ( adc ) unit for each receiver antenna ; therefore , using many antennas results in a need for an equivalent number of adcs .the exponential increase in cost and power consumption attributed to high - speed and high - resolution adcs bits typically adopt a flash architecture where the input voltage is compared with each of the tap voltages simultaneously . ] is a major bottleneck in deploying massive mimo systems .a solution to this problem involves using a very low - resolution adc ( e.g. , 13 bit ) unit at each radio frequency chain . *previous work * : low - resolution adcs have favorable properties such as reduced circuit complexity , low power consumption , and feasible implementability .however , these converters inevitably deteriorate performance and complicate receiver design .the effect of low - resolution adcs on channel capacity has been studied for single - input single - output ( siso ) channels and , recently , mimo channels . through a single antenna with 1-bit quantization at the receiver, channel capacity can be achieved by qpsk signaling . in 1-bit mimo cases ,however , high - order constellations can be used to generate rates higher than those produced with qpsk signaling .in addition to information - theoretical studies , other works related to estimation / detection based on low - resolution samples include time / frequency synchronization , channel estimation , data detection , and related performance analysis .the strongly nonlinear characteristic of the quantization process complicates the precise estimate of continuous variables .for example , a very long training sequence ( requiring over 50 times the number of users ) is necessary to achieve the same performance as the perfect csi case in a mimo system with 1-bit adcs .such pilot overhead can not be sustained by a practical system ; to reduce pilot cost , a joint channel - and - data estimation method that used predicated payload data to aid channel estimation was proposed in .however , this technique enhances the computational complexity of the receiver .other practical issues , such as automatic gain control ( agc ) and time / frequency synchronization , have not been thoroughly examined in receivers with the pure low - resolution adc architecture . motivated by the aforementioned considerations , we present a mixed - adc receiver architecture named mixed - adc massive mimo system in the current study . in this architecture , most antennas were installed with low - resolution adcs while a few antennas were equipped with full - resolution adcs .a special case of the mixed - adc massive mimo system was initially proposed by ; in this scenario , _1-bit _ adcs replace the low - resolution adcs . 
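As a preview of the front end analyzed below, the following numpy sketch simulates a mixed-ADC receive array: a few antennas keep full-precision ADCs while the rest apply a b-bit uniform midrise quantizer (step size Delta) separately to the real and imaginary parts of the baseband signal. The channel normalization, SNR, and all parameter values are arbitrary illustrative choices, not the paper's simulation settings.

```python
# Minimal simulation of the mixed-ADC front end (parameter choices are ours).
import numpy as np

def midrise(y, bits, delta):
    """b-bit uniform midrise quantizer applied element-wise to a real input."""
    levels = 2 ** bits
    idx = np.clip(np.floor(y / delta) + levels / 2, 0, levels - 1)   # clip to the finite range
    return (idx - levels / 2 + 0.5) * delta

def mixed_adc(y, bits_per_antenna, delta):
    """Quantize each antenna with its own resolution; bits=None means full precision."""
    q = np.empty_like(y)
    for n, b in enumerate(bits_per_antenna):
        if b is None:
            q[n] = y[n]
        else:
            q[n] = midrise(y[n].real, b, delta) + 1j * midrise(y[n].imag, b, delta)
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, K, snr_db = 64, 8, 10
    H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * K)
    x = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)   # QPSK symbols
    w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(10 ** (-snr_db / 10) / 2)
    y = H @ x + w
    bits = [None] * 8 + [1] * (N - 8)           # 8 full-precision chains, 1-bit ADCs elsewhere
    q = mixed_adc(y, bits, delta=0.5)
    print(np.mean(np.abs(q - y) ** 2))          # average quantization distortion
```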
under the mixed - adc framework, csi can be obtained in a round - robin manner in which high - resolution adcs are connected to different antennas at various symbol times to estimate the corresponding channel coefficients . in the process , good - quality csi is available at the receiver without significant pilots overhead .the available high - resolution chains can also assist in estimating other parameters and thus facilitate the establishment of several front - end designs , such as agc and time / frequency synchronization . from an economic perspective , the mixed - adc architecture is promising because it can be implemented by adding antennas with low - resolution adcs to the existing bs .* contributions * : in this study , we aim to examine mixed - adc massive mimo systems from a practical engineering perspective .in contrast to , which emphasizes the analysis of mutual information , our approach focuses on the mimo ( or multiuser ) detection problem at the receiver . an extensive review of the large family of various mimo detection algorithms is provided in , and mimo detectors based on quantized samples are studied in .to the best of our knowledge , no research has shed light on mimo detection problems in mixed - adc architecture , the design and performance of which are in fact a central concern related to this architecture .this paper makes the following specific contributions : * by exploiting probabilistic bayesian inference , we provide a _ unified _ framework to develop a family of mimo detectors .this framework is labeled as the ( generic ) bayes detector . to compute the bayesian estimate, we must establish a prior distribution for the transmitted data and a likelihood function for the statistical model of receiver signals .upon adopting the true prior and likelihood functions , the bayes detector can achieve the best estimate in the mean squared error ( mse ) sense . properly postulating mismatched prior and likelihood functionscan yield many low - complexity and popular detectors , such as the linear minimum mean - squared - error ( lmmse ) and maximal - ratio - combining ( mrc ) receivers .* the exact expression for the likelihood of a receiver signal with quantization is complex given the highly nonlinear property of the quantizer .the `` mixture '' architecture complicates the design of the bayes detector further .a natural question is the following : _ how close to the best performance can a conventional mimo detector operate without considering the exact ( while annoying ) nonlinear effect of the quantizers ? _ to answer this question , we adopt a traditional heuristic that treats quantization noise as additive and independent .this heuristic is known as the pseudo - quantization noise ( pqn ) model ( * ? ? ?* chapter 4 ) . by postulating a mismatched likelihood using this model in the bayes detector, we can reduce computational cost significantly while degrading performance only slightly . * to achieve the bayes detector , we employ a recently developed technique called _ generalized approximate message passing _ ( gamp ) algorithm .we adapt this approximation for the mixed - adc architecture by specifying the corresponding adjustment in nonlinear steps . 
by applying the central - limit theorem ( clt ) to the large system limit, we derive an approximate analytical expression for the _ state evolution _ ( se ) of the bayes detector .a series of metrics , including bit error rate ( ber ) and mse , can be predicted , and computer simulations are conducted to verify the accuracy of our analysis .the performance of the mixed - adc massive mimo receivers can be quickly and efficiently evaluated .several useful observations are made based on the analysis to optimize the receiver design .* notation * : throughout this paper , vectors and matrices are presented in bold typeface , e.g. , and , respectively , while scalars are presented in regular typeface , e.g. , .we use and to respectively represent the transpose and conjugate transpose of a matrix . and respectively denote the real and imaginary parts of a complex matrix ( vector ) .normal distributions with mean and variance are denoted by while indicates a complex gaussian distribution . specifically , [ or denotes the probability density function ( pdf ) of a gaussian random variable with mean and variance .finally , let with .we consider an uplink multiuser mimo system that has single - antenna users and one bs equipped with an array of antennas . the discrete time complex baseband received signal given by where \in\mathbb{c}^k} ] denotes the additive white gaussian noise ( awgn ) , \in\mathbb{c}^{n\times k}} ] row and ensure that the analog baseband is within a proper range , e.g. , .this assumption is without loss of generality because in practice , a variable gain amplifier with automatic gain control is used before quantization .moreover , the equivalent row normalized channel simplifies the discussion in this study . ] .the entries of are independent and identically distributed ( i.i.d . )random variables with , the entries of are i.i.d . and distributed as , and the entries of are i.i.d . . if we suppose that , then the signal - to - noise ratio ( snr ) of the system is defined by .the real and imaginary components of the received signal at each antenna are quantized separately by an adc .the quantization bits are set differently for different antennas in the mixed - adc massive mimo system .all the complex valued quantizers are _ abstractly _ described as so that the quantized signal can be written as where is applied element - wise and defined such that .if a statement is for a generic antenna index , we often omit the subscript index from for brevity . in this paper , we mainly focus on uniform midrise quantizers with quantization step size .such a quantizer maps a real - valued to denote each real channel although it should be specified as or . ]input that falls in ] , and denotes the hadamard product ; i.e. , ] , , , , and are applied element - wise . to better understand the algorithm, we provide some intuition on each step of algorithm [ alg : gamp ] .lines 12 compute an estimate of the product and the corresponding variance .the first term of is a plug - in estimate of and the second term provides a refinement by introducing onsager correction in the context of amp . using , lines34 then compute the posterior estimate of the residual and the inverse residual variances , where is the posterior estimate of the un - quantized received signal by considering the likelihood function .lines 56 then use these residual terms to compute and , where can be interpreted as an observation of under an awgn channel with zero mean and variance of . 
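These linear steps are generic to GAMP; only the two denoising maps change with the problem. A schematic sketch of the whole iteration is given below (the final prior-dependent steps, lines 7-8, are described right after it). This is not the paper's exact Algorithm 1: in the paper the output denoiser comes from the quantized likelihood (3.2) on each antenna and the input denoiser from the symbol prior, whereas the names, initialization, and toy denoisers in the demo below are our own.

```python
# Schematic GAMP skeleton; `output_denoiser`/`input_denoiser` are problem-specific
# posterior mean/variance maps supplied by the caller.
import numpy as np

def gamp(A, q, output_denoiser, input_denoiser, num_iter=20):
    N, K = A.shape
    absA2 = np.abs(A) ** 2
    xhat = np.zeros(K, dtype=complex)            # posterior mean of x
    vx = np.ones(K)                              # posterior variance of x
    shat = np.zeros(N, dtype=complex)
    for _ in range(num_iter):
        # lines 1-2: plug-in estimate of z = Ax with the Onsager correction
        Vp = absA2 @ vx
        phat = A @ xhat - Vp * shat
        # lines 3-4: posterior moments of z given the (quantized) observation q
        zhat, vz = output_denoiser(q, phat, Vp)
        shat = (zhat - phat) / Vp
        ts = (1.0 - vz / Vp) / Vp
        # lines 5-6: pseudo AWGN observation rhat of x with noise level Sr
        Sr = 1.0 / (absA2.T @ ts)
        rhat = xhat + Sr * (A.conj().T @ shat)
        # lines 7-8: denoise with the prior of x
        xhat, vx = input_denoiser(rhat, Sr)
    return xhat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, K, nvar = 128, 16, 0.01
    A = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
    x = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
    q = A @ x + np.sqrt(nvar / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    # toy sanity check: un-quantized AWGN output channel and a Gaussian prior
    out = lambda q, p, Vp: ((Vp * q + nvar * p) / (Vp + nvar), nvar * Vp / (Vp + nvar))
    inp = lambda r, S: (r / (1 + S), S / (1 + S))
    print(np.mean(np.abs(gamp(A, q, out, inp) - x) ** 2))
```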
finally , lines 78 estimate the posterior mean and variances by considering the prior . using the likelihood function of ( [ 3.2 ] ) to algorithm [ alg : gamp ] , we can obtain analytic expressions of and , which are [ 3.4 ] where is a complex - valued variable . the integral in ( [ eq : deffr ] ) is given by ] [ eq : deffr ] that the integration interval ] being the quantization error. this heuristic is known as the pqn model . in the pqn model , the quantization error and its input generally assumed to be independent , and is usually modeled as a complex gaussian random variable with zero mean and variance . ] and the summation in ( [ 4.2a])-([4.2c ] ) adds all the corresponding interval together .the quantization bins and the quantization level are configurable .thus , the se equations can incorporate arbitrary quantization processes , such as nonuniform quantization .moreover , in ( [ 4.2a])-([4.2c ] ) indicates the effect of the _mixed_-adc architecture .interestingly , when particularizing our results to the case with a non - mixed adc architecture , we recover the same asymptotic ber expression ( [ 4.7 ] ) as found in with the replica method .\(3 ) _ computational simplicity_. the analytical result is computationally simple because the corresponding parameters ( i.e. , ) can be obtained in an iterative way , with each iteration only involving scalar summations ( [ 4.2a])([4.2c ] ) and scalar estimation computations ( [ 4.2d])([4.2 g ] ) .in fact , using ( [ 4.1 ] ) , we can predict the se in time of algorithm [ alg : gamp ] in the mixed - adc massive mimo system .therefore , instead of performing time - consuming monte carlo simulations to obtain the corresponding performance metrics , we can predict the theoretical behavior by se equations in a very short time .we now show special cases of the se equations in section [ sec 4.1 ] . to present a relatively intuitive analytical result , all the expressions and discussions in this subsectionare centered on a qpsk input distribution although our analysis can be incorporated with arbitrary input distributions . , , and are given in ( [ 4.2a])([4.2c ] ) without the superscript iteration index , and the terms and can be simplified as [ b.7 ] and [ cols="<,<,<",options="header " , ] + 2 & 1.0826 & 0.9619 & 1.0080 + 3 & 0.6014 & 0.5836 & 0.5895 + 4 & 0.3390 & 0.3345 & 0.3360 + 5 & 0.1941 & 0.1936 & 0.1883 + 6 & 0.1042 & 0.1037 & 0.1041 + 7 & 0.0568 & 0.0568 & 0.0569 + in the pqn model , the nonlinear quantization process is treated as additive quantization error with variance in ( [ 3.6 ] ) to simplify computation .so far , we have used the variance of a uniform distribution on a quantization interval $ ] to model , i.e. , .we now examine this approximation .figure [ fig5 ] shows the mses of the pdq - optimal detector with varying numbers of quantization bit depth for gaussian transmitted inputs . from the results in figures [ fig5](a ) and [ fig5](b ), we find that in a typical massive mimo system ( e.g. , ) , the mses of the pdq - optimal detector are not sensitive to the value of as long as its value is small enough , e.g. , .however , for the case when the number of antennas are compatible with that of the users ( e.g. 
, ) , we should be more careful in selecting the value of .comparing the triangle markers that correspond to the lowest mse and the cross markers determined by , we find that approximating by the latter rule is generally good enough .although not precise , this rule of thumb seems effective .we have discussed the optimal parameter settings for the quantization step size and the regularization factor . using these optimal parameters ,we now investigate the performance of the mixed - adc architecture .our discussions focus on the following question : _ how close to the best performance can a mimo detector operate without considering the exact nonlinear effect of the quantizers ? _ in the following subsections , we refer to a system where most of the antennas adopt 1-bit resolution adcs while the rest have full precision adcs as _ the mixed 1-bit _ architecture .we also have the _mixed 2-bit _ architecture and others .figure [ fig6 ] illustrates the ber versus the snr under different mixed architectures for qpsk inputs .clearly , the mixed architecture helps improve the performance .for example , for the mixed 1-bit architecture , merely installing high - resolution adcs ( i.e. , ) eliminates the error floor caused by the distortion of the mismatched measure of the pdq - optimal detector as well as the 1-bit quantization .if we further increase the fraction of the full - precision adcs to ( i.e., ) , the mixed pdq - optimal detector can achieve similar performance as the pure 1-bit dq - optimal detector .also , in the mixed 2-bit architecture , an 2-bit quantization mixed pdq - optimal detector achieves a performance similar to the pure 2-bit dq - optimal detector .recall that the dq - optimal detector is considered to achieve the best estimate in terms of minimizing the mse .we conclude that the mixed architecture can help maintain the promised performance while significantly reducing the computational complexity by ignoring the exact but complex quantization process .in addition , comparing figures [ fig6](a ) and [ fig6](b ) , we notice that the mixed architecture is more advantageous to the pdq - optimal detector than the dq - optimal detector , which will also be validated by figure [ fig8 ] .in such an input type , the pdq - optimal detector is exactly the linear detector because both the postulated and the actual inputs are gaussian .we start by providing insights into the effect of quantization based on the non - mixed architecture .figure [ fig7 ] illustrates the mses of the detectors versus the antenna configuration ratios for snr= and snr= . as shown in figure [ fig7](a ) , when snr= , the performance degradation compared to the unquantized case caused by 1-bit quantization is approximately 3db , indicating that ignoring the exact quantization effect is feasible in the low snr regime . in the high snr regime, however , a 1-bit resolution generally incurs significant performance losses as shown in figure [ fig7](b ) . in a typical _ massive _ mimo system ( e.g. 
, ), the pdq - optimal detector generally causes 1-bit loss compared with the dq - optimal detector .for example , the 2-bit pdq - optimal detector has the same performance as the 1-bit dq - optimal detector .alternatively , the loss caused by the simplification of the quantization process can be compensated by doubling the number of receiver antennas .figures [ fig7](a ) and [ fig7](b ) shows that the mses of the detectors generally improve by 36db for each 1-bit rate increase .the higher the snr is , the closer the mse improvement is to 6db .in contrast to figure [ fig7 ] , which focuses on the non - mixed architecture , figure [ fig8 ] illustrates the results under the mixed architecture .it can be seen that the mixed architecture can help narrow the performance gap resulting from the pqn model .for example , in a massive mimo system with , the pure 1-bit pdq - optimal detector incurs approximately 6db loss compared with the pure 1-bit dq - optimal detector .their corresponding gaps are respectively reduced to 3db and 1db when and full precision adcs are installed .using a unified framework , we specified three kinds of detectors for a mixed - adc massive mimo receiver by postulating mismatched measures in the bayes detector .the asymptotic performances of these detectors were analyzed by the se equations and their accuracy were validated by monte carlo simulations .the se equations can be quickly and efficiently evaluated ; thus , they are useful in system design optimization .we provided useful observations to aid design optimization .in particular , the results showed that we can reduce the computational burden by treating the complex nonlinear quantization process as a pqn model .ignoring such a complex quantization process may cause 1-bit performance degradation on the high - snr regime but does not have a significant effect in the low - snr regime .moreover , the mixed receiver architecture can help narrow the performance gap resulting from the pqn model .in this appendix , we derive the se equations of algorithm [ alg : gamp ] in the large - system limit ( i.e. , ) by following . for conceptual clarity and ease of explanation , we restrict the derivation to a corresponding real - valued system although proposition 1 is for the complex - valued case . in addition , for brevity , we omit all the iteration index .we recall from lines 78 of algorithm [ alg : gamp ] that the conditional mean and variance of denoted by and , respectively , are taken w.r.t .the marginal posterior given in ( [ eq : px ] ) , which is determined by .therefore , to obtain the se equations , we have to determine the asymptotic behavior of and . where the expectation and variance are taken w.r.t . . according to the clt in the large system limit , can be regarded as a gaussian random variable with mean and variance .let and we use and to denote the first - order and second - order derivatives of with respect to , respectively . then , line 5 of algorithm [ alg : gamp ] is expressed as where letting yields then , we explore the asymptotic ( or the large system ) behavior of and . here , represents the quantized signals that belong to discrete set . to compute the expectations in ( [ a.9 ] ) ,we need the joint distribution because and depend on two correlated variables and .our strategy to obtain is via the marginal of .therefore , we calculate the joint distribution first . 
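The scalar estimation step referred to just above (the conditional mean and variance of a transmitted symbol under an effective Gaussian observation) has a simple closed form for a QPSK prior. The following is a minimal sketch, assuming a unit-energy QPSK constellation and circular complex Gaussian effective noise; the function name and the Monte Carlo check are illustrative and not part of the paper's algorithm listing.

```python
import numpy as np

def qpsk_posterior(r, sigma2):
    """Posterior mean and variance of a unit-energy QPSK symbol x (real and
    imaginary parts each +/- 1/sqrt(2), equiprobable) observed as r = x + n,
    with n circular complex Gaussian of total variance sigma2."""
    a = 1.0 / np.sqrt(2.0)
    # per real dimension the effective noise variance is sigma2 / 2
    mean_re = a * np.tanh(2.0 * a * np.real(r) / sigma2)
    mean_im = a * np.tanh(2.0 * a * np.imag(r) / sigma2)
    mean = mean_re + 1j * mean_im
    var = 1.0 - mean_re ** 2 - mean_im ** 2   # |x|^2 = 1 for every QPSK point
    return mean, var

# Monte Carlo sanity check: for the exact posterior, the empirical MSE of the
# conditional mean should match the average posterior variance.
rng = np.random.default_rng(0)
sigma2, n = 0.5, 200_000
a = 1.0 / np.sqrt(2.0)
x = a * (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n))
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
mean, var = qpsk_posterior(x + noise, sigma2)
print("empirical MSE:", np.mean(np.abs(x - mean) ** 2),
      "  mean posterior variance:", np.mean(var))
```

In a GAMP-style iteration such a scalar map would be applied componentwise, with sigma2 playing the role of the effective noise variance delivered by the preceding step.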
both and sums over many independent terms .therefore , according to the clt , they are gaussian random variables .their means are zero because has zero mean .the entries of the covariance matrix between and is here and hereafter , we denote for notation simplicity .we find that the covariance matrix becomes asymptotically independent of the index .altogether , these provide the bivariate gaussian distribution : now , we are ready to calculate the joint distribution . for notation simplicity ,let . using ( [ a.11 ] ) , we can further calculate as , where , , and . ] where is the vector containing the elements of excluding , and we define using the joint distribution ( [ a.12 ] ) , we can compute the asymptotic behavior of ( [ a.9 ] ) as and , similarly , note that and are asymptotically independent on the indexes in the large system limit .we also drop the index of defined in ( [ a.13 ] ) .moreover , we notice that defined in ( [ a.3 ] ) become independent of the index , and read as where and are the asymptotic limits of and , respectively , and can be expressed as together with these results , ( [ a.14 ] ) and ( [ a.15 ] ) can be further simplified as thus , we obtain the asymptotic behavior of in ( [ a.7 ] ) as where and with and given in ( [ a.16 ] ) . note that becomes asymptotically independent of the index .next , we consider the asymptotic behavior of given in line 6 of algorithm [ alg : gamp ] . following the similar argument presented in this paper, we can prove that conditioned on is asymptotic gaussian .again , we drop the index in the large system limit , and we show that the asymptotic mean and variance of is and with , where and represent the derivatives of with respect to . therefore we denote where and . using ( [ a.17 ] ) and ( [ a.19 ] ) ,the marginal posterior ( [ eq : px ] ) can be rewritten as finally , we conclude that the asymptotic behavior of in algorithm [ alg : gamp ] can be characterized by ( [ a.19 ] ) , where the parameters are determined by ( [ a.16 ] ) and ( [ a.18 ] ) in conjunction with the parameters given in ( [ a.21 ] ) . for the complex - valued case ,the signal power of the real or imaginary part is .therefore , the equivalent quantization interval should multiply a factor of as in ( [ 4.3 ] ) .besides , the relative scalar multiplication should be modified to complex multiplication as in ( [ 4.2 ] ) and ( [ 4.4 ] ) .a. mezghani and j. a. nossek , `` on ultra - wideband mimo systems with 1-bit quantized outputs : performance analysis and input optimization , '' in _ proc .theory ( isit ) _ , nice , france , 2007 , pp .12861289 .j. singh , o. dabeer , and u. madhow , `` on the limits of communication with low - precision analog - to - digital conversion at the receiver , '' _ ieee trans ._ , vol .57 , no . 12 ,36293639 , dec . 2009 .k. nakamura and t. tanaka , `` performance analysis of signal detection using quantized received signals of linear vector channel , '' in _ proc .theory and its applications ( isita ) _ , auckland , new zealand , dec .2008 , pp . 15 .
using a very low - resolution analog - to - digital convertor ( adc ) unit at each antenna can remarkably reduce the hardware cost and power consumption of a massive multiple - input multiple - output ( mimo ) system . however , such a pure low - resolution adc architecture also complicates parameter estimation problems such as time / frequency synchronization and channel estimation . a mixed - adc architecture , where most of the antennas are equipped with low - precision adcs while a few antennas have full - precision adcs , can solve these issues and actualize the potential of the pure low - resolution adc architecture . in this paper , we present a unified framework to develop a family of detectors over the massive mimo uplink system with the mixed - adc receiver architecture by exploiting probabilistic bayesian inference . as a basic setup , an optimal detector is developed to provide a minimum mean - squared - error ( mmse ) estimate on data symbols . considering the highly nonlinear steps involved in the quantization process , we also investigate the potential for complexity reduction on the optimal detector by postulating the common _ pseudo - quantization noise _ ( pqn ) model . in particular , we provide asymptotic performance expressions including the mse and bit error rate for the optimal and suboptimal mimo detectors . the asymptotic performance expressions can be evaluated quickly and efficiently ; thus , they are useful in system design optimization . we show that in the low signal - to - noise ratio ( snr ) regime , the distortion caused by the pqn model can be ignored , whereas in the high - snr regime , such distortion may cause 1-bit detection performance loss . the performance gap resulting from the pqn model can be narrowed by a small fraction of high - precision adcs in the mixed - adc architecture . massive mimo , mimo detector , low - resolution adc , mixed architecture , bayesian inference .
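As a complement to the discussion of the pseudo-quantization-noise model and the choice of the quantization step above, the following sketch builds a toy mixed-ADC observation (a fraction of antennas at full precision, the rest quantized by a uniform b-bit mid-rise quantizer) and compares the empirical quantization-error variance with the per-real-dimension value step**2/12 used in the PQN heuristic. All sizes, the step value, and the quantizer convention are illustrative assumptions of this sketch, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 64, 8, 2000     # receive antennas, users, channel uses (illustrative)
kappa = 0.1               # fraction of antennas with full-precision ADCs
bits, step = 2, 0.6       # resolution and step of the low-precision ADCs (illustrative)
snr_db = 10.0

def quantize(u, step, bits):
    """Uniform mid-rise quantizer with 2**bits levels; overload bins are clipped."""
    levels = 2 ** bits
    idx = np.clip(np.floor(u / step) + levels / 2, 0, levels - 1)
    return (idx - levels / 2 + 0.5) * step

# toy uplink Z = H X + W with unit-power QPSK symbols
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * K)
X = (rng.choice([-1, 1], (K, T)) + 1j * rng.choice([-1, 1], (K, T))) / np.sqrt(2)
nv = 10 ** (-snr_db / 10)
W = np.sqrt(nv / 2) * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
Z = H @ X + W

n_full = int(round(kappa * N))          # antennas kept at full precision
Y = Z.copy()
Y[n_full:] = quantize(Z[n_full:].real, step, bits) + 1j * quantize(Z[n_full:].imag, step, bits)

err = Y[n_full:] - Z[n_full:]           # quantization error on the low-resolution antennas
print("empirical quantization-error variance per real dimension:", err.real.var())
print("PQN rule of thumb step**2 / 12:", step ** 2 / 12)
```

For coarse quantizers the two numbers differ noticeably, which is in line with the observation above that the step**2/12 rule is a workable but not precise approximation.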
a minority game can be exemplified by the following simple market analogy ; an odd number of traders ( agents ) must at each time step choose between two options , buying or selling a share , with the aim of picking the minority group .if sell is in minority and buy in majority one may expect the price to go up to satisfy demand and vice versa if buy is in minority , thus motivating the minority character of the game .clearly , there is no way to make everyone content , at least half of the agents will inevitably end up in the majority group each round . as the losing agents will try to improve their lotthere is no static equilibrium .instead , agents might be expected to adapt their buy or sell strategies based on perceived trends in the history of outcomes . the minority game proposed by zhang and challet formalizes this type of market dynamics where agents of limited intellect compete for a scarce resource by adapting to the aggregate input of all others .each agent has a set of strategies that , depending on the recent past history of minority groups going time steps back , gives a prediction of the next minority being buy or sell .the agent uses at each time step her highest scoring strategy which has most accurately predicted correct minority groups historically .the state space of the game is given by the strategy scores of each agent together with the recent history of minority groups , and the discrete time evolution in this space represents an intricate dynamical system .what makes the game appealing from a physics perspective is that it can be described using methods for the statistical physics of disordered systems , with the set of randomly assigned strategies corresponding to quenched disorder . in particular challet , marsili , andco - workers showed that the model can be formulated in terms of the gradient descent dynamics of an underlying hamiltonian , plus noise .the asymptotic dynamics corresponds to minimizing the hamiltonian with respect to the frequency at which agents use each strategy , a problem which in turn can be solved using the replica method . in a complementary development coolen solved the statistical dynamics of the problem in its full complexity using generating functionals .the game is controlled by the parameter , where is the number of distinct histories that agents take into account , which tunes the system through a phase transition ( for ) at a critical value . in the symmetric ( or crowded ) phase , , the game is quasi - periodic with period where a given history gives alternately one or the other of the outcomes for minority group .a somewhat oversimplified characterization of the dynamics is that the information about the last winning minority group for a given history gives a crowding effect where many agents want to repeat the last winning outcome which then counterproductively instead puts them in the majority group .the crowding also gives large fluctuations of the size of the minority group .in the asymmetric ( or dilute ) phase , , agents are sufficiently uncorrelated that crowding effects are not important and there is no periodic behavior . 
instead ,as exemplified in figure [ timeseries ] the score dynamics is random but with a net correlation between agents that makes fluctuations in the size of the minority group small .the dilute occupation of the full strategy space gives rise to a non - uniform frequency distribution of histories which can be beneficial for agents with strategies that are tuned to this asymmetry .in this paper we study the dynamics of the minority game in the asymmetric phase by formulating a simplified statistical model , focusing on finding probability distributions for the relative strategy scores .in particular , we study the original formulation of the game with sign - payoff for which quantitative results are challenging to derive . by sorting the strategies based onhow strongly they are correlated with the average over all strategies in the game , we find that sufficient statistical information can be extracted to formulate a quantitatively accurate model for .we discuss how the relative score for each agent can be derived from the master equation of a random walk on a chain with asymmetric jump probabilities to nearest neighbor sites , and how these jump probabilities can be calculated from the basic dynamic update equation of the scores .the corresponding probability distributions of scores are either of the form of exponential localization or diffusion with a drift . in the appendiceswe show that the model is related to but independent from the hamiltonian formulation and we show how it can also be readily applied to the game with linear payoff where the master equation has long - range hopping . +although the mg is well understood from the classic works discussed above , it is our hope that the simplified model of the steady state attendance and score distributions presented in this paper provides an alternative and readily accessible perspective on this fascinating model .in order to give an overview of our results and for completeness we start by providing the formal definition of the minority game and some basic properties . at each discrete time step every agent gives a binary _ bid _ , all of which are collected into a total _ attendance _ ( odd ) and the winning minority group is then identified through .a binary string of the past winning bids , called a history , is provided as global information to each agent upon which to base her decision for the following round .there are thus with different histories . at her disposaleach agent has two randomly assigned strategies ( a.k.a .strategy tables ) that provide a unique bid for each history .the bid of strategy of agent in response to history is given by and the full strategy is the dimensional random binary vector .there are thus a total of distinct strategies available .the agent uses at each time step the strategy that has made the best predictions for minority group historically .this is decided by a score for each strategy which is updated according to , irrespectively of the strategy actually being used or not .( here the superscript on just indicates that the attendance will depend on the history giving the bids at time . ) ties , i.e. 
, are decided by a coin toss .since it is only the relative score between an agent s two strategies that is important in deciding which strategy to use , one may focus on the relative score this is updated according to where and where is an agents `` difference vector '' that takes values or for each history .evolution of strategy scores for the two strategies of four ( ) representative agents in a game with agents and a memory of length ( ) . at each time stepevery agent uses the one of her two strategies which has the highest momentary score , given by how well the strategy has predicted the past minority groups .the corresponding score difference ( inset ) shows the distinction between frozen agents that consistently use a single strategy , and fickle agents that switch between strategies . ] to make the dynamics generated by these equations more concrete , figure [ timeseries ] shows the scores of the strategies of four particular agents , for one realization of a game with , , together with the corresponding relative scores ( inset ) , over a limited time interval . as exemplified by this figure agents come in two flavors , known as frozen and ficklean agent is frozen if one of her strategies performs consistently better than the other , such that on average the score difference is diverging , whereas fickle agents have a relative score that meanders around switching their used strategy .the motion of for both fickle and frozen agents is a random walk with a bias towards or away from .a basic problem is to characterize and understand this random walk and derive the corresponding probability distribution ; the probability to find agent at position at time .as presented in section [ model ] we can quantify the correlation between an agent s strategies , specified by , and the total attendance , which in turn allows for characterizing the mean ( time averaged ) step size in terms of a distribution over agents .in agreement with earlier work we find that has two contributions ; one center ( ) seeking bias term which arises from self interaction ( the used strategy contributes to the attendance and as such is more likely to be in the majority group ) and a fitness term which reflects the relative adaptation of the agent s two strategies to the time averaged stochastic environment of the game .the distribution of step sizes over the population of agents are shown in figure [ distribution ] where frozen agents are simply those where the fitness overcomes the bias , such that for or for , whereas for fickle agents for and vice versa . knowing the mean step size of an agent allows for a formulation in terms of a one dimensional random walk ( fig .[ chain ] ) with corresponding jump probabilities , as presented in section [ distributions ] . depending onwhether it is more likely to jump towards the center or not ( fickle or frozen respectively ) the master equation on the chain can be solved in terms of a stationary exponential distribution centered at or ( in the continuum limit ) a normal distribution with a variance and mean that grow linearly in time ( diffusion with drift ) .these are the distributions depending on . in simulations over many agentsit is natural to consider the full distribution , with thus the probability of finding an agent at time with relative score . 
in terms of scaled coordinates and we find that the distribution only depends on .the model distributions show excellent agreement with direct numerical simulations ( fig .[ pxt ] and [ large_a ] ) with no fitting parameters .this result for the full distribution of relative scores together with its systematic derivation for the original sign - payoff game represent the main results of this paper . in appendix [ hamiltonian ]we discuss the relation between the model presented in this work and the formulation in terms of a minimization problem of a hamiltonian generator of the asymptotic dynamics .we find that one way to view the present model is as a reduced ansatz for the ground state where the only parameters are the fraction of positively and negatively frozen agents ( solved for self - consistently ) instead of the full space of the frequency of use of each strategy .with this ansatz closed expressions can be derived for the steady state distributions irrespective of the form of the hamiltonian . in appendix [ linearpay ]we show how the model applies to the game with linear payoff .we will now turn to describing the statistical model in some detail and derive the results discussed in the previous section .we define for each agent the sum and difference of strategies for each bid and ( as discussed above ) .clearly , being the sum of two random numbers is distributed over with probability .a non - zero value of means that agent always has the same bid for history independently of which strategy it has in play .the sum over all agents , , thus gives a constant history dependent but time independent background contribution to the attendance .( in the sense that every time history occurs in the time series it gives the same contribution . )this background is , for large , normally distributed with mean zero and variance an interesting property of the minority game is that there is a `` gauge '' freedom with respect to an arbitrary choice of which is called strategy and which is , thus corresponding to a change of sign of .such a sign change will simply result in a change of sign of having no consequence on which strategy is actually in play .( it is the strategy in play which is an observable , not whether it is labeled by 1 or 2 . )nevertheless , it turns out that making a consistent definition of the order of strategies is helpful in formulating a simple statistical model .explicitly we order the two strategies ( `` fix the gauge '' ) of all agents such that shortly we will describe the distribution over agents of , to quantify its anticorrelation with .to proceed we write the attendance at a time step with history as where depending on which strategy agent is playing . again , the relative strategy score of agent is updated according to eqn .[ step1 ] . given the background contribution to the attendance we expect there to be a surplus of in the steady state with our choice of gauge because the strategy 1 is expected to be favored by the score update function .( in other words , strategy 1 is expected to have a higher fitness . 
) however , this correlation is not trivial as the accumulated score also depends on the dynamically generated contribution the attendance .as discussed previously some fraction of the agents are frozen , in the sense of always using the same strategy , .we make an additional distinction ( made significant by our choice of gauge ) and separate the group of frozen agents into those with ( fraction ) , and those with ( fraction ) , such that .clearly , we expect the former to be more plentiful than the latter .we will now derive steady state distributions over agents for the mean step size .for this purpose we will write the attendance as where corresponding to the three categories of agents discussed previously .we will make the following simplifying approximations for these three components : the fickle component we will model as completely disordered , such that is random , and correspondingly ( for large n ) is normally distributed with mean zero and variance with the fraction of fickle agents .( thus , neglecting that the fickle agents would also have a net anticorrelation with the background ). we will assume the frozen agents to simply be a sum of independent random variables drawn from the distribution of , thus neglecting that the agents that are frozen may come from the extremes of this distribution .to proceed , we need to find the distribution of , i.e. how it varies over the set of agents .( henceforth we will usually drop the index and regard the objects as drawn from a distribution . )begin by defining , which is thus disordered with respect to the sign of .is what is called in the literature . in this paperwe reserve for the object where strategies are ordered such that , corresponding to . ]the object is independent of ( ignoring corrections due to limiting the available bids ) , taking values with probability , which gives mean zero and variance . consider the joint object , for large this becomes normally distributed with mean zero and variance .now , to quantify the correlation between and we define the object which consequently has mean and . we will represent this distribution by assuming that each component are independent gaussian random variables with a mean that is linearly dependent on . with this assumptionwe find the conditional distribution where , and , and where we write the normal distribution over with mean and variance as .this quantifies that is on average anticorrelated with which is expected to place strategy 1 in the minority group more often than strategy 2 . using eqn .[ xidist ] we can also calculate the distributions of ( ) as the sum of ( ) correlated objects , giving with conditional variances and . given the model expressions for the distributions of all the components of the score update equation ( eqn [ step1 ] ) we will find the distribution of mean ( time averaged ) step sizes . as a first step we integrate out the fast variable to get a conditional on time averaged step size .( over a long time series of the game every history will occur many times , we thus average over all those occurrences of a single history . 
)this corresponds to \,.\end{aligned}\ ] ] the second term , which is a self - interaction , follows from the discrete nature of the original problem .it gives a negative bias for the used strategy coming from the fact that if the net attendance from all other agents is zero , the used strategy puts the agent in the majority group .( the factor in the delta function is to account for the fact that the attendance , as defined in eqn [ attendance ] , changes in steps of two and the factor comes from the fact that only the used strategy enters the attendance . )integrated this gives where we have identified the first term as a fitness which quantifies the relative fitness of the agent s two strategies and the second as a negative bias for the used strategy as discussed previously . to calculate the distribution of mean step sizeswe will assume that histories occur with the same frequency such that .this is in fact not the case for a single realization of the game in the dilute phase , some histories occur more often than others , as one can see directly from any simulation in this regime .nevertheless , for large we will assume that this variation of occurrences of averages out . as discussed extensively in the literaturethe overall behavior of the game is insensitive to whether the actual history is used ( endogenous information ) as input to the agents or if a random history is supplied ( exogenous information ) .this is also confirmed by the present work through the good agreement between the model using exogenous information and simulations in which we use the actual history .assuming large and given the assumption of independence of the distributions for different we expect the distribution to approach a gaussian ( by the central limit theorem ) with mean with as in eqn [ deltamu ] , and with variance .the integrals are readily done analytically as described in the appendix [ app ] , but the expressions are very lengthy .the main features can be expressed in the following form : where are functions that only depend on and through , change slowly as a function of the arguments in the physically relevant regime ( fig .[ changes ] ) and which satisfy and . as seen from eqn .[ delta_expressions ] , the mean bias is towards , the used strategy is penalized , while the mean fitness is positive acting to increase the relative score , consistent with our choice of gauge as discussed earlier .the only appreciable contribution to the variance comes from the fitness term scaling as whereas the bias has a variance that scales with and thus negligible ( as is the cross term ) .the variance can be written where also changes slowly in the relevant regime ( fig .[ changes ] ) and satisfies .the width of the fitness distribution explains the fact that even though consistent with , there are also some agents with a large negative fitness which implies . the fact that thus does not necessarily imply that strategy is more successful than strategy as the correlation with the other frozen agents is also an important factor. for large , both the mean and variance of the fitness vanish , as can be understood as a result of there being too few agents compared to the number of possible outcomes to maintain any appreciable correlation between an agents strategies and the aggregate background , . in this limit , since the bias term always penalizes the used strategy there can be no frozen agents .we also see that both the mean and width of the distribution for given scales with , consistent with simulations ( fig . 
[ distribution ] ) . the fraction of frozen agents as a function of from the statistical model ( eqns .[ fraction_pos ] and [ fraction_neg ] ) compared to results from direct numerical simulations of the game .the frozen agents are divided into two groups and depending on if they are frozen with relative score or respectively .the fact that follows from our convention ( eqn [ gauge ] ) .also shown is the total fraction of frozen agents from the replica calculation for linear payoff ( eqns .3.41 - 3.44 of ) .( each data point is averaged over 20 runs with time steps each ( steps for ) . ) ] for each agent the score difference moves with a mean step per unit time of where is drawn from the distribution . if the fitness is high , such that , the agent will have a net positive movement and the agent is frozen , with and growing unbounded .the fraction of positive frozen agents is given by \ , .\label{fraction_pos}\end{aligned}\ ] ] similarly , if the fitness is relatively very poor , such that the agent is frozen ( with ) with magnitude growing unbounded .the fraction of negatively frozen agents is given by \ , , \label{fraction_neg}\end{aligned}\ ] ] and correspondingly the complete fraction of frozen agents and fickle agents are found . since , , and are functions of , , and , the two equations allow for solving for and as a function of the only parameter . we find that the solutions are readily found by forward iteration , and the results are plotted and compared to direct simulations of the game in figure [ frozen_fraction ] . is found by measuring the mean step size and identifying those that for have or for have as shown in fig .[ distribution ] . ]the fit is good , but there is no indication of a phase transition for small in this simplified model .distributions for mean step per unit time at for ( top ) and ( bottom ) , comparing direct simulations of the game to the statistical model ( eqn .[ deltaeqn ] ) .the fraction of frozen agents with ( ) is indicated by _ fr,+ _ and similarly for ( ) .the distributions of step sizes are different for and because of the convention as explained in fig .[ frozen_fraction ] .( simulations averaged over time steps , excluding a equilibration time . ) ] from simulations we can also measure the distribution of mean step sizes to compare to the model , which is shown in figure [ distribution ] . there we show an intermediate value of , the fit in terms of mean and width is not as good close to and almost perfect for large , but everywhere the data seems well represented by a normal distribution .we also use the mean step size distributions from simulations to calculate the fraction of frozen agents , figure [ frozen_fraction ] .( the naive way to distinguish between frozen and switching agents ; to introduce a cut - off at some time , with any agents with considered frozen , makes it difficult to distinguish between frozen and switching agents with near 0 . )we now use the fact that each agent is characterized by an average step size per unit time , specified by the fitness , to describe the movement of the relative score on the set of integers . consider that the agent at time step has score difference , what is the probability that at time the score difference is ? 
in each time step , can only change by as given by the basic score update equation [ step1 ] .we specify the respective probabilities with for and for .the mean probability that remains unchanged is as this corresponds to , meaning that the agent s two strategies have the same bid which on average ( over ) will be the case for half of the histories .it should also be clear that the stepping probabilities can not depend on the magnitude of , only the sign , because the difference in score between strategies does not enter the game , only which strategy is currently used .the case has to be treated separately ; we toss a coin to decide which strategy is used , thus the probability for a increment is and for a increment is .the movement of thus corresponds to a one - dimensional random walk on a chain , with asymmetric jump probabilities , as sketched in figure [ chain ] .the movement of the relative strategy score of an agent is described by a random walk on a chain with jump probabilities for ( i.e. strategy 1 in play ) and for ( i.e. strategy 2 in play ) . at the boundary due to the coin toss choice of strategy the probabilities are altered as in the figure . ] to relate the probabilities to the mean step size we note that for , , which together with the conservation of probability and the fact that gives where results for follow from the same analysis for . keeping in mind that for a fickle agent and this is of course consistent with and .a frozen agent is instead given by or . with the known probabilities we can write down a master equation on the chain for the probability distribution ( implicit dependence ) and at the boundary assuming that the distribution is stationary , such that , andconcentrating on , we find after some manipulations the equation has the exponential solution in the last step we used equation [ p_eq ] and the fact that from equation [ delta_expressions ] the mean step size is small such that . from thiswe can identify a decay length , which characterizes the range of positive excursions of the score difference of the fickle agent .clearly , this solution requires ( ) to be bounded , as is the case for fickle agents . from the same analysis for the fickle agents with have the distribution .what remains is to match up the solutions for positive and negative at the interface .this can be solved exactly , but given that the exponential prefactor is small we settle for the approximate expression from this expression we see that the distribution is asymmetric , such that given that on average agents are more likely to be found with .this opens up for a more sophisticated modelling ( left for future work ) where this aspect is fed back into the initial statistical description of the sum of fickle agents through the dynamical variable , the total attendance of the fickle agents , acquiring a mean depending on .for the frozen agents the master equation is the same , but given ( or ) we expect a drift of the mean of the distribution . thus focusing on long timeswe can consider one or the other of eqs .[ master1 ] depending on whether the agent is frozen with or . for and assuming that the agent at time is at site ( neglecting the influence any excursions to ) we can write down an exact expression for in terms of a multinomial distribution . 
alternatively , and simpler, we can take the continuum limit and to find the fokker - planck equation given the initial condition this has the solution with and , thus describing diffusion with a drift .given that we now have a description of the relative score distribution of a single agent in terms of an asymmetric exponential decay or diffusion , we can also consider the full distribution of relative scores over all agents , by integrating over the distribution of mean step sizes .defining the scaled variables and we write , corresponding to the stationary distribution of the fickle agents and diffusive distributions of the frozen agents with and respectively .the first component is where corresponds to and respectively , and where . for the frozen agents we have where .these expressions are compared to direct simulations of the game for intermediate in fig .the simulations are averaged over a specific time window and the diffusive component eqn .[ diffusion ] is integrated over the corresponding scaled time window .the agreement is excellent over the complete stationary and diffusive components of the distribution and shows the data collapse in terms of scaled coordinates . in fig .[ large_a ] we also show a comparison for large where the simulations have no frozen agents and all fickle agents are localized by a length close to the value . full scaled distribution with over all agents for compiled by averaging simulations over scaled time window to .the model results ( fickle+frozen ) are , using equations [ fickle ] and [ diffusion ] . also shown are model results using only fickle agents .+ the following time windows are used : for , to ; for , to ; for , to , which correspond to the same and .( simulations are averaged over 80 runs for and 15 runs for and . ) ] the asymmetry of these plots is an artefact of our gauge choice which implies that on average agents will use strategy 1 ( ) more frequently than strategy 2 ( ) . to restorethe full symmetry is simply a matter of symmetrizing the distributions around .finally , we remark that the formal solution in terms of an exponential distribution of strategy scores for frozen agents was derived in from a fokker - planck equation for the linear payoff game .see appendix [ hamiltonian ] and [ linearpay ] for a further discussion of the comparison between the present model and the hamiltonian formulation .distribution at large .there are no frozen agents , and the simulated and model ( `` fickle '' ) distributions are stationary . also shown is the asymptotic behavior where all agents are symmetrically localized with localization length , and a simulation at which approaches this asymptotic behavior .( simulations averaged over time steps . ) ]we have studied the asymmetric phase of the basic minority game , focusing on the statistical distribution of relative strategy scores and the original sign - payoff formulation of the game .we formulate a statistical model for the attendance that relies on a specific gauge choice in which the two strategies of each agent are ordered with respect to the background ( for all agents ) . using this model we can derive a distribution of the mean step per time increment for the relative scores , specified in terms of a bias for the used strategy and the relative fitness of the two strategies . 
the relative strategy score for each agentis conveniently described as a random walk on an integer chain , where the jump probabilities are calculated from the mean step .the probability distribution of observing the agent at some position on the chain at a given time is either given by a static asymmetric exponential localized around for fickle agents or to diffusion with a drift for frozen agents .excellent agreement with direct simulations of the game for the score distribution confirms the basic validity of the modelling . at the same time , as discussed in the appendix , the fluctuations of the attendance are overestimated by the model . by contrasting with the hamiltonian formulation of the dynamics the reason for this discrepancyis readily understood from viewing the model as a crude ansatz for full minimization problem .this also opens up for improving the model by introducing some variational parameters without having to confront the full complexity of the minimization of a non - quadratic hamiltonian for general payoff functions . + we thank erik werner for valuable discussions .simulations were performed on resources at chalmers centre for computational science and engineering ( c3se ) provided by the swedish national infrastructure for computing ( snic ) .the integrals to calculate the mean and variance for the distribution of average step sizes , eqn .[ deltaeqn ] , are gaussian integrals including the error function . to solve these we first rescale the variables in terms of the variance , etc . andperform the integral over the distribution of agents which evaluates to ( ) and .we are left with integrals }\nonumber\\ & & e^{-\frac{1}{2}(\frac{\omega+\sqrt{\phi_1}x+\sqrt{\phi_2}y}{\sqrt{\varphi}})^2}\,,\end{aligned}\ ] ] }\nonumber\\ & & \omega\ , \text{erf}(\frac{\omega+\sqrt{\phi_1}x+\sqrt{\phi_2}y}{\sqrt{2\varphi}})\,,\end{aligned}\ ] ] and }\nonumber\\ & & \text{erf}^2(\frac{\omega+\sqrt{\phi_1}x+\sqrt{\phi_2}y}{\sqrt{2\varphi}})\,,\end{aligned}\ ] ] to evaluate these we use the following integral formulas and where is a symmetric ( positive definite ) matrix , and and are real constants .the bias term thus follows from a direct application of the first integral formula to a 3x3 matrix .the fitness term follows from a substitution and to apply the second integral formula over and subsequently the first integral formula on a 2x2 matrix .the variance can be calculated by the substitution for , , followed by integrating out and to finally apply the third integral formula over .the actual expressions are quite lengthy , but the important features can be represented according to eqs .[ delta_expressions ] and [ sigma_expression ] in terms of functions , , and . after solving for for the fractions of frozen agents and using eqs . [ fraction_pos ] and [ fraction_neg ], we can consider these functions as dependent only on the control parameter .the dependence on is plotted in figure [ changes ] , to point out that these functions change little over the whole relevant range . the parameter dependence of the three quantities specifying the mean and variance of the distribution of mean step sizes according to equations [ delta_expressions ] and [ sigma_expression ] . ]here we connect the formalism in the present work to the solution using the replica method , following closely the presentation in and . 
expressing the attendance for given history in terms of fluctuations around a mean as where is a gaussian random variable with mean zero and variance ( to be determined self - consistently ) .this is related to expression ( [ aoft ] ) , where we take an explicit statistical form , assumed to correspond to background plus frozen agents .also , in the model in this paper we have the magnitude of as , with the fraction of fickle agents .this is not assumed in the present treatise , but as we will see the outcome is related .there is also the explicit expression , eqn .[ at1 ] , for the attendance , where depending on which strategy is momentarily used by the agent . taking the time average of this and assuming that the frequency of use is not influenced by the rapid switches of history we write , where for frozen agents and for fickle .as discussed in the fluctuations of are statistically independent such that for , whereas by definition . with thiswe can write , noting that .now , evaluating the variance of the attendance using eqn . [ at1 ] and , we find this we can alternatively write ( using eqn . [ at ] ) as . here , the _ predictability _ , also has the alternative form ( using eqn . [ at1 ] ) correspondingly we find for the rapidly fluctuating field the variance ( using ) .the latter expression has no contribution from frozen agents ( as expected ) , and assuming that the distribution of is quite strongly centred at it will be close to , but always lower than , our assumed value of .consider now the fixed history time averaged step size for agent , , with the aim is to find a hamiltonian generator of the long time dynamics such that the time and history averaged update is given by ( note that this expression is not equivalent to eqn .[ deltaeqn ] ) .the latter is the mean of a distribution , whereas the present object represents the full distribution of average step sizes over agents corresponding to different . )a function that does this is where such that , which evaluates to thinking of the long - time evolution of the score difference for agent which has an average step size , we find that if the agent will be frozen positive , with and similarly if it will be frozen negative , with . only if the agent will be fickle , with .considering that we find the three cases : corresponds to , corresponds to , and corresponds to .the solution to this thus corresponds to finding the minimum of with respect to .the minimization of eqn .[ hcal ] however , looks like a formidable problem in the thermodynamic limit , and we are not aware that it has been pursued in the literature .( note that such that an expansion is not appropriate . )this is in contrast to the case of linear payoff ( se appendix [ linearpay ] ) where which is a quadratic form in the variables . for the latter casethe minimization problem has been solved using the replica method .the equilibrium score distributions that we focus on in the present work have been solved for in but to the best of our knowledge not for the sign - payoff game .also , it appears that these distributions have not been discussed or studied in any detail , or compared to simulations , in earlier work .here we repeat the analysis of the main paper for the case of linear payoff where eqn .[ step1 ] is replaced by we apply the same distributions , eqs .[ xidist]-[ydist ] , for the relative bid , the contribution to the attendance of the positively ( ) frozen agents , and the negatively ( ) frozen agents and write ( eqn . [ aoft ] ) . 
here is the background ( mean zero , variance ) and is the contribution from the fickle agents ( with assumed mean zero ) . integrating over time at fixed history , integrates to zero because of linearity , giving where we have explicitly inserted the negative bias term for the used strategy .averaging over histories in the large limit we find that the bias is just a constant and the fitness is normal with mean and variance given by \,,\end{aligned}\ ] ] where as before and and are the respective fractions of frozen agents .we note that the step size is of order for the linear payoff , compared to order for the sign payoff game . similarly in both cases , for large the fitness drops out , ensuring that there are no frozen agents . for moderate fraction of frozen agents need to be solved for self - consistently through the equations as for the sign - payoff game the results from solving these equations numerically are in good agreement with simulation data in the dilute phase as shown in fig .[ linear_fracs ] .( note , compared to fig .[ frozen_fraction ] , that both the data and model results for the fraction of frozen agents are very similar and quite insensitive to whether sign - payoff or linear payoff is used . )the fraction of frozen agents as a function of for linear payoff .also shown is the total fraction of frozen agents from the replica calculation ( eqns .3.41 - 3.44 of ) ( each data point is averaged over 20 runs with time steps each ( steps for ) . ) ] the fluctuations of attendance with are compared to simulations in fig .[ sigsqh ] .these are clearly significantly overestimated by the model .( similar results are found for the sign - payoff game and model . ) following the exposition in appendix [ hamiltonian ] , the reasons for this discrepancy is quite clear .the model always overestimates the fluctuations , and since we are assuming that only the frozen agents contribute to we also miss the contribution of the fickle agents to reduce .there seems to be a quite clear path to improve the model along these lines , which is left for future work .here we opt for the simplicity of solving the present model and the fact that it does give quantitative agreement with distribution of realtive strategy scores . as a next step we can find the score distributions by solving the master equation on an integer chain .in contrast to the t game where scores are only updated by 0 or , we now have to consider longer range hopping where scores are updated by integer steps in the range to .taking into account the individual time averaged step size ( for and respectively ) and the fact that has variance , we expect that the jump propabilities are well represented by a normal distribution ( for a jump from to ) the master equation takes the form taking the continuum limit over space and ignoring complications due to the boundary , this can be solved in terms of exponential localization for fickle agents ( and ) and diffusion with a drift for frozen agents ( or ) . for fickle agents the score distributions are given by for and respectively , which in the large limit reduces to . for frozen agents the distributions are given by for positively and negatively frozen agents respectively .d. challet , y .- c . zhang . _emergence of cooperation and organization in an evolutionary game_. physica a , * 246 * , 407 ( 1997 ) .zhang , _ evolving models of financial markets_. europhys . news * 29 * , 51 ( 1998 ) c.h .yeung , y .- c ._ minority games_. 
encyclopedia of complexity and systems science , 5588 - 5604 ( 2009 ) .a. chakrabortia , d challeta , a chatterjeec , m marsilie , yi - cheng zhang , b. k. chakrabartid ._ statistical mechanics of competitive resource allocation using agent - based models _ , phys .rep . * 552 * , 1 ( 2015 ) .m. mezard , g. parisi and m. virasoro _ spin glass theory and beyond : an introduction to the replica method and its applications _ , world scientific lecture notes in physics : volume 9 .world scientific , singapore ( 1987 )
we study the equilibrium distribution of relative strategy scores of agents in the asymmetric phase ( ) of the basic minority game using sign - payoff , with agents holding two strategies over histories . we formulate a statistical model that makes use of the gauge freedom with respect to the ordering of an agent s strategies to quantify the correlation between the attendance and the distribution of strategies . the relative score of the two strategies of an agent is described in terms of a one dimensional random walk with asymmetric jump probabilities , leading either to a static and asymmetric exponential distribution centered at for fickle agents or to diffusion with a positive or negative drift for frozen agents . in terms of scaled coordinates and the distributions are uniquely given by and in quantitative agreement with direct simulations of the game . as the model avoids the reformulation in terms of a constrained minimization problem it can be used for arbitrary payoff functions with little calculational effort and provides a transparent and simple formulation of the dynamics of the basic minority game in the asymmetric phase
many complex systems , including social , biological , physical , economic , and computer systems , can be studied using network models in which the nodes represent the constituents and links or edges represent the interactions between constituents .one of important measures of the topological structure of a network is its connectivity distribution , which is defined as the probability that a randomly selected node has exactly edges . in traditional random graphs as well as in the small - world networks the connectivity distribution shows exponential decay in the tail .however , empirical studies on many real networks showed that the connectivity distribution exhibits a power law behavior for large .networks with power - law connectivity distributions are called _ scale - free _ ( sf ) networks . typical examples of sf networks include the internet , world - wide - web , scientific citations , cells , the web of actors , and the web of human sexual contacts .the first model of sf networks was proposed by barabsi and albert ( ba ) . in ba networks two important ingredientsare included in order to obtain power law behavior in the connectivity distributions , namely the networks are continuously _ growing _ by adding in new nodes as time evolves , and the newly added nodes are _ preferentially attached _ to the highly connected nodes .the idea of incorporating preferential attachment in a growing network has led to proposals of a considerable number of models of sf networks ( see also refs . and references therein ) . in most growing network models ,all the links are considered equivalent . however , many real systems display different interaction strengths between nodes .it has been shown that in systems such as the social acquaintance network , the web of scientists with collaborations , and ecosystems , links between nodes may be different in their influence and the so - called the weak links play an important role in governing the network s functions .therefore , real systems are best described by weighted growing networks with non - uniform strengths of the links . only recently, a class of models of weighted growing networks was proposed by yook , jeong and barabsi ( yjb ) . in the basic weighted scale - free ( wsf ) model of yjb , both the topology and the weight are driven by the connectivity according to the preferential attachment rule as the network grows .it was found that the total weight distribution follows a power law , with an exponent different from the connectivity exponent .it was also shown analytically that the different scaling behavior in the weight and connectivity distributions are results of strong logarithmic corrections , and asymptotically ( i.e. , in the long time limit ) the weighted and un - weighted models are identical . in real systemsone would expect that a link s weight and/or the growth rate in the number of links of a node depend not only on the popularity " of the node represented by the connectivity , but also on some intrinsic quality of the node .the intrinsic quality can be collectively represented by a parameter referred to as the fitness " . besides popularity, the competitiveness of a node in a network may depend , taking for example a node being an individual in a certain community , on the personality , survival skills , character , etc .. 
a newly added node may take into account of these factors beside popularity in their decision on making connections with existing nodes and on the importance of each of the established links .clearly , there is always a spectrum of personality among the nodes and therefore a distribution in the fitness . while one may argue that factors determining the popularity may overlap with those in fitness , it is not uncommon that popularity is not the major factor on the importance of a connection .for example , we often hear that a popular person actually has very few good friends , and an influential and powerful figure in a network may often be someone very difficult to work with . in the present work ,we generalize the wsf model of yjb to study the effects of fitness . in our model ,the weights assigned to the newly added links are determined stochastically either by the connectivity with probability or by the fitness of nodes with probability .the scaling behavior of the total weight distribution is found to be highly sensitive to the weight assignment mechanism through the parameter .the plan of the paper is as follows . in sec .[ sec : model ] , we present our model and simulation results . in sec .[ sec : solution ] , we derive an analytical expression between the total weight and the total connectivity of a node and provide a theoretical explanation on the features observed in the numerical results .results on a generalized model with a fitness - dependent link formation mechanism are presented in sec .[ sec : discussion ] , together with a summary of results .the topological structure of our model follows that of the ba model of sf networks .a small number ( ) of nodes are created initially . at each time step , a new node with ( ) links is added to the network .these links will connect to pre - existing nodes in the system according to the preferential attachment rule that the probability of an existing node being selected for connection is proportional to the total number of links that node carries , i.e. , the procedure creates a network with nodes and links after time steps .geometrically , the network displays a connectivity distribution with a power law decay in the tail with an exponent , regardless of the value of .a weighted growing network is constructed by assigning weights to the links as the network grows . to incorporate a fitness - dependent weight assignment mechanism ,a fitness parameter is assigned to each node .the fitness is chosen randomly from a distribution , which is assumed to be a uniform distribution in the interval ] , the integration in eq .( [ twei ] ) can be carried out to give {i}(t ) -\frac{1}{4}p[(\ln{\frac{4t}{t_{i}^{0}}})^{2 } -4\ln{2}\ln{\frac{t}{t_{i}^{0}}}]+c,\ ] ] where is an integration constant .eq.([finalwei ] ) implies that the different scaling behavior in and as shown in the simulations are results of the logarithmic correction term , which can be tuned by the parameter . for , eq .( [ finalwei ] ) gives leading to the same scaling behavior of and , as observed in the simulation results . for corresponding to the wsf model of yjb , the dynamical behavior of most from that of . for arbitrary , follows a similar form with dependence coming into the second term on the right hand side of eq.([finalwei ] ) .our model can be easily generalized to allow for a fitness - dependent link formation mechanism . 
in the basic model with fitness , the probability that a new link is established with an existing node is determined jointly by the node s connectivity and fitness with to study the effects of fitness , we study a generalization of our model by replacing eq.([probk ] ) by eq.([probke ] ) for link formation , while keeping eqs .( [ weik ] ) and ( [ weif ] ) for weights assignments .the connectivity distribution follows a generalized power law with an inverse logarithmic correction of the form with .4 shows the numerical results for the total weight distribution for three different values of , and .it is found that follows the same generalized power law form as , but with a different exponent that depends on . for , . only for , and have the same exponent of .for the cumulative distribution of weights of individual links , the numerical results are similar to those shown in fig .3 . in summary , we proposed and studied a model of weighted scale - free networks in which the weights assigned to links as the network grows are stochastically determined by the connectivity of nodes with probability and by the fitness of nodes with probability .the model leads to a power law probability distribution for the total weight characterized by an exponent that is highly sensitive to the probability .if the weight is driven solely by the fitness , i.e. , , follows the same scaling behavior of with the same exponent .similar results were also found in a generalized model with a fitness dependent link formation mechanism .an expression relating the total weight and the total connectivity of a node was derived analytically .the analytical result was used to explain the features observed in the results of numerical simulations . in closing , we note that although the total weight distribution and the connectivity distribution carry different exponents and for in our model , still follows a power law , i.e. , has the same functional form as .the same feature was also found in the generalized model .however , one would expect that in some complex real systems even the functional forms of and may be different .it remains a challenge to introduce simple and yet non - trivial models that give one behavior for the geometrical connection among the constituents and another behavior for the extend of connectivity between constituents .figure 1 : the weight distribution as a function of the total weight on a log - log scale for different values of .the two solid lines are guide to the eye corresponding to the exponents and , respectively .figure 2 : the total weight of a randomly selected node with fitness ( ) as a function of time on a log - log scale for different values of .the solid line is a guide to the eye corresponding to an exponent .figure 4 : the weight distribution as a function of the total weight on a log - log scale for different values of in a model with fitness - dependent link formation mechanism .the two solid lines are plotted according to the form of eq.([connectivity ] ) , but with an exponent characterizing that takes on the values and respectively .
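The growth rules discussed above can be put into a short simulation. The sketch below grows a network by preferential attachment, draws a fitness for every node from a uniform distribution on [0, 1], and assigns the weights of the m new links in proportion to either the connectivity (with probability p) or the fitness (with probability 1 - p) of the chosen targets; a flag switches the attachment probability itself to the fitness-dependent form of the generalized model. The normalization in which each new node distributes a total weight of one among its links, and the per-node (rather than per-link) coin flip, are assumptions of this sketch rather than details taken from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
m0, m, T, p = 3, 3, 3000, 0.5        # seed nodes, links per new node, growth steps, prob. p
fitness_attachment = False            # True: attachment prob. ~ eta_i * k_i (generalized model)

n_nodes = m0 + T
degree = np.zeros(n_nodes, dtype=int)
weight = np.zeros(n_nodes)            # total weight attached to each node
eta = rng.random(n_nodes)             # fitness, uniform on [0, 1]

# fully connected seed with unit-weight links
for i in range(m0):
    for j in range(i + 1, m0):
        degree[i] += 1; degree[j] += 1
        weight[i] += 1.0; weight[j] += 1.0

for t in range(T):
    new = m0 + t
    old = np.arange(new)
    attach = degree[old] * (eta[old] if fitness_attachment else 1.0)
    targets = rng.choice(old, size=m, replace=False, p=attach / attach.sum())
    # weights of the new links: driven by connectivity (prob p) or fitness (prob 1-p);
    # the new node distributes a total weight of one among its m links (assumption)
    if rng.random() < p:
        w = degree[targets] / degree[targets].sum()
    else:
        w = eta[targets] / eta[targets].sum()
    for j, wij in zip(targets, w):
        degree[j] += 1; weight[j] += wij
    degree[new] += m
    weight[new] += w.sum()

print("nodes:", n_nodes, " max degree:", degree.max(),
      " max total weight:", round(weight.max(), 2))
```

A log-binned histogram of the array weight accumulated over many realizations would give the total-weight distribution whose exponent is discussed above.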
we propose and study a model of weighted scale - free networks incorporating a stochastic scheme for weight assignments to the links , taking into account both the popularity and fitness of a node . as the network grows the weights of links are driven either by the connectivity with probability or by the fitness with probability . results of numerical simulations show that the total weight associated with a selected node exhibits a power law distribution with an exponent , the value of which depends on the probability . the exponent decreases continuously as increases . for , the total weight distribution displays the same scaling behavior as that of the connectivity distribution with , where is the exponent characterizing the connectivity distribution . an analytical expression for the total weight is derived so as to explain the features observed in the numerical results . numerical results are also presented for a generalized model with a fitness - dependent link formation mechanism .
the fractional order dynamical systems have attracted remarkable attention in the last decade .many authors have studied the chaotic and hyperchaotic dynamics of various fractional order systems , such as those of duffing , lorenz , rssler , chua , l , chen , etc ., which are introduced by changing the time derivative in the corresponding ode systems , usually with the fractional derivative in the caputo or riemann - liouville sense of order .one interesting problem is to analyze the lowest value of parameter under which fractional order dynamical systems show chaotic or hyperchaotic behaviors .stability analysis , synchronization and control of fractional order systems by using different techniques are also widely investigated and are of great interest due to their application in control theory , signal processing , complex networks , etc . in this paperwe investigate the possibility to control unstable equilibria and unstable periodic orbits in fractional order chaotic systems by a time - delayed feedback method .pyragas introduced the time - delayed feedback control ( tdfc ) in 1992 by constructing a control force in a form of a continuous feedback proportional to the difference between the present and an earlier value of an appropriate system variable , i.e. ] , i. e. , is the first integer which is not less than . in the following , we consider the fractional orders to be in the interval . in the case , where is the kronecker delta , the generalized control scheme ( [ 2.1 ] ) reduces to tdfc with a diagonal coupling , and when the control force is applied only to a single system component and consists only of contributions of the same component , it yields the original tdfc control scheme introduced by pyragas .let be an arbitrary equilibrium point of the system ( [ 2.1 ] ) in the absence of control ( ) , being a solution to the nonlinear algebraic system : assuming that is an unstable equilibrium point of the uncontrolled system , we wish to find the domain in the parameter space of the feedback gains and the time - delay for which becomes locally asymptotically stable under tdfc force ( [ 2.2 ] ) .the stability of under a non - diagonal feedback control ( [ 2.1])([2.2 ] ) can be determined by linearizing ( [ 2.1 ] ) around , which leads to the linear autonomous system : where is the jacobian matrix of the free - running system , with calculated at , are the transformed coordinates in which the equilibrium point is at the origin , and are the components of the feedback control force in the new coordinates , i.e. .\label{2.9}\ ] ] by applying the laplace transform to eqs .( [ 2.7 ] ) and by using the formula for the laplace transform of the fractional derivative in the caputo sense : =s^\alpha x_i(s)-\sum_{k=0}^{m-1}\widetilde{x}_i^{(k)}(0+)s^{\alpha-1-k } , \label{2.10}\ ] ] where ] are positioned in the left complex s - plane ( ) .the assumption is equivalent to the claim that \neq0 ] , with given by eq .( [ 2.12 ] ) , and .we take to be an unstable equilibrium of ( [ 2.1 ] ) in the absence of external perturbation ( ) , and the jacobian matrix of .one can easily deduce that =+\infty , \label{2.21}\ ] ] and =\det[-\mathbf{\widehat{a}}]=\prod_{i=1}^n(-e_i ) , \label{2.22}\ ] ] where are the eigenvalues of .evidently , if has an odd number of positive real eigenvalues , then <0 ] is changed from negative to positive when sweeps the real interval . 
since ] , meaning that the equilibrium point can not be stabilized by the time - delayed feedback controller ( [ 2.2 ] ) .the result is summarized in the following theorem . * theorem 2 .( odd - number limitation ) * let be an unstable equilibrium point of the fractional - order system ( [ 2.1 ] ) in the absence of control ( ) , and the corresponding jacobian matrix at .if has an odd number of positive real eigenvalues , then the time - delayed feedback control ( [ 2.2 ] ) can not stabilize the unstable equilibrium for any values of the control parameters and . the result is an extension of the odd - number limitation theorem to fractional - order systems with respect to unstable fixed points .we note that the odd - number limitation has recently been refuted by fiedler et al . for the case of unstable periodic orbits in systems described by ordinary differental equations . to illustrate the time - delayed feedback control in fractional order chaotic systems, we consider a fractional order rssler system in the form : +b , \end{array } \right .\label{2.23}\ ] ] where \label{2.24}\ ] ] is the pyragas feedback controller applied through a single component ( -channel ) , and , and are the parameters of the free - running system . in the following , we take , , and , for which the uncontrolled system has a chaotic attractor ( see fig .1 ) .the unperturbed rssler system has two equilibrium points and , where linearization around the equilibrium points leads to the following linear autonomous system : where is the jacobian matrix , and , , are the transformed coordinates in which the corresponding fixed point is at the origin . according to eq .( [ 2.20 ] ) , the equilibrium point of the linearized system ( [ 2.26 ] ) is asymptotically stable if and only if for all the eigenvalues of the jacobian matrix .the equilibrium point has eigenvalues and .it is an unstable saddle point of index 1 since , .the equilibrium point has eigenvalues and , and it is an unstable saddle point of index 2 since , . in the presence of tdfc , the linearized version of the system ( [ 2.23 ] ) around the equilibrium points states : according to theorem 1 , the zero solution of system ( [ 2.28 ] ) is asymptotically stable if and only if all the roots of the characteristic equation : =0 \label{2.29}\ ] ] have negative real parts , i.e. , where the characteristic matrix is given by : the characteristic eq . ( [ 2.29 ] ) can be numerically analyzed to obtain the domains of control for the unstable steady states in the plane parametrized by the feedback gain and the time delay . in the absence of control , the equilibrium point has an odd number ( one ) of positive real eigenvalues , and according to theorem 2 , it can not be stabilized by the tdfc method .this result has been confirmed by a numerical analysis of the characteristic eq .( [ 2.29 ] ) , showing absence of stability domain in the parameter plane .this observation is further confirmed by a numerical simulation of the system ( [ 2.23 ] ) under tdfc ( [ 2.24 ] ) .on the other hand , the fixed point can be controlled by tdfc , and the resulting stability domain is shown in fig .the stability islands ( shaded areas ) denote the values of the control parameters and for which all the eigenvalues of the characteristic eq .( [ 2.29 ] ) are lying on the left complex -plane , thus satisfying the stability condition ( [ 2.17 ] ) .for these values of the control parameters , the control of the fixed point is successful . 
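The eigenvalue analysis of the two equilibria can be reproduced with a few lines of code. The sketch below assumes the standard Rössler form dx/dt = -y - z, dy/dt = x + a y, dz/dt = b + z (x - c) and uses illustrative parameter values, since the specific a, b, c and fractional order used above are not reproduced here; the stability test is the Matignon condition, |arg(lambda)| > alpha*pi/2 for every Jacobian eigenvalue, quoted in the text.

```python
import numpy as np

def rossler_equilibria(a, b, c):
    """Equilibria of dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c)."""
    disc = np.sqrt(c * c - 4.0 * a * b)
    return [(x, -x / a, x / a) for x in ((c + disc) / 2.0, (c - disc) / 2.0)]

def matignon_stable(jacobian, alpha):
    """Matignon condition for fractional order 0 < alpha <= 1: the equilibrium
    is asymptotically stable iff |arg(lambda)| > alpha*pi/2 for every
    eigenvalue lambda of the Jacobian."""
    eig = np.linalg.eigvals(jacobian)
    return bool(np.all(np.abs(np.angle(eig)) > alpha * np.pi / 2.0))

# illustrative parameters only; the paper's a, b, c and alpha are not shown here
a, b, c, alpha = 0.5, 0.2, 10.0, 0.9
for x, y, z in rossler_equilibria(a, b, c):
    jac = np.array([[0.0, -1.0, -1.0],
                    [1.0,   a,   0.0],
                    [  z,  0.0, x - c]])
    print((round(x, 4), round(y, 4), round(z, 4)),
          "stable without control:", matignon_stable(jac, alpha))
```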
as a verification, we performed a computer simulation of tdfc by numerically integrating the system ( [ 2.23])([2.24 ] ) .the resulting diagrams are shown in fig .the simulations were done by using a predictor - corrector adams - bashford - moulton numerical scheme for solving fractional order differential equations .panels ( a ) , ( b ) and ( c ) depict the dynamics of the state variables , and , respectively , and panel ( d ) shows the corresponding time series of the control signal . in the simulations ,the control parameters were and , belonging to the domain of successful tdfc control depicted in fig .as expected , the simulation confirms a successful stabilization of the unstable equilibrium .moreover , as indicated from panel ( d ) in fig .3 , the control signal vanishes when the control is achieved , meaning that the control scheme is noninvasive .we note that the above analysis has been repeated for different parameter values of the free - running system . in each case , the resulting stability domains computed from eqs .( [ 2.29])([2.30 ] ) are in agreement with the numerical simulation of the tdfc method .specifically , for , , and variable , we observed a decrease in the stability region as is increased from to . on the other hand ,as becomes smaller than , the complex - conjugate eigenvalues of the equilibrium point eventually escape the instability region described by the matignon formula ( [ 2.20 ] ) , resulting in a stable equilibrium even without control .the critical value that corresponds to this eigenvalue - crossing of the conic surface between the different stability regions can be calculated from eq .( [ 2.20 ] ) . in this case , .in recent papers , it has been demonstrated , both numerically and analytically , that the original pyragas tdfc scheme can be improved significantly by modulating the time delay in an interval around some nominal delay value . in both deterministic and stochastic variants of such a delay variation ,the stability domain was considerably changed , resulting in an extension of the stability area in the control parameter space if appropriate modulation is chosen . in the following, we will demonstrate numerically the successfulness of this variable - delay feedback control in the case of fractional - order chaotic rssler system ( [ 2.23 ] ) with , , and . in this case , the feedback force is given by : , \label{3.1}\ ] ] where we choose a time - varyng delay in a form modulated around a nominal delay value with a sine - wave modulation of amplitude and frequency .obviously , if then , and the variable - delay feedback control is reduced to the classical pyragas tdfc scheme . with this choice of the feedback force ,the control parameters of the proposed variable - delay scheme are , , and , and thus , the control parameter space is four - dimensional . for visualisation purposes , we may fix two of the control parameters and investigate the stability domains in the parametric plane spanned by the remaining two control parameters . 
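The time-domain simulations referred to above rely on a fractional predictor-corrector integrator. A minimal sketch of the standard Diethelm-Ford-Freed variant of the Adams-Bashforth-Moulton scheme for a Caputo derivative of order 0 < alpha <= 1 is given below; whether it matches every detail of the implementation used here is not known, and the delayed feedback term is assumed to be handled by storing the computed trajectory and interpolating x(t - tau) inside the right-hand side.

```python
import numpy as np
from math import gamma

def fde_abm(f, y0, alpha, h, n_steps):
    """Predictor-corrector (fractional Adams-Bashforth-Moulton) scheme of
    Diethelm-Ford-Freed type for D^alpha y = f(t, y), 0 < alpha <= 1, with a
    Caputo derivative.  f(t, y) must return an array of the same shape as y.
    A delayed feedback term can be realized inside f by keeping a reference
    to the history array y and interpolating y(t - tau)."""
    y0 = np.atleast_1d(np.asarray(y0, dtype=float))
    y = np.zeros((n_steps + 1, y0.size))
    fv = np.zeros_like(y)
    y[0], fv[0] = y0, np.asarray(f(0.0, y0), dtype=float)
    c1 = h**alpha / gamma(alpha + 1.0)
    c2 = h**alpha / gamma(alpha + 2.0)
    for n in range(n_steps):
        t1 = (n + 1) * h
        j = np.arange(n + 1)
        # predictor (fractional Adams-Bashforth) weights
        b = (n + 1.0 - j)**alpha - (n - j + 0.0)**alpha
        yp = y0 + c1 * (b[:, None] * fv[:n + 1]).sum(axis=0)
        # corrector (fractional Adams-Moulton) weights
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1.0) - (n - alpha) * (n + 1.0)**alpha
        if n >= 1:
            k = np.arange(1, n + 1)
            a[k] = ((n - k + 2.0)**(alpha + 1.0) + (n - k + 0.0)**(alpha + 1.0)
                    - 2.0 * (n - k + 1.0)**(alpha + 1.0))
        y[n + 1] = y0 + c2 * ((a[:, None] * fv[:n + 1]).sum(axis=0)
                              + np.asarray(f(t1, yp), dtype=float))
        fv[n + 1] = np.asarray(f(t1, y[n + 1]), dtype=float)
    return y
```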
to demonstrate the superiority of variable - delay feedback control over tdfc , we fix the modulation amplitude and the frequency and investigate the control domain in parameter space .numerical simulations show that the stability area is gradually increasing as is increased from zero .figure 4 shows such a control domain for and .the grey region indicates those values of the control parameters and for which the control of the unstable equilibrium is successful .the stability domain is obtained by numerically integrating the linearized system with the jacobian matrix given by eq .( [ 2.27 ] ) .it is evident that the control domain is significantly enlarged in comparison to the one in tdfc in fig .we note that numerical integration of the system ( [ 3.3 ] ) for the unstable equilibrium shows failure of the variable - delay control scheme for any values of and , suggesting validity of the odd - number limitation theorem also in the case of a time - varying delay . as a demonstration of the variable - delay feedback control in the fractional - order rssler system , in panels ( a)(c ) of fig .5 we show the dynamics of the state variables , and for and , fixing the modulation amplitude and frequency .the time series indicate a successful stabilization of the unstable equilibrium .the method of control is again noninvasive , as indicated by the vanishing feedback force in panel ( d ) of fig .we note that for these parameter values , the control via tdfc is unsuccessful , as can be perceived from the stability domain in the tdfc case depicted in fig .the pyragas delayed feedback control method was originally aimed to stabilize unstable periodic orbits embedded into the chaotic attractor of the free - running system . for this purpose ,the time delay in the feedback loop was chosen to coincide with the period of the target orbit . by tuning the feedback gain to an appropriate value ,the stabilization is achieved and the controller perturbation vanishes , leaving the target orbit and its period unaltered . in this section , we will give a brief demonstration of the pyragas method to control unstable periodic orbits in the fractional - order rssler system ( [ 2.23])([2.24 ] ) .as in the previous discussion , we use , , and , for which the system is chaotic in the absence of external perturbation . in order to estimate the periods of the unstable orbits which are typically not known a priori , we use the fact that the signal difference at a successful control asymptotically tends to zero if the delay of the controller is adjusted to match the period of the target orbit .the method consists of calculating the dispersion of the control signal at a fixed value of the feedback gain for a given range of values of the delay , excluding the transient period .the resulting logarithmic plots of the dependence of the dispersion on the delay may contain several segments of finite -intervals for which is practically zero , and a sequence of isolated resonance peaks with very deep minima .the former correspond to the stability domain of the fixed point , and the latter are the points at which coincides with some accuracy to the periods of the unstable periodic orbits in the original system .the estimated values of the periods can be made more accurate if one repeats this `` spectroscopy '' procedure for a larger sampling resolution of the interval encompassing the resonance peaks . 
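A brief sketch of this spectroscopy procedure is given below. The routine run_controlled is a hypothetical user-supplied function (for instance built on the fde_abm integrator above, with either a constant or a sinusoidally modulated delay) that integrates the controlled system for a given delay and feedback gain and returns the time grid together with the sampled control signal; for each trial delay the transient is discarded and the dispersion of the remaining signal is recorded.

```python
import numpy as np

def delay_spectroscopy(run_controlled, delays, k_gain, t_transient):
    """For each trial delay tau, integrate the controlled system and record
    the dispersion of the control signal after the transient.  Deep minima
    mark delays close to the periods of unstable periodic orbits, while flat
    near-zero stretches correspond to the fixed-point stability domain."""
    dispersion = []
    for tau in delays:
        t, force = run_controlled(tau=tau, K=k_gain)   # hypothetical runner
        t, force = np.asarray(t), np.asarray(force)
        dispersion.append(float(np.var(force[t > t_transient])))
    return np.array(dispersion)
```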
in this way, we have obtained the periods of the unstable period - one , period - two , and period - three orbits : , and .the plot of the dispersion vs the delay for is shown in panel ( a ) of fig .6 . the same approach could be used to calculate the intervals of the feedback gain for which the corresponding orbits can be stabilized with the pyragas controller . in panel( b ) of fig .6 we depict the dependence of the dispersion on the feedback gain when the delay time coincides with the period of the first unstable periodic orbit . in this case , the interval of the parameter for which the orbit is stabilized is estimated to be ] for a period - two , and $ ] for a period - three orbit .it is observed that the control interval of the feedback gain becomes narrower as the period of the target orbit is increased . in fig .7 we show the results of the stabilization of period - one orbit ( ) for .panels ( a ) and ( b ) show the projection of the system trajectory in and planes , respectively , after the control of the target period - one orbit has been established .the time - series of the state variables are given in panels ( c)(e ) , and panel ( f ) shows the feedback force that vanish after the controller is switched - on , warranting a noninvasiveness of the control procedure .analogous results related to stabilization of period - two and period - three orbits are given in figs .8 and 9 .a detailed bifurcation analysis of the chaotic rssler system described by ordinary differential equations and subjected to a time - delayed feedback control has been performed recently , revealing multistability and a large variety of different attractors that are not present in the free - running system .a similar analysis in the case of fractional - order chaotic systems is left for future studies .we have shown that the time - delayed feedback control can be used to stabilize unstable steady states and unstable periodic orbits in fractional - order chaotic systems .although the control method was illustrated specifically for the fractional order rssler system , it has also successfully been applied to stabilize unstable equilibria and unstable periodic orbits in various other fractional - order dynamical systems . in all the cases ,delayed feedback control with a variable time - delay significantly enlarges the stability region of the steady states in comparison to the classical pyragas tdfc scheme with a constant delay .we find that equilibrium points that have an odd number of positive real eigenvalues can not be stabilized by tdfc for any values of the feedback control parameters .the result is known as the odd - number limitation theorem , which extends to the case of fractional - order systems , as purported by theorem 2 .the odd - number limitation is also confirmed numerically in the case of a variable - delay feedback control .an analytical treatment of delayed feedback control of unstable periodic orbits in fractional - order systems is still lacking , and constitutes a promising subject for a future research . applying the extended versions of the delayed feedback controller to fractional - order systems is another interesting topic not tackled in this paper .this is especially important regarding the observations for the system used in this paper , that the control domains are becoming smaller for higher orbits , such that the periodic orbits of periods higher than three practically can not be stabilized by the original controller . 
a detailed analysis of variable-delay feedback control in fractional-order systems, including a theoretical understanding of the method and a numerical computation of the stability domains in different parameter planes and for different types of delay modulation, is also left for future studies.
we study the possibility of stabilizing unstable steady states and unstable periodic orbits in chaotic fractional-order dynamical systems by the time-delayed feedback method. by performing a linear stability analysis, we establish the parameter ranges for successful stabilization of unstable equilibria in the plane parametrized by the feedback gain and the time delay. insight into the control mechanism is gained by analyzing the characteristic equation of the controlled system, showing that the control scheme fails to control unstable equilibria having an odd number of positive real eigenvalues. we demonstrate that the method can also stabilize unstable periodic orbits for a suitable choice of the feedback gain, provided that the time delay is chosen to coincide with the period of the target orbit. in addition, it is shown numerically that delayed feedback control with a sinusoidally modulated time delay significantly enlarges the stability region of the steady states in comparison to the classical time-delayed feedback scheme with a constant delay.
the field of first - principles alloy theory has made substantial progress over the last two decades .it is now possible to predict relatively complex solid - state phase diagrams starting from the basic principles of quantum mechanics and statistical mechanics .since no experimental input is required , these _ ab - initio _ calculations have been useful for clarifying the phase diagram of several new materials .several excellent reviews on the topic exist .the accuracy of calculated phase diagrams is currently limited by two factors .first , one needs , as a starting point , the energy of the alloy in various atomic configurations and hence , one is limited by the accuracy of the quantum - mechanical calculations used to obtain these energies . typically , methods based on density functional theory ( dft ) , such as the local density approximation ( lda ) or the generalized gradient approximation ( gga ) are used .a second shortcoming arises from the fact that , in order to reduce computational requirements , the sampling of the partition function to obtain the free energy is only done over a limited number of degrees of freedom .typically , these include substitutional interchanges of atoms but no atomic vibrations .attempts to either assess the validity of this approximation or to devise computationally efficient ways to account for lattice vibrations are currently the focus of intense research .this interest is fueled by the observation that phase diagrams obtained from first principles often incorrectly predict transition temperatures .it is hoped that lattice vibrations could account for some of the remaining discrepancies between theoretical calculations and experimental measurements .three main questions are addressed in this paper . 1 .do lattice vibrations have a sufficiently important impact on phase stability that their thermodynamic effects need to be included in phase diagram calculations ? 2 .what are the fundamental mechanisms that explain the relationship between the structure of a phase and its vibrational properties ? 3. how can the effect of lattice vibrations be modeled at a reasonable computational cost ? this paper is organized as follows .first , section [ generalities ] presents the basic formalism that allows the calculation of phase diagrams , along with the generalization needed to account for lattice vibrations . a review of the theoretical and experimental literature seeking to quantify the impact of lattice vibrations on phase stability is then presented in section [ evidence ] .the main mechanisms through which lattice vibrations influences phase stability are described in section [ secmecha ] .section [ compute ] describes the methods used to calculate vibrational properties while section [ experimental ] presents the experimental techniques allowing their measurement .finally , section [ modellatvib ] discusses the strengths and weaknesses of a variety of models of lattice vibrations .phase stability at constant temperature is determined by the free energy should be used instead of the helmoltz free energy , but at atmospheric pressure , the term is negligible for an alloy . 
]the free energy can be expressed as a sum of a configurational contribution and vibrational contributions .the configurational contribution accounts for the fact that atoms can jump from one lattice site to another , while vibrational contribution accounts for the vibrations of each atom around its equilibrium position .the first part of this section presents the traditional formalism used in alloy theory to determine the configurational contribution .the second part introduces the basic quantities that determine whether lattice vibrations have a significant effect on phase stability .the third part describes how the traditional formalism can be adapted when lattice vibrations do need to be accounted for .one of the goals of alloy theory is to determine the relative stability of phases characterized by a distinct ordering of atomic species on a given periodic array of sites .this array of sites , called the _ parent lattice _ , can be any crystallographic lattice augmented by any motif .a convenient representation of an alloy system is the ising model . in the common case of a binary alloy system ,the ising model consists of assigning a spin - like occupation variable to each site of the parent lattice , which takes the value or depending on the type of atom occupying the site. a particular arrangement of spins of the parent lattice is called a _ configuration _ and can be represented by a vector containing the value of the occupation variable for each site in the parent lattice .although this framework can be extended to arbitrary multicomponent alloys , we focus on the case of binary alloys , since all the studies we review consider binary alloys only . when all the fluctuations in energy are assumed to arise solely from configurational change , the ising model is a natural way to represent an alloy .the thermodynamics of the system can then be summarized in a partition function of the form : where , and is the energy when the alloy has configuration . it would be computationally intractable to compute the energy of every configuration from first - principles .fortunately , the configurational dependence of the energy can be parametrized in a compact form with the help of the so - called cluster expansion .the cluster expansion is a generalization of the well - known ising hamiltonian .the energy ( per atom ) is represented as a polynomial in the occupation variables : where is a cluster ( a set of sites ) .the sum is taken over all clusters that are not equivalent by a symmetry operation of the space group of the parent lattice , while the average is taken over all clusters that are equivalent to by symmetry .the coefficients in this expansion embody the information regarding the energetics of the alloy and are called the effective cluster interaction ( eci ) .the _ multiplicities _ indicate the number of clusters that are equivalent by symmetry to ( divided by the number of lattice sites ) . it can be shown that when _ all _ clusters are considered in the sum , the cluster expansion is able to represent any function of configuration by an appropriate selection of the values of . however , the real advantage of the cluster expansion is that , in practice , it is found to converge rapidly. a sufficient accuracy for phase diagram calculations can be achieved by keeping only clusters that are relatively compact ( _ e.g. 
_ short - range pairs or small triplets ) .the unknown parameters of the cluster expansion ( the eci ) can then determined by fitting them to the energy of relatively small number of configurations obtained , for instance , through first - principles computations .the cluster expansion thus presents an extremely concise and practical way to model the configurational dependence of an alloy s energy .how many eci and structures are needed in practice ?a typical well - converged cluster expansion of the energy of an alloy consists of about 20 to 30 eci and necessitates the calculation of the energy of around 40 to 50 ordered structures ( see , for instance , ) .a faithful modeling of the qualitative features of the phase diagram ( correct stable phases and topology ) typically requires far fewer eci ( as little as 1 pair interaction ) and correspondingly less structures , as illustrated by the numerous examples given in . in general multicomponent systems ,the number of eci and ordered structures required to achieve a given precision unfortunately grows rapidly with the number of species ( ) .for instance , in ternaries , each pair interaction is characterized by 3 interaction parameters instead of only one in the binary case .for this reason , very few first - principle calculations of ternary phase diagrams have been attempted ( see for a recent example , or for a survey ) .although the cluster expansion usually allows a very compact representation of the energetics of an alloy system , there are two situations where a standard cluster expansion is known to converge slowly. systems where long - range elastic interactions are important due to a large atomic size mismatch between the alloyed species may require that elastic interactions be explicitly accounted for through the use of a so - called reciprocal space cluster expansion .another situation , as recently identified by , is when the electronic structure of the system exhibits a very strong configurational dependence due to symmetry - breaking effects . in the caseswhere a short - range cluster expansion does provide a sufficient accuracy , the process of calculating the phase diagram of an alloy system can be summarized as follows .first , the energy of the alloy in a relatively small number of configurations is calculated , for instance through first - principles computations .second , the calculated energies are used to fit the unknown coefficients of the cluster expansion ( the eci ) .finally , with the help of this compact representation , the energy of a large number of configurations is sampled , in order to determine the phase boundaries .this latter step can be accomplished with either the cluster variation method ( cvm ) , the low - temperature expansion ( lte ) , or monte - carlo simulations .the previous section described the framework allowing the calculation of phase diagrams under the assumption that the thermodynamics of the alloy is determined solely by configurational excitations .accounting for vibrational excitations introduces corrections to this simplified treatment .this section presents the basic quantities that enable an estimation of the magnitude of the effect of lattice vibration on alloy thermodynamics . 
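A toy implementation of this workflow helps fix the notation. The sketch below uses a periodic one-dimensional chain of +1/-1 occupation variables as the parent lattice, computes per-site correlation functions for a small pool of clusters (so the multiplicities are absorbed by averaging over translations), fits the ECI to a set of training energies by least squares, and evaluates the expansion for a new configuration. The cluster pool and all numbers are illustrative, and a real fit would add cross-validation to select which clusters to retain.

```python
import numpy as np

def correlations(spins, clusters):
    """Per-site correlation functions of a +/-1 configuration on a periodic
    1D chain (the simplest possible parent lattice, used only to fix ideas).
    clusters is a list of representative clusters given as site offsets;
    translations generate the symmetry-equivalent clusters, so multiplicities
    are absorbed by averaging over all sites."""
    spins = np.asarray(spins, dtype=float)
    out = []
    for offsets in clusters:
        prod = np.ones(len(spins))
        for d in offsets:
            prod = prod * np.roll(spins, -d)
        out.append(prod.mean())
    return np.array(out)

def fit_and_evaluate(train_configs, train_energies, clusters, new_config):
    """'Structure inversion': least-squares fit of the ECI so that
    E(sigma) = sum_alpha J_alpha <sigma...>_alpha reproduces the training
    energies (per site), followed by evaluation for a new configuration."""
    X = np.array([correlations(s, clusters) for s in train_configs])
    eci, *_ = np.linalg.lstsq(X, np.asarray(train_energies, dtype=float),
                              rcond=None)
    return eci, float(correlations(new_config, clusters) @ eci)

# toy cluster pool: empty cluster, point, nearest- and next-nearest pair
clusters = [[], [0], [0, 1], [0, 2]]
```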
to understand the effect of lattice vibrations on phase stability , it is instructive to decompose the configurational ( `` config '' ) and vibrational ( `` vib '' ) parts of the free energy of a phase into an energetic contribution and an entropic contribution : we take the convention that is the energy of the alloy system when all atoms are frozen at their average position at a given temperature . in the approximation of harmonic lattice vibrations and in the limit of high temperature , the vibrational energy is simply determined by the equipartition theorem and is independent of the phase considered . hence as long as these approximations are appropriate , lattice vibrationsare mainly expected to influence phase stability through their entropic contribution . intuitively , the vibrational entropy is a measure of the average stiffness of an alloy , as can be best illustrated by considering an simple system made of large number of identical harmonic oscillators. the softer the oscillators are , the larger their oscillation amplitude can be , for a fixed average energy per oscillator .hence , the system samples a larger number of states and the entropy of the system increases . in summary ,the softer the alloy , the larger the vibrational entropy .a phase with a large vibrational entropy is stabilized relative to other phases , since a larger vibrational entropy results in a lower free energy , as seen by equation ( [ emts ] ) . from a statistical mechanics point of view, this fact can be understood by observing that a phase that encloses more states in phase space is more likely to be visited , as the system undergoes microscopic transitions , and therefore exhibits an increased stability .the central role of vibrational entropy can be further appreciated by considering the effect of vibrations on a phase transition between two phases and which differ only by their average configuration ( _ e.g. _ an order - disorder transition ) .if the vibrational entropy difference between the two phases is , the transition temperature obtained with both configurational and vibrational contributions ( ) is related to the transition temperature obtained with configurational effects only ( ) by where is the change in configurational entropy upon phase transformation .this result is exact in the limit of small vibrational effects , high temperature and harmonic vibrations .a correction to this result that accounts for anharmonicity can be found in .equation ( [ tcshift ] ) indicates that the quantity determining the magnitude of the effect of lattice vibration on phase stability is the ratio of the vibrational entropy difference to the configurational entropy difference .for this reason , most investigations aimed at assessing the importance of lattice vibrations focus on estimating vibrational entropy differences between phases . since the configurational entropy ( per atom ) for a binary alloy at concentration is bracketed by equations ( [ tcshift ] ) and ( [ svibbnd ] ) provide us with a absolute scale to gauge the importance of vibrations .as we will see , typical vibrational entropy differences are of the order of , indicating that corrections of the order of 30% to the transition temperature may not be uncommon . 
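A back-of-envelope gauge of this ratio, and of the resulting shift of an order-disorder transition temperature, can be coded directly. The rescaling used below, T_c(with vibrations) = T_c(configuration only) / (1 + dS_vib/dS_conf), follows from balancing the transition energy against T times the total transition entropy when both entropy differences and the transition energy are taken as temperature independent; it is meant only as an order-of-magnitude estimate consistent with the limits stated above, not as the exact expression of the original equation.

```python
import numpy as np

def config_entropy_ideal(x):
    """Ideal-mixing configurational entropy per atom, in units of k_B (the
    upper bound quoted above)."""
    return -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

ds_conf = config_entropy_ideal(0.5)   # ~0.69 k_B, the maximum possible value
ds_vib = 0.2                          # k_B, a typical magnitude cited in the text
ratio = ds_vib / ds_conf
print(f"dS_vib/dS_conf = {ratio:.2f}; "
      f"T_c rescaled by about {1.0 / (1.0 + ratio):.2f}")
```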
while it is clear that vibrational excitations introduce quantitative corrections to the simple picture of alloy thermodynamics based on configurational excitations only , more profound effects of a qualitative nature are also possible .vibrational effects may lead to deviations from the traditional belief that , at high enough temperature , all short - range order in a disordered material disappears . while a fully disordered state clearly maximizes configurational entropy ,it is not clear that the total entropy is necessarily maximized in the state of maximum configurational disorder .the presence of short - range order may _ increase _ the total entropy , relative to a fully disordered alloy , through an increase of the vibrational entropy .vibrational entropy somewhat challenges our intuition , which is largely derived from tacitly assuming that configurational disorder is _ all _ disorder .it is even conceivable that vibrational entropy could induce a transition from a disordered to an ordered phase with increasing temperature , if the vibrational entropy difference between the ordered and disordered phases is larger and opposite to the configurational entropy difference .while this phenomenon has , so far , not been observed in metallic alloys , presumably because of the large configurational entropy associated with disordering , it does occur in molecular systems , such as in diblock copolymer melts , where the configurational entropy ( per monomer ) is small .the cluster expansion formalism presented in section [ alloyth ] appears to focus solely on configurational excitations .this section , shows that , in fact , non - configurational sources of energy fluctuations can naturally be taken into account within the cluster expansion framework through a process called `` coarse graining '' of the partition function .this procedure also clarifies the nature of the physical states that are represented by a configuration of the ising model .all the thermodynamic information of a system is contained in its partition function : , \label{pf_gen}\ ] ] where is the energy of the system in state . in the case of a crystalline alloy system, the sum over all possible states of the system can be conveniently factored as following : \label{factorize}\ ] ] where * is a so - called parent lattice : it is a set of sites where atoms can sit . in principle , the sum would be taken over any bravais lattice augmented by any motif . * is a configuration on the parent lattice : it specifies which type of atom sits on each lattice site . 
* denotes the displacement of each atom away from its ideal lattice site .* is a particular electronic state ( both the spatial wavefunction and spin state ) when the nuclei are constrained to be in a state described by .* is the energy of the alloy in a state characterized by , , and .each summation is taken over the states that are included in the set of states defined by the `` coarser '' levels in the hierarchy of states .for instance , the sum over displacements includes all displacements such that the atoms remain close to the undistorted configuration on lattice .while equation ( [ factorize ] ) is in principle exact , practical first - principles calculations of phase diagrams typically rely on various simplifying assumptions .the sum over electronic states is often reduced to a single term , namely , the electronic ground state .the validity of this approximation can be assessed by ensuring that different structures have a similar electronic density of states in the vicinity of the fermi level . if needed , the contribution of electronic entropy is , at least in its one - electron approximation , relatively simple to include without prohibitive computational requirements .a simplifying assumption that is much more difficult to relax is the reduction of the sum over displacements to a single term .this simplification has been extensively used in alloy theory , because calculating the summation over involves intensive calculations . the particular displacement representing a given configuration is typically chosen to be a local minimum in energy that is close to the undistorted ideal structure where atoms lie exactly at their ideal lattice sites .usually , this state is found by placing the atoms at their ideal lattice positions and relaxing the system until a local minimum of the energy is obtained . in this fashion, the state chosen is the most probable one in the neighborhood of phase space associated with configuration . in this approximation, the partition function takes the form of an ising model partition function : with .it turns out that the same statistical mechanics techniques developed in the context of the ising model can also be used in the more general setting where atoms are allowed to vibrate ( and where electrons are allowed to be excited above their ground state ) .all is needed is to replace the energy by the _ constrained free energy _ , defined as : \right ) .\label{consf}\ ] ] in other words , it is the free energy of the alloy , when its state in phase space is constrained to remain in the neighborhood of the ideal configuration .this process , called the `` coarse graining '' of the partition function , is naturally interpreted as integrating out the fast degrees of freedom ( _ e.g. _ vibrations ) before considering slower ones ( _ e.g. 
_ configurational changes ) .this process is illustrated in fig .[ coarsefig ] .the quantity to be represented by a cluster expansion is now the constrained free energy .the only minor complication is that the effective cluster interactions become temperature dependent .there is some level of arbitrariness in the precise definition of the set of displacement over which the summation is taken in equation [ consf ] .however , in the common case where there is a local energy minimum in the neighborhood of and where the system spends most of its time visiting a neighborhood that can be approximated by a harmonic potential well , the set of displacements over which the summation is taken has little effect on the calculated thermodynamic properties . under the above assumptions , calculating the partition function of a constrained harmonic system and a harmonic system that allows infinite displacements gives essentially the same result : & \approx & \sum_{v\in { \bf\sigma}}\exp \left [ -\beta e_{h}(l,{\bf\sigma},v , t)\right ] \\ & \approx & \sum_{\text{all } v}\exp \left [ -\beta e_{h}(l,{\bf\sigma},v , t)\right]\end{aligned}\ ] ] where ] : it can be shown that the free energy of the system ( restricted to remain close to a given configuration ) is given by : where is the potential energy of the system at its equilibrium position and is planck s constant .phase transitions in alloys typically occur at a temperature where the high temperature limit of this expression is an accurate approximation : the usual criterion used to determine the temperature range where high temperature limit is reached is the debye temperature .note that the factor is often omitted because it cancels out when calculating vibrational free energy differences . in the high temperature limit , another important form of cancellation occurs : the atomic masses have no effect on the free energies of formation .this important result , shown in appendix [ masscancel ] , rules out that masses play any significant role in determining phase stability at high temperatures . as mentioned before , a convenient measure of the magnitude of the effect of lattice vibrations on phase stabilityis the vibrational entropy , which can be obtained from the vibrational free energy by the well known thermodynamical relationship .contrary to the vibrational free energy of formation , the vibrational entropy of formation is temperature - independent in the high - temperature limit of the harmonic approximation , allowing a unique number to be reported as a measure of the importance of vibrational effects . in a crystal ,the determination of the normal modes is somewhat simplified by the translational symmetry of the system .let denote the number of atoms per unit cell .let denote the displacements away from its equilibrium position of atom in cell .let be the force constant relative to atom in cell and atom in cell and let .bloch s theorem indicates that the eigenvectors of the dynamical matrix are of the form where denotes the cartesian coordinates of one corner of cell and is a point in the first brillouin zone .this fact reduces the problem of diagonalizing the matrix to the problem of diagonalizing a matrix for various values of .this can be shown by a simple substitution of equation ( [ eperio ] ) into equation ( [ heharm]).the dynamical matrix to be diagonalized is given by , , , where is the coordinate of atom within the cell . while all convention yield different dynamical matrices , they all have the same eigenvalues . 
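A one-dimensional illustration of this construction is easy to write down: for a diatomic chain with nearest-neighbour springs, the 2x2 mass-weighted dynamical matrix is built and diagonalized at each wave vector, giving an acoustic and an optical branch, and the high-temperature harmonic entropy difference between two sets of frequencies reduces to a difference of log-averaged frequencies. The chain and its parameters are illustrative only.

```python
import numpy as np

def diatomic_chain_branches(k_points, spring, m1, m2, a=1.0):
    """Acoustic and optical branches of a 1D diatomic chain with
    nearest-neighbour springs: the 2x2 (mass-weighted) dynamical matrix is
    diagonalized at each wave vector, the 1D analogue of the r x r problem
    per Cartesian direction described in the text."""
    freqs = np.zeros((len(k_points), 2))
    for i, k in enumerate(k_points):
        off = -spring * (1.0 + np.exp(-1j * k * a)) / np.sqrt(m1 * m2)
        dyn = np.array([[2.0 * spring / m1, off],
                        [np.conj(off), 2.0 * spring / m2]])
        w2 = np.linalg.eigvalsh(dyn)              # eigenvalues are omega^2
        freqs[i] = np.sqrt(np.clip(w2, 0.0, None))
    return freqs

def high_T_entropy_difference(freqs_a, freqs_b):
    """High-temperature, harmonic vibrational entropy difference S_a - S_b in
    k_B per mode.  In this limit S = k_B * sum_i [1 - ln(hbar*omega_i/k_B*T)],
    so the difference is temperature independent and reduces to a difference
    of log-averaged frequencies (zero-frequency modes are discarded)."""
    wa = np.asarray(freqs_a, dtype=float).ravel()
    wb = np.asarray(freqs_b, dtype=float).ravel()
    wa, wb = wa[wa > 1e-12], wb[wb > 1e-12]
    return np.mean(np.log(wb)) - np.mean(np.log(wa))

# e.g. a softer versus a stiffer chain (illustrative numbers):
ks = np.linspace(-np.pi, np.pi, 201)
soft = diatomic_chain_branches(ks, spring=1.0, m1=1.0, m2=2.0)
stiff = diatomic_chain_branches(ks, spring=1.5, m1=1.0, m2=2.0)
print(high_T_entropy_difference(soft, stiff))     # positive: softer -> larger S
```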
] as before , the resulting eigenvalues for , give the frequencies of the normal modes ( = ) .the function for a given is called a phonon branch , while the plot of the -dependence of all branches along a given direction in space is called the phonon dispersion curve . in periodic systems ,the phonon dos is defined as where the integral is taken over the first brillouin zone .the above theory relies , of course , on the availability of the force constant tensors .the determination of these force constant tensors is the focus of this section . before describing the methods used for their determination, we will first review important properties of the force constant tensors .while the number of unknown force constants to be determined is in principle infinite , it can , in practice , be reduced to a manageable finite number with the help of the following two observations .first , the force constant between two atoms and beyond a given distance can be neglected .second , the symmetry of the crystal imposes linear constraints between the elements of the force constant tensors. the accuracy of the approximation made by truncating the range of force constant can be tested by gradually increasing the range of interactions , until the quantities to be determined no longer vary substantially .it is important to note that most thermodynamic quantities can be written as a weighted integral of the phonon dos and their convergence rates are thus much faster than the pointwise convergence rate of the phonon dos itself .that is , the errors on the dos at each frequency tend to be quickly averaged out when the contributions of each frequency are added .the restrictions on the force constants imposed by the symmetry of the lattice can be expressed as follows .consider the force constant of atoms and located at and and consider a symmetry transformation that maps a point of coordinate to , where is a matrix and and translation vector .in general , if the crystal is left unchanged by such a symmetry operation , the force constant tensors should be left unchanged as well .this fact imposes the following constraints on the spring tensors : additional constraints on the force constants can be derived from simple invariance arguments .the most important constraints , obtained by noting that rigid translations and rotations must leave the forces exerted on the atoms unchanged , are additional constrains can be found in .there are essentially three approaches to determining the force constants : analytic calculations , supercell calculations and linear response calculations .analytic calculations are only possible when the energy model is sufficiently simple to allow a direct calculation of the second derivatives of the energy with respect to atomic displacements , as in the case of empirical pair potential models . for first - principles calculations , either one of the two following methods have to be used .[ [ the - supercell - method ] ] the supercell method + + + + + + + + + + + + + + + + + + + + the supercell method , consists of slightly perturbing the positions of the atoms away from their equilibrium position and calculating the reaction forces . equating the calculated forces to the forces predicted from the harmonic model yields a set of linear constraints that allows the unknown force constants to be determined . equations per atom . 
] when the force constants considered have a range that exceeds the extent of the primitive cell , a supercell of the primitive cell has to be used .( the simultaneous movement of the image atoms introduces linear constrains among the forces that prevent the determination of some of the force constants . ) while any choice of the perturbations that allows the force constants to be determined is in principle equally valid , a few simple principles drastically narrow down the number of perturbations that need to be considered . for a given supercell , there is a only of finite number of non - redundant perturbations to consider .a minimal set of non - redundant perturbations can be obtained as follows . *consider in turn each atom in the asymmetric unit of the primitive cell .* mark the chosen atom ( and its periodic images in the other supercells ) and consider it as distinct from other atoms of the same type .( this operation effectively removes some of the symmetry operation of the space group of the crystal . ) * construct the point group of the site where this atom is located .( is a matrix . ) * move the chosen atom along a direction such that the space spanned by the vectors ( for all ) has the highest dimensionality possible . *if the resulting dimensionality is less than three , consider an additional direction such that the space spanned by the vectors for has the highest dimensionality possible .* if the resulting dimensionality is less than three , consider a direction orthogonal to and .the resulting displacements for all atoms in the asymmetric unit gives a minimal list of perturbations that is sufficient to find all the force constants that can possibly be determined with the given supercell .this result follows from the observation that any other possible displacement can be written as a linear combination of the displacements considered above ( or displacements that are symmetrically equivalent to them ) . when determining force constants with the supercell method , it is important to verify that the presence of small numerical noise in the calculated forces does not result in too much error in the fitted force constants . to minimize noise in the fitted force constants, it may be necessary to use more than the minimum possible number of perturbations .the additional perturbations should ideally be based on different supercells , to minimize the systematic errors introduced by the movement of the image atoms . when ab - initio calculations are used to calculate the forces, it is especially important to iterate the electronic self - consistency steps to convergence . even though the energy may appear to be well converged ,the forces may not yet be .energy is the solution to a minimization procedure , while forces are not . as a result ,errors on the energy are of a second order in the minimization parameters , while the errors on the forces are of the first order in the minimization parameters . 
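The fitting step itself can be written very compactly if symmetry and range truncation are ignored. The sketch below treats the full force-constant matrix of a small supercell as the unknown and solves the linear relations F = -Phi u, obtained from a set of imposed displacement patterns and the corresponding calculated forces, in the least-squares sense; in a real calculation the symmetry relations and invariance constraints discussed above would be imposed first to reduce the number of independent unknowns.

```python
import numpy as np

def fit_force_constants(displacements, forces):
    """Minimal sketch of the fitting step in the supercell method.

    displacements[p] and forces[p] are flattened (3*N,) arrays for the p-th
    perturbation: the imposed displacement pattern and the forces returned by
    the energy method.  In the harmonic model F = -Phi u, so each perturbation
    supplies 3*N linear equations for the force-constant matrix Phi, solved
    here by least squares; no symmetry reduction or range cutoff is applied."""
    U = np.asarray(displacements, dtype=float)        # shape (P, 3N)
    F = np.asarray(forces, dtype=float)               # shape (P, 3N)
    phi_t, *_ = np.linalg.lstsq(U, -F, rcond=None)    # solves U @ Phi^T = -F
    phi = phi_t.T
    return 0.5 * (phi + phi.T)                        # symmetrize
```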
for the same reason , special attentionshould be given to the structural relaxations .the true system is not exactly harmonic and the calculated forces may exhibit anharmonic components that introduce noise into the fitted force constants .this problem can be alleviated by considering an additional set of perturbations , where the displacements have the opposite sign .subtracting the calculated forces obtained for this new set of displacements from the corresponding displacements of the opposite sign exactly cancels out all the odd - order anharmonic terms .of course , for perturbations such that the negative displacement is equivalent by symmetry to the corresponding positive displacement , this duplication is unnecessary , because the terms of odd order are already zero by symmetry .additional guidelines for fitting force constants can be found in .[ [ linear - response ] ] linear response + + + + + + + + + + + + + + + linear response calculations seek to directly evaluate the dynamical matrix for a set of points .the starting point of the linear response approach is evaluation of the second - order change in the electronic energy induced by a atomic displacements from perturbation theory . within this framework ,practical schemes to compute vibrational properties in semiconductors and metallic systems have been devised . in this sectionwe will not discuss the theory behind linear response calculations which can be found in a recent review , but rather focus on how the results of linear response calculations can be used in the context of alloy phase diagram calculations .the dynamical matrices calculated from linear response theory are exact in the sense that they account for arbitrarily long - range force constants . while in the supercell method inaccuraciesarise from the truncation of the force constants , the limit in precision for linear response calculations arises from the use of a small number of points to sample the brillouin zone . to address this issue ,two methods can be used .a set of special points can be selected through the chadi - cohen or monkhorst - pack schemes .special points are selected so that the integral over the brillouin zone of a function that contains no fourier components above a given frequency can be exactly evaluated by a weighted average of the function at each special point . sincethermodynamic quantities can be written as integrals of functions of the dynamical matrix over the brillouin zone , the procedure is straightforward to apply in this context .the other approach is the so - called fourier inversion method ( see , for instance , ) .the calculated dynamical matrices from a set of points are used to determine the value of the force constants up to a certain interaction range .the resulting harmonic model can then be used to calculate the dynamical matrix at any point in the brillouin zone , allowing a much finer sampling of the brillouin zone for the purpose of performing the numerical integration required to determine any thermodynamic quantity .the fourier inversion method is preferable when the function to be integrated exhibits high - frequency components , while the dynamical matrix itself , , does not .such a situation would arise when the function is highly nonlinear .the smoothness of then ensures that it can be represented with a small number of fourier components .the less well - behaved function can then be accurately integrated with as many points as needed , using the dynamical matrix calculated from the spring model . 
in the case of vibrational free energy calculations ,the special points method has been observed to converge rapidly with respect to the number of points , so that the fourier inversion method is probably unnecessary .point , which could lead to high frequency components that are difficult to integrate accurately .however , in three - dimensional systems , this logarithmic singularity contributes very little to the value of the free energy , so that the rate of convergence of the integral as a function of the number of points is not dramatically slowed down by the presence of the singularity . as a result, the special point method can safely be used in practical calculations of the vibrational free energy . ] for a given set of special points , there is an approximate correspondence between the number of fourier components that can be integrated exactly and the range of force constants that can be determined .the correspondence is exact only when the lattice has one atom per cell and when the function is linear .while supercell and linear response calculations are in principle equivalent in terms of the information they provide , they have complementary advantages in terms of computational efficiency .the linear response method is the most efficient way to perform high - accuracy calculations that would otherwise be tedious and computer intensive with the supercell method .however , when a high accuracy is not needed , the supercell method has the advantage that various simplifying assumptions regarding the structure of the force constant tensors can transparently be used to drastically reduce computational requirements .it is not clear at which level of accuracy the cross - over between the efficiency of each approach occurs , but it is important to keep both approaches in mind .another consideration is that in the continuously evolving field of computational solid state physics , new first - principles energy methods are continually developed , and the derivation of the appropriate linear response theory always follows the derivation of simple force calculations .hence , despite the elegance of linear response theory , it is to be expected that the supercell method will always remain of interest . while the harmonic approximation is remarkably accurate given its simplicity , it has one important limitation : it is unable to model thermal expansion and its impact on vibrational properties .both the free energy and the entropy can be obtained from the the heat capacity : hence , a simple way to account for thermal expansion is to use the following well known thermodynamic relationship between the heat capacity at constant pressure and at constant volume : where is the coefficient of volumetric thermal expansion while is the bulk modulus . in a purely harmonic model, there is no thermal expansion and is equal to .the term can thus be viewed as correction arising from anharmonic effects .equation ( [ cpcv ] ) is directly useful in the context of experimental measurements where , and can be directly measured . in the following section ,we describe the computational techniques used to handle anharmonicity .a simple modification to the harmonic approximation , called the quasiharmonic approximation , allows the calculation of thermal expansion at the expense of a moderate increase in computational cost . 
in the quasi - harmonic approximation ,the phonon frequencies are allowed to be volume - dependent , which amounts to assuming that the force constant tensors are volume - dependent ( see , for instance , ) .this approximation has recently been shown to be extremely reliable , enabling accurate first - principles calculations of the thermal expansion coefficients of many elements up to their melting points .the best way to understand this approximation is to study a simple model system where it is essentially exact . consider a linear chain ( with periodic boundary conditions ) of identical atoms interacting solely with their nearest neighbors through a pair potential of the form : let be the average distance between two nearest neighbors and let denote the displacement of atom away from its equilibrium position . the total potential energy ( per atom ) of this systemis then given by this expression can be simplified by noting that all the terms that are linear in cancel out when summed over . the first three terms , , give the elastic energy of a motionless lattice while the remaining terms account for lattice vibrations .the important feature of this equation is that , even within the harmonic approximation , the prefactor of the harmonic term , , depends on the anharmonicity of the potential ( through ) . in the more realistic case of three - dimensional systems , this length - dependence translates into a volume - dependence of the harmonic force constants .the volume dependence of the phonon frequencies induced by the volume - dependence of the force constants is traditionally modeled by the grneisen parameter which is defined for each branch and each point in the first brillouin zone .but since we are interested in determining the free energy of a system , it is convenient to directly parametrize the volume dependence of the free energy itself .this dependence has two sources : the change in entropy due to the change in the phonon frequencies and the elastic energy change due to the expansion of the lattice : where is the energy of a motionless lattice constrained to remain at volume , while is the vibrational free energy of a harmonic system constrained to remain at volume .the equilibrium volume at temperature is obtained by minimizing this quantity with respect to .the resulting free energy at temperature is then given by .. however , since the volume is a macroscopic quantity , its distribution can be considered a delta function and the sum reduces to a single term : the free energy at the volume that minimizes the free energy . ]let us consider a particular case that illustrates the effect of temperature on the free energy , at the cost of a few reasonable assumptions .we assume that * the elastic energy of the motionless lattice is quadratic in volume ; * the high temperature limit of the free energy can be used .as shown in appendix [ anhapp ] , in this approximation , the volume expansion as a function of temperature takes on a particularly simple form : where is an average grneisen parameter : the resulting temperature dependence of the free energy is given by these expressions provide a simple way to account for thermal expansion .they also allow us to estimate the changes in vibrational entropy as a function of temperature that is due to thermal expansion : in metallic alloys , this quantity is typically of the order of .there are two main simulation - based approaches to handling anharmonicity : monte carlo ( mc ) and molecular dynamics ( md ) . 
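Before turning to those simulation-based approaches, the quasiharmonic recipe described above can be collected in a short routine: given the static lattice energy on a grid of volumes and the harmonic vibrational free energy computed from volume-dependent force constants, the total free energy is minimized over volume at each temperature, which yields both the equilibrium volume (hence the thermal expansion) and the free energy entering the phase-diagram calculation. The cubic polynomial fit used below is one simple smoothing choice, not the only one.

```python
import numpy as np

def quasiharmonic_equilibrium(volumes, e_static, f_vib, temperatures):
    """Quasiharmonic sketch: e_static[i] is the static energy at volumes[i],
    f_vib[i, j] the harmonic vibrational free energy at volumes[i] and
    temperatures[j].  At each temperature F(V, T) = E(V) + F_vib(V, T) is
    smoothed with a cubic fit and minimized over volume."""
    volumes = np.asarray(volumes, dtype=float)
    v_eq, f_eq = [], []
    for j, _T in enumerate(temperatures):
        f_tot = np.asarray(e_static, dtype=float) + np.asarray(f_vib)[:, j]
        coeff = np.polyfit(volumes, f_tot, 3)          # smooth F(V) at this T
        vg = np.linspace(volumes.min(), volumes.max(), 400)
        fg = np.polyval(coeff, vg)
        i = np.argmin(fg)
        v_eq.append(vg[i])
        f_eq.append(fg[i])
    return np.array(v_eq), np.array(f_eq)

# the volumetric thermal expansion coefficient then follows from the slope of
# V(T), e.g.  alpha_V = np.gradient(v_eq, temperatures) / v_eq
```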
while both approaches are able to model anharmonicity at any level of accuracy , they suffer from two limitations .first , they are computationally demanding and therefore have , to date , been limited to simple energy models .second , they are unable to model quantum mechanical aspects of vibrations and are therefore limited to the high temperature limit .there is an interesting and useful complementarity between the quasi - harmonic model and simulation techniques .quantum effects typically become negligible in the temperature range where strong anharmonic effects , which can not be modeled accurately within the quasiharmonic framework , become important .the use of simulation techniques to determine vibrational properties bypasses the coarse graining framework presented in section [ coarse ] : both configurational and vibrational excitations are treated on the same level . when a simple energy model provides a sufficient accuracy , one can calculate thermodynamic properties directly from mc simulations where both atomic displacements and changes in chemical species are allowed during the simulation .while a full determination of a phase diagram from simulations has , to our knowledge , not been attempted , both md ( ) and mc ( ) have been used to determine differences in vibrational free energy between two phases . because neither md nor mc are able to provide free energies directly , a special integration technique needs to be used .the idea is to express a thermodynamic quantity inaccessible to mc as an integral of a quantity that _ can _ be obtained through mc .a simple example is the change free energy as a function of temperature at constant pressure , which can be derived from the gibbs - helmholtz relation where is the internal energy .another example is the change in entropy as a function of temperature ( at constant pressure ) which can be expressed in terms of the heat capacity : often , the most computationally efficient path of integration between two states is not physically meaningful .for instance , one can gradually change the interatomic potentials during the course of the simulation , in order to model a change in the configuration of the alloy , without requiring atoms to jump between lattice sites .this task is achieved by defining an effective hamiltonian that gradually switches from the hamiltonian associated with phase to the hamiltonian associated with phase as the switching parameter goes from 0 to 1 .this convenient path of integration permits the calculation of free energy differences between phases at a reasonable computational cost , with the help of the following thermodynamic relation : where is the average of the energy calculated using hamiltonian ( and similarly for .force constants and anharmonic contributions are ultimately always derived from an energy model . 
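returning to the switching - parameter integration described above , the sketch below shows how a free energy difference could be assembled from ensemble averages of the energy difference between the two hamiltonians , sampled at several values of the switching parameter . the quadrature , the number of points and the stand - in for the monte carlo averages are assumptions made for illustration .

```python
import numpy as np

def free_energy_difference(mean_dE_at, n_lambda=11):
    """Switching-parameter (thermodynamic) integration of a free energy difference.

    mean_dE_at(lam) is assumed to return the ensemble average of E_beta - E_alpha
    sampled with the mixed hamiltonian H(lam) = (1 - lam)*H_alpha + lam*H_beta,
    e.g. from a short Monte Carlo run performed at that value of lam.
    """
    lams = np.linspace(0.0, 1.0, n_lambda)
    dE = np.array([mean_dE_at(lam) for lam in lams])
    # Trapezoidal quadrature of the integrand over lambda in [0, 1].
    return float(np.sum(0.5 * (dE[1:] + dE[:-1]) * np.diff(lams)))

# Toy stand-in for the simulation averages (a real calculation would run MC or MD
# at each lambda); the numbers are purely illustrative, in eV/atom.
toy_mean_dE = lambda lam: -0.05 + 0.02 * lam
print(f"Delta F ~ {free_energy_difference(toy_mean_dE):.4f} eV/atom")
```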
in this section , we discuss various energy models , from empirical potential models to first - principles techniques , and the error or bias they may introduce in the vibrational properties . simple pairwise potentials or functionals ( such as the embedded atom method ) are computationally efficient so that all vibrational properties can often be determined without any approximations beyond the ones associated with the specific energy model . for this reason , the use of simple energy models has proven to be an invaluable tool to understand trends in vibrational entropies and to test a number of approximations . several potential sources of error can arise when using pair potentials or pair functionals . the first one is that vibrational entropy is extremely sensitive to the precise nature of the relaxations that take place in an alloy , and a simple energy model may not be able to accurately predict these relaxations . this problem is particularly apparent when considering the wide range of values found in the different calculations of the vibrational entropy change upon disordering of the ni compound . but , as shown in table [ ni3alsv ] , most of the discrepancies can be explained by differences in the predicted volume change upon disordering . this is often aggravated by the fact that simple energy models are often not fitted to phonon properties . the problem was noted in , where the embedded atom potentials used were fitted to various structural energies and elastic constants . the acoustic modes were accurately extrapolated from the fit to the elastic constants , but the phonon frequencies associated with the optical modes were overestimated by about % . the question of the accuracy of simple energy models clearly merits further attention . in this respect , the fit of simple energy models to the results of _ ab - initio _ calculations offers a promising way to include vibrational effects . in oxides , electronic polarization has to be included in order to correctly model both the low frequency acoustic modes and the high frequency optical modes . electronic polarization in oxides can be approximated with the so - called core and shell model . while quantum mechanical methods are computationally more intensive , they generally provide more accurate force constants . the most obvious error introduced by the common local density approximation ( lda ) is its systematic underprediction of lattice constants , which leads to an overestimation of elastic constants and phonon frequencies . this systematic error makes it difficult to compare the absolute values of calculated vibrational properties with experimental measurements . however , for the purpose of calculating phase diagrams , this bias may be less of a concern , because phase stability is determined by differences in free energies , and one would expect a large part of this systematic error to cancel out . a practical way to alleviate the lda bias is to perform calculations at a negative pressure such that the calculated equilibrium volume agrees with the experimentally observed volume . as shown in , a very good estimate of the required negative pressure can be obtained by a concentration - weighted average of the pressure associated with the elemental solids .
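a minimal sketch of the concentration - weighted estimate mentioned above might look as follows ; the numerical values are placeholders , not data from the text .

```python
def lda_volume_correction_pressure(concentrations, elemental_pressures):
    # Concentration-weighted average of the (negative) pressures that bring each
    # elemental solid from its LDA equilibrium volume to its experimental volume.
    return sum(c * p for c, p in zip(concentrations, elemental_pressures))

# Hypothetical numbers (GPa) for a binary alloy with 25% of the second element.
print(lda_volume_correction_pressure([0.75, 0.25], [-4.1, -6.3]))
```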
for the purpose of calculating elastic properties , this approach appears to outperform the most popular alternative to lda , the generalized gradient approximation ( gga ) . the basic formalisms presented in sections [ alloyth ] and [ bvk ] provide two natural ways to control the trade - off between accuracy and computational requirements . in the context of alloy theory ( section [ alloyth ] ) , the range of the effective cluster interactions included in the cluster expansion controls how accurately the configurational dependence of vibrational properties is modeled . in the context of the harmonic ( or quasiharmonic ) treatment of lattice vibrations ( section [ bvk ] ) , the range of the force constants included in the born - von kármán model controls the accuracy of the calculated vibrational properties for a given configuration . in principle , any desired accuracy can be reached , given sufficient computing power , by increasing the range of the interactions in both the cluster expansion and the born - von kármán models . this section seeks to answer the important question of how far these two ranges of interactions need to be pushed in order to reach the accuracy required in a typical phase diagram calculation . the evidence that spring models including only short - range force constants are able to correctly model vibrational quantities comes from various sources . first and second nearest neighbor spring models are routinely used to fit data obtained from neutron scattering measurements of phonon dispersion curves . in the theoretical literature , there have been direct studies of the convergence as a function of the range of interaction considered . all _ ab - initio _ studies find that short - range force constants ( first or second nearest neighbor ) permit an accurate determination of thermodynamical quantities in metals and group iv semiconductors . it is important to note that this rapid convergence of most thermodynamic quantities occurs even when the pointwise convergence rate of the phonon dos is slow . as noted before , this property arises from the fact that thermodynamic quantities are averages taken over all phonon modes and errors tend to average out . in ionic systems , the presence of long - range electrostatic interactions may require long - range force constants . however , this electrostatic effect can easily be modeled using pair interactions at a moderate computational cost . once the forces predicted from a simple electrostatic model have been subtracted , the residual forces should be parameterizable with a short - range spring model . some of the ab - initio studies of convergence have suggested additional simplifications to force constant tensors : instead of attempting to compute all force constants in each tensor , it is possible to obtain reliable results by keeping only the largest terms . we now present a hierarchy of approximations that is a formalization of these findings . to obtain a more intuitive representation of a given force constant tensor , we express it in a basis such that the first cartesian axis is aligned along the line joining atom and . the second axis is then taken along the highest symmetry direction orthogonal to the first axis while the third axis is chosen so as to obtain a right - handed orthogonal coordinate system . in the absence of symmetry , the most general force constant tensor has 9 independent elements . the first simplification is to neglect the three body terms in the harmonic model of the energy ( _ e.g.
_ with ) . physically , such terms arise from the deformation of the electronic cloud surrounding atom that is caused by moving atom and that affects the force acting on atom . clearly , for any force constant other than the nearest neighbor , this effect is negligibly small . even for nearest neighbor tensors , it is the most natural contribution to neglect first . it can readily be shown that , in a solid consisting only of pairwise harmonic interactions , the tensor associated with a pair of atoms is symmetric : ( this constraint is distinct from the conventional constraint : ) the elements of the force constant tensor can be ranked in decreasing order of expected magnitude based on three simple assumptions : 1 . force constants associated with stretching a bond are larger than the ones associated with bending it . 2 . terms relating orthogonal forces and displacements are smaller than those relating parallel forces and displacements . 3 . in the plane perpendicular to the bond , the anisotropy in the force constants is smaller than the magnitude of the force constants themselves . we then obtain a definite ranking of the tensor elements . this hierarchy of force constants is important to keep in mind , given that the off - diagonal elements of the spring tensors are the most difficult to obtain from supercell calculations , requiring much bigger supercells than diagonal elements . there is evidence that even keeping only the stretching and isotropic bending terms of the nearest neighbor spring tensor can provide vibrational entropies with an accuracy of about . if this observation turns out to be generally applicable , this offers a simple way to account for vibrational effects in phase diagram calculations . if a cluster expansion of the vibrational free energy only requires a small number of eci to accurately model the configurational - dependence of the vibrational free energy , it then becomes practical to determine the values of these eci from a small number of very accurate calculations of the vibrational free energy of a few structures . the issue of the speed of convergence of the cluster expansion is also related to the task of devising efficient ways to compute vibrational properties of disordered alloys : the faster the cluster expansion converges , the easier it is to model a disordered phase ( see appendix [ appdisord ] ) . the calculation of the vibrational entropy change upon disordering has proven to be a very effective way to assess the importance of lattice vibrations , since this quantity can be straightforwardly used to estimate the effect of lattice vibrations on transition temperatures with the help of equation ( [ tcshift ] ) . the central question is thus whether the cluster expansion of the vibrational free energy converges quickly with respect to the number of eci . this is a question distinct from the range of force constants needed to obtain accurate vibrational properties . the range of eci needed to represent the configurational dependence of vibrational free energy may very well exceed the range of the force constants . even in simple born - von kármán model systems , there is no direct correspondence between eci and force constants , except in special cases ( see section [ bpmodel ] ) . once relaxations are introduced in the model , then all hope of a simple correspondence is lost .
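to make the role of the stretching and bending terms concrete , the sketch below builds a nearest neighbor force constant tensor that keeps only those two terms , expressed first in the bond - aligned frame and then rotated to cartesian coordinates . the function name and the numerical values are illustrative assumptions , not quantities taken from the text .

```python
import numpy as np

def bond_force_constant_tensor(bond_vector, stretching, bending):
    """Nearest neighbor spring tensor keeping only a stretching and an isotropic
    bending term.

    In the bond-aligned frame the tensor is diag(stretching, bending, bending);
    projecting on the bond direction expresses it directly in cartesian axes.
    """
    u = np.asarray(bond_vector, dtype=float)
    u /= np.linalg.norm(u)
    P = np.outer(u, u)                          # projector along the bond direction
    return stretching * P + bending * (np.eye(3) - P)

# Illustrative stiffnesses (eV/A^2) for a bond along the [110] direction.
print(bond_force_constant_tensor([1.0, 1.0, 0.0], stretching=3.2, bending=0.4))
```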
in this context , the question of the existence of a rapidly converging cluster expansion of vibrational properties has to be answered through numerical experiments .simple energy models offer the possibility to test , at a reasonable computational cost , the speed of convergence of a cluster expansion .explicit calculations of a well converged cluster expansion of vibrational entropy in a lennard - jones solid have indicated that a small number of eci ( 9 ) can provide a good accuracy ( ) .other benchmarks of the speed of convergence , based on studies of disordered alloys , also indicate that concise and accurate cluster expansions are possible .experiments that seek to link features of projected phonon dos to the local chemical environment of the atoms suggest that short - range eci should be able to successfully model vibrational entropy differences .one potential source of concern is the difficulty associated with accounting for the size mismatch effect using a short - range eci . in the context of cluster expansions of the energy , relaxations of the atoms away from their ideal lattice site as a result of size mismatch are known to introduce both non negligible long range pair eci and numerous multiplet eci .a cluster expansion of the vibrational free energy is expected to exhibit the same problems .all the full phonon dos ab - initio calculations of vibrational entropies in alloy systems performed so far have relied on the rapid convergence of the cluster expansion .while efforts to quantify the error introduced by truncating the cluster expansion in ab - initio calculations have been made , the issue of the speed of convergence of the cluster expansion in the context of vibrational properties clearly merits further study , especially in light of the importance of the size mismatch effect .the experimental literature on the thermodynamics of lattice vibrations in alloys relies on mainly three techniques . in _differential calorimetry _measurements , the heat capacity of two samples in a different state of order is compared over a range of temperatures .if the upper limit of the range of temperatures is chosen to be sufficiently low , substitutional exchanges will not occur and the difference in heat capacity can be assumed to arise solely from vibrational effects .integration of the difference in heat capacity ( divided by temperature ) then yields a direct measure of the vibrational entropy differences between the two samples of the range of temperature considered .this , of course , assumes that the lower temperature bound is sufficiently low , so that the vibrational entropy of both samples can be assumed to be zero at that temperature .it also assumes that the electronic contribution to the heat capacity is negligible . in practice ,both assumptions are typically satisfied .the main problem with this method is that one is usually interested in vibrational entropy differences at the transition temperature of the alloy , which is usually above the upper limit of the temperature range used in the heat capacity measurements .the heat capacity therefore needs to be extrapolated to high temperature .this constitutes the main source of inaccuracies in this method .examples of the use of this method can be found in .a second method is the measurement of phonon dispersion curve through _ inelastic neutron scattering_. 
for ordered alloys that can be produced in large single crystals , this method is very powerful .once the dispersion curves along special directions in reciprocal space are measured , they can be used to fit born - von krmn spring models which , in turn , yield the normal frequencies for any point in the brillouin zone . with the help of the standard statistical mechanics techniques described in section [ bvk] , this information is sufficient to determine the vibrational entropy .examples of applications of this method can be found in .the applicability of this method is unfortunately limited by the availability of large single crystals .the case of disordered alloys presents an even more fundamental problem : disordered alloys do not have well defined dispersion curves and there is no straightforward way to fit the spring constants of a spring model from the experimental data .this problem is usually addressed by using the virtual crystal approximation , in which different constituent atoms are replaced by one `` average '' type of atom ( see appendix [ appdisord ] ) .unfortunately , this approximation has repeatedly been shown to have a very limited accuracy for the purpose of measuring vibrational entropy differences .nevertheless , single crystal phonon dispersion curve measurements for ordered alloys present a unique opportunity to perform a stringent test of the accuracy of theoretical models .a third method is the determination of the phonon density of states from _ incoherent neutron scattering _ measurements .in contrast to the preceding approach , this method can readily be applied to disordered systems and to compounds for which single crystals are not available .the main limitation of this approach is that different atomic species have different neutron scattering cross - sections .the scattered intensity at each frequency measures a `` density of states '' , where each mode is weighted by the scattering intensity of the atoms participating in the mode in question .thus , one needs some prior information about the vibrational modes in order to reconstruct the true phonon dos from the experimental data . 
in the case of alloys, there is not a one - to - one correspondence between the measured data and the vibrational entropy .this problem can be alleviated by choosing alloy systems where the scattering intensity of each species is similar .other techniques have been used to measure vibrational entropy differences .some researchers have used the fact that vibrational entropy and thermal expansion are directly related , to estimate vibrational entropy differences from accurate thermal expansion measurements .the measurement of inelastic nuclear resonant scattering spectrum has also been used to relate changes in the phonon dos to changes in the short - range order of a disordered alloy .finally , relatively noisy estimates of vibrational entropy differences can be obtained from x - ray debye - waller factors or from the measurement of mean square relative displacement ( msrd ) of the atoms relative to their neighbors through extended electron energy - loss fine structure ( exelfs ) .while the ability to control the level of approximation discussed in the previous section is extremely useful , there remains the problem that , very often , only considering the first few levels in this hierarchy of approximations already involves substantial computational requirements .for this reason , models of lattice vibrations that involve fewer parameters but more physical intuition may provide a practical mean of including vibrational effects in phase diagram calculations . in this section, we will present the advantages and weaknesses of each method , in light of the three fundamental mechanisms described in the section [ secmecha ] .there have been many attempts ( see , for instance , ) to find ways to express the relationship between the vibrational free energy and the dynamical matrix in a form that illustrates the intuition behind the `` bond proportion '' mechanism . in a variety of simple model systems ,a convenient exact expression can be derived for the nearest neighbor eci in the expansion of the vibrational free energy in the high temperature limit . for simple nearest - neighbor spring models with central forces in linear chains ,square lattices or simple cubic lattices , the nearest neighbor eci is given by where , and are , respectively , the spring constants associated with , and bonds and is the dimensionality of the system .it has been noted , on the basis of numerical experiments , that the same expression performs well for other lattices .this success arises from the fact that , as shown in appendix [ anal1nn ] , equation ( [ eci1nn ] ) is the first order approximation to the true vibrational entropy change in a large class of systems which satisfies the following assumptions : * the high temperature limit of the vibrational entropy is appropriate ; * the nearest - neighbor force constants can be written as where denotes the ( scalar ) stiffness of the spring connecting sites and with occupations and while the are dimensionless spring constant tensors .the are assumed equivalent under a symmetry operation of the space group of the parent lattice ; * all force constants are such that equation ( [ eci1nn ] ) applies to simple harmonic models with nearest neighbor springs on the fcc , bcc or sc primitive lattices ( and , approximately , on the hcp lattice ) , as long as the above assumptions are satisfied . both stretching and bending terms are allowed in the spring tensors , as long as their relative magnitude is independent of ( _ e.g. 
_ when the bending terms are always , say , 10% of the corresponding stretching term , regardless of the magnitude of the stretching term ) . if force constants , other than bending or stretching terms , are important , the bond proportion model ceases to be valid . this can be seen by the following argument . the `` bond proportion '' picture requires every bond of a certain type ( for instance , all a - b bonds ) to have an identical spring tensor . however , the point symmetry of each bond can be different and similar chemical bonds in different environments face different symmetry - induced constraints on their spring tensors . the only way to reconcile these observations is to use a spring tensor that is compatible with the highest possible symmetry , ensuring that it is also compatible with any other environment with a lower symmetry . with the highest possible symmetry , only two independent terms remain in the spring tensor : the stretching and bending terms . equation ( [ eci1nn ] ) embodies the essential intuition behind the effect of the alloy's state of order on its vibrational free energy . when one replaces an a - a bond and a b - b bond by two a - b bonds , the vibrational free energy will decrease only if the stiffness of the a - b bonds exceeds the geometrical average stiffness of the bonds between identical species . this observation allows the determination of the expected effect of vibrations on the shape of the phase diagram by simple arguments . the form of the nearest neighbor eci of the expansion of the vibrational entropy can be summarized by the expression : where the `` '' and `` '' correspond to ordering and segregating systems , respectively , and where is a dimensionless parameter that only depends on the lattice type and the ordering tendency of the system ( for instance , for fcc , in ordering systems and in segregating systems , while for bcc , in both cases ) . it is straightforward to include vibrational effects in phase diagram calculations using the `` bond proportion '' model . all that is needed is an estimate of the stiffness of a - a , a - b and b - b bonds , which could come , for instance , from supercell calculations of the nearest neighbor force constants in a few simple structures or from the bulk moduli of the pure elements and one ordered compound . the nearest neighbor eci then obtained can be simply added to the cluster expansion of the energy . while equation ( [ eci1nn ] ) is useful to estimate the importance of the `` bond proportion '' mechanism in a given system , one can avoid some of the approximations involved in deriving equation ( [ eci1nn ] ) at the expense of only a modest amount of additional effort . one can find the exact phonon dos of the nearest neighbor born - von kármán model for a variety of configurations of the alloy , which allows a more accurate cluster expansion of the vibrational energy to be derived .
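the intuition behind the `` bond proportion '' argument can be sketched as follows . since the text leaves the lattice - dependent prefactor of the eci symbolic , only the dimensionless logarithmic factor , whose sign decides whether ordering raises or lowers the vibrational entropy , is computed here ; the stiffness values are illustrative assumptions .

```python
import numpy as np

def bond_proportion_factor(s_aa, s_ab, s_bb):
    """Dimensionless factor controlling the 'bond proportion' effect.

    Its sign tells whether replacing an a-a and a b-b bond by two a-b bonds
    stiffens (positive) or softens (negative) the lattice; the lattice-dependent
    prefactor of the corresponding ECI is left out because it is system specific.
    """
    return np.log(s_ab / np.sqrt(s_aa * s_bb))

# Illustrative stiffnesses: the unlike bond is stiffer than the geometric mean
# of the like bonds, so ordering lowers the vibrational entropy.
print(bond_proportion_factor(s_aa=2.0, s_ab=2.6, s_bb=2.4))
```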
in this fashion , the condition specified in equation ( [ cond1o ] ) is no longer needed and the vibrational entropy can be calculated at any temperature . it is important to keep in mind that two important assumptions are made when invoking the `` bond proportion '' mechanism . first , vibrational entropies are solely determined by the nearest neighbor force constants . there is theoretical evidence that nearest neighbor spring models can predict vibrational entropy differences with an accuracy of about in metallic and semiconductor systems . given that configurational entropy differences are typically of the order of , this precision should be sufficient for practical phase diagram calculations . the second assumption is that each type of chemical bond is assumed to have an intrinsic stiffness that is independent of its environment . first - principles calculations on the li - al and on the pd - v system unfortunately indicate that the stiffness of a chemical bond does change substantially as a function of its environment . this problem is serious , as it considerably limits the applicability of the `` bond proportion '' model . these changes of the intrinsic stiffness of the bonds as a function of their environment are precisely the focus of the two other suggested sources of vibrational entropy changes . in summary , while the `` bond proportion '' model gives an elegant description of one of the mechanisms suggested to be at the origin of vibrational entropy differences , it completely ignores the two other mechanisms , namely , the volume and size mismatch effects . perhaps the most widespread approximation to the phonon dos is the debye model ( see , for instance , ) , where the phonon problem is solved in the acoustic limit . in this case , the phonon dos is approximated by : where and is the debye temperature , given by : where is the debye sound velocity , defined by where the right - hand side is the directional average of a function of the three sound velocities . the free energy of a debye solid is given by : where the debye function is given by . since the debye sound velocity is a complicated function of all elastic constants of the material , an approximation to the debye temperature that only involves the bulk modulus proves extremely useful . such an approximation was derived by moruzzi , janak and schwarz ( mjs ) for cubic materials : where is the average atomic volume , is the bulk modulus and is the concentration weighted arithmetic mean of the atomic masses . as noted in , in the high temperature limit , the mjs model does not exhibit the required property that the masses have no effect on the vibrational free energy of formation , although using a geometric average of the masses fixes this problem . the quasiharmonic approximation can be used , within debye theory , to account for mild anharmonicity . in the so - called debye - grüneisen approximation , the volume - dependence of the phonon dos is modeled by a single grüneisen parameter and the effect of volume can be summarized by simply making the debye temperature volume - dependent : where is the debye temperature at volume and is the grüneisen parameter . despite its inaccurate description of the true phonon dos at high frequencies , the debye and debye - grüneisen models are quite successful at modeling the changes in vibrational properties of a given compound as a function of temperature .
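for reference , a minimal sketch of the debye free energy obtained by numerical integration of the standard debye function is given below ; the debye temperature used is an arbitrary placeholder and the expression is the textbook form rather than the specific one used in the text .

```python
import numpy as np
from scipy.integrate import quad

KB = 8.617e-5  # Boltzmann constant (eV/K)

def debye_function(x):
    # D(x) = (3/x^3) * integral_0^x t^3 / (exp(t) - 1) dt
    integral, _ = quad(lambda t: t ** 3 / np.expm1(t), 0.0, x)
    return 3.0 * integral / x ** 3

def debye_free_energy(T, theta_d):
    # Vibrational free energy per atom of a Debye solid (zero-point term included),
    # in the standard textbook form.
    x = theta_d / T
    return KB * (9.0 * theta_d / 8.0
                 + T * (3.0 * np.log(-np.expm1(-x)) - debye_function(x)))

for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K   F_vib = {debye_free_energy(T, theta_d=400.0):+.4f} eV/atom")
```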
for instance , the thermal properties of pure metals calculated in mjs approximation are surprisingly accurate .the reason for this success is that most thermodynamic quantities ( _ e.g. _ free energy , entropy , heat capacity , etc . )exhibit their most dramatic variations at low temperature , where the low frequency phonon modes that are correctly described by the debye model have a dominant effect . in the high temperature regime ,thermodynamic quantities are determined by the classical equipartition theorem , and any harmonic model gives the correct behavior .debye - like models are expected to perform well in systems where the differences in vibrational free energy between compounds can be explained by uniform shifts of the phonon dos , such as when the volume effect operates alone .such a behavior has been observed in embedded atom calculations on the ni al system but in no other systems so far .the mjs approximation has been used to include vibrational effects in phase diagram calculations and has resulted in an improved agreement with experimental results . however , as shown in , the debye approximation and its successors can have significant shortcomings when used to calculate phase diagrams .a significant part of the vibrational free energy differences between different compounds arises from changes in the high frequency portion of the phonon dos , which debye - like models describe incorrectly . in some cases ,the mjs approximation can even lead to an incorrect prediction of the sign of the vibrational entropy difference . in summary ,the debye model and its derivatives capture the essential physics behind only one of the advocated mechanisms responsible for the configurational dependence of vibrational free energy : the volume effect .approximations based on the debye model , however , fail to account for the possibility that the state of order also has a direct impact ( _ i.e. 
_ not through the volume ) on the shape of the phonon dos ( as predicted , for instance , by the `` bond proportion '' model ) .the perfect complement to the debye model is the einstein model ( see , for instance , ) , which focuses on the high frequency region of the phonon dos , instead of its low frequency region .the einstein model assumes that a crystalline solid can be modelled by a collection of independent harmonic oscillators ( 3 per atom ) sharing a common frequency .this frequency can , for instance , be determined by computing the natural frequency of oscillation of one atom when all others are frozen in place .this approach , known as the local harmonic model , has proven especially useful to calculate vibrational entropies associated with defects .the einstein model can also be combined with a debye model to better fit experimental calorimetry data or thermal expansion data .the local harmonic model is of little use whenever the system of interest exhibits translational symmetry , because the calculations required to determine the unknown parameters of an einstein model from first - principles directly provide force constants .the latter could be used to obtain a more precise description of the dos rather than the single - value dos characterizing the einstein model .the einstein model is nevertheless extremely useful for conceptual purposes , as we will now illustrate .as shown in appendix [ einstein ] , the vibrational free energy of a system is bounded by above and by below by the free energy of two einstein - like systems : while the upper bound is obtained from the usual local harmonic model , where surrounding atoms do not relax , the lower bound is obtained when the surrounding atoms are allowed to relax freely .another way to interpret these bounds is that at one extreme , each atom sees the others as having an infinite mass , while at the other extreme , each atom sees the other atoms as being massless .this result supports the view that vibrational free energy can be meaningfully considered as a measure of the average stiffness of each atom s local environment . a more rigorous way of defining the contribution of an atom to the total vibrational free energy is the use of the projected dos ( see , for instance , ) .this approach does not in any way simplify the calculation of vibrational properties , because the full phonon dos is needed as an input , but it is a useful way to interpret the experimentally measured or calculated phonon dos . to obtain the contribution of atom to the dos , the idea is to weight each normal mode by the magnitude of the vibration of atom : where is the eigenvector ( normalized to unit length ) associated with the mode of frequency . since extensive thermodynamic properties are linear in the dos, atom - specific local thermodynamic properties can be readily defined from the projected dos .note that , by construction , all the projected dos sum up to the true phonon dos and thus , all the local extensive thermodynamic quantities sum up to the corresponding total quantity . in _ ab - initio _ calculations ,most of the computational burden comes from the calculation of the force constant tensors .it would thus be extremely helpful if the force constants determined in one structure could be used to predict force constants in another structure . 
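before turning to the question of transferring force constants between structures , here is a minimal sketch of the projected dos defined above , assuming that the mode frequencies and unit - length eigenvectors are already available ; the gaussian broadening and the array layout are arbitrary choices made for illustration .

```python
import numpy as np

def projected_dos(frequencies, eigenvectors, atom_index, grid, sigma=0.2):
    """Site-projected phonon DOS.

    Each mode is weighted by the squared norm of the three components of its
    unit-length eigenvector that belong to `atom_index`, and the delta functions
    are broadened into Gaussians of width `sigma`. `eigenvectors` is assumed to
    have shape (n_modes, 3 * n_atoms), one eigenvector per row.
    """
    block = eigenvectors[:, 3 * atom_index:3 * atom_index + 3]
    weights = np.sum(np.abs(block) ** 2, axis=1)          # one weight per mode
    gauss = np.exp(-0.5 * ((grid[:, None] - frequencies[None, :]) / sigma) ** 2)
    gauss /= sigma * np.sqrt(2.0 * np.pi)
    return gauss @ weights                                # projected dos on the grid

# Tiny fabricated example: the 6 modes of a 2-atom cell (frequencies in THz).
freqs = np.array([2.0, 3.0, 3.5, 6.0, 7.0, 7.5])
vecs = np.linalg.qr(np.random.default_rng(0).normal(size=(6, 6)))[0]  # orthogonal, unit-length rows
grid = np.linspace(0.0, 10.0, 201)
# Integrates to ~3: atom 0 carries three of the six degrees of freedom.
print(projected_dos(freqs, vecs, atom_index=0, grid=grid).sum() * (grid[1] - grid[0]))
```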
from the failures of the `` bond proportion '' model , however , we know that force constants obtained from one structure are not directly transferable to another structure . nevertheless , a simple modification of the transferable force constant approach yields substantial improvements in precision . first - principles calculations on the pd - v system revealed that most of the variation in the stiffness of a given chemical bond across different structures can be explained by changes in bond length . this suggests that the transferable quantity to consider is a `` bond stiffness vs. bond length '' relationship . as a first approximation , a linear relationship can be used where and denote the stretching and isotropic bending terms , respectively , and where and describe the stiffness of the bond at its ideal length while are analogous to bond - specific grüneisen parameters . the other parameters of the spring tensor are unlikely to follow such simple relationships because they may be required to vanish according to the local symmetry of the bond , independently of its length ( this is discussed in more detail in appendix [ anal1nn ] ) . this approximation was shown to be successful in the pd - v system . figure [ fitpp ] illustrates the ability of this simple model to predict bond stiffness in different structures . a similar analysis performed with the data on the ni - al system from reference is shown in fig . [ nialfkall ] . table [ svslpred ] compares the predictions obtained from the `` bond stiffness vs. bond length '' model with more accurate calculations . there are numerous advantages to this approach . from a conceptual point of view , this model presents a concise way to represent all three mechanisms suggested to be the source of vibrational entropy differences . the `` bond proportion '' mechanism is the particular case obtained when only small changes in bond length occur . the volume effect results from expanding all bonds by the same factor . the size mismatch effect ( or the `` stiff sphere '' picture ) is also modeled since the local changes in stiffness resulting from locally compressed or expanded regions are explicitly taken into account . a straightforward way to represent the source of vibrational changes is to overlap the stiffness vs. length relationship and the changes in average length and stiffness in different states of order , as shown in fig . [ shiftstiff ] . a second advantage of this method is computational efficiency . the unknown parameters of the model can be determined by a small number of supercell or linear response calculations . after that , the knowledge of the relaxed geometry of a structure is sufficient to determine the stiffness of all chemical bonds . finding the vibrational entropy of the structure then just reduces to a computationally inexpensive born - von kármán phonon problem . it is important to note that the knowledge of the relaxed geometries of a set of structures is a natural by - product of first - principles calculations of structural energies , which are needed to construct the cluster expansion of the energy in phase diagram calculations , whether vibrational effects are included or not . since computational requirements do not grow rapidly with the number of structures considered , this opens the way for a much more accurate representation of the configurational - dependence of the vibrational free energy . a third advantage of transferable bond stiffness vs.
bond length relationships is that they contain all the information needed to account for thermal expansion as well , within the quasi - harmonic approximation .the slopes of the stiffness vs. length relationships for each chemical bonds explicitly defines the changes in phonon frequencies as volume changes .since the bulk modulus of each structure is also a by - product of structural energy calculations , all the ingredients needed for a quasi - harmonic treatment of thermal expansion are available .lattice vibrations can have a significant impact on phase transition temperatures , short - range order , solubility limits , and the sequence in which phases appear as a function of temperature .the standard framework of alloy theory can be straightforwardly extended to account for lattice vibrations using the concept of coarse graining of the partition function .once the degrees of freedom associated with lattice vibrations are integrated out , one is left with a standard ising model , where the energy of each spin configuration is replaced by its vibrational free energy . the efficient evaluation of the vibrational free energy of each configuration is the main problem limiting the inclusion of lattice vibrations in phase diagram calculations .a number of investigations have sought to assess the importance of vibrational effects on phase stability , in order to ensure that the efforts involved in computing vibrational properties are justified .the conclusion of the most reliable of these studies is that vibrational entropy differences are typically on the order of to , which is comparable to the magnitude of configurational entropy differences ( at most in binary alloys ) , thereby indicating that vibrations have a nonnegligible impact .the calculation of the vibrational free energy of a particular configuration of the alloy reduces to the well known phonon problem in crystals . while the standard harmonic treatment of this problem lacks the ability to model thermal expansion , which can have a significant impact on thermodynamic properties in alloys , this limitation is easily overcome with the help of the quasiharmonic model .an exact solution to the phonon problem for all possible configurations requires excessive computing power .however , the tradeoff between accuracy and computational requirements can be controlled in two ways , namely through the selection of the range of force constants in the born - von krmn model , and through a choice of the number of eci used to expand the configuration dependence of the vibrational free energy . while there is evidence that the range of force constants can be kept very small ( first nearest neighbor springs ) , the configurational dependence of the vibrational free energy is too complex to permit a drastic reduction in the number of eci .given the substantial computing power required to undertake lattice dynamics calculations , many attempts have been made to devise simpler models .for many years , the mjs approximation appeared to be a very promising way to include vibrational effects in phase diagram calculations , because it systematically improved the agreement between first - principles calculations and experimental measurements .this success may have been the result of two fortunate circumstances ( i ) first - principles phase diagram calculations typically overestimate transitions temperature and ( ii ) the mjs approximation nearly always yields a downward correction to the transition temperature . 
as the accuracy of phase diagram calculations improved through the use of longer - range cluster expansions , the systematic bias in the calculated transition temperatures substantially decreased . simultaneously , more sophisticated models of lattice vibrations indicated that lattice vibrations do not always result in a reduction in the transition temperatures . the net effect of these two trends is that , although the accuracy of first - principles calculations has increased over the years , obtaining improved agreement with experiment is now a much more stringent test . as a result , perfectly valid and accurate calculations of vibrational effects sometimes reduce the agreement with experiments . hence , before one can unambiguously assess the importance of lattice vibrations through a full phase diagram calculation , all potential sources of error have to be carefully controlled , such as the precision of the energy model used and , more importantly , the accuracy of the cluster expansion . to date , the most convincing evidence that taking lattice vibrations into account significantly improves agreement with experimental results comes from calculations of the lattice dynamics associated with a specific atomic configuration ( _ e.g. _ a given compound or an isolated point defect ) . in these settings , most sources of errors are under control and definite answers can be given . although the availability of more accurate computational tools has revealed that the trends in vibrational entropy differences between phases are far more complex than anticipated ten years ago , a simple picture of the mechanisms at work is now emerging . all the known sources of vibrational entropy differences can be conveniently summarized by the `` bond stiffness vs. bond length '' model . in this picture , each type of chemical bond is characterized by a length - dependent spring constant . changes in vibrational entropy can originate from both changes in the proportion of each chemical bond and changes in their lengths as a result of local and global relaxations . this model not only provides an intuitive understanding of lattice vibrations in alloys , but also a practical way of including their effects in phase diagram calculations . this stiffness vs. length relationship of each type of chemical bond can be inferred from a small number of lattice dynamics calculations . the vibrational properties of any configuration can then be obtained at a very low computational cost from the knowledge of the equilibrium geometry of this configuration , information that is already a natural by - product of any phase diagram calculation . future investigations of the effect of lattice vibrations on phase stability should head towards three main directions . 1 . while reporting error bars is an important part of any experimentalist's work , theorists should devote significantly more effort to quantifying the uncertainties of their calculations . this would make it possible to clearly identify situations where the improved agreement with experimental results following the inclusion of vibrational effects is truly significant or merely the result of fortunate coincidences . it is admittedly difficult to quantify the errors introduced by the energy model ( such as the lda ) , but standard statistical techniques can clearly be used to quantify any error due to fitting the _ ab initio _ data with a simple model . 2 .
given the difficulty of extracting vibrational entropies from experimental data , theorists should undertake the computation of quantities that _ can _ be directly measured . for instance , a born - von kármán model directly enables the simulation of incoherent neutron scattering data , while the inverse procedure is a highly non - unique operation . the calculation of thermal expansion coefficients would also be a very sensitive test . 3 . there have so far been very few accurate phase diagram calculations that include the effect of lattice vibrations . the main limitation remains the determination of a cluster expansion that accurately models the configurational dependence of vibrational free energy . the `` bond length vs. bond stiffness '' model should prove to be an extremely useful tool in achieving this goal . although this approximation has been very successful in all systems to which it has been applied , the confirmation of its validity in a wider range of systems is crucial . it would also be interesting to devise a hierarchy of increasingly accurate approximations that would include the `` bond length vs. bond stiffness '' model as a particular case . this work was supported by the u.s . department of energy , office of basic energy sciences , under contract no . de - f502 - 96er 45571 . gerbrand ceder acknowledges support of union minière through a faculty development chair . axel van de walle acknowledges support of the national science foundation under program dmr-0080766 during his stay at northwestern university . this appendix shows that the vibrational entropy of formation is independent of the atomic masses in the high temperature limit , as several authors have noted . in the high - temperature limit , the vibrational entropy is determined by the product of the frequencies of all normal modes of vibration , which can be related to the eigenvalues of the dynamical matrix of the system ( up to a constant ) : using the properties of determinants , we can write : where is the diagonal matrix of all the atomic masses of the system ( each repeated three times ) while is the matrix obtained by regrouping all the force constant tensors in a single matrix ( analogously to equation ( [ bigdynmat ] ) ) . now consider the change in the value of when atoms of type and atoms of type are combined to form an alloy . let the subscripts , and respectively denote the properties of an alloy , a pure crystal of element and a pure crystal of element . all the terms involving masses exactly cancel one another . the material presented in this appendix combines standard results regarding the grüneisen framework that can be found , for instance , in . two assumptions are made . first , the elastic energy of the motionless lattice is assumed quadratic in volume : where is the bulk modulus , the equilibrium volume at ( ignoring zero - point motion ) and . second , the high temperature limit of the vibrational free energy is used : in this approximation , the volume - dependence of takes on a particularly simple form : where is an average grüneisen parameter . in the high temperature limit , an average grüneisen parameter can easily be defined , because the population of the phonon modes is no longer temperature - dependent , and any change in entropy can be unambiguously attributed to shifts in phonon frequencies .
at lower temperatures ,the changes in phonon population would need to be accounted for as well .if we assume that the volume - dependence of the vibrational free energy is linear in volume , we have : minimizing this expression with respect to yields : where is the coefficient of volumetric thermal expansion . the resulting temperature dependence of the free energy ( for one given configuration ) is given by it is interesting to note that half of the vibrational free energy decrease due to thermal expansion is canceled by the energy increase of the motionless lattice .hence , vibrational entropy differences originating from differences in thermal expansion between phases have , relative to other sources of vibrational entropy changes , half the effect on phase stability .although in phase diagram calculations , the use of the cluster expansion bypasses the problem of directly calculating the vibrational entropy of a disordered phase , there are cases where it is of interest to directly calculate the vibrational properties of the disordered state .for instance , in studies that seek to assess the importance of lattice vibrations , it is instructive to compute the vibrational entropy change upon disordering an alloy , since this quantity can be straightforwardly used to estimate the effect of lattice vibrations on transition temperatures with the help of equation ( [ tcshift ] ) . hereare the most common methods used to model the disordered state .perhaps the most obvious and brute force approach to modeling the disordered state is the use of a large supercell where the occupation of each site is chosen at random .this approach was chosen in all eam calculations as well as in other investigations .unfortunately , it is generally not feasible in the case of _ ab initio _ calculations .the virtual crystal approximation ( vca ) consists of replacing each atom in a disordered alloy by an `` average '' atom whose properties are determined by a concentration weighted average of the properties of the constituents . 
in the limit where the chemical species have nearly identical properties , this approximation is justified . this model has been commonly used to interpret neutron scattering measurements of phonon dispersion curves in the case of disordered alloys . it has also been used in some theoretical investigations . however , the virtual crystal approximation has been repeatedly shown to be insufficiently accurate for the purpose of calculating differences in vibrational entropies between distinct compounds . its weaknesses are numerous : it is unable to model `` bond proportion '' effects , volume effects and local relaxations . it also fails to give a mass - independent high temperature limit . special quasirandom structures ( sqs ) combine the idea of cluster expansion with the use of supercells . sqs are the periodic structures that best approximate the disordered state in a unit cell of a given size . the quality of an sqs is described by the number of its correlations that match the ones of the true disordered state . there is thus a direct connection between cluster expansions and sqs : if there exists a truncated cluster expansion that is able to predict properties of the disordered state , there exists an sqs that provides an equally accurate description of the disordered state . sqs have been very successfully used to obtain electronic and thermodynamic properties of disordered materials ( see , for example , ) . the accuracy of the sqs approach in the context of phonon calculations has been benchmarked using embedded atom potentials , which allow for comparison with the `` exact '' vibrational entropy of the disordered state obtained with a large supercell . it has been found that , for the purpose of calculating vibrational properties , an sqs having only 8 atoms in its unit cell already provides a good approximation of the disordered state in the case of an fcc alloy at concentration . while the performance of this small sqs is remarkable in a model system where local relaxations are disallowed , it tends to degrade somewhat when relaxations are allowed to take place . this effect can naturally be explained by the fact that relaxations are known to introduce important multibody terms in the cluster expansion , which translates into the requirement that the sqs must correctly reproduce the corresponding multibody correlations . the success of small sqs opened the way for the use of more accurate energy models to calculate vibrational properties of disordered alloys . sqs have been applied to the _ ab - initio _ calculation of vibrational entropy in disordered ni and pd alloys . in the einstein model of a solid , the free energy , in the high temperature limit , is given by where and are , respectively , the dynamical matrix and force constant matrix of the system while is the matrix of the masses : it can be shown that for any positive definite matrix implying that where the right - hand side expression is nothing but the free energy of the system in the einstein approximation . a lower bound can be obtained by a similar technique , by using the inverse of the force constant matrix . the interpretation of the inverse is simple : it is the matrix that maps forces exerted on the atoms to the resulting displacements of the atoms .
is related to the oscillation frequency of a single atom when all other atoms are held in place , is related to the oscillation frequency of an atom when all surrounding atoms are allowed to relax so that the force exerted on them remains zero as atom moves .atom has mass while all other atoms are considered massless and relax instantaneously .atoms located infinitely far away from atom are held in place with an infinitesimal force . in conclusion ,the free energy of a system is bounded by above and by below by the free energy of two einstein - like systems : appendix generalizes the results found in in order to handle more general lattice types .we show that , in an important class of systems , the bond proportion model is in fact the first order approximation to the true change in vibrational entropy induced by a change in the proportion of the different types of chemical bonds .the alloy system is assumed to satisfy the following conditions : * the high temperature limit is appropriate ; * the nearest - neighbor force constants can be written as where denotes the ( scalar ) stiffness of the spring connecting sites and with occupations and while the are dimensionless spring constant tensors .the are assumed equivalent under a symmetry operation of the space group of the parent lattice ; * all force constants are such that consider a -dimensional solid made of atoms connected by springs of characterized by symmetrically equivalent tensors . without loss of generality , the masses of all atoms are set to unity since the formation entropies in the high temperature limit are independent of the atomic masses ( see appendix [ masscancel ] ) . in the high temperature limit, the vibrational free energy per atom is given by : where the sum is taken over the nonzero eigenvalues of the dynamical matrix of the system .( the zero eigenvalues correspond the modes where the whole crystal moves rigidly . in the thermodynamic limit , these few missing degrees of freedom are inconsequential . )because all springs in the system are equivalent to each other , matrix can be written as where is a matrix of dimensionless geometrical factors independent of but specific to the type of lattice . from this expression of , it follows naturally that eigenvectors of are independent of and that its eigenvalues can be written as where the are geometric factors independent of .consider what happens to when the stiffness of one of the springs is changed from to .let denote the corresponding change in matrix .to the first order , the resulting changes in the eigenvalues are given by : where is the ( dimensionless ) eigenvector of associated with eigenvalue . since is linear in the spring constants , we can write where is matrix of geometrical factors independent of and but specific to the type of lattice . while matrix also depends on which spring is being modified , the matrices corresponding to each spring are equivalent under a symmetry operation of the crystal s space group .the changes in the eigenvalues can then be expressed as : where is a dimensionless number independent of and . substituting these results into equation ( [ svib ] ) ,we obtain : to the first order , we can express the vibrational entropy change as where is a dimensionless geometrical factor depending only on the lattice type . 
in the limit of , we can obtain the change in vibrational entropy due to a change in all the spring constants by simply summing the effect of the change in the stiffness of each spring : to determine the value of , we consider the following particular case for which the exact vibrational entropy change is known .once the value of is known , it can be used in any other case sharing a same lattice type . in a solid bound by springs of stiffness given by , if the stiffness of all springs is increased by , each eigenvalue becomes and the vibrational entropy becomes : where is the number of nearest neighbors and denotes a sum over all nearest neighbor bonds .since this result is exact to the first order , we can compare it to equation ( [ ds1 ] ) and identify the unknown constant to be .we thus obtain the following result : we now turn to the problem of calculating the vibrational entropy of mixing in a binary alloy .we first define a normalized dynamical matrix as follows : where is the spring constant of an bond if site is occupied by a atom similarly for a site occupied by a atom . for the purpose of calculating free energy of formation, this normalized dynamical matrix gives the same result as the usual dynamical matrix because the factors in the denominator exactly cancel out , for the same reason masses cancel out ( see appendix [ masscancel ] ) .this transformation normalizes the spring constant associated with bonds and bonds to while the spring constant associated with bond becomes where , and respectively denote the true spring constants of , and bonds .the usefulness of this normalization is to extend the applicability of equation ( [ dvibo1 ] ) to the case where and are very different .let us start with a phase separated mixture of and atoms .let us think of this system as one where all atoms are identical but where the springs connecting them can be either one of three types , or . where the springs are placed defines which type of atom sits at each site .we now replace one bond in the pure phase by an bond and one bond in the pure phase by an bond . by equation ( [ dvibo1 ] ) ,the resulting change in vibrational entropy per atom is : to satisfy the assumptions of the above derivation , we require that .if we create a total number of bonds , we perform the above operation times and the vibrational entropy change is : to the first order ( when ) , this expression is equivalent to the nearest neighbor eci of the cluster expansion of the vibrational free energy is thus : extreme case of anharmonicity occurs when the energy surface , in the neighborhood of a configuration , has no local minimum .as noted in and , this situation occurs sufficiently frequently to deserve a particular attention .a typical example of such a situation occurs when the fcc - based structure is unstable with respect to a deformation along the bain path , which leads to a bcc - based structure .while it is possible to construct a separate cluster expansion for the fcc and bcc phases , the fundamental question that arises is : what is the free energy of the structure ? 
since it is unstable , the standard harmonic expression for the free energy can clearly not be used .one suggested solution to this problem , described in , is to perform the coarse graining in a different order than presented in section [ coarse ] .the sum over configurations is performed first , and the vibrational properties of the configurational averaged alloy are then calculated .the main limitation of this approach is that it would be extremely difficult to compute the averaged vibrational properties by any other method than by the so - called virtual crystal approximation ( see section [ appdisord ] ) .another limitation is that it only addresses instabilities with respect to cell shape distortions , ignoring instabilities with respect to internal degrees of freedom ( _ i.e. _ atomic positions ) . while the coarse graining technique is most naturally interpreted as integrating out the `` fast '' degrees of freedom ( _ e.g. _ vibrations ) before considering `` slower '' ones ( _ e.g. _ configurational changes) , the time scale of the various types of excitations is , in fact , irrelevant .the partition function is simply a sum over states which can be calculated in any order .as long as we can associate any vibrational state of the system with a configuration , the coarse graining procedure remains valid . under this point of view , it is clear that it does not matter whether there is even a local minimum of energy in the portion of phase space associated with configuration .what is important , however , is that the neighborhood of configuration in phase space is thoroughly sampled ( _ i.e. _ that the constrained system is ergodic ) over a macroscopic time scale .there is no need for ergodicity within the time scale of the configurational excitations .if the neighborhood of a given configuration is not fully sampled before the alloy jumps to another configuration , it is still possible that the unsampled portion of phase space around will be visited at a later time , when the system returns to the neighborhood of configuration .the ergodicity requirement at the macroscopic time scale imposes the important but intuitively obvious constraint that the phase space neighborhood of configuration can not contain states that are associated to different phases of the system .this discussion shows that there is no fundamental limitation to the applicability of the standard coarse graining framework in the presence of instability .however , we still need to describe how the free energy of an unstable configuration could be determined in practice .the task is simplified by the fact that the free energy of an unstable stable does not need to be extremely accurately determined , because unstable states are relatively rarely visited , even at high temperatures .nevertheless , it is important to assign a free energy to those unstable states , to ensure that the ising model used to represent the alloy is well - defined . the free energy associated with one configuration can be obtained by integrating $ ] with respect to over the portion of phase space associated with . in the classical limit, we can label the vibrational states by the position each particle takes and the integration limits can be found by geometrical arguments .the quantum mechanical equivalent of this operation is complex , , where is the ( multibody ) hamiltonian of the system .the trace can computed in any convenient basis and in particular one could use dirac delta functions . 
in this fashion , it is possible to define a localized free energy by summing only over the delta functions located in the neighborhood of one configuration . ] but unlikely to be needed in practice .the unstable states are essentially never visited at low temperatures , where a quantum mechanical treatment would be essential .focusing on the classical limit , we consider an unstable configuration .let be the dynamical matrix evaluated at the saddle point of the energy surface closest to the ideal undistorted configuration . can not be associated with a saddle point and the derivation would have to be modified .in particular the bounds of integration would have to be made asymmetric .] we consider that when the state of the system is such that one atom moves away from its position at the saddle point by more than , it should not longer be considered part of configuration . for an instability with respect to internal degrees of freedom ( atomic positions ) , a natural choice for would be half the average nearest neighbor interatomic distance .for an instability with respect to unit cell deformation , could be half the change in the average nearest neighbor distance induced by the displacive transformation .the boundedness of the portion of phase space associated with can be accounted for by replacing the usual classical partition function associated with one normal mode of oscillation by where is the -th eigenvalue of the dynamical matrix , is planck s constant and is a measure of the size of the phase space neighborhood of along the direction associated with normal mode .this size parameter can be expressed in terms of the parameter just introduced .let where is the -th eigenvector of and is the mass of atom .after normalizing so that , the number of atom in the system , we can then write where the maximum is taken over all nearest neighbor pairs of atoms .this choice of integration bounds approximately defines a neighborhood of such that no atom moves farther than from its position at the saddle point ( relative to its neighbors ) . 
in this approximation ,the free energy of an unstable state is given by : where is the frequency of normal mode and where the error function for real or imaginary arguments is given by the suggested definition of the free energy of an unstable configuration has interesting properties .first , as the neighborhood size increases , the expression reduces to the usual harmonic expression .the effect of the correction is not limited to unstable modes : modes that are so soft that it is likely that the motion of the atoms exceeds are also affected .there may obviously be other definitions of .the above example simply gives an example of how it could be calculated .going back to our initial example of the instability , we can now outline how this problem could be handled within the traditional coarse graining scheme .two separate clusters expansion need to be constructed , one for the bcc phases and one for the fcc phases .but since we now know how to assign a free energy to the unstable configuration , the fcc cluster expansion can be successfully defined .the free energy attributed to the configuration should be sufficiently high so that the free energy curve of the fcc phase in the vicinity of 0.5 concentration will lie above the free energy curve of the bcc phase , as it should .the fact that both cvm or monte carlo calculations on the fcc lattice would attribute a positive probability to -like structures should not be regarded as a problem : this is precisely what will ensure that the calculated fcc free energy curve lies above the bcc one .the discussion has so far been concerned with the expression of the partition function , which is the relevant quantity to consider when the phase diagram is calculated with the cvm or the low temperature expansion .let us now consider the implications of this approach to monte - carlo simulations .thermodynamic quantities derived from averages , such as the average energy , are obviously unaffected by the presence of unstable configurations . for quantities derived from fluctuations , such as the heat capacity ,slight modifications are needed . in traditional monte carlo simulations ,the heat capacity arising from vibrational degrees of freedom is consistently neglected , and any thermodynamic quantity obtained from monte carlo can be unambiguously interpreted as the configurational contribution . in the more general setting presented here , there is an overlap between vibrational and configurational fluctuations and the only way to obtain well defined thermodynamic quantities is to fully account for the vibrational fluctuations .fortunately , there is a straightforward way to do so .the total variance of the energy ( or any other quantity ) can be exactly expressed as a sum of the variance within each configuration and the variance of the average energy of each configuration : where is the probability of finding the system in state , while and .the first term is the usual fluctuation obtained from monte carlo .the second term is a correction which takes the form of a simple configuration average of fluctuations within each configuration .the fluctuation of a system constrained to remain in the vicinity of configuration is usually just as simple to determine as its average properties . 
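returning to the bounded-mode partition function introduced above, it is straightforward to evaluate in practice. the sketch below is a minimal illustration in the classical limit (assumptions: unit mass-weighted coordinates, hbar = k_B = 1 so that h = 2*pi, and a hypothetical cutoff A on each normal coordinate): it evaluates the restricted single-mode partition function through the error function (erfi for the imaginary-frequency case), checks it against direct quadrature, and shows that a stiff mode reproduces the usual harmonic value while soft and unstable modes are regularized by the cutoff.

import numpy as np
from scipy.special import erf, erfi
from scipy.integrate import quad

def z_bounded(lam, A, T):
    # partition function of one mode with eigenvalue lam (possibly negative), |q| < A
    beta = 1.0 / T
    c = 0.5 * beta * lam
    if c > 0:
        q_int = np.sqrt(np.pi / c) * erf(A * np.sqrt(c))
    elif c < 0:
        q_int = np.sqrt(np.pi / (-c)) * erfi(A * np.sqrt(-c))   # imaginary-frequency (unstable) case
    else:
        q_int = 2.0 * A
    kinetic = np.sqrt(2.0 * np.pi / beta)
    return kinetic * q_int / (2.0 * np.pi)                      # h = 2*pi with hbar = 1

def z_quadrature(lam, A, T):
    beta = 1.0 / T
    q_int, _ = quad(lambda q: np.exp(-0.5 * beta * lam * q * q), -A, A)
    return np.sqrt(2.0 * np.pi / beta) * q_int / (2.0 * np.pi)

T, A = 1.0, 3.0
for lam in (4.0, 0.05, -1.0):             # stiff, very soft, and unstable modes
    zc, zq = z_bounded(lam, A, T), z_quadrature(lam, A, T)
    f = -T * np.log(zc)                   # free energy assigned to this mode
    print(lam, zc, zq, f)

# for lam = 4.0 the bounded result is indistinguishable from the harmonic value T/omega with
# omega = sqrt(lam); for lam <= 0 only the bounded expression is finite, which is the point
# of the construction, and very soft modes are visibly affected as well.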
in the case of energy , the fluctuations within each configuration are simply related to the heat capacity of a harmonic solid . the main objective of this section was to show that there is no fundamental problem associated with unstable states in the coarse graining formalism . while it is true that the free energy of an unstable configuration is not uniquely defined , once a particular way to coarse grain phase space is chosen , the free energy of all configurations can be defined in a consistent fashion . there are admittedly some issues to be resolved regarding the practical implementation of coarse graining in the presence of instabilities , but the approach suggested in this section indicates that these difficulties can be overcome . de fontaine , d. , j. althoff , d. morgan , m. asta , s. foiles , and d. j. a. quong , 1998 , in _ phase transformations and systems driven far from equilibrium _ , edited by e. ma , m. atzmon , p. bellon , and r. trivedi ( mater . res . soc . , warrendale , pa ) , p. 175 .
a longstanding limitation of first - principles calculations of substitutional alloy phase diagrams is the difficulty to account for lattice vibrations . a survey of the theoretical and experimental literature seeking to quantify the impact of lattice vibrations on phase stability indicates that this effect can be substantial . typical vibrational entropy differences between phases are of the order of to /atom , which is comparable to the typical values of configurational entropy differences in binary alloys ( at most /atom ) . this paper describes the basic formalism underlying _ ab initio _ phase diagram calculations , along with the generalization required to account for lattice vibrations . we overview the various techniques allowing the theoretical calculation and the experimental determination of phonon dispersion curves and related thermodynamic quantities , such as vibrational entropy or free energy . a clear picture of the origin of vibrational entropy differences between phases in an alloy system is presented that goes beyond the traditional bond counting and volume change arguments . vibrational entropy change can be attributed to the changes in chemical bond stiffness associated with the changes in bond length that take place during a phase transformation . this so - called `` bond stiffness vs. bond length '' interpretation both summarizes the key phenomenon driving vibrational entropy changes and provides a practical tool to model them . submitted to _ reviews of modern physics_.
sequence alignment deals with the problem of identifying similarities between two different sequences of objects , represented by `` letters '' from some `` alphabet '' .this problem has a long history in combinatorics and in probability theory where one wishes to find the longest common subsequence ( lcs for short ) between two random sequences of letters .more recently , sequence alignment has become a central notion in evolutionary biology where it is used to probe functional , structural or evolutionary relationships between dna or rna strands or proteins . in this setting onewishes to quantify how `` close '' two sequences of genetic information are by identifying the lcs of the same gene in different species . given a pair of fixed sequences of letters of lengths and , the length of their lcs is defined by the recursion \label{recursion}\ ] ] with the boundary conditions for all .the variable is 1 if the letters at the positions and match each other , and 0 if they do not . if one ignores the correlations between different , and takes them from the bimodal distribution , one gets the bernoulli matching ( bm ) model of sequence alignment . to get a model closest to the original lcs problem, one has to put . in the thermodynamic limit of infinitely long sequencesthis problem has been studied in some detail . with , , sepplinen derived rigorously the law of large numbers limit .asymptotically the quantity is a random variable converging a.s . to a function of which he computedexplicitly . using an exact mapping to a directed polymer problem , complemented with scaling arguments, it was shown more recently that asymptotically the quantity is a random variable of the form where are known scale factors and is a random variable drawn from the tracy - widom distribution of the largest eigenvalue of gue random matrices . in subsequent work some related quantities were obtained for the thermodynamic limit using a mapping to a 5-vertex model and applying the bethe ansatz . in this paperwe compute analytically the exact distribution of for _ finite _ sequences by a mapping of the bm problem to a stochastic exclusion process .the mapping of the sequence alignment problem onto the asymmetric exclusion process has been proposed in .the hopping dynamic considered in is the asymmetric exclusion process with sublattice - parallel update which admits a transfer - matrix formulation and diagonalization of the matrix for finite and .since our interest lies in an analytical solution for arbitrary and , we choose another mapping onto a discrete - time fragmentation process which is equivalent to a totally asymmetric simple exclusion process with backward sequential update .this allows us to use earlier results obtained directly from bethe ansatz for this stochastic lattice gas model .specifically , we will express the probability that the length of the lcs is at most by the probability that the number of jumps of a selected particle in the exclusion process up to time is at least .we also outline how ( [ lcsbminf ] ) arises in the thermodynamic limit from the result for finite sequences .in fig . 
[ fig1](a )we illustrate the lcs problem in matrix form for two sequences of lengths and from an alphabet of the four letters used in dna sequencing .the vertical sequence is read from bottom to top , the horizontal sequence is read from left to right .whenever two letters match there is a bold face 1 in the matrix .the recursion ( [ recursion ] ) ( its solution is shown in small blue numbers in the matrix ) generates a terrace - like structure ( red lines ) where the number of terraces is .the boundary condition of the recursion amounts to assigning value 0 to the boxes containing the letters of the two sequences and also to the empty lower left starting corner . solving the recursion ( [ recursion ] ) , one can note that blue numbers in fig .[ fig1](a ) appear with different statistical weights .indeed , numbers at left corners of terraces appear when and have weight .all the rest of numbers at edges of terraces do not depend on and have therefore weight .all remaining numbers appear when having weight .it is useful to view the grid that defines the matching matrix as a square lattice with bulk sites , embedded in the rectangle of size .each square ( defining the dual square lattice ) is labelled with and . due to the terraces ,each site can take one of five different states .it may be ( i ) traversed horizontally or vertically by a ( red ) terrace line ( ii ) represent a left or right corner of a terrace , or ( iii ) be empty .( vertical , bottom to top ) and ( horizontal , left to right ) .matches are denoted by a bold face 1 in the matrix .the small blue integers are the solution of the recursion ( [ recursion ] ) with boundary conditions ( not shown in ( a ) , but in ( b ) ) .the red lines are the level lines that separate terraces of different height .the dashed line follows the lcs of length 5 .( b ) mapping to the five vertex model obtained by interchanging the colours of the vertical lines and identifying lines with arrows as shown in fig .[ fig2](a ) .vertex weights are marked by an x , weights are marked by a bullet ., title="fig:",scaledwidth=40.0% ] ( vertical , bottom to top ) and ( horizontal , left to right ) .matches are denoted by a bold face 1 in the matrix .the small blue integers are the solution of the recursion ( [ recursion ] ) with boundary conditions ( not shown in ( a ) , but in ( b ) ) .the red lines are the level lines that separate terraces of different height .the dashed line follows the lcs of length 5 .( b ) mapping to the five vertex model obtained by interchanging the colours of the vertical lines and identifying lines with arrows as shown in fig .[ fig2](a ) .vertex weights are marked by an x , weights are marked by a bullet ., title="fig:",scaledwidth=40.0% ] by construction , in the bm model each empty site has weight , each left corner of each line has weight , and all remaining sites have weight 1 .this property allows for a mapping to a five - vertex model , see fig .[ fig1](b ) . 
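for concreteness, the recursion illustrated above can be implemented directly. the sketch below assumes the standard form of the elided equation, L[i,j] = max( L[i-1,j], L[i,j-1], L[i-1,j-1] + eta[i,j] ) with zero boundary values, and evaluates it both for the true lcs of two random letter sequences (eta = 1 exactly at matches) and for the bernoulli matching variant, where the eta[i,j] are independent with probability 1/c; the alphabet size and sequence lengths are illustrative.

import numpy as np

def lcs_length(eta):
    # dynamic programming solution of the terrace-generating recursion
    x, y = eta.shape
    L = np.zeros((x + 1, y + 1), dtype=int)
    for i in range(1, x + 1):
        for j in range(1, y + 1):
            L[i, j] = max(L[i - 1, j], L[i, j - 1], L[i - 1, j - 1] + eta[i - 1, j - 1])
    return L[x, y]

rng = np.random.default_rng(1)
c, x, y = 4, 40, 30                                   # alphabet size and sequence lengths
a, b = rng.integers(0, c, x), rng.integers(0, c, y)

# true lcs: the matching variables are correlated through the letters
eta_true = (a[:, None] == b[None, :]).astype(int)
print("lcs:", lcs_length(eta_true))

# bernoulli matching model: the correlations are ignored, eta is iid with p = 1/c
eta_bm = (rng.random((x, y)) < 1.0 / c).astype(int)
print("bm :", lcs_length(eta_bm))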
for reasons that become apparent belowwe define this mapping slightly differently from by interchanging the colour of all vertical lines .the resulting pattern of intersecting black and red lines then becomes isomorphic to the pattern of in- and outging arrows in the five - vertex model with vertex weights given by the weights of the bm model .one simply identifies black ( red ) horizontal lines with right - pointing ( left - pointing ) arrows and black ( red ) vertical lines with up - pointing ( down - pointing ) arrows as shown in fig .2(a ) .( a ) mapping of line intersection to vertices of the six - vertex model which is effectively a five - vertex model since one of the vertex weights is zero .( b ) a way to avoid line intersections in the five vertex model : if a horizontal line has a left adjacent vertical line below and a right vertical line above , it is replaced by the diagonal shortcut.,scaledwidth=40.0% ] a similar terrace - like structure appears in the anisotropic 3d directed percolation model ( adp ) solved by rajesh and dhar .a difference is that levels lines in the adp can overlap .the overlapping lines can be separated by successive shifts of terraces , then one gets the five - vertex model with the same vertex weights as described above .however , the shifts destroy the domain - wall boundary conditions which are essential for thw exaxt solution of the bm model .then , the analogy between the adp and the bm models can be used only in the thermodynamic limit as it was demonstrated in .it is useful to consider a further mapping of this 5-vertex model onto a discrete - time stochastic process , considering the vertical direction as time and horizontal one as discrete space . to this end, we first turn the red vertex lines ( arrows pointing left or down ) into non - intersecting particle world lines by replacing a right - left turn with a diagonal `` shortcut '' , as shown in fig .[ fig2](b ) . after a space reflection this yields a non - intersecting line ensemble as shown in fig . 3. mapping of the five - vertex model to non - intersecting worldlines after space reflection .the green worldline is the first line ( seen from the right ) which does not reach the top of the grid.,scaledwidth=40.0% ] a final mapping is aimed to obtain the line ensemble of particle world lines of the discrete - time totally asymmetric exclusion process ( tasep ) with the backward sequential update , introduced in and solved in . to this end, we consider each trajectory in fig .[ fig3 ] and replace each move upward by a diagonal move right and each diagonal move left by a move upward ( fig .[ fig4 ] ) . in a more formal way, we consider a new square lattice and draw new trajectories using the correspondence , see fig .the sites of the new lattice are denoted by coordinates numbered by integers and . by construction the red lines move upward or diagonally and define the world lines of exclusion particles which jump only to the right .the vertex weights assign the appropriate probability to each path ensemble .worldlines of the tasep. the space - time grid of the exclusion process is the square lattice with new coordinates numbered by integers and .the green worldline is the first line ( seen from the right ) which does not reach the target position that yields for .,scaledwidth=70.0% ] the backward sequential dynamics encoded in the vertex weights may be described as follows . 
in each time particle positionare updated sequentially from right to left , starting from the rightmost particle .each step of a particle by one lattice unit in positive direction has probability , provided the neighbouring target site is empty . if the target site is occupied , the jump attempt is rejected with probability 1 .no backward moves are allowed , making the exclusion process totally asymmetric .the horizontal boundary condition of the original sequence matching problem maps into an initial condition where at time particles occupy consecutive dual lattice points . since the motion of a particle is not influenced by any particles to its left, we may extend the lattice to minus infinity .the vertical boundary condition is equivalent to extending the lattice to plus infinity , such that at time all sites are vacant .thus one has a tasep on an initially half - filled infinite lattice with step initial state . however , only the first particles contribute to the statistical properties of the bm model . in the exclusion picturethe terrace height has a simple probabilistic interpretation .it counts the number of world lines that intersects with a diagonal in the square lattice starting from the point ( the left dotted line in fig .hence , at any given time step , the terrace increases at each site from right to left by one unit , unless a world line has been crossed when going from right to left .therefore the number of trajectories ending at time and the length of the lcs of the bm model on the rectangle are related by .our aim is the evaluation of the probability distribution $ ] of the bernoulli model . having the tasep interpretation of the original model , we need to evaluate an appropriate sum over end points of trajectories of particles . to do this , we select the first trajectory ( counted from the right ) which does not end at time in the target range of the dual lattice given by the top row with ( the green line in fig.4 ) .an important observation is that the sum of weights of all trajectories ending at times ( all lines to the left of the green line ) is 1 for the conservation of probabilities in the tasep .then , the distribution is the sum over the probabilities of all trajectories with end points right of the green line and over end points of the green line itself .hence of all particles only the rightmost particles are relevant for the computation of .the initial positions of these particles are . following the relation between terrace height and particle trajectories as discussed above, we may consider the final positions at the moment of time . by the construction, we have and .we first consider . in this caseno particle has reached site . in particular, this implies that the first particle ( initially at site ) has not reached site .the complementary probability for this event is the probability that the first particle has jumped at least times up to time . hence now consider . then is the joint probability that after time steps all rightmost particles ( located initially on ) have reached sites and the next particle ( located initially on ) has not reached site .this is equivalent to the joint probability that the particle originally at has jumped at least times and the particle originally at has jumped not more than times . 
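the backward sequential dynamics just described can also be simulated directly, which is a convenient numerical check of the mapping. the sketch below is a minimal illustration (assumptions: an arbitrary jump probability p, a finite number of tracked particles, and a finite time horizon): it keeps only the rightmost particles of the step initial condition, since particles further to the left never influence them, performs the right-to-left sweeps, and estimates the distribution of the number of jumps of a tagged particle, i.e. the quantity whose distribution enters the finite-size result for the bernoulli matching model.

import numpy as np

def tasep_backward_sequential(n_particles, t_max, p, rng):
    # particle 0 is the rightmost one, initially at site 0; particle m starts at site -m
    pos = -np.arange(n_particles)
    jumps = np.zeros(n_particles, dtype=int)
    for _ in range(t_max):
        for m in range(n_particles):                  # right-to-left sweep within one time step
            blocked = (m > 0) and (pos[m] + 1 == pos[m - 1])
            if not blocked and rng.random() < p:
                pos[m] += 1
                jumps[m] += 1
    return jumps

rng = np.random.default_rng(2)
p, t_max, n_particles, n_samples = 0.25, 30, 10, 2000
tagged = 4                                            # track the fifth particle from the right
counts = np.array([tasep_backward_sequential(n_particles, t_max, p, rng)[tagged]
                   for _ in range(n_samples)])

# empirical P( number of jumps of the tagged particle >= j ) for j = 0, 1, ...
ccdf = [(counts >= j).mean() for j in range(t_max + 1)]
print(np.round(ccdf[:12], 3))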
by construction of the processthis joint probability may be expressed as the statistical weight of all paths where the particle initially at jumps at least times minus the statistical weight of all paths where the particle initially at jumps at least times .we have come to a known problem of the tasep statistics .consider an infinite chain , the left half of which is initially occupied by particles while the right half is empty .the problem is to find the probability that the particle ( counted from the right ) of the infinite cluster hops at least times up to time . with this quantity ,we obtain for the partition function the expression for every . for , eq.([partfunction ] ) reduces to ( [ partfunction1 ] ) .we remark that eq.([partfunction1 ] ) may be viewed as incorporated in eq.([partfunction ] ) in agreement with the notion that the transition probability in an exclusion process with no particles ( second argument of for ) is equal to 1 ( this is the trivial transition from the empty lattice to the empty lattice ) .to derive eq.([partfunction ] ) more formally , let be the probability that particles located initially at will be at at time .then , the partition function can be written as given the positions of particles at , the sum of conditional probabilities for the first particle to reach any position is and the notations in ( [ norm ] ) and ( [ rest ] ) mean that probabilities to find a particle at with respect to positions of other particles , in contrast to probabilities conditioned with respect to the initial conditions . using ( [ rest ] ), we get the partition function in the form where the time argument is omitted .the probability for the continuous time tasep has been found in and for the discrete - time tasep with the backward sequential update in : where in terms of , the first sum in ( [ twoparts ] ) is and the second one is which gives ( [ partfunction ] ) again . for the tasep with parallel update computed by johansson who used combinatorial methods of the theory of symmetric groups . for the present model with backward sequential updatethe solution was obtained by rkos and schtz using the bethe ansatz method . for more general initial conditions ,this problem has been solved by nagao and sasamoto .the expression for obtained in by evaluation of sums of reads where is the jump probability considered in . with( [ partfunction1 ] ) and ( [ partfunction ] ) this gives the exact distribution of the lcs in the bernoulli matching problem .the cumulative distribution = \sum_{m=0}^{q } \lambda_{x , y}^m\ ] ] takes the simple form with the natural convention that .in it was shown explicitly that which for the bm model is expected by symmetry . the result ( [ cumulative ] )provides a simple relation between the cumulative distribution of the length of the lcs in the bm model and the distribution of the time - integrated current in the backward - sequential tasep for the step function initial condition .the probability that the length of the lcs is at most is given by the probability that the number of jumps across bond up to time is at least .we now turn to a brief discussion how the asymptotic results ( [ lcsbminf ] ) of and follow from the cumulative distribution ( [ cumulative ] ) . to this endwe need the asymptotic properties of for large arguments . for parallel update johanssonhas derived the asymptotics using results from the random matrix theory . 
by a transformation proved in yields the asymptotics of the distribution for the discrete time tasep with the backward sequential update .one finds ,n , n\omega(\gamma , q)+n^{1/3}\sigma(\gamma , q)\chi ) = f_{gue}(\chi ) \label{asympt}\ ] ] with and the function is the tracy - widom distribution of the gaussian unitary ensemble .the form of ( [ asympt ] ) indicates that given and , the non - trivial scaling regime in time is given by the third argument . in the present case , and fixed and we search for the scaling regime of . in our notations , , , and . from eq.([asympt ] ) we have to find the asymptotics of for large and , we represent it in the form then , the leading term can be found from the equation which gives which coincides with the expression found by sepplinen using probabilistic methods .the substution of eq.([q ] ) with eq.([q_zero ] ) into eq.([y ] ) leads to the equation for which can be resolved in the leading order of . in the first order, we substitute into the second term of the rhs of eq.([y ] ) to get where the coefficient at in the expansion of the first term of the rhs of eq.([y ] ) is then , is the ratio and we obtain both results eq.([q_zero ] ) and eq.([r ] ) coincide with corresponding expressions obtained in from a comparison with johansson s result for the directed polymer problem .we have considered the bernoulli matching model of sequence alignment with the aim of deriving the exact probability that longest common subsequence of two sequences of finite lengths has length . by a series of mappingswe have transformed the matching problem to the time evolution of the totally asymmetric simple exclusion process with backward sequential update in a step - function initial state . in this mappingthe computation of the probability distribution turns into the distribution of the time - integrated current through a certain bond .this problem has been solved by rkos and schtz by bethe ansatz methods .thus the desired result for the bernoulli matching model for finite sequences has been obtained in explicit form through some coordinate transformations from the bethe ansatz . in the thermodynamic limitwe recover the earlier results of majumdar and nechaev through an asymptotic analysis where we use the fact that in the thermodynamic limit there is a scaling form of the current distribution found by johansson which involves the distribution of the largest eigenvalue of the gue ensemble of random matrices . adapting this result to the present setting requires again some nontrivial coordinate transformation .we find the occurrence of an eigenvalue distribution of random matrices ( which is valid also for finite sequence lengths , but for the laguerre ensemble ) intriguing .we thank s.nechaev , s. majumdar and k. mallick for many useful discussions .we acknowledge the support of the dfg . v.p. appreciates also the support of the rfbr grant no .06 - 01 - 00191a .part of this work was done while g.m.s .was the weston visiting professor at the weizmann institute of science .d. sankoff and j. kruskal , _ time warps , string edits , and macromolecules : the theory and practice of sequence composition _( addison wesley , reading , massachussets , 1983 ) . m.s .waterman , _ introduction to computational biology _ ( chapman and hall , london , 1994 ) .d. gusfield , _ algorithms on strings , trees , and sequences _ ( cambridge university press , cambridge , 1997 ) .j. boutet de monvel , eur .j. b * 7 * , 293 ( 1999 ) ; phys .e * 62 * , 204 ( 2000 ) .t. sepplinen , ann .appl . 
probab .* 7 * , no . 4 , 886 ( 1997 ) .s.n . majumdar and s. nechaev , phys .e * 72 * , 020901(r ) ( 2005 ) .majumdar , k. mallick and s. nechaev , arxiv:0710.1030v1 [ cond-mat.stat-mech](2007 ) .r.bundschuh , phys .e * 65 * , 031911 ( 2002 ) . r.bundschuh and t.hwa , discrete appl* 104 * , 113 ( 2000 ). n. rajewsky , a. schadschneider , and m. schreckenberg , j.phys . a * 29*,l305 ( 1996 ) .k. johansson , commun .math . phys . *209 * , 437 ( 2000 ) .r.rajesh and d.dhar , phys.rev . lett . *81 * , 1646 ( 1998 ) .a. rkos and g.m .schtz j. stat .* 118 * , 511 ( 2005 ) .t. nagao and t. sasamoto , nucl .b * 699*,487 ( 2004 ) .schtz , j. stat . phys .* 88*,427 ( 1997 ) .priezzhev , in _ proceedings statphys 22 _ pramana-j.phys . * 64 * , 915 ( 2005 ) ; cond - mat/0211052 ( 2002 ) .tracy and h. widom , commun .159 * , 151 ( 1994 ) ; * 177 * , 727(1996 ) .
through a series of exact mappings we reinterpret the bernoulli model of sequence alignment in terms of the discrete - time totally asymmetric exclusion process with backward sequential update and step function initial condition . using earlier results from the bethe ansatz we obtain analytically the exact distribution of the length of the longest common subsequence of two sequences of finite lengths . asymptotic analysis adapted from random matrix theory allows us to derive the thermodynamic limit directly from the finite - size result .
in this paper we develop the results of .we consider tensor structured linear systems , which arise naturally from high dimensional problems , e.g. pdes .the number of unknowns grows exponentially w.r.t .the number of dimensions which makes standard algorithms inefficient even for moderate this problem is known as the _ curse of dimensionality _ , and is attacked by different low parametric approximations , e.g. _ sparse grids _ and _ tensor product methods _ .a particularly simple , elegant and efficient representation of high dimensional data is a linear tensor network , also called the _ matrix product states _ ( mps ) and _ tensor train _ ( tt ) format .the mps approach was originally proposed in the quantum physics community to represent the quantum states of many body systems .this representation was re - discovered as the tt format by oseledets and tyrtyshnikov , who were looking for a proper method to generalize a low rank decomposition of matrices to high dimensional arrays ( tensors ) .the mps approach came with the _ alternating least squares _ ( als ) and _ density matrix renormalization group _ ( dmrg ) algorithms for the ground state problem .the als considers the minimization of the rayleigh quotient over the vectors with a fixed tensor structure , while dmrg does the same allowing the rank of the solution to change .experiments from quantum physics point out that the convergence of the dmrg is usually notably fast , while the one of the als can be rather poor .the general numerical linear algebra context in which the tt format is introduced allows to think more widely about the power of tensor representations .for instance , we can apply dmrg like techniques to high dimensional problems other than just the ground state problem , e.g. interpolation of high - dimensional data , solution of linear systems , fast linear algebra in tensor formats .we can also consider better alternatives to the dmrg , which follow the same _ alternating linear scheme _ ( als ) framework , but are numerically more efficient .a tempting goal is to obtain an algorithm which has the dmrg - like convergence and the als - like numerical complexity . in we present such an algorithm for a solution of symmetric positive definite ( spd ) linear systems in higher dimensions .the central idea in is to support the alternating steps , i.e. optimization in a fixed tensor manifold , by steps which _ expand _ the basis in accordance with some classical iterative algorithms .a _ steepest descent _ ( sd ) algorithm is a natural choice for spd problems .the _ enrichment _ step uses the essential information about the _ global _ residual of the large high dimensional system on the local optimization step , that helps to escape the spurious local minima introduced by the nonlinear tensor formulation and ensure the global convergence .the convergence rate of the whole method can then be established adapting a classical theory . in contrast , optimization in the fixed tensor manifolds can be analyzed via the gauss seidel theory and only _ local _ convergence estimates are available , which hold only in a ( very ) small vicinity of the exact soution .the global enrichment step used in algorithms `` '' and `` '' in modifies all components of the tensor train format simultaneously .there is nothing particularly wrong with this , but it is interesting to mix the same steps differently to obtain the algorithm which works with only one or two neighboring components at once , similarly to the dmrg technique . 
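the combination of local updates with residual-based basis expansion is easy to visualise on a two-dimensional toy problem. the sketch below is only a loose analogue of that idea, not the tensor train algorithm itself (assumptions: a 2d laplacian as the spd operator, a random low-rank right-hand side, and an illustrative enrichment rank): the iterate is kept as a low-rank matrix X = U V^T, one factor is updated by a galerkin (als) step, and the basis of the other factor is then expanded by dominant singular vectors of the current residual, which plays the role of the steepest descent enrichment.

import numpy as np

def laplace_1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 32
A1 = laplace_1d(n)
A = np.kron(np.eye(n), A1) + np.kron(A1, np.eye(n))         # spd operator acting on vec(X)
rng = np.random.default_rng(3)
b = np.kron(rng.standard_normal(n), rng.standard_normal(n)) \
  + np.kron(rng.standard_normal(n), rng.standard_normal(n))  # low-rank right-hand side

rho = 2                                                      # enrichment rank
V = rng.standard_normal((n, 1))                              # rank-1 initial right basis

for sweep in range(8):
    V, _ = np.linalg.qr(V)                                   # orthonormal right basis
    P = np.kron(V, np.eye(n))                                # vec(U V^T) = P @ vec(U)
    u = np.linalg.solve(P.T @ A @ P, P.T @ b)                # galerkin (als) update of the left factor
    x = P @ u
    r = b - A @ x
    print(sweep, V.shape[1], np.linalg.norm(r) / np.linalg.norm(b))
    # enrichment: append the dominant right singular vectors of the residual to V;
    # in the actual algorithm the neighbouring core is padded with zeros so that the
    # iterate is unchanged -- here the left factor is simply recomputed in the next sweep
    Rmat = r.reshape((n, n), order='F')
    _, _, Zt = np.linalg.svd(Rmat, full_matrices=False)
    V = np.hstack([V, Zt[:rho].T])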
in this paperwe develop such a method , namely the _ alternating minimal energy _ ( amen ) algorithm .we prove the global convergence of amen and estimate the convergence rate w.r.t .the one of the steepest descent algorithm .we also propose several methods to compute the required local component of the global residual , using either the svd based approximation , or incomplete cholesky decomposition , or low rank als approximation .the rest of the paper is organized as follows . in section [ sec : def ] we introduce necessary definitions and notations . in section [ sec : amen ] we propose the amen algorithm , then we compare it with similar algorithms from and prove the convergence theorem . in section [ sec : prac ] we discuss efficient methods to compute the required component of the residual . in section [ sec : num ] we test the algorithm on a number of high dimensional problems , including the non - symmetrical fokker planck and chemical master equations , for which the efficiency of the method is not fully supported by the theory . in all exampleswe observe a convincing fast convergence and high efficiency of the proposed method , as well as the advantages of the amen algorithm over the previously proposed ones .this paper is based on the notations of , which we recall briefly here .we consider linear systems in space , i.e. assume that a vector has indices and such arrays are referred to as . ] , so that , and the operator is formulated as follows , for where is the -th identity vector .the particular model parameters were fixed to the values the chemical master equation serves as an accurate model for gene transcription , protein production and other biological processes .however , its straightforward solution becomes impossible rapidly with increasing number of species .existing techniques include the monte - carlo - type methods ( so - called ssa and its descendants ) , as well as more tensor - related ones : sparse grids , greedy approximations in the canonical tensor format and tensor manifold dynamics .the first two approaches only relax the curse of dimensionality to some extent ; typical examples involve up to 10 dimensions and may take from 15 minutes to many hours on high - performance machines .tensor - product low - rank approaches seem to be more promising .unfortunately , we can not estimate a possible potential of greedy or manifold dynamics methods , whereas up to now our alternating linear solution technique appears to be more efficient . for more intensive study of the cme applications ofthe amen and dmrg methods see and , respectively .note that for systems with moderate dimensions and smaller time steps , the dmrg method can be of a good use for such problems , as was demonstrated in . however , as we will see , the amen algorithm appears to perform better than dmrg for more complicated problems .two specific tricks allow to take more benefits from the tensor structuring .first , we employ the crank - nicolson discretization in time , but instead of the step - by - step propagation , consider the time as a -th variable and formulate one global system encapsulating all time steps , where , is the time step size , and the initial state is , is the first identity vector . in particularly , we choose , , and .such a time interval is not enough to reach the stationary solution , but the transient process is also of interest . 
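the global space-time formulation just described amounts to assembling all crank-nicolson steps into one block bidiagonal system. the sketch below illustrates this on a small dense problem (assumptions: a random markov generator stands in for the cme operator, and the number of time steps and the step size are illustrative): it builds the global operator with kronecker products and verifies the result against ordinary step-by-step propagation.

import numpy as np

rng = np.random.default_rng(8)
n, n_t, tau = 30, 16, 0.1
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
A = W.T - np.diag(W.sum(axis=1))                     # markov generator: columns sum to zero
p0 = np.zeros(n); p0[0] = 1.0                        # initial state

Aplus  = np.eye(n) + 0.5 * tau * A                   # crank-nicolson: Aminus p_{k+1} = Aplus p_k
Aminus = np.eye(n) - 0.5 * tau * A
S = np.diag(np.ones(n_t - 1), -1)                    # shift to the previous time level
M = np.kron(np.eye(n_t), Aminus) - np.kron(S, Aplus) # global space-time operator
rhs = np.zeros(n * n_t); rhs[:n] = Aplus @ p0        # only the first block is nonzero

p_all = np.linalg.solve(M, rhs).reshape(n_t, n)      # p_all[k] approximates p(t_{k+1})

# cross-check against ordinary time stepping
p = p0.copy()
for k in range(n_t):
    p = np.linalg.solve(Aminus, Aplus @ p)
print(np.allclose(p_all[-1], p))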
as a result , we end up with a -dimensional system of size .second , we prepare all the initial data and seek the solution not in the -dimensional tt - format directly , but in the so - called quantized tt format : we reshape additionally all tensors to the sizes , and apply the -dimensional tt decomposition , but with each mode size reduced to . however , the matrix is strongly nonsymmetric , which makes difficulties for the dmrg approach .we fix the truncation tolerance for the solution to , and track the frobenius - norm error of the dmrg solution w.r.t .the reference one , obtained by the amen+svd method with tolerance , versus the dimension , see fig .[ fig : casc_dmrg ] .since the dmrg technique takes into account only local information on the system , its accuracy deteriorates rapidly with the increasing dimension . a stagnation in a local minimum is also reflected by a sharp drop of the cpu time , since the method skips the `` converged '' tt blocks .this makes the dmrg unreliable for high dimensional problems , even if the qtt format allows to get rid of large mode sizes .now , we fix the dimension , and compare both the error and residual accuracies of all methods , as well as the computational times . in all cases ,the frobenius - norm tolerance was set to , and the enrichment rank to .first of all , since our methods are proven to converge in the spd case , we shall examine both the initial and symmetrized systems ( fig .[ fig : casc_amen_symm ] ) .a well - known way to treat a general problem via a symmetric method is the normal , or symmetrized formulation , .however , both the condition number and the tt ranks of are the squared ones of , and this approach should be avoided when possible .three particular techniques are considered : the dmrg method , the amen+svd ( marked as `` amen '' in fig . [ fig : casc_amen_symm ] ) and the one .the symmetrized versions are denoted by the `` -s '' tag .in addition , note that the convergence of the methods may be checked locally due to the zero total correction after the enrichment in alg .[ alg : amen ] : before recomputing the -th block , calculate the local residual provided by the previous solution .if it is below the threshold for all , the method may be considered as converged , and stopped .occurrences of this fact are marked by red rectangles ( `` stop '' ) .we observe that the symmetrization allows the dmrg method to converge at least to the accuracy , but increases the cpu time by a factor greater than 100 due to the squaring of the tt ranks and condition number of the matrix .contrarily , for the amen and methods the symmetrization is completely inefficient and redundant : despite pessimistic theoretical estimates , the nonsymmetric algorithms converge rapidly to an accurate solution approximation .though the non - symmetrized methods may admit oscillations in the residual , the frobenius - norm error threshold is almost satisfied in both amen and methods .nevertheless , the amen algorithm appears to be more accurate thanks to the enrichment update in each step .also , its local stopping criterion is trustful : it fires just after the real error becomes smaller than the tolerance , which is not the case for other methods . 
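the quantization step described above can be illustrated with a plain tt-svd applied to a vector of length 2**d. the sketch below is a minimal illustration (assumptions: a smooth model function stands in for the solution, and the truncation tolerance is arbitrary): the vector is reshaped into a tensor with all mode sizes equal to 2 and compressed by successive truncated svds, and the printed ranks show why the quantized format can be far cheaper than handling the long vector directly.

import numpy as np

def qtt_ranks(vec, tol):
    d = int(round(np.log2(len(vec))))
    assert 2 ** d == len(vec)
    norm = np.linalg.norm(vec)
    ranks = [1]
    c = vec.reshape(1, -1)
    for k in range(d - 1):
        c = c.reshape(ranks[-1] * 2, -1)              # unfold: (previous rank * mode size) x rest
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1] # tail[r] = truncation error if rank r is kept
        r = 1
        while r < len(s) and tail[r] > tol * norm:
            r += 1
        ranks.append(r)
        c = s[:r, None] * vt[:r]                      # carry the remainder to the next unfolding
    ranks.append(1)
    return ranks

d = 16
t = np.linspace(0.0, 1.0, 2 ** d)
vec = np.exp(-3.0 * t) * np.sin(20.0 * t)             # smooth model "solution" on 2**16 points
print(qtt_ranks(vec, tol=1e-8))                       # ranks stay small (2 for this function)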
since both amen - type methods in this testexploit the svd - based residual approximation , they demonstrate almost the same cpu times .however , using the additional techniques from section [ sec : prac ] we can reduce the complexity while maintaining almost the same accuracy , see fig .[ fig : casc_var_amen ] .while the amen+chol method still operates with the exact residual , the amen+als only needs to compute scalar products of the true residual and its low - rank approximation , which makes it more efficient than the amen+svd method , as well as the one .finally , we test the performance of the two amen realizations with respect to the enrichment rank ( tt - rank of ) , see fig .[ fig : casc_amr_rho ] .as expected , the higher is , the more accurate solution can be computed . on the other hand , it is not necessary to pick very large ranks , since the corresponding accuracy improvement does not overcome the significant increase in cpu time .another example of high - dimensional problems arising in the context of probability distribution modeling , is the fokker - planck equation ( see e.g. ) . as a particular application , consider the 8-dimensional fokker - planck equation of the polymer micro - model arising in the non - newtonian fluid dynamics .the polymer molecules in a solution are subject to the brownian motion , and are often modeled as bead - spring chains ( see fig .[ fig : beadspring ] ) .the spring extensions , being the degrees of freedom of the dynamical system , become the coordinates in the fokker planck equation .we consider the case of 4 two - dimensional _ finitely extensible nonlinear elastic _ ( fene ) springs in the shear flow regime according to , where is the stacked spring extension vectors ( is the displacement of the -th spring in the -th direction ) , is a spring interaction tensor , is a flow velocity gradient ( shear flow case ) , and is the fene spring force .note that the singularity in limits the maximal length of a spring to .moreover , the probability density at the point ( and any with larger modulus ) is zero .therefore , the domain shrinks to the product of balls a quantity of interest is the average polymeric contribution to the stress tensor , with the normalization assumption . to recast the problem domain into a hypercube , the polar coordinates are employed , .the discretization is done via the spectral elements method ( see e.g. ) .we will vary the number of spectral elements in each radial direction , but the number of angular elements ( in ) is fixed to . with typical values , we end up with tensors of size and dense populated matrices , which are intractable in the full format .since the spectral differentiation matrices are found to be incompressible in the qtt format , the 8-dimensional tt representation is used .we would like to compute the stationary state of , so we use the simple implicit euler ( inverse power ) method as the time discretization , where is the mass matrix , is the stiffness matrix .the time integration was performed until , which is enough to approximate the steady state with a satisfactory accuracy , and the ( unnormalized ) initial state was chosen , which corresponds to the zero velocity gradient . since is not a `` time step '' but a parameter of the inverse power method , we will check the performance w.r.t . as well .in the previous example we have observed that the amen+svd method is in fact superfluous , since the amen+als method delivers the same accuracy with lower cost . 
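the als-based residual compression referred to above can be sketched in its two-dimensional form. the code below is a minimal illustration (the matrix sizes, the target rank and the number of sweeps are arbitrary): given a matrix R standing in for the residual, it fits a rank-rho approximation W Z^T by alternating linear least squares, where each half-step only requires products of R with a thin factor, and compares the result with the optimal svd truncation.

import numpy as np

rng = np.random.default_rng(5)
m, n, rho = 400, 300, 5
# synthetic "residual" with rapidly decaying singular values
U0 = rng.standard_normal((m, 20)); V0 = rng.standard_normal((n, 20))
R = (U0 * (2.0 ** -np.arange(20))) @ V0.T

Z = rng.standard_normal((n, rho))
for sweep in range(5):
    Q, _ = np.linalg.qr(Z)                 # orthonormalize the fixed factor
    W = R @ Q                              # least-squares optimal W for fixed Z = Q
    Q, _ = np.linalg.qr(W)
    Z = R.T @ Q                            # least-squares optimal Z for fixed W = Q
    W = Q
    err = np.linalg.norm(R - W @ Z.T) / np.linalg.norm(R)
    print(sweep, err)

# reference: best possible rank-rho error from the svd
s = np.linalg.svd(R, compute_uv=False)
print("svd ", np.sqrt(np.sum(s[rho:] ** 2)) / np.linalg.norm(R))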
both mode sizes ( up to ) andtt ranks ( up to ) in this example are relatively large , so we will consider only the amen+als .we set the frobenius - norm threshold to , and the enrichment rank .the initial guess for the amen+als method is taken from the previous euler step .first , let us track the evolution of the stress tensor components versus euler iterations , see fig .[ fig : stress ] .we see that the stress does really stabilize in the chosen time range .moreover , the last component tends to zero , and can therefore be used as an in - hand measure of the accuracy .in addition , we compare and with the reference values computed with and , see fig .[ fig : stress_acc ] . for all except ( which is too large ) , and ,the accuracy attained is of the order .note that typical accuracies of greedy or mc methods for many - spring models are of the order .finally , the computational times can be seen in fig .[ fig : fpe_ttimes ] .as expected , the complexity increases quadratically with the number of spectral elements .an interesting feature is that the total cpu time decays with increasing .it points out that the performance of the amen method depends weakly on , and henceforth on the matrix spectrum . on the contrary ,the quality of the initial guess ( in terms of both ranks and accuracy ) is crucial .this may motivate attempts to relate the amen methods to newton or krylov iterations in a future research .in this paper we develop a new version of the fast rank adaptive solver for tensor structured symmetric positive definite linear systems in higher dimensions . similarly to the algorithms from , the proposed amen method combines the one - dimensional local updates with the steps where the basis is expanded using the information about the global residual of the high dimensional problem . however , in amen the same steps are ordered in such a way that only one or two neighboring cores are modified at once . both methods from and the amen converge globally , and the convergence rate is established w.r.t .the one of the steepest descent algorithm .the practical convergence in the numerical experiments is significantly faster than the theoretical estimate .the amen algorithm appears to be more accurate in practical computations than the previously known methods , especially if local problems are solved roughly .the asymptotic complexity of the amen is linear in the dimension and mode size , similarly to the algorithms from .the complexity w.r.t .the rank parameter is sufficiently improved taking into the account that a limiting step is the approximation of the residual , where the high accuracy is not always essential for the convergence of the whole method .we propose several cheaper alternatives to the svd - based tt - approximation , namely the cholesky decomposition and the inner als algorithm .the als approach provides a significant speedup , while maintaining almost the same convergence of the algorithm .finally , we apply the developed amen algorithm to general ( non - spd ) systems , which arise from high dimensional fokker planck and chemical master equations .theoretical convergence analysis can be made similarly to the fom method , which is rather pessimistic and puts very strong requirements on the matrix spectrum . in numerical experimentswe observe a surprisingly fast convergence , even for strongly non symmetric systems . 
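the implicit euler / inverse power interpretation used in this example can be made concrete on a small dense problem. the sketch below is a minimal illustration (a random markov generator stands in for the discretized fokker-planck operator, and tau is treated as a parameter of the inverse power method rather than a true time step): each iteration solves (I - tau*A) x_new = x_old and renormalizes, and since the spectrum of A lies in the left half-plane with a single zero eigenvalue, the iterates converge to the stationary distribution.

import numpy as np

rng = np.random.default_rng(6)
n = 50
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)      # random transition rates
A = W.T - np.diag(W.sum(axis=1))                      # generator: d p / dt = A p, columns sum to zero

tau = 0.5
x = np.full(n, 1.0 / n)                               # initial guess: uniform distribution
M = np.eye(n) - tau * A
for k in range(40):
    x = np.linalg.solve(M, x)                         # one implicit euler / inverse power step
    x /= x.sum()                                      # keep the probability normalization
print("residual of the steady state:", np.linalg.norm(A @ x))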
herethe amen demonstrates a significant advantage over the dmrg technique , which is known to stagnate , especially in high dimensions , see and fig .[ fig : casc_amen_symm ] .there are many directions of a further research based on the ideas of and this paper .first , the ideas developed in this paper can be generalized to other problems , e.g. finding the ground state of a many - body quantum system or a particular state close to a prescribed energy .the combination of update and basis enrichment steps looks very promising for a wide class of problems , as soon as the corresponding classical iterative algorithms can be adapted to provide a proper basis expansion in higher dimensions .a huge work is done in the community of greedy approximation methods , where the cornerstone is a subsequent rank - one update of the solution .second , there is a certain mismatch between the theoretical convergence estimates , which are at the level of the one step steepest descent algorithm , and the practical convergence pattern , which looks more like the one of the gmres .this indicates that there are further possibilities to improve our understanding of the convergence of the amen and similar methods .our rates can benefit from sharp estimates of the progress of the one - dimensional update steps , which at the moment are available only in a small vicinity of a true solution , which is hard to satisfy in practice , see .the superlinear convergence observed in numerical experiments inspires us to look for possible connections with the theory of krylov type iterative methods and a family of newton methods .finally , we look forward to solving more high dimensional problems , and are sure that they will bring new understanding of the advantages and drawbacks of the proposed method , and new questions and directions for a future research .40 , _ reduction of the chemical master equation for gene regulatory networks using proper generalized decompositions _ ,j. numer . methengng , 00 ( 2011 ) , pp .115 . ,_ a new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids _ , journal of non - newtonian fluid mechanics , 139 ( 2006 ) , pp . 153 176 . , _ sparse grids _ , acta numerica , 13 ( 2004 ) , pp. 147269 . ,_ simulation of dilute polymer solutions using a fokker - planck equation _ , computers & fluids , 33 ( 2004 ) , pp .687696 . , _ tt - gmres : on solution to a linear system in the structured tensor format _ , arxiv preprint 1206.5512 ( to appear in : rus . j. of num .an . and math . model . ) , 2012 . , _ tensor - product approach to global time - space - parametric discretization of chemical master equation _ , preprint 68 , mpi mis , 2012 ., _ fast solution of multi - dimensional parabolic problems in the tensor train / quantized tensor train format with initial application to the fokker - planck equation _ , siam j. sci .comput . , 34 ( 2012 ) , p. a3016a3038 . , _ solution of linear systems and matrix inversion in the tt - format _, siam j. sci .34 ( 2012 ) , pp .a2718a2739 . , _ alternating minimal energy methods for linear systems in higher dimensions .part i : spd systems _ , arxiv preprint 1301.6068 , 2013 ., _ finitely correlated states on quantum spin chains _ , communications in mathematical physics , 144 ( 1992 ) , pp . 443490 . , _ a general method for numerically simulating the stochastic time evolution of coupled chemical reactions _ , journal of computational physics , 22 ( 1976 ) , pp. 403434 . 
, _ tensor spaces and numerical tensor calculus _, springer verlag , berlin , 2012 . ,_ a solver for the stochastic master equation applied to gene regulatory networks _ , journal of computational and applied mathematics , 205 ( 2007 ) , pp .708 724 . , _ the alternating linear scheme for tensor optimization in the tensor train format _ , siam j. sci .34 ( 2012 ) , pp .a683a713 . , _ a dynamical low - rank approach to the chemical master equation _ , bulletin of mathematical biology , 70 ( 2008 ) , pp . 22832302 . ,_ direct solution of the chemical master equation using quantized tensor trains _ ,research report 04 , sam , eth zrich , 2013 . , _ approximation of tensors in high - dimensional numerical modeling _ , constr .appr . , 34 ( 2011 ) ,257280 .height 2pt depth -1.6pt width 23pt , _ tensor - structured numerical methods in scientific computing : survey on recent advances _ , chemometr .intell . lab .syst . , 110 ( 2012 ) , pp .119 . ,_ matrix product ground states for one - dimensional spin-1 quantum antiferromagnets _ , europhys .lett . , 24 ( 1993 ) , pp .293297 . , _ tensor decompositions and applications _ , siam review , 51 ( 2009 ) , pp. 455500 . , _ a fast solver for fokker - planck equation applied to viscoelastic flows calculations : 2d fene model _ , journal of computational physics , 189 ( 2003 ) , pp .607 625 . , _ a priori model reduction through proper generalized decomposition for solving time - dependent partial differential equations _ , computer methods in applied mechanics and engineering , 199 ( 2010 ) , pp .16031626 ., _ dmrg approach to fast linear algebra in the tt format _, comput . meth .math , 11 ( 2011 ) , pp .382393 .height 2pt depth -1.6pt width 23pt , _ tensor - train decomposition _ , siam j. sci .comput . , 33 ( 2011 ) , pp .22952317 . , _ tt - cross approximation for multidimensional arrays _, linear algebra appl ., 432 ( 2010 ) , pp ., _ thermodynamic limit of density matrix renormalization _ ,, 75 ( 1995 ) , pp .35373540 . , _ the fokker - planck equation : methods of solutions and applications , 2nd ed ._ , springer verlag , berlin , heidelberg , 1989 . , _ local convergence of alternating schemes for optimization of convex problems in the tt format _, siam j num .anal . , ( ( 2013 ) ) . to appear ., _ iterative methods for sparse linear systems _ , siam , 2003 . , _ fast revealing of mode ranks of tensor in canonical format _ , numertheor . meth .appl . , 2 ( 2009 ) , pp .439444 ., _ fast adaptive interpolation of multi - dimensional arrays in tensor train format _, in proceedings of 7th international workshop on multidimensional systems ( nds ) , ieee , 2011. , _ quadrature and interpolation formulas for tensor products of certain class of functions _ , dokl .nauk sssr , 148 ( 1964 ) , pp .soviet math .dokl . 4:240 - 243 , 1963 . ,_ spectral methods in matlab _ , siam , philadelphia , 2000 . ,_ stochastic processes in physics and chemistry _ , north holland , amsterdam , 1981 . ,_ a qmc approach for high dimensional fokker - planck equations modelling polymeric liquids _ , math ., 68 ( 2005 ) , pp .4356 . ,_ density - matrix algorithms for quantum renormalization groups _ , phys .b , 48 ( 1993 ) , pp . 
1034510356 .# 1#2 ( ) as was observed in the numerical experiments , the amen method works successfully even being applied directly to non - symmetric systems .though we can not support this behavior with sharp estimates , one may proceed similarly to section [ sec : amen ] , and establish a formal theory , relating the amen to the full orthogonalization method .like in the spd case , we begin the analysis from the two - dimensional case . given a linear system and some basis , the projection method is performed as follows , given an initial guess , we assume , and .then it holds also .so , performs an oblique projection of the residual .its analysis is often conducted with the help of the orthogonal projection , i.e. the residual minimization on . the case is known as the minres method .its convergence was analysed in e.g. , i.e. is the acute angle between and . the worst convergence rate is estimated as and for a positive definite matrix is guaranteed to be less than 1 .the same approach may be used for the block case as well , obviously , if , it holds . unfortunately , for the oblique projection one can not guarantee the monotonous convergence in general . however , assuming a certain well - conditioning of the system , we may relate the old and new residuals by a factor smaller than 1 as well .first of all , notice that is orthogonal to , then , , where . for the angle we can derive the following chain of inequalities , which we get . on the other hand , so that .therefore , the residual estimates as follows , it holds since the minimization over is a restriction w.r.t . the minimization over in the full space .hence , .however , might be greater than , and even greater than 1 .if contains the -th krylov subspace , we obtain the so - called fom method .the progress of the fom can be related to that of the gmres as follows , where , are the progresses of the -step gmres and fom , resp .note the similar term in lemma [ lem : fom ] .the condition may reflect the residual approximation , i.e. , but . both svd- and als - based approximations ( see section [ sec : prac ] ) fit to this scheme : the svd approximation reads , where is the singular vectors , and the als approximation reads .lemma [ lem : fom ] applies immediately to the two - dimensional amen method , by setting . despite the generally pessimistic estimate, it occurs in practice that is nonsingular , and moreover , is rather small such that and converges rapidly . a nice property of theorem[ thm : amen ] is that it itself does not rely on a particular form of .we only needed that the galerkin conditions make the error strictly smaller than .[ lem : amr ] suppose in the -th step of the multidimensional amen method , the als step provides the residual decrease and the exact computation of the rest cores after the enrichment provides the residual decrease where from lemma [ lem : fom ] with .then , the total convergence rate of the amen method is bounded by the exact solution for the second block is the oblique projection , hence the last two terms in are similar to that in theorem [ thm : amen ] , but now it is not orthogonal to .therefore , we can only use the triangle inequality , the first term is the residual after the galerkin solution , which is bounded by . 
for the second term , we have the recursion assumption , that is . however , the only way to relate and is to use the angle between and . employing , since , and , it holds . therefore , for the total residual we have . plugging in the als update , the final estimate for now reads as follows , which finishes the recursion . in contrast to the symmetric positive definite case , where the total progress of the amen method deteriorates with but stays below 1 in any case , here we may have a situation where the progress bound given by lemma [ lem : amr ] is greater than 1 . so far , the only available estimate is , since we enrich the basis by , i.e. by the first krylov vector only . in principle , it is possible to include a larger approximate krylov basis into the enrichment , i.e. $ \begin{bmatrix} z^{(k ) } & q^{[1](k ) } & \cdots & q^{[m-1](k)}\end{bmatrix}$ , where $ q^{[p ] } = \tau(q^{[p](k ) } , \ldots , q^{[p](d ) } ) \approx a_k^p z_k$ for . however , this did not prove worthwhile in practical experiments . in all considered cases , the decays and provided by the single enrichment appeared to be sufficiently small to ensure a convergence fast enough to outweigh the work required to prepare several krylov vectors .
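to make the two projection steps discussed above concrete , the following sketch contrasts a galerkin ( fom - type ) step with a residual - minimizing ( gmres / minres - type ) step over the same basis . it is a plain numpy toy on a random non - symmetric matrix , not the tensor - structured solver considered here ; the matrix , basis and right - hand side are arbitrary placeholders .

....
import numpy as np

def galerkin_step(A, f, X, u0):
    # oblique (galerkin / fom-type) projection: enforce X^T (f - A u) = 0
    # with the correction restricted to span(X)
    r0 = f - A @ u0
    c = np.linalg.solve(X.T @ A @ X, X.T @ r0)
    return u0 + X @ c

def residual_min_step(A, f, X, u0):
    # orthogonal projection of the residual: minimize ||f - A u||_2
    # over corrections in span(X) (gmres / minres-type step)
    r0 = f - A @ u0
    c, *_ = np.linalg.lstsq(A @ X, r0, rcond=None)
    return u0 + X @ c

rng = np.random.default_rng(0)
n, m = 200, 10
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # non-symmetric, well conditioned
f = rng.standard_normal(n)
u0 = np.zeros(n)
r0 = f - A @ u0
# orthonormal basis of the m-th krylov subspace built from the initial residual
K = np.column_stack([np.linalg.matrix_power(A, p) @ r0 for p in range(m)])
X, _ = np.linalg.qr(K)

for name, step in (("fom  ", galerkin_step), ("gmres", residual_min_step)):
    u = step(A, f, X, u0)
    print(name, np.linalg.norm(f - A @ u) / np.linalg.norm(r0))
....

depending on the matrix , the oblique ( galerkin ) step may reduce the residual less than the orthogonal one , in line with the discussion above .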
in this paper we complete the development of a fast rank - adaptive solver for tensor - structured symmetric positive definite linear systems in higher dimensions . in earlier work this problem is approached by alternating minimization of the energy function , combined with steps of basis expansion in accordance with the steepest descent algorithm . in this paper we combine the same steps in such a way that the resulting algorithm works with one or two neighboring cores at a time . the recurrent interpretation of the algorithm allows us to prove global convergence and to estimate the convergence rate . we also propose several strategies , both rigorous and heuristic , to compute new subspaces for the basis enrichment in a more efficient way . we test the algorithm on a number of high - dimensional problems , including the non - symmetric fokker - planck and chemical master equations , for which the efficiency of the method is not fully supported by the theory . in all examples we observe convincingly fast convergence and high efficiency of the proposed method . _ keywords : _ high dimensional problems , tensor train format , als , dmrg , steepest descent , convergence rate , superfast algorithms .
at a fundamental level nature is governed by the laws of quantum mechanics , but until recently such phenomena were mostly a curiosity studied by physicists . however , significant advances in theory and technology are increasingly pushing quantum phenomena into the realm of engineering , as building blocks for novel technologies and applications from chemistry to computing .e.g. , advances in laser technology enable ever more sophisticated coherent control of atoms , molecules and other quantum systems .recent advances in nanofabrication have made it possible to create nanostructures such as quantum dots and quantum wells that behave like artificial atoms or molecules and exhibit complex quantum behaviour .cold - atom systems and the creation of bose condensates demonstrate that even macroscopic systems can exhibit quantum coherence .harnessing the potential of quantum systems is a challenging task , requiring exquisite control of quantum effects and system designs that are robust with regard to fabrication imperfection , environmental noise and loss of coherence .although significant progress has been made in designing effective controls , most control design is model - based , and available models for many systems do not fully capture their complexity .model parameters are often at best approximately known and may vary , in particular for engineered systems subject to fabrication tolerances .experimental system identification is therefore crucial for the success of quantum engineering .while there has been significant progress in quantum state identification and quantum process tomography , we require dynamic models if we wish to control a system s evolution . furthermore , effective protocols must take into account limitations on measurement and control resources for initial device characterization .this presents many challenges , from determing how much information can be obtained in a given setting to effective and efficient protocols to extract this information .here we illustrate some problems and solutions for the case of identifying the dynamics of a three - level system .one of the first questions to consider before attempting to find explicit protocols for experimental system identification is clearly what information we can hope to extract about a given system with a certain limited set of resources .for instance , given a system with a hilbert space of dimension , it is well known that the ability to prepare and measure the system in a set of computational basis states is insufficient for quantum process tomography , even if the process is unitary .however , recent work shows that a substantial amount of information about the generators of the dynamics can be obtained for hamiltonian and even dissipative systems at least generically , by mapping the evolution of the computational basis states stroboscopically over time .more precisely , this is done by determining the probabilities that a measurement of the observable produces the outcome after the system was initialized in the computational basis state and allowed to evolve for time for a number of different times .this begs the question how much information we can hope to obtain in general from such experiments . 
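as an illustration of the stroboscopic mapping described above , the following sketch computes the probabilities of finding a system in basis state j at time t after preparing it in basis state k and letting it evolve freely . the three - level hamiltonian used here is a hypothetical placeholder , not one of the systems studied below .

....
import numpy as np
from scipy.linalg import expm

def stroboscopic_probabilities(H, times, hbar=1.0):
    # p[n, j, k] = |<j| exp(-i H t_n / hbar) |k>|^2 : probability of outcome j
    # at time t_n after the system was initialized in computational basis state k
    d = H.shape[0]
    probs = np.empty((len(times), d, d))
    for n, t in enumerate(times):
        U = expm(-1j * H * t / hbar)
        probs[n] = np.abs(U) ** 2
    return probs

# hypothetical three-level hamiltonian (placeholder values)
H = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 1.3],
              [0.0, 1.3, 0.0]])
times = np.linspace(0.0, 10.0, 200)
p = stroboscopic_probabilities(H, times)
print(p[50, 1, 0])  # probability of the second basis state at t[50], starting in the first
....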
in this paper we consider hamiltonian systems , whose evolution is governed by the schrödinger equation with a fixed hamiltonian and , for which we have . let and be hermitian operators representing the hamiltonian and the measurement , respectively , and let be a positive operator with representing the initial state of the system . if , and are simultaneously block - diagonalizable , i.e. , there exists a decomposition of the hilbert space such that where , and are operators on the hilbert spaces , then we can at most identify up to , where is the identity on the subspace . if is block - diagonal then any initial state starting in a subspace must remain in this subspace . thus , the dynamics on each subspace is independent , with . per hypothesis , and are also block - diagonal , so $ \tr[m u(t ) \rho u(t)^\dag ] = \sum_s \tr[m_s u_s(t ) \rho_s u_s(t)^\dag]$ shows that and are indistinguishable . thus , there are some limitations on the maximum amount of information we can obtain about the system by initializing and measuring the system in a fixed computational basis . in particular , if and commute , we can infer that and are simultaneously diagonalizable , and assuming the eigenvalues of are distinct , this fixes the hamiltonian basis , i.e. , we have , where is the projector on the eigenspace of corresponding to , i.e. , the computational basis state . however , no information about the eigenvalues or the transition frequencies can be obtained by measuring , all of which are constant in this case . maximum information about the hamiltonian can be obtained if and are not simultaneously block - diagonalizable . this is the generic case , and in this case we can identify at most up to a diagonal unitary matrix and a global energy shift , i.e. , , as was noted in . the term is generally physically insignificant as it gives rise only to a global phase factor , which is generally unobservable , as the abelian phase factors cancel , for any . the diagonal unitary matrix represents the freedom to redefine the measurement basis states , as . the phases can not be ignored in general , but in certain special cases they can be effectively eliminated . for example , if is known to be real - symmetric , a common case in physics , then we can choose all basis vectors to be real and restrict to . moreover , if the off - diagonal elements in the computational basis are known to be real and positive , , then with as above . hence , with this additional constraint the hamiltonian is effectively uniquely determined ( up to a global energy level shift and global inversion of the energy levels ) . a constructive procedure for reconstructing a generic unknown hamiltonian from stroboscopic measurements of the observables at fixed times using bayesian parameter estimation techniques was also given in . the previous section shows that when essentially no a - priori information about the hamiltonian is available , then even measurement of all the observables is not sufficient to uniquely determine the hamiltonian . however , in many cases some a - priori knowledge about the system is available . for instance , the transition frequencies of the system , where are the eigenvalues of the hamiltonian , may be known from available spectroscopic data , and we may be able to infer basics such as the level structure and allowed transitions from fundamental physical principles . in such cases the identification problem can be substantially simplified and far less information may be required .
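the equivalence class just described can be checked numerically : a hamiltonian and its rephased , shifted counterpart produce identical populations in the fixed computational basis . the sketch below does this for a randomly drawn hermitian matrix , which is a placeholder rather than a system from the paper .

....
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2                       # hypothetical hermitian hamiltonian

phases = np.exp(1j * rng.uniform(0, 2 * np.pi, d))
D = np.diag(phases)                            # diagonal unitary: rephasing of the basis states
c = 0.7                                        # global energy shift
H_equiv = D @ H @ D.conj().T + c * np.eye(d)

for t in np.linspace(0.1, 5.0, 5):
    p1 = np.abs(expm(-1j * H * t)) ** 2        # p_jk(t) for the original hamiltonian
    p2 = np.abs(expm(-1j * H_equiv * t)) ** 2  # p_jk(t) for the rephased, shifted one
    assert np.allclose(p1, p2)                 # identical populations in the fixed basis
print("H and D H D^dag + c I are indistinguishable from basis-state populations")
....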
as a specific simple example , consider a three - level system with known transition frequencies and and no direct transitions between states and subject to external fields driving the and transitions , respectively .if our computational / measurement basis coincides with the eigenbasis of the undriven system , then we know that the hamiltonian of the driven system must be of the form with and ] , where for . if the field amplitudes are constant , this hamiltonian is constant and we could use the general protocol in to fully characterize the dynamics by stroboscopically measuring the probabilities for at sufficiently many times .this requires the ability to initialize the system in all three basis states and measure the populations of all three states . due to conservation of probability and symmetry , the requirements can be reduced to initialization and measurement in two basis states , e.g. , and , as the remaining probabilities can be inferred from the other two , but we can do even better by using all the information available .we shall assume and are real and positive .for notational convenience , let and be the polar coordinates of the vector , i.e. , and with and ] and setting .this shows that there are three frequency components , and , whose amplitudes determine .the form of suggests fourier analysis to determine the parameters and , e.g. , by identifying the non - zero fourier components .the highest frequency peak will be at and the corresponding peak amplitude uniquely determines . in some cases ( as in the example shown in fig .[ fig1 ] ) there may be only one clearly identifiable non - zero peak in the power spectrum , which could correspond to either or .this problem can in principle be overcome by estimating from the average signal , from which we can obtain estimates for the coefficients and .if then we identify the non - zero - frequency peak with , otherwise with .alternatively , we can estimate the base frequency and the signal amplitudes using a bayesian approach .the signal in our case is a linear combination of the basis functions , and . following standard techniques ,we maximize the log - likelihood function ,\ ] ] where is the number of basis functions , in our case , is the number of data points , and where the elements of -vector are projections of the -data vector onto a set of orthonormal basis vectors derived from the non - orthogonal basis functions evaluated at the respective sample times . concretely , setting , let and be the eigenvalues and corresponding ( normalized ) eigenvectors of the matrix with , and let be a matrix whose columns are . then we have and with , and the corresponding coefficient vector is . in our case the is a function of a single frequency and is the frequency for which achieves its global maximum. if is the corresponding coefficient vector , we can obtain the best estimate for and thus by minimizing with as defined above .thus , the problem of finding the most likely model is reduced to finding the global maximum of .unfortunately , this is not an easy task as is sharply peaked and can have many local extrema and a substantial noise floor depending on the number and accuracy of the data points .one way to circumvent this problem is to use the peaks in the discrete fourier spectrum of the data as input for a gradient - based optimization of . 
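a sketch of the bayesian frequency estimation outlined above is given below . the exact likelihood expression is not reproduced in the text , so the sketch uses the standard student - t form obtained after marginalizing the amplitudes and the noise level , and it assumes the basis functions 1 , cos(wt) and cos(2wt) , an assumption consistent with the three frequency components mentioned above ; the orthonormalization follows the eigendecomposition of the matrix described in the text , and all signal parameters are placeholders .

....
import numpy as np

def basis(omega, t):
    # assumed basis functions: constant, cos(omega t), cos(2 omega t)
    return np.vstack([np.ones_like(t), np.cos(omega * t), np.cos(2 * omega * t)])

def log_likelihood(omega, t, d):
    # student-t log-posterior for the base frequency (bretthorst-style form, assumed)
    g = basis(omega, t)                       # m x N non-orthogonal basis
    G = g @ g.T                               # G_jk = sum_i g_j(t_i) g_k(t_i)
    lam, E = np.linalg.eigh(G)
    H = (E.T @ g) / np.sqrt(lam)[:, None]     # orthonormal model functions
    h = H @ d                                 # projections of the data vector
    m, N = g.shape
    hbar2 = np.mean(h ** 2)
    dbar2 = np.mean(d ** 2)
    return 0.5 * (m - N) * np.log(1.0 - m * hbar2 / (N * dbar2))

# toy data with a hypothetical base frequency (placeholder, not from the paper)
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 20, 256))
omega_true = 1.7
d = (0.4 + 0.3 * np.cos(omega_true * t) + 0.2 * np.cos(2 * omega_true * t)
     + 0.05 * rng.standard_normal(t.size))
omegas = np.linspace(0.1, 4.0, 2000)
ll = np.array([log_likelihood(w, t, d) for w in omegas])
print("estimated base frequency:", omegas[np.argmax(ll)])
....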
to make the peak detection simpler and more robust , especially when the data is noisy , we find the position of the highest peak in the rescaled power spectrum . to assess the protocol , test systems and time vectors were generated , with the number of samples ranging from to . regular and irregular time vector samplings were considered , where for irregular samples a ( fast ) non - uniform fourier transform was used . for each test system and time vector , noisy data vectors were generated by simulating actual experiments , noting that in a laboratory experiment each data point would normally be estimated by initializing the system in state , letting it evolve for time , and performing a projective measurement , whose outcome is random , either or . to estimate the probability , the experiment is repeated many times and the probability is approximated by the relative frequency of outcomes . the simplest approach is to use a fixed number of experiment repetitions for each time , but noting the uncertainty of the estimate of , it is advantageous to adjust the number of repetitions for each time to achieve a more uniform signal - to - noise ratio . specifically , for each data point we sample until or we reach a maximum number of repetitions ( here ) . although the projection noise for a single data point is poissonian , the overall error distribution for a large number of samples is roughly gaussian , justifying the use of a gaussian error model in the bayesian analysis . as the resolution of the discrete fourier transform , and hence of the scaled power spectrum , is approximately , and generally somewhat less for irregular sampling , the uncertainty in the peak positions of the power spectrum will generally be at least , limiting the accuracy of the frequency estimates , in our case to , regardless of the number of data points . this is evident in fig . [ fig1 ] , which shows that the peak in the power spectrum is relatively broad compared to the peak in the likelihood function . furthermore , the frequency range covered by the power spectrum depends on the sampling frequency , or the number of data points , with the largest discernible frequency approximately . if the system frequency is outside the range covered by the power spectrum , we are unable to detect it . for example , for a system with , we require and thus data points ( see fig . [ fig1 ] ) . if and are sufficiently large to avoid such problems , the location of the global maximum of the power spectrum usually provides a good starting point for finding the global optimum of the log - likelihood function , but we can generally substantially improve the frequency estimates using the likelihood . of the 14440 data sets analyzed ( 30 test systems sampled at different times ) , the initial estimate obtained from the power spectrum differed by less than 1% from the true system frequency , i.e. , , in about half ( 7321 ) the cases . for almost all failed cases the number of data points was too small and the system frequency lay outside the range of the power spectrum . even when restricted to the successful cases as defined above , the median of was , while the median of the relative error of the final estimate obtained by maximizing the likelihood was . we also considered finding the global maximum of the likelihood by other means , especially in those cases for which the power spectrum does not provide a useful initial frequency estimator .
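the adaptive choice of experiment repetitions described above can be sketched as follows ; the target number of successes and the cap on repetitions are placeholders , as the concrete values used in the paper are not given here .

....
import numpy as np

def estimate_probability(p_true, target_successes=50, max_reps=2000, rng=None):
    # simulate repeated projective measurements at one time point: keep sampling
    # until `target_successes` outcomes "1" are seen or `max_reps` is reached,
    # then return the relative frequency and the number of repetitions used
    rng = rng or np.random.default_rng()
    successes, reps = 0, 0
    while successes < target_successes and reps < max_reps:
        successes += rng.random() < p_true   # single projective measurement outcome 0/1
        reps += 1
    return successes / reps, reps

rng = np.random.default_rng(3)
for p in (0.02, 0.2, 0.8):
    est, used = estimate_probability(p, rng=rng)
    print(f"p_true={p:.2f}  estimate={est:.3f}  repetitions={used}")
....

small probabilities automatically receive more repetitions , which is what yields the more uniform signal - to - noise ratio mentioned above .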
since we have a function of a single parameter and evaluation of the likelihood , especially when the number of data points is small , is not expensive , it is possible to find the global maximum simply by exhaustive search . interestingly , we found that the log - likelihood still had a clearly identifiable global maximum in many cases even when the number of data points was far below the minimum number of sample points required to detect a peak in the power spectrum . e.g. , for the system shown in fig . [ fig1 ] , the likelihood function still has a sharp peak around the system frequency even if the number of samples is reduced to , while the peak is no longer detectable in the power spectrum even for samples . however , as we reduce the number of samples , additional peaks in the likelihood function tend to emerge at multiples or fractions of , as shown in the top inset of fig . [ fig1 ] . this means that we can no longer unambiguously identify the true frequency . such aliasing problems , leading to sampling artefacts in the data analysis , can be substantially reduced by avoiding uniform sampling at equally spaced times ( cf. fig . [ fig1 ] , top inset ) . in particular , low - discrepancy sequences have been introduced with the aim of creating a sampling with minimal regular patterns causing sampling artefacts , while also minimising the average gap between the samples for a fixed number of samples . here in particular we use a stratified sampling strategy , where a point is placed in each stratum of a regular grid according to a uniform probability distribution . this _ may _ be improved further using other low - discrepancy sequences . the results are relevant as a significant reduction in the number of data points required reduces experimental overheads substantially . this comes at additional computational cost , as finding the global maximum of the likelihood function for irregular samplings with very few data points forms a hard optimization problem . several standard optimization algorithms ( simple pattern search and stochastic gradient descent ) failed to reliably detect the global optimum , and exhaustive search had to be used . ( caption of fig . [ fig1 ] : power spectra and log - likelihood for data sampled at different times ; for sufficiently many samples the power spectra have a single peak in the plotted range , which is a reasonable estimate for the system frequency ; for fewer samples the main peak lies outside the range of the power spectrum , which then no longer contains any useful information , yet the log - likelihood still has a clearly identifiable global maximum even for data vectors with as few as 32 data points , provided a non - uniform sampling is used ; for uniform sampling the top inset shows many peaks of approximately equal height due to aliasing effects ( dashed black line ) . ) we have considered hamiltonian identification using stroboscopic measurement data of a fixed observable . if the system can only be initialized in the measurement basis states , then a completely unknown hamiltonian can not be uniquely identified even if we can measure the population of all basis states as a function of time . if a - priori information is available , however , complete identification of the system parameters is often possible with substantially reduced resources . we have illustrated this for the case of a three - level system where we can only monitor the population of state over time , starting in , without the possibility of dynamic control or feedback as was considered in . the results may be applicable to improve the efficiency of identification schemes for other systems . e.g.
, recent work on system identification for spin networks has shown that the relevant hamiltonian parameters of a spin chain can be identified by mapping the evolution of the first spin and fourier analysis , but the scheme requires repeated quantum state tomography of the first spin for many times , which is experimentally expensive .sgs acknowledges funding from epsrc arf grant ep / d07192x/1 , the epsrc qip interdisciplinary research collaboration ( irc ) , hitachi and nsf grant phy05 - 51164 .fcl acknowledges funding for rivic one wales national research centre from wag .
identifying the hamiltonian of a quantum system from experimental data is considered . general limits on the identifiability of model parameters with limited experimental resources are investigated , and a specific bayesian estimation procedure is proposed and evaluated for a model system where a - priori information about the hamiltonian s structure is available .
microblogging services like twitter have evolved from merely posting a status or quote to an intra - user interaction tool that connects celebrities , politicians , and others to the public . they have also evolved to act as a narration tool and an information exchange describing current publicly recognized events and incidents . in 2011 , during the egyptian revolution , thousands of posts and resources were shared during the 18 days of the uprising . these resources could have crucial value in narrating the personal experience during this historic event , acting as a first draft of history written by the public . in our previous work , we proved that shared resources on the web are prone to loss and disappearance at a nearly constant rate . we found that after only one year we lost nearly 11% of the resources linked in social posts and continued to lose an average of 7.3% yearly . in some cases , this disappearance is not catastrophic , as we can rely on the public archives to retrieve a snapshot of the resource to fill in the place of the missing resource . in another study we measured how much of the web is archived and found that 16%–79% of uris have at least one archived copy . unfortunately , there is still a large percentage of the web that is not archived , and thus a huge amount of resources are not archived and prone to total loss upon disappearance from the live web . this evolution in the role of social media and the ease of reader interaction and dissemination could be used as a possible solution to mitigate or prevent the loss of the unarchived shared resources . fortunately , when a user tweets or shares a link , it leaves behind a trail of copies , links , likes , comments , and other shares . if the shared resource is later gone , these traces , in most cases , still persist . thus , in this paper we investigate if the other tweets that also linked to the resource can be mined to provide enough context to discover similar resources that can be used as a substitute for the missing resource . to do this , in this study we extract up to the 500 most recent tweets about linked uris and we propose a method of finding the social link neighborhood of the resource we are attempting to reconstruct . this link neighborhood could be mined for identifiers and alternative related resources .
aimed to get a better understanding of social interactions social browsing patterns .zhao and rosson aimed to explore the reasons of how and why people use twitter and this use s impact on informal communication at work .following the how and the why , gill et al . attempted to answer the next question of what is the user - generated content is about by investigating personal weblogs to detect the effects of personality , topic type , and the general motivation in published blogs .yang and counts investigated the information diffusion speed , scale and range in twitter and how they could be predicted .this in - depth analysis and study of the social media , its nature , the information dissemination patterns , and the user behavior and interaction paved the way for the researchers to have a better understanding of how the social media played a major role in narrating publicly significant events .these studies prove that user - generated content in social media is of crucial importance and can be considered the first draft of history .vieweg et al .analyzed two natural hazard events ( the oklahoma grass fires and the red river floods in 2009 ) and how microblogging contributed in raising the situational awareness of the public .starbird and palen analyzed how the crowd interact with politically sensitive context regarding the egyptian revolution of 2011 .starbird et al . in another studyutilized collaborative filtering techniques for identifying social media users who are most likely to be on the ground during a mass disruption event .mark et al . investigated weblogs to examine societal interactions to a disaster over time and how they reflect the collective public view towards this disaster .in our previous work we showed that this content is vulnerable to loss .similar to regular web content and websites , there are several reasons explaining this loss .mccown et al .analyzed some of the reasons behind the disappearance and reappearance of websites .mccown and nelson also examined several techniques to counter the loss prior to its occurrence in social networking websites like facebook .as for regular web pages , klein and nelson analyzed the means of using lexical signatures to rediscover missing web pages .given that the web resource itself might not be available for analysis or might be costly to extract , several studies investigated other alternatives to having the resource itself .other studies investigated the use of the page s url to aid web page categorization without resorting to the have the webpage itself .xiaoguang et al . utilized class information from neighboring pages in the link graph to aid the classification .we start our analysis by revisiting the experiment conducted in march of 2012 , in which we modeled the existence of shared resources on the live web and the public archives . in that experiment , we examined six publicly - recognized events that occurred between june 2009 and march 2012 , extracting six sets of corresponding social posts .each of the selected posts include an linked resource and hashtags related to the events .consequently , we tested the existence of the embedded resources on the live web and in the public archives . after calculating the percentages lost and archived we estimated the existence as a function of time . 
in this paper , we start by revisiting this year - old estimation model and checking its validity after a year , before proceeding with our analysis of reappearance and extracting the social context of the missing resources . then we investigate how this context could be utilized in guiding the search for the best possible replacement for the missing resource . in the 2012 model , we found a nearly linear relationship between the number of resources missing from the web and time ( equation [ eq : prediction ] ) , and a less linear relationship between the amount archived and time ( equation [ eq : archived ] ) . as a year has passed , we need to analyze our findings and the estimation calculated to see if it still matches our prediction . for each of the six datasets investigated , we repeat the same experiment of analyzing the existence of each of the resources on the live web . a resource is deemed missing if its http responses terminate in something other than 200 , including `` soft 404s '' . table [ tab : afterayear ] shows the results from repeating the experiment , the predicted values calculated from our model , and the corresponding errors . figure [ fig : afterayear ] illustrates the measured and the estimated plots for the missing resources . the standard error is 4.15% , which shows that our model matched reality .

table [ tab : afterayear ] :

missing
  measured :  37.10% , 37.50% , 28.17% , 30.56% , 26.29% , 31.62% , 32.47% , 24.64% , 7.55% , 12.68%
  predicted : 31.72% , 31.42% , 31.96% , 30.98% , 30.16% , 29.68% , 29.60% , 28.36% , 19.80% , 11.54%
  error :     5.38% , 6.08% , 3.79% , 0.42% , 3.87% , 1.94% , 2.87% , 3.72% , 12.25% , 1.14%
  standard error : 4.15%

archived
  measured :  48.61% , 40.32% , 60.80% , 55.04% , 47.97% , 52.14% , 48.38% , 40.58% , 23.73% , 0.56%
  predicted : 61.78% , 61.18% , 62.26% , 60.30% , 58.66% , 57.70% , 57.54% , 55.06% , 37.94% , 21.42%
  error :     13.17% , 20.86% , 1.46% , 5.26% , 10.69% , 5.56% , 9.16% , 14.48% , 14.21% , 20.86%
  standard error : 11.57%

to verify the second part of our model , we calculate the percentages of resources that are archived at least once in one of the public archives . table [ tab : afterayear ] illustrates the archived results measured , predicted , and the corresponding standard error . figure [ fig : afterayear ] also displays the measured and predicted plots for the archived resources . the archived content percentages , however , had a higher error of 11.57% and became even less linear with time . this fluctuation in the archival percentages convinced us that further analysis is needed . in measuring the percentage of resources missing from the live web , we assumed that when a resource is deemed missing it remains missing . it was also assumed that if a snapshot of the resource is present in one of the public archives , the resource is deemed archived and that this snapshot persists indefinitely . utilizing the response logs resulting from running the existence experiment in 2012 and in 2013 , we compare the corresponding http responses and the number of mementos for each resource .
as expected , portions of the datasets disappeared from the live web and were labeled as missing . an interesting phenomenon occurred : several of the resources that were previously declared as missing became available again , as shown in table [ tab : reappear ] . a possible explanation of this reappearance could be a domain or a webserver being disrupted and restored again . another possible explanation is that the previously missing resources could be linked to a suspended user account that was reinstated . to eliminate the effect of transient errors , the experiment was repeated three times in the course of two weeks . ( table [ tab : reappear ] : percentages of resources reappearing on the live web and disappearing from the public archives per event . ) a web resource can fall into one of the categories shown in table [ tab:2 ] . these categories were adopted from the work of mccown and nelson .

table [ tab:2 ] :
              archived     not archived
  available   replicated   vulnerable
  missing     endangered   unrecoverable

if a resource is available on the live web and also archived in public archives , then it is considered replicated and safe . the resource is considered vulnerable if it persists on the web but has no available archived versions . if a resource is not available on the live web but has an archived version , then it is considered endangered , as it relies on the stability and the persistence of the archive . the worst case scenario occurs when the resource disappears from the live web without being archived at all and is thus considered unrecoverable . in our study we focus on the latter category and on how we can utilize social media in identifying the context of the shared resource and selecting a possible replacement candidate to fill the position of the missing resource and maintain the same context of the social post . a shared resource leaves traces even after it ceases to exist on the web . we attempt to collect those traces and discover context for the missing resource . since twitter , for example , restricts the length of posts to only 140 characters , an author might rely mostly on the shared resource in conveying a thought or an idea by embedding a link in the post and limiting the associated text . thus , obtaining context is crucial when the resource disappears . to accomplish that , we try to find the social link neighborhood of the tweet and of the resource for which we are attempting this context discovery . when a link is shared on twitter , for example , it could be associated with descriptive text in the form of the status itself , hashtags , usertags , or other links as well . these co - existing links could act as a viable replacement for the missing resource under investigation , while the tags and text could provide better context , enabling a better understanding of the resource . given the uri of the resource under investigation , we utilize topsy 's api to extract all the available tweets incorporating this uri . in social media , a resource 's uri can be shared in different forms with the aid of url shortening .
to elaborate , a link to google s web page http://www.google.com could be shared also in several forms like http://goo.gl/xymol , http://bitly.com/xerh58 , and http://t.co/xfiakbhnp3 .each of these forms redirects to the same final destination uri .fortunately , topsy s api handles this by searching their index for the final target url rather than the shortened form .a maximum of 500 tweets of the most recent tweets posted could be extracted from the api regarding a certain url .the content from all the tweets is collected to form a social context corpus . from this corpuswe extract the best replacement tweet by calculating the longest common n - gram .this represents the tweet with the most information that describes the target resource intended by the author . within some tweets , multiple linkscoexist within the same text .these co - occurring resources share the same context and maintain a certain relevancy in most cases .a list of those co - occurring resources are extracted and filtered for redundancies . finally , the textual components of the tweets are extracted after removing usertags , uris , social interaction symbols like `` rt '' .we named the document composed of those text - only tweets in the form of phrases the _ `` tweet document''_. figure [ fig : json ] illustrates the json object produced from social mining the resource as described above ..... reconstruction : { " uri " : " http://ws-dl.blogspot.com/2012/02/2012-02-11-losing-my-revolution -year.html " , " related tweet count " : 290 , " related hashtags " : " # history # jan25 # sschat # arabspring # jrn112 # archives # in # revolution # iipc12 # mppdigital # egypt # recordkeeping # twitter # egyptrevolution # digitalpreservation # preservation # webarchiving # or2012 # 1anpa # socialmedia " , " users who talked about this " : " ] : ) ... " , " all associated unique links : " : " http://t.co/zrastg5o http://t.co/exhlstrf http://t.co/3gib6oi3 http://t.co/arvqcqfp ... " , " all other links associated : " : " http://www.cs.odu.edu/~mln / pubs / tpdl- 2012/tpdl-2012.pdf http://dashes.com/anil/2011/01/if-you-didnt-blog-it-it-didnt-happen.html " , " most frequent link appearing : " : " http://t.co/0a1q2fzz " , " number of times the most frequent link appearing : " : 19 , " most frequent tweet posted and reposted : " : " you may have seen this already .arab spring digital content is apparently being lost . " , " number of times the most frequent tweet appearing : " : 23 , " the longest common phrase appearing : " : " you may have seen this already arab spring digital content is apparently being lost " , " number of times the most common phrase appearing : " : 28 } .... from the social extraction phase above we gathered information that helps us to infer the aboutness and context of a resource . given this context ,can we utilize it in obtaining a viable replacement resource to fill in the missing one and provide the same context ? to answer this, we utilize the work of klein and nelson in defining the lexical signatures of web pages as discussed earlier .first , we extract the tweet document as described above .next , we remove all the stop words and apply porter s stemmer to all the remaining words .we calculate the term frequency of each stemmed word and sort them from highest occurring to the lowest .finally , we extract the top five words to form our tweet signature . on the one hand , and using this tweet signature as a query , we utilize google s search engine to extract the top 10 resulting resources . 
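a minimal sketch of the tweet - signature construction just described is given below ( the co - occurring pages extracted from the tweets are considered next ) . the stop - word list is a small placeholder , porter stemming is taken from nltk as an assumed tooling choice , and the example tweets are adapted from the json object in figure [ fig : json ] .

....
import re
from collections import Counter
from nltk.stem import PorterStemmer

# minimal placeholder stop-word list; a full list would be used in practice
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "this",
             "that", "you", "may", "have", "be", "it", "on", "for", "rt"}

def tweet_signature(tweet_texts, k=5):
    # top-k most frequent stemmed terms across all tweets about a resource,
    # later used as a search-engine query for the missing resource
    stemmer = PorterStemmer()
    counts = Counter()
    for text in tweet_texts:
        text = re.sub(r"https?://\S+|[@#]\w+", " ", text.lower())  # strip links and tags
        for word in re.findall(r"[a-z]+", text):
            if word not in STOPWORDS:
                counts[stemmer.stem(word)] += 1
    return [term for term, _ in counts.most_common(k)]

tweets = [
    "You may have seen this already. Arab Spring digital content is apparently "
    "being lost. http://t.co/0a1q2fzz",
    "Arab Spring digital content being lost #archives #digitalpreservation",
]
print(" ".join(tweet_signature(tweets)))
....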
on the other hand ,we collect all the other co - occurring pages in the tweets obtained by the api .these pages combined produce a replacement candidate list of resources .one or more of which can be utilized as a viable replacement of the resource under investigation . to choose which resource is more relevant and a possibly better replacement we utilize once more the tweet document extracted earlier . for each of the extracted pages in the candidate list , we download the representation and utilize the boilerpipe library in extracting the text within .the library provides algorithms to detect and remove the `` clutter '' ( boilerplate , templates ) around the main textual content of a web page . having a list of possible candidate textual documents and the tweet document , the next step is to calculate similarity .the pages are sorted according to the cosine similarity to the tweets page describing the resource under reconstruction . at this stagewe have extracted contextual information about the resource and a possible replacement .the next step is to measure how well the reconstruction process was undergone and how close is this replacement page is to the missing resource .since we can not measure the quality of the discovered context or the resulting replacement page to the missing resource , we have to set some assumptions .we extract a dataset of resources that are currently available on the live web and assume they no longer exist .each of these resources are textual based and neither media files nor executables .each of these resources has to have at least 30 retrievable tweets using topsy s api to be enough to build context .we collect a dataset of 731 unique resources following these rules .we perform the context extraction and the replacement recommendation phases .we download the resource under investigation ( ) and the list of candidate replacements from the search engines ( ) and the list of co - occurring resources ( ) . for each we use the boilerpipe library to extract text and use cosine similarity to perform the comparisons . for each resource, we measure the similarity between the ( ) and the extracted tweet page . for each element in ( )we calculate the cosine similarity with the tweet page and sort the results accordingly from most similar to the least .we repeat the same with the list of co - occurring resources ( ) .then we calculate the similarity between ( ) and ( ) indicating the top result obtained from the search engine index .then , we compare ( ) with each of the elements in ( ) and ( ) to demonstrate the best possible similarity .figure [ fig : eval ] illustrates the different similarities sorted for each measure and shows that 41% of the time we can extract a significantly similar replacement page ( ) to the original resource ( ) by at least 70% similarity .finally , we needed to validate the effectiveness of using the tweet signature as a query string to the search engine . using the tweet signature extracted from tweets associated with an existing resource against the search engine api and locating the rank in which the resource appear in the results list , we calculate the mean reciprocal rank to be 0.43 . ]in this study we verify our previous analysis and estimation of the percentage missing of the resources shared on social media .the function in time still holds in modeling the percentage disappearing from the web . 
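the ranking of candidate replacement pages against the tweet document can be sketched as follows . tf - idf weighting and scikit - learn are assumed tooling choices ( the paper specifies cosine similarity but not the term weighting ) , and the input texts are hypothetical stand - ins for the boilerpipe - extracted page contents .

....
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates(tweet_document, candidate_texts):
    # rank candidate replacement pages by cosine similarity of their extracted
    # text to the tweet document (main-content extraction assumed done already)
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform([tweet_document] + list(candidate_texts))
    sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    order = sims.argsort()[::-1]
    return [(int(i), float(sims[i])) for i in order]

# hypothetical inputs: the aggregated tweet text and three candidate pages
tweet_doc = "arab spring digital content is apparently being lost web archiving"
candidates = [
    "article about losing arab spring web content and digital preservation",
    "a recipe blog post about baking bread",
    "report on web archiving of social media during the egyptian revolution",
]
print(rank_candidates(tweet_doc, candidates))
....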
as for the model estimated for the amount archived it showed an alteration .the slope of the regression line in the model stayed the same while the y - intercept varied .we deduce that a possible explanation to this phenomena is due to timemap shrinkage .previously , timemaps incorporated search engine caches as mementos which was removed in the most recent memento revision .next , we classified web resources into four different categories in regards to existence on the live web and in public web archives .then we considered the unrecoverable category where the resource is deemed missing from the live web whilst not having any archived versions .since we can not perform a full reconstruction or retrieval , we utilize the social nature of the shared resources by using topsy s api in discovering the resource s context . using this context and the co - occurring resourceswe apply a range of heuristics and comparisons to extract the most viable replacement to the missing resource from its social neighborhood .finally , we performed an evaluation to measure the quality of this replacement and found that for 41% of the resources we can obtain a significantly similar replacement resource with at least 70% similarity . for our future work, we would like to expand our investigation to incorporate other resources of different types like images and videos .a further investigation is crucial to better rank the results and account for the different types of resources .this work was supported in part by the library of congress and nsf iis-1009392 . 4 ainsworth , scott g. and alsum , ahmed and salaheldeen , hany and weigle , michele c. and nelson , michael l. : how much of the web is archived ? in _ proceedings of the 11th annual international acm / ieee joint conference on digital libraries , jcdl 11 _ , pages 133 - 136 , ( 2011 ) .bakshy , eytan and hofman , jake and mason , winter and watts , duncan : identifying influencers on twitter . in _ proceedings of the 4th acm international conference on web search and data mining , wsdm 11 _ , ( 2011 ) .bar - yossef , ziv and broder , andrei z. and kumar , ravi and tomkins , andrew .: sic transit gloria telae : towards an understanding of the web s decay . in _ proceedings of the 13th international conference on world wide web , www 04 _ , pages 328 - 337 , ( 2004 ) .baykan , eda and henzinger , monika and marian , ludmila and weber , ingmar : purely url - based topic classification . in _ proceedings of the 18th international conference on world wide web , www 09 _ , pages 1109 - 1110 , ( 2009 ) .f. benevenut , t. rodrigues , m. cha , and v. almeida . : characterizing user behav- ior in online social networks . in _ proceedings of acm sigcomm internet measure- ment conference , sigcomm 09 _ , pages 49 - 62 , ( 2009 ) .gill , alastair j. and nowson , scott and oberlander , jon : what are they blogging about ?personality , topic and motivation in blogs . in _ proceedings of the international aaai conference on weblogs and social media , icwsm 09 _ , ( 2009 ) .kan , min - yen : web page classification without the web page . _ in proceedings of the 13th international world wide web conference on alternate track papers & posters , www alt .04 _ , pages 262 - 263 , ( 2004 ) .klein , martin and nelson , michael l. : revisiting lexical signatures to re - discover web pages . 
in _ proceedings of the 12th european conference on research and advanced technology for digital libraries , ecdl 08 _ , pages 371 - 382 , ( 2008 ) .kwak , haewoon and lee , changhyun and park , hosung and moon , sue . :what is twitter , a social network or a news media ? in _ proceedings of the 19th international conference on world wide web , www 10 _ , pages 591 - 600 , ( 2010 ) .mark , gloria and bagdouri , mossaab and palen , leysia and martin , james and al - ani , ban and anderson , kenneth : blogs as a collective war diary . in _ proceedings of the acm 2012 conference on computer supported cooperative work , cscw 12 _ , pages 37 - 46 , ( 2012 ) .qi , xiaoguang and davison , brian d. : knowing a web page by the company it keeps . in _ proceedings of the 15th acm international conference on information and knowledge management , cikm 06 _, pages 228 - 237 , ( 2006 ) .salaheldeen , hany m. and nelson , michael l. : losing my revolution : how many resources shared on social media have been lost ? in _ proceedings of the second international conference on theory and practice of digital libraries , tpdl 12 _ , pages 125 - 137 , ( 2012 ) .wu , shaomei and hofman , jake m. and mason , winter a. and watts , duncan j. : who says what to whom on twitter . in _ proceedings of the 20th international conference on world wide web , www 11 _ , pages 705 - 714 , ( 2011 ) .starbird , kate and muzny , grace and palen , leysia : learning from the crowd : collaborative filtering techniques for identifying on - the - ground twitterers during mass disruptions . in _ proceedings of the 9th international iscram conference , iscram 12 _ , ( 2012 ) .starbird , kate and palen , leysia : ( how ) will the revolution be retweeted ? : information diffusion and the 2011 egyptian uprising . in _ proceedings of the acm 2012 conference on computersupported cooperative work , cscw 12 _ , pages 7 - 16 , ( 2012 ) .vieweg , sarah and hughes , amanda l. and starbird , kate and palen , leysia : microblogging during two natural hazards events : what twitter may contribute to situational awareness . in _ proceedings of the sigchi conference on human factors in computing systems , chi 10 _ , pages 1079 - 1088 , ( 2010 ). d. zhao and m. b. rosson . :how and why people twitter : the role that micro- blogging plays in informal communication at work . _ in proceedings of the acm 2009 international conference on supporting group work .group 09 _ , pages 243- 252 , ( 2009 ) .
in previous work we reported that resources linked in tweets disappeared at the rate of 11% in the first year followed by 7.3% each year afterwards . we also found that in the first year 6.7% , and 14.6% in each subsequent year , of the resources were archived in public web archives . in this paper we revisit the same dataset of tweets and find that our prior model still holds and the calculated error for estimating percentages missing was about 4% , but we found the rate of archiving produced a higher error of about 11.5% . we also discovered that resources have disappeared from the archives themselves ( 7.89% ) as well as reappeared on the live web after being declared missing ( 6.54% ) . we have also tested the availability of the tweets themselves and found that 10.34% have disappeared from the live web . to mitigate the loss of resources on the live web , we propose the use of a `` tweet signature '' . using the topsy api , we extract the top five most frequent terms from the union of all tweets about a resource , and use these five terms as a query to google . we found that using tweet signatures results in discovering replacement resources with 70+% textual similarity to the missing resource 41% of the time . _ keywords : _ archiving , social media , digital preservation , reconstruction
in the past decade great interest has been devoted to the study of _ limited information related _ control problems .limited information related control is defined as follows : given a physical plant and a set of performance specifications such as tracking , design a controller based on limited information such that the resulting closed - loop system meets the prespecified performance specifications .there are generally two sources of limited information , one is signal quantization , and the other is signal transmission through various networks . in designing a digital control system , signal quantization induced by signal converters such as a / d , d / a and computer finite word - length limitation is unavoidable . to compensate this ,traditional design methods generally proceed like this : first design a controller ignoring the effect of signal quantization , then model it as external white noise and analyze its effect on the designed system .if the performance is acceptable , it is okay ; otherwise , adjust controller parameters such as the sampling frequency , or do redesign ( including the choice of converters ) until satisfactory performance is obtained .recently the following problems have been asked : 1 . how to study the effect of signal quantization more rigorously ?more precisely , how will it genuinely affect the performance of the underlying control system ? 2 .if there are positive answers to the above question , can one design better controllers based on this knowledge ? to address these two problems , stability , the fundamental requirement of a control system , has been studied recently in somewhat detail .delchamps [ 1990 ] studied the problem of stabilizing an _ unstable _ linear time - invariant discrete - time system via state feedback where the state is quantized by an arbitrarily given quantizer of _ fixed _ quantization sensitivity .it turned out that there are no state feedback strategies ensuring asymptotic stability of the closed - loop system in the sense of lyapunov .instead , the resulting closed - loop system behaves chaotically .fagnani & zampieri [ 2003 ] continued this research in the context of a linear discrete - time scalar system .based on the flow information provided by the system invoked by quantization , stabilizing methods based on the lyapunov approach and chaotic dynamics of the system were discussed .ishii & francis [ 2003 ] studied the quadratic stabilization of an unstable linear time - invariant continuous - time system by designing a digital controller whose input was the quantized system state ; an upper bound of sampling periods was calculated geometrically using state feedback for the system with a carefully designed quantizer of fixed quantization sensitivity , by which the trajectories of the closed - loop system would enter and stay in a region of attraction around the origin .clearly in order to achieve asymptotic stability , quantizers with _quantization sensitivities must be adopted . in brockett & liberzon [ 2000 ] , for the system , by choosing a quantizer with time - varying sensitivities , a linear time - invariant feedback was designed to yield global asymptotic stability .this problem was also studied in elia & mitter [ 2001 ] for exponential stability using logarithmic quantizers . 
in nair & evans [ 2002 ] , exponential stabilization of the system with a quantizer is studied under the framework of probability theory .more interestingly , the simultaneous effect of sampling period and quantization sensitivity was studied in bamieh [ 2003 ] , where it is shown via simulation that system performance would become _ unbounded _ as if a quantizer of fixed sensitivity was inserted into a control loop composed of a system and an unstable controller .therefore it is fair to say that the problem performance of quantized systems is quite complicated as well as challenging .much research is still required in this area .another representation of limited information is signals suffering from time - delays or even loss , which are ubiquitous in the networked control systems ( wong & brockett [ 1997 ] , walsh _ et al . _ [ 2001 ] , and ray [ 1987 ] ) .the fast - developing secure , high speed networks ( varaiya & walrand [ 1996 ] and peterson & davie [ 2000 ] ) make control over networks possible . compared to the traditional point - to - point connection , the main advantages of connecting various system components such as processes , controllers , sensors and actuators via communication networks are wire reduction , low cost and easy installation and maintenance , etc .thanks to these merits , networked control systems have been built successfully in various fields such as automobiles ( krtolica _ et al . _ [ 1994 ] , ozguner _et al . _ [ 1992 ] ) , aircrafts ( ray [ 1987 ] and sparks [ 1997 ] ) , robotic controls ( malinowshi _ et al . _[ 2001 ] , safaric _ et al . _[ 1999 ] ) and so on .in addition , in the field of distributed control , networks may provide distributed subsystems with more information so that performance can be improved ( ishii & francis [ 2002 ] ) .however , networks inevitably introduce time delays and packet dropouts due to network propagation , signal computation and coding , congestion , etc . , which lead to limited information for the system to be controlled as well as the controller , thus complicating the design of controllers and degrading the performance of control systems or even destabilizing them ( zhang _ et al ._ [ 2001 ] ) .therefore it is very desirable to reduce time delays and packet dropouts when implementing a networked control system . for the limitation of space , fornow we will concentrate on discussing a network protocol proposed by walsh , beldiman , bushnell , and hong , _ et al . _ ( walsh _ et al . _[ 1999 , 2001 , 2002a , 2002b ] ) since our proposed one is in the same spirit as theirs . fora more complete review on networked control systems and more references , please refer to zhang & chen [ 2003 ] .one effective way to avoid large time delays and high probability of packet dropouts is by reducing network traffic . in a series of papers published by walsh , beldiman , bushnell , and hong , _ et al . _( walsh _ et al . _[ 1999 , 2001 , 2002a , 2002b ] ) , a network protocol called try - once - discard ( tod ) is proposed . in that scheme, there is a network along the route from a mimo plant to its controller . at each transmission time, each sensor node calculates the importance of its current value by comparing it with the latest one , the larger the difference is , the more important the current value is , then the most important one gets access to the network . 
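the try - once - discard arbitration just described can be sketched as follows ; this is a schematic illustration of the access rule only , not the cited authors ' implementation , and the node values are placeholders .

....
import numpy as np

def tod_arbitration(current, last_transmitted):
    # try-once-discard: the node whose current value deviates most from its last
    # transmitted value is granted access; the winner's value is sent and the
    # other nodes' values are discarded until the next transmission time
    errors = [np.linalg.norm(c - l) for c, l in zip(current, last_transmitted)]
    winner = int(np.argmax(errors))
    last_transmitted[winner] = current[winner].copy()
    return winner, last_transmitted

# toy example with three sensor nodes (placeholder values)
current = [np.array([1.0, 0.2]), np.array([0.4, -0.1]), np.array([2.5, 0.0])]
last = [np.array([0.9, 0.2]), np.array([0.0, 0.0]), np.array([0.5, 0.1])]
winner, last = tod_arbitration(current, last)
print("node granted network access:", winner)
....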
for this scheme , based on the lyapunov method and the perturbation theory , a minimal timewithin which there must have at least one network transmission to guarantee stability of networked control systems is derived .this network protocol , tod , essentially belongs to the category of dynamical schedulers . in comparison with static schedulers such as token rings, it allocates network resources more effectively .however , a supervisor computer , i.e. , a central controller , is required to compare those differences and decide which node should get access to the network at each transmission time .it is therefore complicated and possibly difficult to implement . in this paper, we introduce another technique aiming at reducing network traffic .consider the feedback system in fig . 1, where is a discrete - time system of the form : with the state , the input , the output and the reference input respectively ; is a stabilizing controller : with its state .let ] , then the closed - loop system from to is \eta ( k)+\left [ \begin{array}{cc } b & 0 \\ 0 & b_{d}\end{array } \right ] \left [ \begin{array}{c } v(k ) \\ -z(k)\end{array } \right ] + \left [ \begin{array}{c } 0 \\ b_{d}\end{array } \right ] r(k ) , \label{clsys2 } \\e_{c}(k ) & = & \left [ \begin{array}{cc } -c & 0\end{array } \right ] \eta \left ( k\right ) + r(k ) , \notag\end{aligned}\ ] ] where and are given in eqs . ( [ constraint1])-([constraint2 ] ) . to test whether the scheme adopted here is useful in the framework of networked control systems, we have to address at least the following two concerns : * the stability of the system in fig .since stability is fundamental to any control system , the first question about this system is its stability . in this paper , the lyapunov stabilityis studied in detail : 1 . given that both and are stable , the system is _ locally _ exponentially stable ( lemma 1 ) .however , the behavior of the state trajectory ( , ) , starting outside the stability region , is hard to predict .a scalar case is studied in detail to illustrate various dynamics the system can exhibit ( sec .2.1 ) : its trajectory may converge to an equilibrium which is not necessarily the origin ( proposition 1 , corollary 1 ) , or be periodic ( theorem 3 , theorem 4 ) , or aperiodic ( theorem 1 , theorem 2 ) , which can either be quasiperiodic or exhibit sensitive dependence on initial conditions a sign of chaos , advocating novel control method chaotic control .3 . for higher - order cases ,a positively invariant set is constructed ( theorem 5 ) .finally it is proved that the set of all initial points whose closed - loop trajectories tend to an equilibrium as has lebesgue measure zero if either or is unstable ( theorem 6 ) .* this research is mainly devoted to the study of networked control systems ( ncss ) , hence it is natural and necessary to analyze its effectiveness in the framework of networked control systems .an example is used to illustrated the efficacy of our scheme ( sec . 3 ) .the outline of this paper as follows . sec .2 is devoted to the study of stability .an example is constructed to show the effectiveness of our scheme in sec .3 . 
some concluding remarks are in sec .in this section , we discuss the stability of the system in eq .( [ clsys2 ] ) .firstly a sufficient condition ensuring local exponential stability is derived .secondly concentrated mainly on scalar cases , the intriguing behavior of the dynamics of the system is studied in detail .it appears that the system behaves chaotically .finally it is proven that the lebesgue measure of the set of trajectories converging to a certain equilibrium is zero if either the system or the controller is unstable . letting , the system in eq .( [ clsys2 ] ) becomes \eta ( k)+\left [ \begin{array}{cc } b & 0 \\ 0 & b_{d}\end{array } \right ] \left [ \begin{array}{c } v(k ) \\ -z(k)\end{array } \right ] , \notag \\ \left [ \begin{array}{c } u_{c}(k ) \\ y_{c}\left ( k\right)\end{array } \right ] & = & \left [ \begin{array}{cc } 0 & c_{d } \\ c & 0\end{array } \right ] \eta ( k)+\left [ \begin{array}{cc } 0 & d_{d } \\ 0 & 0\end{array } \right ] \left [ \begin{array}{c } v(k ) \\ -z(k)\end{array } \right ] , \label{switch } \\ \left [ \begin{array}{c } v(k ) \\ -z(k)\end{array } \right ] & = & \left [ \begin{array}{c } h_{1}\left ( u_{c}\left ( k\right ) , v(k-1)\right ) \\ -h_{2}\left ( y_{c}\left ( k\right ) , z(k-1)\right)\end{array } \right ] , ~~ k\geq0 . \notag\end{aligned}\ ] ] then , we have the following result regarding local stability . if both the system and the controller are stable , then the origin is locally exponentially stable . *proof : * define , ~~ \tilde{c}=\left [ \begin{array}{cc } 0 & c_{d } \\ c & 0\end{array } \right ] .\ ] ] since both and are stable , where is the spectral radius of a square matrix .then for any given satisfying , there exists a matrix norm such that [ huang , 1984 ] .furthermore , this matrix norm satisfies for any two matrices and of dimension .therefore , for a vector of dimension , one can define a vector norm such that .one way to define such a norm is the following : let denote the zero vector of dimension , define \right\| _ { * } , \ ] ] then \right\| _ { * } \leq \left\| m\right\| _ { * } \left\| \left [ x,\mathcal{o},\cdots , \mathcal{o}\right ] \right\| _ { * } = \left\| m\right\| _ { * } \left| x\right| _ { * } .\ ] ] for a vector of dimension , denote by the zero vector of dimension , define ^{^{\prime } } \right| _ { * } ] .then , \]]where , ^{^{\prime } } ] , under the same iteration , we get another trajectory of , the following plot ( fig.16 ) is the difference between these two of the last 1200 points of the iteration : from this figure , one can clearly see sensitive dependence on initial conditions .in general , the spectra of a chaotic orbit will be continuous .here we draw the spectrum of starting from ] .we call such a surface .the point is the origin of on .furthermore , the line the stable manifold of and similarly , the line the unstable manifold of .suppose a trajectory of the system starts from a point and is governed by , if ( or in general ) on some surface , then the trajectory will contract along and stretch along .due to the eq .( [ sub2_b ] ) , after some time , will move according to the stable subsystem . at this moment, will leave the surface , and move toward the origin . due to the eq .( [ sub1_b ] ) , after some time , it will move again on some surface for some ] , , we have is arbitrarily chosen , the result follows . in particular , assume we have a scalar system with a static state feedback : where . 
then following the above procedure , where can be any equilibrium .an upper bound has been found for all equilibria .will any of these equilibria be stable if either or in unstable ?we have a result reminiscent of that in delchamps [ 1990 ] assume either or is unstable , and is invertible , then the set of all initial points whose closed - loop trajectories tend to an equilibrium as has lebesgue measure zero .* proof : * denote this set by .let be the generalized stable eigenspace of eq .( [ switch ] ) , then the lebesgue measure of is zero since eq . ( [ switch ] ) is unstable .suppose , following the process in the proof of corollary 1 , there exist and some vector such that \varpi , \label{unstab1}\]]and eq .( [ no - update ] ) holds for all . since is unstable , for all .furthermore , the invertibility of implies that is uniquely determined by . due to the uniqueness of the state trajectory the system in eq .( [ switch ] ) , note also that this system is essentially a system with unit time delay , the trajectory starting from is identical to that starting from .define a mapping as and satisfying eqs .( unstab1 ) and ( [ no - update ] ) , then is injective. therefore the lebesgue measure of is zero .in this section , one example will be used to illustrate the effectiveness of the scheme proposed in this paper . in this example , the networked control system consists of two subsystems , ( each composed of a system and its controller ) , the outputs of the controlled systems will be sent respectively to controllers via a network . for the ease of notation, we denote the two systems , their controllers and their outputs by , , , , and respectively . heretwo transmission methods will be compared : one is just letting the outputs transmitted sequentially , i.e. , the communication order is $ ] .another method is adding the nonlinear constraint to the subsystem composed of and , if the difference between the two adjacent signals are greater than , then this subsystem gets access to the network ; otherwise the other gets access . here , we will compare the tracking errors produced under these two schemes respectively .for convenience , we call the first method the _ regular static scheduler _ and the second the _ modified static scheduler_. 
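before turning to the numerical example , the two schedulers can be sketched as follows . the threshold value and the use of a euclidean norm are assumptions made for illustration ; the text above only states that the first subsystem transmits whenever the difference between two adjacent signals exceeds a fixed threshold , and that the other subsystem transmits otherwise .

```python
import numpy as np

def regular_static_schedule(k):
    """Regular static scheduler: the two subsystems simply alternate access."""
    return 1 if k % 2 == 0 else 2

def modified_static_schedule(y1, y1_last_sent, delta):
    """Modified static scheduler (send-on-delta): subsystem 1 gets the slot whenever
    its output has moved by more than `delta` since its last transmission; otherwise
    the slot goes to subsystem 2."""
    return 1 if np.linalg.norm(np.asarray(y1) - np.asarray(y1_last_sent)) > delta else 2

def network_step(k, y1, y2, held, delta=0.05, modified=True):
    """One transmission slot: only the scheduled subsystem refreshes the value
    held by its controller on the other side of the network."""
    node = (modified_static_schedule(y1, held[1], delta) if modified
            else regular_static_schedule(k))
    held[node] = np.asarray(y1 if node == 1 else y2)
    return held

# illustrative use with scalar outputs
held = {1: np.array([0.0]), 2: np.array([0.0])}
held = network_step(0, np.array([0.2]), np.array([0.01]), held)   # subsystem 1 transmits
```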
the controlled system is : x_{1}\left ( k\right ) \\ & & + \left [ \begin{array}{c } 0.0050 \\ 0.0991 \\ -0.0052 \\ -0.1155\end{array } \right ] w\left ( k\right ) + \left [ \begin{array}{cc } -0.0050 & -0.0000 \\ -0.1000 & -0.0001 \\ 0.0000 & -0.0005 \\ 0.0103 & -0.0105\end{array } \right ] u_{1}\left ( k\right ) , \\ z_{1}\left ( k\right ) & = & \left [ \begin{array}{cccc } 1 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0\end{array } \right ] x_{1}\left ( k\right ) + \left [ \begin{array}{c } -1 \\ 0\end{array } \right ] w\left ( k\right ) , \\ y_{1}\left ( k\right ) & = & \left [ \begin{array}{cccc } 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\end{array } \right ] x_{1}\left ( k\right ) , \end{aligned}\ ] ] and is : x_{2}\left ( k\right ) \\ & & + \left [ \begin{array}{c } 0.0000 \\ 0.0100 \\ -0.0001 \\ -0.0102\end{array } \right ] w\left ( k\right ) + \left [ \begin{array}{cc } -0.0000 & -0.0000 \\ -0.0100 & -0.0000 \\ 0.0000 & -0.0000 \\ 0.0001 & -0.0010\end{array } \right ] u_{2}\left ( k\right ) ,\\ z_{2}\left ( k\right ) & = & \left [ \begin{array}{cccc } 1 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0\end{array } \right ] x_{2}\left ( k\right ) + \left [ \begin{array}{c } -1 \\ 0\end{array } \right ] w\left ( k\right ) , \\ y_{2}\left ( k\right ) & = & \left [ \begin{array}{cccc } 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0\end{array } \right ] x_{2}\left ( k\right ) , \end{aligned}\ ] ] where is a unit step . and are tracking errors .controllers and can be obtained using the technique in chen & francis [ 1995 ] .denote the first element of by and that of by ; the second element of by and that of by , then the subsystem with variables is controlled by and the subsystem with variables is controlled by .the simulation results are in figs .1819 . from these two figures, one finds that the tracking errors approach zero faster under the modified static schedular than under the regular one .note that both systems are unstable .if one of the two systems is stable , one can expect better convergence rate .in essence , our scheme is based on the following principle : allocate access to the network to the systems with faster dynamics first , then take care of the systems of slower dynamics . in this way, we hope we can improve system performance .interestingly , a similar idea is explored in hristu & morgansen [ 1999 ] .in this paper , a new networked control technique is proposed and its effectiveness is illustrated via simulations .the complicated dynamics of this type of systems is studied both numerically and theoretically .a simulation shows that the scheme proposed here has possible application in networked control systems .there are several problems guiding our further research : 1 ) continuity of state trajectories with respect to the initial points under space partition induced by the discontinuities of the system .2 ) how to find a precise characterization of the attracting set for our system , and is it topologically transitive ( i.e. , is it a chaotic attractor ) ?topological transivity , an indispensable feature of a chaotic attractor , is closely related to ergodicity of a map . 
as discussed in sec .2.1.3 , the proof of topological transitivity or ergodicity is difficult for our system from the point of view of measure theory due to the singularity of the map and its violation of conditions in lasota & yorke [ 1973 ] .however , this investigation is unavoidable should one want to find the chaotic attractor inherited in the system studied .3 ) for different system parameters , different aperiodic orbits can be obtained , what are the differences among these orbits ?in particular , given two aperiodic orbits , one generated from a system having no periodic orbits and the other generated by a system having periodic orbits , is there any essential difference between them ?4 ) in sec . 2.1.2, periodic orbits are constructed for some originally stable ( ) and originally unstable ( ) systems .however given a system , how to determine if there are periodic orbits , and if so , how to find all of them is still an unsolved problem .5 ) how to effectively design controllers based on chaotic control ? obviously the solution of this problem depends on the forgoing ones .6 ) how to incorporate properly the scheme proposed in this paper into the framework of networked control systems ? the simulation in sec . 3 is naive , more research is required here to make the proposed scheme practical .the first author is grateful to discussions with dr .michael li .this work was partially supported by nserc .the authors are also grateful to the anonymous reviewers and the editor for their resourceful comments and constructive suggestions .malinowshi , a. , booth , t. , grady , s. & huggins , b. [ 2001 ] `` real time control of a robotic manipulator via unreliable internet connection , '' 27th ieee conf .indus . elect .society , iecon01 , 70 , 170 - 175 .zhang , g. & chen , t. [ 2003 ] `` analysis and design of networked control systems , '' technical report in the group of advanced control systems , 2003 ( available at http://www.ece.ualberta.ca/ gfzhang / research / research.htm ) .
in this paper , a nonlinear system aimed at reducing the signal transmission rate in a networked control system is constructed by adding nonlinear constraints to a linear feedback control system . its stability is investigated in detail . this nonlinear system turns out to exhibit very rich dynamical behavior : in addition to local stability , its trajectories may converge to a non - origin equilibrium , be periodic , or be merely oscillatory . furthermore , it exhibits sensitive dependence on initial conditions , a sign of chaos , as well as complicated bifurcation phenomena . control of the resulting chaotic system is then discussed . all of these issues are studied in detail for the scalar case , and some of the difficulties involved in the study of this type of system are analyzed . finally , an example is used to demonstrate the effectiveness of the scheme in the framework of networked control systems . * keywords * : stability , attractor , nonlinear constraint , chaos , bifurcation , tracking , networked control systems .
the continued progress in the field of numerical relativity has demonstrated the feasibility of evolving strong curvature fields , including black holes , on a computer .recent calculations of spacetimes with black holes include simulations of highly distorted black holes , colliding black holes , the formation of black holes from imploding gravitational waves , and balls of collisionless matter .calculations like these are important stepping stones to full 3d simulations of two coalescing black holes .such simulations will be very important for understanding gravitational signals that will be detected by new gravitational wave detectors , such as the ligo and virgo laser interferometers and advanced bar detectors .however , as black holes are accompanied by singularities , their presence in numerical spacetimes leads to extreme dynamic ranges in length and time , making it difficult to maintain accuracy and stability for long periods of time .all calculations of black hole spacetimes to date , like those mentioned above , develop difficulties at late times due to the large dynamic ranges that must be computed .these difficulties are even more severe when black holes are evolved in 3d . the traditional way to deal with these problems has been to take advantage of the coordinate degrees of freedom inherent in the einstein equations to avoid the extreme curvature regions .the `` many fingers of time '' in relativity allows one to evolve particular regions of space without evolving the regions in which singularities are present or forming . these so - called singularity avoiding slicing conditions wrap up around the singular region ( see fig .[ fig : wrap ] ) so that a large fraction of the spacetime outside the singular region can be evolved .several different types of singularity avoiding slicings have been proposed and applied with variable degrees of success to a number of problems .however , these conditions by themselves do not completely solve the problem ; they merely serve to delay the breakdown of the numerical evolution . in the vicinity of the singularity ,these slicings inevitably contain a region of abrupt change near the horizon and a region in which the constant time slices dip back deep into the past in some sense .this behavior typically manifests itself in the form of sharply peaked profiles in the spatial metric functions , `` grid stretching '' , large coordinate shift on the black hole throat , _etc_. these features are most pronounced where the time slices are sharply bent towards the past ( as shown in fig .[ fig : wrap ] ) for a reason that will be discussed below .numerical simulations will eventually crash due to these pathological properties of the slicing . as these problems are even more severe in 3d , where much longer evolutions will be required to study important problems like the coalescence of two black holes , it is essential to investigate alternative methods to handle singularities and black holes in numerical relativity .cosmic censorship suggests that in physical situations , singularities are hidden inside black hole horizons . because the region of spacetime inside the horizon can not causally affect the region of interest outside the horizon , one is tempted to cut away the interior region containing the singularity and evolve only the singularity - free region outside . 
to an outside observerno information will be lost since the region cut away is unobservable .the procedure of cutting away the singular region will drastically reduce the dynamic range , making it easier to maintain accuracy and stability . with the singularity removed from the numerical spacetime ,there is in principle no physical reason why black hole codes can not be made to run indefinitely without crashing .although the desirability of a horizon boundary condition has been raised many times in the literature , it has proved to be difficult to implement such a scheme in a dynamical evolution .the boundary condition which one needs to impose on a black hole horizon , which is a one - way membrane , should be some form of out - going ( into the hole ) boundary condition . however , except for the case of linear nondispersive fields propagating in a flat spacetime , we are not aware of any satisfactory numerical out - going wave boundary conditions . waves in relativity can be nonlinear , dispersive and possess tails and other complications .moreover , what a wave is in the near zone is not even well - defined .the development of a general out - going wave boundary condition in numerical relativity is certainly highly nontrivial . in a recent paper we demonstrated that a horizon boundary condition can be realized .here we present a more detailed discussion of our methods and various extensions to that earlier work .there are two basic ideas behind our implementation of the inner boundary condition : ( 1 ) we use a `` horizon locking coordinate '' which locks the spatial coordinates to the spatial geometry and causal structure .this amounts to using a shift vector that locks the horizon in place near a particular coordinate location , and also keeps other coordinate lines from drifting towards the hole . in we investigated one particular type of shift condition , namely the `` distance freezing '' shift . herewe report on various choices of shift conditions , including the original `` distance freezing '' shift that freezes the proper distance to the horizon , an `` expansion freezing '' shift that freezes the rate of expansion of outgoing null rays , an `` area freezing '' shift that freezes the area of radial shells , and the minimal distortion shift that minimizes the global distortion in the 3-metric .some of these shifts have the advantage that they can be generalized more easily to geometries and coordinate systems other than the spherical one .the use of these shift vectors will be discussed in detail in section iii .the basic message is that the idea of a horizon locking coordinate is robust enough for many different implementations , with some implementations likely extendible to the general 3d case .( 2 ) we use a finite differencing scheme which respects the causal structure of the spacetime , which essentially means that spatial derivatives are computed at the `` center of the causal past '' of the point being updated .such a differencing scheme is not only essential for the stability of codes using large shift vectors as in those with `` horizon locking coordinates '' , but also eliminates the need of explicitly imposing boundary conditions on the horizon . as pointed out in , this is , in a sense , the horizon boundary condition without a boundary condition . since the horizon is a one - way membrane , quantities on the horizon can be affected only by quantities outside but not inside the horizon . 
hence , in a finite differencing scheme which respects the causal structure , all quantities on the horizon can be updated solely in terms of known quantities residing on or outside the horizon , and there is no need to impose boundary conditions to account for information not covered by the numerical evolution .such an approach can be applied to all kinds of source terms that one may want to deal with in numerical relativity .[ otherwise one would have to develop an out - going ( into the black hole ) wave boundary condition for each physical problem , i.e. , one out - going wave condition for gravitational waves , one for em fields , one for perfect fluids , _etc_. ] in ref . we implemented causal differencing in a first order way , which respected the causal structure but did not carefully take account of the exact light cone centers . in this paperwe report on the results obtained in the second generation of our code which explicitly and accurately takes account of the causal past of a grid zone .the basic idea in constructing the causal differencing scheme is that , for a set of differential equations written in a spacetime coordinate system with a shift vector , the finite differenced version of the equations can be obtained by:(i ) transforming the coordinate system to one without a shift , ( ii ) choosing a differencing method for this set of differential equations in the usual manner , as required by the physics involved , and ( iii ) transforming the resulting finite differenced equations back to the original coordinate system .the equations so obtained can be very different from those obtained by applying directly the usual differencing method to the set of differential equations with a shift .a direct application can easily lead to a set of unstable finite differenced equations when the shift is large .that is , the actions of coordinate transformation and finite differencing do not commute . by going through a coordinate system without a shift vector , we guarantee that the stability of the final differenced equations is independent of the shift vector .a general overview of the program we adopt is the following : suppose we want to numerically evolve a collapsing star . the initial data can be set up and evolved for a while with some suitable gauge conditions , while looking out for the generation of an apparent horizon . when one is formed and grows to a certain finite size , a shift vector can be introduced to maintain the apparent horizon at a constant coordinate positionthis determines the shift vector right at the apparent horizon .the shift at other grid points is determined by criteria which are consistent with the choice of the shift at the apparent horizon . in this paper, we study a subset of the above scenario in which a spherically symmetric black hole exists in the initial data .generalizations of our basic methods to more complicated geometries are discussed throughout this paper .we use the 3 formalism which views spacetime as a foliation by spatial 3-surfaces with a metric and extrinsic curvature tensor .( we adopt the usual notation and use latin letters to denote 3-dimensional indices . ) in this picture the spacetime metric can be written as where is the lapse function that determines the foliation of the spacetime and is the shift vector specifying the three - dimensional coordinate transformations from time slice to time slice . 
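as a concrete illustration of the 3 + 1 split just described , the sketch below assembles the spacetime metric from the lapse , shift and 3-metric in the standard adm form $ g_{tt } = - ( \alpha^{2 } - \beta_{i } \beta^{i } ) $ , $ g_{ti } = \beta_{i } $ , $ g_{ij } = \gamma_{ij } $ ; this is the textbook decomposition rather than code taken from the authors .

```python
import numpy as np

def four_metric(alpha, beta_up, gamma):
    """Assemble the spacetime metric g_{mu nu} from the standard 3+1 variables:
    lapse `alpha` (scalar), shift `beta_up` (beta^i, length 3), and spatial
    metric `gamma` (gamma_{ij}, 3x3).  Index 0 is time."""
    gamma = np.asarray(gamma, dtype=float)
    beta_up = np.asarray(beta_up, dtype=float)
    beta_lo = gamma @ beta_up                    # beta_i = gamma_{ij} beta^j
    g = np.zeros((4, 4))
    g[0, 0] = -alpha**2 + beta_lo @ beta_up      # g_tt = -(alpha^2 - beta_i beta^i)
    g[0, 1:] = beta_lo                           # g_ti = beta_i
    g[1:, 0] = beta_lo
    g[1:, 1:] = gamma                            # g_ij = gamma_{ij}
    return g

# quick consistency check: the inverse metric must satisfy g^{tt} = -1/alpha^2
alpha, beta, gamma = 0.8, np.array([0.3, 0.0, 0.0]), np.diag([2.0, 1.0, 1.0])
g = four_metric(alpha, beta, gamma)
assert np.isclose(np.linalg.inv(g)[0, 0], -1.0 / alpha**2)
```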
in this formalism , the evolution equations become \nonumber \\ & & + \beta^c d_c k_{ab } + k_{ac } d_b \beta^c + k_{cb } d_a \beta^c , \label{evolk}\end{aligned}\ ] ] and the hamiltonian and momentum constraints are respectively where is the 3-ricci tensor , is its trace , is the trace of , and is the covariant derivative associated with the 3-metric .we use the framework of spherically symmetric spacetimes to illustrate the idea of locking the horizon and the coordinate system . as shown by bernstein , hobill , and smarr ( denoted henceforth by bhs ) , the numerical construction of even a schwarzschild spacetime is nontrivial with a general choice of lapse and shift .both in this paper and in , the lapse can be arbitrarily specified .the shift is taken to be always in the radial direction with only one component , consistent with spherical symmetry .the spatial line element is taken to be the coordinates are the standard spherical coordinates on the constant 2-spheres and is a radial coordinate related to the schwarzschild isotropic coordinate by , where is a length scale parameter which is equal to the mass of the black hole . with this coordinate ,the throat is located at . the line element ( [ 3metric ] )is easily generalized to one which is suitable for numerical studies of axisymmetric spacetimes , and it includes both the radial gauge and the quasi - isotropic or isothermal gauge .the conformal factor is a function that depends only on and is specified on the initial time slice so that it satisfies the hamiltonian constraint with time symmetry and conformal flatness .the evolution equations for the 3-metric and extrinsic curvature ( here and ) are : \nonumber\\ & & + \, \alpha \ ,h_a \left ( \frac{2 h_a}{a}-\frac{h_b}{b}\right ) \label{hadot}\\ \dot{h}_b = & & \psi^{-4}\left [ \alpha r_{\theta\theta } - \alpha ' \left ( \frac{b'}{2a } + \frac{2 \psi ' b}{a\psi } \right ) + \frac{4h_b\beta \psi ' } { a \psi } + \beta \frac{h_b'}{a}\right ] + \alpha \frac{h_ah_b}{a}~ , \label{hbdot}\end{aligned}\ ] ] and the hamiltonian and momentum constraints respectively become where and are the 3-ricci components and the extrinsic curvature is written as to help simplify the form of the equations .we evolve eqns .( [ adot ] ) to ( [ hbdot ] ) with the time - symmetric schwarzschild solution as the initial data : , and . in the numerically evolved spacetime, the topology of the constant hypersurfaces is given by the single einstein - rosen bridge , although the geometry can be different .with the radial coordinate behaving as , the grid can cover a large range of circumferential radius . at the outer boundary, it suffices for our present purpose to take the metric as fixed .the inner boundary condition is the subject of this paper .bhs have developed one of the most accurate codes to date to evolve single black hole spacetimes in one spatial dimension .they carry out a thorough treatment using maximal slicing , with zero or a minimal distortion shift and nine different methods for finite differencing the evolution equations including maccormack , brailovskaya and leapfrog schemes .results presented in this paper will be compared to this standard .we first demonstrate the difficulty of using singularity avoiding slicings ( e.g. 
, maximal slicing ) using the bhs code .bhs find that the most accurate evolutions are obtained by using the maccormack or brailovskaya differencing schemes , maximal slicing and zero shift vector , although the leapfrog scheme is of comparable accuracy and is preferred in 2d .results obtained with the bhs code are considered to be very accurate , but as in all codes designed so far to evolve black holes , it develops difficulties at late times . in figs .[ fig : grr ] and [ fig : ham ] we show the best results using the bhs code with 400 zones to cover the domain from to 6 . the solid lines in fig .[ fig : grr ] are the radial metric component shown from to at every intervals .the dashed line is the coordinate position of the apparent horizon versus time .we see that the horizon is growing in radius , due to the infalling of coordinates and a spike is rapidly developing near the horizon which eventually causes the code to crash . the inaccuracy generated by the sharp spike is shown explicitly in fig .[ fig : ham ] , where the violation of the hamiltonian constraint is plotted at various times . figs .[ fig : grr ] and [ fig : ham ] indicate the code has developed substantial errors by . at this point ,the peak value of the hamiltonian constraint stops growing , as the spike in the radial metric component can no longer be resolved .errors in the apparent horizon mass are approximately 25% . at time numerical instabilities begin to grow , causing the code to crash shortly thereafter .the development of the spike is the combined effect of the grid points falling into the black hole and the collapse of the lapse .the coordinate points at smaller radii have larger infall speeds causing the radial metric component to increase towards smaller . however , at the same time , there is a competing effect due to the use of the singularity avoiding time slice .the motion of the grid points close to is frozen due to the `` collapse of the lapse . '' at small radii well inside the horizon , the latter effect dominates , and can not increase in time .this causes to develop a peak at a place slightly inside the horizon where the difference in the infalling speed of the grid points is large , but the lapse has not completely collapsed .bhs investigated using the minimal distortion shift vector as a means of reducing the shear in the metric components .however , they found that although the sharp gradients in the radial metric component are eliminated from the region containing the event horizon , the shear is transformed to the throat , where volume elements vanish as the singularity is approached .a key to stable and accurate evolutions for long times is to utilize the shift vector to lock the coordinate system with respect to the geometry of the spacetime so that there is no infalling of grid points .an obvious feature in the black hole geometry that can be used for such a purpose is the apparent horizon . here for convenience of discussion we refer to the apparent horizon as the two - dimensional spatial surface having a unit outward pointing 3-vector satisfying where is the expansion of the outgoing null rays . for a rigorous discussion of the apparent horizon , see e.g. , . 
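numerically , the expansion of the outgoing null rays is evaluated on the radial grid and the apparent horizon is located as its zero . the sketch below assumes the expansion has already been assembled into an array ( the explicit spherically symmetric expression is given in the next section ) and simply finds the outermost sign change ; the names and the use of linear interpolation are illustrative .

```python
import numpy as np

def locate_apparent_horizon(eta, theta):
    """Locate the apparent horizon as the zero of the outgoing-null expansion
    Theta(eta) sampled on the radial grid `eta`.  Assumes Theta < 0 inside the
    horizon and Theta > 0 outside; returns the outermost sign-change position
    by linear interpolation, or None if no horizon is present on the grid."""
    eta = np.asarray(eta, dtype=float)
    theta = np.asarray(theta, dtype=float)
    sign_change = np.where(theta[:-1] * theta[1:] <= 0)[0]
    if len(sign_change) == 0:
        return None
    i = sign_change[-1]                       # outermost marginally trapped surface
    t0, t1 = theta[i], theta[i + 1]
    if t0 == t1:                              # both grid values are exactly zero
        return eta[i]
    # linear interpolation between the bracketing grid points
    return eta[i] - t0 * (eta[i + 1] - eta[i]) / (t1 - t0)
```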
in this present work, we assume that there is one and only one such surface in the black hole spacetime .although the apparent horizon is not defined locally in space , it is defined locally in time , and hence is a convenient object ( in comparison to the event horizon ) to work with in numerical evolutions .there are two key steps in locking the coordinate system to the black hole geometry .we first lock the position of the apparent horizon to a fixed coordinate location .then all grid points in the spacetime are fixed with respect to the apparent horizon .in spherical geometry with the 3-metric ( [ 3metric ] ) , the expansion of outgoing null rays reduces to where the differential operator is the outgoing null vector , the action of which on the surface area is zero at the apparent horizon .therefore the coordinate location of the apparent horizon is given by the root of this equation .one might attempt to solve for the shift in this equation to make constant in time .however , despite the apparent existence of a term in the operator , the equation defining the horizon is independent of , once the time derivative of the metric function in this equation is expressed in terms of known quantities on the present time slice .this is to be expected as the location of the apparent horizon on a particular time slice should be independent of the value of on that slice . instead, as the time rate of change of is a function of the shift , one can determine the `` horizon locking '' shift by solving for the shift in the equation that is , we require the zero of the function at to be time independent .this gives a condition on .alternatively , one can also determine the shift by requiring that the area of the coordinate surface defining the horizon at some time be held fixed from that point in time forward .in a nondynamical spacetime , such as schwarzschild , this condition will also lock the coordinate location of the horizon . for the present spherically symmetric case, this requirement can be written as {\eta = \eta_{ah}}=0 \ , .\label{arealock}\ ] ] one can think of other ways to determine the shift at the apparent horizon for locking it .the methods given by ( [ shiftcond ] ) or ( [ shiftcond2 ] ) are chosen for their extensibility into the 3d case .although the present discussion has focused on `` locking '' the horizon , in general one would like to have the ability to fully control the motion of the horizon , i.e. 
, place the horizon at the coordinate location of our choosing , which need not be one fixed value for all times .for example , if matter is falling into the black hole , it would be natural to have the horizon of the black hole expand in coordinate location .a particularly interesting case is to have the black hole move across the numerical grid with the coordinate location of the horizon changing accordingly in time .such controlled motion of the horizon can be achieved by a simple variant of the method described above .work in this direction will be discussed in detail elsewhere .preventing the apparent horizon from drifting is only part of the story , as we must also specify the shift at other locations to prevent pathological behavior of the coordinate system throughout the spacetime and to prevent grid points from `` crashing '' into the apparent horizon .we have investigated the following four implementations of the shift : * `` distance freezing '' shift*:with the apparent horizon fixed at constant , one can tie all grid points to the horizon by requiring the proper distance between grid points to be constant in time . in the spherically symmetric case , this determines the shift through the differential equation ( [ adot ] ) .setting gives equation ( [ shifta ] ) can be solved for by integrating from the horizon to the outer boundary for regions outside the black hole , and from the horizon to the inner boundary of the numerical grid for regions inside the horizon .we use a fourth - order runge - kutta method to solve the first order equation on each time slice .the use of this shift condition has been briefly discussed in . in section v below , we present results obtained using this shift , with recent improvements in implementation incorporated .* `` area freezing '' shift*:alternatively , one can choose to freeze in time the area of the surfaces of constant radial coordinate so that .this yields the following equation for an advantage of using the `` area freezing '' shift is that it is tied nicely to the apparent horizon when the horizon is locked with eq .( [ arealock ] ) . in fact , eq .( [ areas ] ) is simply an application of eq .( [ arealock ] ) not just on the ah , but everywhere .another advantage of such a choice is that it yields an algebraic expression for the shift , hence eliminating the need for a spatial integration , as in eq .( [ dfreeshift ] ) .furthermore , this shift condition allows the surface area ( a sensitive function in the evolution of black hole spacetimes ) to be well - defined in time and not subject to numerical discretization errors .however , just like the distance freezing shift , this shift condition is strongly coordinate dependent .the usefulness of the distance freezing shift in a full 3d black hole spacetime depends on one s ability to pick a suitable `` radial direction '' from the hole , e.g. , the direction of maximum grid stretching .similarly , the usefulness of the area freezing shift in the full 3d case depends on one s ability to pick suitable closed 2-surfaces for locking . in the following we turn to two other shift conditions that are completely geometric in nature , andtherefore can be generalized in a straightforward manner to other coordinate systems and to 3d treatment .* `` expansion freezing '' shift*:a choice of shift condition closely related to the area freezing shift is obtained by freezing the expansion ( [ theta ] ) of closed surfaces which have spatially uniform expansions , i.e. 
, in spherical symmetry , these surfaces are simply surfaces of constant radial coordinates .we have studied the implementation of both condition ( [ tex ] ) and the similar condition the results are similar .this shift condition , just like the area freezing shift , ties in naturally with the horizon locking condition ( [ shiftcond ] ) .it also yields an algebraic expression for , namely , \label{tex3}\end{aligned}\ ] ] with . on the horizon , but this is not so inside nor outside the horizon .this condition can be generalized to the 3d case in a straightforward manner as the constant surfaces can be regarded , in some sense , as concentric surfaces centered at the hole . at presentwe are developing a scheme for determining such closed 2d surfaces with uniform expansion in 3d space .* minimal distortion shift : * a final option that we consider in this paper is the minimal distortion shift , written in covariant form as in spherical coordinates , this reduces to the following second order equation for one of the most attractive properties of this shift condition is its geometric nature .it minimizes coordinate shear in a global sense .also its formulation is completely independent of coordinates so that it may be equally applied to a single black hole in spherical coordinates as to a two black hole coalescence in 3d cartesian coordinates .however , this shift vector is more difficult to implement numerically , particularly in three dimensions .we solve separately for the two regions inside and outside the apparent horizon . outside the horizonwe treat the equation for as a two point boundary value problem using ( [ hlock ] ) or ( [ arealock ] ) to fix at the horizon and setting at the outer edge . inside the horizonwe use a second order backward substitution method specifying and at the horizon . is computed from the outer domain solution , thus allowing for smooth extensions of the numerical solution through the horizon .the difficulty in extending this to the 3d case is that one has to solve a set of coupled elliptic partial differential equations with an irregular inner boundary on each time slice during the evolution .all the above discussed shift conditions have been found to successfully lock the coordinate system to the spacetime geometry in the spherically symmetric case .the results will be given in section v. the basic point we want to make here is that the idea of locking the coordinate system to the geometry in black hole spacetimes by making use of the apparent horizon is robust enough that there can be many different ways to implement it .this robustness makes it promising for implementation in 3d . an important point to noteis that these shifts can be applied to either all of the spacetime grid , or just in the vicinity of the black hole .the freedom of turning the shift off at a distance away from the black hole , so that the coordinates are not necessarily everywhere locked , is important when we go away from spherical symmetry .for example , in multiple black hole spacetimes , one would like to be able to lock the grid in one part of the spacetime to one hole , while locking other parts to other black holes .we will demonstrate in section v that such partial locking is possible .one consequence of introducing a nonzero shift vector in horizon locking coordinates is that inside the horizon the future light cone is tilted inward towards smaller .if the shift is such that the horizon stays at constant coordinate value , i.e. 
, the horizon locking shift of section iii above , the light cone will be completely tilted to one side inside the horizon .this feature of the light cone is convenient for the implementation of the horizon boundary condition .it allows us to excise the singular region inside some fixed grid point .since grid points are fixed to the coordinates , data at a particular grid point depends only on past data from grid points at equal or larger _ coordinate _ values inside the horizon .note that without such a horizon locking shift vector this will not necessarily be true .the remaining task is then to construct a finite differencing scheme which can maintain the causal relations between grid points . of course , due to the courant stability requirement , causal relations between grid points can not be maintained exactly .inevitably information propagates slightly faster than the speed of light in a finite differencing equation .however , by keeping buffer zones inside the horizon in the numerical evolution domain , the light cones will be tilted to such an extent that even the innermost grid point can be evolved with information on grid points in the buffer zones while having the courant condition safely satisfied .with such a scheme there is no need to supply boundary conditions at the inner edge of the grid , since all points which can be affected by the inner edge point , even on the finite differencing level , are off the grid .we shall see that the causal differencing scheme is useful not only for imposing the horizon boundary condition , but that it is also essential for stability when evolving the differential equations with a large shift vector . in the following subsections, we shall first illustrate this with a simple scalar field example , before turning to the general relativistic case .consider a simplified flat space and a simple scalar field described by introduce a shift vector by performing a coordinate transformation the spacetime becomes with the evolution equation ( [ scalev ] ) in the first order form becomes one might attempt to finite difference ( [ 34 ] ) and ( [ 35 ] ) in terms of the usual leapfrog scheme : in obvious notation .however , a straightforward von neumann stability analysis shows that for any given , the system of finite difference equations ( [ 36 ] ) and ( [ 37 ] ) is _ unstable _ for a large enough .[ fig : cenun ] demonstrates the development of obtained by this scheme . the initial data for is a gaussian represented by the dashed line and .the shift is taken to be the following function of space and time that the field is experiencing a shift increasing in time , but decreasing in , analogous to the black hole case . is plotted at equal time intervals up to time as solid lines in the figure .the initial gaussian splits into two . due to the shiftone component has a large coordinate velocity moving rapidly to the left , while the other component has a much smaller coordinate velocity .we see that at the last plotted time , the evolution becomes unstable .the reason for this instability is easy to understand . 
in trying to update the data at the point on the slice[ the point , the `` region of finite differencing '' used on the right - hand side of ( [ 37 ] ) is from to .however , the point has the edges of its backward light cone on the slice , with the presence of a shift , given by on the left , and on the right .hence if , or for a negative , part of the `` region of causal dependence '' of the point lies outside the `` region of finite differencing '' , and the evolution becomes unstable . to enforce that the finite differencing scheme follows the causal structure of the numerical grid , which becomes nontrivial in the presence of a shift ,the idea we propose is the following : * in trying to finite difference a set of differential equations with a shift , like eqns .( [ 34 ] ) and ( [ 35 ] ) , we first transform back to a coordinate without shift . in the case above , this means going back to ( [ 28 ] ) . * in this new coordinate system without shift , the causal structure is trivial , and the finite differencing scheme can be picked in the usual manner according to the structure of the equation . in the present case , for example , it can be the leapfrog scheme notice that we have denoted as , instead of , since is different from * the finite differenced equations ( [ 39 ] ) and ( [ 40 ] ) are in terms of and .we then transform these difference equations back to the coordinate system with a shift .this procedure gives the `` causal leapfrog differencing '' where .notice that here the dependent variable is _not _ transformed back to . in using the variable ,the final equations are simpler and do not involve the derivative of .the price to pay is that , if we want to reconstruct from , we have to obtain also the function of the coordinate transformation .the function can be evolved with the auxiliary equation a straightforward von neumann analysis shows that the stability of the two sets of finite differenced equations ( [ 39 ] ) , ( [ 40 ] ) and ( [ 41 ] ) , ( [ 42 ] ) are the same , as they should be by construction .( [ 41 ] ) and ( [ 42 ] ) have the correct causal property independent of the value of the shift .[ fig : cau ] shows the evolution of with this causal leapfrog differencing scheme using the same initial data , grid parameters and shift as in fig .[ fig : cenun ] .the evolution is carried out to , with shown at equal time intervals .clearly the evolution is stable .the situation with the einstein equations is no different from the scalar field equation as far as the problem of finite differencing is concerned . to find the finite differenced version of eqns .( [ adot ] ) to ( [ hbdot ] ) , we transform to a coordinate system without a shift the line element in these new coordinates can be written as it is easy to see that the two sets of metric functions are related by in terms of this new set of variables , the einstein evolution equations ( [ adot ] ) to ( [ hbdot ] ) become finite differencing these equations is straightforward .for example , if the leapfrog scheme is chosen , upon transforming back to the coordinates , the resulting finite differenced equations have exactly the same structure as ( [ 41 ] ) and ( [ 42 ] ) .there are terms without , evaluated on the slice at the causal center of the backward light cone of the point , and terms involving evaluated on the slice at the corresponding location of the causal center on this slice . 
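the essential point of causal differencing , namely that the stencil must be centred on the causal past of the point being updated , can also be illustrated with a model problem even simpler than the scalar wave equation above . the sketch below advects a pulse with a large uniform `` shift '' : a centred leapfrog stencil goes unstable once the shift moves data by more than one zone per step , whereas an update that evaluates the field at the foot of the characteristic ( a semi - lagrangian step , not the causal leapfrog scheme derived here ) remains stable . all parameters are illustrative .

```python
import numpy as np

def evolve(nx=200, nt=400, courant=2.5, causal=True):
    """Advect a pulse with  du/dt = beta * du/dx  on a periodic grid.
    `courant` = beta*dt/dx; above 1, the causal past of a grid point lies
    outside a centred three-point stencil."""
    dx = 1.0 / nx
    beta = 1.0                                   # uniform 'shift', for simplicity
    dt = courant * dx / beta
    x = np.arange(nx) * dx
    u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial gaussian pulse
    u_prev = u.copy()                            # crude startup for leapfrog
    for _ in range(nt):
        if causal:
            # causal update: evaluate u at the foot of the characteristic x + beta*dt,
            # i.e. centre the update on the causal past of each grid point
            foot = (x + beta * dt) / dx
            j = np.floor(foot).astype(int)
            w = foot - j
            u = (1 - w) * u[j % nx] + w * u[(j + 1) % nx]
        else:
            # centred leapfrog: stable only while courant <= 1
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
            u_prev, u = u, u_prev + 2 * dt * beta * dudx
        if not np.all(np.isfinite(u)) or np.max(np.abs(u)) > 10.0:
            return False                         # the run blew up
    return True

assert evolve(causal=True)        # shift of 2.5 zones per step, causal stencil: stable
assert not evolve(causal=False)   # same shift, centred stencil: unstable
```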
also similar to the scalar field case, we keep the dependent variables in the finite differenced equations as , , , ) .they are related to ( , , , ) by the auxiliary function , which is evolved with eqn .( [ 49 ] ) .it is also possible to evolve directly with the variables ( , , ) with full causal differencing . in this caseno auxiliary function is needed .the details of this approach will be presented elsewhere .in this section we discuss results obtained by implementing the horizon boundary scheme .first we outline the issues common to all varieties of the boundary scheme , and then we present results for each different implementation . to study how well the horizon boundary condition works ,we compare results with those obtained using the bhs black hole code , which is one of the most accurate codes to date in evolving spherical black holes .the most accurate evolutions in the bhs code are obtained with maximal slicing and zero shift , which is the case to which we shall be comparing our results . in this case , the apparent horizon is initially located on the throat of the black hole , which is the inner boundary of the computational domain . as the spacetime evolves the coordinates fall towards the hole , so the coordinate location of the horizon will move out . the evolution begins with the lapse collapsing to zero behind the horizon and the radial metric function increasing as the coordinates collapse inward . to stop the radial metric function from developing a sharp peak ( as shown in fig .[ fig : grr ] ) , we smoothly introduce a horizon locking shift over a period of time .the phase - in period for the shift typically lasts starting from , and so the full horizon boundary condition is in place by , and the inner grid points close to the singularity are dropped from the numerical evolution .we retain 10 to 20 grid points inside the apparent horizon as buffer zones for added stability . having these `` buffer zones '' inside the horizonhelps because of two reasons .first , in the horizon locking coordinate the light cones are tilted more as we go further inside the horizon .second , any inaccuracy generated at the innermost grid point has a longer distance to leak through , and hence decreases substantially in amplitude , before it can affect the physically relevant region outside the horizon . by the time the shift is fully phased in, the radial metric function is still of order unity everywhere .the lapse is also of order unity throughout the grid , with the smallest value being 0.3 at the inner boundary of the grid . the subsequent evolution , using the same grid parameters as in the bhs code , gives us a direct comparison between implementing and not , the horizon boundary condition . without the horizon boundary condition, one has to let the lapse collapse to a value even below in the inner region , i.e. , one does not evolve that part of the spacetime in order to help prevent the code from crashing .nevertheless , the code will still subsequently crash as the radial metric function develops a sharp peak near the horizon ( growing from a value of order 1 to a peak value well over 100 within a span of just a few grid points , depending on the resolution ) . 
with the horizon boundary condition in place ,there is no need to collapse the lapse .in fact the lapse is kept frozen from the time the boundary condition is fully phased in .we stress that the boundary condition permits accurate evolutions for extremely long times ( see below ) with lapses of order unity throughout the entire calculation in space and in time .another issue is the implementation of causal differencing for the black hole spacetime . in our first treatment of this problem we introduced causal differencing in a first order way by taking into account the direction of light cones when constructing difference operators , but without accounting for the width and the precise location of the center of the backward light cones .basically , the implementation in is similar to using one - sided derivatives in a region where the flow of information is one - sided . herewe report on the results obtained with the full causal differencing scheme using the `` tilde '' variables as described in section iv , and we see a substantial improvement in accuracy .we begin by presenting results of implementing the horizon boundary condition using our `` distance freezing '' shift that freezes the radial metric function in time . in fig .[ fig : hlock : a ] we show the evolution of the radial metric function up to a time of for a run with 400 radial zones ( ) .note that is a logarithmic radial coordinate that runs from at the throat to at the outer boundary .the outer edge corresponds to in the isotropic coordinate .the horizon is locked at after the boundary scheme is fully phased in by .the inner grid points are dropped at that point , so the lines at later times do not cover the inner region .we see that changes rapidly initially before and during the phase - in period .after that , the evolution gradually settles down . from to the profile barely changes .although the shift is designed to make a constant in time , due to discretization errors , the freeze is not perfect .however , a perfect locking is also not necessary , as demonstrated by the stability and the accuracy of the results shown here .this slow drifting in is to be compared to the sharp peak in fig .[ fig : grr ] produced by the bhs code without using the horizon boundary condition . in fig .[ fig : hlock : b ] we show the evolution of , the angular metric function .we see that the shift which is designed to freeze also freezes .again the evolution of settles down by , and the change in the profile becomes negligible .this shows that the shift vector has succeeded in locking the coordinate system to the geometry . with the geometry of the schwarzschild spacetime being static , and the time slicing ( lapse ) not changing in time, all metric functions must become frozen . in fig .[ fig : hlock : m ] we show the mass of the apparent horizon , defined by on the vertical axis we show the difference between the analytic value and the numerically computed value .by the mass is just for the `` distance freezing '' shift with full causal differencing , with resolution of 200 grid points .we also show results obtained with our earlier first order implementation of the boundary condition scheme , which did not use full causal differencing .although the old results are already quite good , the improvement due solely to causal differencing is clear . 
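the mass referred to here is the irreducible mass associated with the proper area of the marginally trapped surface , $ m_{\rm ah } = \sqrt{a_{\rm ah}/16\pi } $ , which for a spherical horizon of areal radius $ r $ reduces to $ m_{\rm ah } = r/2 $ . a minimal sketch , assuming the horizon area ( or areal radius ) has already been measured on the grid :

```python
import numpy as np

def horizon_mass_from_area(area):
    """Irreducible mass of a closed 2-surface of proper area `area`:
    M_AH = sqrt(area / (16*pi))."""
    return np.sqrt(area / (16.0 * np.pi))

def horizon_mass_from_areal_radius(r_areal):
    """Spherical horizon: proper area = 4*pi*R^2, so M_AH = R/2."""
    return 0.5 * r_areal

def mass_error(area, m_analytic):
    """A mass-error measure of the kind tracked in the figures:
    deviation of the numerically measured horizon mass from the analytic value."""
    return abs(horizon_mass_from_area(area) - m_analytic) / m_analytic
```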
for comparison ,we show also the error in mass given by the bhs code with the much higher resolution of 400 grid zones .the improvement one can get by imposing the horizon boundary condition is obvious. figs .[ fig : block : a ] , [ fig : block : b ] , and [ fig : block : m ] , show , respectively , the evolution of , and , analogous to figs .[ fig : hlock : a ] , [ fig : hlock : b ] , and [ fig : hlock : m ] , but for the case of the horizon boundary implemented with the `` area freezing '' shift discussed in section iv .we note that the basic features are the same . before and during the phase - in , and are rapidly changing .after the phase - in , the metric functions and basically settle down . although the shift is designed to freeze in this case , also gets frozen .we see in fig .[ fig : block : b ] that is much more accurately locked compared to the in fig .[ fig : hlock : b ] .notice that in figs .[ fig : block : a ] and [ fig : block : b ] , and have different spatial distributions compared to those of figs . [ fig : hlock : a ] and [ fig : hlock : b ] .the grid points are tied to the geometry by the shifts in both cases , but they are tied at different locations . in the areafreezing shift case , the shift at the horizon is treated as the shift at other positions . the error in mass , which is a very sensitive indicator of accuracy is plotted against time for different resolutions in fig .[ fig : block : m ] .again the bhs case is shown for comparison .the next set of figs .[ fig : elock : a ] , [ fig : elock : b ] , [ fig : elock : m ] , and [ fig : expb ] are for the expansion freezing shift , a choice of shift vector which is geometrically motivated and coordinate independent .we again see that the grid points are basically locked to the geometry by this condition after phasing in . however , we see that there is an intermediate region outside the horizon but not too far from the black hole for which the locking is less perfect compared to the previous two choices of shift .the basic reason is that although the expansion is monotonically increasing with the radial coordinate near the horizon , it is not the case farther out , as shown in fig .[ fig : expb ] . near the peak of the expansion , where the expansion is nearly a constant with respect to changing radial position , the determination of a shift required to lock the surface of constant expansion is clearly more difficult . as a result , both in fig .[ fig : elock : a ] and in fig .[ fig : elock : b ] show some evolution in that region , indicating motion of the grid .however , as this region is further away from the horizon , such motion of the grid is not causing any serious difficulty , and the evolution is still stable and accurate , as shown by the error in the mass plot of fig .[ fig : elock : m ] . figs .[ fig : mdlock : a ] , [ fig : mdlock : b ] , and [ fig : mdlock : m ] are the corresponding figures for the final minimal distortion shift case .bhs reported that the use of a minimal distortion shift is troublesome in the region near the throat , as the volume element is small there . with the horizon boundary condition, the region near the throat is cut off explicitly , and the minimal distortion shift with boundary conditions for the shift equation set on the horizon ( _ cf ._ see sec .iii.b ) works as nicely as the other horizon locking shifts . 
as a further check on the accuracy of the calculations , in fig .[ fig : alkham ] we show the distributions of the violation of the hamiltonian constraint at for the bhs code without a horizon boundary condition , and at for the four different implementations of the horizon conditions .as the evolutions are completely unconstrained , the value of the hamiltonian provides a useful indicator of the accuracy .the distributions of the hamiltonian constraint show steady profiles in time for all of our horizon boundary schemes .we show the comparison to the bhs result at instead of , since the bhs code is no longer reliable after for reasons discussed above . to put the bhs result on the same scaleit is divided by a factor of 10 .all runs are done with the same grid parameters and a resolution of 400 zones .the increase in accuracy provided by the horizon boundary condition is obvious . for spacetimes with multiple black holes ,the grid points should not all be locked with respect to just the horizon of one of the holes . in fig .[ fig : loclk ] we show the effect of turning off the shift at a finite distance away from the black hole .the shift is smoothly set to zero at , with the shift at small being the area freezing shift . in the region that the coordinate is not locked , we see in fig . [fig : loclk ] that the value of is decreasing in time as expected , since the grid points there are falling towards the hole .the motion of the grid points would be different if there were another hole further out in the spacetime .this study of using a `` localized shift '' provides evidence that our scheme is flexible enough to handle more complicated situations .one last question one might have is the long term stability of the horizon boundary scheme . in fig .[ fig : lgtrm ] , we show a run up to with a resolution of 200 grid zones and using the `` distance freezing '' shift .errors in the mass and the hamiltonian constraint evaluated at the horizon are shown as a function of time .we note that at the end of the run the mass error is just 4% .stable and accurate evolutions for such a long time are likely to be required for the spiraling two black hole coalescence problem .progress in numerical relativity has been hindered for 30 years because of the difficulties in avoiding spacetime singularities in the calculations . in this paper , we have presented several working examples of how an apparent horizon boundary scheme can help circumvent these difficulties .we have demonstrated this scheme to be robust enough that it allows many different ways of implementation . also , as shown by the results in section v , it is likely that even an approximate implementation of horizon boundaries can yield stable and accurate evolutions of black hole spacetimes. such approximate implementations can be most important in extending this work to 3d spacetimes and will be discussed elsewhere . throughout this work we have sought to consider ideas that are applicable or extensible to the most general 3d case , where no particular symmetry , gauge condition , metric form , or evolution scheme is assumed .in fact four different gauge conditions were demonstrated here with the same code , and in no case were the equations specialized to the particular gauge under investigation .although in this paper we have only implemented this scheme in spherical vacuum spacetimes , neither sphericity nor vacuum are intrinsic restrictions on the scheme . 
indeed , in we have shown that the horizon boundary condition scheme works in the case of a scalar field infalling into a black hole .the follow - up study of that case will be reported elsewhere .the discussion in this paper has been carried out in terms of a free evolution scheme .for constrained evolution , the horizon boundary condition can be constructed in the same spirit as discussed here . with the implementation of a horizon locking shift, the boundary values of the elliptic constraint equations can be obtained from data in the region of causal dependence on the previous time slice , using the evolution equations .furthermore , the apparent horizon condition itself could be used to provide a boundary value of some quantity for constraint equations .although we have shown only cases where the horizon is locked to the coordinate system , as pointed out in section iii , this is not a requirement for the evolution .the shift can be used to control the motion of the horizon .such controlled motion will be particularly useful in moving black holes through the numerical grid , as will be needed , for example , in the long term evolution of the two black hole inspiral coalescence .we found that such controlled motion can be easily achieved in the scheme described in this paper .the details of this will be reported elsewhere .other issues that need to be addressed are that the apparent horizon location may jump discontinuously , or multiple horizons may form . these problems can be handled by simply tracking newly formed horizons and phasing in a new boundary in place of the old one(s ) .multiple black holes can be handled by locking the coordinates only in the vicinity of the black holes .we are beginning to examine these issues .we are happy to acknowledge helpful discussions with andrew abrahams , david bernstein , matt choptuik , david hobill , ian redmount , larry smarr , jim stone , kip thorne , lou wicker , and clifford will .we are very grateful to david bernstein for providing a copy of his black hole code , on which we based much of this work .j.m . acknowledges a fellowship ( p.f.p.i . ) from ministerio de educacin y ciencia of spain .this research is supported by the ncsa , the pittsburgh supercomputing center , and nsf grants nos .phy91 - 16682 , phy94 - 04788 , phy94 - 07882 and phy / asc93 - 18152 ( arpa supplemented ) .
it was recently shown that spacetime singularities in numerical relativity could be avoided by excising a region inside the apparent horizon in numerical evolutions . in this paper we report on the details of the implementation of this scheme . the scheme is based on using ( 1 ) a horizon locking coordinate which locks the coordinate system to the geometry , and ( 2 ) a finite differencing scheme which respects the causal structure of the spacetime . we show that the horizon locking coordinate can be affected by a number of shift conditions , such as a `` distance freezing '' shift , an `` area freezing '' shift , an `` expansion freezing '' shift , or the minimal distortion shift . the causal differencing scheme is illustrated with the evolution of scalar fields , and its use in evolving the einstein equations is studied . we compare the results of numerical evolutions with and without the use of this horizon boundary condition scheme for spherical black hole spacetimes . with the boundary condition a black hole can be evolved accurately well beyond , where is the black hole mass .
a data set consisting of univariate points is usually ranked in ascending or descending order .univariate order statistics ( i.e. , the ` smallest value out of ' ) and derived quantities have been studied extensively .the median is defined as the order statistic of rank when is odd , and as the average of the order statistics of ranks and when is even .the median and any other order statistic of a univariate data set can be computed in time .generalization to higher dimensions is , however , not straightforward .alternatively , univariate points may be ranked from the outside inward by assigning the most extreme data points depth 1 , the second smallest and second largest data points depth 2 , etc .the deepest point then equals the usual median of the sample .the advantage of this type of ranking is that it can be extended to higher dimensions more easily .this section gives an overview of several possible generalizations of depth and the median to multivariate settings .surveys of statistical applications of multivariate data depth may be found in , , and .let be a finite set of data points in .the _ tukey depth _ or _ halfspace depth _ ( introduced by and further developed by ) of any point in ( not necessarily a data point ) determines how central the point is inside the data cloud .the halfspace depth of is defined as the minimal number of data points in any closed halfspace determined by a hyperplane through : thus , a point lying outside the convex hull of has depth , and any data point has depth at least 1 .figure [ fig : ldep ] illustrates this definition for .( which is not a data point itself ) has depth 1 because the halfspace determined by contains only one data point.,scaledwidth=80.0% ] the set of all points with depth is called the depth region .the halfspace depth regions form a sequence of nested polyhedra .each is the intersection of all halfspaces containing at least data points .moreover , every data point must be a vertex of one or more depth regions .the point with maximal halfspace depth is called the _tukey median_. when the innermost depth region is larger than a singleton , the tukey median is defined as its centroid .this makes the tukey median unique by construction .note that the depth regions give an indication of the shape of the data cloud . 
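to make the halfspace depth defined above concrete , the following sketch evaluates it exactly for a bivariate point by brute force over the directions normal to the lines joining the query point to each data point . it assumes general position ( the query point is not a data point and no two data points are collinear with it ) , in which case a minimizing closed halfplane can always be rotated so that its boundary touches a data point . the o(n log n) rotating - line algorithm mentioned later in the text is the efficient version of the same idea .

```python
import numpy as np

def halfspace_depth_2d(theta, X):
    """Exact Tukey (halfspace) depth of the point theta relative to the
    rows of X, assuming general position: theta is not a data point and
    no two data points are collinear with theta.  O(n^2) brute force.
    """
    d = X - theta                          # vectors from theta to the data points
    depth = len(X)
    for dj in d:
        perp = np.array([-dj[1], dj[0]])   # normal to the line through theta and x_j
        s = d @ perp
        pos = np.sum(s > 0)                # strict counts on either side of the line
        neg = np.sum(s < 0)
        depth = min(depth, pos, neg)
    return depth

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
print(halfspace_depth_2d(np.array([0.0, 0.0]), X))   # a deep, central point
print(halfspace_depth_2d(np.array([4.0, 4.0]), X))   # typically 0: outside the hull
```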
based on this ideaone can construct the _ bagplot _ , a bivariate version of the univariate boxplot .figure [ fig : bag ] shows such a bagplot .the cross in the white disk is the tukey median .the dark area is an interpolation between two subsequent depth regions , and contains 50% of the data .this area ( the `` bag '' ) gives an idea of the shape of the majority of the data cloud .inflating the bag by a factor of 3 relative to the tukey median yields the `` fence '' ( not shown ) , and data points outside the fence are called outliers and marked by stars .finally , the light gray area is the convex hull of the non - outlying data points .more generally , in the multivariate case one can define the _ bagdistance _ of a point relative to the tukey median and the bag .assume that the tukey median lies in the interior of the bag , not on its boundary ( this excludes degenerate cases ) .then the bagdistance is the smallest real number such that the bag inflated ( or deflated ) by around the tukey median contains the point .when the tukey median equals , it is shown in that the bagdistance satisfies all axioms of a norm except that only needs to hold when .the bagdistance is used for outlier detection and statistical classification . an often used criterion to judge the robustness of an estimator is its _breakdown value_. the breakdown value is the smallest fraction of data points that we need to replace in order to move the estimator of the contaminated data set arbitrarily far away .the classical mean of a data set has breakdown value zero since we can move it anywhere by moving one observation .note that for any estimator which is equivariant for translation ( which is required to call it a location estimator ) the breakdown value can be at most 1/2 .( if we replace half of the points by a far - away translation image of the remaining half , the estimator can not distinguish which were the original data . ) the tukey depth and the corresponding median have good statistical properties .the tukey median is a location estimator with breakdown value for any data set in general position .this means that it remains in a predetermined bounded region unless or more data points are moved . at an elliptically symmetric distributionthe breakdown value becomes 1/3 for large , irrespective of .moreover , the halfspace depth is invariant under all nonsingular affine transformations of the data , making the tukey median affine equivariant .since data transformations such as rotation and rescaling are very common in statistics , this is an important property .the statistical asymptotics of the tukey median have been studied in .the need for fast algorithms for the halfspace depth has only grown over the years , since it is currently being applied to a variety of settings such as nonparametric classification .a related development is the fast growing field of functional data analysis , where the data are functions on a univariate interval ( e.g. time or wavelength ) or on a rectangle ( e.g. surfaces , images ) .often the function values are themselves multivariate .one can then define the depth of a curve ( surface ) by integrating the depth over all points as in .this functional depth can again be used for outlier detection and classification , but it requires computing depths in many multivariate data sets instead of just one . 
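as a small illustration of the bagdistance introduced above , the sketch below computes it for a bag given as a convex polygon : the bagdistance is the ratio between the distance from the tukey median to the query point and the distance from the median to the bag boundary in the same direction . the polygon representation , the counter - clockwise vertex order and the example square are assumptions made only for this sketch .

```python
import numpy as np

def bagdistance(x, tukey_median, bag_vertices):
    """Bagdistance of x: the smallest factor by which the bag, inflated
    about the Tukey median, contains x.  Assumes bag_vertices is a convex
    polygon (counter-clockwise, shape (m, 2)) with the Tukey median in
    its interior.  Returns 0 for x equal to the Tukey median.
    """
    v = np.asarray(x, float) - tukey_median
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return 0.0
    u = v / dist                                   # direction from the median to x
    m = len(bag_vertices)
    for i in range(m):                             # intersect the ray with each edge
        a = bag_vertices[i] - tukey_median
        e = bag_vertices[(i + 1) % m] - bag_vertices[i]
        denom = u[0] * e[1] - u[1] * e[0]
        if denom == 0.0:
            continue                               # ray parallel to this edge
        t = (a[0] * e[1] - a[1] * e[0]) / denom    # distance from median to edge line
        s = (a[0] * u[1] - a[1] * u[0]) / denom    # position along the edge
        if t > 0.0 and -1e-12 <= s <= 1.0 + 1e-12:
            return dist / t                        # ratio of distances, as in the text
    raise ValueError("median does not appear to lie inside the bag")

square = np.array([[1., -1.], [1., 1.], [-1., 1.], [-1., -1.]])
print(bagdistance([2.0, 0.0], np.zeros(2), square))   # 2.0: twice the bag radius
```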
* remark : centerpoints .* there is a close relationship between the tukey depth and centerpoints , which have been long studied in computational geometry .in fact , tukey depth extends the notion of centerpoint .centerpoint _ is any point with halfspace depth .a consequence of helly s theorem is that there always exists at least one centerpoint , so the depth of the tukey median can not be less than .1 . simplicial depth ( ) .the depth of equals the number of simplices formed by data points that contain .formally , \}.\ ] ] the simplicial median is affine equivariant with a breakdown value bounded above by . unlike halfspace depth, its depth regions need not be convex .oja depth ( ) .this is also called simplicial volume depth : \ } \bigr)^{-1}.\ ] ] the corresponding median is also affine equivariant , but has zero breakdown value .3 . projection depth .we first define the _ outlyingness _ ( ) of any point relative to the data set as where the median absolute deviation ( mad ) of a univariate data set is the statistic .the outlyingness is small for centrally located points and increases if we move toward the boundary of the data cloud . instead of the median andthe mad , also another pair of a location and scatter estimate may be chosen .this leads to different notions of projection depth , all defined as general projection depth is studied in .when using the median and the mad , the projection depth has breakdown value 1/2 and is affine equivariant .its depth regions are convex .4 . spatial depth ( ) .spatial depth is related to multivariate quantiles proposed in : the spatial median is also called the median ( ) .it has breakdown value 1/2 , but is not affine equivariant ( it is only equivariant with respect to translations , multiplication by a scalar factor , and orthogonal transformations ) .for a recent survey on the computation of the spatial median see .a comparison of the main properties of the different location depth medians is given in table [ tab : prop ] .[ tab : prop ] ' '' '' median & breakdown value & affine equivariance + ' '' '' tukey & worst - case & yes + & typically & + simplicial & & yes + oja & & yes + projection & & yes + spatial & & no + following we now define the depth of a point relative to an arrangement of hyperplanes .a point is said to have zero arrangement depth if there exists a ray that does not cross any of the hyperplanes in the arrangement .( a hyperplane parallel to the ray is counted as intersecting at infinity . )the arrangement depth of any point is defined as the minimum number of hyperplanes intersected by any ray from .figure [ fig : rdepd1 ] shows an arrangement of lines . in this plot , the points and have arrangement depth 0 and the point has arrangement depth 2 .the arrangement depth is always constant on open cells and on cell edges .it was shown ( ) that any arrangement of lines in the plane encloses a point with arrangement depth at least , giving rise to a new type of `` centerpoints . 
''this notion of depth was originally defined ( ) in the dual , as the depth of a regression hyperplane relative to a point configuration of the form in .regression depth ranks hyperplanes according to how well they fit the data in a regression model , with containing the predictor variables and the response .a vertical hyperplane ( given by constant ) , which can not be used to predict future response values , is called a `` nonfit '' and assigned regression depth 0 .the regression depth of a general hyperplane is found by rotating in a continuous movement until it becomes vertical .the minimum number of data points that is passed in such a rotation is called the regression depth of .figure [ fig : rdepd2 ] is the dual representation of figure [ fig : rdepd1 ] .( for instance , the line has slope and intercept and corresponds to the point in figure [ fig : rdepd1 ] . )the lines and have regression depth 0 , whereas the line has regression depth 2 . in statisticsone is interested in the _ deepest fit _ or regression depth median , because this is a line ( hyperplane ) about which the data are well - balanced .the statistical properties of regression depth and the deepest fit are very similar to those of the tukey depth and median .the bounds on the maximal depth are almost the same . moreover , for both depth notions the value of the maximal depth can be used to characterize the symmetry of the distribution ( ) .the breakdown value of the deepest fit is at least and under linearity of the conditional median of given it converges to 1/3 . in the next section, we will see that the optimal complexities for computing the depth and the median are also comparable to those for halfspace depth . for a detailed comparison of the properties of halfspace and regression depth , see .the arrangement depth region is defined in the primal , as the set of points with arrangement depth at least .contrary to the tukey depth , these depth regions need not be convex .but nevertheless it was proved that there always exists a point with arrangement depth at least ( ) .an analysis - based proof was given in .* remark : arrangement levels .* arrangement depth is undirected ( isotropic ) in the sense that it is defined as a minimum over all possible directions .if we restrict ourselves to vertical directions ( i.e. , up or down ) , we obtain the usual levels of the arrangement known in combinatorial geometry .the absence of preferential directions makes arrangement depth invariant under affine transformations .although the definitions of depth are intuitive , the computational aspects can be quite challenging .the calculation of depth regions and medians is computationally intensive , especially for large data sets in higher dimensions . in statistical practicesuch data are quite common , and therefore reliable and efficient algorithms are needed . for the bivariate caseseveral algorithms have been developed early on .the computational aspects of depth in higher dimensions are currently being explored .algorithms for depth - related measures are often more complex for data sets which are not in general position than for data sets in general position . for example , the boundaries of subsequent halfspace depth regions are always disjoint when the data are in general position , but this does not hold for nongeneral position .preferably , algorithms should be able to handle both the general position case and the nongeneral position case directly . 
as a quick fix , algorithms which were made for general positioncan also be applied in the other case if one first adds small random errors to the data points . for large data sets, this ` dithering ' will have a limited effect on the results .table [ tab : ldep ] gives an overview of algorithms , each of which has been implemented , to compute the depth in a given point in .these algorithms are time - optimal , since the problem of computing these bivariate depths has an lower bound ( , ) . the algorithms for halfspace and simplicial depth are based on the same technique .first , data points are radially sorted around . then a line through is rotated .the depth is calculated by counting the number of points that are passed by the rotating line in a specific manner .the planar arrangement depth algorithm is easiest to visualize in the regression setting . to compute the depth of a hyperplane with coefficients , the data are first sorted along the -axis . a vertical line is then moved from left to right and each time a data point is passed , the number of points above and below on both sides of is updated . in general ,computing a median is harder than computing the depth in a point , because typically there are many candidate points .for instance , for the bivariate simplicial median the currently best algorithm requires time , whereas its corresponding depth needs only .the simplicial median seems difficult to compute because there are candidate points ( namely , all intersections of lines passing through two data points ) and the simplicial depth regions have irregular shapes , but of course a faster algorithm may yet be found .fortunately , in several important cases the median can be computed without computing the depth of individual points . a linear - time algorithm to compute a bivariate centerpointwas described in .table [ tab : lmed ] gives an overview of algorithms to compute bivariate depth - based medians .for the bivariate tukey median the lower bound was proved in , and the currently fastest algorithm takes time ( ) .the lower bound also holds for the median of arrangement ( regression ) depth as shown by .fast algorithms were devised by and .the computation of bivariate halfspace depth regions has also been studied .the first algorithm required time per depth region .an algorithm to compute all regions in time is constructed and implemented in .this algorithm thus also yields the tukey median .it is based on the dual arrangement of lines where topological sweep is applied .a completely different approach is implemented in .they make direct use of the graphics hardware to approximate the depth regions of a set of points in time , where the pixel grid is of dimension .recently , constructed an algorithm to update halfspace depth and its regions when points are added to the data set .the first algorithms to compute the halfspace and regression depth of a given point in with were constructed in and require time .the main idea was to use projections onto a lower - dimensional space .this reduces the problem to computing bivariate depths , for which the existing algorithms have optimal time complexity . 
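the left - to - right sweep described earlier in this section can be written down directly for simple regression : for a candidate line , sort the points by their horizontal coordinate and keep running counts of points above and below the line on each side of the sweep position . the sketch below follows that recipe ; it assumes distinct abscissae and no point lying exactly on the candidate line , so the tie conventions of the exact definition are not exercised .

```python
import numpy as np

def regression_depth(b0, b1, x, y):
    """Regression depth of the line y = b0 + b1*x relative to the points
    (x_i, y_i), computed by the planar left-to-right sweep.  Assumes
    distinct x_i and no point exactly on the line.
    """
    order = np.argsort(x)
    above = (y - (b0 + b1 * x))[order] > 0       # positive residual?
    n = len(x)
    total_above = int(above.sum())
    above_left = 0                               # counts to the left of the sweep line
    below_left = 0
    depth = min(total_above, n - total_above)    # sweep position left of all points
    for i in range(n):
        if above[i]:
            above_left += 1
        else:
            below_left += 1
        above_right = total_above - above_left
        below_right = (n - total_above) - below_left
        depth = min(depth, above_left + below_right, below_left + above_right)
    return depth
```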
in theoretical output - sensitive algorithms for the halfspace depthare proposed .an interesting computational connection between halfspace depth and multivariate quantiles was provided in and .more recently , provided a generalized version of the algorithm of together with c++ code .for the depth regions of halfspace depth in higher dimensions an algorithm was recently proposed in . for the computation of projection depthsee . the simplicial depth of a point in can be computed in time , and in the fastest algorithm needs time . for higher dimensions, no better algorithm is known than the straightforward method to compute all simplices . when the number of data points and dimensions are such that the above algorithms become infeasible , one can resort to approximate algorithms . for halfspace depthsuch approximate algorithms were proposed in and .an approximation to the tukey median using steepest descent can be found in . in an algorithm is described to approximate the deepest regression fit in any dimension .computational geometry has provided fast and reliable algorithms for many other statistical techniques .linear regression is a frequently used statistical technique .the ordinary least squares regression , minimizing the sum of squares of the residuals , is easy to calculate , but produces unreliable results whenever one or more outliers are present in the data .robust alternatives are often computationally intensive .we here give some examples of regression methods for which geometric or combinatorial algorithms have been constructed . 1 . regression .this well - known alternative to least squares regression minimizes the sum of the absolute values of the residuals , and is robust to vertical outliers .algorithms for regression may be found in , e.g. , and .least median of squares ( lms ) regression ( ) .this method minimizes the median of the squared residuals and has a breakdown value of 1/2 . to compute the bivariate lms line ,an algorithm using topological sweep has been developed .an approximation algorithm for the lms line was constructed in .the recent algorithm of uses mixed integer optimization .3 . median slope regression ( , ) .this bivariate regression technique estimates the slope as the median of the slopes of all lines through two data points .an algorithm with optimal complexity is given in , and a more practical randomized algorithm in .4 . repeated median regression ( ) .median slope regression takes the median over all couples ( -tuples in general ) of data points . here , this median is replaced by nested medians . forthe bivariate repeated median regression line , provide an efficient randomized algorithm .the aim of cluster analysis is to divide a data set into clusters of similar objects . partitioning methods divide the data into groups .hierarchical methods construct a complete clustering tree , such that each cut of the tree gives a partition of the data set . a selection of clustering methods with accompanying algorithmsis presented in .the general problem of partitioning a data set into groups such that the partition minimizes a given error function is np - hard .however , for some special cases efficient algorithms exist . for a small number of clusters in low dimensions , exact algorithms for partitioning methods can be constructed .constructing clustering trees is also closely related to geometric problems ( see e.g. , , ) .all results not given an explicit reference above may be traced in these surveys .m. hubert , p.j .rousseeuw , and s. 
van aelst .similarities between location depth and regression depth . in l.t .fernholz , editor , _ statistics in genetics and in the environmental sciences _ , pages 153162 .birkhuser verlag , basel , 2001 .mount , n.s .netanyahu , k. romanik , r. silverman , and a.y .wu.a practical approximation algorithm for the lms line estimator . in _ proc .acm - siam sympos . discrete algorithms _ ,pages 473482 , new orleans , 1997 .r. serfling . a depth function and a scale curve based on spatial quantiles . in y. dodge , editor , _ statistical data analysis based on the l1-norm and related methods _ , pages 2538 , birkhaser , basel , 2002 .
van aelst . similarities between location depth and regression depth . in l.t . fernholz , editor , _ statistics in genetics and in the environmental sciences _ , pages 153 - 162 . birkhäuser verlag , basel , 2001 . d.m . mount , n.s . netanyahu , k. romanik , r. silverman , and a.y . wu . a practical approximation algorithm for the lms line estimator . in _ proc . acm - siam sympos . discrete algorithms _ , pages 473 - 482 , new orleans , 1997 . r. serfling . a depth function and a scale curve based on spatial quantiles . in y. dodge , editor , _ statistical data analysis based on the l1-norm and related methods _ , pages 25 - 38 , birkhäuser , basel , 2002 .
during the past two decades there has been a lot of interest in developing statistical depth notions that generalize the univariate concept of ranking to multivariate data . the notion of depth has also been extended to regression models and functional data . however , computing such depth functions as well as their contours and deepest points is not trivial . techniques of computational geometry appear to be well - suited for the development of such algorithms . both the statistical and the computational geometry communities have done much work in this direction , often in close collaboration . we give a short review of this work , focusing mainly on depth and multivariate medians , and end by listing some other areas of statistics where computational geometry has been of great help in constructing efficient algorithms .
effective hamiltonians and interactions are routinely used in shell - model calculations of nuclear spectra .the published mathematical theory of the effective hamiltonian is complicated and usually focuses on perturbation theoretical aspects , diagram expansions , etc . . in this article, we recast and reinterpret the basic elements of the theory geometrically .we focus on the geometric relationship between the exact eigenvectors and the effective eigenvectors , both for the usual non - hermitian bloch - brandow effective hamiltonian , and for the hermitian effective hamiltonian , which we dub the canonical effective hamiltonian due to its geometric significance .this results in a clear geometric understanding of the de - coupling operator , and a simple proof and characterization of the hermitian effective hamiltonian in terms of subspace rotations , in the same way as the non - hermitian hamiltonian is characterized by subspace projections . as a by - product, we obtain a simple and stable numerical algorithm to compute the exact effective hamiltonian .the goal of effective interaction theory is to devise a hamiltonian in a model space of ( much ) smaller dimension than the dimension of hilbert space , with _ exact _ eigenvalues of the original hamiltonian , where is usually considered as a perturbation .the model space is usually taken as the span of a few eigenvectors of , i.e. , the unperturbed hamiltonian in a perturbational view .effective hamiltonians in -body systems must invariably be approximated ( otherwise there would be no need for ) , usually by perturbation theory , but a sub - cluster approximation is also possible . in that case , the exact -body canonical effective hamiltonian is computed , where . from this, one extracts an effective -body interaction and apply it to the -body system . in this case, we present a new algorithm for computing the exact effective interaction that is conceptually and computationally simpler than the usual one which relies on both matrix inversion and square root , as the only non - trivial matrix operation is the singular value decomposition ( svd ) .the article is organized as follows . in sec .[ sec : tools ] we introduce some notation and define the singular value decomposition of linear operators and the principal angles and vectors between two linear spaces . in sec .[ sec : effham ] we define and analyze the bloch - brandow and canonical effective hamiltonians .the main part consists of a geometric analysis of the exact eigenvectors , and forms the basis for the analysis of the effective hamiltonians .we also discuss the impact of symmetries of the hamiltonian , i.e. , conservation laws . in sec .[ sec : algorithms ] we give concrete matrix expressions and algorithms for computing the effective hamiltonians , and in the canonical case it is , to the author s knowledge , previously unknown . in sec .[ sec : discussion ] we sum up and briefly discuss the results and possible future projects .we shall use the dirac notation for vectors , inner products and operators , in order to make a clear , basis - independent formulation . by , , etc ., we denote ( finite dimensional ) hilbert spaces , and vectors are denoted by kets , e.g. , , as usual .our underlying hilbert space is denoted by , with . 
in general, is infinite .we shall , however , assume it to be finite .our results are still valid in the infinite dimensional case if is assumed to have a discrete spectrum and at least linearly independent eigenvectors .we are also given a hamiltonian , a linear , hermitian operator ( i.e. , ) on .its spectral decomposition is defined to be thus , and are the ( real ) eigenvalues and ( orthonormal ) eigenvectors , respectively .we are also given a subspace , called the model space , which in principle is arbitrary .let be an orthonormal basis , for definiteness , viz , let be its orthogonal projector , i.e. , the basis is commonly taken to be eigenvectors for .the orthogonal complement of the model space , , has the orthogonal projector , and is called the excluded space .this division of into and transfers to operators in .these are in a natural way split into four parts , viz , for an arbitrary operator , where maps the model space into itself , maps into , and so forth .it is convenient to picture this in a block - form of , viz , a recurrent tool in this work is the singular value decomposition ( svd ) of an operator . here , and are arbitrary .then there exists orthonormal bases and of and , respectively , and non - negative real numbers with for all , such that this is the singular value decomposition ( svd ) of , and it always exists .it may happen that some of the basis vectors do not participate in the sum ; either if , or if for some .the vectors are called right singular vectors , while are called left singular vectors .the values are called singular values , and is one - to - one and onto ( i.e. , nonsingular ) if and only if for all , and .the inverse is then as easily verified .a recursive variational characterization of the singular values and vectors is the following : the latter equality implicitly states that the maximum is actually real .the svd is very powerful , as it gives an interpretation and representation of _ any _ linear operator as a simple scaling with respect to one orthonormal basis , and then transformation to another .the singular vectors are not unique , but the singular values are .important tools for comparing linear subspaces and of are the principal angles and principal vectors .the principal angles generalize the notion of angles between vectors to subspaces in a natural way .they are also called canonical angles .assume that ( if , we simply exchange and . )then , principal angles ] were the smallest possible such that the principal vectors are the orthonormal bases of and that are closest to each other . and .action of projectors and on indicated[fig : rotation ] ] we now define the unitary operator that rotates into according to this description , i.e. , we should have . in fig .[ fig : rotation ] the plane spanned by and if is depicted .recall , that .note that if and only if , and the plane degenerates into a line . if , the vector is defined so that it together with is an orthonormal basis for the plane , viz , where thus , is an orthonormal basis for , whose dimension is , where is the number of .the set is an orthonormal basis for which contains for all . the operator is now defined as a rotation in , i.e. , by elementary trigonometry , in terms of the orthonormal basis , we obtain a manifest planar rotation for each , i.e. , on the rest of the hilbert space , , is the identity .the operator implements the so - called direct rotation of into . 
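before returning to the explicit form of the direct rotation , note that numerically the principal angles and vectors defined above are obtained from a single svd of the overlap between orthonormal bases of the two subspaces ( the björck - golub construction ) . the following sketch is generic and not tied to any particular model space ; the qr factorizations are only there to orthonormalize whatever spanning sets are supplied .

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles and vectors between span(A) and span(B).

    A, B : complex arrays of shape (n, m) whose columns span the two
           subspaces (not necessarily orthonormal).
    Returns (theta, chi, xi): the angles in increasing order and the
    corresponding orthonormal principal vectors of span(A) and span(B).
    """
    Qa, _ = np.linalg.qr(A)                    # orthonormal bases of the two spaces
    Qb, _ = np.linalg.qr(B)
    U, s, Vh = np.linalg.svd(Qa.conj().T @ Qb)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # cos(theta_k) = singular values
    chi = Qa @ U                               # principal vectors of span(A)
    xi = Qb @ Vh.conj().T                      # principal vectors of span(B)
    return theta, chi, xi

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4)) + 1j * rng.normal(size=(50, 4))
B = rng.normal(size=(50, 4)) + 1j * rng.normal(size=(50, 4))
theta, chi, xi = principal_angles(A, B)
print(np.allclose(np.cos(theta), np.abs(np.diag(chi.conj().T @ xi))))  # True
```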
from eqn .( [ eq : z - ortho2 ] ) we obtain ( { |{\chi_k}\rangle}{\langle{\chi_k}| } + { |{\xi_k}\rangle}{\langle{\xi_k}| } ) \notag \\ & & + \sum_{k=1}^m \sin(\theta_k ) ( { |{\chi_k}\rangle}{\langle{\xi_k}| } - { |{\xi_k}\rangle}{\langle{\chi_k}| } ) , \label{eq : direct - rot - def}\end{aligned}\ ] ] it is instructive to exhibit the lie algebra element such that . since we have eqn . , is is easy to do this . indeed , taking the exponential of by summing the series for and , we readily obtain , the desired result . moreover , observe that the term in eqn .( [ eq : g ] ) commutes with the term , so , is exhibited as a sequence of commuting rotations using the canonical angles . for reference , we review some properties of the bloch - brandow effective hamiltonian , which we denote by .the effective eigenvectors are defined by since are the orthogonal projections of onto , we deduce that the bloch - brandow effective eigenvectors are _ the closest possible to the exact model space eigenvectors_. in this sense , the bloch - brandow effective hamiltonian is the optimal choice .it is obvious that is non - hermitian , as rejecting the excluded space eigenvector components renders the effective eigenvectors non - orthonormal , i.e. , in terms of similarity transforms , we obtain by setting , the so - called de - coupling operator or correlation operator .it is defined by and the equation again , for this to be a meaningful definition , must be a basis for . since , , and eqn .( [ eq : psi - eff - general ] ) becomes for we thus obtain after this initial review , we now relate to the geometry of and .the svd of is readily obtainable by expanding the principal vectors in the eigenvectors , sets which both constitute a basis for , and inserting in eqn .( [ eq : omega ] ) .we have that is , the result is which is the svd of .the operator is thus exhibited as an operator intimately related to the principal angles and vectors of and : it transforms the principal vectors of into an orthonormal basis for , with coefficients determined by the canonical angles . using eqn .( [ eq : chi2 ] ) we obtain an alternative expression , viz , hermitian effective hamiltonians have independently been introduced by various authors since 1929 , when van vleck introduced a unitary transformation to decouple the model space to second order in the interaction . in 1963 , primas considered an order by order expansion of this using the baker - campbell - hausdorff formula and commutator functions to determine , a technique also used in many other settings in which a transformation is in a lie group , see , e.g. , ref . and references therein .this approach was elaborated by shavitt and redmon , who were the first to mathematically connect this hermitian effective hamiltonian to , as in eqn . below .in the nuclear physics community , suzuki has been a strong advocate of hermitian effective interactions and the -body sub - cluster approximation to the -body effective interaction .hermiticity in this case is essential . even though a hermitian effective hamiltonian is not unique due to the non - uniqueness of , the various hermitian effective hamiltonians put forward in the literature all turn out to be equivalent . 
in the spirit of klein and shavitt employ the term `` canonical effective hamiltonian '' since this emphasizes the `` natural '' and geometric nature of the hermitian effective hamiltonian , which we denote by .recall the spectral decomposition where the ( orthonormal ) effective eigenvectors are now defined by the following optimization property : _ the effective eigenvectors are the closest possible to the exact eigenvectors while still being orthonormal ._ thus , where the bloch - brandow approach _ globally _ minimizes the distance between the eigenvectors , at the cost of non - orthonormality , the canonical approach has the unitarity constraint on the similarity transformation , rendering hermitian . given a collection of vectors , which are candidates for effective eigenvectors ,define the functional ] .the global minimum , when is allowed to vary freely , is attained for , the bloch - brandow effective eigenvectors .however , the canonical effective eigenvectors are determined by minimizing ] . here , we discuss the impact of such symmetries of on the effective hamiltonian ; both in the bloch - brandow and the canonical case .we point out the importance of choosing a model space that is an invariant of as well , i.e. , =0 ] , i.e. , has the same continuous symmetry .let be an observable such that =0 ] .thus , =0 ] ( in addition to the assumption ( [ eq : common - basis ] ) . )the assumption ( [ eq : s - assum ] ) also implies that =0 ] .it follows that = 0 , \quad n = 0,1,\ldots,\ ] ] and , by eqn .( [ eq : artanh - series ] ) , that = [ s , e^{-g } ] = 0.\ ] ] this gives again , since is a basis for , this holds if and only if =0 ] if and only if =0 ] is to define the effective observable ( which in the commuting case is equal to ) which obviously commutes with and satisfies this amounts to modifying the concept of rotational symmetry in the above example .the assumptions ( [ eq : common - basis ] ) and ( [ eq : s - assum ] ) have consequences also for the structure of the principal vectors and .indeed , write where the sum runs over all distinct eigenvalues , of , and where ( ) is the corresponding eigenspace , i.e. 
, the eigenspaces are all mutually orthogonal , viz , , , and , for .the definition ( [ eq : pca - minimax ] ) of the principal vectors and angles can then we written thus , for each , there is an eigenvalue of such that showing that the principal vectors are eigenvectors of if and only if =0 ] , and the assumption ( [ eq : common - basis ] ) .the present symmetry considerations imply that model spaces obeying as many symmetries as possible should be favored over less symmetric model spaces , since these other model spaces become less `` natural '' or `` less effective '' in the sense that their geometry is less similar to the original hilbert space .this is most easily seen from the fact that principal vectors are eigenvectors for the conserved observable .this may well have great consequences for the widely - used sub - cluster approximation to the effective hamiltonian in no - core shell model calculations , where one constructs the effective hamiltonian for a system of particles in order to obtain an approximation to the -body effective hamiltonian .the model space in this case is constructed in different ways in different implementations .some of these model spaces may therefore be better than others due to different symmetry properties .since computer calculations are invariably done using matrices for operators , we here present matrix expressions for and compare them to those usually programmed in the literature , as well as expressions for and .recall the standard basis of , where the constitute a basis for .these are usually eigenvectors of the unperturbed `` zero order '' hamiltonian , but we will not use this assumption .as previously we also assume without loss that the eigenpairs we wish to approximate in are . an operator has a matrix associated with it .the matrix elements are given by such that similarly , any vector has a column vector associated with it , with .we will also view dual vectors , e.g. , , as row vectors . the model space and the excluded space are conveniently identified with and , respectively . also note that , , etc ., are identified with the upper left , upper right , etc . , blocks of as in eqn .( [ eq : blocks ] ) .we use a notation inspired by fortran and matlab and write and so forth .we introduce the unitary operator as i.e. , a basis change from the chosen standard basis to the eigenvector basis .the columns of are the eigenvectors components in the standard basis , i.e. , and are typically the eigenvectors returned from a computer implementation of the spectral decomposition , viz , the svd is similarly transformed to matrix form .the svd defined in sec .[ sec : svd ] is then formulated as follows : for any matrix there exist matrices ( ) and , such that ( the identity matrix ) , and a non - negative diagonal matrix such that here , , being the singular values . the columns of are the left singular vectors components , i.e. , , and similarly for and the right singular vectors .the difference between the two svd formulations is then purely geometric , as the matrix formulation favorizes the standard bases in and . the present version of the matrix svd is often referred to as the `` economic '' svd , since the matrices and may be extended to unitary matrices over and , respectively , by adding singular values , .the matrix is then a matrix with `` diagonal '' given by .this is the `` full '' svd , equivalent to our basis - free definition .let the eigenvectors be calculated and arranged in a matrix , i.e. 
, ( where the subscript does _ not _ pick a single component ) . consider the operator defined in eqn .( [ eq : u ] ) , whose matrix columns are the bloch - brandow effective eigenvectors the standard basis , viz , the columns of the matrix of are the canonical effective eigenvectors . the svd ( [ eq : u - svd ] ) can be written which gives since , we obtain which gives , when applied to thus , we obtain the canonical effective eigenvectors by taking the matrix svd of and multiplying together the matrices of singular vectors . as efficient and robust svd implementationsare almost universally available , e.g. , in the lapack library , this makes the canonical effective interaction much easier to compute compared to eqn .( [ eq : effham - old ] ) , viz , this version requires one svd computation and three matrix multiplications , all with matrices , one of which is diagonal . equation ( [ eq : effham - old ] ) requires , on the other hand , several more matrix multiplications , inversions and the square root computation .the bloch - brandow effective hamiltonian is simply calculated by for the record , the matrix of is given by although we have no use for it when using the svd based algorithm .it may be useful , though , to be able to compute the principal vectors for and .for this , one may compute the svd of or of , the latter which gives , and directly in the standard basis as singular values and vectors , respectively .we have characterized the effective hamiltonians commonly used in nuclear shell - model calculations in terms of geometric properties of the spaces and .the svd and the principal angles and vectors were central in the investigation . while the bloch - brandow effective hamiltonian is obtained by orthogonally projecting onto , thereby _ globally _ minimizing the norm - error of the effective eigenvectors , the canonical effective hamiltonian is obtained by rotating into using , which minimizes the norm - error while retaining orthonormality of the effective eigenvectors .moreover , we obtained a complete description of the de - coupling operator in terms of the principal angles and vectors defining .an important question is whether the present treatment generalizes to infinite dimensional hilbert spaces .our analysis fits into the general assumptions in the literature , being that is large but finite , or at least that the spectrum of purely discrete .a minimal requirement is that has eigenvalues , so that can be constructed .in particular , the svd generalizes to finite rank operators in the infinite dimensional case , and are thus valid for all the operators considered here even when .unfortunately , has almost never a purely discrete spectrum .it is well - known that the spectrum in general has continuous parts and resonances embedded in these , and a proper theory should treat these cases as well as the discrete part .in fact , the treatments of in the literature invariably glosses over this . it is an interesting future project to develop a geometric theory for the effective hamiltonians which incorporates resonances and continuous spectra . 
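the matrix recipe of this section can be collected into a few lines of code . the sketch below assumes the full hamiltonian matrix is available , that the model space is spanned by the first m standard basis vectors , that the m targeted eigenpairs are the lowest ones , and that the model - space block of the eigenvector matrix is nonsingular ; these are conventions of the example , not requirements of the method .

```python
import numpy as np

def effective_hamiltonians(H, m):
    """Bloch-Brandow and canonical (Hermitian) effective Hamiltonians for
    the m lowest eigenpairs of the Hermitian matrix H, with the model
    space spanned by the first m standard basis vectors.
    """
    E, C = np.linalg.eigh(H)          # eigenvalues ascending, eigenvectors in columns
    E, C = E[:m], C[:, :m]            # keep the m target eigenpairs
    CP = C[:m, :]                     # model-space block of the eigenvector matrix
    # Bloch-Brandow: project the exact eigenvectors onto the model space
    H_bb = CP @ np.diag(E) @ np.linalg.inv(CP)
    # canonical: replace CP by the product of its singular-vector matrices
    U, _, Vh = np.linalg.svd(CP)
    W = U @ Vh                        # orthonormal effective eigenvectors
    H_canonical = W @ np.diag(E) @ W.conj().T
    return H_bb, H_canonical

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2
H_bb, H_c = effective_hamiltonians(H, 3)
print(np.sort(np.linalg.eigvals(H_bb)).real)   # reproduces the 3 lowest eigenvalues
print(np.linalg.eigvalsh(H_c))                 # same spectrum, Hermitian matrix
```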
the geometrical view simplified and unified the available treatments in the literature somewhat , and offered further insights into the effective hamiltonians .moreover , the the symmetry considerations in sec .[ sec : commuting - observables ] may have significant bearing on the analysis of perturbation expansions and the properties of sub - cluster approximations to .indeed , it is easy to see , that if we have a _ complete _ set of commuting observables ( csco ) for , and the same set of observables form a csco for , all eigenvalues and eigenfunctions of are analytic in , implying that the rayleigh - schroedinger perturbation series for converges ( i.e. , at ) .intuitively , the fewer commuting observables we are able to identify , the more likely it is that there are singularities in , so called intruder states .the rayleigh - schroedinger series diverges outside the singularity closest to , and in nuclear systems this singularity is indeed likely to be close to . on the other hand , resummation of the seriescan be convergent and yield an analytic continuation of outside the region of convergence . to the author s knowledge, there is no systematic treatment of this phenomenon in the literature . on the contrary , to be able to do such a resummation is sort of a `` holy grail '' of many - body perturbation theory .a geometric study of the present kind to many - body perturbation theory and diagram expansions may yield a step closer to this goal , as we have clearly identified the impact of commuting observables on the principal vectors of and .we have also discussed a compact algorithm in terms of matrices to compute , relying on the svd . to the author s knowledge , this algorithm is previously unpublished . since robust and fast svd implementations are readily available , e.g. , in the lapack library , and since few other matrix manipulations are needed , it should be preferred in computer implementations . as stressed in the introduction , the algorithms presented are really only useful if we compute the _ exact _ effective hamiltonian , as opposed to a many - body perturbation theoretical calculation , and if we know what exact eigenpairs to use , such as in a sub - cluster approximation . in this case, one should analyze the error in the approximation , i.e. , the error in neglecting the many - body correlations in . in the perturbative regime , some results exist .the author believes , that the geometric description may facilitate a deeper analysis , and this is an interesting idea for future work .the author wishes to thank prof .m. hjorth - jensen , cma , for helpful discussions .this work was funded by cma through the norwegian research council .
we give a complete geometrical description of the effective hamiltonians common in nuclear shell model calculations . by recasting the theory in a manifestly geometric form , we reinterpret and clarify several points . some of these results are hitherto unknown or unpublished . in particular , commuting observables and symmetries are discussed in detail . simple and explicit proofs are given , and numerical algorithms are proposed that improve and stabilize common methods used today .
boosting algorithms have become very successful in machine learning .this study revisits _logitboost_ under the framework of _ adaptive base class boost ( abc - boost ) _ in , for multi - class classification .we denote a training dataset by , where is the number of feature vectors ( samples ) , is the feature vector , and is the class label , where in multi - class classification . both _logitboost_ and _ mart _ ( multiple additive regression trees) algorithms can be viewed as generalizations to the logistic regression model , which assumes the class probabilities to be while traditional logistic regression assumes , _ logitboost _ and _ mart _ adopt the flexible `` additive model , '' which is a function of terms : where , the base learner , is typically a regression tree .the parameters , and , are learned from the data , by maximum likelihood , which is equivalent to minimizing the _ negative log - likelihood loss _ where if and otherwise . for identifiability , the `` sum - to - zero '' constraint , ,is usually adopted . as described in alg .[ alg_logitboost ] , builds the additive model ( [ eqn_f_m ] ) by a greedy stage - wise procedure , using a second - order ( diagonal ) approximation , which requires knowing the first two derivatives of the loss function ( [ eqn_loss ] ) with respective to the function values . obtained : those derivatives can be derived by assuming no relations among , to .however , used the `` sum - to - zero '' constraint throughout the paper and they provided an alternative explanation . showed ( [ eqn_mart_d1d2 ] ) by conditioning on a `` base class '' and noticed the resultant derivatives are independent of the particular choice of the base class .0 : , if , otherwise .+ 1 : , , to , to + 2 : for to do + 3 : for to , do + 4 : compute . + 5 : compute .+ 6 : fit the function by a weighted least - square of to with weights .+ 7 : + 8 : end + 9 : , to , to + 10 : end at each stage , _ logitboost _ fits an individual regression function separately for each class . this is analogous to the popular _ individualized regression _ approach in multinomial logistic regression , which is known to result in loss of statistical efficiency , compared to the full ( conditional ) maximum likelihood approach . on the other hand , in order to use trees as base learner , the diagonal approximation appears to be a must , at least from the practical perspective . derived the derivatives of ( [ eqn_loss ] ) under the sum - to - zero constraint . without loss of generality, we can assume that class 0 is the base class .for any , the base class must be identified at each boosting iteration during training . suggested an exhaustive procedure to adaptively find the best base class to minimize the training loss ( [ eqn_loss ] ) at each iteration . combined the idea of _ abc - boost _ with _ mart_. the algorithm , _ abc - mart _ , achieved good performance in multi - class classification on the datasets used in .we propose _ abc - logitboost _ , by combining _abc - boost _ with _ robust logitboost_ .our extensive experiments will demonstrate that _ abc - logitboost _ can considerably improve _ logitboost _ and _ abc - mart _ on a variety of datasets .our work is based on _ robust logitboost_ , which differs from the original _ logitboost _ algorithm .thus , this section provides an introduction to _robust logitboost_. commented that _ logitboost _ ( alg . 
[ alg_logitboost ] ) can be numerically unstable .the original paper suggested some `` crucial implementation protections '' on page 17 of : * in line 5 of alg .[ alg_logitboost ] , compute the response by ( if ) or ( if ) .* bound the response by $ ] .note that the above operations are applied to each individual sample .the goal is to ensure that the response is not too large ( note that always ) . on the other hand, we should hope to use larger to better capture the data variation .therefore , the thresholding occurs very frequently and it is expected that some of the useful information is lost . demonstrated that , if implemented carefully , _logitboost _ is almost identical to _ mart_. the only difference is the tree - splitting criterion .consider weights , and response values , to , which are assumed to be ordered according to the sorted order of the corresponding feature values .the tree - splitting procedure is to find the index , , such that the weighted mean square error ( mse ) is reduced the most if split at .that is , we seek to maximize \end{aligned}\ ] ] where , , and .after simplification , we obtain ^ 2}{\sum_{i=1}^s w_i}+\frac{\left[\sum_{i = s+1}^n z_iw_i\right]^2}{\sum_{i = s+1}^{n } w_i } - \frac{\left[\sum_{i=1}^n z_iw_i\right]^2}{\sum_{i=1}^n w_i}\end{aligned}\ ] ] plugging in , and as in alg .[ alg_logitboost ] , yields , ^ 2}{\sum_{i=1}^s p_{i , k}(1-p_{i , k})}+\frac{\left[\sum_{i = s+1}^n r_{i , k } - p_{i , k } \right]^2}{\sum_{i = s+1}^{n } p_{i , k}(1-p_{i , k } ) } - \frac{\left[\sum_{i=1}^n r_{i , k } - p_{i , k } \right]^2}{\sum_{i=1}^n p_{i , k}(1-p_{i , k})}.\end{aligned}\ ] ] because the computations involve as a group , this procedure is actually numerically stable . + in comparison , _mart_ only used the first order information to construct the trees , i.e. , ^ 2+\left[\sum_{i = s+1}^n r_{i , k } - p_{i , k } \right]^2 - \left[\sum_{i=1}^n r_{i , k } - p_{i , k } \right]^2.\end{aligned}\ ] ] 1 : , , to , to + 2 : for to do + 3 : for to do + 4 : -terminal node regression tree from , + : with weights as in section 2.1 . + 5 : + 6 : + 7 : end + 8 : , to , to + 9 : end alg .[ alg_robust_logitboost ] describes _ robust logitboost _ using the tree - splitting criterion developed in section [ sec_split ] .note that after trees are constructed , the values of the terminal nodes are computed by which explains line 5 of alg .[ alg_robust_logitboost ] .[ alg_robust_logitboost ] has three main parameters , to which the performance is not very sensitive , as long as they fall in some reasonable range .this is a very significant advantage in practice .the number of terminal nodes , , determines the capacity of the base learner . suggested . commented that is unlikely . in our experience , for large datasets ( or moderate datasets in high - dimensions ) , is often a reasonable choice ; also see .the shrinkage , , should be large enough to make sufficient progress at each step and small enough to avoid over - fitting . suggested .normally , is used . the number of iterations , , is largely determined by the affordable computing time .a commonly - regarded merit of boosting is that over - fitting can be largely avoided for reasonable and .1 : , , to , to + 2 : for to do + 3 : for to , do + 4 : for to , , do + 5 : -terminal node regression tree from + : with weights , as in section 2.1 .+ 6 : + + 7 : + 8 : end + 9 : + 10 : + 11 : + 12 : end + 13 : + 14 : + 15 : + 16 : end the recently proposed _abc - boost _ algorithm consists of two key components : 1 . 
using the widely - used _ sum - to - zero _constraint on the loss function , one can formulate boosting algorithms only for classes , by using one class as the * base class*. 2 . at each boosting iteration ,* adaptively * select the base class according to the training loss . suggested an exhaustive search strategy . combined _ abc - boost _ with _ mart _ to develop _ abc - mart_. demonstrated the good performance of _ abc - mart _ compared to _mart_. this study will illustrate that * _ abc - logitboost _ * , the combination of _ abc - boost _ with _ ( robust ) logitboost _ , will further reduce the test errors , at least on a variety of datasets .[ alg_abc - logitboost ] presents _ abc - logitboost _ , using the derivatives in ( [ eqn_abc_derivatives ] ) and the same exhaustive search strategy as in _ abc - mart_. again , _abc - logitboost _ differs from _ abc - mart _ only in the tree - splitting procedure ( line 5 in alg .[ alg_abc - logitboost ] ) .table [ tab_data ] lists the datasets in our experiments , which include all the datasets used in , plus _mnist10k_. ] ..for _ letter , pendigits , zipcode , optdigits _ and _ isolet _ , we used the standard ( default ) training and test sets . for _ covertype_ , we use the same split in . for _ mnist10k _ , we used the original 10000 test samples in the original _ mnist _ dataset for training , and the original 60000 training samples for testing .also , as explained in , _ letter2k _( _ letter4k _ ) used the last 2000 ( 4000 ) samples of _ letter _ for training and the remaining 18000 ( 16000 ) for testing , from the original _letter _ dataset . [ cols="<,>,>,>,>",options="header " , ] [ tab_covertype ] the results on _ covertype _ are reported differently from other datasets ._ covertype _ is fairly large . building a very large model ( e.g., boosting steps ) would be expensive .testing a very large model at run - time can be costly or infeasible for certain applications ( e.g. , search engines ) .therefore , it is often important to examine the performance of the algorithm at much earlier boosting iterations .table [ tab_covertype ] shows that _ abc - logitboost _ may improve _ logitboost _ as much as , as opposed to the reported in table [ tab_summary ] .[ tab_letter2k ] [ tab_letter4k ] [ tab_letter ] [ tab_pendigits ] [ tab_zipcode ] [ tab_optdigits ] for this dataset , only experimented with for _ mart _ and _ abc - mart_. we add the experiment results for .[ tab_isolet ] [ tab_isolet_mart ]multi - class classification is a fundamental task in machine learning .this paper presents the _ abc - logitboost _ algorithm and demonstrates its considerable improvements over _ logitboost _ and _ abc - mart _ on a variety of datasets .+ there is one interesting uci dataset named _ poker _ , with 25k training samples and 1 million testing samples .our experiments showed that _ abc - boost _ could achieve an accuracy ( i.e. , the error rate ) .interestingly , using libsvm , an accuracy of about was obtained .we will report the results in a separate paper .
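since the tree - splitting procedure of section 2.1 is the only place where abc - logitboost and abc - mart differ , it is worth spelling it out once in code . the sketch below scans a single feature for the split that maximizes the weighted least - squares gain , with weights p(1-p) and weighted responses r - p ; it is a stand - alone illustration , not the implementation used for the experiments reported above .

```python
import numpy as np

def best_split_gain(x, r, p):
    """Scan one feature for the split maximizing the weighted least-squares
    gain of section 2.1, with weights w = p*(1-p) and weighted responses
    w*z = r - p.  Returns (best gain, index of the split position).
    """
    order = np.argsort(x)
    w = (p * (1.0 - p))[order]
    wz = (r - p)[order]                       # w_i * z_i
    cw, cwz = np.cumsum(w), np.cumsum(wz)
    total = cwz[-1] ** 2 / cw[-1]
    # candidate splits between sorted positions s and s+1
    left = cwz[:-1] ** 2 / cw[:-1]
    right = (cwz[-1] - cwz[:-1]) ** 2 / (cw[-1] - cw[:-1])
    gain = left + right - total
    s = int(np.argmax(gain))
    return gain[s], s

# toy check: class-1 indicators r, predicted probabilities p, one feature x
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
r = (x + 0.3 * rng.normal(size=1000) > 0).astype(float)
p = np.full(1000, r.mean())
print(best_split_gain(x, r, p))
```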
we develop _ abc - logitboost _ , based on the prior work on _ abc - boost_ and _ robust logitboost_ . our extensive experiments on a variety of datasets demonstrate the considerable improvement of _ abc - logitboost _ over _ logitboost _ and _ abc - mart_.
multiple - input multiple - output ( mimo ) technology is becoming one of the most promising solutions for the next - generation wireless communication to meet the urgent demands of high - speed data transmissions and explosive growing numbers of user terminals , such as the traditional mobile equipments and the new internet of things ( iot ) devices .compared with conventional mimo mechanisms , massive mimo is capable of achieving higher reliabilities , increased throughputs and improved energy efficiency by employing less complicated signal processing techniques , e.g. , maximum - ratio combining / maximum - ratio transmission ( mrc / mrt ) , with inexpensive and low - power components . despite of the advantages , massive mimo is also facing significant challenges on the way towards practical applications .how to obtain precise channel state information ( csi ) while consuming limited resources is most critical and fundamental . in existing massive mimo studies , time - division duplex ( tdd )is more widely considered than frequency - division duplex ( fdd ) because it is potentially easier and more feasible to obtain csi .thanks to the channel reciprocity , a tdd system exploits uplink pilot training to estimate channels which can be used in both uplink and downlink data transmissions within a coherence interval where the interval length is determined by the mobility of user equipment . compared with the downlink one , the uplink pilot training saves a large amount of resources to estimate the channels of a large - scale antenna array because each pilot sequence can be used to estimate the channels between all base station antennas and a single - antenna user equipment .however , the uplink channel estimation has to deal with pilot contamination issues as the user number grows .it is reported in that pilot contamination reduces the system performance but can not be suppressed by increasing the number of antennas .generally , the length of pilot sequence should be equal to or greater than the number of users to guarantee the orthogonality of pilot patterns among different users , in order to avoid pilot contamination . in a massive mimo system , due to the large user number ,orthogonal pilot sequences become very long , causing significant overhead for channel estimation and thus degrading the effective system throughput .when the channel varies with time due to medium to high mobility , i.e. , relatively short coherence interval , the pilot overhead issue gets more severe as channel estimation needs to be done frequently .there are some efforts in the literature to reduce pilot overhead in the massive mimo cellular system serving a large number of users within a finitely long coherence interval .proposed a semi - orthogonal pilot design in and to transmit both data and pilots simultaneously where a successive interference cancellation ( sic ) method was employed to reduce the contaminations of interfering pilots .investigated the performance of a pilot reuse scheme in the single - cell scenario which distinguishes users by the angle of arrival and thereby reuses pilot patterns . 
a time - shifted pilot based scheme was proposed in and it was then extended to the finite antenna regime in to cope with the multi - cell scenarionevertheless , all the research introduced above focused on point - to - point ( p2p ) communications .few work has studied the pilot scheme design and optimization in massive mimo relaying systems , especially for one - way multipair communications .the relaying technique is an emerging cooperative technology capable of scaling up the system performance by orders of magnitude , extending the coverage and reducing power consumption . combining with massive mimo technologies wherebythe relay station ( rs ) is equipped with large scale antenna arrays , the performance of a relaying system can be dramatically improved . moreover , in spite of the conventional half - duplex ( hd ) system , the full - duplex ( fd ) relaying technique has attracted more interests recently due to the simultaneous uplink ( multi - access of the sources to rs ) and downlink ( broadcasting of rs to the destinations ) data transmissions , whereby the overall system performance is further improved .however , similar to the p2p system , the multiuser relaying system also suffers from the critical channel estimation overhead within limited coherence time intervals . for a relaying system, it may be even worse as both source and destination users need to transmit pilots within the coherence interval , where the coherence interval determined by user pairs may be shorter than or at best equal to that by each user .further , different from the p2p cellular system , the throughput of the whole relaying system is determined by the weaker one between the uplink and downlink connections .thus , it is critical to co - consider both uplink and downlink transmissions when designing the pilot scheme for the relaying system , while previous work in the literature did not take this into consideration .this paper investigates the pilot and data transmission scheme in multipair massive mimo relaying systems for both hd and fd communications . due to the massive antennas equipped on rs , the source - relay and relay - destination channels are asymptotically orthogonal to each other , and thereby the transmission phase of pilots and datacan be shifted to overlap each other to reduce the overhead of pilot transmission and accordingly to improve the system performance .based on this consideration , a transmission scheme with pilot - data overlay in both hd and fd communications is proposed in this paper . here , the main contributions of the paper are summarized as follows : * * pilot - data overlay transmission scheme design : * a transmission scheme with pilot - data overlay in both hd and fd multipair massive mimo relaying systems is proposed and designed . in the hd overlay scheme , destination pilots are transmitted simultaneously with source data transmission , such that the effective data transmission duration is increased . 
moreover ,both source and destination pilots are transmitted along with data transmission in the fd system and thus the effective data transmission duration can be further increased .however , pilot and data contaminate each other at the rs due to the simultaneous transmission .nevertheless , by exploiting the asymptotic orthogonality of massive mimo channels , this paper demonstrates that the received data and pilots can be well separated from each other with only residues of additive thermal noise by applying the mrc processing .after all , the effective data transmission duration is extended within the limited coherence time interval and therefore the overall system performance is improved . * * closed - form achievable rates and comparison with conventional schemes : * this paper derives _ closed - form expressions _ of the ergodic achievable rates of the considered relaying systems with the proposed scheme .the derived expression reveals that the loop interference ( li ) in the fd overlay scheme can be effectively suppressed by the growing number of rs antennas and no error propagation exists with the proposed scheme , which is a critical issue in where a semi - orthogonal pilot design is applied to the p2p system .numerical results show that the superiority of the proposed scheme persists even with 25 db li . for quantitative comparison between the proposed scheme and conventional ones , asymptotic achievable rates at both ultra - high and -low snrsare derived and the superiority of the proposed scheme is proved theoretically .* * power allocation design : * this paper designs an optimal power allocation for the fd overlay scheme to minimize the interference between pilot and data transmissions by properly regulating the source and relay data transmission power for a fixed pilot power and proposes a successive convex approximation ( sca ) approach to solve the non - convex optimization problem .simulation results indicate that the proposed approach further improves the achievable rate compared with the equal power allocation .in addition , the proposed approach is computationally efficient and converges fast . with typical configurations ( e.g. , total data transmission power at 20 db ), simulation shows that the proposed approach converges to a relative error tolerance at after a few , say 4 , iterations ._ organization : _ the rest of this paper is organized as follows .the channel and signal models are presented in section [ sec : systemmodel ] within which the conventional and overlay scheme is presented and proposed , respectively . in the following section [ sec : channelestimation ] , channel estimations of the proposed scheme applying to both hd and fd relaying systemsare elaborated in details and the system achievable rates are derived theoretically in section [ sec : achievablerateanalysis ] .section [ sec : asymptoticanalysis ] and [ sec : powerallocation ] extend the analyses to the asymptotic scenario and power allocation consideration , respectively .the results presented in section [ sec : numericalresults ] reveal the performance comparisons numerically .section [ sec : conclusion ] concludes our works ._ notations : _ the boldface capital and lowercase letters are used to denote matrices and vectors , respectively , while the plain lowercase letters are scalars . the superscript , and stands for the conjugate , transpose and conjugate - transpose of a vector or matrix , respectively . 
represents the identity matrix of size .the operator , and denotes the euclidean norm of a vector , the frobenius norm and the trace of a matrix , respectively . for statistical vectors and matrices , and utilized to represent the expectation and variance , respectively .moreover , is used to denote the almost sure convergence and the notation represents the complex gaussian random vector with zero mean and covariance matrix .finally , the postfix ] is the additive white gaussian noise ( awgn ) matrix constructed by rvs . by employing the mmse criteria , the estimate of source channelscan be obtained as = \frac{1}{\sqrt{k\rho_\mathrm{p}}}\mathbf{r}^\mathrm{a}[1]\bm{\phi}^{{\mathrm{h}}}\mathbf{\tilde{d}}_\mathrm{s}[1]\ ] ] where \triangleq(\mathbf{i}_k+\frac{1}{k\rho_\mathrm{p}}\mathbf{d}_\mathrm{s}^{-1})^{-1} ] is the error matrix constructed by columns mutually independent of the corresponding column entries of ] and ] and ] , \sim{{\mathcal{cn}}}(\mathbf{0 } , \varepsilon_{\mathrm{s}k}^2[1]\mathbf{i}_m) ] , and \triangleq\beta_{\mathrm{s}k}-\sigma_{\mathrm{s}k}^2[1] ] ( given by ( [ eq : destinationchannest ] ) ) denotes the mrt precoding matrix of the forwarding data ] , and \in\mathbb{c}^{m\times k} ] refer to sections [ sec : downlinkanalysis ] and [ sec : analysisofuplinkphasec ] . by applying mmse channel estimation ,the estimate of the source channels is obtained by =\frac{1}{\sqrt{k\rho_\mathrm{p}}}\mathbf{r}^\mathrm{a}[\iota]\bm{\phi}^\mathrm{h}\mathbf{\tilde{d}}_\mathrm{s}[\iota]\ ] ] where \triangleq\left(\mathbf{i}_k+\frac{\rho_\mathrm{d}\beta_\mathrm{li}+1}{k\rho_\mathrm{p}}\mathbf{d}_\mathrm{s}^{-1}\right)^{-1} ] denotes the error matrix of estimations . with regard to the columns of matrices ] , there exist \sim\mathcal{cn}\left(\mathbf{0 } , \sigma_{\mathrm{s}k}^2[\iota]\mathbf{i}_m\right) ] , where \triangleq{k\rho_\mathrm{p}\beta_{\mathrm{s}k}^2}/(\rho_\mathrm{d}\beta_\mathrm{li}+1+k\rho_\mathrm{p}\beta_{\mathrm{s}k}) ] , respectively , for from to . in this subsection , the communication in phase b is formulated during which the source data and destination pilots are transmitted simultaneously .to simplify the description , the source data is separated into two successive parts as ] is the awgn matrix consisting of rvs .the mrc is applied to combine signals received by the rs antennas , where the combiner is ] can be exactly detected from ] . and the mmse estimation follows = \mathbf{\hat{g}}_{\mathrm{d}}[\iota ] + \mathcal{e}_{\mathrm{d}}[\iota]\ ] ] where ] are independent of each other .particularly , the columns of both matrices , ] , are mutually independent random vectors , following distribution \mathbf{i}_m) ] , respectively , where \triangleq k\rho_\mathrm{p}\beta_{\mathrm{d}k}^2/\left(\rho_\mathrm{s}\sum_{i=1}^{k}\varepsilon_{\mathrm{s}i}^2[\iota]+1 + k\rho_\mathrm{p}\beta_{\mathrm{d}k}\right) ] , for from to . the covariance factor of the source channel estimate , ] , only depends on the source channel estimation errors , which are independent of . from this phenomenon, it is interesting to note that no error propagation exists for the proposed pilot - data transmission scheme in both duplex relaying systems , which differs from the semi - orthogonal pilot design proposed in where csi estimation errors accumulate as the increase of .this section characterizes the performance of the proposed pilot - data transmission scheme by evaluating achievable rates of the considered massive mimo relaying systems . 
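as a rough numerical illustration of the overlay idea described above (source data and destination pilots arriving at the relay together, then separated by mrc thanks to the asymptotic orthogonality of the channels), the following toy numpy sketch uses i.i.d. rayleigh channels, dft pilots and a least-squares channel estimate in place of the exact mmse expressions; all parameter values are arbitrary.

```python
import numpy as np

def overlay_detection_mse(M, K=10, rho_p=1.0, rho_s=1.0, seed=0):
    """toy check that mrc separates source data from overlapping destination
    pilots as the number of relay antennas M grows (i.i.d. rayleigh channels,
    orthogonal dft pilots, least-squares estimates instead of the exact mmse)."""
    rng = np.random.default_rng(seed)
    cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
    G_s, G_d = cn(M, K), cn(M, K)          # source->relay and destination->relay channels
    Phi = np.fft.fft(np.eye(K))            # orthogonal pilots, Phi^H Phi = K * I

    # phase A: source pilot training, least-squares estimate of G_s
    R_a = np.sqrt(rho_p) * G_s @ Phi.conj().T + cn(M, K)
    G_s_hat = R_a @ Phi / (np.sqrt(rho_p) * K)

    # phase B: source data and destination pilots arrive at the relay together
    X_s = cn(K, K)                         # one block of source data symbols
    R_b = np.sqrt(rho_s) * G_s @ X_s + np.sqrt(rho_p) * G_d @ Phi.conj().T + cn(M, K)

    # mrc with the estimated source channels; the pilot term averages out as M grows
    X_hat = G_s_hat.conj().T @ R_b / (np.sqrt(rho_s) * M)
    return np.mean(np.abs(X_hat - X_s) ** 2)

for M in (64, 256, 1024):
    print(f"M={M:5d}  detection mse={overlay_detection_mse(M):.3f}")
```

the printed detection mse should shrink roughly like 1/m, which is the behaviour the rate analysis relies on.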
for the multipair communication ,the normalized system achievable rate is defined as an average of sum rates among all user pairs over the entire transmission time , that is .\ ] ] the individual achievable rate in the decode - and - forward ( df ) relaying system is given by = \min\{\mathcal{r}^\mathrm{ul}_k[\iota ] , \mathcal{r}^\mathrm{dl}_k[\iota]\},\ ] ] where ] denote the uplink and downlink rates between user pair and rs in the coherence interval , respectively . herewe employ the technique developed by to approximate the ergodic achievable rate for per - link communication , i.e. , ] . in this technique, the received signal is separated into desired signal and effective noise terms , where the former term is the product of transmitted signal and the expectation of channels while the latter one consists of uncorrelated interferences and awgn .hence , only the statistical , other than instantaneous , csi is required to evaluate the achievable rate .the rate calculated by this technique is the lower bound of the exact one , and numerical results presented in both and show that it is tolerably close to the genie result produced by monte - carlo simulation .consequently , the per - link ergodic achievable rate within a coherence interval is bounded by = \tau_\mathrm{d}[\iota]\log_2(1+\gamma_k^{\mathrm{pl}}[\iota])\ ] ] where ] . here , ] and ] and transmitting to the destination channels with power ,the received signal at destination users is obtained , for user ( ) , as = \sqrt{\rho_\mathrm{d}}\alpha[\iota]\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_\mathrm{d}[\iota]\mathbf{x}[\iota ] + \mathbf{z}_k[\iota]\ ] ] where \in{{\mathbb{c}}}^{1\times t_\mathrm{d}} ] is the factor to normalize the average transmit power , i.e. , letting \mathbf{\hat{g}}_\mathrm{d}[\iota]\|^2\}=1 ] . 
to separate the desired signal from the interference and noise , ( [ eq : downlinkreceivedsignal ] ) can be rewritten as =\underbrace{\sqrt{\rho_\mathrm{d}}\alpha[\iota]{{\mathbb{e}}}\left\{\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}k}[\iota]\right\}\mathbf{x}_k[\iota]}_{\text{desired signal } } + \underbrace{\mathbf{\breve{z}}_k[\iota]}_{\text{effective noise}}\ ] ] where the effective noise is defined by \triangleq & \sqrt{\rho_\mathrm{d}}\alpha[\iota]\left(\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}k}[\iota]-{{\mathbb{e}}}\left\{\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}k}[\iota]\right\}\right)\mathbf{x}_k[\iota]\\ & + \sqrt{\rho_\mathrm{d}}\alpha[\iota]\sum_{i=1,i\neq k}^{k}\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}i}[\iota]\mathbf{x}_i[\iota ] + \mathbf{z}_k[\iota ] .\end{split}\ ] ] therefore , the effective sinr of the received signal at the destination user can be expressed as = \frac{\rho_\mathrm{d}\alpha^2[\iota]\left|{{\mathbb{e}}}\left\{\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}k}[\iota]\right\}\right|^2}{\rho_\mathrm{d}\alpha^2[\iota]{{\mathbb{v}\mathrm{ar}}}\left\{\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}k}[\iota]\right\ } + \mathrm{mi}_k^{\mathrm{dl}}[\iota ] + 1}\ ] ] where the power of the downlink multipair interference ( mi ) is defined by \triangleq\rho_\mathrm{d}\alpha^2[\iota]\sum_{i=1,i\neq k}^{k}{{\mathbb{e}}}\left\{\left|\mathbf{g}_{\mathrm{d}k}^{{\mathrm{h}}}[\iota]\mathbf{\hat{g}}_{\mathrm{d}i}[\iota]\right|^2\right\}.\ ] ] [ thm : downlinkachievablerate ] by employing the mrt processing , the achievable rate of the downlink data forwarded to the destination user ( ) in both hd and fd , for a finite number of rs transmitter antennas , can be characterized by = t_d\log_2\left(1+\gamma_k^{\mathrm{dl}}[\iota]\right)\ ] ] where = \frac{m\sigma_{\mathrm{d}k}^4[\iota]}{(\beta_{\mathrm{d}k}+1/\rho_\mathrm{d})\sum_{i=1}^{k}\sigma_{\mathrm{d}i}^2[\iota]}.\ ] ] see appendix [ apdx : prf : thm : downlinkachievablerate ] . in the proposed pilot - data overlay scheme ,the uplink data is transmitted in two successive phases where the first part of data is transmitted during phase b and the remaining is sent within phase c. for the two duplex systems , the phase b communication is similar while the phase c differs .the following description first conducts the rate analysis of phase b for both duplex systems , and then perform the phase c analysis distinguished by each mode . the row of ] is the li for the fd relaying while for hd , it is zero , and ] and ] and =\beta_{\text{d}k} ] . with similar manipulations ,the downlink effective sinr expressed by ( [ eq : gammadownlink ] ) can be reformulated as = \frac{mk\rho^2\beta_{\text{d}k}^4}{\sum_{i=1}^{k}\beta_{\text{d}i}^2}\ ] ] where =\beta_{\text{s}k}-k\rho\beta_{\text{s}k}^2 ] .by respectively substituting the above asymptotic effective sinrs into ( [ eq : uplinkachievablerate ] ) and ( [ eq : downlinkachievablerate ] ) , the asymptotic per - link achievable rates are obtained , and hence the asymptotic overall system rate by summing them up . for comparisons , the asymptotic achievable rates with the conventional pilot scheme at both high and lowsnrs are presented as follows : note that is used to indicate the conventional scheme in this section . 
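before comparing the two pilot schemes, a small numerical sketch of the finite-m closed form in theorem [thm:downlinkachievablerate] may help: the helper below evaluates the per-user downlink rate lower bound from the large-scale fading coefficients, using the interference-free form of the estimation variances (the data-interference term in the denominator of the variance is dropped for brevity). all inputs are toy values, not the simulation settings used later.

```python
import numpy as np

def downlink_rate_lower_bound(M, T_d, rho_d, rho_p, beta_d):
    """per-user downlink rate lower bound from theorem [thm:downlinkachievablerate]:
    gamma_k = M * sigma_k**4 / ((beta_k + 1/rho_d) * sum_i sigma_i**2),
    with the simplified (interference-free) mmse variances
    sigma_k**2 = K*rho_p*beta_k**2 / (1 + K*rho_p*beta_k)."""
    beta_d = np.asarray(beta_d, dtype=float)
    K = len(beta_d)
    sigma2 = K * rho_p * beta_d**2 / (1.0 + K * rho_p * beta_d)
    gamma = M * sigma2**2 / ((beta_d + 1.0 / rho_d) * sigma2.sum())
    return T_d * np.log2(1.0 + gamma)      # per-user downlink rate (bits/s/Hz per interval)

# hypothetical example: 128 antennas, 10 users with unit large-scale fading
print(downlink_rate_lower_bound(M=128, T_d=15, rho_d=2.0, rho_p=2.0, beta_d=np.ones(10)))
```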
by comparing the achievable rates of two pilot schemes via the effective sinrs computed above , corollary [ cry : asymptoticresults ]is drawn to confirm the advantages of the proposed scheme .[ cry : asymptoticresults ] with a large number of user pairs served by rs , the proposed pilot - data overlay transmission scheme outperforms the conventional one in multipair hd and fd massive mimo relaying systems for both high and low snrs .see appendix [ apdx : prf : cry : asymptoticresults ] .[ rmk : indepentofcointerval ] according to corollary [ cry : asymptoticresults ] , it is interesting to stress that the conclusion is independent of the coherence interval length and the number of successive coherence intervals used for continuous communications . therefore , the proposed scheme is always superior to , at least not worse than , the conventional scheme for both low and high snrs , regardless of the coherence interval length .by analyzing the effective sinrs of the fd relaying system , one can find that the self - interference between uplink and downlink data transmissions , and hence the impact upon pilots sending , is a major factor that limits the system achievable rate .therefore , making a tradeoff between uplink and downlink data transmission is necessary to improve the overall system performance .the power allocation is performed in the fd relaying system to optimize the transmit power of both source and forwarding data to control interference among uplink , downlink and pilot transmissions .note that this paper only considers balancing the source and rs data transmission power level while the power allocation for each individual source user is not taken into account .in addition , this paper assume the total power consumption of all data transmission to be constrained under a fixed pilot transmit power .in other words , data transmission power and are balanced with respect to a given to achieve the maximal overall system rate . to begin with, the power allocation problem is formulated as \\ & \text{subject to : } & & \mathcal{r}_k[\iota ] = \min\left\{\mathcal{r}_k^{\text{ul}}[\iota ] , \mathcal{r}_k^{\text{dl}}[\iota]\right\}\\ & & & \mathcal{l}t_d\left(k\rho_\text{s } + \rho_\text{d}\right ) = e_\mathrm{d}\\ & & & k=1,2,\cdots , k \text { and } \iota=1,2,\cdots,\mathcal{l } \end{aligned}\ ] ] where ] and ] and ] and the superscript denotes the corresponding value at the iteration . substituting ( [ opt : fd : constraintsineq2 ] ) into ( [ opt : fd : constraintsineq1 ] ) , the optimization problem can be reformulated in the iteration as \\ & \text{subject to : } & & \mathcal{r}^{(i)}_k[\iota ] \leq \mathcal{r}^{\text{ul}}_k[\iota](\bm{\rho}^{(i)})+\nabla^\text{t}\mathcal{r}^{\text{ul}}_k[\iota](\bm{\rho}^{(i)})(\bm{\rho}-\bm{\rho}^{(i)})\\ & & & \mathcal{r}^{(i)}_k[\iota ] \leq \mathcal{r}^{\text{dl}}_k[\iota](\bm{\rho}^{(i)})+\nabla^\text{t}\mathcal{r}^{\text{dl}}_k[\iota](\bm{\rho}^{(i)})(\bm{\rho}-\bm{\rho}^{(i)})\\ & & & \mathcal{l}t_d\left(k\rho_\text{s } + \rho_\text{d}\right ) = e_\mathrm{d}\\ & & & k=1,2,\cdots , k \text { and } \iota=1,2,\cdots,\mathcal{l}. \end{aligned}\ ] ] it is obvious that ( [ opt : fd : powalloc2 ] ) is a linear programming ( lp ) which can be solved efficiently by utilizing , e.g. 
, the conventional interior - point method .the lp in ( [ opt : fd : powalloc2 ] ) is solved repeatedly in iterations with increased by 1 each time , until the increment of is smaller than a set error tolerance .after all , the in the last iteration is the optimal power allocation that leads to the maximum sum rate .the sca approach is summarized by algorithm [ algtm : fd : sca ]. the complexity and convergence analyses of the proposed sca approach will be discussed in the numerical results of the next section .solve optimization problem ( [ opt : fd : powalloc1 ] ) , , , , , and optimized power allocation and initialize and set initialize solve the lp in ( [ opt : fd : powalloc2 ] ) and obtain the optimizer stop * loop * output and .in this section , the performance of the proposed pilot - data overlay transmission scheme and the power allocation approach are studied by simulation and numerical evaluation for both hd and fd relaying systems . unless otherwise specified , by default , the rs is equipped with 128 antennas serving 10 pairs of users , is set to be 3 db over the noise floor , the processing delay of the fd relaying forthe conventional pilot scheme is 1 symbol slot , the coherence time interval is set to be 40 time slots , the transmission power of pilots , source data and forwarding data satisfy db , and the monte carlo results are obtained by taking the average of 1000 random simulations with instantaneous channels .firstly , the system achievable rates under different snrs are evaluated and the performance is compared between the proposed and conventional schemes for both hd and fd relayings .the performance comparisons are shown in fig .[ fig : achievableratesnr ] where the snr varies from -30 db to 30 db . in the figure ,the lines represent the rates obtained by monte carlo simulation while the markers denote the ergodic achievable rate lower bound computed by the closed - form expressions with statistical csi ( refer to as theorem [ thm : downlinkachievablerate ] and [ thm : uplinkachievablerate ] ) .the comparison shows that the relative performance gap between monte carlo and the closed - form results is small , e.g. , 1.76 bits / s / hz for fd and 0.82 bits / s / hz for hd at 0 db of snr , which implies that our closed - form ergodic achievable rate expression is a good predictor for the system performance .in addition , fig .[ fig : achievableratesnr ] shows that the proposed scheme outperforms the conventional one in both high and low snr regions , which verifies corollary [ cry : asymptoticresults ] , where about 7.5 bits / s / hz and 2.5 bits / s / hz improvements are observed in the high snr region for the fd and hd systems , respectively . on top of that, the fd mode is shown exceeding the hd in system rates by about 13.6 bits / s / hz in high snrs with the proposed scheme , which verifies remark [ rmk : throughput ] .next , the performance of the massive mimo system versus the growing number of rs antennas is depicted in fig .[ fig : achievableratenumant ] .it is obvious that the rate gap between the proposed and conventional schemes increases as the number of rs antennas grows . 
with more rs antennas ,the source and destination channels are closer to be orthogonal to each other , and thus less interferences reside in the combined signal leading to improved performance of the proposed scheme .as expected , the increase of the li power degrades the fd performance and fig .[ fig : achievableratenumant ] shows the superiority of the proposed scheme in the fd system as the li power equals to both as small as 0 db and as large as 25 db .further , even as few as 20 antennas are deployed on rs , the proposed scheme still outperforms the conventional one , which verifies that the proposed scheme is feasible and superior even in medium scale mimo relaying systems . as discussed in section [ sec : powerallocation ] , further performance improvement of the proposed scheme can be obtained by balancing the tradeoff between pilot overhead and channel estimation accuracy where the estimation precision decreases due to the data interference as the percentage of data transmission time within a coherence interval increases .therefore , the length of the coherence interval and the snr of the received pilots are two crucial system parameters that affect the channel estimation overhead and accuracy , respectively . here , the performance improvement of the proposed scheme is verified with respect to the length of coherence intervals . according to fig .[ fig : achievableratecotime ] , the achievable rate of the proposed scheme always outperforms the conventional one for coherence interval from 20 to 300 time slots at both high and low snrs , which verifies the conclusion in remark [ rmk : indepentofcointerval ] .nonetheless , the gap between the two schemes decreases with the increase of the interval length due to the reduction of relative pilot transmission overhead within a longer coherence interval . in practical systems ,the network operator would prefer to serve more user pairs to improve the overall performance of the entire system at long coherence intervals , especially when a large number of antennas exist .therefore , the performance of both pilot schemes by varying the number of user pairs is examined at a fixed coherence interval . as shown in fig .[ fig : achievableratenumuser ] , the maximal rate is achieved when 12 user pairs are communicating simultaneously in the fd system by utilizing the proposed transmission scheme while only 8 user pairs are served with the conventional scheme whose sum rate rapidly deceases with the increase of user numbers .moreover , the proposed scheme outperforms the conventional one even with 2 user pairs served by the fd relaying system , which affirms the robustness of the proposed scheme when small number of users access to the network .in the hd mode , similar performance comparisons are observed .as expected , the growing number of user pairs also enlarges the rate gap between two pilot schemes .particularly , when the number of user pairs equals to the half length of coherence interval , i.e. 
, , all hd and conventional fd curves touch zero rates due to no time resource left for data transmission , except the fd overlay system still working in the high - throughput state .therefore , the comparisons reveals that both hd and fd relaying systems employing the pilot - data overlay scheme achieve higher system rates and serve more user pairs than those with the conventional pilot scheme .further , the fd overlay system emerges to be superior to all other systems in the extreme scenario .finally , the performance of the proposed power allocation approach is evaluated for the fd overlay relaying . in the simulation ,the pilot power is fixed to be 10 db per user and the total data transmission power per coherence interval varies from -10 db to 60 db for both optimal and equal power allocations .additionally , is set in the equal power allocation .[ fig : achievableratesnrpowalloc ] shows the comparison of the system rates between the two allocation schemes while the convergence of the proposed approach at various cases are depicted in fig .[ fig : convergitenumpowalloc ] . fig .[ fig : achievableratesnrpowalloc ] shows that the optimal power allocation achieves better performance than the equal power allocation for both low and high transmission powers , especially at moderate power region , where the rate increment becomes more noticeable .the rate of equal power allocation starts decreasing when the data transmission power increases to an extreme large value while the proposed scheme maintains a fixed high performance .such rate decrement of equal power allocation is due to more pilot contamination when data transmission power is large yet pilot power is fixed .in contrast , the optimal power allocation scheme can adaptively adjust the data transmission power to control the interference between pilot and data transmissions and therefore always achieve the best performance . in fig .[ fig : convergitenumpowalloc ] , the number of iterations is mostly below 4 if the relative error in algorithm [ algtm : fd : sca ] is set to be , indicating the low complexity of the proposed approach .[ fig : convergitenumpowalloc ] shows that the proposed sca approach always converges fast in solving the power allocation problem .channel estimation overhead is a critical limitation on improving performance of the multiuser massive mimo systems . dealing this problem ,this paper has proposed a pilot - data overlay transmission scheme in both hd and fd one - way multipair massive mimo relaying systems .the proposed scheme has exploited the orthogonality of source - relay and relay - destination channels with massive mimo setups at the rs and redesigned the pilot and data transmission scheme to increase the effective data transmission period , which improves the achievable rate performance of the system in practice .the asymptotic analysis in both low and high snrs with infinite number rs antennas has been conducted in this paper and it has proven the superiority of the proposed scheme theoretically . finally , a power allocation problem algorithm based on sca has been proposed for the fd system , which further increases the achievable rate of the relaying system .( the law of large numbers)[lm : lawlargenumbers ] let and are two mutually independent random vectors consisting of i.i.d . and rvs , respectively. then and .without loss of generality , we focus on the data detection of the source user , where . 
by highlighting the row of and normalizing its power , ( [ eq : datadetectionsb ] ) can be rewritten as follows : where and are the rows of and , respectively . the derivation attributes to the decomposition of source channels depicted by eq .( [ eq : sourcechannelvectordecompositionfirstinterval ] ) .moreover , because is independent of , ( ) and , by dividing both denominators and numerators by and applying lemma [ lm : lawlargenumbers ] , the almost surely convergence in the last equality holds due to .therefore , can be exactly detected from . where is independent of . where exploits lemma [ lm : fourthordermoment ] and the independence between and .mi defined by ( [ eq : midownlink ] ) can be calculated as follows : where exploits lemma [ lm : fourthordermoment ] . by substituting ( [ eq : proofdownlinke ] ) and( [ eq : proofdownlinkmi ] ) into ( [ eq : gammadownlinkorigin ] ) , the desired result comes out .the expectation , variance and uplink mi terms in both ( [ eq : gammauplinkoriginb ] ) and ( [ eq : gammauplinkoriginc ] ) can be calculated by employing the same manipulations as theorem [ thm : downlinkachievablerate ] does .the pi term is derived as where exploits the independence between and .the li term is obtained by where and exploit the independences among , and and the fact that , and takes .finally , the awgn term is straightforwardly derived as . by substituting above equations into ( [ eq : gammauplinkoriginb ] ) and ( [ eq : gammauplinkoriginc ] ) , theorem [ thm : uplinkachievablerate ] is obtained .here starts the proof with the high snr case . for the downlink , because and , it is obvious that for the uplink , the difference of the rates between the proposed and conventional schemes is evaluated as : where and .hence , holds , if and only if which is equivalent to where the left - hand - side term is -dependent while the right - hand - side term is a fixed finite value . in massive mimo systems, it is always assumed that and therefore ( [ eq : hd : prf : highsnruplink3 ] ) holds with finite and . hence , ( [ eq : hd : prf : highsnruplink ] ) holds , with respect to any within to .therefore , the proposed scheme outperforms the conventional one for high snrs in half - duplex communications by combining ( [ eq : hd : prf : highsnrdownlink ] ) and ( [ eq : hd : prf : highsnruplink ] ) . on the other hand , regarding low snrs ,it is obvious that therefore , the achievable rate of the proposed scheme is greater than those of the conventional one due to the multiplication factor .f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , `` scaling up mimo : opportunities and challenges with very large arrays , '' _ ieee signal process . mag ._ , vol . 30 , no . 1 ,pp . 4060 , jan .2013 . l. you , x. gao , xxia , n. ma , and y. peng , `` pilot reuse for massive mimo transmission over spatially correlated rayleigh fading channels , '' _ ieee trans .wireless commun ._ , vol . 1276 , no . 6 , pp . 115 , jun2015 .h. q. ngo , e. g. larsson , s. member , and s. member , `` large - scale multipair two - way relay networks with distributed af beamforming large - scale multipair two - way relay networks with distributed af beamforming , '' _ ieee commun ._ , vol .17 , no . 12 , pp .22882291 , dec .2013 .y. dai and x. dong , `` power allocation for multi - pair massive mimo two - way af relaying with robust linear processing , '' _ arxiv preprint arxiv:1508.06656 _ , pp .114 , aug .[ online ] .available : http://arxiv.org/abs/1508.06656 h. 
q. ngo , h. a. suraweera , m. matthaiou , and e. g. larsson , `` multipair full - duplex relaying with massive arrays and linear processing , '' _ ieee j. sel .areas commun ._ , vol .32 , no .9 , pp . 17211737 , sept .2014 .l. j. rodriguez , n. h. tran , and t. le - ngoc , `` optimal power allocation and capacity of full - duplex af relaying under residual self - interference , '' _ ieee wireless commun ._ , vol . 3 , no . 2 ,pp . 233236 , apr .2014 .e. bjrnson , l. sanguinetti , j. hoydis , and m. debbah , `` optimal design of energy - efficient multi - user mimo systems : is massive mimo the answer ? '' _ ieee trans . on wireless commun ._ , vol . 14 , no . 6 , pp . 3059 3075 , jun. 2015 .e. bjrnson , e. g. larsson , and m. debbah , `` massive mimo for maximal spectral efficiency : how many users and pilots should be allocated ? '' _ ieee trans .on wireless commun ._ , vol . 15 , no . 2 , pp .1293 1308 , feb .
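as a concrete, heavily simplified illustration of the sca power-allocation loop summarized in algorithm [algtm:fd:sca] of section [sec:powerallocation], the sketch below linearizes toy stand-ins for the uplink and downlink rate expressions at the current power point and repeatedly solves the resulting lp with scipy. the rate functions, gains, damping step and tolerances are hypothetical and are not the closed-form expressions or settings used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

T_d, K, E_d, M = 15.0, 4, 40.0, 128   # toy values: data slots, user pairs, power budget, antennas
a = 0.05 * M * np.ones(K)             # toy uplink effective gains
d = 0.04 * M * np.ones(K)             # toy downlink effective gains
beta_li = 1.0                         # toy loop-interference level

def R_ul(k, p):                       # toy stand-in for the closed-form uplink rate
    ps, pd = p
    return T_d * np.log2(1.0 + a[k] * ps / (beta_li * pd + ps + 1.0))

def R_dl(k, p):                       # toy stand-in for the closed-form downlink rate
    ps, pd = p
    return T_d * np.log2(1.0 + d[k] * pd / (ps + 1.0))

def grad(f, p, h=1e-5):               # numerical gradient w.r.t. (rho_s, rho_d)
    p = np.asarray(p, float)
    return np.array([(f(p + h * e) - f(p - h * e)) / (2.0 * h) for e in np.eye(2)])

p = np.array([E_d / (2.0 * K), E_d / 2.0])   # feasible start: half the budget to each side
for it in range(30):
    c = np.r_[0.0, 0.0, -np.ones(K)]         # variables [rho_s, rho_d, R_1..R_K]; maximize sum R_k
    A_ub, b_ub = [], []
    for k in range(K):
        for f in (lambda q, k=k: R_ul(k, q), lambda q, k=k: R_dl(k, q)):
            g = grad(f, p)
            row = np.zeros(2 + K)
            row[:2], row[2 + k] = -g, 1.0    # R_k <= f(p) + g . (rho - p)
            A_ub.append(row)
            b_ub.append(f(p) - g @ p)
    sol = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=[np.r_[K, 1.0, np.zeros(K)]], b_eq=[E_d],   # K*rho_s + rho_d = E_d
                  bounds=[(1e-3, None)] * 2 + [(None, None)] * K)
    if not sol.success:
        break
    p_new = sol.x[:2]
    if np.linalg.norm(p_new - p) < 1e-4:
        p = p_new
        break
    p = 0.5 * (p + p_new)                    # damped update, for stability of the toy example
print("power split (rho_s, rho_d):", np.round(p, 3))
print("sum rate:", sum(min(R_ul(k, p), R_dl(k, p)) for k in range(K)))
```

in the actual algorithm the exact closed-form rates and their analytic gradients would be used, with the relative error tolerance mentioned in the numerical results as the stopping rule.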
we propose a pilot - data transmission overlay scheme for multipair massive multiple - input multiple - output ( mimo ) relaying systems employing either half- or full - duplex ( hd or fd ) communications at the relay station ( rs ) . in the proposed scheme , pilots are transmitted in partial overlap with data to decrease the channel estimation overhead . the rs can detect the source data with minimal destination pilot interference by exploiting the asymptotic orthogonality of massive mimo channels . the pilot - data interference can then be effectively suppressed with the assistance of the detected source data in the destination channel estimation . due to the transmission overlay , the effective data period is extended , hence improving the system throughput . both theoretical and simulation results confirm that the proposed pilot - data overlay scheme outperforms the conventional separate pilot - data design in the limited coherence time interval scenario . moreover , asymptotic analyses at high and low snr regions demonstrate the superiority of the proposed scheme regardless of the coherence interval length . because data and pilots are transmitted simultaneously , proper allocation of the source data transmission and relay data forwarding power can further improve the system performance . hence , a power allocation problem is formulated and a successive convex approximation approach is proposed to solve the non - convex optimization problem with the fd pilot - data transmission overlay . massive mimo ; multipair relaying ; full - duplex ; pilot - data transmission overlay ; power allocation .
word embedding has gained popularity as an important unsupervised natural language processing ( nlp ) technique in recent years .the task of word embedding is to derive a set of vectors in a euclidean space corresponding to words which best fit certain statistics derived from a corpus .these vectors , commonly referred to as the _ embeddings _ , capture the semantic / syntactic regularities between the words .word embeddings can supersede the traditional one - hot encoding of words as the input of an nlp learning system , and can often significantly improve the performance of the system .there are two lines of word embedding methods .the first line is neural word embedding models , which use softmax regression to fit bigram probabilities and are optimized with stochastic gradient descent ( sgd ) .one of the best known tools is word2vec .the second line is low - rank matrix factorization ( mf)-based methods , which aim to reconstruct certain bigram statistics matrix extracted from a corpus , by the product of two low rank factor matrices .representative methods / toolboxes include hyperwords , glove , singular , and sparse .all these methods use two different sets of embeddings for words and their context words , respectively .svd based optimization procedures are used to yield two singular matrices .only the left singular matrix is used as the embeddings of words . however , svd operates on , which incurs information loss in , and may not correctly capture the _ signed correlations _ between words .an empirical comparison of popular methods is presented in .the toolbox presented in this paper is an implementation of our previous work .it is a new mf - based method , but is based on eigendecomposition instead .this toolbox is based on , where we estabilish a bayesian generative model of word embedding , derive a weighted low - rank positive semidefinite approximation problem to the pointwise mutual information ( pmi ) matrix , and finally solve it using eigendecomposition .eigendecomposition avoids the information loss in based methods , and the yielded embeddings are of higher quality than svd - based methods .however eigendecomposition is known to be difficult to scale up . to make our method scalable to large vocabularies, we exploit the sparsity pattern of the weight matrix and implement a divide - and - conquer approximate solver to find the embeddings incrementally .our toolbox is named _ positive - semidefinite vectors ( psdvec)_. it offers the following advantages over other word embedding tools : 1 .the incremental solver in psdvec has a time complexity and space complexity , where is the total number of words in a vocabulary , is the specified dimensionality of embeddings , and is the number of specified core words . note the space complexity does not increase with the vocabulary size .in contrast , other mf - based solvers , including the core embedding generation of psdvec , are of time complexity and space complexity .hence asymptotically , psdvec takes about of the time and of the space of other mf - based solvers , and space complexity , where is the number of word occurrences in the input corpus , and is the number of negative sampling words , typically in the range . ]; 2 . given the embeddings of an original vocabulary , psdvec is able to learn the embeddings of new words incrementally . 
to our best knowledge ,none of other word embedding tools provide this functionality ; instead , new words have to be learned together with old words in batch mode .a common situation is that we have a huge general corpus such as english wikipedia , and also have a small domain - specific corpus , such as the nips dataset . in the general corpus , specific termsmay appear rarely .it would be desirable to train the embeddings of a general vocabulary on the general corpus , and then incrementally learn words that are unique in the domain - specific corpus .then this feature of incremental learning could come into play ; 3 . on word similarity / analogy benchmark sets and common natural language processing ( nlp ) tasks , psdvec produces embeddings that has the best average performance among popular word embedding tools ; 4 .psdvec is established as a bayesian generative model .the probabilistic modeling endows psdvec clear probabilistic interpretation , and the modular structure of the generative model is easy to customize and extend in a principled manner . for example , global factors like topics can be naturally incorporated , resulting in a hybrid model of word embedding and latent dirichlet allocation . for such extensions, psdvec would serve as a good prototype . while in other methods , the regression objectives are usually heuristic , and other factors are difficult to be incorporated .psdvec implements a low - rank mf - based word embedding method .this method aims to fit the using , where and are the empirical unigram and bigram probabilities , respectively , and is the embedding of .the regression residuals are penalized by a monotonic transformation of , which implies that , for more frequent ( therefore more important ) bigram , we expect it is better fitted .the optimization objective in the matrix form is where is the pmi matrix , is the embedding matrix , is the bigram probabilities matrix , is the -weighted frobenius - norm , and are the tikhonov regularization coefficients .the purpose of the tikhonov regularization is to penalize overlong embeddings .the overlength of embeddings is a sign of overfitting the corpus .our experiments showed that , with such regularization , the yielded embeddings perform better on all tasks .is to find a weighted low - rank positive semidefinite approximation to .prior to computing , the bigram probabilities are smoothed using jelinek - mercer smoothing .a block coordinate descent ( bcd ) algorithm is used to approach , which requires eigendecomposition of .the eigendecomposition requires time and space , which is difficult to scale up . as a remedy, we implement an approximate solution that learns embeddings incrementally .the incremental learning proceeds as follows : 1 .partition the vocabulary into consecutive groups .take as an example . consists of the most frequent words , referred to as the * core words * , and the remaining words are * noncore words * ; 2 . accordinglypartition into blocks as partition in the same way . correspond to * core - core bigrams * ( consisting of two core words ) .partition into [l]{\boldsymbol{s}_{1}}\boldsymbol{v}_{1 } } & \negthickspace { \makebox[0pt][l]{\boldsymbol{s}_{2}}\;\boldsymbol{v}_{2 } } & \negthickspace { \makebox[0pt][l]{\boldsymbol{s}_{3}}\;\boldsymbol{v}_{3}}\end{pmatrix}\\ \rule{0pt}{15pt } \end{array}$ ] ; 3 . 
for core words , set , and solve using eigendecomposition , obtaining core embeddings ; 4 . set , and find that minimizes the total penalty of the 12-th and 21-th blocks ( the 22-th block is ignored due to its high sparsity ) : the columns in are independent , thus for each , it is a separate weighted ridge regression problem , which has a closed - form solution ; 5 . for any other set of noncore words , find that minimizes the total penalty of the -th and -th blocks , ignoring all other -th and -th blocks ; 6 . combine all subsets of embeddings to form . our toolbox consists of 4 python / perl scripts : ` extractwiki.py ` , ` gramcount.pl ` , ` factorize.py ` and ` evaluate.py ` . figure 1 presents the overall architecture ( flowchart : a corpus is processed by ` extractwiki.py ` and ` gramcount.pl ` into bigram statistics ; ` factorize.py ` runs ` we_factorize_em ( ) ` on the core block to produce core embeddings and ` block_factorize ( ) ` on the noncore blocks , looping while more noncore words remain ; all embeddings are then concatenated , saved to a .vec file , and evaluated by ` evaluate.py ` on 7 datasets ) . 1 . ` extractwiki.py ` first receives a wikipedia snapshot as input ; it then removes non - textual elements , non - english words and punctuation ; after converting all letters to lowercase , it finally produces a clean stream of english words ; 2 . ` gramcount.pl ` counts the frequencies of either unigrams or bigrams in a word stream , and saves them into a file . in the unigram mode ( ` -m1 ` ) , unigrams that appear less than a certain frequency threshold are discarded .
in the bigram mode ( ` -m2 ` ) , each pair of words in a text window ( whose size is specified by ` -n ` ) forms a bigram .bigrams starting with the same leading word are grouped together in a row , corresponding to a row in matrices and ; 3 . `factorize.py ` is the core module that learns embeddings from a bigram frequency file generated by ` gramcount.pl ` .a user chooses to split the vocabulary into a set of core words and a few sets of noncore words . `can : 1 ) in function ` we_factorize_em ( ) ` , do bcd on the pmi submatrix of core - core bigrams , yielding _ core embeddings _ ; 2 ) given the core embeddings obtained in 1 ) , in ` block_factorize ( ) ` , do a weighted ridge regression w.r.t . _ noncore embeddings _ to fit the pmi submatrices of core - noncore bigrams .the tikhonov regularization coefficient for a whole noncore block can be specified by ` -t ` . a goodrule - of - thumb for setting is to increase as the word frequencies decrease , i.e. , give more penalty to rarer words , since the corpus contains insufficient information of them ; 4 . ` evaluate.py ` evaluates a given set of embeddings on 7 commonly used testsets , including 5 similarity tasks and 2 analogy tasks .the python scripts use numpy for the matrix computation .numpy automatically parallelizes the computation to fully utilize a multi - core machine . the perl script ` gramcount.pl ` implements an embedded c++ engine to speed up the processing with a smaller memory footprint .our competitors include : * word2vec * , * ppmi * and * svd * in hyperwords , * glove * , * singular * and * sparse*. in addition , to show the effect of tikhonov regularization on `` psdvec '' , evaluations were done on an unregularized psdvec ( by passing `` ` -t 0 ` '' to ` factorize.py ` ) , denoted as * psd - unreg*. all methods were trained on an 12-core xeon 3.6ghz pc with 48 gb of ram .we evaluated all methods on two types of testsets .the first type of testsets are shipped with our toolkit , consisting of 7 word similarity tasks and 2 word analogy tasks ( luong s rare words is excluded due to many rare words contained ) . 7 out of the 9 testsets are used in .the hyperparameter settings of other methods and evaluation criteria are detailed in .the other 2 tasks are toefl synonym questions ( * tfl * ) and rubenstein & goodenough ( * rg * ) dataset . for these tasks ,all 7 methods were trained on the apri 2015 english wikipedia .all embeddings except `` sparse '' were 500 dimensional .`` sparse '' needs more dimensionality to cater for vector sparsity , so its dimensionality was set to 2500 .it used the embeddings of word2vec as the input . in analogy tasks ` google ` and ` msr ` , embeddings were evaluated using 3cosmul .the embedding set of psdvec for these tasks contained 180,000 words , which was trained using the blockwise online learning procedure described in section 5 , based on 25,000 core words .the second type of testsets are 2 practical nlp tasks for evaluating word embedding methods as used in , i.e. 
, named entity recognition ( * ner * ) and noun phrase chunking ( * chunk * ) .following settings in , the embeddings for nlp tasks were trained on reuters corpus , volume 1 , and the embedding dimensionality was set to 50 ( `` sparse '' had a dimensionality of 500 ) .the embedding set of psdvec for these tasks contained 46,409 words , based on 15,000 core words .[ cols="^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ]in this example , we train embeddings on the english wikipedia snapshot in april 2015 .the training procedure goes as follows : 1 .use ` extractwiki.py ` to cleanse a wikipedia snapshot , and generate ` cleanwiki.txt ` , which is a stream of 2.1 billion words ; 2 .use ` gramcount.pl ` with ` cleanwiki.txt ` as input , to generate ` top1grams-wiki.txt ` ; 3 .use ` gramcount.pl ` with ` top1grams-wiki.txt ` and ` cleanwiki.txt ` as input , to generate ` top2grams-wiki.txt ` ; 4 .use ` factorize.py ` with ` top2grams-wiki.txt ` as input , to obtain 25000 core embeddings , saved into ` 25000-500-em.vec ` ; 5 .use ` factorize.py ` with ` top2grams-wiki.txt ` and ` 25000-500-em.vec ` as input , and tikhonov regularization coefficient set to 2 , to obtain 55000 noncore embeddings .the word vectors of totally 80000 words is saved into ` 25000-80000-500-blkem.vec ` ; 6 .repeat step 5 twice with tikhonov regularization coefficient set to 4 and 8 , respectively , to obtain extra noncore embeddings . the word vectors are saved into ` 25000-130000-500-blkem.vec ` and ` 25000-180000-500-blkem.vec ` , respectively ; 7 .use ` evaluate.py ` to test ` 25000-180000-500-blkem.vec ` .we have developed a python / perl toolkit ` psdvec ` for learning word embeddings from a corpus .this open - source cross - platform software is easy to use , easy to extend , scales up to large vocabularies , and can learn new words incrementally without re - training the whole vocabulary .the produced embeddings performed stably on various test tasks , and achieved the best average score among 7 state - of - the - art methods .this research is supported by the national research foundation singapore under its interactive digital media ( idm ) strategic research programme .10 david m blei , andrew y ng , and michael i jordan .latent dirichlet allocation ., 3:9931022 , 2003 .manaal faruqui , yulia tsvetkov , dani yogatama , chris dyer , and noah a. smith . sparse overcomplete word vector representations . in _ proceedings of acl _ ,thomas k landauer and susan t dumais .a solution to plato s problem : the latent semantic analysis theory of acquisition , induction , and representation of knowledge ., 104(2):211 , 1997 . omer levy and yoav goldberg .neural word embeddings as implicit matrix factorization . in _ proceedings of nips 2014 _ , 2014 .omer levy , yoav goldberg , and ido dagan .improving distributional similarity with lessons learned from word embeddings ., 3:211225 , 2015 .omer levy , yoav goldberg , and israel ramat - gan .linguistic regularities in sparse and explicit word representations . in _ proceedings of conll-2014_ , page 171 , 2014 .david d lewis , yiming yang , tony g rose , and fan li .rcv1 : a new benchmark collection for text categorization research . , 5:361397 , 2004 .shaohua li , tat - seng chua , jun zhu , and chunyan miao .topic embedding : a continuous representation of documents . in _ proceedings of the the 54th annual meeting of the association for computational linguistics ( acl ) _ , 2016 .shaohua li , jun zhu , and chunyan miao . 
a generative word embedding model and its low rank positive semidefinite solution . in _ proceedings of the 2015 conference on empirical methods in natural language processingpages 15991609 , lisbon , portugal , september 2015 .association for computational linguistics .tomas mikolov , ilya sutskever , kai chen , greg s corrado , and jeff dean . distributed representations of words and phrases and their compositionality . in _ proceedings of nips 2013_ , pages 31113119 , 2013 .jeffrey pennington , richard socher , and christopher d manning .glove : global vectors for word representation ., 12 , 2014 .herbert rubenstein and john b. goodenough .contextual correlates of synonymy . , 8(10):627633 , october 1965 .nathan srebro , tommi jaakkola , et al . weighted low - rank approximations . in _ proceedings of icml 2003_ , volume 3 , pages 720727 , 2003 .karl stratos , michael collins , and daniel hsu .model - based word embeddings from decompositions of count matrices . in _ proceedings of acl _ , 2015 .joseph turian , lev ratinov , and yoshua bengio .word representations : a simple and general method for semi - supervised learning . in_ proceedings of the 48th annual meeting of the association for computational linguistics _ , pages 384394 .association for computational linguistics , 2010 .ancillary data table required for subversion of the codebase .kindly replace examples in right column with the correct information about your current code , and leave the left column as it is .c1 & current code version & 0.4 + c2 & permanent link to code / repository used of this code version & https://github.com/askerlee/topicvec + c3 & legal code license & gpl-3.0 + c4 & code versioning system used & git + c5 & software code languages , tools , and services used & python , perl , ( inline ) c++ + c6 & compilation requirements , operating environments & dependencies & python : numpy , scipy , psutils ; perl : inline::cpp ; c++ compiler + c7 & if available link to developer documentation / manual & n / a + c8 & support email for questions & shaohua.com +
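as a compact illustration of the two numerical steps at the core of psdvec (an eigendecomposition for the core block and a per-word weighted ridge regression for each noncore block), here is a toy numpy sketch. it omits the bigram weighting of the core block, the jelinek-mercer smoothing and the bcd iterations, so it is a simplification of what `we_factorize_em()` and `block_factorize()` do, not a drop-in replacement; all names and shapes are illustrative.

```python
import numpy as np

def core_embeddings(pmi_cc, d):
    """rank-d positive semidefinite approximation of the core-core pmi block:
    keep the d largest (clipped-to-nonnegative) eigenvalues.  this is the
    unweighted, unregularized special case of the bcd solution."""
    w, U = np.linalg.eigh((pmi_cc + pmi_cc.T) / 2.0)
    idx = np.argsort(w)[::-1][:d]
    lam = np.clip(w[idx], 0.0, None)
    return U[:, idx] * np.sqrt(lam)          # one row per core word

def noncore_embeddings(pmi_nc, V_core, weights, tikhonov):
    """per-word weighted ridge regression of each noncore pmi row (against
    the core words) on the core embeddings; one closed-form solve per word."""
    d = V_core.shape[1]
    V_non = np.zeros((pmi_nc.shape[0], d))
    for i in range(pmi_nc.shape[0]):
        W = np.diag(weights[i])              # per-bigram weights for word i
        A = V_core.T @ W @ V_core + tikhonov * np.eye(d)
        b = V_core.T @ W @ pmi_nc[i]
        V_non[i] = np.linalg.solve(A, b)
    return V_non

# toy usage with random data (50 core words, 20 noncore words, d = 10):
# pmi_cc = np.random.randn(50, 50); pmi_cc = (pmi_cc + pmi_cc.T) / 2
# V = core_embeddings(pmi_cc, 10)
# V_non = noncore_embeddings(np.random.randn(20, 50), V, np.random.rand(20, 50), 2.0)
```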
psdvec is a python / perl toolbox that learns word embeddings , i.e. , the mapping of words in a natural language to continuous vectors which encode the semantic / syntactic regularities between the words . psdvec implements a word embedding learning method based on a weighted low - rank positive semidefinite approximation . to scale up the learning process , we implement a blockwise online learning algorithm to learn the embeddings incrementally . this strategy greatly reduces the learning time of word embeddings on a large vocabulary , and can learn the embeddings of new words without re - learning the whole vocabulary . on 9 word similarity / analogy benchmark sets and 2 natural language processing ( nlp ) tasks , psdvec produces embeddings that have the best average performance among popular word embedding tools . psdvec provides a new option for nlp practitioners . word embedding , matrix factorization , incremental learning
we study in this paper the asymptotic behavior of the steady - state waiting time distribution of an m / g/1 queue with subexponential service time distribution and first - in - first - out ( fifo ) discipline .the goal is to provide expressions that will allow us to identify the different types of asymptotic behavior that the queue experiences depending on different combinations of traffic intensity and overflow levels .we give our results for the special case of an m / g/1 queue with the idea that the insights that we obtain are applicable to more general queues and even to networks of queues .the special case of an m / g/1 queue with regularly varying processing times was previously analyzed in , where it was shown that the behavior of , the steady - state waiting time distribution , can be fully described by the so - called heavy traffic approximation and heavy tail asymptotic ( see theorems 2.1 and 2.2 in ) .as pointed out in that work , the same type of results can be derived for a larger subclass of the subexponential family , in particular , for service time distributions whose tails decay slower than . as the main results of this paper show ,the behavior of for lighter subexponential service time distributions may include a third region where neither the heavy traffic approximation nor the heavy tail asympotic are valid , and where the higher order moments of the service time distribution start playing a role .the exact way in which these higher order moments appear in the distribution of is closely related to the large deviations behavior of an associated random walk and its corresponding cramr series .the approach that we take to understand the asymptotics of over the entire line is to provide approximations that hold uniformly across all values of the traffic intensity for large values of the tail , or alternatively , uniformly across all tail values for traffic intensities close to one . from such uniform approximationsit is possible to compute the exact thresholds separating the different regions of deviations of , which for service time distributions decaying slower than are simply the heavy traffic and heavy tail regions , and , for lighter subexponential distributions , include a third region where neither the heavy traffic approximation nor the heavy tail asymptotic hold .similar uniform approximations have been derived in the literature for the tail distribution of a random walk with subexponential increments in , , and , where the uniformity is on the number of summands for large values of the tail or across all tail values as the number of summands grows to infinity .the results in the paper are in some sense the equivalent for the single - server queue .to explain the idea behind our main results let us recall that one can approximate the tail distribution of the steady - state waiting time of a single - server queue with subexponential processing times , , via two well known approximations : the heavy traffic approximation and the heavy tail asymptotic respectively , where denotes the service time , the inter - arrival time , and the traffic intensity of the queue .we refer the reader to chapter x of and the references therein for more details on the history and the exact formulation of these limit theorems .the heavy traffic approximation is valid for the general gi / gi/1 queue and can be derived by using a functional central limit theorem type of analysis ( see , e.g. 
) .the theorem that justifies this approximation is obtained by taking the limit as the traffic intensity approaches one and is applicable for bounded values of .the heavy tail asymptotic is valid for the gi / gi/1 fifo queue with subexponential service time distribution ( see , e.g. , ) , and is obtained by taking the limit as goes to infinity for a fixed traffic intensity , that is , it is applicable for large values of .one can then think of combining these two approximations to obtain an expression that is uniformly valid on the entire positive axis . the approach we take in the derivation of the main theorems is to start with the pollaczek - khintchine formula for the distribution of the steady - state waiting time of the m / g/1 queue , which expresses it as a geometric random sum , and use the asymptotics for the tail distribution of the random walk .one of the difficulties in obtaining uniform asymptotics for the distribution of lies in the highly complex asymptotic behavior of the random walk .surprisingly , most of the cumbersome details of the asymptotics for the random walk disappear in the queue , but showing that this is indeed the case requires a considerable amount of work .the qualitative difference between queues with service time distributions with tails decaying slower than and their lighter - tailed counterparts comes from the asymptotic behavior of the random walk associated to the geometric random sum . the function has been identified as a threshold in the behavior of heavy tailed sums and queues in , and , respectively , to name a few references , and we provide here yet another example . as mentioned before, the approximations we provide can be used to derive the exact regions where the heavy traffic and heavy tail approximations hold , but we do not provide the details in this paper since our focus is on deriving uniform expressions for under minimal conditions on the service time distribution .the setting we consider is the same from where the busy period was analyzed .more detailed comments about the third region of asymptotics that arises when the service time distribution is lighter than can be found in remark [ r.mainremarks ] right after theorem [ t.main ] . for clarity , we state all our assumptions and notation in the following section , and our main results in section [ s.mainresults ] .finally , we mention that the expressions given in the main theorems can be of practical use as numerical approximations for , and based on simulation experiments done for service times with a pareto ( ) or weibull ( ) distribution , they seem to perform very well ( see section 4 in ) .it is worth pointing out that the uniform approximations given here are far superior than the heavy traffic or heavy tail approximations individually even in the regions where these are valid , which is to be expected since they are based on the entire pollaczek - khintchine formula ; they are also easy to compute given the integrated tail distribution of the processing times and its first few moments ( cumulants ) .let be the waiting time sequence for an m / g/1 fifo queue that is fed by a poisson arrival process having arrival rate and independent iid processing times .provided that the traffic intensity is smaller than one , we denote by the steady - state waiting time of the queue .we assume that is such that its integrated tail distribution , given by is subexponential , where . the sequence will denote iid random variables having distribution . 
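as a concrete reading of the pollaczek - khintchine representation used above , the steady - state waiting time is a geometric sum of iid draws from the integrated tail distribution just defined , which can be sampled directly . the pareto - type integrated tail and the parameter values below are illustrative choices only , and plain monte carlo of this kind degrades quickly for large tail values , which is exactly why the cited works resort to conditional monte carlo .

```python
# Monte Carlo sketch of the Pollaczek-Khintchine representation: the steady-state
# waiting time W of an M/G/1 FIFO queue is a geometric sum of iid variables drawn
# from the integrated tail of the service time distribution.
import numpy as np

def sample_waiting_time(rho, sample_integrated_tail, rng):
    n = rng.geometric(1.0 - rho) - 1           # P(N = n) = (1 - rho) rho^n, n = 0, 1, ...
    return sample_integrated_tail(n, rng).sum() if n > 0 else 0.0

def pareto_integrated_tail(alpha):
    # tail F_bar(y) = (1 + y)^(-alpha), sampled by inverse transform (illustrative choice)
    return lambda n, rng: (1.0 - rng.random(n)) ** (-1.0 / alpha) - 1.0

rho, alpha, x = 0.8, 1.5, 50.0
rng = np.random.default_rng(0)
runs = 200_000
hits = sum(sample_waiting_time(rho, pareto_integrated_tail(alpha), rng) > x
           for _ in range(runs))
print("P(W > x) ~", hits / runs)
```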
define to be the cumulative hazard function of and let be its hazard rate function ; note that is the density of .just as in and , we define the hazard rate index all the results presented in this paper hold for subexponential distributions ( its corresponding integrated tail distribution ) satisfying the following assumption .[ a.hazard ] 1 . ; 2 . , where assumption [ a.hazard ] is consistent with conditions b and c in and , respectively , and also very closely related to definition 1 in .all three of these works study the asymptotic behavior of random sums with subexponential increments applied to either the study of the busy period of a gi / gi/1 queue or to ruin probabilities in insurance . also , by proposition 3.7 in , assumption [ a.hazard ] ( a. ) is equivalent to the function being decreasing on for any , which is the same as equation ( 3 ) in , where uniform asymptotics for the tail behavior of a random walk with subexponential increments were derived . as mentioned in and , lemma 3.6 in implies that < \infty\ } \geq \liminf_{t \to \infty } t q(t) ] for all .furthermore , assumption [ a.hazard ] ( b. ) and lemma 3.6 in together imply that , which in turn implies that for some and , although the tail distribution of the busy period in queues with heavy tailed service times is related to that of its waiting time in the sense that it is determined by ( see ) , the approach to its analysis is rather different from that of the waiting time , so the only connection between the results in this paper and those cited above is the setting .this family of distributions includes in particular all regularly varying distributions , with , and all semiexponential distributions , with ; in these definitions is a slowly varying function .the regularly varying case with was covered in detail in .some subexponential distributions that do not satisfy assumption [ a.hazard ] are those decaying almost " exponentially fast , e.g. . before stating our main results in the following section , we introduce some more notation that will be used throughout the paper .let and . also , define and note that by proposition 3.7 in , is eventually decreasing for all , which implies that for all .in particular , for this implies that and .also , we obtain the relation , or equivalently , . combining this observation with our previous remark about assumption [ a.hazard ] ( b. )gives that for and any we have < \infty ] ( see , 5.1 and the references therein ) ; taking gives the threshold , and translating into the positive mean case gives .note that = \infty ] , so this choice of is very close to the boundary of the region . finally , the asymptotic as is known to hold , in the mean zero case , for ( see theorem 1 in ) , and provided that ( which occurs when ) , the translation into the positive mean case gives the threshold . when we can not guarantee that , so by taking the minimum between and we satisfy the condition , and therefore our choice of .we point out that since the thresholds do not need to be too precise , we ignored the constant inside of and in the definitions of and , respectively , to simplify the expressions . the first asymptotic for we propose is given by the following expression based on the pollaczek - khintchine formula , for , ,\ ] ] and for , , \label{eq : z_def_big2}\end{aligned}\ ] ] where n(0,1 ) and . throughout the paper we use the convention that whenever . 
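before moving to the theorems , a small numerical illustration of the hazard - rate quantities introduced above may help : the cumulative hazard is -log of the tail and the hazard rate is its derivative . the two families below are the ones named in the text ( regularly varying and semiexponential ) ; the specific parameter values are arbitrary .

```python
# Hazard-rate bookkeeping for an integrated tail distribution: cumulative hazard
# Q(t) = -log(F_bar(t)) and hazard rate q(t) = Q'(t).  Pareto and Weibull tails are
# used as stand-ins for the regularly varying and semiexponential families.
import numpy as np

def pareto_tail(alpha):
    F_bar = lambda t: (1.0 + t) ** (-alpha)
    q = lambda t: alpha / (1.0 + t)             # hazard rate
    return F_bar, q

def weibull_tail(beta):
    F_bar = lambda t: np.exp(-t ** beta)        # 0 < beta < 1: subexponential
    q = lambda t: beta * t ** (beta - 1.0)
    return F_bar, q

for name, (F_bar, q) in [("pareto(1.5)", pareto_tail(1.5)),
                         ("weibull(0.4)", weibull_tail(0.4))]:
    t = np.array([1.0, 10.0, 100.0])
    Q = -np.log(F_bar(t))                       # cumulative hazard
    print(name, "Q(t) =", Q, " t*q(t) =", t * q(t))
```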
our first theorem is formally stated below .[ t.sumapprox ] suppose assumption [ a.hazard ] is satisfied , and define according to and .then , \(i ) we point out that the approximation given by is explicit in the sense that given the exact form of , all the functions and parameters involved in the approximation are known . in particular , = \rho^{\frac{x}{\mu } }e^{\frac{\sigma^2 ( \log\rho)^2 x}{2 \mu^3 } } \phi\left ( \frac{\sqrt{\mu } \ ,t}{\sigma } + \frac{\sigma \sqrt{x}}{\mu^{3/2}}\log \rho \right).\ ] ] ( ii ) this approximation is suitable for numerical computations since it involves no integrals or infinite sums .( iii ) with some additional work once can show that the first term in and can be replaced by which is asymptotically equivalent to the heavy tail asymptotic for appropriate values of .we choose not to use this simpler expression because our numerical experiments show that it would result in a less accurate approximation for .( iv ) for the case , the middle term in provides a direct connection between the cramr region of asymptotics for the random walk and the asymptotic behavior of the queue , and also reiterates the qualitative difference between distributions decaying slower than and those with lighter tails ( see , , , to name some references ) .( v ) unlike the next approximation , given in theorem [ t.main ] , the expression does not work as a uniform asymptotic in as for , since it does not converge to one for small values of .nevertheless , it is not difficult to show that for any as ( see the proof of lemma 3.3 in ) . in the same spirit of the heavy traffic approximations in and , where is approximated by where is a power series in ,our second result derives an approximation that involves a power series in . the number of terms in this power series is also determined by ( as in the definition of ) , and its coefficients are closely related to those of the cramr series .this other approximation substitutes the second term in and the second and third terms in by their corresponding asymptotic expression as .the intuition behind this substitution is that these terms only dominate the behavior of when the effects of the heavy traffic are more important than those of the heavy tails . besides unifying the cases and , this new approximation will also have the advantage of being uniformly good for as . in order to state our next theoremwe need the following definitions .let where , and are the coefficients of the cramr series corresponding to .this function can be obtained by expanding into powers of ; the details can be found in lemma [ l.lambda ] .we also need to define to be the smallest positive solution to .some properties of and are given in the following lemma .[ l.ustar ] define according to and let be the smallest positive solution to . then is concave in a neighborhood of the origin , and as , where and for , , , and the second approximation for that we propose is where . the precise statement of our result is given below .[ t.main ] suppose assumption [ a.hazard ] is satisfied , and define according to. then , moreover , [ r.mainremarks ] ( i ) as mentioned earlier , the difference between and is in the terms that correspond to the behavior of the queue when the effects of the heavy traffic dominate those of the heavy tails . 
in particular , what prevents from being uniformly good for all values of as is that if is bounded , then the second term in and the second and third terms in do not converge to one when , which can be fixed by substituting them by their asymptotic expression as ; evaluating at the value guarantees that the contribution of becomes negligible when the queue is in the heavy tail regime .( ii ) for analytical applications , lemma [ l.ustar ] states that can be written as a power series in whose terms of order greater than can be ignored . for numerical implementations , nonetheless, it might be easier to compute by directly optimizing , since is just a polynomial of order .( iii ) by simply matching the leading exponents of the heavy tail asymptotic and the function , that is , by solving the equation we obtain that the heavy tail region is roughly , whereas on one should use to approximate .it follows that the heavy traffic region is given by the subset of where is asymptotically equivalent to , the heavy traffic approximation for the m / g/1 queue .we note that when , the heavy traffic region is the entire , but it is a strict subset of if , in which case a third region of asymptotics arises where neither the heavy traffic nor the heavy tail approximations are valid .( iv ) as mentioned before , the coefficients of can be easily obtained from the first coefficients of the cramr series of , which in turn can be obtained from the cumulants of .we end this section with a formula that can be used to compute the coefficients of the cramr series .the following formula taken from can be used to recursively compute the coefficients in the cramr series , and we include it only for completeness .let be a random variable having , , and cumulants .let be the coefficients of the ( formal ) cramr series of , i.e. , .let .then , for and , the first four coefficients are given by the rest of the paper consists mostly of the proofs of all the results in section [ s.mainresults ] and is organized as follows .section [ s.uniformrw ] states an approximation for that is valid for all pairs and that will be used to derive uniform asymptotics for .section [ s.sumapproxproof ] contains the proof of theorem [ t.sumapprox ] ; and section [ s.mainproof ] contains the proofs of lemma [ l.ustar ] and theorem [ t.main ] .we conclude the paper by giving a couple of numerical examples comparing the two suggested approximations for the tail distribution of , and , in section [ s.numerical ] . 
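the cramér - series coefficients discussed above are built from the cumulants of the increment distribution ; only the standard moments - to - cumulants step that feeds that construction is sketched below , not the recursion for the coefficients themselves . the exponential(1) values ( 1 , 1 , 2 , 6 ) are a well - known sanity check .

```python
# First four cumulants from raw moments via the standard identities.
def cumulants_from_moments(m1, m2, m3, m4):
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 - 3*m1*m2 + 2*m1**3
    k4 = m4 - 4*m1*m3 - 3*m2**2 + 12*m1**2*m2 - 6*m1**4
    return k1, k2, k3, k4

# exponential(1): raw moments are n!, cumulants should be (n-1)!
print(cumulants_from_moments(1.0, 2.0, 6.0, 24.0))   # -> (1.0, 1.0, 2.0, 6.0)
```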
a table of notation is included at the end of the paper .in this section we will state the uniform approximation for that we will substitute in the pollaczek - khintchine formula outside of the heavy - tail region .this approximation was derived in for mean zero and unit variance random walks and it works on the whole positive line as .although rather complicated as an approximation for , it will be useful in the derivation of simpler expressions for the queue with the level of generality that we described in section [ s.modeldescription ] .for the heavy - tail region ( small values of ) we will use in section [ ss.firstapprox ] a result from to prove that as uniformly in the region .we start by stating the assumptions needed for the mean zero and unit variance random walk , and after giving the approximation in this setting we will show that under assumption [ a.hazard ] , the random variable satisfies these conditions .then we will apply a slightly modified version of the approximation to the positive mean case and we will show that it holds uniformly in the region .the notation as means . we will also use to denote a generic positive constant , i.e. , , , etc .[ a.stdcase ] let be a random variable with = 0 ] , where throughout this section let , where and are the coefficients of the cramr series of , and let , where are iid with common distribution .we also define the functions we start by proving some properties about the functions , and . [ l.g_properties ] suppose assumption [ a.stdcase ] holds .then , for any there exists a constant such that 1 . is decreasing for all , 2 . for all , 3 . for all and any , 4 . for all and any , also , the following limit holds 1 . , part ( a. ) follows directly from proposition 3.7 in . for part ( b. )note that is eventually decreasing for any , so it follows that for all for some .this in turn implies that for all , and therefore , .for part ( c. ) note that proposition 3.7 in gives for any and all sufficiently large , then it follows from noting that is strictly increasing for large enough , that for part ( d. ) let and define , . by proposition 3.7 in , , from where we obtain that it follows that , where and .for part ( e. ) let and note that by part ( a ) for any and sufficiently large , and by assumption , so simply choose to see that the last limit is zero .[ l.roz_pi ] suppose assumption [ a.stdcase ] holds .define then , for any constant , as , uniformly for . choose and set .define and suppose first that and note that in this case and .note that we can choose above so that .then , by lemma [ l.g_properties ] ( a. ) , decreases for all sufficiently large . also , and as .define and note that then , by lemma 1a in , we have as , uniformly for , where is an arbitrary constant ( see the statement of remark 1 in to see that the constant can be arbitrary ) .suppose now that and recall that by assumption < \infty ] .choose and set .note that by lemma [ l.g_properties ] ( a. ) is eventually decreasing . also , since , by the clt .define and as in lemma [ l.roz_pi ] and let be given by .set and note that by lemma [ l.g_properties ] ( b. ) for all sufficiently large , so . since as , we have let , and define then , by theorem 2 and remark 1 from , as , uniformly for all and for any ] , and by by lemma [ l.g_properties ] ( c. ) and ( d. ) , we now give a lemma stating that under assumption [ a.hazard ] , the random variable satisfies assumption [ a.stdcase ] . 
throughout the rest of the paper , andthe functions and , as well as the constant , are defined according to this function .[ eq : stdtononneg ] suppose satisfies assumption [ a.hazard ] , then satisfies assumption [ a.stdcase ] .let , then , and since as , then . also , since has lebesgue density , then has lebesgue density .it follows that by , there exists such that for all sufficiently large .it follows that where if we have . therefore , and clearly , if then and .if we already showed that , which combined with gives , which in turn implies that .we also note that for any since for any we have , it follows that .finally , from the discussion following the definition of , equation , we have that < \infty ] .we are now ready to give a uniform approximation for that will work over the region .we choose not to use this approximation in the heavy tail region to avoid having to show that it is equivalent to the heavy tail asymptotic .instead , we use a result from that will give us without much additional work the heavy tail asymptotic directly .we point out that we will not apply theorem [ t.rozovskii ] to the positive mean exactly the way it is stated , but instead we use a slight modification that will work better when applied to the queue .in particular , we will substitute the function given by , where , with the following the function given in does not need to be modified since its contribution will be shown to be negligible in the queue .[ l.uglytail ] suppose satisfies assumption [ a.hazard ] .let , fix and define where , and are given by , and , respectively .then , moreover , there exist constants such that ] .it can be verified that for sufficiently large , where , so all that remains to show is that as for all .note that after some algebra we can obtain the equivalence since for sufficiently large , it follows that for , while for we have and to analyze and the corresponding segment of define , then since for we have , it follows that and the corresponding segment of are bounded by where . tobound and the corresponding segment of we note that for we have ( recall that if ) , so where for the third inequality we used the relation for all .therefore , and the corresponding segment of are bounded by where . to bound the last segment of note that the preceding calculation yields since implies that , then so where we have thus shown that when , finally , to bound we use the inequalitvy to obtain , for , it follows that is bounded by where . we conclude that for all .this completes the proof .we will now give an approximation for , that although too complicated to be used in practice , will serve as an intermediate step towards obtaining the more explicit approximations given in theorems [ t.sumapprox ] and [ t.main ] .the idea of this section is to substitute in the pollaczek - khintchine formula the heavy - tail approximation in the range , and by , as defined in lemma [ l.uglytail ] , in the range .the intermediate approximation for is given by where , and , and are given by , and , respectively .the last term in corresponds to the so - called intermediate domain " , where as mentioned in section [ s.mainresults ] , the asymptotic behavior of is rather complicated . 
under additional ( differentiability ) assumptions on ,more explicit asymptotics for have been derived in ( see also for other results applicable to this region ) .we point out that is very close " to being the approximation in theorem [ t.sumapprox ] if we replace with and ignore the entire third term of , to see this sum the tail of the second term of to write it as the expectation of a function of a normal random variable .we will now show the asymptotic equivalence of and .[ l.heavytail_region ] suppose satisfies assumption [ a.hazard ] , then recall that and .by lemma [ l.rightinverse ] , is non decreasing , and .let , and note that then by theorem 3.1 in , next , we will show that for we have .first , when we have , so implies similarly , when and , we have that implies these observations , combined with the fact that the subexponentiality of implies that as uniformly for for some completes the proof . combining lemmas [ l.heavytail_region ] and [ l.uglytail ] gives the following result .[ p.uglyapprox ] define according to and suppose satisfies assumption [ a.hazard ] , then , this first approximation for might not very useful in practice since it involves two integrals , those in the definition of , that are not in general closed - form , and two indicator functions that depend on the quantity ( the solution to a certain optimization problem ) .the approximation given in theorem [ t.sumapprox ] is more explicit , and thus more suitable for computations , both numerical and analytical .the proof of theorem [ t.sumapprox ] is rather technical , so we divide into several lemmas , the first of which gives some more properties of the functions and .[ l.b_and_w ] suppose satisfies assumption [ a.hazard ] .let and be defined according to and , respectively .then , 1 . , 2 . . to show the first limit in ( a. )use proposition 3.7 in with some as follows , for the second limit we first note that the same arguments used above give , so all we need to show is that . that this is the case follows from and , which gives for large . for part ( b. ) next define according to and , and according to .let \right| , & \kappa = 2 , \\\left| \sum_{n = n(x)+1}^{\infty } ( 1-\rho ) \rho^n \hat\pi_\kappa(x , n ) - e\left [ \rho^{a(x , z ) } \indicator\left(\sigma z \leq \sqrt{\mu \log x } \right ) \right ] \right| , & \kappa > 2 , \end{cases } \\ e_3(\rho , x ) & = \sum_{n = k_r(x)+1}^{\infty } ( 1-\rho ) \rho^n j(y , n ) \indicator(y > ( 1-\epsilon ) c_n).\end{aligned}\ ] ] then , we will split the proof of theorem [ t.sumapprox ] into three propositions , each of them showing that as uniformly for , and some auxiliary lemmas .we start by giving a result that provides lower bounds for .[ l.lowerbound ] fix and let .then , for any , while for , where was defined in lemma [ l.ustar ] .let and note that the first statement follows from the observation that for we have for the second statement consider first the case , for which and \\ & = e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2 x(\log\rho)^2}{2 \mu^{3 } } } \phi\left ( \frac{\sqrt{\mu}\omega_1^{-1}(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x}}{\mu^{3/2 } } \log\rho \right ) \\ & \geq e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \phi\left ( \frac{\sqrt{\mu}\omega_1^{-1}(x)}{\sigma\sqrt{x } } - \frac{\sigma cq(x)}{\sqrt{\mu x } } \right ) = e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } ( 1 + o(1))\end{aligned}\ ] ] as , for all ( since by lemma [ l.b_and_w ] ( a. 
) ) .for we split the interval into two parts as follows .define .then , for , \geq \phi(0 ) e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2}{2\mu^3 } x(\log\rho)^2 } \geq c e^{\frac{x}{\mu } \lambda_\rho(u(\rho))}.\ ] ] for the interval ( assuming ) , let and use lemma [ l.lambda ] to obtain by lemma [ l.ustar ] , is concave on ] we have so on ] .then , for any and sufficiently large , where in the second inequality we used lemma [ l.g_properties ] ( c. ) .similarly , for any and sufficiently large , it follows that for sufficiently large .hence , and by lemma [ l.uniformbound ] , by using the inequality for any , and observing that for all , we obtain , for such and all sufficiently large , it follows that note that by lemma [ l.b_and_w ] ( a. ) , which implies that for sufficiently large , finally , by using the inequality for again , and , we obtain that [ l.e2bound ] let if and , if , then .\end{aligned}\ ] ] recall that and n(0,1 ) .define .note that exact computation gives , \\ & = e\left [ \rho^{\max\{\lfloor a(x , z ) \rfloor+1 , l_\kappa(x)+1\ } } \right ] \\ & = e\left [ \rho^{\lfloor a(x , z ) \rfloor + 1 } \indicator ( a(x , z ) \geq l_\kappa(x ) ) \right ] + \rho^{l_\kappa(x)+1 } p ( a(x , z ) < l_k(x ) ) \\ & = e\left [ \rho^{\lfloor a(x , z ) \rfloor + 1 } \indicator\left(z \leq ( x-\mu l_\kappa(x))/\sqrt{\sigma^2 x/\mu } \right ) \right ] + \rho^{l_\kappa(x)+1 } \phi\left ( -(x-\mu l_\kappa(x))/\sqrt{\sigma^2 x/\mu } \right).\end{aligned}\ ] ] observe that , from where it follows that can further be bounded by \label{eq : indicators_2 } \\ & \hspace{5 mm } + \left| e\left [ \left(\rho^{\lfloor a(x , z ) \rfloor + 1 } - \rho^{a(x , z)}\right ) \indicator\left(z \leq h_\kappa(x)/\sqrt{\sigma^2 x/\mu}\right ) \right ] \right| \label{eq : integerpart_2 } \\ & \hspace{5 mm } + \rho^{l_\kappa(x)+1 } \phi\left ( -(x-\mu l_\kappa(x))/\sqrt{\sigma^2 x/\mu } \right ) \label{eq : extraterm_2}.\end{aligned}\ ] ] next , note that since is decreasing in , we obtain that is bounded by \\ & = \rho^ { l_\kappa(x ) + 1 } \left ( \phi\left ( h_\kappa(x)/\sqrt{\sigma^2 x/\mu } + \mu^{3/2}/\sqrt{\sigma^2 x } \right ) - \phi\left ( h_\kappa(x)/\sqrt{\sigma^2 x/\mu } \right )\right ) \\ & \leq \rho^{\lfloor \frac{1}{\mu } \left(x-h_\kappa(x ) \right ) \rfloor + 1 } \phi'\left ( h_\kappa(x)/\sqrt{\sigma^2 x/\mu } \right ) \frac{\mu^{3/2}}{\sigma \sqrt{x } } \\ & \leq \frac{c}{\sqrt{x } } \rho^ { \frac{1}{\mu } \left ( x - h_\kappa(x ) \right ) } e^{-\frac{\mu ( h_\kappa(x))^2}{2\sigma^2x } } .\end{aligned}\ ] ] for we use the simple bound .\ ] ] and for we use the inequality for any to obtain the bound [ p.e2 ] under the assumptions of theorem [ t.sumapprox ] , let if and , if , then , by lemma [ l.e2bound ] , we have that . \end{aligned}\ ] ] fix and define .we will first show that is as uniformly for . before we proceed notethat as ( by lemma [ l.b_and_w ] ( b. 
) for ) and \\ & = e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2 x(\log\rho)^2}{2 \mu^{3 } } } \phi\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x } \log\rho}{\mu^{3/2 } } \right ) \\ & \leq e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2 x(\log\rho)^2}{2 \mu^{3 } } } \phi'\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x } \log\rho}{\mu^{3/2 } } \right ) \indicator\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x } \log\rho}{\mu^{3/2 } } \leq -1 \right ) \\ & \hspace{3 mm } + e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2 x(\log\rho)^2}{2 \mu^{3 } } } \indicator\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x } \log\rho}{\mu^{3/2 } } > -1 \right ) \\ & = \frac{1}{\sqrt{2\pi } } \rho^{(x - h_\kappa(x))/\mu } e^{- \frac{\mu ( h_\kappa(x))^2}{2\sigma^2 x } } \indicator\left ( |\log\rho| \geq \frac{\mu^2 h_\kappa(x)}{\sigma^2 x } + \frac{\mu^{3/2}}{\sigma \sqrt{x } } \right ) \\ & \hspace{3 mm } + e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2 x(\log\rho)^2}{2 \mu^{3 } } } \indicator\left ( |\log\rho| < \frac{\mu^2 h_\kappa(x)}{\sigma^2 x } + \frac{\mu^{3/2}}{\sigma \sqrt{x } } \right),\end{aligned}\ ] ] where for the inequality we used for .furthermore , it follows that for sufficiently large , is bounded by now we use lemma [ l.lowerbound ] and the observation that as to obtain as . for the range we first note that ] , and , by lemma [ l.b_and_w ] ( a. ) , .this completes the proof .propositions [ p.e1 ] , [ p.e2 ] and [ p.e1 ] give which combined with proposition [ p.uglyapprox ] give in this section we prove lemma [ l.ustar ] and theorem [ t.main ] . to ease the reading we restate the definition of below . where is given by , and is the smallest positive solution to .we start with the proof of lemma [ l.ustar ] and then split the proof of theorem [ t.main ] into three parts . that is concave in a neighborhood of the origin follows from if have , which is maximized at and satisfies . in general , for recall that , so is the solution to the equation . by lagranges inversion theorem , where furthermore , by fa di bruno s formula , where , , and .note that . finally , since we have we now prove two preliminary results before we proceed to the proof of theorem [ t.main ] .[ l.lambda ] let be given by and set .then , as .define the function and note that by expanding into its taylor series centered at zero we obtain recall from section [ s.modeldescription ] ( after equation ) that and for any and sufficiently large .it follows that for we have as .hence , as , uniformly in .the second preliminary result is an application of laplace s method , which states that the asymptotic behavior of an integral of the form as , is determined by the value of the integral in a small interval around the maximizer of on the interval $ ] .what makes the proof below very technical is that the limits of integration are functions of .[ l.watson ] let , , , and define . then, under the assumptions of theorem [ t.main ] , as , let and define , . then by lemma [ l.lambda ] , as , uniformly for .it remains to show that we start by computing the derivatives of : and note that for all .also , we have for all . set and note that for , , so for sufficiently large we have to bound note that for some between and , so is bounded by , where to see that note that for we have and also .let and let .we start by analyzing , for which we have as , uniformly for . 
to analyze note that on we have , which yields as .we have thus shown that i s as , uniformly for .we now need to show that the same is true of .note that is bounded by note that for any and some between and , which gives that for , , and for , , and therefore , is bounded by this completes the proof .finally , we give below the proof of theorem [ t.main ] .note that by theorem [ t.sumapprox ] we know that so for the first statement of theorem [ t.main ] it suffices to show that the second statement , which refers to the uniformity in as will follow from lemma 3.3 in once we show that as . to see this isthe case simply note that for all as . we now proceed to establish .let if and if , and set . then, - e^{\frac{x}{\mu } \lambda_\rho(w(\rho , x ) ) } \right| \\ & \leq \frac{\sigma \sqrt{x}(1-\rho)}{\sqrt{2\pi \mu } } \left| \sum_{n = m(x)+1}^{n(x ) } \rho^n \frac{e^{n q_k\left ( \frac{x - n\mu}{\sigma n}\right ) } } { x - n\mu } - \sum_{n = m(x)+1}^{n(x ) } \frac{e^{\frac{x}{\mu } \lambda_\rho(u_n ) } } { xu_n } \right| \indicator(\kappa > 2 ) \\ & \hspace{3 mm } + \left| \frac{\sigma ( 1-\rho)}{\sqrt{2\pi \mu x } } \sum_{n = m(x)+1}^{n(x ) } \frac{e^{\frac{x}{\mu } \lambda_\rho(u_n ) } } { u_n } \indicator(\kappa > 2 ) \right . \\ & \hspace{3 mm } \left .+ e\left [ \rho^{a(x , z ) } \indicator\left(\sigma z \leq \sqrt{\mu } h_\kappa(x)/\sqrt{x } \right ) \right ] - e^{\frac{x}{\mu } \lambda_\rho(w(\rho , x ) ) } \right|.\end{aligned}\ ] ] define .we separate our analysis into two cases .* case 1 : * . note that for this range of values of we have , by lemma [ l.ustar ] , that , and by lemma [ l.b_and_w ] ( a. ) , that for all sufficiently large .it follows that .also , by lemma [ l.lambda ] we have that there exists a function as such that - e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \right| \notag\end{aligned}\ ] ] where .furthermore , by lemma [ l.watson ] we have that and are bounded by for some other . since by lemma [ l.lowerbound ]we have that on , it only remains to show that the term following is .first we notice that exact computation gives - e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \right| \\ & = e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \left| \phi(-\gamma(\rho , x ) ) \indicator(\kappa > 2 ) + e^{\frac{x}{\mu } \left ( \log\rho + \frac{\sigma^2(\log\rho)^2}{2\mu^2 } - \lambda_\rho(u(\rho ) ) \right ) } \phi\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x}\log\rho}{\mu^{3/2 } } \right ) - 1 \right| \\ & = \begin{cases } e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \phi\left ( -\frac{\sqrt{\mu } \omega_1^{-1}(x)}{\sigma\sqrt{x } } - \frac{\sigma \sqrt{x}\log\rho}{\mu^{3/2 } } \right ) , & \kappa = 2 , \\e^{\frac{x}{\mu } \lambda_\rho(u(\rho ) ) } \phi\left ( \gamma(\rho , x ) \right ) \left| e^{\frac{x}{\mu } \left ( \log\rho + \frac{\sigma^2(\log\rho)^2}{2\mu^2 } - \lambda_\rho(u(\rho ) ) \right ) } - 1 \right| , & \kappa > 2 . \end{cases}\end{aligned}\ ] ] when we simply have as , since by lemma [ l.b_and_w ] ( a. ) .when note that as .* case 2 : * . 
for this range of values of use lemma [ l.lowerbound ] to obtain that , which together with lemma [ l.lambda ] gives , for , + e^{\frac{x}{\mu } \lambda_\rho(w(\rho , x ) ) } \right\ } \\ & \leq c\rho^{-1 } e^{q(x ) } \left\ { \sqrt{x } e^{\frac{x}{\mu } \lambda_\rho ( w(\rho , x ) ) } \int_{\sqrt{x\log x}}^{\omega_1^{-1}(x)+\mu } \frac{1}{u } du \ , \indicator(\kappa >2 ) \right .\\ & \hspace{3 mm } \left .+ e^{\frac{x}{\mu } \log\rho + \frac{\sigma^2x(\log\rho)^2}{2\mu^3 } } \phi\left ( \frac{\sqrt{\mu } h_\kappa(x)}{\sigma\sqrt{x } } + \frac{\sigma \sqrt{x}\log\rho}{\mu^{3/2 } } \right ) + e^{\frac{x}{\mu } \lambda_\rho(w(\rho , x ) ) } \right\}.\end{aligned}\ ] ] let and and note that which converges to zero as since by lemma [ l.b_and_w ] , and .for the range we have , by lemma [ l.ustar ] , which also converges to zero as since by .this completes the proof .we conclude the paper with two examples comparing simulated values of to the approximations and suggested by theorems [ t.sumapprox ] and [ t.main ] . for illustration purposes we also plot the heavy - tail and heavy - traffic approximations the simulated values of were obtained using the conditional monte carlo algorithm from , and each point was estimated using 100,000 simulation runs .we point out that simulating heavy - tailed queues in heavy traffic is very difficult , and in particular , the simulated values of for pairs in the region around the point where the queue s behavior transitions from the heavy traffic regime into the heavy tail regime , are highly unreliable . in terms ofthe approximations and suggested in this paper , they tend to be sensitive to the mean and variance of the integrated tail distribution , and , respectively , so we suggest first scaling the queue in such a way that both parameters are small ( of order one ) .we give two examples below , one in which the integrated tail distribution is lognormal and one where it is heavy - tailed weibull ; note that no m / g/1 queue can have exactly weibull integrated tail distribution , since its density is not monotone , but there are valid distributions ( with decreasing densities ) whose tail is asymptotically weibull . for the lognormal example we used , which although an approximation to works well in practice .the authors would like to thank two anonymous referees for their valuable comments which helped improve the presentation of the paper .
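as a crude , self - contained companion to the simulation comparison above , the waiting - time tail can also be estimated directly from the lindley recursion . the pareto - type service time and all parameter values below are illustrative only , and this naive estimator is far less efficient in heavy traffic than the conditional monte carlo algorithm used for the figures .

```python
# Direct simulation of the M/G/1 FIFO waiting time via the Lindley recursion
# W_{n+1} = max(W_n + S_n - A_n, 0), with Poisson arrivals and Pareto-type service.
import numpy as np

def lindley_tail_estimate(lam, alpha, x, n_customers=500_000, burn_in=50_000, seed=0):
    rng = np.random.default_rng(seed)
    service = (1.0 - rng.random(n_customers)) ** (-1.0 / alpha) - 1.0   # mean 1/(alpha-1)
    inter_arrival = rng.exponential(1.0 / lam, size=n_customers)
    increments = service - inter_arrival
    w, hits = 0.0, 0
    for i, d in enumerate(increments):
        w = max(w + d, 0.0)
        if i >= burn_in:
            hits += (w > x)
    return hits / (n_customers - burn_in)

# traffic intensity rho = lam * E[S]; with alpha = 2.5, E[S] = 1/1.5, so rho = 0.8
print(lindley_tail_estimate(lam=1.2, alpha=2.5, x=20.0))
```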
this paper studies the asymptotic behavior of the steady - state waiting time , , of the m / g/1 queue with subexponential processing times for different combinations of traffic intensities and overflow levels . in particular , we provide insights into the regions of large deviations where the so - called heavy traffic approximation and heavy tail asymptotic hold . for queues whose service time distribution decays slower than we identify a third region of asymptotics where neither the heavy traffic nor the heavy tail approximations are valid . these results are obtained by deriving approximations for that are either uniform in the traffic intensity as the tail value goes to infinity or uniform on the positive axis as the traffic intensity converges to one . our approach makes clear the connection between the asymptotic behavior of the steady - state waiting time distribution and that of an associated random walk .
hidden - variable model , or theory , is the ( unfortunate ) name with which physicists refer to a hypothetical theory where the quantum state ( or for mixtures ) of a system is supplemented by additional parameters , .quantum mechanics provides a set of rules that determine the probabilities of observing events for a given preparation of the system and a given experimental setup , .the challenge for the hidden - variable theory is to find a distribution of the , , and a set of conditional probabilities such that the quantum mechanical predictions are reproduced on average , namely , where , as follows from bayess rule . alternatively , considering that experiments have some unavoidable imprecision , one could require just that so that the the hidden - variable model reproduces the experimental data with an accuracy comparable to that of quantum mechanics .the earliest example of a hidden - variable theory is given by the bohm formulation . as demonstrated by bell , all hidden variable theories satisfying three hypotheses known as measurement independence ( uncorrelated choice in our terminology to be introduced below ) , setting independence , and outcome independence ( reducibility of correlations , in our terminology ) are incompatible with quantum mechanics and , more importantly , with experimental evidence .more recently , leggett demonstrated the incompatibility of quantum mechanics with all theories satisfying measurement independence and maluss law .these theories as well were ruled out by experiment . by violating one or more hypotheses , however , it is possible to reproduce the quantum mechanical predictions .examples that the violation of measurement independence can lead to models reproducing the quantum mechanical prediction were provided in refs .the amount of violation of measurement independence necessary to reproduce quantum mechanics was recently quantified . in the present paper ,we provide a model that satisfies at the same time the hypotheses setting independence , outcome independence , and maluss law , but not measurement independence .for comparison , in the literature there are models violating measurement and outcome independence , measurement independence and maluss law , setting independence , and the only model satisfying setting independence , outcome independence , and maluss law turns out to be flawed .is not normalized to one ; if it was , then the correlator would be , four times smaller than the quantum mechanical one . ]in order to discuss the hypotheses underlying bell and leggett inequalities introduced below , we introduce some useful concepts .we distinguish two kinds of hidden variables : global parameters and local parameters .the former ones can not be ascribed to a region of spacetime , while the latter ones can .furthermore , local parameters can be detected by a single - shot measurement , while global ones require an ensemble of measurements or the specification of a preparation procedure .in other words , the value of local parameters can constitute an event . for instance, consider a card chosen at random from a deck ; the deck itself is chosen at random from a set of decks , each having a different distribution of red and black cards . the probability of the card being red depends on which deck was chosen .this information constitutes a global parameter , since it can not be ascribed to the card . 
on the other hand ,the card possesses the property of being black or red before it is measured .this property is a local parameter .the wave - function of a quantum system is a global parameter . according to the naive , classical world - view , the knowledge of all the local parameters makes global parameters irrelevant . on the other hand ,the shift in the epistemic paradigm introduced by quantum mechanics consists in recognizing that some global parameters can not be simply reduced to the ignorance of some fundamental yet unknown local parameters , and that the events resulting as the outcome of a measurement are not interpretable as preexisting local parameters belonging to the observed system .now we are in a position to define _ locality _ , by which we mean the impossibility of action - at - a - distance ( nad ) .the very word action stems from classical determinism , and indicates the change in a local parameter .however , we need to extend the concept of locality to probabilistic theories .our proposed definition is : locality implies that the probability of observing an event at a spacetime region can be expressed by a function that depends only on global parameters and on those local parameters that are localized within the region . in formulas eq . should be interpreted as a procedure ( in the algorithmic sense ) assigning a value according to a routine that takes as an input the variables , which are not to be confused with their values .clearly , if the local parameters are correlated , the value of may coincide , for instance , with that of .however , a local operation in will change the value of while keeping fixed , and this will not affect the marginal probability at .finally , if the event consists in the measurement of the value of a local parameter , say of , for which we assume there is a measuring device , then the parameters and the settings are all considered to be calculated at the same time .no hypothesis is made about the equations of motion for the additional parameters .scheme of the setup considered.regions a and b are spacelike separated .the semicircles represent hypothetical detectors for the hidden variables as discussed in the text.,width=480 ] the setup considered ( see fig .[ fig : eprbsetup ] ) consists of two particles produced in a region of the spacetime , each travelling to a different detection region , and .the measurements of the two particles does not need to happen at the same time , but the two detection events are assumed to have spacelike separation , so that a reference frame exists in which the measurements are simultaneous .the outcomes of the two measurements are two - valued , and will be denoted by and . the ( ) detector is characterized by a unit vector ( ) , corresponding to the orientation of a spin measuring device in quantum mechanics .the quantity of interest is the joint probability with describing the preparation of a singlet state , and specifying the observables being measured . in the following , we shall write the hidden variables as , where refers to the global parameters , and ( to the local parameters . since appears as a prior in all the probabilities considered ,it is omitted for brevity .references discriminated between quantum mechanics and some special classes of hidden - variable theories through inequalities , known as bell , chsh , and ch inequalities , which have been verified experimentally .there are three hypotheses needed to derive bell - type inequalities : 1 .measurement independence , i.e. 
, the marginal distribution of the local parameters and the choice of the corresponding remote settings are uncorrelated , , with representing the detectors settings .herein , i shall refer to this hypothesis with the more descriptive term uncorrelated choice .2 . setting independence , i.e. the marginal probability of observing an event at station does not depend on the setting of the remote station , .outcome independence , i.e. , for fixed the events and are uncorrelated , where the marginal and conditional probabilities are by definition reference coined the name `` outcome independence '' for the additional hypothesis , while jarrett and bell referred to it as completeness . in my opinion , the latter term is too ambiguous , while the former is too technical .i shall hence refer to this hypothesis as reducibility of correlations , since it amounts to assume that for given there are no correlations between the two outcomes .we notice that uncorrelated choice / measurement independence is stronger than locality , which implies more generally , and setting independence weaker , since it allows a dependence of the marginal probability on the remote local setting .thus , as could be changed at station , and this would change the marginal probability at station , setting independence would allow instantaneous communication from to .hence , setting independence does not imply no - signaling , contrary to a widespread belief . concerning the hypothesis reducibility of correlations / outcome independence ,it has been demonstrated that any model satisfying it can be supplemented by further additional parameters in such a way that the new model becomes deterministic , i.e. .however , experiment must provide the ultimate test for any theory , i.e. , if one formulates a model and in addition gives a prescription to either fix or measure , then the probability becomes experimentally accessible and the theory falsifiable .it may happen that there is no way to measure or fix the additional parameters , so that the deterministic completion would turn out to be a useful fiction .reference considered a class of hidden - variable models that do not necessarily satisfy reducibility of correlations / outcome independence , but obey an analogue of malus s law for the hidden variables , , where $ ] consists in two unit vectors , localized at particle and , and is a unit vector denoting the setting of station .thus maluss law is a special case of setting independence .it was shown that these models predict a correlator satisfying an inequality known as leggett inequality , which is violated by quantum mechanics and by experiments .the assumption of maluss law appeals to the intuitive notion that each spin possesses a vector describing its polarization and influencing the outcome of the measurement according to the well known maluss law , which applies for pure single - particle states . in summary ,bell inequalities are obtained assuming that uncorrelated choice / measurement independence , setting independence , and reducibility of correlations / outcome independence hold , while leggett inequalities are obtained assuming uncorrelated choice and compliance with maluss law .by violating one of the hypotheses at the basis of bell and leggett inequalities it may be possible to violate them . reproducing quantum mechanics ,however , is not guaranteed , since the violation of bell and leggett inequalities is a necessary but not sufficient condition . 
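for reference , the quantum violation that such models must confront is easy to check numerically : with the singlet correlator e(a , b) = -a·b and the textbook coplanar settings , the chsh combination reaches 2√2 , above the classical bound of 2 .

```python
# Numerical check that the singlet correlator E(a, b) = -a.b violates the CHSH
# bound |S| <= 2 at the textbook-optimal coplanar settings (0, 90, 45, 135 degrees).
import numpy as np

def unit(theta_deg):
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t), 0.0])

def E(a, b):                        # quantum singlet correlator
    return -np.dot(a, b)

a, a2, b, b2 = unit(0), unit(90), unit(45), unit(135)
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), "vs classical bound 2 and Tsirelson bound", 2 * np.sqrt(2))
```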
herewe provide a model that not only violates the inequalities , but also reproduces the quantum mechanical prediction for a spin singlet .another distinguishing feature of the model discussed herein is the simplicity of the distribution [ compare eq .below ] , which is not contrived _ ad hoc _ in order to reproduce the quantum mechanics of a spin - singlet .let us consider the following hidden - variable model : the hidden variables consist of two unit vectors , the first being associated with the particle going to and the second with the particle going to .the joint probability , conditioned on the values is so that the marginal and conditional probabilities are the model obeys setting independence , since the probability of finding outcome depends solely on the variable associated to the particle at , and in this sense the marginal probability obeys the locality condition .furthermore , eq . states that the maluss law is obeyed , while eq . shows that reducibility of correlations / outcome independence is satisfied as well .the hidden variables have the following probability density it is immediate to verify that upon integration over the hidden variables coincides with the quantum mechanical predictions for a spin singlet .the reason that , in particular , bell and leggett inequalities are violated by the model above is apparent : the distribution of the hidden variables and that of the settings of the detectors are correlated i.e. ( uc ) does not hold in our model .let us assume that and are local parameters . then eq. violates the principle of locality , since , e.g. , the marginal probability for is .\ ] ] we recall that and the settings are evaluated at the same time .thus , a change in can influence instantaneously the distribution of the remote parameter , and hence the non - locality .we prove that in this case there can be instantaneous communication between the regions and , provided that the hidden parameters are measurable .suppose that two observers at and agree on two orientations , , e.g. , orthogonal to each other .they use the following protocol : immediately before the particles impinge on the spin detectors , they measure the hidden variables ( see fig . [fig : eprbsetup ] ) ; if the observer measures , ( i.e. , the orientation is determined by the one of the spin detector in ) , she will turn her apparatus in the direction if she wants to make sure that the observer in obtains the result , which is agreed to correspond to the 0-bit , or in the direction if she wants to make sure that the observer in obtains the result , which is agreed to correspond to the 1-bit ; if instead the hidden parameters turns out to be , the observer in will make the opposite switching ; the observer at , on the other hand , will take no action whenever he measures , since he knows he will be on the receiving end of the transmission ; when instead the observer in will switch his apparatus in an analogous fashion as does in the other cases , so that he can send instantaneous information to , who will be measuring and knows that she should take no action in this case .we remark that the non - locality of the model does not follow automatically from the hypothesis in eq . , but stems from the assumption that the settings of the detectors determine and not _vice versa_. since probability theory is time - symmetric and acausal , one could assume the opposite cause - effect relation as in refs . . 
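the hidden - variable density of the model is not restated here , so the sketch below uses a deliberately simple settings - correlated distribution chosen only for illustration : with probability 1/2 take ( u_a , u_b ) = ( a , -a ) or ( -a , a ) , and let each detector respond through the spin-1/2 form of malus s law , p( outcome = +1 | u , n ) = ( 1 + u·n )/2 . this choice is an assumption and is not necessarily the density of the model above , but it obeys setting independence , outcome independence and malus s law while violating measurement independence , and it reproduces the singlet correlator -a·b .

```python
# Monte Carlo illustration of a settings-correlated hidden-variable model with
# Malus-law responses reproducing the singlet correlator E[sigma_A sigma_B] = -a.b.
# The hidden-variable distribution used here is an illustrative assumption, not
# necessarily the one of the paper.
import numpy as np

def malus_outcome(u, n, rng):
    return 1 if rng.random() < 0.5 * (1.0 + np.dot(u, n)) else -1

def correlator(a, b, runs=100_000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(runs):
        s = 1 if rng.random() < 0.5 else -1
        u_a, u_b = s * a, -s * a            # hidden vectors correlated with setting a
        total += malus_outcome(u_a, a, rng) * malus_outcome(u_b, b, rng)
    return total / runs

theta = np.radians(60.0)
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])
print(correlator(a, b), "vs quantum prediction", -np.cos(theta))
```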
the model could then be reproduced through classical resources in the following way : the detector at ( ) and the entangler share a pseudo - random number generator , which gives as output a unit vector ( ) .the random - number generator ( rng ) at and is delayed by a time equal to the time - of - flight of the particles , respect to the one at . with probability entangler produces one of four possible pairs , then attaches the first member of the pair to the particle reaching , and the second to the one reaching , thus forcing . at the moment of choosing the direction along which to measure , and their rng and use its outcome , setting and .the outcome of each measurement is then randomized independently as for eq . .in formulas \label{eq : hiddvardens2 } \ , \end{aligned}\ ] ] where , are global parameters , and local parameters .the weak point of such an explanation is not the presumed violation of free will ( all in all free will is limited by physical laws , and it could be but an illusion if these are deterministic ) , but the origin of the correlations and the persistence of detector - entangler correlations notwithstanding the effects of the environment .indeed , at some point in the common past light - cone of ( or ) and , the two shared a random number generator . yet , all events that are in the past light - cone of and can not be causally correlated with turn out to be irrelevant in the determination of , even though they are more recent than the sharing of the rng .furthermore , if the detectors at and were two automata complex enough to possess self - awareness , but without the possibility of finding out their inner workings , and these automata could measure and , they would not only believe that they were acting out of free will , but also that they could establish superluminal communication according to the protocol illustrated previously , while a wary external observer would see them reciting a predetermined script .this consideration sheds a new light on the issue of free will : if our choices were determined by underlying variables , which we were able to measure ( i.e. if we were puppets who could see their strings ) , then we would be able to test whether we have free will by trying to send superluminal signals . in casethe communication resulted to be botched , we would have evidence in favor of slave will otherwise we would observe instantaneous communication of meaningful information . finally , the same probability distribution would arise if and were conscious agents aware of the variables and choosing their settings accordingly , in order to produce the quantum correlations . 
assuming that they could choose not to do so , we would have then not a violation of free will , but a conspiracy , leading to the same probability distribution .this shows that it is not possible to deduce , from the assumed violation of uc , which among the hypotheses free will , locality , or no - conspiracy is being violated , and conversely , that none of the three hypotheses implies by itself uc , unless additional assumptions about the physical nature of the hidden variables are made .the hidden - variable model presented does not only violate bell and leggett inequalities , but reproduces the results of quantum mechanics for a spin singlet .the model violates only the hypothesis of uncorrelated choice , but satisfies all other assumptions at the basis of bell and leggett inequalities , namely setting independence , reducibility of correlations / outcome independence , and compliance with malus s law .thanks to this , the model seems to appeal to our intuitive , classical notion of spin polarization : both particles have a fixed value of the polarization , which determines the probability of each experimental outcome .it can be realized indifferently by an alleged non - local influence of the detector on the hidden variables , or by preexisting correlations between the entangler and the settings of the stations , which could be seen either as a conspiracy or as a limitation of free will .thus mathematical hypotheses about the form of probabilities can not be claimed to derive from physical requirements , unless the variables appearing in the model are given first a physical meaning .this work was supported by fundao de amparo pesquisa do estado de minas gerais through process no . apq-02804 - 10 .10 url # 1#1urlprefix[2][]#2 bayes t 1763 an essay towards solving a problem in the doctrine of chances . _ phil .trans . _ * 53 * 370418 http://www.sciencedirect.com/science/article/b6v04-49wnsyh-16/2/4581ef69494e5a936da5ceca0e033078 rowe m a , kielpinski d , meyer v , sackett c a , itano w m , monroe c and wineland d j 2001 experimental violation of a bell s inequality with efficient detection _ nature _ * 409 * 791794 http://www.nature.com/nature/journal/v409/n6822/abs/409791a0.html grblacher s , paterek t , kaltenbaek r , brukner , ukowski m , aspelmeyer m and zeilinger a 2007 supplementary information to an experimental test of non - local realism _ nature _ * 446 * 871875 http://www.nature.com/nature/journal/v446/n7138/suppinfo/nature05677.html branciard c , brunner n , gisinn , kurtsiefer c , lamas - linares a , ling a and scarani v 2008 testing quantum correlations versus single - particle properties within leggett s model and beyond _ nature physics _ * 4 * 681685 http://www.nature.com/nphys/journal/v4/n9/abs/nphys1020.html
a hidden variable model reproducing the quantum mechanical probabilities for a spin singlet is presented . the model violates only the hypothesis of independence of the distribution of the hidden variables from the detectors ' settings and _ vice versa _ ( measurement independence ) . it otherwise satisfies the hypotheses of setting independence and outcome independence made in the derivation of the bell inequality , and that of compliance with malus 's law made in the derivation of the leggett inequality . it is shown that the violation of the measurement independence hypothesis may be explained alternatively by assuming a non - local influence of the detectors ' settings on the hidden variables , by taking the hidden variables to influence the choice of settings ( limitation of free will ) , or finally by positing a conspiracy . it is demonstrated that the last two cases admit a realization through existing local classical resources .
a quantitative investigation of cohesive fracture propagation necessitates an accurate description of various fracture phenomena including : crack initiation ; propagation along complex three - dimensional paths ; interaction and coalescence of distributed multi - cracks into localized continuous cracks ; and interaction of fractured / unfractured material .the classical finite element ( fe ) method , although it has been used with some success to address some of these aspects , is inherently incapable of modeling the displacement discontinuities associated with fracture . to address this issue ,advanced computational technologies have been developed in the recent past .first , the embedded discontinuity methods ( edms ) were proposed to handle displacement discontinuity within finite elements . in these methodsthe crack is represented by a narrow band of high strain , which is embedded in the element and can be arbitrarily aligned .many different edm formulations can be found in the literature and a comprehensive comparative study of these formulations appears in reference .the most common drawbacks of edm formulations are stress locking ( spurious stress transfer between the crack surfaces ) , inconsistency between the stress acting on the crack surface and the stress in the adjacent material bulk , and mesh sensitivity ( crack path depending upon mesh alignment and refinement ) .a method that does not experience stress locking and reduces mesh sensitivity is the extended finite element method ( x - fem ) .x - fem , first introduced by belytschko & black , exploits the partition of unity property of fe shape functions .this property enables discontinuous terms to be incorporated locally in the displacement field without the need of topology changes in the initial uncracked mesh .mos et al . enhanced the primary work of belytschko et al . through including a discontinuous enrichment function to represent displacement jump across the crack faces away from the crack tip .x - fem has been successfully applied to a wide variety of problems .dolbow et . applied xfem to the simulation of growing discontinuity in mindlin - reissner plates by employing appropriate asymptotic crack - tip enrichment functions .belytschko and coworkers modeled evolution of arbitrary discontinuities in classical finite elements , in which discontinuity branching and intersection modeling are handled by the virtue of adding proper terms to the related finite element displacement shape functions .furthermore , they studied crack initiation and propagation under dynamic loading condition and used a criterion based on the loss of hyperbolicity of the underlying continuum problem . 
x - fem was also extended to the simulation of cohesive crack propagation . the main drawbacks of x - fem are that the implementation into existing fe codes is not straightforward , the insertion of additional degrees of freedom is required on - the - fly to describe the discontinuous enrichment , and complex quadrature routines are necessary to integrate discontinuous integrands . another approach widely used for the simulation of cohesive fracture is based on the adoption of cohesive zero - thickness finite elements located at the interface between the usual finite elements that discretize the body of interest . this method , even if its implementation is relatively simple , tends to be computationally intensive because of the large number of nodes that are needed to allow fracturing at each element interface . furthermore , in the elastic phase the zero - thickness finite elements require the definition of an artificial penalty stiffness to ensure inter - element compatibility . this stiffness usually deteriorates the accuracy and rate of convergence of the numerical solution and it may cause numerical instability . to avoid this problem , algorithms have been proposed in the literature for the dynamic insertion of cohesive fractures into fe meshes . the dynamic insertion works reasonably well in high speed dynamic applications but is not adequate for quasi - static applications and leads to inaccurate stress calculations along the crack path . an attractive alternative to the aforementioned approaches is the adoption of discrete models ( particle and lattice models ) , which replace the continuum a priori by a system of rigid particles that interact by means of linear / nonlinear springs or by a grid of beam - type elements . these models were first developed to describe the behavior of particulate materials and to solve elastic problems in the pre - computer era . later , they have been adapted to simulate fracture and failure of quasi - brittle materials in both two and three dimensional problems . in this class of models , it is worth mentioning the rigid - body - spring model developed by bolander and collaborators , which discretizes the material domain using voronoi diagrams with random geometry , interconnected by zero - size springs , to simulate cohesive fracture in two and three dimensional problems . various other discrete models , in the form of either lattice or particle models , have been quite successful recently in simulating concrete materials . discrete models can realistically simulate fracture propagation and fragmentation without suffering from the aforementioned typical drawbacks of other computational technologies . the effectiveness and the robustness of these methods are ensured by the fact that : a ) their kinematics naturally handle displacement discontinuities ; b ) the crack opening at a certain point depends upon the displacements of only two nodes of the mesh ; c ) the constitutive law for the fracturing behavior is vectorial ; d ) remeshing of the material domain or inclusion of additional degrees of freedom during the fracture propagation process is not necessary . despite these advantages , the general adoption of these methods to simulate fracture propagation in continuous media has been quite limited because of various drawbacks in the uncracked phase , including : 1 ) the stiffness of the springs is defined through a heuristic ( trial - and - error ) characterization ; 2 ) various elastic phenomena , e.g.
poisson s effect , can not be reproduced exactly ; 3 ) the convergence of the numerical scheme to the continuum solution can not be proved ; 4 ) amalgamation with classical tensorial constitutive laws is not possible ; and 5 ) spurious numerical heterogeneity of the response ( not related to the internal structure of the material ) is inherently associated with these methods if simply used as discretization techniques for continuum problems . the discontinuous cell method ( dcm ) presented in this paper provides a framework unifying discrete models and continuum based methods . the delaunay triangulation is employed to discretize the solid domain into triangular elements , and the voronoi tessellation is then used to build a set of discrete polyhedral cells whose kinematics is described through the rigid body motion typical of discrete models . tonti presented a somewhat similar approach to discretize the material domain and to compute the finite element nodal forces using dual cell geometries . furthermore , the dcm formulation is similar to that of the discontinuous galerkin method , which has primarily been applied in the past to the solution of fluid dynamics problems but has also been extended to the study of elasticity . recently , discontinuous galerkin approaches have also been used for the study of fracture mechanics and cohesive fracture propagation . the dcm formulation can be considered as a discontinuous galerkin approach which utilizes piecewise constant shape functions . another interesting feature of dcm is that the formulation includes rotational degrees of freedom . researchers have attempted to introduce rotational degrees of freedom into classical finite elements by considering special forms of the displacement functions along each element edge to improve their performance in bending problems . this strategy often leads to zero energy deformation modes and to a singular element stiffness matrix even if the rigid body motions are constrained . the dcm formulation simply incorporates nodal rotational degrees of freedom without suffering from the aforementioned problem . equilibrium , compatibility , and constitutive laws for cauchy continua can be formulated as a limit case of the governing equations for cosserat continua , in which displacements and rotations are assumed to be independent fields but the couple stress tensor is identically zero . for small deformations and for any position vector in the material domain , one has the compatibility and equilibrium relations reported in equations [ eq : compatibility-1 ] and [ eq : equilibrium-2 ] . in these equations , the summation rule of repeated indices applies ; and are the displacement and rotation fields , respectively . is the strain tensor ; is the stress tensor ; are the body forces per unit volume ; is the mass density . subscripts , , and represent components of the cartesian coordinate system , which can be in three dimensional problems ; is the levi - civita permutation symbol . considering any position dependent field variable such as , represents the partial derivative of with respect to the component of the coordinate system , while is the time derivative of the variable . the partial differential equations above need to be complemented by appropriate boundary conditions that can either involve displacements , on ( essential boundary conditions ) , or tractions , on ( natural boundary conditions ) , where is the boundary of the solid volume .
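for reference , a commonly used form of these governing relations , under the stated assumptions of small deformations and a vanishing couple stress tensor , is sketched below ; the sign and index conventions are one possible choice and may differ from those adopted elsewhere in the paper .

```latex
% one common Cosserat-limit convention (an assumption, not necessarily the paper's exact form)
\epsilon_{ij} = u_{j,i} - e_{ijk}\,\varphi_k        % compatibility (strain vs. displacement/rotation)
\sigma_{ij,j} + b_i = \rho\,\ddot{u}_i              % linear momentum balance
e_{ijk}\,\sigma_{jk} = 0                            % angular momentum balance for zero couple stress
```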
in the elastic regime , the constitutive equations can be written as where is the volumetric strain ; and are the volumetric and deviatoric moduli that can be expressed through young s modulus and poisson s ratio : ; . it is worth observing that , since the solution of the problem formulated above requires the stress tensor to be symmetric ( see the second equation in equation [ eq : equilibrium-2 ] ) , the constitutive equations imply the symmetry of the strain tensor as well , which , in turn , leads to the displacements and rotations being related through an additional compatibility condition ( the rotation field coincides with one half of the curl of the displacement field ) . the weak form of the equilibrium equations can be obtained through the principle of virtual work ( pvw ) as where , and are arbitrary strains , displacements , and rotations , respectively , satisfying the compatibility equations with homogeneous essential boundary conditions . it must be observed here that the pvw in equation [ pvw ] is the weak formulation of both the linear and angular momentum balances . hence , the symmetry of the stress tensor and , consequently , the symmetry of the strain tensor are imposed in an average sense . this is a significant difference compared to classical formulations for cauchy continua in which the symmetry of the stress tensor is assumed `` a priori '' . let us consider a three - dimensional _ primal cell complex _ , which , according to the customary terminology in algebraic topology , is a subdivision of the three - dimensional space through sets of vertices ( 0-cells ) , edges ( 1-cells ) , faces ( 2-cells ) , and volumes ( 3-cells ) . next , let us construct a _ dual cell complex _ anchored to the primal . this can be achieved , for example , by associating a primal 3-cell with a dual 0-cell , a primal 2-cell with a dual 1-cell , etc . the primal / dual complex obtained through the delaunay triangulation of a set of points and its associated voronoi tessellation is a very popular choice in many fields of study for its ability to discretize complex geometry , and it is adopted in this study . let us consider a material domain and discretize it into tetrahedral elements by using the centroidal delaunay tetrahedralization and the associated voronoi tessellation , which leads to a system of polyhedral cells . figure [ dcmgeom3d]a shows a typical tetrahedral element with the volume , external boundary , and oriented surfaces located within the volume . the interior oriented surfaces are derived from the voronoi tessellation and are hereinafter called `` facets '' . in 3d , the facets are triangular areas of contact between adjacent polyhedral cells . in the voronoi tessellation procedure , the triangular facets are perpendicular to the element edges , which is a crucial feature of the dcm formulation , as will be shown later , and their geometry is such that one node of each facet is placed in the middle of the tetrahedral element edge , one is located on one of the triangular faces of the tetrahedral element , and one is located inside the tetrahedral element . as a result , each tetrahedral element contains twelve facets in a 3d setting , figure [ dcmgeom3d]a . figure [ dcmgeom3d]b illustrates a portion of the tetrahedral element associated with one of its four nodes and the corresponding facets . combining such portions from all the tetrahedral elements connected to the same node , one obtains the corresponding voronoi cell . each node in the 3d dcm formulation has six degrees of freedom , three translational and three rotational , which are shown in figure [ dcmgeom3d]b .
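a minimal two - dimensional sketch of this primal / dual construction is given below , assuming that the facet associated with an element edge is the segment joining the edge midpoint and the triangle circumcenter ( the voronoi vertex of the dual complex ) ; the function names and the random point set are purely illustrative , and the same bookkeeping extends to the three - dimensional case described above .

```python
# Minimal 2D sketch of the primal/dual (Delaunay/Voronoi) construction used by DCM.
# The facet between two nodes of a Delaunay triangle is taken as the segment joining the
# edge midpoint and the triangle circumcenter; this is an illustrative reconstruction,
# not the authors' implementation.
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    # circumcenter of triangle (a, b, c); it is the Voronoi vertex of the dual complex
    d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    ux = ((a @ a)*(b[1]-c[1]) + (b @ b)*(c[1]-a[1]) + (c @ c)*(a[1]-b[1])) / d
    uy = ((a @ a)*(c[0]-b[0]) + (b @ b)*(a[0]-c[0]) + (c @ c)*(b[0]-a[0])) / d
    return np.array([ux, uy])

def facets_of_element(points, tri_nodes):
    """For each edge (i, j) of one triangle, return the facet centroid, area per unit
    thickness, and unit normal/tangent of the edge-orthogonal facet."""
    a, b, c = (points[k] for k in tri_nodes)
    cc = circumcenter(a, b, c)
    facets = []
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        xi, xj = points[tri_nodes[i]], points[tri_nodes[j]]
        mid = 0.5 * (xi + xj)
        e = (xj - xi) / np.linalg.norm(xj - xi)   # unit vector along the element edge
        n = e                                     # facet normal is parallel to the edge (facet _|_ edge)
        m = np.array([-e[1], e[0]])               # in-plane tangent of the facet
        area = np.linalg.norm(cc - mid)           # facet "area" per unit out-of-plane thickness
        centroid = 0.5 * (mid + cc)
        facets.append((tri_nodes[i], tri_nodes[j], centroid, area, n, m))
    return facets

points = np.random.rand(20, 2)            # hypothetical nodal coordinates
mesh = Delaunay(points)                   # primal complex (triangles)
all_facets = [f for t in mesh.simplices for f in facets_of_element(points, t)]
```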
the same figure depicts , for a generic facet , three unit vectors , one normal and two tangential ones and , defining a local system of reference . in the rest of the paper , the facet index is dropped when possible to simplify notation . the dcm approximation is based on the assumption that the displacement and rotation fields can be approximated by the rigid body kinematics of each voronoi cell , that is where , are the displacements and rotations of node ; is the volume of the cell associated with node . obviously , with these approximating displacement and rotation functions , the strain versus displacement / rotation relationships in equations [ eq : compatibility-1 ] can not be enforced locally as typically done in displacement based finite element formulations . let us consider a generic node of spatial coordinates and the adjacent nodes of spatial coordinates , where is the vector connecting the two nodes . one can write in which is a unit vector in the direction of . note that , to simplify notation , the two indices and are dropped when they are supposed to appear together . without loss of generality , let us also assume that node is located on the negative side of the facet , whereas node is located on the positive side of the associated facet oriented through its normal unit vector . moreover , it is useful to introduce the vector connecting node to a generic point p on the facet . the displacement jump at point p reads where and are the values of the displacements on the positive and negative sides of the facet , respectively ; ; ; ; and , are the magnitude and direction of vector . equation [ eq : strain1 ] can be rewritten in tensorial notation as by expanding the displacement and the rotation fields in taylor series around and by truncating the displacement to the second order and the rotation to the first , one obtains by projecting the displacement jump in the direction orthogonal to the facet and dividing it by the element edge length , one can write where is the strain tensor . at convergence the discretization size tends to zero ( ) and one can write . furthermore , if the dual complex adopted for the volume discretization is such that the facets are orthogonal to the element edges ( this condition is verified , for example , by the delaunay - voronoi complex ) , then , , and one has : the normal component of the displacement jump normalized by the element edge length represents the projection of the strain tensor onto the facet . similarly , it can be shown that the components of the displacement jump tangential to the facet can be expressed as and . before moving forward , a few observations are in order . since the facets are flat , the unit vector is the same for any point belonging to a given facet and the projection of the strain tensor is uniform over each facet for a uniform strain field . the variation of the displacement jump over the facet is due to the curvature and is a high order effect that can be neglected ( see the last term in equation [ eq : disjumpn ] ) . based on the previous observations , one can conclude that the analysis of the interaction of two adjacent nodes can be based on the average displacement jump which , given the linear distribution of the jump , can be calculated as the displacement jump , , at the centroid c of the facet . this leads naturally to the following definition of `` facet strains '' : and equations [ eq : eps - n ] and [ eq : eps - ml ] show that the `` facet strains '' correspond to the projection of the strain tensor onto the facet local system of reference .
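the kinematic chain described above , from rigid body motion of the two cells to the displacement jump at the facet centroid and then to the facet strains , can be sketched in a few lines ; the two - dimensional implementation below is a minimal sketch with hypothetical nodal values , and its final check verifies that , for a uniform strain field , the facet strains reduce to the projections of the strain tensor .

```python
# 2D sketch: displacement jump at a facet centroid from rigid-body cell kinematics,
# and the resulting facet strains (normal and tangential). Illustrative only.
import numpy as np

def rigid_body_displacement(u, phi, x_node, x_point):
    """Displacement of x_point when the cell of a node translates by u and rotates by phi (about z)."""
    r = x_point - x_node
    return u + phi * np.array([-r[1], r[0]])       # phi x r in 2D

def facet_strains(u_i, phi_i, x_i, u_j, phi_j, x_j, centroid, n, m):
    """Facet normal and tangential strains: components of the displacement jump at the facet
    centroid divided by the distance between the two nodes."""
    jump = (rigid_body_displacement(u_j, phi_j, x_j, centroid)
            - rigid_body_displacement(u_i, phi_i, x_i, centroid))
    ell = np.linalg.norm(x_j - x_i)
    return jump @ n / ell, jump @ m / ell           # eps_N, eps_M

# uniform-strain check: nodal displacements from u = eps @ x, zero rotations; the facet
# strains should equal n.eps.n and m.eps.n when the facet normal n is aligned with the edge
eps = np.array([[1.0e-3, 2.0e-4], [2.0e-4, -5.0e-4]])
x_i, x_j = np.array([0.0, 0.0]), np.array([1.0, 0.5])
n = (x_j - x_i) / np.linalg.norm(x_j - x_i)
m = np.array([-n[1], n[0]])
centroid = 0.5 * (x_i + x_j)                        # any point on the facet works for a uniform field
eN, eM = facet_strains(eps @ x_i, 0.0, x_i, eps @ x_j, 0.0, x_j, centroid, n, m)
assert np.isclose(eN, n @ eps @ n) and np.isclose(eM, m @ eps @ n)
```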
let us now consider a uniform hydrostatic stress / strain state in one element and . in this case the tractions on each facet must correspond to the volumetric stress , and energetic consistency requires that which gives where is the set of facets belonging to one element . by using equation [ eq : eps - v ] and equations [ eq : eps - n ] , [ eq : eps - ml ] , the facet deviatoric strains can also be calculated as . by introducing equation [ eq : disp - approx ] into equations [ eq : eps - n ] and [ eq : eps - ml ] one can also write where , , , , , , and , are vectors connecting the facet centroid c with nodes , , respectively . for the adopted discretized kinematics , in which all deformability is concentrated at the facets , the pvw in equation [ pvw ] can be rewritten as where is the set of all facets in the domain . by introducing equations [ eq : disp - approx ] , [ eq : eps - n - new ] , [ eq : eps - m - new ] , and [ eq : eps - l - new ] into equation [ eq : equi - weak - discrete-0 ] and considering displacement and traction continuity at the inter - element interfaces , one can write the discrete weak form reported in equation [ eq : equi - weak - discrete ] , where is the generic voronoi cell , and are the volume and the set of facets of the cell , respectively . note that the first term on the lhs of equation [ eq : equi - weak - discrete ] also includes the contribution of external tractions for cells located on the domain boundary . since equation [ eq : equi - weak - discrete ] must be satisfied for any virtual variation and , it is equivalent to the following system of algebraic equations ( , = total number of voronoi cells ) : where = external force resultant , = mass , = first - order mass moments , = external moment resultant , = second - order mass moments , of cell . equations [ eq : cell - equilibrium-1 ] and [ eq : cell - equilibrium-2 ] coincide with the force and moment equilibrium equations for each voronoi cell . note that and ( for a uniform body force ) if the vertices of the delaunay discretization coincide with the mass centroids of the voronoi cells . this is the case for all the cells in the interior of the mesh if a centroidal voronoi tessellation is adopted . also , in general , for , and this leads to a non - diagonal mass matrix . a diagonalized mass matrix can be obtained simply by discarding the non - diagonal terms . in the dcm framework , the constitutive equations are imposed at the facet level , where the facet tractions need to be expressed as a function of the facet strains . for elasticity , by projecting the tensorial constitutive equations reported in equation [ eq : const ] in the local system of reference of each facet , one has the corresponding facet - level elastic relations . in order to pursue a two - dimensional implementation of dcm , let us consider a 2d delaunay - voronoi discretization as shown in figures [ dcmgeom]a and b. a generic triangle can be considered as the triangular base of a prismatic volume as shown in figure [ dcmgeom]c and characterized by 6 vertices , 9 edges , 2 triangular faces , and 3 rectangular faces . by considering the voronoi vertices on the two parallel triangular faces and the face / edge points located at mid - thickness ( see , e.g. , points and ) , a complete tessellation of the volume into six sub - volumes , one per vertex , can be obtained by triangular facets . of these facets , are orthogonal to the triangular faces , set ( see , e.g.
, the one connecting points , , in figure [ dcmgeom]c ) and are parallel to the triangular faces , set , ( see , e.g. , the one connecting points , , in figure [ dcmgeom]c ) . one can write where is the out - of - plane thickness . for plane strain conditions for the facet set and simply the second term in equation [ eq : eps - v-2d ] is zero . for plane stress , instead , for the facet set . therefore , for the facet set , and , also , . using these relations , one has . substituting this relation in equation [ eq : eps - v-2d ] leads to . in addition , the facet set is composed of 3 sets of 4 planar triangular facets . for each set , strains and tractions are the same on the 4 facets because the response is uniform through the thickness . consequently , the 4 facets can be grouped into one rectangular facet of area , where is the in - plane length of the facet ( see figure [ dcmgeom]b ) . by taking everything into account , the volumetric strain for 2d problems can be written as in equation [ eq : eps - v-2d-1 ] , where is the area of the triangular element and for plane stress and for plane strain . the triangular dcm element has 9 degrees of freedom , two displacements and one rotation for each node , which can be collected in one vector required for the calculation of facet strains . for quadrilateral elements obtained from two adjacent triangular elements , the volumetric strain , constant inside the quadrilateral element , is still calculated by equation [ eq : eps - v-2d-1 ] , where the contributions of all five facets are taken into account . also , a four node , rectangular element with four facets is generated if the two adjacent triangles are both right triangles , see figure [ 34elems]d . for the generic quadrilateral element , equations [ eq : matrix - n1 ] to [ eq : matrix - m3 ] must be substituted by the following equations , in which the rows relating the element degree - of - freedom vector to the normal and tangential strains of facets 1 to 5 read :

\mathbf{n}_{1} = \ell_{1}^{-1} \begin{bmatrix} -n_{i1} & -n_{i2} & n_{i1}c_{i2}-n_{i2}c_{i1} & n_{i1} & n_{i2} & -n_{i1}c_{j2}+n_{i2}c_{j1} & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

\mathbf{m}_{1} = \ell_{1}^{-1} \begin{bmatrix} -m_{i1} & -m_{i2} & m_{i1}c_{i2}-m_{i2}c_{i1} & m_{i1} & m_{i2} & -m_{i1}c_{j2}+m_{i2}c_{j1} & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

\mathbf{n}_{2} = \ell_{2}^{-1} \begin{bmatrix} 0 & 0 & 0 & -n_{j1} & -n_{j2} & n_{j1}c_{j2}-n_{j2}c_{j1} & n_{j1} & n_{j2} & -n_{j1}c_{k2}+n_{j2}c_{k1} & 0 & 0 & 0 \end{bmatrix}

\mathbf{m}_{2} = \ell_{2}^{-1} \begin{bmatrix} 0 & 0 & 0 & -m_{j1} & -m_{j2} & m_{j1}c_{j2}-m_{j2}c_{j1} & m_{j1} & m_{j2} & -m_{j1}c_{k2}+m_{j2}c_{k1} & 0 & 0 & 0 \end{bmatrix}

\mathbf{n}_{3} = \ell_{3}^{-1} \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -n_{k1} & -n_{k2} & n_{k1}c_{k2}-n_{k2}c_{k1} & n_{k1} & n_{k2} & -n_{k1}c_{l2}+n_{k2}c_{l1} \end{bmatrix}

\mathbf{m}_{3} = \ell_{3}^{-1} \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -m_{k1} & -m_{k2} & m_{k1}c_{k2}-m_{k2}c_{k1} & m_{k1} & m_{k2} & -m_{k1}c_{l2}+m_{k2}c_{l1} \end{bmatrix}

\mathbf{n}_{4} = \ell_{4}^{-1} \begin{bmatrix} -n_{i1} & -n_{i2} & n_{i1}c_{i2}-n_{i2}c_{i1} & 0 & 0 & 0 & 0 & 0 & 0 & n_{i1} & n_{i2} & -n_{i1}c_{l2}+n_{i2}c_{l1} \end{bmatrix}

\mathbf{m}_{4} = \ell_{4}^{-1} \begin{bmatrix} -m_{i1} & -m_{i2} & m_{i1}c_{i2}-m_{i2}c_{i1} & 0 & 0 & 0 & 0 & 0 & 0 & m_{i1} & m_{i2} & -m_{i1}c_{l2}+m_{i2}c_{l1} \end{bmatrix}

\mathbf{n}_{5} = \ell_{5}^{-1} \begin{bmatrix} -n_{i1} & -n_{i2} & n_{i1}c_{i2}-n_{i2}c_{i1} & 0 & 0 & 0 & n_{i1} & n_{i2} & -n_{i1}c_{k2}+n_{i2}c_{k1} & 0 & 0 & 0 \end{bmatrix}

\mathbf{m}_{5} = \ell_{5}^{-1} \begin{bmatrix} -m_{i1} & -m_{i2} & m_{i1}c_{i2}-m_{i2}c_{i1} & 0 & 0 & 0 & m_{i1} & m_{i2} & -m_{i1}c_{k2}+m_{i2}c_{k1} & 0 & 0 & 0 \end{bmatrix}

numerical experiments carried out in this section show that the 2d dcm triangle passes the patch test and is able to reproduce exactly uniform strain and stress fields . acknowledging that the following observations apply equally to stress or strain , the most basic result regarding a uniform field is that the normal and tangential stresses calculated by dcm for a facet with a certain orientation correspond to the tractions calculated by projecting the stress tensor onto the facet orientation . conversely , the facet orientation , the normal and tangential ( shear ) stresses , and the calculated value of the volumetric stress can be used to determine the overall facet stress tensor with respect to the global system of reference ( see appendix [ facet strain tensor ] ) . to perform the patch test , an elastic square specimen discretized by dcm is subjected to a uniform strain as shown in figure [ patchtest]a . mpa and = 0.25 are used as material properties . the dcm analysis is performed , and the results are used to calculate the stress tensor for all of the facets . the exact uniform stress tensor due to the applied uniform strain tensor can be calculated by the elasticity equations . in figure [ patchtest]b , the ratio of the value calculated by dcm for each facet to the one obtained from the elasticity equations is plotted for all facets .
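the uniform - field consistency underlying the patch test can also be verified directly with a few lines of code ; in the sketch below the volumetric and deviatoric facet moduli are taken as e / ( 1 - 2\nu ) and e / ( 1 + \nu ) , a common choice assumed here for illustration together with the numerical values , and the facet - level elastic law is checked against the projections of the isotropic stress tensor for a uniform strain .

```python
# Check (in 3D, for simplicity) that a facet-level elastic law with a volumetric/deviatoric split
# reproduces the projection of the continuum stress tensor under a uniform strain field.
# E_V = E/(1-2nu), E_D = E/(1+nu) and the numbers are assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
E, nu = 30.0e9, 0.2
E_V, E_D = E / (1 - 2 * nu), E / (1 + nu)
lam, mu = E * nu / ((1 + nu) * (1 - 2 * nu)), E / (2 * (1 + nu))

eps = rng.normal(size=(3, 3)); eps = 1.0e-3 * (eps + eps.T) / 2     # random uniform (symmetric) strain
sig = lam * np.trace(eps) * np.eye(3) + 2 * mu * eps                # isotropic continuum stress

n = rng.normal(size=3); n /= np.linalg.norm(n)                      # random facet normal
m = np.cross(n, rng.normal(size=3)); m /= np.linalg.norm(m)         # one facet tangent

eps_V = np.trace(eps) / 3.0
eps_N, eps_M = n @ eps @ n, m @ eps @ n                             # facet strains (projections)
sig_N = E_V * eps_V + E_D * (eps_N - eps_V)                         # facet normal stress
sig_M = E_D * eps_M                                                 # facet shear stress

assert np.isclose(sig_N, n @ sig @ n) and np.isclose(sig_M, m @ sig @ n)
```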
for each facet and the corresponding element , the portion of the element area associated with that facet is colored according to the value . one can see that dcm successfully generated the uniform stress field , which matches well with the elasticity results . this contour is the same for the component of the stress tensor . in addition , the resultant force vector on each node is plotted in figure [ patchtest]c . it is clear that the force vector is negligible for the nodes inside the specimen , while its distribution on the specimen surfaces corresponds to the uniform stress and strain fields . in order to study the convergence of the present method to the exact solution for a non - uniform strain field , a classical cantilever beam test [ 19 ] was simulated . the rectangular domain , shown in figure [ canteliverbeam ] , is characterized by a length - to - depth ratio of 4 . the traction boundary conditions are the classic stress distributions of simple bending . figure [ canteliverbeam ] shows a parabolically varying shear at the cantilever tip . at the fixed end , only the displacement boundary conditions are shown for clarity . however , an equal but opposite parabolic shear was applied at the fixed end , as well as a linearly varying normal stress based on the non - zero bending moment at that location . the exact solution for the displacement field is provided in hughes [ 19 ] , which assumes linear isotropic elasticity . six different meshes at various levels of refinement are considered . figure [ canteliverbeam ] shows the coarsest , with 128 elements , and the finest , with 1790 elements . for comparison , the same numerical simulations were performed using the standard constant strain triangle ( cst ) finite element . all the computations were carried out under plane strain conditions , with a poisson s ratio of 0.3 . figure [ peakstat ] presents the results of the convergence study . the relative error between the numerical calculation and the exact solution is plotted as a function of the inverse of the square root of the number of elements ( ) , which is proportional to the characteristic element size . the results for the total elastic energy and the tip displacement are shown in figures [ peakstat]a for both dcm and the constant strain triangle ( cst ) finite element . in the log - log plots of figures [ peakstat]a , the slopes of the line segments provide a measure of the convergence rate . for the strain energy , the average convergence rates for the dcm and cst , respectively , are 1.62 and 1.99 , and for the tip displacement they are 2.1 and 2.02 . the theoretical convergence rate for the cst is 2 for both strain energy and tip deflection . although the convergence rates are comparable , the dcm outperforms cst in terms of accuracy . the cst error in both strain energy and tip deflection is one order of magnitude higher than the dcm error .
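the convergence rates quoted above are slopes of the error curves in log - log coordinates ; a small sketch of this estimate is given below , where the intermediate element counts and the error values are placeholders to be replaced by the measured data .

```python
# Estimating a convergence rate as the slope of a log-log fit of error versus element size.
# The intermediate element counts and error values are placeholders; substitute the measured
# strain-energy or tip-displacement errors of the actual convergence study.
import numpy as np

h = 1.0 / np.sqrt(np.array([128, 288, 512, 800, 1152, 1790]))   # ~ element size from element counts
err = np.array([0.38, 0.21, 0.13, 0.09, 0.075, 0.06])           # hypothetical relative errors

rate, _ = np.polyfit(np.log(h), np.log(err), 1)                 # slope of the log-log fit
print(f"estimated convergence rate: {rate:.2f}")
```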
in terms of energy , for example , the dcm error ranges from 0.38 to 0.06 ( coarsest to finest mesh ) , whereas the cst error ranges from 10 to 0.8 . it must be mentioned , however , that each node in the dcm has one degree of freedom , the rotation , more than its counterpart in the cst . this additional degree of freedom results in a higher computational cost for dcm compared to classical fem . the tip displacement and the strain energy errors are plotted versus the total number of degrees of freedom for both dcm and fem simulations in figure [ peakstat]b in log - log axes . one can see that as the total number of dofs increases , the error values decrease ( the dofs axis is reversed ) . it can be seen that for an approximately equal number of elements , the total number of dofs for dcm is higher than for cst . however , the accuracy remains higher than that of cst for the same number of dofs . the convergence study presented in the previous section demonstrates that the dcm performs very well in the elastic regime . however , the most attractive feature of this method is the ability to easily accommodate the displacement discontinuity associated with fracture without suffering from the typical shortcomings of the classical finite element method , the limitations of typical particle models , or the complexity and the high computational cost of advanced finite element formulations . in this section a simple isotropic damage model is introduced in the dcm framework in order to simulate the initiation and propagation of quasi - brittle fracture . where is the damage parameter related to the facet . the evolution of the damage parameter is assumed to be governed by a history variable , the facet maximum effective strain , , characterizing the overall amount of straining that the material has been subject to during prior loading : where is a material parameter representing the strain limit which governs the onset of damage , and governs the damage evolution rate . the maximum effective strain that is used for each facet at each computational step is equal to the maximum principal strain that the facet has experienced through the loading process . this value is compared to to distinguish facet elastic behavior from the nonlinear case . is the maximum eigenvalue of the facet strain tensor , whose components can be derived in terms of the facet normal , tangential , and volumetric strains as discussed in section [ cantilever ] and presented in detail in appendix [ facet strain tensor ] . in order to ensure convergence upon mesh refinement and to avoid spurious mesh sensitivity , one can define where is hillerborg s characteristic length , which is assumed to be a material parameter . and are the elastic limit stress and fracture energy , respectively . in order to demonstrate the ability of dcm to simulate cohesive fracture with this simple two - parameter model , several different fracture analyses will be summarized in the following sections . included will be examples of quasi - static fracture , dynamic crack propagation , and fragmentation . multiple numerical tests are carried out to check the efficiency and robustness of the established framework . quasi - static and dynamic fracture simulations are performed .
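the structure of this facet damage model can be sketched as follows ; the exponential form of the damage evolution and the way the softening strain is scaled by the ratio of hillerborg s characteristic length to the local node spacing are assumptions chosen for illustration , and only the overall structure ( history variable , onset strain , crack band regularization ) follows the description above .

```python
# Sketch of an isotropic facet damage law with crack-band regularization.  The exponential form
# and the specific scaling of the softening strain by ell_t / ell are assumptions made for
# illustration; parameter names are hypothetical.
import math

def update_damage(eps_max_new, eps_max_old, eps_t, ell_t, ell):
    """Return (damage, updated history variable) for one facet.

    eps_max_*: maximum effective strain (history variable), eps_t: damage-onset strain,
    ell_t: Hillerborg characteristic length (material), ell: distance between the two nodes."""
    eps_max = max(eps_max_new, eps_max_old)        # the history variable is non-decreasing
    if eps_max <= eps_t:
        return 0.0, eps_max                        # elastic regime
    eps_f = eps_t * max(ell_t / ell, 1.0)          # regularized softening strain (crack-band scaling)
    D = 1.0 - (eps_t / eps_max) * math.exp(-(eps_max - eps_t) / (eps_f - eps_t + 1e-30))
    return min(D, 1.0), eps_max

def facet_normal_stress(eps_N, D, E_N):
    # damaged facet stress: sigma_N = (1 - D) * E_N * eps_N (tension-driven damage assumed)
    return (1.0 - D) * E_N * eps_N
```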
in this section , the simulation of direct tensile tests and three - point bending tests on notched specimens under quasi - static loading is carried out . the specimens are 120 mm rectangular panels with an out - of - plane thickness of 80 mm . the notch is one third of the panel depth , 40 mm , with a width equal to 12 mm . as shown in figure [ notchedspecimen ] , the notch tip is assumed to be semicircular in order to avoid unrealistic singularities in the stress distribution which might lead to premature crack initiation and propagation . for both the tensile and the three - point bending tests , the assumed material model parameters are : , tensile strength mpa , young s modulus gpa ; and material characteristic length m , which corresponds to a fracture energy of approximately 47 j / m . the direct tensile test is performed by constraining the boundary nodes on the left side of the specimen and by applying an increasing displacement to the nodes on the right side of the specimen , see figure [ notchedspecimen](a ) . in order to investigate the mesh size dependency of the solution , three different average element sizes in the zone of the notch , of 4 , 2 , and 1 mm for the coarse , medium , and fine meshes , respectively , are considered and shown in figure [ notchedgeoms ] . figure [ notchedresults - dt]b reports the nominal stress versus applied displacement curves for the three different mesh resolutions . the nominal stress is defined as , where is the overall applied load corresponding to a certain displacement . and are the specimen width and thickness , respectively . cracked specimens with different mesh resolutions are plotted in figure [ notchedresults - dt]a . the crack pattern shown in this figure is relevant to the applied displacement at the end of the loading process ( 0.06 mm , see figure [ notchedresults - dt]b ) . the nodal displacements in figure [ notchedresults - dt]a have been amplified by 50 to clearly visualize the crack that develops from the notch tip towards the upper edge of the specimen . all curves reported in figure [ notchedresults - dt]b have an initial elastic tangent followed by a nonlinear hardening branch up to a certain peak load . afterwards , the behavior is characterized by a softening response with decreasing load carrying capacity for increasing end displacement . one can observe that the dcm responses for the different mesh configurations match very well , which confirms the convergence and mesh insensitivity of the dcm framework obtained with the simple regularization in equation [ effmaxstr ] . spurious mesh sensitivity , which is characterized by mesh dependent energy dissipation and lack of convergence , is the typical problem of tensorial constitutive equations for softening materials when not properly regularized . classical numerical techniques such as the finite element method that employ these types of constitutive equations result in a more brittle response as a finer mesh is utilized . dcm solves this problem by introducing a length type variable , the local distance between the two nodes , into the vectorial constitutive equations defined on each facet , equations [ effmaxstr ] . therefore , the constitutive equations for any generic facet vary with the edge length between the two corresponding nodes , which yields a mesh independent global response for different mesh configurations . the fracture energy , the energy consumed per unit area of the formed crack , is equal to the area under the curves in figure [ notchedresults - dt](b ) , which means all the three simulations
show the same fracture energy . the simulations were not carried out until the applied load dropped to zero , and so only partial estimates of the total energy consumption relevant to complete material separation can be made . for the performed dcm simulations , the consumed energies are calculated as 45.7 , 46.5 , and 46.9 j / m for the coarse , medium , and fine meshes , respectively . as noted above , the material property used for all simulations was relevant to a fracture energy of 47 j / m . these positive results provide some quantitative evidence regarding the convergence and the proper , mesh - independent energy consumption of dcm . the configuration for the three - point bending test is shown in figure [ notchedspecimen](b ) . the bottom left corner of the specimen is constrained both vertically and horizontally , whereas the bottom right corner is constrained only in the vertical direction ( classical pin - roller boundary conditions for simply supported beams ) . the load is applied by gradually increasing the displacement of a few boundary nodes located on the top side of the specimen , centered at the specimen axis of symmetry . similar to the case of direct tension , the obtained results are reported in terms of nominal stress versus applied displacement for the three mesh resolutions . cracked specimens at the end of the loading process ( 0.12 mm , see figure [ notchedresults-3pbt](b ) ) are depicted in figure [ notchedresults-3pbt](a ) . due to the bending character of these simulations , the stress profile and the crack opening displacement along the crack are not uniform . the nominal stress versus applied displacement curves for the three simulations are plotted in figure [ notchedresults-3pbt](b ) . the results display once again that the dcm solution is convergent upon mesh refinement and that spurious mesh sensitivity does not take place . an end - spalling fragmentation experiment is carried out on a two dimensional bar of 1 mm height , 10 mm length , and 1 mm thickness under plane stress conditions . the bar is subjected to a sinusoidal velocity impulse with a peak value of m / s and a duration time of 1 . in this study , the assumed material properties are : mpa , gpa , material characteristic length m , which corresponds to a fracture energy of 2.2 j / m , material density , and poisson s ratio . the horizontal velocity component of the nodes located vertically at the mid - height of the bar is plotted along the bar ( figure [ bar - sinus - veltime ] ) . one can see that the velocity impulse travels undisturbed across the bar , see time steps 1 , 1.5 , and 2 , while the magnitude of the wave doubles as it reaches the right end due to the interaction with the free - end boundary condition , see time step 2.5 in figure [ bar - sinus - veltime ] . one can calculate that the velocity impulse moves across the specimen at an approximate velocity of 5000 m / s , which corresponds closely to the value given by the basic equation for the one - dimensional wave propagation velocity , which results in 4874 m / s .
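the comparison above rests on the elementary one - dimensional bar relation c = ( e / \rho )^{1/2} ; the sketch below uses hypothetical values of e and \rho , not the ones adopted in the simulations , simply to illustrate the check .

```python
# One-dimensional wave-speed sanity check for the end-spalling bar test.  E and rho below are
# placeholders (not the paper's values); with them, c = sqrt(E/rho) and the transit time over the
# 10 mm bar can be compared with the observed pulse arrival.
import math

E = 57.0e9        # hypothetical Young's modulus [Pa]
rho = 2400.0      # hypothetical mass density [kg/m^3]
L = 10.0e-3       # bar length [m]

c = math.sqrt(E / rho)                                  # 1D bar wave speed
print(f"wave speed  c = {c:.0f} m/s")
print(f"transit time  L/c = {L / c * 1e6:.2f} microseconds")
```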
in terms of stress , the traveling wave applies a compressive stress on the bar before arriving at the free end , where the stress reverses from compression to tension . the generated tensile stress overcomes the tensile strength of the material , and end - spalling fragmentation begins . one can see that the wave moving back through the bar is no longer sinusoidal , which is due to the engendered material nonlinearity , see time steps 2.75 and 3.5 . a more detailed analysis of the problem under consideration reveals that a bi - axial strain state is actually generated through the specimen due to poisson s effect , which results in the presence of lateral straining . in turn , this leads to inclined principal strain directions and , consequently , inclined cracks ( see figure [ crack - bar - impulse ] ) , since crack initiation and propagation are simulated with the strain dependent damage model discussed earlier . figure [ crack - bar - impulse ] shows the fracture pattern of the bar at different time steps , namely 3.6 , 4 , and 4.4 . one can see that a localized fracture takes place at the bar end and evolves into total separation of the right end once the fracture energy of the material is completely overcome . the contour of the damage parameter is also illustrated in figure [ crack - edge - impulse ] , which confirms the fracture pattern that occurred at the bar end . in addition , one can notice that the crack propagates vertically at the center of the bar , while it deviates as it moves towards the cross section edges . this can be explained by the fact that the bi - axial effect is more pronounced over the areas away from the center of the bar . in figure [ bar - eps - ratio ] , the maximum effective strain experienced by each facet , normalized by , is plotted at different time instants . 1.35 , 1.8 , and 2.25 are instants during which the compressive wave travels through the bar before reaching the free end , while 2.7 , 3.6 , and 4.5 are after the signal reaches the free end and leads to a tensile wave . one can see that at 1.35 , 1.8 , and 2.25 , for all facets , which implies that all facets stay in the elastic regime . at 2.7 , which is just after the signal reaches the free end and the compressive wave converts into a tensile one , at the end of the bar , where nonlinearity starts to develop . at 3.6 and 4.5 , damage localizes and corresponds to the specimen fracture pattern depicted in figure [ crack - bar - impulse ] . the ratio of the horizontal component of the facet stress tensor to is plotted in figure [ bar - sig - ratio ] at the same time instants as the ones considered in figure [ bar - eps - ratio ] . one can clearly see the propagation of the compressive wave through the specimen at 1.35 , 1.8 , and 2.25 , and its conversion to a tensile wave at 2.7 . at 4.5 , it can be observed that the stress value on the facets around which fracture takes place is approximately zero , which corresponds to the bar splitting type of failure pattern . in this section , a classical dynamic crack propagation test is simulated . the reference experimental data is relevant to maraging steel , which shows high tensile strength and brittle behavior when subjected to high strain rates . a schematic representation of the test configuration is shown in figure [ crack - edge - impulse ] , in which one can see a projectile impacting the central part of an unrestrained double notched specimen . the plate has a 10 mm out - of - plane thickness . a plane stress condition can be assumed for the dcm analysis .
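the dynamic analyses in this and the following sections advance the lumped - mass dcm equations of motion in time with an explicit scheme ; a generic central - difference type update is sketched below , where internal_force ( ) is a placeholder for the assembly of facet tractions into nodal forces and moments and the time step is subject to the usual stability restriction tied to the smallest node spacing and the wave speed .

```python
# Generic explicit time integration of lumped-mass equations of motion, as a sketch of the kind
# of scheme used for the dynamic fracture examples.  internal_force and external_force are
# placeholders supplied by the user; mass is the diagonal (lumped) generalized mass vector.
import numpy as np

def explicit_dynamics(u0, v0, mass, internal_force, external_force, dt, n_steps):
    """u, v: generalized dof vectors (translations and rotations); returns final state."""
    u, v = u0.copy(), v0.copy()
    a = (external_force(0.0) - internal_force(u)) / mass            # initial accelerations
    for step in range(n_steps):
        t = (step + 1) * dt
        v_half = v + 0.5 * dt * a                                    # half-step velocities
        u = u + dt * v_half                                          # updated dofs
        a = (external_force(t) - internal_force(u)) / mass           # new accelerations
        v = v_half + 0.5 * dt * a                                    # end-of-step velocities
    return u, v
```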
by using the symmetry of the problem , half of the specimen is modeled and appropriate boundary conditions , horizontal nodal displacement and nodal rotation equal to zero , are enforced on the line of symmetry . kalthoff and winkler investigated the effect of the projectile velocity on the failure mechanism : a brittle failure with a crack at an angle of was observed for the case of low impact velocity ( 32 m / s ) , see figure [ crack - edge - impulse ] . in the current example , a velocity of 16 m / s is applied at the impacted nodes , and this impulse is kept constant to the end of the simulation . the velocity of 16 m / s is selected because the elastic impedances of the projectile and the specimen are considered to be equal . the material properties considered in the dcm simulations are the same as the ones used in section [ end - spalling ] . belytschko et al . simulated this experiment using the continuum - based xfem model for quasi - brittle fracture , which is considered here as a reference to discuss the dcm performance . to investigate the mesh dependency of the dcm results , a fine and a coarse mesh with element edges of 0.65 mm ( 50573 elements ) and mm ( 22437 elements ) are considered . the initial vertical notch is simulated with a 1.5 mm width , and the time step used in the explicit integration scheme is 0.02 . the velocity impulse applied to the dcm boundary nodes generates a compressive wave in the central part of the specimen , which propagates until it reaches the notch tip . at this point , significant shear strains develop , leading to high principal tensile strains and crack initiation at the left side of the notch tip . subsequently , the crack propagates towards the left boundary of the specimen . figure [ ecpf - frac ] shows crack initiation and propagation at different time steps for the fine mesh case . the average crack propagation angle with the horizontal axis at time step 56 is , which compares very well with the experimental result . at this time , localized damage which leads to fracture takes place on the top right boundary of the specimen , and the generated crack propagates towards the notch tip . this is due to the reflection of the compressive wave from the top right boundary and is also reported by belytschko et al .
in their xfem simulations . the propagating crack tends to become horizontal at the end of the simulation , see figure [ ecpf - frac]e , as the localized fracture occurs and propagates on the top right boundary . this can also be related to the strain - based failure criterion employed in the dcm model and the simple damage model used for the constitutive behavior of the material . the damage coefficient contours of the fine mesh simulation are plotted in figure [ ecpf - damage ] , which clearly shows two highly localized damaged areas corresponding to the fracture pattern depicted in figure [ ecpf - frac ] . figures [ ecpc - frac ] and [ ecpc - damage ] show the fracture pattern and the damage coefficient contour at different time steps for the coarse mesh simulations , respectively , which agree well with the fine mesh results . the average crack propagation angle with the horizontal axis at time step 56 is , and the crack tends to propagate horizontally as fracture occurs and develops from the top right boundary . dcm performs more accurately compared to xfem in terms of predicting the crack propagation angle . however , the crack does not develop along the same path to the end of the test , which is instead captured by the xfem simulations . in addition , dcm is able to capture micro cracks developing from the main crack faces , as can be seen in figures [ ecpf - damage ] and [ ecpc - damage ] , while this is not captured by other approaches . it is also worth noting that the computational cost of methods like xfem increases as the crack propagates because additional dofs must be inserted to capture the displacement discontinuity . this is not the case for dcm , which is characterized by the same number of dofs in the elastic and fracturing regimes . a final benchmark fracture problem simulated by dcm in this section involves dynamic crack propagation and crack branching . figure [ branch - geom ] shows a schematic representation of the test configuration . a pre - notched rectangular panel is subjected to a uniform traction applied as a step function on the two edges parallel to the notch . this experiment has been simulated computationally by other authors , and related experimental results were reported by different researchers . ramulu and kobayashi observed experimentally that a major crack starts to propagate from the notch tip to the right , which branches into two cracks at a certain point during the experiment , see figure [ branch - geom ] for a sketch of the experimental result . the dcm parameters used in this test are : mpa , gpa ; material characteristic length m , which corresponds to a fracture energy of approximately 3 j / m ; material density ; poisson s ratio . the applied traction is mpa . crack initiation and propagation at different time steps of the dcm simulation are illustrated in figure [ branch - damage](a - e ) through the damage parameter contours . one can see that the crack starts to propagate from the notch tip parallel to the symmetry axis of the configuration on a straight path for a short distance , and subsequently it branches into two cracks . the deformed configuration of the dcm simulation is plotted in figure [ branch - damage]f , which agrees well with the experimental results . experimental observations also report that , before the main branching occurs , minor branches emerge from the main crack but only propagate over a short distance . dcm is able to capture these minor branches , which can be seen in the damage variable contours in figure [ branch - damage ] . dcm and classical particle
models are basically governed by the same set of algebraic equations expressing compatibility and equilibrium .this naturally follows from the adoption of rigid body kinematics which is common to the two approaches .the difference between the two methods lies in the formulation of the constitutive law , namely in the relationship between facet tractions and facet openings . for classical particle models ,in which rigid particles are connected by springs , this relationship can be expressed as and , where and are the normal and tangential elastic stiffnesses , respectively .dcm constitutive laws , equation [ ntconstdamag ] , are consistent to the ones for particle models if one sets and .these conditions correspond to an elastic material with zero poisson s ratio . by properly setting the ratio between the normal and tangential stiffnesses, particle models can simulate an `` average '' non - zero poisson s ratio ( average in the sense that poisson s ratio is defined by analyzing a finite , as opposed to an infinitesimal , volume of material ) . in this case , however , particle models feature an intrinsic heterogeneous response even for load configurations that produce uniform strain fields according to continuum theory . in conclusion , for non - zero poissons ratio the two formulations are fundamentally different and the key difference is that dcm accounts for the orthogonality of the deviatoric and volumetric deformation modes while classical particle models do not . it must be mentioned here that the heterogeneous response of particle models is not necessarily a negative property and , actually , it is critical for their ability to handle automatically strain localization and crack initiation . it must be kept in mind , however , that in this case the size of the discretization can not be user - defined but must be linked to the actual size of the material heterogeneity . only under this conditioncan one consider the heterogeneous response of particle models to be a representation of the actual internal behavior of the material rather than a spurious numerical artifact .in this paper , the formulation of the discontinuous cell method ( dcm ) has been outlined .a convergence study in the elastic regime shows that dcm converges to the exact continuum solution with a convergence rate that is comparable to that of constant strain finite elements , but with accuracy that is one order of magnitude higher .in addition , numerical simulations show that dcm , with a simple two parameter isotropic damage model , can simulate cohesive fracture propagation without the drawbacks of standard finite elements , such as spurious mesh sensitivity , and without the complications of most recently formulated computational techniques .in addition , dcm successfully simulated the crack branching which is observed in the experiment of a benchmark problem .finally , dcm can simulate the transition from localized fracture to fragmentation without mesh entanglement typical of finite element approaches .* acknowledgments * + this material is based upon work supported by the national science foundation under grant no .. 99 m. jirasek .comparative study on finite elements with embedded discontinuities .comp . meth . in appl . mech . and eng .2000 ; 188 : 307330 .t. belytschko , t. black .elastic crack growth in finite elements with minimal remeshing .j. for num .meth . in eng .1999 ; 45(5 ) : 601620 .n. moes , j. dolbow , t. belytschko . a finite element method for crack growth without remeshing .j. for num .meth . 
in eng .1999 ; 46(1 ) : 133150 .j. dolbow , n. moes , t. belytschko . discontinuous enrichment in finite elements with a partition of unity method .finite elements in analysis and design 2000 ; 36(3 ) : 235260 .t. belytschko , n. moes , s. usui , c. parimi .arbitrary discontinuities in finite elements .j. for num .meth . in eng .2001 ; 50(4 ) : 9931013 .t. belytschko , h. chen , j. x. xu , g. zi .dynamic crack propagation based on loss of hyperbolicity and a new discontinuous enrichment .j. for num .meth . in eng .2003 ; 58 : 18731905 .g. zi , t. belytschko new crack - tip elements for xfem and applications to cohesive cracks .engng 2003 ; 57:22212240 .s. esna ashari , s. mohammadi . delamination analysis of composites by new orthotropic bimaterial extended finite element method .j. for num .meth . in eng .2011 ; 86 : 1507-1543 .s. esna ashari , s. mohammadi .fracture analysis of frp - reinforced beams by orthotropic xfem .journal of composite materials . 2011 ; 0(0 ) : 1-23 .g. t. camacho , m. ortiz .computational modeling of impact damage in brittle materials .j. of solids and structures 1996 ; 33 : 28992938 .m. ortiz , a. pandolfi .finite - deformation irreversible cohesive elements for three - dimensional crack - propagation analysis .j. numer . meth . engng .1999 ; 44 : 12671282 .a. pandolfi , m. ortiz .an efficient adaptive procedure for three - dimensional fragmentation simulations .engineering with computers 2002 ; 18(2):148159 .p. a. cundall , o. d. l. strack . a discrete numerical model for granular assemblies .geotechnique 1979 ; 29 : 4765 . a. hrennikoff .solution of problems of elasticity by the framework method .j appl mech 1941 ; 12 : 169 - 75 .e. schlangen , j.g.m .. experimental and numerical analysis of micromechanisms of fracture of cement - based composites .cement concrete composite 1992 ; 14:105 - 118 .g. cusatis , z. p. baant , l. cedolin .confinement - shear lattice model for concrete damage in tension and compression : i. theory .( asce ) 2003 ; 129(12 ) : 14391448 .g. cusatis , z. p. baant , l. cedolin . confinement - shear lattice model for concrete damage in tension and compression : ii . computation and validation .( asce ) 2003 ; 129(12 ) : 14491458 .g. lilliu , j. g. m. van mier .3d lattice type fracture model for concrete .eng . fract . mech .2003 ; 70 : 927941 .g. cusatis , z. p. baant , l. cedolin .confinement - shear lattice model for fracture propagation in concrete .methods appl .2006 ; 195 : 71547171 .j. e. bolander , s. saito .fracture analysis using spring network with random geometry .fract . mech .1998 ; 61(5 - 6 ) : 569591 .j. e. bolander , k. yoshitake , j. thomure .stress analysis using elastically uniform rigid - body - spring networks .j. struct .. earthquake eng .( jsce ) 1999 ; 633(i-49 ) : 2532 .j. e. bolander , g. s. hong , k. yoshitake . structural concrete analysis using rigid - body - spring networks . j. comp . aided civil and infrastructure eng .2000 ; 15 : 120133 .yip , mien , jon mohle , and j. e. bolander . automated modeling of three - dimensional structural components using irregular lattices .computer - aided civil and infrastructure engineering ; 2005 20(6 ) : 393407 .g. cusatis , d. pelessone , a. mencarelli .lattice discrete particle model ( ldpm ) for failure behavior of concrete .i : theory .cement and concrete composites 2011 ; 33 : 881 - 890 .g. cusatis , a. mencarelli , d. pelessone , j. baylot .lattice discrete particle model ( ldpm ) for failure behavior of concrete .ii : calibration and validation . 
cement and concrete composites 2011 ; 33 : 891 - 905 .r. rezakhani , g. cusatis , asymptotic expansion homogenization of discrete fine - scale models with rotational degrees of freedom for the simulation of quasi - brittle materials ._ j. mech .solids _ 2016 ; 88 : 320345 .leite , v. slowik , h. mihashi .computer simulation of fracture processes of concrete using mesolevel models of lattice structures .cement and concrete research 2004 ; 34 : 1025 - 1033 .f. camborde , c. mariotti , f.v .. numerical study of rock and concrete behaviour by discrete element modelling .computers and geotechnics , 2000 ; 27 : 225247 .p. grassl , z. bazant , g. cusatis . lattice - cell approach to quasi - brittle fracture modeling .computational modelling of concrete structures 2006 ; 930 : 263268 930 .e. tonti , f. zarantonello . algebraic formulation of elastostatics : the cel method .computer modeling in engineering and science 2009 ; 39(3 ) : 201236 .s. gzey , b. cockburn , h. k. and stolarski .the embedded discontinuous galerkin method : application to linear shell problems .j. numer . meth . engng . 2007 ; 70 : 757790 .y. shen , a. lew . an optimally convergent discontinuous galerkin - based extended finite element method for fracture mechanics .j. numer . meth . engng .2010 ; 82 : 716755 .r. abedi , m. a. hawker , r. b. haber , k. matous .an adaptive spacetime discontinuous galerkin method for cohesive models of elastodynamic fracture .j. numer . meth .2010 ; 81 : 12071241 .d. j. allman . a compatible triangular element including vertex rotations for plane elasticity analysis . computers and structures 1984 ; 19(2 ) : 18 .p. g. bergan , c. a. felippa .a triangular membrane element with rotational degrees of freedom .mech . eng .1985 ; 50 : 2569 .x. zhou , g. cusatis .tetrahedral finite element with rotational degrees of freedom for cosserat and cauchy continuum problems .( asce ) 2015 ; 141(2 ) , 06014017 .e. tonti . a direct discrete formulation of field laws : the cell method .computer modeling in engineering and sciences , 2001 ; 2(2 ) : 237258 . c. talischi , g. h. paulino , a. pereira , i. f. menezes .polymesher : a general - purpose mesh generator for polygonal elements written in matlab .structural and multidisciplinary optimization , 2012 ; 45(3 ) : 309328 .z. p. baant , h. oh.byung .crack band theory for fracture of concrete .materiaux et construction , 1983 ; 16(3 ) : 155 - 177. j. f. kalthoff , s. winkler .failure mode transition at high rates of shear loading .int . conf . on impact loading and dynamic behavior of materials 1987; 1 : 85195. t. belytschko , h. chen , j. xu , g. zi .dynamic crack propagation based on loss of hyperbolicity and a new discontinuous enrichment .j. numer . meth .engng 2003 ; 58 : 1873-1905 .song , h. wang , t. belytschko . a comparative study on finite element methods for dynamic fracture .comput . mech .2008 ; 42 : 239250 .xu , a. needleman .numerical simulation of fast crack growth in brittle solids .solids 1994 ; 42(9 ) : 13971434 .m. ramulu , a. s. kobayashi .mechanics of crack curving and branching - a dynamic fracture analysis .1985 : 27 ; 187201 .k. ravi - chandar .dynamic fracture of nominally brittle materials .1998 : 90 ; 83102 .derivation of facet strain tensor components in terms of facet normal , tangential and volumetric strains are explained in this section for plane strain and stress problems . in both cases ,one should solve a system of three algebraic equations with three unknowns . 
where and are the two projection tensors that are calculated for each facet using its unit normal and tangential vectors , with the subscript dropped for simplicity . the solution of the above system of equations will yield the following expressions for the strain tensor components : for the case of plane stress problems , the out - of - plane strain component should be taken into account . therefore , the first equation in the system of equations [ pestraintens-1 ] should be revised as , while the two other equations stay the same .
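a compact numerical version of this appendix calculation is sketched below ; the symmetrized tangential projector and the coefficient relating the in - plane trace to the volumetric strain ( k = 2 in the example ) are assumptions made for illustration , and the final check simply verifies that a known strain tensor is recovered from its facet projections .

```python
# Sketch of the appendix calculation: recover the in-plane strain tensor components
# (eps_11, eps_22, eps_12) of one facet from its normal, tangential, and volumetric strains by
# solving a 3x3 linear system.  The projectors and the coefficient k are illustrative assumptions.
import numpy as np

def facet_strain_tensor(eps_N, eps_M, eps_V, n, m, k=2.0):
    # unknown vector x = [eps_11, eps_22, eps_12]
    A = np.array([
        [n[0] * n[0], n[1] * n[1], 2.0 * n[0] * n[1]],             # n.eps.n       = eps_N
        [m[0] * n[0], m[1] * n[1], m[0] * n[1] + m[1] * n[0]],     # m.eps.n       = eps_M
        [1.0,         1.0,         0.0],                           # eps_11+eps_22 = k * eps_V
    ])
    b = np.array([eps_N, eps_M, k * eps_V])
    e11, e22, e12 = np.linalg.solve(A, b)
    return np.array([[e11, e12], [e12, e22]])

# round-trip check against a known strain tensor
eps = np.array([[2.0e-3, 4.0e-4], [4.0e-4, -1.0e-3]])
n = np.array([0.6, 0.8]); m = np.array([-0.8, 0.6])
rec = facet_strain_tensor(n @ eps @ n, m @ eps @ n, np.trace(eps) / 2.0, n, m)
assert np.allclose(rec, eps)
```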
in this paper , the discontinuous cell method ( dcm ) is formulated with the objective of simulating cohesive fracture propagation and fragmentation in homogeneous solids without the issues relevant to excessive mesh deformation typical of available finite element formulations . dcm discretizes solids by using the delaunay triangulation and its associated voronoi tessellation , giving rise to a system of discrete cells interacting through shared facets . for each voronoi cell , the displacement field is approximated on the basis of rigid body kinematics , which is used to compute a strain vector at the centroid of the voronoi facets . such a strain vector is demonstrated to be the projection of the strain tensor at that location . at the same point , stress tractions are computed through vectorial constitutive equations derived on the basis of classical continuum tensorial theories . results of the analysis of a cantilever beam are used to perform convergence studies and a comparison with classical finite element formulations in the elastic regime . furthermore , cohesive fracture and fragmentation of homogeneous solids are studied under quasi - static and dynamic loading conditions . the mesh dependency problem , typically encountered upon adopting softening constitutive equations , is tackled through the crack band approach . this study demonstrates the capabilities of dcm by solving multiple benchmark problems relevant to cohesive crack propagation . the simulations show that dcm can handle effectively a wide range of problems , from the simulation of a single propagating fracture to crack branching and fragmentation .

center for sustainable engineering of geological and infrastructure materials ( segim ) , department of civil and environmental engineering , mccormick school of engineering and applied science , evanston , illinois 60208 , usa . segim internal report no . 16 - 08/587d .

keywords : cohesive fracture , finite elements , discrete models , delaunay triangulation , voronoi tessellation , fragmentation
scientometrics is a field of information and communication sciences devoted to quantitative studies of science .the term was coined by v.v .nalimov and z.m .mulchenko as title of a book about the measurement of scientific activity . the systematic study of systems of thoughts was and still is inherent part of philosophy .but the growth of the academic system after world war ii , the need for accountability of public spending , the increasing role of technological innovation for economic wealth , and critical debates about the role of science for society lead to a formation of a new special field devoted to the study of scholarly activity . in this newly emerging field the more traditional epistemic and historical perspective has been combined with studying the sciences as a social system using approaches from social - psychology , sociology , and cultural studies .bernal has been called one of the grandfathers of this emerging field , and the foundation of a society called society for social studies of science in 1975 was a first sign of an institutional consolidation of the scientific community interested in the sciences as an object of studies . at the very beginning , quantitative studies and qualitative approaches were closely together. the sociological theories of robert k. merton about feedback mechanisms ( social enforcement ) in the distribution of award in the science system resonated with stochastic mathematical models for the skew distribution of citations as proposed by derek de solla price , a physicist and science historian . and others proposed a socio - economic theory of the academic systems shedding light on necessary preconditions of scientific labor and related , so - called input indicators . the emergence of ( digital ) databases of scientific information such as the _ science citation index _ of the _ institute for scientific information _( isi , now thompson reuters ) triggered a wave of systematic , statistical studies of scientific activities - the core of _ scientometrics _ still today . not surprisingly , the number of quantitative studies grew with the availability of data .most of the scientometric studies were devoted to products of scholarly activity , namely publications .they are based on a so - called literary model of science and have been boosted by the groundbreaking innovation of a bibliographic information system which includes the references used in a paper - on top of authors , title , abstract , keywords and the bibliographic reference itself. the last decades have witnessed a bias of quantitative studies about the products ( texts and communication ) of scholarly activity compared to studies of their producers ( authors ) or the circumstances of the production ( expenditures ) . in fig .1 some relevant branches of research inside of scientometrics and some of their representatives are named .this illustration does not claim any completeness . 
for a more comprehensive introduction into scientometrics we would like to refer the reader to a recently published book on bibliometrics which also discusses the social theories used in scientometrics . further useful sources are the lecture notes of wolfgang glänzel devoted to the main mathematical approaches to scientometric indicators , the website of one of the authors and , of course , the main journals in the field such as _ scientometrics _ , _ journal of the american society for information science and technology _ , _ journal of documentation _ , _ research policy _ , _ research evaluation _ , and _ journal of informetrics_. the representation in fig . [ andrea1 ] suggests that for large parts of scientometrics authors were the forgotten units of analysis . there is indeed a rationale behind the focus on texts . for the elements of textual production - or , more specifically , journal articles - databases such as the citation indices of the isi - web of knowledge have introduced standards for the units of a bibliographic reference : the journal names , the subject classifications , and other meta data such as document type . however , the identification of authors by their names creates a problem ( i.e. , occurrence of common names , transcription of non - english names , name changes ) . only recently have attempts been made to also make authors automatically traceable . one way is to introduce standardized meta data for authors , for instance by introducing a unique digital identity for researchers . currently , different systems , commercial and public , coexist and compete . publishers such as thompson reuters ( see www.researcherid.com ) and elsevier have introduced ids for authors . for the dutch national science system the _ surf foundation _ has introduced a digital author identifier ( dai ) . open repositories also aim for the identification of authors ( see http://arxiv.org/help/author_identifiers ) . another way is to automatically allocate articles to authors using author - specific characteristics or patterns ( e.g. , a combination of a specific journal set , subject categories , and addresses ) . for large scale statistical analyses of the behavior of authors the ambiguity of person names is less important . examples are investigations of authorship networks , author s productivity , or author - citation network models . but without a researcher identity , or additional knowledge about the author , it is usually not possible to trace individual actors . the above mentioned steps , including a researcherid , might allow more systematic author - based studies beyond sample sizes which can still be cleaned by manual inspection . in the future a combination of following the creators of scientific ideas and the influence of these ideas themselves seems to be possible and promising . however , the dichotomy between texts and authors as introduced in fig . [ andrea1 ] is not a strict one . scientometric studies can not always be fully separated into either text - centered or author - centered . there is a gray area between both directions , and interesting studies can be found also in the history of scientometrics which trace authors in threads of ideas and _ vice versa_. two of them have inspired this paper : * algorithmic historiography * and * field mobility*.
the so - called * historiographic approach * has been proposed by the second author of the present article .this approach allows to reconstruct the main citation paths over years starting from a seed node .the seed can be one paper of an author , or a collection of papers characterizing an author or a scientific speciality . based on isi - data, the tool _ histcite _ allows to extract and to visualize the citation network. based on the citation rates of all nodes in this network either in the network itself ( local g1 graph ) or in the whole isi database ( global g2 graph ) graphs can be displayed showing the citation tree of the most cited papers in this directed graph ordered along a time axis . through a historiographical analysisone can reconstruct schools of thoughts ; paths of influence and the diffusion of ideas. in this paper , we use one of the recorded _ histcite _ files , namely the historiograph of merton s paper of 1968 .a part of the global _ histcite _ graph g2 is displayed in fig .[ andrea2 ] .a novel aspect applied in the present paper is an analysis of the nodes in the historiograph in terms of their disciplinary origin .this approach has been inspired by the study of * field mobility * a term coined by jan vlachy. vlachy introduced this notion of mobility , generalizing geographic and occupational mobility or migration towards the movement of researchers through cognitive spaces ._ field mobility _ describes one aspect of the _ cognitive mobility _ of a researcher who during the life span of her or his career moves from scientific field to scientific field . in the past , this approach has been integrated into a dynamic model of scientific growth. recently , the concept has been used again to trace the activities of a researcher in different fields. the crucial point for an application of this concept is the question how to identify the fields . for the traces of individual authors mobility hellsten used self - citations .self - citations represent a self - referential mechanism which automatically leads to a clustering of papers which have a common focus . often in these thematic clusterswe also find different co - authors , and different keywords and title words can be used to label the different _ fields of activity _ on the micro - level of an individual researcher .the analysis we present in this paper is a combination of a historiographic and a field mobility approach .we extend the approach of field mobility from the mobility of a researcher between fields to the mobility of a paper between fields .while one can easily imagine that a researcher by her or his on - going creativity travels between topics and fields , this is intuitively less clear for a published and therefore stable paper .so , what do we mean by this ?once a paper is published , it has a certain location in an envisioned landscape of science .this position can be determined by the journal in which the paper appeared .the disciplinary classification of the journal can be seen as an attribute or characteristics of the paper which allows to place it on a map of science. while this landscape seems to be relatively stable or , at least , slowly changes over time , citations to a paper represent a more fluid and faster dynamics . 
a paper published in sociology , for instance ,can suddenly gain importance for different areas as distant as physics or computer sciences .if we look at this paper through the lense of citations its position can be variable .eventually , the changing perception of the paper causes its travel in this imagined landscape .referencing to papers can be seen as a process of re - shaping the scientific landscape . due to sequential layers of perception the actual location of a paper , now determined bythe position of the recent papers citing it , can shift .this travel of a paper , or more precisely its perception , between fields is an indicator for the diffusion of ideas and _ field mobility _ in a generalized sense . in difference to the earlier mentioned author mobility study in this paper we determine different fields by manually inspecting and classifying journals . this way we identify _ fields of activity _ on the meso - level of journals , rather than on the micro - level of individual behavior ( as done in the case study on self - citation pattern ) , or on the macro - level of disciplines ( as represented by larger journal groups ) .due to its broad and persistent perception beyond sociology , merton s work seems to be a good candidate for studying the diffusion of ideas .moreover , looking at citation behavior over time and mobility phenomena we want to shed light on the micro - dynamic processes at the basis of past , current and future structures of the landscape of science . in a certain sense, we study the same questions as merton who himself asked for generic mechanisms in the dynamics of science . in the paper under study his 1968 paper on the _ matthew effect of science _ he proposes a specific mechanisms , namely the accumulation of reward and attention . with our study , we ask to which extend we can use the dynamics of the perception of his work as a case to shed another light on these generic mechanisms . the influence and relevance of merton s workhas been discussed earlier by one of the authors. historiographs of his uvre or part of it are available for further inspection .still , it remains a question what actually bibliometrics can add to science history based on text analysis and eye witness accounts .recently , harriet zuckerman has thoroughly discussed the matthew effect and carefully analyzed its perception in past and presence . in her analysisshe uses bibliometric information for the global pattern of perception of merton in a kind of _ birds - eye view_. the current bibliometric exercise complements her study on a meso - level . in this paper , we concentrate on the perception of one specific paper of robert k. merton out of the three devoted to the _ matthew effect _ .instead of looking on the overall citation numbers ( macro - level ) or following the nodes and paths in the historiograph of this particular paper in depth ( micro - level ) , we analyze the citing papers according to the disciplinary distribution of the journals in which these papers appeared ( meso - level ) .robert k. merton ( 1910 - 2003 ) is known for his theory of social structures as an organized set of social relationships , the discussion of their functionality or disfunctionality , and for his definition of culture as an organized set of normative values governing behavior. applied to science(s ) as a social system , he defined four scientific norms or ideals : communalism , universalism , disinterestedness , and organized skepticism . 
looking for empirical evidence supporting or undermining theoretical frames , he was also interested in social behavior of scientists which actually contradicts the norms and values functional for science. in particular , he drew attention to mechanisms of reward in science . in 1968 he published a paper in the journal _ science _ entitled the matthew effect in science : the reward and communication systems of science are considered . in this paperhe addressed the phenomenon that well - known scientists often receive more reward for the same contribution than other , less - known researchers .while this the rich get s richer effect has often been described as a sign of injustice and malfunctioning , merton also discussed that this deviant behavior from an ideal one has a constitutive , positive function for the whole system .it creates a focus of attention , a kind of pre - selection , and a structuring which allows an easier orientation in large amounts of information .it has been argued elsewhere that the essence of the _ parable of the talents _ as told in the bible , does not so much concern an unequal distribution of wealth or reward as such , but the difference between an expected and eventually achieved position in such a rank distribution depending on an appropriate _ use of talents_. in an empirical study of the scientific performance of countries in terms of citation gathering it has been shown that privileged countries in terms of expected citation rates receive even more citations than countries with smaller expectations .not only is the distribution of talents , gifts , strengths a skewed one , their further use seems to even increase this skewness .apparently , the _ matthew effect _ does not play in favor of certain researchers and allocates fame and reward not always to the person which deserves it most .however , on the level of the system , this effect is an important dynamic mechanism to create order out of chaos. different authors among them derek de solla price have pointed to the fact that in the language of system theory , cybernetics , and mathematics , this effect corresponds to a positive feedback loop which introduces a non - linear interaction mechanisms into the dynamics of the system . in terms of mathematical modelsthis can be described as self - accelerating growth rate , a growth rate of an entity depending on the actual size of the entity itself .thereby , an entity can be a scientific field , a certain technology , or a certain type of behavior .applied to a single entity models with such growth rates describe non - linear growth , exponential or hyperbolic . when implemented as a mechanism in a system of several competing growing entities different types of selection , including hyperselection ,can result from this non - linear mechanism. 
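as a small numerical illustration of such a positive feedback loop , the sketch below simulates a generic cumulative - advantage ( rich - get - richer ) process : each new citation is given to an existing paper with probability proportional to the citations it already holds ( plus one ) . this is a toy model of the mechanism discussed here , with invented parameters , and is not fitted to any of the data analysed in this paper .

```python
# Toy cumulative-advantage (rich-get-richer) simulation: each incoming citation
# attaches to paper i with probability proportional to (citations_i + 1).
# All parameters are invented and serve only to show the resulting skewness.
import random
from collections import Counter

random.seed(1)
n_papers, n_citations = 200, 5000
citations = [0] * n_papers

for _ in range(n_citations):
    weights = [c + 1 for c in citations]          # "+1" lets uncited papers be picked
    i = random.choices(range(n_papers), weights=weights, k=1)[0]
    citations[i] += 1

citations.sort(reverse=True)
top10_share = sum(citations[:n_papers // 10]) / n_citations
print('share of citations held by the top 10% of papers: %.2f' % top10_share)
print('least-cited end of the distribution (citations: papers):',
      dict(sorted(Counter(citations).items())[:5]))
```

even with a uniform start , the feedback rule alone concentrates a large share of the citations on a small fraction of the papers , which is the qualitative pattern the skew citation distributions discussed above exhibit .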
in general , positive feedback loops such as the _ matthew effect _ are at the core of specific pattern formation visible in skew distributions or in dominant designs .it is therefore not surprising that the perception of merton s paper has not been restricted to sociology and other social sciences , but has also found resonance in mathematics and physics .we started our investigation with the question why merton s paper still so often appears in the list of references of various authors .citation numbers of merton s paper seem even to show an increase instead of an expected fading away .this bibliometric observation seems to be in line with other observations of contemporary witnesses and friends of merton .our question was : can we use scientometrics to find objective , data - based evidence for subjective impressions ?is there any way to _factualize _ the impact of this specific paper of merton ? if we explore what citation analysis can contribute to uncover some attributes of the lasting impact of merton s work , we face the analytic challenge to measure the impact of a single work ( a single paper ) with methods designed to reveal regularities in large amounts of data .let us therefore first present some standard scientometrical insights into the citation history of scientific publications .citation analysis has taught us about so - called _citation classics_. these are highly cited papers sometimes even nobel prize winning ones .beginning in the 1960 s , one of the authors started to publish about highly cited papers in the journal _current contents_. highly cited papers represent only a very small fraction of all papers and citation rates are highly field - dependent .being aware of this , a refined methodology was proposed to identify a _citation classics_. as few as 100 citations ( or even fewer ) may qualify a work as a citation classic in some of these areas , such as radio astronomy , engineering , or mathematics . to identify citation classics in smaller fields we use several criteria .one is rank within a specialty journal .if a specialty journal defines a unique field , then the most - cited articles from that journal include many if not all citation classics for that field . merton as an author has entered the set of _ citation classics _ not with his paper of 1968 but with his book _ social theory and social structure _ from 1949 . in his own commentary on this fact merton wrote : i am not at all sure of the reasons for social theory and social structure ( stss ) still being cited 30 years after its first appearance . to answer that question with reasonable assurance would require a detailed citation analysis and readership study , hardly worth the effort . nowadays , in the age of digitally available databases and computers , such an effort is more practical .so , let us first look at the citation number for merton s paper of 1968 ( see fig .[ andrea3 ] ) .merton s paper has attracted 741 citations from 1968 to june 2009 . to extract these data we used the cited reference search command in the _ _web of knowledge__. records of the retrieved citing documents have been downloaded and exported into an excel data base for further analysis .the annual number of citing publications fluctuates between 5 and 15 - 20 over the period of more than 30 years with a slightly increasing tendency , but from 2002 onwards we observe a remarkable increase of it ( see fig .[ andrea3 ] ) . 
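a minimal pandas sketch of the counting behind a figure such as fig . [ andrea3 ] is given below . it assumes that the web of knowledge records have been exported to a csv file with one row per citing paper and columns named 'year' and 'journal' ; the file name and the column names are assumptions , since the actual export format is not described here .

```python
# Sketch: annual number of papers citing the 1968 paper and the number of
# distinct journals carrying those papers, from an exported record file.
# File name and column names ('year', 'journal') are assumptions.
import pandas as pd

records = pd.read_csv('citing_records.csv')           # one row per citing paper

per_year = records.groupby('year').agg(
    citing_papers=('journal', 'size'),                # papers citing the work
    citing_journals=('journal', 'nunique'),           # distinct journals per year
)
print(per_year)

# distribution of citations over journals: how many journals carry n citations
per_journal = records['journal'].value_counts()
print(per_journal.value_counts().sort_index())
```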
even if one takes into account that the database itself is not steady but growing over time , we can state that the perception of merton s work is continuing . from fig .[ andrea3 ] we can also see that the citations to merton s paper are widely scattered . in the figure we display the number of citing papers and the number of citing journals together .whenever in fig .[ andrea3 ] both graphs coincide each of the citations appear in a different journal .whenever the dark gray area is seen above the light one , in some journals more than one paper cites merton .we will look into these multiple citations from journals later again .a closer look reveals that not only the number of citations increases but also the number of journals with papers citing merton .citation analysis also taught us about different possible citation life - cycles of a paper , a person , or a research field .vlachy developed a typology of these life courses of papers in terms of citations .successive citations represent traces a work leaves in our collective memory .we see patterns between never reaching a wider audience ( scarcely reflected as labeled by vlachy ) , oscillating recognition , exponential or hyberbolic growth ( genial ) , and an almost gaussian growth and decline of recognition ( basic recognized). the latter effect seems to be much in line with what the analysis of larger ensembles has shown , namely that there is a citation window and a half - lifetime of a paper .moreover , after a certain peak in recognition the knowledge related to a certain paper becomes incorporated into reviews , textbooks , or figures under the name of an effect or author only without carrying a citation mark anymore .but , patterns and laws in the collective production of scientific knowledge is only one side of the coin .important singular events critical ( re)shaping the way we think about problems and solve them are another .both sides do not contradict each other .even more , they heavily rely on each other . for merton s paper of 1968 we find a steady growth over decades .what causes this growth ? to answer this questionwe analyze the journals which contain the citing papers .the citations towards merton s paper are concentrated in some of the 368 journals over which the citations in the whole period are distributed .if we plot the number of journals with n citations against n we see that only 24 journals carry more than 5 citations in the whole period ( fig .[ andrea5 ] ) . in the next step we allocated the journals to fields using two different classifications .a finer classification was used for a core set of 24 journals with more than 5 citations ( see table [ table1 ] ) .this core set carries about 40 percent of all citations . for the further examination of the whole journal set we used a rougher classification on the level of disciplines ( see table [ table2 ] ) .we choose field and discipline names used in bibliometric studies .the allocation of a journal to it is based on personal judgment . 
in both tables we also give the overall number of citations from these fields to merton s paper .

sti - science and technology studies - information science : 106
soc - sociology : 62
ste - science and technology studies - evaluation : 28
edu - education : 25
stc - science and technology studies - science studies : 23
i d - information and documentation : 14
psy - psychology : 6
phi - philosophy : 5
man - management : 3
[ table1 ]

math - mathematics : 2
phys - physics : 21
chem - chemistry : 3
eng - engineering : 12
lif - life sciences : 20
med - medical research ( including psychology ) : 94
soc - sociology / social sciences / information science : 540
phi - philosophy : 30
mult - multidisciplinary : 19
[ table2 ]

first , we took a closer look at the core set of journals . we asked which journals are part of this core set , which fields they represent , and how their presence in the core set changes over time . the growth of perception of merton s paper appears mainly in this core group of journals . in table [ table3 ] we display the distribution of citing papers across journals for all journals with more than 5 citing papers . merton s paper was published in an interdisciplinary journal . it was first taken up in established sociological journals . but , most of the journals in the core set have only been founded in the 1960s or 1970s . in table [ table3 ] we indicate the first year when a citing paper appeared , the category of the journal ( using the classification given in table [ table1 ] ) , and the founding date of the journal . a comparison of the year of foundation of a journal and its first appearance in the core set shows that the perception of merton s paper co - evolves with the newly emerging field of _ science and technology studies _ ( * st * * ) . if we order the journals according to their overall number of citations , the journal scientometrics contains most of the papers citing merton s paper of 1968 . scientometrics is not the first journal in which merton s paper is cited . papers in the journals american journal of sociology , sociology of education , annual review of information science and technology , american sociological review and acta cientifica venezolana deliver the first 4 citations in 1968 . but , the example of scientometrics represents the consolidation of a new field - the field of _ science and technology studies _ - which grows partly inside existing journals and partly due to newly emerging journals . about 30 percent of the journals in the core set belong to this field ( * st - i / e / c * ) and contain about 60 percent of all citing papers .

distribution of citing papers among journals [ table3 ]

we classified the field of _ science and technology studies _ into subfields such as information science ( general laws , quantitatively and mathematically oriented ) , evaluation ( indicator research ) , and science studies ( general laws , qualitatively oriented ) , and allocated journals to these fields ( see tables [ table1 ] , [ table3 ] ) . we are aware of the possible objection that such an allocation contains an element of arbitrariness . we also did not take into account that the profiles of the journals and their function for the scientific community partly overlap . moreover , these profiles change over time . also , the actual articles citing merton s paper might contentwise represent a different approach than expressed in the rough journal categorization .
despite these shortcomingsthe display of different fields in the recognition sphere ( set of citing papers ) of merton s paper both for the core set and for the whole set reveals interesting insights . if we examine the whole set of journals across the time scale we make two observations .looking at the entry and exit of journals into the area of perception of merton s paper we see that most of the journals are _ transient _ they appear only a few times in the _ recognition _ or _ perception sphere _ around merton s paper .this is why the core of journals containing citations is relatively small .second , the distribution of citations across fields and disciplines does not remain stable over time . in fig .[ andrea6 ] we plot the number of citing papers per journals of the core set against the time axis .next to an absolute increase we also observe a shift of attention among different fields .not surprisingly the perception of merton s paper starts in sociological journals .for instance , in 1970 the journals american journal of sociology and sociological inquiry contribute with two citations each ._ sociology _ remains a persistent discipline over the years .the eventually dominating field of _ science and technologies studies _ gains momentum since mid of the 1970 s .for instance , in 1977 the journal social studies of science contributes with three papers . in later years , in particular since the end of the 1990 s * st * * fields contribute with around 10 citations per year .if we look into the subfields of * st * * , we see a clear shift from sociological and cultural studies of science towards informetric analysis . however , one has to take into account that all these statements are based on rather small numbers and , therefore , are susceptible to random factors .only a close reading of the text of the citing papers could reveal if also the context of the citation to merton s paper changed systematically . by a random inspectionwe find papers reporting _ personal experiences _ explained with the _ matthew effect _ , discussions of the social function of the effect , or its possible quantitative validation . in the core journal set , _ philosophy _plays a rather marginal role .but , this does not mean that authors of philosophical journals are not interested in merton s work .on the contrary , an analysis of the whole journal set shows that the discipline _ philosophy _ holds the third rank ( see table [ table2 ] ) .the explanation can be found in the wide scattering of citing papers over journals . in the core journalset only the journal minerva represents the field of _ philosophy _ with 5 citing papers . in the whole journal set we find 20 more journals classified under _philosophy_. most of them appear only once . to detect the field - specific pattern of the diffusion of the perception of merton s paper we classified all 368 journals using nine macro - categories ( see table [ table3 ] ) . in fig .[ andrea7 ] , we visualize the annual disciplinary distribution of papers citing merton s paper . not surprisingly , the _ social sciences _ ( now including _ science and technology studies _ ) dominate the picture . 
however , each discipline in the natural and social sciences and also a wide variety of journals showed interest in merton s paper . for instance , in _ physics _ the first two citations to merton s paper appear in 1969 , one in the journal energie nucleaire ( paris ) with the title la documentation scientifique et technique . la notion de centre d information , and one in the journal proceedings of the royal society of london series a mathematical and physical sciences with the title some problems of growth and spread of science into developing countries . after 2004 , _ physics _ seems to pay more systematic attention to merton s paper of 1968 . in 2004 , two papers appeared , one in physical review e ( biased growth processes and the rich - get - richer principle ) and one in physics today ( could feynman have said this ? ) . the following period of persistent interest in physics journals is due to the emerging specialty _ complex networks _ inside of statistical physics . merton s paper has been recognized widely as being important and fundamental for the understanding of networks of scientific communication and collaboration as nowadays modeled by complex network models . the importance of networks becomes also visible in an experiment we performed with the _ network workbench _ , a tool developed by the group of katy börner . this tool allows to import , visualize , and analyze networks , including scientometric data from isi databases . we experimented with kleinberg s burst detection algorithm and applied the algorithm to the keyword field of our dataset . fig . [ andrea9 ] displays the keywords which are suddenly used more . according to this analysis , keyword bursts only occur since the mid-1990s . not surprisingly , due to the relevance of merton s findings for the understanding of the social - behavioral patterns behind complex structures , we find _ networks _ among the bursting terms in the period of the emergence of network science across all disciplines . if one looks at the original table of all citing papers and their journals , there seems to be an increase both in the number of fields and disciplines present and in the number of citing papers . as we saw in fig . [ andrea7 ] already , not all disciplines are present at all times . while no clear pattern of diffusion of the perception of merton s paper among disciplines and no clear transfer path are visible , we searched for another indicator to explain the spreading of interest in the work of merton . however , as shown in fig . [ andrea8 ] , the number of different disciplines present each year shows strong fluctuations , which makes it almost impossible to talk about trends .
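the burst detection step mentioned above can be sketched as follows . this is a minimal two - state version of kleinberg s batch algorithm for document streams ( a low - rate base state and a high - rate burst state , with a dynamic - programming search for the cheapest state sequence ) . it is written from the published description of the algorithm and is not the network workbench implementation actually used here ; the example counts are invented .

```python
# Minimal two-state Kleinberg burst detection for a keyword time series.
# d[t] = number of citing papers in year t carrying the keyword,
# r[t] = total number of citing papers in year t.  Example data are invented.
import math

def kleinberg_two_state(d, r, s=2.0, gamma=1.0):
    n = len(d)
    p0 = sum(d) / sum(r)               # base emission rate
    p1 = min(s * p0, 0.9999)           # elevated (burst) rate
    trans = gamma * math.log(n)        # cost of entering the burst state

    def cost(p, dt, rt):
        # negative log-likelihood of dt "hits" out of rt trials at rate p
        # (the binomial coefficient is the same for both states and is dropped)
        return -(dt * math.log(p) + (rt - dt) * math.log(1.0 - p))

    INF = float('inf')
    best = [0.0, INF]                  # Viterbi costs; the chain starts in state 0
    back = []
    for dt, rt in zip(d, r):
        c = [cost(p0, dt, rt), cost(p1, dt, rt)]
        new, ptr = [INF, INF], [0, 0]
        for j in (0, 1):               # state at this step
            for i in (0, 1):           # state at the previous step
                total = best[i] + (trans if (i, j) == (0, 1) else 0.0) + c[j]
                if total < new[j]:
                    new[j], ptr[j] = total, i
        back.append(ptr)
        best = new

    state = 0 if best[0] <= best[1] else 1
    states = [state]
    for ptr in reversed(back[1:]):     # backtrack the cheapest state sequence
        state = ptr[state]
        states.append(state)
    return list(reversed(states))      # 1 marks a burst year

years = list(range(1995, 2005))
d = [0, 1, 0, 1, 1, 4, 5, 6, 5, 4]     # invented keyword counts per year
r = [10, 12, 11, 13, 12, 14, 15, 16, 15, 14]
for y, burst in zip(years, kleinberg_two_state(d, r)):
    print(y, 'burst' if burst else '-')
```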
our goal was much more modest and much more specific at the same time . we looked at the trace of _ one _ scholar and , even more narrowly , at the trace of one of his publications . we asked what bibliometrics can add to science - historical and biographical research on small scales . one access to the disciplinary dimension of the spreading of ideas is to look into the disciplinary origin of papers . another way would be to follow the disciplinary traces of scholars in an abstract scientific landscape constructed and explored at the same time by scholars traversing it . whatever future algorithm will be at hand to map the landscapes of scholarly knowledge on larger and smaller scales , most promising is a combination between _ following the actors _ ( the producers and living _ carriers of knowledge _ ) and their _ traces _ left . when we tried to visualize this knowledge dynamics in the _ histcite _ graph we saw how complex the situation is . fig . [ andrea10 ] shows the g1 visualization of the citation network of merton s paper . let us note once more that in the _ histcite _ algorithm the g1 graph selects and visualizes papers according to their importance ( citations ) _ inside _ the set of citing papers ( local graph ) . in difference to g1 , the g2 graph displays papers according to their importance in the whole database ( see fig . [ andrea2 ] ) . in fig . [ andrea10 ] , we have used color codes and first author names in addition to the node numbers . papers in _ sociology _ and _ psychology _ journals appear as white nodes , merton s papers as black nodes , and papers in _ science and technology studies / information and documentation journals _ as ( dark / light ) grey nodes . not unexpectedly , a look into the specific community structure around merton s paper reveals almost the same journals which contain most of the citations to its papers . with the exception of a paper by newman ( node 426 ) in a _ physics _ journal and a publication by stephan in an _ economics _ journal ( node 330 ) , all nodes in this graph either belong to _ sociology _ ( or _ sociology of education and psychology _ ) or to _ science and technology studies_.
the classification exercise inside of the histcite graph reveals the emergence of new scientific fields ( as _ science and technologies studies _ in the 1970s ) and related journals .node 64 represents the first paper from a _ science and technologies studies _ journal namely social studies of science in 1975 .remember that this graph contains only a selection of all citing papers .we have seen the emergence of the new field of _ science and technology studies _ and the corresponding shift in perception of merton s paper already in fig .[ andrea6 ] .[ andrea10 ] adds concrete _ faces _ to this shift by visualizing some key papers .we also see that with the time more * st * * nodes appear .at least two mechanisms are important for the diffusion of ideas : researchers which get _ infected _ by an idea and travelling around taking the idea to new places , and the emergence of new journals which present new channels of communication and are a sign for the formation of new scientifc communities .for instance , in fig .[ andrea10 ] some authors reappear , even in the narrow selection of g1 , and not always they publish in the same scientific field .irrespectively where published , all articles citing merton s paper ( see appendix ) contribute to a better understanding of the dynamics of the science system and its impact on society .merton s ideas about ( self-)organizing processes inside the science system spread out over different disciplines and over time .merton s paper has been proven to be a landmark for the study of science . at the same time , merton s paper is a constitutive element for the formation of a community of researchers interested in science studies whose work forms the basis on which merton s paper can finally function as a landmark .the diffusion of citations to merton s key paper across journals , disciplines , and time eventually shows the persistent importantance of the idea for _ social studies of science_. this concept is the constant , stable , core knowledge element still floating around , witnessing the longevity and integrating power of scientific ideas . 2 : : zuckerman , ha , patterns of name ordering among authors of scientific papers study of social symbolism and its ambiguity , american journal of sociology .1968 ; 74 ( 3 ) : 276 - 291 3 : : cole s , cole jr , visibility and structural bases of awareness of scientific research .american sociological review .1968 ; 33 ( 3 ) : 397 - 413 5 : : merton rk , matthew effect in science . science .1968 ; 159 ( 3810 ) : 56 - 63 17 : : crane d , academic marketplace revisited study of faculty mobility using cartter ratings .american journal of sociology .1970 ; 75 ( 6 ) : 953 - 964 18 : : cole s , professional standing and reception of scientific discoveries .american journal of sociology .1970 ; 76 ( 2 ) : 286 - 306 20 : : myers cr , journal citations and scientific eminence in contemporary psychology .american psychologist .1970 ; 25 ( 11 ) : 1041 - 1048 28 : : zuckerman h , stratification in american science . sociological inquiry .1970 ; 40 ( 2 ) : 235 - 247 40 : : lodahl jb , gordon g , structure of scientific fields and functioning of university graduate departments .american sociological review .1972 ; 37 ( 1 ) : 57 - 72 53 : : hagstrom wo , competition in science .american sociological review . 
1974 ; 39 ( 1 ) : 1 - 18 54 : : allison pd , stewart ja , productivity differences among scientists evidence for accumulative advantage .american sociological review .1974 ; 39 ( 4 ) : 596 - 606 64 : : chubin de , moitra sd , content - analysis of references adjunct or alternative to citation counting .social studies of science .1975 ; 5 ( 4 ) : 423 - 441 76 : : reskin bf , scientific productivity and reward structure of science .american sociological review .1977 ; 42 ( 3 ) : 491 - 504 84 : : gilbert gn , referencing as persuasion .social studies of science . 1977 ; 7 ( 1 ) : 113 - 122 117 : : helmreich rl , spence jt , beane we , lucker gw , et al . , making it in academic psychology demographic and personality - correlates of attainment .journal of personality and social psychology . 1980; 39 ( 5 ) : 896 - 908 132 : : allison pd , long js , krauze tk , cumulative advantage and inequality in science .american sociological review .1982 ; 47 ( 5 ) : 615 - 625 156 : : walberg hj , tsai sl , matthew effects in education .american educational research journal .1983 ; 20 ( 3 ) : 359 - 373 163 : : stewart ja , achievement and ascriptive processes in the recognition of scientific articles . social forces .1983 ; 62 ( 1 ) : 166 - 189 182 : : garfield e , uses and misuses of citation frequency . current contents .1985 ; ( 43 ) : 3 - 9 199 : : macroberts mh , macroberts br , quantitative measures of communication in science a study of the formal level .social studies of science . 1986 ;16 ( 1 ) : 151 - 172 223 : : zuckerman h , citation analysis and the complex problem of intellectual influence .1987 ; 12 ( 5 - 6 ) : 329 - 338 230 : : merton rk , the matthew effect in science .2 . cumulative advantage and the symbolism of intellectual property .1988 ; 79 ( 299 ) : 606 - 623 279 : : seglen po , the skewness of science .journal of the american society for information science .1992 ; 43 ( 9 ) : 628 - 638 283 : : podolny jm , a status - based model of market competition .american journal of sociology .1993 ; 98 ( 4 ) : 829 - 872 304 : : podolny jm , market uncertainty and the social character of economic exchange .administrative science quarterly .1994 ; 39 ( 3 ) : 458 - 483 311 : : podolny jm , stuart te , a role - based ecology of technological - change . american journal of sociology .1995 ; 100 ( 5 ) : 1224 - 1260 322 : : ross ce , wu cl , education , age , and the cumulative advantage in health .journal of health and social behavior .1996 ; 37 ( 1 ) : 104 - 120 330 : : stephan pe , the economics of science. journal of economic literature .1996 ; 34 ( 3 ) : 1199 - 1235 347 : : bonitz m , bruckner e , scharnhorst a , characteristics and impact of the matthew effect for countries . scientometrics .1997 ; 40 ( 3 ) : 407 - 422 360 : : keith b , babchuk n , the quest for institutional recognition : a longitudinal analysis of scholarly productivity and academic prestige among sociology departments . social forces .1998 ; 76 ( 4 ) : 1495 - 1533 368 : : leydesdorff l , theories of citation ? scientometrics . 1998; 43 ( 1 ) : 5 - 25 461 : : owen - smith j , from separate systems to a hybrid order : accumulative advantage across public and private science at research one universities . research policy .2003 ; 32 ( 6 ) : 1081 - 1104 462 : : newman mej , the structure and function of complex networks .siam review .2003 jun ; 45 ( 2 ) : 167 - 256 brner , k. , maru , j. , goldstone , r. , _ the simultaneous evolution of author and paper networks_. 
proceedings of the national academy of sciences of the united states of america , * 101*(suppl .1)(2004)5266 - 5273 .brner , k. , sanyal , s. , vespignani , a. , _ network science_. in : cronin , b. ( ed . ) , annual review of information science and technology , vol .537 - 607 , chapter 12 , medford , nj : information today , inc./american society for information science and technology , 2007 . garfield , e. , _ historiographs , librarianship , and the history of science_. in : rawski , c.h .( ed . ) , toward a theory of librarianship : papers in honor of jesse hauk shera , n. j. : scarecrow press , 1973 , p. 380 - 402 . reprinted in : garfield , e. , essays of an information scientist , vol . 2 , 1974 - 76 , pp .136 - 150 ; and current contents , * # 38 * ,september 18 , 1974 .available at http://www.garfield.library.upenn.edu/essays/v2p136y1974-76.pdf garfield , e. , _ do nobel prize winners write citation classics? _ current contents , * # 23*(1986)3 - 8 ; reprinted in : garfield , e. , essays of an information scientist , vol . 9 , 1986 , p. 182 ; available at http://www.garfield.library.upenn.edu/essays/v9p182y1986.pdf glnzel , w. , _ bibliometrics as a research field : a course on theory and application of bibliometric indicators_. courses handout , on - line source , available at http://www.norslis.net/2004/bib_module_kul.pdf ( accessed on february 2 , 2010 ) , 2003 .leydesdorff , l. , _ what can heterogeneity add to the scientometric map? steps towards algorithmic historiography_. in : akrich , m . ,barthe , y . , muniesa , f. , mustar , p. ( eds . ) , festschrift for michel callon s 65th birthday .paris : cole nationale suprieure des mines , 2010 ( in press ) .preprint available at arxiv.org ( arxiv:1002.0532v1 ) .lucio - arias , d. , leydesdorff , l. , _ main - path analysis and path - dependent transitions in histcite - based historiograms_. journal of the american society for information science and technology * 59*(12)(2008)1948 - 1962 .merton , r.k ., _ this week s citation classics : merton , r. k. , social theory and social structure .new york : free press , 1949 .423 pp . [ columbia university , new york , ny ] _ current contents * 21*(1980)285 , available at www.garfield.library.upenn.edu/classics1980/a1980js04600001.pdf scharnhorst , a. , _ constructing knowledge landscapes within the framework of geometrically oriented evolutionary theories_. in : matthies , m. , malchow , h. , kriz , j.(eds . ) , integrative systems approaches to natural and social sciences systems science 2000 .berlin : springer , 2001 , pp . 505 - 515 .wouters , p. , _ the citation culture_. amsterdam : university of amsterdam amsterdam , phd thesis ( unpublished ) ,1999.(available http://garfield.library.upenn.edu/wouters/wouters.pdf , accessed february 2 , 2010 ) ziman , j.m . , _ some problems of growth and spread of science into developing countries_. proceedings of the royal society of london series a mathematical and physical sciences * 311*(no .1506)(1969)349 - 369 .zuckerman , h. , _ the matthew effect writ large and larger : a study in sociological semantics_. in : elkana , y. ( ed . ) , concepts and social order : essays on the works of robert k. merton .budapest : central university press , 2010 ( in press ) .
scientometrics is the field of quantitative studies of scholarly activity . it has been used for systematic studies of the fundamentals of scholarly practice as well as for evaluation purposes . although advocated from the very beginning , the use of scientometrics as an additional method for science history is still underexplored . in this paper we show how a scientometric analysis can be used to shed light on the reception history of certain outstanding scholars . as a case , we look into the citation patterns of a specific paper by the american sociologist robert k. merton .
articulatory speech synthesisers generate sound based on the shape of the vocal tract .vibration of the vocal folds under the expiratory air flow is the source in the system ; and the vocal tract , consisting of the larynx , pharynx , oral and nasal cavities , constitutes a filter where sound frequencies are shaped .this creates a number of resonant peaks in the spectrum , known as formants .the first and second formants ( and ) are used to distinguish the vowel phonemes , where the value of and is controlled by the height and backness - frontness of the tongue body respectively .traditionally , the acoustic system is approximated by a one - dimensional wave equation that associates the slow varying cross - sectional area of a rigid tube to the pressure wave for a low - frequency sound .however , complex shape of the vocal tract , with its side branches and asymmetry , has motivated higher dimensional acoustic analysis .the 3d analysis methods were shown to produce a better representation of the sound spectrum at the price of higher computational cost .however , some studies suggested that the spectrum yielded by 1d acoustic analysis matches closely that of the 3d analysis for frequencies less than 7khz . suggested that the discrepancy between the resonance frequencies computed by 3d analysis of the vocal tract and the formant frequencies of the recorded audio is a result of insufficient boundary conditions in the wave equation especially in case of the open lips and/or velar port . in this paper , we follow in calculating the helmholtz resonances of our vocal tract geometries using 3d fem analysis .the resonances are then compared to the formant frequencies obtained from the 1d acoustic synthesizer proposed by and those of the recorded audio .we use static mri images acquired with a siemens magnetom avanto 1.5 t scanner .a 12-element head matrix coil , and a 4-element neck matrix coil , allow for the generalize auto - calibrating partially parallel acquisition ( grappa ) acceleration technique .one speaker , a 26-year - old male , was imaged while he uttered four sustained finnish vowels .the mri data covers the vocal and nasal tracts , from the lips and nostrils to the beginning of the trachea , in 44 sagittal slices , with an in - plane resolution of 1.9 mm .figure [ fig : vt ] shows the vt surface geometries extracted from mri data using an automatized segmentation method . for our 1d acoustic analysis, we describe the vocal tract by an area function where is the distance from the glottis on the tube axis and denotes the time .we take the similar notion of in defining the variables and as the scaled versions of volume - velocity and air density respectively . is the mass density of the air and is the speed of sound .we solve for and in the tube using derivations of the linearised navier - stokes equation ( [ eq : navier ] ) and the equation of continuity ( [ eq : cont ] ) subject to the boundary conditions described in equation [ eq : boundarycond ] : [ eq1 ] where and with the wall loss coefficient and ; and is the source volume velocity at the glottis .we couple the vocal tract to a two - mass glottal model and solve equation [ eq1 ] in the frequency domain using a digital ladder filter defined based on the cross - sectional areas of 20 segments of the vocal tract .we refer to for full details of the implementation . 
for our 3d acoustic analysis , we calculate the vowel formants directly from the wave equation by finding the eigenvalues , , and their corresponding velocity potential eigenfunctions , , from the helmholtz resonance problem : [ eq2 ] where is the air column volume and is its surface , including the boundary at the mouth opening ( ) , at the air - tissue interface ( ) and at a virtual plane above the glottis ( ) ; and denotes the exterior normal derivative . the value of regulates the energy dissipation through the tissue walls , and the case corresponds to hard , reflecting boundaries . we calculate the numerical solution of equation [ eq2 ] by the finite element method ( fem ) using piecewise linear shape functions and approximately tetrahedral elements . the imaginary parts of the first two smallest eigenvalues and give the first two helmholtz resonances of the vocal tract . we refer to and for details of implementation . in order to distinguish the effects of dimensionality ( 1d vs. 3d ) from the effects of different boundary conditions in equations [ eq1 ] and [ eq2 ] , we also compute the webster resonances by interpreting equation [ eq2 ] in one dimension ; the boundary conditions of the resulting problem are $ \lambda\phi_{\lambda}-c\,\phi_{\lambda}^{'}=0 $ at $ s=0 $ and $ \phi_{\lambda}=0 $ at $ s = l $ [ eq3 ] where denotes the sound speed correction factor that depends on the curvature of the vocal tract ; a(x ) is the area function and s is the implicit parameter to , a , w and . we refer to for details of implementation and parameter values . figure [ fig : twoformants ] shows the first two formant / resonance frequencies , computed for the four finnish vowels . webster formants ( w ) are calculated by solving equation [ eq1 ] , as suggested by . helmholtz ( h ) and webster resonances ( w ) are obtained from equations [ eq2 ] and [ eq3 ] , respectively . s denotes the scaled version of w . the figure also includes the formant frequencies ( a ) computed from audio signals recorded in an anechoic chamber . the values are averaged over 10 repetitions of each vowel utterance . as we can see in figure [ fig : twoformants ] , the resonance values ( h , w and s ) lie close together for the vowels /i/ and /e/ , with s being closer to h , as expected . for the vowels /o/ and /a/ there is more difference in the first resonances of h and w ; for /o/ , although s lies closer to h , its first resonance is surprisingly low . for all of the vowels in figure [ fig : twoformants ] , the second formant of the audio is lower than the computed results . the vowel /i/ is expected to be very sensitive to the glottal end position , which , in turn , suggests the significance of adequate mri resolution and accurate geometry processing for its spectral analysis . interestingly , the webster formants ( w ) remain closer to the audio formants ( a ) than any of the resonances in the case of /i/ , /e/ , and /a/ . for /o/ the distance to a is almost equal for w and h , with both having similar values for the second formant / resonance ; however , the first h is lower , and the first w is higher , than the first a . the time - domain webster analysis accounts for the vt wall - vibration phenomenon that is missing in the resonance analysis . this is done by substituting , from equation 5.3 , with : where is the slow - varying circumference and is the wall displacement governed by a damped mass - spring system .
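for readers who want to experiment with the 1d picture , the following numpy sketch estimates the lowest resonances of a tube described by an area function , using a lossless plane - wave chain ( transmission - line ) model with an ideally open mouth and no wall losses , radiation load or glottal model . this is a generic textbook approximation , not the webster or fem solvers used above , and the example area function is invented rather than taken from the mri data .

```python
# Sketch: lowest resonances of a concatenated-tube vocal tract model.
# Each segment is a lossless cylindrical pipe; the lips are an ideal open end
# (zero pressure).  Area values below are invented for illustration.
import numpy as np

rho, c = 1.2, 350.0                      # air density [kg/m^3], sound speed [m/s]
areas = np.array([2.5, 1.5, 0.8, 0.6, 1.0, 2.0, 4.0, 5.0, 3.0, 1.5]) * 1e-4  # m^2
seg_len = 0.17 / len(areas)              # a 17 cm tract split into equal segments

def transfer_gain(f):
    """|U_lips / U_glottis| for the chain of tube segments at frequency f."""
    k = 2.0 * np.pi * f / c
    M = np.eye(2, dtype=complex)
    for A in areas:                      # chain matrices from glottis to lips
        Zc = rho * c / A                 # characteristic acoustic impedance
        seg = np.array([[np.cos(k * seg_len), 1j * Zc * np.sin(k * seg_len)],
                        [1j * np.sin(k * seg_len) / Zc, np.cos(k * seg_len)]])
        M = M @ seg
    return 1.0 / abs(M[1, 1])            # with p = 0 at the lips, U_l/U_g = 1/D

freqs = np.arange(50.0, 4000.0, 5.0)
gain = np.array([transfer_gain(f) for f in freqs])
peaks = [freqs[i] for i in range(1, len(gain) - 1)
         if gain[i] > gain[i - 1] and gain[i] > gain[i + 1]]
print('estimated resonances (Hz):', [round(p) for p in peaks[:3]])
```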
setting to zero , the webster formants move along the arrows in figure [ fig : twoformants ] , reducing in their first formants .this moves the w closer to the h as both acoustical models now ignore the wall vibration .meanwhile , w moves away from the audio formants in the case of /i/ ,/e/ , and /a/. the distance between w and w remains large , despite the fact that both acoustical models solve the webster equation .the results imply that 3d helmholtz analysis is more realistic than its 1d webster version , as expected .overall , our experiments suggest that the time - domain interpretation of acoustic equations provides more realistic results even if it requires reducing from 3d to 1d. this may be partially due to the fact that time - domain analysis allows for more complexity in the acoustical model such as inclusion of lip radiation and wall loss .certainly unknown parameters always remain ( such as those involved in glottal flow , coupling between fluid mechanics and acoustical analysis , etc . ) , which are estimated indirectly , based on observed behaviour in simulations .it should be noted that our experiments are solely based on data from a single speaker . a larger database inclusive of more speakers from different genders and languages is needed in order to confirm the validity / generality of our findings .aalto d , et al .algorithmic surface extraction from mri data - modelling the human vocal tract .proceeding of 6th international joint conference on biomedical engineering systems and technologies ; barcelona , spain .takemoto h , mokhtari p , kitamura t. 2014 .comparison of vocal tract transfer functions calculated using one - dimensional and three - dimensional acoustic simulation methods .proceeding of 15th annual conference of the international speech communication association ; singapore , singapore .
a state - of - the - art 1d acoustic synthesizer has been previously developed , and coupled to speaker - specific biomechanical models of oropharynx in artisynth . as expected , the formant frequencies of the synthesized vowel sounds were shown to be different from those of the recorded audio . such discrepancy was hypothesized to be due to the simplified geometry of the vocal tract model as well as the one dimensional implementation of navier - stokes equations . in this paper , we calculate helmholtz resonances of our vocal tract geometries using 3d finite element method ( fem ) , and compare them with the formant frequencies obtained from the 1d method and audio . we hope such comparison helps with clarifying the limitations of our current models and/or speech synthesizer .
quantum entanglement provides a fundamental potential resource for communication and information processing and is one of the key quantitative notions of the intriguing field of quantum information theory and quantum computation .a quantum superposition state decays into a classical , statistical mixture of states through a decoherence process which is caused by entangling interactions between the system and its environment .superposition of quantum states however , are very fragile and easily destroyed by the decoherence processes .such uncontrollable influences cause noise in the communication or errors in the outcome of a computation , and thus reduce the advantages of quantum information methods . however , in a more realistic and practical situation, decoherence caused by an external environment is inevitable .therefore , influence of an external environmental system on the entanglement can not be ignored .a novel research has been carried out to study the quantum communication channels .macchiavello and palma have developed the theory of quantum channels to encompass memory effects . in real - world applicationsthe assumption of having uncorrelated noise channels can not be fully justified . however , quantum computing in the presence of noise is possible with the use of decoherence free subspaces and the quantum error correction .application of mathematical physics to economics has seen a recent development in the form of quantum game theory .two - player quantum games have attracted a lot of interest in recent years [ 5 - 7 ] .a number of authors have investigated the quantum prisoner s dilemma game [ 8 - 10 ] .a detailed description on quantum game theory can be found in references [ 11 - 16 ] .there have been remarkable advances in the experimental realization of quantum games such as prisoner s dilemma .the prisoner s dilemma game is a widely known example in classical game theory . the quantum version of the prisoner sdilemma has been experimentally demonstrated using a nuclear magnetic resonance ( nmr ) quantum computer .recently , prevedel et al . have experimentally demonstrated the application of a measurement - based protocol .they realized a quantum version of the prisoner s dilemma game based on the entangled photonic cluster states .it was the first realization of a quantum game in the context of one - way quantum computing .studies concerning the quantum games in the presence of decoherence and correlated noise have produced interesting results .chen et al . shown that in the case of two - player prisoner s dilemma game , the nash equilibria are not changed by the effect of decoherence in a maximally entangled case .nawaz and toor have studied quantum games under the effect of correlated noise by taking a particular example of the phase - damping channel .they have shown that the quantum player outperforms the classical players for all values of the decoherence parameter .they have also shown that for maximum correlation the effects of decoherence diminish and it behaves as a noiseless game .recently , we have investigated different quantum games under different noise models and found interesting results .more recently , gawron et al . 
have studied the noise effects in quantum magic squares game .they have shown that the probability of success can be used to determine characteristics of quantum channels .investigation of multiplayer quantum games in a multi - qubit system could be of much interest and significance .in the recent years , quantum games with more than two players were investigated [ 24 - 27 ] .such games can exhibit certain forms of pure quantum equilibrium that have no analog in classical games , or even in two - player quantum games .recently , cao et al . have investigated the effect of quantum noise on a multiplayer prisoner s dilemma quantum game .they have shown that in a maximally entangled case a special nash equilibrium appears for a specific range of the quantum noise parameter ( the decoherence parameter ) . however ,yet no attention has been given to the multiplayer quantum games under the effect of correlated noise , which is the main focus of this paper .in this paper , we investigate three - player prisoner s dilemma quantum game under the effect of decoherence and correlated noise in a three - qubit system .we have considered a dephasing channel parameterized by the memory factor which measures the degree of correlations . by exploiting the initial state and measurement basis entanglement parameters , ] we study the role of decoherence parameter ] on the three - player prisoner s dilemma quantum game .here , means that the measurement basis are unentangled and means that it is maximally entangled .similarly , means that the game is initially unentangled and means that it is maximally entangled .whereas the lower and upper limits of correspond to a fully coherent and fully decohered system , respectively .similarly , the lower and upper limits of correspond to a memoryless and maximum memory ( degree of correlation ) cases , respectively .it is seen that in contradiction to the two - player prisoner s dilemma quantum game , in the three - player game , the quantum player can outperform the classical players for all values of the decoherence parameter for the maximum degree of correlations ( i.e. memory parameter ) . in comparison to the two - player situation, the three - player game does not become noiseless and quantum player still remains superior over the classical ones for an entire range of the decoherence parameter , in memoryless case i.e. .it is shown that the payoffs reduction due to decoherence is controlled by the memory parameter throughout the course of the game .it is also shown that the nash equilibrium of the game does not change under the correlated noise in contradiction to the case of decoherence effects as investigated by cao et al . 
.properties of the two - player quantum games have been discussed extensively [ 11 - 13 , 29 ] , however , not much attention has been given to the multiplayer quantum games .study of the multiplayer games may exhibit interesting results in comparison to the two - player games .the three - player prisoners dilemma is similar to the two - player situation except that alice , bob and a third player charlie join the game .the three players are arrested under the suspicion of robbing a bank .similar to two - player case , they are interrogated in separate cells without communicating with each other .the two possible moves for each prisoner are , to cooperate or to defect the payoff table for the three - player prisoner s dilemma is shown in table 1 .the game is symmetric for the three players , and the strategy dominates the strategy for all of them .since the selfish players prefer to choose as the optimal strategy , the unique nash equilibrium is ( ) with payoffs ( ) .this is a pareto inferior outcome , since ( ) with payoffs ( ) would be better for all the three players .this situation is the very catch of the dilemma and is similar to the two - player version of this game .the dilemma of this game can be resolved in its quantum version .[ 25 ] investigated the three - player quantum prisoner s dilemma game with a certain strategic space .they found a nash equilibrium that can remove the dilemma in the classical game when the game s state is maximally entangled .this particular nash equilibrium remains to be a nash equilibrium even for the non - maximally entangled cases .however , their calculations for the expected payoffs of the players comprise product measurement basis for the arbiter of the game . here in our modelwe use the entangled measurement basis for the arbiter of the game to perform measurement .in addition , we include the effect of decoherence and correlated noise in the three - players settings .quantum information is encoded in qubits during its transmission from one party to another and requires a communication channel . in a realistic situation ,the qubits have a nontrivial dynamics during transmission because of their interaction with the environment .therefore , bob may receive a set of distorted qubits because of the disturbing action of the channel .studies on quantum channels have attracted a lot of attention in the recent years .early work in this direction was devoted mainly , to memoryless channels for which consecutive signal transmissions through the channel are not correlated . in the correlated channels ( channels with the memory ) ,the noise acts on consecutive uses of channels .we consider here the noise model based on the time correlated dephasing channel . in the operator sum representation ,the dephasing process can be expressed as the kraus operators , is the identity operator is the pauli matrix and is the decoherence parameter .let qubits are allowed to pass through such a channel then equation ( 1 ) becomes flitney2 if the noise is correlated with the memory of degree then the action of the channel on the two consecutive qubits is given by the kraus operators }\sigma _ { i}\otimes \sigma _ { j}\]]where and are usual pauli matrices with indices and run from to and is the memory parameter the above expression means that with the probability the noise is uncorrelated whereas with the probability the noise is correlated .physically the parameter is determined by the relaxation time of the channel when a qubit passes through it . 
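to make the channel description concrete , the sketch below builds the kraus operators for two consecutive uses of the dephasing channel with memory and checks that they form a trace - preserving set ; the assignment of the probabilities ( 1 - p to the identity and p to the pauli - z operator ) and the numerical values of p and of the memory parameter are assumptions made only for illustration .

```python
import numpy as np

# two consecutive uses of a dephasing channel with memory mu (assumption:
# only the identity and sigma_z enter, with probabilities p0 = 1 - p, p3 = p).
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
paulis = {0: I2, 3: sz}

def kraus_two_uses(p, mu):
    probs = {0: 1.0 - p, 3: p}
    ops = []
    for i, si in paulis.items():
        for j, sj in paulis.items():
            # weight p_i * [(1 - mu) p_j + mu * delta_ij], as in the text
            w = probs[i] * ((1.0 - mu) * probs[j] + mu * (i == j))
            ops.append(np.sqrt(w) * np.kron(si, sj))
    return ops

ops = kraus_two_uses(p=0.3, mu=0.6)
completeness = sum(K.conj().T @ K for K in ops)
print(np.allclose(completeness, np.eye(4)))  # True: the channel is trace preserving
```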
in order to remove correlations, one can wait until the channel has relaxed to its original state before sending the next qubit .however , this may lower the rate of information transfer .the kraus operators for the three qubit system can be written as [(1-\mu ) p_{j}+\mu \delta _{ jk}]p_{k}}\sigma ^{i}\otimes \sigma ^{j}\otimes \sigma ^{k}\]]where are or the memory parameter is contained in the probabilities which determines the probability of the errors recalling that is the probability of independent errors on two consecutive qubits and is the probability of identical errors .the sum of probabilities of all types of errors on the three qubits add to unity as we expect,=1\]]it is necessary to consider the performance of the channel for arbitrary values of the to reach a compromise between various factors which determine the final rate of information transfer .thus in passing through the channel any two consecutive qubits undergo random independent ( uncorrelated ) errors with the probability ( and identical ( correlated ) errors with the probability . this should be the case if the channel has a memory depending on its relaxation time and if we stream the qubits through it .in our model , alice , bob and charlie , each uses individual channels to communicate with the arbiter of the game .the two uses of the channel i.e. the first passage ( from the arbiter ) and the second passage ( back to the arbiter ) are correlated as depicted in figure 1 .we consider that the initial entangled state is prepared by the arbiter and passed on to the players through a quantum correlated dephasing channel ( qcdc ) . on receiving the quantum state , the players apply their local operators ( strategies ) and return it back to the arbiter via qcdc .then , the arbiter performs the measurement and announces their payoffs .let s consider that the three players alice , bob and charlie be given the following initial quantum state: corresponds to the entanglement of the initial state .the players can locally manipulate their individual qubits .the strategies of the players can be represented by the unitary operator of the form . or and , are the unitary operators defined as and application of the local operators of the players transforms the initial state given in equation ( 7 ) to is the density matrix for the quantum state .the operators used by the arbiter to determine the payoffs for alice , bob and charlie are , or and and are elements of the payoff matrix as given in table 1 . since quantum mechanics is a fundamentally probabilistic theory , the strategic notion of the payoff is the expected payoff .the players after their actions , forward their qubits to the arbiter of the game for the final projective measurement in the computational basis ( see equation ( [ mbasis ] ) ) .the arbiter of the game finally determines their payoffs ( see figure 1 ) .the payoffs for the players can be obtained as the mean values of the payoff operators as tr represents the trace of the matrix . 
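the following sketch illustrates the " expected payoff as a trace of a payoff operator " step for a noiseless three - player game . it uses a standard ewl - type entangling operator , computational - basis payoff projectors and hypothetical payoff entries ; the entangled measurement basis and the correlated dephasing channel of the actual model are omitted , so this is not the authors' protocol but only a minimal template for it .

```python
import numpy as np

# minimal ewl-style three-player game: payoff_k = tr(rho * payoff operator_k).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
XXX = np.kron(sx, np.kron(sx, sx))

def J(gamma):                                  # assumed entangling operator
    return np.cos(gamma / 2) * np.eye(8) + 1j * np.sin(gamma / 2) * XXX

def U(theta, phi):                             # local strategy of one player
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

# hypothetical payoff entries for outcomes |000>, |011>, |111> as (alice, bob, charlie)
table = {0: (3, 3, 3), 3: (5, 0, 5), 7: (1, 1, 1)}

def payoffs(gamma, U1, U2, U3):
    psi0 = np.zeros(8, dtype=complex); psi0[0] = 1.0
    psi = J(gamma).conj().T @ np.kron(U1, np.kron(U2, U3)) @ J(gamma) @ psi0
    rho = np.outer(psi, psi.conj())
    out = np.zeros(3)
    for idx, pay in table.items():
        proj = np.zeros((8, 8)); proj[idx, idx] = 1.0   # computational-basis projector
        out += np.real(np.trace(rho @ proj)) * np.array(pay)
    return out

print(payoffs(np.pi / 2, U(0, 0), U(0, 0), U(np.pi, 0)))
```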
using equations ( 5 ) to ( 13 ), the payoffs for the three players can be obtained as \notag \\ & & + s_{1}s_{2}s_{3}[\eta _ { 2}\_{111}^{k}-(\_{111}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\beta _ { 1}+\beta _ { 2}+\beta _ { 3 } ) ] \notag \\ & & + c_{1}c_{2}s_{3}[\eta _ { 1}\_{110}^{k}+(\_{110}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\alpha _ { 1}+\alpha _ { 2}-\beta _ { 3 } ) ] \notag \\ & & + s_{1}s_{2}c_{3}[\eta _ { 2}\_{110}^{k}-(\_{110}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\beta _ { 1}+\beta _ { 2}-\alpha _ { 3 } ) ] \notag \\ & & + s_{1}c_{2}c_{3}[\eta _ { 1}\_{011}^{k}+(\_{011}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\alpha _ { 2}+\alpha _ { 3}-\beta _ { 1 } ) ] \notag \\ & & + c_{1}s_{2}s_{3}[\eta _ { 2}\_{011}^{k}-(\_{011}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\beta _ { 2}+\beta _ { 3}-\alpha _ { 1 } ) ] \notag \\ & & + s_{1}c_{2}s_{3}[\eta _ { 1}\_{010}^{k}+(\_{010}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\beta _ { 1}+\beta _ { 3}-\alpha _ { 2 } ) ] \notag \\ & & + c_{1}s_{2}c_{3}[\eta _ { 2}\_{010}^{k}-(\_{010}^{k})\mu _ { p}^{(1)}\mu _ { p}^{(2)}\xi \cos 2(\alpha _ { 1}+\alpha _ { 3}-\beta _ { 2 } ) ] \notag \\ & & + \frac{\mu _ { p}^{(1)}}{8}(\cos ^{2}(\delta /2)-\sin^{2}(\delta /2))[\_{111}^{k}-\_{110}^{k}-\_{101}^{k}+\_{100}^{k}]\times \notag \\ & & \sin ( \gamma ) \sin ( \theta _ { 1})\sin ( \theta _ { 2})\sin ( \theta _ { 3})\cos ( \alpha _ { 1}+\alpha _ { 2}+\alpha _ { 3}-\beta _ { 1}-\beta _ { 2}-\beta _ { 3 } ) \notag \\ & & + [ [ \_{111}^{k}]\sin ( \delta ) \sin ( \theta _ { 1})\sin ( \theta _ { 2})\sin ( \theta _ { 2})\cos ( \alpha _ { 1}+\alpha _ { 2}+\alpha _ { 3}-\beta _ { 1}-\beta _ { 2}-\beta _ { 3 } ) \notag \\ & & + [ \_{001}^{k}]\sin ( \delta ) \sin ( \theta _ { 1})\sin ( \theta _ { 2})\sin ( \theta _ { 2})\cos ( \alpha _ { 1}+\alpha _ { 2}-\alpha _ { 3}+\beta _ { 1}+\beta _ { 2}-\beta _ { 3 } ) \notag \\ & & + [ \_{101}^{k}]\sin ( \delta ) \sin ( \theta _ { 1})\sin ( \theta _ { 2})\sin ( \theta _ { 2})\cos ( \alpha _ { 1}-\alpha _ { 2}+\alpha _ { 3}+\beta _ { 1}-\beta _ { 2}+\beta _ { 3 } ) \notag \\ & & + [ \_{011}^{k}]\sin ( \delta ) \sin ( \theta _ { 1})\sin ( \theta _ { 2})\sin ( \theta _ { 2})\cos ( \alpha _ { 1}-\alpha _ { 2}-\alpha _ { 3}+\beta _ { 1}-\beta _ { 2}-\beta _ { 3})]\times \notag \\ & & [ \frac{\mu _ { p}^{(2)}}{8}(\cos ^{2}(\gamma /2)-\sin ^{2}(\gamma /2))]\end{aligned}\]]where or .the payoffs for the three players can be found by substituting the appropriate values for into equation ( 14 ) .elements of the classical payoff matrix for the prisoner s dilemma game are given in table 1 .the payoff matrix under decoherence can be obtained by setting i.e. by setting in equation ( 15 ) .it is important to mention that for and we mean and unless otherwise specified .our results are consistent with ref .[ 25 , 27 ] and can be verified from equation ( 14 ) when all the three players resort to their nash equilibrium strategies .it can be seen that the decoherence causes a reduction in the payoffs of the players in the memoryless case ( see equation ( 14 ) ) .we consider here that alice and bob are restricted to play classical strategies , i.e. , , whereas charlie is allowed to play the quantum strategies as well .it is shown that the quantum player outperforms the classical players for all values of the decoherence parameter for an entire range of the memory parameter . 
under these circumstances, it is seen that in contradiction to the two - player prisoner s dilemma quantum game , for maximum degree of correlations the effect of decoherence survives and it does not behave as a noiseless game .it can be seen that the memory compensates the payoffs reduction due to decoherence .further more , it is shown that the memory has no effect on the nash equilibrium of the game .alice s best strategy ( and remains her best strategy throughout the course of the game .this implies that the correlated noise has no effect on the nash equilibrium of the game .to analyze the effects of correlated noise ( memory ) and decoherence on the dynamics of the three - player prisoner s dilemma quantum game .we consider the restricted game scenario where alice and bob are allowed to play the classical strategies , i.e. , , whereas charlie is allowed to play the quantum strategies . in figure 2, we have plotted the players payoffs as a function of the decoherence parameter for the dephasing channel .it is seen that the quantum player out scores the classical players for all values of the decoherence parameter for the memoryless ( case .it is shown that even for a maximum degree of memory i.e. the quantum player can outperform the classical players , which is in contradiction to the two - player prisoner s dilemmaquantum game .in addition , the decoherence effects persist for maximum correlation and it does not behave as a noiseless game , contrary to the two - player case . in figure 3 , we have plotted payoffs of the classical and the quantum players as a function of the memory parameter for and respectively . it is seen that memory compensates the payoffs reduction due to decoherence . in figures 4 and 5 ,we have plotted alice s payoff as a function of her strategies and for and respectively .it can be seen that the memory has no effect on the nash equilibrium of the game .it is evident from figures 4 and 5 that the best strategy for alice is and .it remains her best strategy for the full range of the decoherence parameter and the memory parameter throughout the course of the game .therefore , it can be inferred that correlated noise has no effect on the nash equilibrium of the game . in comparison to the investigations of cao , it is shown that the new nash equilibrium , appearing for a specific range of the decoherence parameter disappears under the effect of correlated noise . as it can be seenthat for the entire range of the decoherence parameter and the memory parameter the nash equilibrium of the game does not change ( see figures 4 and 5 ) . further more , it can also be seen that the payoffs of the players are increased with the addition of the correlated noise as can be seen from figures 4 and 5 respectively , for the entire ranges of the decoherence and the memory parameters .we present a quantization scheme for the three - player prisoner s dilemma game under the effect of decoherence and correlated noise .we study the effects of decoherence and correlated noise on the game dynamics .we consider a restricted game situation , where alice and bob are restricted to play the classical strategies , i.e. 
, , however charlie is allowed to play the quantum strategies as well . it is shown that the quantum player is always better off for all values of the decoherence parameter and for increasing values of the memory parameter . it is seen that , for the maximum degree of correlations , the effect of decoherence does not vanish , in contrast to the two - player prisoner s dilemma quantum game . the three - player game does not become a noiseless game , which is in contradiction to the two - player case . it is also seen that for the maximum degree of memory , i.e. , the quantum player can outscore the classical players for the entire range of the decoherence parameter . the payoff reduction due to decoherence is controlled by the memory parameter throughout the course of the game . furthermore , it is shown that the memory has no effect on the nash equilibrium of the game . figures captions **figure 1**. schematic diagram of the model . **figure 2**. players payoffs as a function of the decoherence parameter for the dephasing channel are plotted for the quantum prisoner s dilemma game , with memory parameter ( solid lines ) , ( dotted lines ) . are payoffs of the classical players ( alice / bob ) while represents the payoff of the quantum player ( charlie ) . the other parameters are and are the optimal strategies of charlie . **figure 3**. payoffs of the classical players ( alice / bob ) and the quantum player ( charlie ) are plotted as a function of the memory parameter . and are payoffs of the classical players for values of the decoherence parameter and respectively , and and are payoffs of the quantum player for and respectively . the other parameters are and are the optimal strategies of charlie . **figure 4**. alice s payoff is plotted as a function of her strategies and with and . **figure 5**. alice s payoff is plotted as a function of her strategies and with and . table caption **table 1**. the payoff matrix for the three - player prisoner s dilemma game , where the first number in the parenthesis denotes the payoff of alice , the second number denotes the payoff of bob and the third number denotes the payoff of charlie .
we study the three - player prisoner s dilemma game under the effect of decoherence and correlated noise . it is seen that the quantum player is always better off than the classical players . it is also seen that the game s nash equilibrium does not change in the presence of correlated noise , in contrast to the effect of decoherence in the multiplayer case . furthermore , it is shown that for maximum correlation the game does not behave as a noiseless game and that the quantum player is still better off for all values of the decoherence parameter , which is not possible in the two - player case . in addition , the payoff reduction due to decoherence is controlled by the correlated noise throughout the course of the game . keywords : prisoner s dilemma ; three - player ; decoherence ; correlated noise ; dephasing channel
diffusion processes governed by stochastic differential equations ( hereafter sde ) are widely used in describing the phenomenon of random fluctuations over time , and even become indispensable for analyzing high - frequency data ; see , for example , mykland and zhang .practical application of diffusion models calls for statistical inference based on discretely monitored data .the literature has seen a wide spectrum of asymptotically efficient estimation methods , for example , those based on various contrast functions proposed in yoshida , kessler , kessler and srensen and the references given in srensen .taking the efficiency , feasibility and generality into account , maximum - likelihood estimation ( hereafter mle ) can be a choice among others .however , for the increasingly complex real - world dynamics , likelihood functions ( transition densities ) are generally not known in closed - form and thus involve significant challenges in valuation .this leads to various methods of approximation and the resulting approximate mle .the focus of this paper is to propose a widely applicable closed - form asymptotic expansion for transition density and thus to apply it in approximate mle for multivariate diffusion process . to approximate likelihood functions , yoshida proposed to discretize continuous likelihood functions ( see , e.g. , basawa and prakasa rao ) ; many others focused on direct approximation of likelihood functions ( transition densities ) for discretely monitored data , see surveys in , for example , phillips and yu , jensen and poulsen , hurn , jeisman and lindsay and the references therein .in particular , among various numerical methods , lo proposed to employ a numerical solution of kolmogorov equation for transition density ; pedersen , brandt and santa - clara , durham and gallant , stramer and yan , beskos and roberts , beskos et al . , beskos , papaspiliopoulos and roberts and elerian , chib and shephard advocated the application of various monte carlo simulation methods ; yu and phillips developed an exact gaussian method for models with a linear drift function ; jensen and poulsen resorted to the techniques of binomial trees . since all these numerical methods are computationally demanding , real - world implementation has necessitated the development of analytical methods for efficiently approximating transition density .an adhoc approach is to approximate the model by discretization , for example , the euler scheme , and then use the transition density of the discretized model .elerian refined such an approximation via the second order milstein scheme .kessler and uchida and yoshida employed a more sophisticated normal - distribution - based approximation via higher order expansions of the mean and variance .for approximate mle of diffusions , dacunha - castelle and florens - zmirou is one of the earliest attempts to apply the idea of small - time expansion of transition densities , which in principle can be made arbitrarily accurate .however , their method relies on implicit representation of moments of brownian bridge functionals , and thus requires monte carlo simulation in implementation .a milestone is the ground - breaking work of at - sahalia , which established the theory of hermite - polynomial - based analytical expansion for transition density of diffusion models and the corresponding approximate mle . 
along the line of at - sahalia ,a number of substantial refinements and applications emerged in the literature of likelihood - based statistical inference ( see surveys in at - sahalia ) ; see , for example , bakshi and ju , bakshi , ju and ou - yang , at - sahalia and mykland , at - sahalia and kimmel , li , egorov , li and xu , schaumburg , at - sahalia and yu , yu , filipovi , mayerhofer and schneider , tang and chen , xiu and chang and chen . starting from the celebrated edgeworth expansion for distribution of standardized summation of independently identically distributed random variables ( see , e.g. , chapter xvi in feller , chapter 2 in hall and chapter 5 in mccullagh ) , asymptotic expansions have become powerful tools for statistics , econometrics and many other disciplines in science and technology .taking dependence of random variables into account , mykland established the theory , calculation and various statistical applications of martingale expansion , which is further developed in yoshida .having an analogy with these edgeworth - type expansions and motivated by mle for diffusion processes , i propose a new small - time asymptotic expansion of transition density for multivariate diffusions based on the theory of watanabe and yoshida .however , in contrast to the traditional edgeworth expansions , our expansion does not require the knowledge of generally implicit moments , cumulants or characteristic function of the underlying variable , and thus it is applicable to a wide range of diffusion processes .moreover , in analogy to the verification of validity given in , for example , bhattacharya and ghosh , mykland and yoshida for edgeworth type expansions , the uniform convergence rate ( with respect to various parameters ) of our density expansion is proved under some sufficient conditions on the drift and diffusion coefficients of the underlying diffusion using the theory of watanabe and yoshida .consequently , the approximate mle converges to the true one , and thus inherits its asymptotic properties .such results are further demonstrated through numerical tests and monte carlo simulations for some representative examples . in comparison to the expansion proposed by at - sahalia ,our method is able to bypass the challenge resulting from the discussion of reducibility , the explicity of the _ lamperti _ transform ( see , e.g. , section 5.2 in karatzas and shreve ) and its inversion , as well as the iterated equations for expressing correction terms , which in general lead to multidimensional integrals ; see bakshi , ju and ou - yang .thus it renders an algorithm for practically obtaining a closed - form expansion ( without integrals and implicit transforms ) for transition density up to any arbitrary order , which serves as a widely applicable tool for approximate mle . 
even after the _ lamperti _ transform, our expansion employs a completely different nature comparing with those proposed in at - sahalia , which hinge on expansions in an orthogonal basis consisting of hermite polynomials and expansions of each coefficient expressed by an expectation of a smooth functional of the transformed variable via an iterated dynkin formula ; see section 4 in at - sahalia .moreover , our method is different from the existing theory of large - deviations - based expansions , which were discussed in , for example , azencott , bismut , ben arous and landre , and given probabilistic representation in watanabe for the purpose of investigating the analytical structure of heat kernel in differential geometry .large - deviations - based asymptotic expansions involve riemannian distance ( implied by the true but generally unknown transition density ) and higher order correction terms . except for some special cases ,they rarely admit closed - form expressions by solving the corresponding variational problems . however , for practical implementation of statistical estimation , relatively simple closed - form approximations are usually favorable .the rest of this paper is organized as follows . in section [ sectionmodelmle ] ,the model is introduced with some technical assumptions and the maximum - likelihood estimation problem is formulated . in section [ sectionexpansionframework ] ,the transition density expansion is proposed with closed - form correction terms of any arbitrary order for general multivariate diffusion processes , and the uniform convergence of the expansion is . in section [ sectionimplementationexamples ] , numerical performance of the density expansionis demonstrated through examples . in section [ sectionamle ] ,the asymptotic properties of the consequent approximate mle are established . in section [ sectionamlemc ] , monte carlo evidence for the approximate mleis provided . in section [ sectionconcludingremarks ] , the paper is concluded and some for future research are outlined .appendix [ subsectionce ] provides an algorithm for explicitly calculating a type of conditional expectation , which plays an important role in the closed - form expansion .appendix [ appendixproofs ] contains all proofs .the supplementary material collects some concrete formulas for illustration , figures for exhibiting detailed numerical performance , additional and alternative output of simulation results , more examples , a brief introduction to the theory of watanabe yoshida and the proof of a technical lemma .assuming known parametric form of the drift vector function and the dispersion matrix : with unknown parameter belonging to a compact set , an -dimensional time - homogenous diffusion is modeled by an sde , where is a -dimensional standard brownian motion .let denote the state space of .without loss of generality , we assume throughout the paper . by the time - homogeneity nature of diffusion ,let denote the conditional density of given , that is , based on the discrete observations of at time grids , which correspond to the daily , weekly or monthly monitoring , etc ., the likelihood function is constructed as the corresponding log - likelihood function admits the following form : where the transition density is .\ ] ] maximum - likelihood estimation is to identify the optimizer in for ( [ likelihood ] ) or equivalently ( [ loglikelihood ] ) . 
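for orientation , the crude euler ( discretized ) likelihood mentioned above can be written in a few lines ; the expansion developed in this paper is designed to replace precisely this kind of gaussian approximation by a more accurate closed - form density . the ornstein - uhlenbeck model and the parameter values below are illustrative assumptions .

```python
import numpy as np

# euler pseudo-likelihood: p(Delta, x | x0; theta) is approximated by a
# gaussian with mean x0 + mu(x0) * Delta and variance sigma(x0)^2 * Delta.
def euler_loglik(theta, X, dt):
    kappa, alpha, sigma = theta                # dX = kappa*(alpha - X) dt + sigma dW
    x0, x1 = X[:-1], X[1:]
    mean = x0 + kappa * (alpha - x0) * dt
    var = sigma ** 2 * dt
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x1 - mean) ** 2 / (2 * var))

rng = np.random.default_rng(0)
dt, n, theta_true = 1 / 12, 600, (0.5, 0.06, 0.03)
X = np.empty(n + 1); X[0] = 0.06
for i in range(n):                             # coarse euler simulation, for illustration only
    X[i + 1] = X[i] + theta_true[0] * (theta_true[1] - X[i]) * dt \
               + theta_true[2] * np.sqrt(dt) * rng.standard_normal()
print(euler_loglik(theta_true, X, dt))
```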
however , except for some simple models , ( [ likelihood ] ) and ( [ loglikelihood ] ) rarely admit closed - form expressions .for ease of exposition , we introduce some technical assumptions .let denote the diffusion matrix .[ assumptionpositivedefinite ] the diffusion matrix is positive definite , that is , , for any .[ aussumptionboundedderivatives ] for each integer , the order derivatives in of the functions and exist , and they are uniformly bounded for any .[ assumption3timesdifferentiable ] the transition density is continuous in , and the log - likelihood function ( [ loglikelihood ] ) admits a unique maximizer in the parameter set .assumptions [ assumptionpositivedefinite ] and [ aussumptionboundedderivatives ] are conventionally proposed in the study of stochastic differential equations ; see , for example , ikeda and watanabe .they are sufficient ( but not necessary ) to guarantee the existence and uniqueness of the solution and other desirable technical properties . for convenience , the theoretical proofs given in appendix [ appendixproofs ] are based on these conditions .however , as is shown in sections [ sectionimplementationexamples ] and [ sectionamlemc ] , numerical examples suggest that the method proposed in this paper is applicable to a wide range of commonly used models , rather than confined to those strictly satisfying these sufficient ( but not necessary ) conditions .assumption [ assumption3timesdifferentiable ] collects two standard conditions for maximum likelihood estimation .in particular , for the continuity ( and higher differentiability ) of the transition density in the parameter , sufficient conditions based on the smoothness of the drift and dispersion functions can be found in , for example , azencott and at - sahalia .theoretical relaxation of these conditions may involve case - by - case treatment and standard approximation argument , which is beyond the scope of this paper and can be regarded as a future research topic .the method of approximate maximum - likelihood estimation proposed in this paper relies on a closed - form expansion for transition density of any arbitrary diffusion process .the discussion of the _ lamperti _ transform and the reducibility issue as in at - sahalia , our starting point stands on the fact that the transition density can be expressed as ,\ ] ] where is the dirac delta function centered at for some variable .more precisely , is defined as a generalized function ( distribution ) such that it is zero for all values of except when it is zero , and its integral from to is equal to one ; see , for example , kanwal for more details .watanabe established the validity of ( [ expectationdiracdensity ] ) through the theory of generalized random variables and expressed correction terms of large - deviations - based density expansion as implicit expectation forms by separately treating the cases of diagonal ( ) and off - diagonal ( ) .in particular , the off - diagonal ( ) expansion depends on a generally implicit variational formulation for riemanian distance . from the viewpoint of statistical applications where ( corresponding to ) happens almost surely , the expansion proposed in watanabe is impractical due to high computational costs . in the literature of statistical inference , ( [ expectationdiracdensity ] )has been employed in pedersen for simulation - based approximate mle . 
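the simulation - based route of pedersen referred to above can be sketched as follows : the transition density is estimated by averaging , over simulated sub - paths , the gaussian euler density of the last sub - step . the drift , diffusion and numerical settings below are assumptions chosen only to illustrate the idea .

```python
import numpy as np

# pedersen-type simulated transition density for a scalar diffusion.
def simulated_density(x, x0, delta, mu, sigma, n_paths=20000, n_sub=10, seed=1):
    rng = np.random.default_rng(seed)
    h = delta / n_sub
    y = np.full(n_paths, x0, dtype=float)
    for _ in range(n_sub - 1):                 # simulate the path up to time delta - h
        y += mu(y) * h + sigma(y) * np.sqrt(h) * rng.standard_normal(n_paths)
    var = sigma(y) ** 2 * h                    # gaussian density of the final sub-step
    dens = np.exp(-(x - y - mu(y) * h) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return dens.mean()

mu = lambda y: 0.5 * (0.06 - y)                # assumed ornstein-uhlenbeck drift
sig = lambda y: 0.03 * np.ones_like(y)         # assumed constant diffusion
print(simulated_density(0.055, 0.06, 1 / 12, mu, sig))
```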
in this section ,we propose a new expansion of the transition density which universally treats the diagonal ( ) and off - diagonal ( ) cases .heuristically speaking , our method hinges on a taylor - like expansion of a standardized version of , which results in closed - form formulas for any arbitrary correction term .let be a small parameter based on which an asymptotic expansion is carried out . by rescaling the model ( [ generalmodelx ] ) to bring forth finer local behavior of the diffusion process, we let .integral substitution and the brownian scaling property yield that where is a -dimensional standard brownian motion . for notation simplicity , we write the scaled brownian motion as and drop the parameter in what follows .let us introduce a vector function defined by and construct the following differential operators : which map vector - valued functions to vector - valued functions of the same dimension , respectively .more precisely , for any and a -dimensional vector - valued function , and for . for an index and a right - continuous stochastic process , define an iterated stratonovich integral with integrand as (t):=\int_{0}^{t}\int _ { 0}^{t_{1}}\cdots\int_{0}^{t_{n-1}}f(t_{n } ) \circ dw_{i_{n}}(t_{n})\cdots\circ dw_{i_{2}}(t_{2 } ) \circ dw_{i_{1}}(t_{1}),\ ] ] where denotes stochastic integral in the stratonovich sense . note that (t) ] is abbreviated to . by convention ,let and define \ ] ] as a `` norm '' of index , which counts an index with twice . by viewing as a function of ,it is natural to obtain a pathwise expansion in with random coefficients , which serves as a foundation for our transition density expansion . according to watanabe , i introduce the following coefficient function defined by iterative application of the differential operators ( [ a0ajoperators ] ) : for an index . here , for , the vector denotes the column vector of the dispersion matrix , for , refers to the vector defined in ( [ b ] ) . using vector function ( [ b ] ), the scaled diffusion ( [ scaleddiffusion ] ) can be equivalently expressed as the following stochastic differential equation in the stratonovich sense ( see , e.g. , section 3.3 in karatzas and shreve ) , that is , thus , similarly to theorem 3.3 in watanabe , it is easy to obtain a closed - form pathwise expansion of from successive applications of the it formula . admits the following pathwise asymptotic expansion : for any .here , and can be written as a closed - form linear combination of iterated stratonovich integrals , that is , for , where the integral , the norm and coefficient are defined in ( [ stratintfgeneral ] ) , ( [ inorm ] ) and ( [ ccoefficient ] ) , respectively . for any arbitrary dimension , one has the element - wise form of the expansion ( [ xpathwiseexpansion ] ) as where with for . note that ( [ xpathwiseexpansion ] ) is different from the wiener chaos decomposition ( see , e.g. , nualart ) , which employs an alternative way of representing random variables .the validity of the pathwise expansion ( [ xpathwiseexpansion ] ) and other expansions introduced in the next subsection can be rigorously guaranteed by the theory of watanabe and yoshida . for ease of exposition , we focus on the derivation of density expansion in this and the following subsection and articulate the validity issue in section [ subsectionconvergencedensityexpansion ] . 
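a quick numerical illustration of the leading term of the pathwise expansion : for small eps , the standardized increment ( x ( eps^2 ) - x_0 ) / eps should be approximately normal with standard deviation sigma ( x_0 ) . the scalar ornstein - uhlenbeck example below uses assumed parameter values .

```python
import numpy as np

# check the leading order of the expansion: (X(eps^2) - x0) / eps ~ N(0, sigma(x0)^2).
rng = np.random.default_rng(2)
kappa, alpha, sigma0, x0, eps = 0.5, 0.06, 0.03, 0.05, 0.1
n_paths, n_steps = 100000, 200
h = eps ** 2 / n_steps                         # fine euler grid over the short horizon
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += kappa * (alpha - x) * h + sigma0 * np.sqrt(h) * rng.standard_normal(n_paths)
y = (x - x0) / eps
print(y.std(), sigma0)                         # the two numbers should nearly agree
```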
we introduce an -dimensional correlated brownian motion for .thus , the leading term can be expressed as let be a diagonal matrix defined by it follows that and .furthermore , the correlation of and for is given by so , the covariance matrix of is it follows that assumption [ assumptionpositivedefinite ] is equivalent to the positive definite property of the correlation matrix and the nonsingularity of the dispersion matrix , that is , . finally , for any index and differentiable function with , we introduce the following differential operator : where denotes the element of the vector . employing the scaled diffusion with ,the expectation representation ( [ expectationdiracdensity ] ) for transition density can be expressed as .\ ] ] to guarantee the convergence , our expansion procedure begins with standardizing to which converges to a nonconstant random variable ( a multivariate normal in our case ) , see watanabe and yoshida for a similar setting .indeed , based on the brownian motion defined in ( [ bmb ] ) and the fact , the component of satisfies that for .it is worth noting that watanabe employed an alternative standardization method ( see theorem 3.7 in watanabe ) in constructing the implicit expectation representation for the correction terms of large - deviations - based density expansion for the case of ; see theorem 3.8 in watanabe .owing to ( [ standardizationxy ] ) , the pathwise expansion ( [ xpathwiseexpansion ] ) implies that for any .thus , based on ( [ pedelta ] ) , a jacobian transform resulting from the change of variable in ( [ standardizationxy ] ) yields the following representation of the density of based on that of , that is , ,\ ] ] where . for ease of exposition, the initial condition is omitted in what follows .so , the key task is to develop an asymptotic expansion for ] is abbreviated to . before discussing details in the following subsections ,we briefly outline a general algorithm , which can be implemented using any symbolic packages , for example , mathematica . throughout our discussion , the iterated ( stratonovich or it ) stochastic integrals may involve integrations with respect to not only brownian motions but also time variables . [ algorithmce ]* convert each iterated stratonovich integral in ( [ ce_product_stratonovichintegrals ] ) to a linear combination of iterated it integrals ; * convert each multiplication of iterated it integrals resulting from the previous step to a linear combination of iterated it integrals ; * compute the conditional expectation of iterated it integral via an explicit construction of brownian bridge .denote by the length of the index .denote by an index obtained from deleting the first element of .in particular , if , we define (t)=f(t)$ ] by slightly extending the definition ( [ stratintfgeneral ] ) .according to page 172 of kloeden and platen , we have the following conversion algorithm : for the case of or , we have ; for the case of , we have (t)+1 _ { \ { i_{1}=i_{2}\neq0 \ } } i_{(0 ) } \bigl [ \tfrac{1}{2}j_{-(-\mathbf { i } ) } ( \cdot ) \bigr ] ( t).\ ] ] for example ,if , one has thus , with the conversion algorithm ( [ conversionstit ] ) , we convert each iterated stratonovich integral in ( [ ce_product_stratonovichintegrals ] ) to a linear combination of iterated it integrals .thus , the product can be expanded as a linear combination of multiplication of it integrals .we provide a simple recursion algorithm for converting a multiplication of iterated it integrals to a linear combination . 
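the stratonovich - to - ito conversion in step ( i ) of the algorithm can be checked numerically on the simplest nontrivial index ( 1 , 1 ) , for which j_(1,1)(t) = i_(1,1)(t) + ( 1 / 2 ) i_(0)(t) , i.e. w(t)^2 / 2 = ( w(t)^2 - t ) / 2 + t / 2 . the single - path check below uses an assumed discretization step .

```python
import numpy as np

# single-path check of the stratonovich-to-ito conversion for index (1, 1).
rng = np.random.default_rng(3)
t, n = 1.0, 2000
dt = t / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))
ito = np.sum(W[:-1] * dW)                      # ito integral of W dW (left endpoints)
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)    # stratonovich (midpoint) version
print(strat, ito + 0.5 * t)                    # nearly equal
print(strat, W[-1] ** 2 / 2)                   # both equal W(t)^2 / 2
```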
according to lemma 2 in tocino , a product of two it integrals as defined in ( [ itointfgeneral ] )satisfies that \\[-8pt ] & & { } + \int_{0}^{t}i_{-\bolds{\alpha}}(s)i_{-\bolds{\beta}}(s)1_{\{\alpha_{1}=\beta_{1}\neq 0\}}\,ds\nonumber\end{aligned}\ ] ] for any arbitrary indices and .iterative applications of this relation render a linear combination form of .inductive applications of such an algorithm convert a product of any number of iterated it integrals to a linear combination .therefore , our immediate task is reduced to the calculation of conditional expectations of iterated it integrals .we focus on the explicit calculation of conditional expectations of the following type : \\[-8pt ] & & \qquad=\mathbb{e } \biggl ( \int_{0}^{1}\int _ { 0}^{t_{1}}\cdots\int_{0}^{t_{n-1}}dw_{i_{n}}({t_{n})}\cdots dw_{i_{2}}(t_{2})\,dw_{i_{1}}(t_{1})|w(1)=z \biggr ) .\nonumber\end{aligned}\ ] ] by an explicit construction of brownian bridge ( see page 358 in karatzas and shreve ) , we obtain the following distributional identity , for any : where s are independent brownian motions and is distributed as a brownian bridge starting from and ending at at time .for ease of exposition , we also introduce and .therefore , the condition in ( [ ceito ] ) can be eliminated since \\[-8pt ] & & \hspace*{121.7pt}d \bigl(\mathcal{b}_{i_{2}}(t_{2 } ) -t_{2}\mathcal{b}_{i_{2}}(1)+t_{2}z_{i_{2 } } \bigr ) \nonumber\\ & & \hspace*{140pt}d\bigl(\mathcal{b}_{i_{1}}(t_{1})-t_{1 } \mathcal{b}_{i_{1}}(1)+t_{1}z_{i_{1}}\bigr)\biggr).\nonumber\end{aligned}\ ] ] an early attempt using the idea of brownian bridge to deal with conditional expectation ( [ ceito ] ) can be found in uemura , which investigated the calculation of heat kernel expansion in the diagonal case .it is worth mentioning that , instead of giving a method for explicitly calculating ( [ ceito ] ) , uemura employed discretization of stochastic integrals to show that ( [ ceito ] ) has the structure of a multivariate polynomial in with unknown coefficients .therefore , the validity of the above derivation can be seen from the definition of stochastic integral as a limit of discretized summation .in particular , the random variables are not involved in the integral in ( [ convertedce ] ) .the integrals with respect to are in the sense of usual stochastic integrals ; the integrals with respect to are in the sense of lebesgue integrals . by expanding the right - hand side of ( [ convertedce ] ) and collecting terms according to monomials of s , we express ( [ ceito ] ) as a multivariate polynomial in : where the coefficients are determined by \\[-8pt ] & & \qquad\hspace*{97pt } d\bigl(\mathcal{b}_{i_{l_{2}+1}}(t_{l_{2}+1})-t_{l_{2}+1 } \mathcal{b}_{i_{l_{2}+1}}(1)\bigr)\nonumber\\ & & \qquad\hspace*{97pt } dt_{l_{2}}\,d\bigl ( \mathcal{b}_{i_{l_{2}-1}}(t_{l_{2}-1})-t_{l_{2}-1 } \mathcal{b}_{i_{l_{2}-1}}(1)\bigr)\cdots \nonumber \\ & & \qquad\hspace*{97pt } d\bigl(\mathcal{b}_{i_{l_{1}+1}}(t_{l_{1}+1})-t_{l_{1}+1 } \mathcal{b}_{i_{l_{1}+1}}(1)\bigr)\nonumber\\ & & \qquad\hspace*{97pt } dt_{l_{1}}\,d\bigl ( \mathcal{b}_{i_{l_{1}-1}}(t_{l_{1}-1})-t_{l_{1}-1 } \mathcal{b}_{i_{l_{1}-1}}(1)\bigr)\cdots \nonumber \\ & & \qquad\hspace*{97pt } d\bigl(\mathcal{b}_{i_{1}}(t_{1})-t_{1 } \mathcal{b}_{i_{1}}(1)\bigr ) .\nonumber\end{aligned}\ ] ] algebraic calculation from expanding the terms like simplifies ( [ clk ] ) as a linear combination of expectations of the following form : where is an iterated it integral . 
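the brownian - bridge substitution used above to remove the conditioning can be illustrated on a simple case : conditionally on w ( 1 ) = z , the time integral of w over [ 0 , 1 ] has expectation z / 2 . the monte carlo sketch below ( with assumed sample sizes ) reproduces this value by simulating b ( t ) - t b ( 1 ) + t z directly .

```python
import numpy as np

# E[ int_0^1 W(t) dt | W(1) = z ] = z / 2, via the bridge construction.
rng = np.random.default_rng(4)
z, n, n_paths = 0.7, 500, 20000
dt = 1.0 / n
t = np.linspace(dt, 1.0, n)
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n))
B = np.cumsum(dB, axis=1)
bridge = B - np.outer(B[:, -1], t) + z * t     # B(t) - t B(1) + t z, ends at z
estimate = (bridge.sum(axis=1) * dt).mean()    # time integral, averaged over paths
print(estimate, z / 2)                         # nearly equal
```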
by viewing as , we have to calculate this expectation , we use the algorithm proposed in section [ subsectionconversionmultiplicationtolinearcombination ] to convert to a linear combination of iterated it integrals .finally , we need to calculate expectation of iterated it integrals without conditioning . for any arbitrary index , we have if and ,otherwise ( by the martingale property of stochastic integrals ) .using the chain rule and the taylor theorem , the ( ) order correction term for admits the following form : where denotes for simplicity .thus , taking expectation of ( [ phikgeneral ] ) and applying ( [ yexpansion ] ) , we obtain that employing the integration - by - parts property of the dirac delta function ( see , e.g. , section 2.6 in kanwal ) , the conditional expectation can be computed as \\ & & \qquad=\int_{b\in\mathbb{r}^{d}}\mathbb{e\bigl[}\partial^{\mathbf { r}}\delta \bigl(b(1)-y\bigr)f_{j_{1}+1,r_{1}}f_{j_{2}+1,r_{2}}\cdots f_{j_{l}+1,r_{l}}|b(1)=b \bigr]\\ & & \hspace*{23.5pt}\qquad\quad{}\times\phi_{\sigma(x_{0})}(b)\,db \\ & & \qquad=(-1)^{l}\,\partial^{\mathbf{r } } \bigl [ \mathbb{e \bigl[}f_{j_{1}+1,r_{1}}f_{j_{2}+1,r_{2}}\cdots f_{j_{l}+1,r_{l}}|w(1)= \sigma(x_{0})^{-1}d(x_{0})^{-1}y\bigr]\\ & & \qquad\hspace*{256pt}{}\times \phi_{\sigma(x_{0})}(y ) \bigr],\end{aligned}\ ] ] where is given in ( [ leadingordergeneral ] ) . by plugging in ( [ ccoefficientsr ] ) , we have that \\ & & \qquad=\sum _ { \ { ( \mathbf{i}_{1},\mathbf{i}_{2},\ldots,\mathbf{i}_{l})|{\vert}\mathbf{i}_{\omega}{\vert}=j_{\omega } + 1,\omega = 1,2,\ldots , l \ } } \prod_{\omega=1}^{l}c_{\mathbf { i}_{\omega } , r_{\omega}}(x_{0})p_{(\mathbf{i}_{1},\mathbf{i}_{2},\ldots,\mathbf{i } _ { l } ) } ( z ) , \end{aligned}\ ] ] where is defined in ( [ ce_product_stratonovichintegrals ] ) . formula( [ formulageneralomegak ] ) follows from the fact that as well as the definition of the differential operators in ( [ doperators ] ) .the above conditioning argument can be justified , when is regarded as a generalized wiener functional ( random variable ) and the expectation is interpreted in the corresponding generalized sense as in watanabe .now , based on assumption [ aussumptionboundedderivatives ] , we introduce the following uniform upper bounds . for ,let and be the uniform upper bounds of the order derivative of and , respectively , that is , for .also , let and denote the uniform upper bounds of and on , respectively , that is , for . in order to establish the uniform convergence in theorem [ propositionconvergencedensity ] ,we introduce the following lemma .when the dependence of parameters is emphasized , we express as and express the standardized random variable defined in ( [ standardizationxy ] ) as in this appendix , we employ standard notation of malliavin calculus ( see , e.g. , nualart and ikeda and watanabe ) and the theory of watanabe and yoshida . for the readers convenience , a brief survey of some relative theory is provided in the supplementary material .[ propositiondexpansion ] under assumption [ aussumptionboundedderivatives ] , the pathwise expansion ( [ yexpansion ] ) holds in the sense of uniformly in , that is , for any , and .see the supplementary material . because of assumption [ assumptionpositivedefinite ] , theorem 3.4 in watanabe guarantees the uniform nondegenerate condition , that is , < \infty\qquad\mbox{for any } p\in(0,+\infty).\ ] ] let denote a set of indices . 
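the unconditional moment rule quoted above ( iterated ito integrals have zero mean unless all indices are zero , in which case the value is t^n / n ! ) can be verified by direct simulation ; the discretization below is an assumption for illustration only .

```python
import numpy as np

# monte carlo check: E[ iterated ito integral with index (1, 1) over [0, t] ] = 0,
# while the all-zero index (0, 0) gives the deterministic value t^2 / 2.
rng = np.random.default_rng(5)
t, n, n_paths = 1.0, 400, 20000
dt = t / n
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n))
W = np.cumsum(dW, axis=1) - dW                 # W at the left endpoints of each step
I_11 = np.sum(W * dW, axis=1)                  # int_0^t int_0^s dW(u) dW(s)
print(I_11.mean())                             # close to 0
print(t ** 2 / 2)                              # value for the all-zero index
```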
for any , let us consider a generalized function defined as , which is a schwartz distribution , that is , . applying theorem 2.3 in watanabe and theorem 2.2 in yoshida , we obtain that admits the following asymptotic expansion : for any arbitrary , uniform in , and . here , the correction term is given in ( [ phikgeneral ] ) .therefore , we obtain that hence , by taking into account the transform ( [ standardizationxy ] ) , we obtain that which yields ( [ densityapproximationerrorboundequation ] ) . for ,let by theorem [ propositionconvergencedensity ] , there exists a constant such that for any and sufficiently small .thus , for any positive integer , it follows that \leq c^{k } \epsilon^{k(j+1-m)}\rightarrow0\qquad\mbox{as } \epsilon\rightarrow0.\ ] ] by the chebyshev inequality , converges to zero in probability given , that is , for any , \rightarrow0\qquad\mbox{as } \epsilon \rightarrow0.\ ] ] by conditioning , it follows that \\ & & \qquad=\int_{r}\mathbb{p } \bigl [ \bigl|r^{(j)}\bigl ( \delta , x(t+\delta)|x(t);\theta\bigr)\bigr|>\varepsilon|x(t)=x_{0 } \bigr ] \mathbb{p}\bigl(x(t)\in dx_{0}\bigr).\end{aligned}\ ] ] because of the fact that \leq1\ ] ] and , it follows from the lebesgue dominated convergence theorem that \rightarrow0\qquad\mbox{as } \epsilon\rightarrow0,\ ] ] that is , \rightarrow0\ ] ] as .now , we obtain that as uniformly in . following similar lines of argument as those in the proof of theorem 2 in at - sahalia and theorem 3 in at - sahalia , we arrive at by the convergence in ( [ convinprobdensity ] ) and continuity of logarithm .hence , for any arbitrary , one obtains the convergence of log - likelihood uniformly in .finally , the convergence of as follows directly from assumption [ assumption3timesdifferentiable ] and the standard method employed in at - sahalia .i am very grateful to professor peter bhlmann ( co - editor ) , the associate editor and three anonymous referees for the constructive suggestions .i also thank professors yacine at - sahalia , mark broadie , song xi chen , ioannis karatzas , per mykland , nakahiro yoshida and lan zhang for helpful comments .
this paper proposes a widely applicable method of approximate maximum - likelihood estimation for multivariate diffusion processes from discretely sampled data . a closed - form asymptotic expansion for the transition density is proposed and accompanied by an algorithm , containing only basic and explicit calculations , for delivering the expansion to any arbitrary order . the likelihood function is thus approximated explicitly and employed in statistical estimation . the performance of our method is demonstrated by monte carlo simulations of several examples , which represent a wide range of commonly used diffusion models . the convergence of the expansion and of the resulting estimation method is theoretically justified using the theory of watanabe [ _ ann . probab . _ * 15 * ( 1987 ) 1 - 39 ] and yoshida [ _ j . japan statist . soc . _ * 22 * ( 1992 ) 139 - 159 ] on the analysis of generalized random variables , under some standard sufficient conditions .
the limited battery capacity is a major hurdle to the development of modern wireless technology .frequent device battery outage not only disrupts the normal operation of individual wireless devices ( wds ) , but also significantly degrades the overall network performance , e.g. , the sensing accuracy of a wireless sensor network .conventional wireless systems require frequent recharging / replacement of the depleted batteries manually , which is costly and inconvenient especially for networks consisting of a large number of battery - powered wds or operating under some special application scenarios , e.g. , sensors embedded in building structure . given stringent battery capacity constraints , minimizing energy consumption to prolong the wd operating lifetimeis one critical design objective in battery - powered wireless systems . using wireless communication networks for example ,various energy - conservation schemes have been proposed , e.g. , via transmit power management , energy - aware medium access control and routing selection , and device clustering , etc .the recent advance of wireless power transfer ( wpt ) technology provides an attractive alternative solution to power wds over the air , where wds can harvest energy remotely from the radio frequency ( rf ) signals radiated by the dedicated energy nodes ( ens ) . currently , with a transmit power of watts , tens of microwatts ( ) rf power can be transferred to a distance of several meters , meters is about . ] which is sufficient to power the activities of many low - power devices , such as sensors and rf identification ( rfid ) tags .besides , wpt is fully controllable in its transmit power , waveforms , and occupied time / frequency resource blocks , thus can be easily adjusted in real - time to meet the energy demand of wds .its application can significantly improve the system performance and reduce the operating cost of a battery - powered wireless network .due to the short operating range of wpt , a wpt network often needs to deploy _ multiple _ ens that are distributed in a target area to reduce the power transfer distance to the wds within .meanwhile , for radiation safety concern , densely deployed ens are also necessary to reduce the individual transmit power of each en for satisfying the equivalent isotropically radiated power ( eirp ) requirement enforced by spectrum regulating authorities . in light of this, we study in this paper the charging control for multiple distributed ens in wpt networks .the application of wpt also brings in a fundamental shift of design principle in energy - constrained wireless systems . instead of being utterly energy - conservative in battery - powered systems, one can now prolong the device lifetime and meanwhile optimize the system performance by balancing the energy harvested and consumed . for point - to - point energy transfer, many techniques have been proposed to enhance the efficiency of wpt through , e.g. , multi - antenna beamforming technique , wpt - tailored channel training / feedback , and energy transmitting / receiving antenna and circuit designs . from a network - level perspective , efficient methods have also been proposed to optimize both the long - term network placement ( see e.g. , ) and real - time wireless resource allocation ( see e.g. , ) in wpt networks for optimizing the communication performance . 
among them , one effective method is to exploit the _ frequency diversity _ of multi - path fading channels in a broadband network .this is achievable by transmitting multiple energy signals on parallel frequency sub - channels that are separated at least by the channel coherence bandwidth .intuitively , one can maximize the energy transfer efficiency in a point - to - point frequency - selective channel by allocating all transmit power to the strongest sub - channel .however , in the general case with multiple ens and wds with different sub - channel gains between each pair of en and wd , there is a trade - off between ens energy efficiency and wds power balance in the transmit power allocation over frequency . in this paper, we aim to optimize the transmit power allocation over frequency and time in a multi - en and multi - wd broadband wpt network to maximize the network operating lifetime , which is a key performance metric of energy - constrained networks defined as the duration until a fixed number of wds plunge into energy outage .a closely related topic is the design of lifetime - maximizing user scheduling in conventional battery - powered communication networks in the sense that the user scheduling determines the user priority to consume energy ( transmit data ) , while the charging control problem considered in this paper determines the user priority to harvest more energy . nonetheless , their designs differ significantly for two main reasons .on one hand , wpt to a particular wd will not cause detrimental co - channel interference to the others as in wireless information transmission ( wit ) , but can instead be exploited to boost the energy harvesting performance of all wds . on the other hand ,the optimal power allocation to optimize the performance of wpt and wit is fundamentally different .using a point - to - point frequency - selective channel for example , the energy - optimal solution for wpt allocates power only to the strongest sub - channel , while the rate - optimal solution for wit is the well - known water - filling power allocation over more than one strong sub - channels in general .another important objective of this paper is to design an efficient feedback mechanism in wpt networks .as shown in , to maximize the network lifetime , it is important for the ens to have the knowledge of both channel state information ( csi ) and battery state information ( bsi ) , i.e. , the residual battery levels of wds .specifically , the knowledge of the strong sub - channels can boost the energy transfer efficiency , and the knowledge of those close - to - outage wds can help avoid their energy outage by timely charging . in practice ,transmitting csi and bsi feedbacks may consume non - trivial amount of energy of the wds and leave less time for wpt .therefore , efficient csi / bsi feedback is needed to maximize the net energy gain , i.e. , the energy gain obtained from more refined charging control less by the feedback energy cost .our main contributions in this paper are as follows .* we propose a voting - based distributed charging control framework for broadband wpt networks .specifically , each wd simply estimates the frequency sub - channels , casts its vote(s ) for some strong sub - channel(s ) and sends to the ens along with its battery state , based on which each en allocates its transmit power over the sub - channels independently .the proposed feedback method is low in complexity and applicable to practical wds ( e.g. 
, rfid tags ) only with simple baseband processing capability . under the proposed framework , we study lifetime - maximizing csi feedback and transmit power allocation designs . *we derive the general expression of the expected lifetime achieved by a charging control method in wpt networks , which shows that a lifetime - maximizing charging control should be able to achieve a balance between the energy efficiency , user fairness and the induced energy cost of wpt .some general principles are derived to guide the design of practical charging control method , e.g. , the user priority - based charging scheduling .* based on the analysis , we propose practical power allocation algorithm with the considered voting - based csi feedback .specifically , the power allocated to a sub - channel is a function of the weighted sum vote received from all wds , while the number of votes cast by a wd and the weight of each vote are related to its current energy level .several effective power allocation functions are proposed . for practical implementation, we also discuss the setting of function parameters to maximize the network lifetime in practical systems .the network lifetime performance of the proposed distributed charging control methods is then evaluated through simulations under different setups .we show that the proposed voting - based charging control can effectively extend the network lifetime .interestingly , we find that allocating all the transmit power of each en to the best sub - channel that receives the highest vote achieves superior performance compared to other power allocation methods .in fact , this is consistent with the energy - optimal power allocation solution in point - to - point frequency - selective broadband channel , i.e. , a special case of the multi - en and multi - wd system considered in this paper .a related work in designs an interesting energy auction mechanism among the wds in wpt networks to control the transmit power and shows the existence of an equilibrium .however , it only considers energy transfer on a narrowband channel instead of the frequency - selective broadband channel considered in this paper . besides, the wds are assumed selfish by nature and intend to harvest more energy . in our paper , however , we consider the wds working collaboratively to achieve a common objective , e.g. , monitoring the temperature of an area , such that a wd is not aimed to maximize its own harvested energy at the cost of reducing the lifetime of the whole network .the rest of this paper is organized as follows .we first present in section ii a voting - based distributed charging control framework and and the key performance metric . in section iii , we analyze the expected network lifetime and derive the lifetime - maximizing design principles of wireless charging control .the detailed designs of power allocation and feedback mechanism are presented in section iv . in sectionv , simulation results are presented to evaluate the performance of the proposed charging control methods .finally , the paper is concluded in section vi .as shown in fig . [ 101 ] , we consider a broadband wpt network , where ens are connected to stable power sources and broadcast rf energy to power distributed wds .the total bandwidth of the system and the channel coherence bandwidth are denoted by and , respectively , with . for simplicity, we assume that can be divided by to form parallel channels . 
to achieve full frequency diversity gain for each en, each channel is further divided into sub - channels each for one of the ens .the sub - channels allocated to the -th en are denoted by , , on which the en can transmit narrowband energy signals .an example channel assignment is shown in fig . [ 101 ] , where the adjacent sub - channels allocated to the same en are separated by , thus the energy signals transmitted by the -th en to a wd experience independent fading over the sub - channels . besides , the sub - channels of different ens are also assumed to be independent due to sufficient spatial separations .we further assume that the wireless channels experience block fading , where the sub - channel gains remain constant in a transmission block of length and vary independently over different blocks . for eachwd , a single antenna is used for both energy harvesting and communication in a time - division - duplexing ( tdd ) manner ( see wd1 in fig . [ 101 ] ) . in particular , the communication circuit is used for channel estimation , i.e. , receiving pilot signals sent by the ens and sending channel feedback to the ens . besides, each wd may have a functional circuit to perform specific tasks , e.g. , target sensing in fig . [101 ] . for the -thwd , the energy harvesting circuit converts the received rf signal to dc energy and store in a rechargeable battery of capacity to power the communication and functional circuits .on the other hand , each single - antenna en also has a similar tdd circuit structure ( see en1 in fig . [ 101 ] ) to switch between energy transfer and communication with the wds . at the beginning of the -th transmission block , , the ens broadcast pilot signals simultaneously to the wds in time duration .specifically , the -th en transmits pilot signals on the sub - channels in , . upon receiving the pilot signals , each wd first estimates the sub - channel gains , denoted by , .for the sub - channels in allocated to the -th en , we assume that the channel gains from the en to the -th wd follow a general distribution with the equal mean given by = \beta d_{i , k}^{-\delta } , \ \\forall j\in\mathcal{e}_i , \l=1,2,\cdots,\ ] ] where denotes the distance between the -th en and the -th wd , denotes the path - loss exponent , and denotes a positive parameter related to the antenna gain and signal carrier frequency , which is assumed to be equal for all the sub - channels .then , the wds feed back the channel gains to all the ens in the next time , which can be achieved either using orthogonal time slots or frequency bands .conventional channel feedback procedure requires each wd to encode and modulate the real channel gains , and send to the ens .this , however , can be costly to the wds due to some of the energy harvested consumed on channel feedback , or even infeasible due to the lack of adequate baseband processing capability of some simple energy - harvesting wds . 
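For concreteness, the sub-channel model above is easy to simulate: gains are independent across sub-channels and blocks with mean proportional to the path loss, and, as in the simulations later in the paper, they can be taken as exponentially distributed (Rayleigh fading). The sketch below is only an illustration; the values of the path-loss parameters are placeholders, not the ones used in the paper. The feedback-cost concern just raised is also what motivates the lightweight voting mechanism described next.

```python
import numpy as np

def subchannel_gains(d_en_wd, n_sub, beta=1e-3, delta=3.0, rng=None):
    """One block of sub-channel power gains from an EN to a WD.

    Gains are i.i.d. over sub-channels and blocks with mean
    beta * d_en_wd**(-delta), matching the path-loss model above; the
    exponential draw corresponds to the Rayleigh-fading assumption used in
    the simulations.  beta and delta are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()
    return rng.exponential(beta * d_en_wd ** (-delta), size=n_sub)
```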
in light of this, we consider a practical voting - based feedback mechanism as shown in fig .specifically , each wd , say the -th wd , simply estimates the received power levels of the sub - channels , selects the strongest sub - channels , ranks them in a descending order based on the channel gains , and broadcasts the indices of the ordered sub - channels , denoted by , to the ens .the rank of sub - channel is denoted by .notice that the value of is a design parameter to be specified later , which can be varying in different transmission block and across different wds .for each en , it observes the feedbacks from all the wds , denoted by .the channel feedback mechanism can be analogously considered as a voting system that the -th elector ( wd ) casts _ ranked votes _ for the candidates ( sub - channels ) .strongest sub - channels ( scs ) and send their indices with ranks to the ens.,scaledwidth=50.0% ] let denote the residual energy of the -th wd at the end of -th block , denote the amount of energy consumed within the block , including the energy spent on performing csi feedback . for simplicity , we assume that the energy consumption rate is constant within each block , so that the energy level increases / decreases monotonically in each block . then , the residual energy at the end of the -th block is where denotes the initial energy level . in this paper, is assumed to follow a general distribution with an average consumption rate = \mu_k t ] is divided into intervals specified by the thresholds , where , and if .we use to denote the battery state of wd at the end of the -th transmission block , where the wd is referred to as in the -th battery state , i.e. , , if the residual energy ] is a fixed parameter denoting the energy harvesting efficiency and assumed equal for all wds .the output voltage of a battery decreases with the residual energy level .we say an _ energy outage _ occurs if the remaining energy level of a wd is below a certain threshold , such that normal device operation could not be maintained .once a device is in energy outage , it is assumed to enter hibernation mode and become inactive .given the initial battery level ] over frequency sub - channels and time block .meanwhile , we also notice from ( [ 1 ] ) that the network lifetime is closely related to the locations of the ens . in particular , the placement optimization of ens has been studied in wireless powered communication networks where the locations of the wds are fixed .in fact , the designs of transmit power allocation and en placement are complementary to each other in different time - scales .that is , en placement is designed in a large time - scale to deal with wireless signal path loss , while transmit power allocation is performed in a small time - scale to adapt to wireless channel fading and battery storage variation . in this paper, we assume that the placement of the ens is given and focus on the design of lifetime - maximizing charging control method over channel and battery dynamics .in this section , we analyze the impact of a charging control policy to the operating lifetime of wpt networks , defined as the duration until one of the wds is in energy outage . in particular , we denote as the expected network lifetime achieved by a charging policy , which specifies the transmit power allocation at each en and in each transmission block , and thus determines the harvested energy , for and . as a good chargingpolicy should perform consistently regardless of the en transmit power constraint . 
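As a reference point for the analysis that follows, the per-block bookkeeping of the system model just described — the ranked-vote feedback of a WD and its battery update with over-charge and outage — can be sketched as below. Variable names are ours; the outage threshold, capacity and consumption draw are generic inputs rather than the paper's specific values.

```python
import numpy as np

def cast_votes(gains, n_votes):
    """Ranked ballot of a WD: indices of its n_votes strongest sub-channels,
    ordered from strongest to weakest."""
    return list(np.argsort(gains)[::-1][:n_votes])

def update_battery(e_prev, harvested, consumed, capacity):
    """Residual energy at the end of a block: energy above the battery
    capacity is lost (over-charge), and the level cannot go below zero."""
    return min(max(e_prev + harvested - consumed, 0.0), capacity)

def in_outage(energy, outage_threshold):
    """A WD is in energy outage once its residual energy drops below the
    level needed to maintain normal operation."""
    return energy < outage_threshold
```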
to avoid trivial results , we assume that is sufficiently small , such that the expected network lifetime is finite regardless of the charging policy used , i.e. , where is the set of all feasible policies that satisfy the transmit power constraints .that is , the total energy harvesting rate of all the wds is always lower than the total charging - independent consumption rate , i.e. , < \sum_{k=1}^k \bar{\mu}_k ] ) , e.g. , wds far away from the ens , as they are rarely over - charged .instead , it will impact the battery dynamics of close - to - en wds with >\mathbb{e}[e_k^l] ] and = 10^{-3}c ] and = 10^{-3}c ] under different /\mathbb{e}[e_k^l] ] for each . in the following ,we construct a fair game and derive the expected network lifetime using the _ martingale _ stopping time theorem .the key idea of constructing a fair bet is to compensate the gamblers in each bet .we define a random process ] denotes the average amount of energy received by the -th wd in the -th transmission block given that the wds are in energy states at the beginning of the transmission block , where the average is taken over the realizations of wireless channel fading of all the sub - channels in the -th transmission block .similarly , ] in a bet if its balance is below in the previous bet and ] , and 2 ) = \mathbf{z}_l ] , , then =\mathbb{e}\left[\mathbf{z}_0\right] ] denote the initial total energy and the expected total residual energy when outage occurs , we have .\ ] ] we consider independent experiments of the repeated betting process , where is sufficiently large . by the law of large numbers , it holds that ,\ ] ] where is the stopping time of the -th experiment . by substituting ( [ 15 ] ) into ( [ 16 ] ) , the lhs of ( [ 16 ] ) can be further expressed as \\ & - \lim_{n\rightarrow \infty } \frac{1}{n}\sum_{i=1}^n \sum_{l=1}^{w_i } \left(1 - \mathbf{1}_{ilk}^{c}\right)\mathbb{e}\left[q_{k}^{i , l } \mid \mathbf{b}^{i , l}\right]\biggr\ } , \end{aligned}\ ] ] where the superscript of denotes the corresponding value in the -th experiment . denotes an indicator function that equals if in the -th experiment and otherwise . in particular , the first term in the rhs of ( [ 17 ] ) can be equivalently written as \\ = & \lim_{n\rightarrow \infty}\frac{\sum_{i=1}^n w_i}{n } \cdot \frac{\sum_{i=1}^n \sum_{l=1}^{w_i } \mathbb{e}\left[e_{k}^{i , l } \mid \mathbf{b}^{i , l}\right]}{\sum_{i=1}^n w_i } \\\triangleq & \ \mathbb{e}\left[w\right ] \mathbb{e}\left[e_{k}\right ] , \end{aligned}\ ] ] where ] denotes the mean energy consumption of wd in a transmission block averaged over all the realizations of battery state .similarly , the second term in the rhs of ( [ 17 ] ) can be written as \\ = & \lim_{n\rightarrow \infty}\frac{\sum_{i=1}^n w_i}{n } \cdot \biggl(\frac{\sum_{i=1}^n \sum_{l=1}^{w_i } \mathbb{e}\left[q_{k}^{i , l } \mid \mathbf{b}^{i , l}\right ] } { \sum_{i=1}^n w_i } \\ & \ \ - \frac{\sum_{i=1}^n \sum_{l=1}^{w_i}\mathbf{1}_{ilk}^{c}\mathbb{e}\left[q_{k}^{i , l } \mid \mathbf{b}^{i , l}\right]}{\sum_{i=1}^n w_i}\biggr)\\ \triangleq & \ \mathbb{e}\left[w\right]\cdot\left(\mathbb{e}\left[q_k\right ] - \mathbb{e}\left[q_k^c\right]\right ) . \end{aligned}\ ] ] where ] denotes the average amount of energy transferred to the -th wd , which , however , can not be harvested by the wd because of battery over - charge , i.e. , battery level is larger than or equal to the capacity . 
for the simplicity of exposition , we denote /\mathbb{e}\left[q_k\right] ] is always positive by assumption , as the total energy harvesting rate is smaller than the consumption rate , i.e. , .it is worth mentioning that the network lifetime expression in ( [ 4 ] ) assumes no specific setups , e.g. , the number of ens or wireless channel distribution , thus is applicable to any general wpt network . to prolong the network lifetime in ( [ 4 ] ) , a charging policy should produce 1 .high effective energy harvested by the wds , i.e. , ; 2 . low total residual energy upon energy outage ; 3 .low total energy consumption rates . for condition ) , the ens should maximize the _ energy efficiency _ of wireless energy transfer , i.e. , the energy received by the wds less by that wasted due to overcharging . therefore , a good charging policy should transfer as much energy as possible to the wds given that their current batteries are not fully charged .this indicates that the ens should assign lower priority to transmit energy to the wds that are close - to - capacity .however , maximizing energy efficiency does not translate to the low total residual energy upon outage as required in condition ) .intuitively , suppose that a tagged wd is close - to - outage , maximizing the total energy received by the wds may overlook the emergent energy requirement of the tagged wd , such that the large amount of energy harvested by the wds of moderate / high energy levels will translate to higher if the tagged wd dies out in the following transmission blocks due to the low energy harvesting rate .recall that holds , the average total residual energy of the wds decreases as the time elapses .therefore , to reduce , the ens should give priority to charging those close - to - outage wds to avoid imminent energy outage , which in fact advocates _ energy fairness _ among the wds .the ideal case is for all the wds to drain their batteries simultaneously right before outage , i.e. , . for the last condition ,the charging control design only affects s through designing the csi feedback mechanism over time .evidently , there is a design tradeoff in the amount of csi feedback . in general , setting larger s , i.e. , feeding back on more sub - channels , could allow the ens to have a better estimation of the sub - channel conditions , and thus better power allocation decisions .however , this also induces higher _ energy cost _ on transmitting the feedback signals , which can eventually offset the energy gain .therefore , we need to carefully design csi feedback to maximize the net energy gains of the wds . to sum up , a lifetime - maximizing charging control policy should be able to balance between energy efficiency , fairness and the induced energy cost . specifically , it should follow the design principles listed below to control the power transfer in a transmission block : * assign higher priority to charging wds that are close - to - outage , if any ; and assign lower priority to charging wds that are close - to - capacity , if any ; * maximize the total amount of energy transferred to the wds under the assigned priorities ; * set proper amount of csi feedback to maximize the net energy gains for the wds . in practical wpt networks ,the above mentioned terms , such as close - to - outage " and priority " , should be translated to realistic design parameters depending on the specific system setups , such as channel coherence bandwidth , transmit power limit and user energy consumption rate . 
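The lifetime expression in (4) did not survive extraction intact, so the following is a hedged reconstruction of its content from the derivation above: the expected number of blocks equals the total initial energy minus the expected total residual energy at outage, divided by the total per-block consumption minus the total per-block effective (not over-charged) harvested energy.

```python
def expected_lifetime_blocks(initial_energy, residual_at_outage,
                             mean_consumption, mean_effective_harvest):
    """Plausible reconstruction of Eq. (4), not a verbatim copy:
    E[W] = (total initial energy - expected total residual energy at outage)
           / (total per-block consumption - total per-block effective harvest).
    The denominator is positive under the standing assumption that
    harvesting cannot keep up with consumption.  Arguments are per-WD
    sequences of the corresponding averages."""
    num = sum(initial_energy) - sum(residual_at_outage)
    den = sum(mean_consumption) - sum(mean_effective_harvest)
    return num / den
```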
in the next section ,we apply the above design principles to study the transmit power allocation problem under the voting - based charging control framework introduced in section ii .in this section , we propose a voting - based distributed charging control policy , which includes the methods to assign weights to the votes , tally votes and allocate transmit power over frequency .we also propose a low - complexity protocol and discuss the practical design issues . for convenience of exposition, we drop the superscript in all notations as the index of the transmission block , and focus on one particular transmission block .recall that each en is aware of the bsi and ( partial ) csi from the voting - based feedback .each en can tally the votes to the sub - channels in , from which it can have a rough estimation of the en - to - wd channel conditions and allocate the transmit power .intuitively , a sub - channel should be allocated with more transmit power if it gets many high - ranked votes , because this indicates that larger total energy can be transferred to the wds that share the same strong sub - channel ( see principle in section iii.c ) .this implies that each vote should be weighted by the rank of vote among all the votes cast by the wd . besides , to reflect on the design principle ) in section iii.c, the weight of a vote should be higher ( or lower ) if the wd casts the vote is close - to - outage ( or close - to - capacity ) . as for the principle in section iii.c ,the number of votes cast by each wd should be reduced ( or increased ) whenever energy conservation is necessary ( or not urgent ) . from the above discussion, the weight assignment of the votes can be achieved through designing a weighting matrix , where is the number of battery states and denotes the maximum number of votes any wd can cast , i.e. , a wd can feed back at most channel indices .in particular , the number of votes that a wd can cast in any transmission block is determined by its current energy state .each entry indicates the positive weight of a vote if a wd that casts the vote is in the -th battery state and the vote is ranked the -th among all the votes cast by this wd .an example matrix is shown as below , here , we consider battery states , and assume that a wd in energy states feeds back sub - channel indices , respectively .notice that some entries can be set as zero , e.g. , and .with given in ( [ 5 ] ) , the vote cast by wd with rank has a weight .for instance , a vote ranked the among the votes cast by the wd in battery state is assigned a weight .we can see that the weight assignment method discussed above is consistent with the general design principles of wpt control : the charging priority of a wd is achieved through assigning higher ( lower ) weight to its vote if the wd is in lower ( higher ) battery state ; the charging efficiency is maximized through assigning higher ( lower ) weight to the entry in each row that corresponds to higher ( lower ) sub - channel gains ; while wds in different battery states can balance the energy gain and cost through feeding back different number of channel indices .the value of the weighting matrix has direct effect to the performance of the wpt network , which will be discussed in section iv.c . 
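The numerical entries of the example matrix in (5) were lost in extraction, so the matrix below is purely illustrative of the stated structure: rows are battery states from close-to-outage to close-to-capacity, columns are vote ranks, the number of non-zero entries in a row is the number of votes cast in that state, and row sums drop by roughly an order of magnitude from one state to the next.

```python
import numpy as np

# Hypothetical weighting matrix (values are NOT those of Eq. (5)):
# row b = battery state (1 = close-to-outage), column m = rank of the vote.
W = np.array([
    [100.0,  50.0,   0.0,  0.0],   # state 1: few, heavily weighted votes
    [ 10.0,   5.0,   2.5,  1.0],   # state 2: more votes, lighter weights
    [  1.0,   0.5,  0.25,  0.1],   # state 3: moderate/high energy level
    [  0.1,   0.0,   0.0,  0.0],   # state 4: close-to-capacity, one light vote
])

def vote_weight(battery_state, rank):
    """Weight of a vote cast by a WD in `battery_state` (1-indexed) whose
    rank among that WD's own votes is `rank` (1-indexed)."""
    return W[battery_state - 1, rank - 1]

def votes_allowed(battery_state):
    """Number of votes a WD in this battery state casts: the number of
    non-zero entries in the corresponding row of W."""
    return int(np.count_nonzero(W[battery_state - 1]))
```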
for the moment , we assume that is known by all the ens and study the associated transmit power allocation method in the next subsection .following the general design principles , there are multiple ways to design the power allocation function in ( [ 9 ] ) , depending on the methods used to tally the votes and accordingly allocate the power over frequency . here , we introduce two vote tallying methods and two power allocation methods , which can be combined to generate power allocation functions .specifically , the two vote tallying methods are given as follows first . 1 . _ universal tallying : _ each en tallies all the votes cast by the wds .specifically , based on the weighting matrix , each en can compute the weighted sum vote to the -th sub - channel in as \cdot w_{b_k , r_{j , k } } , \ j\in\mathcal{e}_i,\ ] ] where is the rank of sub - channel among the votes cast by wd , and ] is an indicator function with value if and otherwise .furthermore , with either of the above two vote tallying methods , the following two power allocation strategies can be applied . 1 ._ single - channel allocation : _ allocate all the power to the sub - channel that receives the highest weighted sum vote .specifically , the power allocated by the -th en to the -th sub - channel is where is given in ( [ 7 ] ) or ( [ 10 ] ) .the first case corresponds to the scenario that the sub - channels in receive no vote from the wds .as the -th en has no knowledge of the current wireless channel conditions , it allocates equally the transmit power among the sub - channels in . besides, if multiple sub - channels have the same weighted sum vote , we randomly pick one and allocate all the transmit power to it .proportional allocation : _ the transmit power of the -th en is allocated proportionally to the weighted sum vote received by each sub - channel in , i.e. , the above vote tallying and power allocation methods can find their deep roots in real - life politics .on one hand , the universal tallying corresponds to the universal suffrage system that everyone s vote counts , while the prioritized tallying is analogous to parliament election system , where only the parliament members ( prioritized voters ) get to vote , rather than the common public . on the other hand ,the single - channel power allocation is analogous to the winner - gets - all presidential election , while the proportional power allocation can be considered as the parliament election , where the number of seats that a party controls in the parliament is proportional to the votes it receives . in practice ,each of the vote tallying methods can be flexibly combined with the power allocation methods . however , as it is an inconclusive question to real - life politics of which form of election method is the best , for the time being we do not have a conclusion about which combination is lifetime - maximizing in wpt networks .instead , we address this question based on simulation results later in section vi .interestingly , we find by simulations that the single - channel power allocation achieves evident performance gain over the proportional power allocation , and the universal tallying can further improve the network lifetime performance . in the following ,we summarize the designs in this section as a voting - based distributed charging control protocol that operates in the following steps and is illustrated in fig .[ 103 ] : 1 . 
at the beginning of each transmission block ,each wd reports to the ens a one - bit information indicating the change of battery state , if any , from which all the ens knowthe bsi of all the wds , i.e. , ; 2 .each en sends pilot signals on its sub - channels in , .then , each wd , estimates its own sub - channel gains , denoted by s for .3 . each wd selects the strongest sub - channels from the sub - channels by ordering , , where is the number of non - zero entries in the -th row of the weighting matrix .then , each wd broadcasts the ordered indices of the sub - channels ( i.e. , ) .4 . based on s and s ,each en independently allocates transmit power according to a combination of the vote tallying and power allocation methods introduced in section iv.b .the wds harvest rf energy in the remaining transmission block .then , the iteration repeats from step .the proposed charging control protocol incurs little signaling overhead exchanged between the ens and the wds .specifically , each wd only needs to send out limited number of sub - channel indices based on the estimated channel gains and its own residual energy level , and broadcasts a simple one - bit bsi message only when its battery state changes .besides , the protocol has low computational complexity and requires no coordination among the ens .each en _ independently _ tallies the received votes to the sub - channels in , and computes its own power allocation using simple power allocation function as in ( [ 11 ] ) or ( [ 12 ] ) .the entries in are the key design parameters of the proposed voting - based charging control protocol . a point to noticeis that the value of only needs to be determined once throughout the entire network operating lifetime .in particular , we can design the value of in an offline manner and allow the ens to inform to all the wds at the very beginning of the network operation . in this sense ,the energy - limited wds do not bear any computational complexity in the design of .in the next subsection , we have some discussions on the design of .the design of includes : 1 ) the number of rows , i.e. , the number of battery states ( and the corresponding battery thresholds ) of the wds ; 2 ) the number of non - zero entries in each row ; and 3 ) the value of each non - zero entry . in practice ,setting the parameters of is an art under specific network setup , however , still has some rules to follow as discussed below . to begin with , using a larger number of battery statescan improve the ens knowledge of the residual device energy levels , thus achieving more accurate charging priority assignment of the wds .however , this also increases the frequency of the one - bit bsi feedback and accordingly the energy cost of the wds . in practice , setting a small number of battery states , e.g. , , would be sufficient to achieve satisfactory priority - based charging control . on the other hand , the thresholds of the battery states , i.e. 
, , are not necessarily uniform .in fact , setting denser thresholds at low battery region can help ens better identify the wd in the most urgent energy outage situation so as to arrange timely charging to it .secondly , the number of votes a wd casts is related to both the energy cost of sending a channel index feedback and its current battery state .the weighting matrix in ( [ 5 ] ) gives a good example to set the feedback amount of a wd in different battery states : the close - to - outage wds should only send very few channel feedbacks to save energy .however , the number of feedbacks can not be too small as well ( e.g. , votes in ( [ 5 ] ) ) , because more channel feedbacks allow multiple ens to allocate more transmit power in favor of it ; while those close - to - capacity wds are currently not in need of energy transfer , and thus only need to cast one vote to indicate its strongest sub - channel ; in between , the wds of moderate battery levels should feed back several sub - channels ( e.g. , votes in ( [ 5 ] ) ) to maximize the harvested energy without worrying too much about the cost on feedback or battery overcharging .finally , the values of non - zero s in the -th row should be larger than s in the -th row when to ensure that higher charging priority is given to wds with lower residual battery . in ( [ 5 ] ) , for instance , the sum of the row is times larger than that in the row , which is subsequently times larger than that in the row .the rationale is that the number of close - to - outage wds is often much smaller than those in moderate energy states .setting a much higher value for the entries in the lower energy states can make sure the votes cast by the close - to - outage wds are not overwhelmed by the many votes cast by the wds in higher energy states . within each row ,a larger portion of the sum row weight should be given to the first entry , to increase the power allocated to the best sub - channel . following the above discussions , the impact of on the network lifetime is evaluated by simulations in the next section ..simulation parameters [ cols="^,^,^,^",options="header " , ] [ stat ]in this section , we evaluate the performance of the proposed voting - based charging control protocol . in all simulations, we use the powercast tx91501 - 1w transmitter as the ens and p2110 powerharvester as the energy receiver at each wd with energy harvesting efficiency .unless otherwise stated , the simulation parameters are listed in table i , which correspond to a typical indoor sensor network .the weighting matrix is as given in ( [ 5 ] ) , where the threshold vector for the battery is .besides , we consider a stochastic energy consumption model that a wd consumes mw power with probability within a block , and no power with probability . in this case, the average power consumption rate is mw for each wd .we set the initial battery level of all wds as , such that the battery will be depleted in about hours without wpt .the wireless channel power gains follow exponential distributions with mean obtained from the path loss model . without loss of generality, we consider a network fails if more than wds are in energy outage .unless otherwise stated , all simulations are performed in a simple -en wpt network shown in fig . [ 104 ] , where the ens are located at . in particular , wds are randomly placed within a circle of radius centered at each en , i.e. , wds . 
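Before turning to the numerical results, the two vote tallying rules and two power allocation rules of Section IV.B can be summarized in code. The sketch assumes the weighting matrix W and per-WD ballots introduced above; for prioritized tallying we interpret the prioritized voters as the WDs currently in the lowest occupied battery state, which is our reading of the (partially garbled) description.

```python
def tally(ballots, states, W, subchannels, prioritized=False):
    """Weighted sum vote for each sub-channel of one EN.

    ballots[k] : ordered list of this EN's sub-channel indices voted by WD k
    states[k]  : current battery state of WD k (1 = lowest)
    Universal tallying counts every WD; prioritized tallying (our reading)
    counts only WDs in the lowest occupied battery state.
    """
    lowest = min(states.values())
    score = {j: 0.0 for j in subchannels}
    for k, ballot in ballots.items():
        if prioritized and states[k] != lowest:
            continue
        for rank, j in enumerate(ballot, start=1):
            if j in score:
                score[j] += W[states[k] - 1, rank - 1]
    return score

def allocate(score, p_total, single_channel=True):
    """Single-channel rule: all power to the highest-scoring sub-channel
    (equal split if no votes were received; ties are broken arbitrarily here,
    whereas the paper picks one at random).
    Proportional rule: power proportional to each sub-channel's score."""
    total = sum(score.values())
    if total == 0.0:
        return {j: p_total / len(score) for j in score}
    if single_channel:
        best = max(score, key=score.get)
        return {j: (p_total if j == best else 0.0) for j in score}
    return {j: p_total * s / total for j, s in score.items()}
```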
in general, a larger indicates a larger disparity among the users in the wireless channel conditions , and also a larger distance between the wds to the ens , which will translate to a shorter network lifetime in general .ens and wds.,scaledwidth=45.0% ] for performance comparison , we consider the four power allocation functions from the combinations of the two vote tallying and two power allocation methods described in section iv.b : * singl - univ : single - channel power allocation based on universal vote tallying ; * singl - prio : single - channel power allocation based on prioritized vote tallying ; * propo - univ : proportional power allocation based on universal vote tallying ; * propo - prio : proportional power allocation based on prioritized vote tallying .besides , we also consider five other representative benchmark schemes : * eqlpower : power is equally allocated to all the sub - channels by each en ; * singl - unwt : the power of each en is all allocated to the single sub - channel that receives the most number of votes , i.e. , the votes are unweighted ; * propo - unwt : power is allocated proportionally to the number of unweighted votes that each sub - channel receives at each en ; * singl - greedy : each greedy user votes for only the best sub - channel , and the power of each en is all allocated to the single sub - channel that receives the most number of votes ; * propo - greedy : each greedy user votes for only the best sub - channel , and power is allocated proportionally to the number of votes that each sub - channel receives at each en . because the eqlpower scheme is completely oblivious to csi , we assume that of the time is used for wpt without any signaling overhead . for fair comparison ,we assume that the other schemes use the same csi feedback mechanism , where of the time is spent on sending pilot signals , of the time is spent on csi feedback , and the rest is for wpt .we first verify the analysis of expected network lifetime expression derived in ( [ 4 ] ) . for the simplicity of exposition, we consider without loss of generality the eqlpower , singl - univ , and propo - univ schemes , and compare in fig .[ 110 ] their average network lifetime by simulations and analysis under different en transmit power . to be consistent with the analysis in section iii, we define that a network reaches its lifetime if any wd is in energy outage .we consider a specific realization of the random placement of the wds in fig .each point in the figure is an average of independent simulations .we can see that all the analytical results are very close to the simulations .in general , the analysis underestimates the network lifetime , as the second modification of the battery dynamic overestimates the battery levels , thus leading to larger in ( [ 4 ] ) .the analysis is especially accurate when the transmit power is small , and becomes less accurate as the transmit power increases because of the increase of over - charging probability .overall , the average difference between the analysis and simulation is less than of the simulation value , which verifies the validity of our analysis in ( [ 4 ] ) . in fig .[ 105 ] , we plot the average network lifetime achieved by different power allocation functions . 
unless otherwise stated , each point in the figure is an average performance of random placements of the wds , and the lifetime of a particular placement is an average of independent simulations over random wireless channels and device power consumptions .for all the schemes , the network lifetime decreases as increases , as expected .we can see that significant frequency diversity gain can be achieved from power allocation , where the channel - oblivious eqlpower scheme has the worst performance .meanwhile , under the same vote - tallying method , a scheme that employs single - channel power allocation achieves evidently longer lifetime than using proportional power allocation .one explanation is that the single - channel power allocation can maximize the energy transferred to a particular wd in the current time slot , which is more effective to avoid energy outage for the wds in urgent battery outage situations .meanwhile , we can also see that each en should tally the votes from all the wds ( instead of only the wds in the lowest battery state ) , where singl - univ performs better than the singl - prio scheme .besides , a wd should cast multiple votes when it is in need of energy , where the two greedy user schemes ( singl - greedy and propo - greedy ) perform poorly . in addition , the schemes using weighted votes ( singl - univ and singl - prio ) based on csi and bsi feedbacks have much better performance than the one using unweighted votes ( singl - unwt ) .in particular , the best - performing singl - univ method achieves on average longer lifetime than the propo - univ scheme , and over longer lifetime than the eqlpower scheme .the simulation results reveal an interesting finding in wpt networks that transmit power should be allocated to the best sub - channel .in fact , this is consistent with the energy - optimal power allocation solution in point - to - point frequency - selective channel , a special case of the multi - en and multi - wd system considered . besides, the selection of the best sub - channel should consider the votes from all the wds . in fig .[ 106 ] , we plot the minimum transmit power required by each en to achieve nearly - perpetual network operation . for the simplicity of illustration, we consider three representative schemes : singl - univ , propo - univ , and eqlpower . due tothe randomness of channel fading and energy consumptions , it is not possible to truly sustain perpetual network operation . here , a wpt system is said nearly - perpetual if the network lifetime is longer than hours in all the independent simulations conducted .for each , we randomly generate placements and calculate the average minimum transmit power required for each of the placements .the best - performing ( worst - performing ) singl - univ ( eqlpower ) scheme in fig .[ 105 ] also require the lowest ( highest ) transmit power to achieve nearly - perpetual operation in fig . [ 106 ] .in particular , the singl - univ scheme can save more than of the transmit power than that required by the eqlpower scheme .the results in figs . [ 105 ] and [ 106 ] demonstrate the effectiveness of the proposed voting - based charging control method in extending the network lifetime , and shows that a scheme that achieves a longer network lifetime under low transmit power is in general also more power - efficient to achieve self - sustainable operation in practical wpt networks with higher power . 
in this subsection, we use the best - performing singl - univ scheme to investigate the impact of the weighting matrix on the system performance .in particular , we first examine the network lifetime when changing the value of for those .specifically , we keep a fixed number of non - zero entries in and only change the values of s . for the simplicity of illustration, we consider a weighting matrix as a function of power exponent as follows : notice that the weight matrix in ( [ 5 ] ) corresponds to the case with in ( [ 13 ] ) .evidently , a larger will lead to a larger difference of weights of the votes cast by wds in different battery states . in fig . [ 107](a ) , we plot the average network lifetime as a function of when the cluster radius or meters .we can see that the lifetime decreases when increase from to .intuitively , this is because assigning very large weights to the votes cast by wds in lower battery states essentially approaches the worse - performing prioritized tallying method , where votes cast by wds in higher battery states are neglected .however , the simulation results in fig . [ 107](a ) do not imply that a small weight is more favorable for the votes cast by wds of low battery state .instead , we can infer that the setting of needs to balance between the energy efficiency and fairness among all the wds . ) on network lifetime performance when : ( a ) the power exponent changes ; or ( b ) the amount of csi feedback amount changes.,scaledwidth=50.0% ] at last , we investigate the performance tradeoff in terms of the csi feedback amount , i.e. , the number non - zero entries in .specifically , we consider a of rows ( i.e. , fixed battery states ) and varying number of columns ( i.e. , variable feedback amount ) .the non - zero elements in the and -th rows of are the same as those of in ( [ 5 ] ) , while the non - zero elements in the first and the second rows are set as and respectively , in different feedback designs .that is , a wd casts vote when it is in the -th battery state , votes in the battery state , and a variable number of votes from to when it is in the or battery states .the in ( [ 5 ] ) correspond to the case when the wds cast votes in the first two battery states .the power consumed on transmitting each vote is mw . besides , the time reserved on csi feedback is assumed proportional to the maximum of non - zero entries among the rows in .for instance , when the feedback number is for wds in the or battery state , the csi feedback occupies of a transmission block , because a wd casts votes at maximum when it is in the battery state ; when the maximum feedback number is , however , we have . 
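The parametrization in (13) is not recoverable from this text; one plausible family consistent with the description — a larger power exponent widens the gap between the weights of different battery states while the within-row structure is kept fixed — is sketched below. This is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def weighting_matrix(gamma, base_row=(1.0, 0.5, 0.25, 0.1), n_states=4):
    """Hypothetical W(gamma): every battery state uses the same within-row
    rank weights, scaled by 10**(gamma * (n_states - b)) so that lower
    battery states (smaller b) receive exponentially larger weights as gamma
    grows.  With gamma = 1 the rows differ by roughly an order of magnitude,
    in the spirit of the example matrix (5)."""
    rows = [np.array(base_row) * 10.0 ** (gamma * (n_states - b))
            for b in range(1, n_states + 1)]
    return np.vstack(rows)
```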
the network lifetime performance under different feedback settings is shown in fig .[ 107](b ) when or meters .we can see that the network lifetime increases when the feedback amount increases from to , indicating that the energy gain obtained from more refined csi feedback outweighs the extra energy cost on sending more csi feedbacks .however , as we further increase the feedback amount , the network lifetime decreases mainly because of the extra time consumed on sending csi feedback to the ens , which leaves less time for wpt transmission in a transmission block .we can therefore infer that a proper feedback amount should be selected .meanwhile , because the best - performing singl - univ scheme allocates transmit power to only one sub - channel for each en , the feedback amount should be set small to increase the chance of the strongest sub - channel being selected for transmission and also reduce the energy cost due to less wpt time resulted .in this paper , we proposed a voting - based distributed charging control framework in multi - en broadband wpt networks , to exploit frequency diversity gain to maximize the network operating lifetime .the proposed voting - based channel feedback mechanism is especially suitable for wireless powered devices with simple hardware structure and stringent battery constraint . under the proposed framework , we studied the power allocation method and efficient csi / bsi feedback design over multiple sub - channels .in particular , we derived the expected network lifetime of general wpt networks to draw the guideline of designing practical lifetime - maximizing charging control policies , and proposed accordingly efficient voting - based power allocation schemes with battery - state dependent csi feedbacks .the effectiveness of the proposed methods in extending the network lifetime has been verified by extensive simulations .interestingly , we found that superior lifetime performance is achievable by allocating all the transmit power of each en to the best sub - channel that receives the highest weighted sum vote from all the wds , instead of spreading the transmit power over multiple sub - channels .practical system design issues were also discussed and examined through simulations .s. ulukus , a. yener , e. erkip , o. simeone , m. zorzi , p. grover , and k. huang , energy harvesting wireless communications : a review of recent advances , " _ ieee j. sel .areas commun ._ , vol .33 , no .3 , pp . 360 - 381 , mar .i. krikidis , s. timotheou , s. nikolaou , g. zheng , d. w. k. ng , and r. schober , simultaneous wireless information and power transfer in modern communication systems , " _ ieee commun . mag ._ , vol .52 , no . 11 , pp . 104 - 110 , nov . 2014. y. zeng and r. zhang , optimized training for net energy maximization in multi - antenna wireless energy transfer over frequency - selective channel , " _ ieee trans ._ , vol .63 , no . 6 , pp . 2360 - 2373 , jun . 2015 .s. bi and r. zhang , placement optimization of energy and information access points in wireless powered communication networks , " _ ieee trans .wireless commun ._ , vol . 15 , no . 3 , pp .2351 - 2364 , mar .2016 .h. chen , y. li , j. l. rebelatto , b. f. uchoa - filho , and b. vucetic , harvest - then - cooperate : wireless - powered cooperative communications , " in _ ieee trans .signal process ._ , vol .63 , no . 7 , pp . 1700 - 1711 , feb . 2015 .x. zhou , c. k. ho , and r. 
zhang , wireless power meets energy harvesting : a joint energy allocation approach in ofdm - based system , " _ ieee trans . wireless commun . _ , vol . 15 , no . 5 , pp . 3481 - 3491 , may 2016 .
wireless power transfer ( wpt ) technology provides a cost - effective solution for sustainable energy supply in wireless networks , where wpt - enabled energy nodes ( ens ) can charge wireless devices ( wds ) remotely without interrupting their normal operation . however , in a heterogeneous wpt network with distributed ens and wds , some wds may quickly deplete their batteries for lack of timely wireless power supply from the ens , resulting in a short network operating lifetime . in this paper , we exploit frequency diversity in a broadband wpt network and study distributed charging control by the ens to maximize the network lifetime . in particular , we propose a practical voting - based distributed charging control framework in which each wd simply estimates the broadband channel , casts its vote(s ) for some strong sub - channel(s ) and sends them to the ens along with its battery state information , based on which the ens independently allocate their transmit power over the sub - channels without the need for centralized control . under this framework , we aim to design lifetime - maximizing power allocation and efficient voting - based feedback methods . towards this end , we first derive a general expression for the expected lifetime of a wpt network and draw general design principles for lifetime - maximizing charging control . based on the analysis , we then propose a distributed charging control protocol with voting - based feedback , where the power allocated to the sub - channels at each en is a function of the weighted sum vote received from all wds , and where the number of votes cast by a wd and the weight of each vote depend on its current battery state . simulation results show that the proposed protocol can significantly increase the network lifetime under a stringent transmit power constraint in a broadband wpt network ; conversely , it requires lower transmit power to achieve nearly - perpetual network operation . wireless power transfer , distributed charging control , network lifetime , broadband network .
participatory web sites facilitate their users creating , rating and sharing content .examples include digg[.com ] for news stories , flickr[.com ] for photos and wikipedia[.org ] for encyclopedia articles .to aid users in finding content , many such sites employ collaborative filtering to allow users to specify links to other users whose content or ratings are particularly relevant .these links can involve either people who already know each other ( e.g. , friends ) or people who discover their common interests through participating in the web site . in addition to helping identify relevant content , the resulting networks enable users to find others with similar interests and establish trust in recommendations .the availability of activity records from these sites has led to numerous studies of user behavior and the networks they create .observed commonalities in these systems suggest general generative processes leading to these observations .examples include preferential attachment in forming networks and multiplicative processes leading to wide variation in user activity .while such models provide a broad understanding of the observations , they often lack causal connection with plausible user behaviors based on user preferences and the information available to users in making their decisions .moreover , observed behavior can arise from a variety of mechanisms . for predicting consequences of alternate designs of the web site ,models including causal behavior are necessary .establishing such models is more difficult than simply observing behavior : due to the possibility of confounding factors in observations , many different causal models can produce the same observations . instead , such models would ideally use intervention studies and randomized trials to identify important causal relationships .in contrast to the wide availability of observational data on user behavior , such intervention studies are difficult , though this is situation is improving with the increasing feasibility of experiments in large virtual communities and large - scale web - based experiments . nevertheless , identifying information readily available to users on a participatory web site can suggest plausible causal mechanisms .such models provide specific hypotheses to test with future intervention experiments and also suggest improvements to overall system behavior by altering the user experience , e.g. , available information or incentives .the simplest such approach considers average behavior of users on a site .such models can indicate how system behavior relates to the average decisions of many users . by design, such models do not address a prominent aspect of observed online networks : the long tails in their distributions of links and activity .models including this diversity could be useful to improve effectiveness of the web sites by allowing focus on significantly active users or especially interesting content , and enhancing user experience by leveraging the long tail in niche demand . a key question with respect to the observed diversity is whether users , content and the networks are reasonably viewed as behaviors arising from a statistically homogeneous population , and hence well - characterized by a mean and variance . or is diversity of intrinsic characteristics among participants the dominant cause of the observed wide variation in behaviors ? 
in the latter case , can these characteristics be estimated ( quickly ) from ( a few ) observations of behavior , allowing site design to use estimates of these characteristics , e.g. , to highlight especially interesting content ? moreover , to the extent user diversity is important , what is a minimal characterization of this user variation sufficient to produce the observed long - tail distributions ?this paper considers these questions in the context of a politically - oriented web community , essembly . unlike most such sites ,essembly provides multiple networks with differing nominal semantics , which is useful for distinguishing among some models .we consider plausible mechanisms users could be following to produce the observed long - tail behaviors both in their online activities and network characteristics . in the remainder of this paper , we first describe essembly and our data set in sec .[ sect.essembly ] .we then separately examine highly variable behaviors for users , content rating and network formation , in secs .[ sect.users ] , [ sect.resolves ] , and [ sect.links ] , respectively .we suggest models to describe the observed characteristics of users , content , and the network , and consider their possible use during operation of the web site by helping identify user and content parameters early in their history .finally we discuss implications and extensions to other participatory web sites . in the three sections focusing on user behavior ,resolve characteristics , and network structure , respectively , we first introduce the observations , then present a model describing these observations ( subsections _ model _ ) , and finally analyze the model parameters and predictions ( subsections _ behavior _ ) .[ sect.essembly ] essembly is an online service helping users engage in political discussion through creating and voting on _ resolves _ reflecting controversial issues .essembly provides three distinct networks for users : a social network , an ideological preference network , and an anti - preference network , called _ friends _ ( those who know each other in person ) , _ allies _ ( those who share similar ideologies ) and _ nemeses _ ( those who have opposing ideologies ) , respectively .the distinct social and ideological networks enable users to distinguish between people they know personally and people encountered on the site with whom they tend to agree or disagree .network links are formed by invitation only and each link must be approved by the invitee .thus all three networks in essembly are explicitly created by users .essembly provides a ranked list of ideologically most similar or dissimilar users based on voting history , thus users can identify potential allies or nemeses by comparing profiles . with regards to voting activity, the essembly user interface presents several options for users to discover new resolves , for instance based on votes by network neighbors , recency , overall popularity , and degree of controversy . our data set consists of anonymized voting records for essembly between its inception in august 2005 and december 2006 , and the users and the links they have at that time in the three networks at the end of this period .our data set has users .essembly presents 10 resolves during the user registration process to establish an initial ideological profile used to facilitate users finding others with similar or different political views . 
to focus on user - created content , we consider the remaining resolves , with a total of million votes .[ sect.users ] distribution of activity times for users .the line shows an exponential fit to the values between 10 and 200 days , proportional to where . ] distribution of number of users vs. the number of votes a user made .the solid curve indicates a zipf distribution fit to the values , with parameter . in this and other figuresthe range given with the parameter estimate is the confidence interval .the plot does not include the users with zero votes . ] fig .[ fig.activity time ] shows most users are active for only a short time ( less than a day ) , as measured by the time between their first and last votes ( this includes votes on the initial resolves during registration users need not vote on all of them immediately ) .the 4762 users active for at least a day account for most of the votes and links , and we focus on these _ active users _ for our model . for these users , fig .[ fig.activity time ] shows an exponential fit to the activity distribution for intermediate times .thus users who have sufficient interest in the system to participate for at least a few days behave approximately as if they decide to stop participating as a poisson process .the additional decrease at long times ( above 200 days or so ) is due to the finite length of our data sample ( about 500 days ) . about a fifth of usershave no votes on noninitial resolves .for the rest of the users , fig .[ fig.votes per user ] shows the distribution of votes among users who voted at least once for noninitial resolves .these votes are close to a zipf distribution in number of votes , with number of users with votes proportional to .the parameter estimates and confidence intervals in this and the other figures are maximum likelihood estimates assuming independent samples .this wide variation in user activity also occurs in other participatory web sites such as digg .the distribution of number votes per user arises from two factors : how long users participate before becoming inactive , and how often they vote while active .model of user behavior .people join the site as active users , who _ create _ resolves , _ vote _ on them and _ link _ to other active users .users can eventually stop participating and become inactive.,width=288 ] fig .[ fig.user model ] summarizes our model for user behavior .this models the participation of users and their activities on the site while they are active .new users arrive in the system when they register , and we model this as a poisson process with rate , and such users leave the system ( _ i.e. _ , become inactive ) with a rate .table [ table.user parameters ] gives the values for these model parameters based on average arrival and activity times of active users ..[table.user parameters]user activity parameters . [ cols="<,<",options="header " , ] user activities consist of _ voting _ , _ creating resolves _ , and _forming links_. user activity is clumped in time , with groups of many votes close in time separated by gaps of at least several hours .this temporal structure can be viewed as a sequence of user sessions .the averaged distributions for interevent times between activities of individuals show long - tail behavior , similarly to other observed human activity patterns , such as email communications or web site visits . 
to model the number of votes per user in the long time limit where we are only interested in the total number of accumulated votes for a particular user ,this clumping of votes in time is not important .specifically we suppose each user has an average activity rate while they are active on the site ( cf .[ fig.activity time ] ) , given as , where is user s activity , is her number of events ( i.e. , votes , resolve creations and links ) , and is the time elapsed between her first and last vote . we suppose the values arise as independent choices from a distribution and the values are independent of the length of time a user is active on the site. these properties are only weakly correlated ( correlation coefficient among active users ) .we characterize user activities by fractions and for creating resolves and forming links , respectively .the rate of voting on existing resolves for a user is then , which is by far the most common of the three user activities . for simplicity , we treat these choices as independent and take and to be the same for all users .thus in our model , the variation among users is due to their differing overall activity rates and amount of time they are active on the site .cumulative distribution of activity rates , , for the 4719 users who were active at least one day and voted on at least one noninitial resolve or formed at least on link .plot includes a curve for a lognormal distribution fit , which is indistinguishable from the points and with parameters and .the values are in units of actions per day . ]we estimate the model parameters from the observed user activities , and restrict attention to active users .table [ table.user parameters ] shows the estimates for parameters , and , governing activity choices .[ fig.rho estimates ] shows the observed cumulative distribution values and a fit to a lognormal distribution .the heavy tailed nature of the votes per user distribution ( fig .[ fig.votes per user ] ) can be attributed to the interplay between the user activity times and the broad lognormal distribution of the user activity rates : the mixture of these two distributions results in a power law , as has been shown in the context of web page links as well .the distributions of activity times and rates presumably reflect the range of dedication of users to the site , where most users are trying the service for a very limited time but active users are also represented in the heavy tail .such extended distributions of user activity rates is also seen in other activities , including use of web sites , e.g. , digg , and scientific productivity .[ sect.resolves ] a key question for user - created content is how user activities distribute among the available content . for essembly ,[ fig.votes per resolve ] shows the total number of votes per resolve .this distribution covers a wide range , with some resolves receiving many times as many votes as the median . inessembly , each resolve receives its first vote when it is created , i.e. 
, the vote of the user introducing the resolve .thus the observed votes on a resolve are a combination of two user activities : creating a new resolve ( giving the resolve its first vote ) and subsequently other users choosing to vote on the resolve if they see it while visiting the site .we note that users do not see the distribution of previous votes until they cast their votes , so that their judgement is unbiased .after voting , they can see how other users had voted on the resolve .distribution of votes on resolves .the solid curve indicates a double pareto lognormal fit to the values , with parameters , , and . ]we consider a user s selection of an existing resolve to vote on as mainly due to a combination of two factors : visibility and interestingness of a resolve to a user .visibility is the probability a user finds the resolve during a visit to the site .interestingness is the conditional probability a user votes on the resolve given it is visible to that user .these two factors apply to a variety of web sites , e.g. , providing a description of average behavior on digg .the design of the web site s user interface determines content visibility .typically sites , including essembly , emphasize recently created content and popular content ( i.e. , receiving many votes over a period of time ) .essembly also emphasizes controversial resolves . as with other networking sites ,the user interface highlights resolves with these properties both globally and among the user s network neighbors .users can also find resolves through a search interface .while we can not observe which resolves people click on , we do register when they vote on them , and thus find it interesting enough to warrant spending time to consider them . in a similar vein, clickthrough rates have been extensively investigated in the context of web search and search engine optimization .web search engines strive to provide users with relevant results to their queries , and rank the matching documents in reverse order of perceived importance to the searcher . however , due to the fact that search queries are not well defined and several possible optimal results may match a user s request , it is not always the top ranked result that is most relevant to the user .search engine logs provide data on which results users click on for given queries , and thus can reveal users implicit relevance judgements to their searches .it has been shown , however , that the probability of clicking on a given result is biased by the presentation order , thus a result with the same relevance as another but appearing in a higher poisition may get more clicks ( this is also called `` trust bias '' ) .eyetracking experiments have also shown that users scan through search results in a linear order from top to bottom , which further explains why results on the top are clicked with a larger probability .clickthroughs are analogous to votes cast on resolves in essembly , indicating a preference on the part of the user for the given item found for the query during a web search , and the resolve voted on in essembly , respectively .predictive models have been developed to compensate for position bias and to offset it to reveal the true relevance of the search results for the users . 
in essembly, recency appears to be the most significant factor affecting visibility , in a very similar manner to how search engine users perceive the ranked results .[ fig.votes vs age ] shows how votes distribute according to the age of the resolve at the time of the vote .we define the _ age _ of the resolve as the ordinality of the given resolve among resolves introduced in time .an age 1 resolve is the newest one of the resolves introduced , while the oldest resolve has age where is the number of resolves .most votes go to recent resolves with a small age .distribution of votes vs. age of a resolve . ]the decay in votes with age is motivated by recency ( decreasing visibility with age as resolve moves down , and eventually off , the list of recent resolves ) .we offer no underlying model for this `` aging function '' but its overall power - law form corresponds to users willingness to visit successive pages or scroll down a long list .the step at age 50 is , presumably , due to a limit on number of recent resolves readily accessible to users .the values decrease as a power law , proportional to , where is resolve age and up to about age 50 . for larger ages , the values in fig .[ fig.votes vs age ] decreases faster , with .it has also been found that in search engine result pages the probability of clicking on a result also decreases with the rank of the result as a power law , albeit with a different exponent ( ) .the combination of different ages in the data sample is a significant factor in producing the observed distributions . in particular , a distribution of ages and a multiplicative process produces a lognormal distribution with power - law tails , the double pareto lognormal distribution , with four parameters .two parameters , and characterize the location and width of the center of the distribution . the remaining parameterscharacterize the tails : for the power - law decay in the upper tail , with number of resolves with votes proportional to , and for the power - law growth in the lower tail , with number of resolves proportional to .[ fig.votes per resolve ] shows a fit of this distribution to the numbers of votes different resolves received .for essembly , the networks have only a modest influence on voting . our model of resolve creation , described in sec .[ sect.users ] , involves a fraction of each user s activity on the site , on average , giving each resolve its first vote . for subsequent votes, we view a user s choice of resolve as due to an intrinsic _ interestingness _ property of each resolve and its visibility. in general could depend on the resolve age and its popularity ( especially among network neighbors , if neighbors influence a user to vote rather than just make a resolve more visible ) .however , for simplicity , we take to be constant for a resolve . a key motivation for this choice is the observation that high or low rates of voting on a resolve tend to persist over time , when controlling for the age and number of votes the resolve already has .thus the importance of an intrinsic interestingness property of resolves is a reasonable approximation for essembly ( as discussed further in sec .[ sect.online estimation ] ) .we further assume is independent of the user , which amounts to considering general interest in resolves among the population rather than considering possible niche interests among subgroups of users . 
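the aging function described above ( a power - law decay in visibility with a steeper decay beyond a break near age 50 ) can be written down as a simple piecewise form . the exponents and the break point in the sketch below are illustrative assumptions standing in for the fitted values quoted in the text ; in the model itself the function is estimated from the data rather than imposed .

import numpy as np

def aging_function(a, a_break=50, exp_young=1.0, exp_old=2.0):
    # illustrative piecewise power-law visibility f(a); exponents and break point are assumptions
    a = np.asarray(a, dtype=float)
    young = a ** (-exp_young)
    old = a_break ** (exp_old - exp_young) * a ** (-exp_old)   # matched at a = a_break for continuity
    return np.where(a <= a_break, young, old)

print(aging_function([1, 10, 50, 100, 500]))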
with these simplifications ,we take the values to arise as independent choices from a distribution .visibility of a resolve depends on age , rank in number of votes compared with other resolves ( popularity ) , controversy , both in general and among user s neighbors . for essembly ,resolve age appears to be the most significant factor , so we take visibility to be a function of age alone , as determined by a function . with these factors , we model the chance that the next vote on existing resolves goes to resolve as being proportional to where is the age of the resolve at the time of the vote .the model s behavior is unchanged by an overall multiplicative constant , and we arbitrarily set .[ sect.resolves behavior ] we would like to estimate the distribution and the aging function . to do so , we consider the votes ( other than the first vote on each resolve ) between successive resolve introductions .specifically , let be the number of resolves in our data sample .we denote the resolves in the order they were introduced , ranging from 1 to .let us assume that there have resolves been introduced in essembly up to a given time , and let be the number of votes made in the time interval between the introductions of resolves and ( not including the two votes accompanying those resolve introductions ). during this interval , the system has existing resolves as assumed .when the number of existing resolves is large , we can treat the votes going to each resolve as approximately independent . in this case , the number of votes resolve receives during time interval is a poisson process with mean because during this interval resolve is of age .table [ table.r estimate ] illustrates these relationships . we estimate the and values as those maximizing the likelihood of getting the observed numbers of votes on the resolves in these time intervals , coming from independent poisson distributions .this maximization does not have a simple closed form , but setting derivatives with respect to these parameters to zero does give simple relations between these values at the maximum : where is the number of votes resolve has received , is the number of votes made to resolves of age at the time of the vote , in both cases excluding the initial vote to each resolve .the resulting estimates from the numerical solution are similar to the distribution of votes vs. age in fig .[ fig.votes vs age ] , and fig. [ fig.r estimates ] shows the distribution of estimated values and a lognormal fit .cumulative distribution of values for the resolves as obtained from a maximum likelihood estimate for the observed data .the curve shows a lognormal distribution fit , with parameters and . ] with the wide variation in values for resolves and the activity rates for users ( fig .[ fig.rho estimates ] ) , a natural question is whether these variations are related . in particular , whether the most active users tend to preferentially introduce resolves that are especially interesting to other users . while active users tend to introduce more resolves overall , the correlation between the activity rate of a user and the average values of the resolves introduced by that user is small : .we find a modest correlation ( ) between the _ time _ a user is active on the site and the mean values of that user s introduced resolves . to relate this model to the vote distribution of fig .[ fig.votes per resolve ] , consider the votes received by resolve up to and including the time it is of age . 
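before relating the model to the overall vote distribution ( as done next ) , it is worth noting that the maximum likelihood relations above lend themselves to a simple alternating fixed - point iteration . the sketch below is one plausible implementation of that scheme under the stated poisson model , with an assumed indexing of resolve ages ; it is meant as an illustration , not as the exact procedure used in the paper .

import numpy as np

def estimate_r_and_f(votes, n_iter=200):
    # alternating fixed-point estimation of resolve interestingness r and aging function f.
    # votes[k, n] is the number of votes resolve k received during interval n, with nan where
    # resolve k did not yet exist; the age of resolve k during interval n is taken to be
    # a = n - k + 1 (an assumed indexing consistent with the description above).
    K, N = votes.shape
    exists = ~np.isnan(votes)
    v = np.where(exists, votes, 0.0)
    ages = np.arange(N)[None, :] - np.arange(K)[:, None] + 1
    r = np.ones(K)
    f = np.ones(N + 1)            # f[a] for ages a = 1 .. N
    for _ in range(n_iter):
        for a in range(1, N + 1):
            mask = exists & (ages == a)
            denom = (r[:, None] * mask).sum()
            if denom > 0:
                f[a] = (v * mask).sum() / denom        # votes cast at age a per unit of exposed interestingness
        for k in range(K):
            denom = f[ages[k, exists[k]]].sum()
            if denom > 0:
                r[k] = v[k, exists[k]].sum() / denom   # votes on resolve k per unit of visibility
        if f[1] > 0:
            f /= f[1]             # fix the overall scale, f(1) = 1
    return r, f[1:]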
according to our model , the number of votes , _ other than _ its first vote, this resolve receives is a poisson variable with mean at the end of our data set , resolve is of age .the persistence of votes on resolves based on the wide variation of values among resolves gives rise to a multiplicative process with decay . to see this , in our model the number of votes between successive resolve introductions is geometrically distributed with mean , from fig . [ fig.votes vs age ] , the aging function is approximately power law , with and .the expected number of votes up to age is then . after accumulating many votes ( i.e. , when is large ) , the actual number of votes will usually be close to this expected value .the change in votes to age is where is a nonnegative random variable with mean .thus , except possibly for the votes a resolve receives shortly after its introduction , the growth in number of votes is well - described by a multiplicative process with decay . that our model corresponds to a multiplicative process has two consequences .first , a sample obtained at a range of ages from a multiplicative process ( with or without decay ) leads to the double pareto lognormal distribution seen in fig .[ fig.votes per resolve ] . in our case, the sample has a uniform range of ages from 1 to , though with the decay older resolves accumulate votes more slowly than younger ones .a second consequence arises from the decay as resolves become less visible over time .thus our model provides one mechanism using locally available information giving rise to dynamics governed by multiplicative random variation with decay .a similar process arises if the decay is due to any combination of decreasing interest in the content and loss of visibility with age , e.g. , as seen in sites such as digg with current events stories that become less relevant over time .ratio of means of estimates for resolves receiving votes at or after various ages to the estimates for all resolves of those ages .error bars indicate the standard error in the ratio of means from the standard deviation of the values and the number of resolves in each category . ] as one indication of the diversity of voting on resolves , fig .[ fig.avg r ratio by vote age ] shows how the average value for resolves receiving votes compares to the average for all resolves among those at least a given age .randomization tests indicate average values of resolves receiving votes are unlikely to be the same as those of all resolves at each of these ages , with -values less than in all cases . with increasing age ,resolves continuing to receive votes tend to be those with especially high values .this behavior indicates that high interestingness estimates for resolves persist over time , as a small subset of resolves continue to collect votes well after their introduction .[ sect.links ] users decisions of who to link to and how they attend to the behavior of their neighbors can significantly affect the performance of participatory web sites .a common property of such networks is the wide range in numbers of links made by users , i.e. 
, the degree distribution of the network . the structure of the networks is typical of those seen in online social networking sites , and the links created by users generally conform to their nominal semantics . the degree distributions in all three essembly networks are close to a truncated power law , with the number of users in the network having a given degree proportional to . fig . [ fig.degreedistribs ] shows the distribution of degrees in the networks . the number of users in the essembly social network who have a given number of links of the indicated type ( plus symbols are for the friends , circles for the allies , and squares for the nemeses networks , respectively ) . the parameters of the best fits of truncated power laws on the three sets of data are given in the text . ] these long - tail degree distributions are often viewed as due to a preferential attachment process in which users tend to form links with others in proportion to how many links they already have . combined with a limitation on the number of links a user has , this process gives truncated power - law degree distributions . for essembly , this limitation arises from users becoming inactive , since such users no longer accept links . however , users in essembly have no direct access to the number of links of other users . thus we need to identify a mechanism users could use , based on information available to them . the mechanism underlying preferential attachment likely differs between the friends network and the two ideological ones . in particular , since the links appear to follow their nominal semantics , links in the friends network are likely to be mainly between people who know each other ( i.e. , not found via essembly ) , while ideological links ( especially those between people who are not also friends ) require finding the people by ideological profile ( which essembly makes available ) . building such a profile requires voting , so a user with many votes is more likely to have voted on resolves in common with other users . such common votes allow ideological comparisons between users and therefore suggestions for potential users to link to . the need to build ideological profiles suggests that votes on common resolves are key to the number and type of links . that is , a user forming a link is more likely to have many common votes with other users who are very active ( and hence have many votes ) . thus forming links based on common votes is likely to lead people to link preferentially to highly active users , who will in turn tend to have many links . one challenge to evaluating this mechanism is causation : resolves voted on by network neighbors are highlighted in the user interface , making them more visible and hence more likely to receive votes . thus common votes increase the chances of forming a link by providing information to form a profile , and links increase the chance of common votes through visibility of resolves . separating these effects is especially challenging since our data set does not indicate _ when _ each link was formed . we can partially address this challenge through two observations . first , essembly presents `` resolves in your network '' , grouping the three networks together . so any influence on resolve visibility due to networks should be similar for all networks . second , fig . [ fig.common votes ] illustrates a distinction between the ideological networks in essembly and the social network nominally linking people who know each other as friends . the figure shows friends generally have many more resolves in common , i.e.
, both users voted on , than random pairs of users who participate in at least one of the networks .the figure also shows the ideological networks ( both allies and nemeses ) are similar and have significantly more common resolves than the friends network .these two observations suggest the enhanced number of common votes for the friends network compared to random pairs is primarily due to the increased visibility of resolves due to network neighbors voting on them . because essembly presents resolves from all networks together, this enhancement is also likely to be the same for the ideological networks .hence , the remaining increase in common votes in the ideological networks compared to the friends network suggests the additional commonality required for users to form the links . cumulative distribution of number of common votes among linked pairs in the networks , and among random pairs of users who are in at least one network . for each number of common votes ,the curves show the fraction of pairs with more than that many resolves both users in the pair voted on . ]fraction of link types vs. number of votes on noninitial resolves .a linked pair of users are denoted `` only friends '' when their only link is in the friends network , `` non - friends '' when they are not linked in the friends network , and `` friends & ideological '' when they have a friends link as well as a link in the allies or nemeses networks . ] fig .[ fig.link types ] shows the types of links vary depending on user activity .for this plot , users with at least one network connection are grouped into quantiles by their number of votes .each point on the plot is the average fraction of link types among users in that quantile , with the error bar indicating the standard deviation of this estimate of the mean .users with few votes tend to have most of their links to friends only , so do not participate much in the ideological networks . on the other hand ,users with many votes tend to have most of their links in the ideological networks and to people who are not also friends .the same trend in link types occurs as a function of other measures of user activity , i.e. , using quantiles based on the time a user is active or the number of links a user has . in our model ,user forms links at a rate .thus the number of links a user forms is a combination of activity rate and how long the user remains at the site . the wide variation in activity times and among users ( fig .[ fig.activity time ] and [ fig.rho estimates ] ) gives rise to a wide distribution of number of links . while the most common mechanisms designed to reproduce the observed power law degree distributions use growing rules and the degree of vertices in link formation , in the following we propose a mechanism that only takes into account the extent to which two users share interests to describe link formation between two users . because links involve two people , an additional modeling issue is which pairs of users form links . in our model, we take the friends network to primarily reflect a preexisting social network . for the ideological networks ,however , we take the choices to depend on common votes . furthermore, only active users can form links . specifically , we model the likelihood a ( non - friend ) pair forms a link in an ideological network as proportional to the number of common votes they have . 
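the link - formation mechanism just described can be illustrated with a small simulation : pairs are linked with probability proportional to the expected number of resolves they have voted on in common , which for independent random voting is close to the product of their vote counts divided by the number of resolves . the vote counts and the proportionality constant below are assumptions chosen only to make the example run .

import numpy as np

rng = np.random.default_rng(1)

n_users, n_resolves = 2000, 5000
# assumed heavy-tailed vote counts per user (stand-in for the empirical distribution)
v = np.minimum(rng.lognormal(mean=2.0, sigma=1.5, size=n_users).astype(int) + 1, n_resolves)

c = 0.5   # assumed overall constant controlling network density
p = np.clip(c * np.outer(v, v) / n_resolves, 0.0, 1.0)   # linking probability ~ expected common votes
np.fill_diagonal(p, 0.0)

upper = np.triu(rng.random((n_users, n_users)) < p, 1)   # draw each unordered pair once
adj = upper | upper.T
degree = adj.sum(axis=1)

print("mean degree:", degree.mean(), " max degree:", degree.max(),
      " users with no links:", int((degree == 0).sum()))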
in addition , existing friends links can add an ideological link .people form ideological links based on common votes , and only users active at the same time can form links .the first factor gives more links for those who vote a lot ( due to being more likely to have votes in common with others ) .this leads to , in effect , preferential attachment for forming links ( those with more links are likely to be users with more votes , hence more overlap with others ) , while the attachment probability does not explicitly depend on degrees .the activity constraint limits the link growth , corresponding to descriptive models giving truncated power - law degree distribution . to verify whether users connect to each other based on similarities in their voting profile , we propose the following simplified mechanism for link formation .suppose that user voted on resolves , while user voted on resolves in total .assuming that and form a link with a probability proportional to the number of votes that the pair has in common , this probability will be if and are sufficiently smaller than the number of all resolves , and and vote independently of each other and pick resolves randomly from the pool of all available resolves .caldarelli et al .have shown that if vertices in a network possess intrinsic `` fitnesses '' , and the linking probability is proportional to the product of fitnesses of the two vertices to be linked , then in the particular case when the fitnesses are drawn from a power - law probability distribution function the resulting degree distribution will have the same exponent as the fitness distribution .we can consider the number of votes a person makes as the fitness of the vertex , and arrive by analogy at the same model as ref . , resulting in an expected power - law exponent of ( fig .[ fig.votes per user ] ) .fitting truncated power laws to the degree distributions of the three networks shown in fig .[ fig.degreedistribs ] , we found the parameters , ; , ; and , for the friends , allies , and nemeses networks , respectively , with the values for the confidence intervals indicated .the power - law exponents are in the range $ ] , giving a consistent match to the exponent of fig .[ fig.votes per user ] .the truncation of the power laws seen in the degree distributions are most likely the result of vertices gradually becoming inactive in time .an interesting consequence of the above is that while the friends network as seen on essembly is supposed to not be a result of shared votes made conspicuous by the web user interface , we see a consistent match in the exponents : this suggests that friendship links in real life may also form around shared interests , and that the scope of interests people have may follow a similar probability distribution function as shown in fig .[ fig.votes per user ] .unlike random graph models with this degree distribution , our mechanism based on common votes also gives significant transitivity , comparable to that observed for the allies network .that is , if users and have voted on many resolves in common , as have users and , then users and also tend to have significant overlap in the resolves they voted on . 
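the transitivity claim above can be checked with the same construction : the sketch below builds a common - votes graph as in the previous example and compares its global clustering coefficient to that of a random graph with the same density . the vote counts and constants are again illustrative assumptions .

import numpy as np

def transitivity(adj):
    # global clustering coefficient: 3 * triangles / connected triplets
    a = adj.astype(float)
    triangles = np.trace(a @ a @ a) / 6.0
    deg = a.sum(axis=1)
    triplets = (deg * (deg - 1)).sum() / 2.0
    return 3.0 * triangles / triplets if triplets > 0 else 0.0

rng = np.random.default_rng(2)
n_users, n_resolves = 1000, 5000
v = rng.lognormal(mean=2.0, sigma=1.5, size=n_users).astype(int) + 1   # assumed vote counts
p = np.clip(0.5 * np.outer(v, v) / n_resolves, 0.0, 1.0)
np.fill_diagonal(p, 0.0)
upper = np.triu(rng.random((n_users, n_users)) < p, 1)
adj = upper | upper.T

p_equal = adj.sum() / (n_users * (n_users - 1))          # density of the generated graph
upper_er = np.triu(rng.random((n_users, n_users)) < p_equal, 1)
er = upper_er | upper_er.T

print("common-votes graph transitivity :", round(transitivity(adj), 4))
print("equal-density random graph      :", round(transitivity(er), 4))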
a further consequence of our model with ideological links depending on common votes is the prediction of a change in the types of links users make as they vote .in particular , users with few votes will also have few common votes with other users and hence their links will tend to be mostly friends .users with many votes , on the other hand , will tend to have common votes with many others and hence , according to this model , tend to have mostly ideological links .this change in type of links as a function of a user s number of votes or links occurs in essembly , as seen in fig .[ fig.link types ] .finally , our model also describes the significant fraction of users who form no links as due to a combination of low activity rate and short activity time .specifically , in our model the probability a user has no links is . for active users ,whose activity time distribution is roughly exponential with time constant , the values in table [ table.user parameters ] and the distribution of values in fig .[ fig.rho estimates ] give the probability for no links as the average value of equal to .this compares with the 1242 out of 4762 active users ( i.e. , ) who have no links in our data set .[ sect.online estimation ] our model allows estimating parameters for new users and new resolves as they act in the system . in particular , we describe using the early history of resolves to estimate the number of votes a resolve will eventually have as well as which resolve will likely receive the next vote .estimates of values for several users as a function of the time since their first vote .error bars show the confidence intervals.,width=240 ] fig .[ fig.rho estimates vs time ] shows estimates of user activity levels as a function of time since the user first voted .we see user activity levels change with time , and in different ways .so users not only differ considerably in their average activity rates but also in how their interest in the site varies in time .estimates of values for two resolves as a function of their age .error bars show the confidence intervals.,width=240 ] for resolves , using the model of sec .[ sect.resolves ] , fig .[ fig.r estimates vs age ] shows how estimates for resolves , and their confidence intervals change over time , as more votes are observed .other resolves show similar behavior .thus the interestingness of resolves appears to converge in time as we expect . in practice , however , the optimization procedure is computationally very costly due to the large number of parameters that grows linearly with the number of resolves in the system. 
a further requirement of an online algorithm is that it is able to update the model parameters in real time as new users , votes and resolves enter the system .thus it is not feasible to consider a growing number of resolves with constant resources .instead we must limit the the number of parameters and thus resolves to be optimized to a constant value .one such approach is to optimize parameters based on the last active resolves only , and keep the interestingness and aging parameters constant for resolves older than that .this method , interestingly , has the potential benefit of being able to track changes to interestingness and aging in time .another incremental approach uses the observation that old resolves , with a long track record of votes , have their interestingness well - estimated and similarly the aging function for small ages is well - estimated from prior experience with many resolves receiving votes at those ages .conversely , recently introduced resolves have had little time to accumulate votes and for large ages is poorly estimated due to having little experience in the system with resolves that old .furthermore , we can expect to change slowly with time as primarily due to how the user interface makes resolves visible to users .the maximum likelihood estimation for these parameters described in sec . [ sect.resolves ] requires a computationally expensive optimization to find the best choices for and for all values . for new resolves , with close to , eq .( [ eq.r ] ) determines the values in terms of the values of for small ages ( i.e. , ) which are already well - determined from the prior history of the system .so instead of an expensive reevaluation of all the and values , we can simply incrementally estimate the values of new resolves assuming values for small ages do not change much .conversely , as new resolves are introduced , the oldest resolves in the system advance to ever larger ages , allowing estimates of for those ages from eq .( [ eq.f(a ) ] ) by assuming the values of those old resolves do not change much with the introduction of new resolves .such estimates of model parameters can be useful guides for improving social web sites if extended to user behavior as well , by identifying new users likely to become highly active or content likely to become popular .since it is possible to estimate the statistical errors given the sample size , one can also perform risk assessment when giving the estimates .newly posted content with high interestingness , for instance , can be quickly identified and given prominent attention on the online interface .we described several extended distributions resulting from user behavior on essembly , a web site where users create and rate content as well as form networks .these distributions are common in participatory web sites . 
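as a concrete sketch of the incremental updates described above ( with illustrative function names and toy numbers , not the actual estimates ) : the interestingness of a newly introduced resolve is estimated while the already well - determined aging function is held fixed at small ages , and the aging function is extended to a newly reached large age from resolves whose interestingness is already well determined .

def update_new_resolve(votes_by_age, f):
    # estimate r for a newly introduced resolve from its first few observed intervals,
    # holding the already-estimated aging function f (a dict indexed by age, f[1] = 1) fixed
    exposure = sum(f[a] for a in votes_by_age)
    return sum(votes_by_age.values()) / exposure if exposure > 0 else 0.0

def extend_aging_function(f, age, votes_at_age, r_old_resolves):
    # estimate f at a newly reached (large) age from the votes that old resolves,
    # whose r values are already well determined, received at that age
    total_r = sum(r_old_resolves)
    f[age] = votes_at_age / total_r if total_r > 0 else 0.0
    return f

f = {1: 1.0, 2: 0.6, 3: 0.45}                          # previously estimated visibility at small ages
r_new = update_new_resolve({1: 7, 2: 4, 3: 2}, f)      # votes a new resolve got at ages 1, 2, 3
f = extend_aging_function(f, age=200, votes_at_age=3, r_old_resolves=[8.0, 13.0, 21.0])
print(round(r_new, 2), round(f[200], 3))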
from the extended distributions of user behavior we find an extremely heterogeneous population of users , and we introduce a plausible mechanism describing user behavior based on locally available information , involving a combination of aging and a large variation among people and resolves . we centered our investigations on three areas : the wide range in user activity levels in online participation ; how online social networks form around topical interests ; and the factors that influence the popularity of user - created content . in particular , we found , first , that most users try the online services only briefly , so most of the activity arises from a relatively small fraction of users who account for the diverse behavior observed . second , we gave a plausible , quantitative explanation of the long - tailed degree distributions observed in online communities , based only on the observed activity patterns of users and the underlying collaborative mechanisms . our observations suggest that different mechanisms underlie the formation of the social ( friends ) and ideological ( allies and nemeses ) networks , although these mechanisms give similar outcomes , e.g. , for the qualitative form of the degree distribution . the implications may extend beyond the scope of purely online societies to describe other societal connections as well , where shared interests motivate relationship formation . our model , however , does not address other significant properties of the networks , such as community structure and assortativity , and why they differ among the three networks . nor does our model address detailed effects on user behavior due to their network neighbors . third , we proposed a model and algorithm that can describe and predict , through iterative refinements , how the popularity of user - generated submissions evolves in time , considering both their changing exposure online and their inherent interestingness . we found that the exposure that content receives depends largely on its recency , and decays with age . the characteristics of our models plausibly apply to other web sites where user participation is self - directed and where content creation and social link formation play a dominant part in the individual online activities . the digg and wikipedia user communities ( those whose activity data is publicly available ) in particular may show similar behavior in their activity patterns . our models could be extended to include the weak , but nevertheless statistically significant , correlations among user behaviors such as activity rate and the time they remain active on the site . including such correlations , as well as some historical and demographic information on individual users , may improve the model predictions , as seen , for example , in models estimating customer purchase activities . consequences of our model include suggestions for identifying active users and interesting resolves early in their history , e.g. from persistence in voting rates over time , even before they accumulate enough votes to be rated as popular . such identification could be useful to promote interesting content on the web site more rapidly , particularly in the case of niche interests .
beyond helping users find interesting content ,designs informed by causal models could also help with derivative applications , such as collaborative filtering or developing trust and reputations , by quickly focusing on the most significant users or items .such applications raise significant questions of the relevant time scales .that is , observed behavior is noisy , so there is a tradeoff between using a long time to accumulate enough statistics to calibrate the model vs. using a short time to allow responsiveness faster than other proxies for user interest such as popularity .our models raise additional questions on population properties we used .one such question is understanding how the resolve aging function relates to the user interface and changing interests among the user population .another question is how the wide distributions in user activity and resolve interestingness arise .the lognormal fits suggest underlying multiplicative processes are involved. it would also be interesting to extend the model to identify niche resolves , if any .that is , resolves of high interest to small subgroups of users but not to the population as a whole .automatically identifying such subgroups could help people find others with similar interests by supplementing comparisons based on ideological profiles .a caveat on our results , as with other observational studies of web behavior , is the evidence for mechanisms is based on correlations in observations . while mechanisms proposed here are plausible causal explanations since they rely on information and actions available to users , intervention experiments would give more confidence in distinguishing correlation from causal relationships .our model provides testable hypotheses for such experiments .for example , if intrinsic interest in resolves is a major factor in users selection of resolves , then deliberate changes in the number of votes may change visibility but will not affect interestingness . in that case , we would expect subsequent votes to return to the original trend .thus one area for experimentation is to determine how users value content on various web sites .for example , if items are valued mainly because others value them ( e.g. , fashion items ) then observed votes would _ cause _ rather than just reflect high value . in such cases , random initial variations in ratings would be amplified , and show very different results if repeated or tried on separate subgroups of the population .if items all have similar values and difference mainly due to visibility , e.g. , recency or popularity , then we would expect votes due to rank order of votes ( e.g. , whether item is most popular ) rather than absolute number of votes .if items have broad intrinsic value , then voting would show persistence over time and similar outcomes for independent subgroups .it would also be useful to identify aspects of the model that could be tested in small groups , thereby allowing detailed and well - controlled laboratory experiments comparing multiple interventions .larger scale experiments would also be useful to determine the generality of these mechanisms . the key features of continual arrival of new users , existing users becoming inactive and a wide range of activity levels among the user population and interest in the content can apply in many contexts . forthe distribution of how user rate content ( e.g. 
, votes on resolves in essembly ) , generalizing to other situations will depend on the origin of perceived value to the users .at one extreme , which seems to apply to essembly , the resolves themselves have a wide range of appeal to the user population , leading some items to consistently collect ratings at higher rates than others . at the other extreme, perceived value could be largely driven by popularity among the users , or subgroups of users , as seen in some cultural products . in rapidly changing situations ,e.g. , current news events or blog posts , recency is important not only in providing visibility through the system s user interface , but also determining the level of interest . in other situations , the level of interest in the items changes slowly , if at all , as appears to be the case for resolves in essembly concerning broad political questions such as the benefits of free trade .all these situations can lead to long - tail distributions through a combination of a `` rich get richer '' multiplicative process and decay with age .but these situations have different underlying causal mechanisms and hence different implications for how changes in the site will affect user behavior .thus , design and evaluation of participatory web sites can benefit from the availability of causal models .we thank chris chan and jimmy kittiyachavalit of essembly for their help in accessing the essembly data .we have benefited from discussions with michael brzozowski , dennis wilkinson , and tams sarls .e. agichtein , e. brill , s. dumais , and r. ragno .learning user interaction models for predicting web search result preferences . in _ proc .of the international acm sigir conference on research and development in information retrieval _ , pages 310 , 2006 .b. carterette and r. jones . evaluating search engines by modeling the relationship between relevance and clicks . in j.platt et al . ,editors , _ advances in neural information processing systems_. nips , 2007 . c. l. a. clarke , e. agichtein , s. dumais , and r. w. white .the influence of caption features on clickthrough patterns in web search . in _ proc . of the international acm sigir conference on research and development in information retrieval _ , pages 135142 , 2007 .n. craswell , o. zoeter , m. taylor , and b. ramsey .an experimental comparison of click position - bias models . in _ proc .of the international conference on web search and web data mining _ , pages 8794 , ny , 2008 . acm .t. joachims , l. granka , b. pan , h. hembrooke , and g. gay . accurately interpreting clickthrough data as implicit feedback . in _ proc . of the international acm sigir conference on research and development in information retrieval _ , pages 154161 , 2005 .
web sites where users create and rate content as well as form networks with other users display long - tailed distributions in many aspects of behavior . using behavior on one such community site , essembly , we propose and evaluate plausible mechanisms to explain these behaviors . unlike purely descriptive models , these mechanisms rely on user behaviors based on information available locally to each user . for essembly , we find the long - tails arise from large differences among user activity rates and qualities of the rated content , as well as the extensive variability in the time users devote to the site . we show that the models not only explain overall behavior but also allow estimating the quality of content from their early behaviors .
financial support from fundacin antorchas , argentina , and from alexander von humboldt foundation , germany ( scm ) is gratefully acknowledged . j.j .sepkowski jr . , paleobiology * 19 * , 43 ( 1991 ) ; d.m .raup , _ extinction : bad genes or bad luck ? _ ( oxford u. press , 1993 ) ; k. sneppen , p. bak , h. flyvbjerg , and m.h .jensen , proc .usa * 92 * , 5209 ( 1995 ) ; r.v .sol , s.c .manrubia , m.j .benton , and p. bak , nature * 388 * , 764 ( 1997 ) .zipf , _ human behavior and the principle of least effort _( addison - wesley , cambridge ma , 1949 ) ; h.a .makse , s. havlin , and h.e .stanley , nature * 377 * , 608 ( 1995 ) ; d.h .zanette and s.c .manrubia , phys . rev. lett . * 79 * , 523 ( 1997 ) .
we study a stochastic multiplicative process with reset events . it is shown that the model develops a stationary power - law probability distribution for the relevant variable , whose exponent depends on the model parameters . two qualitatively different regimes are observed , corresponding to intermittent and regular behaviour . in the boundary between them , the mean value of the relevant variable is time - independent , and the exponent of the stationary distribution equals . the addition of diffusion to the system modifies in a non - trivial way the profile of the stationary distribution . numerical and analytical results are presented . the occurrence of power - law distributions ( plds ) is a common feature in the description of natural phenomena . these distributions appear in a wide class of nonequilibrium systems , ranging from physical processes such as dielectric breakdown , percolation , and rupture , to biological processes such as dendritic growth and large - scale evolution , to sociological phenomena such as urban development . power - laws have been associated with the effect of the complex driving mechanisms inherent to these systems and with their intrincate dynamical structure . criticality , fractals , and chaotic dynamics are known to be intimately related to plds . in view of the ubiquity of plds in the mathematical description of nature , much work has been recently devoted to detecting universal mechanisms able to give rise to such distributions . in the frame of equilibrium processes , for instance , power - laws have been shown to derive from generalized maximum - entropy formulations . for nonequilibrium phenomena , self - organized criticality ( soc ) and stochastic multiplicative processes ( smps ) have been identified as sources of plds . according to the soc conjecture , some nonequilibrium systems are continuously driven by their own internal dynamics to a critical state where , as for equilibrium phase transitions , power - laws are omnipresent . on the other hand , smps provide a ( more flexible ) mechanism for generating plds , based in the presence of underlying replication events . it is however well known that a pure smp , with a random variable , does not generate a stationary pld for . rather , it gives rise to a time - dependent log - normal distribution . to model the above mentioned phenomena , therefore , smps have to be combined with additional mechanisms . it has been shown that transport processes , sources , and boundary constraints are able to induce a smp to generate power - laws . the aim of the present paper is to discuss an alternative additional mechanism , namely , randomly reseting of the relevant variable to a given reference value . in a real system , this would represent catastrophic annihilation or death events , seemingly originated outside the system . we consider a discrete - time stochastic multiplicative process , added with reset events in the following way . at each time step , is reset with probability to a new value , drawn from a probability distribution . if the reset event does not occur , is multiplied by a random positive factor with probability distribution . namely , between two consecutive reset events , thus behaves as a pure multiplicative process . when one of such events occurs , the multiplicative sequence starts again . in order to gain insight into the dynamics of process ( [ p0 ] ) we first consider the simplest case where and are constant for all . 
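the simplest version of this process ( constant reset probability , constant multiplicative factor , and reset value fixed at one ) is easy to simulate directly ; the sketch below collects the visited values and bins them logarithmically so that the power - law shape of the stationary distribution can be inspected . the values of the reset probability and the multiplicative factor are arbitrary illustrative choices .

import numpy as np

rng = np.random.default_rng(4)

q = 0.05          # assumed reset probability per step
a = 1.04          # assumed constant multiplicative factor
steps = 500_000

x = 1.0
samples = np.empty(steps)
for t in range(steps):
    if rng.random() < q:
        x = 1.0        # reset event: the multiplicative sequence starts again
    else:
        x = a * x      # ordinary multiplicative step
    samples[t] = x

bins = np.logspace(0.0, np.log10(samples.max()) + 0.1, 25)
hist, edges = np.histogram(samples, bins=bins, density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    if h > 0:
        print(f"[{lo:10.2f}, {hi:10.2f})  density {h:.3e}")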
since an arbitrary factor in the initial value of is irrelevant to its subsequent evolution , we take without loss of generality . we have thus this stochastic recursive equation can be readily solved to give note that the possible values of , ( ) , lie in the interval ] for . except for the extreme value , the associated probabilities are time - independent . as time elapses , the probability of each possible value of is therefore quenched for , and the corresponding probability distribution evolves at this extreme value only . thus , the distribution sequentially builds up in zones that lie increasingly further from . for large times , when the number of possible values of becomes also large , it is possible to give the probability distribution for ] for , and in for . in contrast with multiplicative processes with boundary constraints , there are no conditions on the parameters to obtain a stationary power - law distribution . for , the exponent of this distribution is positive ( ) , and grows with . in this situation , however , the distribution is defined for and exhibits a cut off at . on the other hand , for or the exponent is negative ( ) . for , i.e. when , the moments diverge for , indicating the presence of intermittent amplifications . for , diverges for . it is interesting to relate the exponent of the power - law distribution with the evolution of the mean value . from ( [ s1 ] ) , this mean value can be written as for , the mean value of converges to a finite value ] . the particular form of sets a lower boundary for the region where behaves as a power law , but does not affect the corresponding exponent . solid lines in the log - log plot of fig . 1 have the theoretical slope . figure 2 shows our simulation results for three different forms of : an exponential distribution with , a uniform distribution with $ ] , and a discrete distribution with , and . the slope of the solid lines has been obtained numerically for various values of from eq . ( [ a ] ) . this yields for the exponential distribution with , for the uniform distribution with , and for the discrete distribution with . in all cases , our numerical and analytical results are in full agreement within six to nine decades in the power - law region . we have also investigated the effects of diffusive transport on the process ( [ p1 ] ) . with this aim , we have considered a one - dimensional array of elements whose individual dynamics is given by ( [ p1 ] ) and , at each time step , we have incorporated an interaction mechanism that mimics diffusion . after the multiplicative process with reset events has been applied , the state of each element is further changed to , \ ] ] where labels the elements in the array , with periodic boundary conditions . then , is used as the input state for the next step . in this deterministic , time - discrete version of diffusive transport , plays the role of a diffusion constant . figure 3 summarizes our numerical results on the effect of diffusion on the smp ( [ p1 ] ) , displaying the dependence of the power - law exponent with the diffusion constant . we have chosen values of and such that the different regimes of the process have been explored . the value of the multiplicative constant has been fixed in this case to . in the regular regime ( i.e. ) , diffusion produces a decrease of in the power - law distribution . this can be understood if we consider that the role of diffusion is to deplete dense areas , transporting material to less occupied cells . 
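the diffusively coupled version of the process can be simulated in the same spirit . the discrete diffusion step below uses a standard nearest - neighbour form with periodic boundaries , which is one natural reading of the coupling described above ( the exact expression appears in the equation referenced in the text ) ; the parameter values are again arbitrary illustrative choices .

import numpy as np

rng = np.random.default_rng(5)

n_sites = 1000
q, a, D = 0.05, 1.04, 0.2     # assumed reset probability, multiplicative factor, diffusion constant
steps = 20_000

x = np.ones(n_sites)
for _ in range(steps):
    reset = rng.random(n_sites) < q
    x = np.where(reset, 1.0, a * x)                                   # multiplicative process with resets
    x = (1.0 - D) * x + 0.5 * D * (np.roll(x, 1) + np.roll(x, -1))    # assumed nearest-neighbour diffusion

print("mean:", round(float(x.mean()), 3), " max:", round(float(x.max()), 3))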
the multiplicative process is not fast enough in this regime to balance the joint effect of reset events and diffusion . as a result , underpopulation occurs in the high - density region , and decreases ( in fig . 3 ) . in the intermittent regime ( i.e. ) , diffusion favors the opposite effect . remarkably , diffusion does not have any effect on the value of when the system is evolving at the explosion threshold . within numerical errors , in fact , irrespectively of the value of . it is also worth to point out that the qualitative behaviour of the process depends on and only . changing does not allow the system to switch between the intermittent and the regular regimes . summing up , in this paper we have studied a stochastic multiplicative process with reset events . the combination of this random reseting with the replication events driven by the stochastic process allows for the development of a stationary distribution in the system , both when the mean value of the relevant variable converges to a finite value ( regular regime ) and when it diverges ( intermittent regime ) . the regime at the boundary between regular and intermittent behaviour is of particular interest . at this point , where the overall effects of the multiplicative process are exactly balanced by the random resets , the mean value of the relevant variable remains constant in time . we have shown that this property is closely related with the fact that the exponent of the power - law stationary distribution equals . this value is to be related with zipf law , which predicts the same exponent of power - law distributions in a series of seemingly disparate natural systems . thus , the smp with reset events offers an alternative explanation of this ubiquitous exponent . in fact , whereas a general trend of biological and social systems could be to improve their growth rates by increasing the parameter , it is on the other hand to be expected that external constrains are going to operate in order to avoid divergencies by increasing . it is not unlikely that the competition between these two processes could lead real systems to this boundary between regular behavior and developed intermittency .
the study of _ link reciprocity _ in binary directed networks , or the tendency of vertex pairs to form mutual connections , has received an increasing attention in recent years . among other things , reciprocity has been shown to be crucial in order to classify and model directed networks , understand the effects of network structure on dynamical processes ( e.g. diffusion or percolation processes ) , explain patterns of growth in out - of - equilibrium networks ( as in the case of the wikipedia or the world trade web ) , and study the onset of higher - order structures such as correlations and triadic motifs . in networks that aggregate temporal information such as e - mail or phone - call networks, reciprocity also provides a measure of the simplest _ feed - back _ process occurring in the network , i.e. the tendency of a vertex to _ respond _ to another vertex stimulus .finally , reciprocity quantifies the information loss determined by projecting a directed network into an undirected one : if the reciprocity of the original network is maximum , the full directed information can be retrieved from the undirected projection ; on the other hand , no reciprocity implies a maximum uncertainty about the directionality of the original links that have been converted into undirected ones . in particular intermediate cases ,significant directed information can be retrieved from an undirected projection using the knowledge of reciprocity . in general , reciprocity is the main quantity characterizing the possible dyadic patterns , i.e. the possible types of connections between two vertices .while the reciprocity of binary networks has been studied extensively , that of weighted networks has received much less attention , because of a more complicated phenomenology at the dyadic level . while in a binary graphit is straightforward to say that a link from vertex to vertex is reciprocated if the link from to is also there , in a weighted network there are clear complications . given a link of weight from vertex to vertex , how can we assess , in terms of the mutual link of weight , whether the interaction is reciprocated ? while ( no link from to ) clearly signals the absence of reciprocation , what about a value but such that ?this complication has generally led to two approaches to the study of directionality in weighted networks : one assuming ( either explicitly or implicitly ) that perfect reciprocity corresponds to symmetric weights ( ) , and one looking for deviations from such symmetry by studying net flows ( or imbalances ) , defined as . in the latter approach , significant information about the original weights , including their reciprocity , is lost : the original network produces the same results as any other network where and .since is arbitrary , this approach can not distinguish networks that have very different symmetry properties .in particular , maximally asymmetric ( i.e. , implying whenever ) and maximally symmetric networks ( i.e. 
, implying ) , which are treated as opposite in the first approach , are indistinguishable in the second one .consider , for example , two nodes and linked by the asymmetric weights and : the imbalance is the same as if they were an almost symmetric dyad with and .in addition to the above limitations , it has become increasingly clear that the heterogeneity of vertices , which in weighted networks is primarily reflected into a generally very broad distribution of the _ strength _( total weight of the links entering or exiting a vertex ) , must be taken into account in order to build an adequate null model of a network .indeed , the different intrinsic tendencies of individual vertices to establish and/or strengthen connections have a strong impact on many other structural properties , and the reciprocity is no exception .it is therefore important to account for such irreducible heterogeneity by treating local properties such as the strength ( or the degree in the binary case ) as constraints defining a null model for the network .while null models of weighted networks are generally computationally demanding , recently a fast and analytical method providing exact expressions characterizing both binary and weighted networks with constraints has been proposed .this allows us , for the first time , to have mathematical expressions characterizing the behaviour of topological properties under the null model considered . in this paperwe extend those results , in order to propose new mathematical definitions of reciprocity in the weighted case and to evaluate their behaviour exactly under various null models that introduce different constraints .this also allows us to assess whether an observed asymmetry between reciprocal links is consistent with fluctuations around a balanced but noisy average , or whether it a statistically robust signature of imbalance .finally , we introduce models that successfully reproduce the observed patterns by introducing either a correct global reciprocity level or more stringent constraints on the local reciprocity structure .we first introduce measures of reciprocity which meet three criteria simultaneously : 1 ) if applied to a binary network , they must reduce to their well - known unweighted counterparts ; 2 ) they must allow a consistent analysis across all structural levels , from dyad - specific through vertex - specific to network - wide ; 3 ) they must have a mathematically controlled behaviour under null models with different constraints , thus disentangling reciprocity from other sources of ( a)symmetry . then , we discuss the differences with respect to other inadequate measures of ` symmetry ' , show our empirical results , and introduce theoretical models aimed at reproducing the reciprocity structure of real weighted networks .we consider a directed weighted network specified by the weight matrix , where the entry indicates the weight of the directed link from vertex to vertex , including the case indicating the absence of such link . for simplicity , we assume no self - loops ( i.e. ) , as the latter carry no information about reciprocity ( in any case , allowing for self - loops is straightforward in our approach ) . as fig .1 shows , we can always decompose each pair of reciprocal links into a bidirectional ( fully reciprocated ) interaction , plus a unidirectional ( non reciprocated ) interaction .and ) into a fully reciprocated component ( ) and a fully non - reciprocated component ( , which implies ) . 
]formally , we can define the _ reciprocated _ weight between and ( the symmetric part ) as =w^\leftrightarrow_{ji } \label{eq : wlr}\ ] ] and the _ non - reciprocated weight _ from to ( the asymmetric part ) as note that if then , which makes the unidirectionality manifest .we can also define as the _ non - reciprocated weight _ from to , and restate the unidirectionality property in terms of the fact that and can not be both nonzero .thus any dyad can be equivalently decomposed as . if the network is binary , all the above variables are either or and our decomposition coincides with a well studied dyadic decomposition . from the above fundamental dyadic quantities it is possible to define reciprocity measures at the more aggregate level of vertices .we recall that the out- and in - strength of a vertex are defined as the sum of the weights of the out - going and in - coming links respectively : in analogy with the so - called degree sequence in binary networks , we denote the vector of values as the _ out - strength sequence _ , and the vector of values as the _ in - strength sequence_. using eqs.([eq : wlr]-[eq : wl ] ) , we can split the above quantities into their reciprocated and non - reciprocated contributions , as has been proposed for vertex degrees in binary networks .we first define the _ reciprocated strength _ which measures the overlap between the in - strength and the out - strength of vertex , i.e. the portion of strength of that vertex which is fully reciprocated by its neighbours .then we define the _ non - reciprocated out - strength _ as and the _ non - reciprocated in - strength _ as the last two quantities represent the non - reciprocated components of and respectively , i.e. the out - going and in - coming fluxes which exceed the inverse fluxes contributed by the neighbours of vertex . finally , we introduce weighted measures of reciprocity at the global , network - wide level . recall that the total weight of the network is similarly , we denote the _ total reciprocated weight _ as extending a common definition widely used for binary graphs , we can then define the _ weighted reciprocity _ of a weighted network as if all fluxes are perfectly reciprocated ( i.e. ) then , whereas in absence of reciprocation ( i.e. ) then . in the appendix we discuss the difference between our definitions and other attempts to characterize the reciprocity of weighted networks . just like its binary counterpart , eq.([eq : r ] )is informative only after a comparison with a null model ( nm ) is made , i.e. with a value expected for a network having some property in common ( e.g. the number of vertices and/or the total weight ) with the observed one . as a consequence ,networks with different empirical values of such quantities can not be consistently ranked in terms of the measured value of .an analogous problem is encountered in the binary case , and has been solved by introducing a transformed quantity that we generalize to the present setting as the sign of is directly informative of an increased , with respect to the null model , tendency to reciprocate ( ) or to avoid reciprocation ( ) .if is consistent with zero ( within a statistical error that we quantify in the appendix ) , then the observed level of reciprocity is compatible with what merely expected by chance under the null model .the literature on null models of networks is very vast . 
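the dyadic and global quantities defined above translate directly into a few lines of code . the sketch below computes the reciprocated and non - reciprocated decomposition of a weight matrix , the weighted reciprocity r , and the null - model - corrected measure , with the expected value under the null model supplied externally ( here just an illustrative placeholder , since the analytical expectations are derived in the methods and appendix ) .

import numpy as np

def weighted_reciprocity(W):
    # dyadic decomposition: reciprocated part min(w_ij, w_ji) and non-reciprocated remainder
    W = np.asarray(W, dtype=float)
    w_rec = np.minimum(W, W.T)
    w_non = W - w_rec
    r = w_rec.sum() / W.sum()      # weighted reciprocity: total reciprocated weight over total weight
    return w_rec, w_non, r

def rho(r_observed, r_null):
    # reciprocity measured against a null model, rho = (r - <r>) / (1 - <r>)
    return (r_observed - r_null) / (1.0 - r_null)

W = np.array([[0, 5, 0],
              [3, 0, 2],
              [0, 2, 0]], dtype=float)
_, _, r = weighted_reciprocity(W)
print(round(r, 3), round(rho(r, r_null=0.4), 3))   # r_null = 0.4 is only a placeholder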
in this paperwe adopt a recent analytical method and extend it in order to study the reciprocity of weighted networks .the three null models we consider are described in the methods and appendix .we stress that the alternative approaches are all based on the assumption that the maximum level of reciprocity corresponds to a symmetric network where , so that deviations from this symmetric situation are interpreted as signatures of incomplete reciprocity .this is actually incorrect : independently of other properties of the observed network , the symmetry of weights ( i.e. ) is completely uninformative about the reciprocity structure , for two reasons .first , in networks with broadly distributed strengths ( as in most real - world cases ) the attainable level of symmetry strongly depends on the in- and out - strengths of the end - point vertices : unless for all vertices , it becomes more and more difficult , as the heterogeneity of strengths across vertices increases , to match all the constraints required to ensure that for all pairs . therefore , even networks that maximize the level of reciprocity , given the values of the strengths of all vertices , are in general not symmetric .on the other hand , in networks with balance of flows at the vertex level ( for all vertices ) an average symmetry of weights ( ) is automatically achieved by pure chance , even without introducing a tendency to reciprocate ( see appendix ) . in many real networks ( including examples we study below ) , the balance of flows at the vertex levelis actually realized , either exactly or approximately , as the result of conservation laws ( e.g. mass or current balance ) . in those cases ,the symmetry of weights should not be interpreted as a preference for reciprocated interactions . in the appendixwe also show that measures based on the correlation between and are flawed .similarly , studies of asymmetry focusing on the differences are severely limited by the fact that the observed imbalances might actually be fluctuations around a zero average ( ) , irrespective of the level of reciprocity .thus , reciprocity and symmetry are two completely different structural aspects . [cols="<,^,^,^,^",options="header " , ] we now carry out an empirical analysis of several real weighted networks using our definitions introduced above .we start with the global quantities and defined in eqs.([eq : r ] ) and ( [ eq : rho ] ) . in table 1we report the analysis of 70 biological , social and economic networks .all networks display a nontrivial weighted reciprocity structure ( i.e. ) , which differs from that predicted by the 3 null models considered ( wcm , bcm and wrg : see methods and appendix ) .this means that the imposed constraints can not account for the observed reciprocity .remarkably , we also find that networks of the same type systematically display similar values of : for a given choice of the null model , the resulting reciprocity ranking provides a consistent ( non - overlapping ) classification of networks .however , different null models provide different estimates of reciprocity and rank the same networks differently . some networks ( social networks and the world trade web ) always show a positive reciprocity , while others ( foodwebs ) always show a negative reciprocity , irrespective of the null model. however , other networks ( interbank networks ) are classified as weakly but positively reciprocal under the wcm , but as strongly negatively reciprocal under the bcm and the wrg . 
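for completeness , the expected reciprocity entering the corrected measure can also be estimated numerically . the sketch below uses a deliberately crude null model that preserves only the total ( integer ) weight , redistributing it uniformly over ordered vertex pairs ; it is a stand - in for illustration on small matrices and is not equivalent to the analytical wrg , wcm or bcm used in the paper , which impose more refined constraints .

import numpy as np

def r_of(W):
    return np.minimum(W, W.T).sum() / W.sum()

def expected_r_uniform_null(W, n_samples=200, seed=0):
    # crude monte carlo null model: keep only the total weight (treated as unit quanta)
    # and scatter it uniformly over the off-diagonal ordered pairs
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    total = int(round(W.sum()))
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    probs = np.full(len(pairs), 1.0 / len(pairs))
    rs = []
    for _ in range(n_samples):
        counts = rng.multinomial(total, probs)
        Wn = np.zeros((n, n))
        for (i, j), c in zip(pairs, counts):
            Wn[i, j] = c
        rs.append(r_of(Wn))
    return float(np.mean(rs))

W = np.array([[0, 5, 0],
              [3, 0, 2],
              [0, 2, 0]], dtype=float)
print(round(expected_r_uniform_null(W), 3))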
in one case ( neural network ) , the estimated level of reciprocity can be slightly positive , negative , or even consistent with zero depending on the null model . as a consequence ,the 5 interbank networks are more reciprocal than the neural network under the wcm , while the ranking is inverted under the bcm and the wrg . since the wcm is the most conservative model , preserving most information from empirical data , we choose to rank the networks in the table using .importantly , we find that all weighted rankings are quite different from the binary analysis - based ranking . while the various snapshots of the world trade web are systematically found to be strongly and sometimes almost perfectly reciprocal in the binary case ( under the binary random graph model ) , here we find them to be less reciprocal than social networks if the additional weighted information is taken into account .also , while the neural network of _ c. elegans _ has a strong binary reciprocity ( ) , here we find it to have a very weak ( under the wcm ) , consistent with zero ( under the wrg ) , or even negative ( under the bcm ) weighted reciprocity .these important differences show that the reciprocity of weighted networks is nontrivial and irreducible to a binary description .versus out - strength in four weighted networks in increasing order of reciprocity : a ) the everglades marshes foodweb , b ) the neural network of _ c. elegans _ , c ) the world trade web in the year 2000 , and d ) the social network of a fraternity at west virginia college ( note that the increase in reciprocity is not necessarily associated with an increase in symmetry ) . ]the two differences between the wcm and the wrg ( see methods and appendix ) are node imbalance ( and are equal in the wrg and different in the wcm ) and node heterogeneity ( the expected strenghts of all vertices are equal in the wrg , and broadly distributed in the wcm ) .we can use the bcm as an intermediate model in order to disentangle the role of these two differences in producing the observed deviations between and .the bcm preserves node heterogeneity but assumes node balance by regarding the observed difference between the in- and out - strength of each vertex as a statistical fluctuation around a balanced average ( see appendix ) . as we show in fig .2 , some real networks ( such as foodwebs and the world trade web ) indeed appear to display very small fluctuations around this type of node balance . in foodwebs , where edges represent stationary flows of energy among species ,the almost perfect balance is due to an approximate biomass or energy conservation at each vertex . in the world trade web , where edges represent the amount of trade among world countries , the approximate balance of vertex flows is due to the fact that countries tend to minimize the difference between their total import and their total exports , i.e. they try to ` balance their payments ' . 
as we show in the appendix , the balance of vertex flows implies that , even without introducing a tendency to reciprocate , the expected mutual weights are equal : .this implies a larger expected reciprocated weight in the bcm than in the wcm , so that , as confirmed by table 1 .however , we find that and are always very similar , while they can be very different from .this means that node imbalances , even when very weak , can have a major effect on the expected level of reciprocity .surprisingly , we find that this effect is much stronger than that of the strikingly more pronounced node heterogeneity .correctly filtering out the effects of flux balances or other symmetries can lead to counter - intuitive results : the most reciprocal of the four networks ( the social network , see table 1 ) is one of the least symmetric ones ( see fig .2d ) , whereas the least reciprocal of the four networks ( the foodweb , see table 1 ) is the most symmetric one ( see fig .2a ) .since consistently ranks the reciprocity of networks with different properties , it can also track the evolution of reciprocity in a network that changes over time .for this reason , in our dataset we have included 53 yearly snapshots of the world trade web , from year 1948 to 2000 . in fig .3 we show the evolution of , and under the three null models .the plots confirm that , unlike , is not an adequate indicator of the evolution of reciprocity , since the baseline expected value ( under every null model ) also changes in time as a sort of moving target ( fig .( blue ) and its expected values under the weighted configuration model ( red ) , the balanced configuration model ( green ) , and the weighted random graph ( orange ) ; b ) evolution of under the same 3 null models as above . ]note that fluctuates much more than and , and its fluctuations resemble those of the observed value ( see fig .this is due to the fact that , while all snapshots of the network are characterized by ` static ' fluctuations of the empirical strengths of vertices around the balanced flux condition ( like those in shown in fig .2c for the year 2000 ) , these fluctuations have different entities in different years .changes in the size of ` static ' fluctuations produce the ` temporal ' fluctuations observed in the evolution of , and partly also in the observed value , confirming the important role of node ( im)balances . after controlling for the time - varying entity of node imbalances ( using the wcm ) , we indeed find that the fluctuations of are less pronounced than those of and ( see fig .3b ) . however , the fluctuations of and do not cancel out completely , and their resulting net effect ( the trend of ) is still significant , indicating the strongest level of reciprocity across the three null models . while a binary analysis of the wtw detected an almost monotonic increase of the reciprocity , with a marked acceleration in the 90 s, we find that the weighted reciprocity has instead undergone a rapid decrease over the same decade : this counter - intuitive result confirms that the information conveyed by a weighted analysis of reciprocity is nontrivial and irreducible to the binary picture .we now focus on the reciprocity structure at the local level of vertices , i.e. on the reciprocated and non - reciprocated strength , and defined in eqs.([eq : slr]-[eq : sl ] ) . as clear from eq.([eq : wrec ] ) , this allows us to analyse how different vertices contribute to the overall value of and hence to . 
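Since both the node-balance argument above and the vertex-level analysis that follows revolve around how close each vertex is to the balanced-flux condition $s^{in}_i=s^{out}_i$, a simple per-vertex diagnostic can be computed as in the sketch below. The normalized imbalance used here is one convenient choice of ours, not a quantity defined in the text.

```python
import numpy as np

def vertex_imbalance(w):
    """Per-vertex net flow and relative imbalance.

    Returns s_out - s_in and the normalized imbalance
    (s_out - s_in) / (s_out + s_in), a simple diagnostic of how far
    each vertex is from the balanced-flux condition s_out = s_in.
    """
    w = np.asarray(w, dtype=float)
    s_out = w.sum(axis=1)
    s_in = w.sum(axis=0)
    net = s_out - s_in
    total = s_out + s_in
    rel = np.divide(net, total, out=np.zeros_like(net), where=total > 0)
    return net, rel
```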
in order to assess whether the vertex - specific reciprocity structure is significant , rather than merely a consequence of the local topological properties of vertices , we compare the observed value of , and with their expected values under the wcm and the bcm .unlike the wrg , these models preserve the total strength of each vertex , thus filtering out the effects of the observed heterogenity of vertices . in fig .4 we show the observed and expected values of the ( non-)reciprocated strength versus the total strength for the four networks already shown in fig . 2 in order of increasing reciprocity . ) and reciprocated ( ) or non - reciprocated ( ) strength in four weighted networks in increasing order of reciprocity : a ) the everglades marshes foodweb , b ) the neural network of _ c. elegans _ , c ) the world trade web in the year 2000 , and d ) the social network of a fraternity at west virginia college ( black : real data , green : weighted configuration model , red : balanced configuration model ) . ] for the anti - reciprocal networks with ( the foodweb and , under some null model , the neural network ) , the dominant and less fluctuating contribution to comes from the non - reciprocated strength , and therefore we choose to plot versus ( fig .conversely , for the positively reciprocal networks with ( the world trade web and the social network ) the dominant contribution comes from the reciprocated strength , so we consider versus ( fig.4c - d ) .we found very rich and diverse patterns . in all networks, the selected quantity displays an approximately monotonic increase with .qualitatively , this increasing trend is also reproduced by the two null models .however , we systematically find large differences between the latter and real data . in the foodweb ( fig .4a ) , the observed values of the _ non - reciprocated strength _ are always larger than the expected values ( note that the separation between the two trends is exponentially larger than it appears in a log - log plot ) .this shows that each vertex contributes , roughly proportionally to its total strength , to the overall anti - reciprocity of this network ( and hence , see table 1 ) .by contrast , in the neural network ( fig .4b ) some vertices ( mostly , but not uniquely those with large ) have a larger non - reciprocated strength than expected under the null models , while for other vertices ( mostly those with small ) the opposite is true .this shows that the weak ( and nearly consistent with zero , see table 1 ) overall reciprocity of this network is the result of several opposite contributions of different vertices , that cancel each other almost completely . the world trade web ( fig .4c ) also shows a combination of deviations in both directions , even if in this case for the vast majority of vertices the observed _ reciprocated _ strength is larger than the expected one .this results in the overall positive reciprocity of the network , but again in a such a way that the global information is not reflected equally into the local one . finally , the social network ( fig .4d ) displays a behaviour analogous , but opposite , to that of the foodweb : the observed _ reciprocated _ strength of each vertex systematically exceeds its expected value and gives a proportional contribution to the overall positive reciprocity . 
note that , while the striking similarity between the predictions of the wcm and the bcm in the foodweb and in the world trade web is not surprising , because of the very close node - balance relationship in these two networks ( see fig . 2a and 2c ) , in the neural network and in the social network the similarity between the predictions of the two null modelsis nontrivial , since node balance is strongly violated in these cases ( see fig .2b and 2d ) .having shown that the reciprocity of real weighted networks is very pronounced , we conclude our study by introducing a class of models aimed at correctly reproducing the observed patterns . to this end , rather than proposing untestable models of network formation , we expand the null models we have considered above by enforcing additional or alternative constraints on the reciprocity structure .this approach leads us to define the weighted counterparts of the binary exponential random graphs ( or models ) with reciprocity and their generalizations .we first define three models that exactly reproduce , besides the observed heterogeneity of the strength of vertices , the observed global level of reciprocity ( i.e. such that and , implying ) .our aim is to check whether this is enough in order to reproduce the more detailed , local reciprocity structure . in the first model ( ` weighted reciprocity model ' ,see appendix ) , the constraints are and for each vertex ( as in the wcm ) , and additionally .this model is the analogue of the binary reciprocity model by holland and leinhardt and replicates the overall reciprocity exactly .however , as we discuss in the appendix , it is best suited to reproduce networks that are anti - reciprocal or , more precisely , less reciprocal than the wcm ( ) .therefore , in our analysis we can only apply it to the foodwebs . in fig .5a we show our results on the everglades web . for the sake of comparison with fig .4a , we plot as a function of .we find that , quite surprisingly , the model does not significantly improve the accordance between real and expected trends produced by the wcm and bcm ( see fig.4a ) .the only difference with respect to the latter is that now a few vertices with very large lie below the expected trend , while all the other vertices continue to lie above it ( fig .5a ) producing an overall : so , even if all vertices appeared to contribute evenly and proportionally to the global anti - reciprocity ( see fig .4a ) , adding the latter as an overall constraint is not enough in order to capture the local reciprocity structure . ) and reciprocated ( ) or non - reciprocated ( ) strength in four weighted networks in increasing order of reciprocity : a ) the everglades marshes foodweb , b ) the neural network of _ c. elegans _ , c ) the world trade web in the year 2000 , and d ) the social network of a fraternity at west virginia college ( black : real data , blue : the weighted reciprocity model , orange : the non - reciprocated strength model , green : the reciprocated strength model ; all such models reproduce the global level of reciprocity but not necessarily the local reciprocity structure ) . 
] in our second model ( ` non - reciprocated strength model ' , see appendix ) , the constraints are , ( for each vertex ) , and .this slightly relaxed model ( potentially ) generates all levels of reciprocity .however , it does not automatically reproduce the in- and out - strength sequences , therefore it is only appropriate for networks where and are the dominant contributions to and respectively , so that specifying the former largely specifies the latter as well .so , even if now there are no mathematical restrictions , this model is again only appropriate for networks with negative reciprocity ( ) . in fig .5a we show the predictions of this model on the foodweb : note that , as compared to the previous model , now the quantity is exactly reproduced by construction , while is not reproduced , with most vertices lying above the expected trend and a few dominating ones lying below it .so the result is even worse than before . in fig .5b we also show the performance of this model on the neural network ( which actually displays , even if it still has negative reciprocity under other null models , see table 1 ) : even if the agreement is now much better , most data continue to lie either above or below the expected curve , confirming that the reciprocated strengths can not be simply reconciled with the total strengths .note however that for networks with smaller this model becomes more accurate , and in the limit it exactly reproduces all the strength sequences of any network .our third model ( ` reciprocated strength model ' , see appendix ) is a ` dual ' one appropriate in the opposite regime of strong positive reciprocity ( i.e. , especially in the limit ) .the constraints are now ( for each vertex ) and the total weight ( note that , as a consequence , also the non - reciprocated total weight is kept fixed ) .this model is most appropriate for networks where is the dominant contribution to . in fig .5c we show the predictions of this model on the world trade web .now is obviously always reproduced , while instead is not reproduced for all vertices . in fig .5d we show the results for the social network , and in this case we find that the model reproduces real data remarkably well .this confirms that the model is particularly appropriate for strongly reciprocal networks .we therefore find that , as in the dual case discussed above , if the overall reciprocity is moderate then the constraints are in general not enough in order to characterize the local reciprocity structure . however , in networks with strong overall reciprocity , this model accurately ( and exactly in the limit ) reproduces all the local reciprocity structure .the above three models produce the correct level of global reciprocity ( i.e. or ) but not necessarily the correct local reciprocity structure . in networks with strong ( either positive or negative ) reciprocity , the local reciprocity structurecan be simply inferred from the global one , plus some information about the heterogeneity of vertices ( some strength sequence ) .conversely , in networks with moderate reciprocity the local patterns are irreducible to any overall information , and thus constitute intrinsic heterogeneous features . in this case , it is unavoidable to use a model that fully reproduces the three quantities , and separately for each vertex , by treating them as constraints . in the appendix we describe this model , that we denote as the weighted reciprocated configuration model ( wrcm ) in detail . 
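The WRCM constrains, for every vertex, the triple of non-reciprocated out-strength, non-reciprocated in-strength and reciprocated strength. A minimal sketch of how these constraints can be extracted from data and checked on candidate randomized networks is given below; the analytical form of the model itself is given in the Appendix and is not reproduced here, so the check only verifies that a sample respects the constraints.

```python
import numpy as np

def reciprocity_constraints(w):
    """Per-vertex triple (s^->, s^<-, s^<->) preserved by the WRCM."""
    w = np.asarray(w, dtype=float)
    w_rec = np.minimum(w, w.T)
    s_rec = w_rec.sum(axis=1)
    s_out_nr = w.sum(axis=1) - s_rec
    s_in_nr = w.sum(axis=0) - s_rec
    return np.column_stack([s_out_nr, s_in_nr, s_rec])

def matches_wrcm_constraints(w_obs, w_sample, tol=1e-9):
    """Check that a candidate network reproduces the observed constraints
    (useful when validating networks sampled from the model)."""
    return np.allclose(reciprocity_constraints(w_obs),
                       reciprocity_constraints(w_sample), atol=tol)
```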
using this model , all the plots in fig .5 are automatically reproduced exactly , by construction .therefore we believe that this model represents an important starting point for future analyses of higher - order topological properties in weighted networks .in particular , we foresee two main applications .the first application is to the analysis of weighted ` motifs ' , i.e. the abundances of all topologically distinct subgraphs of three or four vertices . in the binary case, it has been realized that such subgraphs are important building blocks of large networks , and that their abundance is not trivially explained in terms of the dyadic structure .this result can only be obtained by comparing the observed abundances with their expectation values under a null model that separately preserves the number of reciprocated and non - reciprocated ( in - coming and out - going ) links of each vertex . in the weighted case ,no similar analysis has been carried out so far , because of the lack of an analogous method , like the wrcm defined here , to control for the reciprocated and non - reciprocated connectivity properties separately .the second application is to the problem of community detection in weighted directed networks , i.e. the identification of densely connected modules of vertices .most approaches attempt to find the partition of the network that maximizes the so - called ` modularity ' , i.e. the total difference between the observed weights of intra - community links and their expected values under the wcm . in networks where the observed reciprocity is not reproduced by the wcm ( as all networks in the present study ), the difference between observed and expected weights is not necessarily due to the presence of community structure , as it also receives a ( potentially strong ) contribution by the reciprocity .this means that , in order to filter out the effects of reciprocity from community structure , in the modularity function one should replace the expected values under the wcm with the expected values under the wrcm . the ever - increasing gap between the growth of data about weighted networks and our poor understanding of their dyadic properties led us to propose a rigorous approach to the reciprocity of weighted networks .we showed that real networks systematically display a rich and diverse reciprocity structure , with several interesting patterns at the global and local level .we believe that our results form an important starting point to answer many open questions about the effect of reciprocity on higher - order structural properties and on dynamical processes taking place on real weighted networks .equation ( [ eq : rho ] ) in the results section introduces the quantity , as the normalized difference between the observed value of the weighted reciprocity and its expected value under a chosen null model .the introduction of has two important consequences .firstly , networks with different parameters can be ranked from the most to the least reciprocal using the measured value of . secondly , and consequently, the reciprocity of a network that evolves in time can be tracked dynamically using even if other topological properties of the network change ( as is typically the case ) . clearly , the above considerations apply not only to the global quantity , but also to the edge- and vertex - specific definitions we have introduced in eqs.([eq : wlr]-[eq : wl ] ) and ( [ eq : slr]-[eq : sl ] ) . 
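For the community-detection application mentioned above, the required change is confined to the expected-weight term of the modularity function. The sketch below makes that term pluggable: by default it uses a WCM-like strength-based expectation, and expectations computed under the WRCM (not reproduced here) can be passed in instead. This is an illustrative implementation of ours, not code from the original study.

```python
import numpy as np

def modularity(w, labels, expected=None):
    """Directed weighted modularity with a pluggable expected-weight matrix.

    w        : (n, n) weight matrix (zero diagonal assumed).
    labels   : length-n array of community labels.
    expected : (n, n) matrix of expected weights <w_ij>.  If None, the usual
               strength-based expectation s_out_i * s_in_j / W is used
               (a WCM-like choice); passing expectations that also preserve
               the reciprocity structure (e.g. computed under the WRCM)
               filters reciprocity out of the community signal.
    """
    w = np.asarray(w, dtype=float)
    W = w.sum()
    if expected is None:
        expected = np.outer(w.sum(axis=1), w.sum(axis=0)) / W
        np.fill_diagonal(expected, 0.0)   # ignore self-pairs
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return ((w - expected) * same).sum() / W
```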
for this reason , in the appendix we introduce and study three important null models in great detail .we briefly describe these models below . to start with, we consider a network model with the same total weight as the real network but with no tendency towards or against reciprocation , i.e. a directed version of the weighted random graph ( wrg ) model .this allows us to quantify for the first time the baseline level of reciprocity expected by chance in a directed network with given total weight .however , this null model is severely limited by the fact that it is completely homogeneous in two respects ( see the appendix ) : it generates networks where each vertex has the same expected in- and out - strength ( ) , and moreover this value is common to all vertices ( ) . a popular and more appropriate null model that preserves the observed intrinsic heterogeneity of vertices is one where all vertices have the same in - strength and out - strength as in the real network , i.e. the directed weighted configuration model ( wcm ) . in such model ,since and , the two sources of homogeneity characterizing the wrg are both absent : each vertex has different values of the in - strength and out - strength , and these values are also heterogeneously distributed across vertices . in other words , this model preserves the in- and out - strength sequences separately .another important null model that we introduce here for the first time is one that allows us to conclude whether the observed asymmetry of fluxes is consistent with a fluctuation around a balanced network ( i.e. one where the net flow at each vertex is zero ) .this model , that we denote as the balanced configuration model ( bcm ) , is somewhat intermediate between the above two models , as it assumes ( like the wrg ) that the expected in- and out - strength of each vertex are the same , i.e. that the two observed values and are fluctuations around a common expected value , but at the same time preserves ( as the wcm ) the strong heterogeneity of vertices ( i.e. in general if ) .this model preserves the total strength of each vertex , but not the in- and out - strength separately .+ note that all the above null models preserve the total weight of the original network , i.e. .however , they do not automatically preserve the reciprocity ( neither locally nor globally ) .our aim is to understand whether the observed reciprocity can be simply reproduced by one of the null models ( and is therefore trivial ) , or whether it deviates systematically from the null expectations . in the next sectionwe show that the latter is true , and that the reciprocity structure is a robust and novel pattern characterizing weighted networks . as we show in the appendix , it is possible to characterize all the above null models analytically , and thus to calculate the required expected values exactly . even if the final expressions are rather simple , their derivation is in some cases quite involved and requires further developments of mathematical results that have appeared relatively recently in the literature .moreover , the crucial step that fixes the values of the parameters of all models requires the application of a maximum - likelihood method that has been proposed by two of us only recently .it is for the above reasons , we believe , that the reciprocity of weighted networks has not been studied as intensively as its binary counterpart so far . 
by putting all the pieces together , we are finally able to approach the problem in a consistent and rigorous way .importantly , the framework wherein our null models are introduced ( maximum - entropy ensembles of weighted networks with given properties ) extends to the weighted case , and at the same time formally unifies , recent randomization approaches proposed by physicists and well - established models of social networks introduced by statisticians , i.e. the so - called exponential random graphs or models ( see the appendix ) .while a variety of specifications for the latter exist in the binary graph case , very few results for weighted graphs are available .our contribution opens the way for the introduction of more general families of exponential random graphs for weighted networks . indeed , besides the null models discussed above, we will also introduce the first models that correctly reproduce the observed reciprocity structure , either at the global ( but not necessarily local ) level , or at the local ( and consequently also global ) level .it is worth mentioning that our approach makes use of exact analytical expressions , and allows to find the correct values of the parameters both in the null models and in the models with reciprocity .by contrast , the common methods available in social network analysis to estimate binary exponential random graphs rely on approximate techniques such as markov chain monte carlo or pseudo - likelihood approaches .another advantage is that the method we employ allows us to obtain the expected value of any topological property mathematically , and in a time as short as that required in order to measure the same property on the original network . unlike other randomization approaches , we do not need to computationally generate several randomized variants of the original network and take ( approximate , and generally biased ) sample averages over them .comparing real data with the above null models , and the null models among themselves , allows us to separate different sources of heterogeneity observed in networks .this is a key step towards understanding the origin of the reciprocity structure of real weighted networks .d. g. acknowledges support from the dutch econophysics foundation ( stichting econophysics , leiden , the netherlands ) with funds from beneficiaries of duyfken trading knowledge bv , amsterdam , the netherlands .f. r. acknowledges support from the fessud project on `` financialisation , economy , society and sustainable development '' , seventh framework programme , eu .before considering the reciprocity of weighted networks , we briefly recall the basic definitions in the binary case , that were originally introduced to describe the mutual relations taking place between vertex pairs . for binary ,directed networks the reciprocity is defined as the fraction of links having a `` partner '' pointing in the opposite direction : where and .the above quantity , , is not independent on the link density ( or connectance ) : on the contrary , it can be shown that is the expected value of under the directed random graph model ( drg in what follows ) . in the drg, a directed link is placed with probability between any two vertices , i.e. ( with ) .this implies showing that the expected value of coincides with the fundamental parameter of this null model , and hence depends on and . 
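Assuming the standard binary definitions summarized above, $r=L^{\leftrightarrow}/L$ with $L^{\leftrightarrow}=\sum_{i\neq j}a_{ij}a_{ji}$ and $L=\sum_{i\neq j}a_{ij}$, and with the DRG expectation equal to the link density, these quantities can be computed as in the minimal sketch below; the code is ours and is not taken from the cited works.

```python
import numpy as np

def binary_reciprocity(a):
    """r = L^<-> / L, with L^<-> = sum_ij a_ij * a_ji and L = sum_ij a_ij."""
    a = np.array(a, dtype=float)
    np.fill_diagonal(a, 0.0)
    return (a * a.T).sum() / a.sum()

def expected_reciprocity_drg(a):
    """Expected r under the directed random graph: the density L / (N (N - 1))."""
    a = np.array(a, dtype=float)
    np.fill_diagonal(a, 0.0)
    n = a.shape[0]
    return a.sum() / (n * (n - 1))

def binary_rho(a):
    """rho = (r - <r>) / (1 - <r>): the density-corrected binary reciprocity."""
    r = binary_reciprocity(a)
    rbar = expected_reciprocity_drg(a)
    return (r - rbar) / (1.0 - rbar)
```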
in order to assess whether there is positive or negative reciprocity, one should compare the measured with its expected value .this means that can not be used to consistently rank networks with different values of and , because they have different reference values .also , and consequently , can not be used to track the evolution of a network that changes in time , because and/or will also change .this is why a different definition of reciprocity was proposed , trying to control for the time - varying properties by means of the pearson correlation coefficient between the transpose elements of the adjacency matrix : a symmetrical adjacency matrix ( as those for binary , undirected networks ) represents a network with the highest values of and ( both equal to 1 ) , whereas a fully asymmetrical one , with values mirroring values on opposite sides of the main diagonal ( like a triangular matrix ) , displays the lowest value , being and ) .this meaningful definition of reciprocity automatically discounts density effects , i.e. the expectation value of ( under the drg ) . as a result ,consistent rankings and temporal analyses become possible in terms of .in what follows we provide additional information about the possible generalization of the reciprocity to the weighted case . by looking at eq.([rho ] ) , it is not clear whether a generalization to the weighted case should start from the first term on the left ( i.e. as a correlation coefficient ) or from the last term on the right ( i.e. as the normalized excess from a random expectation ) .this ambiguity comes from the fact that , for weighted networks , those two terms are no longer equivalent ( as we now show ) .we therefore start by attempting the first route , and then consider the second one . if we follow the binary recipe from left to right , we define the weighted reciprocity as the pearson correlation coefficient ( where , as usual , ) .after some algebra , this implies where , in order to produce a result formally equivalent to eq.([rho ] ) , we have defined the weighted analogues of and as follows : note that the equivalence , valid for the binary case , no longer holds : .the previous expressions generalize the binary ones and reduce to them when substituting the s in place of the s .moreover , interestingly enough , the coefficient can be expressed as a function of the weights distribution mean , , and standard deviation , , or , in an equivalent way , as a function of the so - called coefficient of variation , , as we could be tempted to interpret as the weighted counterpart of the binary connectance and , as the weighted counterpart of eq.([r ] ) .however , we can show a simple case for which the above `` product - over - squares '' definition above fails in measuring our intuitive notion of reciprocity .let us consider a simple network like that in fig . 
1 .if we calculate by choosing , we obtain where the sum in the denominator includes all the weights different from the central ones .now , let us imagine a second situation where ; the calculations , now , would give and we would intuitively require that , for every choice of the involved weights , because of the greater disparity between the two central flows .however , it can be shown that under certain circumstances exactly the opposite result is obtained , by simply changing the non - central weights .in fact , by choosing the latter to satisfy the condition the very counter - intuitive result is obtained .this shows that eq.([rhow ] ) is not a good choice for a weighted extension of eq.([rho ] ) . before considering the alternative route, we observe that we could also imagine to define a slightly different correlation coefficient , only between the two triangular blocks of the weighted adjacency matrix : the upper - diagonal one and the lower - diagonal one .this would be defined as where is the upper - diagonal mean and is the lower - diagonal mean .again , this definition has an undesirable performance .this is evident if we imagine a matrix whose transposed entries are defined as and ( with ) . in this case , we would have independently of the value of !so we could arbitrarily rise or lower the value of , thus making the matrix more and more asymmetric , without measuring this effect at all .note that this circumstance is impossible in the binary case , as all weights are forced to be either zero or one , and therefore the only allowed value for is one .the two examples above show that correlation - based definitions of reciprocity , while having a satisfactory behaviour in the binary case , become problematic in the weighted one .unfortunately , the few attempts that have been proposed so far in order to characterize the reciprocity of weighted networks are all based on measures of correlation or symmetry between mutual weights . later, we show that symmetry - based measures are also flawed . together with our results above, this means that all the available measures fail in providing a consistent and interpretable characterizaton of the reciprocity of weighted networks .we now consider the second route , i.e. a definition that starts from generalizing the last term in eq.([rho ] ) .this means that we are now free to first generalize in a satisfactory way , rather than as a forced effect of the correlation - based definition , and then calculate its expected value under some appropriate null model . to this end , we note that the binary nature of the variables defining allows us to rewrite it in a very suggestive way : }{\sum_{i\neq j}a_{ij}}.\ ] ] the previous relation is consistent with the intuitive meaning of reciprocity , as a measure of the quantity of mutually - exchanged flux between vertices .so we can extend this definition to the weighted case , to obtain }{\sum_{i\neq j}w_{ij}}. 
\label{rmin}\ ] ] where we have defined the _total reciprocated weight _ as ] and having defined now , the most challenging calculation is about the partition function .this can be done by rewriting the hamiltonian solely in terms of the variables , and , \end{aligned}\ ] ] and considering the admissible states for them : where and .so the partition function becomes ( having posed , and ) and , consequently , the probability coefficient for the generic configuration is now , the maximum - likelihood principle prescribes to maximize \end{aligned}\ ] ] with respect to , and .the solution to the previous optimization problem can be found by solving the system where now , the expected value of the minimum between and is \rangle_{wrm}^*=\langle w_{ij}^{\leftrightarrow}\rangle_{\vec{\theta}^*} ] and $ ] ) .killworth p. d. , bernard h. r. & sailer l. informant accuracy in social network data iv : a comparison of clique - level structure in behavioral and cognitive network data ._ social networks _ * 2 * , 191 - 218 ( 1979 ) .killworth p. d. , bernard h. r. & sailer l. informant accuracy in social - network data v. an experimental attempt to predict actual communication from recall data ._ social science research _ * 11 * , 30 - 66 ( 1982 ) .
In directed networks, reciprocal links have dramatic effects on dynamical processes, network growth, and higher-order structures such as motifs and communities. While the reciprocity of binary networks has been extensively studied, that of weighted networks is still poorly understood, implying an ever-increasing gap between the availability of weighted network data and our understanding of their dyadic properties. Here we introduce a general approach to the reciprocity of weighted networks, and define quantities and null models that consistently capture empirical reciprocity patterns at different structural levels. We show that, counter-intuitively, previous reciprocity measures based on the similarity of mutual weights are uninformative. By contrast, our measures allow us to consistently classify different weighted networks according to their reciprocity, track the evolution of a network's reciprocity over time, identify patterns at the level of dyads and vertices, and distinguish the effects of flux (im)balances or other (a)symmetries from a true tendency towards (anti-)reciprocation.
in recent years , molecular genetic data have been used extensively to learn about the evolutionary processes that gave rise to the observed genetic variation .one important example is the use of genetic data to try to infer whether or not gene flow occurred between closely related species during or after speciation ( see for example reviews by nadachowska 2010 , pinho and hey 2010 , smadja and butlin 2011 , bird et al . 2012 and a large number of references in these papers ) .such applications typically use computer programs such as _ mdiv _ ( nielsen and wakeley 2001 ) , _ i m _ ( hey and nielsen 2004 , hey 2005 ) , _ i m a _ ( hey and nielsen 2007 ) , _ mimar _ ( becquet and przeworski 2007 ) or _ ima2 _( hey 2010 ) , based on the isolation with migration " ( i m ) model , which assumes that a panmictic ancestral population instantaneously split into two or more descendant populations some time in the past and that migration occurred between these descendant populations at a constant rate ever since . whilst such methods have been extensively and successfully applied to study the relationships between different populations within species , the assumption of migration continuing at a constant rate until the present is unrealistic when studying relationships between species .becquet and przeworski ( 2009 ) and strasburg and rieseberg ( 2010 ) investigated by means of simulations how robust parameter estimates ( migration rates , divergence times and effective population sizes ) , obtained by methods based on the i m model , are to violations of the i m model assumptions .becquet and przeworski ( 2009 ) found that parameter estimates obtained with _i m _ and _ mimar _ are often biased when the assumptions of the i m model are violated , and concluded that these methods are highly sensitive to the assumption of a constant migration rate since the population split .theoretical results derived by wilkinson - herbots ( 2012 ) also suggested that estimated levels of gene flow obtained by applying an i m model to species which are now completely isolated can not simply be interpreted as average levels of gene flow over time .thus , whilst computer programs based on the i m model can be used to test for departure from a complete isolation model ( which assumes that an ancestral population instantaneously split into two or more descendant populations which have been completely isolated ever since ) , the actual parameter estimates obtained may be difficult to interpret if the i m model is not an accurate description of the history of the populations or species concerned .furthermore , in some studies the programs _i m a _ or _ima2 _ have also been used to estimate the times when migration events occurred , and to try to distinguish between scenarios of speciation with gene flow and scenarios of introgression , and it has recently been demonstrated that such inferences about the timing of gene flow ( obtained with programs based on the i m model ) are not valid ( strasburg and rieseberg 2011 , sousa et al .becquet and przeworski ( 2009 ) , strasburg and rieseberg ( 2011 ) and sousa et al . (2011 ) all suggested that more realistic models of population divergence and speciation are needed . 
a first step in trying to make the i m model more suitable for the study of speciation is to allow gene flow to occur during a limited period of time , followed by complete isolation of the species , and such modelshave been studied by teshima and tajima ( 2002 ) , innan and watanabe ( 2006 ) and wilkinson - herbots ( 2012 ) . in the latter paper , a model of isolation with an initial period of migration " ( iim )was studied where a panmictic ancestral population gave rise to two or more descendant populations which exchanged migrants symmetrically at a constant rate for a period of time , after which they became completely isolated from each other .explicit analytical expressions were derived for the probability that two dna sequences ( from the same descendant population or from different descendant populations ) differ at nucleotide sites , assuming the infinite sites model of neutral mutation .it was suggested that these results may be useful for maximum likelihood estimation of the parameters of the model , if one pair of dna sequences is available at each of a large number of independent loci , and that such an ml method should be very fast as it is based on an explicit expression for the likelihood rather than on computation by means of numerical approximations or mcmc simulation .however the proposed ml estimation method had not yet been implemented and its usefulness was still to be demonstrated . in this paperwe present an implementation of the ml estimation method for the parameters of the isolation with an initial period of migration " model proposed in wilkinson - herbots ( 2012 ) , and we illustrate its potential by applying it to a set of dna sequence data from _ drosophila simulans _ and _ drosophila melanogaster _previously analysed by wang and hey ( 2010 ) and lohse et al .the parameter estimates and maximized likelihood obtained for the iim model are also compared to those under an i m model and an isolation model , and it is illustrated that the method makes it possible to distinguish between more and less plausible evolutionary scenarios , by comparing aic scores or by means of likelihood ratio tests .the implementation was done in r ( r development core team 2011 ) .the r code for the mle method described is included as supplementary material and can readily be pasted into an r document .this mle algorithm is very fast indeed : for the drosophila data mentioned above ( using the number of nucleotide differences between two dna sequences at each of approximately 30,000 loci ) , we obtained estimates of the parameters of the iim model using a desktop pc ; the program returned results instantly ( i.e. in a small fraction of a second ) if all loci had been trimmed to the same estimated mutation rate ( as proposed by lohse et al .2011 ) , or in approximately 20 seconds if the full sequences ( and hence different mutation rates at different loci ) were used . whereas most earlier methods used large samples from a small number of loci , recent advances in dna sequencing technology and the advent of whole - genome sequencing have led to an increased interest in methods which ( like that described in this paper ) can handle data from a small number of individuals at a large number of loci ( for example , takahata 1995 ; takahata et al .1995 ; takahata and satta 1997 ; yang 1997 , 2002 ; innan and watanabe 2006 ; wilkinson - herbots 2008 ; burgess and yang 2008 ; wang and hey 2010 ; yang 2010 ; hobolth et al . 2011 ; li and durbin 2011 ; lohse et al . 
2011 ; wilkinson - herbots 2012 ; zhu and yang 2012 ; andersen et al .this type of data set has two advantages .firstly , data from even a very large number of individuals at the same locus tend to contain only little information about very old divergence or speciation events because typically the individuals ancestral lineages will have coalesced to a very small number of ancestral lineages by the time the event of interest is reached , and in such contexts a data set consisting of a small number of dna sequences at each of a large number of independent loci may be more informative ( maddison and knowles 2006 ; wang and hey 2010 ; lohse et al . 2010 , 2011 ) .secondly , considering small numbers of sequences at large numbers of independent loci is mathematically much easier and computationally much faster than working with large numbers of sequences at the same locus . in particular , explicit analytical expressions for the likelihoodhave been obtained for pairs or triplets of sequences for a number of demographic models ( for example , takahata et al .1995 ; wilkinson - herbots 2008 ; lohse et al . 2011 ; wilkinson - herbots 2012 ; zhu and yang 2012 ) , which substantially speeds up computation and maximization of the likelihood .we obtain maximum likelihood estimates of the parameters of the isolation with initial migration " ( iim ) model studied in wilkinson - herbots ( 2012 ) , for the case of descendant populations or species .this model assumes that , time ago ( ) , a panmictic ancestral population instantaneously split into two descendant populations which subsequently exchanged migrants symmetrically at a constant rate until time ago ( ) , when they became completely isolated from each other ( see figure [ fig1 ] ) .focusing on dna sequences at a single locus that is not subject to intragenic recombination , the ancestral population is assumed to have been of constant size homologous sequences until the split occurred time ago , where is large . between time ago and time ago , the two descendant populations were of constant size sequences each and exchanged migrants at a constant rate , where is the proportion of each descendant population that was replaced by immigrants each generation. the current size of descendant population is sequences ( ) , and is assumed to have been constant since migration ended time ago .as is standard in coalescent theory and assuming that reproduction within populations follows the neutral wright - fisher model , time is measured in units of generations ( this also applies to the times and ) ; in practical applications where the wright - fisher model does not hold , is interpreted as the effective population size .the scaled " migration and mutation rates are defined as and , respectively , where is the mutation rate per dna sequence per generation at the locus concerned .the work described in this paper assumes that mutations are selectively neutral and follow the infinite sites model ( watterson 1975 ) . extensions to other neutral mutation models are feasible but have not yet been implemented . 
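To make the model concrete, the sketch below simulates the number of nucleotide differences between one pair of sequences at a single locus under this IIM history, using a simple structured-coalescent event loop. Since the exact scalings are not legible above, it assumes the common conventions $\theta=4N\mu$ per locus, $M=4Nm$, and time measured in units of $2N$ generations, with $2N$ the size of each descendant population during the migration stage; it is an illustrative sketch of ours, not the simulation code used by the authors.

```python
import numpy as np

def sim_pair_differences(theta, tau1, tau0, M, a=(1.0, 1.0), c=1.0,
                         same_pop=None, rng=None):
    """Simulate pairwise differences at one locus under an IIM history.

    Time is in units of 2N generations; theta = 4*N*mu for this locus,
    M = 4*N*m, tau1 = scaled time since the end of gene flow, tau0 = scaled
    time since the ancestral split (tau0 > tau1), a = relative sizes of the
    two descendant populations after gene flow ended, c = relative size of
    the ancestral population.  same_pop is 0 or 1 if both sequences come
    from that descendant population, or None for a between-population pair.
    """
    rng = np.random.default_rng() if rng is None else rng
    # --- isolation phase (present back to tau1): no migration ------------
    if same_pop is not None:
        t_coal = rng.exponential(a[same_pop])      # coalescence rate 1/a_i
        if t_coal < tau1:
            return rng.poisson(theta * t_coal)
    t = tau1
    demes = [same_pop, same_pop] if same_pop is not None else [0, 1]
    # --- migration phase (tau1 back to tau0): structured coalescent ------
    while t < tau0:
        coal_rate = 1.0 if demes[0] == demes[1] else 0.0
        total_rate = coal_rate + M                 # two lineages, rate M/2 each
        if total_rate == 0.0:
            break
        wait = rng.exponential(1.0 / total_rate)
        if t + wait >= tau0:
            break
        t += wait
        if rng.random() < coal_rate / total_rate:
            return rng.poisson(theta * t)          # coalescence
        k = rng.integers(2)                        # otherwise one lineage migrates
        demes[k] = 1 - demes[k]
    # --- ancestral phase (beyond tau0): one population of relative size c
    t_coal = tau0 + rng.exponential(c)
    return rng.poisson(theta * t_coal)
```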
for this iim model, wilkinson - herbots ( 2012 ) found the probability that two homologous dna sequences differ at nucleotide sites : denoting by the number of nucleotide differences between two homologous sequences sampled from descendant populations and ( ) and using the subscript to indicate the scaled mutation rate at the locus concerned , we have for for a pair of sequences from the same descendant population , and + for a pair of sequences from different descendant populations , where with and where thus for pairwise difference data of the form , consisting of the number of nucleotide differences between one pair of dna sequences at each of independent loci , the likelihood under the iim model is given by where and denote the locations ( i.e. the population labels ) of the two dna sequences sampled at the locus , and where each factor is given by equation ( [ distribution0 ] ) or ( [ distribution1 ] ) as appropriate , replacing by the scaled mutation rate for the locus .note that the above derivation of the likelihood assumes that there is no recombination within loci and free recombination between loci . in order to jointly estimate all the parameters of the model , data from between - population sequence comparisons as well as within - population sequence comparisons from both populationsshould be included in the above likelihood ( where each pairwise comparison must be at a different , independent locus ) .the above explicit formula for the likelihood allows rapid computation and maximization , so that ml estimates can easily be obtained , as well as aic scores and likelihood ratios comparing the iim model with competing models such as the complete isolation " model and the symmetric isolation - with - migration " model .similar ml methods were first developed by takahata et al .( 1995 ) for a number of different demographic models : a single population of constant size , a population undergoing an instantaneous change of size , and complete isolation models for two and for three species .innan and watanabe ( 2006 ) developed an extension of takahata et al.s mle method to a more sophisticated model of gradual population divergence than the iim model considered in this paper , but their calculation of the likelihood relies on numerical computation of the probability density function of the coalescence time using recursion equations on a series of time points ( and then numerically integrating over the coalescence time to find the probability of nucleotide differences ) , which can be time - consuming ; in addition , the accuracy of their recursion and likelihood calculation depend on the number of time points considered .the method described in the present paper assumes a simpler model than innan and watanabe s , but is faster because both the calculation of the pdf of the coalescence time and the integration over the coalescence time to find the probability of nucleotide differences have already been done , in an exact way , to give equations ( [ distribution0 ] ) and ( [ distribution1 ] ) above , leaving far less computation to be done . 
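The resulting likelihood is simply a product over loci of the appropriate per-locus probability. The scaffolding below shows how such a likelihood can be assembled and maximized numerically; the per-locus probability function is deliberately left as a placeholder, since the explicit expressions of eqs. (1)-(2) are not reproduced here, and the optimizer choice is ours rather than that of the published R implementation.

```python
import numpy as np
from scipy.optimize import minimize

def iim_loglik(params, counts, rel_rates, pair_types, pmf):
    """Log-likelihood for pairwise-difference data under the IIM model.

    counts     : number of nucleotide differences at each locus.
    rel_rates  : relative mutation rate of each locus (treated as known).
    pair_types : 'within' or 'between' for each locus, i.e. whether the two
                 sequences come from the same descendant population.
    pmf        : function pmf(s, params, rel_rate, pair_type) returning the
                 probability of s differences; a stand-in for the explicit
                 expressions of eqs. (1)-(2), which are not reproduced here.
    """
    loglik = 0.0
    for s, rate, kind in zip(counts, rel_rates, pair_types):
        p = pmf(s, params, rate, kind)
        if p <= 0.0:
            return -np.inf
        loglik += np.log(p)
    return loglik

def fit_iim(start, counts, rel_rates, pair_types, pmf):
    """Maximize the likelihood numerically (Nelder-Mead shown as an example)."""
    neg = lambda x: -iim_loglik(x, counts, rel_rates, pair_types, pmf)
    return minimize(neg, x0=np.asarray(start, dtype=float),
                    method="Nelder-Mead")
```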
in order to obtain good starting values for the likelihood maximization under the iim model , our implementation first fits a complete isolation model , as the latter model gives a more tractable likelihood surface and is therefore less sensitive to the choice of starting values .the parameter estimates obtained for the complete isolation model are then used as starting values to fit the iim model ; this was found to reduce the possibility that the program might otherwise converge on a local maximum rather than on the global maximum of the likelihood under the iim model . whilst the theoretical results obtained make it possible to directly compute ml estimates of the original parameters of the iim model as described above ( ) ,our implementation uses the following reparameterizations as this improved performance and robustness : ( similar to the choice of parameters in , for example , yang 2002 , and hey and nielsen 2004 ) , i.e. and represent respectively the time since the end of gene flow and the duration of the period of gene flow , measured by twice the expected number of mutations per lineage during the period concerned ; and are the population size parameters " of the current descendant populations and the ancestral population , respectively ( is the population size parameter of each descendant population during the migration stage of the model ) .ml estimates are obtained jointly for the parameters , and these can readily be converted to ml estimates of the original model parameters if required .our computer code is included as supplementary material .equations ( 1 ) and ( 2 ) rely on the assumption of symmetric migration and equal population sizes during the period of gene flow . without these assumptions ,an explicit analytical formula for the likelihood becomes difficult to obtain .simulation results suggest however that our method is reasonably robust to minor violations of these assumptions ; furthermore , it is possible to extend our method to allow for asymmetric migration and unequal population sizes ( costa rj and wilkinson - herbots hm , work in progress ) .to illustrate the mle method for the iim model described above , we applied this method to the genomic data set of _d. simulans _ and _ d. melanogaster _ compiled and analyzed by wang and hey ( 2010 ) , and reanalyzed by lohse et al .( 2011 ) , who both used an i m model assuming a constant migration rate from the onset of speciation until the present .the data consist of alignments of 30,247 blocks of intergenic sequence of 500 bp each , from two inbred lines of _d. simulans _ and from one inbred line each of _ d. melanogaster _ and _ d. yakuba _ , and have been pre - processed as described in wang and hey ( 2010 ) and lohse et al .we also follow these authors in using _ d. yakuba _ as an outgroup to estimate the relative mutation rate at each locus . as our method uses the number of nucleotide differences between one pair of sequences at each locus , we had to choose two of the three sequences ( which we will denote by _d.sim_1 , _ d.sim_2 and _ d.mel _ for brevity ) at each locus .there are of course many ways in which this can be done ( for example , one could choose two sequences at random at each locus , as done by wang and hey 2010 ) . 
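The two-stage strategy described at the start of this section (fit the complete-isolation model first, then use its estimates as starting values for the full IIM model) can be wrapped as follows, reusing fit_iim from the previous sketch. The parameter ordering and the mapping from isolation-model estimates to IIM starting values are assumptions made purely for illustration; the authors' R code may organize this differently.

```python
def fit_two_stage(counts, rel_rates, pair_types, pmf_isolation, pmf_iim,
                  start_iso):
    """Two-stage fit: complete-isolation model first, then the full IIM model.

    The isolation-model estimates seed the IIM fit, mirroring the strategy
    described in the text of using the more tractable sub-model to obtain
    starting values.  The split of the divergence time and the small initial
    migration rate below are one plausible choice, not the authors'.
    """
    iso = fit_iim(start_iso, counts, rel_rates, pair_types, pmf_isolation)
    t_split, theta1, theta_anc = iso.x            # assumed parameter ordering
    start_full = [t_split / 2.0,                  # time since end of gene flow
                  t_split / 2.0,                  # duration of migration period
                  theta1, theta1, theta_anc,      # size parameters
                  0.01]                           # small initial migration rate
    return fit_iim(start_full, counts, rel_rates, pair_types, pmf_iim)
```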
in order to use all the data , and to be able to check to what extent our results depend on our choice of sequences, we took the following approach : three ( overlapping ) pairwise data sets were formed by alternately assigning loci to the comparisons _d.mel - d.sim_1 , _d.mel - d.sim_2 and _d.sim_1 - _ d.sim_2 , where data set 1 starts with _ d.mel _- _ d.sim_1 at locus 1 ( _ d.mel _ - _ d.sim_2 at locus 2 , _ d.sim_1 - _ d.sim_2 at locus 3 , and so on ) , data set 2 uses __ - _ d.sim_2 at locus 1 ( _ d.sim_1 - _ d.sim_2 at locus 2 , ) , and data set 3 starts with _d.sim_1 - _ d.sim_2 at locus 1 ( _ d.mel _ - _ d.sim_1 at locus 2 , ) .thus each sequence at each locus is used in exactly two of the three data sets , and each data set contains between - species differences at approximately 20,000 loci and within - species ( _ d . simulans _ ) differences at approximately 10,000 loci .ml estimates and estimated standard errors for the iim model parameters represents the current size of _ d. simulans_. we used the main wang and hey ( 2010 ) data set used also by lohse et al .( 2011 ) , which does not include any _d. melanogaster _ pairs and hence contains no information on the population size parameter corresponding to the current size of _ d. melanogaster _ ( this parameter does not appear in the likelihood of these data and thus can not be estimated here ) . ] were obtained for each of the three data sets and then averaged over the three data sets . for comparison ,in addition to fitting an iim model as described above , we also obtained ml estimates assuming an iim model with ( or equivalently , , i.e. not allowing for a change of population size at the end of the migration period ) , a symmetric i m model ( which corresponds to putting in the iim model ) , and a complete isolation model ( which corresponds to assuming in addition to ; the size of descendant population 2 is irrelevant here ) .the results are given in table [ tab1 ] .llrrrrrrrrr + data & model & & & & & & & & & aic + + set 1 & isolation & & 13.66 & & 5.67 & 4.69 & & -90,097.17 & 1,175.66 & 180,200.34 + & i m & & 14.79 & & 5.54 & 3.97 & 0.0209 & -89,509.34 & 270.65 & 179,026.68 + & iim with & 5.87 & 9.99 & & 5.47 & 3.48 & 0.1389 & -89,374.01 & 518.43 & 178,758.03 + & iim & 7.22 & 9.38 & 6.70 & 2.68 & 3.22 & 0.0888 & -89,114.80 & & 178,241.60 + & ( s.e . ) & ( 0.20 ) & ( 0.16 ) & ( 0.11 ) & ( 0.12 ) & ( 0.09 ) & ( 0.0059 ) & & & + + set 2 & isolation & & 13.57 & & 5.72 & 4.77 & & -90,339.14 & 1,185.38 & 180,684.28 + & i m & & 14.80 & & 5.58 & 3.98 & 0.0227 & -89,746.45 & 347.39 & 179,500.90 + & iim with & 6.27 & 9.88 & & 5.49 & 3.37 & 0.1793 & -89,572.76 & 685.95 & 179,155.51 + & iim & 7.62 & 9.43 & 6.85 & 2.31 & 3.05 & 0.0904 & -89,229.78 & & 178,471.56 + & ( s.e . ) & ( 0.17 ) & ( 0.15 ) & ( 0.11 ) & ( 0.11 ) & ( 0.09 ) & ( 0.0054 ) & & & + + set 3 & isolation & & 13.63 & & 5.59 & 4.72 & & -89,979.56 & 1,094.44 & 179,965.12 + & i m & & 14.69 & & 5.48 & 4.04 & 0.0193 & -89,432.34 & 291.86 & 178,872.68 + & iim with & 6.29 & 9.67 & & 5.39 & 3.46 & 0.1635 & -89,286.41 & 548.28 & 178,582.82 + & iim & 7.31 & 9.31 & 6.62 & 2.56 & 3.23 & 0.0870 & -89,012.27 & & 178,036.54 + & ( s.e . ) & ( 0.20 ) & ( 0.16 ) & ( 0.11 ) & ( 0.12 ) & ( 0.09 ) & ( 0.0058 ) & & & + + average & iim & 7.39 & 9.38 & 6.72 & 2.52 & 3.16 & 0.0887 & & & + & ( s.e . 
)& ( 0.19 ) & ( 0.15 ) & ( 0.11 ) & ( 0.11 ) & ( 0.09 ) & ( 0.0057 ) & & & + + note the values of the likelihood ratio test statistic shown are for the comparison of the model concerned ( considered to be the null model ) against the model immediately underneath it .estimated standard errors ( numbers in brackets ) are provided for the model with the best fit , i.e. the full iim model .the bottom section of the table gives the parameter estimates under the iim model , averaged over the three data sets ; for each parameter , the average of the estimated standard errors for the three data sets is given in brackets , which serves as an estimated _ upper bound _ on the standard error of the averaged parameter estimate ( estimates of the exact standard errors would be hard to obtain as the three data sets overlap ) . [ tab1 ] fitting the full iim model took approximately 20 seconds for each of the three data sets , on a desktop pc .estimated mutation rates , and hence the estimates of the parameters and defined in ( [ pars ] ) , are averages over all the loci considered .mutation rate heterogeneity between loci was accounted for by estimating the relative mutation rate at each locus from comparison with the outgroup _d. yakuba _ , as proposed in wang and hey ( 2010 ) ( see materials and methods " for further details ) ; for simplicity these relative mutation rates are treated as known constants ( see also yang 1997 , 2002 ) , i.e. uncertainty about the relative mutation rates is ignored .table [ tab2 ] shows the estimates ( averages over the three data sets ) converted to times in years , diploid effective population sizes , and the migration rate per generation , for each of the four models considered .lcccccc + model & & & & & & + + isolation & & 2.97my & & 6.18 m & 5.16 m & + i m & & 3.22my & & 6.04 m & 4.36 m & + iim with & 1.34my & 3.49my & & 5.95 m & 3.75 m & + iim & 1.61my & 3.66my & 7.33 m & 2.75 m & 3.45 m & + ( s.e . ) & ( 0.04my ) & ( 0.03my ) & ( 0.12 m ) & ( 0.12 m ) & ( 0.10 m ) & ( ) + + note the abbreviations my " and m " stand for million years " and million individuals " .the times and denote , respectively , the time since complete isolation of _ d. simulans _ and _ d. melanogaster _ ( for the iim model ) , and the time since the onset of speciation ( for all models ) , i.e. these are the times and converted into years ( see fig.1 ) . denotes the effective size of _d. simulans _ in the isolation and i m models , and during the migration stage of the iim model ; denotes the present effective size of _d. simulans _ in the full iim model ; is the effective size of the ancestral population .the estimates shown are the averages of the estimates obtained for the three ( overlapping ) data sets described in the text .for the iim model , the averaged estimated standard error is also given ( in brackets ) for each parameter ; this is an estimated upper bound on the standard error of the averaged parameter estimate .[ tab2 ] these conversions assume a generation time of 0.1 year and a 10 million year speciation time between _d. yakuba _ and _ d. melanogaster_/_d. simulans _ ( as assumed by wang and hey 2010 , and by lohse et al . 2011; see also powell 1997 ) .it is seen that the iim model places the onset of speciation ( million years ) further back into the past than both the isolation model and the i m model , whereas the estimated time since complete isolation of the two species under the iim model ( million years ) is more recent than under the isolation model . 
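The conversions just described can be expressed compactly. The sketch below assumes, as stated above, that the two time parameters equal twice the expected number of mutations per lineage accumulated during the respective periods (so that a period of scaled length $V$ corresponds to $V/(2\mu)$ generations) and that each population-size parameter is $\theta_i=4N_i\mu$; the outgroup-based calibration of $\mu$ and the 0.1-year generation time are those quoted in the text, but the exact formulas should be checked against the original parameter definitions.

```python
def convert_estimates(V, T, thetas, mu_per_locus, gen_time_years=0.1):
    """Convert scaled IIM estimates to years and effective population sizes.

    V, T         : time since the end of gene flow and duration of the
                   migration period, each measured as twice the expected
                   number of mutations per lineage (assumed parameterization).
    thetas       : iterable of population size parameters, theta_i = 4*N_i*mu.
    mu_per_locus : mutation rate per sequence per generation at a locus
                   (here calibrated from divergence to the D. yakuba outgroup).
    """
    tau1_gens = V / (2.0 * mu_per_locus)          # generations since isolation
    tau0_gens = (V + T) / (2.0 * mu_per_locus)    # generations since the split
    years = (tau1_gens * gen_time_years, tau0_gens * gen_time_years)
    n_eff = [th / (4.0 * mu_per_locus) for th in thetas]
    return years, n_eff

# Example calibration: if the average divergence from D. yakuba at a 500 bp
# locus is d substitutions and the yakuba split is 10 My (1e8 generations at
# 0.1 yr per generation), mu_per_locus is roughly d / (2 * 1e8).
```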
under the iim modelwe obtain an estimated ancestral effective population size of million individuals , splitting into two populations containing million individuals each during the time interval when gene flow occurred , with _d. simulans _ expanding to a current effective population size of million individuals .note that the estimated migration rate per generation ( ) is an order of magnitude higher under the iim model than under the i m model .on the one hand , our averaging of the estimated standard errors over the three data sets ( bottom rows of tables [ tab1 ] and [ tab2 ] ) will have given us an overestimate of the standard error of the averaged parameter estimates .on the other hand , however , it should be noted that these estimated standard errors underrepresent the true total amount of uncertainty , as they do not account for uncertainty about the relative mutation rates at the different loci ( which have been treated as known constants , whereas in practice we have estimated them from comparison with_ d. yakuba _ ) .care should be taken therefore in the interpretation of the estimated standard errors stated in tables [ tab1 ] and [ tab2 ] .table [ tab1 ] also gives the maximized loglikelihood ( ) for the different models fitted , for each of the three data sets .it is seen that , amongst the models being compared , the full iim model consistently gives by far the best loglikelihood value , though of course this model also has the largest number of parameters .the easiest way of comparing the fit of the different models is by using akaike s information criterion , aic , which was designed to compare competing models with different numbers of parameters ( aic scores were also used in , for example , takahata et al .1995 , nielsen and wakeley 2001 , and carstens et al . 2009 ) . for each model ( and for the same data ), aic is defined as ( akaike 1972 , 1974 ) .thus a larger maximized likelihood leads to a lower aic score , subject to a penalty for each additional model parameter .the minimum aic estimate " ( maice ) is then defined by the model ( and the maximum likelihood estimates of the model parameters ) which gives the smallest aic value amongst the competing models considered .table [ tab1 ] includes , for each data set , the aic scores of the different models considered .it is seen that for each of the three data sets , the maice is given by the full iim model .thus , out of all the models considered here , the iim model provides the best fit to the data , as measured by the aic scores .the results in table [ tab1 ] also show that an iim model with has a substantially better aic score than a symmetric i m model , suggesting that the improved fit of the iim model compared to the i m model is indeed due at least in part to accounting for the eventual complete isolation of the two species , and is not due solely to allowing differences between population sizes .an alternative approach is to perform a series of likelihood ratio tests for nested models ( see , for example , cox 2006 , section 6.5 on tests and model reduction " ) .focusing on any one of the three data sets in table [ tab1 ] , if we start with the full iim model and look upwards , each of the models considered reduces to the model immediately above it by fixing the value of one parameter : respectively , , and . each pair of neighbouring " modelscan then be formally compared by means of a likelihood ratio test , where the null hypothesis represents the simpler of the two models ( i.e. 
the one with fewer free parameters ) .if the null hypothesis is true , then the test statistic should approximately follow a distribution if the null " value of the parameter concerned is an interior point of the parameter space ( as is the case when we test ) , whereas we might expect to follow approximately a mixture if the null " value of the parameter concerned lies on the boundary of the parameter space ( as is the case in the tests of and ) ; using a distribution in the latter case is conservative ( self and liang 1987 ; see also our simulation results below ) .the value of the likelihood ratio test statistic is given in table [ tab1 ] for each possible null model , when evaluated against the model immediately underneath it .for each model comparison , the value of the test statistic is so large that the simpler model is rejected in favour of the more complex model underneath it , with , in each case , a very small p - value ( ) providing overwhelming evidence against the simpler model .thus these likelihood ratio tests also identify the full iim model as the most plausible amongst the different models considered .simulations were done to assess how well our method performs and how reliable the estimates obtained in the previous section are . from the three pairwise sets of drosophila sequence data considered in table [ tab1 ] , we arbitrarily selected the first one ( data set 1 " ) and mimicked this data set in our simulations. each simulated data set consists of the number of nucleotide differences between a pair of dna sequences sampled from descendant population 1 for 10,082 independent loci , and the number of differences between a pair of dna sequences taken from different descendant populations for 20,165 independent loci , generated under the full iim model with true " parameter values equal to the estimates obtained for data set 1 ( i.e. , , , , and ) and the infinite sites model of neutral mutation .these numbers of loci for both types of comparison match those in drosophila data set 1 , and so do the relative mutation rates assumed at the different loci .one hundred such data sets were generated .for each simulated data set , ml estimates of the parameters were obtained using our r program for the iim model , i.e. the method described in new approaches " .the resulting estimates are shown in figure [ fig2 ] . , , , , and .each simulated data set consists of the number of nucleotide differences between two dna sequences from descendant population 1 at 10,082 independent loci , and between two dna sequences from different descendant populations at 20,165 independent loci .different loci were assumed to have different mutation rates : the relative mutation rates used at the different loci match those of the drosophila data considered in this paper.,width=574 ] as one would expect , it is seen that ( at least for a large sample size ) our ml estimation procedure gives estimates centred on , and close to , the true " parameter values . in order to assess whether our method enables us to correctly select the iim model from amongst the different models considered , when the iim model is in fact the true underlying model , we also mimicked the type of analysis as was shown for data set 1 in table [ tab1 ] : for each simulated data set , we fitted an isolation model , a symmetric i m model , an iim model with , and a full iim model , and computed the aic scores and the values of the likelihood ratio test statistic . 
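to make the null distributions used in these comparisons concrete: when a single tested parameter has its null value on the boundary of the parameter space (for instance a migration rate equal to zero), the asymptotics of self and liang (1987) give a 50:50 mixture of a point mass at zero and a chi-squared distribution with 1 degree of freedom, and using the plain chi-squared distribution instead is conservative. the sketch below (not the authors' code) computes p-values under both choices and gives the quantiles of the mixture, which can be used for qq-plots such as those in figures [fig3]-[fig5]; the input `lambda` is a likelihood ratio statistic assumed to have been computed already.

```r
## p-value of a likelihood ratio statistic `lambda` for a single tested
## parameter, under (a) the usual chi^2_1 asymptotics (interior null value)
## and (b) the 50:50 mixture of a point mass at 0 and chi^2_1 that applies
## when the null value sits on the boundary (self and liang 1987); the plain
## chi^2_1 p-value is conservative in case (b).
lrt_pvalue <- function(lambda, boundary = FALSE) {
  p <- pchisq(lambda, df = 1, lower.tail = FALSE)
  if (boundary) 0.5 * p else p
}

## quantiles of the mixture, e.g. for qq-plots: the mixture cdf is
## 0.5 + 0.5 * pchisq(x, 1) for x >= 0, so its p-quantile is 0 for p <= 0.5
## and the (2p - 1) quantile of chi^2_1 otherwise.
mixture_quantile <- function(p) qchisq(pmax(2 * p - 1, 0), df = 1)

## example with the isolation vs i m comparison of data set 1 (table 1):
lrt_pvalue(1175.66, boundary = FALSE)  # conservative chi^2_1 p-value
lrt_pvalue(1175.66, boundary = TRUE)   # mixture p-value (half as large)
```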
both procedures ( using aic scores or likelihood ratio tests ) correctly identified the full iim model as the best - fitting model , for each of the 100 simulated data sets .the smallest difference obtained between the aic scores of any two neighbouring models was in fact , with the simpler model always having the worse aic score , and the smallest difference observed between the aic scores of the full iim model and any other model was , indicating that the iim model could be identified with ease .for the likelihood ratio approach , comparing the value of the test statistic for a pair of neighbouring models with the distribution gave in all cases a -value much smaller than , providing extremely strong evidence against the simpler of the two models compared ( the smallest value of obtained for any pair of neighbouring models was , whereas the quantile of the distribution is only ) . when comparing the full iim model directly with the isolation model by means of a likelihood ratio test , the smallest value of observed amongst the simulated data sets was , again giving a -value very much smaller than ( using the distribution ) for each of the 100 simulated data sets .we also simulated 100 data sets under the isolation model , to further investigate the performance of our method in identifying the correct model , and whether false positives may be produced in particular , whether a signal of gene flow may be obtained when in reality there was no gene flow .the true " parameter values assumed in the simulations were those obtained from fitting an isolation model to drosophila data set 1 ( see table [ tab1 ] ) .two of the simulated data sets were problematic , in that only the isolation model could be fitted : due to some large values ( ) for the simulated numbers of nucleotide differences between pairs of sequences from different species , r was unable to evaluate the likelihood for any of the other models in the relevant part of the parameter space . for the remaining simulated data sets for which all models could be fitted , a likelihood ratio test using as the null distribution the naive distribution ( with degrees of freedom equal to the difference between the number of parameters in the two models being compared ) gave better results than did comparison of aic scores .on the basis of aic scores , an incorrect model was selected for as many as 19 of the simulated data sets : in 6 cases the i m model gave the lowest aic score , in 2 cases the iim model with , and in 11 cases the full iim model .however , in 6 of the 11 cases where the full iim model was selected , the estimated migration rate was so that this iim " model was in fact an isolation model but with an additional small change of population size .a likelihood ratio test of the isolation model against the i m model at a significance level of resulted in acceptance of the isolation model for all 98 data sets , regardless of whether we used or as the null distribution . at a significance level of , the isolation model was rejected for 4 of the 98 simulated data sets if we used , and for 6 of the simulated data sets if the mixed distribution was used .figure [ fig3 ] against the distribution ( left ) , and against the mixture ( right ) . 
on the vertical axis we have plotted the sample quantiles of the values of the likelihood ratio statistic observed when testing the isolation model against the i m model for data sets simulated from the isolation model (2 data sets have been omitted as r was unable to fit the i m model; please see the main text for further details); the line is also shown for ease of comparison. figure [fig3] shows that the use of as the null distribution is indeed conservative, and suggests that this may be preferable to using the mixed distribution. similarly, at a significance level of , a likelihood ratio test of the isolation model against the iim model with led to acceptance of the isolation model for all 98 data sets, whether we used or the mixture as the null distribution. at a significance level of , the use of resulted in rejection of the isolation model for 1 of the 98 data sets, whilst the use of as the null distribution led to rejection of the isolation model for 5 of the data sets. the qq-plots shown in figure [fig4] confirm again that the use of in this case is conservative. (figure [fig4]: qq-plots of the observed values of the likelihood ratio statistic against the distribution (left) and against the mixture (right), for the test of the isolation model against the iim model with , for data sets simulated from the isolation model; 2 data sets have been omitted as r was unable to fit the iim model with ; the line is shown for ease of comparison.) similarly, the qq-plots in figure [fig5] indicate that the use of the distribution is conservative when testing the isolation model against the full iim model, and that this may be preferable to the use of a mixed distribution. a likelihood ratio test at a significance level of led to acceptance of the isolation model for all 98 data sets if a distribution was used, and resulted in rejection of the isolation model for 1 of the 98 data sets when using the mixed distribution. at a significance level of , using the distribution resulted in rejection of the isolation model for 4 of the 98 data sets (in 2 of these 4 cases the estimated migration rate was , reducing the iim model to an isolation model with an additional slight change of population size), whereas the use of the mixed distribution led to rejection of the isolation model for 12 of the 98 data sets (in 6 of these 12 cases, the estimated migration rate was ).
against the distribution ( left ) , and against the mixture ( right ) .on the vertical axis we have plotted the sample quantiles of the values of obtained when testing the isolation model against the full iim model for data sets simulated from the isolation model ( 2 data sets have been omitted as r was unable to fit the iim model ) .the line is also shown for ease of comparison.,title="fig:",width=317 ] against the distribution ( left ) , and against the mixture ( right ) . on the vertical axis we have plotted the sample quantiles of the values of obtained when testing the isolation model against the full iim model for data sets simulated from the isolation model ( 2 data sets have been omitted as r was unable to fit the iim model ) . the line is also shown for ease of comparison.,title="fig:",width=317 ]in this paper we have presented a very fast method to obtain ml estimates and to distinguish between different evolutionary scenarios , using nucleotide difference data from pairs of sequences at a large number of independent loci .the iim model considered allows for an initial period of gene flow between two diverging populations or species , followed by a period of complete isolation , and is more appropriate in the context of speciation than the i m model commonly used in the literature ( which assumes that gene flow continues at a constant rate all the way until the present time ) .we have illustrated the speed and power of our method by applying it to a large data set from two related species of drosophila , and to simulated data .fitting an iim model to a data set of approximately 30,000 loci ( with an average length of about 400 bp and varying mutation rates ) from two species of drosophila took approximately 20 seconds on a desktop pc ; this time included fitting an isolation model first in order to obtain good starting values for the parameters .moreover , the results made it possible to distinguish between more and less plausible models ( representing alternative evolutionary scenarios ) with ease , and identified the iim model as providing the best fit amongst the models considered .the drosophila data set studied in this paper was previously analysed by wang and hey ( 2010 ) , who compiled the data , and by lohse et al .both fitted i m models to these data , assuming that gene flow occurred at a constant rate from the time of separation of the two species until the present time . in tables[ tab1 ] and [ tab2 ] , alongside our results for the iim model , we also included parameter estimates for a symmetric i m model , for the sake of comparison .as expected , our i m results are in close agreement with those of lohse et al .( 2011 ) , who fitted two versions of the i m model , one with symmetric migration and one with migration in only one direction ( _ d .simulans _ to _d. melanogaster _ forward in time , motivated by wang and hey s results ) .they found that their estimated migration rate for the symmetric model is approximately half that obtained for the model with migration in one direction , i.e. 
both models gave the same estimated total number of migrants per generation between the two species in the two directions combined , whereas their estimates of the other parameters are virtually identical for the two models ( see p.23 of their supporting information ) .our i m results are also in good agreement with those of wang and hey ( 2010 ) , even though they fitted a more general version of the i m model ( allowing for unequal sizes of the two species and unequal migration rates in the two directions ) and assumed the jukes - cantor model of mutation , whereas our results and those of lohse et al .( 2011 ) assume the infinite sites model for its mathematical simplicity . indeed , lohse et al .( 2011 ) comment on how little difference the assumption of the infinite sites model makes to the i m results for the pairwise drosophila data , compared to wang and hey s results based on the jukes - cantor model . of more interest is the comparison of these authors i m results with those obtained for our iim model .we note firstly that their estimates of 3.04 million years ( wang and hey 2010 ) or 2.98 million years ( lohse et al .2011 ) for the _ simulans - melanogaster _ speciation time , and also our i m estimate of 3.22 million years , fall in between our iim estimates of 3.66 million years for the time since the onset of speciation ( ) and 1.61 million years for the time since complete isolation of these two species ( ) . secondly , under the im model the amount of gene flow between the two species ( in the two directions combined ) was estimated at 0.0134 migrant gene copies per generation ( wang and hey 2010 ) , 0.0255 migrant gene copies per generation ( lohse et al .2011 , asymmetric i m model ) , 0.0256 ( lohse et al .2011 , symmetric i m model ) , or 0.0210 migrant gene copies per generation ( our i m results in table [ tab1 ] , averaged over the three data sets ) .these estimates are considerably smaller than the corresponding estimate of 0.0887 migrant gene copies per generation during the period of migration under our iim model .lohse et al . (2011 ) also considered a trimmed version of the wang and hey ( 2010 ) drosophila data , where they shortened all loci to a fixed number of nucleotide differences between _d. melanogaster _ and the _ d. yakuba _ outgroup , so that the estimated mutation rate for all loci was equal .this assumption that all loci have the same mutation rate massively speeds up the computation of the likelihood , since in this case the probability of differences needs to be calculated only once for each observed value of , rather than having to calculate this probability over and over again for different mutation rates at different loci .further to this idea , we also implemented a simplified , much faster , version of our mle method to be used for data sets where all loci have the same mutation rate ( this r code is also included as supplementary material ) . applyingthis version of our iim program to the trimmed drosophila data ( still approximately 30,000 loci ) gave virtually instant results on a desktop pc , i.e. 
the computing time was a small fraction of a second. however, the substantial shortening of the loci did result in a loss of information, and the distinction between more and less plausible models was no longer always as clear, as the differences between the values of the maximized loglikelihood for the different models (and hence the values of the likelihood ratio test statistic, and the differences between the aic scores of different models) were not as large as they were for the full sequence data. nevertheless, the full iim model could still be identified as giving the best fit amongst the models considered in this paper, both on the basis of aic scores and by means of likelihood ratio tests. table [tab3] gives the estimates obtained by fitting the iim model to the trimmed data; the conversion to times in years, diploid effective population sizes and migration rate per generation was done using a generation time of 0.1 year and a 10 million year speciation time between _d. yakuba_ and {_d. melanogaster_, _d. simulans_}, as before. the results are broadly in agreement with the corresponding estimates for the full sequence data (see table [tab2], bottom row).

table [tab3]: iim model fitted to the trimmed data (estimates averaged over the three data sets; averaged estimated standard errors in brackets):
  time since complete isolation: 1.52 my (0.16 my)
  time since onset of speciation: 3.67 my (0.10 my)
  present effective size of _d. simulans_: 6.04 m (0.11 m)
  effective size of each population during the migration stage: 3.94 m (0.33 m)
  ancestral effective size: 3.82 m (0.18 m)
  migration rate per generation:  ( )

note: the abbreviations "my" and "m" stand for "million years" and "million individuals". the two times shown are, respectively, the time since complete isolation of _d. simulans_ and _d. melanogaster_ and the time since the onset of speciation, converted into years (see fig. 1). the effective population sizes refer, respectively, to the present _d. simulans_ population, to each population during the migration stage of the model, and to the ancestral population. the estimates shown have been averaged over the three overlapping data sets described in the text. the averaged estimated standard error is also given (in brackets) for each parameter; this is an estimated upper bound on the standard error of the averaged parameter estimate. [tab3]

the purpose of our analysis of the drosophila data was merely to illustrate the potential of our method, rather than to draw any firm conclusions about the evolutionary history of the particular species concerned. whilst the iim model provides the best fit to the drosophila data amongst the different models considered in this paper, it is likely that more realistic models can be constructed which provide a still better fit than the version of the iim model considered here. our assumption of equal population sizes and symmetric migration during the period of gene flow is obviously an unrealistic oversimplification, but was made for the sake of mathematical tractability and computational speed. an extension of our method to a more general iim model, allowing for unequal migration rates and unequal population sizes during the migration stage of the model, is possible and is being developed (costa rj and wilkinson-herbots hm, work in progress). similarly, the current implementation of our method assumes the infinite sites model of neutral mutation because of its mathematical ease. extensions of our method to other neutral mutation models (for example, the jukes-cantor model) can be done but have not yet been implemented. it should also be feasible to extend our method to incorporate more than two species.
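the following sketch illustrates the bookkeeping behind the speed-up described above for the trimmed data: when all loci share the same mutation rate, the probability of each distinct number of pairwise differences needs to be evaluated only once and weighted by its observed frequency. it is not the authors' supplementary code, and `prob_ndiff` is a hypothetical placeholder for the probability mass function of the number of differences between a pair of sequences under the model (the explicit analytical expressions of wilkinson-herbots (2012) play this role in the actual implementation).

```r
## sketch of the equal-mutation-rate speed-up (not the supplementary code):
## the loglikelihood is a weighted sum over the distinct observed counts of
## pairwise differences, within and between species.  `prob_ndiff(k, pars, type)`
## is a hypothetical placeholder for the probability of observing k differences
## between a pair of sequences ("within" or "between" species) under the model,
## assumed to be vectorised in k.
loglik_from_counts <- function(ndiff, type, pars, prob_ndiff) {
  ll <- 0
  for (tp in unique(type)) {
    tab <- table(ndiff[type == tp])      # frequency of each distinct count
    k   <- as.integer(names(tab))
    ll  <- ll + sum(as.integer(tab) * log(prob_ndiff(k, pars, tp)))
  }
  ll
}

## usage sketch (all objects hypothetical):
## loglik_from_counts(ndiff = data$ndiff, type = data$type,
##                    pars = start_values, prob_ndiff = prob_ndiff_iim)
```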
whilst our method explicitly allows for mutation rate heterogeneity between loci (whether due to variation in sequence length or variation in the mutation rate per site , or both ) , our current implementation assumes that accurate estimates of the relative mutation rates of the different loci are available ( our implementation treats the relative mutation rates as known constants ) .this limitation will become less important as more and more genome sequences become available , allowing more accurate estimation of the relative mutation rates . for our analysis of the drosophila data presented in this paper ,however , the relative mutation rates of the different loci were estimated by comparison of \{_d .simulans _ , _ d. melanogaster _ } with the outgroup _ d. yakuba _ , as was done also by wang and hey ( 2010 ) and by lohse et al .( 2011 ) . using more than one outgroup sequence (if available ) to estimate the relative mutation rate at each locus should improve the accuracy .it may also be possible to adapt our method to incorporate uncertainty about the relative mutation rates by modelling mutation rate variation as a random variable and integrating out over it ( see yang 1997 , or the all - rate " method proposed by wang and hey 2010 ) .the drosophila data considered in this paper are the _d. melanogaster - d. simulans _ divergence data compiled and analyzed by wang and hey ( 2010 ) ; we used the subset of the data that was studied also by lohse et al .the data consist of alignments of 30,247 segments of intergenic sequence of length 500 bp each , from two inbred lines of _d. simulans _ and one inbred line of _ d. melanogaster _ , and from an inbred line of _ d. yakuba _ for use as an outgroup .the data have been pre - processed as described in wang and hey ( 2010 ) and lohse et al .( 2011 ) ; the version of the data used is that labelled wangheyraw " in the supporting information of lohse et al .as our method uses pairwise nucleotide differences , subsets of the data were formed by selecting one pair of sequences from each locus ( i.e. _ d.mel _ - _ d.sim_1 , _ d.mel_ - _ d.sim_2 , or _d.sim_1 - _d.sim_2 ) .three such subsets were formed as described in the section on results " ; these three data sets overlap as each sequence at each locus is used in two of the three data sets .parameter estimates and estimated standard errors were obtained for each of these three data sets separately and then averaged over the three data sets .this gives too large estimates of the standard errors of the averaged parameter estimates ; however , due to the overlap between the data sets , more accurate estimated standard errors would be difficult to obtain .following wang and hey ( 2010 ) and lohse et al .( 2011 ) , mutation rate heterogeneity between loci was accounted for by comparing the _ d. melanogaster _ and _ d. simulans _ sequences with the outgroup _d. yakuba_. calculating the outgroup divergence between \{_d .simulans _ , _d. melanogaster _ } and _ d. yakuba _ at locus as a weighted average over the available sequences , assigning 25% weight to each of the _ simulans _ sequences and 50% weight to the single _ melanogaster _sequence , the relative mutation rate of locus was estimated as , where is the average of the over all loci . 
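a minimal sketch of the relative-rate calculation just described (not the supplementary code): the per-locus outgroup divergences are combined with weights of 25% for each _d. simulans_ sequence and 50% for the single _d. melanogaster_ sequence, and each locus' rate is then expressed relative to the average over loci. the argument names below are hypothetical.

```r
## sketch of the relative mutation rate calculation described above (not the
## supplementary code).  `div_sim1`, `div_sim2` and `div_mel` are assumed
## per-locus divergences from the d. yakuba outgroup; the names are hypothetical.
relative_rates <- function(div_sim1, div_sim2, div_mel) {
  ## weighted outgroup divergence: 25% to each d. simulans sequence and
  ## 50% to the single d. melanogaster sequence
  d <- 0.25 * div_sim1 + 0.25 * div_sim2 + 0.50 * div_mel
  d / mean(d)   # relative mutation rate of each locus; averages to 1
}
```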
the scaled mutation rate at locus then given by , where is the average scaled mutation rate over all the loci considered .the relative mutation rates are treated as fixed constants in the likelihood maximization ( see also yang 1997 , 2002 ) .estimates of the parameters of the iim model were obtained by maximizing the likelihood , given by equation ( [ likelihood ] ) , using the reparameterization ( [ pars ] ) . in order to obtain reasonable starting values for the likelihood maximization under the iim model , it can be helpful to first fit an isolation model , as the latter has a more tractable likelihood surface .estimated standard errors are obtained from the inverted hessian matrix .the computer code was written in r and is included as supplementary material .table [ tab1 ] shows the results obtained for the three sets of drosophila data . to convert the parameter estimates into readily interpretable units, we followed wang and hey ( 2010 ) and lohse et al .( 2011 ) in using a generation time of year and a 10 million year speciation time between _d. yakuba _ and \{_d .simulans _ , _ d. melanogaster _ } ( see also powell 1997 ) , which gives us an estimate of for the mutation rate per locus per generation , averaged over all the loci included in the analysis .the converted estimates given in table [ tab2 ] were then calculated according to the following equations : for the times in years since complete isolation and since the onset of speciation , respectively ; for the effective size ( number of diploid individuals ) of either population during the migration stage of the model , and similarly for all other population sizes ; and for the migration rate per generation . estimatedstandard errors for the converted estimates were obtained by re - running a reparameterized version of the program , in terms of and instead of and , using the ml estimates already obtained as starting values , allowing easy computation of the hessian matrix .as before , estimated standard errors were computed separately for each of the three data sets and then averaged .following lohse et al .( 2011 ) we also analyzed a trimmed version of the wang and hey ( 2010 ) drosophila data , where each locus was cut after 16 nucleotide differences between _d. melanogaster _ and _ d. yakuba _ , and the remainder of the locus was ignored ; the 2,090 loci which had fewer than 16 differences between _ d. melanogaster _ and _ d. yakuba _ were omitted altogether , leaving 28,157 trimmed loci for analysis .this trimming amounts to a shortening of the loci by roughly a factor of 3 on average . again , three ( overlapping ) subsets of the data were formed , each containing one pair of sequences from each locus ( i.e. 
_ d.mel _ - _ d.sim_1 , _ d.mel _ - _ d.sim_2 , or _ d.sim_1 - _d.sim_2 ) as described above and in the section on results " .parameter estimates and estimated standard errors were obtained for each of these three data sets separately and then averaged over the three data sets .we used a simplified version of our r code written specifically for data sets where all loci have the same estimated mutation rate ; this simplified code is also included as supplementary material .the likelihood calculation uses the frequencies of the observed numbers of pairwise differences , which is much faster than the corresponding code for the case where different loci have different mutation rates : for the trimmed drosophila data , instead of evaluating 28,157 terms in the loglikelihood ( one term for each locus ) , only about 50 different terms need to be calculated ( corresponding to the different values of the number of pairwise differences observed , within and between species ) and multiplied by their frequencies , which hugely speeds up the likelihood computation and maximization .table 3 shows the results obtained under the iim model , after conversion into conventional units as explained above , using a mutation rate of per locus per generation for the trimmed data ( see also lohse et al 2011 ) .simulated pairwise difference data were generated by first simulating the coalescence times of pairs of sequences and then superimposing neutral mutation under the infinite sites model ; our r code is included as supplementary material . for pairs of sequences from the same species , our simulation algorithm uses the shortcut " provided by equations ( 8) and ( 9 ) in wilkinson - herbots ( 2012 ) , exploiting the fact that the distribution of the coalescence time of a pair of sequences sampled from the same species in the iim model is a mixture of two piecewise exponential " distributions ( with change points " and ) , eliminating the need to explicitly simulate migration events .i would like to express my sincere thanks to yong wang , jody hey , konrad lohse and nick barton for the use of their drosophila data .thanks are also due to paul northrop and rex galbraith for some helpful tips on programming in r , and to ziheng yang for some valuable discussions .this work was supported by the engineering and physical sciences research council via an institutional sponsorship award to university college london ( grant number ep / k503459/1 ) .the r code which fits an iim model and an isolation model to pairwise difference data from a large number of independent loci is supplied as supplementary material .the r code for simulating pairwise difference data under the iim model is also provided .: : akaike h. 1972 .information theory and an extension of the maximum likelihood principle . in : petrov bn , csaki f , editors .2nd int .information theory , supp . to problems of control and information theory .p. 267 - 281 .: : akaike h. 1974 . a new look at the statistical model identification .ieee transactions on automatic control ac-19:716 - 723 .: : andersen ln , mailund t , hobolth a. 2014 .efficient computation in the i m model .j math biol . 68:1423 - 51 .: : becquet c , przeworski m. 2007 . a new approach to estimate parameters of speciation models with application to apes .genome res . 17:1505 - 1519 .: : becquet c , przeworski m. 2009 .learning about modes of speciation by computational approaches .evolution 63:2547 - 2562 .: : bird ce , fernandez - silva i , skillings dj , toonen rj . 
2012 .sympatric speciation in the post modern synthesis " era of evolutionary biology .evol biol . 39:158 - 180 .: : burgess r , yang z. 2008 .estimation of hominoid ancestral population sizes under bayesian coalescent models incorporating mutation rate variation and sequencing errors .mol biol evol .25:1979 - 1994 .: : carstens bc , stoute hn , reid nm .an information - theoretical approach to phylogeography .mol ecol 18:4270 - 4282 .: : cox dr . 2006 : principles of statistical inference .cambridge university press .: : hey j. 2005 . on the number of new world founders : a population genetic portrait of the peopling of the americas .plos biol .3:965 - 975 .: : hey j. 2010 .isolation with migration models for more than two populations .mol biol evol 27:905 - 920 .: : hey j , nielsen r. 2004 .multilocus methods for estimating population sizes , migration rates and divergence time , with applications to the divergence of drosophila pseudoobscura and d. persimilis .genetics 167:747 - 760 .: : hey j , nielsen r. 2007 .integration within the felsenstein equation for improved markov chain monte carlo methods in population genetics .proc natl acad sci usa .104:2785 - 2790 .: : hobolth a , andersen ln , mailund t. 2011 . on computing the coalescencetime density in an isolation - with - migration model with few samples .genetics 187:1241 - 1243 .: : innan h , watanabe h. 2006 .the effect of gene flow on the coalescent time in the human - chimpanzee ancestral population .mol biol evol .23:1040 - 1047 .: : li h , durbin r. 2011 .inference of human population history from individual whole - genome sequences .nature 475(7357):493 - 496 .: : lohse k , sharanowski b , stone gn . 2010 . quantifying the pleistocene history of the oak gall parasitoid _cecidostiba fungosa _ using twenty intron loci .evolution 64:2664 - 2681 .: : lohse k , harrison rj , barton nh .a general method for calculating likelihoods under the coalescent process .genetics 189:977 - 987 .: : maddison wp , knowles ll .2006 . inferring phylogeny despite incomplete lineage sorting .. 55:21 - 30 .: : nadachowska k. 2010 .divergence with gene flow - the amphibian perspective .herpetological journal 20:7 - 15 .: : nielsen r , wakeley j. 2001 . distinguishing migration from isolation : a markov chain monte carlo approach. genetics 158:885 - 896 .: : pinho c , hey j. 2010 .divergence with gene flow : models and data .annu rev ecol evol syst .41:215 - 230 .: : powell jr .progress and prospects in evolutionary biology : the drosophila model .oxford university press .: : r development core team . 2011 .r : a language and environment for statistical computing .r foundation for statistical computing , vienna , austria .url http://www.r - project.org/. : : self sg , liang ky . 1987 .asymptotic properties of maximum likelihood estimators and likelihood ratio tests under non - standard conditions .j am stat assoc . 82:605 - 610 .: : smadja cm , butlin rk .a framework for comparing processes of speciation in the presence of gene flow .20:5123 - 5140 .: : sousa vc , grelaud a , hey j. 2011 . on the nonidentifiability of migration timeestimates in isolation with migration models .molecular ecology 20:3956 - 3962 .: : strasburg jl , rieseberg lh . 2010. how robust are isolation with migration " analyses to violations of the i m model ?a simulation study .mol biol evol .27:297 - 310 .: : strasburg jl , rieseberg lh .2011 . interpreting the estimated timing of migration events between hybridizing species .molecular ecology 20:2353 - 2366 .: : takahata n. 
1995 . a genetic perspective on the origin and history of humans .annu rev ecol syst . 26:343 - 372 .: : takahata n , satta y , klein j. 1995 .divergence time and population size in the lineage leading to modern humans .theor popul biol .48:198 - 221 .: : takahata n , satta y. 1997 .evolution of the primate lineage leading to modern humans : phylogenetic and demographic inferences from dna sequences .proc natl acad sci usa .94:4811 - 4815 .: : teshima km , tajima f. 2002 .the effect of migration during the divergence .theor popul biol . 62:81 - 95 .: : wang y , hey j. 2010 .estimating divergence parameters with small samples from a large number of loci .genetics 184:363 - 379 .: : watterson ga .1975 . on the number of segregating sites in genetical models without recombination .theor popul biol . 7:256 - 276 .: : wilkinson - herbots hm .the distribution of the coalescence time and the number of pairwise nucleotide differences in the `` isolation with migration '' model .theor popul biol .73:277 - 288 .: : wilkinson - herbots hm .the distribution of the coalescence time and the number of pairwise nucleotide differences in a model of population divergence or speciation with an initial period of gene flow .theor popul biol . 82:92 - 108 .: : yang z. 1997 . on the estimation of ancestral population sizes of modern humans .genet res camb . 69:111 - 116 .: : yang z. 2002 .likelihood and bayes estimation of ancestral population sizes in hominoids using data from multiple loci .genetics 162:1811 - 1823 .: : yang z. 2010 . a likelihood ratio test of speciation with gene flow using genomic sequence data . genome biol evol .2:200 - 211 .: : zhu t , yang z. 2012 .maximum likelihood implementation of an isolation - with - migration model with three species for testing speciation with gene flow .mol biol evol .29:3131 - 3142 .
we consider a model of isolation with an initial period of migration " ( iim ) , where an ancestral population instantaneously split into two descendant populations which exchanged migrants symmetrically at a constant rate for a period of time but which are now completely isolated from each other . a method of maximum likelihood estimation of the parameters of the model is implemented , for data consisting of the number of nucleotide differences between two dna sequences at each of a large number of independent loci , using the explicit analytical expressions for the likelihood obtained in wilkinson - herbots ( 2012 ) . the method is demonstrated on a large set of dna sequence data from two species of drosophila , as well as on simulated data . the method is extremely fast , returning parameter estimates in less than 1 minute for a data set consisting of the numbers of differences between pairs of sequences from 10,000s of loci , or in a small fraction of a second if all loci are trimmed to the same estimated mutation rate . it is also illustrated how the maximized likelihood can be used to quickly distinguish between competing models describing alternative evolutionary scenarios , either by comparing aic scores or by means of likelihood ratio tests . the present implementation is for a simple version of the model , but various extensions are possible and are briefly discussed .
we focus on the study of the quasi - periodically ( q.p . for short ) forced logistic map ( flm for short ) .the flm is a two parametric map in the cylinder where the dynamics in the periodic component is a rigid rotation and the dynamics in the other component is the logistic map plus a quasi - periodic forcing term .this map appears in the literature in different contexts , usually related with the destruction of invariant curves . for example , in it was introduced as an example where snas ( strange non - chaotic attractors ) were created through a collision between stable and unstable invariant curves .since then , different routes for the destruction of invariant curves have been explored for this map , for instance see and references therein .some other recent studies are .on the other hand , the flm is also related to the truncation of period doubling cascades .it is well known that the one dimensional logistic map exhibits an infinite cascade of period doubling bifurcations which leads to chaotic behavior .moreover this infinite cascade extends to a wider class of unimodal maps .but when some q.p .forcing is added , the number of period doubling bifurcations of the invariant curves is finite .this phenomenon of finite period doubling cascade has been observed in different applied and theoretical contexts . in the applied contextit has been observed in a truncation of the navier - stokes flow or in a periodically driven low order atmosphere model . in the theoretical context , it has also been reported in different maps which were somehow built to have period doubling cascades , and more recently in the analysis of the hopf - saddle - node bifurcation .actually , in the flm itself is given as a model for the truncation of the period doubling bifurcation cascade .the study presented here is more concerned with the mechanisms which cause the truncation of the period doubling bifurcation cascades than with the possible existence of snas for this family of maps .concretely , we show ( numerically ) that the reducibility has the role of confining the period doubling bifurcation in closed regions of the parameter space . in the remainder of this article we focus on the shape of this reducibility regions . in will use the reducibility loss bifurcation to study the self renormalizable properties of the bifurcation diagram and how the feigenbaum - collet - tresser renormalization theory can be extended to understand it . see also for a united exposition of the present paper with the other three cited before .this paper is structured as follows . in section [ section invariant curves in q.p systems ] we review some concepts and results concerning the continuation of invariant curves for a quasi - periodic forced maps .we also look at the concrete case when the map is uncoupled . in section [ chapter forcel logistic map ]we focus on the dynamics of the flm .first we review some computations which can be found in the literature .then , we do a study of the parameter space in terms of the dynamics of the attracting set of the map . for this study different properties of the attracting setare considered , as the value of the lyapunov exponent and , in the case of having a periodic invariant curve , the period . 
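the two diagnostics just mentioned (the lyapunov exponent and the period of the attracting curve) are straightforward to reproduce numerically. the following sketch, which is not the authors' code, estimates the lyapunov exponent in the x-direction by direct iteration; it assumes the commonly used form of the forcing, x' = alpha (1 + eps cos 2 pi theta) x (1 - x) with theta' = theta + omega (mod 1) and omega the golden mean, which is consistent with the critical and post-critical sets discussed later in the text.

```r
## minimal sketch (not the authors' code): estimate the lyapunov exponent of a
## quasi-periodically forced logistic map in the x-direction as the ergodic
## average of log|df/dx| along the orbit.  the forcing assumed here,
##   x'     = alpha * (1 + eps * cos(2*pi*theta)) * x * (1 - x),
##   theta' = theta + omega  (mod 1),
## is one common form of the flm; the precise definition used in the paper is
## given later.  for the orbit to stay in [0,1] one needs alpha*(1+|eps|) <= 4.
flm_lyapunov <- function(alpha, eps, omega = (sqrt(5) - 1) / 2,
                         n = 1e5, transient = 1e4, x0 = 0.3, th0 = 0) {
  x <- x0; th <- th0; s <- 0
  for (i in 1:(transient + n)) {
    dfdx <- alpha * (1 + eps * cos(2 * pi * th)) * (1 - 2 * x)
    x    <- alpha * (1 + eps * cos(2 * pi * th)) * x * (1 - x)
    th   <- (th + omega) %% 1
    if (i > transient) s <- s + log(abs(dfdx))
  }
  s / n   # negative for an attracting curve; approaches zero at a bifurcation
}

## illustrative call (parameter values chosen arbitrarily):
## flm_lyapunov(alpha = 3.0, eps = 0.1)
```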
differently to other works , in our study the reducibility of the invariant curves has been also taken into account .this reveals interesting information , for example we observe that the parameter values for which the invariant curve doubles its period is contained in regions of the parameter space where the invariant curve is reducible .the subsequent sections are developed with the aim of understanding the results presented in this section . in section [ section obstruction to reducibility ]we consider the images and the preimages of the critical set ( this set is the set where the derivative of the map w.r.t the non - periodic coordinate is equal to zero ) .we also consider the continuation in the parameter space of the invariant curve which comes from one of the fixed points of the logistic map . doing a study of the preimages of the criticalset we construct forbidden regions in the parameter space for the reducibility of the invariant curve . in other words , we give some constrains on the reducibility of the invariant curve . in section [ section period doubling and reducibility ] the reducibility loss of the invariant curve is considered as a codimension one bifurcation and then we study its interaction with the period doubling bifurcation .the study done here is not particular for the flm . in this sectionwe also give a general model for the reducibility regions enclosing the period doubling bifurcation observed in the parameter space of section [ section parameter space and reducibility ] . in section [ section summary and conclusions ]we summarize the results obtained in the previous ones .we analyze again the bifurcations diagram obtained in section [ section parameter space and reducibility ] taking into account the results obtained in sections [ section obstruction to reducibility ] and [ section period doubling and reducibility ] .in this section we briefly review some of the key definitions and results on the theory of invariant curves in quasi - periodically forced maps .these definitions will be useful for the forthcoming analysis of the dynamics of the flm .a * quasi periodically forced one dimensional map * is a map of the form where with and the parameter . given a quasi - periodically forced map as above , we have that it determines a dynamical system in the cylinder , explicitly defined as given a continuous function we will say that is an * invariant curve * of ( [ q.p . forced system ] ) if , and only if , the value is known as the * rotation number * of . an equivalent way to define invariant curve ,is to require the set to be invariant by , where is the function defined by ( [ q.p . forced map general ] ) .on the other hand , if we consider the map we have that it is also a quasi - periodically forced map .given a function , we will say that is a -periodic invariant curve of if the set is invariant by ( and there is no smaller satisfying such condition ) . since a periodic invariant curve of a map is indeed an invariant curve of , any result for invariant curves can be extended to periodic invariant curves . 
given an invariant curve of ( [ q.p .forced system ] ) , its linearized normal behavior is described by the following linear skew product : where is also of class , and .we will assume that the invariant curve is not degenerate , in the sense that the function is not identically zero .[ definition reducibility ] the system ( [ linear skew product ] ) is called * reducible * if , and only if , there exists a change of variable , continuous with respect to , such that ( [ linear skew product ] ) becomes where does not depend on .the constant is called the * multiplier * of the reduced system . in the casethat is a function and is diophantine ( see proposition 1 in ) , the skew product ( [ linear skew product ] ) is reducible if , and only if , has no zeros , see corollary 1 of . actually , the reducibility loss can be characterized as a codimension one bifurcation .[ definition of reducibility loss bifurcation ] let us consider a one - parametric family of linear skew - products where is diophantine and belongs to an open set of and is a function of and .we will say that the system ( [ 1-d family of skew products ] ) undergoes a * reducibility loss bifurcation * at if 1 . has no zeros for , 2 . has a double zero at for , 3 . .on the other hand , consider a system like ( [ q.p. forced system ] ) with a function , which depends ( smoothly ) on a one dimensional parameter ( ) .assume also that we have an invariant curve of the system .we will say that the invariant curve undergoes a reducibility loss bifurcation if the system ( [ linear skew product ] ) associated to the invariant curve ( ) undergoes a reducibility loss bifurcation as a system of linear skew - products . given a map like ( [ linear skew product ] ) we have that , due to the rigid rotation in the periodic component , one of lyapunov exponents is equal to zero ( see ) .then the definition of the lyapunov exponent can be suited to the case of linear skew - products as follows .[ definitio lyapunov exponent skew - products ] if , we define the * lyapunov exponent * of ( [ linear skew product ] ) at as we also define the * lyapunov exponent of the skew product * ( [ linear skew product ] ) as if is finite then , applying the birkhoff ergodic theorem we have that the in ( [ lyapunov exponent limit ] ) is in fact a limit and for lebesgue a.e. .if never vanishes , the in ( [ lyapunov exponent limit ] ) is again a limit and coincides with , but now for all .now , consider a map like ( [ q.p. forced map general ] ) .if there exists an invariant compact cylinder where is monotone and has a negative schwarzian derivative with respect to , then jger has proved the existence of invariant curves , see for details . on the other hand in result on the persistence of invariant curves is given , in terms of the reducibility and the lyapunov exponent of the curve .in this subsection we turn our attention to the maps which are of the same class of the flm , in the sense that the quasi - periodic function can be written as a one dimensional function plus a quasi - periodic term . given a map like ( [ q.p . forced map general ] ) we will say the the map * is uncoupled * if does not depend on , i.e. note that if a map is uncoupled , the also does .[ prop persistence fixed points ] let be a one dimensional family of maps like ( [ q.p. 
forced map general ] ) such that for a fixed value the map is uncoupled , that is .then any hyperbolic fixed point of extends to an invariant curve of the system for close to .suppose that there exists a hyperbolic fixed point of .we have that it can be seen as an invariant curve of , with for any .the skew product ( [ linear skew product ] ) associated to the invariant curve has as a multiplier , which actually does not depend on .concretely we have that the system is reducible .now we can apply the theory exposed in section 3.3 of .assume first that .we have that a curve is persistent by perturbation if does not belong to the spectrum of the transfer operator associated to the curve .since the curve is reducible we have that the spectrum is a circle of modulus . using that the fixed point is hyperbolic we have that , therefore does not belongs to the spectrum . when we have that the spectrum collapses to .then does not belong to the spectrum of the transfer operator either .note that , by considering for the periodic case , the result extends to any periodic point of the uncoupled system .this last proposition can be also proved using the normal hyperbolicity theory .but this theory is only valid for diffeomorphisms , then the case of is not included .the flm is a map in the cylinder defined as where are parameters and a fixed diophantine number ( typically in our study it will be the golden mean ) .note that the flm ( [ flm ] ) is a q.p . forced system like ( [ q.p . forced system ] ) , which depends on two parameters and .moreover , we have that the function which defines the map can be written as a logistic map plus a q.p . forcing term . in other words, we have that where ( the logistic map ) and ( which is zero when is ) .then proposition [ prop persistence fixed points ] is applicable to the map . in some cases it will be convenient to work in a compact domain . in this casenote that when the compact cylinder ] .these sets will be the closed graph of a curve or a subset of a graph .in general when we say that one of these sets is above ( respectively below ) of another , we mean that , for each value of the corresponding -coordinate of the first set is bigger ( resp .smaller ) than the -coordinate of the other set ( for the same ) .the proofs in this section have been omitted because of their simplicity . given a q.p . forced map like ( [ q.p .forced map general ] ) , we define its * critical set * as the set of points on its domain where the derivative of the map ( with respect to ) is zero . in other words , in the case of the logistic map we have that .when we have a q.p . forced map like ( [ q.p . forced map general ] ) which is and is diophantine , corollary 1 of implies that an invariant curve is reducible if , and only if , where |\thinspace x = y_0(\theta ) \} ] is invariant by the map .then , any reducible invariant curve is either above or below the critical set . actually , when the map is uncoupled ( ) we have that the invariant curve is above the critical set when .consider now the image of the critical set by the map , namely * post - critical set * , which is defined as | \thinspace \bar{\theta } = \theta + \omega , \text { } \bar{x}= f(\theta , x ) , \text { for some } ( \theta , x)\in p_0\}.\ ] ] in the particular case of the flm we have , | \thinspace \bar{x}= \frac{\alpha}{4}(1+{\varepsilon}\cos(2\pi(\bar{\theta } - \omega)))\right\}.\ ] ] [ proposition postcritical set ] in the case of the flm we have the following properties on the post - critical set when . 
1 .the set is above the image of any other point ] .these two preimages form the set . in the top right part of the figure we have added the set to the previous ones .the lower component of is below ( so it has two preimages ) , which form part of the set .the upper component of intersects .then only some part of it has preimage in ] which are below or in , in other words \left| \thinspace x \leq \frac{\alpha}{4}(1+{\varepsilon}\cos(2\pi(\bar{\theta } - \omega ) ) ) \right .\right\}.\ ] ] then we have that any point has exactly two preimages which are given by and , where \\ & \left ( \begin{array}{c } \theta \\ x \end{array } \right ) & \mapsto & \left ( \begin{array}{c } \theta - \omega \\ \rule{0ex}{6ex}\displaystyle \frac{1}{2 } \mp \sqrt{\frac{1}{4 } - \frac{x}{\alpha(1 + { \varepsilon}\cos(2\pi(\theta - \omega ) ) ) } } \end{array } \right ) \end{array } \ ] ] moreover we have that ( respectively ) maps homeomorphically the set to the set ] ) .finally we have that the map preserves the orientation , in the sense that , it is monotone with respect to . on the flip side the map reverses orientation , i. e. it swaps relative positions ( with respect to the -coordinate ) . to , and the post - critical set of the for .we have indicated the symbolic codes of some of the components of the pre - critical sets .the horizontal axis corresponds to and the vertical one to .,width=453 ] this result can be used to describe the pre - critical set with some more detail . by constructionwe have that when the constraint ( [ a first bound for the reducibility ] ) is satisfied , consequently the set is strictly above .we can apply the proposition [ proposition preimages flm ] to obtain that the set is composed by the union of two different sets , i. e. , where moreover we have that is below and is above it .now we can consider further preimages .since is below we have that it belongs to therefore its preimages are defined .let us denote by and .on the other hand when , we consider the preimages of , we can have different relative positions between and depending on the parameters .it might happen that the curve is completely below , completely above it or that they intersect .in the case that has points above we have that is not defined in these points , therefore the set is not well defined .this can be fixed if we formally extend to ] .then this set does not suppose an obstruction to the reducibility .on the other hand we have that define two arches around the critical set , then we have that define two arches around .it can happen that these arches are below .if this is the case we can consider again their preimages by and . the preimages by will be below then they can be discarded , because they will not become an obstruction to the reducibility of .the set is below , then will be above and then will be below .if does not suppose an obstruction to the reducibility , neither does .finally the set will be an arch above , then this can intersect the set becoming an obstruction to the reducibility . to avoid being an obstruction to the reducibilityone should require it to be below for any parameter value .this will be an additional constraint to the previous ones . in figure[ bounds of reducibility ] we have also added this constraint . 
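as a rough numerical companion to the preceding discussion, and under the same assumed form of the map as in the earlier sketch, one can monitor how close the attractor comes to the critical set: since a smooth reducible invariant curve cannot cross the set where the x-derivative vanishes, a minimum distance to x = 1/2 that stays bounded away from zero is consistent with reducibility, while a distance shrinking to zero signals an obstruction. this is only a heuristic sketch, not the method used in the paper.

```r
## crude numerical indicator (not the authors' code), under the assumed form of
## the flm used in the earlier sketch: the x-derivative of the map vanishes
## exactly on the critical set {x = 1/2}, so after a transient we record how
## close the attractor comes to x = 1/2.  a minimum distance bounded away from
## zero is consistent with a reducible attracting curve; a distance shrinking
## to zero as the parameters vary signals an obstruction to reducibility.
flm_critical_distance <- function(alpha, eps, omega = (sqrt(5) - 1) / 2,
                                  n = 1e5, transient = 1e4,
                                  x0 = 0.3, th0 = 0) {
  x <- x0; th <- th0; dmin <- Inf
  for (i in 1:(transient + n)) {
    x  <- alpha * (1 + eps * cos(2 * pi * th)) * x * (1 - x)
    th <- (th + omega) %% 1
    if (i > transient) dmin <- min(dmin, abs(x - 0.5))
  }
  dmin
}
```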
finally note that the argument used above can be extended to any order .assume that we have which exists but it is not an obstruction to reducibility , where represents the times repetition of the symbol .we will have that can be discarded for being below , and can be discarded for being below the union of all the set for , then the only set which can suppose an additional restriction is .note that the higher order conditions does not necessarily suppose an improvement of the previous ones .for example in figure [ bounds of reducibility ] we have considered the additional constraints , but not until we have had an improvement to the constraint given by .we have assumed that the sets for odd intersect the set in exactly points , but we actually have that this assumption can be omitted . if the curves intersect in an even number of points we have that the discussion above is still valid but considering the different arches at the same time .when they intersect in an odd number of points there is at least one point where both curves are tangent .then the preimage of the intersection is a single point of , therefore it can be omitted . on the other hand ,we have only taken into account the case where an arch of intersects the set as a possible obstruction to the reducibility of the curve .it might also happen that an arch of the set intersects another arch of the set for some .this can produce also an obstruction to the reducibility of the invariant curve as well .but this case can be omitted because , if it occurs , then we can consider the ( )-th image of both sets by the flm an we will have that .some other properties can be deduced for the pre - critical sets .for example , if for some , then we have that for any .on the other hand we have that if for some then for any .another interesting property is the fact that when the set does not intersect for any then we have that between them there exist an invariant compact subset in ] , with an arbitrary small value yet to be defined .note that for any point on the interval the monotonicity condition is satisfied , then it is enough to check that the interval satisfies the invariance condition . in order to have invariance of the interval we have to check that for any .since the monotonicity condition is satisfied , it is enough to check that then it follows that it is enough to have when , we have that there exists a value of sufficiently small such that this is satisfied for any .we are in situation of applying the theorem 4.2 of , we know that is always a continuous invariant curve , which is contained in the set .moreover we know its lyapunov exponent explicitly , therefore when this crosses zero , the theorem implies that there exist two invariant curves ( with negative lyapunov exponents ) of the system ( [ model q.p period doubling ] ) , which correspond to periodic solutions of the system ( [ model q.p period doubling b ] ) .moreover the invariant curves have negative lyapunov exponent , therefore the periodic invariant curve of the original map is attracting . 
to illustrate this last proposition , in figure [ period doubling model parameter space ] we have plotted the curve which constrains the validity of proposition [ proposition existense of period doubling ] .let us assume that in the reducible case ( ) the parabola corresponds to a period doubling bifurcation .note that this parabola has a tangency at the points with the boundary of reducibility , as predicted by theorem [ theorem lyapunov exponent and reducibility ] .then these points in the parameter space would correspond to the `` period doubling - reducibility loss '' bifurcation , since it is the point where both curves merge .recall that in section [ section parameter space and reducibility ] we have reported how the period doubling bifurcation of the flm were enclosed inside regions of reducibility . in the proposed model ( [ skew trivial inviariant set ] ) for the reducibility regions it is easy to justify this behavior .differentiating the equation which defines the map we have that the critical region of the map is . following the arguments of section [ section obstruction to reducibility ] we have that a reducible invariant ( or periodic ) curve of ( [ skew trivial inviariant set ] ) can not intersect the critical set .note that for the set is composed by two closed curves in the cylinder , one in each side of the trivial invariant set .moreover when tends to we have that the components get closer to the trivial invariant set .if we assume that the parabola in the parameter space corresponds to a period doubling bifurcation , then we have that when the trivial invariant set becomes unstable then a period two solution must be created in its neighborhood .but then , when tends to we have that the set gets closer and closer to the trivial invariant set , therefore there is no room for the period doubled curve to be reducible .this explains why arbitrarily close to the `` period doubling - reducibility loss '' bifurcation parameter one can observe a reducibility loss of the period two invariant curve . in figure[ period doubling model parameter space ] there are shown the different bifurcation curves of the map ( [ skew trivial inviariant set ] ) .the reducibility of the period two invariant curve have been estimated numerically .let us remark the resemblance of the region of reducibility of the attractor with the same regions of the bifurcation diagram of the flm .we study the forced logistic map as a toy model for the truncation of the period doubling cascade of invariant curves . in section [ section parameter space and reducibility ]we have done a numerical analysis of the bifurcation diagram of the flm , which is displayed in figure [ flm parameter space ] .this computation revealed that each period doubling bifurcation curve in the parameter space is confined inside a region where the attracting invariant curve is reducible .now we can use studies done in sections [ section obstruction to reducibility ] and [ section period doubling and reducibility ] to review the analysis of the bifurcation diagram of the flm done in section [ section parameter space and reducibility ] . 
in section [ section obstruction to reducibility ]we have done a study of the critical set , their images and their preimages .we have constructed different constrains in the parameter space for the reducibility of the invariant curve ( which is the continuation of the invariant curve for ) .we have also illustrated how the combination of all these constrains seems to be the optimal constrain for the reducibility of the invariant curve . in other words , they approximate the boundary of reducibility of the attracting set of the map . using the notation introduced in section [ section parameter space and reducibility ] , we have that these constraints characterize the curve . in the bifurcation diagram of the figure [ flm parameter space ] only the properties of the stable set are reflected , but we have that the constrains are still valid after the period doubling .actually we conjecture that these constrains give the boundary of existence of the curve when it is unstable . in section [ section period doubling and reducibility ]we have studied the interaction between the reducibility loss and the period doubling bifurcation .for the case of linear skew products we have theorem [ theorem lyapunov exponent and reducibility ] , which says that generically we can expect the period doubling bifurcation curves and the reducibility loss bifurcation curve to be tangent .the diagram of the figure [ flm parameter space ] has been done in terms of the attracting set of the flm . as the flm is not a linear skew product ,this theorem is not applicable .but it can be applied to the linear skew product given by the linearization of the map around the invariant curve . in the same section ,we have also given a model for the interaction of the period doubling bifurcation and the reducibility loss . with this modelwe have seen that , if the period doubling is close to a reducibility loss , then there is an obstruction to the reducibility of the period doubled curve .this explains why the curves , and meet at the same point ( using again the notation of section [ section parameter space and reducibility ] ) .moreover , theorem [ theorem lyapunov exponent and reducibility ] gives a good explanation of why they do it in a tangent way .finally , let us remark that the study done in section [ section period doubling and reducibility ] does not depend on the map considered , therefore it can be extended to the rest of reducibility regions determined by and ( and containing ) .part of the study done here will be continued in . in will propose an extension of the renormalization the theory for the case of one dimensional quasi - periodic forced maps .using this theory we will be able to prove that the curves of reducibility loss bifurcation really exists ( for small enough ) . in we will use the theory proposed in the previous one to study the asymptotic behavior of the reducibility loss bifurcations when the period goes to infinity . in the previous two articles several conjectures will be done .in we will support numerically this conjectures .we will give also numerical evidences of the self - renormalizable character of the bifurcation diagram of the figure [ flm parameter space ] when different values of the rotation number are considered .u. feudel , s. kuznetsov , and a. pikovsky . , volume 56 of _ world scientific series on nonlinear science .series a : monographs and treatises_. world scientific publishing co. 
pte .hackensack , nj , 2006 .dynamics between order and chaos in quasiperiodically forced systems .a. jorba , p. rabassa , and j.c .tatjer . towards a renormalization theory forquasi - periodically forced one dimensional maps ii .asymptotic behavior of reducibility loss bifurcations . in preparation , 2011 . c. sim . on the analytical and numerical approximation of invariant manifolds . in _les mthodes modernes de la mecnique cleste ( course given at goutelas , france , 1989 ) , d. benest and c. froeschl ( eds . ) _ , pages 285329 . editions frontires , paris , 1990 .available at http://www.maia.ub.es/dsg/2004/index.html .
We study the dynamics of the forced logistic map in the cylinder. We compute a bifurcation diagram in terms of the dynamics of the attracting set. Different properties of the attracting set are considered, such as the Lyapunov exponent and, when a periodic invariant curve exists, its period and its reducibility. This reveals that the parameter values for which the invariant curve doubles its period are contained in regions of the parameter space where the invariant curve is reducible. We then present two additional studies to explain this fact. First, we consider the images and the preimages of the critical set (the set where the derivative of the map w.r.t. the non-periodic coordinate vanishes). Studying these sets, we construct constraints in the parameter space for the reducibility of the invariant curve. Second, we consider the reducibility loss of the invariant curve as a codimension-one bifurcation and study its interaction with the period doubling bifurcation. This reveals that, if the reducibility loss and the period doubling bifurcation curves meet, they do so tangentially.
the most important principle in timber engineering to produce structural wood components of constant quality , consists of cutting wood into smaller pieces , selecting the best ones , and joining them again by adhesive bondings .what is known as a rather simple processing step becomes quite complicated , once we look in detail at the penetration of the hardening adhesive into the porous wood skeleton .unfortunately the details of the adhesive penetration can influence bond performance in multiple ways and the quality of the adhesive bonds determine the overall performance of structural parts .what complicates studies of adhesive penetration is the interplay between pore space geometry and fluid transport , cell wall material and adhesive rheology and of course process parameters like amount of adhesive , growth ring orientation , and surface roughness , just to name a few . while for soft wood predictions are rather simple , the micro - structure of hard woods complicates the problem significantly , since adhesives can penetrate through the big vessel network deep into the wood structure . in a previous work we explored the topological characteristics of the vessel network in beech _( fagus sylvatica l. ) _ and showed in part i of this work how the problem is dominated by flow through the vessel network .adhesive penetration into hard wood was studied before , although only experimentally or in descriptive form . for soft wood, the penetration depth can be expressed by a simple trigonometric function , describing the filling of cut tracheids .for hard wood however a model that characterizes the wood anatomy in order to predict the penetration depth and the amount of adhesive inside the structure is unknown .we construct an analitycal model based on the network properties and predict the adhesive penetration and the saturation of the vessel pore space .our model has two - scales : the first scale describing the transport of a hardening adhesive through a single vessel in time due to an applied pressure and capillarity effects , and also with the possibility of constant diffusion of solvent through the vessel wall , what turns out to be important for some adhesives like pvac or uf .when the viscosity increases by hardening and/or loss of solvent , the adhesive front slows down and finally stops . on the second , or network scale , the result for single vessels is embedded into a network model with identical topological properties like pore size distribution and connectivity that are characteristic for the vessel network of the respective wood .the model is compared with experiments where specimens are bonded with parallel longitudinal axes under varying growth ring angles using three different adhesive systems : prf , uf , and pvac .first we describe the rheological model of the adhesives , before we calculate the penetration into a single vessel with diffusion into the half space .subsequently we discuss the network construction and the consideration of process parameters . with all model components at our hands ,we finally compare the model with the experiments and discuss the results .adhesive penetration is the result of an interplay of adhesive hardening , capillary penetration , and technological processing . in order to set up a model for adhesive penetration of hard wood, we have to combine several models in a hierarchical way .first we address bulk viscosity evolution of adhesives due to generic hardening mechanisms . 
on the fundamental level , we model the penetration of a fluid into one single , straight or wavy pipe .this model is enriched by diffusive transport of solvent through its wall . on the next hierarchic level ,we project the fundamental model onto a network structure of perfectly aligned hard wood that represents the vessel network .finally , we rotate the result of the vessel network penetration to consider the general situation , where the adhesive surface is not necessarily aligned to the material orientation . we show how material parameters like porosity , hardening time or applied amount of adhesive will limit penetration .the hardening process of various adhesives can be described by the temporal evolution of the viscosity .depending on the hardening type , different viscosity models need to be applied .for example reactive adhesives do not depend on the solvent concentration , while the viscosity evolution of solvent based adhesives strongly depends on solvent concentration . in parti of this work , we showed experimental viscosity measurements for uf , pvac , and pur .if solvent concentrations are important , like in the case of pvac , the viscosity evolution can be expressed by \exp(\beta [ 1-c ] ) \quad , \ ] ] where , , and are parameters that depend on the adhesive type and the initial solvent concentration . for pvac adhesive ,we find , since the hardening process is mostly due to the loss of moisture and the initial viscosity only depends on the initial concentration . for pur adhesive ,the same expression can be used , however the concentration is kept constant during the process , expressed by and constant , , that only depend on the initial concentration .unfortunately a whole class of adhesives , can not be described by eq .[ nuall ] , since their hardening process is more complex .for example the uf adhesive changes from liquid phase to gel phase during penetration , resulting in penetration arrest . the only active processes after this phase transition are the chemical curing reactions .therefore the viscosity model should take into account the critical time when the phase transition occurs . additionally , the concentration of the solvent changes in time due to the diffusion of the solvent into the cell wood structure .we propose the viscosity relation where , , and are experimental parameters . using the data from ref . we found mpa and and variable parameters ( and ) that depend on the initial solvent concentration .note that describes the time when the penetration process finishes due to the liquid - gel transition .using these two generic hardening models , we are able to describe the viscosity evolution of numerous adhesives. the fundamental scale is given by the capillary transport of a fluid characterized by its viscosity , inside a cylindrical pipe of radius with a penetration rate that follows where with the applied pressure , the surface tension and the contact angle between fluid and pipe wall . to obtain the penetrated distance we integrate leading to a total fluid volume of inside the pipe . 
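As an illustration of how the penetration rate and a hardening viscosity combine, the sketch below integrates a Lucas-Washburn-type rate equation, l dl/dt = (r^2 dP + 2 r sigma cos(phi)) / (8 nu(t)), with an exponentially increasing viscosity nu(t) = nu0 exp(gamma t). Both the specific form of the rate equation and every numerical value are stand-in assumptions for the expressions above, not the fitted parameters of Part I; the point is only that the front advances like the square root of the integral of 1/nu and stops at a finite depth once the adhesive hardens.

```python
import numpy as np

# Illustrative sketch of penetration into a single straight vessel with a hardening adhesive.
# Lucas-Washburn-type rate:  l * dl/dt = (r^2*dP + 2*r*sigma*cos(phi)) / (8*nu(t)),
# with the hypothetical hardening law nu(t) = nu0*exp(gamma*t); all numbers are assumptions.
r, dP      = 28e-6, 1.0e6        # vessel radius [m], applied pressure [Pa]
sigma, phi = 0.04, 0.0           # surface tension [N/m], contact angle [rad]
nu0, gamma = 10.0, 5.0e-3        # initial viscosity [Pa s], hardening rate [1/s]

k = (r**2 * dP + 2.0 * r * sigma * np.cos(phi)) / 8.0      # so that l * dl/dt = k / nu(t)

def penetrated_length(t):
    """l(t) = sqrt(2*k*int_0^t ds/nu(s)); the integral is elementary for exponential nu."""
    return np.sqrt(2.0 * k * (1.0 - np.exp(-gamma * t)) / (nu0 * gamma))

for t in (10.0, 100.0, 1000.0, 1.0e4):
    print(f"t = {t:8.0f} s   l(t) = {penetrated_length(t) * 1e3:7.2f} mm")
print(f"arrest depth sqrt(2k/(nu0*gamma)) = {np.sqrt(2.0 * k / (nu0 * gamma)) * 1e3:7.2f} mm")
```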
for reactive adhesives ,whose hardening only depends on time , the integral can be found by combining eqs .[ nuall ] and [ velocitypeneint ] to this way we obtain the time dependent penetration distance in a straight single vessel , taking into account the applied pressure , the capillarity effects , and the reactive hardening process .note that for adhesive types , whose viscosity changes when in contact with wood , eq .[ velocitypeneint ] can not be integrated so easily , since the viscosity depends also on the concentration that changes with time .note that changes of the contact angle and furface tension of the adhesives with solvent concentration are not considered in this work .since hard wood vessels are not straight , but weave tangentially around rays , the penetration distance needs to be modified .here we simply describe vessels by the radius , wavelength , and amplitude ( see fig . [ model ] ) of the oscillation in the -plane in the parametrized form as , ^ 2}{\sec^2[\arctan(n k\sin(kz ) ) ] } = r^2 \quad , \ ] ] where . by integrating the vessel length along the direction, we obtain the volume \quad.\ ] ] various adhesives contain solvents , whose concentration in the mixture changes with time due to their diffusion into the cellular structure through vessel walls . to take this effect into account , we can write the solution of the diffusion equation in cylindrical -coordinates as with the initial concentration of the solvent and its diffusivity across the cell wall . the average diffusivity of the respective wood proved to be a good value .the mean value for the solvent concentration inside the vessel follows as \quad .\ ] ] to obtain the complete equation for the evolution of the viscosity , we go back to eq . [ nuall ] and insert the concentration evolution into the respective concentration dependent parameters . note that we do not consider the diffusion of low molecular parts of the adhesive .we also neglect the effect of swelling of the wood skeleton due to moisture changes , since the size of vessels is rather big compared to tracheids . ) , tangential ( ) , and radial ( ) directions . [ model ] ] the adhesive penetration is dominated by the flow inside the vessel network , hence its topology determines the adhesive distribution .the network is formed by bundles of vessels that divide and weave around rays of various sizes . inside the bundle ,vessels interconnect by contact zones when touching each other and can also interchange positions . disorder in the networkcan only be considered through a numerical approach . in order to be able to derive an analytical model , we need to neglect disorder and use average topological network parameterswe build up a regular network using the average topological parameters and for connectivity in tangential directions and , for the connectivity in radial direction .[ model ] shows the vessel network in three dimensions with the geometrical parameters , , and .note that and can be obtained from the size distribution of big and middle sized rays that are mainly responsible for the splitting and joining of the bundles of vessels .the parameters and however are more difficult to obtain .basically the probability for radial network interconnections depends on the vessel density .we can therefore find a relation between the vessel density and the parameter . however will remain a free parameter for transport in radial direction . 
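Returning to the solvent-diffusion step above, the following sketch evaluates the decay of the mean solvent concentration inside a cylindrical vessel; this mean value is what enters the concentration-dependent viscosity law. For simplicity the vessel wall is treated as a perfect sink (concentration zero at the wall), which is an assumption and not the exact boundary condition of the model; the series is the standard out-diffusion solution for a cylinder, and the values of D and r are purely illustrative.

```python
import numpy as np
from scipy.special import jn_zeros

# Illustrative sketch: mean solvent concentration remaining inside a cylindrical vessel of
# radius r when the wall is modelled as a perfect sink (c = 0 at rho = r).  This boundary
# condition and the values of D and r are assumptions made only for this example.
r, D = 28e-6, 3.0e-12                 # vessel radius [m], diffusivity across the wall [m^2/s]
lam = jn_zeros(0, 50)                 # first 50 positive zeros of the Bessel function J_0

def mean_concentration_fraction(t):
    """c_bar(t)/c_0 = sum_n (4/lam_n^2) * exp(-D*lam_n^2*t/r^2) (truncated series)."""
    return float(np.sum(4.0 / lam**2 * np.exp(-D * lam**2 * t / r**2)))

for t in (1.0, 10.0, 60.0, 300.0):
    print(f"t = {t:6.0f} s   c_bar/c0 = {mean_concentration_fraction(t):.3f}")
```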
by separating the two geometric parameters , and , , we obtain anisotropic transport in the three principal directions , longitudinal , radial , and tangential ( see fig .[ model ] ) .note that inclined samples with respect to the principal axis can be considered after rotation . to describe the penetration process of adhesives into wood, we have to define the bond line .the bondline is the whole region , where the adhesive can be found .this includes the pure adhesive between the two adherends and the area , where the adhesive has penetrated into the wood structure .the adherends are two pieces of wood which have been connected by the adhesive . in our case, we will focus on the zone where the adhesive layer and the adherent structure coexist .the procedure to obtain the maximum penetration depth is to calculate the penetration separately in each principal direction ( tangential , radial , and longitudinal , shown in fig . [ model ] ) and then applying a rotation matrix to find the total penetration depth of the adhesive when the growth ring angle and the angle between vertical and longitudinal axis of the specimen are not zero . ) , radial ( ) , and longitudinal ( ) direction .thick lines represent filled vessels .[ modelpene ] ] since we employ a regular network , a unit cell can be used that consists of two single vessels with interconnections in the vertices and the center of the cell ( see fig .[ model ] ) .consequently we can use eq .[ vesselvolumen ] of the single vessel to obtain the volume of the unit cell with the area .because the unit cell can reproduce all the network , we can use it to simplify the calculation of the penetration depth on each direction and , using geometric properties , we can relate the adhesive volume inside the network , the network parameters and the penetration depth . to consider the penetration in the _ tangential direction _( see fig .[ modelpene ] ) , we need to consider additionally to the tangential waviness with wavelength and amplitude the radial waviness with amplitude and wavelength .[ modelpene ] shows how the vessel network is filled by the adhesive .the path along one radial wave as function of the tangential coordinate is given by \quad \text{with } \quad g_r=1+\frac{c^2 \pi^2}{8 d^2}~~ .\ ] ] the number of layers accessible from the bond line is defined by with the sample width .summing over all accessible layers gives the total penetrated length as function of to obtain the total penetrated volume we have to calculate the number of unit cells along and in longitudinal sample direction , this is = , and then multiplied by the unit cell volume , eq .[ cellvolumen ] , assuming that the penetration of the adhesive is smaller than the total wavelength , . therefore using eqs .[ cellvolumen ] and [ stotal ] , eq .[ eq : vtot ] can be expressed as when the adhesive stops to penetrate , this volume becomes the maximum volumen inside the structure and the tangential coordinate transforms in the maximum penetration depth , following the idea of calculating the adhesive penetration in each principal direction , the next step is to obtain the penetration depth when the adhesive penetrates only in _ radial direction_. 
fig .[ modelpene ] illustrates the penetration of the adhesive in order to relate the volume with the network parameter and the penetration depth .we analogously count the volume occupied by vessels as function of the radial coordinate .again we calculate the total length of the radial wave but now as function of by and obtain the number of unit cells the total volume occupied is given by multiplying the number of unit cells ( eq . [ numbersection ] ) by the volume from eq .[ cellvolumen ] : as before , we must now compare with the volume occupied by the adhesive with the maximum penetration depth in the radial direction to obtain finally , we can insert the penetration path from eq .[ surroundlength ] and obtain g_t } \quad .\ ] ] we consider now the penetration only in _ longitudinal direction_. fig .[ modelpene ] shows that the adhesive penetration is basically along the vessels .this value is found again by calculating the number of total unit cells , but now in the plane , namely multiplying with the occupied volume of the adhesive for each vessel as function of ( eq . [ vesselvolumen ] ) , and taking into account that the penetration , , we obtain again , comparing this volume with the adhesive volume , using eq .[ surroundlength ] , we can write the maximum penetration depth as \left ( 2g_t-1 \right ) } \quad .\ ] ] finally we obtained the maximum penetration depth in the three principal directions ( eqs .[ deptht],[depthr],[depthl ] ) .we can introduce the porosity of the wood which can be extracted easily from experimental data .expressing the penetration depth in terms of porosity also simplifies the model verification . the number of vessels in the plane equals .the porosity is therefore \quad .\ ] ] since porosity is a mean value , we can neglect the periodic part on the right hand of the eq .[ porosity ] . inserting into eqs .[ deptht ] , [ depthr ] , and [ depthl ] , the maximum penetration depths become up to now , we calculated the penetration of an infinite amount of non - hardening fluid .however the amount of applied adhesive and the penetration time due to hardening are both limited .therefore the volume needs to be calculated considering these limitations .both limitations will lead to different penetrated volumes but only the smaller one has a physical meaning . to calculate the volume with penetration of hardening adhesives, we need to treat the separately .to consider _ tangential penetration _ we employ eq .[ depthmaxp ] and apply adhesive only on the plane . from there , the adhesive can penetrate two channels with radius per unit cell , and considering the number of unit cells on this face , the volume penetrated after the hardening process , using eq .[ velocitypeneint ] , is inserting eq .[ eq : vrl ] into eq .[ depthmaxp ] and using eq .[ porosity ] , we obtain the penetration depth with hardening as for the _ radial penetration _ , the penetrated volume is given by , by inserting into eq .[ depthmaxp ] and taking the mean value of the periodic term , we obtain finally , for the _ longitudinal penetration _ , , following a similar procedure the penetration depth becomes these values determine the maximum penetration depth that the adhesive can reach until becoming solid .however it is possible , that not enough adhesive is available , and penetration stops before . using the available adhesive volume in eqs .[ depthmaxp ] , the penetration depths , , and can be calculated and compared to the hardening ones ( , , ) , e.g. if , to obtain the limiting case . 
in order to apply our model to real situations, we must have a way to consider an orientation of the adhesive application surface that deviates from the wood material system .therefore we need to calculate the global penetration depth and as function of , , , and , , , respectively .we can apply a rotation matrix with the growth ring angle and the angle between the vertical axis and the longitudinal axis of specimen . assuming that the adhesive is always applied on the plane , we apply two rotations in the principal coordinate system , one in the radial direction and the other in the longitudinal direction via the rotation matrix note that eqs . [ depthmaxp ] give a dependence of the penetration depths on the application areas , , .we define a penetration vector , where is oriented normal to the adhesive surface . is given by in the principal coordinate system ( ) .applying the rotation matr ix to the vector , the component gives the maximum penetration depth with the area of the surface where the adhesive is applied .we can directly apply the rotation matrix to the vector and find , with these derivations , we complete the geometric and dynamical description of our model .the information about the dynamics of the adhesive is included in the length according to eq .[ velocitypeneint ] .finally , our maximum penetration depth with solvent diffusion can be calculated using eq .[ penetrationfinalh ] , by replacing the concentration function in eq .[ nuall ] for the respective adhesives . in a next stepwe will apply the model to experiments described in the first part of this paper .using synchrotron radiation x - ray tomographic microscopy ( srxtm ) and digital image analysis , we extracted bond lines from beech wood samples , that were bonded with pur , uf , and pvac adhesives of different viscosity under growth ring angles ranging from 0 to 90 in 15 steps . since our model is periodic , we will calculate the maximum penetration depths for various situations .the procedure is as follows : first we calculate the penetration distance of adhesive inside a single vessel .note that for pur the calculation is without time dependence of the concentration using eq .[ finallpur ] , while for pvac and uf adhesives with time dependence additionally eq .[ concen ] is used .the porosity and mean radius of the vessel are taken from an earlier srxtm study as =28.03 m and porosity =0.34 .for all samples the mean applied pressure was .literature values of the surface tension , for the three types of adhesives , are not large enough to compete with the applied pressure term in eqs .[ velocitypene ] , leading to negligible capillarity effects , this means .the parameters for the viscosity , , and are taken from part i .dependence of the viscosity parameters and with the solvent concentration for uf adhesive .the dots denote the experimental data and the solid line the exponential fit .experimental values for uf are ,,, . ] * for pur adhesive , we choose the concentration , and the parameters , =9.74 10 , =0.0028s , =0 .the diffusion of solvent is not relevant . 
with these quantities the length for pur adhesive calculated using eq .[ finallpur ] to =0.304 m .this value seems huge at first sight , however it related to the path along the waving vessels that can easily reach lengths of 0.5 m and above .* in the case of pvac adhesive , the parameters are =0.49 , =0.001859 mpa , =0 , =0s and =29.64 .the diffusivity of the solvent ( water ) for the samples is taken as =3.0 m s , and using eq .[ velocitypeneint ] , we find a significantly lower value =0.7 mm . * for ufwe include the viscosity parameters and of eq .[ nualluf ] that change with the solvent concentration .we use the experimental viscosity data from ref. and fit it with analytical curves ( see fig . [ viscosity ] ) to determine the concentration dependence of the viscosity parameters and . after the identification of the right values for and , we can integrate eq .[ velocitypeneint ] and obtain a vessel penetration depth of =1.1 mm .the parameters and can be determined experimentally using image processing ( see ref. ) . in our casewe measured the area and the eccentricity of segmented rays and averaged over several samples , and obtained values for mm and mm . to eliminate variations due to the year ring structure, we used an average porosity of .the cylindric sample size had mm height and mm diameter , leading to an adhesive area of .as described in ref. , the quantity of applied adhesive was around for all adhesives .we compare the maximum penetration depth for samples with different growth ring and grain angles ( see figs . [ overview1],[overview2 ] ) .we fit the parameter to obtain , what can be interpreted as a lower probability of interconnection in radial than in tangential direction . and grain angle is shown by the white lines.[overview1 ] ] * for pur adhesive , we choose a sample with angles , and . if we calculate the maximum penetration depth using eq .[ penetrationfinal ] we obtain a value of mm with hardening as limiting factor , however using the volume limitation with eq .[ penetrationfinalh ] we obtain .therefore we can conclude that all the adhesive penetrated before hardening took place , leaving a starved bond line behind ( see fig .[ overview1 ] ) .note that the adhesive penetrates both wood pieces , but significantly deeper into on the application side ( right side of samples in fig .[ overview1 ] ) .therefore we have to take an average value of showing good agreement with the experimental data . to test the model for other orientations , we choose a sample with and .this means we use the previous calculation but apply a new rotation matrix .we obtain a penetration depth of , and . in fig .[ overview1 ] the quality of the analytical prediction is shown .we repeat this for angles and and obtain the penetration depths , and ( compare fig .[ overview1 ] ) .these tests show that our model is a good approximation for the beech wood structure and therefore we fix the network parameters for further calculations . *we exemplify the penetration of pvac using a sample oriented at angles and and calculate the maximum penetration depths from eqs .[ penetrationfinal ] and [ penetrationfinalh ] .we find , leading to and .therefore the maximum penetration depth is limited by the hardening process . in fig .[ overview2 ] , we show that almost all the adhesive remains in the bond line with only a small quantity of adhesive inside the vessel network . 
* foruf we repeat the same procedure as before on a sample with orientation angles and .we find that the penetration depths are , , and and again the penetration of the adhesive is limited by adhesive hardening . fig .[ overview2 ] shows the sample with the predicted penetration depth , exhibiting excellent agreements between the analytical prediction and the experiments .bond lines of pvac and uf adhesive in beech wood with maximum predicted penetration depth for samples with the orientation angle and .all dimensions are given in m . ] to study the dependence of maximum penetration depth on the growth ring angle , we take the values for uf and keep all parameters fixed , except the growth ring angle . in fig .[ figure7 ] we see the two limiting conditions for the penetration depth .the penetration depth is an increasing function of the growth ring angle for the hardening limitation case , and we observe a distinct maximum at approximately in the case when the maximum available volume is the limitation .this observation is in agreement with pur adhesive that fulfills the volume limiting condition , as demonstrated in part i of this work .this result shows that even though we reduce the wood anatomy to a homogeneous , regular network , adhesive transport , the beech wood seems well described by the model and for a desired penetration depth , the model can predict the optimal growth ring angle of the samples .dependence of the penetration depth on the growth ring angle . ] our model can also be used to design new adhesives with optimized properties , like reactivity , if an ideal penetration depth is to be reached .[ optim ] shows the maximum penetration depth for a wide range of adhesive parameters from fig .[ viscosity ] .we show this in four plots combining two parameters .horizontal planes represent the case where all available adhesive is inside the vessel structure , while the curved surfaces show the penetration limit due to adhesive hardening . 
the intersection line ( see fig .[ optim ] ) separates regions with complete penetration from those , where penetration is limited by adhesive hardening .therefore , fig .[ optim ] allows to choose a pair of reactivity parameters in order to obtain a desired penetration depth .the model can also be used to minimize solvent concentration and amount of applied adhesive for a required penetration depth .[ peneoptim ] illustrates the maximum penetration depth as function of the solvent concentration and the total amount of applied adhesive .the solid lines represent the proportions between solvent concentration and the total applied volume of adhesive which give the same penetration depth .penetration depth in mm as function of adhesive parameters from fig .[ viscosity ] for uf and growth ring angle of 45 .the experimentally obtained value for uf is marked with the white dot .( color version online ) ] penetration depth in mm as function of solvent concentration and total amount of applied adhesive for uf and growth ring angle of 45 .the solid lines represent the fixed and the dashed line divides the surface into two regions , the right one when the penetration is limited by the hardening process and the left one , when the total volume of applied adhesive is the limiting factor .( color version online ) ]we presented an analytical model for the prediction of the penetration depth of adhesives , paint or hardening fluids in general , into the beech wood structure .since we focused on hard wood , the pore space of wood is formed by a network of interconnected vessels .the network is characterized by parameters that are related to amplitude and wavelength of the oscillating vessels in tangential and radial direction and the porosity of wood originating from the vessel network .we compared the model to experiments and found good agreement for various adhesives . therefore , even though we reduce the wood anatomy to a homogeneous , regular network , adhesive transport for the much more disordered , complex pore space of real beech wood seems well described .the analytical model considers generic types of adhesive hardening .however if other special fluids are to be considered , only the viscosity dependence on concentration and time need to be known .we applied the model to three major types of adhesive which are pur , uf , and pvac and compared the respective penetration depths . for adhesives whose hardening process depends on the change of concentration , we include a description for solvent concentration diffusion inside the beech wood .this approach makes the model applicable for adhesives like uf and pvac .penetration is limited by two things : the penetration due to the applied pressure that is arrested by hardening processes , and the total amount of applied adhesive that is available to penetrate into the vessel network .the smaller penetration depth is the limiting one . by comparing the model with the experimental data we showed that it is possible to model the maximum penetration depth for three different used adhesives , namely pur , pvac , and uf .our model is sufficiently simple to allow for a broad applicability . 
by determining the morphological and rheological parameters ,it can be applied to a wide range of wood species and to fluids with various hardening kinematics to predict the penetration depth of these fluids into porous structures , when transport is dominated by capillarity .the authors are grateful for the financial support of the swiss national science foundation ( snf ) under grant no .116052 . a.a .marra , technology of wood bonding , van nostrand reinhold , new york , ny ( 1992 ) .j. custodio , j. broughton , h. cruz , a review of factors influencing the durability of structural bonded timber joints , international journal of adhesion and adhesives , 29 , 173 - 185 ( 2009 ) .wang , n. yan , characterizing liquid resin penetration in wood using a mercury intrusion porosimeter , wood and fiber science , 37 , 505 - 514 ( 2005 ) .kamke , j.n .lee , adhesive penetration of wood - a review , wood and fiber science , 39(2 ) , 205 - 220 ( 2007 ) .siau , transport processes in wood .springer , new york , ny ( 1984 ) .p. hass , f.k .wittel , s.a .mcdonald , f. marone , m. stampanoni , h.j .herrmann , p. niemz , pore space analysis of beech wood - the vessel network .submitted to holzforschung .preprint visible in electronic form .m. sernek , j. resnik , and f.a .kamke , penetration of liquid urea - formaldehyde adhesive into beech wood , wood and fiber science , 31(1 ) , 41 - 48 ( 1999 ) .p. niemz , d. mannes , e. lehmann , p. vontobel , and s. haase , untersuchungen zur verteilung des klebstoffes i m bereich der leimfugen mittels neutronenradiographie und mikroskopie , european journal of wood and wood products 62 , 424 - 432 ( 2004 ) .collett , a review of surface and interfacial adhesion in wood science and related fields , wood science and technology , 6,1 - 42 ( 1972 ) .o. suchsland , ber das eindringen des leimsbei der holzverleimung und die bedeutung der eindringtiefe fr die fugenfestigkeit , european journal of wood and wood products 16(39),101 - 108 ( 1958 ) .p. hass , m. mendoza , f.k .wittel , p. niemz , h.j .herrmann , adhesive penetration of hard wood : part i : experiments on beech , submitted to wood science and technology .preprint visible in electronic form . e.w .washburn , the dynamics of capillarity flow , phys .17 , 273 - 283 ( 1921 ) .bosshard , l. kucera , the network of vessel system in fagus sylvatica l. , european journal of wood and wood products 31,437 - 445 ( 1973 ) .a. bhattacharya and p. ray , studies on surface tension of poly(vinyl alcohol ) : effect of concentration , temperature , and addition of chaotropic agents , j. of appl .science , 93 , 122 - 130 ( 2004 ) .hse , surface tension of phenol - formaldehyde wood adhesives , holzforschung 26 , 82 - 85 ( 1972 ) .s. lee , t.f .shupe , l.h .groom and c.y .hse , wetting behaviors of phenol- and urea - formaldehyde resins as compatibilizers , wood and fiber science 39 , 482 - 492 ( 2007 ) .s. kurjatko and j. kudela , wood structure and properties , arbora publisher ( 1998 ) .w. olek , p. perr and j. weres , inverse analysis of the transient bound water diffusion in wood , holzforschung 59 , 38 - 45 ( 2005 ) .
We propose an analytical model to predict adhesive penetration into hard wood. Penetration of hard wood is dominated by the vessel network, which prohibits porous-medium approximations. Our model considers two scales: a one-dimensional capillary transport of a hardening adhesive through a single, straight vessel with diffusion of solvent through its walls, and a mesoscopic scale based on topological characteristics of the vessel network, where results from the single-vessel scale are mapped onto a periodic network. Given an initial amount of adhesive and the applied bonding pressure, we calculate the filled portion of the structure. The model is applied to beech wood samples joined with three different types of adhesive (PUR, UF, PVAc) under various growth ring angles. We evaluate the adhesive properties and bond line morphologies described in Part I of this work. The model contains one free parameter that can be adjusted to fit the experimental data.
the concept of efficiency has its origin in fisher s 1920s claim of asymptotic optimality of the maximum - likelihood estimator in differentiable parametric models ( fisher ) . in 1930s and 1940s ,fisher s ideas on optimality in differentiable models were sharpened and elaborated upon ( see , e.g. , cramr ) , until hodges s 1951 discovery of a superefficient estimator indicated that a comprehensive understanding of optimality in differentiable estimation problems remained elusive .further consideration directed attention to the property of _ regularity _ to delimit the class of estimators over which optimality is achieved .hjek s convolution theorem ( hjek ) implies that within the class of regular estimates , asymptotic variance is lower - bounded by the cramr rao bound in the limit experiment .the asymptotic minimax theorem ( hjek ) underlines the central role of the concept of regularity .an estimator that is optimal among regular estimates is called _ best - regular _ ; in a hellinger differentiable model , an estimator for is best - regular _ if and only if _ it is asymptotically linear , that is , for all in the model , where is the score for and the corresponding fisher information . to address the question of efficiency in smooth parametric models from a bayesian perspective , we turn to the bernstein von mises theorem . in the literaturemany different versions of the theorem exist , varying both in ( stringency of ) conditions and ( strength or ) form of the assertion .following le cam and yang ( see also van der vaart ) , we state the theorem as follows .( for later reference , define a prior to be _ thick _ at , if it has a lebesgue density that is continuous and strictly positive at . )[ thmparabvm ] assume that is open and that the model is identifiable and dominated .suppose forms an i.i.d .sample from for some .assume that the model is locally asymptotically normal at with nonsingular fisher information . furthermore, suppose that : the prior is thick at ; for every , there exists a test sequence such that then the posterior distributions converge in total variation , in -probability , where denotes any best - regular estimator sequence . 
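Before turning to the proof, a small simulation may help to visualise the assertion of theorem [thmparabvm] in the simplest setting: a Bernoulli model with a Beta prior, which has a continuous, strictly positive density at any interior point. The sketch below computes the total-variation distance between the exact Beta posterior and the normal distribution centered at the maximum-likelihood estimator with variance equal to the inverse Fisher information divided by n; the prior, the true parameter and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

# Simulation sketch of the parametric Bernstein-von Mises phenomenon for i.i.d.
# Bernoulli(theta_0) data with a Beta(a, b) prior; all choices below are illustrative.
rng = np.random.default_rng(0)
theta0, a, b = 0.3, 2.0, 2.0
grid = np.linspace(1e-6, 1.0 - 1e-6, 200_001)
dx = grid[1] - grid[0]

for n in (50, 500, 5_000, 50_000):
    x = rng.binomial(1, theta0, size=n)
    s = int(x.sum())
    theta_hat = s / n                                   # MLE = a best-regular estimator here
    posterior = stats.beta(a + s, b + n - s).pdf(grid)  # exact posterior density
    # normal approximation centered at the MLE with variance I(theta_0)^{-1}/n,
    # where I(theta) = 1/(theta*(1-theta)) is the Bernoulli Fisher information
    normal = stats.norm(theta_hat, np.sqrt(theta0 * (1.0 - theta0) / n)).pdf(grid)
    tv = 0.5 * np.sum(np.abs(posterior - normal)) * dx
    print(f"n = {n:6d}   total-variation distance = {tv:.4f}")
```

The printed distances shrink as n grows, which is the total-variation convergence asserted by the theorem.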
for a proof ,the reader is referred to ( or to kleijn and van der vaart , for a proof under model misspecification that has a lot in common with the proof of theorem [ thmpan ] below ) .neither the frequentist theory on asymptotic optimality nor theorem [ thmparabvm ] generalize fully to nonparametric estimation problems .examples of the failure of the bernstein von mises limit in infinite - dimensional problems ( with regard to the _ full _ parameter ) can be found in freedman .freedman initiated a discussion concerning the merits of bayesian methods in nonparametric problems as early as 1963 , showing that even with a natural and seemingly innocuous choice of the nonparametric prior , posterior inconsistency may result .this warning against instances of inconsistency due to ill - advised nonparametric priors was reiterated in the literature many times over , for example , in cox and in diaconis and freedman .however , general conditions for bayesian consistency were formulated by schwartz as early as 1965 ; positive results on posterior rates of convergence in the same spirit were obtained in ghosal , ghosh and van der vaart ( see also , shen and wasserman ) .the combined message of negative and positive results appears to be that the choice of a nonparametric prior is a sensitive one that leaves room for unintended consequences unless due care is taken .this lesson must also be taken seriously when one asks the question whether the posterior for the parameter of interest in a semiparametric estimation problem displays bernstein von mises - type limiting behavior .like in the parametric case , we estimate a finite - dimensional parameter , but now in a model that also leaves room for an infinite - dimensional nuisance parameter .we look for general sufficient conditions on model and prior such that the _ marginal posterior for the parameter of interest _satisfies in -probability , where here denotes the efficient score function and the efficient fisher information [ assumed to be nonsingular at .the sequence also features on the r.h.s . of the semiparametric version of ( [ eqaslin ] )( see lemma 25.23 in ) .assertion ( [ eqassertbvm ] ) often implies efficiency of point - estimators like the posterior median , mode or mean ( a first condition being that the estimate is a functional on , continuous in total - variation ) and always leads to asymptotic identification of credible regions with efficient confidence regions . to illustrate ,if is a credible set in , ( [ eqassertbvm ] ) guarantees that posterior coverage and coverage under the limiting normal for are ( close to ) equal .because the limiting normals are _ also _ the asymptotic sampling distributions for efficient point - estimators , ( [ eqassertbvm ] ) enables interpretation of credible sets as asymptotically efficient confidence regions . from a practical point of view, the latter conclusion has an important implication : whereas it can be hard to compute optimal semiparametric confidence regions directly , simulation of a large sample from the marginal posterior ( e.g. 
, by mcmc techniques ; see robert ) is sometimes comparatively straightforward .instances of the bernstein von mises limit have been studied in various semiparametric models : several papers have provided studies of asymptotic normality of posterior distributions for models from survival analysis .particularly , kim and lee show that the _ infinite - dimensional _posterior for the cumulative hazard function under right - censoring converges at rate to a gaussian centered at the aalen nelson estimator for a class of neutral - to - the - right process priors . in kim , the posterior for the baseline cumulative hazard function and regression coefficients in cox s proportional hazard modelare considered with similar priors .castillo considers marginal posteriors in cox s proportional hazards model and stein s symmetric location problem from a unified point of view .a general approach has been given in shen , but his conditions may prove somewhat hard to verify in examples .cheng and kosorok give a general perspective too , proving weak convergence of the posterior under sufficient conditions .rivoirard and rousseau prove a version for linear functionals over the model , using a class of nonparametric priors based on infinite - dimensional exponential families .boucheron and gassiat consider the bernstein von mises theorem for families of discrete distributions .johnstone studies various marginal posteriors in the gaussian sequence model .the ( frequentist ) true distribution of the data is denoted and assumed to lie in , so that there exist , such that .we localize by introducing with inverse .the expectation of a random variable with respect to a probability measure is denoted ; the sample averageof is denoted and ( for other conventions and nomenclature customary in empirical process theory , see ) . if is stochastic, denotes the integral .the hellinger distance between is denoted and induces a metric on the space of nuisance parameters by , for all .we endow the model with the borel -algebra generated by the hellinger topology and refer to regarding issues of measurability .consider estimation of a functional on a dominated nonparametric model with metric , based on a sample i.i.d . according to .we introduce a prior on and consider the subsequent sequence of posteriors , where is any measurable model subset .typically , optimal ( e.g. , minimax ) nonparametric posterior rates of convergence are powers of ( possibly modified by a slowly varying function ) that converge to zero more slowly than the parametric -rate .estimators for may be derived by `` plugging in '' a nonparametric estimate [ cf . , but optimality in rate or asymptotic variance can not be expected to obtain generically in this way .this does not preclude efficient estimation of real - valued aspects of : parametrize the model in terms of a finite - dimensional _ parameter of interest _ and a _ nuisance parameter _ where is open in and an infinite - dimensional metric space : . 
assuming identifiability, there exist unique , such that .assuming measurability of the map , we place a product prior on to define a prior on .parametric rates for the marginal posterior of are achievable because it is possible for contraction of the full posterior to occur anisotropically , that is , at rate along the -direction , but at a slower , nonparametric rate along the -directions .the proof of ( [ eqassertbvm ] ) will consist of three steps : in section [ secpert ] , we show that the posterior concentrates its mass around so - called _ least - favorable submodels _ ( see stein and ) . in the second step ( see section [ secilan ] ), we show that this implies local asymptotic normality ( lan ) for integrals of the likelihood over , with the efficient score determining the expansion . in section [ secpan ], it is shown that these lan integrals induce asymptotic normality of the marginal posterior , analogous to the way local asymptotic normality of parametric likelihoods induces the parametric bernstein von mises theorem . to see why asymptotic accumulation of posterior mass occurs around so - called least - favorable submodels ,a crude argument departs from the observation that , according to ( [ eqposterior ] ) , posterior concentration occurs in regions of the model with relatively high ( log-)likelihood ( barring inhomogeneities of the prior ) .asymptotically , such regions are characterized by close - to - minimal kullback leibler divergence with respect to . to exploit this ,let us assume that for each in a neighborhood of , there exists a unique minimizer of the kullback leibler divergence , giving rise to a submodel .as is well known , if is smooth it constitutes a least - favorable submodel and scores along are efficient .[ in subsequent sections it is not required that is defined by ( [ eqminkl ] ) , only that is least - favorable . ]neighborhoods of are described with hellinger balls in of radius around , for all , to give a more precise argument for posterior concentration around , consider the posterior for , _ given _ ; unless happens to be equal to , the submodel is misspecified .kleijn and van der vaart show that the misspecified posterior concentrates asymptotically in any ( hellinger ) neighborhood of the point of minimal kullback leibler divergence with respect to the true distribution of the data . applied to , we see that receives asymptotic posterior probability one for any . for posterior concentration to occur prior mass must be present in certain kullback leibler - type neighborhoods . in the present context , these neighborhoods can be defined as \\[-8pt ] & & \hspace*{37.5pt}p_0\biggl ( \sup_{\|h\|\leq m}-\log\frac{p_{{\theta}_n(h),\eta}}{p_{{\theta } _ 0,\eta_0 } } \biggr)^2\leq\rho^2 \biggr\}\nonumber\end{aligned}\ ] ] for and .if this type of posterior convergence occurs with an appropriate form of uniformity over the relevant values of ( see `` consistency under perturbation , '' section [ secpert ] ) , one expects that the nonparametric posterior contracts into hellinger neighborhoods of the curve ( theorem [ thmpertroc ] and corollary [ corconspert ] ) . 
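The definition of the KL-minimising nuisance in ([eqminkl]) can be made concrete in a toy Gaussian partial-linear-type model Y = theta*U + f(V) + e with V taking finitely many values: there the Kullback-Leibler divergence of the candidate from the true distribution reduces to a mean squared difference of regression functions, so its minimiser over f is f*(v) = f0(v) + (theta0 - theta) E[U | V = v], a classical least-favorable direction. The sketch below verifies this by minimising a Monte Carlo version of the criterion level by level; all distributions and numbers are assumptions made only for this illustration.

```python
import numpy as np

# Toy illustration of the KL-minimising nuisance eta*(theta) in a Gaussian partial-linear-type
# model Y = theta*U + f(V) + e with finitely many values of V.  For Gaussian errors the KL
# divergence is E(theta0*U + f0(V) - theta*U - f(V))^2 / (2*sigma^2), so the minimiser over f
# is f*(v) = f0(v) + (theta0 - theta)*E[U | V = v].  Everything below is an illustrative choice.
rng = np.random.default_rng(1)
n, theta0, theta = 200_000, 1.0, 0.6
levels = np.array([0.0, 1.0, 2.0])
f0 = np.array([0.5, -0.2, 1.3])

idx = rng.integers(0, len(levels), size=n)
V = levels[idx]
U = 0.8 * V + rng.normal(size=n)            # chosen so that E[U | V = v] = 0.8*v

# minimise the Monte Carlo KL criterion over f(v) level by level: it is quadratic in f(v),
# with minimiser equal to the conditional mean of theta0*U + f0(v) - theta*U given V = v
f_star = np.array([np.mean((theta0 - theta) * U[idx == j]) + f0[j] for j in range(len(levels))])
predicted = f0 + (theta0 - theta) * 0.8 * levels

for j, v in enumerate(levels):
    print(f"v = {v}:  numerical f*(v) = {f_star[j]:+.3f}   "
          f"f0(v) + (theta0-theta)*E[U|V=v] = {predicted[j]:+.3f}")
```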
to introduce the second step ,consider ( [ eqposterior ] ) with for some measurable .since the prior is of product form , , the marginal posterior for the parameter depends on the nuisance factor only through the integrated likelihood ratio , where we have introduced factors in the denominator for later convenience ; see ( [ eqdefpd ] ) .[ the localized version of ( [ eqdefsn ] ) is denoted ; see ( [ eqdefsn ] ) . ]the map is to be viewed in a role similar to that of the _ profile likelihood _ in semiparametric maximum - likelihood methods ( see , e.g. , severini and wong and murphy and van der vaart ) , in the sense that embodies the intermediate stage between nonparametric and semiparametric steps of the estimation procedure .we impose smoothness through a form of le cam s local asymptotic normality : let be given , and let be a one - dimensional submodel of such that . specializing to i.i.d .observations , we say that the model is _ stochastically lan _ at along the direction , if there exists an -function with such that for all random sequences bounded in -probability , here is the score - function , and is the fisher information of the submodel at .stochastic lan is slightly stronger than the usual lan property . in examples ,the proof of the ordinary lan property often extends to stochastic lan without significant difficulties . although formally only a convenience , the presentation benefits from an _ adaptive _ reparametrization ( see section 2.4 of bickel et al . ) : based on the least - favorable submodel , we define , for all , , and we introduce the notation . with , describes the least - favorable submodel and with a nonzero value of , describes a version thereof , translated over a nuisance direction ( see figure [ figrecoord ] ) . expressed in terms of the metric , the sets mapped to open balls centered at the origin , in the formulation of theorem [ thmsbvmone ] , we make use of a domination condition based on the quantities for all and . below , it is required that there exists a sequence with , , such that , for every _ bounded _ , stochastic sequence , ( where the expectation concerns the stochastic dependence of as well ; see _ notation and conventions _ ) . for a single , fixed , the requirement says that the likelihood ratio remains integrable when we replace by the maximum - likelihood estimator .lemma [ lemudom ] demonstrates that ordinary differentiability of the likelihood - ratio with respect to , combined with a uniform upper bound on certain fisher information coefficients , suffices to satisfy for all bounded , stochastic and every .the second step of the proof can now be summarized as follows : assuming stochastic lan of the model , contraction of the nuisance posterior as in figure [ fignbd - d ] and said domination condition .shown are the least - favorable curve and ( for fixed and ) the neighborhood of .the sets are expected to capture ( -conditional ) posterior mass one asymptotically , for all and . ] are enough to turn lan expansions for the integrand in ( [ eqdefsn ] ) into a single lan expansion for .the latter is determined by the efficient score , because the locus of posterior concentration , , is a least - favorable submodel ( see theorem [ thmilanone ] ) . .curved lines represent sets for fixed .the curve through parametrizes the least - favorable submodel .vertical dashed lines delimit regions such that . also indicated are directions along which the likelihood is expanded , with score functions . 
]the third step is based on two observations : first , in a semiparametric problem , the integrals appear in the expression for the marginal posterior in exactly the same way as parametric likelihood ratios appear in the posterior for parametric problems .second , the parametric bernstein von mises proof depends on likelihood ratios _ only _ through the lan property . as a consequence , local asymptotic normality for offers the possibility to apply le cam s proof of posterior asymptotic normality in semiparametric context . if , in addition , we impose contraction at parametric rate for the marginal posterior , the lan expansion of leads to the conclusion that the marginal posterior satisfies the bernstein von mises assertion ( [ eqassertbvm ] ) ; see theorem [ thmpan ] .before we state the main result of this paper , general conditions imposed on models and priors are formulated : _ model assumptions ._ throughout the remainder of this article , is assumed to be well specified and dominated by a -finite measure on the sample space and parametrized identifiably on , with open and a subset of a metric vector - space with metric .smoothness of the model is required but mentioned explicitly throughout .we also assume that there exists an open neighborhood of on which a least - favorable submodel is defined ._ prior assumptions . _ with regard to the prior we follow the product structure of the parametrization of , by endowing the parameterspace with a product - prior defined on a -field that includes the borel -field generated by the product - topology . also , it is assumed that the prior is thick at . with the above general considerations for model and prior in mind, we formulate the main result of this paper .[ thmsbvmone ]let be distributed i.i.d.- , with , and let be thick at .suppose that for large enough , the map is continuous -almost - surely .also assume that is stochastically lan in the -direction , for all in an -neighborhood of and that the efficient fisher information is nonsingular .furthermore , assume that there exists a sequence with , such that : for all , there exists a such that , for large enough , for all large enough , the hellinger metric entropy satisfies and , for every bounded , stochastic .the model satisfies the domination condition , for all , hellinger distances satisfy the uniform bound , finally , suppose that for every , , the posterior satisfies then the sequence of marginal posteriors for converges in total variation to a normal distribution , centered on with covariance matrix .the assertion follows from combination of theorem [ thmpertroc ] , corollary [ corconspert ] , theorems [ thmilanone ] and [ thmpan ] .let us briefly discuss some aspects of the conditions of theorem [ thmsbvmone ] .first , consider the required existence of a least - favorable submodel in . in many semiparametric problems ,the efficient score function is _ not _ a proper score in the sense that it corresponds to a smooth submodel ; instead , the efficient score lies in the -closure of the set of all proper scores .so there exist sequences of so - called _ approximately least - favorable _ submodels whose scores converge to the efficient score in . 
using such approximations of , our proof will entail extra conditions , but there is no reason to expect problems of an overly restrictive nature .it may therefore be hoped that the result remains largely unchanged if we turn ( [ eqrepara ] ) into a sequence of reparametrizations based on suitably chosen approximately least - favorable submodels .second , consider the rate , which must be slow enough to satisfy condition ( iv ) and is fixed at ( or above ) the minimax hellinger rate for estimation of the nuisance with known by condition ( ii ) , while satisfying ( i ) and ( iii ) as well . conditions ( i ) and ( ii ) also arise when considering hellinger rates for nonparametric posterior convergence and the methods of ghosal et al . can be applied in the present context with minor modifications . in addition, lemma [ lemudom ] shows that in a wide class of semiparametric models , condition ( iii ) is satisfied for _ any _ rate sequence .typically , the numerator in condition ( iv ) is of order , so that condition ( iv ) holds true for any such that .the above enables a rate - free version of the semiparametric bernstein von mises theorem ( corollary [ corsimplesbvm ] ) , in which conditions ( i ) and ( ii ) above are weakened to become comparable to those of schwartz for nonparametric posterior consistency .applicability of corollary [ corsimplesbvm ] is demonstrated in section [ secplr ] , where the linear coefficient in the partial linear regression model is estimated .third , consider condition ( v ) of theorem [ thmsbvmone ] : though it is necessary [ as it follows from ( [ eqconvtv ] ) ] , it is hard to formulate straightforward sufficient conditions to satisfy ( v ) in generality .moreover , condition ( v ) involves the nuisance prior and , as such , imposes another condition on besides ( i ) . to lessen its influence on , constructions in section [ secmarg ] either work for all nuisance priors ( see lemma [ lemlehmann ] ) or require only consistency of the nuisance posterior ( see theorem [ thmrocbayes ] ) .the latter is based on the limiting behavior of posteriors in misspecified parametric models and allows for the tentative but general observation that a bias [ cf .( [ eqnearstraight ] ) ] may ruin -consistency of the marginal posterior , especially if the rate is sub - optimal . in the example of section [ secplr ] ,the `` hard work '' stems from condition ( v ) of theorem [ thmsbvmone ] : hlder smoothness and boundedness of the family of regression functions in corollary [ corsmoothplr ] are imposed in order to satisfy this condition . 
since conditions ( i ) and ( ii )appear quite reasonable and conditions ( iii ) and ( iv ) are satisfied relatively easily , condition ( v ) should be viewed as the most complicated in an essential way .to conclude , consistency under perturbation ( with appropriate rate ) is one of the sufficient conditions , but it is by no means clear in how far it should also hold with necessity .one expects that in some situations where consistency under perturbation fails to hold fully , integral local asymptotic normality ( see section [ secilan ] ) is still satisfied in a weaker form .in particular , it is possible that ( [ eqilan ] ) holds with a less - than - efficient score and fisher information , a result that would have an interpretation analogous to suboptimality in hjek s convolution theorem .what happens in cases where integral lan fails more comprehensively is both interesting and completely mysterious from the point of view taken in this article .in this section , we consider contraction of the posterior around least - favorable submodels .we express this form of posterior convergence by showing that ( under suitable conditions ) the conditional posterior for the nuisance parameter contracts around the least - favorable submodel , conditioned on a sequence for the parameter of interest with .we view the sequence of models as a random perturbation of the model and generalize ghosal et al . to describe posterior contraction .ultimately , random perturbation of represents the `` appropriate form of uniformity '' referred to just after definition ( [ eqksets ] ) . given a rate sequence , , we say that the conditioned nuisance posterior is _ consistent under -perturbation at rate _ , if for all bounded , stochastic sequences .[ thmpertroc ] assume that there exists a sequence with , such that for all and every bounded , stochastic : there exists a constant such that for large enough , for large enough , there exist such that for large enough , the least - favorable submodel satisfies .then , for every bounded , stochastic there exists an such that the conditional nuisance posterior converges as under -perturbation .let be a stochastic sequence bounded by , and let be given .let and be as in conditions ( i ) and ( ii ) .choose and large enough to satisfy condition ( ii ) for some .by lemma [ lemrocdenom ] , the events satisfy .using also the first limit in ( [ eqnuistest ] ) , we then derive [ even with random , the posterior , by definition ( [ eqposterior ] ) ] . the first term on the r.h.s .can be bounded further by the definition of the events , due to condition ( iii ) it follows that for large enough .therefore , \\[-8pt ] & & \qquad\leq\int_{d^c({\theta}_0,l\rho_n/2 ) } p_{{\theta}_n(h_n),\eta}^n(1-\phi_n ) \,d\pi_h(\eta).\nonumber\end{aligned}\ ] ] upon substitution of ( [ eqonedomain ] ) and with the use of the second bound in ( [ eqnuistest ] ) and ( [ eqsuffprior ] ) , the choice we made earlier for proves the assertion .we conclude from the above that besides sufficiency of prior mass , the crucial condition for consistency under perturbation is the existence of a test sequence satisfying ( [ eqnuistest ] ) . 
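for orientation, a test condition of the kind referred to in ( [ eqnuistest ] ) typically takes a form like the following ; this is a sketch modeled on analogous conditions in ghosal et al . and kleijn and van der vaart , and the exact sets and constants used in the present paper may differ :
\[
P_{\theta_0,\eta_0}^{\,n}\,\phi_n \;\longrightarrow\; 0 ,
\qquad
\sup_{\eta \in D^{c}(\theta_0 ,\, L\rho_n)}
P_{\theta_n(h_n),\eta}^{\,n}\,(1-\phi_n) \;\le\; e^{-K n \rho_n^{2}} ,
\]
for some constant K > 0 and all bounded , stochastic sequences h_n .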
to find sufficient conditions , we follow a construction of tests based on the hellinger geometry of the model , generalizing the approach of birg and le cam to -perturbed context .it is easiest to illustrate their approach by considering the problem of testing / estimating when is known : we cover the nuisance model by a minimal collection of hellinger balls of radii , each of which is convex and hence testable against with power bounded by , based on the minimax theorem .the tests for the covering hellinger balls are combined into a single test for the nonconvex alternative against .the order of the cover controls the power of the combined test .therefore the construction requires an upper bound to hellinger metric entropy numbers which is interpreted as indicative of the nuisance model s complexity in the sense that the lower bound to the collection of rates solving ( [ eqminimaxrate ] ) is the hellinger minimax rate for estimation of . in the -perturbed problem, the alternative does not just consist of the complement of a hellinger - ball in the nuisance factor , but also has an extent in the -direction shrinking at rate .condition ( [ eqhcone ] ) below guarantees that hellinger covers of like the above are large enough to accommodate the -extent of the alternative , the implication being that the test sequence one constructs for the nuisance in case is known , can also be used when is known only up to -perturbation .therefore , the entropy bound in lemma [ lemtestpert ] is ( [ eqminimaxrate ] ) .geometrically , ( [ eqhcone ] ) requires that -perturbed versions of the nuisance model are contained in a narrowing sequence of metric cones based at . in differentiable models ,the hellinger distance is typically of order for all .so if , in addition , , limit ( [ eqhcone ] ) is expected to hold pointwise in . then only the uniform character of ( [ eqhcone ] ) truly forms a condition .[ lemtestpert ] if satisfies ,] and the following requirements are met : for all large enough , .for all and all bounded , stochastic , then for all , there exists a test sequence such that for all bounded , stochastic , for large enough .let be such that ( i ) and ( ii ) are satisfied .let and be given . for all , define and .cover with hellinger balls , where and , that is , there exists an such that .denote . by assumption ,the minimal number of such balls needed to cover is finite ; we denote the corresponding covering number by , that is , .let be given .there exists an ( ) such that .then , by the triangle inequality , the definition of and assumption ( [ eqhcone ] ) , for large enough . we conclude that there exists an such that for all , , , , .moreover , hellinger balls are convex and for all , . as a consequence of the minimax theorem( see le cam , birg ) , there exists a test sequence such that where the supremum runs over all . defining , for all , ,we find ( for details , see the proof of theorem 3.10 in ) that for all and .since , we have for all , \\[-8pt ] & \leq & n(\rho_n,{{\mathscr p}},h ) \leq e^{n\rho_n^2}\nonumber\end{aligned}\ ] ] by assumption ( [ eqminimaxrate ] ) . upon substitution of ( [ eqestn ] ) into ( [ eqtestintermediate ] ) , we obtain the following bounds : for large enough , which implies assertion ( [ eqdtests ] ) . in preparation of corollary [ corsimplesbvm ] , we also provide a version of theorem [ thmpertroc ] that only asserts consistency under -perturbation at _ some _ rate while relaxing bounds for prior mass and entropy . 
in the statement of the corollary , we make use of the family of kullback leibler neighborhoods that would play a role for the posterior of the nuisance if were known . for all .the proof below follows steps similar to those in the proof of corollary 2.1 in .[ corconspert ] assume that for all , , and : for all there is an such that for all and large enough , .for every bounded random sequence , and are of order .then there exists a sequence , , , such that the conditional nuisance posterior converges under -perturbation at rate .we follow the proof of corollary 2.1 in kleijn and van der vaart and add that , under condition ( ii ) , ( [ eqhcone ] ) and condition ( iii ) of theorem [ thmpertroc ] are satisfied .we conclude that there exists a test sequence satisfying ( [ eqnuistest ] ) .then the assertion of theorem [ thmpertroc ] holds .the following lemma generalizes lemma 8.1 in ghosal et al . to the -perturbed setting .[ lemrocdenom ] let be stochastic and bounded by some .then \\[-10pt ] & & \qquad\leq\frac{1}{c^2n\rho^2}\nonumber\end{aligned}\ ] ] for all , and .see the proof of lemma 8.1 in ghosal et al . ( dominating the -dependent log - likelihood ratio immediately after the first application of jensen s inequality ) .the smoothness condition in the le cam s parametric bernstein von mises theorem is a lan expansion of the likelihood , which is replaced in semiparametric context by a stochastic lan expansion of the integrated likelihood ( [ eqdefsn ] ) . in this section ,we consider sufficient conditions under which the localized integrated likelihood has the _ integral lan _ property ; that is , allows an expansion of the form for every random sequence of order , as required in theorem [ thmpan ] .theorem [ thmilanone ] assumes that the model is stochastically lan and requires consistency under -perturbation for the nuisance posterior .consistency not only allows us to restrict sufficient conditions to neighborhoods of in , but also enables lifting of the lan expansion of the integrand in ( [ eqdefsn ] ) to an expansion of the integral itself ; cf . ( [ eqilan ] ) .the posterior concentrates on the least - favorable submodel so that only the least - favorable expansion at contributes to ( [ eqilan ] ) asymptotically . for this reason ,the intergral lan expansion is determined by the efficient score function ( and not some other influence function ) .ultimately , occurrence of the efficient score lends the marginal posterior ( and statistics based upon it ) properties of frequentist semiparametric optimality . to derive theorem [ thmilanone ] ,we reparametrize the model ; cf .( [ eqrepara ] ) .while yielding adaptivity , this reparametrization also leads to -dependence in the prior for , a technical issue that we tackle before addressing the main point of this section .we show that the prior mass of the relevant neighborhoods displays the appropriate type of stability , under a condition on local behavior of hellinger in the least - favorable model . for smooth least - favorable submodels , typically for all bounded , stochastic , which suffices .[ lemtranslate ] let be a bounded , stochastic sequence of perturbations , and let be any prior on .let be such that .then the prior mass of radius- neighborhoods of is stable , that is , let and be such that .denote by and by for all . since we consider the sequence of symmetric differences .fix some .then for all and all large enough , , so that . furthermore , for large enough and any , , so that . 
therefore , which implies ( [ eqstab ] ) .once stability of the nuisance prior is established , theorem [ thmilanone ] hinges on stochastic local asymptotic normality of the submodels , for all in an -neighborhood of .we assume there exists a such that for every random bounded in -probability , where and . equation ( [ eqqlan ] ) specifies the ( minimal ) tangent set ( van der vaart , section 25.4 ) with respect to which differentiability of the model is required .note that .[ thmilanone ] suppose that is stochastically lan for all in an -neighborhood of .furthermore , assume that posterior consistency under -perturbation obtains with a rate also valid in ( [ eqnormdom ] ) .then the integral lan - expansion ( [ eqilan ] ) holds . throughout this proof , for all andall .furthermore , we abbreviate to and omit explicit notation for -dependence in several places .let be given , and let with bounded in -probability .then there exists a constant such that for all . with bounded, the assumption of consistency under -perturbation says that for large enough .this implies that the posterior s numerator and denominator are related through \\[-8pt ] & & \qquad\hspace*{0pt}\leq e^{\varepsilon}1_{\{\|h_n\|\leq m\}}\int_{d({\theta}_n,\rho_n ) } \prod_{i=1}^n\frac{p_{{\theta}_n,\eta}}{p_{{\theta}_0,\eta _ 0}}(x_i ) \,d\pi _ h(\eta ) \biggr ) > 1-\delta .\nonumber\end{aligned}\ ] ] we continue with the integral over under the restriction and parametrize the model locally in terms of [ see ( [ eqrepara ] ) ] where denotes the prior for given , that is , translated over . next we note that by fubini s theorem and the domination condition ( [ eqnormdom ] ) , there exists a constant such that for large enough .since the least - favorable submodel is stochastically lan , lemma [ lemtranslate ] asserts that the difference on the r.h.s . ofthe above display is , so that \\[-8pt ] & & \qquad= \int_{b(\rho_n ) } \prod_{i=1}^n\frac{q_{{\theta}_n,\zeta}}{q_{{\theta}_0,0}}(x_i ) \,d\pi(\zeta ) + o_{p_0}(1),\nonumber\end{aligned}\ ] ] where we use the notation for brevity .we define for all , , the events . with ( [ eqnormdom ] ) as a domination condition , fatou s lemma and the fact that lead to \\[-8pt ] & & \qquad\leq\int\limsup_{n\rightarrow\infty } 1_{b(\rho_n)\setminus\{0\ } } ( \zeta ) q^n_{{\theta}_n,\zeta}(f_n^c(\zeta,{\varepsilon } ) ) \,d\pi(\zeta ) = 0\nonumber\end{aligned}\ ] ] [ again using ( [ eqnormdom ] ) in the last step ] .combined with fubini s theorem , this suffices to conclude that and we continue with the first term on the right - hand side . by stochastic local asymptotic normality for every ,expansion ( [ eqqlan ] ) of the log - likelihood implies that where the rest term is of order .accordingly , we define , for every , the events .contiguity then implies that as well .reasoning as in ( [ eqintrobn ] ) we see that \\[-8pt ] & & \qquad= \int_{b(\rho_n)}\prod_{i=1}^n\frac{q_{{\theta}_n,\zeta } } { q_{{\theta}_0,0}}(x_i ) 1_{a_n(\zeta,{\varepsilon})\cap f_n(\zeta,{\varepsilon } ) } \,d\pi(\zeta ) + o_{p_0}(1 ) .\nonumber\end{aligned}\ ] ] for fixed and and for all , so that the first term on the right - hand side of ( [ eqintroan ] ) satisfies the bounds the integral factored into lower and upper bounds can be relieved of the indicator for by reversing the argument that led to ( [ eqintrobn ] ) and ( [ eqintroan ] ) ( with replacing ) , at the expense of an -factor . 
substituting in ( [ eqexpcorrection ] ) and using , consecutively , ( [ eqintroan ] ) , ( [ eqintrobn ] ) , ( [ eqshiftprior ] ) and ( [ eqintd ] ) for the bounded integral , we find since this holds with arbitrarily small for large enough , it proves ( [ eqilan ] ) . with regard to the nuisance rate , we first note that our proof of theorem [ thmsbvmone ] fails if the slowest rate required to satisfy ( [ eqnormdom ] ) vanishes _ faster _ then the optimal rate for convergence under -perturbation [ as determined in ( [ eqminimaxrate ] ) and ( [ eqsuffprior ] ) ]. however , the rate does not appear in assertion ( [ eqilan ] ) , so if said contradiction between conditions ( [ eqnormdom ] ) and ( [ eqminimaxrate])/([eqsuffprior ] ) do not occur , the sequence can remain entirely internal to the proof of theorem [ thmilanone ] . more particularly , if condition ( [ eqnormdom ] ) holds for _ any _ such that , integral lan only requires consistency under -perturbation at _ some _ such . in that case , we may appeal to corollary [ corconspert ] instead of theorem [ thmpertroc ] , thus relaxing conditions on model entropy and nuisance prior .the following lemma shows that a first - order taylor expansion of likelihood ratios combined with a boundedness condition on certain fisher information coefficients is enough to enable use of corollary [ corconspert ] instead of theorem [ thmpertroc ] .[ lemudom ] let be one - dimensional .assume that there exists a such that for every and all in the samplespace , the map is continuously differentiable on ] and a smoothness condition on the conditional expectation ] ( see for definition ) , and define the nuisance prior through the gaussian process where \} ] , form a -independent , -i.i.d .sample and denotes or for all .the prior process is zero - mean gaussian of ( hlder-)smoothness and the resulting posterior mean for concentrates asymptotically on the smoothing spline that solves the penalized ml problem .mcmc simulations based on gaussian priors have been carried out by shively , kohn and wood . here, we reiterate the question of how frequentist sufficient conditions are expressed in a bayesian analysis based on corollary [ corsimplesbvm ] .we show that with a nuisance of known ( hlder-)smoothness greater than , the process ( [ eqkibm ] ) provides a prior such that the marginal posterior for satisfies the bernstein von mises limit . to facilitate the analysis, we think of the regression function and the process ( [ eqkibm ] ) as elements of the banach space ,\mbox{}) ] of finite metric entropy with respect to the uniform norm and that forms a -donsker class .regarding the distribution of , suppose that , and , as well as )^2>0 ] and \in h ] with a prior such that .then the marginal posterior for satisfies the bernstein von mises limit , where ) ] .for any and , , so that for fixed , minimal kl - divergence over obtains at ] , under . since ,the last term on the right is if is bounded in probability .we conclude that is stochastically lan .in addition , ( [ eqparallellik ] ) shows that is continuous for every . by assumption ,)^2 ] , so that there exists a constant such that . 
for any and all , the map is continuously differentiable on all of , with score )+({\theta}-{\theta}_0)(u-{\mathrm e}[u|v])^2 ] does not depend on and is bounded over ] .the maximum - likelihood estimate for is therefore of the form , where .note that and that is assumed to be -donsker , so that is asymptotically tight .since , in addition , almost surely and the limit is strictly positive by assumption , .hence , & & \qquad\leq p_0^n\biggl ( \sup_{{\theta}\in{{\theta}}_n^c } \biggl ( { \frac{1}{4}}|{\theta}-{\theta}_0|\frac{m_n}{n^{1/2 } } - { \frac{1}{2 } } ( { \theta}-{\theta } _ 0)^2\biggr ) { { \mathbb p}}_nw^2 > -\frac{cm_n^2}{n } \biggr ) + o(1)\\[-2pt ] & & \qquad\leq p_0^n ( { { \mathbb p}}_nw^2<4c ) + o(1).\end{aligned}\ ] ] since , there exists a small enough such that the first term on the right - hand side is of order as well , which shows that condition ( [ eqlehmann ] ) is satisfied .lemma [ lemlehmann ] asserts that condition ( v ) of corollary [ corsimplesbvm ] is met as well .assertion [ eqplm ] now holds . in the following corollarywe choose a prior by picking a suitable in ( [ eqkibm ] ) and conditioning on .the resulting prior is shown to be well defined below and is denoted .[ corsmoothplr ] let and be given ; choose \dvtx\|\eta\|_\alpha < m\} ] .suppose the distribution of the covariates is as in theorem [ thmplm ] .then , for any integer , the conditioned prior is well defined and gives rise to a marginal posterior for satisfying ( [ eqplm ] ) .choose as indicated ; the gaussian distribution of over ] and denoted . since in ( [ eqkibm ] ) has smoothness , )=1 ] , which forms a separable banach space even with strengthened norm , without changing the rkhs .the trivial embedding of ] is one - to - one and continuous , enabling identification of the prior induced by on ] .given ] . since is of order , and a similar bound exists for the -norm of the difference , lies in the closure of the rkhs both with respect to and to .particularly , lies in the support of , in ] .moreover , it follows that for all .we conclude that times integrated brownian motion started at random , conditioned to be bounded by in -norm , gives rise to a prior that satisfies .as is well - known , the entropy numbers of with respect to the uniform norm satisfy , for every , , for some constant that depends only on and .the associated bound on the bracketing entropy gives rise to finite bracketing integrals , so that universally donsker .then , if the distribution of the covariates is as assumed in theorem [ thmplm ] , the bernstein von mises limit ( [ eqplm ] ) holds .the authors would like to thank d. freedman , a .gamst , c. klaassen , b. knapik and a. van der vaart for valuable discussions and suggestions .b. j. k. kleijn thanks u.c .statistics dept . and cambridge s isaac newton institute for their hospitality .
in a smooth semiparametric estimation problem, the marginal posterior for the parameter of interest is expected to be asymptotically normal and satisfy frequentist criteria of optimality if the model is endowed with a suitable prior. it is shown that, under certain straightforward and interpretable conditions, the assertion of le cam's acclaimed, but strictly parametric, bernstein von mises theorem [_univ. california publ. statist._ *1* (1953) 277–329] holds in the semiparametric situation as well. as a consequence, bayesian point-estimators achieve efficiency, for example, in the sense of hájek's convolution theorem [_z. wahrsch. verw. gebiete_ *14* (1970) 323–330]. the model is required to satisfy differentiability and metric entropy conditions, while the nuisance prior must assign nonzero mass to certain kullback leibler neighborhoods [ghosal, ghosh and van der vaart _ann. statist._ *28* (2000) 500–531]. in addition, the marginal posterior is required to converge at parametric rate, which appears to be the most stringent condition in examples. the results are applied to estimation of the linear coefficient in partial linear regression, with a gaussian prior on a smoothness class for the nuisance.
many sensors , such as sonar , rely on ballistic wave propagation that provides only direct line of sight information . in that case ,monitoring a cavity which has an irregular geometric shape , including hidden regions , may require installing multiple sensors throughout the cavity for a comprehensive coverage . however , most real world cavities have irregular shapes .this irregularity has the benefit of facilitating the creation of ray chaotic trajectories .the study of waves propagating inside these ray chaotic cavities , in the semi - classical limit , is called wave chaos .wave chaos is essentially the manifestation of the underlying ray chaos on the properties of the waves whose wavelength is much smaller than the typical dimensions of the cavity .for instance , random matrix theory has been shown to describe the spectral statistics of quantum systems with chaotic classical counterparts .ray chaos is characterized by sensitive dependence of ray trajectories to initial conditions .the effect of perturbations on waves propagating in such cavities was studied using the concept of the scattering fidelity .scattering fidelity is a normalized correlation between two cavity response signals as a function of time ; the response signals are typically collected before and after a perturbation to the cavity .the scattering fidelity decay resulting from global ( as opposed to local ) perturbations which change all the boundary conditions of wave chaotic cavities has been studied .global perturbations that change only one of the walls of the cavity have also been considered .the scattering fidelity decay associated with global perturbations is either an exponential or gaussian function of time . on the other hand, it has been shown that the scattering fidelity associated with a local perturbation has a slower algebraic decay .the fidelity decay induced by perturbing the cavity coupling has also been considered . on the other hand ,the effect of a local boundary perturbation on a quantum mechanical system is studied using the loschmidt echo concept . despite the extensive research on scattering fidelity ,a practical sensing application of the scattering fidelity concept has not been explored . in previous work ,wave chaotic sensing techniques that allow a comprehensive spatial coverage using a single sensor were introduced .these techniques rely on the wave chaotic nature of most real world cavities .when a wave is broadcast into a cavity to probe it , the response signal consists of reflections that bounce from almost all parts of the cavity ; this is due to the underlying spatial ergodicity of ray trajectories in ray chaotic cavities .therefore , the response signal essentially `` fingerprints '' the cavity , and it enables the detection of changes to the cavity . the wave chaotic sensing techniques developed in refs. were not used to quantify any kind of perturbation .local and global perturbations to the boundaries of the cavity , and perturbations to the medium of wave propagation within the cavity , were all shown to be detectable .however , the quantification of a perturbation was not accomplished . 
on the other hand , a remarkably sensitive quantification of a perturbation which involved translation of a sub - wavelength object over sub - wavelength distanceswas successfully demonstrated .however , the quantification was based on an empirical law that is specific to the system and perturbation at hand .this is because the effect of the perturbation on the dynamics of the waves propagating inside the wave chaotic cavity is not straightforward . in this paper , we focus on a single class of perturbation whose effect can be theoretically predicted , and we propose two time domain techniques to measure that particular kind of perturbation in any cavity . in this paper , we focus on quantifying volume changing perturbations ( vcp ) to a wave chaotic scattering system .such systems have all degeneracies broken , and we shall further assume that they are time - reversal invariant .in general , a vcp changes the volume of a cavity , but it may slightly change its shape as well . a special kind of vcp is a volume changing and shape preserving perturbation ( vcspp ) . in sec .[ sec:3-theory ] , the theoretical prediction of the effect of vcspps is discussed .[ sec:3-theory ] proposes two time domain techniques to quantify vcps . as in previous work ,these techniques are based on the scattering fidelity and time reversal mirrors , whose experimental implementations are distinctly different .[ sec:3-testing ] presents the experimental test of these two vcspp sensing techniques , along with a head - to - head numerical validation .the experimental test is carried out inside a mixed chaotic and regular billiard system using electromagnetic waves .[ sec:3-freqdomain ] provides a test of the sensing techniques in a numerical model of the star graph , which is a quasi-1d wave chaotic system .[ sec:3-freqdomain ] also shows the relative merits of approaching the problem in the frequency domain .[ sec:3-discussion ] discusses practical applications of the vcspp sensor , and sec .[ sec:3-conclusion ] provides a conclusion .consider a generic wave chaotic cavity with volume , which is considered as a baseline ( i.e. reference or unperturbed ) system ( see fig .[ fig : vcpfig1](a ) ) .the schematics in fig .[ fig : vcpfig1](a)&(b ) illustrate the cavity as a stadium billiard , but the cavity is considered generic throughout sec . [ sec:3-theory ] .suppose that the baseline cavity is perturbed such that each of its three length dimensions increase by a factor of .this amounts to a vcspp , by a factor of ; the perturbed cavity has a volume of ( see fig .[ fig : vcpfig1](b ) ) . 
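a compact way to state the scaling used below is the following ( here \alpha is our own notation for the per - dimension scale factor ; the paper's symbol may differ ) : if each of the three length dimensions of the cavity is scaled by \alpha , or , equivalently for the electrical volume , the wave speed is changed from v to v/\alpha at fixed geometry , then
\[
V \;\to\; \alpha^{3} V ,
\qquad
s_{\mathrm{pert}}(t) \;\approx\; s_{\mathrm{base}}(t/\alpha) ,
\qquad
S_{21}^{\mathrm{pert}}(f) \;\approx\; S_{21}^{\mathrm{base}}(\alpha f) ,
\]
assuming the antennas couple with negligible frequency dependence over the band of interest .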
fig .[ fig : vcpfig1](a)&(b ) show a brief pulse being broadcast into the cavity .the response signal to the pulse is called the sona .the typical durations of the pulse and the sona are and respectively .the sona from the baseline cavity ( which is referred to as baseline sona ) and the sona from the perturbed cavity ( which is referred to as perturbed sona ) are expected to be related , under certain conditions which are discussed in sec .[ sec:3-limitationtd ] .for instance , if , a signal feature in the perturbed sona is expected to be delayed by a factor of compared to its appearance in the baseline sona .the sensor is designed to enable the measurement of the value of , which effectively quantifies the vcspp , by using the theoretically predicted effects of the vcspp on the dynamics of the waves .another practically useful capability of the sensor is to check if the perturbation is indeed a vcspp , and not just merely a vcp .next , consider the scattering parameters of the cavities as a function of frequency .the as a function of frequency of the baseline and the perturbed -port cavity are schematically shown in figs .[ fig : vcpfig1](c)&(d ) .let us assume that the antennas coupling energy into the cavities have a negligible frequency dependence .then , we expect a precise mathematical relationship between the scattering parameters of the baseline and perturbed cavities as a function of frequency .in particular , if , the baseline spectrum can be obtained by stretching out the perturbed spectrum by a factor of along its frequency axis .this is precisely the prediction about the effect of vcspps on the dynamics of the waves .this prediction can be used to measure the perturbation in the frequency domain .as opposed to the resource intensive frequency domain sensing , it is practically preferable to use a time domain interrogation of the baseline and the perturbed cavity by measuring the sonas .however , it is useful to look at the problem in the frequency domain to understand the limitations of the time domain approach discussed in sec .[ sec:3-limitationtd ] . .( b ) sona is collected from a perturbed cavity of volume .( c ) pulse exciting the resonances of the baseline system .( d ) the same pulse exciting the perturbed resonances of the perturbed system .[ fig : vcpfig1],width=288 ] there are two classes of time domain sensing techniques that can be used to quantify vcspps .the first technique relies on the scattering fidelity .consider two sonas and , which are real voltage versus time signals with zero mean values .the scattering fidelity ( ) of and is simply their pearson s correlation as a function of time , ; [m ] } { \sqrt { \sum_{m = t}^{m = t+\delta t}x[m]^{2 } \sum_{m = t}^{m = t+\delta t}y[m]^{2 } } } \label{eqn : sf}\ ] ] where is typically chosen to be the time it takes the waves to traverse the cavity , at the very least , once ( i.e. in order of magnitude of the ballistic flight time ) .the of two sonas can take real values ranging from ( i.e. perfect correlation at time ) to ( i.e. perfect anti - correlation at time ). if is , then the sonas are not correlated at time .the of the baseline and the perturbed sonas is not expected to stay close to throughout time . however , the of the perturbed sona and the baseline sona whose time axis is scaled using the optimum stretching / squeezing factor is expected to stay close to throughout time . 
the optimum stretching / squeezing factor is expected to be equal to , which is also related to the magnitude of the perturbation .the second technique to quantify vcspps utilizes classical time reversal mirrors . to see the operation of a time reversal mirror ,consider a two port cavity .suppose that a pulse is broadcast into the baseline cavity through port 1 , and a baseline sona is recorded through port 2 .if the baseline sona is time reversed and broadcast back into the cavity through port 2 , a time - reversed version of the original pulse reconstructs at port 1 .the reconstructed pulse approximates the time reversed version of the original pulse broadcast into the cavity .however , if the time reversed baseline sona is broadcast into a perturbed cavity , then the reconstructed pulse will more poorly approximate the time reversed version of the original pulse .the time axis of the baseline sona needs to be scaled using the optimum factor before it is time reversed and broadcast into the perturbed cavity ; this is assuming that the perturbation is vcspp .the optimum stretching / squeezing factor is expected to result in a reconstructed pulse that best approximates the original pulse .once again , the optimum stretching / squeezing factor is expected to be .time - reversal mirrors have found a wide range of practical applications such as crack imaging in solids , and improved acoustic communication in air , among other things .time - reversal mirrors have been shown to benefit from the cavity s underlying ray chaos , which is prevalent in most real world cavities .recently , it was proposed that time reversal mirrors could also be applied to quantum systems .the robustness of time - reversal mirrors in a scattering medium undergoing perturbations has also been studied .the cavity that is used to test the sensing techniques is an approximately ( i.e. dimensions of x x ) aluminum box that has scatterers and interior surface irregularities which facilitate the creation of ray chaotic trajectories ( see fig .[ fig : vcpfig2 ] ) .the cavity is a mixed chaotic and regular billiard system because it has parallel walls which may support integrable modes in addition to the chaotic modes .overall , the cavity represents a real world case in which the sensor would operate .there are two ports that connect the cavity to a microwave source and an oscilloscope .each port consists of a monopole antenna of length , and diameter .the monopole antennas are mounted on two different walls of the cavity .an electromagnetic pulse with a center frequency of , and a gaussian envelope of standard deviation is typically broadcast into the cavity through port 1 .the resulting sona signal is collected at port 2 by the oscilloscope , and it is digitally filtered to minimize noise . experimentally inducing a vcspp can be more challenging than inducing a vcp , which may slightly change the shape of the enclosure .vcspps can be realized by changing the speed of wave propagation within the cavity ( i.e. changing the electrical volume ) .the speed of light ( for the electromagnetic experiment at ) within the cavity can be changed by filling up the cavity with different gases which have similar dissipation and dispersion properties .for example , the relative dielectric constant ( ) of air ( at relative humidity ) , nitrogen gas , and helium gas are , , , respectively , at a temperature of and a pressure of . 
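since the electrical path length scales with the refractive index \sqrt{\epsilon_r}, the stretching factor implied by exchanging one gas for another can be estimated directly from the relative dielectric constants. the following sketch is illustrative only; the permittivity values are placeholders, not the values quoted in the paper.

```python
import math

def stretch_factor(eps_r_baseline, eps_r_perturbed):
    """Factor by which the perturbed-cavity sona must be stretched along its
    time axis to match the baseline sona, when the only change is the wave
    speed (an electrical-volume VCSPP): alpha = sqrt(eps_baseline / eps_perturbed)."""
    return math.sqrt(eps_r_baseline / eps_r_perturbed)

# purely illustrative permittivities (hypothetical values, close to 1)
eps_baseline_gas = 1.00057    # e.g. humid laboratory air (placeholder)
eps_perturbed_gas = 1.00055   # e.g. a drier replacement gas (placeholder)
print(stretch_factor(eps_baseline_gas, eps_perturbed_gas))  # slightly above 1
```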
as discussed later in sec .[ sec:3-fdtd ] , it was shown that the slightly different dissipation values of these gases does not change the sona signal in any perceivable way .however , the slightly different speed of light values of these gases was seen to significantly change the sonas collected ., width=288 ] the experimental procedure for filling up the cavity with different gases is as follows ( see fig . [fig : vcpfig2 ] ) .there is a gas inlet on the top wall of the cavity .the gas inlet was connected to a gas tank via a plastic tube and a long copper tube coil .the long copper tube coil allowed the gas to reach room temperature before it gets into the cavity .there was a pressure regulator in between the gas tank and the copper tube to control the rate of flow of the gas .there were three gas outlets near the top wall of the cavity , and three gas outlets near the bottom wall of the cavity .the diameter of the gas inlet and outlets was about a fifth of the wavelength , so that no significant microwave leakage occurred .depending on the density of the gas which was being pumped into the cavity , half of the outlets ( near the top or bottom wall ) were closed off with tape .this procedure helped to displace the existing gas and retain the gas being pumped into the cavity .as detailed in sec .[ sec:3-results ] , sona signals were collected from the cavity both during and after the gas transfer process .the sona signals that were collected during the gas transfer indicate when the cavity is almost fully filled with the new gas .the sona signals that were collected after the gas transfer were used to measure the vcspp that was induced . the experiment described in sec .[ sec:3-experiment ] was modeled using a finite difference time domain ( fdtd ) code .the fdtd code solves maxwell s equations inside a 3d numerical model of the experimental cavity introduced in sec .[ sec:3-experiment ] .the code for this simulation was optimized for parallel computers and for the simulation of reverberation chambers .the fdtd simulation of the cavity enabled a direct comparison of experimental and simulation results .the fdtd simulation formed a 3d model of the experimental cavity by using spatial cubic cells with an edge length of , and the cavity consisted of cells .the smallest time step taken to propagate solutions of maxwell s equations through the cells was .therefore , the courant s number for computational stability was for all the media ( ) considered , where is the speed of light in vacuum .the model of the cavity also had two ports , with antennas that are similar to the monopoles used in the experiment .the electromagnetic pulse broadcast into the model was also similar to the one broadcast experimentally ( i.e. center frequency of , and width of ) .the maximum and minimum of the ratio of the wavelength to the cubic cell dimension were and respectively . in the experiment ,the main source of dissipation is ohmic loss from the aluminum walls of the cavity . in the simulation ,the walls were assumed to be lossless for simplicity . instead , an equivalent loss was introduced within the medium of wave propagation to achieve the same quality factor as the experimental cavity . to accomplish this ,a uniform conductivity of was introduced throughout the interior of the cavity model . 
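the courant stability constraint mentioned above can be checked with a few lines; this is a generic sketch using one common convention for the 3-d condition on a uniform cubic-cell grid, and the cell size and time step below are arbitrary placeholders rather than the values used in the simulation.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_number_3d(dx, dt, c=C0):
    """Courant number S = c*dt*sqrt(3)/dx for a uniform cubic-cell 3-D FDTD grid.
    Stability requires S <= 1 (equivalently c*dt <= dx/sqrt(3))."""
    return c * dt * (3 ** 0.5) / dx

# placeholder grid parameters (illustrative only)
dx = 5e-3    # 5 mm cubic cells
dt = 9e-12   # 9 ps time step
S = courant_number_3d(dx, dt)
print(f"Courant number = {S:.3f}, stable = {S <= 1.0}")
```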
as mentioned in sec .[ sec:3-experiment ] , the experiment relies on changing the electrical volume ( at ) within the cavity by changing the gases filling the cavity .the gases used were air , nitrogen gas , and helium gas . for typical laboratory atmospheric conditions , the electromagnetic loss in air at can be mainly attributed to oxygen molecules .the specific attenuation of oxygen is whereas the specific attenuation of water vapor is just at .the conductivity of air is estimated to be .therefore , the dissipation inside the cavity filled by helium gas or nitrogen gas was roughly modeled by introducing an equivalent loss of uniformly throughout the cavity ( once again , the walls were assumed to be lossless in the model ) .it was shown that sonas that are collected from the cavity model with conductivity , and sonas that are collected from the cavity model with are identical ( i.e. they have a fidelity of ) .this simulation result proved that the difference in loss among air ( at relative humidity ) , nitrogen gas , and helium gas is not significant .as will be discussed later in sec .[ sec:3-results ] , the gases that experimentally filled the cavity are not pure .the effective of the gases that filled the cavity are different from the values for pure gases .the simulation of the cavity that is filled with helium , nitrogen or air is done by using the effective values .this allows the simulation to better model the experimental reality .the values used to simulate a cavity filled with air , nitrogen and helium are , , and respectively .these values were chosen to match the experimental results that will be discussed in sec .[ sec:3-results ] .the values were determined by anchoring the effective of nitrogen to the value for pure nitrogen gas , and finding the effective values for the other gases to match the experiment . here , the sensitivity of the sensor is described by deriving an expression for the minimum volume changing perturbation that can be measured .then , it will be shown that the vcspp which is induced when either of the three gases in the experiment ( i.e. air , nitrogen and helium gas ) is displaced by another one , can be measured using our experimental set up .consider using the sensing technique based on scattering fidelity , which was introduced in section [ sec:3-theory ] .the technique relies on comparing the baseline and perturbed sonas generated by a short pulse that excites many modes of the system as a function of time . in sec .[ sec:3-sensitivity ] , it is assumed that the perturbation increases the volume by a factor of , where . at time , the sonas are expected to be similar for small enough ( see eq .[ eqn : sf ] ) , hence . at any other time , there may be a perceptible difference between the sonas .any particular signal feature in the baseline sona at time is expected to be seen in the perturbed sona at time , where ; here , is defined as the time gap that develops between two identical features in the baseline and perturbed sona at time , where is measured within the baseline sona .this is because the baseline sona stretched out along its time axis by a factor of should approximate the perturbed sona , as discussed in sec .[ sec:3-theory ] .the minimum volume changing perturbation that can be quantified is the minimum of . 
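written out explicitly (with the per-dimension scale factor denoted \alpha, a symbol of our choosing rather than the paper's), the relation between matching features described above is
\[
t' \;=\; \alpha\, t ,
\qquad
\delta t(t) \;=\; t' - t \;=\; (\alpha - 1)\, t ,
\]
so the time gap between two identical features grows linearly with the time at which they occur in the baseline sona, and vanishes at t = 0.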
on the other hand , when the of the baseline and perturbed sona is computed at any time ( see eq .[ eqn : sf ] ) , there are two necessary conditions that should be satisfied in order to be able to measure the perturbation .first , the signal - to - noise - ratio ( ) of the baseline sona at time should be well above .this is because the of the baseline and perturbed sonas ( when the is close to 1 ) would simply be the correlation of two noisy signals .second , should be , conservatively , greater than half of the period of the oscillations in the sona signals .otherwise , if is much smaller than half a period of the sona oscillations , the will not be convincingly lower than its maximum value at ( which is for the appropriate value in eq .[ eqn : sf ] ) , hindering reliable measurement of the perturbation .these two conditions guarantee that the of the baseline and perturbed sona can be used to measure the perturbation .the minimum of subject to these two conditions is , where is the period of sona oscillations and is the time at which the of the sona approaches . therefore , the minimum perturbation that can be quantified by the sensor ( i.e. ) depends on the wavelength of the waves used to probe the cavity , the dissipation in the cavity and the of the system .the in turn depends on the noise in the system , and the dynamic range of the wave generation and detection equipment .the equipment used in this experiment are shown in fig .[ fig : vcpfig2 ] . for the electromagnetic experimentalset up that is probing the cavity discussed in this work , using wavelength waves , a vcspp that is as small as parts in can be measured .note that it is assumed that the probing pulse excites several resonances of the baseline system as it is discussed further in sec .[ sec:3-limitationtd ] .getting back to the experimental set up shown in fig .[ fig : vcpfig2 ] , the vcspp was induced by changing the gas filling the cavity from air to nitrogen gas or to helium gas .this results in change in the of the gases , which is equivalent to a vcspp of at least parts in .therefore , the experimental system is expected to detect the change in electrical volume induced when one of these gases displaces the other inside the cavity ( assuming that the gases are pure ) . the air in the cavity at and relative humidity , was systematically displaced with nitrogen gas at room temperature .the nitrogen gas was pumped into the cavity at gauge pressure as the air flowed out through the gas outlets of the cavity .every two minutes , the flow was stopped , and nominally identical sonas ( which are actually almost identical ) were measured from the cavity , and these were averaged together .the averaging is done after aligning the sonas to eliminate the adverse effects of trigger jitter in the data acquisition system . in this manner , fiveaveraged sonas were collected from the cavity as the cavity was filled with more and more pure nitrogen gas .each of these five sonas were compared with a sona that was collected from the original cavity filled with air .the comparison was done by computing the scattering fidelity ( see eq .[ eqn : sf ] ) of the sona from airy cavity and a sona from a partially air - filled cavity .[ fig : vcpfig3 ] shows these scattering fidelities .the concentration of nitrogen increases with the number of minutes of nitrogen inflow into the cavity . therefore , fig .[ fig : vcpfig3 ] shows scattering fidelities of vcspps which get progressively stronger . 
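the align-then-average step mentioned above, used to remove trigger jitter before averaging nominally identical sonas, can be sketched as follows (an illustration, not the authors' acquisition code): each repeated sona is shifted by the integer-sample lag that maximizes its cross-correlation with the first one, and the aligned ensemble is then averaged.

```python
import numpy as np

def align_and_average(sonas):
    """Align nominally identical sonas to the first one via cross-correlation
    (integer-sample lags, wrap-around shifts) and return their average."""
    ref = np.asarray(sonas[0], dtype=float)
    aligned = [ref]
    for s in sonas[1:]:
        s = np.asarray(s, dtype=float)
        # lag > 0 means s is delayed relative to ref
        lag = np.argmax(np.correlate(s, ref, mode="full")) - (len(ref) - 1)
        aligned.append(np.roll(s, -lag))
    return np.mean(aligned, axis=0)
```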
, , , , and minutes of nitrogen gas inflow .the perturbation gets stronger as the concentration of nitrogen gas increases in the perturbed cavity .[ fig : vcpfig3],width=288 ] the scattering fidelity of a vcspp shows oscillation whose period is inversely related to the strength of the perturbation ( see fig .[ fig : vcpfig3 ] ) .the oscillation in the scattering fidelity can be explained by specifically examining the scattering fidelity of sona from the air filled cavity , and sona from the nitrogen gas filled cavity ( see fig .[ fig : vcpfig4](a ) ) . here ,both of the sonas were obtained by averaging over nominally identical sona samples . for times relatively close to , the difference in the speed of light between air and nitrogendoes not show up when signals that traveled through these two gases are compared ( see fig .[ fig : vcpfig4](b ) ) .however , at , a phase shift of half a period ( i.e. for the probing pulse centered at ) develops between the two signals ( see fig .[ fig : vcpfig4](c ) ) .thus , the scattering fidelity between the two sonas becomes .the relative phase shift between the two sonas increases to one full period at about , and hence the scattering fidelity recovers to almost ( see fig . [fig : vcpfig4](d ) ) .however , as can be seen in fig .[ fig : vcpfig4](d ) , in addition to the phase shift that develops between the individual oscillations of the two sonas , a relative phase shift starts to develop between the envelopes of the sona signals .thus , the scattering fidelity of vcspps is not expected to oscillate between and indefinitely .rather , it is expected to decay while oscillating .however , we do not anticipate seeing this fidelity decay with our measurement system for this particular perturbation because the of the sonas approaches unity after about , and the fidelity does not decay appreciably within this time for this particular perturbation .sona samples .( b ) the sonas near have fidelity of .( c ) the sonas are out of phase by half a period around .( d ) the sonas are out of phase by a period around .besides , the phase shift between the envelopes of the sonas becomes significant .[ fig : vcpfig4],width=288 ] the speed of light at in pure nitrogen gas was faster than it was in the laboratory air ( which had about relative humidity ) .therefore , as discussed in sec .[ sec:3-theory ] , the vcspp can be quantified by finding the optimum stretching factor to be applied on the sona that is collected from the cavity filled with nitrogen .the goal is to recover the scattering fidelity of the stretched `` nitrogen - sona '' and the `` air - sona '' to .this was achieved by using a stretching factor given by ; because , ( half a period of oscillation of the sona ) at ( ) based on the discussion in sec .[ sec:3-sensitivity ] .the resulting scattering fidelity is plotted in red ( see fig . [fig : vcpfig5 ] ) with the scattering fidelity of the unmodified sonas which is shown in blue .the optimum stretching factor is . whereas , is the expected value of . 
in other words , ideally a ( parts per million )change across each electrical dimensions of the cavity is expected , however , an change is measured .the discrepancy can be explained by the fact that the nitrogen gas that filled the cavity has an effective that is probably much closer to the effective of air .[ fig : vcpfig3 ] shows that it can take several minutes to displace the air in the cavity with pure nitrogen .therefore , we do not expect the of the nitrogen gas that filled the cavity to be the same as the literature value for pure nitrogen gas .the same conclusion applies to the other gases filling the cavity .there are several reasons for the expected discrepancy between the literature value of and the effective of the gases filling the cavity experimentally .the helium and nitrogen gases are of industrial quality , which is not perfectly pure . besides, the cavity is not necessarily air tight ; this could lead to leakage of air into the cavity when the cavity is not pressurized . finally , the of the gases is a function of temperature and relative humidity ( for the case of air ) .therefore , deviations of laboratory temperature and relative humidity from and could impact the effective value of the gases .the fact that the shape of the cavity was preserved ( during the displacement of the air by nitrogen gas ) can also be seen from fig .[ fig : vcpfig5 ] .if the shape of the cavity were not preserved , it would not be possible to recover the scattering fidelity of the two sonas through simple numerical stretching of one of the sonas , along the time axis .hence , displacing air with nitrogen gas is not just a vcp , but is also a vcspp .later , a vcp induced by displacement of air with helium , which is not vcspp , will be discussed . throughout the times when the of the sonas is robust .the snr decreases roughly by a factor of a thousand between and .the optimum stretching factor quantified the vcspp .the fact that the scattering fidelity was recovered proved that the perturbation was vcspp .[ fig : vcpfig5],width=288 ] the fidelity decay of vcspps , which is expected to be superimposed on the fidelity oscillations , can be seen for a stronger vcspp .the vcspp should be strong enough to bring about a significant phase shift between the envelopes of the sonas before their deteriorates . for our experimental set up, such a strong vcspp can be achieved by displacing the air in the cavity by helium gas .the scattering fidelity of sona from a cavity that is filled with air and sona from a cavity that is filled with helium is plotted in fig .[ fig : vcpfig6 ] in blue . based on the definition in sec .[ sec:3-sensitivity ] , ( which is half the period of sona oscillation ) at time .thus , the optimum stretching factor was chosen to be .this optimum value maximizes the average value of the resulting scattering fidelity . since the speed of light at is higher in helium than in air , it was the `` helium sona '' that was stretched out along its time axis .the scattering fidelity of the stretched `` helium sona '' and the `` air sona '' is also shown in fig . [fig : vcpfig6 ] in red .once again , the stretching factor approximates . in other words , a change across each electrical dimensions of the cavityis expected , and a change is measured .this shows that the change in electrical volume which was induced by replacing the air in the cavity with helium was quantified successfully , considering the fact that the effective of the helium and air gases are probably closer than expected . 
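the numerical search for the optimum stretching factor, i.e. the value that maximizes the time-averaged scattering fidelity as described above, can be sketched as follows. this is an illustration under the assumption of uniformly sampled sonas; non-overlapping windows are used for the average, which is a simplification.

```python
import numpy as np

def resample_stretch(x, alpha):
    """Evaluate the sona x on a time axis stretched by alpha, i.e. y[n] = x(n/alpha),
    using linear interpolation (alpha > 1 stretches, alpha < 1 squeezes)."""
    n = np.arange(len(x))
    return np.interp(n / alpha, n, x)

def mean_fidelity(x, y, delta_t):
    """Average of the windowed Pearson correlations over non-overlapping windows."""
    m = (min(len(x), len(y)) // delta_t) * delta_t
    xw = np.asarray(x[:m], dtype=float).reshape(-1, delta_t)
    yw = np.asarray(y[:m], dtype=float).reshape(-1, delta_t)
    num = np.sum(xw * yw, axis=1)
    den = np.sqrt(np.sum(xw**2, axis=1) * np.sum(yw**2, axis=1))
    return float(np.mean(num / den))

def best_stretch_factor(sona_to_scale, reference_sona, delta_t, candidates):
    """Scan candidate factors and return the one whose stretched/squeezed sona
    best correlates (on average) with the reference sona."""
    scores = [mean_fidelity(resample_stretch(sona_to_scale, a), reference_sona, delta_t)
              for a in candidates]
    return candidates[int(np.argmax(scores))]
```

which sona is passed as sona_to_scale depends on whether the perturbation enlarged or shrank the electrical volume, as in the air/nitrogen and air/helium cases above.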
however , unlike the case in fig .[ fig : vcpfig5 ] , the effect of the vcp could not be undone perfectly .the scattering fidelity of the stretched `` helium sona '' and the `` air sona '' was not close to throughout time ; instead it shows a fidelity decay .the scattering fidelity of `` helium sona '' and `` air sona '' is expected to oscillate between and ( as the phase shift between the fast oscillations of the sonas increases in a similar fashion to the illustration in fig .[ fig : vcpfig4](b - d ) ) and decay to ( as the phase shift between the complicated envelopes of the sonas increases ) .this decay is seen experimentally in fig .[ fig : vcpfig6 ]. however , there must be another fidelity decay superimposed on the fidelity decay that can be attributed to a vcp ; because the fidelity decay could not be undone by numerically stretching out one of the sonas .a fdtd simulation of the experiment was performed to better understand the form of the scattering fidelity when comparing `` helium sona '' and `` air sona '' . as discussed in sec .[ sec:3-fdtd ] , differences in the dissipation of helium gas and air are so minute that they do not need to be considered . the scattering fidelity of `` helium sona '' and `` air sona '' , which were obtained from the fdtd model by broadcasting the pulse used experimentally , are shown in fig.[fig : vcpfig7 ] .the simulation results show that the effect of the vcspp can be undone by applying the optimum stretching factor ( i.e. ) to the `` helium sona '' .the scattering fidelity of the `` air sona '' and the stretched `` helium sona '' is shown in red in fig.[fig : vcpfig7 ] ; it is close to which shows that the effect of the perturbation can be undone by simple numerical stretching .the fidelity is close to for times where the numerical errors in the fdtd are negligible . ,width=288 ] , width=288 ] the difference in the results in fig .[ fig : vcpfig6 ] and fig .[ fig : vcpfig7 ] , shows that experimentally displacing air with helium induces another kind of perturbation other than a volume changing perturbation .the additional perturbation was not seen when nitrogen was pumped into the cavity at the same pressure setting ( i.e. gauge pressure ) as was used for pumping helium into the cavity .one possible explanation for this discrepancy is the fact that helium gas exerts a significant buoyancy force which slightly flexes the walls of the cavity ( about area , and thick aluminum sheets ) . from previous work on this cavity, it was shown that a flexing of one of the walls of the cavity can cause a significant shape changing perturbation .the result shown in fig . [ fig : vcpfig6 ] demonstrates that it is possible to verify if the shape of the cavity remained intact while its electrical volume changed .this verification can be simply done by checking if the scattering fidelity can be recovered to throughout time .the capability to detect changes to the shape of the cavity during a volume changing perturbation ( which could be induced by a spatially uniform heating or cooling of a homogenous cavity ) can have several applications as was pointed out in sec .[ sec:3-introduction ] .yet another possible explanation for the above mentioned discrepancy is the fact that nitrogen and helium gases have different density and atomic sizes .this can lead to differences in the spatial uniformity of the gasses filling the cavity . 
again, such differences can have the character of cavity shape changing perturbations ., width=288 ] so far , only one of the sensing techniques introduced in sec .[ sec:3-theory ] is demonstrated to quantify volume changing perturbations .in addition to the scattering fidelity technique , time reversal mirrors can be used to quantify volume changing perturbations . when a sona signal that is collected from the cavity filled with nitrogen is time reversed and broadcast into the cavity filled with helium , the reconstructed time reversed pulseis not expected to be ideal .this is because the time reversed sona would be traversing an effectively smaller cavity .note that in this particular case , the baseline cavity is electrically larger than the perturbed cavity .based on the discussions in sec .[ sec:3-theory ] , the baseline sona should be squeezed along its time axis using an optimum factor to recover the maximum amplitude time reversed pulse . when the optimally squeezed `` nitrogen sona '' is time reversed and broadcast into the cavity filled with helium , the reconstructed time reversed pulseis expected to better approximate the original pulse .the improvement in the quality of the time reversed reconstructed pulse can be measured in various ways . here , the simplest measure ( the peak - to - peak - amplitude ( ppa ) of the reconstructed pulse ) of the quality of the time reversed pulse is used .[ fig : vcpfig8 ] shows the ppa of the time reversed reconstructed pulse obtained when the `` nitrogen sona '' is numerically squeezed along its time axis by varying amounts .the optimum squeezing factor of approximates .this means that a change is expected , but a change across each electrical dimensions of the cavity was measured .once again , the small discrepancy can be explained by the fact that the gases in the cavity are not perfectly pure .this shows that the time reversal technique can also be used to quantify volume changing perturbations .time reversal mirrors can also be used to detect when a volume changing perturbation slightly changes the shape of the cavity . in this case , when the `` nitrogen sona '' is time reversed and broadcast into the cavity filled with nitrogen , the ppa of the reconstructed time reversed pulse was about . however, when an optimally squeezed `` nitrogen sona '' is broadcast into the cavity filled with helium gas , the optimal value of the ppa is only about .this indicates that the perturbation , which is induced when the nitrogen gas inside the cavity is displaced with helium gas , is not just a volume changing perturbation but also a shape changing perturbation . once again , this is due to the relatively strong buoyant force of helium gas which can flex the walls of the cavity . to summarize , either of the two time domain sensing techniques introduced in sec .[ sec:3-theory ] can be used to identify and quantify a vcspp .however , the sensing technique based on time reversal can have an advantage because it is computationally cheaper .now we can describe the time domain approach further , and point out its limitations .when a cavity is monitored for a vcspp , a pulse is periodically broadcast into it , and a sona is collected .it is generally preferred to use the same probing pulse ( i.e. 
the same center frequency , bandwidth , amplitude , and shape ) so that only the changes in the system show up when looking at the sonas .when a vcspp ( with ) occurs , the transmission spectrum of the system shrinks along the frequency axis by a factor of as shown in figs .[ fig : vcpfig1](c)&(d ) .the probing pulse frequency coverage is schematically shown in the frequency domain in figs .[ fig : vcpfig1](c)&(d ) . the baseline andthe perturbed sonas are a result of the probing pulse exciting resonances of the cavity .the resonances excited by the probing pulse in the baseline and perturbed cavity are not all the same .suppose that there is a significant overlap between the resonances excited by the probing pulse in the baseline and perturbed cavity . under this condition, we expect that the baseline sona can be numerically stretched out by a factor of along its time axis to approximate the perturbed sona .however , if there is no overlap between the resonances excited in the baseline and perturbed system , the time domain sensing techniques are not expected to work .thus , both of the time domain sensing techniques ( based on scattering fidelity and time reversal mirrors ) face a limitation regarding the maximum vcspp that can be measured . this limitation can be improved by increasing the bandwidth of the pulse that is used to probe the cavity .doing so would effectively make the time domain technique closer to the frequency domain interrogation of the system , which does not face a limit on the maximum vcspp that can be measured . in a similar spirit , using a pulse with a flat frequency response ( such as a chirp ) may also be helpful .note that the frequency domain interrogation of the system is assumed to be done on a large enough frequency window to measure the perturbation .the regime of vcspp strengths that can not be measured using the time domain techniques is estimated as follows .suppose that the probing pulse excites resonances with frequencies ranging from to , with the center frequency at , and bandwidth of .it is assumed that the probing pulse excites several resonances of the baseline system . in other words , , where is the mean spacing between the resonant frequencies of the baseline cavity .the vcspp changes the volume of the cavity by a factor of , where for simplicity .once again , the vcspp has the effect of scaling the of the cavity along the frequency axis by a factor of .if , then the vcspp is not expected to be measured by using this probing pulse in the time domain because there would be significant overlap between the resonances excited in the baseline and perturbed cavities . in the experiments discussed in sec .[ sec:3-experiment ] , the probing pulse excited resonances between and . the mean spacing between the resonant frequencies of the baseline cavityis given by ; where is the volume of the baseline cavity , is the center frequency of the pulse , and is the wavelength .therefore , a vcspp where is not expected to be measured in the time domain experiments discussed in sec . [ sec:3-experiment ]. clearly , the strongest vcspp that was achieved in the laboratory ( i.e. for nitrogen gas vs the air ) , and the strongest vcp ( i.e. 
for helium gas vs the air ) are both far below the maximum perturbation strengths that can be measured using the time domain techniques .the case of a strong vcspp perturbation , which can not be measured using the time domain techniques , is best considered using a simulation tool that can be easily interrogated in the frequency domain as well .[ sec:3-freqdomain ] discusses such strong perturbations , and shows how they can be quantified using a frequency domain approach .in sec . [ sec:3-theory ] , it was mentioned that vcspps can be quantified using information obtained in the time domain or frequency domain .the time domain approach is generally more practical in applications .however , the frequency domain approach does not have limitations on the maximum perturbation value that can be quantified .this frequency domain approach is used to measure vcspps on a quasi-1d system called the star graph .we use the star graph because it is a type of quantum graph that has generic properties of wave chaotic systems , but is relatively simple to understand .it is also computationally cheaper to simulate than the 3d wave chaotic system discussed in sec .[ sec:3-fdtd ] .besides , the star graph can be directly implemented in the frequency domain as discussed in sec .[ sec:3-stargraph ] .the star graph is numerically modeled as a set of interconnected transmission lines as shown , schematically , in fig .[ fig : vcpfig9 ] .this is a one port system , hence , the input signal is injected into the driving transmission line and the output signal is also retrieved from the same line .the driving transmission line has zero length .the driving transmission line is connected with a number of transmission lines which are all connected in parallel with each other .the transmission line properties ( for a line labeled by ) are length ( ) , characteristic admittance ( ) , frequency dependent complex propagation constant ( ) , and complex reflection coefficient ( ) ( for reflection from the terminations of the lines that are not connected to the driving line ) .the driving line has zero length , thus its only adjustable property is its characteristic admittance ( ) .transmission lines that are connected in parallel , and a driving transmission line of zero length .each of the lines ( labeled by ) can have unique length ( ) , characteristic admittance ( ) , frequency dependent propagation constant ( ) , and reflection coefficient ( ) .the driving line has a characteristic admittance of .[ fig : vcpfig9],width=288 ] this one port system is modeled by using the analytically derived expression for its scattering parameter as a function of frequency , . 
the scattering parameter can be expressed in terms of the characteristic admittance of the driving line ( ) and the input admittance ( ) of each of the other transmission lines while looking towards them , where is the number of transmission lines connected in parallel .the input admittance of each transmission line ( labeled by ) , , can be expressed in terms of the above mentioned properties of the line , once the scattering parameter of the system is computed over a broad frequency range , the response to any time domain input signal can be calculated .this is done by fourier transforming the input signal to the frequency domain , multiplying it by the scattering parameter ( ) and inverse fourier transforming the product back to the time domain .this establishes the star graph model as a time domain simulation of this quasi-1d system .however , in this section we are mainly interested in the frequency domain representation of the star graph using its scattering parameter .the frequency domain approach will be shown to be effective in quantifying strong perturbations that could not have been measured otherwise . as discussed in sec .[ sec:3-theory ] , the scattering parameter of a cavity can be used to quantify a vcspp . the star graph is a quasi-1d system . a perturbation that changes the length of all of the lines of the star graph by the same proportion ( i.e. with constant ) changes its effective volume while leaving its shape intact ( i.e. it is a vcspp ) .the effective volume of a star graph with lines is given by .therefore , for a vcspp perturbation that scales the lengths by a factor of , the effective volume of the star graph also changes by a factor of .this is different from the case of the 3d cavity discussed in sec . [ sec:3-testing ] ( i.e. the volume of the 3d cavity changes by ) .the baseline star graph was set up using the following parameter values .there were lines whose length is given by ( for ranging from to ) .each of these lines had a characteristic admittance , , of .the characteristic admittance of the driving line was chosen such that in order to eliminate prompt reflection of signals injected through the driving line .the propagation constant of the lines is a function of the frequency , , and was given by with , where is the speed of light in vacuum ; thus , the lines themselves were considered to be lossless ( i.e. does not have a real part ) .however , energy was dissipated during reflection from the terminations of the lines .the reflection coefficient was given by with , where .the amount of dissipation was designed to be independent of the size of the star graph .however , the dissipation introduced through sub - unitary values of can be interpreted as an equivalent loss that could be introduced through ( i.e. by introducing as the real part of ) .thus , if the dissipation were modeled using , would be interpreted as the time it takes the signals to decay by as they propagate along the lines . as a result , the typical decay time of the sona from the star graph was , a value typical of our 3d experiment .the perturbed star graph was set up using identical values of parameters as the baseline star graph , except for the length , .the lengths of the perturbed star graph were chosen to be , where is the perturbation strength .the driving line has zero length in both the baseline and perturbed star graphs .a gaussian pulse of width and center frequency was used to generate a baseline and perturbed sona from the baseline and perturbed star graphs . 
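a minimal numerical sketch of this model is given below . the transmission - line relations used ( input admittance of a terminated line , reflection at the junction , and the choice of driving - line admittance that kills the prompt reflection ) are one plausible reading of the elided formulas , and every numerical value ( line lengths , admittances , the constant sub - unitary termination reflection , the pulse ) is a placeholder rather than the value used in this section .

```python
import numpy as np

c0 = 3.0e8   # wave speed on the lines (m/s)

def star_graph_s11(freqs, lengths, yc=1.0, rho=0.8):
    """S11(f) of a star graph: len(lengths) lossless lines in parallel, each
    with characteristic admittance yc and a reflective termination rho.
    Assumed standard relations (a reading of the elided formulas):
      gamma  = 2j*pi*f/c0
      Yin_i  = yc*(1 - rho*exp(-2*gamma*L_i)) / (1 + rho*exp(-2*gamma*L_i))
      S11    = (Yd - sum(Yin)) / (Yd + sum(Yin)),  Yd = N*yc (no prompt reflection).
    The constant rho < 1 stands in for the frequency-dependent lossy termination."""
    gamma = 2j * np.pi * np.asarray(freqs) / c0
    yd = len(lengths) * yc
    yin = sum(yc * (1 - rho*np.exp(-2*gamma*L)) / (1 + rho*np.exp(-2*gamma*L))
              for L in lengths)
    return (yd - yin) / (yd + yin)

def sona_from_s11(pulse, dt, s11):
    """Time-domain response of the one-port, exactly as described above:
    FFT the pulse, multiply by S11 on the FFT frequencies, inverse FFT."""
    freqs = np.fft.rfftfreq(len(pulse), dt)
    return np.fft.irfft(np.fft.rfft(pulse) * s11(freqs), n=len(pulse))

# placeholder baseline graph and a volume-changing perturbation of strength 0.1%
rng = np.random.default_rng(0)
lengths = rng.uniform(5.0, 15.0, size=12)          # line lengths in metres
dt = 5e-11
t = np.arange(0, 1.0e-6, dt)
pulse = np.exp(-((t - 5e-8) / 1e-8) ** 2) * np.cos(2*np.pi*1e9*(t - 5e-8))
sona_base = sona_from_s11(pulse, dt, lambda f: star_graph_s11(f, lengths))
sona_pert = sona_from_s11(pulse, dt, lambda f: star_graph_s11(f, 1.001 * lengths))
```

the two sonas produced at the end are the kind of baseline / perturbed pair that the time domain techniques discussed next operate on .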
the time domain sensing technique which is based on scattering fidelitywas applied to quantify a perturbation of strength .[ fig : vcpfig10 ] shows the scattering fidelity of the baseline and perturbed sonas before ( blue ) and after ( red ) optimally stretching the baseline sona .this demonstrates that the scattering fidelity sensing technique can be used to quantify a vcspp in the quasi-1d chaotic system .the result gives the clearest evidence to the discussion of fidelity decay induced by vcspps in sec .[ sec:3-resultssf ] ( i.e. the results shown in figs .[ fig : vcpfig5 ] and [ fig : vcpfig6 ] ) .vcspps induce a scattering fidelity oscillation that is superimposed on a fidelity decay ( see fig . [fig : vcpfig10 ] ) . .[ fig : vcpfig10],width=288 ] the application of the time domain sensing technique which is based on time reversal is presented in fig .[ fig : vcpfig11 ] . fig .[ fig : vcpfig11 ] shows the ppa of the time reversed pulse reconstructed using the baseline sona scaled with different factors along its time axis .the optimal ppa was obtained when the sona was scaled by a factor exactly equal to the stretching of the transmission line lengths ( i.e. ) ., width=288 ] the perturbation strength of was shown to be detectable using time domain techniques in the star graph set up described . however , as the perturbation got stronger , the shortcoming of the time domain techniques was revealed .for perturbation strength values , , ranging from to , the of the perturbed sona and the optimally scaled ( along the time axis ) baseline sona was examined . for each , the averaged over time , ( from to , which is the duration of the sonas ) .the closeness of the average to indicates the success of the time domain technique to undo the effect of the vcspp , and to measure it .[ fig : vcpfig12 ] shows the average value of ( for optimally scaled baseline sona ) versus ; the standard deviations of taken over are also shown as an error bar in fig .[ fig : vcpfig12 ] .as increased beyond about , the effectiveness of the time domain sensing technique starts to deteriorate .based on the discussion in sec .[ sec:3-limitationtd ] , vcspp perturbations that are characterized by are not expected to be quantified using the time domain techniques .the mean spacing between resonant frequencies of the baseline star graph is , where is the number of lines in the star graph , and .therefore , the probing pulse excites several resonances of the baseline cavity .the result illustrated in fig .[ fig : vcpfig12 ] shows that vcspp perturbations that are characterized by are quantifiable . of the baseline sona and perturbed sonas , which are optimally scaled by ( perturbation magnitude ) along their time axis .the sonas were collected from the baseline star graph , and a perturbed star graph ( vcspp with scaling factor ) .the average was taken from time to of the sonas ; error bars show the associated standard deviation .[ fig : vcpfig12],width=288 ] the limitation of the time domain approach was discussed in sec .[ sec:3-limitationtd ] , and demonstrated using the star graph in sec .[ sec:3-quantvolchastargraphtd ] . as shown in fig .[ fig : vcpfig12 ] , for a strong perturbation such as , the time domain sensing techniques fail to undo the effect of the vcspp , and hence to measure it . 
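before moving to the frequency domain , the time - reversal scan behind fig . [ fig : vcpfig11 ] can be sketched compactly . the reverberant response below is a toy sum of random echoes standing in for the measured ( or star - graph ) response , and the perturbation strength , scan range and pulse are placeholder choices ; the same loop can be run on the star - graph sonas constructed in the previous sketch .

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
dt, n = 5e-10, 16384
t = np.arange(n) * dt

# toy reverberant response: the same set of random echoes, with all arrival
# times multiplied by `scale` (scale > 1 mimics an electrically larger system)
m = 400
delays = rng.uniform(0.2e-6, 7.5e-6, m)
amps = np.exp(-delays / 1.5e-6) * rng.standard_normal(m)

def response(scale=1.0):
    h = np.zeros(n)
    h[np.minimum((delays * scale / dt).astype(int), n - 1)] += amps
    return h

def ppa(x):                      # peak-to-peak amplitude of a reconstruction
    return x.max() - x.min()

pulse = np.exp(-((t - 1e-7) / 2e-8) ** 2) * np.cos(2*np.pi*2.5e8*(t - 1e-7))
alpha = 1.002                    # exaggerated perturbation for the demo
sona_base = fftconvolve(pulse, response(1.0))[:n]
h_pert = response(alpha)

scores = []
for s in 1.0 + np.linspace(-4e-3, 4e-3, 81):
    rescaled = np.interp(t / s, t, sona_base, left=0.0, right=0.0)
    rec = fftconvolve(rescaled[::-1], h_pert)   # time-reverse and rebroadcast
    scores.append((ppa(rec), s))
print(max(scores)[1])   # optimal scaling factor, close to alpha = 1.002
```

on this toy model the scan recovers the imposed scaling factor to within the grid resolution , which is the content of fig . [ fig : vcpfig11 ] ; repeating it for increasing perturbation strengths reproduces the loss of effectiveness summarized in fig . [ fig : vcpfig12 ] .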
here, a frequency domain approach is used to illustrate how the vcspp with can be measured .[ fig : vcpfig13](a ) shows the of the baseline ( blue ) and perturbed ( red ) star graphs .the of the perturbed star graph ( larger in size by a factor of ) is compressed along the frequency axis compared to the baseline s case .this effect was predicted in sec .[ sec:3-theory ] , and schematically illustrated in fig .[ fig : vcpfig1 ] .the frequency domain approach to measure vcspp involves optimally scaling the frequency axis of the scattering parameter of the perturbed system ( star graph in this case ) to align it with the scattering parameter of the baseline system .[ fig : vcpfig13](b ) shows the of the baseline star graph ( blue ) and the optimally stretched of the perturbed star graph ( black ) .the optimal frequency scaling factor was , which also successfully measures the vcspp induced on the baseline star graph .to conclude , given the scattering parameters of a baseline and a perturbed system , one can check if a vcspp happened , and quantify it . of the baseline ( blue ) and a perturbed ( red ) star graph for perturbation strength .( b ) the of the baseline star graph ( blue ) , and the of the perturbed star graph ( black ) after optimal frequency scaling which measures the vcspp .[ fig : vcpfig13],width=288 ]the development of this quantitative sensor opens up possibilities for several potentially useful applications .for example , when a cavity that has a homogenous material make up is cooled down ( or warmed up ) , it is interesting to check if the temperature stays uniform throughout all parts of the cavity .the sensor developed in this paper would allow one to check if the volume of the cavity is decreasing ( or increasing ) while the shape is intact ; the sensor would also allow one to measure by how much the volume ( and hence the temperature ) of the cavity is changing .this is feasible as long as the temperature change primarily affects the volume of the cavity , and not the dielectric constant of the medium filling the cavity .this is one possible application of the vcspp sensor which can practically compete with the traditional option of installing thermometers throughout the cavity .another possible application of the vcspp sensor is monitoring if a fluid has displaced another fluid uniformly throughout a cavity .since the speed of the waves inside different fluids can be different , the displacement of the fluid can change the volume of the cavity , as seen by the waves .this assumes that other wave properties ( such as dissipation and dispersion ) of the two fluids are similar .measuring the effects of perturbation to a wave chaotic enclosure and uniquely identifying the perturbation that gave rise to this change is a challenge because many such perturbations can give rise to the same measured change .however , the effect of a perturbation that changes the volume but keeps the shape intact can be theoretically predicted .the theoretical prediction is most clear in the frequency domain .thus , quantifying volume changing perturbations is best done in the frequency domain .nonetheless , time domain approaches can be preferred for practical purposes .the time domain approach is limited by a maximum perturbation that can be measured .this limitation was demonstrated using a simulation of a star graph , which is a representative wave chaotic system .the time domain approach can work using either scattering fidelity techniques or time reversal mirrors .quantification of a 
volume changing perturbation was experimentally demonstrated using these techniques . the volume changing perturbation was induced experimentally by changing the electrical volume of the cavity . the results of the experiment were compared with fdtd simulation results of the cavity and good agreement was found . this work is supported by : onr muri grant n000140710734 ; afosr grant fa95500710049 ; onr / appel , task a2 , through grant n000140911190 ; and the maryland center for nanophysics and advanced materials . the computational resources for the fdtd simulations were granted by cineca ( italian supercomputing center ) under the project iscra hp10byqkhn . we would like to thank jen - hao yeh , matt frazier , and t. h. seligman for helpful discussions . franco moglie acknowledges the financial support and the hospitality of the wave chaos group at ireap , university of maryland .

a sensor was developed to quantitatively measure perturbations which change the volume of a wave chaotic cavity while leaving its shape intact . the sensors work in the time domain by using either scattering fidelity of the transmitted signals or time reversal mirrors . the sensors were tested experimentally by inducing volume changing perturbations to a one cubic meter mixed chaotic and regular billiard system . perturbations which caused a volume change that is as small as parts in a million were quantitatively measured . these results were obtained by using electromagnetic waves with a wavelength of about , therefore , the sensor is sensitive to extreme sub - wavelength changes of the boundaries of a cavity . the experimental results were compared with finite difference time domain ( fdtd ) simulation results , and good agreement was found . furthermore , the sensor was tested using a frequency domain approach on a numerical model of the star graph , which is a representative wave chaotic system . these results open up interesting applications such as : monitoring the spatial uniformity of the temperature of a homogeneous cavity during heating up / cooling down procedures , verifying the uniform displacement of a fluid inside a wave chaotic cavity by another fluid , etc .
a scale - free network is a large graph with power law degree distribution , i.e. , \sim k^{-\gamma} ] if for each subinterval \subseteq [ a , b] ] , the following conditions are equivalent : a. the sequence is equidistributed in ] [ def angular average ] let and let be such that its restriction to every sphere with is riemann integrable .then , we define its * angular average * by the average value of along the sphere , i.e. , : [ 0 , 1)\rightarrow \r \ , , & & \operatorname*{m_{\theta } } [ f ] ( t)\defining \frac{1}{2 \pi}\int_{0}^{2\pi } f\big(t \cos \theta , \ , t \sin \theta ) \big)\,d \theta .\end{aligned}\ ] ]in this section we analyze the behavior of the solutions of a family of well - posed problems on an very particular increasing sequence of graphs , depicted in _ figure _ [ fig graphs stages five and n ] . in the following we denote by , the unit disk and the unit sphere in respectively .the function is such that for all , is a sequence of real numbers and the diffusion coefficient is such that almost everywhere in . [ def radial equidistributed graph ]let be an equidistributed sequence in and .a. for each define the graph in the following way : b. for the increasing sequence of graphs define the limit graph as described in _ definition _ [ def increasing graph sequence ] .c. in the following we denote the natural domains corresponding to , by and respectively . d. for any edge we denote by its boundary vertex and ] .moreover , this convergence is uniform in the following sense c. the function given by is well - defined and it will be referred to as the * limit function*. a. fix and let be such that . since is the solution of _ problem _ it follows that for all , in particular with . on the other hand ,since is absolutely continuous , the fundamental theorem of calculus applies , hence for all .therefore , where is the global bound found in _ _ l__emma [ th estimates on v_0 = 0]-(ii ) above .integrating along the edge gives .next , given that for all , repeating the previous argument yields .finally , since , the result follows for any satisfying b. fix , due to the previous part the sequence is bounded in , then there exists and a subsequence such that { } \psupe \quad \text{weakly in}\ , h^{2}(e ) \ ;\text{and strongly in}\ , h^{1}(e).\ ] ] let such that equals zero on the boundary vertex of .let be the function in such that and is linear for all .test _ problem _ with this function to get integrating by parts the second summand of the left hand side yields since is a solution of the problem , the above reduces to equality holds for all , in particular it holds for the convergent subsequence , taking limit on this sequence and recalling , we have the statement holds for all vanishing at , the boundary vertex of .define the space and consider the problem due to the lax - milgram theorem the problem above is well - posed , additionally it is clear that , therefore it is the unique solution to the _ problem _ above .now , recall that is bounded in and that the previous reasoning applies for every strongly -convergent subsequence , therefore its limit is the unique solution to _ problem _ . consequently , due to rellich - kondrachov , it follows that the whole sequence converges strongly .next , for the uniform convergence test both statements and with and subtract them to get the above yields now , the uniform convergence follows from the _ statement _ , which concludes the second part . c. 
since for all then, the limit function is well - defined and the proof is complete .in this section we study the asymptotic properties of the global behavior of the solutions .it will be seen that such analysis must be done for certain type of cesro averages " of the solutions .this is observed by the techniques and the hypotheses of _ lemma _ [ th estimates on v_0 = 0 ] , which are necessary to conclude the local convergence of .additionally , the type of estimates and the numerical experiments suggest this physical magnitude as the most significant for global behavior analysis and upscaling purposes .we start introducing some necessary hypotheses .[ hyp forcing term cesaro means assumptions ] suppose that , and verify _ hypothesis _ [ hyp forcing terms and permeability ] and , additionally a. the diffusion coefficient has finite range .moreover , if and , then { } s_{i } \ , k_{i}.\ ] ] with for all and such that .b. the forcing term satisfies that { } \fi \ , , & & \forall \ ; 1\leq i\leq i.\end{aligned}\ ] ] where and the sense of convergence is pointwise almost everywhere . c. the sequence is convergent with .[ rem riemann integrability function ] a. notice that if ( i ) and ( ii ) in _ hypothesis _ [ hyp forcing term cesaro means assumptions ] are satisfied , then { } \sum_{i\ , = \ , 1}^{i}s_{i}\fi \ , .\ ] ] hence , the sequence is cesro convergent .b. a familiar context for the required convergence statement in _ hypothesis _ [ hyp forcing term cesaro means assumptions ] above is the following .let be a continuous and bounded function defined on the whole disk and suppose that for each , the sequence of vertices is equidistributed on . then , due to _weyl s theorem _ [ th weyl s theorem ] , for any fixed it holds that { } \operatorname*{m_{\theta } } [ f] ] for all .notice that due to the law of large numbers , with probability one it holds that { } s_{i } \ , k_{i}.\ ] ] b. let be a random variable such that and such that { } \f_{i } \ , , & & \forall \; 1\leq i \leq i.\end{aligned}\ ] ] therefore , the results of _ theorem _ [ th the upscaled problem ] hold , when replacing by or by or when making both substitutions at the same time .in this section we present two types of numerical experiments .the first type are verification examples , supporting our homogenization conclusions for a problem whose asymptotic behavior is known exactly .the second type are of exploratory nature , in order to gain further understanding of the phenomenon s upscaled behavior .the experiments are executed in a matlab code using the finite element method ( fem ) ; it is an adaptation of the code * fem1d.m * . for the sake of simplicitythe vertices of the graph are given by , as it is known that is equidistributed in ( see ) .the diffusion coefficient hits only two possible values one and two .two types of coefficients will be analyzed , a deterministic and a probabilistic one respectively .they satisfy [ def experimental difussion coefficients ] = \frac{1}{3 } \ , , \ ; \operatorname*{\bm{\mathbbm{e } } } [ k_{p } = 2 ] = \frac{2}{3 } .\end{aligned}\ ] ] in our experiments the asymptotic analysis is performed for being a fixed realization of a random sequence of length 1000 , generated with the binomial distribution . since it follows that the upscaled graph has only three vertices and two edges namely , , and , . 
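the ingredients of this setup are easy to reproduce numerically . in the sketch below the vertex sequence , the rule assigning the two diffusion values and the forcing term are illustrative placeholders for the elided expressions ; the point is only that the deterministic ( weyl ) and probabilistic ( law of large numbers ) cesàro averages stabilize as the number of vertices grows , which is the kind of convergence the hypotheses above require .

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
idx = np.arange(1, n + 1)

# placeholder vertex sequence: fractional parts of k*sqrt(2), a standard
# equidistributed sequence on [0, 1) (the paper's elided choice may differ)
theta = (idx * np.sqrt(2.0)) % 1.0

# deterministic two-valued coefficient (an illustrative assignment rule)
k_det = np.where(theta < 1.0/3.0, 1.0, 2.0)
# probabilistic coefficient, one fixed realization with P[k=1]=1/3, P[k=2]=2/3
k_prob = 1.0 + rng.binomial(1, 2.0/3.0, size=n)

# a bounded forcing term sampled at the boundary vertices (radius 1, angle 2*pi*theta)
f = lambda x, y: np.cos(3.0 * np.arctan2(y, x))
f_v = f(np.cos(2*np.pi*theta), np.sin(2*np.pi*theta))

# Cesàro averages: by Weyl's theorem (deterministic case) and the law of large
# numbers (probabilistic case) these settle down as the vertex count grows
for m in (100, 300, 1000):
    print(m, k_det[:m].mean(), k_prob[:m].mean(), f_v[:m].mean())
    # k-averages tend to 1/3*1 + 2/3*2 = 5/3; the f-average tends to the
    # angular average of f over the unit circle, which is 0 for this choice
```

as in the experiments described above , a single fixed realization of the probabilistic coefficient is then kept for all the examples that follow .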
also , define the domains where or depending on the probabilistic or deterministic context .additionally , we define for all the examples we use the forcing terms for every .the fem approximation is done with elements per edge with uniform grid . for each examplewe present two graphics for values of chosen from , based on optical neatness . for visual purposes in all the casesthe edges are colored with red if or blue if .also , for displaying purposes , in the cases the edges are labeled with " for identification , however for the labels were removed because they overload the image .basic example ] we begin our examples with the most familiar context as discussed in _ remark _ [ rem riemann integrability function ] .define by .since both sequences and are equidistributed , _weyl s theorem _ [ th weyl s theorem ] implies = \f_{2 } = \operatorname*{m_{\theta } } [ f\vert_{\omega_{g}^{2 } } ] = \operatorname*{m_{\theta } } [ f ] \equiv 0 .\ ] ] here , are the limits defined in _ hypothesis _ [ hyp forcing term cesaro means assumptions]-(ii ) . for this casethe exact solution of the upscaled _ problem _ is given by , with .for the diffusion coefficient we use the deterministic one , defined in .the following table summarizes the convergence behavior ._ example _ [ ex . basic example ] : convergence table , . [ cols="^,^,^,^,^",options="header " , ] it follows that this system has more than one internal equilibrium .consequently , an upscaled model of a system such as this , should contain uncertainty which , in this specific case , remains bounded due to the properties of the forcing term .a. the authors tried to find experimentally a rate of convergence using the well - know estimate the sampling was made on the intervals , for and .experiments were run on all the examples except for _ example _ [ ex. unbounded frequency ] . in none of the cases ,solid numerical evidence was detected that could suggest an order of convergence for the phenomenon .b. experiments for random variations of the examples above were also executed , under the hypothesis that random variables were subject to the law of large numbers .convergence , slower than its corresponding deterministic version was observed , as expected .this is important for its applicability to upscaling networks derived from game theory , see .the present work yields several accomplishments and also limitations as we point out below .a. the method presented in this paper can be easily extended to general scale - free networks in a very simple way .first identify the communication kernel ( see ) .second , for each node in the kernel , replace its numerous incident low - degree nodes by the upscaled nodes together with the homogenized diffusion coefficients and forcing terms , see _ figure _ [ fig networks ] . b. the particular scale - free network treated in the paper i.e. , the star metric graph , arises naturally in some important examples .these come from the theory of the strategic network formation , where the agents choose their connections following utilitarian behavior . under certain conditions for the benefit - cost relation affecting the actors whenestablishing links with other agents , the asymptotic network is star - shaped ( see ) .c. 
the scale - free networks are frequent in many real world examples as already mentioned .it follows that the method is applicable to a wide range of cases .however , important networks can not be treated the same way for homogenization , even if they share some important properties of communication .the small - world networks constitute an example since they are highly clustered , this feature contradicts the power - law degree distribution hypothesis .see for a detailed exposition on the matter .d. the upscaling of the diffusion phenomenon is done in a hybrid fashion . on one hand ,the diffusion on the low - degree nodes is modeled by the weak variational form of the differential operators defined over the graph , but ignoring its combinatorial structure . on the other hand ,the diffusion on the communication kernel will still depend on both , the differential operators and the combinatorial structure .this is an important achievement , because it is consistent with the nature of available data for the analysis of real world networks . typically , the data for central ( or highly connected ) agents are more reliable than data for marginal ( or low degree ) agents .e. the central cesro convergence hypotheses for data behavior ( stated in _ lemma _[ th estimates on v_0 = 0]-(iii ) , as well as those contained in _ hypothesis _ [ hyp forcing term cesaro means assumptions ] , in order to conclude convergence have probabilistic - statistical nature .this is one of the main accomplishments of the work , because the hypotheses are mild and adjust to realistic scenarios ; unlike strong hypotheses of topological nature such as periodicity , continuity , differentiability or even riemann - integrability of the forcing terms ( see ) .this fact is further illustrated in _example _ [ ex .non - riemann integrable forcing terms ] , where good asymptotic behavior is observed for a forcing term which is nowhere continuous on the domain of analysis .f. an important and desirable consequence of the data hypotheses adopted , is that the method can be extended to more general scenarios , as mentioned in _ remark _ [ rem probabilistic flexibilities ] , reported in _ subsection _ [ sec numerical closing observations ] and illustrated in _examples _ [ ex .probabilistic flexibilities ] , [ ex . unbounded example ] and [ ex . unbounded frequency ] .moreover , _ example _[ ex . unbounded frequency ] suggests a probabilistic upscaled model for the communication kernel , to be explored in future work .g. a different line of future research consists in the analysis of the same phenomenon , but using the mixed - mixed variational formulation introduced in instead of the direct one used in the present analysis .the key motivation in doing so , is that the mixed - mixed formulation is capable of modeling more general exchange conditions than those handled by the direct variational formulation and by the classic mixed formulations .this advantage can broaden in a significant way the spectrum of real - world networks which can be successfully modeled and upscaled .h. finally , the preexistent literature typically analyses the asymptotic behavior of diffusion in complex networks , starting from fully discrete models ( e.g. , ) .the pseudo - discrete treatment that we have followed here , constitutes more a complementary than an alternative approach . 
depending on the availability of data and/or sampling , as well as the scale of interest for a particular problem , it is natural to consider a " blending " of both techniques . the authors wish to acknowledge universidad nacional de colombia , sede medellín for its support in this work through the project hermes 27798 . the authors also wish to thank professor małgorzata peszyńska from oregon state university , for authorizing the use of the code * fem1d.m * in the implementation of the numerical experiments presented in _ section _ [ sec numerical experiments ] . it is a tool of remarkable quality , efficiency and versatility that has proved to be a decisive element in the production and shaping of this work . p. tsaparas , l. mariño - ramírez , o. bodenreider , e. v. koonin , i. k. jordan , global similarity and local divergence in human and mouse gene coexpression networks , bmc evol . biol . 6 ( 2006 ) 70 . http://dx.doi.org/10.1186/1471-2148-6-70 .
this work discusses the homogenization analysis for diffusion processes on scale - free metric graphs , using weak variational formulations . the oscillations of the diffusion coefficient along the edges of a metric graph induce internal singularities in the global system which , together with the high complexity of large networks constitute significant difficulties in the direct analysis of the problem . at the same time , these facts also suggest homogenization as a viable approach for modeling the global behavior of the problem . to that end , we study the asymptotic behavior of a sequence of boundary problems defined on a nested collection of metric graphs . this paper presents the weak variational formulation of the problems , the convergence analysis of the solutions and some numerical experiments . coupled pde systems , homogenization , graph theory . 35r02 , 35j50 , 74qxx , 05c07 , 05c82
oscillating flows play a central role in fluid dynamics ; their key appearances in various applications in medicine , biophysics , geophysics , engineering , astrophysics and acoustics are well known . in this note we consider inviscid , oscillating , incompressible flows driven by a body force that oscillates in time ( in particular , it can be a rotating body force ) . to do so we use the two - timing method . our main result is the derivation of a general and simple form of the averaged euler s equations by the two - timing method . the two - timing technique used here is taken in the same form as in . there are many attempts to describe fluid flows with very complex boundary conditions by replacing moving and deforming boundaries with various body forces . these research directions are motivated by important applications in such areas as turbo - machinery , biological and medical fluid dynamics . the most popular approaches here are the penalization method and the immersed boundary methods , see . however , it is surprising that until now the averaged equations of flows caused by oscillating body forces have escaped the attention of researchers . we study the dynamics of a homogeneous inviscid incompressible fluid with velocity field and vorticity ( asterisks mark dimensional variables ) . the governing equations in cartesian coordinates and time are where and is a given external body force that is a periodic function of the variable , where is a given frequency . for brevity we include the constant density into and . also , for simplicity , we assume that the fluid fills all of three - dimensional space . we assume that the considered class of oscillating flows possesses the characteristic scales of velocity , length , and frequency , where is a dependent time - scale . the dimensionless ( not asterisked ) variables and frequency are where is the small parameter of our asymptotic theory . in the dimensionless variables takes place . here and below we do not write the condition , but always keep it in mind . we consider the solutions of in the two - timing form with two time - variables where and are two _ mutually dependent _ time - variables ( we call the _ slow time _ and the _ fast time _ ) . then the use of the chain rule brings to the form where the subscripts and stand for the partial derivatives . the key suggestion of the two - timing method is . as a result , we convert from a pde with independent variables and into a pde with the extended number of independent variables and . then the solutions of must have a functional form : it should be emphasized that without a functional form of solutions can be different from ; indeed , the presence of the dimensionless scaling parameter allows one to build an infinite number of different time - scales , not just and . in this paper we accept and analyse the related averaged equations and solutions in the functional form . to make further analytic progress , we introduce a few convenient notations and agreements . here and below we assume that _ any dimensionless function _ has the following properties : \(i ) and all its - , - , and -derivatives required for our consideration are also ; \(ii ) is -periodic in , _i.e.
_ ( about this technical simplification see the discussion section ) ; \(iii ) has an average given by \(iv ) can be split into averaged and purely oscillating parts where _ tilde - functions _ ( or purely oscillating functions ) are such that and the _ bar - functions _ ( or the averaged functions ) are -independent ; \(v ) we introduce a special notation ( with a superscript ) for the _ tilde - integration _ of tilde - functions , such integration keeps the result in the tilde - class . for doing that we notice that the integral of a tilde - function often does not belong to the tilde - class . in order to keep the result of integration in the tilde - classwe should subtract the average the tilde - integration is inverse to the -differentiation ; the proof is omitted .here we emphasize that in all text below all large or small parameters are represented by various degrees of only ; these parameters appear as explicit multipliers in all formulae containing tilde- and bar - functions ; while these functions are always of order one .let us make some amplitude specification in where we have chosen the magnitudes of and in terms of .mathematically , this choice is required for balancing the first term in the equation with other terms of the same order .physically , the choice of the force as is dictated by the fact that the flow is driven by this force , so , it has to be of the highest available order of magnitude . we have also made a simplifying suggestion ( just for this note ) : the given body force is chosen as being a purely oscillatory function with a zero mean .the explicit introduction of converts into we are looking for the solutions in the form of regular series the substitution of ( [ basic-4aa ] ) into ( [ main - eq ] ) produces the equations for successive approximations ._ the equations of zero approximation _ of ( [ main - eq ] ) are the bar - parts of these equations give us while the tilde - parts lead to the conclusion taking the divergence of the first equation we obtain the poisson equation for which can be solved provided the boundary conditions at infinity are given .this solution can be symbolically written as : the boundary conditions at infinity can be chosen as as . at the same time we can consider the class of functions which are rapidly decaying as or outside a desired finite domain ( say , outside the domain , modelling an oscillating heart or rotating turbine ) . after solving , the first equation gives us the expression for ._ the equations of the first approximation _ of ( [ main - eq ] ) are the bar - part of the first equation is where the already known function is to be substituted from , . 
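as an aside , the fast - time bookkeeping used repeatedly above ( the average , the bar / tilde split and the tilde - integration ) is easy to sanity - check numerically . the sketch below uses a made - up 2*pi - periodic test function ( the period and the grid are assumptions ) ; it is an illustration of the bookkeeping only , not part of the derivation .

```python
import numpy as np

# uniform grid in the fast time tau over one 2*pi period (endpoint excluded)
m = 4096
tau = np.linspace(0.0, 2.0*np.pi, m, endpoint=False)
dtau = tau[1] - tau[0]

def bar(f):          # <f>: average over the fast time
    return f.mean()

def tilde(f):        # purely oscillating part, zero mean in tau
    return f - bar(f)

def tilde_int(f):    # tilde-integration: an antiderivative minus its own mean
    F = np.cumsum(f) * dtau
    return F - F.mean()

# made-up 2*pi-periodic test function of tau
g = 2.0 + np.cos(tau) + 0.5*np.sin(3.0*tau)
gt = tilde(g)

print(bar(tilde_int(gt)))                                     # ~0: result stays in the tilde class
print(np.max(np.abs(np.gradient(tilde_int(gt), dtau) - gt)))  # small (O(dtau)): inverse of d/dtau
```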
using the identity we transform into the final system of equations where is a modified pressure .one can see that the resulting form of the averaged equations coincides with the standard euler s equations containing an additional body force where the oscillatory pressure represents the solution of poisson s equation .this formula shows that for we have to consider body forces with .a simple example can be chosen as with new given functions and .the averaged force for this case can be calculated as where one can see that the force can be chosen as solenoidal with in this case due to the uniqueness of solution to laplace equation , which means and where the gradient term is included to the modified pressure .the expressions and can be further specified by the particular choice of and .we can take , in the cylindrical coordinates , where represents an arbitrary smooth function . in this casegives and one can see that the oscillatory force with the only nonzero angular component produces a radially directed averaged force .the force could have some relations to applications .two terms are chosen in order to consider a rotating force . a simpler way of doing that is to consider three components of in cylindrical or spherical coordinates , where all the components are functions of the form or with an integer and the azimuthal angle .this research is partially supported by the grant ig / sci / doms/16/13 from sultan qaboos university , oman .a given force with non - zero average part can be routinely included into the above consideration . in this caseone should take instead of in or an additional term in the right hand side of .it will only case the appearance of an additional term in as \2 .viscosity can be included into consideration , however the form of resulting averaged equations will depend on the order of magnitude of reynolds number in terms of . in particular , if then the standard ` viscous ' term can be just added to the equation .we believe that the presented form of equations can be generalised , specified and used in such high - impact areas as turbo - machinery , medicine , biological fluid dynamics , _ etc ._ , where the use of various ` effective body forces ' represents one of actively used modelling approaches .
in this note we consider a general formulation of euler s equations for an inviscid incompressible homogeneous fluid with an oscillating body force . our aim is to derive the averaged equations for these flows with the help of the two - timing method . our main result is a general and simple form of the equations describing the averaged flows , which are derived without making any additional assumptions . the presented results can have many interesting applications .
in this paper we obtain time - uniform estimates for the convergence of a class of interacting diffusion stochastic differential equations towards the associated mean field equation .the propagation of chaos resulting from this convergence when the number of particles tends to infinity is uniform in time which means that not only the particles are independent of each other , but also this independence is reached uniformly in time .the -particle interacting diffusion model is of the following form here are independent wiener processes , is a constant and , , are measurable functions .we will explain further below our reasons for studying this type of model . for a probability measure on , , write .the limiting processes are defined to be where is the law of . the classical propagation of chaos result states that , under suitable conditions on and , the probability law of over some fixed time interval ] ) , converges weakly to the probability law of .refer to for more details .we briefly consider the following toy model to motivate our problem .consider for the moment the system where has lipschitz constant .define where is the law of .assume that both of the above equations have strong solutions .using gronwall s inequality and the cauchy - schwartz inequality , obtained a bound of the form }|y^j_t - \bar{y}^j_t|\right ] \leq\exp\left ( 2 t \tilde{b}^{lip}_n\right)\sup_{s\in [ 0,t]}{\mathbb e}\left[b_1(\bar{y}^j_s,\bar{y}^k_s)^2\right]^{\frac{1}{2}}.\ ] ] it is clear from the above that is a good approximation to when .it is also clear that as , this bound becomes very poor , particularly due to the exponentiation .in much modeling of interacting diffusions , such as neuroscience , it is difficult to assume that is small : indeed , often it is difficult to properly model the ` start ' of a system .it is therefore desirable to obtain convergence results which are uniform in time .this is the focus of this paper . for ,let for some functions and described further below .we expect ( but do not require ) to be of the form where is a positive integer . modulates the rate of convergence for when is ` close ' to . is a weight function which modulates the behavior for when or asymptote to .if is a metric , then this result guarantees that the wasserstein distance ( with respect to ) between the laws of and converges to zero as , with a rate which is uniform in . as a consequence of theorem [theorem major result ] , and since is a metric , the result guarantees that the joint law of any finite set of neurons(or particles in the general case ) converges to a tensor product of iid processes , each with law given by the sde in ( [ eqn limit system ] ) .it is easily verified as explained in corollary [ corollary1 ] . 
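before turning to the literature , it is instructive to see the effect numerically on a simple example . the sketch below simulates an illustrative linear ( ornstein - uhlenbeck type ) mean - field model together with its mckean - vlasov limit , coupling the two systems through the same brownian increments and initial data ; the model , the constants and the crude monte carlo error measure are placeholder choices and are not the general setting of theorem [ theorem major result ] ( in particular the observed decay is the classical rate of roughly n^{-1/2} , not the exponent appearing in the theorem ) .

```python
import numpy as np

def worst_gap(N, T=50.0, dt=0.01, a=1.0, kappa=0.5, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of an illustrative linear mean-field model
    (Ornstein-Uhlenbeck with attraction to the empirical mean) together with
    its McKean-Vlasov limit, both driven by the SAME Brownian increments and
    initial data. Returns the largest (over the whole horizon) average
    distance between paired particles -- a crude stand-in for a time-uniform
    estimate of the particle/limit discrepancy."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N)      # particle system
    xbar = x.copy()                 # limit processes, coupled to the particles
    m = 0.0                         # E[Xbar_t]; for this model m' = -a*m, m(0) = 0
    worst = 0.0
    for _ in range(int(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal(N)
        x += (-a*x + kappa*(x.mean() - x)) * dt + sigma * dw
        xbar += (-a*xbar + kappa*(m - xbar)) * dt + sigma * dw
        m += -a * m * dt
        worst = max(worst, float(np.abs(x - xbar).mean()))
    return worst

for N in (10, 100, 1000):
    print(N, worst_gap(N))   # the gap shrinks with N and does not grow when
                             # the horizon T is extended, since the internal
                             # decay dominates the interaction here
```

for the neural - field example treated later , a similar coupled simulation can be run with the sigmoidal interaction , at the cost of approximating the limit law with a large auxiliary ensemble instead of the explicit mean used in this linear toy .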
to the best of our knowledge , the first work on uniform propagation of chaos was when approximating feynman kac formula for non linear filtering .other authors applied log - sobolev inequalities and concentration inequalities .most of the previously cited works assume that the interaction term is of the form and the local term is of the form for some satisfying certain convexity properties .this work is essentially a generalization of .we are motivated in particular by the application of these models to neuroscience ( see for instance ) although we expect in fact that these results are applicable in other domains such as agent - based modeling in finance , insect - swarms , granular models and various other applications of statistical physics .we have been able to weaken some of the requirements in and other works , so that the results may be applied in arguably more biologically realistic contexts .we do not assume that the interaction term is a function of , as in many of the previously cited works .the uniform propagation of chaos result is essentially due to the stabilizing effect of the internal dynamics ( term ) outweighing the destabilizing effect of the inputs from other neurons ( term ) and the noise ( term ) . in , it was assumed that the gradient of is always negative , and is at least linear .however it is not clear ( at least in the context of neuroscience ) that the decay resulting from the internal dynamics term is always this strong for large values of .neuroscientific models are only experimentally validated over a finite parameter range , and therefore it is not certain how to model the dynamics for when the state variable is very large or small .our more abstract setup does not require the decay to be linear ( as in for example the wilson - cowan model ) for large values of : indeed the decay could be sub - linear or super - linear ; all that is required is that in the asymptotic limit the decay from dominates the destabilizing effects of and .another improvement of our model over is that we consider multiplicative noise ( i.e. ) .this is more realistic because we expect the noise term to be of decreasing influence as gets large .this is because one would expect in general that the system is less responsive to the noise when its activity is greatly elevated , since the system should be stable .the point is that experimentalists should have some liberty in fitting our model to experimental data ; all that is required is that in the asymptotic limit the decay from dominates the destabilizing effects of and .we do not delve into the details of existence and uniqueness of solutions , and so throughout we assume that [ assumption one ] there exist unique strong solutions to and .our major result is the following uniform convergence property .[ theorem major result ] if assumption [ assumption one ] and the assumptions in section [ sect assumptions ] hold , then there exists a constant such that for all \leq k n^{-\frac{a}{q(a-1)}},\ ] ] for integers and .it is easy to show existence and uniqueness if , for example , and are each globally lipschitz . in the case of existence and uniqueness of ,* theorem 3.6 ) provides a useful general criterion .refer to for a discussion of how to treat the existence and uniqueness of in a more general case .our paper is structured as follows . 
in section [ sect assumptions ]we outline the assumptions of our model , in section [ section proof ] we prove theorem [ theorem major result ] and in section [ application ] we outline an example of a system satisfying the assumptions of section [ sect assumptions ] .the requirements outlined below might seem quite tedious .however in the next section we consider an application which allows us to simplify many of them .we split into two domains and . is a closed compact interval which we expect the system to be most of the time . over , we require that the natural convexity of dominates that of and . in require bounds for when the absolute values of the variables are asymptotically large .assume that , that , and if and only if .suppose that for , and clearly .write .assume that for all , assume that for all , there exists a constant such that assume that for all , there is some such that assume that there exists a constant such that for all , assume that for , for all probability measures and all , assume that there exist constants such that for all , assumption might seems a little strange .if as , in the context of neuroscience it would mean that the relative influence of neuron on neuron decreases as . this seems biologically reasonable .we assume that dominates the other terms , i.e. for some positive integer , we require that there exists a constant such that for all , \leq c_2.\label{eq b1 q bound}\end{aligned}\ ] ] assume that there exists a constant such that for all and for all , ,{\mathbb e}\left [ g(x_s)^{\frac{2(a-1)q}{aq - a - q}}\right ] \leq c_1.\label{eq g bound}\ ] ]we now outline the proof of theorem [ theorem major result ] .we will prove that there exists a constant such that \leq \int_0^t -c { \mathbb e}\left [ h(x^j_s,\bar{x}^j_s)\right ] + cn^{-\frac{1}{q}}{\mathbb e}\left [ h(x^j_s,\bar{x}^j_s)\right]^{\frac{1}{a}}ds.\ ] ] the theorem will then follow from the application of lemma [ lemma ut ] to the above result .we observe using ito s lemma that the are we start by establishing that \leq -c_0 \int_0^t { \mathbb e}\left [ h(x^j_s,\bar{x}^j_s)\right]ds.\ ] ] we prove that the sum of the integrands of ,, and is less than or equal to .suppose firstly that .then the integrands of and are all zero .furthermore , using , the integrand of satisfies the bound now suppose that .the integrand of is less than or equal to zero because of . through , \geq g(\bar{x}^j_s)c_0 f(\bar{x}^j_s - x^j_s).\end{gathered}\ ] ] since and , upon multiplying the above identity by , + \frac{1}{2 } f(x^j_s - \bar{x}^j_s)g'(x^j_s)g'(\bar{x}^j_s)b_2(x^j_s)b_2(\bar{x}^j_s)\\ \leq-c_0 h(x^j_s,\bar{x}^j_s)-\frac{1}{2}a_0g(x^j_s)g'(\bar{x}^j_s)b_2(\bar{x}^j_s ) f(x^j_s - \bar{x}^j_s)+\\ \frac{1}{2 } f(x^j_s - \bar{x}^j_s)g'(x^j_s)g'(\bar{x}^j_s)b_2(x^j_s)b_2(\bar{x}^j_s ) \leq - c_0 h(x^j_s,\bar{x}^j_s),\label{int temp 3}\end{gathered}\ ] ] since by , notice that the left hand side of is the sum of the integrand of and half of the integrand of . 
similarly if , the integrand of is less than or equal to zero , and through and , + \frac{1}{2 } f(x^j_s - \bar{x}^j_s)g'(x^j_s)g'(\bar{x}^j_s)b_2(x^j_s)b_2(\bar{x}^j_s)\\ \leq -c_0 h(x^j_s,\bar{x}^j_s)-\frac{1}{2}a_0g'(x^j_s)g(\bar{x}^j_s)b_2(x^j_s ) f(x^j_s - \bar{x}^j_s)+\\ \frac{1}{2 } f(x^j_s - \bar{x}^j_s)g'(x^j_s)g'(\bar{x}^j_s)b_2(x^j_s)b_2(\bar{x}^j_s ) \leq - c_0 h(x^j_s,\bar{x}^j_s).\label{int temp 4}\end{gathered}\ ] ] the left hand side of the above is equal to the integrand of and half of the integrand of .observe that if , then the left hand side of is zero because is zero in .similarly if , then the left hand side of is zero because is zero in .these considerations yield the bound .it follows from that \leq c_2 \int_0^t { \mathbb e}\left [ h(x^j_s,\bar{x}^j_s)\right]ds.\ ] ] we finish by bounding the term .suppose that .then using , - and the triangular inequality we obtain the same inequality when . that is , applying holder s inequality to the above , \leq\\ \grave{c}_1 { \mathbb e}\left[g(x^j_s)g(\bar{x}^j_s)f(x^j_s - \bar{x}^j_s)\right ] + \\ \breve{c}_1 { \mathbb e}\left[g(x^j_s)g(\bar{x}^j_s)f(x^j_s-\bar{x}^j_s)\right]^{\frac{1}{a}}{\mathbb e}\left[g(x^k_s)g(\bar{x}^k_s)f(x^k_s - \bar{x}^k_s)\right]^{\frac{a-1}{a } } \\ = ( \grave{c}_1 + \breve{c}_1){\mathbb e}\left[g(x^j_s)g(\bar{x}^j_s)f(x^j_s - \bar{x}^j_s)\right ] \end{gathered}\ ] ] we use holder s inequality to see that \leq \\ { \mathbb e}\left [ f'(x^j_s - \bar{x}^j_s)^a g(x^j_s)g(\bar{x}^j_s)\right]^{\frac{1}{a } } { \mathbb e}\left [ g(\bar{x}_s^j)^{\frac{a-1}{a}\times\frac{2aq}{aq - a - q}}\right]^{\frac{aq - a - q}{2aq}}\times\\ { \mathbb e}\left [ g(x_s^j)^{\frac{a-1}{a}\times\frac{2aq}{aq - a - q}}\right]^{\frac{aq - a - q}{2aq}}\times { \mathbb e}\left[\left(\sum_{k=1}^n b_1(\bar{x}^j_s,\bar{x}^k_s ) - \bar{b}_1(\bar{x}^j_s,\bar{\mu}_s)\right)^q\right]^{\frac{1}{q}}.\end{gathered}\ ] ] where is the integer that appears in assumption .+ by assumption , ^{\frac{aq - a - q}{aq}}\times { \mathbb e}\left [ g(x_s^j)^{\frac{2(a-1)q}{aq - a - q}}\right]^{\frac{aq - a - q}{aq}}\ ] ] is uniformly bounded for all .furthermore through assumption and lemma [ lemma bound polynomial ] , ^{\frac{1}{q}} ] and = 0 ] for some .we take and . define the sigmoid function , it is clear that is of class , and its derivative is bounded and positive . using this , we define we consider a population of neurons , with evolution equation where is the membrane potential of neuron , is the deterministic input current . denotes the synaptic weight from neuron to neuron .the function is assumed to be of class in both variables , such that both it and its derivative are bounded .+ the above assumptions are sufficient for the requirements of section [ sect assumptions ] to be satisfied .in particular , using the mean value theorem , one can easily verify the bounds [ eq c1 bound 1 ] and [ eq c1 bound 2 ] .morever , one can refer to and verify that assumption [ assumption one ] is satisfied. it then follows , using theorem [ theorem major result ] , that for all \leq 4k n^{-\frac{2}{3}},\ ] ] in other words , the law of an individual neuron converges to its limit as at the time - uniform rate given above .12 j. baladron , d. fasoli , o. faugeras , and j. touboul , _ mean - field description and propagation of chaos in networks of hodgkin - huxley and fitzhugh - nagumo neurons _ , the journal of mathematical neuroscience , 2 ( 2012 ) .f. bolley , i. gentil , and a. 
guillin , _ uniform convergence to equilibrium for granular media _ , archive for rational mechanics and analysis , 208 ( 2013 ) , pp .429 - 445 .m. bossy , o. faugeras , and d. talay , _ clarification and complement to mean - field description and propagation of chaos in networks of hodgkin - huxley and fitzhugh - nagumo neurons _ ,report , hal inria , 2015 .bressloff , _ spatiotemporal dynamics of continuum neural fields _ , journal of physics a : mathematical and theoretical , 45 ( 2012 ). j. a. carillo , r. j. mccann , and c. villani,_kinetic equilibration rates for granular media and related equations : entropy dissipation and mass transportation estimates _ , revista matematica iberoamericana , 19 ( 2003 ) , pp .971 - 1018 .p. cattiaux , a. guillin , and f. malrieu , _probabilistic approach for granular media equa- tions in the non - uniformly convex case _ , probability theory and related fields , ( 2008 ) .s. coombes , _ large - scale neural dynamics : simple and complex _ , neurolmage , 52 ( 2010),pp .731 - 739 . g. deco , v .k. jirsa , p. a. robinson , m. breakspear , and k. friston , _ the dynamic brain : from spiking neurons to neuralmasses and cortical fields _ , plos comput .biol . , 4 ( 2008 ) .a. destexhe and t. j. sejnowski,_the wilson - cowan model _ , 36 years later , biological cybernetics , 101 ( 2009 ) , pp .1 - 2 . w. gerstner and w. kistler , _ spiking neuron models _, cambridge university press , 2002 .d. hansel and h. sompolinsky , _ chaos and synchrony in a model of a hypercolumn in visual cortex _ , journal of computational neuroscience , 3 ( 1996 ) , pp .f. malrieu , _ logarithmic sobolev inequalities for some nonlinear pde s _ , stochastic processes and their applications , 95 ( 2001 ) , pp .109 - 132 .x. mao , _ stochastic differential equations and applications _ , horwood , 2008 , 2nd edition .del moral and l. miclo , _ branching and interacting particle systems approximations of feynman - kac formulae with applications to non - linear filtering , in sminaire de probabilits xxxiv _ , j. azma , m. emery , m. ledoux , and m. yor , eds .1729 , springer- verlag berlin , 2000 .del moral and e. rio , _ concentration inequalities for mean field particle models _ , annals of applied probability , ( 2011 ) .del moral and j. tugaut , _ uniform propagation of chaos for a class of inhomogeneous diffusions _ , tech .report , hal inria , 2014 .a. sznitman , _ topics in propagation of chaos , in ecole dt de probabilits de saint - flour xix-1989 , donald burkholder , etienne pardoux , and alain - sol sznitman , eds _ , vol .1464 of lecture notes in mathematics , springer berlin / heidelberg , 1991 , pp .165 - 251 .j. touboul , _ the propagation of chaos in neural fields _ , the annals of applied probability , 24 ( 2014 ) .visual cortex , journal of computational neuroscience , 3 ( 1996 ) , pp .j. touboul and b. ermentrout , _ finite - size and correlation - induced effects in mean - field dynamics _ , j comput neurosci , 31 ( 2011 ) , pp.453 - 484 . a. yu veretennikov,_on ergodic measures for mckean - vlasov stochastic equations _ ,monte - carlo and quasi - monte - carlo methods , ( 2006 ) , pp . 471 - 486 .wilson and j.d .cowan,_excitatory and inhibitory interactions in localized polulations of model neurons _, biophys .j. , 12 ( 1972),pp . 1 - 24 .
in this paper we obtain time - uniform propagation of chaos estimates for systems of interacting diffusion processes . using a suitably defined metric function , our result guarantees a time - uniform estimate for the convergence of a class of interacting stochastic differential equations towards their mean - field equation , for a general model satisfying conditions which ensure that the decay associated with the internal dynamics term dominates the interaction and noise terms . our result should have diverse applications , particularly in neuroscience , and allows for models more elaborate than that of wilson and cowan , since the internal dynamics is not required to be of linear decay . an example is given at the end of this work as an illustration of the interest of this result .
stochastic differential equations , mean field , mckean - vlasov equations , interacting diffusions , uniform propagation of chaos , neural networks .
60k35 , 60j60 , 60j65 , 92b20
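to make the neuron - network example above concrete, the following is a rough euler - maruyama simulation sketch. since the displayed evolution equation is not reproduced in the text, the specific drift (linear decay plus a constant input current plus a mean - field sigmoid coupling), the uniform synaptic weights and the noise amplitude below are illustrative assumptions, as are all function names and parameter values.

```python
import numpy as np

def sigmoid(x, slope=1.0):
    """Smooth, bounded firing-rate nonlinearity with positive derivative."""
    return 1.0 / (1.0 + np.exp(-slope * x))

def simulate_network(n_neurons=200, t_end=10.0, dt=1e-3, lam=1.0,
                     input_current=0.5, coupling=1.0, sigma=0.3, seed=0):
    """Euler-Maruyama simulation of
        dX^j = (-lam*X^j + I + coupling * mean_k S(X^k)) dt + sigma dW^j.
    Drift, coupling and noise forms are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n_neurons)      # initial membrane potentials
    for _ in range(int(t_end / dt)):
        drift = -lam * x + input_current + coupling * np.mean(sigmoid(x))
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_neurons)
    return x

if __name__ == "__main__":
    for n in (50, 200, 800):
        x_final = simulate_network(n_neurons=n)
        # As n grows, the law of a single neuron approaches the mean-field
        # (McKean-Vlasov) limit, uniformly in time by the bound quoted above.
        print(n, round(x_final.mean(), 3), round(x_final.std(), 3))
```

running the sketch for increasing population sizes illustrates the stabilisation of the empirical law of a single neuron that the time - uniform propagation of chaos bound describes.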
the following basic example illustrates what we would like to visualize .let be the square root of , if is a non - negative real number , we typically define as the non - negative real number whose square equals , i.e. we always choose the non - negative solution of the equation as . for negative real numbers , no real number solves .however , if we define the imaginary unit as a number with the property that then the square root of becomes the purely imaginary number together , these two conventions yield a continuous square root function for complex numbers , has exactly two complex solutions ( counted with multiplicity ) , the square roots of .we have seen that , for real numbers , we can choose one solution of for the square root and obtain a square root function that is continuous over the real numbers .in contrast , we can not for every complex number choose one solution of so that we obtain a square root function that is continuous over the complex numbers : if we plot the two solutions of as runs along a circle centred at the origin of the complex plane , we observe that moves at half the angular velocity of ( see ) .( 0,0 ) circle ( 2 ) ; ( 0,0 ) circle ( sqrt(2 ) ) ; ( -2.5,0 ) ( 2.5,0 ) node[right ] ; ( 0,-2.5 ) ( 0,2.5 ) node[above ] ; in 0,10, ... ,90 ( : 2 ) circle ( 0.05 ) ; ( 0.5*:sqrt(2 ) ) circle ( 0.05 ) ; ( 0.5*+180:sqrt(2 ) ) circle ( 0.05 ) ; ( -2.5,0 ) ( 2.5,0 ) node[right ] ; ( 0,-2.5 ) ( 0,2.5 ) node[above ] ; ( 0:2 ) circle ( 0.05 ) ; ( 0:sqrt(2 ) ) circle ( 0.05 ) ; ( 180:sqrt(2 ) ) circle ( 0.05 ) ; ( 2:sqrt(2 ) ) arc ( 2:178:sqrt(2 ) ) ; ( 182:sqrt(2 ) ) arc ( 182:358:sqrt(2 ) ) ; ( 2:2 ) arc ( 2:358:2 ) ; when completes one full circle and reaches its initial position again , the square roots have interchanged signs . therefore, a discontinuity occurs when returns to its initial position after one full turn and the square root jumps back to its initial position .note that by choosing the values of the square root in a different manner , or , equivalently , letting start at at a different position , we can move the discontinuity to an arbitrary position on the circle .moreover , note that there is ( at least ) one discontinuity on any circle of any radius centred at the origin . in order to define the principal branch of the complex square root function ,we usually align the discontinuities along the negative real axis , the canonical branch cut of the complex square root function , and choose those values that on the real axis agree with the square root over the real numbers .alternatively , we can extend the domain of the complex square root to make it a single - valued and continuous function .to that end , we take two copies of the extended complex plane and slit them along the negative real axis . on the first copy ,we choose the solution of with non - negative real part as the complex square root of ; on the second copy , we choose the other solution .we glue the upper side ( lower side ) of the slit of the first copy to the lower side ( upper side ) of the slit of the second copy and obtain a riemann surface of the complex square root .( in three dimensions , this is not possible without self - intersections . 
) on the riemann surface , the complex square root is single - valued and continuous .it is even analytic except at the origin and at infinity , which are exactly the points where the two solutions of coincide .the branch cut is not a special curve of the riemann surface .when we glue the riemann surface together , the branch cut becomes a curve like every other curve on the riemann surface .if we had used a different curve between the origin and infinity as branch cut , we would have obtained the same result . describes a parabola .we can proceed analogously to obtain riemann surfaces for other plane algebraic curves .probably the most common approach for visualization of functions is to plot a function graph .however , for a complex function the function graph is a ( real ) two - dimensional surface in ( real ) four - dimensional space .one way to visualize a four - dimensional object is to plot several two- or three - dimensional slices .this approach seems less useful for understanding the overall structure of the object .another traditional method to visualize complex functions is domain colouring .the principle of domain colouring is to colour every point in the domain of a function with the colour of its function value in a reference image .if we choose the reference image wisely , a lot of information about the complex function can be read off from the resulting two - dimensional image ( see e.g. and ) .the idea of lifting domain colouring to riemann surfaces is due to . we can interpret a riemann surface of a plane algebraic curve as a function graph of a multivalued complex function , which maps every to multiple values of .if is a polynomial of degree in , there are exactly values of for every value of that satisfy ( counted with multiplicity ) .every such pair corresponds to a point on the riemann surface . in other words ,the riemann surface is an -fold cover of the complex plane .let denote a projection function on the riemann surface .then the values of at correspond to the elements of the fibre .the situation is analogous to function graphs of single - valued functions from the real numbers ( or the real plane ) to the real numbers , where one function value lies above every point in the domain .we can transfer the riemann surface from ( real ) four - dimensional space into ( real ) three - dimensional space by introducing a height function .we typically use the real part as a height function .we plot the surface and use domain colouring to represent the value of at every point of the surface . in practice , we want to generate a triangle mesh that approximates the riemann surface as the graph of a multivalued function over a triangulated domain in the complex plane .the riemann surface mesh approximates the continuous riemann surface in the following sense : the -values at the vertices of a triangle of the riemann surface mesh result from each other under analytic continuation along the edges of the underlying triangle in the triangulated domain . if is a polynomial of degree in there are values of above every vertex of the triangulated domainhence , we have to determine which of the values of above a triangle in the triangulated domain form triangles of the riemann surface mesh .a wrong combination of values of to triangles might for example occur due to discontinuity if we used the principal branch of the square root function for the computation of .this would produce artefacts in the visualization for which there is no mathematical justification . 
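the behaviour described above, the square root moving at half the angular velocity of its argument and swapping sign after one full turn, is easy to reproduce numerically. the following sketch (plain numpy assumed) tracks a continuously continued branch along a circle and contrasts it with the principal branch, whose jump across the negative real axis is exactly the kind of discontinuity that would produce artefacts in a naively computed mesh.

```python
import numpy as np

# Sample z on a circle of radius 2 around the origin.
theta = np.linspace(0.0, 2.0 * np.pi, 361)
z = 2.0 * np.exp(1j * theta)

# Principal branch: discontinuous where the path crosses the negative real axis.
w_principal = np.sqrt(z)

# Analytic continuation along the path: start from the principal value and,
# at each step, pick the root closer to the previous value.
w_cont = np.empty_like(z)
w_cont[0] = np.sqrt(z[0])
for k in range(1, len(z)):
    r = np.sqrt(z[k])                      # one of the two roots
    candidates = np.array([r, -r])
    w_cont[k] = candidates[np.argmin(np.abs(candidates - w_cont[k - 1]))]

# The continued branch moves at half the angular velocity of z ...
print(np.allclose(np.angle(w_cont[:180]), theta[:180] / 2.0, atol=1e-6))
# ... and returns to minus its starting value after one full turn.
print(np.isclose(w_cont[-1], -w_cont[0]))
# The principal branch jumps where the path crosses the negative real axis.
print(np.abs(np.diff(w_principal)).max() > 1.0)
```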
for the generation of such a riemann surface mesh ,previous algorithms have solved systems of differential equations or explicitly identified and analyzed branch cuts to remove discontinuities . in the next section, we discuss an algorithm based on a different idea : we can exploit that is continuous almost everywhere on the riemann surface and therefore , if changes little , so does .in this section , we describe algorithms for generating and visualizing domain - coloured riemann surface meshes of plane algebraic curves .let be a complex plane algebraic curve .in particular , let be a polynomial with complex coefficients of degree in . moreover let be a triangulated domain in the complex plane .( in practice , is typically rectangular . )we want to generate a riemann surface mesh of .the mesh discretizes a part of a ( real ) two - dimensional surface in ( real ) four - dimensional space .we can visualize it using a height function and domain colouring , as described in the previous section .we obtain a riemann surface mesh of as a graph of the multivalued function induced by , which maps every value of in to values of such that . for every triangle in , we thus obtain values of at each of its three vertices .the problem is to determine whether , and if so , how , the values of can be combined to form triangles of the riemann surface mesh .the resulting triangles should be consistent with the fact that as a function of is analytic almost everywhere on the riemann surface .this is impossible if the triangle in contains a ramification point of . in this case, we subdivide the triangle to obtain smaller triangles mostly free of ramification points .otherwise , the triangles of the riemann surface mesh are uniquely determined by analytic continuation of along the edges of the triangle in . in order to find these triangles of the riemann surface mesh, we use the following idea : consider a triangle in that is free of ramification points of . under this assumption, is continuous on those parts of the riemann surface that lie above .hence , for every there exists such that for all with . if is half the minimum distance between the values of at and is smaller than the corresponding , then the values of at are closer to the corresponding values of at than to any other value of at . in other words ,if triangle is small enough , we can combine the values of at its vertices to triangles of the riemann surface mesh based on proximity : among the values of at the vertices of triangle , every three values of closest to each other form a triangle of the riemann surface mesh .we can algorithmically compute a as above using the epsilon - delta bound for plane algebraic curves of .is of essential importance for our approach .our approach only works because provides us with a reliable bound computable as a function of that depends only on a few constants derived from the coefficients of .if triangle is not small enough to correctly combine the values of at its vertices based on proximity , we subdivide the triangle . in summary, we obtain the following algorithm : [ alg : riemann - surface - mesh ] let be a triangulated domain in the complex plane .let be a complex plane algebraic curve and a polynomial of degree in .we prescribe a maximal subdivision depth ( as a maximal number of iterations or as a minimal edge length ) . 1 .compute the global ingredients of the epsilon - delta bound of for .2 . for every triangle in : 1 . 
compute the values of at , 2 .compute half the minimum distance between the values of at each of the vertices of , 3 .compute by the epsilon - delta bound of so that 4 . determine which of the edges of are longer than the minimum of the at their endpoints and must be subdivided .select the right adaptive refinement pattern ( see ) and subdivide accordingly .repeat step 2 until the maximal subdivision depth is reached .4 . discard every triangle in with an edge longer than the minimum of the at its endpoints .5 . for every triangle in ,combine the values of at its vertices to triangles of the riemann surface mesh based on proximity .more formally , the triangles added to the riemann surface mesh comprise the vertices for . 6 .output the riemann surface mesh and stop .\(a ) at ( 210:1 ) ; ( b ) at ( 330:1 ) ; ( c ) at ( 90:1 ) ; ( a ) ( b ) ( c ) cycle ; \(a ) ( b ) ( c ) cycle ; ( b ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( a ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( a ) ( ) ; ( ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( c ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( ) ( ) ; ( b ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( c ) ( ) ; ( ) ( ) ; \(a ) ( b ) ( c ) cycle ; ( ) ( ) ; ( ) ( ) ; ( ) ( ) ; by construction , generates a riemann surface mesh that is consistent with the analytic structure of the riemann surface of .the adaptive refinement patterns used for the subdivision of triangles , whose edges are too long , produce a watertight subdivision .step 4 of produces holes around the ramification points of .we can make these holes very small if we choose the maximal subdivision depth appropriately . for the visualization of a riemann surface mesh , we use the following algorithm : [ alg : visualization ] let a riemann surface mesh and a domain colouring reference image be given .we choose a height function to transform a point on the riemann surface mesh from ( real ) four - dimensional space to a point in ( real ) three - dimensional space , 1 .draw the mesh that results from transforming every vertex of the riemann surface mesh as above .2 . interpolate the value of on the transformed mesh .3 . assign to every point on the transformed mesh the colour in the reference image of the value that attains at that point on the transformed mesh . if we choose the real ( or imaginary ) part of as a height function , the transformation from ( real ) four - dimensional to ( real ) three - dimensional space becomes a projection .using the real part of as a height function has the advantage that the visualization then contains the image of interpreted as a real plane algebraic curve .it is the intersection of the visualization of the riemann surface mesh in ( real ) three - dimensional space with the --plane ( the -plane , if we label the coordinate axes of real three - dimensional space such that the -axis points to the right and the -axis points upwards ) .the computation of the riemann surface mesh by is independent of the choice of height function used for its visualization .in this section , we discuss how and can be implemented using opengl and webgl .since webgl targets a much wider range of devices , its api is more limited than that of opengl . consequently , our implementation using webgl differs substantially from our implementation using opengl . 
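as a cpu - side reference for the two ingredients used above, the sketch below computes the values of the multivalued function above a vertex with the weierstrass - durand - kerner iteration (mentioned again below among the common routines) and then combines the fibres above the three vertices of a triangle by proximity, as in step 5 of the mesh generation algorithm. the epsilon - delta admissibility test and the adaptive refinement are omitted, and the example curve (the complex square root) as well as all names and constants are illustrative assumptions.

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    """All roots of a polynomial given by its coefficients (highest degree
    first), via the Weierstrass-Durand-Kerner iteration."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                     # make monic
    n = len(coeffs) - 1
    roots = (0.4 + 0.9j) ** np.arange(n)            # standard initial guesses
    for _ in range(iters):
        for i in range(n):
            p_val = np.polyval(coeffs, roots[i])
            denom = np.prod(roots[i] - np.delete(roots, i))
            roots[i] = roots[i] - p_val / denom
    return roots

def fibre(x):
    """y-values above x for p(x, y) = y^2 - x (complex square root).
    For a general curve, build the coefficients of p(x, .) here."""
    return durand_kerner([1.0, 0.0, -x])

def assemble_triangle(xa, xb, xc):
    """Combine the fibres above the three vertices of a triangle into
    sheet triangles by proximity (step 5 of the mesh algorithm)."""
    ya, yb, yc = fibre(xa), fibre(xb), fibre(xc)
    triangles = []
    for y0 in ya:
        y1 = yb[np.argmin(np.abs(yb - y0))]   # nearest value above xb
        y2 = yc[np.argmin(np.abs(yc - y0))]   # nearest value above xc
        triangles.append(((xa, y0), (xb, y1), (xc, y2)))
    return triangles

if __name__ == "__main__":
    for tri in assemble_triangle(1.0 + 0.1j, 1.1 + 0.1j, 1.05 + 0.2j):
        print(tri)
```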
before we discuss each setup separately ,let us talk about what they have in common .the main part of our programs is written in shading language ( glsl for opengl and essl for webgl ) and runs on the gpu .we use the cpu to compute the global ingredients for the epsilon - delta bound , to generate shading language code that computes the epsilon - delta bound for as a function of , and to generate a coarse triangulation of the input domain .the implementations in opengl and webgl share some shading language code .since there is no native type for complex numbers , we represent them using two - dimensional floating point vectors .common routines include complex arithmetic , numerical root - finding algorithms , the computation of the epsilon - delta bound , and domain colouring .the implementation of complex arithmetic is straightforward and we shall not go into detail about it .we need numerical root - finding algorithms to approximate roots of polynomials in order to compute values of ( and to compute the global ingredients of the epsilon - delta bound ) .for instance , laguerre s method *section 9.5.1 and deflation *section 9.5.3 or weierstra durand kerner method are well - suited .the latter may be a little easier to implement in shading language ( due to the absence of variable - length arrays ) . for the computation of the epsilon - delta bound , we use the following theorem : [ thm : epsilon - delta - bound ] let be a complex plane algebraic curve , where is a polynomial of degree in whose coefficients are polynomials in of the form let be a point in the complex plane at which neither the leading coefficient nor the discriminant of w.r.t . vanish .then for every , we can algorithmically compute such that for all holomorphic functions , , which satisfy in a neighbourhood of and for all with .we obtain where denotes the -discriminant of , and , , are the zeros of .note that the computation is parallelizable since the epsilon - delta bound can be implemented as a function of that depends on only a few constants derived from the coefficients of . instead of computing texture coordinates , which would depend on the range of on the input domain, we generate the domain colouring procedurally on - the - fly . to that end, we use a variation of the enhanced phase portrait colour scheme of *section 2.5 .the reference image is shown in .we discuss the colour scheme in .the main difference between the implementations in opengl and webgl is how the common routines can be combined to realize and .our implementation of in opengl comprises three glsl programs , for initialization , subdivision , and assembly of the riemann surface mesh .we cache the output of each program using transform feedback and feed it back to the next program .the initialization program consists only of a vertex shader , which operates on the vertices of the triangulated input domain . for every vertex , we compute , , and .after initialization , we run the subdivision program .the program consists of a pass - through vertex shader and a geometry shader .the geometry shader operates on the triangles of the triangulated input domain or of its last subdivision , respectively .we have access to the values of , , and , , , at the vertices of each triangle .we determine which edges of triangle are longer than the minimum of the at their endpoints . 
in order to subdivide these edges, we compute their midpoints , and and , , at the midpoints .we use the appropriate adaptive refine pattern of and output between one and four triangles for every input triangle . in doing so , we reuse previously computed values rather than recomputing them .we run the subdivision program iteratively until we reach the prescribed maximal subdivision depth .the assembly program consists of a pass - through vertex shader and a geometry shader .the geometry shader operates on the triangles of the adaptively subdivided input domain .we again have access to the values of , , and , , , at the vertices of each triangle . for every triangle , we test whether one of its edges is longer than the minimum of the at its endpoints . in this case, we discard the triangle . otherwise , we determine the triangles of the riemann surface mesh by proximity ( see , step 5 ) and output these triangles .we also cache the assembled riemann surface mesh using transform feedback so that we can pass it as input to our implementation of the visualization algorithm ( ) .our implementation of in opengl consists of one glsl program with a vertex and a fragment shader .the vertex shader operates on the vertices of a riemann surface mesh generated by our implementation of .we apply height function to map each ( real ) four - dimensional vertex to a ( real ) three - dimensional vertex we homogenize the coordinates of this vertex and transform them using the model - view - projection matrix .we pass as a varying variable to the fragment shader .the fragment shader operates on the interpolated value of at a fragment of a pixel of the output device .we compute the colour of according to our domain colouring reference image .using our implementation , the generation of a riemann surface mesh takes little but noticeable time .the bottlenecks of the implementation are numerical root - finding and iterative subdivision . however ,if we use transform feedback to cache the riemann surface mesh and pass it to the implementation of the visualization algorithm , we obtain interactive performance .another advantage of using transform feedback to cache the riemann surface mesh is that we can easily export the data .if we additionally compute texture coordinates and a high - resolution reference image , we can even print our visualization using a full colour 3d printer ( see ) . in order to support a wider range of devices ,the webgl api is much more limited than the opengl api .particularly , in webgl , geometry shaders and transform feedback are currently unavailable .( the webgl 2 draft includes transform feedback and compute shaders . )therefore our implementation in webgl differs substantially from our implementation in opengl . 
instead oftransform feedback , our implementation in webgl uses floating point textures ( specified in the ` oes_texture_float ` extension ) and multiple render targets ( specified in the ` webgl_draw_buffers ` extension ) .i do not claim originality of this approach .it is commonly used for running simulations on the gpu .the original idea may be due to .we number the vertices of every mesh consecutively and pass this number ( index ) to the vertex shaders along with the other attributes .in particular , vertices that are shared among several triangles must be duplicated and numbered separately .hence , we assume that every triangle appears as three consecutive vertices in array buffer storage ( triangle soup ) .we use floating point textures essentially as we would arrays of floats , indexed by vertex number .we store values corresponding to the -th vertex in the -th pixel of a texture . we can store up to four floats per pixel of a floating point texture , namely one float each in the red , green , blue , and alpha channel .if we need to store more than four floats , we use multiple render targets which allows us to colour the same pixel of several textures simultaneously .we want to store values we compute for a vertex in textures ( ` transform ' in transform feedback ) .to that end , we bind the array buffers and draw the contents as points ( as opposed to triangles ) . in the vertex shader , we compute the positions of the point with index ( in normalized device coordinates ) so that it is rasterized as the -th pixel of the render target textures .recall that normalized device coordinates range in }^3 ] . for a texture of height and width , we compute the texture coordinates adding in the numerators accounts for the fact that we want to obtain coordinates for the centre of a pixel in order to avoid interpolation with adjacent pixels .we pass the texture coordinates to the fragment shader , where we can use them to perform a texture lookup . in order to access data of a whole triangle ( as in geometry shaders ) ,we can , in the vertex shader , determine the indices of the other vertices of the triangle .for example , the point with index is part of the triangle whose vertices have indices we compute normalized device coordinates or texture coordinates for all three indices and pass them to the fragment shader , together with the index of the triangle vertex currently under consideration .we replace the geometry shader of the subdivision program of our implementation in opengl using a variation of a method proposed by .the method works as follows : we precompute all adaptive refinement patterns up to a certain subdivision depth , in our case eight adaptive refinement patterns up to depth one ( see ) .we use barycentric coordinates to store the positions of the triangle vertices of each refinement pattern in an array buffer .using array buffers of different lengths allows us to achieve variable - length output , as with geometry shaders . 
for every triangle of a coarse input mesh ,we draw the triangles in the array buffer of the appropriate adaptive refinement pattern .we use the vertex positions of the input triangle ( read from a texture or from uniform variables ) and the barycentric coordinates of the triangles of the adaptive refinement pattern to compute the vertex positions of the output triangle .we can combine this method with floating point texture and multiple render targets as outlined above , if we number the vertices of each adaptive refinement pattern consecutively and store those indices together with the barycentric coordinates .we pass an offset as a uniform variable to the vertex shader that needs to be added to the indices .we draw the adaptive refinement pattern and increment the offset by the number of vertices in the adaptive refinement pattern . the geometry shader of the assembly program has fixed - length output .it generates exactly triangles of the riemann surface mesh per triangle of the ( subdivided ) input mesh .we can replace it with invocations of a vertex shader , one for every sheet of the riemann surface mesh .we pass the number of the current sheet to the vertex shader as a uniform variable .we can not expect our webgl implementation to reach the same performance as our opengl implementation . in the subdivision program ,since we draw a different adaptive refinement pattern for every triangle of the input mesh , we lose parallelism .consequently , subdivision in webgl is much slower than its opengl counterpart .however , if we cache the assembled riemann surface mesh ( in textures ) and pass it to our implementation of the visualization algorithm , we can still achieve interactive performance .in this section , we discuss domain - coloured riemann surface meshes for the complex square root function and for the folium of descartes . before that , let us explain our domain colouring reference image so that we can interpret the domain - coloured riemann surface meshes .recall that the basic idea of domain colouring is the following : if we want to visualize a complex function we face the problem that its graph is real four - dimensional. however , we can visualize the behaviour of the function by colouring every point in its domain with the colour of the function value at that point in a reference image .the reference image is the domain colouring of the complex identity function .depending on what reference image we choose , we can read off various properties of a function from its domain colouring .for an overview of different colour schemes , we refer to . as our reference image , we use a variation of the enhanced phase portrait colour scheme of *section 2.5 .the reference image is best described using polar coordinates of a complex number with _modulus _ and _ phase _ .firstly , we encode the phase at any point in the domain as the hue of its colour ( in hsi colour space ) . in a square with side length centred at the origin , we thus obtain the colour wheel shown in . as the phase changes from to , we obtain every colour of the rainbow .positive real numbers , which have phase , are coloured in pure red .negative real numbers , which have phase , are coloured in cyan . 
purely imaginary numbers do not have such distinctive colours .( this can be fixed using the nist continuous phase mapping , which scales the phase piecewise linearly so that purely imaginary numbers with positive imaginary part become yellow and purely imaginary numbers with negative imaginary part become blue .see *http://dlmf.nist.gov / help / vrml / aboutcolor#s2.ss2 .for simplicity , we do not follow this approach here . )0.32 0.32 0.32 secondly , we add contour lines of complex numbers of the same phase at integer multiples of degrees ( see ) . to that end , we change the intensity of the colour by multiplying it with a sawtooth function because phase corresponds to hue , the points of such a contour line are all of the same colour .finally , we add contour lines of complex numbers of the same modulus on a log - scale ( see ) . to that end, we change the intensity of the colour by multiplying it with a sawtooth function note that the contour lines of phase and modulus intersect each other orthogonally .the scaling factor in the sawtooth function for the modulus contour lines deliberately matches the scaling factor used in the sawtooth function for the phase contour lines .consequently , the regions enclosed by the contour lines of phase and modulus are squarish in appearance .recall the construction of a riemann surface of the complex square root from where we glued together its two branches at a branch cut along the negative real axis .0.49 0.49 the domain colouring of the two branches of the complex square root over a square of side length centred at the origin is shown in .on the sheet shown in , the complex square root takes values with negative real part ( coloured green to blue ) . on the sheet shown in , it takes values with positive real part ( coloured purple to yellow ) .( the sheet shown in corresponds to the principal branch of the complex square root . ) on both sheets , twelve contour lines of phase are visible , half as many as in the reference image .we can see that the phase of the complex square root function changes at half the angular velocity of its argument . moreover , the discontinuity at the branch cut along the negative real axis is clearly visible .we also see that there is a smooth transition between the second ( third ) quadrant of and the third ( second ) quadrant of . if we cut the two sheets along the negative real axis and glue the upper side of the cut of one sheet to the lower side of the cut of the other sheet , and vice versa , we obtain a riemann surface of the complex square root .the resulting riemann surface , produced with and using real part as height function , is shown in ( perspective ) and ( multiview orthogonal ) . 0.24 0.24 0.24 0.24 note that the self - intersection of the surface in is only an artefact of using a height function to map the riemann surface mesh from real four - dimensional to real three - dimensional space .evidently , the two values of the complex square root at each point of the self - intersection do not agree : they are coloured differently , in green and purple , respectively . 
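the reference image described above is straightforward to generate procedurally. the sketch below (numpy and matplotlib assumed) encodes phase as hue and multiplies in two sawtooth factors for the contour lines of phase and of modulus on a log scale; the exact constants (twenty - four phase contours, the intensity range) are assumptions chosen to mimic the description, not the authors' precise formulas. applying the same colouring to the principal branch of the square root reproduces the halved number of phase contours and the branch cut discussed above.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb
import matplotlib.pyplot as plt

def sawtooth(t, lo=0.7, hi=1.0):
    """Periodic intensity ramp on [lo, hi] used for the contour shading."""
    return lo + (hi - lo) * (t - np.floor(t))

def enhanced_phase_portrait(w, n_phase=24):
    """Domain colouring of the complex values w: hue encodes the phase,
    two sawtooth factors add contour lines of phase and of log-modulus."""
    phase = np.angle(w)                              # in (-pi, pi]
    hue = (phase / (2.0 * np.pi)) % 1.0              # positive reals -> red
    sat = np.ones_like(hue)
    val = (sawtooth(n_phase * phase / (2.0 * np.pi)) *
           sawtooth(np.log(np.abs(w) + 1e-12)))
    return hsv_to_rgb(np.dstack((hue, sat, val)))

# Reference image: domain colouring of the identity on a square grid.
x = np.linspace(-3.0, 3.0, 600)
zz = x[None, :] + 1j * x[:, None]
plt.imshow(enhanced_phase_portrait(zz), origin="lower", extent=(-3, 3, -3, 3))
plt.title("reference image (identity)")
plt.show()

# Principal branch of the square root: half as many phase contours,
# branch cut visible along the negative real axis.
plt.imshow(enhanced_phase_portrait(np.sqrt(zz)), origin="lower",
           extent=(-3, 3, -3, 3))
plt.title("principal branch of sqrt")
plt.show()
```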
in, we see the parabola that the real parts of the -values describe according to the equation when takes values on the non - negative real axis .( -5,-5 ) grid ( 5,5 ) ; ( -5,0 ) ( 5,0 ) node[below left ] ; ( 0,-5 ) ( 0,5 ) node[below right ] ; ( -5,-5 ) rectangle ( 5,5 ) ; plot ( ( 6*(-1+^2)^2)/(-1 + 3*^2 + 8*^3 - 3*^4+^6 ) , 12*(^4-^2)/(3*^2 + 8*^3 - 3*^4+^6 - 1 ) ) ; plot ( ( -6*(-1+^2)^2)/(-1 + 3*^2 - 8*^3 - 3*^4+^6 ) , 12*(^4-^2)/(3*^2 - 8*^3 - 3*^4+^6 - 1 ) ) ; the folium of descartes is a classical plane algebraic curve of order three , the cubic curve is nowadays called ` folium ' after the leaf - shaped loop that it describes in the first quadrant of the real plane ( see ) .it is named in honour of the french geometer ren descartes ( 15961650 ) , who was among the first mathematicians to introduce coordinates into geometry .originally , the curve was called _fleur de jasmin _ since descartes and some of his contemporaries , who were working out the principles of dealing with negative and infinite coordinates , initially wrongly believed that the leaf - shaped loop repeated itself in the other quadrants and therefore resembled a jasmine flower *p .0.32 0.32 0.32 shows three domain - coloured sheets of the folium of descartes over a square of side length centred at the origin of the complex plane .we can generate these sheets by sorting the -values that satisfy at every point of the domain according to their real part .the sheet shown in uses the -value with the smallest real part , the sheet shown in the -value with the second - smallest real part , and the sheet shown in the -value with the largest real part .we see that the first sheet carries -values with negative real part ( coloured green to blue ) . at the centre of the second sheet ,we identify a zero of order two , which we can recognize from the fact that the colours of the colour wheel used in our reference image wind around it twice in the same order as in the reference image .it is the node of the leaf - shaped loop .the third sheet carries -values with positive real part ( coloured purple to yellow ) .there are three branch cuts ( discontinuities of hue ) on the first sheet , six on the second sheet and three on the third sheets .we can see how the sheets of the riemann surface are connected to each other along the branch cuts : first and second sheet are connected at the branch cuts of the first sheet .second and third sheet are connected at the branch cuts of the third sheet .first and third sheet are not connected directly with each other .( imagine how much harder it would be to read this off from . )apart from the branch cuts , the map from to is conformal ( angle - preserving ) on every sheet .we can see that the contour lines of phase and modulus intersect each other orthogonally on every sheet , as in our reference image .if we cut the sheets along the branch cuts and glue them together correctly , we obtain a riemann surface for the folium of descartes .the resulting riemann surface , produced with and using real part as height function , is shown in ( perspective ) and ( multiview orthogonal ) .again , the self - intersections of the surface in are only an artefact of using a height function to map the riemann surface mesh from ( real ) four - dimensional to ( real ) three - dimensional space. 
makes it obvious that cutting a riemann surface into sheets by sorting -values by real part may be the most straightforward but not necessarily the geometrically most appropriate method .our riemann surface of the folium of descartes in large part appears to be composed of three copies of the complex plane ( which looks like our reference image ) .complications seem to arise only near the origin . 0.24 0.24 0.24 0.24if we look closely at , we may see how we obtain the real folium of descartes ( as a real plane algebraic curve ) as the intersection of our riemann surface mesh with the --plane .the leaf - shaped loop is clearly visible as a hole in our visualization .one of the ` complex planes of which the riemann surface is composed ' is so thin that it is barely visible from this perspective .it is almost asymptotic to the ` wings ' of the folium of descartes ( as a real plane algebraic curve ) in the second and fourth quadrant of the real -plane .right below the centre of , we see two leaf - shaped loops in complex directions .perhaps descartes and his contemporaries were not entirely wrong after all to believe that the folium of descartes has more than one leaf . indeed , if we let , we discover that in the --plane the curve describes a leaf - shaped loop , which is exactly half as high as that in the --plane ( this also holds for the ` wings ' ) and rotated into a different quadrant ( see ) .0.49 0.49we have discussed algorithms for the generation of a riemann surface mesh of a plane algebraic curve ( ) and its visualization as a domain - coloured surface ( ) and their implementation using opengl and webgl .the webgl implementation combines floating point textures , multiple render targets , and a method due to to replace the use of transform feedback and geometry shaders of the opengl implementation .while the generation of the surface takes noticeable time in both implementations , the visualization of a cached riemann surface mesh is possible with interactive performance .this allows us to visually explore otherwise almost unimaginable mathematical objects .sometimes the visualization makes properties of the plane algebraic curves immediately apparent that may not so easily be read off from its equation .it is possible to turn these domain - coloured riemann surface meshes into physical models using a full colour 3d printer .this research was supported by dfg collaborative research center trr 109 , `` discretization in geometry and dynamics '' .
we examine an algorithm for the visualization of domain - coloured riemann surfaces of plane algebraic curves . the approach faithfully reproduces the topology and the holomorphic structure of the riemann surface . we discuss how the algorithm can be implemented efficiently in opengl with geometry shaders , and ( less efficiently ) even in webgl with multiple render targets and floating point textures . while the generation of the surface takes noticeable time in both implementations , the visualization of a cached riemann surface mesh is possible with interactive performance . this allows us to visually explore otherwise almost unimaginable mathematical objects . as examples , we look at the complex square root and the folium of descartes . for the folium of descartes , the visualization reveals features of the algebraic curve that are not obvious from its equation .
computational time - reversal imaging ( ctri ) has become an important research area in recent years , with relevant applications in radar imaging , exploration seismics , nondestructive material testing , medical imaging [ 1 - 9 ] etc .ctri uses the information carried by scattered acoustic , elastic or electro - magnetic waves to obtain images of the investigated domain [ 1 ] .it was shown that scattered acoustic waves can be time - reversed and focused onto their original source location through arbitrary media , using a so - called time - reversal mirror [ 2 ] .this important result shows how one can use ctri to identify the location of multiple point scatterers ( targets ) in a known background medium [ 3 ] . in this case , a back - propagated signal is computed , rather than implemented in the real medium , and its peaks indicate the existence of possible scattering targets .the current methods for ctri are based on the null subspace projection operator , obtained through the singular value decomposition ( svd ) of the frequency response matrix [ 4 - 9 ] .motivated by several results obtained in random low rank approximation theory , here we investigate the problem of image recovery from a small number of random and noisy measurements , and we show that this problem is equivalent to a randomized approximation of the null subspace of the frequency response matrix .we consider a system consisting of an array of transceivers ( i.e. each antenna is an emitter and a receiver ) located at , and a collection of distinct scatterers ( targets ) with scattering coefficients , located at ( fig ., is the dimensionality of the space .also , we assume that the wave propagation is well approximated in the space - frequency domain by the inhomogeneous helmholtz equation [ 1 - 9 ] : \psi ( x,\omega ) = -s(x,\omega ) , \ ] ] where is the wave amplitude produced by a localized source , is the wavenumber of the homogeneous background , with the frequency , the homogeneous background wave speed , and the wavelength . here , is the index of refraction : , where is the wave speed at location . in the background we have , while , measures the change in the wave speed at the scatterers location . the fundamental solutions , or the green functions , for this problem satisfy the following equations : g_{0}(x , x^{\prime } ) = -\delta ( x - x^{\prime } ) , \ ] ] g(x , x^{\prime } ) = -\delta ( x - x^{\prime } ) , \ ] ] for the homogeneous and inhomogeneous media , respectively .the fundamental solution for the inhomogeneous medium can be written in terms of that for the homogeneous one as : this is an implicit integral equation for . 
since the scatterers are assumed to be pointlike , the regions with are assumed to be finite , and included in compact domains centered at , , which are small compared to the wavelength .therefore we can write : and consequently we obtain : if the scatterers are sufficiently far apart we can neglect the multiple scattering among the scatterers and we obtain the born approximation of the solution [ 10 ] : if corresponds to the receiver location , and corresponds to the emitter location , then we obtain : where are the elements of the frequency response matrix ] can be written as [ 11 ] : where is the rank of , and are the left and right singular vectors , and the singular values ( in decreasing order ) are : .if one can always compute the svd of the transpose matrix and then swap the left and right singular vectors in order to recover the svd of the original matrix .we also remind that the frobenius and the spectral norms of are [ 11 ] : if we define for any , then , by the eckart - young theorem , is the best rank approximation to with respect to the spectral norm and the frobenius norm [ 11 ] . thus ,for any matrix of rank at most , we have : from basic linear algebra we have : .\ ] ] also , we say that a matrix has a good rank approximation if is small with respect to the spectral norm and the frobenius norm .our problem is to substitute with some other rank matrix , which is much simpler than , and does not require the full knowledge of .therefore , the matrix must satisfy the general condition : where represents a tolerable level of error for the given application .several important results have been recently obtained regarding this problem .it has been shown that one can compute a rank approximation of from a randomly chosen submatrix of [ 12 , 13 ] . for any and this methoduses a matrix , containing only a random sample of rows of matrix , so that : holds with probability of at least .recently , the above result has been improved : by taking into account that the additive error can be arbitrarily large compared to the true error [ 14 ] .these results show that the sparse matrix recovers almost as much from as the best rank approximation matrix . in a different approach [ 15 ], it has been shown that one can substitute with a sparse matrix ] , where is the side of the imaging area .the number of targets ( with the scattering coefficients ) is set to and their position is randomly generated in the imaging area .the computational image grid is also set to pixels .the two dimensional green function is , where is the zero order hankel function of the first kind .the noise level is characterized by the signal to noise ratio ( snr ) .snr compares the level of a desired signal to the level of background noise .the higher the ratio , the less obtrusive the background noise is .snr measures the power ratio between a signal and the background noise : where is average power and is root mean square ( rms ) amplitude .let us first consider the case when the matrix is obtained by randomly selecting rows of the matrix . in figure 2we give the results obtained for the extreme case of and for different levels of noise , .one can see that even for this extreme case the results are actually pretty good . by increasing quality of the image improves even at high levels of noise , as shown on figure 3 , where and the noise is fixed at on the first line of images , and respectively on the second line of images . if the algorithm does nt work . 
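the numerical experiment described above can be sketched as follows: build the frequency response matrix from the two - dimensional green function, keep only a random subset of its rows, add noise, and image by projecting the green - function (steering) vector of each pixel onto the null subspace obtained from the singular value decomposition of the sub - sampled matrix. the array geometry, wavenumber, noise level and grid below are illustrative assumptions rather than the values used for the figures.

```python
import numpy as np
from scipy.special import hankel1

rng = np.random.default_rng(1)

k = 2.0 * np.pi          # wavenumber (wavelength = 1)
n_ant = 20               # transceivers on a line above the imaging area
antennas = np.stack([np.linspace(-5.0, 5.0, n_ant),
                     np.full(n_ant, 10.0)], axis=1)
targets = np.array([[1.0, 2.0], [-2.0, 4.0], [3.0, 5.0]])
rho = np.array([1.0, 0.8, 1.2])          # scattering coefficients

def green(p, q):
    """2D free-space Green's function (i/4) H_0^(1)(k |p - q|)."""
    r = np.linalg.norm(np.asarray(p) - np.asarray(q))
    return 0.25j * hankel1(0, k * r)

# Frequency response matrix K_ij = sum_m g0(x_i, y_m) rho_m g0(y_m, x_j).
G = np.array([[green(a, t) for t in targets] for a in antennas])
K = G @ np.diag(rho) @ G.T

# Keep a random subset of rows (partial measurements) and add noise.
m_rows = 8
rows = rng.choice(n_ant, size=m_rows, replace=False)
sigma = 0.05 * np.abs(K).mean()
noise = sigma * (rng.standard_normal((m_rows, n_ant)) +
                 1j * rng.standard_normal((m_rows, n_ant)))
K_sub = K[rows, :] + noise

# Null-subspace basis from the SVD of the sub-sampled matrix.
_, s, vh = np.linalg.svd(K_sub)
n_targets = 3                            # rank of the noise-free matrix
null_basis = vh[n_targets:, :].conj().T  # columns span the null/noise space

# Pseudospectrum over the imaging grid: peaks mark the target locations.
xs = np.linspace(-5.0, 5.0, 101)
ys = np.linspace(0.0, 8.0, 81)
image = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        steer = np.array([green(a, (x, y)) for a in antennas])
        proj = null_basis.conj().T @ steer
        image[iy, ix] = 1.0 / (np.linalg.norm(proj) ** 2 + 1e-12)

peak = np.unravel_index(np.argmax(image), image.shape)
print("brightest pixel near", xs[peak[1]], ys[peak[0]])
```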
in figure 4we give the results obtained when the elements of the matrix are selected randomly as : with probability , and with probability ( we also conserved the symmetry during the random selection ) .the figure matrix is organized on lines and columns .the lines correspond to the probability , and the columns correspond to the noise level .we have shown that the problem of image recovery from a small number of random and noisy measurements is equivalent to a randomized approximation of the null subspace of the frequency response matrix .the obtained results show that one can recover the sparse time - reversal image from fewer ( random ) measurements than conventional methods use . from the analytical results and the numerical experimentswe conclude that the minimum number of measurements is , where is the rank of the full matrix . c. prada , s. manneville .d. spoliansky , m. fink , decomposition of the time reversal operator : detection and selective focusing on two scatterers , journal of the acoustical society of america , 99 ( 1996 ) 2067 .gruber , e.a .marengo , a.j .devaney , timereversal imaging with multiple signal classification considering multiple scattering between the targets , journal of the acoustical society of america , 115 ( 2004 ) 3042 .h. lev - ari , a. j. devaney , the time - reversal technique reinterpreted : subspace - based signal processing for multi - static target location , ieee sensor array and multichannel signal processing workshop , cambridge ( ma ) , usa , ( 2000 ) 509 .numerical results for obtained by randomly selecting rows of the matrix , for different levels of noise : ( from top left corner to bottom right corner).,width=453 ] numerical results obtained when the elements of the matrix are selected randomly as : with probability , and with probability .the lines correspond to the probability , and the columns correspond to the noise level .,width=453 ]
computational time - reversal imaging can be used to locate the position of multiple scatterers in a known background medium . the current methods for computational time - reversal imaging are based on the null subspace projection operator , obtained through the singular value decomposition of the frequency response matrix . here , we discuss the image recovery problem from a small number of random and noisy measurements , and we show that this problem is equivalent to a randomized approximation of the null subspace of the frequency response matrix .
ibi , university of calgary , 2500 university drive nw , calgary , alberta , t2n 1n4 , canada
pacs : 02.30.zz inverse problems ; 43.60.pt signal processing techniques for acoustic inverse problems ; 43.60.tj wave front reconstruction , acoustic time - reversal
evidence synthesis [ e.g. , spiegelhalter , abrams andmyles ( ) , ] has become an important method in epidemiology , where multiple , disparate , incomplete and often biased sources of observational ( e.g. , surveillance or survey ) data are available to inform estimation of relevant quantities , such as prevalence and incidence of infectious disease [ ] .data may directly inform a quantity of interest , , or , more usually , may indirectly inform multiple parameters by directly informing some function of , .such a function may represent , for example , the relationship between a biased source of data and the parameter the data should theoretically measure , so that the bias is explicitly modelled .evidence synthesis methods combine these heterogeneous types of challenging data in a coherent manner , to estimate the `` basic '' parameters and from these obtain simultaneously the `` functional '' parameters .these functional parameters include both those directly observed and others that may not be observed but are of interest to estimate .this type of estimation typically necessitates the formulation of complex probabilistic models , often in a bayesian framework .knowledge of the severity of an influenza outbreak is crucial for informing and monitoring appropriate public health responses .severity estimates are necessary not only during a pandemic to inform immediate public health responses , but also afterwards , when a robust reconstruction of what happened during the pandemic is required to evaluate the responses . moreover , as has happened in past influenza pandemics [ ] , if a pandemic strain continues to circulate for some years , with unusual patterns of age - specific mortality , then severity estimates over time , both in terms of attack rates ( the proportion of the population infected ) and case - severity risks ( the probability an infection leads to a severe event ) , are required to understand if the strain is likely to continue circulating and if severity is changing over time .however , severity is an example epidemic characteristic that is difficult to measure directly .typically , severity is expressed as the probability that an infection will result in a severe event , for example , death .we refer to this probability as the `` case - fatality risk '' ( ) .severity may also be quantified by `` case - hospitalisation '' ( ) and `` case - intensive care admission '' ( ) risks , defined similarly as probabilities that an infection results in hospitalisation or intensive care ( icu ) admission .not all influenza infections will be symptomatic , where `` symptomatic '' may be defined in different ways , but is here taken to denote febrile influenza - like illness ( ili ) .not all infections will therefore result in symptoms severe enough for a patient to access health care and hence be detectable in surveillance systems [ ] .symptomatic case - severity risks ( ) , the probabilities a symptomatic infection leads to severe events , are therefore also considered as important indicators of severity for influenza .estimation of these probabilities requires information on both the cumulative incidence of ( symptomatic ) infection over a period of time of interest ( the denominator ) and the cumulative incidence of severe events ( the numerator ) .however , the denominator , whether symptomatic or all infection , is challenging to determine , due to the unobserved infections .population - wide serological testing ( testing for antibodies to influenza infection in blood serum 
samples ) to measure the proportion of the population infected is one possibility , but is unlikely to be feasible .this challenge is only compounded in a pandemic situation , where resources and time are even more stretched than usual [ e.g. , ] .the most feasible approach to the assessment of severity is therefore via estimation , combining data from different sources and accounting for their biases , due , for example , to under - ascertainment .the majority of methods adopted to estimate influenza case - severity [ e.g. , ] have not systematically accounted for all biases .crucially , they have not made use of all available information in the estimation process , nor have they accounted for all uncertainty inherent in the data .bayesian evidence synthesis provides a flexible framework in which all available relevant data may be coherently amalgamated , together with prior information on biases , to estimate case - severity [ , ( ) , ] . until the 2012/2013 winter, england experienced three waves of infection with the 2009 pandemic a / h1n1 influenza strain : in the summer of 2009 , the autumn and winter of 20092010 , and the autumn and winter of 20102011 .the severity of the first two waves , as measured by case - severity risks , was previously estimated [ ] by synthesising data either from surveillance systems in place to monitor seasonal influenza or from systems set up specifically in response to the pandemic [ ] . in this paper , we present in the statistical model used in and extend the approach to estimating severity in the third wave of infection .after the first two waves , the world health organization declared a move to a post - pandemic period ( http://www.who.int/mediacentre/news/statements/2010/h1n1_vpc_20100810/en/index.html ) , at which time many of thesurveillance systems that operated during the pandemic situation were either stopped or changed in form .we describe how the model of is further developed to account for these changes in the available data .the evidence used to estimate severity in the first two waves and the changes to the surveillance systems between waves are described in section [ sec_data ] .a bayesian approach to evidence synthesis is introduced in section [ sec_methods ] .we then describe in section [ sec_model ] a generic model for estimating severity , before showing in section [ sec_model12 ] how the model was implemented in the first two waves .we next develop the model to estimate severity in the third wave , presenting two approaches ( sections [ sec_3wsep ] and [ sec_3wsim ] , resp . ) .results are given in section [ sec_results ] and we end with a discussion in section [ sec_discuss ] .during the first two pandemic waves in 20092010 , data were available from various surveillance systems at or used by the uk s health protection agency ( hpa , now public health england ) that provided evidence on some aspect of the pandemic , at various levels of severity .these sources indirectly informed the case - severity risks and full details of each are given in section 1.1 of the supplementary material [ ] .briefly , they included the following : data on laboratory - confirmed pandemic a / h1n1 cases [ i.e. 
, cases where infection with the pandemic strain was confirmed virologically , via real - time polymerase chain reaction ( rt - pcr ) testing of nasal or throat swabs ] in the first few weeks of the pandemic [ health protection agency , health protection scotland , communicable disease surveillance centrenorthern ireland and national public health service for wales ( ) , ] . the data included dates of illness onset and information on hospital admissionif it occurred , from which age group - specific case - hospitalisation risks amongst confirmed cases could be estimated .note that these confirmed - case - hospitalisation risks are likely to be higher than the case - hospitalisation risks in all symptomatic cases , since not all symptomatic cases will have been confirmed in the first few weeks , and more severe cases in hospital are more likely to have been detected than less severe cases ; estimates of the number of symptomatic cases by week , age and region , produced by the hpa .these estimates were recognised to be under - estimates , given the data of point ( iii ) ; serial data on age group - specific proportions of individuals with antibodies to the pandemic strain of influenza ( `` sero - prevalence '' ) , from repeated cross - sectional surveys of residual sera from other ( unrelated ) diagnostic testing [ ] .these data indirectly inform the cumulative incidence of infection , that is , the proportion of the population infected over a period of time .initially these data were taken at face value , but concerns about potential sampling biases led to extra sensitivity analyses ( see section [ sec_sens12 ] ) ; data on laboratory - confirmed cases in hospital [ campbell et al.( ) ] , including age group and dates of illness onset , hospital admission and icu admission ; and data on the number of deaths amongst persons with confirmed pandemic a / h1n1 influenza and/or mention of influenza on the death certificate , reported to the hpa and/or the chief medical officer [ ] . during the third wave ,data sources ( i ) , ( ii ) and ( iv ) were no longer available in the same form .although results from testing of samples from before and after the third wave from data source ( iii ) are now available [ ] , at the time of the analyses presented here , they were not accessible .full details of each source below are given in section 1.2 of the supplementary material [ ] .between the second and third waves , the surveillance system for hospital admissions of confirmed cases moved to being a sentinel surveillance system , the uk severe influenza surveillance scheme ( usiss ) .the data from this system are available at a coarser level of age aggregation and come from a sentinel sample of 23 acute nhs hospital trusts in the 20102011 season , as opposed to the 129 trusts participating in hospital surveillance during the first two waves .additional data are available on patients present in all icus in england with _ suspected _ pandemic a / h1n1 influenza , again at a coarser age aggregation , from the department of health [ dh ; ] .we also have data on virological positivity ( proportion testing positive for the pandemic strain ) from a sentinel system , `` datamart , '' comprising results of rt - pcr testing from 16 hpa and nhs laboratories in england , covering mainly patients hospitalised with respiratory illness . 
in the third wave ,the hpa estimates of source ( ii ) were not available , due to the underlying data being specified at a different level of disaggregation .instead , we use estimates of the number symptomatic ( details in section 3.1 of the supplementary material [ ] ) obtained from an alternative general practice sentinel surveillance system [ ] . estimating case - severity by dividing the observed number of infections at a severe level over a period of time by the observed ( i.e. , confirmed ) number of infections in the same period is highly likely to result in biased estimates .this bias is due to both under - ascertainment of infections in surveillance systems and differential probabilities of observation by severity of infection [ ] .any estimation therefore has to account for these probabilities of observing infections ( `` detection probabilities '' ) .further challenges are posed by the following : uncertainty about the representativeness of the surveillance data for the general population ( sampling biases ) ; the different degrees of aggregation in each data source ; the fact that some of the data sources , such as the sero - prevalence data , only inform _ indirectly _ the number of infections ; and the changes in surveillance systems over time .a synthesis of all the above data sources to estimate case - severity therefore requires these challenges to be addressed .evidence synthesis [ see , e.g. , ] denotes the idea of estimating a set of `` basic '' parameters from a collection of independent data sources , arising from multiple studies , perhaps of differing design .each source provides evidence on a `` functional '' parameter .the function may either be equality to a single specific element of , so that the data _ directly _ informs , or a function of one or more components of , so that the data _ indirectly _ inform multiple basic parameters .the collection is therefore a mixture of basic and functional parameters .the aim is to estimate the set of basic parameters , from which the functional parameters , as well as any other functions of that are of interest , may be simultaneously derived .denote the total set of functions by .inference may be carried out either in a classical setting , maximising the likelihood , or , as in this paper , in a bayesian setting , assigning a prior distribution to the basic parameters , , and obtaining the posterior distribution typically via a simulation - based algorithm such as markov chain monte carlo ( mcmc ) .the posterior distribution of any of the functional parameters may also be derived .a bayesian evidence synthesis meets the challenges of case - severity estimation by allowing the relationship between data and parameters to be accurately formulated , for example , through the use of bias parameters such as detection probabilities ; prior information on such biases to be easily introduced ; and a natural framework in which to assess the consistency of evidence [ ] , as part of the inference and model criticism cycle advocated by and .the following generic synthesis of evidence to estimate severity was the basis of the estimation of severity of the 2009 pandemic a / h1n1 strain of influenza [ ( ) ] , both in the usa and in england during the first two waves . 
assume the population of interest is divided into 7 age groups : , 14 , 514 , 1524 , 2544 , 4564 , 65 , indexed by .denote the age - specific population sizes by , where indexes waves of infection ( in the case of england ) .consider infections at five increasing severity levels : all infections ( ) , symptomatic infections ( ) , hospitalisations ( ) , icu admissions ( ) and deaths ( ) .for each wave and age - group , consider each of these sets of infections to be subsets of the set of infections at a less severe level , such that and .note that we assume the set of deaths is a subset of the set of hospitalisations , but that not all deaths are a subset of the set of icu admissions .the set of infections is clearly a subset of the population . for each age group , denote the cumulative number of new infections during wave at severity level ( i.e. , the size of subset ) by .denote by the age- and wave - specific conditional probability that a case is at severity level given the case has already reached a less severe level , that is , . for ,let , where , respectively .for all infections , define . for deaths , define , that is , in terms of the conditional probability of dying given hospitalisation .the conditional probabilities , and are basic parameters to which we assign prior distributions and the are functional parameters .note that in the us analysis [ ] , the were considered stochastic nodes , realisations of a binomial distribution with probability parameter and an appropriate denominator .however , in the uk analysis [ ] and the analyses reported below , convergence of the mcmc algorithm was only achieved when the corresponding deterministic ( mean ) assumption was made for the , for reasons that are discussed further in section [ sec_discuss ] .the subsetting assumptions allow the case - hospitalisation , case - icu admission and case - fatality risks to be defined as functional parameters expressed as products of component conditional probabilities : similarly , the symptomatic case - icu admission and symptomatic case - fatality risks are defined as the conditional probability is commonly referred to as the `` infection attack rate '' ( ) and is known as the `` symptomatic attack rate , '' .let denote `` detection '' probabilities , that is , probabilities that infections at severity level are observed .the full set of wave- and age - specific basic parameters to which we assign a prior distribution is then with the total set defined as the full set of wave- and age - specific functional parameters is with the total set defined as the prior distributions assigned to the basic parameters , whether diffuse or informative , will depend on the specifics of the severity model considered ; see section [ sec_eng ] .in general , at each severity level , we observe infections out of the total infections .each is assumed to be binomially distributed with size parameter and detection probability : the likelihood would then be the specific models , for example , as in sections [ sec_model12 ] and [ sec_3wsep ] , may have variations on this likelihood , depending on the data available .for example , data may be directly available on the number of hospitalisations resulting in icu admission , in which case these data may contribute to the likelihood in the following form : once the priors and likelihood are defined , samples are obtained from the resulting joint posterior distribution by mcmc simulation , using openbugs [ ] . 
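a minimal numerical sketch of this structure, with hypothetical values for a single age group and wave, is given below; it shows the case-severity risks as products of conditional probabilities and the binomial likelihood contributions of observed counts through detection probabilities, using the deterministic (mean) relationship between severity levels adopted in the england analyses.

import numpy as np
from scipy.stats import binom

# hypothetical single age group and wave; severity levels Inf, Sym, Hos, ICU, Dea
N_pop = 6.0e6
iar   = 0.15        # infection attack rate, p(Inf)
p_sym = 0.60        # p(Sym | Inf), the symptomatic fraction
p_hos = 0.003       # p(Hos | Sym)
p_icu = 0.12        # p(ICU | Hos)
p_dea = 0.05        # p(Dea | Hos); deaths treated as a subset of hospitalisations

# expected numbers at each level (the deterministic "mean" relationship)
N_inf = N_pop * iar
N_sym = N_inf * p_sym
N_hos = N_sym * p_hos
N_icu = N_hos * p_icu
N_dea = N_hos * p_dea

# case-severity risks among symptomatic cases, as products of conditional probabilities
sCHR = p_hos
sCIR = p_hos * p_icu
sCFR = p_hos * p_dea

# detection probabilities link the true numbers to observed surveillance counts
d_hos, d_dea = 0.7, 0.9          # hypothetical
obs_hos, obs_dea = 1100, 70      # hypothetical observed counts

log_lik = (binom.logpmf(obs_hos, int(round(N_hos)), d_hos) +
           binom.logpmf(obs_dea, int(round(N_dea)), d_dea))
print(sCHR, sCIR, sCFR, log_lik)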
in each model described below , three independent chains were run for 2,000,000 iterations each , with the first 500,000 iterations discarded as a burn - in period and the remainder thinned to every iteration , resulting in 450,000 samples on which to base posterior inference .convergence was established by both visual inspection of the trace plots and examination of the brooks gelman rubin diagnostic plots [ ] .the model used in for the first two waves of infection in england is described in the next section .two alternative methods of modelling the third wave of infection are then given : ( a ) a two - stage approach where posterior distributions from the second wave model are used to inform prior distributions for some of the conditional probabilities in the third wave ; and ( b ) a one - stage approach where all three waves are modelled simultaneously , with the third wave conditional probabilities parameterised in terms of the corresponding second wave probabilities .figure [ fig_sev12 ] is a schematic directed acyclic graph ( dag ) displaying the relationship between parameters and data in the model for severity in the first two waves in england [ ] .the figure displays one generic example age group , with the and indices left out for simplicity .parameters are denoted by circles and data by rectangles .the dashed rectangle represents repetition over the two waves .double circles are basic parameters which are assigned prior distributions , either vague or informative , and filled light grey circles denote the key parameters ( both basic and functional ) we wish to estimate. dashed arrows denote functional relationships , for example , the definition of each number or equations ( [ eqn_csr ] ) and ( [ eqn_scsr ] ) .solid arrows represent distributional assumptions , for example , that an observation is binomially distributed . independently for each age group , a vague prior distribution is given to the infection attack rate, , in each of the two waves , together with the remaining fraction of the population , comprising those either uninfected in the first two waves or with some degree of immunity at baseline : the three proportions are therefore constrained a priori to sum to 1 and to lie between 0 and 1 .this parameterisation assumes each infected individual was infected in only a single wave .the remaining priors are either uniform or beta distributions , with full details given in section 2.2 of the supplementary material [ ] .the likelihood is a product of binomial and log - normal contributions , as detailed in the following .[ [ infections ] ] infections + + + + + + + + + + the sero - prevalence data [ source ( iii ) of section [ sec_data ] ] consist of the number of samples testing positive for pandemic a / h1n1 antibodies , both before and after the first wave .they are realisations of two binomial distributions and provide information on the corresponding prevalences at the two time points .the difference in these two prevalences informs the infection attack rate in the first wave , via the functional relationship ( figure [ fig_sev12 ] ) .the post - second wave sero - prevalence data were not used initially , as some samples taken after the vaccination campaign had begun were likely to test positive due to vaccination rather than infection . 
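the way the two sero-prevalence surveys and the hpa symptomatic estimates enter the likelihood can be sketched as follows (hypothetical numbers; the actual specification is in the supplementary material). the baseline and post-first-wave samples are binomial, with the post-wave prevalence equal to the baseline prevalence plus the first-wave infection attack rate, and the hpa estimate is treated as log-normally distributed around the detected number symptomatic.

import numpy as np
from scipy.stats import binom, norm

# hypothetical sero-survey counts for one age group
pos_base, n_base = 12, 300        # positives / samples at baseline
pos_post, n_post = 45, 320        # positives / samples after the first wave

hpa_est = 40000.0                 # hypothetical HPA estimate of the number symptomatic
N_pop   = 6.0e6

def log_lik(pi0, iar, p_sym, d_sym, tau=0.1):
    """pi0: baseline prevalence, iar: first-wave attack rate, p_sym: p(Sym|Inf), d_sym: detection prob."""
    pi1 = pi0 + iar                            # post-wave prevalence = baseline + newly infected
    N_sym = N_pop * iar * p_sym                # number symptomatic in the wave
    return (binom.logpmf(pos_base, n_base, pi0) +
            binom.logpmf(pos_post, n_post, pi1) +
            # HPA estimate modelled as log-normal around the *detected* number symptomatic
            norm.logpdf(np.log(hpa_est), loc=np.log(N_sym * d_sym), scale=tau))

print(log_lik(0.04, 0.10, 0.6, 0.15))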
a lack of information on the vaccination status of individuals in the sample , together with concerns that individuals in the sample may have been more likely than the general population to be at risk of infection , due to pre - existing conditions , and therefore to be vaccinated [ ] , precluded the use of the data without further work to address these challenges .[ [ symptomatic - infections ] ] symptomatic infections + + + + + + + + + + + + + + + + + + + + + + the estimates ( figure [ fig_sev12 ] ) of the number symptomatic from the hpa ( source ( ii ) , section 1.1.2 of the supplementary material [ ] ) are assumed to be log - normally distributed , with a mean that ( on the original scale ) is drawn from a binomial distribution with size parameter and probability parameter given by the detection probability .this parameterisation reflects the belief that the hpa estimates are underestimates of the number symptomatic .[ [ hospitalisations - and - deaths ] ] hospitalisations and deaths + + + + + + + + + + + + + + + + + + + + + + + + + + + the observed hospitalisations anddeaths [ sources ( iv ) and ( v ) , resp . , see also figure [ fig_sev12 ] ] are binomial realisations , with size parameters and probability parameters given by their respective ( wave- but not age - specific ) detection probabilities . amongst observed hospitalisations for whom we have information on final outcomes [ a subset of source ( iv ) ] , the observed icu admissions and deaths are realisations of binomial distributions with probability parameters given by the conditional probabilities and , respectively ( figure [ fig_sev12 ] ) .fuller details of the model are given in section 2 of the supplementary material [ ] .the changes in surveillance sources available during the third wave , particularly the smaller sample sizes and coarser age aggregation , resulted in the data providing less direct information on the parameters than in the first two waves . to ensure identifiability of all parameters ,informative prior distributions were employed for some parameters .the darker grey circles in figure [ fig_sev3 ] , a dag of the third wave model , denote these parameters , with beta prior distributions chosen to reflect the posterior distributions of the equivalent second wave parameters ( see table 15 of the supplementary material [ ] ) . the changes also entailed two smaller submodels , one for the data on icu patients with suspected pandemic a / h1n1 infection and one for general practice ( gp ) consultation and positivity data , the results of which are incorporated into the third wave severity model as likelihood terms ( see below for more detail ) ._ infections_. the infection attack rate again has a dirichlet prior over the three waves , but it is now more informative : where is the proportion either with antibodies at baseline or infected during one of the first two waves , that is , the post - second wave antibody prevalence . for each age group , and chosen such that a distribution approximates the marginal posterior distribution of derived from the model of section [ sec_model12 ] .the choice of dirichlet parameters allows the prior mean for to reflect the posterior mean from section [ sec_model12 ] , but gives greater prior uncertainty than the corresponding posterior . 
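one simple way to build such informative prior distributions is to moment-match a beta distribution to the second-wave posterior mean and standard deviation and then inflate the variance; a sketch with hypothetical posterior summaries:

import numpy as np

def beta_from_moments(mean, sd):
    """Beta(a, b) with the given mean and standard deviation (requires sd^2 < mean*(1-mean))."""
    nu = mean * (1.0 - mean) / sd ** 2 - 1.0      # "effective sample size" a + b
    return mean * nu, (1.0 - mean) * nu

# hypothetical second-wave posterior summary for one conditional probability
post_mean, post_sd = 0.0030, 0.0006

a, b = beta_from_moments(post_mean, 2.0 * post_sd)   # doubling the sd gives greater prior uncertainty
print(a, b)

# quick Monte Carlo check of the resulting prior
rng = np.random.default_rng(11)
draws = rng.beta(a, b, size=200000)
print(draws.mean(), draws.std())

the same idea extends to the dirichlet prior on the attack rates, where the concentration parameters are chosen so that the prior mean matches the relevant second-wave posterior mean while the spread is deliberately wider.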
_ symptomatic infections ._ as the hpa did not produce estimates of the number symptomatic during the third wave , data on ili consultations and virological positivity from an alternative primary care sentinel surveillance system ( [ ] ; see section 1.2.1 of the supplementary material [ ] ) were used to estimate the number symptomatic , before incorporating this estimate into the severity model .a log - linear regression of the ili consultation data on time and age was fitted jointly with a logistic regression of the positivity data on time and age [ cf .a negative binomial likelihood was assumed for the consultation data and a binomial likelihood for the positivity data . the number symptomatic due to the pandemic a / h1n1 strainwas then estimated as the sum over weeks of the product of the expected consultation rate and the expected proportion positive for pandemic a / h1n1 , adjusted for the proportion of symptomatic patients who contact primary care .the resulting posterior mean ( ) and standard deviation ( ) of the logarithm of the number symptomatic are incorporated into the likelihood of the third wave severity model as a normal term : ( see section 3.1 of the supplementary material [ ] for details ) . _ hospitalisations ._ the hospitalisation data for the third wave ( source ( vi ) , section 1.2.2 of the supplementary material [ ] ) come from a sentinel system .the observed number of hospitalisations therefore provides a lower bound for the number of hospitalisations , contributing to the total likelihood as a binomial component with probability parameter given by the ( non - age specific ) detection probability . recall that these data are available at a coarser age aggregation than in the first two waves .the size parameter is therefore a functional parameter that is a sum over the appropriate age groups , where are sets describing the mapping from the coarser age groups to the severity model age groups ._ icu admissions . _ the extra information on suspected patients present in icu ( source ( vii ) , section 1.2.3 of the supplementary material [ ] ) are modelled as a bivariate immigration - death process to represent movement in and out of icu .this process is combined with the positivity data of source ( viii ) to estimate the cumulative number of confirmed pandemic a / h1n1 incident cases admitted to icu during the third wave ( section 4 of the supplementary material [ ] ) .the resulting posterior mean ( standard deviation ) of the logarithm of the cumulative icu admissions , , are incorporated in the likelihood for the third wave severity model as normally distributed : where denotes the age groups available for the suspected icu data ( two groups : children and adults ) . as with the hospitalisation data ,the are sums over the appropriate age groups .the number is still a lower bound for the cumulative number of icu admissions over the third wave , since the data of source ( vii ) cover only a portion of the time of the third wave : this is expressed as having a binomial distribution with size parameter and probability parameter given by the age - constant detection probability . 
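the post-processing step described above for the symptomatic infections, turning fitted weekly consultation and positivity curves into an estimate of the number symptomatic, can be sketched as follows (all rates are hypothetical stand-ins for posterior samples from the joint regression model):

import numpy as np

rng = np.random.default_rng(2)

weeks     = 8
N_pop     = 6.0e6
p_consult = 0.1        # hypothetical proportion of symptomatic cases contacting primary care

# pretend these are posterior samples from the joint regression model:
# weekly ILI consultation rates (per person) and proportions testing positive for A/H1N1
n_samp = 2000
consult_rate = rng.gamma(shape=50, scale=4e-6, size=(n_samp, weeks))
positivity   = rng.beta(a=20, b=60, size=(n_samp, weeks))

# number symptomatic = sum over weeks of consultations attributable to the pandemic strain,
# scaled up by the proportion of symptomatic individuals who consult
n_sym_samples = (N_pop * consult_rate * positivity).sum(axis=1) / p_consult

mu_hat    = np.log(n_sym_samples).mean()
sigma_hat = np.log(n_sym_samples).std(ddof=1)
print(mu_hat, sigma_hat)

the resulting mean and standard deviation on the log scale are exactly the two quantities that enter the third wave severity model as a normal likelihood term.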
_finally , the observed deaths are again binomially distributed , as in the first two waves .full details of the changes to model the third wave are given in section 3 of the supplementary material [ ] .modelling the three waves of infection in two stages enables the use of the posterior distributions of case - severity in the second wave as prior distributions in the third wave analysis .however , a two - stage approach does not allow estimation of the posterior probability of a change in severity occurring over waves .to do so requires modelling all three waves simultaneously , as if we had not seen any of the data until the end of the third wave .a joint model for all three waves implies different assumptions from the two - stage approach .first , the prior distribution for the infection attack rates in each wave is assumed again to be diffuse : here , the remaining fraction of the population comprises both those with antibodies at baseline ( pre - pandemic ) and those remaining uninfected by the end of the third wave .the proportion symptomatic , , is now constrained to be equal across all three waves and all age groups , instead of its third wave prior being informed by its second wave posterior distribution .likewise , the three conditional probabilities and for are no longer given prior distributions based on second wave posterior distributions , but are parameterised in terms of their corresponding second wave conditional probabilities : .\nonumber\end{aligned}\ ] ] a value of for the standard deviations would imply that the odds ratios of the third compared to the second wave probabilities lie between 0.14 and 7.10 .a value of would imply an odds ratio of 1 , that is , equality of the conditional probabilities : .all other aspects of the joint model for all three waves are as in the separate first / second and third wave models of sections [ sec_model12 ] and [ sec_3wsep ] , respectively . and by age , wave and model .note the different scales on the -axes . ]results from the model for the first two waves , given in full in , suggest a mild pandemic , characterised by case - severity risks increasing between the two waves . from the analysis of data from the third wave , figures [ fig_csrs ] and [ fig_iar ] show the posterior medians and credible intervals for the case - severity risks and infection attack rates , respectively , by age , wave and model . although there are some differences between the two - stage models ( left - hand sides of the figures ) and the combined three - wave model ( right - hand sides ) , the conclusions are broadly similar .there is a clear `` u''-shape to the age distribution of the case - severity risks ( figure [ fig_csrs ] ) in all three waves , with the youngest and oldest age groups having the highest probabilities of experiencing severe events , but also the most uncertainty in the estimates .the age distribution of the infection attack rates ( figure [ fig_iar ] ) , on the other hand , is convex , with school - age children having the highest probability of being infected in the first two waves , though not the third .the joint three - wave model allows estimation of the posterior probabilities of increases across waves in either the attack rates or the case - severity risks ( table [ tab_ppinc ] ) . 
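the parameterisation referred to above (the displayed equation is not fully recovered in this copy) can be read as a perturbation of the second-wave conditional probabilities on the log-odds scale; the sketch below assumes logit(p_w3) = logit(p_w2) + eps with eps normal, which reproduces the quoted odds-ratio interval (0.14, 7.10) when the standard deviation is taken to be 1, and shows how a posterior probability of an increase between waves is read off from joint draws.

import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

sigma = 1.0                                        # assumed value; exp(+/-1.96*sigma) ~ (0.14, 7.10)
print(np.exp(-1.96 * sigma), np.exp(1.96 * sigma))

rng = np.random.default_rng(12)
n = 450000

# hypothetical joint draws: second-wave p(Hos | Sym) and its third-wave counterpart
p_w2 = rng.beta(6.0, 2000.0, size=n)
p_w3 = inv_logit(logit(p_w2) + rng.normal(0.0, sigma, size=n))   # sigma = 0 would force equality

# posterior probability of an increase from the second to the third wave;
# under the prior alone this is ~0.5 by symmetry, with data it is computed from the joint posterior draws
print(np.mean(p_w3 > p_w2))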
across waves, there is some evidence of a shift in the age distribution of the infection attack rates, with posterior probabilities of an increase from the second to third waves seen in adults and the very young, but not in school-age children (posterior probability ). at first glance, the estimates averaged over the age groups (table [ tab_ppinc ]) suggest the case-severity risks have increased over all three waves. the posterior probabilities of a rise across waves of the and are all greater than . however, closer scrutiny of the age-specific estimates shows this increase does not occur consistently in every age group and wave. there is stronger evidence of a rise in icu admission and fatalities from the first to second waves than from the second to third. the pattern is less clear in the case-hospitalisation risks.

[ table [ tab_ppinc ]: posterior probabilities of an increase between waves, tabulated by age group ( <1, 1-4, 5-14, 15-24, 25-44, 45-64, 65+ and all ages ) for the attack rate and each case-severity measure; the numeric entries were not recovered from the source. ]

the reason for the pattern of increase in the and over waves is not immediately apparent without further investigation. three possible hypotheses are as follows: (a) that the increase is due to the age shift in the infection attack rate away from school-age children toward adults across waves; (b) that the lack of third wave data and consequent parameterisation of some of the third wave conditional probabilities in terms of the corresponding second wave probabilities [ equation ( [ eqn_3wparam ] ) ] results in the attenuated change in severity from the second to third wave; and/or (c) that unaccounted differences in the representativeness of the different surveillance systems used in the third wave compared to the first two may have an effect on the estimated severity. these possibilities are not mutually exclusive, and the extent to which the estimated severity is reliant on each is unknown. the potential for unaccounted biases in the sero-prevalence data (sections [ sec_data ] and [ sec_lik12 ]), as well as the belief that the hpa case estimates represented underestimates, prompted several sensitivity analyses to further assess the uncertainty in the infection attack rates in the first two waves. sensitivity to the choice of data informing the denominators (the infection attack rate or the number of symptomatic infections) and to the prior distribution of was assessed. specifically, four models with different data informing and were considered: using the hpa case estimates to inform , assuming they do so unbiasedly in the first two waves (i.e.
, with ) , and using no sero - prevalence data ; the model presented here and in , assuming the hpa case estimates are biased downwards and using only the baseline and post - first wave sero - prevalence data ; as in model 2 , but using all the sero - prevalence data ( up to post - second wave ) of table 5 of section 1.1.3 of the supplementary material [ ] , assuming the hpa case estimates are biased downwards in both waves ; and as in model 3 , but assuming the sero - prevalence data are biased upwards and the hpa case estimates are biased downwards . analyses using models 1 and 2 were then repeated using three different prior distributions for the infection attack rate : , allowing the total attack rate over the two waves to be a priori on average , with prior mass in the interval ( 0.10.7 ) , and with a ratio between the two waves ; , allowing again a prior total attack rate of 0.4 ( 0.10.7 ) , but with a ratio between waves ; , allowing a prior total attack rate of 0.4 ( 0.10.7 ) , with a ratio between waves .the choice of informative priors is motivated by the total attack rates in prior pandemics , with the prior uncertainty still relatively large . found susceptible attack rates ( i.e. , proportion of susceptibles infected , as opposed to proportion of the total population ) of between and in the first wave of the 19681969 pandemic , compared to between and in the second , which motivates prior ( b ) .this prior may in fact be sceptical for the 2009 pandemic , as instead of a ratio between waves , the hpa case estimates and the severe data suggest the ratio was at least , if not or greater . however , this ratio may vary by both age and region , with london in particular experiencing a somewhat different epidemic to the rest of the country [ ] . prior ( c )therefore allows for the converse , with a greater second wave than first .the sensitivity analyses to the choice of prior distribution of the infection attack rate in the first two waves suggest the key messages from are robust to the choice of prior distribution .results were less robust to the choice of denominator data included in the model .the inclusion of the post - second wave sero - prevalence data suggested a higher infection attack rate [ 28.4%(26.030.8% ) ] than the baseline analysis [ 11.2% ( 7.418.9% ) ] , with a corresponding lower case - fatality risk in the second wave [ 0.0027% ( 0.00240.0031% ) compared to 0.009% ( 0.0040.014% ) ] .full details of these sensitivity analyses are given in section 5 of the supplementary material [ ] .recall ( section [ sec_lik12 ] ) that the samples tested post - second wave and before and after the third wave [ ] may overrepresent individuals at higher risk of infection and vaccination .the observed sero - prevalence in these samples may therefore suggest a higher infection attack rate than truly occurred .further work to obtain background information on individuals in the samples , and therefore to account for sampling biases , is underway , prompted in part by the results of these sensitivity analyses . 
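the informative dirichlet priors used in these first-two-wave sensitivity analyses can be constructed, for example, by fixing the prior mean of the total two-wave attack rate and the ratio between waves and choosing the overall concentration to give the desired spread; a sketch with hypothetical values, since the exact parameter values are not recoverable from this copy:

import numpy as np

rng = np.random.default_rng(5)

def dirichlet_prior(total_mean=0.4, wave_ratio=0.7, concentration=6.0):
    """Dirichlet(a1, a2, a3) over (wave-1 IAR, wave-2 IAR, remainder).

    total_mean    -- prior mean of IAR_1 + IAR_2
    wave_ratio    -- prior ratio IAR_1 : IAR_2 (hypothetical)
    concentration -- overall a1 + a2 + a3, controlling prior spread (hypothetical)
    """
    a12 = concentration * total_mean
    a1 = a12 * wave_ratio / (1.0 + wave_ratio)
    a2 = a12 / (1.0 + wave_ratio)
    a3 = concentration - a12
    return np.array([a1, a2, a3])

alpha = dirichlet_prior()
draws = rng.dirichlet(alpha, size=100000)
total = draws[:, 0] + draws[:, 1]                     # total attack rate over the two waves
print(alpha, total.mean(), np.quantile(total, [0.05, 0.95]))   # mean ~0.4, most mass in roughly (0.1, 0.7)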
in the third wave , a sensitivity analysis to the set of virological positivity data usedwas performed ( sections 1.2.1 and 3.1 of the supplementary material [ ] ) .the main analysis used the full positivity data , with the results of the bayesian joint regression model of the positivity and primary care consultation data ( table 13 of the supplementary material ) incorporated into the combined 3-wave model as shown in figure [ fig_sev3 ] and section 3.1 of the supplementary material .the sensitivity analysis employed instead a set of virological positivity data restricted to tests made on swabs that were collected within 5 days of an ili consultation , with corresponding results from the joint regression model in table 14 of the supplementary material .the results from including the two alternative sets of estimates from the joint regression model into the combined 3-wave model are compared in figures [ fig_csrs_sens ] and [ fig_iar_sens ] . and from the combined model , by age and source of positivity data .note the different scales on the -axes . ]the general conclusions about the age distribution of the case - severity risks and infection attack rate in the third wave are unchanged by the use of the restricted positivity data .the restricted data do imply a slightly higher and more uncertain attack rate in each age group ( figure [ fig_iar_sens ] ) , due to the higher observed positivity and smaller sample sizes .correspondingly , the case - severity risks are slightly lower in each age group in the sensitivity analysis ( figure [ fig_csrs_sens ] ) , but the greater uncertainty in the denominator does not seem to translate directly into greater uncertainty in the risks .we have extended and further developed a bayesian evidence synthesis model [ ] to characterise and estimate the severity of the 2009 pandemic a / h1n1 strain of influenza in the three waves of infection experienced in england .the model has been adapted to account for changes in the surveillance data available over the course of the three waves , considering two approaches : ( a ) a two - stage approach , using posterior distributions from the model for the first two waves to inform prior distributions for the third wave analysis ; and ( b ) modelling all three waves simultaneously , accounting for the reduction in available data by parameterising the third wave severity parameters in terms of the corresponding second wave parameters .both approaches have resulted in broadly the same three key conclusions : the age distribution in case - severity risks is `` u''-shaped , implying children aged less than a year and older adults have highest severity , although their estimates are also the most uncertain .this pattern is consistent with the increasing severity with age seen in other countries during the 2009 pandemic [ ] , where in each of these analyses , the authors did not distinguish between children under 1 year of age and those aged 14 .the pattern is also consistent with global relative risks by age of severe events compared to the general population estimated by .the age distribution of the infection attack rate changes over waves , with school - age children most affected in the first two waves and an increase in the attack rate in adults aged 25 and older from the second to third waves . 
when averaged over all ages , severity in those infected appears to increase over the three waves .the changing age distribution and apparent increase in severity over waves is consistent with estimates from the two pandemic waves experienced by other countries [ ] .it is important to note that the estimates presented here do not account for risk factors for severe influenza , nor for vaccination status nor for other preventive measures , such as social distancing , which might have an effect on severity .both the joint regression model of virological positivity and gp consultation and the full severity model would require further development to account for these factors and to be able to use the second and third wave serology data accounting for sampling biases .assessment of the effect on estimates of assumptions such as that of no influenza - related deaths occurring outside of hospital or the parameterisation of the third wave in terms of the second wave in the combined analysis is also key .the possible effect of any differences in representativeness of the various surveillance systems in the third compared to the first two waves is an issue for further investigation .the sample sizes and prior distributions chosen do not provide enough information to enable convergence of the mcmc algorithm for the severity model when taking the number of infections to be a binomial realisation from the number at a less severe level .this lack of convergence implies there may be too much uncertainty to allow identifiability of the model in this case , prompting instead the mean assumption .another area for future investigation is to assess how informative the priors are required to be or how large sample sizes need to be to enable convergence when the are stochastic . 
despite these challenges ,our bayesian evidence synthesis approach has allowed us to draw important public health conclusions , not only in characterising the severity of the 2009 pandemic , but also in shaping future research .the sensitivity analyses showed the severity estimates were robust to prior assumptions about the infection attack rate , but less robust to the choice of data to include in informing the attack rate .although the magnitude of the severity estimates varied , the conclusions of a `` u''-shaped age distribution to severity and an apparent increase in severity over waves were nevertheless robust .the sensitivity of the results has , furthermore , contributed to the initiation of a project to obtain further data to better understand the potential sampling bias in the sero - prevalence data [ ] .the evidence synthesis framework has also given us the flexibility to account for biases , using prior information on parameters representing the biases , for example .bias modelling has been an integral part of the model development , inference and criticism cycle , as have the sensitivity analyses .it is important , in any analysis , to understand the contribution of each item of evidence , whether in the form of model structure , prior distribution or data , in driving inferences .it is particularly crucial when an analysis relies on informative priors for identifiability , as is the case here .another key aspect of model criticism in an evidence synthesis is to assess the consistency of the various data sources , not only with each other , but also with the model structure .it is possible , and indeed common , in syntheses of multiple sources of evidence to find both that some parameters are only barely identified by the data and that other parameters are informed indirectly by more than one data item . in the latter case , there is clearly potential for different sources of data to conflict , providing inconsistent evidence on a particular parameter [ ] .such conflicts need to be detected , measured , understood and resolved .conflict diagnostics , in the form of cross - validatory posterior prediction , for the first wave confirm the inconsistency between the serology data and the hpa estimates of the number symptomatic if taken at face value [ ] . in our main analysis , we addressed the conflict by incorporating a bias parameter for the hpa estimates , whereas in the sensitivity analyses , we also considered a bias parameter for the serology data .further preliminary work on measuring conflict seems to confirm the suggestion of the sensitivity analyses that the severe end data does indeed conflict with the evidence on the attack rates .given the uncertainties in the attack rates , understanding and resolving this conflict is an important next step .the iterative process of fitting , criticising and further developing an evidence synthesis model to address conflicts , as we have done and are continuing to do here , leads automatically to internal consistency .by contrast , external validation is much more challenging in an evidence synthesis framework . as already noted , due to identifiability issues common to evidence syntheses , it is rare to find external data against which to validate such data are instead used in the synthesis . 
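as a rough illustration of the cross-validatory posterior prediction mentioned above (not the diagnostics actually used, which are more involved), one can hold out a single data source, generate posterior predictive replicates of it from a model fitted to the remaining evidence, and compare the held-out observation to those replicates; all quantities below are hypothetical.

import numpy as np

rng = np.random.default_rng(6)

# hypothetical posterior draws of the number symptomatic obtained WITHOUT the held-out source
n_sym_draws = rng.lognormal(mean=np.log(3.5e5), sigma=0.15, size=20000)
d_sym = 0.12                                   # detection probability (hypothetical)

obs = 30000                                    # held-out observation, e.g. an estimate of detected cases

# posterior predictive replicates of the held-out observation and a two-sided conflict p-value
rep = rng.binomial(np.round(n_sym_draws).astype(int), d_sym)
p_conflict = min(1.0, 2.0 * min(np.mean(rep <= obs), np.mean(rep >= obs)))
print(p_conflict)      # values near 0 flag conflict between the held-out source and the rest of the evidence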
despite these challenges , an evidence synthesis using a complex probabilistic modelprovides a powerful approach to estimating influenza severity when the available evidence comes from multiple sources that are incomplete and biased .the embedding of a `` pyramid '' approach to severity estimation within an evidence synthesis framework , as presented here , is easily adapted to other contexts , both within epidemiology , where many diseases may be observed at different levels of severity or diagnosis , and in other fields where observation occurs at different levels , for example , quality control or ecology .we thank colleagues at public health england and the royal college of general practitioners , particularly michele barley , who provided data for this analysis .we are grateful to all gps who participate in the rcgp weekly returns service .we would particularly like to acknowledge professor j. r. norris ( university of cambridge ) and professor marc lipsitch ( harvard school of public health ) for advice on early versions of this analysis .
knowledge of the severity of an influenza outbreak is crucial for informing and monitoring appropriate public health responses , both during and after an epidemic . however , case - fatality , case - intensive care admission and case - hospitalisation risks are difficult to measure directly . bayesian evidence synthesis methods have previously been employed to combine fragmented , under - ascertained and biased surveillance data coherently and consistently , to estimate case - severity risks in the first two waves of the 2009 a / h1n1 influenza pandemic experienced in england . we present in detail the complex probabilistic model underlying this evidence synthesis , and extend the analysis to also estimate severity in the third wave of the pandemic strain during the 2010/2011 influenza season . we adapt the model to account for changes in the surveillance data available over the three waves . we consider two approaches : ( a ) a two - stage approach using posterior distributions from the model for the first two waves to inform priors for the third wave model ; and ( b ) a one - stage approach modelling all three waves simultaneously . both approaches result in the same key conclusions : ( 1 ) that the age - distribution of the case - severity risks is `` u''-shaped , with children and older adults having the highest severity ; ( 2 ) that the age - distribution of the infection attack rate changes over waves , school - age children being most affected in the first two waves and the attack rate in adults over 25 increasing from the second to third waves ; and ( 3 ) that when averaged over all age groups , case - severity appears to increase over the three waves . the extent to which the final conclusion is driven by the change in age - distribution of those infected over time is subject to discussion . , , , , , ,
the need for methods described in this paper arose during development of the search for continuous gravitational wave signals . even though aimed at a specific purpose of following up outliers , they have much wider applicability . to that endwe will present a simplified description that omits some technicalities specific to searches for continuous gravitational waves . the algorithm detects gravitational waves by computing power received from a particular direction at a certain frequency and spindown .similar approaches include hough and stackslide searches .also , searches have been carried out with algorithms using substantially larger coherence lengths such as -statistic .the power - based methods are computationally efficient and allow all - sky blind searches to be performed with the sensitivity scaling as fourth root of the amount of analyzed data .in contrast , coherent searches scale as but become impractical for moderate values of .they also rely heavily on knowing the exact form of the expected signal - an assumption that we feel is overly bold when one is looking for a form of radiation for which no prior direct observation exists .there are searches that fill the space between these extremes .one way is to combine incoherently an output of multiple coherent searches .another approach is to perform a hierarchical search that follows up outliers with longer baseline coherent investigation .both employ longer coherence baselines than power - based methods .thus , in order to make a successful detection , one needs to overcome a `` potential barrier '' in computational costs that separates a blind search from an easy verification of a successful candidate .one reason for difficulties with current coherent methods is that they are optimized with a specific signal waveform in mind , and then the search is iterated over many signal templates .the templates often overlap and , in fact , oversampling is routinely used to ensure that no signals are missed .this design is well warranted if sufficient computational power exists to exhaust the entire search space - but this is a situation current gravitational wave searches are * not * in .furthermore , maximization alone is not necessarily the most optimal statistic .we believe that an approach that combines attention to sensitivity and computational efficiency with more agile control over accepted waveforms is both more physically prudent and computationally accessible . to illustrate this , we present a _loosely coherent _ method that is based on estimating power for a family of signal waveforms at once .for the purposes of this paper we will assume that our entire dataset has been broken up into short portions each of which has been subjected to the fourier transform , and we are looking for a signal of constant amplitude that would land into a single frequency bin in -th short fourier transform with varying phases . if the phases were known in advance we could compute the power of a coherent sum the high values of which would indicate the presence of the signal .there is a large body of literature that describes designing statistics with optimal signal - to - noise ratios ( snr ) , in particular . in many cases a part of signal evolution ( such as doppler modulation induced by motion of the earth )is known in advance .if we assume that this contribution has been factored out then the coherent power sum reduces to the case .the set of all possible phases ( modulo ) forms an -dimensional torus on which is a smooth function . 
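as a concrete, if simplified, illustration of this power sum (a sketch in python with hypothetical amplitudes and phase evolution, not tied to any detector), the coherent statistic demodulates the per-sft bin values with the assumed phases before summing, whereas the semi-coherent statistic sums the per-sft powers directly:

import numpy as np

rng = np.random.default_rng(7)

n_sft = 1000
phi = 0.02 * np.arange(n_sft) ** 2                       # some slowly varying signal phase (hypothetical)
signal = 0.3 * np.exp(1j * phi)
noise = (rng.normal(size=n_sft) + 1j * rng.normal(size=n_sft)) / np.sqrt(2)
z = signal + noise                                       # one relevant frequency bin per SFT

def coherent_power(z, phases):
    """power of the phase-corrected coherent sum (phases assumed known in advance)"""
    return np.abs(np.sum(z * np.exp(-1j * phases))) ** 2

def semi_coherent_power(z):
    """incoherent sum of per-SFT powers (no phase information used)"""
    return np.sum(np.abs(z) ** 2)

print(coherent_power(z, phi), coherent_power(z, np.zeros(n_sft)), semi_coherent_power(z))

with the correct phases the signal contribution to the coherent power grows as the square of the number of sfts, while a wrong phase model or the incoherent sum grows only linearly, which is the origin of the sensitivity gap discussed below.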
in practice, phases can not be determined exactly ahead of time , but rather obey a set of constraints .such family of signals would sweep a submanifold , possibly with boundary .our goal is then to find a statistic that achieves high values when a signal from is present and low values otherwise .one way to do that is take the maximum of over constrained to the submanifold .another approach is to view the unknown parameters as random , with the phases forming stochastic process , usually highly correlated .it is important to note that for either detection or establishment of upper limits we only need to know whether the signal is present , as the parameter estimation can be performed by partitioning into subsets .we call this a _ loosely coherent _ approach , as instead of trying to find signals with a certain pre - determined set of phases , we are content with any signal that has phase evolution from .the choice of the set and the statistic is then up to the designer of the search thus providing the necessary freedom to satisfy conflicting demands of efficiency in computation and signal recovery . of course ,any practical detection algorithm , even designed with full knowledge of expected signal , will respond to data with signals from a wider set of phases than physically expected .tailoring the set at the design stage , rather than simply characterizing it after implementation , allows finer control over which astrophysical signals one can detect and particulars of template placement .the most straightforward way to construct a loosely coherent statistic is to maximize over the set of possible phases .this is a classical optimization problem with a quadratic objective that possesses several difficulties : first , we are trying to _ maximize _ a non - negative definite quadratic function - thus our problem is inherently non - convex is called convex if the set of points is convex . in particular , for a differentiable , this assures that the gradient descent method can not become stuck in a valley . ] , even for small portions of .this precludes the use of well known optimization methods like gradient descent .secondly , the dimension is very large , with small searches starting at .the third difficulty is more subtle and is due to the nature of interesting signal families .these usually involve phases that evolve moderately fast with and can wrap around numerous times .a typical example is a linear evolution produced by mismatch in frequency given by with on the order of . because of the wrap around, a small uncertainty in for some can result in very large uncertainty in for . in the limit the embedding of into the torus ( considered with norm in which it is not compact ) stops being differentiable or continuous altogether .the properties of the map as approaches infinity are tightly connected with the scalability in the number of templates . to describe this connectionwe need some well - known tools from functional analysis .let be a bounded ( i.e. compact ) finite dimensional manifold , possibly with boundary , with a metric . as mentioned before, we consider the torii with metric let be the family of embeddings describing phase evolution for successive sfts .our goal is to select templates in such their image under forms an -net - any point in is within of an image of some template .we distinguish three fundamentally different situations : * the map is lipschitz , i.e. it satisfies the following property : any continuously differentiable map is lipschitz . 
in this case, we can cover with any desired tolerance by constructing a set of templates in which forms an -net . a well - known fact from topology is that it is possible to find coverings with template count scaling as where is the hausdorff dimension of .+ thus , we see that the template count does not depend on and is proportional to - the best we could hope for .an example of such a map is given by where is a fixed parameter ( such as earth rotation frequency ) and and are bounded search parameters .a physically relevant example is given by phase shifts from amplitude response of the detector .* the map is known to be continuous , but not lipschitz . in this case , we can still find a suitable template set for any desired tolerance , but the spacing of the templates in will not depend linearly on as it does in the lipschitz case .we thus retain independence of but the number of required templates can grow faster than . +a mathematical example of such a map is given by the required template count grows as .we are not aware of any physically motivated search for continuous gravitational radiation that has parameters of this form . *the map is not continuous . while this can be due to trivial causes such as partial breaks in otherwise lipschitz map , in general it would not be possible to find a finite template set to cover . for the finite case the template count will grow with + an example of such a map is given by frequency evolution discussed above : for which the required template count scales as .one way to deal with these difficulties is to partition into small enough sets so that maximization can actually be carried out and combine the results afterwards .further computational savings result from picking described by only a few necessary parameters and overcoming their scaling properties with large computing power .the coherent searches for gravitational radiation such as can be viewed as examples of this approach .another way to bring computational costs under control is to replace with a related function with a smaller lipschitz constant .one can achieve this by averaging over or its subsets , which is equivalent to computing expectation value of over some assumed distribution on .this spreads the signal response over a larger area , but we only have to make the computation once for each subset .for ease of exposition we use the usual lebesgue measure and average power rather than a more complicated statistic such as likelihood . in the most extreme casewe just average away the phases yielding the conventional semi - coherent method : if the phases are truly random this statistic will perform better in the presence of well behaved noise than computation of the maximum .a more conservative approach will limit phase evolution : yielding the following statistic ( computed using variables ) : which interpolates between the fully coherent sum for and the semi - coherent case . the allowed spacing between frequency templates increases with , and in the limiting case determined by the value of in units of frequency bins .this has proven to be a good initial estimate of the spacing required by searches where .this method will lose some power if the true frequency of the signal at the time corresponding to coefficient is not a harmonic sampled by the fourier transform . 
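for the kernel written out later in the text ([ eqn : exp_kernel ]), the resulting statistic is a quadratic form in the per-sft bin values; a small sketch (hypothetical signal and noise, with the signal assumed to sit exactly in the sampled frequency bin) showing the interpolation between the coherent and semi-coherent limits:

import numpy as np

def exp_kernel(n, alpha):
    """K_jk = exp(-alpha |j - k|); alpha -> 0 gives the coherent (all-ones) limit,
    large alpha approaches the semi-coherent (identity) limit."""
    idx = np.arange(n)
    return np.exp(-alpha * np.abs(idx[:, None] - idx[None, :]))

def loose_power(z, alpha):
    K = exp_kernel(len(z), alpha)
    return np.real(np.conj(z) @ K @ z)        # sum_{jk} K_jk z_j* z_k

rng = np.random.default_rng(8)
n = 500
z = 0.2 * np.exp(1j * 2e-5 * np.arange(n) ** 2) \
    + (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

for alpha in (0.0, 0.02, 0.2, 20.0):
    print(alpha, loose_power(z, alpha))

the sketch assumes the signal frequency coincides exactly with the sampled bin; the loss of power when it does not is the issue addressed next.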
to avoid this , one can replace with more precise values estimated from the dirichlet kernel .this effectively makes sure that the point with all phases belongs to - a condition we assume from now on .it is also possible to use the same approach to reduce the influence of periodic changes of underlying frequency , such as caused by mismatch in sky position and the resultant doppler shifts .assume where is some unknown ( and irrelevant ) phase , and are known and fixed ( such as from sidereal doppler modulation ) and is allowed to vary , subject to .then as we have chosen a simple power sum as a starting point , our averaged statistic will always have the form for some kernel and is thus similar to cross - correlation search . as we will see later the efficient computation of the sum for small best done in a manner different from the cross - correlation statistic .the statistic can be rewritten as a scalar product of the vector of input data with the image of under the operator which square is given by the kernel : from this point of view acts as a filter rejecting signals outside the expected set , after which we take the usual semi - coherent sum .for example , can be chosen as a low - pass filter given by a or lanczos kernel .this would admit signals with phases varying slower than the filter cutoff frequency . for a practical implementation the main point of concernis the ability of the statistic to tolerate frequency mismatch , as it directly impacts the number of templates . for this purposethe low pass filters are optimum , tolerating mismatch values up to a cutoff frequency and rejecting signals with faster varying phases .a more sophisticated approach is to assume a distribution on the set of allowed phases and then treat our signal as a highly correlated stochastic process .since the data analysis is typically carried out after the data collection is complete , one is not restricted to causal filters alone and , in the case of stationary noise and limited phase evolution , we obtain a low pass filter as a solution . the loosely coherent statistic based on a filteris optimal in the following idealized situation : suppose our data consists of a sum of stationary mean zero gaussian noise of known variation ( which is typically easy to estimate from data known not to contain any signals ) and unknown band - limited signal of limited power , with no additional information on the signal form or phase evolution .a fourier transform will separate our data into high frequency area where there is no signal and which can be safely discarded and low frequency area which phase information is irrelevant due to the signal having an arbitrary spectral shape .we are thus left with a problem of deciding whether our low - frequency data is consistent with gaussian noise alone or there is an arbitrary additive signal present .both the limited power condition and the structure of gaussian noise are symmetric under unitary transformations .thus , if no other restrictions are present , the only meaningful information is the power contained in the low - frequency data . while this fairly standard argument bridges both frequentist and bayesian approaches , it does have a number of limitationsthe most severe is that the symmetry is lost in case of non - stationary noise .additionally , a family of physical signals can be expected to have a spectrum more interesting than a plain flat - top .we will now qualify the phase shift evolution that one expects to encounter in current searches . 
at the moment , the searches analyze data from hz through hz , accounting for spindowns as large as hz / s .the analysis is done using short fourier transforms ( sfts ) of s length , which have overlap in some searches , no overlap in others and often have gaps . for this paperwe will assume that the time interval between and is s. we will assume that have already been adjusted so that the template with all is in .there are several sources of non - trivial phase shifts , which we will describe in terms of maximum expected difference between nearby phases : * frequency mismatch - a template possessing frequency different from by will experience a linear phase evolution of * sky position mismatch - a mismatch in sky position will produce a slightly different doppler shift . on short time scales this is dominated by earth rotation ( with velocity ) and is periodic in time and linear in sampled frequency : where is the maximum expected mismatch in radians , with practical values usually less than . *spindown mismatch - a spindown different from by will produce a linear evolution of the frequency and , thus , a quadratic change in phase : here shows maximum variation of time variable with respect to reference time .if the reference time is positioned at the center of the run , then is half the time base . *source frequency evolution - the source signal can be modulated by a nearby orbiting object .assuming circular orbit with radius ( expressed in astronomical units ) and using for the ratio of object mass to the star mass ( both expressed in units of solar mass ) the angular frequency of the modulation is : and the maximum doppler shift from the central body is the worst case change in phase induced by this motion over time and assuming radiating frequency is : the curiously small size of is due largely to the small value of product . for a search that assumes a specific phase evolution over a long time intervalthis would be much larger .the loosely coherent search is not completely immune from this effect - it will lose power when enough phase accumulates during integration for the signal to escape into nearby frequency bin .this suggests that searches looking for more extreme systems should use coarser frequency bins , smaller and tighter .table [ tab : phase_shift ] shows the expected phase shift for conditions commonly encountered in present day searches .[ tab : phase_shift ] phase shift cause & hz & hz & hz & hz + frequency mismatch of & & & & + sky position mismatch of & & & & + spindown mismatch of hz / s for y & & & & + source modulation for and au & & & & +we will now turn to efficient computation of the loosely coherent statistic .given reduced sensitivity to perturbations in search parameters compared with purely coherent methods and corresponding reduction in the number of templates , the quadratic cost of computing the sum ( [ eqn : kernel_statistic ] ) is not completely unreasonable . 
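the entries of table [ tab : phase_shift ] can be approximated from standard order-of-magnitude expressions (our reconstruction, since the displayed formulas and numerical values are not recovered in this copy): a frequency offset held over one sft spacing advances the phase by 2 pi times the offset times the spacing, a sky-position error produces a doppler offset of order f (v_rot / c) times the angular mismatch, and a spindown error acting over a time T from the reference epoch produces a frequency offset equal to the spindown mismatch times T. the values below are hypothetical but representative:

import numpy as np

# hypothetical but representative values
dt      = 1800.0                    # spacing between consecutive SFTs, s
f       = 1000.0                    # signal frequency, Hz
df      = 2.8e-5                    # frequency mismatch, Hz (about a twentieth of a 1/1800 Hz bin)
dtheta  = 0.01                      # sky-position mismatch, rad
dfdot   = 1.0e-11                   # spindown mismatch, Hz/s
T_ref   = 0.5 * 4.0 * 30 * 86400.0  # time from the reference epoch, s (half a four-month run)
v_rot_c = 1.55e-6                   # Earth rotation speed at the equator divided by c

dphi_freq = 2 * np.pi * df * dt                       # frequency mismatch
dphi_sky  = 2 * np.pi * f * v_rot_c * dtheta * dt     # worst-case Doppler offset from sky mismatch
dphi_spin = 2 * np.pi * dfdot * T_ref * dt            # frequency offset accumulated by the spindown error

print(dphi_freq, dphi_sky, dphi_spin)                 # phase advance per SFT spacing, radians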
noticing that the kernel is a positive symmetric matrix , one expects to do better by finding eigenvectors and eigenvalues of and discarding eigenvectors with small eigenvalues .this will make the computational cost bilinear in and the number of remaining eigenvectors .let us consider , as an example , the case of limited phase evolution with the previously computed kernel where we introduced .when we are dealing with a fully coherent case and the kernel has only one eigenvector with non - zero eigenvalue , while for we have the semi - coherent case and is the identity matrix for which we have to use the entire basis .it seems reasonable to expect that for small we will have a few - eigenvector situation , while for large we will have something similar to a semi - coherent sum , where it makes sense not to truncate by eigenvalue but rather cut side diagonals of that are small .it turns out that the set of `` small '' values is quite large . to see why this is so , first examine the plot of versus phase mismatch on figure [ fig : alpha_delta ] . even for a phase mismatchas much as the value of is relatively small at . on phase mismatch ,height=302 ] secondly , consider the continuous version of our kernel : (u)=\int^\infty_{-\infty } e^{-\alpha |u - v| } f(v ) dv\ ] ] the operator is given by a convolution of with .as is well - known , fourier transform will convert convolution into multiplication .thus the spectrum of the convolution operator is given as fourier transform of its kernel .the functions can be considered as eigenvectors of in appropriate functional space ( e.g. ) : (u)=\int^\infty_{-\infty } e^{-\alpha |u - v| } e^{i\lambda v } dv=\frac{2\alpha}{\alpha^2+\lambda^2}e^{i\lambda u}\ ] ] the eigenvalues have the familiar lorentzian form with quadratic decay . in hindsight , this is not surprising as the condition is similar to the requirement that the signals we are looking for are band limited . phase shift & & & + & 2 & 3 & 4 + & 7 & 22 & 42 + & 18 & 81 & 162 + & 66 & 324 & 647 + & 148 & 737 & 1474 + & 351 & 1751 & 3502 + [ tab : eigencount1 ] phase shift & & & + & & & + & & & + & & & + & & & + & & & + & & & + [ tab : eigencount5 ] tables [ tab : eigencount1 ] and [ tab : eigencount5 ] show the number of eigenvectors needed to approximate given by formula [ eqn : exp_kernel ] for various numbers of equally spaced sfts and some typical values of .the approximation is done using operator norm which is equivalent to counting the number of eigenvalues that are at least 1% for table [ tab : eigencount1 ] ( 5% for table [ tab : eigencount5 ] ) of the largest eigenvalue of .while the fraction of eigenvectors does rise linearly with and thus the computational requirements are still quadratic , said fraction is a rather small number for and in practical implementations ( especially on processors with vector arithmetic ) the scaling will be close to linear . for larger values of one might wish to go with a different algorithm .in particular , it makes sense to consider decompositions using non - orthogonal vectors , the simplest of which is obtained by truncation of side diagonals . 
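the eigenvalue counts in tables [ tab : eigencount1 ] and [ tab : eigencount5 ] can be reproduced in outline by diagonalising the kernel numerically and counting eigenvalues above 1% or 5% of the largest one; the parameter values below are hypothetical, since the table headers are not recovered in this copy:

import numpy as np

def significant_eig_counts(n, alpha, thresholds=(0.01, 0.05)):
    """eigenvalues of K_jk = exp(-alpha |j-k|) above each threshold times the largest eigenvalue"""
    idx = np.arange(n)
    K = np.exp(-alpha * np.abs(idx[:, None] - idx[None, :]))
    w = np.linalg.eigvalsh(K)                  # symmetric kernel, real spectrum
    return [int(np.sum(w >= t * w.max())) for t in thresholds]

for n in (100, 500, 1000):                     # hypothetical numbers of equally spaced SFTs
    for alpha in (0.05, 0.2, 0.8):             # hypothetical values of the phase-drift parameter
        print(n, alpha, significant_eig_counts(n, alpha))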
corresponding to first four largest eigenvalues for and ,height=302 ] as we mentioned before , in the continuous case the eigenvectors are simple sine waves .the discrete case is nearly sinusoidal .the eigenvectors of the kernel [ eqn : exp_kernel ] corresponding to first four largest eigenvalues for and are shown on the figure [ fig : eigenvectors1000 ] .the eigenvector corresponding to the largest eigenvalue is not constant and can be regarded as a window one applies to the data in order to make the usual coherent sum respond to signals from .the eigenvector decomposition was done numerically using r .this idea can be exploited to speed up eigenvector decomposition , by analytically transitioning into the basis of pure sine waves and then discarding entries of from higher order modes .the remaining matrix of smaller dimension can then be diagonalized with conventional numerical techniques .it is interesting to consider the case of very short fourier transforms of a few seconds in length and correspondingly small .the phase shifts from most sources ( except for frequency mismatch ) will be small as well , and computation of can be performed by taking a fourier transform of the input data and then summing up power in low frequency harmonics weighted by eigenvalues of .this has close relation to the resampling technique .the resampling implementation of -statistic operates by heterodyning 30 minute sfts to a desired frequency , inverting the fourier transform to obtain a time series which is stitched together and then band - limited and downsampled .the resulting time series is converted into detector frame which allows efficient computation of -statistic using fourier transform .another way to obtain the same time series is to start with shorter sfts which frequency bins are large enough to accommodate doppler shift .a time series of frequency bins of these short sfts is then just another way of heterodyning our input data with the advantage of bypassing the need for inverse fourier transform .if the frequency band that is being searched is significantly smaller than the size of initial frequency bins the time series can be band - limited and downsampled just as done in .the conversion of heterodyned time - series into detector frame consists of two parts : removal of the phase shift from signal evolution due to intrinsic effects or earth motion , which is also done by loosely coherent method , and interpolation in order to obtain evenly spaced time series suitable for fast fourier transform algorithm . the computation of -statistic involves summing three terms quadratic in the elements of our time series with coefficients that depend on time position of the source and the detector but not the amplitude or polarization of the expected signal .this can be viewed as computation of a specific kernel which rank is at most .if we take the interpolation algorithm into account the rank will increase but will still be much smaller than kernel dimension .the same approach can be used to compute loosely coherent statistic where we might need to use additional terms to accommodate kernels with larger rank . 
in return, the statistic can be made more tolerant of mismatch in source parameters , such as sky location .it must be said that the sensitivity of a given method is best judged from a search made on real data , as computational efficiency and practicalities of detector artifacts in the input data have often a much stronger impact than an extra few percent gained by fine - tuning the algorithm with analytical considerations that assume gaussian noise . nevertheless , it is useful to have an idea of what to expect in the perfect situation as a starting point for practical applications .we will concentrate on the case of perfectly coherent signal and how the performance varies between the extremes of coherent and semi - coherent power sums. the standard methods of filtering theory can be employed to obtain a rough estimate . as we mentioned before , the phase evolution condition is closely related to the condition that our signals are band limited . in this case , the rejection of noise outside the acceptance band results in improvement in the signal - to - noise ratio compared to the usual semi - coherent case which is sensitive to all signals within the frequency bin of the original sfts .the acceptance band is narrowed down by a factor inversely proportional to the number of sfts it takes for the phase to make a full turn ( not to exceed , of course , the total number of sfts available ) .thus , given a fixed number of sfts , we expect the improvement in the signal - to - noise ratio to scale as tempered by the non - linear effects of our statistic .this is illustrated on figure [ fig : loose_snr ] that shows results of simulation evaluating signal - to - noise ratio gain for limited phase evolution statistic as we decrease for a coherent signal .the simulation was performed using sfts which were composed of gaussian noise with standard deviation and a constant signal with amplitude which results in the average signal - to - noise ratio of for a semi - coherent search .the statistic was computed according to the formula where the kernel was either an identity matrix for semi - coherent case , a matrix with in all cells for the coherent case or given by the formula [ eqn : exp_kernel ] for the loosely coherent case .the signal - to - noise ratio in this simulation was defined as the value of the statistic minus the average value obtained on noise alone and divided by the standard deviation of values produced by pure noise : here mean and standard deviation were taken over independent realizations of noise .all of the statistic values are described by a weighted -squared distribution which depends on . for large , however , it is close to a gaussian distribution as well due to the central limit theorem . to illustrate the change in the distribution of our statisticwe show and quantiles of the signal - to - noise ratios obtained as well as the mean .the vertical axis is logarithmic , so the spread in signal - to - noise ratios increases as becomes smaller . 
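The structure of that simulation can be sketched as follows, assuming the quadratic form P = sum_ij K_ij z_i conj(z_j) for the statistic. The number of SFTs, the noise level, the injected amplitude and the number of noise realizations below are illustrative stand-ins rather than the values used in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def statistic(z, kernel):
    """Quadratic power statistic P = sum_ij K_ij z_i conj(z_j) (real part)."""
    return np.real(np.conj(z) @ kernel @ z)

def snr(kernel, n_sfts=200, n_trials=2000, amp=0.5):
    """Deflection SNR: (mean with signal - mean on noise) / std on noise."""
    noise_vals, signal_vals = [], []
    for _ in range(n_trials):
        noise = (rng.standard_normal(n_sfts) + 1j * rng.standard_normal(n_sfts)) / np.sqrt(2)
        noise_vals.append(statistic(noise, kernel))
        signal_vals.append(statistic(noise + amp, kernel))   # constant, fully coherent signal
    noise_vals, signal_vals = np.array(noise_vals), np.array(signal_vals)
    return (signal_vals.mean() - noise_vals.mean()) / noise_vals.std()

if __name__ == "__main__":
    n = 200
    idx = np.arange(n)
    kernels = {
        "semi-coherent (identity kernel)": np.eye(n),
        "coherent (all-ones kernel)": np.ones((n, n)),
        "loosely coherent (exponential, alpha=0.1)":
            np.exp(-0.1 * np.abs(idx[:, None] - idx[None, :])),
    }
    for name, K in kernels.items():
        print(f"{name}: SNR = {snr(K, n_sfts=n):.1f}")
```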
.the upper , central and lower curves show 90% quantile , mean and 10% quantile of multiple simulation runs ., height=302 ] the flattening out of the curve for small is due to different scaling regimes near the extremes of coherent and semi - coherent statistics .this can be illustrated by considering a semi - coherent statistic that operates on sfts which are coherently combined in stretches of sfts each and the results are combined incoherently .then the scaling law for the signal - to - noise ratio is : now suppose that is a certain fraction of . then the scaling is as our statistic is power based the sensivity will scale as {\alpha}\sqrt{n})$ ] .the fourth root in has a really slow growth .for example , for it is only - so for less than a factor of loss in sensitivity the coherence length can be dropped by a factor of .an initial implementation of the loosely coherent statistic was done within the framework of the powerflux program .this implementation provided practical experience with a loosely coherent search and addressed the problem of following up outliers from the all - sky powerflux search over ligo s fifth science run .as the underlying code base was not designed with the loosely coherent search in mind , the code has a number of inefficiencies .in particular , the double sum in the statistic was computed by brute force . nevertheless , the speed was sufficient to quickly carry out searches in disks of radians radius on the sky over sfts split evenly between h1 and l1 detectors .the powers from individual detectors were combined incoherently to make the comparison to semi - coherent code more fair .the nearby sfts were separated by min . in practical data ,the sfts are usually overlapped , but there are can also be gaps in the data . the min constant was chosen as a reasonable worst case .while the analysis of actual interferometer data is still underway , we can report on results of simulations using gaussian data .for these simulations we used a lanczos kernel with parameter : this kernel naturally vanishes for widely separated sfts which makes this a variant of cross - correlation search , albeit with particularly large number of off - diagonal entries , which is further increased by the overlap of nearby sfts that is usually employed by powerflux .we explored values of as small as which involves summing up to diagonals when working with overlapped sfts .for these values of the required computational time scales as square of observation time ( for time bases several months and larger ) and as a cube of covered frequency range .figures [ fig : h0_vs_fbin ] and [ fig : snr_vs_h0 ] show results of monte - carlo injection run assuming a static source location ( right ascension 2.0 , declination 1.0 , spindown 0 ) and a linearly polarized signal .this choice was made to increase readability of the plots as all - sky injections with arbitrary polarizations inject different amount of power in the interferometer making the curves wider .the injections were made into gaussian data that was filtered to simulate hann windowed short fourier transforms ( sfts ) .the assumed frequency range varied from to hz and sft frequency bin size was hz .the 95% confidence level upper limits are produced by powerflux code for a set of 501 frequency bins given a particular direction on the sky and a spindown value .the results are then maximized over a set of polarizations and small area on the sky around the injection point .this follows the analysis method used in and .both semi - coherent ( power 
only ) and loosely coherent algorithms proceed by sampling discrete range of frequencies with configurable spacing in fractions of sft bin size .figure [ fig : h0_vs_fbin ] compares how the mismatch between the actual injected frequency and the sampled frequency affects upper limits produced by semi - coherent and loosely coherent codes .the frequency spacing was set at sft bin and the injected strain value was fixed to .we see that a loosely coherent search with has an initial flat response for small mismatch in frequency which is followed by rapid decay to values below injected strain . in contrast , the semi - coherent search shows only minor reduction in the upper limit which is fully compensated by built - in correction factor .figure [ fig : snr_vs_h0 ] compares the signal to noise ratios ( snrs ) of semi - coherent and loose - coherent methods .the frequency spacing of the loosely coherent search was reduced to of the sft bin which insures correct reconstruction of the upper limit for the entire range of weak and strong signals . because of the larger number of templates , the snr achieved on pure noise is higher for the loosely coherent search than that of the semi - coherent search . for signals above noise the loosely coherent search produces signal - to - noise ratios on average % larger than semi - coherent one .we have discussed the problem of detecting a family of signals from the point of view of computational efficiency and presented a method of creating a statistic that is sensitive to the entire family or its subset .two simple examples were considered which showed close ties to well - known methods of matched filtering , cross - correlation and semi - coherent sums .there are several directions of further study : * the prototype large implementation shows feasibility of the overall method , but does not provide information on the overall computational efficiency .we plan to develop a dedicated small code to be used in targeted searches that cover small sky area ( such as galactic center or globular clusters ) .this should provide experience with scalability properties of the loosely coherent method .* the average of was used to make the maximization computationally tractable .in fact , for small the maximization can be carried out directly .it is worthwhile to investigate the possibility of combining the two techniques .* for the case of the set given by conditions and assuming small the maximization over can be carried out assuming .this converts the problem into the discrete domain and makes it amenable to binary optimization methods which have seen much progress in recent years .a particularly interesting observation is that for a noise dominated signal the function to be optimized has random coefficients , so an optimization method that works only on a certain proportion of objective functions can yield useful results .this work has been done while being a member of ligo laboratory , supported by funding from united states national science foundation .the simulations were completed on the wonderful atlas cluster at albert einstein institute , with special thanks due to bruce allen , carsten aulbert , henning fehrmann and miroslav shaltev .the author has greatly benefited from discussions with his colleagues , in particular joe betzweiser , chris messenger and keith riles .we are greatly thankful to the referee for many useful comments and suggestions .this document has the ligo document number p1000015 .searches for periodic gravitational waves from unknown isolated 
sources and scorpius x-1 : results from the second ligo science run , abbott b ( the ligo scientific collaboration ) , _ phys . rev . d _ * 76 * , 082001 ( 2007 )
einstein@home search for periodic gravitational waves in ligo s4 data , abbott b ( the ligo scientific collaboration ) , _ phys . rev . d _ * 79 * , 022001 ( 2009 )
we introduce a `` loosely coherent '' method for detection of continuous gravitational waves that bridges the gap between semi - coherent and purely coherent methods . explicit control over the accepted families of signals is used to increase the sensitivity of a power - based statistic while avoiding the high computational costs of conventional matched filters . several examples as well as a prototype implementation are discussed .
in this section , we review the literatures that study the impact of the media connections and media networks on financial and economic matters . on the top of that , we make a comparison between the media connection and other pairwise measures , such as simple correlation and distances , and we show that by looking at the stocks from a network angle , the media will provide us more soft information beyond just media coverage and news tones , e.g. interrelations , centrality , and determinants . media connection , by definition , is a connection that built via news stories which may through explicit mentions or implicit affections .the explicit mentions , also known as media co - occurrence , is the most natural way of formulating the connectivity of two entities . studied the social network inferred from the co - occurrence network of reuters news .they show that the network exhibits small - world features with power law degree distribution and it provides a better prediction of the ranking on importance " of people involved in the news comparing to other algorithms . studied the cross - predictability of stock returns by identifying the economic linkage from co - mentions in the news story .they constructed a linkage signal using the weighted average of the connected stock returns and they find that the linked stocks cross - predict one another s returns in the future significantly , and the predictability increases with the number of the connected news . apart from the explicit mentions , the connection may also be built through implicit affections .one of the most popular channels is the industrial chain . as shown in , economic links among certain individual firms and industriescontribute significantly to cross - firm and cross - industry return predictability . extends the perspective of by defining a connection between industries with the predictability of returns . through these industrial interdependencies ,the news that conveys information on one industry will also percolates into the other industries .further , due to the competitive relation of stocks within the industry , the good ( bad ) news to one stock will be bad ( good ) news to its competitors .in addition , business interaction is another important channel that transfers news information from one firm to another .based on media connections , we can formulate a media network by taking the whole picture of the connected stocks as a connected graph with news tones tagged on each stocks .the inter - relationships between stocks are presented by the aggregated connection scores .different from correlation , the connection score is only a measure of concurrent relationship , which does not depend on the outside information . to see that, we consider the following example : suppose there are two connected news for two stocks , and the news tones for each stock is given by : where a positive number indicates a positive news tone for this stock . 
by cross - section correlation, we will have the correlation between the stocks is -1 .however , this is not true .as we can see , the news tones for these stocks are all positive in both news , which may indicates a positive co - movement relationship .therefore , correlation coefficient is not a proper tool for describing the news connectivities based on this simple example .as simple correlation fails in describing media connections , people may come up with euclidean distance .however , distance is incapable of capturing the sentiment information in news tones , and the following example explains our point . for above cases , the euclidean distance informs us that the distance between these two pairs of stocks are the same .however , we can deduce that the prices of stock 3 and 4 are likely to be co - moving while the prices of stock 1 and 2 may be affected oppositely . as a result , a proper connection score that can correctly reflect the relationship between connected stocks based on the news tones is needed , and we will show that our construction of connection scores will be able to retain the information given in the news completely in the next section .in this section , we introduce the data sources and explain the methodology for constructing the media connection index . beyond that, we further explains the differences between the average correlation index in and the advantages of our measure over theirs .the data we use for identifying media connection is the firm - specific news from the thomson reuters news archive dataset ranging from jan-1996 to dec-2014 .the data contains various types of news , e.g. reviews , stories , analysis and reports etc ., about markets , industries and corporations . in this paper, we identify the news that has mentioned at least two stocks as connected news and the others as self - connected news .this dichotomy allows us to isolate the effect of the media connection by calculating the connectivity measure with one stock as the centre of a news network , and the aggregation of connectivity measures over the whole portfolio will provide information on the whole news network .the information that the news conveys only affects the directly connected stocks .the aggregated news tones reflect the majorities opinions on future prices of both connected stocks .the media connection is identified though the connected news where at least two stocks are mentioned in the text . beyond just connectivity ,the thomson reuters news data also provide us the news tones ( positive , negative and neutral ) for each mentioned stocks within each article . on the top of that, we can compute the daily pairwise connection scores of the news to each stock mentioned , that is , where is the total number of stocks in the sample , the superscript denotes the news in day and is the total number of news of day which may vary everyday . by construction ,the connection score is positive when the news tones of the stocks are the same which indicates a possible co - movement of the stock prices .further , the magnitude of the connection score implies the strength of the connection which ranges from -1 to 1 . with a higher magnitude , the connection of the two stocks will be tighter and a more synchronized movement will occur . 
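A small numerical illustration of this point follows, using hypothetical tone values (the numbers of the original example are not reproduced above) and the product of signed tones as a stand-in for the connection score, whose exact definition is given by the omitted display; the stand-in only shares the stated properties (positive when tones agree, bounded by [-1, 1]).

```python
import numpy as np

# Hypothetical news tones for two stocks mentioned together in two news items.
# Tones are scaled to [-1, 1]; both stocks receive positive tones in both items.
tones_stock_1 = np.array([0.9, 0.3])
tones_stock_2 = np.array([0.2, 0.8])

# Cross-sectional correlation over the two news items is -1, wrongly suggesting
# that the two stocks move in opposite directions.
correlation = np.corrcoef(tones_stock_1, tones_stock_2)[0, 1]

# Hypothetical stand-in for the connection score: the product of signed tones,
# averaged over news items.  It is positive whenever the tones agree in sign
# and lies in [-1, 1], matching the properties stated in the text.
connection_score = np.mean(tones_stock_1 * tones_stock_2)

print(f"correlation = {correlation:+.2f}, connection score = {connection_score:+.2f}")
```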
with these basic elements available , we construct the _ media connectivity matrix _ on daily basis by construction , the media connectivity matrix have included all the information that a media network can provide us , and it is different with news coverage and average correlation in several aspects .firstly , the off - diagonal elements in the matrix provides us the information on the closeness of each stocks on the news tones while the diagonal elements inform us the degree of self - connection .further , despite the news may debate on the effects of some events happened to certain stocks , the aggregation of the connection score will reflect the opinions of the majority and therefore make the right prediction of the co - movement .this property differs from the correlation , and can be seen from the following example .assume that based on the connnectivity matrix , we finally aggregate the network information to compose a _ media connection index _ ( mci ) on daily basis , this formulation frees us from the effect of media coverage and just measures the fraction of the media connections scaled by the news tones , which proxies the media interaction between the stocks . in figure [ fig - ret - corr] , we plot both media connection index we constructed and the average correlation index by from 1996 to 2014 .\ ] ] from the figure we can observe that our index behaves in the same trend with average correlation index in general .this indicates that our media network index shares some information in average correlation .furthermore , media network index is more volatile that average correlation index during the recession periods while maintaining the same fluctuation as average correlation over expansion periods .this nice feature is driven by the soft information that involved in the media news and it distinguishes our media based measure with price based measures .apart from the media news data , we also collect 12 economic predictors that are linked directly to economic fundamentals from amit goyal s website for comparison purposes .specifically , they are the log dividend - price ratio ( d / p ) , log dividend yield ( d / y ) , log earnings - price ratio ( e / p ) , log dividend payout ratio ( d / e ) , stock return variance ( svar ) , book - to - market ratio ( b / m ) , treasury bill rate ( tbl ) , long - term bond yield ( lty ) , long - term bond return ( ltr ) , term spread ( tms ) , default yield spread ( dfy ) , default return spread ( dfr ) . the basic summary statistics of these predictors are reported in table [ tab - summary ] .\ ] ] from the summary statistics we can observe that the monthly excess market return has a mean of 0.66% and a standard deviation of 5.32% , implying a monthly sharpe ratio of 0.12 . 
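As a concrete illustration of the index construction just described, the aggregation step can be sketched as follows. Since the displays defining the connectivity matrix and the MCI are omitted above, the per-news pairwise score (product of tones) and the normalization (mean off-diagonal entry of the daily matrix) used here are assumptions chosen to match the stated properties, not the paper's exact formulas.

```python
import numpy as np

def daily_connectivity_matrix(news_tones, n_stocks):
    """Aggregate per-news pairwise scores into a daily connectivity matrix.

    `news_tones` is a list of dicts {stock_id: tone in [-1, 1]}, one dict per news
    item, containing the stocks mentioned in that item.  The pairwise score used
    here (product of tones) is a hypothetical stand-in for the paper's definition.
    """
    m = np.zeros((n_stocks, n_stocks))
    for tones in news_tones:
        ids = sorted(tones)
        for a in ids:
            for b in ids:
                m[a, b] += tones[a] * tones[b]   # diagonal captures self-connection
    return m

def media_connection_index(m):
    """Illustrative MCI: mean off-diagonal connection score of the daily matrix."""
    off_diag = m[~np.eye(m.shape[0], dtype=bool)]
    return off_diag.mean() if off_diag.size else 0.0

if __name__ == "__main__":
    day = [
        {0: 0.6, 3: 0.4},           # a connected news item mentioning stocks 0 and 3
        {1: -0.5, 2: 0.7, 3: 0.2},  # another mentioning stocks 1, 2 and 3
        {4: 0.9},                   # a self-connected (single-stock) news item
    ]
    M = daily_connectivity_matrix(day, n_stocks=5)
    print("daily MCI (illustrative):", round(media_connection_index(M), 4))
```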
while the excess market return has little autocorrelation , most of other variables are quite persistent .the summary statistics are generally consistent with the literature .in this section , we provide a number of empirical results .section [ subsecfcst ] examines the predictability of media network index and average correlation index on the aggregate market .section [ subsececn ] compares the media network index with economic predictors .section [ subsecoos ] analyzes the out - of - sample predictability , section [ subsecass ] assesses the economic value of predictability via asset allocation and section [ subsecport ] investigates the predictability of characteristics portfolios .consider the standard predictive regression model , where , and is the excess market return , i.e. , the monthly log return on the s&p 500 index in excess of the risk - free rate . is the change of media network index based on optimism score .similarly , ( ) is the change of media network index based on positive ( negative ) score . for comparison, we also construct average correlation measure according to , .the null hypothesis of interest is that media connection has no predictive ability , . in this case , ( [ eqreg1 ] ) reduces to the constant expected return model , .as finance theory suggests a negative sign of , we test against , which is closer to theory than the common alternative of .\ ] ] table [ tab - prednews ] reports the results of the predictive regression .panel b provides the estimation results for the average correlation index ( and ) . consistent with , positively predicts future stock returns , which means when systematic risk increases , risk - averse investors require a higher risk premium to hold aggregate wealth , and the equilibrium expected return must rise .however , we do not observe a significant negative effect from on future return . according to ,if an increase in average correlation is due to an increase in aggregate risk , then the discount rate for future expected cash flows on most assets , including the stock market , should increase as well .holding expected future cash flows constant , higher future expected returns will induce an immediate fall in price , i.e. , negative realized returns , or volatility feedback , as described by .therefore , if stocks are more correlated with each other , volatility feedback will generate negative stock returns on average .indeed , we use to proxy the change of aggregate risk and find it negatively predict future stock returns significantly . with the standard ols estimation , has an estimated coefficient of -0.85 that is statistically significant at the 5% confidence level .economically , the ols coefficient suggests that a one - standard deviation increase in is associated with a 0.85% decrease in expected excess market return for the next month . on the one hand , recall that the average monthly excess market return during our sample period is only 0.66% ,thus the slope of 0.85% implies that the expected excess market return based on varies by about 1.3 times larger than its average level , which indicates a strong economic impact . on the other hand ,if we annualize the 0.85% decrease in one month by the multiplication of 12 , the annualized level of 10.2% is somewhat large . 
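Before the discussion of these magnitudes continues, here is a minimal sketch of the predictive regression in ( [ eqreg1 ] ) on synthetic data; the Newey-West covariance with 12 lags and the normal approximation for the one-sided p-value are implementation choices made for this sketch, not necessarily those of the original estimation.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def predictive_regression(excess_returns, predictor, maxlags=12):
    """OLS of r_{t+1} on the time-t predictor with Newey-West (HAC) standard errors.

    Returns the slope, its t-statistic, the one-sided p-value for H1: beta < 0,
    and the in-sample R^2.
    """
    y = excess_returns[1:]
    X = sm.add_constant(predictor[:-1])
    fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    beta, tstat = fit.params[1], fit.tvalues[1]
    return beta, tstat, norm.cdf(tstat), fit.rsquared

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dmci = rng.standard_normal(228)                          # stand-in for the change of MCI
    r = 0.007 - 0.008 * np.r_[0.0, dmci[:-1]] + 0.05 * rng.standard_normal(228)
    beta, tstat, p, r2 = predictive_regression(r, dmci)
    print(f"beta = {beta:.4f}, t = {tstat:.2f}, one-sided p = {p:.3f}, R^2 = {r2:.2%}")
```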
in this case, one may interpret this as the model implied expected change that may not be identical to the reasonable expected change of the investors in the market .empirically , this level is comparable with conventional macroeconomic predictors .for example , a one - standard - deviation increase in the d / p ratio , the cay and the net payout ratio tends to increase the risk premium by 3.60% , 7.39% , and 10.2% per annum , respectively ( see , e.g. and ) .meanwhile , the of with ols forecast is 2.53% , which is substantially greater than 1.16% of .this implies that if this level of predictability can be sustained out - of - sample , it will be of substantial economic significance ( ) .indeed , show that , given the large unpredictable component inherent in the monthly market returns , a monthly out - of - sample of 0.5% can generate significant economic value and our findings in section [ subsecoos ] are consistent with this argument . to figure out the driving force of the predictability, we also calculate media connection index based on negative and positive tones and re - estimate model ( [ eqreg1 ] ) respectively .we find that the main prediction power of media connection index is from negative tone whose -statistic is -2.02 with being 1.80% while the positive tone has no significant prediction power on future returns .this result is consistent with who assert the negative tone is more informative than the positive tone , and our findings complete their argument in the media network aspect .apart from just analyse the predictability over the whole period , it is also important to analyse the predictability during business - cycles to gain a better understanding about the fundamental driving forces . following , we compute the statistics separately for economic expansions ( ) and recessions ( ) , where ( ) is an indicator that takes a value of one when month is in an nber expansion ( recession ) period and zero otherwise ; is the fitted residual based on the in - sample estimates of the predictive regression model in ( [ eqreg1 ] ) ; is the full - sample mean of ; and is the number of observations for the full sample .note that , unlike the full - sample statistic , the ( ) have no sign restrictions .columns 4 and 5 of table [ tab - prednews ] report the and statistics .it is shown that gains a higher the return predictability over recessions while is higher over expansions , and this suggests that media news may reflect some soft information that is different from stock prices .summarizing table [ tab - prednews ] , the change of media network index , exhibits significant in - sample predictability for the monthly excess market return both statistically and economically , which is much stronger than the average correlation index .in addition , media network index performs much better in the recession periods while average correlation outperforms during the expansions .this suggests that media news reflects different information from stock prices for predicting market returns . in this section ,we compare the forecasting power of media network index with economic predictors and examine whether its forecasting power is driven by omitted economic variables related to business cycle fundamentals or changes in investor risk aversion .basically , we ask the question whether the forecasting power of remains significant after controlling for economic predictors . 
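The business-cycle decomposition of the R^2 described above can be computed as in the following sketch, where the fitted residuals and the NBER recession window are placeholders rather than estimates from the actual data.

```python
import numpy as np

def cycle_r2(returns, residuals, recession):
    """In-sample R^2 computed separately over NBER expansions and recessions.

    R^2_c = 1 - sum(I_t^c * e_t^2) / sum(I_t^c * (r_t - r_bar)^2), with r_bar the
    full-sample mean return, so unlike the full-sample R^2 these can be negative.
    """
    r_bar = returns.mean()
    out = {}
    for label, mask in (("expansion", ~recession), ("recession", recession)):
        out[label] = 1.0 - (residuals[mask] ** 2).sum() / ((returns[mask] - r_bar) ** 2).sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    r = 0.007 + 0.05 * rng.standard_normal(228)
    e = 0.05 * rng.standard_normal(228)              # placeholder fitted residuals
    recession = np.zeros(228, dtype=bool)
    recession[60:78] = True                          # hypothetical NBER recession window
    print(cycle_r2(r, e, recession))
```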
to analyse the marginal forecasting power of , we conduct the following bivariate predictive regressions based on and , where is one of 12 economic predictors described in section [ secmethod ] , and our main interest is the coefficient of , , and to test against .\ ] ] table [ tab - predecon ] shows that the estimates of in ( [ eqreg3 ] ) are negative and large , in line with the results of predictive regression ( [ eqreg1 ] ) reported in table [ tab - prednews ] .more importantly , remains statistically significant when augmented by the economic predictors .these results demonstrate that contains sizeable complementary forecasting information beyond what is contained in the economic predictors .meanwhile , average correlation index loses its prediction power after controlling for all 3 media connection indices , suggesting that news based predictor is more informative than aggregate risk measure . despite that the in - sample analysis provides more efficient parameter estimates and thus more precise return forecasts by utilizing all available data , , among others ,argue that out - of - sample tests seem more relevant for assessing genuine return predictability in real time and avoid the over - fitting issue .in addition , out - of - sample tests are much less affected by finite sample biases such as the stambaugh bias ( ) .hence , it is essential to investigate the out - of - sample predictive performance of media network index . for out - of - sample forecasts at time , we only use information available up to to forecast stock returns at .following , , and many others , we run the out - of - sample analysis by estimating the predictive regression model recursively based on different measures of media connection indices , where and are the ols estimates from regressing with model [ eqreg1 ] recursively . like our in - sample analogues in table [ tab - prednews ] , we consider the media network indices based on optimism , positive and negative news tones respectively . for comparison purposes , we also carry out out - of - sample test with and , and the results are reported in panel b of table [ tab - oos - cycle ] . to evaluate the out - of - sample forecasting performance , we apply the widely used statistic , the -statistic modified by , and the msfe - adjusted statistic .the statistic measures the proportional reduction in mean squared forecast error ( msfe ) for the predictive regression forecast relative to the historical average benchmark . show that the historical average is a very stringent out - of - sample benchmark , and individual economic variables typically fail to outperform the historical average . to compute ,let be a fixed number chosen for the initial sample training , so that the future expected return can be estimated at time .then , we compute out - of - sample forecasts : .more specifically , we use the data over 1996:01 to 2000:12 as the initial estimation period so that the forecast evaluation period spans over 2001:01 to 2014:12 . where denotes the historical average benchmark corresponding to the constant expected return model ( ) , i.e. 
by construction , the statistic lies in the range ( .if , it means that the forecast outperforms the historical average in terms of msfe .the second statistic we report is statistic modified by ( dm - test hereafter ) , which tests for the equality of the mean squared forecast errors ( msfe ) of one forecast relative to another .our null hypothesis here is that the historical average has a msfe that is not greater than that of the predictive regression model .however , to compare a predictive regression forecast with the historical average entails comparing nested models , as the predictive regression model reduces to the historical average under the null hypothesis . shows that the modified dm - test statistic follows a non - standard normal distribution when testing nested models , and provides bootstrapped critical values for the non - standard distribution .the third statistic is the msfe - adjusted statistic of ( cw - test hereafter ) .it tests the null hypothesis that the historical average msfe is not greater than the predictive regression forecast msfe against the one - sided ( right - tail ) alternative hypothesis that the historical average msfe is greater than the predictive regression forecast msfe , corresponding to against . show that the test has a standard normal limiting distribution when comparing forecasts from the nested models . intuitively , under the null hypothesis that the constant expected return model generates the data , the predictive regression model produces a noisier forecast than the historical average benchmark as it estimates slope parameters with zero population values .we thus expect the benchmark model s msfe to be smaller than the predictive regression model s msfe under the null . the msfe - adjusted statistic accounts for the negative expected difference between the historical average msfe and predictive regression msfe under the null , so that it can reject the null even if the statistic is negative .\ ] ] panel b of table [ tab - oos - cycle ] shows that the return based and index generates a negative statistic ( -9.92 and -7.73 respectively ) and thus delivers a higher msfe than the historical average .indeed , the estimation difference between price based measures and historical average are not significant according to both dm- and cw - test statistics .thus , and have weak out - of - sample predictive ability for the aggregate stock market , confirming our previous in - sample results ( table [ tab - prednews ] ) . in the contrary , exhibits much stronger out - of - sample predictive ability for the aggregate market .its is 5.14% and its dm- and cw - test statistics are 1.36 and 1.75 , which suggest that s msfe is significantly smaller than that of the historical average at the 10% or lower significance level . in addition , the sixth and seventh columns of table [ tab - oos - cycle ] show that , while the predictability of media network indices are only concentrated in recessions , presents strong out - of - sample forecasting ability during both expansions and recessions .besides , seems not performing well during expansions in comparison with , which also support that negative tone is more informative than positive tone of media news . \ ] ] since both and can be proxy of aggregated market risk , their economic sources of predictability are likely the same . 
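For reference, the out-of-sample machinery used above — recursive forecasts against the expanding historical average, the out-of-sample R^2, and the MSFE-adjusted Clark-West t-ratio — can be sketched on synthetic data as below; the training length and the data-generating process are illustrative only.

```python
import numpy as np

def recursive_forecasts(returns, predictor, n_train):
    """Recursive out-of-sample forecasts of r_{t+1} versus the historical average.

    At each forecast origin t >= n_train both the predictive regression and the
    historical-average benchmark use only data available through time t.
    """
    model_fc, bench_fc, realized = [], [], []
    for t in range(n_train, len(returns) - 1):
        slope, intercept = np.polyfit(predictor[:t], returns[1:t + 1], 1)
        model_fc.append(intercept + slope * predictor[t])
        bench_fc.append(returns[:t + 1].mean())
        realized.append(returns[t + 1])
    return np.array(realized), np.array(model_fc), np.array(bench_fc)

def r2_oos(realized, model_fc, bench_fc):
    """Out-of-sample R^2: proportional MSFE reduction relative to the benchmark."""
    return 1.0 - np.sum((realized - model_fc) ** 2) / np.sum((realized - bench_fc) ** 2)

def clark_west(realized, model_fc, bench_fc):
    """MSFE-adjusted (Clark-West) t-ratio, approximately standard normal under the null."""
    f = ((realized - bench_fc) ** 2 - (realized - model_fc) ** 2
         + (bench_fc - model_fc) ** 2)
    return f.mean() / (f.std(ddof=1) / np.sqrt(len(f)))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.standard_normal(228)                               # stand-in for the change of MCI
    r = 0.007 - 0.01 * np.r_[0.0, x[:-1]] + 0.05 * rng.standard_normal(228)
    realized, model_fc, bench_fc = recursive_forecasts(r, x, n_train=60)
    print(f"R2_OS = {r2_oos(realized, model_fc, bench_fc):.2%}, "
          f"CW = {clark_west(realized, model_fc, bench_fc):.2f}")
```

The Diebold-Mariano comparison replaces f_t with the plain squared-error loss differential (dropping the adjustment term), which is why it is the more conservative of the two statistics for nested models.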
to understand their differences in forecasting power , figure [ fig - csfe1 ]depicts the predicted returns based on and for the 2001:012014:12 out - of - sample period .it is clear that the predicted returns are much more volatile than the forecasts . as the actual realized excess returns ( plotted in the figure as 6-month moving average for better visibility ) are even more volatile than the predicted returns .this explains why the media news based index does a better job than the return based correlation approach here in capturing the expected variation in the market return .following and , figure [ fig - csfe2 ] presents the time - series plots of the differences between cumulative squared forecast error ( csfe ) for the historical average benchmark forecasts and the csfe for predictive regression forecasts based on and over 2001:012014:12 .this time - series plot is an informative graphical device on the consistency of out - of - sample forecasting performance over time .when the difference in csfe increases , the model forecast outperforms the historical average , while the opposite holds when the curve decreases .it thus illustrates whether and based forecasts have a lower msfe than the historical average for any particular out - of - sample period .\ ] ] the solid line in figure [ fig - csfe2 ] shows that our media network index , consistently outperforms the historical average except for the end of 2011 .the curve has slopes that are predominantly positive , indicating that the good out - of - sample performance of steps from the whole sample period rather than some special episodes .the figure also graphically illustrates the performances over the nber dated business cycles , complementing table [ tab - oos - cycle ] . for comparison , the dashed line plots the difference in csfe for the average correlation index .the dashed line shows that fails to consistently outperform the historical average . in a result, it does a poor job in terms of monthly out - of - sample forecasts .the curve is negatively sloped almost in the whole sample period which suggests return based measure may contain a lot of noise for the in - sample forecasts .overall , table [ tab - oos - cycle ] shows that is a powerful and reliable predictor for the excess market returns , and consistently outperforms across different sample periods . in summary, out - of - sample analysis shows that the displays strong out - of - sample forecasting power for the aggregate stock market .in addition , substantially outperforms the average correlation index in an out - of - sample setting , consistent with our previous in - sample results ( tables [ tab - prednews ] and [ tab - predecon ] ) .now we examine the economic value of stock market forecasts based on the media network index , .following , and , among others , we compute the certainty equivalent return ( cer ) gain and sharpe ratio for a mean - variance investor who optimally allocates across equities and the risk - free asset using the out - of - sample predictive regression forecasts . at the end of period ,the investor optimally allocates of the portfolio to equities during period , where is the risk aversion coefficient , is the out - of - sample forecast of the simple excess market return , and is the variance forecast .the investor then allocates of the portfolio to risk - free assets , and the realized portfolio return is where is the gross risk - free return . 
following , we assume that the investor uses a five - year moving window of past monthly returns to estimate the variance of the excess market return and constrains to lie in between 0 and 1.5 to exclude short sales and to allow for at most 50% leverage .the cer of the portfolio is where and are the sample mean and variance for the investor s portfolio over the forecasting evaluation periods respectively .the cer gain is the difference between the cer for the investor who uses a predictive regression forecast of market return generated by [ eqreg4 ] and the cer for an investor who uses the historical average forecast .we multiply this difference by 12 so that it can be interpreted as the annual portfolio management fee that an investor would be willing to pay to have access to the predictive regression forecast instead of the historical average forecast . to examine the effect of risk aversion, we consider portfolio rules based on risk aversion coefficients of 1 , 3 and 5 , respectively . in addition, we also consider the case of 50bps transaction costs which is generally considered as a relatively high number . for assessing the statistical significance ,we follow by testing whether the cer gain is indistinguishable from zero by applying the standard asymptotic theory as in their paper . in addition, we also calculate the monthly sharpe ratio of the portfolio , which is the mean portfolio return in excess of the risk - free rate divided by the standard deviation of the excess portfolio return .following again , we use the approach of corrected by to test whether the sharpe ratio of the portfolio strategy based on predictive regression is statistically indifferent from that of the portfolio strategy based on historical average .\ ] ] table [ tab - alloc ] shows that the index generates small economic gains for a mean - variance investor , consistent with the small statistics in table [ tab - oos - cycle ] .specifically , has a negative cer gain of -1.11% and -2.14% when the risk aversion is 3 and 5 respectively , and small positive cer gains of 1.40% when the risk aversions is 1 .the net - of - transactions - costs cer gains for is even lower , with 1.27% for risk aversion equals 1 .the sharpe ratios of the portfolios using range from 0.07 to 0.09 in all cases . of all the media network indices , stands out again in term of the economic value .the cer gains for across the risk aversions are consistently positive and economically large , ranging from 1.85% to 4.04% .more specifically , an investor with a risk aversion of 1 , 3 , or 5 would be willing to pay an annual portfolio management fee up to 4.04% , 2.58% , and 1.85% , respectively , to have access to the predictive regression forecast based on instead of using the historical average forecast .the net - of - transactions - costs cer gains of the portfolios range from 1.25% to 3.49% , well above , and is of economic significance. the sharpe ratios of portfolios formed based on range from 0.19 to 0.21 , which more than 66% of the market sharpe ratio , 0.12 , with a buy - and - hold strategy . 
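A sketch of the allocation exercise behind these numbers follows; the proportional transaction-cost rule, the placeholder forecasts, and the use of the full-sample mean as the benchmark forecast (rather than the recursively updated historical average) are simplifying assumptions made for brevity.

```python
import numpy as np

def allocation_cer(excess_returns, forecasts, rf, gamma=3.0, window=60,
                   w_bounds=(0.0, 1.5), tc=0.0):
    """Annualized certainty-equivalent return of a mean-variance timing strategy.

    w_t = forecast_t / (gamma * var_t), with var_t estimated from a trailing
    `window`-month sample of excess returns and w_t clipped to `w_bounds`.
    Transaction costs `tc` are charged in proportion to the change in weight,
    a simplifying assumption rather than the paper's exact cost model.
    """
    port, prev_w = [], 0.0
    for t in range(window, len(excess_returns) - 1):
        var = excess_returns[t - window:t].var(ddof=1)
        w = float(np.clip(forecasts[t] / (gamma * var), *w_bounds))
        port.append(rf[t + 1] + w * excess_returns[t + 1] - tc * abs(w - prev_w))
        prev_w = w
    port = np.array(port)
    return 12.0 * (port.mean() - 0.5 * gamma * port.var(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 228
    rf = np.full(n, 0.002)
    excess = 0.006 + 0.05 * rng.standard_normal(n)
    model_fc = 0.006 + 0.01 * rng.standard_normal(n)      # placeholder return forecasts
    bench_fc = np.full(n, excess.mean())                   # benchmark forecasts
    gain = (allocation_cer(excess, model_fc, rf, tc=0.005)
            - allocation_cer(excess, bench_fc, rf, tc=0.005))
    print(f"illustrative annualized CER gain: {gain:.2%}")
```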
in addition , all the cer gains and shape ratio gains of in all the risk aversion cases are statistically significant .overall , table [ tab - alloc ] demonstrates that the media network index , can generate sizable economic value for a mean - variance investor , while can not .the results are robust to common risk aversion specifications and the same level of transaction cost .media connection has different impacts on different stocks .in particular , stocks that are large , with high media coverage , and in the central position are likely to be more sensitive to media news connection . in this subsection, we investigate how well the media network index can forecast portfolios sorted on book - to - market , size , momentum , industry , investment and profitability .this study not only helps to strengthen our previous findings for aggregate stock market predictability , but also helps to enhance our understanding for the economic sources of return predictability . consider the predictive regression , where is the monthly excess returns for the 10 size , 10 book - to - market , 10 momentum , 10 industry , 10 investment and 10 profitability portfolios , respectively , with the null hypothesis against the alternative hypothesis based on wild bootstrapped -values .\ ] ] panel a of table [ tab - size ] reports the estimation results for in - sample univariate predictive regressions for 10 book - to - market portfolios with media network index over the period of 2001:012014:12 . affirming our findings for the market portfolio in table [ tab - prednews ] , substantially enhances the return forecasting performance relative to across majority groups , with the about two to three times higher than the corresponding of .in addition , almost all of the regression slope estimates for are negative , thus the negative predictability of change of media connection for subsequent stock returns are pervasive across book - to - market portfolios .the regression slope estimates and statistics vary significantly across book - to - market groups , illustrating large cross - sectional difference in the exposures to media network index .indeed , high book - to - market portfolios are the most predictable by media network , whereas the low value stocks present the lowest predictability .the remaining panels of table [ tab - size ] show that improves sharply the forecasting performance relative to for the cross - sectional stock returns of size , and momentum , industry , investment and profitability portfolios as well .in addition , all the statistics of are much larger than the corresponding of .for example , the of for high profitability portfolio is 0.04 , while the corresponding of is 0.01 . 
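The wild-bootstrap p-values used for the portfolio regressions above can be obtained along the following lines; the Rademacher weights and the homoskedastic t-statistic inside the bootstrap loop are simplifications of the usual implementation, shown only to make the resampling scheme concrete.

```python
import numpy as np

def wild_bootstrap_pvalue(excess_returns, predictor, n_boot=2000, seed=0):
    """One-sided (beta < 0) wild-bootstrap p-value for the predictive slope.

    Pseudo-samples are generated under the null of no predictability by combining
    the historical mean with sign-flipped (Rademacher) residuals; the slope
    t-statistic is then recomputed on each pseudo-sample.
    """
    rng = np.random.default_rng(seed)
    y, x = excess_returns[1:], predictor[:-1]
    X = np.column_stack([np.ones_like(x), x])

    def slope_t(yy):
        coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
        e = yy - X @ coef
        s2 = e @ e / (len(yy) - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return coef[1] / se

    t_obs = slope_t(y)
    resid = y - y.mean()                      # residuals under the no-predictability null
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        flips = rng.choice([-1.0, 1.0], size=len(y))
        t_boot[i] = slope_t(y.mean() + resid * flips)
    return t_obs, np.mean(t_boot <= t_obs)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x = rng.standard_normal(228)
    r = 0.007 - 0.01 * np.r_[0.0, x[:-1]] + 0.05 * rng.standard_normal(228)
    t, p = wild_bootstrap_pvalue(r, x)
    print(f"slope t-stat {t:.2f}, wild-bootstrap one-sided p-value {p:.3f}")
```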
in conclusion ,consistent with the literature , there is a fairly large dispersion of regression slope estimates in the cross - section .stocks that are small , distressed ( high book - to - market ratio ) , with high growth opportunity ( low book - to - market ratio ) , or past winners are more predictable by media connections .this paper documented that the change of aggregated news tones for media connected stocks have the ability to predict future market excess returns .specifically , we propose a novel measure for aggregate market risk from the media network angle .our methodology allows us to identify information provided by news articles beyond just news coverage and investor sentiment .based on the media connection index constructed , we find that the change of aggregate media connection based on news tones predicts a negative return with a higher in - sample and out - of - sample performance than average correlation index in .we also find the predictability of our measure mainly comes from the negative new tones which is consistent with in which he find negative tones are more informative .in addition , our findings are also statistically as well as economically significant even though we control for different economic predictors used in .we have also tested the performance of our index in predicting returns during the recession and expansion periods documented by nber .the results shows that our measure obtain larger and positive in recession periods comparing to average correlation measure .this result indicates that our measure possesses different predictive information with just news coverage and average correlation .lastly , we test the portfolio implications of media connection index across different investors with various risk aversion levels as well as different trading strategies .the empirical results show that the portfolios constructed based on media connection index obtain larger sharpe ratios and certainty equivalent return gains than average correlation for mean - variance investors .it also shows that the stocks that are small , distressed ( high book - to - market ratio ) , with high growth opportunity ( low book - to - market ratio ) , or past winners are more predictable by media connections , which indicates a strong predictive power as well .= 2em this table reports summary statistics for the log excess aggregate stock market return defined as the log return on the equal weight crsp stocks in excess of the risk - free rate ( ) , risk - free rate ( ) , media network measures ( based on optimism score , positive score and negative score respectively ) with corresponding first order difference , pollet and wilson average correlation index with its first order difference and 12 economic variables from amit goyal s website : the log dividend - price ratio ( d / p ) , the log dividend - yield ratio ( d / y ) , log earnings - price ratio ( e / p ) , log dividend payout ratio ( d / e ) , , book - to - market ratio ( b / m ) , treasury bill rate ( tbl ) , long - term bond yield ( lty ) long - term bond return ( ltr ) , term spread ( tms ) , default yield spread ( dfy ) , default return spread ( dfr ) , stock return variance ( svar ) . for each variable ,the time - series average ( mean ) , standard deviation ( std . dev . ) , skewness ( skew . ) , kurtosis ( kurt . ) , minimum ( min . ) , maximum ( max . 
) , and first - order autocorrelation ( ) are reported .the monthly sharpe ratio ( sr ) is the mean log excess market return divided by its standard deviation .the sample period is over 1996:012014:12 . & mean & std & skew & kurt & min & max & + ( % ) & 0.0066 & 0.0532 & -0.4283 & 4.5904 & -0.2103 & 0.1849 & 0.1709 + ( % ) & 0.0023 & 0.0018 & 0.0477 & 1.4356 & 0.0000 & 0.0056 & 0.7451 + & 0.4798 & 0.1861 & 1.7008 & 6.4594 & 0.2654 & 1.3134 & 0.5907 + & 0.4927 & 0.0896 & 0.6649 & 3.4315 & 0.3255 & 0.8001 & 0.6632 + & 0.6660 & 0.3394 & 1.5194 & 5.3707 & 0.2000 & 1.9984 & 0.7047 + & 0.0005 & 0.1442 & 0.4450 & 12.7742 & -0.6881 & 0.8486 & -0.3640 + & 0.0012 & 0.0492 & -0.1038 & 3.5210 & -0.1416 & 0.1448 & -0.3028 + & 0.0011 & 0.2443 & 0.6457 & 13.0858 & -0.9598 & 1.3559 & -0.2723 + & 2.0631 & 0.9313 & 0.8844 & 3.6575 & 0.4669 & 4.9901 & 1.4640 + & 0.0054 & 0.3561 & 0.3796 & 4.8268 & -1.3030 & 1.3750 & 0.4999 + d / p & -4.0513 & 0.2278 & 0.3652 & 3.7843 & -4.5240 & -3.2811 & 1.0780 + d / y & -4.0474 & 0.2267 & 0.2585 & 3.6326 & -4.5313 & -3.2948 & 0.9728 + e / p & -3.1928 & 0.4263 & -1.7005 & 6.7579 & -4.8365 & -2.5656 & 1.2934 + d / e & -0.8585 & 0.4961 & 2.9080 & 11.8312 & -1.2442 & 1.3795 & 1.5754 + b / m & 0.2555 & 0.0771 & 0.0950 & 2.0541 & 0.1205 & 0.4411 & 0.9550 + tbl ( % ) & 2.7027 & 2.0840 & 0.0238 & 1.3867 & 0.0100 & 6.1700 & 1.1495 + lty ( % ) & 4.9904 & 1.1644 & -0.4161 & 3.0346 & 2.0600 & 7.2600 & 1.0238 + ltr ( % ) & 0.7337 & 3.0785 & 0.0064 & 5.7393 & -11.2400 & 14.4300 & -0.0247 + tms ( % ) & 2.2877 & 1.4041 & -0.0827 & 1.7428 & -0.4100 & 4.5300 & 1.1122 + dfy ( % ) & 1.0313 & 0.4693 & 2.7638 & 12.2444 & 0.5500 & 3.3800 & 1.4193 + dfr ( % ) & -0.0186 & 1.8979 & -0.3835 & 8.7112 & -9.7500 & 7.3700 & 0.0000 + svar ( % ) & 0.0036 & 0.0058 & 5.8563 & 47.9514 & 0.0004 & 0.0581 &0.7795 + = 2em this table provides in - sample estimation results for the predictive regression of monthly excess market return on one of three proxies of lagged news network indices ( optimism , positive and negative score respectively ) and average correlation indices .the sample period is 1996:012014:12 . where denotes the monthly excess market return ( % ) . & beta & -stat & & & + + & -0.85 & -2.41 & 2.53 & 1.32 & 4.10 + & -0.26 & -0.76 & 0.25 & 0.14 & 1.25 + & -0.69 & -2.02 & 1.80 & 1.16 & 2.86 + + & 0.60 & 1.76 & 1.37 & 3.35 & 1.03 + & -0.28 & -0.82 & 0.30 & 0.09 & 1.44 + = 2em this table provides in - sample estimation results for the bivariate predictive regression of monthly excess market return on one of the 12 economic predictors or average correlation indices , , and on the one of lagged news network indices ( optimism , positive and negative score respectively ) .the sample period is 1996:012014:12 . where denotes the monthly excess market return ( % ) the -statistics of the coefficients are reported in the parenthesis . 
& & & + & & & & & & & & & + & -1.18 & 0.63 & 0.04 & -5.42 & 0.61 & 0.01 & -3.35 & 0.63 & 0.01 + & ( -2.35 ) & ( 1.57 ) & & ( -0.71 ) & ( 1.50 ) & & ( -2.19 ) & ( 1.58 ) & + & -1.10 & -0.44 & 0.03 & -6.72 & -1.23 & 0.01 & -6.72 & -1.23 & 0.01 + & ( -2.10 ) & ( -0.40 ) & & ( -0.87 ) & ( -1.15 ) & & ( -0.87 ) & ( -1.15 ) & + d / p & -1.16 & 2.21 & 0.04 & -5.58 & 2.25 & 0.01 & -5.58 & 2.25 & 0.01 + & ( -2.31 ) & ( 1.35 ) & & ( -0.73 ) & ( 1.36 ) & & ( -0.73 ) & ( 1.36 ) & + d / y & -1.11 & 2.78 & 0.04 & -5.84 & 3.05 & 0.02 & -5.84 & 3.05 & 0.02 + & ( -2.21 ) & ( 1.70 ) & & ( -0.76 ) & ( 1.84 ) & & ( -0.76 ) & ( 1.84 ) & + e / p & -1.15 & -0.35 & 0.03 & -5.65 & -0.52 & 0.00 & -5.65 & -0.52 & 0.00 + & ( -2.28 ) & ( -0.40 ) & & ( -0.73 ) & ( -0.58 ) & & ( -0.73 ) & ( -0.58 ) & + d / e & -1.13 & 0.73 & 0.03 & -5.78 & 0.85 & 0.01 & -5.78 & 0.85 & 0.01 + & ( -2.25 ) & ( 0.96 ) & & ( -0.75 ) & ( 1.12 ) & & ( -0.75 ) & ( 1.12 ) & + b / m & -1.17 & 2.18 & 0.03 & -5.40 & 1.99 & 0.00 & -5.40 & 1.99 & 0.00 + & ( -2.31 ) & ( 0.45 ) & & ( -0.70 ) & ( 0.40 ) & & ( -0.70 ) & ( 0.40 ) & + tbl & -1.15 & -0.09 & 0.03 & -5.55 & -0.11 & 0.00 & -5.55 & -0.11 & 0.00 + & ( -2.28 ) & ( -0.52 ) & & ( -0.72 ) & ( -0.63 ) & & ( -0.72 ) & ( -0.63 ) & + lty & -1.15 & -0.29 & 0.03 & -5.59 & -0.31 & 0.01 & -5.59 & -0.31 & 0.01 + & ( -2.29 ) & ( -0.89 ) & & ( -0.73 ) & ( -0.95 ) & & ( -0.73 ) & ( -0.95 ) & + ltr & -1.19 & 0.12 & 0.03 & -4.95 & 0.10 & 0.01 & -4.95 & 0.10 & 0.01 + & ( -2.36 ) & ( 0.97 ) & & ( -0.64 ) & ( 0.79 ) & & ( -0.64 ) & ( 0.79 ) & + tms & -1.16 & 0.01 & 0.03 & -5.51 & 0.04 & 0.00 & -5.51 & 0.04 & 0.00 + & ( -2.30 ) & ( 0.04 ) & & ( -0.71 ) & ( 0.15 ) & & ( -0.71 ) & ( 0.15 ) & + dfy & -1.16 & 0.02 & 0.03 & -5.56 & 0.11 & 0.00 & -5.56 & 0.11 & 0.00 + & ( -2.31 ) & ( 0.02 ) & & ( -0.72 ) & ( 0.14 ) & & ( -0.72 ) & ( 0.14 ) & + dfr & -1.12 & 0.27 & 0.04 & -6.48 & 0.32 & 0.02 & -6.48 & 0.32 & 0.02 + & ( -2.23 ) & ( 1.39 ) & & ( -0.84 ) & ( 1.58 ) & & ( -0.84 ) & ( 1.58 ) & + svar & -1.10 & -99.95 & 0.04 & -5.60 & -112.82 & 0.02 & -5.60 & -112.82 & 0.02 + & ( -2.18 ) & ( -1.54 ) & & ( -0.73 ) & ( -1.73 ) & & ( -0.73 ) & ( -1.73 ) & + = 2em this table reports the out - of - sample performances of various measures of news network indices in predicting the monthly excess market return .panel a provides the results using the 3 news network inices while panel b are generated by using average correlation index .all of the predictors and regression slopes are estimated recursively using the data available at the forecast formation time . is the out - of - sample .dm - test is the modified -statistic and cw - test is the msfe - adjusted statistic . statistics are calculated over nber - dated business - cycle expansions ( recessions ) .the out - of - sample evaluation period is over 2000:012014:12 . & & cw & -value & dm & -value & & + + & 5.14 & 1.75 & 0.04 & 1.06 & 0.14 & -0.59 & 11.85 + & -3.40 & -2.03 & 0.98 & -2.18 & 0.99 & -2.67 & -4.27 + & 5.22 & 1.83 & 0.03 & 1.18 & 0.12 & 0.88 & 10.30 + + & -9.92 & -1.67 & 0.95 & -2.45 & 0.99 & -10.13 & -9.66 + & -7.33 & -2.56 & 0.99 & -2.83 & 1.00 & -8.57 & -5.88 + = 2em this table reports the portfolio performance measures for a mean - variance investor with a risk aversion coefficient ( ) of 1 , 3 and 5 , respectively , who allocates monthly between equities and risk - free bills using the out - of - sample forecasts of the excess market returns based on lagged news network indices , , and respectively . 
is average correlation index and is its first order difference .cer gain is the annualized certainty equivalent return gain for the investor and the monthly sharpe ratio is the mean portfolio return in excess of the risk - free rate divided by its standard deviation . the portfolio weights are estimated recursively using the data available at the forecast formation time t. the out - of - sample evaluation period is over 2000:012014:12 . & & + predictor & sharpe ratio & & cer gain & & sharpe ratio & & cer gain & + + & 0.21 & 0.01 & 4.04 & 0.00 & 0.19 & 0.02 & 3.49 & 0.00 + & 0.09 & 0.36 & 1.20 & 0.04 & 0.08 & 0.45 & 0.94 & 0.08 + & 0.17 & 0.03 & 3.15 & 0.00 & 0.14 & 0.06 & 2.59 & 0.00 + & 0.09 & 0.26 & 1.40 & 0.02 & 0.09 & 0.29 & 1.27 & 0.04 + & 0.05 & 0.80 & 0.11 & 0.50 & 0.03 & 0.88 & -0.30 & 0.74 + + & 0.21 & 0.01 & 2.58 & 0.00 & 0.19 & 0.02 & 2.00 & 0.00 + & 0.08 & 0.37 & -0.86 & 0.92 & 0.07 & 0.47 & -1.13 & 0.96 + & 0.17 & 0.03 & 1.52 & 0.00 & 0.14 & 0.07 & 0.94 & 0.06 + & 0.08 & 0.40 & -1.11 & 0.96 & 0.07 & 0.47 & -1.28 & 0.98 + & 0.05 & 0.77 & -1.96 & 1.00 & 0.03 & 0.86 & -2.36 & 1.00 + + & 0.21 & 0.01 & 1.85 & 0.00 & 0.19 & 0.02 & 1.25 & 0.01 + & 0.08 & 0.41 & -1.95 & 1.00 & 0.07 & 0.51 & -2.24 & 1.00 + & 0.17 & 0.03 & 0.70 & 0.12 & 0.14 & 0.07 & 0.10 & 0.45 + & 0.08 & 0.41 & -2.14 & 1.00 & 0.07 & 0.49 & -2.34 & 1.00 + & 0.05 & 0.76 & -3.02 & 1.00 & 0.03 & 0.85 & -3.43 & 1.00 + = 2em this table reports in - sample estimation results for predictive regression where stands for and . is the monthly excess returns ( in percentage ) for the 10 industry , 10 book - to - market , 10 size , and 10 momentum portfolios , respectively .we report the slopes , newey - west -statistics , as well as the .portfolio returns ( % ) are value - weighted and available from kenneth french s data library .the sample period is over 1996:012014:12 . predictor & & + & & -stat & & & -stat & + + growth & -0.53 & -0.94 & 0.00 & 1.06 & 1.94 & 0.02 + 2 & -0.71 & -1.48 & 0.01 & 0.75 & 1.61 & 0.01 + 3 & -0.72 & -1.70 & 0.01 & 0.56 & 1.34 & 0.01 + 4 & -0.95 & -2.31 & 0.02 & 0.55 & 1.37 & 0.01 + 5 & -0.89 & -2.27 & 0.02 & 0.45 & 1.17 & 0.01 + 6 & -0.99 & -2.62 & 0.03 & 0.34 & 0.91 & 0.00 + 7 & -0.99 & -2.80 & 0.03 & 0.09 & 0.25 & 0.00 + 8 & -0.96 & -2.67 & 0.03 & 0.07 & 0.19 & 0.00 + 9 & -0.87 & -2.34 & 0.02 & 0.05 & 0.12 & 0.00 + value & -1.29 & -2.85 & 0.03 & -0.03 & -0.06 & 0.00 + + small & -0.87 & -1.91 & 0.02 & 0.22 & 0.49 & 0.00 + 2 & -1.14 & -2.36 & 0.02 & 0.56 & 1.17 & 0.01 + 3 & -1.11 & -2.42 & 0.03 & 0.74 & 1.65 & 0.01 + 4 & -0.95 & -2.14 & 0.02 & 0.73 & 1.68 & 0.01 + 5 & -0.90 & -2.03 & 0.02 & 0.72 & 1.67 & 0.01 + 6 & -0.71 & -1.75 & 0.01 & 0.79 & 2.00 & 0.02 + 7 & -0.80 & -2.02 & 0.02 & 0.55 & 1.41 & 0.01 + 8 & -0.78 & -2.00 & 0.02 & 0.62 & 1.61 & 0.01 + 9 & -0.70 & -1.95 & 0.02 & 0.57 & 1.63 & 0.01 + big & -0.54 & -1.60 & 0.01 & 0.64 & 1.95 & 0.02 + = 2em this table reports in - sample estimation results for predictive regression where stands for and . is the monthly excess returns ( in percentage ) for the 10 industry , 10 book - to - market , 10 size , and 10 momentum portfolios , respectively .we report the slopes , newey - west -statistics , as well as the .portfolio returns ( % ) are value - weighted and available from kenneth french s data library .the sample period is over 1996:012014:12 . 
+ loser & -1.12 & -1.54 & 0.01 & 1.25 & 1.76 & 0.01 + 2 & -1.15 & -2.38 & 0.02 & 0.74 & 1.57 & 0.01 + 3 & -1.18 & -3.00 & 0.04 & 0.65 & 1.68 & 0.01 + 4 & -0.88 & -2.53 & 0.03 & 0.33 & 0.96 & 0.00 + 5 & -0.90 & -2.79 & 0.03 & 0.24 & 0.76 & 0.00 + 6 & -0.76 & -2.45 & 0.03 & 0.32 & 1.05 & 0.00 + 7 & -0.83 & -2.74 & 0.03 & 0.23 & 0.76 & 0.00 + 8 & -0.79 & -2.49 & 0.03 & 0.16 & 0.53 & 0.00 + 9 & -0.75 & -2.15 & 0.02 & -0.00 & -0.00 & 0.00 + winner & -0.82 & -1.68 & 0.01 & -0.05 & -0.11 & 0.00 + + nodur & -0.96 & -2.65 & 0.03 & 0.36 & 1.02 & 0.00 + durbl & -1.57 & -3.23 & 0.04 & 0.53 & 1.09 & 0.01 + manuf & -1.11 & -2.57 & 0.03 & 0.44 & 1.04 & 0.00 + enrgy & -0.97 & -1.55 & 0.01 & -0.28 & -0.46 & 0.00 + hitec & -0.64 & -0.96 & 0.00 & 0.79 & 1.23 & 0.01 + telcm & -1.11 & -1.76 & 0.01 & 1.23 & 2.02 & 0.02 + shops & -1.19 & -2.72 & 0.03 & 0.68 & 1.57 & 0.01 + hlth & -0.54 & -0.97 & 0.00 & 0.51 & 0.96 & 0.00 + utils & -0.32 & -1.26 & 0.01 & 0.04 & 0.16 & 0.00 + other & -0.97 & -2.99 & 0.04 & 0.10 & 0.32 & 0.00 + = 2em this table reports in - sample estimation results for predictive regression where stands for and . is the monthly excess returns ( in percentage ) for the 10 industry , 10 book - to - market , 10 size , and 10 momentum portfolios , respectively .we report the slopes , newey - west -statistics , as well as the .portfolio returns ( % ) are value - weighted and available from kenneth french s data library .the sample period is over 1996:012014:12 . + low & -1.03 & -1.74 & 0.01 & 0.41 & 0.71 & 0.00 + 2 & -1.08 & -2.55 & 0.03 & 0.32 & 0.76 & 0.00 + 3 & -1.13 & -3.16 & 0.04 & 0.25 & 0.71 & 0.00 + 4 & -0.89 & -2.62 & 0.03 & 0.21 & 0.64 & 0.00 + 5 & -0.83 & -2.51 & 0.03 & 0.16 & 0.50 & 0.00 + 6 & -0.84 & -2.51 & 0.03 & 0.16 & 0.50 & 0.00 + 7 & -0.82 & -2.33 & 0.02 & 0.20 & 0.58 & 0.00 + 8 & -0.90 & -2.42 & 0.03 & 0.22 & 0.59 & 0.00 + 9 & -0.78 & -1.89 & 0.02 & 0.44 & 1.08 & 0.01 + high & -0.75 & -1.33 & 0.01 & 1.04 & 1.91 & 0.02 + + low & -0.86 & -1.41 & 0.01 & 0.67 & 1.13 & 0.01 + 2 & -0.99 & -2.57 & 0.03 & 0.30 & 0.79 & 0.00 + 3 & -0.85 & -2.32 & 0.02 & 0.19 & 0.53 & 0.00 + 4 & -0.91 & -2.62 & 0.03 & 0.24 & 0.70 & 0.00 + 5 & -0.99 & -2.93 & 0.04 & 0.24 & 0.70 & 0.00 + 6 & -0.88 & -2.66 & 0.03 & 0.22 & 0.67 & 0.00 + 7 & -0.83 & -2.43 & 0.03 & 0.22 & 0.66 & 0.00 + 8 & -0.83 & -2.44 & 0.03 & 0.29 & 0.86 & 0.00 + 9 & -0.88 & -2.43 & 0.03 & 0.34 & 0.97 & 0.00 + high & -1.17 & -2.96 & 0.04 & 0.46 & 1.19 & 0.01 + boudoukh , j. , michaely , r. , richardson , m. and roberts , m. r. ( 2007 ) . on the importance of measuring payout yield : implications for empirical asset pricing_ the journal of finance _ , * 62 * ( 2 ) , 877915 .
media news reveals soft information about economic linkages between firms that is not immediately incorporated into stock prices. in this paper, we propose a novel measure for aggregate market risk based on a news network and news tones, namely the media connection index (mci). we show that the change of mci predicts a negative return with higher in-sample and out-of-sample performance than the average correlation index. in addition, our findings are statistically as well as economically significant even when we control for different economic predictors used in prior work. we also find this measure is capable of predicting cross-sectional stock returns sorted by b/m, size, momentum, industry, investment and profitability. further analysis shows that the predictability of our measure mainly comes from negative tones, which is consistent with earlier evidence. _jel classification_: g11, g12, g17. _keywords_: return predictability; media connection; network analysis; news sentiment; excess market return.

_"no man is an island, entire of itself; every man is a piece of the continent, a part of the main." john donne_

_"to develop a complete mind: study the art of science; study the science of art. learn how to see. realize that everything connects to everything else." leonardo da vinci_

economic linkages have been shown to have a strong effect on stock returns over the past few years. as noted in earlier work, "a corporate news event likely affects not only the firm at the centre of the particular development but also a number of other companies with economic ties to the firm". undoubtedly, this argument underscores the importance of economic linkages as a channel through which media news takes effect. however, what if the media news itself can form a linkage and serve as the intermediary for information to spread among firms? consider a piece of news that mentions two stocks at the same time, as is frequently the case in practice. this news not only conveys media coverage of each firm, it also informs us that there is a media linkage between the two stocks. this linkage, on the one hand, may indicate a possible price co-movement reflected by the soft information of news tones; on the other hand, it may also lead to investors' additional attention to the linked stocks. we summarize these two effects as soft-information comovement and investor-sentiment comovement. specifically, we ask how the market reacts to changes in media connection. here, we use the change of the media network index instead of the original index because of the news momentum effect: media news has continuation and may have lead-lag effects on stock returns. in this case, the change of the media network index is a more informative measure than the original one. importantly, we expect this index to predict future returns negatively. on the one hand, the media network reflects soft information about firm fundamentals and hence shows a similar effect to return-based comovement measures. according to this argument, if aggregate risk increases, then the discount rate for future expected cash flows on most assets, including the stock market, should increase as well. holding expected future cash flows constant, higher future expected returns induce an immediate fall in price, i.e., negative realized returns, or volatility feedback.
so, if stocks become more highly correlated due to an increase of media connection, volatility feedback generates negative stock returns on average. on the other hand, the media-news-based network index also shows a sentiment comovement effect. in this case, media connection reveals additional attention from investors, thus driving an additional sentiment effect on the corresponding stocks. for instance, given two stocks, a and b, without media connection, shareholder a (b) may only pay attention to stock a (b). now, when both firm a and firm b are mentioned by the same news article, this news may draw shareholder a's (b's) attention to stock b (a). due to the sentiment effect, this additional attention is asymmetric with respect to good news and bad news. taking shareholder a as an example, if the news carries a high sentiment on firm b, it makes shareholder a optimistic, and he can simply go long stock b to adjust his position. by contrast, because of short-sale constraints, a pessimistic investor cannot short firm b's stock when the news story shows a low sentiment on firm b. in this case, high media connection drives overpricing of stock b and hence a lower future return. building on this logic, we compose a novel media connection index based on the media network and news tones to model the change of stock comovement as a proxy for the change of the aggregate risk of the stock market. based on this media connection index, we find that the change of aggregate media connection based on news tones predicts a negative return with higher in-sample and out-of-sample performance than the average correlation index. in fact, we document monthly in-sample and out-of-sample r^2 of 2.53% and 5.14%, respectively, in ols predictive regressions. in addition, our findings are statistically as well as economically significant even when we control for different economic predictors used in the literature. we have also tested the performance of our index in predicting returns during the recession and expansion periods documented by the nber. the results show that our measure obtains a larger and positive r^2 in recession periods compared to the average correlation measure. further, we find that the predictability of our measure mainly comes from negative news tones, which is consistent with earlier evidence that negative tones are more informative. lastly, we test the portfolio implications of the media connection index across investors with various risk-aversion levels as well as different trading strategies. the empirical results show that the portfolios constructed based on the media connection index obtain larger sharpe ratios and certainty equivalent return gains than the average correlation index for mean-variance investors. the results also show that distressed stocks (high book-to-market ratio), stocks with high investment, and past winners are more predictable by media connections, which indicates strong predictive power as well. in this paper, we shed new light upon a different aspect of the media's role in return predictability. in the past decade, the literature that investigates the media's role in financial markets mainly examines how the pessimism revealed in news content is associated with stock prices. early work presents evidence that linguistic tone, especially negative tone, can predict market excess returns. subsequent studies further explore the cross-sectional predictability of returns by processing firm-specific news. similarly, others document a sector-specific reaction based on a distilled sentiment measure.
further improves by using a term weighting method of content analysis based on ols and naive bayes , and they also find significant return predictability of news articles . unlike these literature that focuses on extracting investors sentiment between the lines , our index is a risk measure based on news tones . we assume that the news tone co - movement provides information about future prices co - movement of stocks that connected by co - occurrence in news articles . we also contribute to the literature that studies risk measures for financial markets . proposed a risk measure by using average correlations of stock returns . they find this measure can predict positive monthly as well as quarterly market excess log returns . decomposed aggregate market variance into an average correlation component and an average variance component . they find the latter commands a negative price of risk in the cross - section of portfolios sorted by idiosyncratic volatility . proposed a measure for systemic risk by defining marginal expected shortfall with net equity returns . our measure distinguishes those risk measures by involving soft information and investor sentiment of media news and it confirms our conjectures by showing strong return predictability . lastly , we contribute to the literature on application of network analysis in financial studies . and find that economic links among certain individual firms and industries contribute to cross - firm and cross - industry return predictability . they interpret their results as evidence of gradual information diffusion across economically connected firms , in line with the theoretical model of . investigate the predictability of industry returns based on a wide array of industry interdependencies . most recent , propose a new method , tail - event driven network risk , to detect risk network . based on this measure , they provide direct evidence on tail event interdependencies of financial institutions . some follow - up empirical studies include , and . most related , have provide empirical evidence that media news reveals additional linkages among individual stocks . they find the lagged return of stocks in the linked group according to media news can predict subsequent return of other stocks within the same group . different from above literature , we are the first paper to construct the market wide network index and provide direct evidence on its market return predictability . the rest of the paper is organized as follows . in section [ secmedianetwork ] , we review the literature exploring media connections and media network in financial markets and explain why other measures may fail in capturing stock price co - movement . in section [ secmethod ] , we show our methodology for composing an aggregate measure of media connection which can overcome the deficit of other measures and describe our data sources . after that , we conduct some empirical tests and present our results in section [ secemprc ] . lastly , we conclude in section [ secconcl ] .
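the precise definition of the media connection index is given in the methodology section referenced above and is not reproduced here. as a purely illustrative sketch, one plausible way to aggregate tone-weighted co-mention links into a monthly index is shown below; the column names, the equal weighting of links and the monthly aggregation are assumptions made for the example rather than the paper's construction.

```python
import itertools
from collections import defaultdict
import pandas as pd

def media_connection_index(news: pd.DataFrame) -> pd.Series:
    """aggregate tone-weighted co-mention links into a monthly index.

    `news` is assumed to have columns:
      'date'    - publication date of the article
      'tickers' - list of stock tickers mentioned in the article
      'tone'    - article-level tone score (e.g., a negative-word share)
    the equal-weighted aggregation is an illustrative choice only.
    """
    tone_sum = defaultdict(float)
    link_count = defaultdict(int)
    for _, row in news.iterrows():
        month = pd.Period(row["date"], freq="M")
        # every pair of co-mentioned stocks contributes one tone-weighted link
        for a, b in itertools.combinations(sorted(set(row["tickers"])), 2):
            tone_sum[month] += row["tone"]
            link_count[month] += 1
    index = pd.Series({m: tone_sum[m] / link_count[m] for m in link_count})
    return index.sort_index()

# the predictor discussed in the text is the first difference of the index:
# delta_mci = media_connection_index(news_df).diff()
```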
experiments at the large hadron collider ( lhc ) will produce tremendous amounts of data . with instantaneous luminosities of and a crossing rate of 40 mhz , the collision rate will be about hz .but the rate for new physics processes , after accounting for branching fractions and the like , is of order hz , leading to the need to select events out of a huge data sample at the level of . what does this imply for the necessary scale of computing systems for an lhc experiment , and for the compact muon solenoid ( cms ) in particular ?the first run of the lhc in 2009 - 2010 is expected to be quite long , with six million seconds of running time .cms plans to record data at 300 hz , leading to datasets of 2.2 billion events , once dataset overlaps are accounted for . roughly as many events will be simulated .the size of the raw data from a single event is 1.5 mb ( and 2.0 mb for simulated data ) , already implying petabytes worth of raw data alone from just the first year of operations .all of this data must be processed ; detector data is reconstructed at a rate of 100 hs06-sec / event while simulated data is generated and reconstructed at 1000 hs06-sec / event .given these parameters , the cms computing model estimates that 400 khs06 of processing resources , 30 pb of disk and 38 pb of tape will be required to handle just the first year of cms data .cms has been developing a distributed computing model from the very early days of the experiment .there are a variety of motivating factors for this : a single data center at cern would be expensive to build and operate , whereas smaller data centers at multiple sites are less expensive and can leverage local resources ( both financial and human ) .but there are also many challenges in making a distributed model work , some of which are discussed here .the cms distributed computing model has different computing centers arranged in a `` tiered '' hierarchy , as illustrated in figure [ fig : tiers ] , with experimental data typically flowing from clusters at lower - numbered tiers to those at higher - numbered tiers .the different centers are configured to best perform their individual tasks .the tier-0 facility at cern is where prompt reconstruction of data coming directly from the detector takes place ; where quick - turnaround calibration and alignment jobs are run ; and where an archival copy of the data is made .the facility is typically saturated by just those tasks .there are seven tier-1 centers in seven nations ( including at fnal in the united states ) .these centers keep another archive copy of the data , and are responsible for performing re - reconstruction of older data with improved calibration and algorithms , and making skims of primary datasets that are enriched in particular physics signals .they also provide archival storage of simulated samples produced at tier-2 .there are about 40 tier-2 sites around the world ( including seven in the u.s . 
); they are the primary resource for data analysis by physicists , and also where all simulations done for the benefit of the whole collaboration take place .these centers thus host both organized and chaotic computing activities .( tier-2 centers are discussed further in section [ sec : t2descr ] ) .of course , the tevatron run ii experiments have also created computing systems of impressive scale .but computing for cms will be something still different .for instance , there will not be enough resources at any single location to perform all analysis ; cdf , by contrast , has approximately equal resources at fnal for reconstruction and analysis .cms in fact depends on large - scale dataset distribution away from cern for successful analysis computing . at cms, all re - processing resources will be remote .it is true that d0 does much of its re - processing off the fnal site , but this was put into place after other elements of the computing system were commissioned .most notably , the commissioning of the distributed computing model will be simultaneous with the commissioning of the cms detector , not to mention the search for new physics that is the object of the experiment . given the stresses that the system will face early on , we must take all steps possible to make sure that the system is ready before we have colliding beams .such a step is a recent exercise called the scale testing of the experimental program ( step ) .this was a multi - virtual organization ( vo ) exercise performed in the context of the worldwide lhc computing grid ( wlcg ) .the primary goal for the wlcg was to make sure that all experiments could operate simultaneously on the grid , and especially at sites that are shared amongst vo s .all of the lhc vo s agreed to do their tests in the first two weeks of june 2009 . for cms , step 09was not an integrated challenge .this way , downstream parts of the system could be tested independently of the performance of upstream pieces .the factorization of the tests made for a much less labor - intensive test , as cms also needed to keep focus on other preparations for data - taking , such as commissioning the detector through cosmic - ray runs , during this time .cms thus focused on the pieces of the distributed system that needed the greatest testing , and had the greatest vo overlap .these were data transfers from tier to tier ; the recording of data to tape at tier 0 ; data processing and pre - staging at tier 1 , and the use of analysis resources at tier 2 .the specific tests and their results are described below .data transfer is a key element of the cms computing model ; remote resources are of little use if data files can not be transferred in and out of them at sufficient rates for sites to be responsive to the evolving demands of experimenters .several elements of data transfer were tested in step 09 .tier-1 sites must archive data to tape at close to the rate that it emerges from the detector , if backups of transfers and disk space are to be avoided . in step 09 , cms exported data from tier 0 at the expected rates to the tier-1 sites for archiving .latencies were observed between the start of the transfer and files being written to tape , and in some cases these latencies had very long tails , with the last files in a block of files being written very long after the first files were .latencies were correlated with the state of the tape systems at the individual sites ; they were longer when there were known backlogs at a given site . 
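as a rough cross-check of the scales involved, the detector parameters quoted earlier (300 hz recording rate, 1.5 mb raw events, 2.2 billion events in the first run, 100 hs06-sec per event for reconstruction) can be combined in a short back-of-envelope script. dataset overlaps, the lhc duty cycle, reconstruction output and simulation are ignored here, so the numbers are indicative only and necessarily smaller than the full resource request.

```python
# back-of-envelope check of first-year data volume and tier-0 export rate,
# using only the parameters quoted in the text; everything else is ignored
trigger_rate_hz = 300.0
events_recorded = 2.2e9
raw_event_size_mb = 1.5
reco_cost_hs06_sec = 100.0

raw_volume_pb = events_recorded * raw_event_size_mb / 1e9        # mb -> pb
export_rate_mb_s = trigger_rate_hz * raw_event_size_mb           # while taking data
# cpu needed for a single prompt-reconstruction pass, averaged over one year
cpu_one_pass_khs06 = events_recorded * reco_cost_hs06_sec / (365 * 24 * 3600) / 1e3

print(f"raw data volume        : {raw_volume_pb:.1f} pb")
print(f"raw export rate (live) : {export_rate_mb_s:.0f} mb/s")
print(f"cpu for one reco pass  : {cpu_one_pass_khs06:.0f} khs06 sustained")
```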
while each tier-1 site only has custodial responsibility for a particular fraction of the entire reco - level sample , which contains the full results of event reconstruction , every tier-1 site hosts a full copy of the analysis - object data ( aod ) sample , which contains only a summary of the reconstruction .when a re - reconstruction pass is performed at a particular tier 1 , new aod s are produced for the fraction of the data that the site has archived , and then those particular aod s must be distributed to the other six sites .this results in substantial traffic among the seven sites .these transfers were tested in step 09 by populating seven datasets at the seven sites , with sizes proportional to the custodial fraction , and then subscribing these datasets to the six other sites for transfer .the total size of the dataset was 50 tb , and the goal was to complete all of the transfers in three days .an aggregate sustained transfer rate of 1215 mb / s was required to achieve that goal , and a rate of 989 mb / s was achieved .one interesting feature of these transfers was that it demonstrated the re - routing capabilities of the phedex transfer system .phedex attempts to route files over the fastest links available .if site a is the original source of a file and both sites b and c wish to acquire them , then if b gets a file from a before c does , and the network link between b and c is faster than that between a and c , then site c will obtain the file from site b rather than the originating site a. this is illustrated in figure [ fig : reroute ] , which shows which sites serve as the sources for files that were originally at asgc in taiwan , the tier-1 site that is furthest from all the others in cms . in the early stages of the transfer , asgc is the only source of the files .but once the files have started to arrive in europe , other european tier-1 s start to get the files from their nearest neighbors rather than asgc . in the end , only about half of the transfers of the asgc dataset actually originated at asgc .cms is learning how to best take advantage of such behavior .finally , transfers from tier-1 to tier-2 are important for getting data into the hands of physicists .these transfers typically involve pulling data off tape at the tier-1 site so that disk - resident files can then be copied to tier-2 disk pools .step 09 testing of these transfers focused on stressing tier-1 tape systems by transfering files that were known not to be on disk at the originating sites .in general , the target transfer rates were achieved , with the expected additional load on tape systems observed .one interesting feature that was observed is shown in figure [ fig : t1t2 ] for the case of two datasets being transferred from the tier-1 site at ral in the uk to a nearby tier-2 site .both datasets were brought to disk pretty quickly , and the first dataset was mostly transferred after that .however , the transfer of that dataset was stalled for a while as the second dataset was transferred in its entirety . since only complete blocks of files are visible to cms jobs, the first dataset was probably not in a useable state while the second dataset was being transferred .cms is studying techniques to avoid such issues .the primary responsibility of the tier-0 facility is to do a first pass reconstruction of the raw data , and then to save an archival copy of the raw data and the reconstruction output . 
in step 09, cms stressed the tier-0 tape system by running i/o-intensive jobs at the same time that other experiments ran similar jobs. could cms archive data to tape at sufficient rates while other experiments were doing the same? "sufficient" is hard to define, as the 50% duty cycle of the lhc allows time to catch up between fills. cms estimated that a 500 mb/s tape-writing rate would be sufficient. the tape-writing test schedule was constrained by the need to handle real detector data from cosmic-ray runs during the step 09 period, leading to two test periods of four and five days. the results are shown in figure [fig:t0write]. in both periods, the target rate was easily exceeded, even with atlas also writing at a high rate during one of the periods. the only problem that was encountered was the limited amount of monitoring information for the tier-0 facility. the tier-1 sites hold custodial copies of datasets, and will be re-reconstructing those events multiple times. in 2010, cms expects to do three re-processing passes that will take four months each. in the early stages of the experiment, when data sizes are small, all of the raw data and several versions of the reconstruction will fit onto disk pools at the tier-1 sites, making for efficient processing. but as the collected dataset gets bigger, it will have to be staged from tape to disk for re-processing. this is potentially inefficient; one would not want to have re-processing jobs occupying batch slots while waiting for file staging. thus some pre-staging scheme is required to maximize cpu efficiency. pre-staging has never been tested by cms on this scale or with such coordination. step 09 exercises at tier 1 investigated the pre-stage rates and stability of the tape systems, and the ability to perform rolling re-reconstruction. a rolling re-processing scheme was established for the exercise. on day 0, sites pre-staged from tape to disk an amount of data that could be re-reconstructed in a single day. on day 1, that data was processed while a new batch of data was pre-staged. on day 2, the day-0 data was purged from disk, the day-1 data was processed, and new data was again pre-staged. this was repeated throughout the exercise period. how much data was processed varied with the custodial fraction at each site. cms does not yet have a uniform way of handling pre-staging within the workload management system. three different implementations emerged across the seven tier-1 sites. all three worked, and the experience gained will be used to design a uniform pre-staging system for long-term use. the target pre-staging rates for each site are given in table [tab:t1stage]. also shown are the best one-day average rates that were achieved during the exercise.
as can be seen, all sites were able to achieve the targets, although there were some operational problems during the two weeks. the fzk tape system was unavailable at first, and the performance was not clear once it was available. in2p3 had a scheduled downtime during the first week of step 09. the large rates required at fnal triggered problems at first that led to a backlog, but these were quickly solved. table [tab:t1stage] lists the target and best achieved pre-staging rates at the tier-1 sites during step 09. the re-processing operations ran quite smoothly. a single operator was able to submit many thousands of jobs per day using glide-in pilots, as shown in figure [fig:t1proc]. (note the effect of the backlog at fnal mentioned above.) there was no difficulty in getting the pledged number of batch slots from sites, and fair-share batch systems appeared to give each experiment the appropriate resources. the efficiency of the re-processing jobs is reflected in the ratio of cpu time consumed by the jobs to the wall-clock time that the job spends using a batch slot. this ratio should be near one if jobs are not waiting for files to come off tape. figure [fig:t1eff] shows the efficiency for jobs on a typical step 09 day. efficiency varies greatly across sites, which bears more investigation. however, pre-staging, which was used here, is generally observed to greatly improve the efficiency. the cms data analysis model depends greatly on the distribution of data to tier-2 sites and the subsequent submission of analysis jobs to those sites. we review those elements of the model here. in cms, analysis jobs go to the data, and not the other way around, so it is important to distribute data for the most efficient use of resources. the nominal storage available at a tier-2 site is 200 tb; with about 40 functional tier-2 sites, this is a huge amount of storage that must be partitioned in a sensible way. at each site, the available disk space is managed by different parties, ranging from the central cms data-operations group to large groups of users to individual users, leading to a mix of central and chaotic control. a small amount of disk, about 10 tb, is set aside as staging space for centrally-controlled simulation production. 30 tb at each site is designated as centrally controlled; cms will place datasets of wide interest to the collaboration in this space. another 30-90 tb of space, divided into 30 tb pieces, is allocated to individual physics groups in cms for distribution and hosting of datasets that are of greatest interest to them. there are 17 such groups in cms. currently no site supports more than three groups and no group is affiliated with more than five sites; the seven u.s. tier-2 sites support all 17 groups. as a result, there are a manageable number of communication channels between sites and groups, making it easier to manage the data placement across the far-flung sites. the remainder of the space at a tier-2 site is devoted to local activities, such as making user-produced files grid-accessible. cms physicists must then be able to access this data. all analysis jobs are submitted over the grid. to shield the ordinary user from the underlying complexity of the grid, cms has created the cms remote analysis builder (crab).
a schematic diagram of how the grid submission of an analysis job works is shown in figure [ fig : workflow ] .a user creates a crab script that specifies the job , including the target dataset and the analysis program .the user submits the script to a crab server with a one - line command .the server then determines where the dataset is located .the dataset in question could be either an official cms dataset , or one created by a user that is resident at a tier-2 site .the job is then submitted by the server to the appropriate site through the grid for processing .if the user is creating significant output , that output can be staged to the user s local tier-2 site , and the files can be registered in the data management system for processing by a future crab job . needless to say , many elements of the system must succeed for the user to have a successful job .those of greatest concern at the moment are the scaling of grid submissions , data integrity at the tier-2 sites , and reliability and scaling issues for stageout of user output .50% of the pledged processing resources at tier-2 sites are targeted for user analysis . at the moment , this is about 8,000 batch slots .the primary goal of step 09 tests at tier 2 was to actually fill that many slots .figure [ fig : t2slots ] shows the number of running jobs per day at the tier-2 sites before and during step 09 .all types of jobs that ran at the sites are indicated simulation production , normal analysis run by users throughout cms , and the extra analysis jobs that were submitted for the exercise . between normal and step09 analysis jobs , the pledged analysis resources were more than saturated , with no operational problems at the sites .this apparent spare capacity suggests that cms could be making better use of the analysis resources .indeed , in the month before step 09 , only five out of 48 sites were devoting more than 70% of their analysis resources to analysis jobs . during step 09 ,33 sites did so .this bodes well for the onslaught of user jobs that we expect when lhc data - taking begins .the step 09 jobs all read data from local disk at the sites , but did not stage out any output , so the stageout elements of the analysis model were not tested .the majority of sites handled the step 09 jobs perfectly , as indicated in figure [ fig : t2success ] .the overall success rate for jobs was 80% .90% of the job failures were due to file read errors at the sites , which indicates a clear area that needs improvement .however , this indicates that the bulk of the problems happened after jobs reached the sites , rather than during the grid submission .this would not have been true just a few years ago .the step 09 exercise allowed us to focus on specific key areas of the computing system in a multi - vo environment data transfers between tiers , the use of tape systems at tier 0 and tier 1 , and data analysis at tier 2 .most of the tier-1 sites showed good operational maturity .some may not yet have deployed all of the resources that will be needed at lhc startup this fall , but there are no indications that they will have any problem scaling up . 
not all tier-1 sites attained the goals of the tests; specific tests will be re-run after improvements are made. the tests of analysis activities at tier 2 were largely positive. most sites were very successful, and cms easily demonstrated that it can use resources beyond the level pledged by sites. if anything, there are indicators that some resources could be used more efficiently. a number of open questions remain:
* the first run of the lhc will be longer than originally imagined. what are the operational impacts?
* if the lhc duty cycle is low at the start, there will be pressure to increase the event rate at cms, possibly to as high as 2000 hz from the nominal 300 hz, and to overdrive the computing systems. will it work?
* datasets will be divided into streams on the basis of triggers for custodial storage at the various tier-1 sites. this will allow re-processing to be prioritized by trigger type, but will the local interests at each tier-1 site be satisfied by the set of triggers it stores?
* read errors were the leading problem in the tier-2 analysis tests. what can be done to make disk systems more reliable and maintainable?
* the current system for remote stageout will not scale. what will?
* during a long run, will we be able to keep multiple copies of reco-level data available at tier-2 sites? if not, how will people adjust?

see http://public.web.cern.ch/public/en/lhc/lhc-en.html for many details.
r. adolphi et al. (cms collaboration), "the cms experiment at the cern lhc," journal of instrumentation 3, s08004 (2008).
m. michelotto, "a comparison of hep code with spec benchmark on multicore worker nodes," proceedings of computing in high energy physics (chep09), prague, czech republic (2009).
c. grandi, d. stickland, l. taylor et al., "the cms computing model," cern-lhcc-2004-035 (2004).
j. knobloch, l. robertson et al., "lhc computing grid technical design report," cern-lhcc-2005-024 (2005).
r. egeland, t. wildish, and s. metson, "data transfer infrastructure for cms data taking," pos(acat08)033.
m. corvo et al., "crab, a tool to enable cms distributed analysis," proceedings of computing in high energy physics (chep06), mumbai, india (2006).
each lhc experiment will produce datasets with sizes of order one petabyte per year . all of this data must be stored , processed , transferred , simulated and analyzed , which requires a computing system of a larger scale than ever mounted for any particle physics experiment , and possibly for any enterprise in the world . i discuss how cms has chosen to address these challenges , focusing on recent tests of the system that demonstrate the experiment s readiness for producing physics results with the first lhc data .
the band structure of a material represents the relation between wavenumber ( or wave vector ) and frequency , and thus it relates the spatial and temporal characteristics of wave motion in the material .this relation is of paramount importance in numerous disciplines of science and engineering such as electronics , photonics and phononics . it is well - known that periodic materials exhibit gaps in the band structure , referred to as _ band gaps _ or _stop bands_. in these gaps , waves are attenuated whereby propagating waves are effectively forbidden .their defining properties are the frequency range , i.e. , position and width , as well as the depth in the imaginary part of the wavenumber spectrum , which describes the level of attenuation . in the realm of elastic wave propagation ,band gaps are usually created by two different phenomena : bragg scattering or local resonance .bragg scattering occurs due to the periodicity of a material or structure , where waves scattered at the interfaces cause coherent destructive interference , effectively cancelling the propagating waves .research on waves in periodic structures dates , at least , back to newton s attempt to derive a formula for the speed of sound in air , see e.g. , chapter 1 in ref . for a historical review before the 1950 s .later review papers on wave propagation in periodic media include refs . .the concept of local resonance is based on the transfer of vibrational energy to a resonator , i.e. , a part of the material / structure that vibrates at characteristic frequencies . within structural dynamics , the concept dates back , at least , to frahm s patent application and since , dynamic vibration absorbers and tuned mass dampers have been areas of extensive research within structural vibration suppression . in the field of elastic band gaps ,the concept of local resonance is often considered within the framework of periodic structures , as presented in the seminal paper of liu et al ., where band - gaps are created for acoustic waves using periodically distributed internal resonators .the periodic distribution of the resonators does not change the local resonance effects , however it does introduce additional bragg scattering at higher frequencies , as well as allow for a unit - cell wave based description of the medium .local resonance has also been used in the context of attaching resonators to a continuous structure , such as a rod , beam or a plate in order to attenuate waves by creating band gaps in the low frequency range .a problem with this approach in general , which has severely limited proliferation to industrial applications , is that the resonators need to be rather heavy for a practically significant gap to open up .another means for creating band gaps is by the concept of inertial amplification ( ia ) as proposed by yilmaz and collaborators in refs . .in this approach , which has received less attention in the literature , inertial forces are enhanced between two points in a structure consisting of a periodically repeated mechanism .this generates _ anti - resonance _ frequencies , where the enhanced inertia effectively cancels the elastic force ; see e.g. ref . 
where two levered mass - spring systems are analysed for their performance in generating stop bands .while it is possible to enhance the inertia between two points by means of masses , springs and levers , a specific mechanical element , _ the inerter _ , was created as the ideal inertial equivalent of springs and dampers , providing a force proportional to the relative acceleration between two points .this concept , while primarily used in vehicle suspension systems , has been utilized in refs . in the context of generating band gaps in lattice materials by inertial amplification where the same underlying physical phenomenon is used for generating the anti - resonance frequencies .the frequency responses to various harmonic loadings were obtained , numerically and experimentally , and low - frequency , wide and deep band gaps were indeed observed for these novel lattice structures . in ref ., size and shape optimization is shown to increase the band gap width further , as illustrated by a frequency - domain investigation .until now , both inerters and inertial amplification mechanisms have been used as a backbone structural component in discrete or continuous systems . in this paper, we propose to use inertial amplification to generate band gaps in conventional continuous structures , by attaching light - weight mechanisms to a host structure , such as a rod , beam , plate or membrane , without disrupting its continuous nature ( therefore not obstructing its main structural integrity and functionality ) . with this approach, we envision the inertial amplification effect to be potentially realized in the form a _ surface coating _ , to be used for sound and vibration control . for proof of concept, we consider a simple one - dimensional case by analyzing an elastic rod , with an inertial amplification mechanism periodically attached .the mechanism is inspired by that analyzed in ref ., however the application to a continuous structure increases the practicality and richness of the problem considerably and several novel effects are illustrated .our investigation focuses mainly on the unit - cell band - structure characteristics .however , we also compare our findings from the analysis of the material problem to transmissibility results for structures comprising a finite number of unit cells .the finite systems are modelled by the finite - element ( fe ) method .in order to utilize the concept of inertial amplification in a surface setting as proposed , the mechanisms should be much smaller than the host structure , such that their distributed attachment does not change the main function of the structure , nor occupy a significant amount of space .fulfilling this constraint requires a relatively large effect with only a modest increase in mass .considering the ideal mechanical element , the _ inerter _ , we know that the factor of proportionality , the inertance , can be much larger than the actual mass increase , as demonstrated experimentally in refs . .we propose to utilize the same effect using a mechanism similar to the one considered in refs . .our two - dimensional interpretation of the system may be viewed as a plate with a comparably small inertial amplification mechanisms distributed over the host - structure . in principle, the distributed effect of the mechanisms , in the long wave - limit , reduces to the notion of an inertially - modified constitutive relation in the elastodynamic equations . 
in this study, we restrict ourselves to a one-dimensional structure with an inertial amplification mechanism attached, as illustrated in fig. [fig:1d], where the mechanism is attached to the rod with bearings. a similar bearing is used at the top connection, such that, ideally, no moment is transferred through the mechanism. this ensures that the connecting links do not deform, but move the amplification mass by rigid-body motion. the 1d system is simplified further to a hybrid model consisting of a continuous, elastic bar and a discrete mechanism, as seen in fig. [fig:hybrid], as this allows for a rigorous analytical formulation of the underlying dynamics. the bar has young's modulus, cross-sectional area, mass density and unit-cell length, while the mechanism is characterized by the amplification mass and the amplification angle. in fig. 3, heavy lines indicate rigid connections and the corners between vertical and inclined rigid connections are moment-free hinges; hence the motion of the amplification mass is governed by the motion at the attachment points and the amplification angle. it is noted that the model in fig. [fig:hybrid] will be unaffected by the mechanism in the static limit. this is in agreement with the desire to produce a force that is proportional to the relative acceleration, thus changing the effective inertia of the system. from a physical standpoint, any increase in static stiffness of the mechanism would arise from frictional stiffness in the bearings or at the top point; however, it is outside the scope of this work to include these residual stiffness effects, among other things, since they are assumed to be small. the inertial amplification model in fig. [fig:hybrid] assumes rigid connections between the rod and the mechanism. should the connections be flexible, as illustrated in fig. [fig:localsystem], the mechanism can work both as a local resonator and as an inertial amplifier, depending on the specific parameters of the system. the local-resonance (lr) system in fig. [fig:localsystem] recovers the ideal inertial amplification system of fig. [fig:hybrid] in the limit of rigid (infinitely stiff) connections. the inertial amplification system in fig. [fig:hybrid] is the main system investigated in this paper, while the system including the connection flexibility is used for two purposes: to illustrate that inertial amplification can occur even when connections are non-ideal, i.e., flexible, and to compare the behaviour of the inertial amplification mechanism to an equivalent, local-resonator-type system. all the analytical formulations in this paper are based on the differential equation of a rod, $A\,\partial\sigma/\partial x + f = \rho A\,\partial^2 u/\partial t^2$, where $u$ is the longitudinal displacement and $\sigma$ is the normal stress, while $\rho$, $A$ and $f$ denote the mass density, cross-sectional area and distributed body force. the body force will not be present in the material problem formulation considering infinite domains. the rod is considered to be homogeneous; however, a layered rod would pose no additional difficulty in terms of the transfer matrix method described in section [ssec:transfermatrix], since the method is applicable to layered materials.
the rod is further assumed to be linear elastic with infinitesimal strains, which provides the constitutive relation $\sigma = E\varepsilon$, where $\varepsilon = \partial u/\partial x$ is the longitudinal strain in the rod. before considering the hybrid systems in figs. [fig:hybrid] and [fig:localsystem], the mechanisms are considered in isolation with constraint forces applied to account for the rod. these constraint forces are determined in terms of the mechanism parameters, whereby the effect of the mechanism on the rod is given in terms of these constraint forces. considering the isolated inertial amplification mechanism in fig. [fig:mechanismia], the motions of the amplification mass can be determined in terms of the end displacements and the amplification angle. in the appendix, the full non-linear kinematic relations are derived; in this paper we consider the linearized version, as seen in ref. as well. considering the inertial amplification mechanism in fig. [fig:mechanismia], with the applied constraint forces, the constrained coordinates correspond to the longitudinal motion of the rod at the attachment points. using lagrange's equations, the governing equations for the mechanism are found (eq. [eq:constraints]). assuming harmonic motion, we obtain eq. [eq:constforceia], where the frequency-domain constraint forces are expressed through the dynamic stiffness parameters defined there. next, the constraint forces for the local-resonator-type system in fig. [fig:localsystem] are determined. considering the isolated mechanism in fig. [fig:localmechanism], the constraint forces are applied at the constrained coordinates, while the remaining coordinates are free. the governing equations are again found by lagrange's equations; assuming harmonic motion provides the frequency-domain constraint forces in terms of the constraint coordinates, with the dynamic stiffness parameters defined in eq. [eq:dynstifflr] and the two local resonance frequencies defined by eq. [eq:lrfreq]. the first of these corresponds to the out-of-phase mode while the second corresponds to the in-phase mode of the mechanism. in the out-of-phase mode, the mass oscillates in a direction orthogonal to that of the end displacements, thus also benefiting from the inertial amplification effect. the in-phase mode, on the other hand, corresponds to that of a standard local resonator, since there is no relative acceleration between the attachment points. it is noted that the dynamic stiffness coefficients for the inertial amplification system are recovered from these equations in the limit of rigid connections. in order to characterize the effects of the inertial amplification mechanism, the band structure of an infinite array of hybrid rod-mechanism systems is determined using the transfer matrix method. the method has its origins within electrodynamics and optics, but has been widely used within elastic wave propagation. the method is briefly described in section [ssec:transfermatrix], with a focus on the specific extension required for the particular unit cell considered here.
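the kinematic relations and the resulting constraint forces are given by eqs. [eq:constraints]-[eq:constforceia], which are not reproduced in the text above. as an illustration only, the short symbolic computation below uses the linearized kinematics commonly quoted for this type of mechanism (axial motion of the amplified mass equal to the mean of the end displacements, transverse motion equal to half the relative displacement amplified by the cotangent of the amplification angle); whether these match the paper's equations exactly cannot be verified here, so treat the result as a sketch of the derivation route rather than the paper's formulas.

```python
import sympy as sp

t = sp.symbols('t', real=True)
m_a, phi = sp.symbols('m_a phi', positive=True)
u1, u2 = sp.Function('u1')(t), sp.Function('u2')(t)

# assumed linearized kinematics of the amplified mass for a symmetric two-link
# mechanism with amplification angle phi (an assumption, see the text above)
x = (u1 + u2) / 2
y = -(u2 - u1) * sp.cot(phi) / 2

# kinetic energy of the amplified mass; no potential energy in the ideal case
T = sp.Rational(1, 2) * m_a * (sp.diff(x, t)**2 + sp.diff(y, t)**2)

# lagrange's equations give the constraint forces exerted on the rod at the
# two attachment points
F1 = -sp.diff(sp.diff(T, sp.diff(u1, t)), t)
F2 = -sp.diff(sp.diff(T, sp.diff(u2, t)), t)
print(sp.simplify(F1))  # -(m_a/4)*((1+cot(phi)**2)*u1'' + (1-cot(phi)**2)*u2'')
print(sp.simplify(F2))
```

under harmonic motion the accelerations contribute factors of $-\omega^2$, which is how frequency-dependent dynamic stiffness parameters of the kind described above arise; the $\cot^2$ factor grows rapidly for small amplification angles, which is the amplification effect itself.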
before describing the general transfer-matrix methodology, we consider a simplified unit cell in section [ssec:receptance] in order to shed light on the band-opening mechanism. we do this by a _receptance approach_, where we determine the displacement at one end of a single unit cell when applying harmonic forcing at the other end. the anti-resonance frequencies can then be determined as those frequencies with zero receptance for any forcing magnitude (zeros). these anti-resonance frequencies are shown to be the points of maximum attenuation in the infinite system, and are thus relevant quantities for maximum-attenuation design. consider the simplified rod-mechanism system in fig. [fig:hybrid] with free ends and a harmonic point force applied at one end. both the applied and constraint forces can be included via the boundary conditions to the rod differential equation. with harmonic forcing, the linear response will be harmonic as well, with wavenumber $\kappa_b = \omega/c_b$ and wave speed $c_b = \sqrt{E/\rho}$ in the homogeneous rod. the solution is a sum of forward and backward travelling waves whose two amplitude constants are determined by the boundary conditions, given by force equilibria at both ends. utilizing the constitutive relation, the force equilibria, with the constraint forces of eq. [eq:constforceia] and the travelling-wave solution inserted, can be expressed in the matrix form $\mathbf{A}(\omega)\,\mathbf{x} = \mathbf{f}$, where $\mathbf{x}$ collects the two wave amplitudes and $\mathbf{f}$ contains the applied forcing. solving for the two amplitudes provides the solution for the displacement, from which the receptance function follows. the displacement at the non-forced end is obtained directly, whereby the anti-resonance frequencies can be determined as the frequencies at which this displacement vanishes for any forcing magnitude. this transcendental equation is solved numerically for any desired number of anti-resonance frequencies. an approximation for the first anti-resonance frequency is found in the long-wavelength limit, i.e. in the sub-bragg regime; it is essentially the same as the anti-resonance frequency of a discrete system as presented in ref., with an effective spring stiffness representing the rod. hence discretizing the rod as a spring-mass system would provide the semi-infinite gap presented in the mentioned reference. the added complexity from the rod is illustrated by the higher roots of the transcendental equation and will be apparent from the band structures calculated in section [sec:results]. the transfer matrix method is based on relating the state variables of a system across distances and interfaces, successively creating a matrix product from all the "sub" transfer matrices, forming the _cumulative transfer matrix_. consider the hybrid continuous-discrete rod-mechanism system illustrated in fig. [fig:hybrid], where the rod is modelled as a continuum and the mechanism is modelled by discrete elements. the transfer matrix for the unit cell is based on the host medium, i.e., the rod, representing the effects of the mechanism by point force matrices at the two attachment points. the state variables for the rod are the longitudinal displacement and the normal stress. dividing the system into three layers separated at the attachment points, the solution for the longitudinal displacement in layer $j$ can be written as a sum of forward and backward travelling waves with amplitudes $b_j^{(+)}$ and $b_j^{(-)}$.
using the linear elastic constitutive relation for the stress, the state variables are expressed as
$$\mathbf{z}_j(x) = \begin{bmatrix} u_j \\ \sigma_j \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ Z & -Z \end{bmatrix} \begin{bmatrix} b_j^{(+)} e^{i\kappa_b x} \\ b_j^{(-)} e^{-i\kappa_b x} \end{bmatrix} = \mathbf{H} \begin{bmatrix} b_j^{(+)} e^{i\kappa_b x} \\ b_j^{(-)} e^{-i\kappa_b x} \end{bmatrix}, \qquad \text{[eq:transferbegin]}$$
thus defining the $\mathbf{H}$-matrix, where $Z = i\kappa_b E$. relating the state variables at either end of a homogeneous layer of length $l_j$ yields
$$\mathbf{z}_j(x^{j,l} + l_j) = \mathbf{H} \begin{bmatrix} e^{i\kappa_b l_j} & 0 \\ 0 & e^{-i\kappa_b l_j} \end{bmatrix} \begin{bmatrix} b_j^{(+)} e^{i\kappa_b x^{j,l}} \\ b_j^{(-)} e^{-i\kappa_b x^{j,l}} \end{bmatrix} = \mathbf{H}\mathbf{D}_j \begin{bmatrix} b_j^{(+)} e^{i\kappa_b x^{j,l}} \\ b_j^{(-)} e^{-i\kappa_b x^{j,l}} \end{bmatrix}, \qquad \text{[eq:phasematrix]}$$
defining the "phase matrix" $\mathbf{D}_j$. the coordinate at the left end of layer $j$ is denoted $x^{j,l}$. solving eq. [eq:transferbegin] for the vector of amplitudes and inserting into eq. [eq:phasematrix] defines the transfer matrix for layer $j$, $\mathbf{T}_j = \mathbf{H}\mathbf{D}_j\mathbf{H}^{-1}$. having defined the transfer matrices $\mathbf{T}_j$, we turn to the constraint forces at the attachment points of the mechanism. we base our derivation of the point force matrices on a frequency-domain force equilibrium at the attachment points. considering the first attachment point, it is noted that the force balance there depends on the displacement at the second attachment point; using the transfer matrix for layer 2, that displacement is expressed in terms of the state vector at the first point, which, along with the continuity requirement, yields the force equilibrium whereby the point force matrix relating the state vector at the right end of layer 1 to that at the left end of layer 2 can be identified,
$$\mathbf{z}_2^{\,l} = \widehat{\mathbf{P}}_1\,\mathbf{z}_1^{\,r}. \qquad \text{[eq:pointforce1]}$$
using a similar approach at the second attachment point provides the point force matrix $\widehat{\mathbf{P}}_2$ (eq. [eq:pointforce2]), which allows for relating the state vector at the right end of the unit cell to the state vector at the left end through the cumulative transfer matrix, $\mathbf{T} = \mathbf{T}_3\,\widehat{\mathbf{P}}_2\,\mathbf{T}_2\,\widehat{\mathbf{P}}_1\,\mathbf{T}_1$. the present framework is fully compatible with the local-resonator-type system described in section [sec:model] by changing the dynamic stiffness parameters in the point-force matrices to those of eqs. [eq:dynstifflr], rather than those defined for the inertial amplification mechanism. finally, it is noted that when the internal distance approaches zero, the point force matrices for the inertial amplification system approach that of an attached point mass, while the local-resonance point force matrices approach that of an attached local resonator with the corresponding resonance frequency, recovering the expected limits. with the cumulative transfer matrix for a unit cell determined, the floquet-bloch theorem for periodic structures is used to relate the state vectors at either end of the cell through a phase multiplier, $\mathbf{z}^{\,r} = e^{iqa}\,\mathbf{z}^{\,l}$, where $q$ is the wavenumber for the periodic material and $a$ is the unit-cell length. combining this with the cumulative transfer matrix yields a frequency-dependent eigenvalue problem in $e^{iqa}$, $\left[\mathbf{T}(\omega) - e^{iqa}\,\mathbf{I}\right]\mathbf{z}^{\,l} = \mathbf{0}$, whereby the band structure of our periodic material system is determined within a $q(\omega)$-formulation.
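a compact numerical sketch of this procedure is given below. since the mechanism's dynamic stiffness parameters are not reproduced above, the point-force matrix of the attached-point-mass limit (mentioned at the end of this section) is used as a stand-in, and the material and geometric values are illustrative; replacing the matrix p with the full point-force matrices of eqs. [eq:pointforce1]-[eq:pointforce2] would recover the actual system.

```python
import numpy as np

# band structure of a periodic rod with a point attachment at mid-cell, via the
# transfer matrix method; the attached point mass is the limiting case of the
# mechanism's point-force matrices, and all numerical values are illustrative
E, rho, A = 70e9, 2700.0, 2.5e-3     # young's modulus, density, cross section
L = 0.1                              # unit-cell length [m]
m = 0.1                              # attached mass [kg]

def cumulative_T(omega):
    kb = omega * np.sqrt(rho / E)                      # rod wavenumber
    Z = 1j * kb * E                                    # H-matrix stress factor
    H = np.array([[1, 1], [Z, -Z]])
    Hinv = np.linalg.inv(H)
    def layer(l):                                      # T_j = H D_j H^{-1}
        D = np.diag([np.exp(1j * kb * l), np.exp(-1j * kb * l)])
        return H @ D @ Hinv
    # point-force matrix of an attached mass: the stress jumps by -omega^2*m*u/A
    P = np.array([[1.0, 0.0], [-omega**2 * m / A, 1.0]])
    return layer(L / 2) @ P @ layer(L / 2)

freqs = np.linspace(10.0, 40e3, 1000)                  # [hz]
q = np.empty((freqs.size, 2), dtype=complex)
for i, f in enumerate(freqs):
    lam = np.linalg.eigvals(cumulative_T(2 * np.pi * f))
    q[i] = -1j * np.log(lam) / L                       # bloch wavenumbers q(omega)
# propagating bands: imag(q) ~ 0; band gaps: imag(q) != 0 (attenuation depth)
```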
in this section, the band structure is calculated for the systems in figs. [fig:hybrid] and [fig:localsystem] as well as a standard local resonator configuration, with primary focus on the inertial amplification system. the mechanism is attached to an aluminum rod whose young's modulus, density, width, height and length are given in table [tab:mainbar] (parameters of the host rod), along with the equivalent mass and stiffness parameters and the first natural frequency. the dashed lines represent the gap frequencies for the local-resonator system. the first line represents the local resonance frequency, which decreases with unit-cell size for the parameter scaling considered, while the subsequent lines represent the bragg-gap limits. for this particular mass ratio, the actual gaps are very narrow; thus they are covered entirely by the line representation in fig. [fig:unitcellsize]. either way, the conclusion is that for realistic and comparable parameters, the local-resonance system can provide wave attenuation at lower frequencies than the inertial-amplification system as the unit-cell size decreases. this is a clear advantage of the classical locally resonant systems from the point of view of constraints on unit-cell size. in this section the band-structure results from section [sec:results] are compared to transmissibility results for finite systems comprising a certain number of unit cells, illustrating that the material results are representative in a structural setting as well. the transmissibilities are calculated from a finite-element model of the continuous-discrete system in fig. [fig:hybrid], using standard linear elements to discretize the continuous rod, with stiffness and mass matrix contributions from the mechanism at the appropriate nodes. the simple fe model is compared to an fe model created in the commercial software abaqus, using 3d beam elements. using the fe implementation to model the 1d finite array illustrated in fig. [fig:finitearray], with the number of unit cells denoted $n$, the transmissibility is expressed as the natural logarithm of the ratio between the output and input displacements divided by the number of unit cells. hence the transmissibility expresses the wave propagation/decay per unit cell, and is thus comparable to the dispersion curves. the transmissibilities are calculated for the same aluminum rod as considered in section [sec:results], using the same rod parameters from table [tab:mainbar]. we consider the case where the relative mechanism length is equal to the full unit-cell length. figure [fig:femcomparefull] compares the band structures from the infinite systems to the transmissibilities calculated for two array lengths and two values of the mass ratio, illustrating the branching point and the double dip corresponding to the single and double peak in the attenuation profiles, respectively. the grey areas correspond to band gaps predicted by the infinite model. it is noted that some boundary effects exist, i.e., the finite systems have resonances within band gaps, which is due to the symmetry breaking of the system.
in spite of the boundary effects, the comparison shows that the infinite-system properties carry over to the finite case and, perhaps equally important, that within the band gaps the curves for the imaginary part of the wavenumber and the transmissibility have similar shapes and magnitudes. this is expected from the exponentially decaying behaviour of waves within the band gaps, but it does illustrate the design potential for finite structures obtained by just considering the shape of the band structure within the gaps calculated for infinite structures. the fe implementation of the hybrid rod-mechanism system is tested against an implementation of the finite system in the commercial fe software abaqus. the abaqus model is created as a 3d deformable wire model, using three-dimensional beam elements for both rod and mechanism. the mechanisms are distributed both above and below the main structure to have equal but opposite transverse force components from the mechanisms. this is necessary to avoid bending phenomena, and may easily be implemented in an experimental setting as well. the rigid connecting links are modelled by assigning a very large young's modulus and a very low density to the elements. the ideal connections are modelled using translatory constraints to connect the rigid connectors to the bar. hence the abaqus model is used to illustrate the phenomena in a finite setting without obstructing the results with, for the present purpose, unnecessary complexities. indeed, it is the subject of a future research paper to investigate more realistic models of the physical configuration in fig. [fig:1d] both numerically and experimentally. the abaqus model is created with the general rod parameters seen in table [tab:mainbar] and the general mechanism parameters. an illustration of the created abaqus model is seen in fig. [fig:abqcomparison_doublerigid_mu_10](a). figure [fig:abqcomparison_doublerigid_mu_10](b) shows the comparison between the transmissibilities calculated by the 1d fe implementation of the rod-mechanism system and the 3d fe implementation in abaqus, respectively, for the given mass ratio. comparing both maximum attenuation frequencies and gap limits, the transmissibility predicted by abaqus matches rather well, especially for the first gap. concerning the deviations in the transmissibilities, it is worth considering the case of gap coalescence which, as seen in fig. [fig:attachmentvariation], is a rather "singular" phenomenon.
as expected , using the `` coalescence - parameters '' predicted by the analytical model does not cause the gaps to coalesce in the 3d model .figure [ fig : abqcomparison_doublerigid_coalescence ] shows the transmissibility comparison for the analytically predicted coalescence - parameters and the ones found by inverse analysis in abaqus .this pass band could be detrimental for design if not taken into account , since the resonances are so closely spaced .hence , designing for gap - coalescence should be done with care , as mentioned in sec .[ ssec : num_attachment ] .we have investigated the wave characteristics of a continuous rod with a periodically attached inertial amplification mechanism .the inertial amplification mechanism , which is based on the same physical principles as the classical inerter , creates band gaps within the dispersion curves of the underlying continuous rod .the gap - opening mechanism is based on an enhanced inertial force generated between two points in the continuum , proportional to the relative acceleration between these two points .an inertial amplification mechanism has been used previously as a core building block for the generation of a lattice medium , rather than serve as a light attachment to a continuous structure .several prominent effects are featured in the emerging band structure of the hybrid rod - mechanism configuration .the anti - resonance frequencies are governed by both mechanism- and rod - parameters , hence rather than a single anti - resonance frequency we see an infinite number , which can be predicted for a simple choice of unit - cell parameters .for the same choice , we illustrate the presence of multiple attenuation peaks within the same gap .furthermore , when generalizing the parameters of the unit cell , we observe that the anti - resonance frequencies can jump between gaps whereby double - peak behaviour can not be guaranteed for all parameters . at the specific values of the anti - resonance jump, band - gap coalescence emerges providing a very wide and deep contiguous gap .this gap , however , is rather sensitive to design and modelling inaccuracies .in addition to these intriguing effects , we demonstrate how attaching an inertial amplification mechanism to a continuous structure is superior to attaching a classical local resonator in that the former produces much larger gaps for the same amount of added mass .figure [ fig : concluding_performance ] compares the band structure of the proposed inertial amplification system to those of a classical local resonator configuration for two different tunings of the local resonator stiffness .figures [ fig : concluding_performance](b ) and [ fig : concluding_performance](c ) represent two cases of stiffness tuning that provide a locally resonant band gap ( with equal central frequency ) and a bragg coalescence gap , respectively .the central frequency is determined by solving eq .numerically for the first two roots , and .the comparison illustrates that when the same mass is used , the proposed concept achieves a first gap that is much wider than what is obtainable by the classical local resonator configuration , irrespective of the stiffness tuning for the local resonator . in order to obtain comparable performance in terms of band - gap width for the classical local - resonator system ,the added mass should be increased significantly .figure [ fig : concluding_performance2 ] compares similar gap widths for an inertial amplification system and a local resonance system respectively . 
from the figure, we see that the inertial amplification system is superior in terms of the magnitude of added mass, as the local resonance system requires an approximately twenty times heavier mass to obtain a comparable band-gap width (a mass that is more than two times as heavy as the rod it is attached to). the classical local resonator configuration, on the other hand, faces fewer constraints on unit-cell size, as demonstrated in fig. [fig:unitcellsize]. the presented concept of an inertially amplified continuous structure opens a new promising avenue of band-gap design. potentially it could be extended to surfaces of more complex structures such as plates, shells and membranes, leading to a general surface-coating design paradigm for wave attenuation in structures. steps toward achieving this goal include a generalization of the formulation to admit transverse vibrations, incorporation of frictional stiffness and damping in the bearings of the mechanism, and generalization to two dimensions. niels m. m. frandsen and jakob s. jensen were supported by erc starting grant no. 279529 innodyn. osama r. bilal and mahmoud i. hussein were supported by the national science foundation grant no. . further, n.m.m.f. would like to extend his thanks to the foundations: cowifonden, augustinusfonden, hede nielsen fonden, ingeniør alexandre haynman og hustrus fond, oticon fonden and otto mønsted fonden. considering the top part of the deformed mechanism as illustrated in fig. [fig:topmech_app], the motions and can be determined in terms of , and . the horizontal motion is determined by geometric consideration as . the vertical motion is given by the difference . it turns out to be convenient to consider the difference of the squared triangle heights: , which provides a quadratic equation in . the equation is made explicit by assuming small displacements, such that and , whereby the linearized kinematics of the mechanism has been determined.
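the small-displacement linearization used in the appendix above can be reproduced generically; the sketch below uses a placeholder triangle geometry (a rigid link of length a with initial horizontal projection b, perturbed horizontally by u), so it only illustrates the expansion step, not the actual mechanism geometry.

```python
import sympy as sp

u, a, b = sp.symbols('u a b', positive=True)
h = sp.sqrt(a**2 - b**2)                  # initial triangle height
y = sp.sqrt(a**2 - (b + u)**2)            # height after a small horizontal motion u
dy = sp.series(y - h, u, 0, 2).removeO()  # keep only the term linear in u
print(sp.simplify(dy))                    # -> -b*u/sqrt(a**2 - b**2)
```

dropping the quadratic terms in the small displacements in the same way yields the linearized kinematic relations referred to above.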
wave motion in a continuous elastic rod with a periodically attached inertial - amplification mechanism is investigated . the mechanism has properties similar to an `` inerter '' typically used in vehicle suspensions , however here it is constructed and utilized in a manner that alters the intrinsic properties of a continuous structure . the elastodynamic band structure of the hybrid rod - mechanism structure yields band gaps that are exceedingly wide and deep when compared to what can be obtained using standard local resonators , while still being low in frequency . with this concept , a large band gap may be realized with as much as twenty times less added mass compared to what is needed in a standard local resonator configuration . the emerging inertially enhanced continuous structure also exhibits unique qualitative features in its dispersion curves . these include the existence of a characteristic double - peak in the attenuation constant profile within gaps and the possibility of coalescence of two neighbouring gaps creating a large contiguous gap .
in comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. for the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. a genomic map is a sequence of gene markers. a gene marker appears in a genomic map in either positive or negative orientation. given genomic maps, _maximal strip recovery_ ( ) is the problem of finding subsequences, one subsequence of each genomic map, such that the total length of strips of these subsequences is maximized. here a _strip_ is a maximal string of at least two markers such that either the string itself or its signed reversal appears contiguously as a substring in each of the subsequences in the solution. without loss of generality, we can assume that all markers appear in positive orientation in the first genomic map. for example, the two genomic maps (the markers in negative orientation are underlined) have two subsequences of the maximum total strip length. the strip is positive and forward in both subsequences; the other two strips and are positive and forward in the first subsequence, but are negative and backward in the second subsequence. intuitively, the strips are syntenic blocks, and the deleted markers not in the strips are noise and ambiguities in the genomic maps. the problem was introduced by zheng, zhu, and sankoff, and was later generalized to for any by chen, fu, jiang, and zhu. for , zheng et al. presented a potentially exponential-time heuristic that solves a subproblem of maximum-weight clique. for , chen et al. presented a -approximation based on bar-yehuda et al.'s fractional local-ratio algorithm for maximum-weight independent set in -interval graphs; the running time of this -approximation algorithm is polynomial if is a constant. on the complexity side, chen et al. showed that several close variants of the problem are intractable. in particular, they showed that (i) is np-complete if duplicate markers are allowed in each genomic map, and that (ii) is np-complete even if the markers in each genomic map are distinct. the complexity of with no duplicates, however, was left as an open problem. in the biological context, a genomic map may contain duplicate markers as a paralogy set, but such maps are relatively rare. thus without duplicates is the most useful version of in practice. theoretically, without duplicates is the most basic and hence the most interesting version of .
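as a concrete illustration of the definitions above (restricted, for simplicity, to maps without duplicates and with all markers in positive orientation, so strips are never reversed), the following python sketch checks whether a set of kept markers decomposes into strips of length at least two in every map, and finds the optimum for tiny instances by exhaustive search. the marker names and the toy instance are made up for illustration.

```python
from itertools import combinations

def strip_length(maps, keep):
    """Total strip length if the kept markers decompose into strips (length >= 2), else None."""
    subs = [[x for x in m if x in keep] for m in maps]
    base = subs[0]
    if not base:
        return 0
    # a pair of consecutive markers in the first map is an adjacency
    # if it is consecutive (in the same order) in every other map as well
    succ = [{s[i]: s[i + 1] for i in range(len(s) - 1)} for s in subs]
    adj = [all(d.get(base[i]) == base[i + 1] for d in succ) for i in range(len(base) - 1)]
    total, run = 0, 1
    for a in adj:
        if a:
            run += 1
        else:
            if run < 2:          # an isolated marker cannot belong to any strip
                return None
            total, run = total + run, 1
    return total + run if run >= 2 else None

def msr_opt(maps):
    """Exhaustive optimum of the total strip length -- exponential, tiny instances only."""
    markers = list(maps[0])
    for r in range(len(markers), 1, -1):
        for keep in combinations(markers, r):
            if strip_length(maps, set(keep)) is not None:
                return r         # all kept markers lie in strips, so the value equals r
    return 0

# toy instance (hypothetical): deleting marker 4 leaves the strips (1 2 3) and (5 6)
maps = [[1, 2, 3, 4, 5, 6],
        [5, 6, 1, 2, 4, 3]]
print(msr_opt(maps))             # expected: 5
```

the check exploits the fact that, with distinct positively oriented markers, the kept markers decompose into strips exactly when every kept marker lies in a maximal run of such adjacencies of length at least two.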
also , the previous np - hardness proofs of both ( i ) with duplicates and ( ii ) without duplicates rely on the fact that a marker may appear in a genomic map in either positive or negative orientation .a natural question is whether there is any version of that remains np - hard even if all markers in the genomic maps are in positive orientation .we give a precise formulation of _ the most basic version _ of the problem as follows : instance : given sequences , , where each sequence is a permutation of .question : find a subsequence of each sequence , , and find a set of strips , where each strip is a sequence of length at least two over the alphabet , such that each subsequence is the concatenation of the strips in some order , and the total length of the strips is maximized .the main result of this paper is the following theorem that settles the computational complexity of the most basic version of maximal strip recovery , and moreover provides the first explicit lower bounds on approximating for all : [ thm : msrd ] for any is apx - hard .moreover , , , , and are np - hard to approximate within , , , and , respectively , even if all markers are distinct and appear in positive orientation in each genomic map .recall that for any constant , admits a polynomial - time -approximation algorithm .thus for any constant is apx - complete .our following theorem gives a polynomial - time -approximation algorithm for even if the number of genomic maps is not a constant but is part of the input : [ thm:2d ] for any , there is a polynomial - time -approximation algorithm for if all markers are distinct in each genomic map .this holds even if is not a constant but is part of the input .compare the upper bound of in theorem [ thm:2d ] and the asymptotic lower bound of in theorem [ thm : msrd ] .maximal strip recovery is a maximization problem .wang and zhu introduced complement maximal strip recovery as a minimization problem .given genomic maps as input , the problem is the same as the problem except that the objective is minimizing the number of deleted markers not in the strips , instead of maximizing the number of markers in the strips .a natural question is whether a polynomial - time approximation scheme may be obtained for this problem .our following theorem shows that unless np p , can not be approximated arbitrarily well : [ thm : cmsrd ] for any is apx - hard .moreover , , , , and for any are np - hard to approximate within , , , and , respectively , even if all markers are distinct and appear in positive orientation in each genomic map. if the number of genomic maps is not a constant but is part of the input , then is np - hard to approximate within any constant less than , even if all markers are distinct and appear in positive orientation in each genomic map .note the similarity between theorem [ thm : msrd ] and theorem [ thm : cmsrd ] .in fact , our proof of theorem [ thm : cmsrd ] uses exactly the same constructions as our proof of theorem [ thm : msrd ] .the only difference is in the analysis of the approximation lower bounds .bulteau , fertin , and rusu recently proposed a restricted variant of maximal strip recovery called -gap - msr , which is with the additional constraint that at most markers may be deleted between any two adjacent markers of a strip in each genomic map .we now define and as the restricted variants of the two problems and , respectively , with the additional -gap constraint .bulteau et al . 
proved that is apx - hard for any , and is np - hard for .we extend our proofs of theorem [ thm : msrd ] and theorem [ thm : cmsrd ] to obtain the following theorem on and for any : [ thm : gap ] let . then 1 . for any is apx - hard .moreover , , , , and are np - hard to approximate within , , , and , respectively , even if all markers are distinct and appear in positive orientation in each genomic map .2 . for any is apx - hard .moreover , , , , and for any are np - hard to approximate within , , , and , respectively , even if all markers are distinct and appear in positive orientation in each genomic map .if the number of genomic maps is not a constant but is part of the input , then is np - hard to approximate within any constant less than , even if all markers are distinct and appear in positive orientation in each genomic map .we refer to for some related results .maximal strip recovery is a typical combinatorial problem in biological sequence analysis , in particular , genome rearrangement .the earliest inapproximability result for genome rearrangement problems is due to berman and karpinski , who proved that sorting by reversals is np - hard to approximate within any constant less than .more recently , zhu and wang proved that translocation distance is np - hard to approximate within any constant less than .similar inapproximability results have also been obtained for other important problems in bioinformatics .for example , nagashima and yamazaki proved that non - overlapping local alignment is np - hard to approximate within any constant less than , and manthey proved that multiple sequence alignment with weighted sum - of - pairs score is apx - hard for arbitrary metric scoring functions over the binary alphabet . the rest of this paper is organized as follows .we first review some preliminaries in section [ sec : pre ] .then , in sections [ sec : msr4 ] , [ sec : msr3 ] , [ sec : msr2 ] , and [ sec : msrd ] , we show that for any is apx - hard , and prove explicit approximation lower bounds .( for any two constants and such that , the problem is a special case of the problem with redundant genomic maps .thus the apx - hardness of implies the apx - hardness of for all constants . to present the ideas progressively , however ,we show that , , and are apx - hard by three different l - reductions of increasing sophistication . ) in section [ sec:2d ] , we present a -approximation algorithm for that runs in polynomial time even if the number of genomic maps is not a constant but is part of the input . in section[ sec : more ] , we present inapproximability results for , , and .we conclude with remarks in section [ sec : remarks ] .[ [ l - reduction . ] ] l - reduction .+ + + + + + + + + + + + given two optimization problems x and y , an _l - reduction _ from x to y consists of two polynomial - time functions and and two positive constants and satisfying the following two properties : 1 . for every instance of x , is an instance of y such that 2 . for every feasible solution to , is a feasible solution to such that denotes the value of the optimal solution to an instance , and denotes the value of a solution .the two properties of l - reduction imply the following inequality on the relative errors of approximation : a relative error of corresponds to an approximation factor of for a minimization problem , and corresponds to an approximation factor of for a maximization problem .thus we have the following propositions : 1 . 
for a minimization problem x and a minimization problem y , if x is np - hard to approximate within , then y is np - hard to approximate within .2 . for a maximization problem x and a maximization problem y , if x is np - hard to approximate within , then y is np - hard to approximate within .3 . for a minimization problem x and a maximization problem y , if x is np - hard to approximate within , then y is np - hard to approximate within .4 . for a maximization problem x and a minimization problem y , if x is np - hard to approximate within , then y is np - hard to approximate within .[ [ apx - hard - optimization - problems . ] ] apx - hard optimization problems .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we review the complexities of some apx - hard optimization problems that will be used in our reductions .* is the problem maximum independent set in graphs of maximum degree .is apx - hard ; see .moreover , chlebk and chlebkov showed that and are np - hard to approximate within and , respectively .trevisan showed that is np - hard to approximate within .* is the problem minimum vertex cover in graphs of maximum degree .is apx - hard ; see .moreover , chlebk and chlebkov showed that and are np - hard to approximate within and , respectively , and , for any , is np - hard to approximate within .dinur and safra showed that minimum vertex cover is np - hard to approximate within any constant less than . *given a set of variables and a set of clauses , where each variable has exactly literals ( in different clauses ) and each clause is the disjunction of exactly literals ( of different variables ) , is the problem of finding an assignment of that satisfies the maximum number of clauses in .note that .berman and karpinski showed that is np - hard to approximate within any constant less than . * given disjoint sets of vertices , , and given a set of hyper - edges , is the problem of finding a maximum - cardinality subset of pairwise - disjoint hyper - edges .hazan , safra , and schwartz showed that is np - hard to approximate within .[ [ linear - forest - and - linear - arboricity . ] ] linear forest and linear arboricity .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a _ linear forest _ is a graph in which every connected component is a path .the _ linear arboricity _ of a graph is the minimum number of linear forests into which the edges of the graph can be decomposed .akiyama , exoo , and harary conjectured that the linear arboricity of every graph of maximum degree satisfies .this conjecture has been confirmed for graphs of small constant degrees , and has been shown to be asymptotically correct as .in particular , the proof of the conjecture for and are constructive and lead to polynomial - time algorithms for decomposing any graph of maximum degree and into at most and linear forests , respectively .also , the proof of the first upper bound on linear arboricity by akiyama , exoo , and harary implies a simple polynomial - time algorithm for decomposing any graph of maximum degree into at most linear forests .define where ranges over all graphs of maximum degree , and denotes the number of linear forests that akiyama , exoo , and harary s algorithm decomposes into. then this section , we prove that is apx - hard by a simple l - reduction from . before we present the l - reduction , we first show that is np - hard by a reduction in the classical style , which is perhaps more familiar to most readers . 
throughout this paper, we follow this progressive format of presentation .let be a graph of maximum degree .let be the number of vertices in .partition the edges of into two linear forests and .let and be the vertices of that are _ not _ incident to any edges in and in , respectively .we construct four genomic maps , , , and , where each map is a permutation of the following distinct markers all in positive orientation : * pairs of vertex markers and , . and are concatenations of the pairs of vertex markers with ascending and descending indices , respectively : and are represented schematically as follows : and consist of vertex markers of the vertices incident to the edges in and , respectively .the markers of the vertices in each path are grouped together in an interleaving pattern : for , the left marker of , the right marker of ( if ) , the left marker of ( if ) , and the right marker of are consecutive . and consist of vertex markers of the vertices in and , respectively . the left marker and the right marker of each pairare consecutive .this completes the construction .we refer to figure [ fig : msr4 ] ( a ) and ( b ) for an example .: is a single solid path , consists of two dotted paths and , , .( b ) the four genomic maps .( c ) the four subsequences of the genomic maps corresponding to the independent set in the graph.,title="fig : " ] + ( a ) ( b ) ( c ) two pairs of markers _ intersect _ in a genomic map if a marker of one pair appears between the two markers of the other pair .the following property of our construction is obvious : [ prp : msr4 ] two vertices are adjacent in the graph if and only if the corresponding two pairs of vertex markers intersect in one of the two genomic maps .we say that four subsequences of the four genomic maps are _ canonical _ if each strip of the subsequences is a pair of vertex markers .we have the following lemma on canonical subsequences : [ lem : canon4 ] in any four subsequences of the four genomic maps , respectively , each strip must be a pair of vertex markers . by construction ,a strip can not include two vertex markers of different indices because they appear in different orders in and in .the following lemma establishes the np - hardness of : [ lem : iff4 ] the graph has an independent set of at least vertices if and only if the four genomic maps have four subsequences whose total strip length is at least .we first prove the `` only if '' direction .suppose that the graph has an independent set of at least vertices .we will show that the four genomic maps have four subsequences of total strip length at least . by proposition[ prp : msr4 ] , the vertices in the independent set correspond to pairs of vertex markers that do not intersect each other in the genomic maps .these pairs of vertex markers induce a subsequence of length in each genomic map . in each subsequence ,the left marker and the right marker of each pair appear consecutively and compose a strip .thus the total strip length is at least .we refer to figure [ fig : msr4](c ) for an example .we next prove the `` if '' direction .suppose that the four genomic maps have four subsequences of total strip length at least .we will show that the graph has an independent set of at least vertices . 
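to make the construction of the four genomic maps concrete, the following python sketch builds them from a graph whose edges have already been partitioned into two linear forests (the partition is assumed to be given, cf. the linear-arboricity discussion in the preliminaries). the particular interleaving of the path markers is one concrete choice consistent with the description above; only the intersection pattern of the marker pairs matters.

```python
def msr4_maps(n, forest1, forest2):
    """Four genomic maps of the MSR-4 reduction (sketch).

    n       -- vertices are 0 .. n-1
    forest1 -- list of paths (vertex lists) covering the edges of the first linear forest
    forest2 -- list of paths covering the edges of the second linear forest
    markers -- (v, 'L') and (v, 'R') are the left/right vertex markers of v
    """
    L = lambda v: (v, 'L')
    R = lambda v: (v, 'R')
    sigma1 = [m for v in range(n) for m in (L(v), R(v))]            # ascending pairs
    sigma2 = [m for v in reversed(range(n)) for m in (L(v), R(v))]  # descending pairs

    def forest_map(forest):
        seq, covered = [], set()
        for path in forest:                  # interleave so adjacent path vertices intersect
            covered.update(path)
            seq.append(L(path[0]))
            for i in range(1, len(path)):
                seq += [L(path[i]), R(path[i - 1])]
            seq.append(R(path[-1]))
        for v in range(n):                   # vertices untouched by this forest: plain pairs
            if v not in covered:
                seq += [L(v), R(v)]
        return seq

    return sigma1, sigma2, forest_map(forest1), forest_map(forest2)

# hypothetical example: the 5-cycle 0-1-2-3-4-0 split into the paths 0-1-2-3-4 and 4-0
maps = msr4_maps(5, [[0, 1, 2, 3, 4]], [[4, 0]])
```

in the example of fig. [fig:msr4], one forest would hold the single solid path and the other the two dotted paths; the vertex pairs chosen as strips in a canonical solution then correspond exactly to an independent set of the graph.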
by lemma [ lem : canon4 ], each strip of the subsequences must be a pair of vertex markers .thus we obtain at least pairs of vertex markers that do not intersect each other in the genomic maps .then , by proposition [ prp : msr4 ] , the corresponding set of at least vertices in the graph form an independent set .we present an l - reduction from to as follows .the function , given a graph of maximum degree , constructs the four genomic maps as in the np - hardness reduction .let be the number of vertices in a maximum independent set in , and let be the maximum total strip length of any four subsequences of , respectively . by lemma [ lem : iff4 ] , we have choose , then property of l - reduction is satisfied .the function , given four subsequences of the four genomic maps , respectively , returns an independent set of vertices in the graph corresponding to the pairs of vertex markers that are strips of the subsequences .let be the total strip length of the subsequences , and let be the number of vertices in the independent set returned by the function . then .it follows that choose , then property of l - reduction is also satisfied .we have obtained an l - reduction from to with .chlebk and chlebkov showed that is np - hard to approximate within .it follows that is also np - hard to approximate within .the lower bound extends to for all constants .the l - reduction from to can be obviously generalized : [ lem : mis ] let and .if there is a polynomial - time algorithm for decomposing any graph of maximum degree into linear forests , then there is an l - reduction from to with constants and . in this section, we prove that is apx - hard by a slightly more sophisticated l - reduction again from. let be a graph of maximum degree .let be the number of vertices in .partition the edges of into two linear forests and .let and be the vertices of that are _ not _ incident to any edges in and , respectively .we construct three genomic maps , , and , where each map is a permutation of the following distinct markers all in positive orientation : * pairs of vertex markers and , ; * pairs of dummy markers and , . consists of the pairs of vertex and dummy markers in an alternating pattern : and are represented schematically as follows : and consist of vertex markers of the vertices incident to the edges in and , respectively .the markers of the vertices in each path are grouped together in an interleaving pattern : for , the left marker of , the right marker of ( if ) , the left marker of ( if ) , and the right marker of are consecutive . and consist of vertex markers of the vertices in and , respectively . the left marker and the right marker of each pairare consecutive . 
is the reverse permutation of the pairs of dummy markers : this completes the construction .we refer to figure [ fig : msr3 ] ( a ) and ( b ) for an example .: is a single ( solid ) path , consists of two ( dotted ) paths and , , .( b ) the three genomic maps .( c ) the three subsequences of the genomic maps corresponding to the independent set in the graph.,title="fig : " ] + ( a ) ( b ) ( c ) it is clear that proposition [ prp : msr4 ] still holds .the following lemma on canonical subsequences is analogous to lemma [ lem : canon4 ] : [ lem : canon3 ] if the three genomic maps have three subsequences of total strip length , then they must have three subsequences of total strip length at least such that each strip is either a pair of vertex markers or a pair of dummy markers , and each pair of dummy markers is a strip .we present an algorithm that transforms the subsequences into canonical form without reducing the total strip length . by construction, a strip can not include both a dummy marker and a vertex marker because they appear in different orders in and in , and a strip can not include two dummy markers of different indices because they appear in different orders in and in and .suppose that a strip consists of vertex markers of two or more different indices .then there must be two vertex markers and of different indices and that are consecutive in . since the vertex markers and the dummy markers appear in in an alternating pattern with ascending indices , we must have .moreover , the pair of dummy markers of index , which appears between and in , must be missing from the subsequences .now cut the strip into and between and .if ( resp . ) consists of only one marker ( resp . ) , delete the lone marker from the subsequences ( recall that a strip must include at least two markers ) .this decreases the total strip length by at most two .next insert the pair of dummy markers of index to the subsequences as a new strip .this increases the total strip length by exactly two .repeat this operation whenever a strip contains two vertex markers of different indices and whenever a pair of dummy markers is missing from the subsequences , then in steps we obtain three subsequences of total strip length at least in canonical form .the following lemma , analogous to lemma [ lem : iff4 ] , establishes the np - hardness of : [ lem : iff3 ] the graph has an independent set of at least vertices if and only if the three genomic maps have three subsequences whose total strip length is at least .we first prove the `` only if '' direction .suppose that the graph has an independent set of at least vertices .we will show that the three genomic maps have three subsequences of total strip length at least . by proposition [ prp : msr4 ] , the vertices in the independent set correspond to pairs of vertex markers that do not intersect each other in the genomic maps .these pairs of vertex markers together with the pairs of dummy markers induce a subsequence of length in each genomic map . in each subsequence, the left marker and the right marker of each pair appear consecutively and compose a strip .thus the total strip length is at least .we refer to figure [ fig : msr3](c ) for an example .we next prove the `` if '' direction .suppose that the three genomic maps have three subsequences of total strip length at least .we will show that the graph has an independent set of at least vertices . 
by lemma [ lem : canon3 ] , the three genomic maps have three subsequences of total strip length at least such that each strip is a pair of markers . excluding the pairs of dummy markers , we obtain at least pairs of vertex markers that do not intersect each other in the genomic maps .then , by proposition [ prp : msr4 ] , the corresponding set of at least vertices in the graph form an independent set .we present an l - reduction from to as follows .the function , given a graph of maximum degree , constructs the three genomic maps as in the np - hardness reduction .let be the number of vertices in a maximum independent set in , and let be the maximum total strip length of any three subsequences of , respectively .since a simple greedy algorithm ( which repeatedly selects a vertex not adjacent to the previously selected vertices ) finds an independent set of at least vertices in the graph of maximum degree , we have . by lemma [ lem : iff3 ] , we have .it follows that choose , then property of l - reduction is satisfied .the function , given three subsequences of the three genomic maps , respectively , transforms the subsequences into canonical form as in the proof of lemma [ lem : canon3 ] , then returns an independent set of vertices in the graph corresponding to the pairs of vertex markers that are strips of the subsequences .let be the total strip length of the subsequences , and let be the number of vertices in the independent set returned by the function .then .it follows that choose , then property of l - reduction is also satisfied .we have obtained an l - reduction from to with .chlebk and chlebkov showed that is np - hard to approximate within .it follows that is np - hard to approximate within . in this section , we prove that is apx - hard by an l - reduction from with and .let be an instance of , where is a set of variables , , and is a set of clauses , .without loss of generality , assume that the literals of each variable are neither all positive nor all negative .since , it follows that each variable has either positive and negative literals , or positive and negative literals .we construct two genomic maps and , each map a permutation of distinct markers all in positive orientation : * pair of variable markers for each variable , ; * pairs of true markers and for each variable , ; * pairs of false markers and for each variable , ; * pair of clause markers for each clause , ; * pairs of literal markers , , for each clause , ; * pairs of dummy markers and . the construction is done in two steps : first arrange the variable markers , the true / false markers , the clause markers , and the dummy markers into two sequences and , next insert the literal markers at appropriate positions in the two sequences to obtain the two genomic maps and .the two sequences and are represented schematically as follows : for each variable , consists of the corresponding four pairs of true / false markers in and , and in addition the pair of variable markers in .these markers are arranged in the two sequences in a special pattern as follows ( the indices are omitted for simpler notations ) : now insert the literal markers to the two sequences and to obtain the two genomic maps and .first , . for each positiveliteral ( resp .negative literal ) of a variable that occurs in a clause , place a pair of literal markers , , around a false marker ( resp . true marker ) , .the four possible positions of the three pairs of literal markers of each variable are as follows : next , . 
without loss of generality ,assume that the pairs of literal markers of each clause appear in with ascending indices : insert the pairs of literal markers in immediately after the pair of clause markers , in an interleaving pattern : this completes the construction .we refer to figure [ fig : msr2 ] ( a ) and ( b ) for an example of the two steps .+ ( a ) + ( b ) + ( c ) + ( d ) we say that two subsequences of the two genomic maps and are _ canonical _ if each strip of the two subsequences is a pair of markers .we refer to figure [ fig : msr2 ] ( c ) and ( d ) for two examples of canonical subsequences .the following lemma on canonical subsequences is analogous to lemma [ lem : canon4 ] and lemma [ lem : canon3 ] : [ lem : canon2 ] if the two genomic maps and have two subsequences of total strip length , then they must have two subsequences of total strip length at least such that each strip is a pair of markers and , moreover , the two pairs of dummy markers are two strips , the pairs of clause markers and the pairs of variable markers are strips , at most one pair of literal markers of each clause is a strip , either both pairs of true markers or both pairs of false markers of each variable are two strips .we present an algorithm that transforms the subsequences into canonical form without reducing the total strip length .the algorithm performs incremental operations on the subsequences such that the following eight conditions are satisfied progressively : * _ 1 .each strip that includes a dummy marker is a pair of dummy markers ._ * a strip can not include two dummy markers of different indices because they appear in different orders in and in .note that in the dummy markers appear after the other markers .suppose that a strip includes both a dummy marker and a non - dummy marker. then there must be a non - dummy marker and a dummy marker consecutive in . since the two pairs of dummy markers appear consecutively but in different orders in and in , one of the two pairs must appear between and either in or in .this pair is hence missing from the subsequences .now cut the strip into and between and . if ( resp . ) consists of only one marker ( resp . ) , delete the lone marker from the subsequences ( recall that a strip must include at least two markers ) .this decreases the total strip length by at most two .next insert the missing pair of dummy markers to the subsequences .this pair of dummy markers becomes either a new strip by itself , or part of a longer strip ( recall that a strip must be maximal ) . in any case, the insertion increases the total strip length by exactly two .overall , this _ cut - delete - insert _ operation ( also used in lemma [ lem : canon3 ] ) does not reduce the total strip length .after the first operation , a second operation may be necessary .but since each operation here deletes only lone markers ( in and ) and inserts always a pair of markers , the pair inserted by one operation is never deleted by a subsequent operation .thus at most two operations are sufficient to transform the subsequences until each strip that includes a dummy marker is indeed a pair of dummy markers . *the two pairs of dummy markers are two strips . 
_* suppose that the subsequences do not have both pairs of dummy markers as strips .then , by condition 1 , we must have either both pairs of dummy markers missing from the subsequences , or one pair missing and the other pair forming a strip .note that in the dummy markers separate the true / false and literal markers on the left from the clause and variable markers on the right , and that in the dummy markers appear after the other markers .if the missing dummy markers do not disrupt any existing strips in , then simply insert each missing pair to the subsequences as a new strip .otherwise , there must be a true / false or literal marker and a clause or variable marker consecutive in a strip , such that both pairs of dummy markers appear in between and and hence are missing from the subsequences .cut the strip between and , delete any lone markers if necessary , then insert the two pairs of dummy markers to the subsequences as two new strips .each strip that includes a clause or variable marker is a pair of clause markers or a pair of variable markers . _ * note that in the clause and variable markers are separated by the dummy markers from the other markers .thus , by condition 2 , a strip that includes a clause or variable marker can not include any markers of the other types .also , a strip can not include two clause markers of different clauses , or two variable markers of different variables , or a clause marker and a variable marker , because these combinations appear in different orders in and in .thus this condition is automatically satisfied after conditions 1 and 2 .the pairs of clause markers and the pairs of variable markers are strips . _* suppose that the subsequences do not have all pairs of clause and variable markers as strips . by condition 3, the clause and variable markers in the subsequences must be in pairs , each pair forming a strip .then the clause and variable markers missing from the subsequences must be in pairs too . for each missing pair of clause or variable markers ,if the pair does not disrupt any existing strips in , then simply insert it to the subsequences as a new strip .otherwise , there must be two true / false or literal markers and consecutive in a strip , such that the missing pair appears in between and .cut the strip between and , delete any lone markers if necessary , then insert each missing pair of clause markers between and to the subsequences as a new strip . *each strip that includes a literal marker is a pair of literal markers . _ * note that in the dummy and clause markers separate the literals markers from the other markers , and separate the literal markers of different clauses from each other .thus , by conditions 2 and 4 , a strip can not include both a literal marker and a non - literal marker , or two literal markers of different clauses .suppose that a strip includes two literal markers and of the same clause but of different indices and .assume without loss of generality that and are consecutive in .recall the orders of the literal markers of each clause in the two genomic maps : since in the pairs of literal markers appear with ascending indices , the index of the marker must be less than the index of the marker . then , since in the left markers appear with descending indices before the right markers also with descending indices , must be a left marker , and must be a right marker .that is , .all markers between and in must be missing from the subsequences . 
among these missing markers , those that are literal markers of in either consecutively before or consecutively after .replace either or by a missing literal marker of , that is , either by , or by , then and become a pair .denote this _ shift _ operation by the strip can not include any other literal markers of the clause besides and because ( i ) the markers before in appear after in , and ( ii ) the markers after in appear before in .* _ 6 . at most one pair of literal markers of each clause is a strip ._ * note that the pairs of literal markers of each clause appear in in an interleaving pattern .it follows by condition 5 that at most one of the pairs can be a strip .each strip that includes a true / false marker is a pair of true markers or a pair of false markers . _ * by conditions 1 , 3 , and 5 , it follows that each strip that includes a true / false marker must include true / false markers only .a strip can not include two true / false markers of different variables because they appear in different orders in and in .suppose that a strip includes two true / false markers and of the same variable such that and are not a pair .recall the orders of the four pairs of true / false markers of each variable in and , the four possible positions of the three pairs of literal markers in , and the position of the variable marker in : note that the pair of variable markers in forbids a strip from including two true / false markers of different indices .thus the strip must consist of true / false markers of both the same variable and the same index .assume without loss of generality that appears before in .it is easy to check that there are only two such combinations of and : either or .moreover , the strip must include only the two markers and .for either combination of and , use a shift operation to make and a pair : * _ 8 .either both pairs of true markers or both pairs of false markers of each variable are two strips ._ * consider the conflict graph of the four pairs of true / false markers and the three pairs of literal markers of each variable in figure [ fig:7 ] .the graph has one vertex for each pair , and has an edge between two vertices if and only if the corresponding pairs intersect in either or . by conditions 1 , 3 , 5 , and 7 ,the strips of the subsequences from the seven pairs correspond to an independent set in the conflict graph of seven vertices .-cycle are thick . in this examplethe strip is first deleted then inserted back . ]note that the four vertices corresponding to the four pairs of true / false markers induce a -cycle in the conflict graph .suppose that neither both pairs of true markers nor both pairs of false markers are strips .then at most one of the four pairs , say , is a strip .delete from the subsequences .recall that each variable has either positive and negative literals , or positive and negative literals .let be the pair of literal markers whose sign is opposite to the sign of the other two pairs of literal markers . also delete from the subsequences if it is there .next insert two pairs of true / false markers to the subsequences : if is positive , both pairs of false markers and ; if is negative , both pairs of true markers and . 
when all eight conditions are satisfied , the subsequences are in the desired canonical form .the following lemma , analogous to lemma [ lem : iff4 ] and lemma [ lem : iff3 ] , establishes the np - hardness of : [ lem : iff2 ] the variables in have an assignment that satisfies at least clauses in if and only if the two genomic maps and have two subsequences whose total strip length is at least .we first prove the `` only if '' direction .suppose that the variables in have an assignment that satisfies at least clauses in .we will show that the two genomic maps and have two subsequences of total strip length at least . for each variable , choose the two pairs of true markers if the variable is assigned true , or the two pairs of false markers if the variable is assigned false . for each satisfied clause , choose one pair of literal markers corresponding to a true literal ( when there are two or more true literals , choose any one ) . also choose all pairs of clause and variable markers and both pairs of dummy markers .the chosen markers induce two subsequences of the two genomic maps .it is easy to check that , by construction , the two subsequences have at least strips , each strip forming a pair .thus the total strip length is at least .we refer to figure [ fig : msr2 ] ( c ) and ( d ) for two examples .we next prove the `` if '' direction .suppose that the two genomic maps and have two subsequences of total strip length at least . we will show that the variables in have an assignment that satisfies at least clauses in . by lemma [ lem : canon2 ] , the two genomic maps have two subsequences of total strip length at least such that each strip is a pair and , moreover , the two pairs of dummy markers , the pairs of clause and variable markers , at most one pair of literal markers of each clause , and either both pairs of true markers or both pairs of false markers of each variable are strips .thus at least strips are pairs of literal markers , each pair of a different clause .again it is easy to check that , by construction , the assignment of the variables in to either true or false ( corresponding to the choices of either both pairs of true markers or both pairs of false markers ) satisfies at least clauses in ( corresponding to the at least pairs of literal markers that are strips ) .we present an l - reduction from to as follows .the function , given the instance , constructs the two genomic maps and as in the np - hardness reduction .let be the maximum number of clauses in that can be satisfied by an assignment of , and let be the maximum total strip length of any two subsequences of and , respectively .since a random assignment of each variable independently to either true or false with equal probability satisfies each disjunctive clause of literals with probability , we have . by lemma [ lem : iff2 ], we have .recall that .it follows that the function , given two subsequences of the two genomic maps and , respectively , transforms the subsequences into canonical form as in the proof of lemma [ lem : canon2 ] , then returns an assignment of corresponding to the choices of true or false markers .let be the total strip length of the subsequences , and let be the number of clauses in that are satisfied by this assignment . 
then .it follows that let be an arbitrary small constant .note that by brute force we can check whether and , in the affirmative case , compute an optimal assignment of that satisfies the maximum number of clauses in , all in time , which is polynomial in for a constant . therefore we can assume without loss of generality that . then , with the two constants and , both properties and of l - reduction are satisfied .in particular , for and , berman and karpinski showed that is np - hard to approximate within any constant less than .thus is np - hard to approximate within any constant less than this section , we derive an asymptotic lower bound for approximating by an l - reduction from to .let be a set of hyper - edges over disjoint sets of vertices , .we construct two genomic maps and , and genomic maps , , where each map is a permutation of the following distinct markers all in positive orientation : * pairs of edge markers and , . the two genomic maps and are concatenations of the pairs of edge markers with ascending and descending indices , respectively : each genomic map corresponds to a vertex set , , and is represented schematically as follows : here each consists of the edge markers of hyper - edges containing the vertex , grouped together such that the left markers appear with ascending indices before the right markers also with ascending indices .this completes the construction .we refer to figure [ fig : msrd](a ) for an example . ( a ) ( b ) the following property of our construction is obvious : [ prp : msrd ] two hyper - edges in intersectif and only if the corresponding two pairs of edge markers intersect in one of the genomic maps , .the following lemma is analogous to lemma [ lem : canon4 ] : [ lem : canond ] in any subsequences of the genomic maps , respectively , each strip must be a pair of edge markers . by construction ,a strip can not include two edge markers of different indices because they appear in different orders in and in .the following lemma , analogous to lemma [ lem : iff4 ] , lemma [ lem : iff3 ] , and lemma [ lem : iff2 ] , establishes the np - hardness of : [ lem : iffd ] the set has a subset of pairwise - disjoint hyper - edges if and only if the genomic maps have subsequences whose total strip length is at least .we first prove the `` only if '' direction .suppose that the set has a subset of at least pairwise - disjoint hyper - edges .we will show that the genomic maps have subsequences of total strip length at least . by proposition[ prp : msrd ] , the pairwise - disjoint hyper - edges correspond to pairs of edge markers that do not intersect each other in the genomic maps .these pairs of edge markers induce a subsequence of length in each genomic map . in each subsequence ,the left marker and the right marker of each pair appear consecutively and compose a strip .thus the total strip length is at least .we refer to figure [ fig : msrd](b ) for an example .we next prove the `` if '' direction .suppose that the genomic maps have subsequences of total strip length at least .we will show that the set has a subset of at least pairwise - disjoint hyper - edges . by lemma [ lem : canon4 ], each strip of the subsequences must be a pair of edge markers .thus we obtain at least pairs of edge markers that do not intersect each other in the genomic maps . 
then , by proposition [ prp : msrd ] , the corresponding set of at least hyper - edges in are pairwise - disjoint .we present an l - reduction from to as follows .the function , given a set of hyper - edges , constructs the genomic maps as in the np - hardness reduction .let be the maximum number of pairwise - disjoint hyper - edges in , and let be the maximum total strip length of any subsequences of , respectively . by lemma [ lem : iffd ] , we have choose , then property of l - reduction is satisfied .the function , given subsequences of the genomic maps , respectively , returns a subset of pairwise - disjoint hyper - edges in corresponding to the pairs of edge markers that are strips of the subsequences .let be the total strip length of the subsequences , and let be the number of pairwise - disjoint hyper - edges returned by the function . then .it follows that choose , then property of l - reduction is also satisfied .we have obtained an l - reduction from to with .hazan , safra , and schwartz showed that is np - hard to approximate within .it follows that is also np - hard to approximate within .this completes the proof of theorem [ thm : msrd ] .in this section we prove theorem [ thm:2d ] .we briefly review the two previous algorithms for this problem .the first algorithm for is a simple heuristic due to zheng , zhu , and sankoff : 1 .extract a set of pre - strips from the two genomic maps ; 2 .compute an independent set of strips from the pre - strips .this algorithm is inefficient because the number of pre - strips could be exponential in the sequence length , and furthermore the problem maximum - weight independent set in general graphs is np - hard .chen , fu , jiang , and zhu presented a -approximation algorithm for . for any , a _-interval _ is the union of disjoint intervals in the real line , and a _ -interval graph _ is the intersection graph of a set of -intervals , with a vertex for each -interval , and with an edge between two vertices if and only the corresponding -intervals overlap .the -approximation algorithm works as follows : 1 .compose a set of -intervals , one for each combination of substrings of the genomic maps , respectively .assign each -interval a weight equal to the length of a longest common subsequence ( which may be reversed and negated ) in the corresponding substrings .2 . compute a -approximation for maximum - weight independent set in the resulting -interval graph using bar - yehuda et al.s fractional local - ratio algorithm .let be the number of markers in each genomic map .then the number of -intervals composed by this algorithm is because each of the genomic maps has substrings .consequently the running time of this algorithm can be exponential if the number of genomic maps is not a constant but is part of the input . 
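as explained in the next paragraph, the running time becomes polynomial once only candidate strips of lengths two and three are enumerated in step 1. the following python sketch illustrates such an enumeration under the assumption that every map is a permutation of the same distinct markers in positive orientation; signed reversals, the weighting of the resulting d-intervals, and the independent-set step are omitted.

```python
from itertools import combinations

def candidate_short_strips(maps):
    """Length-2 and length-3 candidate strips for the polynomial-time variant (sketch).

    A candidate is an ordered tuple of markers whose relative order is the same in
    every map (markers assumed distinct and positively oriented); its d-interval in
    map j is the shortest substring of map j containing all of its markers.
    """
    pos = [{x: i for i, x in enumerate(m)} for m in maps]

    def order_in(block, p):                  # markers of block sorted by position in one map
        return tuple(sorted(block, key=lambda x: p[x]))

    candidates = []
    for size in (2, 3):
        for block in combinations(maps[0], size):
            order = order_in(block, pos[0])
            if all(order_in(block, p) == order for p in pos[1:]):
                interval = [(min(p[x] for x in order), max(p[x] for x in order)) for p in pos]
                candidates.append((order, interval))   # weight of the d-interval = size
    return candidates
```

this enumerates only polynomially many candidates, one d-interval per short strip, in contrast to the combination of all substrings used by the original algorithm.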
in the following ,we show that if all markers are distinct in each genomic map ( as discussed earlier , this is a reasonable assumption in application ) , then the running time of the -approximation algorithm can be improved to polynomial for all .this improvement is achieved by composing a smaller set of candidate -intervals in step 1 of the algorithm .the idea is actually quite simple and has been used many times previously .note that any strip of length is a concatenation of shorter strips of lengths and , for example , , , etc .since the objective is to maximize the total strip length , it suffices to consider only short strips of lengths and in the genomic maps , and to enumerate only candidate -intervals that correspond to these strips .when each genomic map is a signed permutation of the same distinct markers , there are at most strips of lengths and , and for each strip there is a unique shortest substring of each genomic map that contains all markers in the strip .thus we compose only -intervals , and improve the running time of the -approximation algorithm to polynomial for all .this completes the proof of theorem [ thm:2d ] .in this section we prove theorem [ thm : cmsrd ] and theorem [ thm : gap ] . [ [ and - are - apx - hard . ] ] and are apx - hard . + + + + + + + + + + + + + + + + + + for any , the decision problems of and are equivalent .thus the np - hardness of implies the np - hardness of , although the apx - hardness of does not necessarily imply the apx - hardness of .note that the two problems and complement each other just as the two problems and complement each other .thus our np - hardness reduction from to in section [ sec : msr3 ] can be immediately turned into an np - hardness reduction from to .we present an l - reduction from 3 to 3 as follows .the function , given a graph of maximum degree , constructs the three genomic maps as in the np - hardness reduction in section [ sec : msr3 ] .let be the number of vertices in a maximum independent set in , and let be the maximum total strip length of any three subsequences of , respectively .also let be the number of vertices in a minimum vertex cover in , and let be the minimum number of markers that must be deleted to transform the three genomic maps into strip - concatenated subsequences. then and . by lemma [ lem : iff3 ], we have .it follows that choose , then property of l - reduction is satisfied .the function , given three subsequences of the three genomic maps , respectively , transforms the subsequences into canonical form as in the proof of lemma [ lem : canon3 ] , then returns a vertex cover in the graph corresponding to the deleted pairs of vertex markers .let be the number of deleted vertex markers , and let be the number of vertices in the vertex cover returned by the function . 
then .it follows that choose , then property of l - reduction is also satisfied .the l - reduction from to can be obviously generalized : [ lem : mvc ] let and .if there is a polynomial - time algorithm for decomposing any graph of maximum degree into linear forests , then there is an l - reduction from to with constants and .recall that there exist polynomial - time algorithms for decomposing a graph of maximum degree and into at most and linear forests , respectively .thus we have an l - reduction from to and an l - reduction from to , with the same parameters , , and .chlebk and chlebkov showed that and are np - hard to approximate within and , respectively .it follows that and are np - hard to approximate within and , respectively , too .the lower bound for extends to for all .note that we could use an l - reduction from to similar to the l - reduction from to in section [ sec : msr4 ] , but that only gives us a weaker lower bound of for .[ [ is - apx - hard . ] ] is apx - hard .+ + + + + + + + + + + + + let and .we present an l - reduction from to as follows .the function , given the instance , constructs the two genomic maps and as in our np - hardness reduction in section [ sec : msr2 ] . as before , let be the maximum number of clauses in that can be satisfied by an assignment of , and let be the maximum total strip length of any two subsequences of and , respectively .also let be the minimum number of deleted markers .then is exactly the number of markers in each genomic map , that is , . by lemma [ lem : iff2 ], we have .thus .since a random assignment of each variable independently to either true or false with equal probability satisfies each disjunctive clause of literals with probability , we have .recall that .it follows that for and , we can choose .then property of l - reduction is satisfied .the function , given two subsequences of the two genomic maps and , transforms the subsequences into canonical form as in the proof of lemma [ lem : canon2 ] , then returns an assignment of corresponding to the choices of true or false markers .let be the total strip length of the subsequences , and let be the number of deleted markers .let be the number of clauses in that are satisfied by this assignment. then choose .then property of l - reduction is satisfied .berman and karpinski showed that is np - hard to approximate within any constant less than . since , is np - hard to approximate within any constant less than [ [ an - asymptotic - lower - bound - for - and - a - lower - bound - for - with - unbounded - d . ] ] an asymptotic lower bound for and a lower bound for with unbounded .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + chlebk and chlebkov showed that for any , is np - hard to approximate within . by the second inequality in , it follows that if , then . consequently , if , then .by lemma [ lem : mvc ] , there is an l - reduction from to with and .therefore , for any , is np - hard to approximate within .the maximum degree of a graph of vertices is at most .again by the second inequality in , we have .thus is bounded by a polynomial in . 
if is not a constant but is part of the input , then a straightforward generalization of the l - reduction from to as in lemma [ lem : mvc ] gives an l - reduction from minimum vertex cover to with and .dinur and safra showed that minimum vertex cover is np - hard to approximate within any constant less than .it follows that if is not a constant but is part of the input , then is np - hard to approximate within any constant less than .this completes the proof of theorem [ thm : cmsrd ] .[ [ inapproximability - of - and- . ] ] inapproximability of and .+ + + + + + + + + + + + + + + + + + + + + + + + + + it is easy to check that all instances of and in our constructions for theorem [ thm : msrd ] and theorem [ thm : cmsrd ] admit optimal solutions in canonical form with maximum gap , except for the following two cases : 1 . in the l - reduction from to and ,a strip that is a pair of literal markers has a gap of , which is larger than for .2 . in the l - reduction from to ,a strip that is a pair of edge markers may have an arbitrarily large gap if it corresponds to one of many hyper - edges that share a single vertex .to extend our results in theorem [ thm : msrd ] and theorem [ thm : cmsrd ] to the corresponding results in theorem [ thm : gap ] , the first case does not matter because we set the parameter to when deriving the lower bounds for and from the lower bound for .the second case is more problematic , and we have to use a different l - reduction to obtain a slightly weaker asymptotic lower bound for .trevisan showed that is np - hard to approximate within .by lemma [ lem : mis ] , there is an l - reduction from to with . by the two inequalities in, we have . thus is np - hard to approximate within .this completes the proof of theorem [ thm : gap ] .a strip of length has _ adjacencies _ between consecutive markers . in general , strips of total length have adjacencies .besides the total strip length , the total number of adjacencies in the strips is also a natural objective function of .it can be checked that our l - reductions for and still work even if the objective function is changed from the total strip length to the total number of adjacencies in the strips .the only effect of this change is that the constant is halved and correspondingly the constant is doubled ( from to ) .since the product is unaffected , theorem [ thm : msrd ] and the second part of theorem [ thm : gap ] remain valid . for theorem [ thm:2d ] , we can adapt the -approximation algorithm for maximizing the total strip length to a -approximation algorithm for maximizing the total number of adjacencies in strips , for any constant .the only change in the algorithm is to enumerate all -intervals of strip lengths at most , instead of and .we note that the small difference between the two objective functions , total length versus total number of adjacencies , has led to difference in the complexities of two other bioinformatics problems : for rna secondary structure prediction , the problem maximum stacking base pairs ( msbp ) maximizes the total length of helices , and the problem maximum base pair stackings ( mbps ) maximizes the total number of adjacencies in helices . on implicit input of base pairs determined by pair types ,msbp is polynomially solvable , but mbps is np - hard and admits a polynomial - time approximation scheme ; on explicit input of base pairs , msbp and mbps are both np - hard , and admit constant approximations with factors and , respectively . 
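Returning to the strip-enumeration step of the approximation algorithm above: the sketch below illustrates, for the basic setting in which all markers are distinct and appear in positive orientation, how the candidate intervals for short strips could be composed. It only treats strips of length 2 (length-3 strips would be handled analogously), and the function name and index-interval representation are our own choices for illustration, not notation from the paper.

```python
from itertools import combinations

def candidate_pair_intervals(maps):
    """For every unordered pair of markers, return the shortest substring
    (as a closed index interval) of each genomic map containing both markers.

    `maps` is a list of d genomic maps, each a permutation of the same
    distinct markers in positive orientation.  This is only a toy sketch of
    step 1 of the approximation algorithm restricted to length-2 strips.
    """
    pos = [{m: i for i, m in enumerate(gmap)} for gmap in maps]
    candidates = {}
    for x, y in combinations(maps[0], 2):
        # in each map the shortest substring containing x and y spans
        # the closed interval between their two positions
        candidates[(x, y)] = [(min(p[x], p[y]), max(p[x], p[y])) for p in pos]
    return candidates

# tiny example with d = 2 maps over markers 1..5
maps = [[1, 2, 3, 4, 5], [2, 1, 3, 5, 4]]
print(candidate_pair_intervals(maps)[(1, 2)])  # [(0, 1), (0, 1)]
```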
in our theorem[ thm : msrd ] and theorem [ thm : cmsrd ] , we have chosen to display explicit lower bounds for and , despite the fact that they are rather small and unimpressive . as commented by m. karpinski after the author s isaac presentation , it may be possible to improve the lower bound for by an l - reduction from another problem .for example , berman and karpinski proved that is apx - hard to approximate within any constant less than by an l - reduction from e-occ - e-lin- , and proved that e-occ - e-lin- is np - hard to approximate within some other constant by an l - reduction from yet another problem , and so on . by constructing an l - reduction directly from e-occ - e-lin- to , say, we might obtain a better lower bound .we were not engaged in such pursuits in this paper . sincesatisfiability problems are well - known , we chose an l - reduction from to for the sake of a gentle presentation , and we made no effort in optimizing the constants .we proved theorem [ thm : gap ] by extending our proofs of theorem [ thm : msrd ] and theorem [ thm : cmsrd ] with minimal modifications .we note that the -gap constraint actually makes it easier to prove the apx - hardness of and than to prove the apx - hardness of and .for example , our constructions for and can be much simplified to obtain better approximation lower bounds for and .we omit the details and refer to for more results on these restricted variants . on the other hand , the correctness of our reductions does require gaps of at least markers .thus our proofs do not imply the apx - hardness of or .consistent with our results , bulteau , fertin , and rusu proved that is apx - hard for all and is np - hard for .a curious concept called _ paired approximation _ was recently introduced by eppstein . for certain problems on the same input ,say clique and independent set on the same graph , sometimes we would be happy to find a good approximation to either one , if not both .inapproximability results for pairs of problems are often _ incompatible _ : the hard instances for one problem are disjoint from the hard instances for the other problem . as a result ,an approximation algorithm may find a solution to one or the other of two problems on the same input that is better than the known inapproximablity bounds for either individual problem .note that our inapproximability results for and are compatible because they are obtained from the same reduction from . thus even as a paired approximation problem , ( , )is still apx - hard .this is the first inapproximability result for a paired approximation problem in bioinformatics .the apx hardness results for and in theorem [ thm : msrd ] was obtained in december 2008 .the author was later informed by binhai zhu in january 2009 that lusheng wang and he had independently and almost simultaneously proved a weaker result that is np - hard .berman and m. karpinski : on some tighter inapproximability results , in _ proceedings of the 26th international colloquium on automata , languages and programming ( icalp99 ) _ , 1999 , lncs 1644 , pp .200209 .l. bulteau , g. fertin , and i. rusu : maximal strip recovery problem with gaps : hardness and approximation algorithms , in _ proceedings of the 20th international symposium on algorithms and computation ( isaac09 ) _ , 2009 , lncs 5878 , pp .710719 .v. choi , c. zheng , q. zhu , and d. 
sankoff : algorithms for the extraction of synteny blocks from comparative maps , in _ proceedings of the 7th international workshop on algorithms in bioinformatics ( wabi07 ) _ , 2007 , pp .277288 .m. jiang : on the parameterized complexity of some optimization problems related to multiple - interval graphs , in _ proceedings of the 21st annual symposium on combinatorial pattern matching ( cpm10 ) _ , 2010 , lncs 6129 , pp .125137 .l. wang and b. zhu : on the tractability of maximal strip recovery , in _ proceedings of the 6th annual conference on theory and applications of models of computation ( tamc09 ) _ , 2009 , lncs 5532 , pp .400409 . c. zheng , q. zhu , and d. sankoff: removing noise and ambiguities from comparative maps in rearrangement analysis , _ ieee / acm transactions on computational biology and bioinformatics _ , * 4 * ( 2007 ) , 515522 .
in comparative genomic , the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes . for the reliable recovery of syntenic blocks , noise and ambiguities in the genomic maps need to be removed first . maximal strip recovery ( msr ) is an optimization problem proposed by zheng , zhu , and sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities . given genomic maps as sequences of gene markers , the objective of is to find subsequences , one subsequence of each genomic map , such that the total length of syntenic blocks in these subsequences is maximized . for any constant , a polynomial - time -approximation for was previously known . in this paper , we show that for any , is apx - hard , even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map . moreover , we provide the first explicit lower bounds on approximating for all . in particular , we show that is np - hard to approximate within . from the other direction , we show that the previous -approximation for can be optimized into a polynomial - time algorithm even if is not a constant but is part of the input . we then extend our inapproximability results to several related problems including , , and . * keywords : * computational complexity , bioinformatics , sequence analysis , genome rearrangement .
the relentless growth of the internet goes along with a wide range of internetworking problems related to routing protocols , resource allowances , and physical connectivity plans .the study and optimization of algorithms and policies related to such problems rely heavily on theoretical analysis and simulations that use model abstractions of the actual internet . on the other hand , in order to extract the maximum benefit from these studies , it is necessary to work with reliable internet topology generators .the basic priority at this respect is to best define the topology to use for the network being simulated .this implies the characterization of how routers , hosts , and physical links interconnect with each other in shaping the actual internet . in the last years, research groups started to deploy technologies and infrastructures in order to obtain a more detailed picture of the internet .several studies , aimed at tracking and visualizing the internet large scale topology and/or performance , are leading to internet mapping projects at different resolution scales .these projects typically collect data on internet elements ( routers , domains ) and the connections among them ( physical links , peer connections ) , in order to create a graph - like representation of large parts of the internet in which the nodes represent those elements and the links represent the respective connections .mapping projects focus essentially on two levels of topological description .first , by inferring router adjacencies it has been possible to measure the internet router ( ir ) level topology .the second measured topology works at the autonomous system ( as ) level and the connectivity obtained from as routing path information .although these two representations are related , it is clear that they describe the internet at rather different scales .in fact , each as groups a generally large number of routers , and therefore the as maps are in some sense a coarse - grained view of the ir maps .internet maps exhibit an extremely large degree of heterogeneity and the use of statistical tools becomes mandatory to provide a proper mathematical characterization of this system .statistical analysis of the internet maps fabric have pointed out , to the surprise of many researchers , a very complex connectivity pattern with fluctuations extending over several orders of magnitude . in particular , it has been observed a power - law behavior in metrics and statistical distributions of internet maps at different levels . this evidence makes the internet an example of the so - called _ scale - free _networks and uncover a peculiar structure that can not be satisfactorily modeled with traditional topology generators .previous internet topology generators , based in the classical erds and rnyi random graph model or in hierarchical models , yielded an exponentially bounded connectivity pattern , with very small fluctuations and in clear disagreement with the recent empirical findings . a theoretical framework for the origin of scale - free graphs has been put forward by barabsi and albert by devising a novel class of dynamical growing networks . 
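As a concrete illustration of such a growing-network mechanism, the sketch below implements a generic preferential-attachment rule: each new node attaches to existing nodes with probability proportional to their current degree. It is only meant to show how broad, power-law-like degree distributions emerge from growth; it is not any specific internet topology generator, and the parameters (number of nodes, edges per new node) are arbitrary.

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow an undirected graph: each new node attaches m edges to
    existing nodes chosen with probability proportional to their degree.
    Returns a dict mapping node -> set of neighbours."""
    rng = random.Random(seed)
    adj = {i: set(range(m + 1)) - {i} for i in range(m + 1)}   # small complete core
    targets = [v for v, nbrs in adj.items() for _ in nbrs]     # degree-weighted urn
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))    # preferential choice of targets
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
            targets.extend([new, t])           # both endpoints gain one degree
    return adj

g = preferential_attachment(5000, 2)
print(sorted((len(nbrs) for nbrs in g.values()), reverse=True)[:5])  # a few large hubs
```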
following these ideas ,several internet topology generators yielding power - law distributions have been subsequently proposed .data gathering projects are progressively making available larger as and ir level maps which are susceptible of more accurate statistical analysis and raise new and challenging questions about the internet topology .for instance , statistical distributions show deviations from the pure power - law behavior and it is important to understand to which extent the internet can be considered a scale - free graph . the way these scaling anomalies usually signaled by the presence of cut - offs in the corresponding statistical distributions are related to the internet finite size and physical constraints is a capital issue in the characterization of the internet and in the understanding of the dynamics underlying its growth .a further important issue concerns the fact that the internet is organized on different hierarchical levels , with a set of backbone links carrying the traffic between local area providers .this structure is reflected in a hierarchical arrangement of administrative domains and in a different usage of links and connectivity of nodes .the interplay between the scale - free nature and the hierarchical properties of the internet is still unclear , and it is an important task to find metrics that can exploit and characterize hierarchical features on the as and ir level . finally , although one would expect internet as and ir level maps to exhibit similar scale - free properties , the different resolution in both kinds of maps might lead to a diversity of metrics properties . in this paperwe present a detailed statistical analysis of large as and ir level maps .we study the scale - free properties of these maps , focusing on the degree and betweenness distributions . while scale - free properties are confirmed for maps at both levels ,ir level maps show also the presence of an exponential cut - off , that can be related to constraints acting on the physical connectivity and load of routers .power - law distributions with a cut - off are a general feature of scale - free phenomena in real finite systems and we discuss their origin in the framework of growing networks . at the as level we confirm the presence of a strong scale - free character for the large - scale degree and betweenness distributions .we also discuss that deviations from the pure power - law behavior found in recent maps at intermediate connectivities has a marginal impact on the resilience and information spreading properties of the internet .furthermore , we propose two metrics based on the connectivity and the clustering correlation functions , that appear to sharply characterize the hierarchical properties of internet maps . in particular , these metrics clearly distinguish between the as and ir levels , which show a very different behavior at this respect . 
while ir level maps appear to possess almost no hierarchical structure , as maps fully exploit the hierarchy of domains around which the internet revolves .the differences highlighted between the two levels might be very important in the developing of faithful internet topology generators .the testing of internet protocols working at different levels might need of topology generators accounting for the different properties observed .hierarchical features are also important to scrutinize theoretical models proposing new dynamical growth mechanisms for the internet as a whole .nowadays the internet can be partitioned in autonomously administered domains which vary in size , geographical extent , and function .each domain may exercise traffic restrictions or preferences , and handle internal traffic according to particular autonomous policies .this fact has stimulated the separation of the inter - domain routing from the intra - domain routing , and the introduction of the autonomous systems number ( asn ) .each as refers to one single administrative domain of the internet . within eachas , an interior gateway protocol is used for routing purposes . between ass ,an exterior gateway protocol provides the inter - domain routing system .the border gateway protocol ( bgp ) is the most widely used inter - domain protocol . in particular, it assigns a 16-bit asn to identify , and refer to , each as .the internet is usually portrayed as an undirected graph .depending on the meaning assigned to the nodes and links of the associated graph , we can obtain different levels of representation , each one corresponding to a different degree of coarse - graining respect to the physical internet . _ internet router level _ :in the ir level maps , nodes represents the routers , while links represent the physical connections among them . in general , all mapping efforts at the ir level are based on computing router adjacencies from _ traceroute _ sequences sent to a list of networks in the internet .the traceroute command performed from a single source provides a spanning tree from that source to every other ( reachable ) node in the network . by merging the information obtained from different sources it is possible to construct ir level maps of different portions of the internet . in order to catch all the various cross - links , however ,a large number of source probes is needed .in addition , the instability of paths between routers and other technical problems such as multiple alias interfaces make the mapping a very difficult task .these difficulties have been diversely tackled by the different internet mapping projects : the lucent project at bell labs , the cooperative association for internet data analysis , and the scan project at the information sciences institute , that develop methods to obtain partial maps from a single source . _autonomous system level _ : in the as level graphs each node represents an as , while each link between two nodes represents the existence of a bgp peer connection among the corresponding ass .it is important to stress that each as groups many routers together and the traffic carried by a link is the aggregation of all the individual end - host flows between the corresponding ass .the as map can be constructed by looking at the bgp routing tables .in fact , the bgp routing tables of each as contains a spanning tree from that node to every other ( reachable ) as . 
we can then try to reconstruct the complete as map by merging the connectivity information coming from a certain fraction of these spanning trees .this method has been actually used by the national laboratory for applied network research ( nlanr ) , using the bgp routing tables collected at the oregon route server , that gathers bgp - related information since 1997 .enriched maps can be obtained from some other public sources , such as looking glass sites and the reseaux ip europeens ( ripe ) , getting about 40% of new as - as connections .these graph representations do not model individual hosts , too numerous , and neglect link properties such as bandwidth , actual data load , or geographical distance . for these reasons , the graph - like representation must be considered as an overlay of the basic topological structure : the skeleton of the internet . moreover , the data collected for the two levels are different , and both representations may be incomplete or partial to different degrees . in particular , measurements may not capture all the nodes present in the actual network and , more often , they do not include all the links among nodes .it is not our purpose here to argue about the reliability of the different maps .however , the conclusions we shall present in this paper seem rather stable in time for the different maps .hopefully , this fact means that , despite the different degrees of completeness , the present maps represent a fairly good statistical sampling of the internet as a whole .in particular , we shall use the map collected during october/ november 1999 by the scan project with the mercator software as representative of the internet router level . at the autonomous system level we consider the ( as ) map collected at oregon route server and the enriched ( as+ ) map ( available at ) , both dated may 25 , 2001 .we start our study by analysing some standard metrics : the total number of nodes and edges , the node connectivity , the minimum path distance between pairs of nodes , the clustering coefficient , and the betweenness .the connectivity of a node is defined as the number of edges incident to that node , _i.e. _ the number of connections of that node with other nodes in the network .if nodes and are connected we will say that they are nearest neighbors .the minimum path distance between a pair of nodes and is defined as the minimum number of nodes traversed by a path that goes from one node to the other .the clustering coefficient of the node is defined as the ratio between the number of edges in the sub - graph identified by its nearest neighbors and its maximum possible value , corresponding to a complete sub - graph , _ i.e. _ .this magnitude quantifies the tendency that two nodes connected to the same node are also connected to each other .the clustering coefficient takes values of order for grid networks . 
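As an aside, a compact sketch of how two of these standard metrics, the per-node clustering coefficient and the minimum path distances, could be computed from an adjacency-set representation of a map; the representation and function names are our own choices for illustration.

```python
from collections import deque

def clustering_coefficient(adj, i):
    """c_i = (edges among the neighbours of i) / (k_i * (k_i - 1) / 2)."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return links / (k * (k - 1) / 2)

def min_path_distances(adj, source):
    """Hop counts from `source` to every reachable node via breadth-first
    search; averaging over all pairs gives the mean minimum path distance."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```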
on the other hand , for random graphs , which are constructed by connecting nodes at random with a fixed probability ,the clustering coefficient is of order .finally , the betweenness of a node is defined as the total number of minimum paths that pass through that node .it gives an measure of the amount of traffic that goes through a node , if the minimum path distance is considered as the metric defining the optimal path between pairs of nodes .the average values of these metrics over every node ( or pair of nodes for ) in the as , as+ , and ir maps is given in table [ tab:1 ] ..average metrics of the as , as+ , and ir maps .see text for the metrics definitions . [cols="<,>,>,<,<,<,<,<",options="header " , ] the average connectivity for the three maps is of order ; therefore , they can be considered as _ sparse _ graphs . despite the small average connectivity , however , the average minimum path distance is also very small , compared to the size of the maps .the probability distribution of the minimum path distance , ] . in the case of a pure power - law probability distribution , we expect the functional behavior , where is a normalization constant . in fig .[ fig:2 ] we show the connectivity distribution for the as , as+ , and ir maps . for the as map a clear power law decay with exponent observed , as it has been already reported elsewhere .the reported distribution is also stable in time as found by analyzing different time snapshot of the as level maps obtained by the nlanr . as noted in ref . , the connectivity distribution for the as+ enriched data deviates from a pure power law at intermediate connectivities .this anomaly might or might not be related to the biased enrichment of the internet sampling ( see ref . ) . while this represents an important point in the detailed description of the connectivity properties , it is not critical concerning the scale - free nature of the internet . with respect to the network physical properties ,it is just the large connectivity region that is actually effective .indeed , recent studies about network resilience to removal of nodes and virus spreading have shown that the relevant parameter is the ratio between the first two moments of the connectivity distribution . if then the network manifests some properties that are not observed for networks with exponentially bounded connectivity distributions .for instance , we can randomly remove practically all the nodes in the network and a giant connected component will still exist . in both the as and as+ maps ,in fact , we observe a wide connectivity distribution , with the same dependency for very large .the factor is mainly determined by the tail of the distribution , and is very similar for both maps . in particular , we estimate and for the as and as+ maps , respectively . with such a large values , for all practical purposes ( resilience , virus spreading , traffic , etc . 
)the as and as+ maps behave almost identically .the connectivity distribution of the ir level map has a power - law behavior that is , however , smoothed by a clear exponential cut - off .the existence of a power - law tendency for small connectivities is better seen for the probability distribution ] .while we do not want to enter into the details of the different fitting procedures , we suggest that the more general fitting form , in which is an independent fitting parameter , is likely a better option .the presence of truncated power laws must not be considered a surprise , since it finds a natural place in the context of scale - free phenomena . actually , bounded scale - free distributions ( _ i.e. _ power - law distributions with a cut - off ) are implicitly present in every real world system because of finite - size effects or physical constraints .truncated power laws are observed also in other real networks and different mechanisms have been proposed to explain the cut - off for large connectivities . actually , we can distinguish two different kinds of cut - offs in real networks .the first is an exponential cut - off , , which can be explained in terms of a finite connectivity capacity of the network elements or incomplete information .this is likely what is happening at the ir level , where the finite capacity constraint ( maximum number of router interfaces ) is , in our opinion , the dominant mechanism affecting the tail of the connectivity distribution . in this perspective, larger and more recent samples at the ir level could present a shift in the cut - off due to the improved technical router capabilities and the larger statistical sampling .a second possibility is given by a very steep cut - off such as , where is the heaviside step function .this is what happens in growing networks with a finite number of elements . since sf networksare often dynamically growing networks , this case represents a network which has grown up to a finite number of nodes .the maximum connectivity of any node is related to the network age .the scale - free behavior is evident up the and then decays as a step function since the network does not possess any node with connectivity larger than . by inspecting fig .[ fig:2 ] , this second possibility appears realized at the as level .indeed , the dominant mechanism at this level is the finite size of the network , while connectivity limits are not present , since each as is a collection of a large number of routers , and it can handle a very large connectivity load .the connection between finite capacity and bounded distributions becomes evident also if we consider the betweenness .this magnitude is a static estimate of the amount of traffic that a node supports .hence , if a router has a bounded capacity , the betweenness distribution should also be bounded at large betweenness . on the contrary, this effect should be absent for the as maps .the integrated betweenness distribution $ ] for the as , as+ , and ir maps is shown in fig . [ fig:4 ] .the as and as+ distributions are practically the same and they are well fitted by a power law with an exponent . 
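To make the fitting form discussed above concrete, the sketch below fits log P(k) = log c - gamma*log(k) - k/k_c by linear least squares, i.e. a bounded power law with the cut-off k_c kept as an independent fitting parameter. The synthetic histogram only stands in for a measured distribution; the exponent and cut-off values are illustrative, not the ones reported for the maps.

```python
import numpy as np

def fit_truncated_power_law(k, pk):
    """Least-squares fit of log P(k) = log(c) - gamma*log(k) - k/k_c,
    i.e. P(k) ~ k**(-gamma) * exp(-k / k_c)."""
    A = np.column_stack([np.ones_like(k), -np.log(k), -k])
    coef, *_ = np.linalg.lstsq(A, np.log(pk), rcond=None)
    log_c, gamma, inv_kc = coef
    return np.exp(log_c), gamma, 1.0 / inv_kc

# synthetic histogram standing in for a measured degree distribution
k = np.arange(1.0, 201.0)
pk = 0.5 * k**-2.2 * np.exp(-k / 70.0)
print(fit_truncated_power_law(k, pk))   # ~ (0.5, 2.2, 70.0)
```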
in the case of the ir map , on the other hand , the betweenness distribution follows a truncated power law , in analogy to what is observed for the connectivity distribution .the betweenness distribution , therefore , corroborates the equivalence between the as and as+ maps , and the existence of truncated power laws for the ir map .finally , it is worth to stress that while the power law truncation is an expected feature of finite systems , the scale - free regime is the important signature of an emergent cooperative behavior in the internet dynamical evolution .this dynamics play therefore a central role in the understanding and modeling of the internet . in this persepective , the developing of a statistical mechanics approach to complex networks is providing a new dynamical framework where the distinctive statistical regularities of the internet can be understood in term of the basic processes ruling the appearance or disappearence of nodes and links .the topological metrics analyzed so far give us a distinction between the as and ir maps with respect to the large connectivity and betweenness properties .the difference becomes , however , more evident if we consider properties related with the existence of hierarchy and correlations .the primary known structural difference in the internet is the distinction between _ stub _ and _ transit _ domains . nodes in stub domains have links that go only through the domain itself .stub domains , on the other hand , are connected via a gateway node to transit domains that , on the contrary , are fairly well interconnected via many paths .this hierarchy can be schematically divided in international connections , national backbones , regional networks , and local area networks .nodes providing access to international connections or national backbones are of course on top level of this hierarchy , since they make possible the communication between regional and local area networks . moreover , in this way , a small average minimum path length can be achieved with a small average connectivity .this hierarchical structure will introduce some correlations in the network structure , and it is an important issue to understand how these features manifest at the topological level . in order to exploit the presence of hierarchies in internet maps we introduce two metrics based on the clustering coefficient and the nearest neighbor average connectivity .the previously defined clustering coefficient is the average probability that two neighbors and of a node are connected .let us consider the _adjacency matrix _ , that indicates whether there is a connection between the nodes and ( ) , or the connection is absent ( ) .given the definition of the clustering coefficient , it is easy to see that the number of edges in the subgraph identified by the nearest neighbors of the node can be computed as . therefore , the clustering coefficient measures the existence of _ correlations _ in the adjacency matrix , weighted by the corresponding node connectivity . in section [ sec : ave ] we have shown that the clustering coefficient for the as , as+ , and ir maps is four orders of magnitude larger than the one expected for a random graph and , therefore , that they are far from being random .further information can be extracted if one computes the clustering coefficient as a function of the node connectivity . in fig .[ fig:5 ] we plot the average clustering coefficient for nodes with connectivity . 
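A minimal sketch of how this degree-dependent average could be obtained from an adjacency-set representation, grouping the per-node clustering coefficients by degree; the function name is our own choice for illustration.

```python
from collections import defaultdict

def clustering_by_degree(adj):
    """Average clustering coefficient of nodes grouped by their degree k.
    For a node i with k_i >= 2 neighbours,
    c_i = (edges among its neighbours) / (k_i * (k_i - 1) / 2)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        sums[k] += links / (k * (k - 1) / 2)
        counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}
```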
in the case of the as and as+ maps this quantity follows a similar trend that can be approximated by a power law decay with an exponent around . for the ir map , however , except for a sharp drop for large values of , attributable to low statistics , it is almost constant , and equal to the average clustering coefficient .this implies that , in the as and as+ maps , nodes with a small number of connections have larger local clustering coefficients than those with a large connectivity .this behavior is consistent with the picture described in the previous section of highly clustered regional networks sparsely interconnected by national backbones and international connections .the regional clusters of ass are probably formed by a large number of nodes with small connectivity but large clustering coefficients .moreover , they should also contain nodes with large connectivities that are connected with the other regional clusters .these large connectivity nodes will be on their turn connected to nodes in different clusters which are not interconnected and , therefore , will have a small local clustering coefficient . on the contrary , in the ir level map these correlationsare absent .somehow the domain hierarchy does not produce any signature at the single router scale , where the geographic constraints and connectivity bounds probably play a more important role .these observations for the clustering coefficient are supported by another metric related with the correlations between node connectivities .these correlations are quantified by the probability that , given a node with connectivity , it is connected to a node with connectivity . with the available data , a direct plot of results very noisy and difficult to interpret .thus in ref . we suggested to measure instead the nearest neighbors average connectivity of the nodes of connectivity , , and to plot it as a function of the connectivity . if there are no connectivity correlations ( _ i.e. _ for a random network ) , then , where is the connectivity distribution , and we obtain , which is independent of .the corresponding plots for the as , as+ , and ir maps are shown in fig . [ fig:6 ] . for the as and as+ maps we observe a power - law decay for more than two decades , with a characteristic exponent , clearly indicating the existence of correlations .on the contrary , the ir map displays again an almost constant nearest neighbors average connectivity , very similar to the expected value for a random network with the same connectivity distribution , .again , the sharp drop for large can be attributed to the low statistics for such large connectivities .therefore , also in this case the two levels of representation show very different features .it is worth remarking that the present analysis of the hierarchical and correlation properties shows a very good consistency of results in the case of the as and as+ maps .this points out a robustness of these features that can thus be considered as general properties at the as level . on the other hand, the ir map shows a marked difference that must be accounted for when developing topology generators . 
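For completeness, a companion sketch for the degree correlation metric, the nearest neighbours average connectivity of nodes of degree k, again assuming an adjacency-set representation; an essentially flat curve indicates the absence of degree correlations, while a decaying curve signals hierarchy.

```python
from collections import defaultdict

def knn_by_degree(adj):
    """For every degree class k, the mean degree of the neighbours of
    nodes with degree k, averaged over all such nodes."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k == 0:
            continue
        sums[k] += sum(len(adj[j]) for j in nbrs) / k
        counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

Both hierarchy metrics thus give a consistent picture: strong degree correlations at the as level and essentially none at the router level.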
in other words , internet protocols working at different representation levels must be thought as working on different topologies .topology generators as well must include these differences , depending on the level at which we intend to model the internet topology .the increasing availability of larger internet maps and the proliferation of growing networks models with scale - free features have recently stimulated a more detailed statistical analysis aimed at the identification of distinctive metrics and features for the internet topology . at this respect , in the present work we have presented a detailed statistical analysis of several metrics on internet maps collected at the router and autonomous system levels .our analysis confirms the presence of a power - law ( scale - free ) behavior for the connectivity distribution , as well as for the betweenness distribution , that can be associated to a measure of the load of the nodes in the maps .the exponential cut - offs observed in the ir maps , associated to the limited capacity of the routers , are absent in the as level , which conglomerate a large number of routers and are thus able to bear a larger load .the analysis of the clustering coefficient and the nearest neighbors average connectivity show in a quantitative way the presence of strong correlations in the internet connectivity at the as level , correlations that can be related to the hierarchical distribution of this network .these correlations , on the other hand , seem to be nonexistent at the ir level .the correlation properties clearly indicate the presence of strong diferences between the ir and as levels of representation .our findings represent a step forward in the characterization of the internet topology , and will be helpful for scrutinizing more thoroughly the actual validity of the network models proposed so far , and as ingredient in the elaboration of new and more realistic internet topology generators .a first step in this direction has been already given in the network model proposed in ref .this work has been partially supported by the european commission - fet open project cosin ist-2001 - 33555 .r.p .- s . acknowledges financial support from the ministerio de ciencia y tecnologa ( spain ) .we thank t. erlebach for the help in the data collection process .the national laboratory for applied network research ( nlanr ) , sponsored by the national science foundation , provides internet routing related information based on bgp data ( see http://moat.nlanr.net/ ) .r. a. albert , h. jeong , and a .-barabsi , nature * 406 * , 378 ( 2000 ) ; d. s. callaway , m. e. j. newman , s. h. strogatz , and d. j. watts , phys .lett . * 85 * , 5468 ( 2000 ) ; r. cohen , k. erez , d. ben - avraham , and s. havlin , phys . rev . lett . * 86 * , 3682 ( 2001 ) .
we present a statistical analysis of different metrics characterizing the topological properties of internet maps , collected at two different resolution scales : the router and the autonomous system level . the metrics we consider allow us to confirm the presence of scale - free signatures in several statistical distributions , as well as to show in a quantitative way the hierarchical nature of the internet . our findings are relevant for the development of more accurate internet topology generators , which should include , along with the scale - free properties of the connectivity distribution , the hierarchical signatures unveiled in the present work .
heisenberg discussed a thought experiment about the position measurement of a particle by the -ray microscope and found the trade - off relation between the error in the position measurement and the disturbance to the momentum caused by the measurement process : this inequality epitomizes the complementarity of quantum measurements : we can not perform the measurement of an observable without causing disturbance to its canonically conjugate observable . at the inception of quantum mechanics , the kennard - robertson inequality \rangle}}|\ ] ] was erroneously interpreted as the mathematical formulation of the trade - off relation of error and disturbance in quantum measurement , where }} ] of an observable ,suppose that we perform the same measurement described by meausrement operators , where the first index denotes the measurement outcome .the probability distribution of the measurement outcomes and the post - measurement state are given by } } = { { \operatorname{tr } [ { { \hat\rho}}\hat e_i ] } } , \\{ { \hat\rho } } ' = \sum_{i , a } { { \hat m}}_{i , a}{{\hat\rho}}{{\hat m}}_{i , a}^\dagger,\end{gathered}\ ] ] where is the positive operator - valued measure ( povm ) corresponding to . if the measurement is the projection measurement , then the estimated value of is calculated by where are the eigenvalues of , and is the number of times that the outcome is obtained ( ) . in general ,the measurement error affects the outcomes , and thus the estimation of is nontrivial . a reasonable requirement to the estimatorsis the so - called consistency that for all quantum states and an arbitrary the estimated value asymptotically converges to : an example of the consistent estimator is the maximum likelihood estimator .since the estimated value is calculated from the measurement outcomes , the estimator of is a function of : .the expectation value and variance of the estimator are calculated to be : = \sum_{\{n_i\ } } p(\{n_i\ } ) x^{{\text{est}}}(\{n_i\ } ) , \label{eq : ex - x } \\\operatorname{var}[x^{{\text{est } } } ] : = \operatorname{\mathbb{e}}[(x^{{\text{est}}})^2 ] - \operatorname{\mathbb{e}}[x^{{\text{est}}}]^2,\end{gathered}\ ] ] where the summation in is taken over all sets that satisfy and , and is the probability that each outcome is obtained times : from , the average of the estimator satisfies = { { \langle \hat x \rangle}}.\ ] ] the variance ] such asthe maximum likelihood estimator .the variance ] of the optimal estimators , equivalent to the rhs of , does not caused in the estimation process .therefore , the rhs of shows the quantum fluctuation and measurement error .the rhs of is independent of the specification of by .thus , we use the following parameterization . where is the identity operator , and is the generators of the lie algebra .the generator satisfy } } = 0 , \quad { { \operatorname{tr } [ \hat\lambda_\mu\hat\lambda_\nu ] } } = \delta_{\mu\nu}.\ ] ] in terms of this generator , the observable , and the povm can be written as the expectation value and the probability distribution can be calculated as then , the rhs of can be calculated to be ^{-1 } \bm{x}.\ ] ] the fisher information matrix varies with varying , but it is bounded from above by the quantum cramr - rao inequality : where is the quantum fisher information , that depend only on quantum state .the quantum fisher information is a monotone metric on the quantum state space with the coordinate system . 
here , by monotone means that for any quantum operation the following inequality is satisfied : where is the quantum fisher information on .although the quantum fisher information is not uniquely determined , from the monotonicity condition there exist the minimum and the maximum .the minimum is the symmetric logarithmic derivative ( sld ) fisher information .the sld fisher information is a real symmetric matrix , whose -element is defined as {\mu\nu } : = \frac{1}{2}{{\operatorname{tr } [ { { \hat\rho}}\{\hat l_\mu , \hat l_\nu\ } ] } } , \label{eq : def - sld - fisher}\ ] ] where the curly brackets denote the anti - commutator , and is a hermitian operator called sld operator defined as the solution to the following operator equation : the maximum quantum fisher information is the right logarithmic derivative ( rld ) fisher information .the rld fisher information is a hermitian matrix , whose -element is defined as {\mu\nu } : = { { \operatorname{tr } [ { { \hat\rho}}\hat l'_\nu \hat l'_\mu ] } } , \label{eq : def - rld - fisher}\ ] ] where is an operator called rld operator defined as the solution to the following operator equation : the inverse of the sld and rld fisher information matrices are calculated to be {\mu\nu } = \mathcal{c}_s(\hat\lambda_\mu , \hat\lambda_\nu ) : = \frac{1}{2}{{\langle \{\hat \lambda_\mu , \hat \lambda_\nu\ } \rangle } } - { { \langle \hat \lambda_\mu \rangle}}{{\langle \hat \lambda_\nu \rangle } } , \\ [ j_r^{-1}]_{\mu\nu } = \mathcal{c}(\hat\lambda_\mu , \hat\lambda_\nu ) : = { { \langle \hat \lambda_\mu \hat \lambda_\nu \rangle } } - { { \langle \hat \lambda_\mu \rangle}}{{\langle \hat \lambda_\nu \rangle}},\end{gathered}\ ] ] where and are the symmetrized and non - symmetrized correlation functions . for the observables and , from and , the rhs of is bounded from below as the equality is achieved if and only if is the projection measurement of , that is the povm corresponding to satisfies since the left - hand side ( lhs ) shows the quantum fluctuation and measurement error , and the rhs is the quantum fluctuation , the difference of both sides gives the measurement error .we define the measurement error as from , the measurement error is non - negative , and vanishes if and only if is the projection measurement of .since the fisher information matrix is defined by the probability distribution of the measurement outcomes , the measurement error is independent of the post - measurement state .moreover , if the measurement processes and satisfy with unitary operators , the measurement error and are equivalent . next , we discuss the disturbance caused by the measurement . the disturbance can not be quantified by the variance of an observable on the post - measurement state .it is essential to consider another measurement on the post - measurement state and estimation process .if the disturbance caused by the measurement is small , then we can accurately estimate the expectation value of another observable from the post - measurement state by performing an appropriate measurement . 
if the disturbance causes a drastic state change , then it is hard to estimate from the post - measurement state .suppose that we perform the measurement on the post - measurement state .the probability distribution of the measurement outcomes is given by }}.\ ] ] the estimated value of is calculated from the outcomes of the measurement .the average and the variance of the estimator are : = \sum_{\{n_j\ } } q(\{n_j\})y^{{\text{est}}}(\{n_j\ } ) , \label{eq : ex - y}\\ \operatorname{var}'[y^{{\text{est } } } ] : = \operatorname{\mathbb{e}}'[(y^{{\text{est}}})^2 ] - \operatorname{\mathbb{e}}'[y^{{\text{est}}}]^2,\end{gathered}\ ] ] where is the number of times that the outcome is obtained , the summation in is taken over all sets that satisfy , and the probability is the variance ] . from the classical and quantum cramr - rao inequalities ,any consistent estimator of satisfies \geq \bm{y}^{{\mathrm{t}}}j_s'^{-1}\bm{y } , \label{eq : cq - cramer - rao}\ ] ] the rhs implies the quantum fluctuation and disturbance caused by .the sld fisher information matrix may have eigenvalues .the rhs of is defined by \\+ \infty & \quad \text{otherwise}. \end{cases}\ ] ] that the rhs of is infinite means that for any measurement there does not exist consistent estimator . since the sld fisher information is the monotone metric , it satisfies .thus we obtain the difference of both sides corresponds to the disturbance caused by .we define the disturbance caused by as from the definitions of the sld fisher information matrix and the sld operators , the sld fihser information matrix is invariant under the unitary transformation : .if the measurement processes and satisfy the disturbances and are equivalent .thus , the definition of the disturbance in terms of the fisher information can extract the non - unitary effect in the measurement process .to derive the trade - off relations between error and disturbance in quantum measurement , we show some inequalities satisfied by the error and disturbance . in ref , it is shown that there exist the measurement such that this measurement is the optimal measurement that retrieves the information about from the disturbed state .the disturbance can be written as performing measurements and sequencially is equivalent to performing the measurement whose elements are the probability that the outcome and are obtained is }}.\ ] ] the probability distributions and are calculated to be these imply that the mapping from to and the mapping to are the markovian mapping . 
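As a small numerical sanity check of the monotonicity property invoked next, namely that the classical Fisher information can only decrease under such a Markovian (stochastic) coarse-graining, the following sketch compares the Fisher information of a two-outcome distribution before and after applying a column-stochastic map. The particular parameterization and map are arbitrary choices for illustration only.

```python
import numpy as np

def fisher_info(p, dp):
    """Classical Fisher information J = sum_i (dp_i)**2 / p_i of a
    distribution p(theta) with derivative dp = dp/dtheta."""
    return np.sum(dp**2 / p)

theta, eps = 0.3, 1e-6
def p_of(t):
    return np.array([np.cos(t)**2, np.sin(t)**2])

# column-stochastic matrix: T[j, i] = Pr(outcome j | outcome i)
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

p = p_of(theta)
dp = (p_of(theta + eps) - p_of(theta - eps)) / (2 * eps)   # numerical derivative
r, dr = T @ p, T @ dp                                      # coarse-grained statistics

print(fisher_info(p, dp) >= fisher_info(r, dr) - 1e-9)     # True: monotonicity holds
```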
from the monotonicity of the fisher information , we obtain where is calculated to be {\mu\nu } = \sum_{i , j } r_{i , j } ( \partial_\mu \log r_{i , j})(\partial_\nu \log r_{i , j}).\ ] ] therefore , the noise and disturbance in the measurement satisfy where the equalities are simultaneously satisfied if and only if the povm satisfies for all outcomes , and the associated post - measurement state satisfies in ref , it is proved that any quantum measurement satisfies \rangle}}|^2 .\label{eq : simul - heisenberg}\ ] ] from and , we obtain that the noise and disturbance in the measurement satisfy \rangle}}|^2 .\label{eq : error - disturbance - heisenberg}\ ] ] the inequalities and are similar , but their physical meanings are completely different . the inequality is the trade - off relation between the measurement errors of the two observables , and implies that we can not perform precise measurements of non - commutable observables simultaneously . since the measurement error is independent of the post - measurement state , it indicates nothing about the disturbance in the measurement process . the inequality is the trade - off relation between the error and disturbance in the measurement process , and implies that we can not retrieve the information about an observable without decreasing the information on the post - measurement state . the trade - off relation originally discussed by heisenberg is rigorously proved by the inequality . in the previous section , we showed that the error and disturbance are bounded by the commutation relation of the observables . however , the equality of can not be achieved for all quantum states . for example , if , \rangle } } = 0\ ] ] for any and . thus , the rhs of vanishes . the measurement error vanishes if is the projection measurement of , but in this case the disturbance diverges . the product of the measurement errors of non - commutable observables can not vanish . therefore , there exists a stronger bound for the error and disturbance . in this section , we derive the attainable bound of the error and disturbance .
in ref , it is proved that any measurement scheme that performs two projection measurements probabilistically satisfies the following stronger inequality : here and are defined as follows .let ( ) be the simultaneous irreducible invariant subspace of and , and the projection operator on .we define the probability distribution as and the post - measurement state of the projection measurement as .then , and are defined as } } - { { \operatorname{tr } [ { { \hat\rho}}_a \hat x ] } } ^2\right ) , \\ \mathcal{c}_q(\hat x , \hat y ) : = \sum_a p_a \left(\frac{1}{2}{{\operatorname{tr } [ { { \hat\rho}}_a \{\hat x , \hat y\ } ] } } - { { \operatorname{tr } [ { { \hat\rho}}_a \hat x ] } } { { \operatorname{tr } [ { { \hat\rho}}_a \hat y ] } } \right).\end{gathered}\ ] ] from the schwarz inequality , \rangle}}\right|^2 \notag \\ & \quad=\left| \sum_a p_a \left ( { { \operatorname{tr } [ { { \hat\rho}}_a \hat x\hat y ] } } - { { \operatorname{tr } [ { { \hat\rho}}_a \hat x ] } } { { \operatorname{tr } [ { { \hat\rho}}_a \hat y ] } } \right ) \right|^2 \notag \\ & \quad\leq ( \delta_q x)^2(\delta_q)^2\end{aligned}\ ] ] the following inequality can be obtained : \rangle}}|^2 .\label{eq : generalized - schrodinger - ineq}\ ] ] therefore , the bound set by is stronger than that set by .the importance of the inequality is that for all states and observables there exist measurement processes that achieve the equality of .the inequality is not proved for all measurement process , but numerically vindicated . from and, we obtain the tighter bound for the error and disturbance in the measurement : from the conditions for the equality of , and , the measurement which achieves the equality of is obtained as where and are positive with , and are the eigenstates of observables and , respectively , and s are orthogonal to each other . the observables and are the linear combination of the and : satisfying the following equation invoking quantum estimation theory , we define the error and disturbance in the quantum measurement .the error and disturbance are expressed in terms of the fisher information that gives the precision of the estimation concerning observables .we prove that the product of the error and disturbance is bounded from below by the commutation relation of the observables .moreover , we find the attainable bound . the measurement scheme that achieves the bound set by requires that the hilbert space of the post - measurement state satisfies .if the dimension of is less than , especially the case , the bound set by may not be attainable .the bound for the case that is an outstanding issue .this work was supported by kakenhi 22340114 , a grant - in aid for scientific research on innovation areas `` topological quantum phenomena '' ( kakenhi 22103005 ) , the global coe program `` the physical sciences frontier , '' and the photon frontier network program , from mext of japan .acknowledge support from jsps ( grant no .216681 ) .99 w. heisenberg , zeitschrift fr physik * 43 * , 172 ( 1927 ) , english translation : j. a. wheeler and h. zurek , _ quantum theory and measurement _ ( princeton univ . press , new jersey , 1983 ) , p. 62 .e. h. kennard , z. phys .* 44 * , 326 ( 1927 ) .h. p. robertson , phys . rev . * 34 * , 163 ( 1929 ) .m. ozawa , phys .a * 320 * , 367 ( 2004 ) .k. kraus , annals of physics * 64 * , 311 ( 1971 ) .h. cramr , _ mathematical methods of statistics _ ( princeton university , princeton , nj , 1946 ) .s. l. braunstein and c. m. caves , phys .lett . * 72 * , 3439 ( 1994 ) .d. 
petz , linear algebra appl . * 244 * , 81 ( 1996 ) . c. w. helstrom , phys . lett . a * 25 * , 101 ( 1967 ) . y. watanabe , t. sagawa , and m. ueda , phys . rev . lett . * 104 * , 020401 ( 2010 ) . y. watanabe , t. sagawa , and m. ueda , arxiv:1010.3571 ( 2010 ) .
we formulate the error and disturbance in quantum measurement by invoking quantum estimation theory . the disturbance formulated here characterizes the non - unitary state change caused by the measurement . we prove that the product of the error and disturbance is bounded from below by the commutator of the observables . we also find the attainable bound of the product .
with dense deployment of distributed access points known as remote radio heads ( rrhs ) under the coordination of a central unit ( cu ) , cloud radio access network ( c - ran ) has been envisioned as a promising candidate for the fifth - generation ( 5 g ) wireless networks in future .unlike the base station ( bs ) in the traditional cellular networks which encodes or decodes the user messages locally , in c - ran each rrh merely forwards the signals of wireless users from / to the cu via a high - speed fronthaul link ( fiber or wireless ) in the downlink and uplink communications , respectively , while leaving the joint encoding / decoding complexity to a baseband unit ( bbu ) in the cu .the centralized baseband processing at the cu enables enormous spectrum efficiency and energy efficiency gains for c - ran over conventional cellular networks . despite the theoretical performance gains , the practically achievable throughput of c - ranis largely constrained by the finite - capacity fronthaul links between the rrhs and the cu . in the literature, a considerable amount of effort has been dedicated to study effective techniques to reduce the fronthaul capacity required in both the uplink and downlink communications in c - ran . in the uplink communication ,the so - called `` quantize - and - forward ( qf ) '' scheme is proposed to reduce the communication rates between the cu and rrhs , where each rrh samples , quantizes and forwards its received wireless signals to the cu over its fronthaul link with a given capacity . in the downlink communication , besides the qf scheme , the cu can more efficiently send the user messages to each rrh directly over its fronthaul link , which then encodes the user messages into wireless signals and transmits them to users , .in this scheme , user - rrh association is crucial to the performance of c - ran since in general the cu can only send the messages for a subset of users to each rrh due to the limited capacity of each fronthaul link . in this paper, we consider the downlink communication in a c - ran consisting of multi - antenna rrhs and single - antenna users , where user messages are sent from cu to distributed rrhs via individual fronthaul links for coordinated transmission , as shown in fig .[ fig1 ] . by jointly designing the beamforming at all rrhs and user - rrh associations, we aim to maximize the minimum signal - to - interference - plus - noise ratio ( sinr ) of all users subject to each rrh s individual transmit power constraint as well as fronthaul capacity constraint .it is worth noting that without the fronthaul capacity constraints , each user can be served by all the rrhs and the resulted beamforming problem for sinr balancing has been solved in , by utilizing bisection method jointly with conic optimization techniques .however , with the newly introduced fronthaul constraints , the joint optimization of beamforming and user association results in a combinatorial problem , which is np - hard and thus difficult to be optimally solved in a network with large number of users and rrhs . 
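For reference, the fronthaul-unconstrained baseline mentioned above reduces to a one-dimensional search: bisect over the common SINR target and test the feasibility of each target, for example with a second-order-cone program over the beamformers under the per-RRH power budgets. The skeleton below shows only the bisection shell, with the feasibility test left as a caller-supplied oracle; the actual conic formulation and any solver interface are assumptions outside this sketch.

```python
def max_min_sinr_bisection(is_feasible, gamma_hi, tol=1e-4):
    """Bisection on the common SINR target gamma.  `is_feasible(gamma)`
    should return True iff some beamforming solution gives every user an
    SINR of at least gamma under the power constraints (e.g. via a conic
    feasibility program); here it is a placeholder supplied by the caller."""
    lo, hi = 0.0, gamma_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid      # target achievable: push it higher
        else:
            hi = mid      # target infeasible: back off
    return lo

# toy oracle standing in for the conic feasibility test
print(max_min_sinr_bisection(lambda g: g <= 2.5, gamma_hi=10.0))   # ~2.5
```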
in this paper , we propose a new method for practically solving this problem , which effectively decouples the design of rrh beamforming and user - rrh association , thus achieving significant complexity reduction .specifically , we first associate each user to all rrhs , and then iteratively reduce the number of users served by each rrh until the corresponding optimal beamforming solution given this user association solution satisfies all the rrhs fronthaul capacity constraints . a monotonic convergence is proved for the proposed iterative algorithm , and numerical results show that its performance is significantly better as compared to other heuristic solutions , especially when the fronthaul capacity is more stringent .this paper studies the downlink communication in c - ran , as shown in fig .the studied system consists of one cu , rrhs , denoted by the set , and users , denoted by .it is assumed that each rrh is equipped with antennas , while each user is equipped with one single antenna .it is further assumed that each rrh is connected to the cu via a fronthaul link with a capacity of bits per second ( bps ) . in the downlink, the cu sends the user messages and corresponding quantized beamforming vectors to each rrh via its fronthaul link .then , each rrh upconverts the digital messages into wireless signals and sends them to the users .the details are given as follows .it is assumed that the rrhs communicate with the users over quasi - static flat - fading channels over a given bandwidth of hz .the equivalent baseband transmit signal of rrh is denotes the message intended for user , which is modeled as a circularly symmetric complex gaussian ( cscg ) random variable with zero - mean and unit - variance , and denotes rrh s beamforming vector for user .suppose that rrh has a transmit sum - power constraint ; from ( [ eqn : transmit signal scheme 1 ] ) , we thus have =\sum_{k=1}^k\|{\mbox{\boldmath{ } } } _ { k , n}\|^2\leq \bar{p}_n ] .consider another sinr vector for the users denoted by ^t$ ] , where .first , the objective value of problem ( p1 ) is not changed with the new users sinr vector .next , without the fronthaul constraints given in ( [ eqn : constraint 3 ] ) , there must exist a beamforming solution , denoted by , such that is achievable by all users since , .similarly , also satisfies the fronthaul constraints ( [ eqn : constraint 3 ] ) . as a result , given any beamforming solution to problem ( p1 ), we can always find another solution such that all the users achieve the same sinr .proposition [ proposition1 ] is thus proved .the optimal value of problem ( p2,t ) must be upper - bounded by the optimal values of its sub - problems ( p2 - 1,t ) and ( p2 - 2,t ) , i.e. , . in the following ,we show that the equality is always achievable .first , consider the case when . since is no larger than the optimal value of problem ( p2 - 1,t ) ,there must exist one beamforming solution , denoted by s , such that the sinr target is simultaneously achieved by all the users over the wireless links . as a result , in this case the optimal value of problem ( p2,t ) is , which is achieved by the beamforming solution s .next , consider the case when . in this case , the fronthaul capacity constraints ( [ eqn : constraint 7 ] ) are satisfied even if all the users sinrs are equal to . 
as a result , the optimal value of problem ( p2,t ) is , which is achieved by the beamforming solution s .proposition [ proposition2 ] is thus proved .first , it can be shown that if we shut down one wireless link , the max - min sinr over the wireless links is non - increasing . as a result, it follows that the optimal value of problem ( p2 - 1,t ) is non - increasing with .second , since the number of users served by each rrh is non - increasing with , according to ( [ eqn : opt 1 ] ) the optimal value of problem ( p2 - 2,t ) is non - decreasing with . as a result, the gap between and will be non - increasing as increases . moreover , if all the wireless links are shut down at some iteration , then it follows that . as a result , there must exist one such that when , but when .the first part of proposition [ proposition3 ] is thus proved .next , according to proposition [ proposition1 ] , before the stopping criterion is satisfied , we have , which is non - decreasing with . as a result , the second part of proposition [ proposition3 ] is proved .l. zhou and w. yu , `` uplink multicell processing with limited backhaul via per - base - station successive interference cancellation , '' _ieee j. sel .areas commun ._ , vol . 30 , no . 10 , pp .1981 - 1993 , oct . 2013 .s. h. park , o. simeone , o. sahin , and s. shamai , `` robust and efficient distributed compression for cloud radio access networks , '' _ ieee trans .vehicular technology _692 - 703 , feb . 2013 .l. liu , s. bi , and r. zhang , `` joint power control and fronthaul rate allocation for throughput maximization in ofdma - based cloud radio access network , '' to appear in _ ieee trans .commun._. ( available on - line at arxiv:1407.3855 ) s. h. park , o. simeone , o. sahin and s. shamai , joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks , _ ieee trans . signal process .5646 - 5658 , nov .
cloud radio access network ( c - ran ) with centralized baseband processing is envisioned as a promising candidate for the next - generation wireless communication network . however , the joint processing gain of c - ran is fundamentally constrained by the finite - capacity fronthaul links between the central unit ( cu ) , where joint processing is implemented , and the distributed access points known as remote radio heads ( rrhs ) . in this paper , we consider the downlink communication in a c - ran with multi - antenna rrhs and single - antenna users , and investigate the joint rrh beamforming and user - rrh association problem to maximize the minimum signal - to - interference - plus - noise ratio ( sinr ) of all users subject to each rrh 's individual fronthaul capacity constraint . the formulated problem is in general np - hard due to the fronthaul capacity constraints and is thus difficult to solve optimally . we propose a new iterative method for this problem which decouples the design of beamforming and user association , where the number of users served by each rrh is iteratively reduced until the obtained beamforming and user association solution satisfies the fronthaul capacity constraints of all rrhs . monotonic convergence is proved for the proposed algorithm , and it is shown by simulation that the algorithm achieves significant performance improvement over other heuristic solutions . cloud radio access network ( c - ran ) , fronthaul constraint , beamforming , user association , signal - to - interference - plus - noise ratio ( sinr ) balancing .
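a minimal sketch of the iterative association-pruning loop summarized in the abstract is given below. the functions `solve_beamforming` and `fronthaul_ok` are placeholders for the sinr-balancing solver and the fronthaul-rate check, and the rule used here to pick which link to drop (weakest channel gain) is purely illustrative, since the excerpt does not fix that choice.

```python
# hypothetical sketch of the iterative user-rrh association pruning loop:
# start from full association and shut down one wireless link per iteration
# until the fronthaul capacity constraints of all rrhs are satisfied.

def prune_association(channels, solve_beamforming, fronthaul_ok):
    """channels[(k, n)] : toy channel gain of user k at rrh n."""
    active = set(channels)                 # start: every user served by every rrh
    while active:
        beams, sinr = solve_beamforming(active)
        if fronthaul_ok(active, beams):
            return active, beams, sinr     # all fronthaul constraints satisfied
        # otherwise shut down one wireless link; here: the weakest active link
        active.remove(min(active, key=channels.get))
    raise RuntimeError("no feasible association found")


if __name__ == "__main__":
    ch = {(k, n): (k + 1) * (n + 1) for k in range(3) for n in range(2)}
    dummy_solver = lambda act: ({}, min(ch[link] for link in act))     # placeholder
    dummy_check = lambda act, _: all(sum(1 for (k, n) in act if n == m) <= 2
                                     for m in range(2))                # <= 2 users per rrh
    print(prune_association(ch, dummy_solver, dummy_check))
```

the monotonicity argument behind proposition 3 is what guarantees that such a loop terminates: shutting links can only lower the wireless max-min sinr and raise the fronthaul-limited sinr, so the two bounds eventually cross.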
near - earth asteroid ( 29075 ) 1950 da is currently considered to be among the most hazardous asteroids due to its close encounter to the earth in 2880 . conducted comprehensive radar observations of this asteroid and derived two possible shape models .the first is a prograde model , which is a spherical shape , and the second is a retrograde model , which is an oblate shape . reported that the orbital semi - major axis of 1950 da has been changing due to the yarkovsky effect and confirmed that the retrograde model was consistent with their analysis . because of its spin period , 2.1216 hr , this object may be close to its structural failure point if this body is a rubble pile .if 1950 da has no cohesion , i.e. , zero shear resistance at a zero - normal stress , the bulk density should be higher than 3.5 g/ to prevent the body from failing structurally . using an advanced thermophysical model and archival wise thermal - infrared data to derive a bulk density of g/, indicated that 1950 da should have cohesive strength to keep its current shape .they used holsapple s limit analysis technique to derive the lowest cohesion for keeping the current shape of 1950 da .the detailed development was described by . however , since the applied technique was based on taking the average of stress components over the whole volume , this discards crucial information about the disruption processes .disruption events of asteroids have been recently reported .for example , active asteroid p/2013 r3 was observed breaking into multiple components . from this fact, found that this asteroid has a level of cohesion ranging between 40 pa and 210 pa .other disrupting bodies have also been observed .active astroid p/2013 p5 is currently shedding discontinuous dust tails . discovered a new active asteroid , ( 62412 ) 2000 sy178 , exhibiting a tail from its nucleus .these observations reveal that disruption events in our solar system are more common than previously known .thus , a understanding of such disruption events provides better constraints on the formation of small bodies . developed a finite element model that takes into account plastic deformation characterized by the von mises yield criterion , a pure - shear - dependent criterion and applied it to the failure condition of a rotating , non - gravitating ellipsoid .we extend his model to a model that can take into account self - gravity and plastic deformation characterized by a shear - pressure yield criterion .for our computations we use a commercial finite element software , ansys , version 15.03 .we investigate the internal condition of the 1950 da retrograde model at the current spin period .our analysis provides a more precise lower bound on the necessary cohesion to hold 1950 da together in a stable state .more importantly , our stress analysis also shows that the preferred mode of failure is for the central region of the body to collapse along the rotation axis , which would cause the equator to expand outward .the analysis indicates that the body s failure state is not close to either surface material being shed or landsliding .given that plastic flow will generally increase volume in an associated flow rule , this failure mode predicts that the central core of the asteroid may have a reduced density , which is a previously unpredicted state for such oblate , rapid rotators .information from radar observations can constrain an asteroid s near - surface bulk density , as a function of the material density and porosity . 
using this technique, obtained the minimum surface material density as 2.4 g/ and derived two compositional possibilities of the surface , enstatite chondrite and nickel - iron . on the other hand ,the thermophysical model used by gave a bulk density of g/ .here , we use the bulk density , derived by , which also includes the estimated value by . the retrograde model of 1950 da derived by is oblate and has an equatorial ridge ( fig . 5 in their article ). in the following discussion , the minimum , intermediate , and maximum moment of inertia axesare defined as the , , and axes , respectively .the mesh is constructed using the retrograde model .each element is a 10-node tetrahedron .the numbers of elements and nodes are 5569 and 8595 , respectively .the initial size of the shape is fixed to be constant and so the initial volume is 1.145 km .the spin period is 2.1216 hr . it is assumed that 1950 da is a principal axis rotator and is affected only by self - gravity and rotation . here , the rotation pole is fixed along the axis .we define young s modulus as pa , a typical value for geological sand and gravel , and poisson s ratio as 0.25 , giving compressibility of the volume . a friction angle , , is fixed at 35 degrees , the mean value of a friction angle of a geological material ranging from 30 degrees and 40 degrees .cohesion is kept as a free parameter and we will determine the lowest cohesion that prevents this object from failing structurally . here , we calculate finite element solutions for three different bulk density cases : 1.0 g/ , 1.7 g/ and 2.4 g/ .in the current finite element model , if a stress state is below the yield condition , the behavior of a material follows linear elasticity .it is assumed that the behavior of plastic deformation does not include material - hardening and softening ( perfect plasticity ) .body forces are defined at each node so that the total force acting on each element is equally split and apparently concentrates on its nodes . in nature, an asteroid rotates in free space , implying that there are no boundary conditions .however , in our numerical environment , the setting of body forces makes it difficult for the simulations to converge correctly because it could cause rigid motion , i.e. , rotation and translation . to avoid this issuewe artificially constrain six degrees of freedom to eliminate such motion .1950 da is currently considered to be a rubble pile .for such a body , the mechanical behavior of its material depends on shear and pressure . 
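before the yield criterion is specified, a quick order-of-magnitude check helps put the loading environment in perspective. the short script below is a rough sketch, treating the body as a homogeneous sphere with the mean radius of about 649 m quoted later in the text; it compares gravitational and centrifugal acceleration at the equator for the three bulk densities, showing that the outward term is comparable to, or slightly larger than, gravity, consistent with the later statement that only cohesion keeps surface material in place.

```python
# rough order-of-magnitude check, not part of the finite element model:
# gravity vs. centrifugal acceleration at the equator of a homogeneous
# sphere spinning with the 2.1216 h period of 1950 da.
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
P = 2.1216 * 3600.0                # spin period, s
R = 649.0                          # mean radius quoted in the text, m
omega = 2.0 * math.pi / P

for rho in (1000.0, 1700.0, 2400.0):             # bulk densities, kg/m^3
    g_grav = 4.0 / 3.0 * math.pi * G * rho * R   # g m / r^2 for a homogeneous sphere
    g_cent = omega ** 2 * R
    print(f"rho = {rho / 1000:.1f} g/cm^3 : gravity {g_grav:.2e} m/s^2 , "
          f"centrifugal {g_cent:.2e} m/s^2")
```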
in this study, we use the drucker - prager yield criterion , a smooth function in the principal stress space , which is given as where .\end{aligned}\ ] ] , , and are the principal components of the stress state , and and are free parameters that need to be fixed .we choose these parameters by the following consideration .since the retrograde shape model of 1950 da is oblate , only the stress components along the and axes are affected by the centrifugal force .in such a case , the relation of the stress components is described as ( here , a negative value means compression , while a positive value indicates tension ) .thus , since the actual stress state is near the compression meridian of the mohr - coulomb yield criterion , , we choose and such that the drucker - prager yield envelope touches the mohr - coulomb yield envelope at this meridian .these parameters are then given as where is cohesion and is a friction angle .plastic deformation is assumed to be small .the constitutive law of plastic deformation is constructed based on an associated flow rule .this rule guarantees that the direction of plastic deformation is always perpendicular to the yield envelope . the plastic strain rate , where and are indices , is described by the rate of an arbitrary coefficient times the partial derivative of the function with respect to the stress tensor , which is given as note that it is known that the behavior of a typical geological material would rather follow a non - associated flow rule than be characterized by an associated flow rule .however , since ( 1 ) choosing an associated flow improves the convergence of a finite element solution in the current model and ( 2 ) consideration of a non - associated flow rule requires additional free parameters ( usually , unknown ) , we use the associated flow rule described in eq .( [ eq : associated ] ) .we leave the application of a non - associated flow rule as future work .a plastic solution depends on loading path .although such a path represents evolutional history , in general it is usually unknown . here, we assume that the gravitational and centrifugal forces acting on small elements ramp up linearly .this load step implies that the evolution of 1950 da results from its accretion process due to a catastrophic disruption .the time scale of the accretion process due to a catastrophic disruption is considered to be negligibly short , compared to the life time of an asteroid , probably much less than a year . to derive the plastic deformation mode at the lowest cohesion that can keep the current shape of 1950 da, we conduct an iteration method described as follows .first , we choose a value of cohesion and calculate a plastic solution that corresponds to it .if this solution only includes elastic states everywhere , we choose a lower value for cohesion and recalculate the stress solution .we iterate this process until we obtain a solution in which a majority of the internal structure reaches the yield condition .to go beyond such a condition , one usually encounters computational issues and so we terminate our iteration .thus , the solution described in this study is as close as possible to the numerical computation limit of plastic analysis .we implement a 3-dimensional finite element model of 1950 da ( the retrograde model ) on ansys , version 15.03 .the purpose of this paper includes visualization of plastic deformation in the interior of an asteroid . 
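the drucker-prager expressions referred to above did not survive text extraction; for completeness, the textbook compression-meridian fit, which is presumably what is meant given the description, reads

```latex
f = \alpha I_1 + \sqrt{J_2} - k \le 0 , \qquad
I_1 = \sigma_1 + \sigma_2 + \sigma_3 , \qquad
J_2 = \tfrac{1}{6}\bigl[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\bigr] ,
```

```latex
\alpha = \frac{2\sin\phi}{\sqrt{3}\,(3-\sin\phi)} , \qquad
k = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3-\sin\phi)} ,
```

where c is the cohesion and \phi the friction angle, so that the drucker-prager cone touches the mohr-coulomb surface along the compression meridian. with these parameters fixed, what remains is to quantify how close each element comes to this yield surface.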
to do so, we use the ratio of the current stress state to the yield stress , so - called stress ratio " .although this parameter does not show the plastic state during a loading path , ( in order words , it does not show the history of the plastic state ) , it is useful to visualize the final state . using the derivation given by , we write the current stress state , the so - called equivalent stress " , as where and are obtained using the current stress state .the yield stress is given as where is given in eq .( [ eq : k ] ) .then , the stress ratio is obtained as if , an element in the body is in a plastic state .figure [ fig : failuremode ] shows a plastic solution for the case of a bulk density of 1.0 g/ .figure [ fig : fm0 ] gives the total deformation vectors of the shape in meters .figure [ fig : fma ] shows the stress ratio over the cross section along the and axes .the cohesion for this case is 75 pa .the stress ratio around the center region is over 0.99 , while the region with a stress ratio of 0.9 is much wider than that of 0.99 .this indeed indicates that plastic deformation occurs in the center of the interior first .this plastic deformation comes from the fact that the strong centrifugal force acts in the horizontal plane , while only the gravitational force acts along the axis .thus , the internal core experiences strong shear , equivalent to half of a difference between the principal components of the stress state , leading to a large scale of plastic flow . specifically , the stress components in the and axes may be strongly tensile due to the fast spin period , while the component in the axis is always compressive . for this reason ,plastic deformation occurs in the internal core and so the shape becomes more oblate than the current shape .on the other hand , fig .[ fig : fmb ] indicates the stress ratio on the surface .the stress ratio of almost all the surface area is below 0.8 , meaning that the near - surface region is below the yield condition .this implies that for the case of uniform material distribution , the central core of 1950 da is the most sensitive part to structural failure , while the surface is not .based on an associated flow rule , plastic flow must always be accompanied by an increase in volume .hence , since the mass is conserved over the granular flow , the porosity of the internal core should be lower than that of other regions .we calculate finite element solutions for three difference bulk density cases : 1.0 g/ , 1.7 g/ , and 2.4 g/ .figure [ fig : denssol ] gives the variation of the lowest cohesion with respect to different bulk densities by the solid line and the dark markers .the resolution of the lowest cohesion is 2 pa .the shadow area shows the cohesion area in which the original shape can remain , while the white region indicates the prohibited cohesion for the existence of 1950 da .the lowest cohesion ranges from 75 pa to 85 pa .it is also found that the lowest cohesion increases as the bulk density becomes higher .this comes from the fact that as the bulk density increases , higher body forces induce higher shear and normal stresses , requiring higher cohesion for keeping the current shape .the sensitivity of the internal core of an oblate body to failure is a new insight about the internal structure of an asteroid .earlier studies using a volume averaging method ( e.g. 
, , ) could only derive an upper bound condition for structural failure of an asteroid , while the present technique can consider a detailed deformation mode as well as the precise condition for it .for the case of 1950 da , since the spin state is considered to be near its failure condition , we find that at the lowest cohesion for keeping its current shape , the internal core fails structurally , while the surface region does not . because of this , horizontal outward deformation occurs in that plastic region and pushes the outer region to build the equatorial ridge .this result implies that if cohesion of the material of 1950 da were constant over the whole volume , the formation of the equatorial ridges would result from plastic deformation of the internal core and not from a landslide .the present failure model for 1950 da predicts that the core of the asteroid should be under - dense " .this is the opposite of the prediction if the surface regolith flows to the equator to form the bulge then the bulge is under - dense .this is a distinction that can be specifically tested for future asteroid exploration missions such as the osiris - rex mission , which should be able to distinguish whether the core is under- or over - dense relative to the rest of asteroid ( 101955 ) bennu . from the current finite element model ,it is unlikely that surface failure occurs . as seen in fig .[ fig : fmb ] , the surface region is always below the yield condition .this implies that although the centrifugal force exceeds the gravitational force , the surface region does not fail structurally and so the surface particles will not fly off because of its cohesion . here, we examine whether or not a small boulder is stable in the case when the cohesion obtained above represents the material strength .this condition can be obtained by considering the force balance on a boulder at the equatorial region . here , to simplify this analysis, we assume that 1950 da and the boulder are spherical bodies .the masses of 1950 da and the boulder are denoted as and , respectively .the radii of 1950 da and the boulder are defined as and , respectively .the bulk density of 1950 da and the material density of the boulder are given as and , respectively .also , is the current spin rate of 1950 da . in order for the boulder to separate ,the centrifugal force acting on the boulder should exceed the gravitational force , which is given as the second term on the right - hand side indicates a cohesion force . here , using the results in , we simply describe this term as , where represents cohesion and is an apparent area of the boulder touching the surface .we assume that the radius of the boulder is very small compared to that of 1950 da , i.e. , . by choosing g/ , an upper bound for the bulk density of 1950 da , g/ ,from the fact that 1950 da may be composed of a nickel - iron or enstatite chondritic composition , and = 649 m , the mean radius of 1950 da , we obtain the lowest cohesion to retain the boulder on the surface as } \ : r. 
\nonumber\end{aligned}\ ] ] where is the radius of the boulder .thus , even if the size of the boulder is on the order of a few hundred meters , the lowest cohesion to retain the boulder is on the order of a few pascals .this implies that since the necessary cohesion that prevents 1950 da from failing structurally is pa , if this is representative of the cohesion within the regolith , any embedded boulders will be stable .this simplified analysis also supports the primary result in this study that the internal core is more sensitive to structural failure than the surface region .finally , we emphasize our contributions . used holsapple s limit analysis technique to derive the minimum cohesion of 1950 da as pa .however , since this technique was based on the upper bound theorem , the cohesion derived by this technique indicates the lowest limit for the necessary condition only .our finite element model is capable of giving a more precise failure condition and possible failure modes of an asteroid .we note that for this asteroid only a lower bound on strength can be given , as it is not seen to be failing currently .this is different from the study by who obtained both lower and upper limits for the cohesion of p/2013 r3 from its disruption event . in addition , we clearly show that the failure mode for this body is for the interior to fail .this leads to a new prediction for the central core of the body to be underdense .future analysis will explore this in more detail .mh deeply appreciates eleanor matheson for her dedicated review of grammar in the current paper .mh wishes to thank keith a. holsapple at university of washington and carlos a. felippa at university of colorado for their useful advice about development of a finite element model for plastic deformation .mh also acknowledges ben rozitis at university of tennessee for constructive discussions about their technique used in .finally , mh appreciates paul snchez for useful discussion about the interpretation of our study .this research was supported by grant nna14ab03a from nasa s sservi program .busch , m. w. , giorgini , j. d. , ostro , s. j. , benner , l. a. , jurgens , r. f. , rose , r. , hicks , m. d. , pravec , p. , kusnirak , p. , ireland , m. j. , scheeres , d. j. , broschart , s. b. , magri , c. , nolan , m. c. , hine , a. a. , & margot , j .- l .2007 , icarus , 190 , 608 , deep impact mission to comet 9p / tempel 1 , part 2 giorgini , j. d. , ostro , s. j. , benner , l. a. m. , chodas , p. w. , chesley , s. r. , hudson , r. s. , nolan , m. c. , klemola , a. r. , standish , e. m. , jurgens , r. f. , rose , r. , chamberlin , a. b. , yeomans , d. k. , & margot , j .-2002 , science , 296 , 132
a recent study reported that near - earth asteroid ( 29075 ) 1950 da , whose bulk density ranges from 1.0 g/cm^3 to 2.4 g/cm^3 , is a rubble pile and requires a cohesive strength of at least 44 pa to 74 pa to keep from failing due to its fast spin period . since that technique for deriving failure conditions relied on the stress averaged over the whole volume , it discarded information about the asteroid 's failure mode and internal stress condition . this paper develops a finite element model and revisits the stress and failure analysis of 1950 da . for the modeling , we do not consider material hardening and softening . under the assumption of an associated flow rule and uniform material distribution , we identify the deformation process of 1950 da when its constant cohesion reaches the lowest value that keeps its current shape . the results show that to avoid structural failure the internal core requires a cohesive strength of at least 75 pa - 85 pa . this suggests that in the failure mode of this body the internal core fails structurally first , followed by the surface region . this implies that if cohesion is constant over the whole volume , the equatorial ridge of 1950 da results from a material flow going outward along the equatorial plane in the internal core , and not from a landslide as has been hypothesized . this has additional implications for the likely density of the interior of the body .
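the force balance for a surface boulder described in the discussion section can be checked numerically. the sketch below treats 1950 da as a homogeneous sphere and uses an assumed boulder grain density (the value actually adopted in the paper did not survive extraction); it reproduces the stated conclusion that boulders of up to a few hundred metres are retained by a cohesion of only a few pascals, far below the 75 - 85 pa needed internally.

```python
# numerical check of the boulder force balance discussed in the text.
# the boulder density below is an ASSUMED illustrative value; the paper's
# exact choice is not reproduced here.
import math

G     = 6.674e-11               # m^3 kg^-1 s^-2
P     = 2.1216 * 3600.0         # spin period, s
R     = 649.0                   # mean radius of 1950 da, m
rho   = 2400.0                  # upper-bound bulk density of 1950 da, kg/m^3
rho_b = 3500.0                  # assumed boulder grain density, kg/m^3

omega = 2.0 * math.pi / P
# net outward acceleration at the equator of a homogeneous sphere
da = omega ** 2 * R - 4.0 / 3.0 * math.pi * G * rho * R

for r in (10.0, 100.0, 300.0):                  # boulder radius, m
    # (4/3) pi r^3 rho_b * da <= sigma * pi r^2   =>   minimum retaining cohesion:
    sigma_min = 4.0 / 3.0 * rho_b * r * da
    print(f"r = {r:5.0f} m  ->  minimum retaining cohesion ~ {sigma_min:.2f} pa")
```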
isochronous mass spectrometry at heavy - ion storage rings plays an important role in mass measurements of short - lived nuclei far from the valley of -stability .many important results in nuclear physics and nuclear astrophysics have been obtained in recent years based on the accurate determination of mass values by applying ims at gesellschaft fr schwerionenforschung ( gsi ) and institute of modern physics ( imp ) , chinese academy of sciences .the basic principle of ims describing the relationship between mass - over - charge ratio ( ) and revolution period ( ) can be expressed in first order approximation where is the ion velocity .the momentum compaction factor is defined as : it describes the ratio between the relative change of orbital length of an ion stored in the ring and the relative change of its magnetic rigidity . , defined as , is the transition energy of a storage ring . by setting the lorentz factor of one specific ion species to satisfy the isochronous condition , the revolution period of the ions is only related to its mass - over - charge ratio , and independent of their momentum spreads .obviously , this property can only be fulfilled within a small mass - over - charge region , which is called the isochronous window .however , for other ion species beyond the isochronous window , or even for the ions within the isochronous window , the isochronous condition is not strictly fulfilled . as a result ,the unavoidable momentum spread due to the nuclear reaction process will broaden the distribution of revolution period , and thus , lead to a reduction in mass resolving power . to decrease the spread of revolution period , it is obvious that the magnetic rigidity spread of stored ions should be corrected for .however , the magnetic rigidity of each ion can presently not be measured directly , especially for the ions with unknown mass - to - charge ratio , according to the definition of magnetic rigidity : we note that , as shown in eq .( [ alphap ] ) , ions with the same magnetic rigidity will move around the same closed orbit , regardless of their .therefore , the correction of magnetic rigidity of the stored ions can be established via the correction of their corresponding orbit . to realize this kind of correction , one recently - proposed method is to measure the ion s position in the dispersive arc section by using a cavity doublet , which consists of a position cavity to determine the ion position and a reference cavity to calibrate the signal power .however , the establishment of this method may strongly depend on the sensitivity of the transverse schottky resonator .another possible method , which will be described in detail below , is to measure the ion velocity by using the combination of the two tof detectors installed in a straight section of a storage ring .the original idea was first proposed at gsi , and has recently been tested in on - line ims experiments at imp and studied by simulations . fig .[ fig01 ] illustrates the schematic view of the setup of the double tof detectors at the experimental cooler storage ring ( csre ) . 
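the displayed first-order relation referred to above did not survive extraction; it is commonly written (see, e.g., the ims literature cited in the introduction) as

```latex
\frac{\Delta T}{T} \;=\; \frac{1}{\gamma_t^{2}}\,\frac{\Delta(m/q)}{m/q}
\;-\;\Bigl(1-\frac{\gamma^{2}}{\gamma_t^{2}}\Bigr)\frac{\Delta v}{v} ,
```

so that the velocity-dependent term vanishes only for ions with \gamma = \gamma_t; it is precisely this residual term that the velocity measurement with the two tof detectors, described next, is meant to remove.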
by employing this additional velocity information of stored ions to correct the momentumspread , the mass resolving power of ims will significantly be improved ., width=8 ]in typical ims experiments , the primary beam is accelerated and then bombards a target to produce the secondary particles .after in - flight separation , several secondary ions are injected and stored in the ring simultaneously .when the stored ions penetrates a thin carbon foil of the tof detector , secondary electrons are released from the surface of the foil and are guided by perpendicularly arranged electrostatic and magnetic fields to a set of microchannel plate detectors ( mcp ) .the timestamps of each ion penetrating the tof detector are recorded by a high - sampling - rate oscilloscope and are analyzed to determine the revolution periods of all stored ions . after correcting for instabilities of magnetic fields of the storage ring , all revolution periods of stored ionsare superimposed to a revolution period spectrum , which can be used to determine the unknown masses and the corresponding errors . obviously , the uncertainty of the revolution period , which directly relates to the error of the determined mass value , takes into account the contributions of momentum spread of stored ions . in order to correct for such effects, we should think about the basic relationship of the revolution period ( ) versus the velocity ( ) of an individual ion , or the relationship of the orbital length ( ) versus the velocity . in an ims experiment, we can assume that when an ion was injected into the storage ring , its initial orbit and magnetic rigidity are and , respectively .after penetrating the carbon foil of the tof detector for many times , the ion s orbit and magnetic rigidity change to and , due to the energy loss in the carbon foil . because the relative energy loss after each penetration is tiny ( in the order of ), the change of the orbit can be regarded to be continuous .therefore , the relationship between the orbit and the magnetic rigidity can be obtained by integration of eq .( [ alphap ] ) : it is clear that the knowledge of is crucial for solving this problem . in reality , the momentum compaction factor is a function of the magnetic rigidity of the ring . for an ion stored in the ring , it can be expressed as : where is the constant part of the momentum compaction factor determined by the dispersion function , and is related to the perturbation of the momentum compaction factor , which has contributions from the slope of the dispersion function . to clearly express the principle of ims with double tof detectors ,firstly we ignore all the higher order components of and consider the simplest approximation of .the effect of higher order components of will be discussed in the section 4 .the result of the integration of eq .( [ eq3 ] ) yields the relationship of the orbital length versus the velocity we emphasize that is only determined by the kinetic parameters of a given ion , and keeps constant after the ion is injected into the ring , despite of its energy loss in the carbon foil of the tof detectors . 
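written out, the relation referred to as eq. (6) is presumably of the power-law form (assuming a constant \alpha_p, as stated above)

```latex
C(v) \;=\; K \,\bigl(\gamma v\bigr)^{\alpha_p} ,
\qquad K = \text{const.\ for a given ion} ,
```

which follows from integrating dC/C = \alpha_p\, d(B\rho)/(B\rho) with B\rho = m\gamma v/q and a fixed mass-to-charge ratio; the constant K is the per-ion quantity emphasized above.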
according to eq .( [ eq6 ] ) , we can calculate the revolution period of each ion corresponding to any arbitrary orbital length , if we can measure the revolution period and the velocity of each ion simultaneously .therefore , by correcting the revolution period of each ion to a certain orbital length ( equivalent to a certain magnetic rigidity for all ions ) , the corrected revolution periods can superimpose a spectrum with just higher - order contributions of momentum spread .this is the cornerstone of our methodology .let us define a reference orbit with a certain length , and the kinetic parameters of that ion circulating in this orbit are \{}. as discussed before , the reference orbit and the real orbit can be connected by the constant : since the orbital length of a stored ion is : , the parameter can be determined experimentally .the revolution period of the ion can be extracted from the timestamps of either of the two detectors using the previous method as described in ref . , and the velocity of the ion in any revolution can be directly measured by the double tof detectors : where is the straight distance between the double tof detectors , and are the timestamps recorded by them respectively .al ions .the red shadow shows the spread of revolution periods after orbital correction to a reference value at m. [ fig02],width=8 ] in a real ims experiment , a stored ion has betatron motions around the closed orbit , with the oscillation amplitude depending on its emittance . during the time of acquisition in one injection ( about ) ,the timestamps for one ion are recorded for hundreds of revolutions in the ring . in consequence, the average values of , and for each individual ion can be determined very precisely . in principle , the reference orbit can be defined arbitrarily , even though the ions may never move on that reference orbit .however , the central orbit of the storage ring is recommended in order to avoid the error caused by an extrapolation .the momentum compaction factor can be extracted from the lattice setting of the ring or be experimentally determined with a relative precision of by using the method described in .finally , the velocity of the ion corresponding to the reference orbit can be calculated from eq .( [ eq8 ] ) , and then the revolution period is deduced . after repeating the procedure described above for each individual ion ,all the obtained ( corresponding to the same magnetic rigidity ) can be accumulated into a revolution period spectra for mass calibration . 
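a minimal sketch of this per-ion correction, assuming the constant-\alpha_p power law above, could look as follows; all numbers in the example call are illustrative placeholders, not csre parameters.

```python
# hypothetical sketch of the correction, assuming C = K (gamma v)^{alpha_p}.
# T and v are the turn-averaged revolution period and velocity of one ion;
# alpha_p and the reference orbit length C_ref are assumed known.
import math

C_LIGHT = 299_792_458.0        # m/s

def correct_period(T, v, alpha_p, C_ref):
    """map a measured (T, v) pair of one ion onto the reference orbit C_ref."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C_LIGHT) ** 2)
    C = v * T                                   # measured orbital length
    K = C / (gamma * v) ** alpha_p              # per-ion constant
    x = (C_ref / K) ** (1.0 / alpha_p)          # gamma_ref * v_ref
    v_ref = x / math.sqrt(1.0 + (x / C_LIGHT) ** 2)
    return C_ref / v_ref                        # corrected revolution period

if __name__ == "__main__":
    # placeholder numbers, purely illustrative
    print(correct_period(T=6.05e-7, v=0.71 * C_LIGHT,
                         alpha_p=0.18, C_ref=128.8))
```

repeating this for every ion and every injection yields revolution periods that all refer to the same magnetic rigidity, which is the spectrum then used for mass calibration.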
in this waythe resolution of the revolution period , and consequently the mass resolution , can be improved for all ion species .the relative precision of velocity measurements in ims with double tof detectors at csre can be roughly estimated .ions with will spend ns flying the m distance between the two tof detectors in csre , and the time resolutions of both tof detectors are ps ( 1 sigma ) based on the results from an off - line test .so for an individual ions in one circulation , the relative precision of the time of flight , which is the same as the relative precision of velocity , is about .since the ions penetrate through the double tof detectors for hundreds of times during the acquisition time , the fluctuations in the measurement of timestamps can be averaged in principle , leading to a relative precision of the velocity of , which is much better than the -acceptance of about of csre .to test the principle of isochronous mass spectrometry with double tof detectors , a monte - carlo simulation of an ims experiment in the experimental storage ring csre has been made .six - dimensional phase - space linear transmission theory was employed to simulate the motions of stored ions in csre .the timestamps of ions penetrating the double tof detectors are generated by the simulation code , considering the beam emittance , the momentum spread of ions , and the energy loss in the carbon foil as well as the timing resolutions of the tof detectors .then the method proposed in the above section can be applied to the simulated data .the details of simulations can be found in ref . .[ fig02 ] illustrates the simulated results of the revolution period versus the orbital length of ions .the projection of the black points to the vertical axis represents the spread of the revolution period . after transforming the revolution period at any orbit to the revolution period on the selected reference orbit , the spread of revolution periodscould be significantly improved ( the area with red shadow ) .however , to achieve a better mass resolving power , there are still some technical challenges to overcome .it is clear that the spread of the corrected revolution periods of stored ions strongly depends on the precision of velocity measurement , which is limited by the timing performance of the tof detectors according to eq .( [ eq9 ] ) and so as the mass resolving power .the left picture in fig .[ fig03 ] illustrates the effect of the time resolution of the double tof detectors on the revolution period resolving power .better time resolution of the tof detector leads to higher mass resolving power , especially for the ion outside the isochronous window .in the ideal case with no timing error , the standard deviation of revolution period approaches to zero , and is almost the same for all ion species , which means the momentum spread of ions are no longer the main source of the revolution period spread . in order to meet the requirement of a high - resolution tof detector ,two improved tof detectors were tested offline and online at imp in 2013 . by applying new cables with higher bandwidth and increasing the electric field strength and the corresponding magnetic field , the time resolution of the tof detector in offline testswas significantly improved to 18 ps . 
however , tof detector with higher time - resolving power is still highly recommended .the determination of is also very important to reduce the standard deviation of revolution periods .the picture on the right in fig .[ fig03 ] illustrates the impact of on the corrected revolution period .the mismatch between the real of the storage ring and the parameter used in the data analysis will distort the systematic behavior of standard deviation as a function of the corrected revolution period .this may be helpful for the determination of .in the discussion above , the momentum compaction factor is regarded for simplicity to be a constant . in the real situation, the field imperfections of the dipole and quadrupole magnets will contribute to the high - order terms of the momentum compaction factor .the nonlinear field is critical for the ims experiment since it directly alter the orbital length variation and thus lead to a significantly lower mass resolving power .however , high - order isochronous condition can be fulfilled via some isochronicity correction by means of sextupoles and octupoles . even though the momentum compaction factor can be corrected to be a constant within the -acceptance, we can still investigate the effect of on the relative variation of orbital length , which is the same magnitude as the relative variation of revolution period .of csre for the relative magnetic rigidities around the central magnetic rigidity . in the optical lattice of ,the designed value of at csre is about .[ fig04],width=8 ] according to eq .( [ eq8 ] ) , the relationship between the momentum and the orbital length , for an individual ion , is independent from the selection of the initial orbit .therefore , in order to investigate the impact of on the ion orbit around the reference orbit , we can assume the initial orbit to be , and a similar integration as eq .( [ eq3 ] ) is obtained : where . simplifying the result of this integration, we can obtain : the effect of on the orbital length can be estimated by comparing eqs .( [ eq6 ] ) and ( [ eq13 ] ) ( with / without ) .the relative variation of the ion orbit is approximately determined by and the square of momentum spread : where / denotes the orbital length with / without the contribution of . as shown in fig .[ fig04 ] , the designed value of at csre is about in the lattice setting with .if the momentum spread is at the magnitude of about , the contribution of on the relative change of orbital length would be about , which is the same magnitude of the mass resolution in present ims . to eliminate the influence of , we can simply define an effective orbital length : and then eq . ( [ eq13 ] ) becomes which is similar to eq .( [ eq6 ] ) .therefor , the constant for each ion can be determined using instead of the real orbital length . in this way, the effect of the first - order term of momentum compaction factor is included , and can in principle be corrected .however , to our knowledge , the important parameter can not be precisely measured in experiments .it may be necessary to scan the as a free parameter in the data analysis until the minimum standard deviations of revolution period are achieved for all ion species .in this paper , we present the idea of momentum correction for isochronous mass measurements in a storage ring by directly measuring the velocities of stored ions using the combination of two tof detectors . 
for all ions stored in the ring , their revolution periods can be corrected for using the information of ion velocity , so that all the corrected revolution periods correspond to the same reference orbit ( or the same magnetic rigidity ) . in this way, revolution periods of all ions with the same magnetic rigidity can be obtained , and thus leading to much improved mass resolving power for all ions . according to the results of simulations , the achievable mass resolving power of ims with double tof detectorsstrongly depends on the timing performance of the tof detectors and the accuracy of the determination of .furthermore , the effects of high - order terms of the momentum compaction factor have been discussed , and besides the isochronicity correction by using sextupoles and octupoles , a possible solution to eliminate the influence of has been proposed .this work is supported in part by the 973 program of china ( no .2013cb834401 ) , national nature science foundation of china ( u1232208 , u1432125 , 11205205,11035007 ) , the chinese academy of sciences .y.a.l is supported by cas visiting professorship for senior international scientists and the helmholtz - cas joint research group ( group no .hcjrg-108 ) .k.b . and y.a.l .acknowledge support by the nuclear astrophysics virtual institute ( navi ) of the helmholtz association .f. bosch , y. a. litvinov , t. st http://www.sciencedirect.com/science/article/pii/s0146641013000744[nuclear physics with unstable ions at storage rings ] , progress in particle and nuclear physics 73 ( 2013 ) 84 140 . http://dx.doi.org/http://dx.doi.org/10.1016/j.ppnp.2013.07.002 [ ] .h. s. xu , y. h. zhang , y. a. litvinov , http://www.sciencedirect.com/science/article/pii/s138738061300170x[accurate mass measurements of exotic nuclei with the csre in lanzhou ] , international journal of mass spectrometry 349 - 350 ( 2013 ) 162 171 , 100 years of mass spectrometry .http://dx.doi.org/http://dx.doi.org/10.1016/j.ijms.2013.04.029 [ ] .b. franzke , h. geissel , g. mnzenberg , http://dx.doi.org/10.1002/mas.20173[mass and lifetime measurements of exotic nuclei in storage rings ] , mass spectrometry reviews 27 ( 5 ) ( 2008 ) 428469 . http://dx.doi.org/10.1002/mas.20173 [ ] . http://dx.doi.org/10.1002/mas.20173 h. geissel , r. knbel , y. litvinov , b. sun , k. beckert , p. beller , f. bosch , d. boutin , c. brandau , l. chen , b. fabian , m. hausmann , c. kozhuharov , j. kurcewicz , s. litvinov , m. mazzocco , f. montes , g. mnzenberg , a. musumarra , c. nociforo , f. nolden , w. pla , c. scheidenberger , m. steck , h. weick , m. winkler , http://dx.doi.org/10.1007/s10751-007-9541-4[a new experimental approach for isochronous mass measurements of short - lived exotic nuclei with the frs - esr facility ] , hyperfine interactions 173 ( 1 - 3 ) ( 2006 ) 4954 .http://dx.doi.org/10.1007/s10751-007-9541-4 x. chen , m. s. sanjari , j. piotrowski , p. hlsmann , y. a. litvinov , f. nolden , m. steck , t. sthlker , http://dx.doi.org/10.1007/s10751-015-1183-3[accuracy improvement in the isochronous mass measurement using a cavity doublet ] , hyperfine interactions 235 ( 1 ) ( 2015 ) 5159 . http://dx.doi.org/10.1007/s10751-015-1183-3 [ ] . http://dx.doi.org/10.1007/s10751-015-1183-3 m. s. sanjari , x. chen , p. hlsmann , y. a. litvinov , f. nolden , j. piotrowski , m. steck , t. 
sthlker , http://stacks.iop.org/1402-4896/2015/i=t166/a=014060[conceptual design of elliptical cavities for intensity and position sensitive beam measurements in storage rings ] , physica scripta 2015 ( t166 ) ( 2015 ) 014060 .h. geissel , y. a. litvinov , http://stacks.iop.org/0954-3899/31/i=10/a=072[precision experiments with relativistic exotic nuclei at gsi ] , journal of physics g : nuclear and particle physics 31 ( 10 ) ( 2005 ) s1779 .y. m. xing , m. wang , y. h. zhang , p. shuai , x. xu , r. j. chen , x. l. yan , x. l. tu , w. zhang , c. y. fu , h. s. xu , y. a. litvinov , k. blaum , x. c. chen , z. ge , b. s. gao , w. j. huang , s. a. litvinov , d. w. liu , x. w. ma , r. s. mao , g. q. xiao , j. c. yang , y. j. yuan , q. zeng , x. h. zhou , http://stacks.iop.org/1402-4896/2015/i=t166/a=014010[first isochronous mass measurements with two time - of - flight detectors at csre ] , physica scripta 2015 ( t166 ) ( 2015 ) 014010 . http://stacks.iop.org/1402-4896/2015/i=t166/a=014010 x. xing , w. meng , s. peng , c. rui - jiu , y. xin - liang , z. yu - hu , y. you - jin , x. hu - shan , z. xiao - hong , y. a. litvinov , s. litvinov , t. xiao - lin , c. xiang - cheng , f. chao - yi , g. wen - wen , g. zhuang , h. xue - jing , h. wen - jia , l. da - wei , x. yuan - ming , z. qi , z. wei , http://stacks.iop.org/1674-1137/39/i=10/a=106201[a data analysis method for isochronous mass spectrometry using two time - of - flight detectors at csre ] , chinese physics c 39 ( 10 ) ( 2015 ) 106201 .j. xia , w. zhan , b. wei , y. yuan , m. song , w. zhang , x. yang , p. yuan , d. gao , h. zhao , x. yang , g. xiao , k. man , j. dang , x. cai , y. wang , j. tang , w. qiao , y. rao , y. he , l. mao , z. zhou , http://www.sciencedirect.com/science/article/pii/s0168900202004758[the heavy ion cooler - storage - ring project ( hirfl - csr ) at lanzhou ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 488 ( 12 ) ( 2002 ) 11 25 . http://dx.doi.org/http://dx.doi.org/10.1016/s0168-9002(02)00475-8 [ ] . http://www.sciencedirect.com/science/article/pii/s0168900202004758 y. h. zhang , h. s. xu , y. a. litvinov , x. l. tu , x. l. yan , s. typel , k. blaum , m. wang , x. h. zhou , y. sun , b. a. brown , y. j. yuan , j. w. xia , j. c. yang , g. audi , x. c. chen , g. b. jia , z. g. hu , x. w. ma , r. s. mao , b. mei , p. shuai , z. y. sun , s. t. wang , g. q. xiao , x. xu , t. yamaguchi , y. yamaguchi , y. d. zang , h. w. zhao , t. c. zhao , w. zhang , w. l. zhan , http://link.aps.org/doi/10.1103/physrevlett.109.102501[mass measurements of the neutron - deficient , , , and nuclides : first test of the isobaric multiplet mass equation in -shell nuclei ] , phys . rev .( 2012 ) 102501 .[ ] . http://link.aps.org/doi/10.1103/physrevlett.109.102501 x. tu , m. wang , y. litvinov , y. zhang , h. xu , z. sun , g. audi , k. blaum , c. du , w. huang , z. hu , p. geng , s. jin , l. liu , y. liu , b. mei , r. mao , x. ma , h. suzuki , p. shuai , y. sun , s. tang , j. wang , s. wang , g. xiao , x. xu , j. xia , j. yang , r. ye , t. yamaguchi , x. yan , y. yuan , y. yamaguchi , y. zang , h. zhao , t. zhao , x. zhang , x. zhou , w. zhan , http://www.sciencedirect.com/science/article/pii/s0168900211014471[precision isochronous mass measurements at the storage ring csre in lanzhou ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 654 ( 1 ) ( 2011 ) 213 218 .j. p. 
delahaye , j. jger , variation of the dispersion function , momentum compaction factor , and damping partition numbers with particle energy deviation , part .accel . 18 ( slac - pub-3585 ) ( 1986 ) 183201 .30 p. x. gao , y .- j .yuan , j. cheng yang , s. litvinov , m. wang , y. litvinov , w. zhang , d .- y .yin , g .- d .shen , w. ping chai , j. shi , p. shuai , http://www.sciencedirect.com/science/article/pii/s0168900214006913[isochronicity corrections for isochronous mass measurements at the hirfl - csre ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 763 ( 2014 ) 53 57 .http://dx.doi.org/http://dx.doi.org/10.1016/j.nima.2014.05.122 [ ] .http://www.sciencedirect.com/science/article/pii/s0168900214006913 w. zhang , x. tu , m. wang , y. zhang , h. xu , y. a. litvinov , k. blaum , r. chen , x. chen , c. fu , z. ge , b. gao , z. hu , w. huang , s. litvinov , d. liu , x. ma , r. mao , b. mei , p. shuai , b. sun , j. xia , g. xiao , y. xing , x. xu , t. yamaguchi , x. yan , j. yang , y. yuan , q. zeng , x. zhang , h. zhao , t. zhao , x. zhou , http://www.sciencedirect.com/science/article/pii/s0168900214004562[time-of-flight detectors with improved timing performance for isochronous mass measurements at the csre ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 756 ( 2014 ) 1 5 . http://dx.doi.org/http://dx.doi.org/10.1016/j.nima.2014.04.051 [ ] .r. j. chen , y. j. yuan , m. wang , x. xu , p. shuai , y. h. zhang , x. l. yan , y. m. xing , h. s. xu , x. h. zhou , y. a. livinov , s. litvinov , x. c. chen , c. y. fu , w. w. ge , z. ge , x. j. hu , w. j. huang , d. w. liu , q. zeng , w. zhang , http://stacks.iop.org/1402-4896/2015/i=t166/a=014044[simulations of the isochronous mass spectrometry at the hirfl - csr ] , physica scripta 2015 ( t166 ) ( 2015 ) 014044 .a. dolinskii , s. litvinov , m. steck , h. weick , http://www.sciencedirect.com/science/article/pii/s0168900207002525[study of the mass resolving power in the cr storage ring operated as a tof spectrometer ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 574 ( 2 ) ( 2007 ) 207 212 .http://dx.doi.org/http://dx.doi.org/10.1016/j.nima.2007.01.182 [ ] .s. litvinov , d. toprek , h. weick , a. dolinskii , http://www.sciencedirect.com/science/article/pii/s0168900213006633[isochronicity correction in the cr storage ring ] , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 724 ( 2013 ) 20 26 .http://dx.doi.org/http://dx.doi.org/10.1016/j.nima.2013.05.057 [ ] .
isochronous mass spectrometry ( ims ) in storage rings is a powerful tool for mass measurements of exotic nuclei with very short half - lives down to several tens of microseconds , using a multicomponent secondary beam separated in - flight without cooling . however , the inevitable momentum spread of secondary ions limits the precision of nuclear masses determined by using ims . therefore , the momentum measurement in addition to the revolution period of stored ions is crucial to reduce the influence of the momentum spread on the standard deviation of the revolution period , which would lead to a much improved mass resolving power of ims . one of the proposals to upgrade ims is that the velocity of secondary ions could be directly measured by using two time - of - flight ( double tof ) detectors installed in a straight section of a storage ring . in this paper , we outline the principle of ims with double tof detectors and the method to correct the momentum spread of stored ions . isochronous mass spectrometry , storage ring , double tof detectors , velocity measurement
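to make the precision argument in the body of the paper concrete, the following back-of-the-envelope script evaluates the relative velocity precision obtainable from two timing measurements; every number in it is an assumption chosen for illustration, not a value taken from the csre setup.

```python
# rough illustration of the velocity-precision estimate discussed in the
# text.  ALL numbers below are assumptions made for illustration only.
import math

beta    = 0.7                 # assumed ion velocity in units of c
L       = 18.0                # assumed distance between the two tof detectors, m
sigma_t = 50e-12              # assumed single-detector timing resolution, s
n_turns = 300                 # assumed number of recorded revolutions

c   = 299_792_458.0
tof = L / (beta * c)                            # time of flight between detectors
dv_single = math.sqrt(2.0) * sigma_t / tof      # single-pass relative error
dv_avg    = dv_single / math.sqrt(n_turns)      # after averaging over the turns

print(f"single pass : dv/v ~ {dv_single:.1e}")
print(f"{n_turns} turns  : dv/v ~ {dv_avg:.1e}")
```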
one of the motivations for using distributed multisensor networks is to make the network resilient to loss of communication .this has led to an extensive research into distributed filtering over networks with time - varying , randomly switching topology . in particular , the markovian approach to the analysis and synthesis of estimator networks has received a significant attention in relation to the problems involving random data loss in channels with memory which are governed by a markov switching rule .in addition to capturing memory properties of physical communication channels , markovian models allow for other random events in the network , such as sensor failures and recovery , to be considered in a systematic manner within the markov jump systems framework .however , the markov jump systems theory usually assumes the complete state of the underlying markov chain to be known to every controller or filter . in the context of distributed estimation and control , this requires each node of the network to know the complete instantaneous state of the network to be able to deploy suitable gains . to circumvent such an unrealistic assumption, the literature focuses on networks whose communication state is governed by a random process decomposable into independent two - state markov processes describing the status of individual links , even though this typically leads to design conditions whose complexity grows exponentially .also , the assumption of independence between communication links may not always be practical , e.g. , when dealing with congestions .the objective of this paper is to develop a distributed filtering technique which overcomes the need for broadcast of global communication topology and does not require markovian segmentation of the network .our main contribution is the methodology of robust distributed observer design which enables the node observers to be implemented in a truly distributed fashion , by utilizing only locally available information about the system s connectivity , and without assuming the independence of communication links .this information structure constraint is a key distinction of this work , compared with the existing results , e.g. , .in addition , the proposed methodology allows to incorporate other random events such as sensor failures and recoveries .the paper focuses on the case where the plant to be observed , as well as sensing and communication models are not known perfectly . to deal with uncertain perturbations in the plant , sensors and communications, we employ the distributed filtering framework which has received a significant deal of attention in the recent literature .the motivation for considering observers in this paper , instead of kalman filters , is to obtain observers that have guaranteed robustness properties .it is well known that the standard kalman filter is sensitive to modelling errors , and consensus kalman filters may potentially suffer from the same shortcomings .this explains our interest in robust performance guarantees in the presence of uncertainty .in contrast to , in this paper the node estimators are sought to reach relative consensus about the estimate of the reference plant . as an extension of the consensus estimation methodology , our approach responds to the challenge posed by the presence of uncertain perturbations in the plant , measurements and interconnections .typically , a perfect consensus between sensors - agents is not possible due to perturbations . 
to address this challenge, we employ the approach based on optimization of the transient relative consensus performance metric , originally proposed in .we approach the robust consensus - based estimation problem from the dissipativity viewpoint , using vector storage functions and vector supply rates .this allows us to establish both mean - square robust convergence and robust convergence with probability 1 of the distributed filters under consideration and guarantee a prespecified level of mean - square disagreement between node estimates in the presence of perturbations and random topology changes .the information structure constraint , where the filters must rely on the local knowledge of the network topology , poses the main challenge in the derivation of the above - mentioned results .the standard framework of markov jump systems is not directly applicable to the problem of designing locally constrained filters whose information about the network status is non - markovian . to overcome this difficulty, we adopt the approach recently proposed for decentralized control of jump parameter systems .it involves a two - step design procedure .first , an auxiliary distributed estimation problem is solved under simplifying assumption that the complete markovian network topology is instantaneously available at each node .however , we seek a solution to this problem using a network of _ non - fragile _ estimators subject to uncertainty .resilience of the auxiliary estimator to uncertain perturbations is the key property to allow this auxiliary uncertain estimator network to be modified , at the second step , into an estimator network which satisfies the information structure constraint and retains robust performance of the auxiliary design .an important question in connection with our distributed observer architecture is concerned with requirements on the communication topology under which the consensus of node observers is achievable . for networks of one- or two - dimensional agents , and networks consisting of identical agents , conditions for consensusare tightly related to properties of the graph laplacian matrix . in a more general situation involving nonidentical node observers , the role of the interconnection graphis often hidden behind the design conditions , e.g. , see .our second contribution is to show that for the distributed estimation problem under consideration to have a solution , the standard requirement for the graph laplacian to have a simple zero eigenvalue must be complemented by detectability properties of certain matrix pairs formed by parameters of the observers and interconnections .the paper is organized as follows .the problem formulation is given in section [ distr.cons ] .section [ sec : design - global ] studies an auxiliary distributed estimation problem without the information structure constraints .the results of this section are then used in section [ main ] where the main results of the paper are given .section [ requirements ] discusses requirements on the observer communication topology .section [ example ] presents an illustrating example .[ [ notation ] ] notation + + + + + + + + is the real euclidean -dimensional vector space , with the norm ; denotes the transpose of a matrix or a vector . also , for a given , .'\in \mathbf{r}^k ] is the block - diagonal matrix , whose diagonal blocks are .the symbol in position of a block - partitioned matrix denotes the transpose of the block of the matrix . 
is the lebesgue space of -valued vector - functions , defined on , with the norm .consider a directed weakly connected graph , where is the set of nodes , and is the set of edges .the edge originating at node and ending at node represents the event `` transmits information to '' . in accordance with a common convention, we consider graphs without self - loops , i.e. , .however , each node is assumed to have complete information about its filter , measurements and the status of incoming communication links .we consider two types of random events at each node .firstly , node neighborhoods change randomly as a result of random link dropouts and recovery .also , to account for sensor adjustments in response to these changes , as well as sensor failures / recoveries , we allow for random variations of the sensing regime at each node .letting , denote an observed process and its measurement taken at node at time , and using a standard linear relation between these quantities such adjustments are associated with randomly varying coefficients , , .these random events are additional to link dropouts .this leads us to consider the combined evolution of each node s neighbourhood and sensing regime .[ distinct.n ] for a node , let , be its neighbourhood set and the measurement matrix triplet , respectively , at a certain time .the pair , is said to represent the _ local communication and sensing state _ ( or simply the _ local state _ ) of node at time .two states of at times , , , are distinct if , or . from now on , we associate with every node the ordered collection of all its feasible distinct local states and denote the corresponding index . the time evolution of each local state will be represented by a random mapping . the global configuration and sensing pattern of the network at any time can be uniquely determined from its local states .this leads us to define the _ global state _ of the network as an -tuple , where .consider the ordered collection of all feasible global states of the network and let denote its index set .in general , not all combinations of local states correspond to feasible global states .owing to dependencies between network links and/or sensing regimes , the number of feasible global states may be substantially smaller than the cardinality of the set of all combinations of local states .the one - to - one mapping between the set of feasible global states and its index set will be denoted , i.e. , , where is the index of the -tuple .also , we write , whenever . using the one - to - one mapping , define the _ global _ process to describe the evolution of the network global state .the local state processes are related to it as . throughout the paper , we assume that is a stationary markov random process defined in a filtered probability space , where denotes a right - continuous filtration with respect to which is adapted and error dynamics of the estimator introduced in the next section . ]the -algebra is the minimal -algebra which contains all measurable sets from the filtration .the transition probability rate matrix of the markov chain will be denoted {k , l=1}^m ] will denote the adjacency matrix of the digraph .note that if and only if . here and hereafter, the symbol describes the neighbourhood of node when this node is in local state . 
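to make the interplay between the global markov state and the locally observed states concrete, the following python sketch simulates a small hypothetical network: the global state is generated as a continuous-time markov chain from an assumed transition rate matrix, and each node only ever sees its own component of a local-state mapping. the rate matrix, the number of states and the mappings below are illustrative assumptions, not values from the paper; note also that the resulting local state processes are, in general, not markov.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical generator (transition rate matrix) for 3 global states;
# off-diagonal entries are rates, each row sums to zero
Lam = np.array([[-2.0,  1.5,  0.5],
                [ 1.0, -1.5,  0.5],
                [ 0.5,  0.5, -1.0]])

# hypothetical mapping from the global state to the local state of each node:
# node 1 never switches, nodes 2 and 3 follow the global state
local_map = {0: (0, 0, 0),
             1: (0, 1, 1),
             2: (0, 1, 2)}

def simulate_ctmc(Lam, k0, T):
    """Simulate one path of the global state eta(t) on [0, T]."""
    t, k, path = 0.0, k0, [(0.0, k0)]
    while True:
        rate = -Lam[k, k]
        t += rng.exponential(1.0 / rate)              # holding time in state k
        if t >= T:
            break
        probs = Lam[k].copy()
        probs[k] = 0.0
        k = rng.choice(len(probs), p=probs / probs.sum())  # next global state
        path.append((t, k))
    return path

path = simulate_ctmc(Lam, k0=0, T=10.0)
for t, k in path[:5]:
    # each node i only observes its own component of local_map[k]
    print(f"t={t:5.2f}  global state {k}  local states {local_map[k]}")
```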
in accordance with this notation , is the neighbourhood of node when the network is in global state .also , , , and denote the in- and out - degrees of node and the laplacian matrix of the corresponding graph , respectively .we will use the notation to refer to the switching network described above .since is stationary , then each process is also stationary .however , in general the local state processes are not markov , and the components of the multivariate process may statistically depend on each other .hence our network model allows for dependencies between links within the network .consider a plant described by the equation here is the state , is a deterministic disturbance .we assume that , and that the solution of ( [ eq : plant.1 ] ) exists on any finite interval ] .also , consider an observer network whose nodes take measurements of the plant ( [ eq : plant.1 ] ) as follows where represents the deterministic measurement uncertainty at sensing node , .the coefficients of equation ( [ u6.yi.1 ] ) take values in given sets of constant matrices of compatible dimensions , it will be assumed throughout the paper that for all and .the measurements are processed at node according to the following estimation algorithm ( cf . ) : where is the signal received at node from node , describes the channel uncertainty affecting the information transmission from node to .it is assumed that belongs to the class of mean - square -integrable random disturbances , adapted to the filtration .it will be further assumed that for all and , .also in ( [ up7.c.d.loc ] ) , , are matrix - valued functions of the local state process .these functions are the design parameters of the algorithm describing innovation and interconnection gains of the observer ( [ up7.c.d.loc ] ) .note that the coupling and observer gains , are required to be functions of the local state ( i.e. , functions of ) , rather than the global state .this ` locality ' information structure constraint is additional to the assumption about the markov nature of the communication graph ; cf . where the complete communication graph was assumed to be known at each node .the problem in this paper is to determine these functions to satisfy certain robust performance criteria to be presented in definition [ def1 ] below .[ rem2 ] in equation ( [ vij ] ) , the matrices and do not depend on .this is to reflect a situation where node _ always _ broadcasts its information to node , but node randomly fails to receive this information , or chooses not to accept it , e.g. due to random congestion .it is possible to consider a more general situation where the matrices and also depend on .technically , this more general case is no different from the one pursued here . associated with the system ( [ eq : plant.1 ] ) andthe set of filters ( [ up7.c.d.loc ] ) is the disagreement function ( cf . ) ' ] , ] .note that the graph corresponding to state was used in to demonstrate synchronization of chua systems .indeed , the filters share the same matrix as the plant , and can be interpreted as ` slave ' chua systems operating in the same regime as the master . accordingly, the convergence of the filters in our example can be interpreted as the observer - based synchronization between the slaves and the master ; see for further details . however different from , in this example the graph topology is time - varying , as explained below . 
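the structure of the node estimators ( [ up7.c.d.loc ] ) can be illustrated with a simple forward-euler sketch. all matrices, gains and the neighbourhood below are hypothetical placeholders; the point is only that the innovation gain and the interconnection gains applied at a node are looked up from that node's current local state, as required by the locality constraint, and that the consensus terms run over the currently active incoming links.

```python
import numpy as np

def observer_step(x_hat, dt, A, y_i, C_i, L_i, K, neighbours, c_received, H):
    """One forward-Euler step of a node estimator in the spirit of (up7.c.d.loc):
    innovation on the local measurement plus interconnection terms over the
    links that are alive in the current local state (a sketch, not the paper's code)."""
    innov = L_i @ (y_i - C_i @ x_hat)                       # local measurement correction
    consensus = np.zeros_like(x_hat)
    for j in neighbours:                                     # incoming links active now
        consensus += K[j] @ (c_received[j] - H[j] @ x_hat)   # relative-consensus term
    return x_hat + dt * (A @ x_hat + innov + consensus)

# illustrative 2-dimensional example with made-up numbers
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
C_i = np.array([[1.0, 0.0]])
L_i = np.array([[0.8], [0.3]])                 # innovation gain for the current local state
H = {2: np.eye(2)}                             # interconnection output matrices
K = {2: 0.5 * np.eye(2)}                       # coupling gain for the link 2 -> i
x_hat = np.zeros(2)
y_i = np.array([0.7])
c_received = {2: np.array([0.6, -0.1])}        # signal received from neighbour 2
x_hat = observer_step(x_hat, 0.01, A, y_i, C_i, L_i, K,
                      neighbours=[2], c_received=c_received, H=H)
print(x_hat)
```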
from figure [ fig.ex ] , nodes 3 , 4 , and 5 have varying neighbourhoods . also , in this example we suppose that node 2 changes its sensor parameters when the network switches between two configurations . as a result , in this example , each local state process , except for that of node 1 , has two states and always takes the same value as the global state process . on the other hand , node 1 always maintains the same local state , and its local process is constant . therefore , we seek to obtain nonswitching observer gains for node 1 only . according to this description , in this example , , , and the mapping is as follows : , . numerical values of the matrices , , for this example are given in table [ tablec ] ; they are assumed to take one of the two values , , shown in the table . these values were chosen so that the pairs , , , corresponding to nodes 1 and 4 , had undetectable modes , while node 2 was allowed to switch between detectable and undetectable coefficient pairs . therefore , for estimation these nodes were to rely on communication with their neighbours . also , we let , for all nodes and all , and , . note that both instances of the network have spanning trees with roots at nodes 3 and 5 . these nodes have detectable matrix pairs , , , respectively . also , is observable . it follows from these properties that the conditions in part ( b ) of theorem [ u7.prop.2 ] are satisfied . hence , the necessary condition for global detectability , stated in theorem [ u7.prop1 ] , holds .

( table [ tablec ] : coefficients for the example . )

the design of the observer network was carried out using matlab and the lmi solver lmirank based on . to obtain a set of non - switching gains for node 1 , the norm - bounded uncertainty constraints of the form ( [ wv.constr.nb ] ) were defined for the communication link at node 1 , where we set , . these constants as well as were chosen by trial and error , to ensure that the corresponding rank constrained lmis in theorem [ t.main ] were feasible . the feasibility was achieved with . this allowed us to compute the nonswitching gains and for node 1 using ( [ lim.y ] ) .
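the spanning-tree requirement mentioned above can be checked mechanically. the sketch below does this for two made-up 5-node configurations (the actual topologies of figure [ fig.ex ] are not reproduced here), using the convention that an edge from node j to node i means that j transmits information to i.

```python
from collections import deque

def has_spanning_tree(A, root):
    """True if every node is reachable from `root` in the digraph whose
    adjacency matrix A has A[i][j] = 1 when node j transmits to node i."""
    n = len(A)
    seen = {root}
    queue = deque([root])
    while queue:
        j = queue.popleft()
        for i in range(n):
            if A[i][j] and i not in seen:      # edge j -> i
                seen.add(i)
                queue.append(i)
    return len(seen) == n

# two hypothetical 5-node configurations (not the ones in fig. [fig.ex])
A1 = [[0,0,1,0,0],
      [0,0,1,0,0],
      [0,0,0,0,1],
      [0,0,1,0,0],
      [0,0,0,1,0]]
A2 = [[0,0,1,0,0],
      [1,0,0,0,0],
      [0,1,0,0,0],
      [0,0,0,0,1],
      [0,0,1,0,0]]

for name, A in [("config 1", A1), ("config 2", A2)]:
    roots = [r for r in range(5) if has_spanning_tree(A, r)]
    print(name, "spanning-tree roots:", roots)
```

both hypothetical configurations admit at least one spanning-tree root, which is the structural property that, together with the detectability of the matrix pairs at the root nodes, the necessary conditions of the paper rely on.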
to validate the design ,the system and the designed filters were simulated numerically , with a random initial condition .all uncertain perturbations were chosen to be of the form , with different coefficients and for each perturbation .also we let , assuming an undirected nature of the channels in this example .the graphs of one realization of the global state process , and the corresponding estimation errors at nodes 1 ( the nonswitching filter ) , 2 ( the filter with the switching sensing regime ) and 5 ( the filter with the varying neighbourhood ) are shown in figures [ eta.graph ] and [ errors.graph ] , respectively .the graph in figure [ errors.graph ] confirms the ability of the proposed node estimators to successfully mitigate the changes in the graph topology and sensing regimes , as well as uncertain perturbations in the plant , measurements and interconnections .the paper has presented sufficient conditions for the synthesis of robust distributed consensus estimators connected over a markovian network .the proposed estimator provides a guaranteed suboptimal disagreement of estimates , while using only locally available information about the communication and sensing state of the network .our conditions allow a robust filter network to be constructed by solving an lmi feasibility problem .the lmis are partitioned in a way which opens a possibility for solving them in a decentralized manner . when the network s global state is available at every node , this feasibility problem is convex , and the corresponding lmis are solvable , e.g. , using the decentralized gradient descent algorithm in .however , the elimination of the network state broadcast has led to the introduction of rank constraints additional to the lmi conditions .therefore , new numerical algorithms need to be developed to exploit the proposed partition of the lmis and rank constraints .this problem is left for future research .other possible directions for future research may be concerned with an integration of our approach with other distributed filtering techniques , such as for example , techniques involving randomly sampled measurements .discussions with c. langbort are gratefully acknowledged .the following continuous - time counterpart of the robbins - siegmund convergence theorem will be used in the proof of theorem [ t.aux ] .its proof is similar to .a. is right - continuous on ; b. is locally lebesgue - integrable on with probability 1 , i.e. , almost all realizations of have the property for all ; c. ; d. the following inequality holds a.s . for all \le v(s ) + \mathsf{e}\left[\int_s^t\phi(\theta)d\theta\big|\bar{\mathcal{f}}_s\right].\end{aligned}\ ] ] let denote the infinitesimal generator of the interconnected system consisting of subsystems ( [ e.w ] ) .consider the vector lyapunov candidate for this system , ' ] , where (e , k)\triangleq \sum\nolimits_{l=1}^m\lambda_{kl}v_i(e_i , l)+\left(\frac{\partial v_i}{\partial e_i}\right)^t \big((a - l_i^kc_i^k)e_i } & & \\ & & + \sum\nolimits_{j\in \mathbf{v}_i^{k_i } } k_{ij}^k(h_{ij}(e_j - e_i)-g_{ij}w_{ij})\\ & & + ( \hat b_2-l_i^k\hat d_i^k)\hat \xi_i -\omega_i-\sum\nolimits_{j\in\mathbf{v}_i^{k_i}}(\omega_{ij}^{(1)}+\omega_{ij}^{(2)})\big ) . 
\end{aligned}\ ] ] for arbitrary , consider the expression (e , k ) + \sum\nolimits_{i=1}^n\big [ \tau_i^k(\alpha_i^2 \|c_i^k e_i+\hat d_i^k\hat \xi_i\|^2-\|\omega_i\|^2 ) } & & \nonumber \\ & & + \sum\nolimits_{j\in\mathbf{v}_i^{k_i}}\theta_{ij}^k(\beta_{ij}^2\|h_{ij}e_i+g_{ij}w_{ij}\|^2-\|\omega_{ij}^{(1)}\|^2)\nonumber \\ & & + \sum\nolimits_{j\in\mathbf{v}_i^{k_i}}\vartheta_{ij}^k(\beta_{ij}^2\|h_{ij}e_j\|^2-\|\omega_{ij}^{(2)}\|^2)\big ] = \sum\nolimits_{i=1}^n\mathfrak{r}_i(e , k ) , \end{aligned}\ ] ] where we let (e , k ) + \tau_i^k\left(\alpha_i^2 \|c_i^ke_i+\hat d_i^k\hat \xi_i\|^2-\|\omega_i\|^2\right ) } & & \nonumber \\ & & + \sum_{j\in\mathbf{v}_i^{k_i}}\theta_{ij}^k\left(\beta_{ij}^2\|h_{ij}e_i+g_{ij}w_{ij}\|^2-\|\omega_{ij}^{(1)}\|^2\right)\nonumber \\ & & + e_i'\big(\sum_{j:~ i\in\mathbf{v}_j^{k_j}}\vartheta_{ji}^k\beta_{ji}^2h_{ji}'h_{ji}\big)e_i- \sum_{j\in\mathbf{v}_i^{k_i}}\vartheta_{ij}^k\|\omega_{ij}^{(2)}\|^2 .\qquad \label{lv.1}\end{aligned}\ ] ] by completing the squares , one can establish that where we now observe that it follows from the lmi ( [ t4.lmi.1 ] ) that for any nonzero collection of vectors where are elements of the matrix , defined as together with ( [ rem.2 ] ) , the latter inequality leads to it is easy to verify using ( [ pi ] ) that all components of the vector are negative and do not exceed , where .hence , it follows from ( [ lv.1 ] ) , ( [ * * * ] ) that the following dissipation inequality holds for all , , , , and for all uncertainty signals , , satisfying the constraints ( [ wv.constr ] ) (e , k)\le -\epsilon v(e , k ) } & & \nonumber \\ & & + \gamma^2\sum_{i=1}^n\left ( \|\xi_i\|^2 + \|\xi\|^2 + \sum_{j\in \mathbf{v}_i^{k_i } } \|w_{ij}\|^2 \right ) .\label{lyap.3}\end{aligned}\ ] ] the statement of theorem [ t.aux ] now follows from ( [ lyap.3 ] ) .this can be shown using the same argument as that used to derive the statement of theorem 1 in from a similar dissipation inequality .indeed , let , .since equation ( [ e.w ] ) defines to be a markov process , we obtain from ( [ lyap.3 ] ) using the dynkin formula that -v(e(s),\eta(s ) ) } & & \nonumber \\ & & + \mathsf{e}\left[\int_s^t(\epsilon v(e(t),\eta(t))+n\psi^{\eta(t)}(e(t))dt\big|e(s),\eta(s)\right ] \nonumber \\ & & \le \gamma^2\mathsf{e}\left[\int_s^t\left(\sum_{i=1}^n\|\xi_i(t)\|^2 + \|\xi(t)\|^2 \right.\right .\nonumber \\ & & + \sum_{j=1}^n \mathbf{a}_{ij}^{\eta(t)}\|w_{ij}(t)\|^2 \bigg)dt\bigg|e(s),\eta(s)\bigg ] .\label{lyap.5}\end{aligned}\ ] ] here $ ] is the expectation conditioned on the -field generated by .we now observe that the processes , satisfy the conditions of lemma [ supermartingale.lemma ] .this leads to the conclusion that a.s . , and also a.s .. due to the condition for all , we conclude that exists and a.s .. this implies with probability 1 for all and arbitrary disturbances ; i.e. , ( [ convergence.p1 ] ) holds . in the casewhere , , , the above observation immediately yields the statement of the theorem about internal stability of the system ( [ e.w ] ) , ( [ lim ] ) with probability 1 .the claim of internal exponential mean - square stability follows directly from ( [ lyap.3 ] ) , since by definition .also , by taking the expectation conditioned on , on both sides of ( [ lyap.5 ] ) and then letting , we obtain condition ( [ objective.i.1 ] ) , in which . 
condition ( [ convergence ] ) follows from ( [ lyap.5 ] ) in a similar manner : taking the expectation conditioned on , on both sides of ( [ lyap.5 ] ) , then dropping the nonnegative term and letting , we establish that . hence .
the paper considers a distributed robust estimation problem over a network with markovian , randomly varying topology . the objective is to deal with network variations locally , by switching observer gains at the affected nodes only . we propose sufficient conditions which guarantee a suboptimal level of relative disagreement of estimates in such observer networks . when the status of the network is known globally , these sufficient conditions enable the network gains to be computed by solving certain lmis . when the nodes must rely on locally available information about the network topology , additional rank constraints are used to condition the gains , given this information . the results are complemented by necessary conditions which relate properties of the interconnection graph laplacian to the mean - square detectability of the plant through measurement and interconnection channels . large - scale systems , distributed robust estimation , worst - case transient consensus , vector lyapunov functions .
verbs are sometimes omitted in japanese sentences .it is necessary to resolve verb ellipses for purposes of language understanding , machine translation , and dialogue processing .therefore , we investigated verb ellipsis resolution in japanese sentences . in connection with our approach, we would like to emphasize the following points : * little work has been done so far on resolution of verb ellipsis in japanese .* although much work on verb ellipsis in english has handled the reconstruction of the ellipsis structure in the case when the omitted verb is given , little work has handled the estimation of what is the omitted verb .on the contrary , we handle the estimation of what is the omitted verb .* in the case of japanese , the omitted verb phrase is sometimes not in the context , and the system must construct the omitted verb by using knowledge ( or common sense ) .we use example - based method to solve this problem .this paper describes a practical method to recover omitted verbs by using surface expressions and examples . in short ,( 1 ) when the referent of a verb ellipsis is in the context , we use surface expressions ( clue words ) ; ( 2 ) when the referent is not in the context , we use examples ( linguistic data ) .we define the verb to which a verb ellipsis refers as _ the recovered verb_. for example , `` [ kowashita ] a phrase in brackets `` [ ' ' , `` ] '' represents an omitted verb . ] ( broke ) '' in the second sentence of the following example is a verb ellipsis .`` kowashita ( broke ) '' in the first sentence is a recovered verb . {10 cm } \small \begin{tabular}[t]{lll } kare - wa & ironna mono - wo & kowashita.\\ ( he ) &( several things ) & ( broke ) \\ \multicolumn{3}{l } { ( he broke several things.)}\\ \end{tabular } \vspace{0.3 cm } \begin{tabular}[t]{lll } kore - mo & are - mo & [ kowashita].\\ ( this ) & ( that ) & ( broke ) \\ \multicolumn{2}{l } { ( [ he broke ] this and that.)}\\ \end{tabular } \end{minipage}\ ] ] \(1 ) when a recovered verb exists in the context , we use surface expressions ( clue words ) .this is because an elliptical sentence in the case ( 1 ) is in one of several typical patterns and has some clue words .for example , when the end of an elliptical sentence is the clue word `` mo ( also ) '' , the system judges that the sentence is a repetition of the previous sentence and the recovered verb ellipsis is the verb of the previous sentence .\(2 ) when a recovered verb is not in the context , we use examples .the reason is that omitted verbs in this case ( 2 ) are diverse and we use examples to construct the omitted verbs .the following is an example of a recovered verb that does not appear in the context .{10 cm } \begin{tabular}[t]{lll } sou & umaku ikutowa & [ omoenai ] .\\( so ) & ( succeed so well ) & ( i do n't think)\\ \multicolumn{3}{l } { ( [ i do n't think ] it succeeds so well . ) } \end{tabular } \end{minipage } \label{eqn:6c_souumaku}\ ] ] when we want to resolve the verb ellipsis in this sentence `` sou umaku ikuto wa [ omoenai ] '' , the system gathers sentences containing the expression `` sou umaku ikutowa ( it succeeds so well . ) '' from corpus as shown in figure [ tab : how_to_use_corpus ] , and judges that the latter part of the highest frequency in the obtained sentence ( in this case , `` omoenai ( i do nt think ) '' etc . )is the desired recovered verb .l@ l@ l & * the matching part * & * the latter part * + konnani & & omoenai . + ( like this ) & ( it succeeds ) & ( i do nt think ) + + itumo & & kagiranai . 
+ ( every time ) & ( it succeeds ) & ( can not expect to ) + + kanzenni & & ienai . + ( completely )& ( it succeeds ) & ( it can not be said ) + +we handle only verb ellipses in the ends of sentences .we classified verb ellipses from the view point of machine processing .the classification is shown in figure [ fig : shouryaku_bunrui ] .first , we classified verb ellipses by checking whether there is a recovered verb in the context or not .next , we classified verb ellipses by meaning .`` in the context '' and `` not in the context '' in figure [ fig : shouryaku_bunrui ] represent where the recovered verb exists , respectively .although the above classification is not perfect and needs modification , we think that it is useful to understand the outline of verb ellipses in machine processing .the feature and the analysis of each category of verb ellipsis are described in the following sections . in question answer sentences verbs in answer sentences are often omitted , when answer sentences use the same verb as question sentences .for example , the verb of `` kore wo ( this ) '' is omitted and is `` kowashita ( break ) '' in the question sentence . {11.5 cm } \begin{tabular}[t]{ll } nani - wo & kowashitano\\ ( what ) & ( break ) \\ \multicolumn{2}{l } { ( what did you break?)}\\ \end{tabular } \vspace{0.3 cm } \begin{tabular}[t]{ll } kore - wo & [ kowashita].\\ ( this ) & ( break)\\ \multicolumn{2}{l } { ( [ i broke ] this.)}\\ \end{tabular } \end{minipage}\ ] ] the system judges whether the sentences are question answer sentences or not by using surface expressions such as `` nani ( what ) '' , and , if so , it judges that the recovered verb is the verb of the question sentence . in sentences which play a supplementary role to the previous sentence , verbs are sometimes omitted . for example , the second sentence is supplementary , explaining that `` the key i lost '' is `` house key '' . {11.5 cm } \begin{tabular}[t]{ll } kagi - wo & nakushita.\\ ( key ) & ( lost)\\ \multicolumn{2}{l } { ( i lost my key . ) } \end{tabular } \vspace{0.3 cm } \begin{tabular}[t]{lll } ie - no & kagi - wo & [ nakushita.]\\ ( house ) & ( key ) & ( lost)\\ \multicolumn{2}{l } { ( [ i lost ] my house key . ) } \end{tabular } \end{minipage}\ ] ] to solve this , we present the following method using word meanings . when the word at the end of the elliptical sentence is semantically similar to the word of the same case element in the previous sentence , they correspond , and the omitted verb is judged to be the verb of the word of the same case element in the previous sentence . in this case , since `` kagi ( key ) '' and `` ie - no kagi ( house key ) '' are semantically similar in the sense that they are both keys , the system judges they correspond , and the verb of `` ie - no kagi - wo ( house key ) '' is `` nakushita ( lost ) '' .in addition to this method , we use methods using surface expressions . for example , when a sentence has clue words such as the particle `` mo '' ( which indicates repetition ) , the sentence is judged to be the supplement of the previous sentence .there are many cases when an elliptical sentence is the supplement of the previous sentence . 
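a minimal sketch of the rule just described for supplementary sentences is given below. the case-frame representation and the similarity function are stand-ins (the paper uses the edr concept dictionary for the latter), and the threshold is an arbitrary illustrative value.

```python
def recover_supplement_verb(prev_case_frame, ellip_noun, ellip_particle,
                            similarity, threshold=0.5):
    """Rule for supplementary sentences: if the case-marked noun ending the
    elliptical sentence is semantically similar to a noun with the same case
    particle in the previous sentence, recover that sentence's verb.
    `prev_case_frame` maps case particles to (noun, verb) pairs."""
    if ellip_particle in prev_case_frame:
        prev_noun, prev_verb = prev_case_frame[ellip_particle]
        if similarity(ellip_noun, prev_noun) >= threshold:
            return prev_verb
    return None

# toy similarity: exact match or a shared head noun counts as similar
def toy_similarity(a, b):
    return 1.0 if a == b or a.endswith(b) or b.endswith(a) else 0.0

# "kagi-wo nakushita." followed by the elliptical "ie-no kagi-wo [ ]."
prev = {"wo": ("kagi", "nakushita")}
print(recover_supplement_verb(prev, "ie-no kagi", "wo", toy_similarity))
# -> 'nakushita'
```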
in this work , if there is no clue , the system judges that an elliptical sentence is the supplement of the previous sentence .sometimes , in interrogative sentences , the particle `` wa '' is at the end of the sentence and the verb is omitted .for example , the following sentence is an interrogative sentence and the verb is omitted .{11.5 cm } \begin{tabular}[t]{ll } namae - wa & [ nani - desuka.]\\ ( name ) & ( what?)\\ \multicolumn{2}{l } { ( [ what is ] your name ? ) } \end{tabular } \end{minipage}\ ] ] if the end is of the form of `` noun wa '' , the sentence is probably an interrogative sentence , and thus the system judges it to be an interrogative sentence . in the case of `` not in the context ''the following example exists besides `` interrogative sentence '' .{11.5 cm } \small\begin{tabular}[t]{l@ { } l@ { } l@ { } l } \footnotesize jitsu - wa & \footnotesize chotto & \footnotesize onegaiga & \footnotesize [ arimasu].\\ \small ( the truth ) & ( a little ) & ( request ) & ( i have)\\ \multicolumn{4}{l } { ( to tell you the truth , [ i have ] a request . ) } \end{tabular } \end{minipage}\ ] ] this kind of ellipsis does not have the recovered expression in sentences .the form of the recovered expression has various types .this problem is difficult to analyze . to solve this problem , we estimate a recovered content by using a large amount of linguistic data .when japanese people read the above sentence , they naturally recognize the omitted verb is `` arimasu ( i have ) '' .this is because they often use the sentence `` jitsu - wa chotto onegaiga arimasu .( to tell the truth , i have a request . ) '' in daily life .when we perform the same interpretation using a large amount of linguistic data , we detect the sentence containing an expression which is semantically similar to `` jitsu - wa chotto onegaiga .( to tell you the truth , ( i have ) a request . ) '' , and the latter part of `` jitsu - wa chotto onegaiga '' is judged to be the content of the ellipsis . to put itconcretely , the system detects sentences containing the longest characters at the end of the input sentence from corpus and judges that the verb of the highest frequency in the latter part of the detected sentences is a recovered verb .sentence + + 1 & when the end of the sentence is a formal form of a verb or terminal postpositional particles such as `` yo '' and `` ne '' , & the system judges that a verb ellipsis does not exist . & 30 & sono mizuumi wa , kitano kunini atta .( the lake was in a northern country . ) + + 2 & when the previous sentence has an interrogative pronoun such as `` dare ( who ) '' and `` nani ( what ) '' , & the verb modified by the interrogative pronoun & & `` dare - wo koroshitanda '' `` watashi - ga katte - ita saru - wo [ koroshita ] '' ( `` who did you kill ? '' `` [ i killed ] my monkey '' ) + + 3 & when the end is noun x followed by a case postpositional particle , there is a noun y followed by the same case postpositional particle in the previous sentence , and the semantic similarity between noun x and noun y is a value , & the verb modified by noun y & & subeteno aku - ga nakunatteiru . goutou - da - toka sagi - da - toka , arayuru hanzai - ga [ nakunatteiru ] .( all the evils have disappeared .all the crimes such as robbery and fraud [ have disappeared ] . 
)+ 4 & when the end is the postpositional particle `` mo '' or there is an expression which indicates repetition such as `` mottomo '' , the repetition of the same speaker s previous sentence is interpreted , & the verb at the end of the same speaker s previous sentence is judged to be a recovered verb & & `` otonatte warui koto bakari shiteirundayo .yoku wakaranaikeredo , wairo nante koto - mo [ shiteirundayo ] . ''( `` adults do only bad things .i do nt know , but [ they do ] bribe . '' ) + 5 & in all cases , & the previous sentence & & + + 6 & when the end is a noun followed by postpositional particle `` wa '' , & the sentence is interpreted to be an interrogative sentence .& & `` namae - wa [ nani - desuka ] '' ( `` [ what is ] your name ? '' ) + + 7 & when the system detects a sentence containing the longest expression at the end of the sentence from corpus , ( if the highest frequency is much higher than the second highest frequency , the expression is given 9 points , otherwise it is given 1 point . ) & the expression of the highest frequency in the latter part of the detected sentences & 1 or 9 & sou umaku ikutowa [ omoenai ] .( [ i do nt think ] it will succeed . ) +before the verb ellipsis resolution process , sentences are transformed into a case structure by the case structure analyzer .verb ellipses are resolved by heuristic rules for each sentence from left to right . using these rules ,our system gives possible recovered verbs some points , and it judges that the possible recovered verb having the maximum point total is the desired recovered verb .this is because a number of types of information is combined in ellipsis resolution .an increase of the points of a possible recovered verb corresponds to an increase of the plausibility of the recovered verb .the heuristic rules are given in the following form . \ { _ proposal , proposal , _ .. } + : = ( _ possible recovered verb , _ _ point _ ) surface expressions , semantic constraints , referential properties , etc . , are written as conditions in the _ condition _ section .a possible recovered verb is written in the _possible recovered verb _ section ._ point _ means the plausibility of the possible recovered verb .[ sec:6c_ref_pro ] we made 22 heuristic rules for verb ellipsis resolution .these rules are made by examining training sentences in section [ sec:6c_jikken ] by hand .we show some of the rules in table [ tab : doushi_shouryaku_bunrui ] .the value in rule 3 is given from the semantic similarity between noun and noun in edr concept dictionary .the corpus ( linguistic data ) used in rule 7 is a set of newspapers ( one year , about 70,000,000 characters ) .the method detecting similar sentences by character matching is performed by sorting the corpus in advance and using a binary search .we show an example of a verb ellipsis resolution in figure [ tab:6c_dousarei ] .figure [ tab:6c_dousarei ] shows that the verb ellipsis in `` onegai ( request ) '' was analyzed well . since the end of the sentence is not an expression which can normally be at the end of a sentence , rule 1 was not satisfied and the system judged that a verb ellipsis exists . by rule 5the system took the candidate `` the end of the previous sentence '' .next , by rule 7 using corpus , the system took the candidate `` arimasu ( i have ) '' . 
although there are `` aru ( i have ) '' and `` arimasu ( i have ) '' , the frequency of `` arimasu ( i have ) '' is more than the others and it was selected as a candidate .the candidate `` arimasu ( i have ) '' having the best score was properly judged to be the desired recovered verb .we ran the experiment on the novel `` bokko- chan '' .this is because novels contain various verb ellipses . in the experiment , we divided the text into training sentences and test sentences .we made heuristic rules by examining training sentences .we tested our rules by using test sentences .we show the results of verb ellipsis resolution in table [ tab:0verb_result ] .to judge whether the result is correct or not , we used the following evaluation criteria . when the recovered verb is correct , even if the tense , aspect , etc .are incorrect , we regard it as correct . for ellipses in interrogative sentences ,if the system estimates that the sentence is an interrogative sentence , we judge it to be correct .when the desired recovered verb appears in the context and the recovered verb chosen by the rule using corpus is nearly equal to the correct verb , we judge that it is correct . as in table[ tab:6c_sougoukekka ] we obtained a recall rate of 73% and a precision rate of 66% in the estimation of indirect anaphora on test sentences . the recall rate of `` in the context '' is higher than that of `` not in the context '' . for `` in the context '' the system only specifies the location of the recovered verb .but in the case of `` not in the context '' the system judges that the recovered verb does not exist in the context and gathers the recovered verb from other information .therefore `` not in the context '' is very difficult to analyze .the accuracy rate of `` other ellipses ( use of common sense ) '' was not so high .but , since the analysis of the case of `` other ellipses ( use of common sense ) '' is very difficult , we think that it is valuable to obtain a recall rate of 56% and a precision rate 59% .we think that when the size of corpus becomes larger , this method becomes very important .although we calculate the similarity between the input sentence and the example sentence in the corpus only by using simple character matching , we think that we must use the information of semantics and the parts of speech when calculating the similarity. moreover we must detect the desired sentence by using only examples of the type ( whether it is an interrogative sentence or not ) whose previous sentence is the same as the previous sentence of the input sentence. 
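the corpus lookup of rule 7 discussed above can be sketched as follows. the paper sorts the corpus and uses a binary search for the matching step; the sketch below uses a plain linear scan over a toy corpus, and the scoring constants (1 or 9 points) follow table [ tab : doushi_shouryaku_bunrui ], with a crude stand-in for the "much higher frequency" test.

```python
from collections import Counter

def recover_from_corpus(ellip_end, corpus, min_match=4):
    """Rule 7: find corpus sentences containing the longest trailing expression
    of the elliptical sentence and return the most frequent continuation,
    together with its point value."""
    for length in range(len(ellip_end), min_match - 1, -1):
        key = ellip_end[-length:]
        continuations = Counter()
        for sent in corpus:
            idx = sent.find(key)
            if idx != -1 and idx + len(key) < len(sent):
                continuations[sent[idx + len(key):]] += 1
        if continuations:
            ranked = continuations.most_common()
            best, freq = ranked[0]
            second = ranked[1][1] if len(ranked) > 1 else 0
            points = 9 if freq > 2 * second else 1   # crude "much higher" test
            return best, points
    return None, 0

corpus = ["jitsu-wa chotto onegaiga arimasu.",
          "sensei, chotto onegaiga arimasu.",
          "chotto onegaiga arunodesuga."]
print(recover_from_corpus("jitsu-wa chotto onegaiga", corpus))
```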
although the accuracy rate of the category using surface expressions is already high , there are some incorrect cases which can be corrected by refining the use of surface expressions in each rule .there is also a case which requires a new kind of rule in the experiment on test sentences .{10 cm } \small \begin{tabular}[t]{l@ { } l@ { } l@ { } l } sonototan & watashi - wa & himei - wo & kiita.\\ ( at the moment ) & ( i ) & ( a scream ) & ( hear)\\ \multicolumn{4}{l } { ( at the moment , i heard a scream?)}\\ \end{tabular } \vspace{0.2 cm } \begin{tabular}[t]{l@ { } l@ { } l@ { } l } { \footnotesize nanika - ni } & { \footnotesize tubusareruyouna } & { \footnotesize koe - no}.\\ ( something ) & ( be crushed ) & ( voice)\\ \multicolumn{4}{l } { \hspace*{-0.2 cm } { \footnotesize ( of a fearful voice such that he was crushed by something)}}\\ \end{tabular } \end{minipage}\ ] ] in these sentences , `` osoroshii koe - no ( of a fearful voice ) '' is the supplement of `` ookina himei ( a scream ) '' in the previous sentence . to solve this ellipsis, we need the following rule .{6.5 cm } when the end is the form of `` noun x no(of ) '' and there is a noun z which is semantically similar to noun y in the examples of `` noun x no(of ) noun y '' , the system judges that the sentence is the supplement of noun z. \end{minipage}\ ] ]in this paper , we described a practical way to resolve omitted verbs by using surface expressions and examples .we obtained a recall rate of 73% and a precision rate of 66% in the resolution of verb ellipsis on test sentences .the accuracy rate of the case of recovered verb appearing in the context was high .the accuracy rate of the case of using corpus ( examples ) was not so high .since the analysis of this phenomena is very difficult , we think that it is valuable to have proposed a way of solving the problem to a certain extent .we think that when the size of corpus becomes larger and the machine performance becomes greater , the method of using corpus will become effective .research on this work was partially supported by jsps - rftf96p00502 ( the japan society for the promotion of science , research for the future program )
verbs are sometimes omitted in japanese sentences . it is necessary to recover omitted verbs for purposes of language understanding , machine translation , and conversational processing . this paper describes a practical way to recover omitted verbs by using _ surface expressions _ and _ examples _ . we experimented with the resolution of verb ellipses using this information and obtained a recall rate of 73% and a precision rate of 66% on test sentences .
research into the bias temperature instability ( bti ) has revealed a plethora of puzzling issues which have proven a formidable obstacle to the understanding of the phenomenon . in particular , numerous modeling ideas have been put forward and refined at various levels .most of these models have in common that the overall degradation is assumed to be due to two components : one component ( ) is related to the release of hydrogen from passivated silicon dangling bonds at the interface , thereby forming electrically active centers , while the other ( ) is due to the trapping of holes in the oxide .however , these models can differ significantly in the details of the physical mechanisms invoked to explain the degradation . at present , from all these modeling attempts two classes have emerged that appear to be able to explain a wide range of experimental observations : the first class is built around the concept of the reaction - diffusion ( rd ) model , where it is assumed that it is the _ diffusion _ of the released hydrogen that dominates the dynamics .the other class is based on the notion that it is the _ reactions _ which essentially limit the dynamics , and that the reaction rates are distributed over a wide range . in other words , in this _reaction_-limited class of models , both interface states ( ) and oxide charges ( ) are assumed to be ( in the simplest case ) created and annealed by first - order reactions . in contrast , in the _diffusion_-limited class ( rd models ) , the dynamics of creation and annealing are assumed to be dominated by a _diffusion_-limited process , which controlles both long term degradation and recovery .many of these models have been developed to such a high degree that they appear to be able to predict a wide range of experimental observations .typically , however , experimental data are obtained on large - area ( macroscopic ) devices where the microscopic physics are washed out by averaging . in nanoscale devices , on the other hand , it has been shown that the creation and annihilation of individual defects can be observed at the statistical level .we will demonstrate in the following that _ this statistical information provides the ultimate benchmark for any bti model , as it reveals the underlying microscopic physics to an unprecedented degree_. this allows for an evaluation of the foundations of the two model classes , as it clearly answers the fundamental question : _ is bti reaction- or diffusion - limited _ ?as such , the benchmark provided here is simple and not clouded by the complexities of the individual models .since the stochastic response of nanoscale devices to bias - temperature stress lies at the heart of our arguments , we begin by experimentally demonstrating the equivalence of large- and small - area devices . for this, we compare the degradation of a large - area device to the average degradation observed in 27 small - area devices when subjected to negative bti ( nbti ) .all measurements in the present study rely on the ultra - fast technique published previously , which has a delay of on large devices . due to the lower current levels, the delay increases to in small - area devices . 
as can be seen in fig .[ f : smalllarge ] , although the degradation in small - area devices shows larger signs of variability , discrete steps during recovery , and is about 30% larger than in this particular large - area device , the average dynamics are identical .in particular , for a measurement delay of , a power - law in time ( ) with exponent is observed during stress while the averaged recovery is roughly logarithmic over the relaxation time .this demonstrates that by using nanoscale devices , the complex phenomenon of nbti can be broken down to its microscopic constituents : the defects that cause the discrete steps in the recovery traces .analysis of the statistics of these steps will thus reveal the underlying physical principles .it has been shown that the hole trapping component depends sensitively on the process details , particularly for high nitrogen contents , possibly making the choice of benchmark technology crucial for our following arguments . however , for industrial grade devices with low nitrogen content such as those used in this study , no significant differences in reported drifts to published data have been found .the pmos samples used here are from a standard cmos process with a moderate oxide thickness of and with a nitride content of approximately 6% , while the poly - si gates are boron doped with a thickness of .in particular , our previously published data obtained on the same technology as that of fig . [ f : smalllarge ] has recently been interpreted from the rd perspective as shown in fig .[ f : relaxrd ] , without showing any anomalies .this fit seems to suggest that after and recovery is dominated by _ diffusion_-limited recovery , a conclusion we will put to the test in the following .for our experimental assessment we use the time - dependent defect spectroscopy ( tdds ) , which has been extensively used to study bti in small - area devices at the single - defect level .since such devices contain only a countable number of defects , the recovery of each defect is visible as a discrete step in the recovery trace , see fig .[ f : smalllarge ] .the large variability of the discrete step - heights is a consequence of the inhomogeneous surface potential caused by the random discrete dopants in the channel , leading to percolation paths and a strong sensitivity of the step - height to the spatial location of the trapped charge .typically , these step - heights are approximately exponentially distributed with the mean step - height given by . here , is the value expected from the simple charge sheet approximation , where is the elementary charge , the permittivity of the oxide , the area , and the oxide thickness .experiments and theoretical values for the mean correction factor are in the range . in a tdds setup ,a nanoscale device is repeatedly stressed and recovered ( say times ) using fixed stress / recovery times , and .the recovery traces are analyzed for discrete steps of height occurring at time .each pair is then placed into a 2d histogram , which we call the spectral map , formally denoted by .the clusters forming in the spectral maps reveal the probability density distribution and thus provide detailed information on the statistical nature of the average trap annealing time constant . from the evolution of with stress time , the average capture time can be extracted as well .so far , only exponential distributions have been observed for , consistent with simple independent first - order reactions . 
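the construction of a spectral map can be made concrete with the short sketch below: discrete emission steps are detected in each recovery trace by simple differencing against a step-height threshold, and every (emission time, step height) pair is accumulated into a 2d histogram over log-spaced time bins. the detection threshold, the bin counts, and the synthetic two-defect traces are illustrative choices only, not the parameters of the actual measurements.

```python
import numpy as np

def detect_steps(t, trace, min_step=0.3e-3):
    """Return (emission time, step height) pairs for discrete recovery steps
    found by differencing a single recovery trace."""
    d = np.diff(trace)
    idx = np.where(d > min_step)[0]
    return [(t[i + 1], d[i]) for i in idx]

def spectral_map(traces, t, n_tbins=40, n_dbins=30):
    """Accumulate the steps of all repeated recovery traces into a 2d
    histogram over log10(emission time) and step height."""
    te, dv = [], []
    for trace in traces:
        for tau, step in detect_steps(t, trace):
            te.append(np.log10(tau))
            dv.append(step)
    return np.histogram2d(te, dv, bins=(n_tbins, n_dbins))

# synthetic example: two defects with exponentially distributed emission times
rng = np.random.default_rng(1)
t = np.logspace(-6, 3, 500)                    # recovery times, 1 us .. 1 ks
traces = []
for _ in range(100):
    trace = np.zeros_like(t)
    for tau_e, eta in [(1e-3, 1.0e-3), (10.0, 2.5e-3)]:   # (mean emission time s, step V)
        trace += eta * (t > rng.exponential(tau_e))        # emission -> upward step in the
    traces.append(trace)                                   # recovered fraction of the shift
H, tedges, dedges = spectral_map(traces, t)
print("total detected emission events:", int(H.sum()))
```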
in our previous tdds studies ,mostly short - term stresses ( ) had been used .based on this short - term nature , the generality of these results may be questioned , since also recovery predicted by rd models result in discrete steps .as we have pointed out a while ago , however , the distribution of these rd steps would be loglogistic rather than exponential , a fact that should be clearly visible in the spectral maps . in the following , we will conduct a targeted search for such loglogistic distributions and other features directly linked to _ diffusion_-limited recovery processes using extended long - term tdds experiments with .before discussing the long - term tdds data , we summarize the basic theoretical predictions of the two model classes . both model classes have in common that the charges trapped in interface and oxide states induce a change of the threshold voltage .depending on the location of the charge along the interface or in the oxide , it will contribute a discrete step to the total .due to only occasional electrostatic interactions with other defects and measurement noise , is typically normally distributed with mean .the mean values themselves , however , are exponentially distributed .the major difference between the model classes is whether creation and annealing of is _ diffusion- _ or _reaction_-limited , resulting in a fundamentally different form of the spectral map , as will be derived below .being the simpler case , we begin with the dispersive _ reaction_-limited models . in an agnostic formulation of dispersive _reaction_-limited models , creation and annealing of a single defect are assumed to be given by a simple first - order reaction with being the probability of having a charged defect after stress and recovery times and , respectively .the physics of trap creation enter the average forward and backward time constants and .it is important to highlight that equation ( [ e : avg ] ) may describe both the _ reaction_-limited creation and annealing of interface states , as well as a charge trapping process .we recall that even more complicated charge trapping processes involving structural relaxation and meta - stable defect states ( such as switching oxide traps ) can be approximately described by an effective first - order process , at least under quasi - dc conditions . having defects present in a given device ,the overall is then simply given by a sum of such first - order processes most important aspect is that the time constants are observed to be widely distributed .we have recently used such a model to explain bti degradation and recovery over a very wide experimental window assuming the time constants to belong to two different distributions , one tentatively assigned to charge - trapping and the other to interface state generation . at the statistical level , recovery in such a modelis described by the sum of exponential distributions . the spectral map , which records the emission times on a logarithmic scale , is then given by with the stress time dependent amplitude and describing the p.d.f .of , with mean and standard deviation .an example spectral map simulated at two different stress times is shown in fig .[ f : nmp - threedefects ] , which clearly reveals the three contributing defects .we note already here that contrary to the rd model , the spectral map of the dispersive first - order model depends on the individual , which can be strongly bias and temperature dependent . 
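the agnostic first-order description of equation ( [ e : avg ] ) can be written out explicitly. in the sketch below the probability that a defect is charged after stress time ts and recovery time tr is taken as (1 - exp(-ts/tau_c)) * exp(-tr/tau_e), the expected recovery trace is the step-height-weighted sum of these occupancies, and the exponential emission-time density of a single defect is expressed per decade of log10(tr), i.e. the shape of one cluster on the spectral map. the three defect parameter triplets are made up for illustration.

```python
import numpy as np

def occupancy(ts, tr, tau_c, tau_e):
    """Probability that a first-order defect is charged after stress time ts
    and recovery time tr (agnostic form of eq. ([e:avg]))."""
    return (1.0 - np.exp(-ts / tau_c)) * np.exp(-tr / tau_e)

def expected_recovery(ts, tr, defects):
    """Expected recoverable threshold-voltage shift: step-height-weighted sum
    of the occupancies of all defects in the device."""
    return sum(eta * occupancy(ts, tr, tc, te) for tc, te, eta in defects)

def cluster_density(tr, tau_e):
    """Exponential emission-time p.d.f. expressed per decade of log10(tr),
    i.e. the shape of one cluster on the spectral map's log-time axis."""
    return np.log(10.0) * (tr / tau_e) * np.exp(-tr / tau_e)

# three hypothetical defects: (capture time s, emission time s, step height V)
defects = [(1e-4, 1e-2, 1.0e-3), (1e-2, 1.0, 0.8e-3), (1.0, 3e2, 2.0e-3)]

tr = np.logspace(-6, 4, 11)
for ts in (1e-3, 1e0, 1e3):
    trace = expected_recovery(ts, tr, defects)
    print(f"ts = {ts:7.0e} s  ->  recoverable shift [mV]:", np.round(trace * 1e3, 3))

# the cluster of each defect peaks at tr = tau_e, independently of stress time
print("cluster peak density for defect 2:", round(float(cluster_density(1.0, 1.0)), 3))
```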
as a benchmark rd modelwe take the latest , and according to the physically most likely variant , the poly / model : here it is assumed that is released from bonds at the interface , diffuses to the oxide - poly interface , where additional bonds are broken to eventually create , the _ diffusion _ of which results in the degradation behavior typically associated with rd models .recovery then occurs via reversed pathways .while other variants of the rd model have been used , which can not possibly be exhaustively studied here , we believe our findings are of general validity , as all these models are built around _diffusion_-limited processes . in large - area devices the predicted long - term recovery after long - term stress can be fitted by the empirical relation with , provided diffusion is allowed into a semi - infinite gate stack with constant diffusivity in order to avoid saturation effects . quite intriguingly , a similar mathematical form has been successfully used to fit a wide range of experimental data , using a scaled stress time , though . remarkably , experimentally observed exponents are considerably smaller than what is predicted by rd models , corresponding to a wider spread over the time axis . in an empirically modified model, it has been assumed that in a real 3d device , recovery will take longer compared to ( [ e : rd - relax ] ) since the atoms will have to `` hover '' until they can find a suitable dangling bond for passivation .however , using a rigorous stochastic implementation of the rd model , we have not been able to observe significant deviations from ( [ e : rd - relax ] ) , irrespective of whether the model is solved in 1d , 2d , or 3d , provided one is in the diffusion - limited regime . as such , significant deviations from the basic recovery behavior ( [ e : rd - relax ] )still have to be rigorously justified .one option to stretch the duration of recovery would be the consideration of dispersive transport .our attempts in this direction were , however , not found to be in agreement with experimental observations .alternatively , consistent with experiment , a distribution in the forward and backward reactions can be introduced into the model .this dispersion will stretch the distribution ( [ e : rd - relax ] ) , i.e. increase the parameter , but may also lead to a temperature dependence of the power - law slope , features which have not been validated so far .nevertheless , a dispersion in the reaction - rates as used for instance in will not change the basic _ diffusion_-limited nature of the microscopic prediction as shown below . in order to study the stochastic response of the poly / model, we extended our previous stochastic implementation of the / rd model to include the oxide / poly interface following ideas and parameters of .since any sensible macroscopic model is built around a well - defined microscopic picture , in this case non - dispersive diffusion and non - dispersive rates , these features of the microscopic model must be preserved in the macroscopic theory , leaving little room for interpretation . in order to be consistent with the devices used in our tdds study , we chose . furthermore, a typical density of interface states is assumed .we would thus expect about 300 such interface states to be present for our tdds devices . before looking into the predictions of this rd model in a tdds setting, we calibrate our implementation of the poly / model to experimental stress data , see fig .[ f : rd - check ] ( left ) . 
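the empirical rd recovery expression referred to as ( [ e : rd - relax ] ) is not legible in this extraction; the sketch below therefore assumes, purely for illustration, the commonly quoted form r(tr; ts) = 1/(1 + (tr/ts)^beta) with beta = 0.5, which is consistent with the loglogistic emission-time distribution discussed next. the corresponding density and cumulative distribution follow by differentiation.

```python
import numpy as np

def rd_recovery(tr, ts, beta=0.5):
    """Fraction of degradation still present after recovery time tr following
    stress time ts. This closed form is an assumption standing in for the
    elided relation ([e:rd-relax])."""
    return 1.0 / (1.0 + (tr / ts) ** beta)

def rd_emission_density(tr, ts, beta=0.5):
    """Minus the derivative of rd_recovery with respect to tr: a loglogistic
    emission-time density whose median equals the stress time ts."""
    x = (tr / ts) ** beta
    return beta * x / (tr * (1.0 + x) ** 2)

def rd_cdf(tr, ts, beta=0.5):
    """Fraction of emission events that have occurred up to time tr."""
    x = (tr / ts) ** beta
    return x / (1.0 + x)

ts = 10.0                                   # stress time in seconds
tr = np.logspace(-6, 4, 6)
print("remaining fraction:", np.round(rd_recovery(tr, ts), 3))
print("density at tr = ts:", round(float(rd_emission_density(ts, ts)), 4))
print("fraction of emissions within one decade of ts:",
      round(rd_cdf(100.0, ts) - rd_cdf(1.0, ts), 3))
```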
in order to obtain a good fit, we follow the procedure suggested in and subtract a virtual hole trapping contribution of from the experimental data to obtain the required power - law . also , we remark that to achieve this fit , unphysically large hydrogen hopping distances had to be used in the microscopic model .furthermore , had to be allowed to diffuse more than a micrometer deep into the gate stack with unmodified diffusion constant to maintain the power - law exponent , despite the fact that our poly - si gate was only thick . from ( [ e :rd - relax ] ) we can directly calculate the expected unnormalized probability density function for rd recovery as which after normalization by is a loglogistic distribution of with parameter and mean . in the framework of the standard non - dispersive rd model ,all interface states are equivalent in the sense that on average they will have degraded and recovered with the same probability at a certain stress / recovery time combination . in terms of impact on , we again assume that the mean impact of a single trap is exponentially distributed . using ( [ e : rd - relax - pdf - single ] ) , the spectral map built of subsequent stress / relax cycles can be obtained . since except for their step - heightsall defects are equivalent , the time dynamics can be pulled out of the sum to eventually give this is a very interesting result , as it implies that all defects are active with the same probability at any time , leading to a dense response in the spectral map as shown in fig .[ f : rd - numberofdefects ] . as will be shown ,this is incompatible with our experimental results . inorder to more clearly elucidate the features of the rd model , we will in the following use a device , in which only a small number of defects ( about ten ) contribute to the spectral maps .the crucial fingerprint of the rd model would then be that these clusters are loglogistically distributed and thus much wider than the previously observed exponential distributions .furthermore , we note that the rd spectral map does not depend on any parameter of the model nor does it depend on temperature and bias , but due to the _ diffusion_-limited nature of the model shifts to larger times with increasing stress time ( see fig . [f : rd - sm ] ) , facts we will compare against experimental data later .as noted before , previous tdds experiments had been limited to stress times mostly smaller than about , which may limit the relevance of our findings for long - term stress . as such, it was essential to extend the stress and relaxation times to , which is a typically used experimental window .unfortunately , the stress / relax cycles needed to be repeated at least 100 times , otherwise differentiation between exponential and logistic distributions would be difficult .we therefore used 9 different stress times for each experiment , starting from up to with recovery lasting , repeated 100 times , requiring a total of about 12 days .about 20 such experiments were carried out on four different devices over the course of more than half a year .since we are particularly interested in identifying a _ diffusion_-limited contribution to nbti recovery , we tried to minimize the contribution of charge trapping . 
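the fingerprint discussed above, exponential versus loglogistic emission-time clusters, can be compared numerically. the sketch below draws emission times from both distributions and reports the median and the width (interdecile range on a log10 axis) of the resulting clusters for two stress times; the parameter values are illustrative only. the reaction-limited cluster stays put and remains comparatively narrow, whereas the rd cluster is several decades wide and its median moves with the stress time.

```python
import numpy as np

rng = np.random.default_rng(2)

def loglogistic_sample(n, median, beta):
    """Draw emission times from a loglogistic distribution via the inverse cdf:
    F(t) = x/(1+x) with x = (t/median)**beta, hence t = median*(u/(1-u))**(1/beta)."""
    u = rng.uniform(1e-12, 1.0 - 1e-12, size=n)
    return median * (u / (1.0 - u)) ** (1.0 / beta)

def cluster_width_decades(samples):
    """Spread of a cluster on the spectral map: interdecile range of log10(t)."""
    lo, hi = np.percentile(np.log10(samples), [10, 90])
    return hi - lo

n = 20000
for ts in (1.0, 100.0):                                   # two stress times
    exp_times = rng.exponential(scale=10.0, size=n)       # reaction-limited defect:
                                                          # tau_e fixed, independent of ts
    rd_times = loglogistic_sample(n, median=ts, beta=0.5) # rd prediction: median = ts
    print(f"ts = {ts:6.1f} s | exponential cluster: median = {np.median(exp_times):7.2f} s,"
          f" width = {cluster_width_decades(exp_times):4.2f} decades"
          f" | rd cluster: median = {np.median(rd_times):7.2f} s,"
          f" width = {cluster_width_decades(rd_times):4.2f} decades")
```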
with increasing stress voltage , an increasing fraction of the bandgap becomes accessible for charging , which is why we primarily used stress voltages close to of our technology ( about ) . furthermore , it has been observed that at higher stress voltages defect generation in a tddb - like degradation mode can become important , an issue we avoid at such low stress voltages . two example measurements are shown in fig . [ f : longtermtdds - maps ] for at 150 and at 175 ( about 4 ) . as already observed for short - term stresses , all clusters are exponential and have a temperature-_dependent _ but time-_independent _ mean . most noteworthy is the fact that _ no sign of an rd signature as discussed in section [ s : rd ] was observed _ . we remark that defects tend to show strong signs of volatility at longer stress and recovery times , a fascinating issue to be discussed in more detail elsewhere .

( caption of fig . [ f : longtermtdds - maps ] : even at longer stress times and higher temperatures , 150 ( left / top ) and 175 ( right / top ) , all clusters ( symbols ) are exponential ( lines ) and do not move with stress time ( bottom ) , just like the prediction of a _ reaction_-limited model , see fig . [ f : nmp - threedefects ] . due to the increasing number of defects contributing to the emission events , the data becomes noisier with increasing stress bias , temperature , and time . with increasing stress , defect c4 shows signs of volatility , leading to a smaller number of emission events at longer times . )
clearly, all devices recover in a unique way. for instance, device f shows practically no recovery between and while device d has a very strong recoverable component in this time window but practically no recovery from up to. furthermore, this unique recovery depends strongly on bias and temperature, as demonstrated in fig. [ f : dev2dev ] ( right ) for device c. for example, after a stress at at 125, strong recovery is observed between and, which is completely absent at 200. on the other hand, if the stress bias is increased to say ( about ), a nearly logarithmic recovery is observed in the whole experimental window, consistent with what is also seen in large-area devices. in the non-dispersive rd picture, hundreds of defects would contribute equally to the average recovery of such devices. as such, the model is practically immune to the spatial distribution of the defects, which would be the dominant source of device-to-device variability in this non-dispersive rd picture, lacking any other significant parameters. such a model can therefore not explain the strong device-to-device variations observed experimentally. also, as discussed before, in non-dispersive rd models in their present form recovery is independent of bias and temperature, which is also at odds with these data. on the other hand, our data is perfectly consistent with a collection of defects with randomly distributed and. in this picture, the occurrence of a recovery event only depends on whether a defect with a suitable pair ( , ) exists in this particular device. since these time constants depend on bias and temperature, the behavior seen in fig. [ f : dev2dev ] is a natural consequence. the question whether nbti is due to a _diffusion_- or _reaction_-limited process is of high practical significance and not merely a mathematical modeling detail. first of all, it is essential from a process optimization point of view: if the rd model in any variant were correct, then one should seek to prevent the diffusion into the gate stack by, for instance, introducing hydrogen diffusion barriers. this is because according to rd models, upon hitting such a barrier, the hydrogen concentration in the gate stack would equilibrate, leading to an end of the degradation.
on the other hand, if _reaction_-limited models are correct, and our results clearly indicate that they are, device optimization from a reliability perspective should focus on the distribution of the time constants / reaction rates in the close vicinity of the channel that are responsible for charge trapping and the _reaction_-limited creation of interface states. secondly, our results have a fundamental impact on our understanding of the stochastic reliability of nanoscale devices. we have demonstrated that even the averaged response of individual devices will be radically different from device to device, whereas in non-dispersive rd models all devices will _ on average _ degrade in the same manner. given the strong bias- and temperature-dependence of this individual response, it is mandatory to study and understand the distribution of the bias- and temperature-dependence of the responsible reaction rates. this is exactly the route taken recently in, where it was shown that the energetic alignment of the defects in the oxide with the channel can be tuned by modifying the channel materials in order to optimize device reliability. using nanoscale devices, we have established _ an ultimate benchmark _ for bti models at the statistical level. contrary to previous studies, we have used a very wide experimental window, covering stress and recovery times from the microsecond regime up to kiloseconds, as well as temperatures up to 175. the crucial observations are the following:
* using time-dependent defect spectroscopy ( tdds ), all recovery events create exponentially distributed clusters on the spectral maps which do not move with increasing stress time.
* the location of these clusters is marked by a capture time, an emission time, and the step-height. in an agnostic manner, we also consider the forward and backward rates for the creation of interface states on the same footing. the combination of such clusters forms a unique fingerprint for each nanoscale device.
* given the strong bias- and temperature-dependence of the capture and emission times, the degradation in each device will have a unique temperature and bias dependence.
at the microscopic level, any bti model describing charge trapping as well as the creation of interface states should be consistent with the above findings. given the wide variety of published models, we have compared two _ model classes _ against these benchmarks, namely _reaction_- versus _diffusion_-limited models. as a representative for _diffusion_-limited models, we have used the poly / reaction-diffusion model. we have _ observed a complete lack of agreement _, as this non-dispersive reaction-diffusion model predicts ( i ) that a very large number of equal interface states contribute equally to recovery, while experimentally only a countable number of clusters can be identified, ( ii ) that the clusters observed in the spectral map should be loglogistically distributed with an increasing mean value given by the stress time, and ( iii ) that the _ averaged _ long-term degradation and recovery should be roughly the same in all devices, independent of temperature and bias. based on these observations we conclude that the mainstream non-dispersive reaction-diffusion models in their present form are unlikely to provide a correct physical picture of nbti. these issues should be addressed in future variants of rd models and benchmarked against the observations made here.
on the other hand , if we go to the other extreme and assume that nbti recovery is not _diffusion- _ but _ reaction_-limited , the characteristic experimental signatures are naturally reproduced .such models are _ ( ) _ consistent with the exponential distributions in the spectral map , _ ( ) _ are based on widely dispersed capture and emission times which result in fixed clusters on the spectral maps , and _ ( ) _ naturally result in a unique fingerprint for each device , as the parameters of the reaction are drawn from a wide distribution . as the time constants are bias- and temperature - dependent , the unique behavior of each device can be naturally explained and predicted , provided the distribution of these time constants is understood .finally , we have argued that our results are not only interesting for modeling enthusiasts , but have fundamental practical implications regarding the way devices should be optimized and analyzed for reliability , particularly for nanoscale devices , which will show increased variability .the research leading to these results has received funding from the fwf project n-m24 and the european community s fp7 project n ( mordred ) .in this appendix three finer points are discussed , namely strictly speaking , equation ( [ e : rd - relax - pdf - single ] ) is valid for a single stress / relax cycle while the tdds consists of a large number of repeated cycles . as such, the tdds setup corresponds to an ultra - low - frequency ac stress and the devices will not be fully recovered prior to the next stress phase .this implies that would be able to move deeper into the gate stack during cycling and that the profile would not be precisely the same as that predicted during dc stress . for short stress times and long enough recovery times ,e.g. versus , the impact of this would be small , since ( [ e : rd - relax - pdf - single ] ) predicts nearly full recovery in this case ( 97% ) .however , for larger stress times , recovery by the end of the cycle will only be partial and ( [ e : rd - relax - pdf - single ] ) may no longer be accurate in a tdds setting .we have considered this case numerically in fig .[ f : rd - histo ] ( left ) , which shows that although this impacts the absolute number of recorded emission events , the general features namely loglogistically distributed clusters which move in time remains . as can be seen from fig .[ f : longtermtdds - maps ] , with increasing stress time the number of visible clusters increases , as does the noise - level , making an accurate extraction of the statistical parameters more challenging than for shorter stress times . in order to guarantee that our extraction algorithm , which splits the recovery trace into discrete steps ,does not miss any essential features and the noise in the spectral maps is really just unimportant noise rather than an overshadowed rd contribution , we performed one additional test : we calculate the difference between the extracted response of forward and backward reactions and subtract it from all recorded steps , see fig .[ f : rd - histo ] ( right ) .as can be seen , even if due to noise not all steps are considered in the fit , no hidden rd component is missed . finally , we comment on the permanent part that builds up during the tdds cycles , see fig .[ f : longtermtdds - fit ] .this contribution is not explicitly modeled here but only extracted from the experimental data to be added to the modeled recoverable part . 
from an agnostic perspective, one could simply refer to this permanent build - up as due to those defects with emission or annealing times larger than the maximum recovery time , in our case .this permanent build - up is typically assigned to interface states ( centers ) , but likely also contains a contribution from charge traps with large time constants .b. kaczer , v. arkhipov , r. degraeve , n. collaert , g. groeseneken , and m. goodwin , `` temperature dependence of the negative bias temperature instability in the framework of dispersive transport , '' , vol .86 , no . 14 , pp . 13 , 2005 .t. wang , c .- t .chan , c .- j .tang , c .- w .tsai , h. wang , m .- h .chi , and d. tang , `` a novel transient characterization technique to investigate trap properties in hfsion gate dielectric mosfets - from single electron emission to pbti recovery transient , '' , vol .5 , pp . 10731079 , 2006 .h. reisinger , o. blank , w. heinrigs , a. mhlhoff , w. gustin , and c. schlnder , `` analysis of nbti degradation- and recovery - behavior based on ultra fast -measurements , '' in _ proc .( irps ) _ , 2006 , pp .448453 .m. houssa , v.v .afanasev , a. stesmans , m. aoulaiche , g. groeseneken , and m.m .heyns , `` insights on the physical mechanism behind negative bias temperature instabilities , '' , vol . 90 , no .4 , pp . 043505 , 2007 .t. grasser , w. goes , v. sverdlov , and b. kaczer , `` the universality of nbti relaxation and its implications for modeling and characterization , '' in _ proc .( irps ) _ , 2007 , pp .268280 .zhang , z. ji , m.h .chang , b. kaczer , and g. groeseneken , `` real instability of pmosfets under practical operation conditions , '' in _ proc .intl.electron devices meeting ( iedm ) _ , 2007 , pp .817820 .h. reisinger , t. grasser , w. gustin , and c. schlnder , `` the statistical analysis of individual defects constituting nbti and its implications for modeling dc- and ac - stress , '' in _ proc .( irps ) _ , may 2010 , pp .715 .t. grasser , b. kaczer , w. goes , h. reisinger , th .aichinger , ph .hehenberger , p .- j .wagner , f. schanovsky , j. franco , m. toledano - luque , and m. nelhiebel , `` the paradigm shift in understanding the bias temperature instability : from reaction - diffusion to switching oxide traps , '' , vol .11 , pp . 36523666 , 2011 .j. zou , r. wang , n. gong , r. huang , x. xu , j. ou , c. liu , j. wang , j. liu , j. wu , s. yu , p. ren , h. wu , s. lee , and y. wang , `` new insights into ac rtn in scaled high-/metal - gate mosfets under digital circuit operations , '' in _ ieee symposium on vlsi technology digest of technical papers _ , 2012 , pp . 139140 .s. mahapatra , n. goel , s. desai , s. gupta , b. jose , s. mukhopadhyay , k. joshi , a. jain , a.e .islam , and m.a .alam , `` a comparative study of different physics - based nbti models , '' , vol .3 , pp . 901916 , 2013 .c. shen , m .- f .wang , h.y .feng , a.t .- l .lim , y.c .yeo , d.s.h .chan , and d.l .kwong , `` negative traps in gate dielectrics and frequency dependence of dynamic bti in mosfets , '' in _ proc .intl.electron devices meeting ( iedm ) _ , 2004 , pp. 733736 .t. grasser , p .- j .wagner , h. reisinger , th .aichinger , g. pobegen , m. nelhiebel , and b. kaczer , `` analytic modeling of the bias temperature instability using capture / emission time maps , '' in _ proc .intl.electron devices meeting ( iedm ) _ , dec .2011 , pp .27.4.127.4.4 .t. grasser , h. reisinger , p .- j .wagner , w. goes , f. schanovsky , and b. 
kaczer , `` the time dependent defect spectroscopy ( tdds ) for the characterization of the bias temperature instability , '' in _ proc. intl.rel.phys.symp .( irps ) _ , may 2010 , pp. 1625 . n. goel , k. joshi , s. mukhopadhyay , n. nanaware , and s. mahapatra , `` a comprehensive modeling framework for gate stack process dependence of dc and ac nbti in sion and hkmg p - mosfets , '' , vol .491519 , 2014 .b. kaczer , t. grasser , j. martin - martinez , e. simoen , m. aoulaiche , ph.j .roussel , and g. groeseneken , `` nbti from the perspective of defect states with widely distributed time scales , '' in _ proc .( irps ) _ , 2009 , pp .s. desai , s. mukhopadhyay , n. goel , n. nanaware , b. jose , k. joshi , and s. mahapatra , `` a comprehensive ac / dc nbti model : stress , recovery , frequency , duty cycle and process dependence , '' in _ proc .( irps ) _ , 2013 , pp .xt.2.1xt.2.11 . m. toledano - luque , b. kaczer , ph.j .roussel , t. grasser , g.i .wirth , j. franco , c. vrancken , n. horiguchi , and g. groeseneken , `` response of a single trap to ac negative bias temperature stress , '' in _ proc .( irps ) _ , 2011 , pp .364371 .t. grasser , k. rott , h. reisinger , p .- j .wagner , w. goes , f. schanovsky , m. waltl , m. toledano - luque , and b. kaczer , `` advanced characterization of oxide traps : the dynamic time - dependent defect spectroscopy , '' in _ proc .( irps ) _ , apr .2013 , pp . 2d.2.12d.2.7 .b. kaczer , t. grasser , ph.j .roussel , j. franco , r. degraeve , l.a .ragnarsson , e. simoen , g. groeseneken , and h. reisinger , `` origin of nbti variability in deeply scaled pfets , '' in _ proc .( irps ) _ , 2010 , pp .j. franco , b. kaczer , m. toledano - luque , ph.j .roussel , j. mitard , l.a .ragnarsson , l. witters , t. chiarella , m. togo , n. horiguchi , g. groeseneken , m.f .bukhori , t. grasser , and a. asenov , `` impact of single charged gate oxide defects on the performance and scaling of nanoscaled fets , '' in _ proc .( irps ) _ , 2012 ,p. 5a.4.1 .t. naphade , k. roy , and s. mahapatra , `` a novel physics - based variable nbti simulation framework from small area devices to 6t - sram , '' in _ proc .intl.electron devices meeting ( iedm ) _ , 2013 , pp. 838841 .t. grasser , h. reisinger , w. goes , th .aichinger , ph .hehenberger , p.j .wagner , m. nelhiebel , j. franco , and b. kaczer , `` switching oxide traps as the missing link between negative bias temperature instability and random telegraph noise , '' in _ proc .intl.electron devices meeting ( iedm ) _ , 2009 , pp. 729732 .t. grasser , h. reisinger , k. rott , m. toledano - luque , and b. kaczer , `` on the microscopic origin of the frequency dependence of hole trapping in pmosfets , '' in _ proc .intl.electron devices meeting ( iedm ) _ , dec .2012 , pp . 19.6.119.6.4 .s. chakravarthi , a.t .krishnan , v. reddy , c.f .machala , and s. krishnan , `` a comprehensive framework for predictive modeling of negative bias temperature instability , '' in _ proc .( irps ) _ , 2004 , pp .273282 .s. choi , y .-park , c .- k .baek , and s. park , `` an improved 3d monte carlo simulation of reaction diffusion model for accurate prediction on the nbti stress / relaxation , '' in _ proc .simulation of semiconductor processes and devices _ , 2012 ,. 185188 .f. schanovsky and t. grasser , `` on the microscopic limit of the modified reaction - diffusion model for the negative bias temperature instability , '' in _ proc .( irps ) _ , apr .2012 , pp .xt.10.1xt.10.6 .b. kaczer , v. arkhipov , r. degraeve , n. collaert , g. 
groeseneken , and m. goodwin , `` disorder - controlled - kinetics model for negative bias temperature instability and its experimental verification , '' in _ proc . intl.rel.phys.symp .( irps ) _ , 2005 , pp .381387 .t. naphade , n. goel , p.r .nair , and s. mahapatra , `` investigation of stochastic implementation of reaction diffusion ( rd ) models for nbti related interface trap generation , '' in _ proc .( irps ) _ , 2013 , pp .xt.5.1xt.5.11 .h. reisinger , r.p .vollertsen , p.j .wagner , t. huttner , a. martin , s. aresu , w. gustin , t. grasser , and c. schlnder , `` the effect of recovery on nbti characterization of thick non - nitrided oxides , '' in _ proc .intl.integrated reliability workshop _ , 2008 , pp .s. mahapatra , p.b .kumar , and m.a .alam , `` investigation and modeling of interface and bulk trap generation during negative bias temperature instability of p - mosfets , '' , vol .9 , pp . 13711379 , 2004 .s. mahapatra , a.e .islam , s. deora , v.d .maheta , k. joshi , a. jain , and m.a .alam , `` a critical re - evaluation of the usefulness of r - d framework in predicting nbti stress and recovery , '' in _ proc .( irps ) _ , 2011 , pp .614623 .t. grasser , k. rott , h. reisinger , m. waltl , p. wagner , f. schanovsky , w. goes , g. pobegen , and b. kaczer , `` hydrogen - related volatile defects as the possible cause for the recoverable component of nbti , '' in _ proc .intl.electron devices meeting ( iedm ) _ , dec .j. franco , b. kaczer , j. mitard , m. toledano - luque , ph.j .roussel , l. witters , tibor grasser , and g. groeseneken , `` nbti reliability of sige and ge channel pmosfets with and dielectric stack , '' , vol .4 , pp . 497506 , 2013 .t. grasser , th .aichinger , g. pobegen , h. reisinger , p .- j .wagner , j. franco , m. nelhiebel , and b. kaczer , `` the ` permanent ' component of nbti : composition and annealing , '' in _ proc .( irps ) _ , apr .2011 , pp . 605613 .
after nearly half a century of research into the bias temperature instability ( bti ) , two classes of models have emerged as the strongest contenders : one class of models , the reaction - diffusion models , is built around the idea that hydrogen is released from the interface and that it is the _ diffusion _ of some form of hydrogen that controls both degradation and recovery . while many different variants of the reaction - diffusion idea have been published over the years , the most commonly used recent models are based on non - dispersive reaction rates and non - dispersive diffusion . the other class of models is based on the idea that degradation is controlled by first - order _ reactions _ with widely distributed ( dispersive ) reaction rates . we demonstrate that _ these two classes give fundamentally different predictions for the stochastic degradation and recovery of nanoscale devices , therefore providing the ultimate modeling benchmark . _ using detailed experimental time - dependent defect spectroscopy ( tdds ) data obtained on such nanoscale devices , we investigate the compatibility of these models with experiment . our results show that the _ diffusion _ of hydrogen ( or any other species ) is unlikely to be the limiting aspect that determines degradation . on the other hand , the data are fully consistent with _ reaction_-limited models . we finally argue that only the correct understanding of the physical mechanisms leading to the significant device - to - device variation observed in the degradation in nanoscale devices will enable accurate reliability projections and device optimization .
given an undirected graph containing nodes , determining whether any simple cycles of length exist in the graph solves the hamiltonian cycle problem .simple cycles of length are known as hamiltonian cycles .this paper is concerned with finding a hamiltonian cycle ( hc ) of a graph by finding a global minimizer of a smooth function .we associate a variable with each ( directed ) arc .define a matrix , whose element is if , or 0 otherwise .it was shown in that a longest cycle of a graph is a global minimizer of the problem : \subject & p(x ) \in \mathcal{s } , \quad x \ge 0 , \end{array}\ ] ] where is the set of stochastic matrices .we shall refer to the linear constraints that arise from this restriction on as the constraints .it follows we may also restrict , where is the set of doubly stochastic matrices , since if a hc exists and the solution to ( [ eqn - det ] ) is defined to be then is a permutation matrix .it is these two forms of the problem that we investigate .the elements of that are 1 denote the arcs in the hc .although we introduced as the nonzero elements of , in practice is a vector .we number the indices by row .so if there are 3 arcs from the first node there will be , and in the first row of .the first nonzero element of in the next row is and so on . is symmetric in the pattern of the nonzero elements but is not a symmetric matrix .the elements in the upper triangular half correspond to arcs in one direction and their lower triangular half reflection is the arc being taken in the opposite direction .there would be no reason not to label arcs so that the ( 1,2 ) element was always nonzero .the corresponding ( 2,1 ) element must be zero if at the solution is nonzero .however , it is possible for both to be zero . when it is possible to replace the objective function in ( [ eqn - det ] ) by the negative determinant of the leading principal minor of .the result follows from among other things that the restrictions we place on and ensure the lu factorization of exists without the need to permute the rows or columns .also has rank and the leading principal minor is full rank .the proof was given in .unfortunately this does not hold when . using the leading principal minorhas the advantage that the rank - one modification is not required , which makes calculating the gradient and the hessian a little simpler .a method for efficiently computing the gradient and hessian of the negative determinant of the leading principal minor was provided in , and was proved in to be more numerically stable than the objective function in ( [ eqn - det ] ) .another benefit is that the maximum value is independent of the size of the graph , eliminating the need to scale any parameters by the size of the graph . in the casethe problem of interest is of the form subject to where , is the leading principle minor of , is the set of nodes reachable in one step from node . constraints ( [ eq - ds1])([eq - ds3 ] ) are called the _ doubly - stochastic _ constraints . for neatness , we refer to constraints ( [ eq - ds1])([eq - ds3 ] ) as the constraints .it is assumed that any graph considered is simple and undirected .although this is a classical linearly constrained problem it is different in character from those whose variables are not related to a binary - variable problem .a consequence of the multiple global minimizers ( and we believe this is true in many other problems in discrete variables such as assignment problems that are relaxed ) is the presence of saddlepoints that are almost minimizers . 
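returning to the constraint structure introduced above: a minimal sketch of assembling p(x) from the arc variables ( numbered row by row ) and of checking whether a rounded, binary x corresponds to a hamiltonian cycle. the helper names are ours and purely illustrative; nothing here is taken from the paper's actual implementation.

import numpy as np

def build_P(n, arcs, x):
    # arcs: list of directed arcs (i, j), numbered row by row as in the text;
    # x: one variable per arc, so P[i, j] = x[k] for the k-th arc (i, j)
    P = np.zeros((n, n))
    for k, (i, j) in enumerate(arcs):
        P[i, j] = x[k]
    return P

def is_hamiltonian(P, tol=1e-8):
    # true if the (rounded) P is a permutation matrix whose arcs form one n-cycle
    n = P.shape[0]
    if not (np.allclose(P.sum(axis=1), 1.0, atol=tol)
            and np.allclose(P.sum(axis=0), 1.0, atol=tol)):
        return False
    succ = P.argmax(axis=1)
    node, visited = 0, set()
    for _ in range(n):
        visited.add(node)
        node = int(succ[node])
    return node == 0 and len(visited) == n

# example: a 4-node directed cycle 0 -> 1 -> 2 -> 3 -> 0
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_hamiltonian(build_P(4, arcs, [1.0, 1.0, 1.0, 1.0]))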
indeedpotentially the number of saddlepoints can be much larger than the number of global minimizers .it can be shown that there exists a path between any two isolated global minimizers that contains a feasible saddlepoint . moreover ,there exists a saddlepoint for which that has only one negative eigenvalue .there is a potential for the number of such points to grow exponentially with the number of global minimizers .it was shown in how to compute and its first and second derivatives very efficiently .this is critical since we show that directions of negative curvature are essential to solving this problem and they play a much more critical role than is typically the case .a key issue is symmetry .obviously for _ every _ hc there is a hc obtained by reversing the direction .this symmetry reveals itself in the problem variables .if there is a hc with then there exists a reverse cycle that is also a hc in which and its twin is 1 .we need to set the initial value of these two variables to be identical in order not to introduce bias ( they may both be 0 in another hc ) .quite frequently ( and this almost always happens with some pairs ) when using only descent many of these twin variables remain equal . in such circumstancesit is only the use of a direction of negative curvature that breaks the tie .while such behavior is possible for general problems it is quite rare .consequently , in this class of problem directions of negative curvature play a more important role and often more important than that of using a direction of descent .again unlike the general case where we usually observe no directions of negative curvature in the neighborhood of the solution here they are always present , which is one reason why the solution is at a vertex .what is happening is that from our current iterate there are two equally attractive minimizers so it steers a course going to neither unless directions of negative curvature are used .the symmetry reveals itself also in the problem function and derivatives .if the iterates to solve the problem are denoted by then at is it usually the case that the gradient of at is orthogonal to the eigenvectors corresponding to negative eigenvalues of the hessian of at .consequently when are solving for the newton step using the conjugate gradient algorithm it will not be detect when hessian is indefinite .typically in an optimization problem it is better to have more constraints if such constraints can be added even if these are inequalities and are known not to be active at the solution .however , it is not always the case .we have a choice of either or ( eliminate equations [ eq - ds2 ] or [ eq - ds3 ] ) .note here when solving with we are adding more equality constraints without adding extra variables and hence we are reducing the degrees of freedom in the problem .however , for this particular problem there are some theoretical differences that alter the usual picture of potentially reducing the search space but adding to the complexity of computing the iterates .it was shown in that when that the lu factors of exist regardless of the pivoting order .this has many beneficial consequences not the least of which is the objective of the problem may be recast to be , where is the leading principle minor of . 
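a small, self-contained sketch of the "twin variable" bookkeeping described above: pairing the two orientations of each undirected edge and detecting the ties that, as noted, only a direction of negative curvature tends to break. the pairing convention ( arcs stored as (i, j) tuples ) is an assumption for illustration only.

def twin_pairs(arcs):
    # indices (k, m) of arc variables that are the two orientations of the
    # same undirected edge ("twin variables"); arcs is a list of (i, j) tuples
    index = {a: k for k, a in enumerate(arcs)}
    pairs = []
    for k, (i, j) in enumerate(arcs):
        if i < j and (j, i) in index:
            pairs.append((k, index[(j, i)]))
    return pairs

def tied_twins(x, arcs, tol=1e-10):
    # twin pairs whose current values coincide: the situation in which, as
    # described above, only a direction of negative curvature breaks the tie
    return [(k, m) for k, m in twin_pairs(arcs) if abs(x[k] - x[m]) <= tol]

# example: a triangle with both orientations of every edge
arcs = [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
print(twin_pairs(arcs))   # [(0, 2), (1, 4), (3, 5)]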
although is singular its leading principle minor is nonsingular as a consequence of the existence of the lu factorization .note that is typically very sparse ( if it is not finding a hc is usually trivial ) .it also has other benefits , the main one being that it enables the problem to be recast in a wide variety of ways .we shall show in section [ sec - prelim ] that this property is also true even when so this is not a reason for preferring .in it was shown as part of the proof that when , if a variable is not 0 or 1 altering it to one of them reduces the objective .one consequence is that all local minimizers are binary .it has not been shown that this result is true for .indeed it seems likely it is not true .the issue that makes it more complicated is that altering a variable in the usually requires altering many or all of the other variables in order to retain feasibility .there are many ways that could be done .it will be seen that one of the steps we propose in our algorithm is deletion or deflation , which occur when one or more of the variables is set to 0 or 1 respectively . for the case it is simple to adjust the corresponding variables in a row of to satisfy the constraint simply by scaling the relevant row . for the case it more complex and either an lp or qp needs to be solved .it is made more complicated by the need to determine a strictly interior point and sometimes one does not exist . usually one benefit of more constraints is the reduced hessian is smaller and the linear system needed to be solved at each iteration is also smaller . finding both a sufficient descent direction and a direction of sufficient negative curvature requires finding a null space matrix . if is the constraint coefficients we require a matrix such that and the matrix is full rank .the matrix is almost always dense .consequently , the smaller the dimension of the better . however , it was shown in that there exists a for the case that is sparse and structured .paradoxically the larger for the case is simpler and sparser .this alters the balance when computing the search directions needed to solve our problem .we show that the lu factorization of and of exists when p is a stochastic matrix . as already noted thiswas shown to be true for a doubly stochastic matrix . to determine the determinant of the objective we need to compute the lu factorization of a matrix and this result implies no pivoting is required .a stochastic matrix may have either rows or columns that sum to unity . in forming the lu factorizationit is common to assume row interchanges rather than column interchanges .this is just convention and there is no advantage to doing it one way or the other .however , for sparse matrices the manner the sparse elements are stored does matter when performing the lu factorization . sincewhen forming such matrices it is assumed that row interchanges will be done that impacts how best to store the sparse matrix in compact form . in the proofwe assume row interchanges may be made and this causes us to prefer to assume that has unit columns .the converse result for unit rows follows immediately from this result .a matrix is said to have property if 1 . for 2 . for 3 . . if has property an lu factorization of exists . * proof * if then the first row of is and the first row of is the same as the first row of . 
consequently , there is no loss of generality if we assume that .note that is the element of largest magnitude in the first column of .after one step of standard gaussian elimination ( ge ) we get we have , which implies , since that it follows that and .by definition we have since and it follows that . from this result and it follows that and that has property .we can now proceed with next step of ge .note that if we must have the first row and column of be zero and we can skip the steps of ge until we have a nonzero diagonal element of .regardless of the rank of we have this follows from having property and the only matrix ( the size of for the last step of ge ) with that property being 0 . if has rank then . if has rank the leading principle minor is nonsingular .when performing ge the elements being eliminated are not larger in magnitude than the pivot .this implies that .this property implies that is about as well conditioned as it can be and that if software to perform ge is used even if it performs row interchanges when needed they will never be required and the lu factorization of will be obtained and not that of , where is a permutation matrix .[ nonsing ] the matrix is nonsingular when has rank and .* proof * we have where .note that is upper triangular .moreover , the element of is . since we get , which implies is nonsingular .a matrix is said to have property if 1 . for 2 . for 3 . . if has property an lu factorization of exists .this follows from the fact that the lu factors of exist .lu factors of can be obtained from the transpose of these factors .note that although this is an lu factorization it differs from that typically found since it is now that has unit diagonal elements .the basic approach used is similar to that due to murray and ng , who first relax the problem and then solve a sequence of problems in which a strictly convex function is added to the objective together with a nonconvex function that attempts to force the variables to be binary . initially the strictly convex function dominates the objective and in the limit the nonconvex term dominates .our approach is a simplification since the nonconvex term is not needed .also since we are applying this general approach to a specific problem with significant structure the algorithm can be modified to improve not only efficiency , but also to improve the likelihood of obtaining a global minimizer and hence a hc .how the individual problems in the sequence are solved is the main focus .it will be seen a much heavier use of negative curvature is made and with less emphasis on the use of descent directions , which is the reverse of what optimization algorithms usually do .a peculiarity of the problem , which we think may be true of most problems with multiple global minimizers , is the gradient at the iterates is often spanned by the eigenvectors corresponding to the positive eigenvalues of the hessian even though the hessian is indefinite .this corresponds to the so - called hard case " in trust region methods . typically in such methods little or no attentionis paid to it since it is considered very unlikely to arise and essentially impossible to keep arising . 
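as a quick numerical illustration of the no-pivoting property established above: the sketch below assumes the matrix in question has the form i - p with p column-stochastic and zero diagonal ( the precise definition is in the elided formulas, so this reading is our assumption, not a quotation ). it runs plain gaussian elimination with no interchanges, checks that no multiplier exceeds the pivot in magnitude, and that only the last pivot of the singular matrix vanishes.

import numpy as np

def lu_no_pivot(A):
    # plain gaussian elimination with no row or column interchanges
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        if U[k, k] == 0.0:
            # for matrices with the property above this only happens when the
            # whole remaining column is zero, so the step can be skipped
            continue
        m = U[k + 1:, k] / U[k, k]
        L[k + 1:, k] = m
        U[k + 1:, k:] -= np.outer(m, U[k, k:])
    return L, np.triu(U)

rng = np.random.default_rng(0)
n = 8
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
P /= P.sum(axis=0)                       # column-stochastic: every column sums to one
A = np.eye(n) - P                        # assumed form of the matrix discussed above
L, U = lu_no_pivot(A)
assert np.abs(L).max() <= 1.0 + 1e-12    # multipliers never exceed the pivot in magnitude
assert np.allclose(L @ U, A)
print(abs(U[-1, -1]))                    # ~0: A is singular, its leading principal minor is not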
in both the stochastic and doubly stochastic casewe are interested in solving a problem of the form : strictly speaking the upper bounds on are not required since the equality constraints and the lower bounds ensure that the upper bound on holds .however , for now we shall leave them in .we are interested in the global minimizer and a typical descent algorithm will converge to the local minimizer associated with the initial point .murray and ng propose adding a strictly convex function to the objective , where is a positive scalar . a sequence of problems is then solved for a sequence of strictly monotonically decreasing values of . for with certain continuity properties the trajectory of minimizers is a unique , continuous , and smooth trajectory .when the initial value of is sufficiently large the new objective is also strictly convex and has a unique and therefore global minimizer .consequently , the minimizer found by this algorithm is the one whose trajectory is linked to the initial unique global minimizer .a feature of the problem is it has what we term twin variables " . in the definition of the variables as elements of the matrix ,if is not always zero then neither is its twin is . in terms of the graphthis is the same edge except in the opposite direction . since the reverse of a hc is itself a hc twin variables have an equal probability of being in a hc .it is essential that the minimizer of is a neutral point with regard to the minimizers of the original problem .for example , if the feasible region is a hypercube the unique neutral point is the center . since we know the binary minimizers are extreme points of the feasible region the center " of the feasible region is such a point .one way of achieving such a point is to choose : an alternative is these are well known barrier functions used in interior point methods . by using either of these functionswe have transformed the original problem into minimizing a sequence of barrier functions .note the reason here for using such functions is not eliminating inequality constraints , that is simply a side benefit . solving the original problem using say an active set method is efficient especially since we do not expect the size of the problems to be extremely large ( 100,000 variables or more ) .our motivation is different and consequently it impacts how the initial is chosen and how it is subsequently adjusted . since we seek a neutral point ( dropped from the objective ) .a test of whether the choice of leads to a neutral initial point is whether the twin variable have the same initial value and this is observed in the numerical testing of the barrier function . since the barrier function removes the need for the inequality constraints the algorithms requires the solution of a sequence of linearly _ equality _ constrained optimization problems .the choice of method is dictated by the need to converge to points that at least satisfy second - order optimality conditions .this requires the algorithm to determine whether the reduced hessian is positive semidefinite . 
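before turning to the reduced hessian, a crude sketch of the barrier continuation just described. the text mentions two well-known barrier functions without reproducing them; the logarithmic barrier used here is one standard choice and is meant only to illustrate the idea of minimizing the objective plus a mu-weighted barrier for a decreasing sequence of mu. the inner solver is a hypothetical placeholder.

import numpy as np

def with_log_barrier(f, grad_f, mu):
    # wrap an objective with the classical log barrier mu * (-sum log x);
    # this barrier is an illustrative standard choice, not necessarily the
    # paper's own
    def fb(x):
        return f(x) - mu * np.sum(np.log(x))
    def gb(x):
        return grad_f(x) - mu / x
    return fb, gb

def continuation(solve_subproblem, x0, mu0=1.0, shrink=0.1, n_outer=6):
    # crude outer loop: minimize the barrier problem for a decreasing sequence
    # of mu; solve_subproblem is a hypothetical inner solver (descent plus
    # negative curvature, as sketched further below)
    x, mu = x0, mu0
    for _ in range(n_outer):
        x = solve_subproblem(x, mu)
        mu *= shrink
    return x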
to obtain the reduced hessian matrix we need the null space matrix , which is such that and is full rank .the reduced hessian is then given by , where is the hessian of .we use a line search method based on determining a descent direction and when available a direction of negative curvature .a sequence of improving estimates is generated from an initial feasible estimate from where is a steplength that ensures a sufficient decrease , is a sufficient descent direction , and is a direction of sufficient negative curvature .it was shown by forsgren and murray that this sequence converges to a point that satisfies the second - order optimality conditions .typically such methods combine a direction of descent with a direction of negative curvature when the latter exists .our observation is when a direction of negative curvature does exist and is used purely as the search direction then at almost every subsequent iteration a direction of negative curvature exists and is usually getting stronger .consequently , when we get a direction of negative curvature we do not bother computing the direction of descent . given the importance of the direction of negative curvature we depart from normal practice and apply the modified cholesky algorithm to the following matrix where is an estimate of the smallest eigenvalue of when it is thought is indefinite , otherwise is negative and very small in magnitude .the rational is that when is indefinite this leads to a very good direction of sufficient negative curvature .when is positive definite the small shift ensures that the matrix has a condition number that is sufficiently small to ensure sufficient accuracy in the direction of descent .if no modification is made in the modified cholesky factorization then is positive definite and we compute a direction of sufficient descent by solving : where is the upper triangular factor , and is the gradient of .the sufficient descent direction is then given by , where .if a modification is made then is indefinite and the following system is solved where the index is obtained during the modified cholesky factorization .it can be shown that , where is a direction of sufficient negative curvature . moreover , we have we can improve this direction of negative curvature by minimizing .we reduce the value by doing a sweep of univariate minimization of this function .this cost is roughly the same as a matrix - vector multiplication and so can be repeated if need be .we use the improved value as the estimate of in the following iteration . note that the sign of is always chosen so that .if is not small in magnitude we will not know if negative curvature exists when no modification is made in the modified cholesky algorithm. however we will know that the smallest eigenvalue is bigger than .we repeat the modified cholesky factorization with . if after a small number of reductions we still get no modification then is set to the default small value .we use a very crude linesearch along either or .we compute the maximum step to the boundary and take a step times the value .if that is not a lower point we multiply the step by 0.5 until we succeed . typically andalmost always is successful .a key difference with the use of a barrier function here compared to solving problems unrelated to relaxed discrete problems is that behaves in an unusual way . 
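a simplified stand-in for the search-direction computation described above. instead of the modified cholesky factorization of the shifted reduced hessian, the sketch uses an eigendecomposition to decide between a ( mildly shifted ) newton-like descent direction and a direction of negative curvature, and implements the crude linesearch that takes a fixed fraction of the maximal feasible step and halves it on failure. the fraction 0.9 and the helper step_to_boundary are placeholders, since the actual constants are not given in the text.

import numpy as np

def search_direction(Hr, gr, tol=1e-8):
    # Hr, gr: reduced hessian Z^T H Z and reduced gradient Z^T g;
    # an eigendecomposition replaces the modified cholesky factorization
    # (a simplification for illustration only); returns (direction, is_negcurv)
    w, V = np.linalg.eigh(Hr)
    if w[0] < -tol:                          # indefinite: exploit negative curvature
        d = V[:, 0]
        if gr @ d > 0.0:                     # pick the non-ascent orientation
            d = -d
        return d, True
    shift = max(tol - w[0], 0.0)             # tiny shift keeps the system well conditioned
    d = -np.linalg.solve(Hr + shift * np.eye(len(gr)), gr)
    return d, False

def crude_linesearch(f, x, d, step_to_boundary, frac=0.9):
    # take frac of the maximal feasible step and halve until f decreases;
    # frac and step_to_boundary are placeholders for the elided constants
    alpha = frac * step_to_boundary(x, d)
    fx = f(x)
    while f(x + alpha * d) >= fx and alpha > 1e-14:
        alpha *= 0.5
    return x + alpha * d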
after a strictly feasible pointis found this is used to minimize the barrier function alone ( equivalent to setting ) .this is an easy function to minimize and to do so accurately .this is necessary to avoid bias unlike when minimizing a regular function where we are often able to provide an initial point reasonably close to the solution .indeed we are attempting to find the initial point to be as far away as possible from the solutions . in some cases such as for cubic graphs the minimizer of the barrier functionis known ( ) .typically we want to reduce at a slow rate .however , another feature of the hc problem is the point that minimizes the barrier function is either a saddle point of the determinant function or very close to it .again this rarely if ever happens when using a barrier function for normal problems .the consequence is that moving the iterates from their current location requires changing sufficiently to make the current reduced hessian indefinite .quite how much is not difficult to estimate .the hessian of the barrier function is a well conditioned diagonal at the minimizer .it is usually less than 2 and for cubic graphs is 1 . in both the stochastic and doubly stochastic casethe matrix has a low condition number .consequently , given an estimate of the smallest eigenvalue of either , the hessian of determinant function , or of it is easy to find a good estimate of the change needed in .if it is not sufficient then we can simply divide by 10 until it is . in our testing this was never needed .once we get negative curvature we usually never reduce again since either we succeed in finding a hc without needing to , or we fail . an alternative to solvingthe standard problem is to use the primal dual approach .the standard approach means that the newton direction is poor when is small .there are two reasons not to use the primal dual method .firstly , we do not need to have very small since we know the solution is converging to a binary point and so we can round and test the solution . ill - conditioning arises due to a variable becoming close to a bound .should that happen such a variable can be removed from the problem . how to do this is described in the sections on deletion and deflation .secondly , we need to use directions of negative curvature but the hessian in the primal dual formulation is not assured to give a direction of negative curvature except in the neighborhood of a stationary point .a common way of defining , such that and is full rank , is to first partition the columns of ] , and therefore the condition number of is the ratio of the largest degree to the smallest degree . 
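for contrast with the sparse, structured null-space basis constructed next for the doubly stochastic constraints, here is the generic variable-reduction construction alluded to above ( the formula itself is elided in the text, so this is the textbook version ): partition the constraint matrix into a square nonsingular block and the rest, and stack the solved block on top of an identity.

import numpy as np

def nullspace_basis(A):
    # generic variable-reduction basis: assumes A has full row rank m and that
    # its first m columns form a nonsingular block B (in general a column
    # permutation is needed first); with A = [B N],
    #     Z = [[-B^{-1} N], [I]]   satisfies   A Z = 0.
    # the specialised construction for the doubly stochastic constraints gives
    # a sparser Z with 0/1 entries; this dense version is only for comparison
    m, n = A.shape
    B, N = A[:, :m], A[:, m:]
    top = -np.linalg.solve(B, N)
    return np.vstack([top, np.eye(n - m)])

rng = np.random.default_rng(1)
A = rng.random((3, 7))
Z = nullspace_basis(A)
assert np.allclose(A @ Z, 0.0)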
in order to define the null space matrix for , it is trivial to reorder the columns of such that the first column of each submatrix appears first .the reordered matrix is ] , where is a _ triangular _ matrix .this can be achieved by using the following algorithm .* input * : + * output * : + + * begin * + count 0 + rows rank + _ with rows removed to make _ _ full rank _ + cols columns + rows + + + c identify a set of columns such that _ + + * from * 1 * to * + count count + 1 + $ ] _ ( moving _ _ into position _ count _ )_ + _ ( moving column to column _ count _ ) _ + _reorder the rows to get a 1 in positive _ ) + + rows - count + + _ reverse the order of the first _rows _ entries in _ + _ ( reverse the order of the first _rows _ columns in ) _+ + + * end * + + + then the null space for is defined to be .\ ] ] unlike typical problems the constructed in this way is sparse ( similar to that of ) and does not require the lu factorization of since is lower triangular .moreover , has elements that are either 0 or 1 .consequently operations with do not require any multiplication .if at any stage , one or more of the variables approach their extremal values ( 0 or 1 ) , we fix these values and remove the variables from the problem .this process takes two forms : _ deletion _ and _ deflation _ , that is , setting to 0 or 1 , respectively .note that we use the term deflation because in practice the process of fixing results in two nodes being combined to become a single node , reducing the total number of nodes in the graph by 1 .deletion is a simple process of fixing a variable to 0 by simply removing its associated arc from the graph .when a variable is close to 1 , we perform a deflation step by combining nodes and by removing node from the graph . then , we redirect any arcs that previously went into node to become , unless this creates a self - loop arc . after deletion or deflationwe construct the new constraint matrix and update appropriately .the thresholds for the deletion or deflation process to take place can be set as input parameters . during deflation, we not only fix one variable ( ) to have the value 1 , but also fix several other variables to have the value 0 .namely , we fix all variables corresponding to arcs for , for , and to have the value 0 .whenever we perform deflation the information about the deflated arcs are stored in order to construct a hc in the original graph once a hc is found in the reduced graph . after performing deletion or deflation ,a reduced vector is obtained , which is infeasible in the resultant smaller dimension problem . in the stochastic case obtaining a feasible pointis trivial since only the variables in the specific rows where fixing has occurred need to adjusted . the simplest way is to multiply the remaining variables in an impacted rows by , where is the variable that has been fixed .note that this increases the remaining variables so will not trigger another deletion in the row .it is possible it triggers a deflation , but this is unlikely . in the doubly stochastic case many orall of the variables may be impacted even for a single variable being deleted .define to be the error induced by such a process . 
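for the stochastic case the rebalancing after a deletion is simple enough to sketch directly; the doubly stochastic case requires the lp described next. the scale factor 1/(1 - x_k) below is our assumption ( the exact expression is elided in the text ), chosen because it restores a unit row sum and only increases the surviving variables, as stated.

import numpy as np

def delete_arc_stochastic(x, arcs, k):
    # fix arc variable k to zero and rebalance its row (stochastic case only);
    # the factor 1/(1 - x[k]) is an assumption, see the note above
    x = np.asarray(x, dtype=float).copy()
    i = arcs[k][0]                            # tail node of the deleted arc
    same_row = [m for m, (p, _) in enumerate(arcs) if p == i and m != k]
    scale = 1.0 / (1.0 - x[k])
    for m in same_row:
        x[m] *= scale
    x[k] = 0.0
    return x

# example: delete the nearly-zero first arc of a node with three outgoing arcs
arcs = [(0, 1), (0, 2), (0, 3)]
x = delete_arc_stochastic([0.05, 0.5, 0.45], arcs, 0)
print(x, x.sum())                             # the row still sums to one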
note that is a nonnegative vector in the case of both deletion and deflation .then , we find a new such that , and , where the size of depends on the deletion or deflation thresholds chosen .the interpretation of is that it is a point that satisfies the constraints , and is as close as possible to the point we obtained after deleting or deflating .we determine by first defining and so that .define as the smallest element of .then , we solve where is chosen large enough that is reduced to 0 whenever possible. constraints ( [ eq - lp_scaling3 ] ) are designed to ensure that .however , it may be impossible to satisfy the above constraints for a value of because some variables may need to be 0 or a value very close to 0 . in this case, we reduce and solve the lp again , continuing this process until we obtain a solution with .if unless we set for some and , then we delete these variables , as they can not be nonzero in a hamiltonian cycle ( or in fact any point ) containing the currently fixed arcs .at each iteration we test if a hc can be obtained by a simple rounding procedure . in the casewe set the largest element in each row to one , starting with the row with the largest overall element .if the largest element happens to already have a unit element in the column we set the second largest in that row to one and so on .if there is no element available we have not identified a hc . after each setting of a variable to onewe rebalance the constraints .a similar procedure is used in the case except in this case setting an element to unity induces more elements being set to zero .moreover , we now fail to satisfy the constraints and so rebalancing is not done .obviously , we could use more sophisticated rounding methods , which may allow us to identify a hc earlier .one potential improvement of this method would be to solve a heuristic at the completion of each iteration , using the current point , that tries to find a nearby hc .such a hybrid approach was considered in , with promising results .this has not been explored since we are interested in testing our algorithm to the limit .below we outline the structure of the algorithm , which we term dipa ( determinant interior point algorithm )in order to investigate the character and to test the performance of the algorithm we generated a test set of 350 problems . specifically , we randomly generated 50 problems for each of 20 , 30 , .... , and 80 nodes with node degree between 3 and 6 .the computer used for performing all the experiments was a pc with intelcore i7 - 4600u cpu , 2.70 ghz , 16 gb of ram , and running on the operating system windows 8.1 enterprise .dipa was implemented in matlab r2014b , with all lps solved by ibm ilog cplex optimization studio v12.6 via its concert interface to matlab .the choice of initial and the rate of reduction of did not prove to be critical .for successful runs once had been reduced to a sufficient level for the reduced hessian to be indefinite it almost always remained indefinite until it stopped .indeed no further reduction in was required . in figure [ fig - paths ] we show how the determinant function behaves as it goes from the initial point to all of the hcs of a 20 node graph . it can be seen all curves are remarkably similar and that each can be reached by going down a direction of negative curvature . 
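returning to the rounding test described above ( stochastic case ), a self-contained sketch. it omits the per-step rebalancing of the constraints mentioned in the text and simply blocks columns that already received a one, so it is a simplification rather than the exact procedure.

import numpy as np

def round_to_cycle(P):
    # greedy rounding test: process rows in order of their largest entry; in
    # each row pick the largest entry whose column is still free and set it to
    # one. returns the successor array if the result is a single n-cycle,
    # otherwise None (no hamiltonian cycle identified).
    n = P.shape[0]
    order = np.argsort(-P.max(axis=1))
    used_cols, succ = set(), -np.ones(n, dtype=int)
    for i in order:
        for j in np.argsort(-P[i]):
            if P[i, j] > 0.0 and j not in used_cols:
                succ[i] = j
                used_cols.add(int(j))
                break
        else:
            return None                       # no admissible entry left in this row
    node, seen = 0, set()
    for _ in range(n):
        seen.add(node)
        node = int(succ[node])
    return succ if (node == 0 and len(seen) == n) else None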
also the degree of curvature increases the closer the point gets to the hc .we attempted to solve the 350 problems in the test set with the algorithm applied to the stochastic and doubly stochastic form of the problem . in both casesan attempt was made to perform neither a deletion or deflation by setting extreme values for the relevant parameters that invoke these steps within the algorithm .the results given in table [ tab - results ] are unambiguous .it is clear that the doubly stochastic form of the problem is far superior .we did do further tests on the stochastic case by varying the adjustable parameters but the gap in performance was far too large to bridge ..numbers of graphs solved when deletions and deflations are suppressed for the stochastic and doubly - stochastic forms of the problem . [ cols="^,^,^,^,^,^,^,^ " , ] not using deflation typically increased the number of iterations about 50% over using deflation . in figure[ fig - descent ] we show how our algorithm typically converges when a hc is found .since we are converging to a vertex the nature of converges differs considerably from a typical minimization algorithm where the reduction made in the object slows as the solution is approached .clearly the results indicate that the problem with constraints yields far better results than the problem with constraints .it also supports the view that different forms of a problem with identical complexity properties can have quite differing performance .it was not the intent of this paper to show or suggest that this approach is competitive with alternative algorithms to find a hamiltonian cycle .however , it is quite distinct from other methods and there is much that can be done to improve its performance .more constraints can be added .for example , and we know twin variables , say and must satisfy and .the product from the latter constraints can be added directly to the objective and can be used initially to give a strictly convex problem .however , we plan to try and find a form of the problem that eliminates the occurrence of reverse cycles and so eliminates twin variables from the problem. the new variables would be the elements of , where .knowing it is trivial to find the elements of .the current problem has a dense hessian matrix .although we have shown how all the elements can be computed efficiently it still leaves a dense matrix , which has computational implications when computing the search direction for large problems .we are investigating transformations that should lead to the hessian being sparse .also if the conjugate gradient algorithm is used to compute the search direction and direction of negative curvature it may be possible to compute efficiently even when is dense .although we have addressed the hamiltonian cycle problem an equally important interest is developing algorithms to determine global minimizers . in particular problems that have arisen from relaxing discrete problemsmany of the issues that arise in such problems are identical to those arising in the hc problem .for example , lots of global minimizers and hence lots of stationary points that have reduced hessians that are almost positive definite ( one negative eigenvalue ) . 
moreover , symmetry is also present .problems such as the frequency assignment problem have an equally good solution simply from any permutation of a known solution .also solutions are typically at a highly degenerate vertex .we are encouraged by the success of the algorithm we have developed , which has demonstrated the ability to find global minimizers of highly nonlinear and nonconvex problems with several hundred binary variables .a key requirement when solving problems with relaxed discrete variables is to have an unbiased initial point .as already noted a common technique used in global optimization is to use multiple starting points.the approach we advocate requires a neutral starting point .we have demonstrated that an equally good alternative is to vary some of the parameters and options that algorithms to solve such problems typically have .we have shown that very small changes both to the strategy and flexible parameters leads to distinct solution enabling us to reduce significantly the number of problems on which we fail to find a global minimizer . moreover, these variations do not lead to less efficient methods .
it has been shown that the global minimizer of a smooth determinant of a matrix function reveals the largest cycle of a graph . when it exists this is a hamiltonian cycle . finding global minimizers even of a smooth function is a challenge . the difficulty is often exacerbated by the existence of many global minimizers . one may think this would help but in the case of hamiltonian cycles the ratio of the number of global minimizers to the number of local minimizers is typically astronomically small . we describe efficient algorithms that seek to find global minimizers . there are various equivalent forms of the problem and here we describe the experience of two . the matrix function contains a matrix , where are the variables of the problem . may be constrained to be stochastic or doubly stochastic . more constraints help reduce the search space but complicate the definition of a basis for the null space . even so we derive a definition of the null space basis for the doubly stochastic case that is as sparse as the constraint matrix and contains elements that are either 1 , -1 or 0 . such constraints arise in other problems such as forms of the quadratic assignment problem . keywords : hamiltonian cycle , barrier functions , interior - point methods , negative curvature .
let be a bounded domain with smooth boundary .for any given , we consider for the evolution equation arising in the theory of uniaxial deformations in isothermal viscoelasticity ( see e.g. ) subject to the homogeneous dirichlet boundary condition the unknown variable describes the _ axial displacement field _ relative to the reference configuration of a viscoelastic body occupying the volume at rest , and is interpreted as an initial datum for , where it need not solve the equation . here , is a nonlinear term , an external force , and the convolution ( or memory ) kernel is a function of the form where is a ( nonnegative ) convex summable function .the values represent the _ instantaneous elastic modulus _ , and the _ relaxation modulus _ of the material , respectively .since , a formal integration by parts yields so that can be rewritten as a simplified , yet very effective , way to represent linear viscoelastic materials is through rheological models , that is , by considering combinations of linear elastic springs and viscous dashpots . in particular ,a standard viscoelastic solid is modeled as a maxwell element , i.e. a hookean spring and a newtonian dashpot sequentially connected , which is in parallel with a lone spring .the resulting memory kernel turns out to be of exponential type . in this context, the aging of the material corresponds to a change of the physical parameters along the time leading , possibly , to a different shape of the memory kernel .there are several ways to reproduce this phenomenon within a rheological framework ( see e.g. ) . here, we propose to describe aging as a deterioration of the elastic response of the viscoelastic solid , translating into a progressive stiffening of the spring in the maxwell element . in the limiting situation ,when the spring becomes completely rigid , the outcome is the kelvin - voigt ( solid ) model , depicted by a damper and an elastic spring connected in parallel .mathematically speaking , the kelvin - voigt model can be obtained from by keeping fixed the total mass of the kernel , that is , and letting . or , in other words , by taking the limit " where is the dirac mass at .this leads to the equation in the terminology of dautray and lions , this is the passage from viscoelasticity with _ long memory _ to viscoelasticity with _ short memory_. in spite of a relatively vast literature concerning both and ( see e.g. and references therein ), we are not aware of analytic studies which consider the possibility of including aging phenomena ( or , more generally , changes of the structural properties ) of the material _ within _ the dynamics .thus , from our point of view , it is of great interest to have a model whose physical parameters can evolve over time .this would allow , for instance , to describe the transition from long to short memory of a given viscoelastic material .the way to pursue this goal is to let the memory kernel depend itself on time .accordingly , we will consider a modified version of , namely , subject to the boundary condition , with where the time - dependent function is convex and summable for every fixed . here and inwhat follows , the _ prime _ denotes the partial derivative with respect to .it is worth noting that the nonautonomous character of is structural , in the sense that the leading differential operator depends explicitly on time . 
the equation is supplemented with the initial conditions where , and the function are assigned data . in order to study the initial - boundary value problem above , following the pioneering idea of dafermos , we introduce , for , the _ past history _ variable . besides , aiming to incorporate the boundary conditions , we consider the strictly positive linear operator on the hilbert space of square summable functions on , with domain where and denote the usual sobolev spaces . then , calling and setting for simplicity the constant , problem with the dirichlet boundary condition reads . denoting , in view of , it is readily seen that , for every , . accordingly , viewing the original problem as the evolution system - in the variables and , the initial conditions turn into . the focus of this paper is a global well - posedness result for problem - in a suitable functional space . from the mathematical point of view , the presence of a time - dependent kernel introduces essential difficulties , and new ideas are needed . indeed , in the classical dafermos scheme , one has a supplementary differential equation ruling the evolution of the variable , generated by the right - translation semigroup on the history space , whose mild solution is given by . but in our case , the natural phase space for the past history is itself time - dependent , suggesting that the right strategy is to work within the theory of processes on time - dependent spaces , recently devised by di plinio _ et al . _ , and further developed in . still , in those papers the time dependence entered only via the definition of the norm in a universal reference space , i.e. the spaces are in fact the _ same _ linear space endowed with different norms , all equivalent for running in compact sets . on the contrary , here the phase space depends on time at a _ geometric _ level , and we only have a set inclusion as . this poses some problems even in the definition of the time derivative . to overcome this obstacle , we propose a different notion of solution ( which boils down to the usual one when the memory kernel is time - independent ) , where the evolution of is actually postulated via the representation formula . at the same time , this prevents us from directly obtaining differential inequalities , which are essential to produce any kind of energy estimate , so that the main technical tool in our approach turns out to be a family of integral inequalities , obtained through several approximation steps . the theory , along with the techniques developed in this work , opens the way to the long - term analysis of the solutions , which will be the object of future work . besides , a paradigm is set for tackling any equation of memory type with time - dependent kernels . it is also worth mentioning the possibility of extending the underlying ideas , in a quite natural way , to the study of systems with memory in the so - called minimal state framework introduced in . in section 2 we stipulate our assumptions on the time - dependent memory kernel , showing a concrete example of physical relevance , while in section 3 we introduce the proper functional spaces . the global well - posedness result is stated in section 4 . the main technical tool needed in our analysis is discussed in section 5 , and the remainder of the paper is devoted to the proofs : existence of solutions ( section 6 ) , uniqueness ( section 7 ) and further regularity ( section 8 ) .
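before proceeding , we record a hedged sketch of the representation formula mentioned above ( the exact statement is the one postulated in the definition of solution below ; the formula here is the classical dafermos one , which we expect to carry over verbatim since it does not involve the kernel ) : for an initial time \tau , an initial past history \eta_\tau and t\ge\tau ,
\[
\eta^t(s)=
\begin{cases}
u(t)-u(t-s), & 0<s\le t-\tau,\\[1mm]
\eta_\tau(s-t+\tau)+u(t)-u(\tau), & s>t-\tau,
\end{cases}
\]
which , when the memory kernel is time - independent , is nothing but the mild solution of the transport equation ruling the past history variable .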
in the final appendix we provide a physical derivation of our equation via a rheological model for aging materials . for , we define the compactly nested hilbert spaces endowed with the inner products and norms the index will be always omitted when equal to zero . for , it is understood that denotes the completion of the domain , so that is the dual space of .the symbol will also be used to denote the duality pairing between and . in particular , along the paper , we will repeatedly use without explicit mention the young , hlder and poincar inequalities , as well as the standard sobolev embeddings , e.g. .in order to prove a well - posedness result for our problem , we suppose that the function satisfies the following set of assumptions , where and we agree to denote whenever such derivatives exist . * for every fixed , the map is nonincreasing , absolutely continuous and summable . * for every there exists a function , summable on any interval ] with initial datum if : * , , . * , . *the function fulfills the representation formula . * for every test function and almost every ] . by means of standard embeddings( see e.g ) , point ( i ) of the definition yields at once , \h ) \cap \c^1([\tau , t ] , \h^{-1}).\ ] ] thus , speaking of the initial values of and makes sense . as already mentioned in the introduction ,it is worth noting that the definition above , where the representation formula is actually postulated , is applicable as well to classical systems with memory ( i.e. in presence of time - independent kernels ) , providing a notion of solution completely equivalent to the usual one ( see e.g. ) .in fact , this approach seems to be even more natural , and considerably simplifies the proofs of existence and uniqueness results . in particular, it allows to avoid cumbersome regularization arguments , needed to justify certain formal multiplications ( cf . ) . within the assumptions above on , and , we can state our well - posedness theorem .[ thm - ex - un ] for every and every initial datum , problem - admits a unique solution on the interval ] , the following continuous dependence result holds .[ thm - cont-2 ] there exists a positive constant , depending only on and the size of the initial data , such that for every ] , by the formula the following theorem holds .[ theorem - eta - norm ] for all , we have the inequality having set }m(t).\ ] ] the proof of theorem [ theorem - eta - norm ] requires a number of preparatory lemmas .[ lemma - bd - eta ] setting we have that with ,\ ] ] and recalling that is nonincreasing , we have the latter inequality follows from * ( m2 ) * and .[ rem - bd - eta ] it is clear from the proof that the conclusion of lemma [ lemma - bd - eta ] is true without any assumption on , provided that ,\h^{\sigma+1}) ] , and the differential equation holds in . since and , we can differentiate with respect to and to in the weak sense , so obtaining and let us prove that . since ,\h^{\sigma+1}) ] and .then , for all , we have the inequality \|\eta^t(s)\|^2_{\sigma+1}\d s\,\d t + 2\int_a^b\l\pt u(t),\eta^t\r_{\m_t^{\sigma}}\d t.\ ] ] for every small , we introduce the cut - off function correspondingly , we define the family of approximate kernels denoting now we claim that indeed , from lemma [ lemma - bd - eta ] we know that for every fixed .moreover , since ) ] , there exists such that }(s).\ ] ] this proves . 
at this point , introducing the -dependent memory space with the usual scalar product and norm , we multiply by in , so to get making use of , \d s \\ & = \ddt\|\eta^t\|^2_{\m_t^{\sigma,\eps } } - \int_0^\infty \dot\mu^\eps_t(s)\|\eta^t(s)\|^2_{\sigma+1}\d s.\end{aligned}\ ] ] besides , from applied in the space , in summary , we end up with \|\eta^t(s)\|^2_{\sigma+1}\d s + 2\l\pt u(t),\eta^t\r_{\m_t^{\sigma,\eps}}.\ ] ] as a byproduct of - , we also infer that the map is absolutely continuous .this allows us to integrate the differential identity above , obtaining \|\eta^t(s)\|^2_{\sigma+1}\d s\,\d t\\\nonumber & = 2\int_a^b\l\pt u(t),\eta^t\r_{\m_t^{\sigma,\eps}}\d t.\end{aligned}\ ] ] in order to complete the proof , it suffices to pass to the limit in as . note first that , for any fixed , analogously , for any fixed we verify that exploiting * ( m2 ) * and lemma [ lemma - bd - eta ] , and the dominated convergence theorem entails thus , denoting \|\eta^t(s)\|^2_{\sigma+1},\\ g(t , s ) & = -\big[\dot\mu_t(s)+\mu_t'(s)\big]\|\eta^t(s)\|^2_{\sigma+1},\end{aligned}\ ] ] we are left to prove that indeed , in light of * ( m4 ) * , \|\eta^t(s)\|^2_{\sigma+1}\\ & \geq -m(t)\mu_t(s)\|\eta^t(s)\|^2_{\sigma+1}-\frac{1}{\eps}\chi_{[\eps,2\eps]}(s)\mu_t(s)\|\eta^t(s)\|^2_{\sigma+1}.\end{aligned}\ ] ] we infer from lemma [ lemma - bd - eta ] that the first term in the right - hand side above belongs to . concerning the second one , we observe that implying in turn , as is nonincreasing , where is invoked in the last passage .besides , since we can assume , }(s)\leq 2\chi_{[0,2]}(s).\ ] ] collecting the two inequalities above , we end up with }(s)\mu_t(s)\|\eta^t(s)\|^2_{\sigma+1}\leq 2\psi(u,\eta_\tau)\chi_{[0,2]}(s)k_\tau(t)\in l^1((a , b)\times \r^+).\ ] ] in conclusion , we found a ( positive ) function }(s)k_\tau(t)\in l^1((a , b)\times\r^+)\ ] ] satisfying we are in a position to apply fatou s lemma : since almost everywhere , the required inequality follows . by * ( m4 ) * we have a straightforward corollary . [ cor - eta - norm ] within the hypotheses of lemma [ lemma - eta - norm ] , for all , we have the inequality choose two sequences ,\h^{\sigma+1})\ ] ] such that and define as from corollary [ cor - eta - norm ] , we know that all is needed is passing to the limit in the inequality above . by means of lemma [ lemma - bd - eta ] applied to the difference and to , we draw the estimate implying the pointwise convergence ,\ ] ] along with the control in particular , ,\ ] ] and , by the dominated convergence theorem , in order to establish the remaining convergence we argue as in the proof of lemma [ lemma - eta - norm ] .indeed , ,\ ] ] and having set .\ ] ] a further application of the dominated convergence theorem will do .this finishes the proof of theorem [ theorem - eta - norm ] .we are now ready to prove the existence result .[ thm - ex - un - wak ] for every and every initial datum , problem - admits at least a solution on ] and \to \h_n\ ] ] satisfying , for every test function and every ] and a pair satisfying - , where is of the form ).\ ] ] the proof is completely standard , and therefore omitted .it is enough to note that translates into a system of integro - differential equations in the unknowns , and the existence ( and uniqueness ) of a local solution is guaranteed by a classical odes result , owing to the fact that the nonlinearity is locally lipschitz . according to lemma [ l - galerkin ] , we denote by the ( local ) solution to the approximating problem at time . 
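for orientation , here is a hedged sketch of the galerkin ansatz behind lemma [ l - galerkin ] ( the choice of basis is ours , one natural possibility among several ) : taking a basis \{w_k\} of eigenvectors of the strictly positive operator introduced above , with \h_n={\rm span}\{w_1,\dots , w_n\} , one looks for approximate solutions of the form
\[
u_n(t)=\sum_{k=1}^{n}a_k(t)\,w_k ,
\qquad
\eta^t_n(s)=\sum_{k=1}^{n}b_k(t , s)\,w_k ,
\]
and projects the equation onto \h_n ; the unknown coefficients a_k , b_k then solve a system of integro - differential equations , locally solvable ( with a unique solution ) thanks to the locally lipschitz character of the nonlinearity , as stated above .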
in what follows, will denote a generic positive constant and a generic nondecreasing positive function , both ( possibly ) depending only on , and the structural parameters of the problem , but _independent _ of .the crucial step is finding suitable a priori estimates for the approximate solution .[ lemma - bdd - n-1 ] let for some .then for every and }\|z_n(t)\|_{\h_t}\leq \q(r).\ ] ] we preliminarily observe that , owing to - , for ] with yields knowing that , and fulfills , we are allowed to apply theorem [ theorem - eta - norm ] for , so obtaining therefore , setting and adding the latter two integral inequalities , using again we end up with the claim follows from the gronwall lemma and a further application of , together with .since the estimates for do not depend on , we conclude that the solutions to the approximate problems are global , namely , from lemma [ lemma - bdd - n-1 ] we learn that hence , there exists such that , up to a subsequence , by the classical simon - aubin compact embedding ,\h),\ ] ] we deduce ( up to a further subsequence ) ,\h),\ ] ] along with the pointwise convergence \times \omega.\ ] ] thanks to the continuity of , this also yields \times \omega.\ ] ] at this point , having and , we merely define the function for ] .the function fulfills point ( iv ) of definition [ def - sol ] .let be fixed . then , for every , we have multiplying the above equality by an arbitrary ) ] we are led to we claim that we can pass to the limit in this equality , getting owing to the density of in as , this finishes the proof of the lemma .coming to the claim , we see that the only nontrivial terms to control are the nonlinear one containing and concerning the first , the convergence to the corresponding one with follows by observing that indeed , by the growth condition and lemma [ lemma - bdd - n-1 ] and the result is a consequence of the weak dominated convergence theorem , in light of the pointwise convergence .we are left to pass to the limit .to this aim , we set and , for every ] defined as in light of , writing explicitly as we have it is easy to see that the first term in the right - hand side goes to zero . indeed , by * ( m2 ) * and , and readily gives concerning the second term , an application of the fubini theorem yields appealing again to the fubini theorem and exploiting * ( m2 ) * , we obtain and ensures the convergence finally , recalling that is nonincreasing , owing to * ( m2 ) * and using and , we draw \d s\,\d t \\ & \leq c\bigg[\int_{0}^\infty \mu_\tau(s)\|\bar\eta_{\tau n}(s)\|_1\d s + \kappa(\tau)\|\bar u_{\tau n}\|_1\bigg]\int_\tau^{t}k_\tau(t)\d t \\ \noalign{\vskip2 mm } & \leq c \big[\sqrt{\kappa(\tau)}\|\bar\eta_{\tau n}\|_{\m_\tau } + \|\bar u_{\tau n}\|_1\big]\to 0.\end{aligned}\ ] ] in summary , which completes the proof of the claim .we already know that , , for almost every ] .in order to comply with definition [ def - sol ] , we are left to verify that we need a useful observation .[ lemma - l1 ]let and be arbitrarily fixed . if , then the map a simple computation yields and the thesis follows from * ( m2)*. in light of , by applying lemma [ lemma - l1 ] for , the claimed regularity for is obtained by comparison in . as a byproduct, we deduce the continuity ,\h^{-1}).\ ] ] here we show that the initial conditions are fulfilled , i.e. 
we take any and ) ] and }\|z(t)\|_{\h_t}\leq \q(r),\ ] ] whenever .this is obtained by passing to the limit in the uniform estimate of lemma [ lemma - bdd - n-1 ] .due to the convergence in ( at any fixed ) , together with the ( weak ) continuity , \h ) \cap \c^1([\tau , t ] , \h^{-1}) ] and }\big[\|u(t)\|_1+\|\pt u(t)\|\big]\leq \q(r).\ ] ] the only difficult part is showing that for every ] , lemma [ lemma - bdd - n-1 ] provides the convergence ( up to a subsequence ) for some .accordingly , consequently , if we prove the equality in we are done . to see that , it is enough to show that but , since ,\h) ] by , and in by construction .the proof of theorem [ thm - ex - un - wak ] is completed .uniqueness is an immediate consequence of the following weak continuous dependence .[ lemma - dep ] let and be any two solutions on ] .thanks to theorem [ thm - ex - un - wak ] , }\big[\|u_1(t)\|_1+\|u_2(t)\|_1\big]\leq c,\ ] ] where here and along the proof , will stand for a generic constant ( possibly ) depending on and the size of the initial data in . for ] we have with using as a test function in , we obtain exploiting and the uniform boundedness , we have the estimate accordingly , and we arrive at an integration on ] , and , \h^1 ) \cap \c^1([\tau , t ] , \h).\ ] ] define the energy functionals and within the galerkin approximation scheme , we test the equation by .this gives since is uniformly bounded by theorem [ thm - ex - un - wak ] , owing to we find the controls where , along this proof , denotes a positive constant depending on the size of . from the first inequality , we easily conclude that in turn , from the second inequality we deduce the estimate so obtaining at this point , we apply theorem [ theorem - eta - norm ] for , and we get adding this inequality to integrated in time over ] , where the positive constant , beside and , depends ( increasingly ) only on the norms of and in .we argue as in the proof of proposition [ lemma - dep ] , the only difference being that now we can use as a test function in .accordingly , we obtain leaning on and exploiting the boundedness of and , we estimate where depends on the size of the initial data in only .the conclusion follows as in the proof of proposition [ lemma - dep ] , making use of theorem [ theorem - eta - norm ] for .let be any fixed initial datum , and let be the unique solution satisfying .then , we choose a sequence such that and we denote by the corresponding sequence of solutions satisfying . for every , we know from lemma [ thm - ex - un - wak-1 ] that , \h^1 ) \cap \c^1([\tau , t ] , \h).\ ] ] let now ] .hence , it converges to some ,{\rm w}) ] .this finishes the proof of theorem [ thm - ex - un ] . with a similar argument, we establish the continuous dependence estimate of theorem [ thm - cont-2 ] .let be two solutions , and let be their respective approximating sequences .for an arbitrarily fixed ] , gives on account of , for every fixed and , thus , under the reasonable assumption that is uniformly bounded in the past , letting we have and we conclude that on the other hand , making use of - , we can write in terms of and as \epsilon(t ) - \frac{\sigma(t)}{\textsf{k}_0(t)}.\ ] ] collecting the two equalities above , we end up with at this point , an integration by parts together with a further use of , assuming uniformly bounded in the past , lead to the integral - type constitutive equation the final goal is to determine the kinematic equation of the viscoelastic body . 
denoting by the axial displacement field relative to the reference configuration , the balance of linear momentum in lagrangian coordinates reads where is the reference density of the body and is an external force per unit mass .hence , from the explicit form of , and recalling that is related to the displacement as , we obtain where we set and equivalently , using in place of , with the original equation is then recovered when is a displacement - dependent external force of the form .observe that is convex . indeed , for every fixed , ^ 2\bigg]\e^{-\frac1\gamma\int_0^s \textsf{k}_0(t - y)\d y}\geq 0,\ ] ] where we are exploiting the fact that is nondecreasing . besides , owing to , proving that is summable .in the particular case when for every , we recover the classical time - independent kernel widely used in the modeling of ( non - aging ) standard viscoelastic solids .see e.g. . assumption * ( m3 ) * is obviously true as . in particular , ^ 2 \textsf{k}_0(t - s ) + \frac{1}{\gamma}\textsf{k}_0(t)[\textsf{k}_0(t - s)]^2\bigg]\e^{-\frac1\gamma\int_0^s \textsf{k}_0(t - y)\d y}. \end{aligned}\ ] ] assumption * ( m4 ) * holds with indeed , ^ 2\bigg ] \textsf{k}_0(t - s)\e^{-\frac1\gamma\int_0^s \textsf{k}_0(t - y)\d y}\\ \noalign{\vskip1 mm } & \leq \frac{1}{\varrho\gamma}\dot{\textsf{k}}_0(t)\textsf{k}_0(t - s)\e^{-\frac1\gamma\int_0^s \textsf{k}_0(t - y)\d y } \\ & = \frac{\dot{\textsf{k}}_0(t)}{\textsf{k}_0(t)}\mu_t(s ) .\end{aligned}\ ] ] the aim of this final subsection is to render remark [ remmyapp ] more rigorous .namely , we prove that within - the distributional convergence generically occurs as , so that the equation with memory collapses into the kelvin - voigt viscoelastic model more precisely , this will happen under the additional very mild assumption ^ 2}=0.\ ] ] this is always the case , for instance , when is eventually concave down as .since the function is nonnegative for every , our claim follows by showing that , for every fixed , to this end , introducing the antiderivative let us write and denote , for , we first establish the convergence . indeed , for , we infer from that accordingly , which readily gives in order to reach the desired conclusion , we note that implies that the nonnegative function is eventually decreasing , hence bounded at infinity. therefore , de lhpital s rule and a further exploitation of give ^ 2}\,\textsf{q}(t)=0.\ ] ] as far as is concerned , we write then , since we showed that , applying de lhpital s rule and exploiting we get }}{\frac1\gamma-\frac{\dot{\textsf{k}}_0(t)}{[\textsf{k}_0(t)]^2 } } = \frac\gamma\varrho \lim_{t\to\infty } \e^{-\frac1\gamma [ \textsf{h}(t)-\textsf{h}(t-\nu)]}.\ ] ] the latter limit clearly equals when , whereas when }\to 0.\ ] ] indeed , recalling and the fact that is nondecreasing , the claim is proven .we point out that the function has an independent interest . indeed , for , it provides an approximation ( from the right ) of the dirac delta function , which does not seem to be known in the literature .
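as a hedged closing illustration ( the specific profile is our own choice ) : if the maxwell spring stiffens linearly , say \textsf{k}_0(t)=\kappa_0(1+t) for some \kappa_0>0 and t\ge 0 , then
\[
\frac{\dot{\textsf{k}}_0(t)}{[\textsf{k}_0(t)]^2}=\frac{1}{\kappa_0(1+t)^2}\to 0
\qquad\text{as } t\to\infty ,
\]
so the additional assumption above is verified , and the memory kernel collapses in the distributional sense onto the kelvin - voigt model : a material that keeps aging in this fashion eventually loses its long memory .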
we consider the model equation arising in the theory of viscoelasticity . here , the main feature is that the memory kernel depends on time , allowing us , for instance , to describe the dynamics of aging materials . from the mathematical viewpoint , this translates into the study of dynamical systems acting on time - dependent spaces , according to the newly established theory of di plinio _ et al . _ . in this first work , we give a proper notion of solution , and we provide a global well - posedness result . the techniques naturally extend to the analysis of the long - term behavior of the associated process , and can be exported to cover the case of general systems with memory in the presence of time - dependent kernels .