The class of problems known as assembly line balancing problems (ALBPs) concerns the optimization of processes related to the manufacturing of products via assembly lines. Their importance in the industrial world is shown by the fact that much research effort has been dedicated to many different types of ALBPs during the past 50-60 years. The specific problem considered in this paper is the so-called simple assembly line balancing problem (SALBP), a well-studied scientific test case. An assembly line is composed of a set of work stations arranged in a line and a transport system which moves the product to be manufactured along the line. The product is manufactured by executing a given set of tasks, each of which has a pre-defined processing time. In order to obtain a solution to a given SALBP instance, all tasks must be assigned to work stations subject to precedence constraints between the tasks. In the context of the SALBP, all work stations are considered to be of equal size. Moreover, the assembly line is assumed to move at a constant speed. This implies a maximum amount of time, the so-called _cycle time_, for processing the tasks assigned to each work station. The SALBP has been tackled with several objective functions, among which the following are the most studied in the literature:
* Given a fixed cycle time, the optimization goal consists in minimizing the number of necessary work stations. This version of the problem is referred to as SALBP-1.
* Given a fixed number of work stations, the goal is to minimize the cycle time. This second problem version is known in the literature as SALBP-2.
The feasibility problem SALBP-F arises when both a cycle time and a number of work stations are given and the goal is to find a feasible solution respecting both. In this work we deal with the SALBP-2 version of the problem. Concerning the comparison between SALBP-1 and SALBP-2, much of the scientific work has been dedicated to the SALBP-1. However, a considerable body of research papers also exists for the SALBP-2; an excellent survey was provided by . Approaches for the SALBP-2 can basically be classified as either _iterative solution approaches_ or _direct solution approaches_. Iterative approaches tackle the problem by iteratively solving a series of SALBP-F problems that are obtained by fixing the cycle time. This process is started with a cycle time that is set to some calculated upper bound. This cycle time is then decremented during the iterative process, which stops as soon as no solution for the corresponding SALBP-F problem can be found. In contrast to these indirect approaches, direct approaches intend to solve a given SALBP-2 instance directly. Heuristic as well as complete approaches have been devised for the SALBP-2.
among the existing complete methods we find iterative approaches such as the ones proposed in but also direct approaches such as the ones described in .moreover , the performance of different integer programming formulation of the salbp-2 has been evaluated in .the currently best - performing exact method is salome-2 .surprisingly this exact method even outperforms the existing heuristic and metaheuristic approaches for the salbp-2 .the most successful metaheuristic approach to date is a tabu search method proposed in .another tabu search proposal can be found in .other metaheuristic approaches include evolutionary algorithms and simulated annealing .finally , a two - phase heuristic based on linear programming can be found in , whereas a heuristic based on petri nets was proposed in .[ [ contribution - of - this - work . ] ] contribution of this work .+ + + + + + + + + + + + + + + + + + + + + + + + + + subsequently we propose to tackle the salbp-2 by means of an iterative approach based on beam search , which is an incomplete variant of branch & bound .the resulting iterative beam search algorithm is inspired by one of the current state - of - the - art methods for the salbp-1 , namely beam - aco .beam - aco is a hybrid approach that is obtained by combining the metaheuristic ant colony optimization with beam search . in this workwe propose to use the beam search component of beam - aco in an iterative way for obtaining good salbp-2 solutions .our computational results show indeed that the proposed algorithm is currently a state - of - the - art method for the salbp-2 .it is able to obtain optimal , respectively best - known , solutions in 283 out of 302 test cases .moreover , in further 9 cases the algorithm is able to produce new best - known solutions .[ [ organization - of - the - paper . ] ] organization of the paper .+ + + + + + + + + + + + + + + + + + + + + + + + + + in section [ sec : salbp-2 ] we present a technical description of the tackled problem .furthermore , in section [ sec : algo ] the proposed algorithm is described .finally , in section [ sec : results ] we present a detailed experimental evaluation and in section [ sec : conclusions ] we conclude our work and offer an outlook to future work .the salbp-2 can technically be described as follows .an instance consists of three components . is a set of tasks .each task has a pre - defined processing time .moreover , given is a precedence graph , which is a directed , acyclic graph with as node set .finally , is the pre - defined number of work stations which are ordered from 1 to .an arc indicates that must be processed before .given a task , denotes the set of tasks that must be processed before .a feasible solution is obtained by assigning each task to exactly one work station such that the precedence constraints between the tasks are satisfied .the objective function consists in minimizing the so - called cycle time .the salbp-2 can be expressed in the following way as an integer programming ( ip ) problem . to : this ip model makes use of the following variables and constants : is a binary variable which is set to 1 if and only if task is assigned to work station .the objective function ( 1 ) minimizes the cycle time . , while fixed cycle times are denoted by . 
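The IP model itself did not survive text extraction. A standard formulation that is consistent with the surrounding description and with the numbered constraints discussed next is sketched below; the notation (binary assignment variables $x_{ik}$, processing times $t_i$, cycle time $c$, number of stations $m$, task set $T$, precedence arcs $E$) is introduced here for illustration and is not taken from the original text.

\begin{align}
\min \quad & c \tag{1}\\
\text{s.t.} \quad & \sum_{k=1}^{m} x_{ik} = 1 & \forall\, i \in T \tag{2}\\
& \sum_{k=1}^{m} k\, x_{ik} \;\le\; \sum_{k=1}^{m} k\, x_{jk} & \forall\, (i,j) \in E \tag{3}\\
& \sum_{i \in T} t_i\, x_{ik} \;\le\; c & \forall\, k = 1,\dots,m \tag{4}\\
& x_{ik} \in \{0,1\}, \qquad c \ge 0 . \nonumber
\end{align}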
The constraints (2) ensure that each task is assigned to a single work station. Constraints (3) reflect the precedence relationships between the tasks. More specifically, if a task is assigned to a work station, all of its predecessors must be assigned to work stations with a smaller or equal index. The constraints (4) ensure that the sum of the processing times of the tasks assigned to a work station does not exceed the cycle time. Note that this model is not necessarily the most efficient IP model for solving the SALBP-2. An evaluation of several different models can be found in . The following solution representation is used for the description of the algorithm as given in Section [sec:algo]. A solution is an ordered list of sets of tasks, where each set contains the tasks that are assigned to the corresponding work station. Abusing notation, we henceforth call such a set a work station. Note that for a solution to be valid the following conditions must be fulfilled:
1. Each task is assigned to exactly one work station, that is, the station sets are pairwise disjoint and their union is the complete set of tasks.
2. For each task, all of its predecessors are assigned to the same or an earlier work station. This ensures that the precedence constraints between the tasks are not violated.
The reverse problem instance with respect to an original instance is obtained by inverting the direction of all arcs of the precedence graph. It is well known from the literature that tackling the reverse problem instance may lead an exact algorithm to an optimal solution faster or, respectively, may provide a better heuristic solution when tackled with the same heuristic as the original problem instance. Moreover, a solution to the reverse problem instance can easily be converted into a solution to the original problem instance. As mentioned in the introduction, the basic component of our algorithm for the SALBP-2 is beam search (BS), which is an incomplete derivative of branch & bound. Initially, BS was mainly used in the context of scheduling problems (see, for example, ). To date only very few applications to other types of problems exist (see, for example, ). In the following we briefly describe how one of the standard versions of BS works. The crucial aspect of BS is the parallel extension of partial solutions in several ways. At all times, the algorithm keeps a set of at most a fixed number of partial solutions, the so-called _beam_, whose maximal size is known as the _beam width_. At each step, a limited number of feasible extensions of each partial solution in the beam are selected on the basis of greedy information. In general, this selection is done deterministically. At the end of each step, the algorithm creates a new beam by choosing partial solutions from the set of selected feasible extensions. For that purpose, BS algorithms determine, in the case of minimization, a lower bound value for each extension; only the best extensions with respect to these lower bound values, up to the beam width, are included in the new beam. Finally, if any complete solution was generated, the algorithm returns the best of those. Note that the underlying constructive heuristic that defines feasible extensions of partial solutions and the lower bound function for evaluating partial solutions are crucial for the performance of BS. In the following we first present a description of the implementation of the BS component, before we describe the algorithmic scheme in which this BS component is used.
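As a point of reference for the generic scheme just outlined, the following is a minimal, problem-agnostic beam search sketch in Python. It is not the authors' pseudo-code (the problem-specific component, Algorithm [algo:bs], is described next); extend_partial_solution, lower_bound and is_complete are placeholders for the constructive heuristic and the bound function mentioned above.

```python
def beam_search(root, extend_partial_solution, lower_bound, is_complete,
                beam_width, max_extensions):
    """Generic beam search for minimization: keep at most `beam_width` partial
    solutions, extend each in at most `max_extensions` ways, and retain the
    extensions with the best lower bounds."""
    beam = [root]
    complete_solutions = []
    while beam:
        candidates = []
        for partial in beam:
            # generate up to `max_extensions` feasible extensions of `partial`
            for ext in extend_partial_solution(partial)[:max_extensions]:
                if is_complete(ext):
                    complete_solutions.append(ext)
                else:
                    candidates.append(ext)
        # keep only the `beam_width` extensions with the smallest lower bounds
        candidates.sort(key=lower_bound)
        beam = candidates[:beam_width]
    # for complete solutions the bound is assumed to coincide with the objective
    return min(complete_solutions, key=lower_bound) if complete_solutions else None
```

In the SALBP-F setting described below, a partial solution is a prefix of filled work stations, an extension fills the next station, and the search succeeds as soon as any complete solution using at most the given number of stations is found.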
The BS component described in this section (see Algorithm [algo:bs] for the pseudo-code) is the main component of the proposed algorithm for the SALBP-2. The algorithm requires a problem instance, a fixed cycle time, a beam width, and a maximal number of extensions as input. Given a fixed cycle time and the number of work stations, BS tries to find at least one feasible solution. As mentioned before, the crucial aspect of BS is the extension of partial solutions in several possible ways. At each step the algorithm extends each partial solution from the beam in a maximum number of ways. More specifically, given a partial solution with some work stations already filled, an extension is generated by assigning a set of so-far unassigned tasks to the next work station such that the given cycle time is not surpassed and the precedence constraints between the tasks are respected (see lines 11-12 of Algorithm [algo:bs]). The algorithm produces extensions in a (partially) probabilistic way rather than in the usual deterministic manner. Each generated extension (partial solution) is stored either in the set of complete solutions, in case it is complete, or in the set of further extensible partial solutions otherwise (see lines 13-19 of Algorithm [algo:bs]). However, a partial solution is only stored in the latter set if it uses at most the allowed number of work stations, and if its newly filled work station is different from the corresponding work station of all partial solutions that are already in the set. Finally, BS creates a new beam by selecting up to beam-width many solutions from the set of further extensible partial solutions (see line 22 of Algorithm [algo:bs]). This is done in function selectSolutions() on the basis of a lower bound function lb. In the following we describe in detail the extension of partial solutions and the working of function selectSolutions(). The input of Algorithm [algo:bs] consists of an instance, a fixed cycle time, a beam width, and a maximal number of extensions; starting from an empty solution, it repeatedly calls extendPartialSolution() and selectSolutions(), and returns the best complete solution found, if any. Extending partial solutions. The generation of an extension of a partial solution with some work stations already filled works as follows. Unassigned tasks are iteratively assigned to the current work station until the sum of their processing times is such that no other task can be added without exceeding the given cycle time. This procedure is pseudo-coded in Algorithm [algo:fill-station]. At each step, the set of _available tasks_ denotes the so-far unassigned tasks that may be added to the station without violating any constraints; its definition is given in line 3, respectively line 8, of Algorithm [algo:fill-station]. The input of Algorithm [algo:fill-station] consists of a partial solution, the index of the work station to be filled, and the cycle time; its output is the filled work station. It remains to describe the implementation of function chooseTask() of Algorithm [algo:fill-station]. For that purpose let us first define the subset of available tasks that _saturate_, in terms of processing time, the work station under consideration. The choice of a task from this subset is made on the basis of greedy information, that is, on the basis of values that are assigned to all tasks by a greedy function. The first action for choosing a task consists in flipping a coin to decide whether the choice is made deterministically or probabilistically.
In case of a deterministic choice, there are two possibilities. First, if there is at least one saturating task, the best such task is chosen, that is, the task with maximal greedy value among all saturating tasks. Otherwise, we choose the task with maximal greedy value from the set of available tasks. In case of a probabilistic decision, a task is chosen at random according to a probability distribution derived from the greedy values. For completing the description of function chooseTask(), we must describe the definition of the greedy values. In a first step a term is defined for each task on the basis of its processing time and of the set of all tasks that can be reached from it in the precedence graph via a directed path. This definition combines two greedy heuristics that are often used in the context of assembly line balancing problems: the first one concerns the task processing times and the second one concerns the size of the successor set. The influence of both heuristics can be adjusted via the setting of two weights. In order to be more flexible we decided to allow both weights to take values from [0,1]. Instead of trying to find a good parameter setting for each instance, we decided for a tuning process based on precedence graphs, that is, we wanted to choose a single setting of the two weights for all instances concerning the same precedence graph. For that purpose we applied a specific version of the iterative beam search (IBS) algorithm for all weight combinations to all 302 instances; this makes a total of 441 different settings for each instance. The specific version of IBS that we applied for the tuning process differs from IBS as outlined in Algorithm [algo:scheme] in that lines 14-24 were replaced by a single, deterministic application of beam search; this was done for the purpose of saving computation time. Based on the obtained results we chose the settings presented in Table [tab:tuning] for the different precedence graphs. It is interesting to note that, apart from a few exceptions, the greedy heuristic based on task processing times does not seem necessary for obtaining good results. Table [tab:tuning] lists the values of the two weight parameters used for the final experiments. In this work we have proposed an iterative beam search algorithm for the simple assembly line balancing problem with a fixed number of work stations, SALBP-2. The experimental evaluation of the algorithm has shown that it is currently a state-of-the-art method for this problem. Apart from producing optimal, respectively best-known, solutions in 283 out of 302 test cases, our algorithm generated new best-known solutions in a further 9 test cases. Encouraged by the results for the SALBP-1 version of the problem (as published in ) and the results obtained in this paper for the SALBP-2, we intend to apply similar algorithms based on beam search to other assembly line balancing problems. This work was supported by grant TIN2007-66523 (FORMALISM) of the Spanish Government. Moreover, Christian Blum acknowledges support from the _Ramón y Cajal_ program of the Spanish Ministry of Science and Innovation. Many thanks go to Armin Scholl for verifying the new best-known solutions found by the algorithm proposed in this work. Finally, we would also like to express our thanks to Cristóbal Miralles, who was involved as a co-author of a similar work on the more general assembly line worker assignment and balancing problem. H. Akeba, M. Hifib, and R. Mhallah. A beam search algorithm for the circular packing problem., 36(5):1513-1528, 2009. E. J. Anderson and M. C.
ferris .genetic algorithms for combinatorial optimization : the assembly line balancing problem ., 6:161173 , 1994 . c. blum . for simple assembly line balancing, 20(4):618627 , 2008 .c. blum , m. j. blesa , and m. lpez ibez .beam search for the longest common subsequence problem ., 36(12):31783186 , 2009 . c. blum and c. miralles . on solving the assembly line worker assignment and balancing problem via beam search, 38(1):328339 , 2011 .the application of a tabu search metaheuristic to the assembly line balancing problem ., 77:209227 , 1998 .m. ghirardi and c. n. potts .makespan minimization for scheduling unrelated parallel machines : a recovering beam search approach ., 165(2):457467 , 2005 .s. gosh and r. j. gagnon . a comprehensive literature review and analysis of the design , balancing and scheduling of assembly systems ., 27:637670 , 1989 .s. t. hackman , m. j. magazine , and t. s. wee .fast , effective algorithms for simple assembly line balancing problems . , 37:916924 , 1989 .a. henrici . a comparison between simulated annealing and tabu search with an example from the production planning . in h.dyckhoff et al . , editor , _ operations research proceedings _ , pages 498503 .springer verlag , berlin , germany , 1994 .o. kilincci .a petri net - based heuristic for simply assembly line balancing problem of type 2 . , 46:329338 , 2010 .r. klein and a. scholl . maximizing the production rate in simple assembly line balancing a branch and bound procedure ., 91:367385 , 1996 .lee and d. l. woodruff .beam search for peak alignment of nmr signals ., 513(2):413416 , 2004 .a. c. nearchou .balancing large assembly lines by a new heuristic based on differential evolution .34:10161029 , 2007 . p. s. ow and t. e. morton .filtered beam search in scheduling ., 26:297307 , 1988 . r. pastor , l. ferrer , and a. garca . evaluating optimization models to solve salbp . in o.gervasi and m. l. gavrilova , editors , _ proceedings of iccsa 2007 international conference on computational science and its applications _ ,volume 4705 of _ lecture notes in computer science _ , pages 791803 .springer verlag , berlin , germany , 2007 .i. sabuncuoglu and m. bayiz .job shop scheduling with beam search ., 118:390412 , 1999 .m. e. salveson .the assembly line balancing problem ., 6:1825 , 1955 .a. scholl . . in h.dyckhoff et al . , editor , _ operations research proceedings _ , pages 175181 .springer verlag , berlin , germany , 1994 .a. scholl . .physica verlag , heidelberg , germany , 2nd edition edition , 1999 .a. scholl and c. becker .state - of - the - art exact and heuristic solution procedures for simple assembly line balancing . , 168(3):666693 , 2006 .a. scholl and s. voss .simple assembly line balancing heuristic approaches ., 2:217244 , 1996 .h. f. ugurdag , r. rachamadugu , and c. a. papachristou . designing paced assembly lines with fixed number of stations ., 102:488501 , 1997 .j. m. s. valente and r. a. f. s. alves . filtered and recovering beam search algorithms for the early / tardy scheduling problem with no idle time ., 48(2):363375 , 2005 .t. watanabe , y. hashimoto , l. nishikawa , and h. tokumaru .line balancing using a genetic evolution model ., 3:6976 , 1995 . | the simple assembly line balancing problem ( salbp ) concerns the assignment of tasks with pre - defined processing times to work stations that are arranged in a line . hereby , precedence constraints between the tasks must be respected . 
the optimization goal of the salbp-2 version of the problem concerns the minimization of the so - called cycle time , that is , the time in which the tasks of each work station must be completed . in this work we propose to tackle this problem with an iterative search method based on beam search . the proposed algorithm is able to obtain optimal , respectively best - known , solutions in 283 out of 302 test cases . moreover , for 9 further test cases the algorithm is able to produce new best - known solutions . these numbers indicate that the proposed iterative beam search algorithm is currently a state - of - the - art method for the salbp-2 . |
Beyond the original goal of detecting gravitational waves, modern interferometric sensors (IFOs) (such as the future Advanced LIGO (aLIGO), the Advanced Virgo (aVirgo), the LCGT, the AEI 10 m interferometer, and the proposed third-generation detectors such as the Einstein Telescope (ET)) can be viewed as unique pioneering instruments capable of measuring induced displacements below the scale . This feature opens up myriad new possibilities for fundamental science. In searches for gravitational waves (GW), the signature of the induced displacement of the IFO's test mass (TM) ranges from the basically unknown to the well-predicted but as yet undetected. In contrast to this, one may design a precision experiment in which the local gravitational field around the TM of the interferometric sensor is modulated at a given frequency in a well-controlled manner by a dynamic gravity field generator (DFG). A DFG consists essentially of a rotating mass with null odd and non-null even moments, of which the quadrupole term dominates. In the Newtonian limit, its effect on the detector is to add a signal at the even harmonics of the rotation frequency. A well-characterized DFG has the potential to provide sub-percent calibration for GW detectors in phase and amplitude, as well as to evaluate current theoretical Newtonian noise estimates; this is the subject of a separate publication. It has been shown in the past that devices capable of producing gravity field gradients can be employed, in conjunction with a suitable displacement sensor, for testing Newton's law on the laboratory scale. In the first experiment, in 1967, Forward and Miller used an orbiter sensor, originally developed for measuring the lunar mass distribution, to test Newton's law in the scale . Weber and Sinsky used a GW bar detector as a sensor, acoustically stressing a volume of matter at , and measured an excess of noise in the detector consistent with theory. In the 1980s, at the University of Tokyo, a series of experiments were carried out to test violations of the inverse square law (ISL) up to a distance of . In these studies, the coupling between the dynamic gravity field generated by a rotating mass and the quadrupole moment of a mechanical oscillator antenna was measured as a function of the rotor-antenna separation. With this method, limits on non-Newtonian gravity were provided and a measurement of the Newtonian constant, G, was found in agreement with previous experiments. In the 1990s, the gravitational-wave group at the University of Rome developed and carried out experiments on the cryogenic GW bar detector EXPLORER at CERN. A device rotating at a frequency close to half of the antenna's resonant frequency was developed, and the resulting dynamic field was measured as a function of the source-sensor separation. The results were then used to derive upper limits on Yukawa-like gravitational potential violations at laboratory scale. The experimental designs cited above consisted in the use of a single DFG in conjunction with bar-type GW detectors. In this article we propose a significantly different design, exploiting the exceptional bandwidth and sensitivity of state-of-the-art interferometric sensors. We illustrate the method through the sensitivity of advanced and third-generation gravitational wave detectors to emphasize the feasible scientific reach.
however , it is expected that due to the technical details of gravitational wave detectors , the dfgs based isl measurements shall require dedicated custom configured interferometry .nevertheless , the sensitivity measure we derive is a solid baseline .the concept is based on a null - experiment configuration , where a _ pair _ of well matched and symmetrical dfgs , rotating at the same frequency but out of phase , generates a null - signal in the absence of violations to newton s law . in the presence of violations , and within experimental uncertainties , the effect of the two dfgs would not cancel , and a measure of such deviations would be achievable . + the most common way of interpreting composition independent tests of non - newtonian gravity ( for reviewsee e.g. and ) is through the yukawa formalism , where an additional so - called yukawa term ( ) is added to the classical newtonian gravitational potential ( ) : \ ] ] in this expression is the gravitational constant , and are the interacting masses , and denotes the distance between the two point masses .the yukawa parameters of interest are , denoting the yukawa interaction coupling strength , and , giving the length scale of the coupling . for a yukawa range of , the current limit onthe coupling strength , , is in the order of for both negative , positive , and absolute value of .( see fig .[ summary ] . ) .the concept designed here shows promise to explore deviations from isl significantly below present bounds , maybe even down to the order of if enabled by technology and cost .in this work we study the application of future interferometric sensors for artificially generated gravity fields by a pair of hypothetical dfgs in a null experiment configuration .we show that a pair of symmetrical and matched dfgs cancel each other s effect on the tm at twice the rotation frequency in the newtonian limit .thus we relate the estimated test mass displacement to non - newtonian parametrization of the law of gravitation .we also consider the contribution from some of the undesired system asymmetries due to production / measurement uncertainties .we numerically compute the tolerances of a hypothetical experimental setup to measure yukawa - like deviations from the law and we estimate a limit on the coupling strengths , -s , that might be achieved with future measurements .consider a hypothetical dfg , consisting of two point masses , and , separated by a distance of and respectively from the center of rotation , rotating at a frequency of .a point mass , , representing the tm , is then placed at distance from the dfg s center of rotation .and are rotating with a frequency of and radii of and from the center of rotation , respectively .the dfg center of rotation is placed in a distance of from the mass , representing the tm center of mass .the distance between the dfg masses and the tm are and , respectively , , and only accelerations of the tm along the ifo cavity axis are considered.,title="fig:",width=326 ] + we first calculate the acceleration of induced by the dfg along the axis connecting and the center of rotation of the dfg .this axis corresponds to the optical axis of the ifo . 
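Before proceeding with the derivation, we recall the form of the modified potential referred to above. The printed expression was lost in extraction; the standard Yukawa-modified Newtonian potential, consistent with the surrounding definitions ($G$ the gravitational constant, $m_1$, $m_2$ the interacting point masses, $r$ their separation, $\alpha$ the coupling strength, and $\lambda$ the length scale of the coupling), reads

\[
V(r) \;=\; V_\mathrm{N}(r) + V_\mathrm{Y}(r) \;=\; -\,\frac{G\, m_1 m_2}{r}\,\Big[\, 1 + \alpha\, e^{-r/\lambda} \,\Big].
\]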
introducing the dimensionless parameters , and ,the distance between the dfg s -th mass and the point mass , , can be written as where .the newtonian potential at the position of point mass is then given by the magnitude of the induced acceleration on along the optical axis can be written as here is a geometrical factor as we have shown elsewhere , for the case of a much smaller lever arm than distance ( ) , considering only the dominant terms in the first few harmonics , the induced displacement of a free mass along the axis connecting it to the dfg s center of rotation can be approximated as }\end{aligned}\ ] ] where describes the -th multipole moment of the dfg s mass distribution .we observe that using eq .( [ displ2 ] ) and in the case of a dfg mass distribution invariant under rotation by pi , all odd moments vanish and the induced displacement is dominated by the quadrupole moment at twice the rotation frequency .we note , that the solution represented in eq .( [ displ2 ] ) is valid for a free body , which , in case of a suspended tm is a good approximation for frequencies well above the eigenfrequencies of the suspension ( typically around ) . in the same way as above, we calculate the field dynamics arising from a yukawa - like potential perturbation to the classical newtonian field . in view of the fact that the potentials are additive and all operations to calculate the acceleration are linear, the acceleration of point mass due to the two potential terms will also be additive .using eq .( [ pot ] ) , the yukawa perturbation to the newtonian potential at due to the dfg can be expressed as where is given by eq .( [ hi ] ) . in the same way ,the magnitude of the tm induced acceleration along the optical axis is where , and is a function of the length scale of the yukawa coupling , : } & \times & \\ \nonumber & & \hskip -1.5 in \times \exp{\bigg{(}-\frac{d}{\lambda } \sqrt{1 + r_\mathrm{i}^2 - 2r_\mathrm{i } \cos \theta}\bigg{)}}\end{aligned}\ ] ] analytical integration of eq .( [ cla_acc ] ) and eq .( [ yuk_acc ] ) can not be easily formulated without using some additional level of approximation .for this reason , we treat the problem numerically , facilitating the treatment of some of the experimental uncertainties .we have modeled both the newtonian and the non - newtonian dynamics on the tm while taking into account many of the dfg fabrication and measurement procedural uncertainties . in this workthe tm is approximated as a driven , damped pendulum .let the pendulum s longitudinal eigenfrequency be with quality factor .casting the differential equations of motion ( eq . ( [ cla_acc ] ) and eq . ( [ yuk_acc ] ) ) in the laplace frequency domain , the transfer function relating the acceleration to the induced displacement is where ( the upper index denotes that the same expression applies to both newtonian and yukawa case ) .we used monte carlo simulations to model a null - experiment design using dfgs to measure hypothetical violations to the classical inverse square law . 
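The suspension transfer function referred to above also did not survive extraction. For a driven, damped harmonic oscillator with longitudinal eigenfrequency $f_0$ (angular frequency $\omega_0 = 2\pi f_0$) and quality factor $Q$ it has the standard form, quoted here as a reconstruction rather than as the paper's exact expression:

\[
\frac{X(s)}{A(s)} \;=\; \frac{1}{\,s^{2} + \dfrac{\omega_0}{Q}\, s + \omega_0^{2}\,},
\]

so that well above the suspension resonance the test mass responds essentially as a free mass, $|X(i\omega)| \approx |A(i\omega)|/\omega^{2}$. This is the filter applied to the simulated acceleration time series in the Monte Carlo procedure described next.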
to do this , we first numerically computed the induced acceleration due to the newtonian and non - newtonian terms using eq .( [ cla_acc ] ) and eq .( [ yuk_acc ] ) .these results , generated in the time domain , are then mapped in the fourier - domain to generate an acceleration spectrum , which is then filtered through the transfer function presented in eq .( [ susp ] ) .the displacement contributions at the different harmonics , for different sets of parameters and uncertainties are then computed and analyzed . to assess the short and long term feasibility of the proposed method , the simulated results are compared to the displacement sensitivity of future most sensitive interferometric gw detectors .[ senscurves ] . shows the design sensitivities for aligo , avirgo , the et ( for a reference to these , see ) , lcgt , and the aei 10 m detector .two conceptualized dfgs are placed in - line with one of the ifo s arms at a distance of and from the tm .the dfgs positions and quadrupole moments are chosen such that when operated at a relative phase of the net displacement at will largely depend on the non - newtonian coupling strength .,title="fig:",width=326 ] + in order to mitigate the effect of the ifo s calibration uncertainty , two dfgs rotating at the same frequency ( ) , but out of phase ( ) , are set up to generate a null effect on the tm in the newtonian limit at twice the rotation frequency .for this reason , the uncertainties of the proposed measurement , whose setup is shown in fig . [ tworotord ] ., will solely depend on the production and procedural uncertainties of dfgs .for example , similarly to the one proposed in as a detector calibration tool , a single hypothetical dfg may consist of an aircraft grade ( 6al/6v/2sn ) titanium disc in diameter and in height .the disc has two cylindrical slots , apart ( ) , which can hold different materials .tungsten cylinders , in diameter , can serve as rotating masses , with effective mass each , where and are mass densities of tungsten and titanium respectively , and is the volume of a cylinder . in the ideal case of symmetrical and matched dfgs ,no effect on the tm is observed at in the absence of violations . in the classical sense , this condition is satisfied if the dfg radii and placement follow the scaling laws , where indices and are associated with the two dfgs , and is the mass ratio ( see fig .[ tworotord ] . ) .note , that if we take into account the cylindrical geometry of the tm , the scaling laws presented in eq .( [ etarelations ] ) will differ by o , where the corresponding and will still be analytically calculable .in non - newtonian dynamics , using the yukawa formalism , limits on the parameter alpha are given as a function of the yukawa range , lambda .first we address an ideal case , where the parameters of both dfgs are known exactly . as an example , let s assume the first dfg , with parameters described above , positioned at from the center of mass of the ifo tm .specifications and position for the second dfg are determined by eq .( [ etarelations ] ) . on fig .we plot the interaction range for which the tm displacement at is maximal , , as a function of ( continuous line ) . on the same plot, the dashed line shows the corresponding tm rms displacement at due to the yukawa term . 
using the above case and , fig .gives an interaction range of , corresponding to a maximum displacement of the tm due to the yukawa term with an rms value of .for the case of the aei 10 m detector with a noise floor of at , and an integration time of day , a limit of can be provided with a signal to noise ratio ( defined as the ratio of the rms signal to the displacement noise spectrum density integrated for a time ) of , given the basic physical limitations described later in this section .in general terms , for an arbitrary noise floor , and integration time , the signal to noise ratio scales as expressed in units , where denotes the yukawa interaction range , for which the rms displacement due to the yukawa term is maximal . the corresponding rms displacement of the tm ( using , , , and ) , expressed in units , is shown by the dashed curve.,title="fig:",width=326 ] + there are two fundamental classes of effects that limit our capability of measuring an inverse - square - law - violating gravitational force within the experimental concept .the first one is due to the noise level of the ifo , which effect can be suppressed by using sufficiently long integration times , as discussed in sec .[ yuk ] . in this sectionwe discuss some of the effects from the second class , which are due to the finite measuring and manufacturing precision when setting up the null - measurement .errors due to conceivable machining precision and procedure , uncertainties related to measurement of parameters of the final dfg products as well as their final placement relative to the tm and to each other will all infer an uncertainty in the induced displacement measurement , and propagate an error into the measurement of the non - newtonian parameter . in case of purely symmetric dfgs positioned in a null experiment configuration , and taking into account only the newtonian field component ,the induced displacement of the tm will be dominated by a term . in the presence of asymmetries ,the displacement will also have components at the odd harmonics and the classical terms at will not cancel each other completely .this means that the newtonian peak due to this imperfect cancellation will appear at , and will contaminate a potential yukawa - signal from the very beginning of the measurement . first , we calculate the achievable lower limits on the yukawa strength parameter , , in case of ideal dfgs with no parameter errors .we chose the parameters of the first dfg to be , , and , and maximized the integration time of the measurement at s ( months ) . because of the slight dependence of the tm response on ( see fig .[ eta ] . ) , and taking into account manufactural limitations , we set to 2 .we chose a conservative limit of as the condition for detection .the optimal dfg frequencies that maximize the snr in terms of for the different detectors ( see fig . 
[ senscurves ] ) , along with the lowest achievable limits on the strength parameter , , are given in table [ classicerrors ] .results on as a function of the yukawa scale parameter , , for five different ifos are provided in fig .note , that these results are also valid for the case , when the measurement setup errors are kept low enough to allow the detector noise levels to be the main limiting factors .[ classicerrors ] in order to study the feasibility and real - life limitations of null - experiment concept , we numerically studied the effect of uncertainties associated with the system parameters included in the simulations ( rotation frequency ( ) , mass ( ) and radius ( ) of each dfg , as well as the dfg positions relative to the tm and to each other ( and ) ) , thereby mimicking a realistic construction and measurement procedure . to do this , a large number ( ) of hypothetical setups were monte carlo simulated with the dfg parameters normally distributed around a mean .the mean of the parameter distributions ( e.g. kg , and = 0.2 m ) are chosen such as to maximize the response of the ifo in terms of , while taking into account the technically achievable range of values for each parameter , as well as safety issues .we again set to for all setups , similarly to the ideal case .following eq .( [ etarelations ] ) , in this case determines the means of the distributions corresponding to parameters of the second dfg . the means of dfg operational frequencies were chosen to be equal to the values given in table [ classicerrors ] .the low optimal dfg frequencies allow the effective masses of dfg fillings for the first dfg ( rotating closer to the ifos vacuum chamber ) to .the highest corresponding mechanical energy of a dfg ( ; ; ) would still be less than of the energy proposed in a similar experiment described in .note , that as an alternative to the dfg geometry we proposed in sec .[ yuk ] , a rotating mass design similar to the one suggested by can also be used in the measurement .naturally , we are seeking to achieve the highest precision measurement currently possible or feasible in the near future , and we let the parameter uncertainties vary to values corresponding to state - of - the - art limits with the investment of plausible amount of work and finances .our goal is to find , using monte carlo simulations , the limits on , allowed by using a two - dfg configuration with parameter uncertainties .each parameter associated to hypothetical setups were randomized independently , from a gaussian distribution , using the same variances for corresponding parameters of different setups .we used a chosen set of variances for the parameter distributions ( discussed in sec .[ feasibility ] ) that are within the limits of current state - of - the - art machining and measurement technologies , and at the same time , minimize the newtonian peak we get at due to imperfect cancellation .we generated two - dfg setups , and determined the 95% quantile of the induced newtonian displacement values at ( from now on , denoted by ) .note that this quantile is completely independent from the detector noise and thus from the measurement integration time .we then , for each , calculated the value for which the resulting yukawa peak at equals .the limits on defined this way , versus , are shown in fig .[ perf ] . 
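The sketch below illustrates, in Python, the Monte Carlo estimate of the 95% quantile just described. It is a deliberately simplified toy model: the two DFGs are treated as pairs of Newtonian point masses, the suspension response and the finite test-mass geometry are ignored, the nominal parameter values (30 kg filling masses, 0.2 m arms, 2 m stand-off, mass ratio 2, 11 Hz rotation) are only loosely based on the numbers quoted in the text, and the common 1e-5 relative parameter scatter stands in for the detailed uncertainty budget of the next section.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def dfg_accel(m, r, d, t, f, phase):
    """TM acceleration along the optical axis from one DFG: two point masses
    m[0], m[1] on arms r[0], r[1], rotation centre at distance d from the TM,
    rotating at frequency f with initial phase `phase`."""
    a = np.zeros_like(t)
    for mi, ri, off in zip(m, r, (0.0, np.pi)):   # the two masses sit opposite each other
        theta = 2.0 * np.pi * f * t + phase + off
        x = d + ri * np.cos(theta)                # TM-to-mass vector components
        y = ri * np.sin(theta)
        a += G * mi * x / np.hypot(x, y) ** 3
    return a

def amp_2f(a, t, f):
    """Magnitude of the 2f Fourier line of the time series a(t)."""
    return np.abs(2.0 * np.mean(a * np.exp(-4j * np.pi * f * t)))

def residual_2f(p, t, f):
    """Residual 2f acceleration of the null configuration: the second DFG is
    rotated by pi/2 so that its 2f line is in anti-phase with the first one."""
    a = (dfg_accel(p["m1"], p["r1"], p["d1"], t, f, 0.0)
         + dfg_accel(p["m2"], p["r2"], p["d2"], t, f, 0.5 * np.pi))
    return amp_2f(a, t, f)

f = 11.0                                          # example rotation frequency (aLIGO case)
t = np.arange(0.0, 4.0 / f, 1.0 / (512.0 * f))    # four rotation periods, finely sampled

nominal = dict(m1=np.array([30.0, 30.0]), r1=np.array([0.2, 0.2]), d1=2.0,
               m2=np.array([60.0, 60.0]), r2=np.array([0.4, 0.4]), d2=3.5)

# tune the nominal d2 by bisection so that the ideal setup is an exact null at 2f
target = amp_2f(dfg_accel(nominal["m1"], nominal["r1"], nominal["d1"], t, f, 0.0), t, f)
lo, hi = nominal["d1"] + 0.5, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    a2 = amp_2f(dfg_accel(nominal["m2"], nominal["r2"], mid, t, f, 0.0), t, f)
    lo, hi = (mid, hi) if a2 > target else (lo, mid)
nominal["d2"] = 0.5 * (lo + hi)

# Monte Carlo over fabrication/placement errors: 95% quantile of the Newtonian residual
rng = np.random.default_rng(1)
residuals = []
for _ in range(2000):
    p = {k: v * (1.0 + 1.0e-5 * rng.standard_normal(np.shape(v))) for k, v in nominal.items()}
    residuals.append(residual_2f(p, t, f))
q95 = np.quantile(residuals, 0.95)
print(f"95% quantile of the residual Newtonian 2f acceleration: {q95:.3e} m/s^2")
```

The quantile obtained this way plays the role described above: the limit on the coupling strength is the value for which the Yukawa line at 2f equals this Newtonian residual.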
via laser interferometric methodproposed here .the grey shadings show the parameter regions excluded by previous measurements that were sensitive to negative , positive , and absolute alpha values only .the thick black line corresponds to the limit achievable with an ongoing experiment , denoted by uw - uci .the red curves show the results for various ifos in case of infinite dfg machining precision , using an integration time of sec .the ifo+pa curve gives the 95% confidence limits for all ifos but lcgt and aei 10 m , when allowing dfg parameter uncertainties given in sec .[ feasibility ] .the limiting factor in the measurement is the finite integration time when using lcgt and aei 10 m , and the finite machining precision in case of all the other ifos .the coloured background areas correspond to different theoretical models presented in , that predict yukawa parameters within the areas .our tolerable parameter uncertainties were chosen such as to give the best coverage of the parallelogram corresponding to model `` t '' , while taking into account the feasible parameter adjustment precision.,title="fig:",width=326 ] + using a two - dfg setup with allowed parameter uncertainties given in sec .[ feasibility ] , the limiting factor in case of the lcgt and aei 10 m ifo is its relatively high noise level at .however this means , that we can tolerate higher parameter uncertainties compared to the ones given in sec .[ feasibility ] , when using lcgt and aei 10 m and an integration time of 4 months . in case of the other three ifos ( aligo , avirgo , and et ) the limiting factor is the imperfect cancellation of the newtonian peaks at due to parameter errors , when considering a 4 months long integration time . therefore in order to reach the best possible limits for the chosen set of parameter uncertainties, we can use a reduced integration time of o( ) , depending on which ifo we choose .the reach of the allowed exclusion region suggests that the displacement sensitivity of any dedicated ifo for the null - measurement should have comparable displacement sensitivity to advanced interferometric gravitational wave detectors .custom configured interferometry could possibly have several advantages such as double dfg compatible suspensions , being able to reduce the distance between dfgs and the ifo test mass , and thus to obtain better measurement results .monte carlo test results presented in sec .[ yukawaunc ] were provided by using a chosen set of dfg parameter uncertainties , that gives the best coverage of yukawa parameter predictions given by the `` t '' model ( see for more details ) , and that are still within the feasible measuring and machining precision .an optimization of tolerable dfg parameter uncertainties in terms of measuring an isl - violating gravitational force can be given with the fisher matrix method , however this analysis is beyond the scope of this proof - of - concept paper . in this sectionwe discuss the dfg parameter uncertainties that correspond to the ifo+pa curve given in fig .note , that we assumed that the parameter uncertainties are kept at the same levels throughout the whole integration time of the yukawa measurement .we also discuss the possible methods in measuring and machining that allow us to reach the chosen dfg parameter precision . in the conceptual experimental setup considered the two dfgs are rotating at 11 hz , 20 hz , 5 hz , 26 hz , and 5 hz using aligo , avirgo , et , lcgt , and aei 10 m , respectively . 
in all cases the two - dfgs in a setup have to be in a relative phase of . in our simulations we chose the following measurement uncertainties : * uncertainty in the frequency of the first dfg ( absolute measurement ) : * uncertainty in the frequency of the second dfg , relative to the first dfg ( relative measurement ) : * uncertainty in the initial phase difference between the two dfgs ( absolute measurement ) : frequencies and the relative phase between the different dfgs can be finetuned to their desired value by calibrating and locking the dfgs rotation signal with an accurate and low - phase - noise clock ( e.g. , good cesium or gps ) , also used for time - stamping the ifo data . a sophisticated dfg frequency ( and phase )control system can be constructed allowing nanosecond timing precision . in our simulations we used tungsten filling masses of 30 kg for the first , and kg for the second dfg .the masses had the following permissible uncertainties : * uncertainty in the exact mass of one of the tungsten cylinders in the first dfg ( absolute measurement ) : * uncertainty in the mass of any other dfg cylinders of the two - dfg setup ( relative measurement ) .the mass of one of the tungsten cylinders built into the second dfg has to be times the mass of the one in the first dfg , where we proposed to use .this way we can use a relative measurement , and thus , we have to adjust the mass of one of the cylinders in the second dfg to the sum of the masses in the first dfg .we adjust the mass of the second tungsten cylinder in the second dfg to the mass of the first cylinder in the same dfg .the mass of the second cylinder in the second dfg can be then finetuned to the mass of the first cylinder within the same dfg . the tolerable uncertainty of these relative measurements and mass adjustments : for the dfg mass measurements, one can use scales such as the sartorius me415s scale , that can determine the absolute value of the masses with an uncertainty of kg with 410 g weighing capacity .using state - of - the - art mass comparators allows us to reduce the relative measurement ( mass - to - mass comparison ) error to kilograms ( e.g ) . 
a complete tungsten cylinder can be built from an assembly of several smaller disks , allowing us to reduce uncertainties with precision scales having lower weighing capacity .these smaller and lighter components of the tungsten fillings are also easier to be manufactured and handled .a combination of precision absolute scales , mass comparators , and fine mass adjustment through abrasion leads to precise mass standards and also allows manufacturing of matched mass pairs beyond the load limit of the absolute scale .arm lengths and dfg positions have to be set to definite values ( in our example case , , , and ) to achieve precise cancellation of the newtonian component .the corresponding parameters have the following uncertainties : * uncertainty in the distance between the first dfg s rotation center and the tm s center of gravity ( absolute measurement ) : * uncertainty in the distance between the two dfgs relative to ( relative measurement ) : * uncertainty in the length of one arm in the first dfg ( absolute measurement ) : * uncertainty of in the equality of arms within one dfg ( ) , and the uncertainty in the length of one arm of the second dfg relative to ( ) ( relative measurements ) : distance - like quantities ( -s and -s ) can be measured using interferometric methods with accuracy significantly surpassing the wavelength of the laser light .the exact technique of the measurement greatly depends on the dfg placement , accelerating and controlling configuration , and is beyond the scope of this article .setting the arm lengths within one dfg to be equal with a precision of m is challenging , but possibly feasible .the uncertainty of equivalence of arms within one dfg will be in the order of the equivalence of masses within the dfg , and the uncertainty of being zero. the feasible precision of mass - to - mass comparison can be as low as kg . in case of a non - zero , the axis of rotation of a spinning dfg will subject a periodic driving force ( that is proportional to ) , causing the support of the axis to vibrate .the amplitude of the vibration can be measured with high - precision interferometry , and the placement of the center of rotation can be adjusted until the vibration amplitude is zero within the measurement precision .the adjustment can be carried out using much higher dfg frequencies than the values given in table [ classicerrors ] , allowing an amplification of dfg vibrations in case of a non - zero .the non - newtonian and the classical gravitational potentials scale differently with , the distance between the tm and the center of rotation of a dfg . using the two dfgs positioned in a null experiment configuration , and measuring the tm displacement spectrum in case of different -s , can also help us to determine measurement and setup inaccuracies other than errors of .it is possible that periodically varying one or more of the tunable parameters ( e.g. , distance or phase ) will allow precise , tuning of the cancellation necessary for the null - measurement .also , an additional effect can be exploited to estimate the dominating error in the two - dfg setup parameters . by applying the parameter uncertainty tolerances given in this section ,the tm s induced displacement at due to imperfect cancellation of the newtonian term will be orders of magnitudes above the detector noise level and the yukawa term at , and thus will be detectable . 
by monitoring this signal at we can give a rough estimation on the dominating error in the parameters of the two - dfg setup .we investigated the projected contributions of the different parameter uncertainties , relative to each other , to the uncertainty of an measurement . to do this, we carried out a series of monte carlo simulations , where in each simulation , we set the relative uncertainty of a chosen parameter to , and kept the uncertainties of all the other parameters at zero level .we simulated two - dfg setups , and calculated the relative error of obtained from the hypothetical measurements of . in order to characterize the relative dominance of the different parameters compared to each other, we normalized the measured relative errors of with the highest such relative error value we obtained .this way , our final results were basically independent from the relative uncertainty chosen for each individual setup parameters .the resulting relative dominances of the different parameter uncertainties in the measurement are given in table [ alpharelativeerrors ] .our results show that the dominant parameter - for which the allowed uncertainty , , should be kept as low as possible - is the distance between the two dfgs , relative to ..the relative dominance ( rd ) of parameter uncertainties in terms of their contribution to the uncertainty of in a hypothetical measurement .we carried out monte carlo trials for each uncertainties , setting the chosen uncertainty to a relative value , and while setting all the other uncertainties to zero .the relative dominance for each parameter uncertainty was obtained by calculating the relative uncertainty of measured from the trials , and normalizing it by the largest such relative uncertainty value ( thus we get a relative dominance of 1 for the most dominant parameter uncertainty ) . [ cols="^,^,^,^,^,^",options="header " , ] [ alpharelativeerrors ] in this subsection we addressed the fundamental sources of measurement errors and the uncertainty limits for each parameter of interest .we also pointed out , that uncertainty values for dfg parameters used in our proposed experimental setup are within the limits of current state - of - the - art machining and measurement technologies . for a more realistic application a full dynamic finite element simulation of the mass distributionsmust be performed .the expansion and stress factors of the dfgs under prolonged operating conditions must be modeled and simulated , then subsequently measured and taken into account .practical limitations due to the complex mechanical and detector dynamics / geometry around the test mass and other second order error sources need to be investigated , via experiments and more sophisticated simulations as they might prevent us from reaching the full potential of dfg device based measurements .for example , in practice material inhomogeneities can give rise to a substantial uncertainty that could be minimized by using ndt - rated materials and also reduced by different geometries .alternative , more simplified dfg geometries different from the one proposed here ( e.g. rotating rods ) could also possibly lead to higher measuring and machining precision , and therefore should be studied carefully .all these issues will be the scopes of future studies .significant kinetic energy ( i.e. 100 kj ) is stored in the dfg once it rotates , therefore crucial safety considerations must be addressed .there are two major points of failure management to be concerned with .( a. 
) the vacuum chamber of the dfgs must be made strong enough to withstand the damage of an accidentally disintegrating disk .this is the standard solution for high speed gyroscopes .( b. ) for added security , the gap between the inner wall of the vacuum chamber and the outer edge of the rotating disk could be kept relatively _small_. in the event of an incident where the dfg s material starts to yield or its angular acceleration is uncontrolled the disk will expand radially touching the sidewall and may slowly stop , potentially preempting some of the catastrophic failure modes . adding a heavy ring in contact with the inner lateral wall of the vacuum chamber ( similarly as described in )would absorb a huge angular momentum of the dfg and dissipate its rotation energy via the friction against the lateral wall .these conditions can be met using finite element analysis ( fea ) aided design , in - house destructive testing of sacrificial parts and relying only on the best base materials .two dfgs in a null experiment setup in conjunction with an interferometric sensor allow for studies of composition independent non - newtonian contribution to the classical gravitational field in the meter scale .simulation results presented in this paper indicate that by taking advantage of two matched dfgs and the relatively large bandwidth and high sensitivity of state - of - the - art interferometric sensors there is an exciting opportunity to explore below the current limit in the meter yukawa range addressing standing theoretical predictions . for the proof of concept study , we chose a conservative 2 meter distance for our device from the test mass of the interferometric detector and used the sensitivity of advanced and third - generation gravitational wave detectors as the realistic baseline .we intend to note here , that putting the devices considerably closer to the test mass can yield orders of magnitude better limits in isl violation parameters through the characteristic length scale changes for the yukawa case .there are many practical details that still need close attention when designing and manufacturing a practical device .finite element analysis of the dfgs and subsequent experimental studies are necessary to completely understand the stresses the dfg is subjected to .the dfgs need to be enclosed in separate vacuum chambers seated on seismic attenuation stage(s ) .prototype design and test will be necessary to balance the disk and test vibration control .direct gravitational coupling into the test mass suspension and seismic isolation system must be carefully designed , simulated and mitigated .any structure supporting the test mass must be placed far away ( i.e. , tens of meters ) from the double dfg assembly .other mostly practical problems , such as safety , can also be solved as was shown in past applications / experiments that have used rapidly rotating instruments ( for examples see references in sec . [ introduction ] ) .it is also likely that dedicated and custom designed interferometric sensors will be necessary and the direct use of advanced interferometric gravitational wave detectors will not be ideal for technical reasons . 
in spite of the promising estimates one should not underestimate the technical difficulties imposed by ( 1 ) the required design precision , ( 2 ) the manufacturing of the dfgs and ( 3 ) the unknown , but experimentally possible to study , complex nature of the coupling routes of the dynamic gravity fields of the dfgs into the complicated mechanical structure of the ifo test massesthe authors are grateful for the support of the united states national science foundation under cooperative agreement phy-04 - 57528 and columbia university in the city of new york .we acknowledge the support of the hungarian national office for research and technology ( nkth ) through the polanyi program ( grant no .kfkt-2006 - 01 - 0012 ) .we greatly appreciate the support of ligo collaboration .we are indebted to many of our colleagues for frequent and fruitful discussion .in particular we d like to thank m. landry , p. sutton , d. sigg , g. giordano , r. adhikari , v. sandberg , p. shawhan , r. desalvo , h. yamamoto , j. harms , c. matone and z. frei for their support and valuable comments on the manuscript . | we present an experimental opportunity for the future to measure possible violations to newton s law in the range using dynamic gravity field generators ( dfg ) and taking advantage of the exceptional sensitivity of modern interferometric techniques . the placement of a dfg in proximity to one of the interferometer s suspended test masses generates a change in the local gravitational field that can be measured at a high signal to noise ratio . the use of multiple dfgs in a null experiment configuration allows to test composition independent non - newtonian gravity significantly beyond the present limits . advanced and third - generation gravitational - wave detectors are representing the state - of - the - art in interferometric distance measurement today , therefore we illustrate the method through their sensitivity to emphasize the possible scientific reach . nevertheless , it is expected that due to the technical details of gravitational - wave detectors , dfgs shall likely require dedicated custom configured interferometry . however , the sensitivity measure we derive is a solid baseline indicating that it is feasible to consider probing orders of magnitude into the pristine parameter well beyond the present experimental limits significantly cutting into the theoretical parameter space . |
statistical model selection in the high - dimensional regime arises in a number of applications . in many data analysis problems in geophysics , radiology , genetics , climate studies , and image processing ,the number of samples available is comparable to or even smaller than the number of variables .however , it is well - known that empirical statistics such as sample covariance matrices are not well - behaved when both the number of samples and the number of variables are large and comparable to each other ( see ) .model selection in such a setting is therefore both challenging and of great interest . in order for model selection to be well - posedgiven limited information , a key assumption that is often made is that the underlying model to be estimated only has _ a few degrees of freedom_. common assumptions are that the data are generated according to a graphical model , or a stationary time - series model , or a simple factor model with a few latent variables .sometimes geometric assumptions are also made in which the data are viewed as samples drawn according to a distribution supported on a low - dimensional manifold .a model selection problem that has received considerable attention recently is the estimation of covariance matrices in the high - dimensional setting . as the sample covariance matrix is poorly behaved in such a regime , some form of _ regularization _ of the sample covariance is adopted based on assumptions about the true underlying covariance matrix .for example approaches based on banding the sample covariance matrix have been proposed for problems in which the variables have a natural ordering ( e.g. , times series ) , while `` permutation - invariant '' methods that use thresholding are useful when there is no natural variable ordering .these approaches provide consistency guarantees under various sparsity assumptions on the true covariance matrix .other techniques that have been studied include methods based on shrinkage and factor analysis .a number of papers have studied covariance estimation in the context of _ gaussian graphical model selection_. in a gaussian graphical model the _ inverse _ of the covariance matrix , also called the concentration matrix , is assumed to be sparse , and the sparsity pattern reveals the conditional independence relations satisfied by the variables .the model selection method usually studied in such a setting is -regularized maximum - likelihood , with the penalty applied to the entries of the inverse covariance matrix to induce sparsity .the consistency properties of such an estimator have been studied , and under suitable conditions this estimator is also `` sparsistent '' , i.e. , the estimated concentration matrix has the same sparsity pattern as the true model from which the samples are generated .an alternative approach to -regularized maximum - likelihood is to estimate the sparsity pattern of the concentration matrix by performing regression separately on each variable ; while such a method consistently estimates the sparsity pattern , it does not directly provide estimates of the covariance or concentration matrix . in many applications throughout science and engineering, a challenge is that one may not have access to observations of all the relevant phenomena , i.e. , some of the relevant variables may be hidden or unobserved .such a scenario arises in data analysis tasks in psychology , computational biology , and economics . 
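as a point of reference for the latent - variable setting developed below , the standard l1 - regularized maximum - likelihood approach discussed above can be sketched in a few lines of python . this is only an illustration : the data , the regularization weight and the choice of an off - the - shelf graphical lasso implementation are our own placeholders , not values taken from any work cited here .

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# sketch of the fully observed approach: l1-regularized maximum likelihood
# for a sparse concentration (inverse covariance) matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))      # n = 200 samples of p = 30 observed variables
model = GraphicalLasso(alpha=0.1).fit(X)
K_hat = model.precision_                # estimated sparse concentration matrix
n_edges = (np.count_nonzero(np.abs(K_hat) > 1e-8) - K_hat.shape[0]) // 2
```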
in general latent variablespose a significant difficulty for model selection because one may not know the number of relevant latent variables , nor the relationship between these variables and the observed variables .typical algorithmic methods that try to get around this difficulty usually fix the number of latent variables as well as the structural relationship between latent and observed variables ( e.g. , the graphical model structure between latent and observed variables ) , and use the em algorithm to fit parameters .this approach suffers from the problem that one optimizes non - convex functions , and thus one may get stuck in sub - optimal local minima .an alternative method that has been suggested is based on a greedy , local , combinatorial heuristic that assigns latent variables to groups of observed variables , based on some form of clustering of the observed variables ; however , this approach has no consistency guarantees . in this paperwe study the problem of latent - variable graphical model selection in the setting where all the variables , both observed and hidden , are jointly gaussian .more concretely let the covariance matrix of a finite collection of jointly gaussian random variables be denoted by , where are the observed variables and are the unobserved , hidden variables .the marginal statistics corresponding to the observed variables are given by the marginal covariance matrix , which is simply a submatrix of the full covariance matrix .however suppose that we parameterize our model by the concentration matrix , which as discussed above reveals the connection to graphical models .in such a parametrization , the _ marginal concentration matrix _ corresponding to the observed variables is given by the schur complement with respect to the block : thus if we only observe the variables , we only have access to ( or ) .the two terms that compose above have interesting properties .the matrix specifies the concentration matrix of the _ conditional statistics _ of the observed variables given the latent variables .if these conditional statistics are given by a sparse graphical model then is _sparse_. on the other hand the matrix serves as a _ summary _ of the effect of marginalization over the hidden variables .this matrix has small rank if the number of latent , unobserved variables is small relative to the number of observed variables ( the rank is equal to ) . therefore the marginal concentration matrix of the observed variables is generally _ not sparse _ due to the additional low - rank term . hence standard graphical model selection techniques applied directly to the observed variables are not useful . a modeling paradigm that infers the effect of the latent variables would be more suitable in order to provide a simple explanation of the underlying statistical structure .hence we _ decompose _ into the sparse and low - rank components , which reveals the conditional graphical model structure in the observed variables as well as the _ number _ of and effect due to the unobserved latent variables .such a method can be viewed as a blend of principal component analysis and graphical modeling . in standard graphical modelingone would directly approximate a concentration matrix by a sparse matrix in order to learn a sparse graphical model . on the other hand in principal component analysisthe goal is to explain the statistical structure underlying a set of observations using a small number of latent variables ( i.e. 
, approximate a covariance matrix as a low - rank matrix ) . in our framework based on decomposing a concentration matrix , we learn a graphical model among the observed variables _ conditioned _ on a few ( additional ) latent variables . notice that in our setting these latent variables are _ not _ principal components , as the conditional statistics ( conditioned on these latent variables ) are given by a graphical model .therefore we refer to these latent variables informally as _hidden components_. our first contribution in section [ sec : iden ] is to address the fundamental question of _ identifiability _ of such latent - variable graphical models given the marginal statistics of only the observed variables .the critical point is that we need to tease apart the correlations induced due to marginalization over the latent variables from the conditional graphical model structure among the observed variables . as the identifiability problem is one of _ uniquely _ decomposing the sum of a sparse matrix and a low - rank matrix into the individual components , we study the algebraic varieties of sparse matrices and low - rank matrices .an important theme in this paper is the connection between the tangent spaces to these algebraic varieties and the question of identifiability .specifically let denote the tangent space at to the algebraic variety of sparse matrices , and let denote the tangent space at to the algebraic variety of low - rank matrices .then the _ statistical _ question of identifiability of and given is determined by the _geometric _ notion of _ transversality _ of the tangent spaces and .the study of the transversality of these tangent spaces leads us to natural conditions for identifiability .in particular we show that latent - variable models in which the sparse matrix has a small number of nonzeros per row / column , and the low - rank matrix has row / column spaces that are not closely aligned with the coordinate axes , are identifiable .these two conditions have natural statistical interpretations .the first condition ensures that there are no densely - connected subgraphs in the conditional graphical model structure among the observed variables given the hidden components , i.e. , that these conditional statistics are indeed specified by a sparse graphical model .such statistical relationships may otherwise be mistakenly attributed to the effect of marginalization over some latent variable .the second condition ensures that the effect of marginalization over the latent variables is `` spread out '' over many observed variables ; thus , the effect of marginalization over a latent variable is not confused with the conditional graphical model structure among the observed variables .in fact the first condition is often assumed in some papers on standard graphical model selection without latent variables ( see for example ) .we note here that question of parameter identifiability was recently studied for models with discrete - valued latent variables ( i.e. , mixture models , hidden markov models ) .however , this work is not applicable to our setting in which both the latent and observed variables are assumed to be jointly gaussian . 
as our next contributionwe propose a _regularized maximum - likelihood decomposition _framework to approximate a given sample covariance matrix by a model in which the concentration matrix decomposes into a sparse matrix and a low - rank matrix .a number of papers over the last several years have suggested that heuristics based on using the norm are very effective for recovering sparse models .indeed such heuristics have been effectively used , as described above , for model selection when the goal is to estimate sparse concentration matrices . in her thesis fazel suggested a convex heuristic based on the nuclear norm for rank - minimization problems in order to recover low - rank matrices .this method generalized the previously studied trace heuristic for recovering low - rank positive semidefinite matrices .recently several conditions have been given under which these heuristics provably recover low - rank matrices in various settings .motivated by the success of these heuristics , we propose the following penalized likelihood method given a sample covariance matrix formed from samples of the observed variables : here represents the gaussian log - likelihood function and is given by for , where is the trace of a matrix and is the determinant .the matrix provides an estimate of , which represents the conditional concentration matrix of the observed variables ; the matrix provides an estimate of , which represents the effect of marginalization over the latent variables .notice that the regularization function is a combination of the norm applied to and the nuclear norm applied to ( the nuclear norm reduces to the trace over the cone of symmetric , positive - semidefinite matrices ) , with providing a tradeoff between the two terms .this variational formulation is a _ convex optimization _ problem .in particular it is a regularized max - det problem and can be solved in polynomial time using standard off - the - shelf solvers .our main result in section [ sec : main ] is a proof of the consistency of the estimator in the high - dimensional regime in which both the number of observed variables and the number of hidden components are allowed to grow with the number of samples ( of the observed variables ) .we show that for a suitable choice of the regularization parameter , there exists a range of values of for which the estimates have the same sparsity ( and sign ) pattern and rank as with high probability ( see theorem [ theo : main ] ) .the key technical requirement is an identifiability condition for the two components of the marginal concentration matrix with respect to the fisher information ( see section [ subsec : fi ] ) .we make connections between our condition and the irrepresentability conditions required for support / graphical - model recovery using regularization .our results provide numerous scaling regimes under which consistency holds in latent - variable graphical model selection .for example we show that under suitable identifiability conditions consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables ( see section [ subsec : scal ] ) .[ [ related - previous - work ] ] related previous work + + + + + + + + + + + + + + + + + + + + + the problem of decomposing the sum of a sparse matrix and a low - rank matrix , with no additional noise , into the individual components was initially studied in by a superset of the authors of the present paper .specifically this work 
proposed a convex program using a combination of the norm and the nuclear norm to recover the sparse and low - rank components , and derived conditions under which the convex program exactly recovers these components . in subsequent work cands et al . also studied this noise - free sparse - plus - low - rank decomposition problem , and provided guarantees for exact recovery using the convex program proposed in .the problem setup considered in the present paper is quite different and is more challenging because we are only given access to an inexact sample covariance matrix , and we are interested in recovering components that preserve both the sparsity pattern and the rank of the components in the true underlying model .in addition to proving such a consistency result for the estimator , we also provide a statistical interpretation of our identifiability conditions and describe natural classes of latent - variable gaussian graphical models that satisfy these conditions . as suchour paper is closer in spirit to the many recent papers on covariance selection , but with the important difference that some of the variables are not observed .[ [ outline ] ] outline + + + + + + + section [ sec : bg ] gives some background on graphical models as well as the algebraic varieties of sparse and low - rank matrices .it also provides a formal statement of the problem .section [ sec : iden ] discusses conditions under which latent - variable models are identifiable , and section [ sec : main ] states the main results of this paper .we provide experimental demonstration of the effectiveness of our estimator on synthetic and real data in section [ sec : sims ] .section [ sec : conc ] concludes the paper with a brief discussion .the appendices include additional details and proofs of all of our technical results .we briefly discuss concepts from graphical modeling and give a formal statement of the latent - variable model selection problem .we also describe various properties of the algebraic varieties of sparse matrices and of low - rank matrices .the following matrix norms are employed throughout this paper : * : denotes the spectral norm , which is the largest singular value of . * : denotes the largest entry in magnitude of . * : denotes the frobenius norm , which is the square - root of the sum of the squares of the entries of .* : denotes the nuclear norm , which is the sum of the singular values of .this reduces to the trace for positive - semidefinite matrices . * : denotes the sum of the absolute values of the entries of . a number of _ matrix operator _norms are also used .for example , let be a linear operator acting on matrices. then the induced operator norm is defined as : therefore , denotes the spectral norm of the matrix operator . the only vector norm used is the euclidean norm , which is denoted by .a graphical model is a statistical model defined with respect to a graph in which the nodes index a collection of random variables , and the edges represent the conditional independence relations ( markov structure ) among the variables .the absence of an edge between nodes implies that the variables are independent conditioned on all the other variables .a _ gaussian graphical model _ ( also commonly referred to as a gauss - markov random field ) is one in which all the variables are jointly gaussian . 
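for concreteness , the matrix norms listed above can be spelled out in numpy as follows ; the dictionary labels are our own shorthand .

```python
import numpy as np

def matrix_norms(M):
    """the matrix norms used throughout the paper, written out explicitly."""
    return {
        "spectral":  np.linalg.norm(M, 2),       # largest singular value
        "max_entry": np.max(np.abs(M)),          # largest entry in magnitude
        "frobenius": np.linalg.norm(M, "fro"),   # sqrt of the sum of squared entries
        "nuclear":   np.linalg.norm(M, "nuc"),   # sum of singular values
        "l1":        np.sum(np.abs(M)),          # sum of absolute values of the entries
    }
```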
in such models the sparsity pattern of the inverse of the covariance matrix , or the _ concentration _ matrix , directly corresponds to the graphical model structure .specifically , consider a gaussian graphical model in which the covariance matrix is given by and the concentration matrix is given by .then an edge is present in the underlying graphical model if and only if .our focus in this paper is on gaussian models in which some of the variables may not be observed .suppose represents the set of nodes corresponding to observed variables , and the set of nodes corresponding to unobserved , hidden variables with and .the joint covariance is denoted by , and joint concentration matrix by .the submatrix represents the marginal covariance of the observed variables , and the corresponding marginal concentration matrix is given by the schur complement with respect to the block : the submatrix specifies the concentration matrix of the conditional statistics of the observed variables conditioned on the hidden components .if these conditional statistics are given by a sparse graphical model then is sparse . on the other handthe marginal concentration matrix of the marginal distribution of is _ not _ sparse in general due to the extra correlations induced from marginalization over the latent variables , i.e. , due to the presence of the additional term .hence , standard graphical model selection techniques in which the goal is to approximate a sample covariance by a sparse graphical model are not well - suited for problems in which some of the variables are hidden . however , the matrix is a low - rank matrix if the number of hidden variables is much smaller than the number of observed variables ( i.e. , ) .therefore , a more appropriate model selection method is to approximate the sample covariance by a model in which the concentration matrix decomposes into the sum of a sparse matrix and a low - rank matrix .the objective here is to learn a sparse graphical model among the observed variables _ conditioned _ on some latent variables , as such a model explicitly accounts for the extra correlations induced due to unobserved , hidden components . in order to analyze latent - variable model selection methods ,we need to define an appropriate notion of model selection consistency for latent - variable graphical models .notice that given the two components and of the concentration matrix of the marginal distribution , there are _ infinitely _ many configurations of the latent variables ( i.e. , matrices ) that give rise to the _ same _ low - rank matrix . specifically for any non - singular matrix , one can apply the transformations and still preserve the low - rank matrix . in _ all_ of these models the marginal statistics of the observed variables remain the same upon marginalization over the latent variables . 
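the structure described above is easy to reproduce numerically . the sketch below builds a small joint concentration matrix , marginalizes out the hidden block via the schur complement , and checks that the correction term has rank equal to the number of latent variables ; all sizes and coefficients are our own illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(0)
p, h = 10, 2                                   # observed and hidden variables

# sparse conditional concentration among the observed variables (a chain graph)
K_O = np.eye(p)
for i in range(p - 1):
    K_O[i, i + 1] = K_O[i + 1, i] = 0.2

K_OH = rng.standard_normal((p, h))             # observed-hidden couplings
K_OH *= 0.5 / np.linalg.norm(K_OH, 2)          # keep the joint matrix positive definite
K_H = np.eye(h)

K = np.block([[K_O, K_OH], [K_OH.T, K_H]])     # joint concentration matrix
assert np.all(np.linalg.eigvalsh(K) > 0)

# marginal concentration of the observed variables: schur complement
low_rank_term = K_OH @ np.linalg.inv(K_H) @ K_OH.T
K_marginal = K_O - low_rank_term

print(np.linalg.matrix_rank(low_rank_term))    # equals h, the number of hidden variables
# K_marginal is dense even though K_O is sparse: the low-rank correction is
# exactly what the estimator studied in this paper is designed to separate out.
```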
the key _invariant _ is the low - rank matrix , which _ summarizes _ the effect of marginalization over the latent variables .these observations give rise to the following notion of consistency : a pair of ( symmetric ) matrices with is an _ algebraically consistent _ estimate of a latent - variable gaussian graphical model given by the concentration matrix if the following conditions hold : 1 .the sign - pattern of is the same as that of : here we assume that .the rank of is the same as the rank of : 3 .the concentration matrix can be realized as the marginal concentration matrix of an appropriate latent - variable model : the first condition ensures that provides the correct structural estimate of the conditional graphical model ( given by ) of the observed variables conditioned on the hidden components .this property is the same as the `` sparsistency '' property studied in standard graphical model selection .the second condition ensures that the number of hidden components is correctly estimated .finally , the third condition ensures that the pair of matrices leads to a realizable latent - variable model . in particularthis condition implies that there exists a valid latent - variable model on variables in which the conditional graphical model structure among the observed variables is given by , the number of latent variables is equal to the rank of , and the extra correlations induced due to marginalization over the latent variables is equal to . any method for matrix factorization( see for example , ) can be used to factorize the low - rank matrix , depending on the properties that one desires in the factors ( e.g. , sparsity ) .we also study parametric consistency in the usual sense , i.e. , we show that one can produce estimates that converge in various norms to the matrices .notice that proving is close to in some norm does not in general imply that the support / sign - pattern and rank of are the same as those of .therefore parametric consistency is different from algebraic consistency , which requires that have the same support / sign - pattern and rank as .[ [ goal ] ] goal + + + + let denote the concentration matrix of a gaussian model .suppose that we have samples of the observed variables .we would like to produce estimates that , with high - probability , are both algebraically consistent and parametrically consistent ( in some norm ) .given samples of a finite collection of jointly gaussian zero - mean random variables with concentration matrix , we define the sample covariance as follows : it is then easily seen that the log - likelihood function is given by : where is a function of .notice that this function is strictly concave for .now consider the latent - variable modeling problem in which we wish to model a collection of random variables ( with sample covariance ) by adding some extra variables .with respect to the parametrization ( with representing the conditional statistics of given , and summarizing the effect of marginalization over the additional variables ) , the likelihood function is given by : the function is _ jointly concave _ with respect to the parameters whenever , and it is this function that we use in our variational formulation to learn a latent - variable model . 
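the resulting variational formulation can be prototyped with an off - the - shelf convex solver . the sketch below is our own transcription of the penalized - likelihood program ; the function name , the solver choice and its default tolerances are assumptions rather than the authors ' implementation .

```python
import cvxpy as cp

def latent_variable_mle(Sigma_n, lam, gamma):
    """regularized maximum-likelihood sketch: minimize the negative gaussian
    log-likelihood of S - L plus lam * (gamma * ||S||_1 + trace(L)),
    with S - L positive definite and L positive semidefinite."""
    p = Sigma_n.shape[0]
    S = cp.Variable((p, p), symmetric=True)    # sparse conditional concentration
    L = cp.Variable((p, p), symmetric=True)    # low-rank marginalization term
    objective = cp.Minimize(
        -cp.log_det(S - L) + cp.trace(Sigma_n @ (S - L))
        + lam * (gamma * cp.sum(cp.abs(S)) + cp.trace(L))
    )
    constraints = [S - L >> 0, L >> 0]         # trace(L) equals the nuclear norm on the psd cone
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return S.value, L.value
```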
in the analysis of a convex program involving the likelihood function, the fisher information plays an important role as it is the negative of the hessian of the likelihood function and thus controls the curvature .as the first term in the likelihood function is linear , we need only study higher - order derivatives of the log - determinant function in order to compute the hessian . letting denote the fisher information matrix, we have that for .if is a concentration matrix , then the fisher information matrix has dimensions .next consider the latent - variable situation with the variables indexed by being observed and the variables indexed by being hidden .the concentration matrix of the marginal distribution of the observed variables is given by the schur complement , and the corresponding fisher information matrix is given by notice that this is precisely the submatrix of the full fisher information matrix with respect to all the parameters ( corresponding to the situation in which _ all _ the variables are observed ) .the matrix has dimensions , while is an matrix . to summarize, we have for all that : {(i , j),(k , l ) } = { \mathcal{i}}(k^\ast_{(o~h)})_{(i , j),(k , l)}.\ ] ] in section [ subsec : fi ] we impose various conditions on the fisher information matrix under which our regularized maximum - likelihood formulation provides consistent estimates with high probability . an algebraic variety is the solution set of a system of polynomial equations .the set of sparse matrices and the set of low - rank matrices can be naturally viewed as algebraic varieties . herewe describe these varieties , and discuss some of their properties .of particular interest in this paper are geometric properties of these varieties such as the tangent space and local curvature at a ( smooth ) point .let denote the set of matrices with at most nonzeros : the set is an algebraic variety , and can in fact be viewed as a union of subspaces in .this variety has dimension , and it is smooth everywhere except at those matrices that have support size strictly smaller than . for any matrix ,consider the variety ; is a smooth point of this variety , and the tangent space at is given by in words the tangent space at a smooth point is given by the set of all matrices that have support contained within the support of .we view as a subspace in .next let denote the algebraic variety of matrices with rank at most : it is easily seen that is an algebraic variety because it can be defined through the vanishing of all minors .this variety has dimension equal to , and it is smooth everywhere except at those matrices that have rank strictly smaller than . consider a rank- matrix with svd given by , where and .the matrix is a smooth point of the variety , and the tangent space at with respect to this variety is given by in words the tangent space at a smooth point is the span of all matrices that have either the same row - space as or the same column - space as . as with view as a subspace in . 
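as a quick numerical illustration of the fisher information discussed earlier in this section , the sketch below ( our own sanity check ) verifies that the standard kronecker - product representation of the fisher information at a concentration matrix agrees with conjugating a symmetric perturbation by the inverse concentration matrix .

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p))
K = A @ A.T + p * np.eye(p)             # a positive definite concentration matrix
K_inv = np.linalg.inv(K)

fisher = np.kron(K_inv, K_inv)          # p^2 x p^2 fisher information at K

M = rng.standard_normal((p, p))
M = (M + M.T) / 2                       # a symmetric perturbation of K
lhs = fisher @ M.reshape(-1)            # action via the kronecker representation
rhs = (K_inv @ M @ K_inv).reshape(-1)   # the same action written as conjugation by K^{-1}
assert np.allclose(lhs, rhs)
```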
in section [ sec : iden ]we explore the connection between geometric properties of these tangent spaces and the identifiability problem in latent - variable graphical models .the sparse matrix variety has the property that it has _zero _ curvature at any smooth point .consequently the tangent space at a smooth point is the _ same _ as the tangent space at any point in a neighborhood of .this property is implicitly used in the analysis of regularized methods for recovering sparse models .the situation is more complicated for the low - rank matrix variety , because the curvature at any smooth point is nonzero .therefore we need to study how the tangent space changes from one point to a neighboring point by analyzing how this variety curves locally .indeed the amount of curvature at a point is directly related to the `` angle '' between the tangent space at that point and the tangent space at a neighboring point . for any subspace of matrices ,let denote the projection onto .given two subspaces of the same dimension , we measure the `` twisting '' between these subspaces by considering the following quantity . ( n)\|_2 .\label{eq : rho}\ ] ] in appendix [ app : matper ] we briefly review relevant results from matrix perturbation theory ; the key tool used to derive these results is the resolvent of a matrix . based on these tools we prove the following two results in appendix [ app : rankcurv ] , which bound the twisting between the tangent spaces at nearby points .the first result provides a bound on the quantity between the tangent spaces at a point and at its neighbor .[ theo : tspace ] let be a rank- matrix with smallest nonzero singular value equal to , and let be a perturbation to such that .further , let be a rank- matrix .then we have that the next result bounds the error between a point and its neighbor in the normal direction .[ theo : nspace ] let be a rank- matrix with smallest nonzero singular value equal to , and let be a perturbation to such that .further , let be a rank- matrix. then we have that these results suggest that the closer the smallest singular value is to zero , the more curved the variety is locally .therefore we control the twisting between tangent spaces at nearby points by bounding the smallest nonzero singular value away from zero .in the absence of additional conditions , the latent - variable model selection problem is ill - posed . in this sectionwe discuss a set of conditions on latent - variable models that ensure that these models are identifiable given marginal statistics for a subset of the variables .suppose that the low - rank matrix that summarizes the effect of the hidden components is itself sparse .this leads to identifiability issues in the sparse - plus - low - rank decomposition problem .statistically the additional correlations induced due to marginalization over the latent variables could be mistaken for the conditional graphical model structure of the observed variables . in order to avoid such identifiability problems the effect of the latent variables must be `` diffuse '' across the observed variables . 
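the tangent - space projections for the low - rank variety , which are used repeatedly in what follows , can be written down directly . the sketch below covers the symmetric case with our own variable names ( the sparse - variety analogue is simply restriction of a matrix to the given support ) , together with two quick sanity checks .

```python
import numpy as np

def lowrank_tangent_split(L, r, N):
    """split N into its projection onto the tangent space at a symmetric
    rank-r matrix L (with respect to the low-rank variety) and its projection
    onto the orthogonal complement; a sketch for the symmetric case."""
    p = L.shape[0]
    U = np.linalg.svd(L)[0][:, :r]
    P_U = U @ U.T
    tangent = P_U @ N + N @ P_U - P_U @ N @ P_U
    normal = (np.eye(p) - P_U) @ N @ (np.eye(p) - P_U)
    return tangent, normal

rng = np.random.default_rng(3)
p, r = 8, 2
G = rng.standard_normal((p, r))
L = G @ G.T                                   # symmetric, rank r
N = rng.standard_normal((p, p)); N = (N + N.T) / 2
tangent, normal = lowrank_tangent_split(L, r, N)
assert np.allclose(tangent + normal, N)                     # orthogonal decomposition of N
assert np.allclose(lowrank_tangent_split(L, r, L)[0], L)    # L lies in its own tangent space
```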
to address this pointthe following quantity was introduced in for any matrix , defined with respect to the tangent space : thus being small implies that elements of the tangent space can not have their support concentrated in a few locations ; as a result can not be too sparse .this idea is formalized in by relating to a notion of `` incoherence '' of the row / column spaces , where the row / column spaces are said to be incoherent with respect to the standard basis if these spaces are not aligned closely with any of the coordinate axes .letting be the singular value decomposition of , the incoherence of the row / column spaces of ( initially proposed and studied by cands and recht ) is defined as : here denote projections , and projections onto matrix subspaces ( defined by a general linear operator ) by the calligraphic . ] onto the row / column spaces of , and is the standard basis vector . hence measures the projection of the most `` closely aligned '' coordinate axis with the row / column spaces . for any rank- matrix m we have that where the lower bound is achieved ( for example ) if the row / column spaces span any columns of a orthonormal hadamard matrix , while the upper bound is achieved if the row or column space contains a standard basis vector .typically a matrix m with incoherent row / column spaces would have .the following result ( proved in ) shows that the more incoherent the row / column spaces of , the smaller is .[ theo : xiinc ] for any , we have that where and are defined in and .based on these concepts we roughly require that the low - rank matrix that summarizes the effect of the latent variables be _ incoherent _ , thereby ensuring that the extra correlations due to marginalization over the hidden components can not be confused with the conditional graphical model structure of the observed variables .notice that the quantity is not just a measure of the number of latent variables , but also of the overall effect of the correlations induced by marginalization over these variables .* curvature and change in * : as noted previously an important technical point is that the algebraic variety of low - rank matrices is locally curved at any smooth point. consequently the quantity changes as we move along the low - rank matrix variety smoothly .the quantity introduced in also allows us to bound the variation in as follows .[ theo : rhotspace ] let be two matrix subspaces of the same dimension with the property that , where is defined in .then we have that .\ ] ] this lemma is proved in appendix [ app : rankcurv ] .an identifiability problem also arises if the conditional graphical model among the observed variables contains a densely connected subgraph .these statistical relationships might be mistaken as correlations induced by marginalization over latent variables .therefore we need to ensure that the conditional graphical model among the observed variables is sparse .we impose the condition that this conditional graphical model must have small `` degree '' , i.e. , no observed variable is directly connected to too many other observed variables conditioned on the hidden components .notice that bounding the degree is a more refined condition than simply bounding the total number of nonzeros as the _ sparsity pattern _ also plays a role . 
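the incoherence measure above has a short numerical transcription . the helper below uses the unnormalized form described in the text ( the largest projection of a coordinate axis onto the row / column spaces ) , and the two examples contrast a diffuse low - rank matrix with one aligned with a coordinate axis ; both are our own constructions .

```python
import numpy as np

def incoherence(M, r):
    """largest norm of the projection of a standard basis vector onto the
    row or column space of a rank-r matrix M."""
    U, _, Vt = np.linalg.svd(M)
    P_col = U[:, :r] @ U[:, :r].T
    P_row = Vt[:r].T @ Vt[:r]
    return max(np.linalg.norm(P_col, axis=0).max(),
               np.linalg.norm(P_row, axis=0).max())

rng = np.random.default_rng(4)
p, r = 16, 2
G = rng.standard_normal((p, r))
L_diffuse = G @ G.T                     # generic low-rank matrix
L_spiky = np.zeros((p, p)); L_spiky[0, 0] = 1.0
print(incoherence(L_diffuse, r))        # noticeably below 1: the effect is spread over many coordinates
print(incoherence(L_spiky, 1))          # equal to 1, the worst case (aligned with a coordinate axis)
```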
in authors introduced the following quantity in order to provide an appropriate measure of the sparsity pattern of a matrix : the quantity being small for a matrix implies that the spectrum of any element of the tangent space is not too `` concentrated '' , i.e. , the singular values of the elements of the tangent space are not too large . in is shown that a sparse matrix with `` bounded degree '' ( a small number of nonzeros per row / column ) has small .[ theo : mudeg ] let be any matrix with at most nonzero entries per row / column , and with at least nonzero entries per row / column . with defined in , we have that suppose that we have the sum of two vectors , each from two known subspaces .it is possible to uniquely recover the individual vectors from the sum if and only if the subspaces have a transverse intersection , i.e. , they only intersect at the origin . this simple observation leads to an appealing algebraic notion of identifiability .consider the situation in which we have the sum of a sparse matrix and a low - rank matrix .in addition to this sum , suppose that we are also given the tangent spaces at these matrices with respect to the algebraic varieties of sparse and low - rank matrices respectively .then a necessary and sufficient condition for _ local _ identifiability is that these tangent spaces have a transverse intersection .it turns out that these transversality conditions on the tangent spaces are also sufficient for the regularized maximum - likelihood convex program to provide consistent estimates of the number of hidden components and the conditional graphical model structure of the observed variables conditioned on the latent variables ( without any side information about the tangent spaces ) . in order to quantify the level of transversality between the tangent spaces and we study the _ minimum gain _ with respect to some norm of the addition operator restricted to the cartesian product .more concretely let represent the addition operator , i.e. , the operator that adds two matrices .then given any matrix norm on , the minimum gain of restricted to is defined as follows : where denotes the projection onto the space , and denotes the adjoint of the addition operator ( with respect to the standard euclidean inner - product ) .the tangent spaces and have a _ transverse _ intersection if and only if . the `` level '' of transversality is measured by the magnitude of .note that if the norm used is the frobenius norm , then is the square of the _ minimum singular value _ of the addition operator restricted to .a natural norm with which to measure transversality is the dual norm of the regularization function in , as the subdifferential of the regularization function is specified in terms of its dual . the reasons for this will become clearer as we proceed through this paper .recall that the regularization function used in the variational formulation is given by : where the nuclear norm reduces to the trace function over the cone of positive - semidefinite matrices .this function is a norm for all .the dual norm of is given by the following simple lemma records a useful property of the norm that is used several times throughout this paper .[ theo : gg ] let and be tangent spaces at any points with respect to the algebraic varieties of sparse and low - rank matrices . 
then for any matrix , we have that and that .further we also have that and that .thus for any matrices and for , one can check that and that .next we define the quantity as follows in order to study the transversality of the spaces and with respect to the norm : here and are defined in and .we then have the following result ( proved in appendix [ app : rg ] ) : [ theo : rg1 ] let be matrices such that and let .then we have that ] such that : this assumption is to be viewed as a generalization of the irrepresentability conditions imposed on the covariance matrix or the fisher information matrix in order to provide consistency guarantees for sparse model selection using the norm . with this assumption we have the following proposition , proved in appendix [ app : rg ] , about the gains of the operator restricted to .this proposition plays a fundamental role in the analysis of the performance of the regularized maximum - likelihood procedure .[ theo : irr ] let and be the tangent spaces defined in this section , and let be the fisher information evaluated at the true marginal concentration matrix .further let be as defined above .suppose that and that is in the following range : .\ ] ] then we have the following two conclusions for with : 1 .the minimum gain of restricted to is bounded below : specifically this implies that for all 2 .the effect of elements in on the orthogonal complement is bounded above : specifically this implies that for all the last quantity we consider is the spectral norm of the marginal covariance matrix : a bound on is useful in the probabilistic component of our analysis , in order to derive convergence rates of the sample covariance matrix to the true covariance matrix .we also observe that denote the full concentration matrix of a collection of zero - mean jointly - gaussian observed and latent variables , let denote the number of observed variables , and let denote the number of latent variables .we are given samples of the observed variables .we consider the high - dimensional setting in which are all allowed to grow simultaneously .the quantities defined in the previous section are accounted for in our analysis , although we suppress the dependence on these quantities in the statement of our main result .we explicitly keep track of the quantities and as these control the complexity of the latent - variable model given by .in particular controls the sparsity of the conditional graphical model among the observed variables , while controls the incoherence or `` diffusivity '' of the extra correlations induced due to marginalization over the hidden variables .based on the tradeoff between these two quantities , we obtain a number of classes of latent - variable graphical models ( and corresponding scalings of ) that can be consistently recovered using the regularized maximum - likelihood convex program ( see section [ subsec : scal ] for details ) . 
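transversality of the two tangent spaces can also be probed numerically on small examples : stack orthonormal bases of the vectorized tangent spaces and look at the smallest singular value of the result , which is nonzero exactly when the spaces intersect only at the origin . this frobenius - norm check is only a simplified proxy for the dual - norm gains used in the analysis above , and all helpers below are our own .

```python
import numpy as np

def sparse_tangent_basis(support):
    """orthonormal basis (as columns of a p^2 x k matrix) for the subspace of
    matrices supported on the given boolean mask."""
    p = support.shape[0]
    cols = []
    for i in range(p):
        for j in range(p):
            if support[i, j]:
                E = np.zeros((p, p)); E[i, j] = 1.0
                cols.append(E.reshape(-1))
    return np.array(cols).T

def lowrank_tangent_basis(L, r, tol=1e-10):
    """orthonormal basis for the tangent space at a symmetric rank-r matrix L,
    obtained by projecting all coordinate matrices and orthonormalizing."""
    p = L.shape[0]
    U = np.linalg.svd(L)[0][:, :r]
    P_U = U @ U.T
    cols = []
    for i in range(p):
        for j in range(p):
            E = np.zeros((p, p)); E[i, j] = 1.0
            PT = P_U @ E + E @ P_U - P_U @ E @ P_U
            cols.append(PT.reshape(-1))
    Q, s, _ = np.linalg.svd(np.array(cols).T, full_matrices=False)
    return Q[:, s > tol]

# example: a sparse symmetric support and the tangent space at a generic low-rank matrix
rng = np.random.default_rng(5)
p, r = 8, 1
support = np.eye(p, dtype=bool) | (rng.random((p, p)) < 0.05)
support = support | support.T
G = rng.standard_normal((p, r)); L = G @ G.T

stacked = np.hstack([sparse_tangent_basis(support), lowrank_tangent_basis(L, r)])
sigma_min = np.linalg.svd(stacked, compute_uv=False)[-1]
print(sigma_min > 1e-8)   # expected True for a generic (incoherent) L: the spaces meet only at the origin
```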
specifically we show that consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables .we present our main result next demonstrating the consistency of the estimator , and then discuss classes of latent - variable graphical models and various scaling regimes in which our estimator is consistent .[ theo : main ] let denote the concentration matrix of a gaussian model .we have samples of the observed variables denoted by .let and denote the tangent spaces at and at with respect to the sparse and low - rank matrix varieties respectively .* assumptions * : suppose that the following conditions hold : 1 .the quantities and satisfy the assumption of proposition [ theo : irr ] for identifiability , and is chosen in the range specified by proposition [ theo : irr ] .the number of samples available is such that 3 .the regularization parameter is chosen as 4 .the minimum nonzero singular value of is bounded as 5 .the minimum magnitude nonzero entry of is bounded as * conclusions * : then with probability greater than we have : 1 .algebraic consistency : the estimate given by the convex program is algebraically consistent , i.e. , the support and sign pattern of is the same as that of , and the rank of is the same as that of .2 . parametric consistency : the estimate given by the convex program is parametrically consistent : the proof of this theorem is given in appendix [ app : main ] .the theorem essentially states that if the minimum nonzero singular value of the low - rank piece and minimum nonzero entry of the sparse piece are bounded away from zero , then the convex program provides estimates that are both algebraically consistent and parametrically consistent ( in the and spectral norms ) . 
in section[ subsec : cov ] we also show that these results easily lead to parametric consistency rates for the corresponding estimate of the marginal covariance of the observed variables .notice that the condition on the minimum singular value of is more stringent than on the minimum nonzero entry of .one role played by these conditions is to ensure that the estimates do not have smaller support size / rank than .however the minimum singular value bound plays the additional role of bounding the curvature of the low - rank matrix variety around the point , which is the reason for this condition being more stringent .notice also that the number of hidden variables does not explicitly appear in the bounds in theorem [ theo : main ] , which only depend on .however the dependence on is implicit in the dependence on , and we discuss this point in greater detail in the following section .finally we note that algebraic and parametric consistency hold under the assumptions of theorem [ theo : main ] for a _ range _ of values of : .\ ] ] in particular the assumptions on the sample complexity , the minimum nonzero singular value of , and the minimum magnitude nonzero entry of are governed by the lower end of this range for .these assumptions can be weakened if we only require consistency for a smaller range of values of .the following corollary conveys this point with a specific example : [ theo : maincorl ] consider the same setup and notation as in theorem [ theo : main ] .suppose that the quantities and satisfy the assumption of proposition [ theo : irr ] for identifiability .suppose that we make the following assumptions : 1 .let be chosen to be equal to ( the upper end of the range specified in proposition [ theo : irr ] ) , i.e. , .2 . .3 . .4 . .5 . . then with probability greater than we have estimates that are algebraically consistent , and parametrically consistent with the error bounded as the proof of this corollary, one can further remove the factor of in the lower bound for .specifically the lower bound suffices for consistent estimation if bound the minimum / maximum gains of for _ all _ matrices ( rather than just those near ) , and bounds the -inner - product for _ all _ pairs of orthogonal matrices ( rather than just those near and ) . ] is analogous to that of theorem [ theo : main ] .we emphasize that in practice it is often beneficial to have consistent estimates for a range of values of ( as in theorem [ theo : main ] ) .specifically the stability of the sparsity pattern and rank of the estimates for a range of tradeoff parameters is useful in order to choose a suitable value of , as prior information about the quantities and is not typically available ( see section [ sec : sims ] ) .next we consider classes of latent - variable models that satisfy the conditions of theorem [ theo : main ] .recall that denotes the number of samples , denotes the number of observed variables , and denotes the number of latent variables . 
recall the assumption that the quantities defined in section [ subsec : fi ] remain bounded , and do not scale with the other parameters such as or or .in particular we focus on the tradeoff between and ( the quantities that control the complexity of a latent - variable graphical model ) , and the resulting scaling regimes for consistent estimation .let denote the degree of the conditional graphical model among the observed variables , and let denote the incoherence of the correlations induced due to marginalization over the latent variables ( we suppress the dependence on ) .these quantities are defined in section [ sec : iden ] , and we have from propositions [ theo : xiinc ] and [ theo : mudeg ] that since are assumed to be bounded , we also have from proposition [ theo : irr ] that the product of and must be bounded by a constant .thus , we study latent - variable models in which as we describe next , there are non - trivial classes of latent - variable graphical models in which this condition holds .* bounded degree and incoherence * : the first class of latent - variable models that we consider are those in which the conditional graphical model among the observed variables ( given by ) has constant degree .recall from equation that the incoherence of the effect of the latent variables ( given by ) can be as small as . consequently latent - variable models in which can be estimated consistently from samples as long as the low - rank matrix is almost maximally incoherent , i.e. , so the effect of marginalization over the latent variables is diffuse across almost all the observed variables .thus consistent latent - variable model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables .* polylogarithmic degree * the next class of models that we study are those in which the degree of the conditional graphical model of the observed variables grows poly - logarithmically with .consequently , the incoherence of the matrix must decay as the inverse of poly- . using the fact that maximally incoherent low - rank matrices can have incoherence as small as , latent - variable models in which can be consistently estimated as long as - .the main result theorem [ theo : main ] gives conditions under which we can consistently estimate the sparse and low - rank parts that compose the marginal concentration matrix .here we prove a corollary that gives rates for covariance matrix estimation , i.e. , the quality of the estimate with respect to the `` true '' marginal covariance matrix . under the same conditions as in theorem[ theo : main ] , we have with probability greater than that ) \lesssim \frac{1}{\xi(t ) } \sqrt{\frac{p}{n}}.\ ] ] specifically this implies that . * proof * : the proof of this lemma follows directly from duality .based on the analysis in appendix [ app : main ] ( in particular using the optimality conditions of the modified convex program ) , we have that ) \leq \lambda_n.\ ] ] we also have from the bound on the number of samples that with probability greater than ( see appendix [ app : final ] ) ) \lesssim \lambda_n\ ] ] based on the choice of in theorem [ theo : main ] , we then have the desired bound . 
standard results from convex analysis state that is a minimum of the convex program if the zero matrix belongs to the subdifferential of the objective function evaluated at ( in addition to satisfying the constraints ) .the subdifferential of the norm at a matrix is given by for a symmetric positive semidefinite matrix with svd , the subdifferential of the trace function restricted to the cone of positive semidefinite matrices ( i.e. , the nuclear norm over this set ) is given by : ~~ \leftrightarrow ~~ { \mathcal{p}}_{t(m)}(n ) = u u^t , ~ { \mathcal{p}}_{t(m)^\bot}(n ) \preceq i,\ ] ] where denotes the characteristic function of the set of positive semidefinite matrices ( i.e. , the convex function that evaluates to over this set and outside ) .the key point is that elements of the subdifferential decompose with respect to the tangent spaces and .this decomposition property plays a critical role in our analysis .in particular it states that the optimality conditions consist of two parts , one part corresponding to the tangent spaces and and another corresponding to the normal spaces and . consider the optimization problem with the additional ( non - convex ) constraints that the variable belongs to the algebraic variety of sparse matrices and that the variables belongs to the algebraic variety of low - rank matrices . while this new optimization problem is non - convex , it has a very interesting property . at a globally optimal solution ( and indeed at any locally optimal solution ) such that and are smooth points of the algebraic varieties of sparse and low - rank matrices , the first - order optimality conditions state that the lagrange multipliers corresponding to the additional variety constraints must lie in the _ normal spaces _ and .this fundamental observation , combined with the decomposition property of the subdifferentials of the and nuclear norms , suggests the following high - level proof strategy : 1 .let be the globally optimal solution of the optimization problem with the additional constraints that belong to the algebraic varieties of sparse / low - rank matrices ; specifically constrain to lie in and constrain to lie in .show first that are smooth points of these varieties .the first part of the subgradient optimality conditions of the original convex program corresponding to components _ on _ the tangent spaces and is satisfied .this conclusion can be reached because the additional lagrange multipliers due to the variety constraints lie in the normal spaces and .3 . finally show that the second part of the subgradient optimality conditions of ( without any variety constraints ) corresponding to components in the normal spaces and is also satisfied by .combining these steps together we show that satisfy the optimality conditions of the _ original convex program _ .consequently is also the optimum of the convex program . 
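the decomposition of the subdifferential with respect to the tangent and normal spaces can be checked numerically . the sketch below ( our own sanity check , with arbitrary small dimensions ) builds a candidate subgradient of the trace - plus - positive - semidefinite - indicator function of the form described earlier in this section and verifies the subgradient inequality on random positive semidefinite test points .

```python
import numpy as np

rng = np.random.default_rng(6)
p, r = 6, 2
G = rng.standard_normal((p, r))
M = G @ G.T                                    # psd, rank r
U = np.linalg.svd(M)[0][:, :r]
P_U = U @ U.T

# candidate subgradient: U U^T plus a normal-space term with spectral norm below 1
W = rng.standard_normal((p, p)); W = (W + W.T) / 2
W = (np.eye(p) - P_U) @ W @ (np.eye(p) - P_U)
W = 0.9 * W / np.linalg.norm(W, 2)
N = P_U + W

# subgradient inequality: trace(Y) >= trace(M) + <N, Y - M> for every psd Y
for _ in range(100):
    A = rng.standard_normal((p, p))
    Y = A @ A.T
    assert np.trace(Y) + 1e-9 >= np.trace(M) + np.sum(N * (Y - M))
```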
as this estimate is also the solution to the problem with the variety constraints , the algebraic consistency of be directly concluded .we emphasize here that the variety - constrained optimization problem is used solely as an analysis tool in order to prove consistency of the estimates provided by the convex program .these steps describe our broad strategy , and we refer the reader to appendix [ app : main ] for details .the key technical complication is that the tangent spaces at and are in general different .we bound the twisting between these tangent spaces by using the fact that the minimum nonzero singular value of is bounded away from zero ( as assumed in theorem [ theo : main ] and using proposition [ theo : tspace ] ) .in this section we give experimental demonstration of the consistency of our estimator on synthetic examples , and its effectiveness in modeling real - world stock return data .our choices of and are guided by theorem [ theo : main ] .specifically , we choose to be proportional to . for observe that the support / sign - pattern and the rank of the solution are the same for a _ range _ of values of . therefore one could solve the convex program for several values of , and choose a solution in a suitable range in which the sign - pattern and rank of the solution are stable .in practical problems with real - world data these parameters may be chosen via cross - validation . for small problem instanceswe solve the convex program using a combination of yalmip and sdpt3 , which are standard off - the - shelf packages for solving convex programs . for larger problem instances we use the special purpose solver logdetppa developed for log - determinant semidefinite programs . in the first set of experiments we consider a setting in which we have access to samples of the observed variables of a latent - variable graphical model .we consider several latent - variable gaussian graphical models .the first model consists of observed variables and hidden variables .the conditional graphical model structure of the observed variables is a cycle with the edge partial correlation coefficients equal to ; thus , this conditional model is specified by a sparse graphical model with degree .the second model is the same as the first one , but with latent variables .the third model consists of latent variable , and the conditional graphical model structure of the observed variables is given by a nearest - neighbor grid ( i.e. , and degree ) with the partial correlation coefficients of the edges equal to . in all three of these models each latent variable is connected to a random subset of of the observed variables ( and the partial correlation coefficients corresponding to these edges are also random ) .therefore the effect of the latent variables is `` spread out '' over most of the observed variables , i.e. , the low - rank matrix summarizing the effect of the latent variables is incoherent . for each modelwe generate samples of the observed variables , and use the resulting sample covariance matrix as input to our convex program .figure [ fig : fig1 ] shows the probability of recovery of the support / sign - pattern of the conditional graphical model structure in the observed variables and the number of latent variables ( i.e. , probability of obtaining algebraically consistent estimates ) as a function of .this probability is evaluated over experiments for each value of . 
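a data - generation sketch in the spirit of these synthetic experiments is given below ; the graph , the coupling coefficients and the rescaling rule are our own stand - ins and may differ from the exact values used in the experiments .

```python
import numpy as np

def synthetic_latent_model(p=36, h=1, rho=0.25, frac=0.8, n=1000, seed=0):
    """observed samples from a latent-variable gaussian model: a cycle among
    the p observed variables plus h hidden variables, each coupled to a
    random subset of the observed ones (illustrative values only)."""
    rng = np.random.default_rng(seed)
    K_O = np.eye(p)
    for i in range(p):
        j = (i + 1) % p
        K_O[i, j] = K_O[j, i] = rho                    # cycle edges
    K_OH = np.zeros((p, h))
    for k in range(h):
        idx = rng.choice(p, size=int(frac * p), replace=False)
        K_OH[idx, k] = rng.standard_normal(idx.size)
    K_H = np.eye(h)
    K = np.block([[K_O, K_OH], [K_OH.T, K_H]])
    while np.min(np.linalg.eigvalsh(K)) < 1e-3:        # rescale couplings until the joint model is valid
        K_OH *= 0.5
        K = np.block([[K_O, K_OH], [K_OH.T, K_H]])
    Sigma_obs = np.linalg.inv(K)[:p, :p]               # marginal covariance of the observed variables
    X = rng.multivariate_normal(np.zeros(p), Sigma_obs, size=n)
    return X, K_O, K_OH
```

given such samples , one can form the sample covariance , sweep the tradeoff parameter with a solver such as the earlier sketch , and keep a value at which the estimated support size and rank are stable , as described above .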
in all of these cases standardgraphical model selection applied directly to the observed variables is not useful as the marginal concentration matrix of the observed variables is not well - approximated by a sparse matrix .these experiments agree with our theoretical results that the convex program is an algebraically consistent estimator of a latent - variable model given ( sufficiently many ) samples of only the observed variables . in the next experiment we model the statistical structure of monthly stock returns of 84 companies in the s&p 100 index from 1990 to 2007 ; we disregard 16 companies that were listed after 1990 .the number of samples is equal to .we compute the sample covariance based on these returns and use this as input to .the model learned using for suitable values of consists of latent variables , and the conditional graphical model structure of the stock returns conditioned on these hidden components consists of edges .therefore the number of parameters in the model is .the resulting kl divergence between the distribution specified by this model and a gaussian distribution specified by the sample covariance is .figure [ fig : fig2 ] ( left ) shows the _ conditional _ graphical model structure .the strongest edges in this conditional graphical model , as measured by partial correlation , are between baker hughes - schlumberger , a.t.&t .- verizon , merrill lynch - morgan stanley , halliburton - baker hughes , intel - texas instruments , apple - dell , and microsoft - dell .it is of interest to note that in the standard industrial classification system for grouping these companies , several of these pairs are in different classes . as mentioned in section [ subsec : ps ] our method estimates a low - rank matrix that summarizes the effect of the latent variables ; in order to factorize this low - rank matrix , for example into sparse factors , one could use methods such as those described in .we compare these results to those obtained using a sparse graphical model learned using -regularized maximum - likelihood ( see for example ) , without introducing any latent variables .figure [ fig : fig2 ] ( right ) shows this graphical model structure .the number of edges in this model is ( the total number of parameters is equal to ) , and the resulting kl divergence between this distribution and a gaussian distribution specified by the sample covariance is . indeed to obtain a comparable kl divergence to that of the latent - variable model described above , one would require a graphical model with over edges .these results suggest that a latent - variable graphical model is better suited than a standard sparse graphical model for modeling the statistical structure among stock returns .this is likely due to the presence of global , long - range correlations in stock return data that are better modeled via latent variables .we have studied the problem of modeling the statistical structure of a collection of random variables as a sparse graphical model conditioned on a few additional hidden components . as a first contribution we described conditions under which such latent - variable graphical models are identifiable given samples of only the observed variables .we also proposed a convex program based on regularized maximum - likelihood for latent - variable graphical model selection ; the regularization function is a combination of the norm and the nuclear norm . 
given samples of the observed variables of a latent - variable gaussian model we proved that this convex program provides consistent estimates of the number of hidden components as well as the conditional graphical model structure among the observed variables conditioned on the hidden components .our analysis holds in the high - dimensional regime in which the number of observed / latent variables are allowed to grow with the number of samples of the observed variables .in particular we discuss certain scaling regimes in which consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables .these theoretical predictions are verified via a set of experiments on synthetic data .we also demonstrate the effectiveness of our approach in modeling real - world stock return data .several research questions arise that are worthy of further investigation .while the convex program can be solved in polynomial time using off - the - shelf solvers , it is preferable to develop more efficient special - purpose solvers that can scale to massive datasets by taking advantage of the structure of the formulation .finally it would be of interest to develop a similar convex optimization formulation with consistency guarantees for latent - variable models with non - gaussian variables , e.g. , for categorical data .we would like to thank james saunderson and myung jin choi for helpful discussions , and kim - chuan toh for kindly providing us specialized code to solve larger instances of our convex program .given a low - rank matrix we consider what happens to the invariant subspaces when the matrix is perturbed by a small amount .we assume without loss of generality that the matrix under consideration is square and symmetric , and our methods can be extended to the general non - symmetric non - square case .we refer the interested reader to for more details , as the results presented here are only a brief summary of what is relevant for this paper .in particular the arguments presented here are along the lines of those presented in .the appendices in also provide a more refined analysis of second - order perturbation errors .the resolvent of a matrix is given by , and it is well - defined for all that do not coincide with an eigenvalue of .if has no eigenvalue with magnitude equal to , then we have by the cauchy residue formula that the projector onto the invariant subspace of a matrix corresponding to all singular values smaller than is given by where denotes the positively - oriented circle of radius centered at the origin .similarly , we have that the weighted projection onto the invariant subspace corresponding to the smallest singular values is given by suppose that is a low - rank matrix with smallest nonzero singular value , and let be a perturbation of such that .we have the following identity for any , which will be used repeatedly : ^{-1 } - [ m - \zeta i]^{-1 } = - [ m - \zeta i]^{-1 } \delta [ ( m+\delta ) - \zeta i]^{-1}. 
\label{eq : matexp}\ ] ]we then have that ^{-1 } - [ m - \zeta i]^{-1 } d \zeta \nonumber \\ & = & \frac{1}{2 \pi i } \oint_{{\mathcal{c}}_\kappa } [ m - \zeta i]^{-1 } \delta [ ( m+\delta ) - \zeta i]^{-1 } d \zeta .\label{eq : projexp}\end{aligned}\ ] ] similarly , we have the following for : ^{-1 } - [ m - \zeta i]^{-1}\right\ } d \zeta \nonumber \\ & = & \frac{1}{2 \pi i } \oint_{{\mathcal{c}}_\kappa } \zeta ~ \left\{[m - \zeta i]^{-1 } \delta [ ( m+\delta ) - \zeta i]^{-1}\right\ } d \zeta \nonumber \\ & = & \frac{1}{2 \pi i } \oint_{{\mathcal{c}}_\kappa } \zeta ~ [ m - \zeta i]^{-1 } \delta [ m - \zeta i]^{-1 } d \zeta \nonumber \\ & & - \frac{1}{2 \pi i } \oint_{{\mathcal{c}}_\kappa } \zeta ~ [ m - \zeta i]^{-1 } \delta [ m - \zeta i]^{-1 } \delta [ ( m+\delta ) - \zeta i]^{-1 } d \zeta .\label{eq : wprojexp}\end{aligned}\ ] ] given these expressions , we have the following two results .[ theo : projbound ] let be a rank- matrix with smallest nonzero singular value equal to , and let be a perturbation to such that with .then we have that * proof * : this result follows directly from the expression , and the sub - multiplicative property of the spectral norm : here , we used the fact that ^{-1}\|_2 \leq \frac{1}{\sigma - \kappa} ] for . next , we develop a similar bound for .let denote the invariant subspace of corresponding to the nonzero singular values , and let denote the projector onto this subspace .[ theo : wprojbound ] let be a rank- matrix with smallest nonzero singular value equal to , and let be a perturbation to such that with .then we have that * proof * : one can check that ^{-1 } \delta [ m - \zeta i]^{-1 } d \zeta = ( i - p_{{u}(m ) } ) \delta ( i - p_{{u}(m)}).\ ] ] next we use the expression , and the sub - multiplicative property of the spectral norm : as with the previous proof , we used the fact that ^{-1}\|_2 \leq \frac{1}{\sigma - \kappa} ] for . we will use these expressions to derive bounds on the `` twisting '' between the tangent spaces at and at with respect to the rank variety .for a symmetric rank- matrix , the projection onto the tangent space ( restricted to the variety of symmetric matrices with rank less than or equal to ) can be written in terms of the projection onto the row space . for any matrix onecan then check that the projection onto the normal space ( n)= ( i - p_{u(m ) } ) ~ n ~ ( i - p_{u(m)}).\ ] ] * proof of proposition [ theo : nspace ] * : since both and are rank- matrices , we have that for .consequently , where we obtain the last inequality from proposition [ theo : wprojbound ] with . * proof of lemma [ theo : rhotspace ] * : since one can check that the largest principal angle between and is strictly less than . consequently, the mapping restricted to is bijective ( as it is injective , and the spaces have the same dimension ) . consider the maximum and minimum gain of the operator restricted to ; for any : (m)\|_2 \\ & \in & [ 1-\rho(t_1,t_2 ) , 1+\rho(t_1,t_2)].\end{aligned}\ ] ] therefore , we can rewrite as follows : (n)\|_\infty ] \\& \leq & \frac{1}{1 ~ - ~ \rho(t_1,t_2 ) } ~ \left[\xi(t_1 ) ~ + ~ \max_{n \in t_1 , \|n\|_2 \leq 1 } ~ \|[{\mathcal{p}}_{t_1 } - { \mathcal{p}}_{t_2}](n)\|_\infty \right ] \\ & \leq & \frac{1}{1 ~ - ~ \rho(t_1,t_2 ) } ~ \left[\xi(t_1 ) ~ + ~ \max_{\|n\|_2 \leq 1 } ~ \|[{\mathcal{p}}_{t_1 } - { \mathcal{p}}_{t_2}](n)\|_2 \right ] \\ & \leq & \frac{1}{1 ~ - ~ \rho(t_1,t_2 ) } ~ \left[\xi(t_1 ) ~ + ~ \rho(t_1,t_2 ) \right].\end{aligned}\ ] ] this concludes the proof of the lemma . 
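the qualitative content of these perturbation bounds ( the leading invariant subspace moves by an amount controlled by the ratio of the perturbation norm to the smallest nonzero singular value ) can also be verified numerically ; the following small check uses arbitrary dimensions and scales that are not taken from the text :

```python
import numpy as np

def leading_projector(M, r):
    # projector onto the invariant subspace of the r largest-magnitude eigenvalues
    w, V = np.linalg.eigh(M)
    U = V[:, np.argsort(np.abs(w))[::-1][:r]]
    return U @ U.T

rng = np.random.default_rng(0)
p, r = 50, 3
B = rng.standard_normal((p, r))
M = B @ B.T                                    # symmetric rank-r matrix
sigma_r = np.linalg.eigvalsh(M)[-r]            # smallest nonzero eigenvalue
E = rng.standard_normal((p, p)); E = (E + E.T) / 2
E /= np.linalg.norm(E, 2)                      # unit spectral-norm perturbation direction
for scale in (1e-3, 1e-2, 1e-1):
    Delta = scale * sigma_r * E
    gap = np.linalg.norm(leading_projector(M + Delta, r) - leading_projector(M, r), 2)
    print(f"||Delta||_2 / sigma_r = {scale:.0e}   ||P(M+Delta) - P(M)||_2 = {gap:.2e}")
```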
* proof of lemma [ theo : rg1 ] * : we have that ; therefore , .we need to bound and .first , we have \\ & \subseteq & [ \|s\|_{\infty } - \|l\|_{\infty } , \|s\|_{\infty } + \|l\|_{\infty } ] \\ & \subseteq & [ \gamma - \xi(t ) , \gamma + \xi(t)].\end{aligned}\ ] ] similarly , one can check that \\& \subseteq & [ 1 - 2\|s\|_2 , 1 + 2 \|s\|_2 ] \\ & \subseteq & [ 1 - 2 \gamma \mu(\omega ) , 1 + 2 \gamma \mu(\omega)].\end{aligned}\ ] ] thus , we can conclude that .\ ] ] where is defined in . * proof of proposition [ theo : irr ] * : before proving the two parts of this proposition we make a simple observation about using the condition that by applying lemma [ theo : rhotspace ] : here we used the property that in obtaining the final inequality .consequently , noting that ] . *part * : note that for with similarly combining these last two bounds with the bounds from the first part , we have that this concludes the proof of the proposition .here we prove theorem [ theo : main ] . throughout this sectionwe denote .further and denote the tangent spaces at the `` true '' sparse matrix and low - rank matrix .we assume that \label{eq : grange}\ ] ] we also let denote the difference between the true marginal covariance and the sample covariance .finally we let throughout this section . for in the above range we note that standard facts that we use throughout this section are that and that for any matrix .we study the following convex program : - \log\det(s - l ) ~ + ~ \lambda_n [ \gamma \|s\|_{1 } + \|l\|_\ast ] \\\mbox{s.t . } & ~~~ s - l \succ 0 . \end{aligned } \label{eq : sdpsy}\ ] ] comparing with the convex program ,the main difference is that we do not constraint the variable to be positive semidefinite in ( recall that the nuclear norm of a positive semidefinite matrix is equal to its trace ) .however we show that the unique optimum of under the hypotheses of theorem [ theo : main ] is such that ( with high probability ) .therefore we conclude that is also the unique optimum of .the subdifferential with respect to the nuclear norm at a matrix with ( reduced ) svd given by is as follows : 1 .we show that if we solve the convex program subject to the additional constraints that and for some `` close to '' ( measured by ) , then the error between the optimal solution and the underlying matrices is small .this result is discussed in appendix [ app : errbound ] .we analyze the optimization problem with the additional constraint that the variables and belong to the algebraic varieties of sparse and low - rank matrices respectively , and that the corresponding tangent spaces are close to the tangent spaces at .we show that under suitable conditions on the minimum nonzero singular value of the true low - rank matrix and on the minimum magnitude nonzero entry of the true sparse matrix , the optimum of this modified program is achieved at a _ smooth _ point of the underlying varieties .in particular the bound on the minimum nonzero singular value of helps bound the curvature of the low - rank matrix variety locally around ( we use the results described in appendix [ app : rankcurv ] ) .these results are described in appendix [ app : var ] .3 . the next step is to show that the variety constraint can be linearized and changed to a tangent - space constraint ( see appendix [ app : ts ] ) , thus giving us a _convex program_. 
under suitable conditions this tangent - space constrained program also has an optimum that has the same support / rank as the true .based on the previous step these tangent spaces in the constraints are close to the tangent spaces at the true .therefore we use the first step to conclude that the resulting error in the estimate is small .4 . finally we show that under the identifiability conditions of section [ sec : iden ] these tangent - space constraints are inactive at the optimum ( see appendix [ app : final ] ) .therefore we conclude with the statement that the optimum of the convex program without any variety constraints is achieved at a pair of matrices that have the same support / rank as the true ( with high probability ) .further the low - rank component of the solution is positive semidefinite , thus allowing us to conclude that the original convex program also provides estimates that are consistent .consider the taylor series of the inverse of a matrix : where .\ ] ] this infinite sum converges for sufficiently small .the following proposition provides a bound on the second - order term specialized to our setting : * proof * : we have that where the second - to - last inequality follows from the range for and that ] , we conclude that = z , \label{eq : bfpteq1}\ ] ] with . since the optimum is unique , one can check using lagrangian duality theory that is the unique solution of the equation . rewriting in terms of the errors , we have using the taylor series of the matrix inverse that ^{-1 } \nonumber \\ & = & e_n - r_{\sigma^\ast_o}({\mathcal{a}}(\delta_s,\delta_l ) ) + { \mathcal{i}}^\ast { \mathcal{a}}(\delta_s,\delta_l ) \nonumber \\ & = & e_n - r_{\sigma^\ast_o}({\mathcal{a}}(\delta_s,\delta_l ) ) + { \mathcal{i}}^\ast { \mathcal{a}}{\mathcal{p}}_{\mathcal{y}}(\delta_s,\delta_l ) + { \mathcal{i}}^\ast { \mathcal{c}}_{t'}. \label{eq : bfpteq2}\end{aligned}\ ] ] since is a tangent space such that , we have from proposition [ theo : irr ] that the operator from to is bijective and is well - defined . now consider the following matrix - valued function from to : - z \right\}.\ ] ]a point is a fixed - point of if and only if = z ] .we prove each step separately . for the fourth step let denote the minimum singular value of .consequently , \geq 8 \|\delta_l\|_2.\ ] ] using the same reasoning as in the proof of the second step , we have that hence notice that this corollary applies to _ any _ , and is hence applicable to _ any solution _ of the -constrained program .for now we choose an arbitrary solution and proceed . in the next stepswe show that is _ the unique _ solution to the convex program , thus showing that is also the unique solution to .given the solution , we show that the solution to the convex program with the tangent space constraint is the same as under suitable conditions : - \log\det(s - l ) ~ + ~ \lambda_n [ \gamma \|s\|_{1 } + \|l\|_\ast ] \\ \mbox{s.t . } & ~~~ s - l \succ 0 , ~~ s \in \omega , ~~ l \in t_\mathcal{m}. 
\end{aligned } \label{eq : sdptsm}\ ] ] assuming the bound of corollary [ theo : nlpcor ] on the minimum singular value of the uniqueness of the solution is assured .this is because we have from proposition [ theo : irr ] and from corollary [ theo : nlpcor ] that is injective on .therefore the hessian of the convex objective function of is strictly positive - definite at .[ theo : avtots ] let be in the range specified by .suppose that the minimum nonzero singular value of is such that ( is defined in corollary [ theo : nlpcor ] ) .suppose also that the minimum magnitude nonzero entry of is greater than or equal to ( is defined in corollary [ theo : nlpcor ] ) .let .further suppose that then we have that 1 .first we can change the non - convex constraint to the linear constraint .this is because the lower bound assumed for implies that is a smooth point of the algebraic variety of matrices with rank less than or equal to ( from corollary [ theo : nlpcor ] ) .due to the convexity of all the other constraints and the objective , the optimum of this `` linearized '' convex program will still be .2 . next we can again apply corollary [ theo : nlpcor ] ( based on the bound on ) to conclude that the constraint is _ locally inactive _ at the point .consequently , we have that can be written as the solution of a _ convex program _ : - \log\det(s - l ) ~ + ~ \lambda_n [ \gamma \|s\|_{1 } + \|l\|_\ast ] \\ \mbox{s.t .} & ~~~ s - l \succ 0 , ~~ s \in \omega , ~~ l \in t_\mathcal{m } , \\ & ~~~ { g_\gamma}({\mathcal{a}}^\dag { \mathcal{i}}^\ast { \mathcal{a}}(s - { s^\ast } , { l^\ast}- l ) ) \leq 11 \lambda_n .\end{aligned } \label{eq : sdptsmg}\ ] ] we now need to argue that the constraint is also inactive in the convex program .we proceed by showing that the solution of the convex program has the property that , which concludes the proof of this proposition .we have from corollary [ theo : nlpcor ] that .since by assumption , one can verify that & \leq & \frac{8\lambda_n}{\alpha}\left[1 + \frac{\nu}{3(2-\nu ) } \right ] \\ & = & \frac{16 ( 3-\nu ) \lambda_n}{3 \alpha ( 2-\nu ) } \\ & \leq & \min\left\{\frac{1}{4 c_1 } , \frac{\alpha \xi(t)}{64 d \psi c_1 ^ 2}\right\}.\end{aligned}\ ] ] the last line follows from the assumption on .we also note that from corollary [ theo : nlpcor ] , which implies that . letting , we can conclude from proposition [ theo : bfpt ] that .next we apply proposition [ theo : rem ] ( as ) to conclude that from the optimality conditions of one can also check that for , \\ & \leq & 4 \left[\frac{2 ( 3-\nu ) \lambda_n}{3 ( 2-\nu)}\right].\end{aligned}\ ] ] here we used in the last inequality , and also that ( as noted above from corollary [ theo : nlpcor ] ) and that .therefore , because ] , we conclude that = z , \label{eq : tscond}\ ] ] with .it is clear that the optimality condition of the convex program ( without the tangent - space constraints ) on is satisfied .all we need to show is that ) <\label{eq : offts}\ ] ] rewriting in terms of the error , we have that restating the condition on , we have that .\label{eq : tscond2}\ ] ] ( recall that . ) a sufficient condition to show and complete the proof of this lemma is that ).\ ] ] we prove this inequality next . recall from corollary [ theo : nlpcor ] that .therefore , from equation we can conclude that ) ) \\& \leq & \lambda_n + 2\left[\frac{3 \lambda_n \nu}{6 ( 2-\nu ) } \right ] \\ & = & \frac{2\lambda_n}{2-\nu}.\end{aligned}\ ] ] here we used the bounds assumed on and on . 
applying the second part of proposition [ theo : irr ] , we have that ) \\ & \leq & \lambda_n - { g_\gamma}({\mathcal{p}}_{{\mathcal{y}}^\bot } { \mathcal{a}}^\dag [ - e_n + r_{\sigma^\ast_o } ( { \mathcal{a}}(\delta_s,\delta_l ) ) - { \mathcal{i}}^\ast { \mathcal{c}}_{t_\mathcal{m}}]).\end{aligned}\ ] ] here the second - to - last inequality follows from the bounds on , , and , and the last inequality follows from lemma [ theo : gg ] .this concludes the proof of the lemma . all the analysis described so far in this section has been completely deterministic in nature .here we present the probabilistic component of our proof . specifically, we study the rate at which the sample covariance matrix converges to the true covariance matrix . the following result from plays a key role in our analysis : [ theo : davsz ] given natural numbers with ,let be a matrix with i.i.d .gaussian entries that have zero - mean and variance . then the largest and smallest singular values and of are such that , \pr\left[{s}_p(\gamma ) \leq 1 - \sqrt{\tfrac{p}{n } } - t \right ] \right\ } \leq \exp\left\{-\tfrac{n t^2}{2}\right\},\ ] ] for any . using this resultthe next lemma provides a probabilistic bound between the sample covariance formed using samples and the true covariance in spectral norm .this result is well - known , and we mainly discuss it here for completeness and also to show explicitly the dependence on defined in .[ theo : problemma ] let . given any with ,let the number of samples be such that .then we have that \leq 2 \exp\left\{-\tfrac{n \delta^2}{128 \psi^2}\right\}.\ ] ] * proof * : since the spectral norm is unitarily invariant , we can assume that is diagonal without loss of generality .let , and let denote the largest / smallest singular values of .note that can be viewed as the sample covariance matrix formed from independent samples drawn from a model with identity covariance , i.e. , where denotes a matrix with i.i.d .gaussian entries that have zero - mean and variance .we then have that & \leq & \pr\left[\|\bar{\sigma}^n ~ - i\|_2 \geq \tfrac{\delta}{\psi}\right ] \\ & \leq & \pr\left[{s}_1(\bar{\sigma}^n ) \geq 1 + \tfrac{\delta}{\psi } \right ] + \pr\left[{s}_p(\bar{\sigma}^n ) \leq 1 - \tfrac{\delta}{\psi } \right ] \\ & = & \pr\left[{s}_1(\gamma)^2 \geq 1 + \tfrac{\delta}{\psi } \right ] + \pr\left[{s}_p(\gamma)^2 \leq 1 - \tfrac{\delta}{\psi } \right ] \\ & \leq & \pr\left[{s}_1(\gamma ) \geq 1 + \tfrac{\delta}{4\psi } \right ] + \pr\left[{s}_p(\gamma ) \leq 1 - \tfrac{\delta}{4\psi } \right ] \\ &\leq & \pr\left[{s}_1(\gamma ) \geq 1 + \sqrt{\tfrac{p}{n } } + \tfrac{\delta}{8 \psi } \right ] + \pr\left[{s}_p(\gamma ) \leq 1 - \sqrt{\tfrac{p}{n } } - \tfrac{\delta}{8 \psi } \right ] \\ & \leq & 2 \exp\left\{-\tfrac{n \delta^2}{128 \psi^2}\right\}.\end{aligned}\ ] ] here we used the fact that in the fourth inequality , and we applied theorem [ theo : davsz ] to obtain the final inequality by setting . [ theo : samples ] let be the sample covariance formed from samples of the observed variables . set . if , then we have with probability greater than that \geq 1 - 2 \exp\{-p\}.\ ] ] in this section we tie together the results obtained thus far to conclude the proof of theorem [ theo : main ] . 
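as an aside , the concentration of the sample covariance established above can be checked with a quick monte carlo experiment ; the covariance model , the dimensions and the number of trials below are arbitrary illustrative choices :

```python
import numpy as np

rng = np.random.default_rng(1)
p = 50
A = rng.standard_normal((p, p))
true_cov = A @ A.T / p + np.eye(p)             # an arbitrary "true" covariance
chol = np.linalg.cholesky(true_cov)

for n in (100, 400, 1600, 6400):
    devs = []
    for _ in range(20):
        X = chol @ rng.standard_normal((p, n)) # n i.i.d. gaussian samples as columns
        devs.append(np.linalg.norm(X @ X.T / n - true_cov, 2))
    print(f"n = {n:5d}   mean spectral-norm deviation = {np.mean(devs):.3f}")
```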
we only need to show that the sufficient conditions of lemma [ theo : mainlemma ] are satisfied .it follows directly from corollary [ theo : avtotscor ] that the low - rank part is positive semidefinite , which implies that is also the solution to the original regularized maximum - likelihood convex program with the positive - semidefinite constraint . as usual set , and set . 1 .let , and let the number of samples be such that note that .2 . set , and then set as follows : note that .3 . let the minimum nonzero singular value of be such that where is defined in corollary [ theo : nlpcor ] .note that .4 . let the minimum magnitude nonzero entry of be such that where is defined in corollary [ theo : nlpcor ] .note that .* proof of theorem [ theo : main ] * : we condition on the event that , which holds with probability greater than from corollary [ theo : samples ] as by assumption .we note that based on the bound on , we also have that .\ ] ] in particular , these bounds imply that and that both these weaker bounds are used later . based on the assumptions above, the requirements of lemma [ theo : mainlemma ] on the minimum nonzero singular value of and the minimum magnitude nonzero entry of are satisfied .we only need to verify the bounds on and from proposition [ theo : avtots ] , and the bound on from lemma [ theo : mainlemma ] .finally we provide a bound on the remainder by applying propositions [ theo : bfpt ] and [ theo : rem ] , which would satisfy the last remaining condition of lemma [ theo : mainlemma ] . in order to apply proposition [ theo : bfpt ] , we note that & \leq & \frac{8}{\alpha } \left[\frac{\nu}{3 ( 2-\nu ) } + 1 \right ] \lambda_n \nonumber \\ & = & \frac{16(3-\nu)\lambda_n}{3 \alpha ( 2-\nu ) } \nonumber \\ & = & \frac{32(3-\nu)d}{\alpha \xi(t ) \nu } \delta_n \label{eq : lambound}\\ & \leq & \min\left\{\frac{1}{4c_1},\frac{\alpha \xi(t)}{64 d \psi c_1 ^ 2 } \right\}. \nonumber\end{aligned}\ ] ] in the first inequality we used the fact that ( from above ) and that is similarly bounded ( from corollary [ theo : nlpcor ] due to the bound on ) . in the second equality we used the relation . in the final inequality we used the bound on from .this satisfies one of the requirements of proposition [ theo : bfpt ] .the other condition on is also similarly satisfied due to the bound on from corollary [ theo : nlpcor ] .specifically , we have that from corollary [ theo : nlpcor ] , and use the same sequence of inequalities as above to satisfy the second requirement of proposition [ theo : bfpt ] .thus we conclude from proposition [ theo : bfpt ] and from that this bound implies that , which proves the parametric consistency part of the theorem . since the bound also satisfies the condition of proposition [ theo : rem ] ( from the inequality following above we see that ) , we have that \frac{d \delta_n}{\xi(t ) } \\ & \leq & \frac{d \delta_n}{\xi(t ) } \\ & = & \frac{\lambda_n \nu}{6 ( 2-\nu)}.\end{aligned}\ ] ] in the final inequality we used the bound on , and in the final equality we used the relation . this concludes the algebraic consistency part of the theorem . | suppose we have samples of a _ subset _ of a collection of random variables . no additional information is provided about the number of latent variables , nor of the relationship between the latent and observed variables . is it possible to discover the number of hidden components , and to learn a statistical model over the entire collection of variables ? 
we address this question in the setting in which the latent and observed variables are jointly gaussian , with the conditional statistics of the observed variables conditioned on the latent variables being specified by a graphical model . as a first step we give natural conditions under which such latent - variable gaussian graphical models are identifiable given marginal statistics of only the observed variables . essentially these conditions require that the conditional graphical model among the observed variables is sparse , while the effect of the latent variables is `` spread out '' over most of the observed variables . next we propose a tractable convex program based on regularized maximum - likelihood for model selection in this latent - variable setting ; the regularizer uses both the norm and the nuclear norm . our modeling framework can be viewed as a combination of dimensionality reduction ( to identify latent variables ) and graphical modeling ( to capture remaining statistical structure not attributable to the latent variables ) , and it consistently estimates both the number of hidden components and the conditional graphical model structure among the observed variables . these results are applicable in the high - dimensional setting in which the number of latent / observed variables grows with the number of samples of the observed variables . the geometric properties of the algebraic varieties of sparse matrices and of low - rank matrices play an important role in our analysis . * keywords * : gaussian graphical models ; covariance selection ; latent variables ; regularization ; sparsity ; low - rank ; algebraic statistics ; high - dimensional asymptotics |
during the last decade , the development of non - volatile electronic memories based on the resistive switching ( rs ) effect in transition metal oxides made huge progress , becoming one of the promising candidates to substitute the standard technologies in the near future .rs refers to the reversible change of the resistance of a nanometer - sized medium by the application of electrical pulses .though immediately after the discovery of the reversible rs effect in transition metal oxides its application for multi - level cell memories ( mlc ) was envisioned , reports so far chiefly focused on single - level cell ( slc ) devices . unlike slc , which can only store one bit per cell , mlc memories may store multiple bits in a single cell . mlcs of up to 4 bits ( 16 levels ) are currently available based on both standard flash and phase - change technologies . in an -bit rs - based mlc memory , one has to encode the memory levels as distinct resistance states .thus , the memory states can be identified with each one of the consecutive and adjacent resistance bins of width , with . to store the memory state the device has to be set to the corresponding bin , ie .the requirement of non - volatility implies that the value should remain within that bin , even after the input is disconnected or the memory state is read out . in fig .[ fig : zoom ] we plot data of a random sequence of stored memory states in our implementation of a 64-level ( 6-bit ) mlc . for practical convenience resistance levels are translated into voltage levels by a fixed bias current .the central component of the mlc is an rs device of a resistance , whose magnitude can be changed by application of a current pulse .the resistance change depends on through a highly non - trivial function ; the function is usually unknown , however , an important general requirement for non - volatile memory applications using rs is that for currents below a given threshold .this allows one to sense ( ie , read out ) a stored memory state , ie the so - called remnant resistance , by injecting a bias current without modifying the stored information .thus , , .a two - level memory can be simply implemented by pulsing an rs - device strongly with opposite polarities and sensing the corresponding high and low states with a weak bias current .however , the previous observation of a current threshold has an important consequence for the implementation of mlc memories .in fact , in order to implement an mlc one should be able to tune the value of to any desired value .the better the control on this tuning , the better the ability to define the bins , and the larger the number of bits that could be coded in the mlc . in principle , a perfect knowledge of the function should allow for this , but in practice the function is not known .yet , any reasonable form of would allow one to tune any arbitrary memory state by applying a sequence of pulses of decreasing intensity , following a simple zero - finding algorithm as in a standard control circuit .nevertheless , the requirement of a threshold current makes the fine tuning difficult , as small corrections beneath have no effect . in practice , the dead - zone introduced by relatively large values of hampers a straightforward implementation of rs - devices as mlc memories with a large number of levels .below we shall demonstrate how this conceptual problem can be overcome , and exhibit the implementation of a 64-level mlc ., and a write state where current pulses are applied in order to cancel the difference between and .
when wr is deactivated , is forced to zero . ] we adopt a manganite - based rs - device , made by depositing silver contacts on a sintered pellet of la ( lpcmo ) .an rs - device is defined between the pulsed ag / lpcmo and a second non - pulsed contact . a third electrode ( earth ) is required in a minimal 3-contact configuration setup .a requirement for our mlc is that the rs - device operates in bipolar rs mode , ie , depending on the pulse polarity the remnant resistance either increases or decreases .it is now well established that the mechanism behind bipolar rs is the redistribution of oxygen vacancies within the nanometer - scale region of the sample in contact with the electrodes . in the case of lpcmo , the oxygen vacancies significantly increase its resistivity because the electrical transport relies on the double - exchange mechanism mediated by the oxygen atoms .pulsing an electrical current through the contact will produce a large electric field at the interface due to the high resistivity of the ag / lpcmo schottky barrier .if the pulse is strong enough , it will enable the migration of oxygen ions across the barrier , modifying the concentration of vacancies and , hence , changing the interface resistance .the ionic migration always remains near the interface and does not penetrate deep into the bulk , since the much larger conductivity there prevents the development of high electric fields .thus , the rs effect remains confined to a nanometer - sized region near the interface , as schematically depicted in fig .[ phys ] .we now introduce a practical implementation of an rs - based mlc that produced the results shown in fig .[ fig : zoom ] .we used an ag / lpcmo interface and off - the - shelf electronic components .the block diagram of the concept is presented in fig .[ f4 ] and the schematics and technical details are included in the appendix .we envisage an implementation of an rs - based mlc memory chip where the storage core is a set of many rs - units with a single common control circuit .the common control would set each of the individual rs - units , one at a time , thus resembling the concept of the refresh logic circuit in drams . in this work we demonstrate the implementation of the control circuit with a single rs memory unit .key to our mlc implementation is to adopt a discrete - time algorithm that overcomes the problems discussed above , and which we describe next .the required memory state is coded in terms of a , while indicates the actual stored value ( fig . ) .the system iteratively applies pulses of a strength that is an estimate of the required value to set the target state , eventually converging to it .this discrete - time feedback loop continuously cycles between two stages : probe and correct . in the probe stage the switch connects the rs - device to the current source in order to sense the remnant resistance . in the correct stage , the switch connects the rs - device to a pulse generator that applies a corrective pulse , of a strength that is obtained from the difference between the delayed and the target value as follows , =k_{p}\,e\left[k\right]+k_{i}\,\sum_{i=0}^{k}e\left[i\right],\label{eq : pi}\ ] ] where the error signal =v_{out}\left[k-1\right]-v_{in}\left[k\right] ] , being the frequency of the system clock and an integer . and are generic proportional and integral constants respectively with unit , being and the electric current and voltage units .the first term of the equation represents a proportional estimator .
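the following toy sketch shows how such a probe / correct loop can be simulated in discrete time , including both the proportional and the integral term of eq . [ eq : pi ] ; the device response , the dead - zone threshold and every numerical constant are illustrative assumptions , not measured parameters of the ag / lpcmo cell :

```python
import numpy as np

def write_level(v_target, n_cycles=200, k_p=1.0, k_i=0.1,
                i_th=0.05, g=5.0, i_bias=0.1, r_init=5.0):
    """toy probe/correct loop for a single cell (all constants and the
    symmetric device response are illustrative assumptions)."""
    r_rem, acc, trace = r_init, 0.0, []
    for k in range(n_cycles):
        v_out = i_bias * r_rem                  # probe: sense R_rem with a weak bias current
        err = v_out - v_target                  # e[k] = V_out[k-1] - V_in[k] (delayed reading)
        acc += err
        i_pulse = k_p * err + k_i * acc         # corrective pulse strength, eq. [eq:pi]
        if i_pulse > i_th:                      # dead zone: weak pulses leave R_rem unchanged
            r_rem -= g * (i_pulse - i_th)
        elif i_pulse < -i_th:
            r_rem -= g * (i_pulse + i_th)
        r_rem = min(max(r_rem, 0.5), 10.0)      # keep the toy device in its operating range
        trace.append(v_out)
    return np.array(trace)

levels = np.linspace(0.1, 0.5, 64)              # 64 target voltage levels (arbitrary units)
print(write_level(levels[37])[-3:], "target:", round(levels[37], 3))
```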
the second term prevents the system from getting stuck in a condition where .in fact , for low ] would lie below the threshold , thus not producing any further change in the state of the system .the magnitude of the second term increases linearly in time , eventually overcoming the threshold and correcting the output voltage in the desired direction . notice that even if this approach resembles a standard proportional - integral ( pi ) control loop with dead - zone , there are substantial differences .first , the remnant resistance reading and the correcting pulse application occur at different times .this requires the addition of the continuously commuting switch .second , we also needed the introduction of a delay , implemented as a sample and hold circuit , as is required for the feedback path . these differences and the strong non - linearity in make the stability analysis of this approach ( which also depends on the specific values of and ) a significant issue that we describe below .the study of the system stability is based on the discrete - time model presented in fig .[ fig : concept ] . in the z - domain , [ eq : pi ] becomes is compared with the output signal after being delayed , generating the error signal .based on this signal , the estimator generates the corrective pulses . ] the resulting after these corrective pulses is probed by connecting the rs - device to the bias current source ( ) , obtaining the output signal ] generating the error signal ] .a concrete implementation of nl is presented below \mathrm{nl}\left(i_{pulse}\left[k\right]\right)=\begin{cases } \left(i_{pulse}\left[k\right]+i_{th}\right)\frac1a & i_{pulse}\left[k\right]<-i_{th}\\ 0 & -i_{th}<i_{pulse}\left[k\right]<i_{th}\\ \left(i_{pulse}\left[k\right]-i_{th}\right)\,u_{1 } & i_{pulse}\left[k\right]>i_{th}\ , .\end{cases}\label{eq : nl}\ ] ] the unit of is .when a negative signal exceeds the threshold <-i_{th} ] with slope 1 . for a positive signal greater than , increases with slope . for the sake of stability analysis , [ eq : rrem ] is simplified as : =v\sum_{j=-\infty}^{k}\mathrm{nl}\left(i_{pulse}\left[j\right]\right).\label{eq : c k - domain}\ ] ] in this way , the analysis does not consider any instability when sensing with .indeed , our actual implementation did not present any critical issue at this level ( see the appendix ) . from now on , a unit - step sequence is considered as the input signal .the steady - state error for the system with ( just proportional control ) and is =\frac{i_{th}}{k_{p}} ] the excitation of the rs - device is \right|\leq i_{th} ] makes ] ( and ) .the transfer function of the system is . for ( typical value ) the system is critically damped when , and remains stable for .increasing moves the location of the poles following , arriving at when , where the system becomes unstable for any ( see fig .5 ( m ) ) .although the introduction of nonlinearities in eq .[ eq : nl ] does not allow us to study the system analytically as in the previous section , its response can be simulated numerically .in fact , simulations were performed in the -space by solving eq .5 , after substituting ] ( eq .4 ) . for the computation of ] and the unitary step function at the input ( ie .=1v ] is `` frozen '' because \right|\leq i_{th} ] ( see eq .[ eq : nl ] ) .in fact , fig .
[fig : pi](j ) evidences a clear difference between the system speed when ] is decreasing .the case for is equivalent to a case where and .simulations also show a change in the stability limit as reported in the following table : [ cols="^,^,^",options="header " , ] summarizing , the simulations show that the system stability is not compromised after the introduction of nonlinearities , even if in some conditions , the system might require a considerably higher number of cycles to stabilize .two tests were done to evaluate the performance of the mlc memory .experimental results for the memory retention test are presented in fig .[ results ] .we emphasize that the relatively slow operation speed of our memory mlc would be dramatically improved upon device miniaturization and integration . for simplicity , we sampled a random subset of 16 out of 64 different voltages uniformly distributed along the operative range ( fig .[ results](a ) ) .each write and read cycle had a duration of 13 seconds .in the first 5 s the wr signal was active and a memory value was set . after a memory value was set , we observed a small relaxation of with time constant of .6 s. the value of remained essentially stable afterwards .thus , for the sake of performing a large number measurements , we only monitored the memory retentivity during the 5 time - constants ( ie , 8 seconds ) that immediately followed the set . within that period ,the wr signal was inactive , and the memory state stored in was probed every 0.1 s ( ie , the memory was read out 80 times ) .continuously sampled at 10 reading per second . during the readings indicated with the red bars the signal wr was active ;the output value accurately follows the sequence of input values that is randomly chosen on the grid . during the readings indicated with blue bars ,no pulses were applied to the rs - device , resulting in a drift .( b ) histogram of the last read value of the memory in each cycle ( ie , last value of blue sections ) .the horizontal bars denote the intervals corresponding to a 16 , 32 and 64 level devices .the location of the bars are chosen so to minimize the error probability ( ie , area of histogram outside the interval ) . ] in fig .[ results](b ) we present the histogram of the distribution of observed errors in the reading of a given stored memory value .the observed values correspond to the remnant resistance of the memory cell , after reading it 80 times at a 10 hz rate , following the set .the histogram is constructed from a sequence of 460 memory state recordings .the finite dispersion of the histogram is the main limitation for implementing a high number of memory levels . to easily visualize the retentivity performance of the memory for increasing number of levels , we indicate with horizontal bars the resistance interval ( = ) that corresponds to having a 16 , 32 and 64 mlc memory ( ie , 4 , 5 and 6 bits , respectively ) .the probability of error in storing a memory state corresponds to the area of the histogram that lays outside the resistance interval normalized to the total area . with a confidence level of 95% , the probability that the mlc fails to retain a stored state is 0.21.05 for a 64-level system ( 6 bit mlc ) .this error probability rapidly improves as the number of levels is decreased , being 0.012 for a 16-level mlc .the previous data on the performance of the memory can also be expressed in terms of the bit error rates ( ber ) . 
in our mlc , we find ber.006 for the 16-level system , ber.07 for the 32-level system and ber.1 for the 64-level system .lower error probabilities for bits than for levels , reflects the fact that , typically , the error corresponds to the memory level drifting to an immediate neighbor level ( that often shares a large number of bits ) .in conclusion , we have introduced a feedback algorithm to precisely set the remnant resistance of a rs - device to an arbitrary desired value within the device working range .this overcomes the conceptual problem of fine tuning a resistance value in non - volatile rs - devices for mlc applications , due the presence of a threshold current behavior .the feedback configuration is intrinsically time - discrete , since it is based on read / write sequences. the overhead in the writing time introduced in this feedback system may be a limitation for its utilization as primary memory in a computer system , but it can easily compete with actual speed of the current mass storage devices ( flash memories ) .the applicability of the concept to implement an n - bit mlc memory was successfully demonstrated for = 4 , 5 and 6 , with an lpcmo - based circuit ; which clearly illustrates the critical trade - off between ber , number of memory levels and ( power and area ) overhead of the control circuitry .[ fig : simplified - circuit ] shows a simplified circuit of our implementation .the estimator circuit ( see fig . [ f4 ] and fig .[ fig : concept ] ) is implemented by ic100d , ic101a and the high - current buffer a3 ( and associated components ) .it compares the output of the sample and hold ( s&h ) against the input signal divided by 2 , and then computes the pi function . and are set by means of p101 and p100 respectively . during the correct state , corrective pulsesare applied to the rs - devices ( lpcmo ) , by closing s1 during .c102 and r109 introduce a time constant of , reducing the rise time of the pulses in order to avoid overshoots . in the probe state , ic103b closes first and after , ic103a .also , s4 clamps the inverting input of a3 to + .the timing of the circuit is generated by a timing circuit based on the master clock clk ( see fig .a current flows trough the lpcmo ( v+ = 7.5v , v- = -7.5v ) .the voltage drop at the switching interface of the rs - device ( ranging 20 - 50 mv ) is amplified by the instrumentation amplifier a1 ( gain=21 ) , eventually setting the voltage at the output of the s&h .both interfaces of the rs - device behave complementary ( ie .when one interface reduces its resistance , the other increases ) , then the total drop across the device is mv and quite insensitive to the state , and therefore , .m. j. rozenberg , m. j. snchez , r. weht , c. acha , f. gomez - marlasca , and p. levy , `` mechanism for bipolar resistive switching in transition - metal oxides , '' _ phys .b _ , vol . 81 , p. 115101, mar 2010 .r. waser , r. dittmann , g. staikov , and k. szot , `` redox - based resistive switching memories - nanoionic mechanisms , prospects , and challenges , '' _ advanced materials _ , vol .21 , no .25 - 26 , pp . 26322663 , 2009 .h. lee , p. chen , t. wu , y. chen , c. wang , p. tzeng , c. lin , f. chen , c. lien , and m. tsai , `` low power and high speed bipolar switching with a thin reactive ti buffer layer in robust hfo2 based rram , '' in _ electron devices meeting , 2008 .iedm 2008 .ieee international _ , pp . 14 , ieee , 2008 .f. alibart , l. gao , b. d. hoskins , and d. b. 
strukov , `` high precision tuning of state for memristive devices by adaptable variation - tolerant algorithm , '' _ nanotechnology _ , vol .23 , no .7 , p. 075201, 2012 . c. schindler , s. thermadam , r. waser , and m. kozicki , `` bipolar and unipolar resistive switching in cu - doped sio , '' _ electron devices , ieee transactions on _ , vol .54 , no .10 , pp . 27622768 , 2007 .n. papandreou , h. pozidis , a. pantazi , a. sebastian , m. breitwisch , c. lam , and e. eleftheriou , `` programming algorithms for multilevel phase - change memory , '' in _ proc .ieee int circuits and systems ( iscas ) symp _ , pp .329332 , 2011 . | we study the resistive switching ( rs ) mechanism as way to obtain multi - level memory cell ( mlc ) devices . in a mlc more than one bit of information can be stored in each cell . here we identify one of the main conceptual difficulties that prevented the implementation of rs - based mlcs . we present a method to overcome these difficulties and to implement a 6-bit mlc device with a manganite - based rs device . this is done by precisely setting the remnant resistance of the rs - device to an arbitrary value . our mlc system demonstrates that transition metal oxide non - volatile memories may compete with the currently available mlcs . multilevel cell , resistive switching , non - volatile memory , reram . |
one of the most intriguing observations made over the years in financial data analysis concerns the tendency of financial forecasters to imitate each other .an account of this phenomenon is found in , where earning forecasts from different markets over a 17 year period are analyzed , showing ( among other results ) that : ( a ) forecasts are typically optimistic , that is the difference between the forecast and the actual earning is positive on average ; and ( b ) the spread of forecasts among analysts is significantly smaller on average than the forecast error , i.e. the typical spread of forecasts around the real earning . in other words ,analysts forecasts are more similar to each other than they are to the variable they are trying to forecast .more recently , a similar data set has been studied to estimate the fraction of herding analysts , with the surprising conclusion that around 75% of the analysts in the data set displayed a marked tendency to herd as a forecasting strategy .interestingly , about 10% of analysts were instead found to be anti - herding .stock prices in turn tend to react more strongly to forecasts that differ from the consensus ( see e.g. for a recent study ) . while psychological factors like social pressure , reputation issues and ( for anti - herding ) a desire for visibility can be crucial in determining this scenario , it is quite difficult to explain these results within the efficient market hypothesis ( emh ) . in the emh world , one would expect different forecasters to use their respective partial information to produce a proxy for the target variable that is unbiased and such that the forecast dispersion and the forecast error are roughly the same , as would result from forecasters that are independent and fully heterogeneous with respect to information and forecasting ability .different models in the economic literature have addressed the problem of the origin of herding behavior among financial forecasters in a bayesian game theoretic setting ( see e.g. ) , proving that if an analyst aims at maximizing the value of his reputation with investors ( or the chances that investors believe he s a good forecaster ) then it may actually be more profitable for him to replicate other agents forecasts rather than putting forward a guess based on his private information . a recent agent - based model proposed by curty and marsili focuses instead on the limits that herding imposes on the efficiency with which information is aggregated .specifically , it was shown that when the fraction of herders in a population of agents increases , the probability that herding produces the correct forecast ( i.e. that individual information bits are correctly aggregated ) undergoes a transition to a state in which either all herders forecast rightly , or no herder does . in this notewe study a variation on the theme by curty and marsili , aiming at characterizing further the dynamical interplay between learning and heterogeneity of information in a population of agents aiming to predict a continuous exogenous random variable ( learning was briefly considered in a discrete forecasting setting also in ) . 
at each time step, every agent is required to formulate a forecast either using his private information or by herding with a group of peers and selects the strategy to adopt based on his past performance .we show that the structure of the agents choices changes significantly depending on the heterogeneity of information .in particular , herding becomes increasingly preferred by agents as information becomes more and more unevenly spread across the population .however , the herding coefficient ( measured by the ratio of the forecast error to the forecast dispersion ) peaks roughly where informational inhomogeneity is maximal , implying that learning in such conditions does not allow for an efficient aggregation of the available information .the results we discuss are mostly obtained by computer simulations .deeper analytical progress ( beyond the simple considerations made here ) could be possible either along the lines of or by reasonably simplifying the coupled herding and learning mechanisms .following , we consider a population of agents ( labeled ) who have to forecast at each time step a continuous random variable drawn independently at each from a uniform distribution in ] is within from , hence we are assuming that the private information has better - than - random predictive power .larger values of correspond to higher forecasting abilities .we assume , along the lines of , that the s are sampled independently from \label{distr}\ ] ] tuning one passes from a situation in which almost all agents are well informed ( small ) , to one in which almost all agents have no forecasting ability ( large ) ; as increases , the information heterogeneity ( or the a priori forecasting ability ) reaches a maximum when , corresponding to a uniform distribution .we shall denote by the average value of , given by .when herding , an agent uses instead a prediction obtained by pooling a group of peers chosen randomly and uniformly for each ( our results do not appear to depend significantly on as long as is not extensive ; we shall use here ) .this represents , in analogy with e.g. , the `` contact network '' of the agent . our choice for a plain erds - renyi topology parallels that made in , but resultsare expected to change if different topologies are employed .the herding forecast is defined as the fixed point of the iterative process with . in words , at time step every agent only interacts with his peers whose initial ( private ) guess is sufficiently close , within a range measured by , to his .the `` effective interaction range '' is a crucial parameter to model social interaction and has proved to play a non trivial role in other contexts .note that the number of such peers is obviously bounded by but it fluctuates in time ; furthermore , in the above sum denotes the forecast of agent who may be herding ( in which case changes with ) or not ( in which case it is fixed to ) .this defines a dynamical `` imitation network '' onto the contact network , close in spirit to that defined in in the context of minority games . in this case , however , the averaging operation through which herding is performed does not allow for a straightforward identification of an imitation hierarchy . 
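a minimal sketch of this bounded - confidence averaging step is given below ; variable names , the choice of the unit interval for the forecasts , and whether an agent 's own forecast enters his average are illustrative choices not fixed by the text :

```python
import numpy as np

def herding_forecasts(x, herder, peers, eps, max_iter=500, tol=1e-10):
    """fixed point of the peer-averaging process described above.
    x: private forecasts; herder: boolean array (True = agent herds);
    peers: list of index arrays (contact group of each agent); eps: interaction range."""
    f = x.astype(float).copy()
    for _ in range(max_iter):
        f_new = f.copy()
        for i in np.where(herder)[0]:
            # only peers whose *initial* private guess is within eps of agent i's
            close = [j for j in peers[i] if abs(x[j] - x[i]) <= eps]
            if close:
                f_new[i] = np.mean(f[close])
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f

rng = np.random.default_rng(0)
n, d, eps = 200, 8, 0.15
x = rng.uniform(0.0, 1.0, n)                          # private forecasts (interval is illustrative)
peers = [rng.choice(n, d, replace=False) for _ in range(n)]
f = herding_forecasts(x, rng.random(n) < 0.5, peers, eps)
```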
as in many other instances of games with learning , we take agents to be inductive : they monitor the performances of their two strategies over time via scoring functions indexed by that are updated in time according to and the agent s chosen strategy is selected by a logit rule with learning rate : the different parameters appearing above have all been introduced and discussed at length in the context of minority games and related models ( see e.g. for a broad review ) . in the present case , denotes a memory length parameter , roughly corresponding to the inverse of the time scale over which agents preserve a memory of the past performance of their strategies . denote instead the cost ( or the incentive ) faced by each player to get his private information or to herd . denotes the profit faced by agent at time step . for the sake of simplicity , we reward agents who guess correctly with one point , while we take one point from agents who guessed incorrectly .finally , is a parameter that encodes for a tunable stochasticity in the agents choice rules , with deterministic behavior recovered for .note that at every time step both the strategy scores of every agent are updated .starting from initial conditions , we are interested in observing the steady state behavior of the following quantities : * the herding probability , measured by the time - averaged fraction of herders : * the probability of success of strategy ( averaged over time and agents ) : where if the event is true , and zero otherwise * the time- and agent - averaged forecast error : * the time - averaged forecast dispersion : * the _ herding ratio _ the way in which agents produce their forecasts , and as a consequence the herding ratio will be affected by all of the above parameters but , most importantly , will depend on the information heterogeneity , measured by .note that .we remind , finally , that , under the emh , , so one would expect .we begin by analyzing the case in which every agent consults all his peer group ( or for all times ) , has infinite score memory ( ) and learning rate ( , corresponding to a deterministic choice rule ) and faces no costs ( or receives no incentives ) to use his strategies ( ) .this will be used as a reference situation to evaluate the impact of the above parameters on the game .[ uno ] shows the average herding success probability as well as the fraction of herders as a function of .probability of successful herding versus .markers denote results for individual samples ; the continuous line stands for the average .left inset : ( dashed line ) and ( continuous line ) versus .right inset : versus .simulation parameters : , , .,width=377 ] we see that when agents are well informed ( small ) herding outperforms the private information strategy and the majority of agents correctly learn to herd to increase their success probability well above . 
for large ,on the other hand , when agents have limited predictive ability , they learn to use their private signal as it slightly outperforms the herding forecast .this behavior , including the decay of that is observed for large ( as ) essentially parallels that observed in .the behavior of , by contrast , displays a sharp drop as informational heterogeneity increases beyond the point where , as is clear from the inset , the private signal outperforms herding .note that for large sample - to - sample fluctuations become larger and larger as , so that realizations with many herders ( up to a 70% fraction of the population ) coexist with realizations with few herders ( less than 10% ) .it is worthwhile to inspect the dependence of sample - to - sample fluctuations on more closely , see fig .[ fluc ] .sample - to - sample variance of versus for different values of .simulation parameters : , .the dashed vertical line marks the position of in this case ., width=377 ] besides the increase for large , a sharp peak ( roughly independent of the system size ) appears for intermediate , marking a qualitative change in the game s macroscopic properties .again , this signals a strong sample - dependence of the fraction of herders and occurs when the payoffs of the two strategies become comparable .such an effect is absent in the one - shot game and is induced by learning .note that the fraction of herders for s close the the peak is consistent with the results of . in turn, the herding ratio displays the behavior shown in fig . [ due ] .inverse herding ratio versus .inset : ( continuous line ) and ( dashed line ) versus .simulation parameters : , .,width=377 ] one observes a sharp minimum in around , where it attains a value consistent with that found empirically in , where . away from this crossover region ( where ,we remind , information heterogeneity is maximal ) , is instead closer to the emh limit .this suggests that inductive agents indeed manage to aggregate information quite efficiently when it is distributed uniformly across them , regardless of the quality of the information .this process is no longer possible in presence of a more diversified information landscape .a naive analytical estimate of the value of where the game undergoes a crossover can be obtained by arguing as in . denoting by the probability of a correct forecast, one has where denotes the probability of a correct forecast by herding . neglecting both the learning and the herding dynamics, is given by the probability that the majority of the peer group members have a correct forecast , i.e. a qualitative change is expected to occur when .one easily sees from ( [ b ] ) that this condition is satisfied ( besides the trivial solutions or ) when , implying for the prediction agrees well with the numerical experiments .( [ bstar ] ) also characterizes the role of the resolution parameter : as increases , the number of effective alternative `` states '' of the variable ( given by ) decreases and becomes larger , extending the range of effectiveness of herding as a forecasting strategy .( the fact that can not exceed was implicit in the model s definition . 
) note that the nave guess for the fraction of herders given by according to which agents with are assumed to herd asymptotically , does not produce the correct results in the present model , suggesting that the specific forms of and of its fluctuations , including the large increase , are likely due to the learning dynamics itself .we now focus our analysis on the additional model parameters described in sec .2 . for the sake of simplicity , we will study one of them at a time as variations of the basic case investigated above .figures [ param ] and [ param2 ] show specifically how the fraction of herders and the herding ratio change when these parameters are varied for different values of .fraction of herders versus for ( triangles ) , ( circles ) and ( squares ) .simulation parameters : , .,width=453 ] starting with ] in the context of opinion dynamics models has been discussed e.g. in .it takes into account the chance that people may prefer to interact , and possibly accept to adjust their beliefs , only with opinions not too far from their initial ones . for all opinions in the peer group are treated equally .for other values of the parameter , instead , the sum in is restricted only to agents whose forecasts are within from each agent s initial choice .it is interesting to note that when herding represents the best strategy , i.e. around , the fraction of herders is almost insensitive to different values of ( which possibly suggests also a weak dependence of the results on ) . nevertheless reducing herding ratio appears to get smaller , indicating that a more rational aggregation of information may occur .by contrast , for larger agents may be unable to identify as the most likely successful strategy if is too small .coming finally to the incentives ( which are known to have a far from trivial impact on minority games , see e.g. ) , we focus on the dependence of on the parameter , which is easily understood to be the relevant quantity in this case .as to be expected , incentives to herding , or higher costs for using the private signal , shift agents towards the strategy , and lead to an increase of ( and vice versa for incentives to the strategy ) .this crossover appears to be smooth only when is sufficiently large . for smaller it sharpens and one observes a steep jump at in both and , reminiscent of similar effects induced by incentives or tobin taxes in minority games . in this case , agents appear to polarize on the herding strategy as soon as a small incentive is available , leading to disastrous consequences for and suggesting the existence of a first - order transition in ( though a more refined numerical analysis would be needed to clarify this point ) .the qualitative outlook is however essentially unchanged with respect to the the case of larger s .we have studied here a simple forecasting game with inductive agents who must formulate a forecast by either using their private information or by herding with a group of peers .the quality of the information at an agent s disposal is measured by the a priori predictive ability of his private signal , and we investigate how the game s overall properties are affected by increasing informational heterogeneity in the population .our main result is that inductive agents may be unable to produce rational forecasts when the heterogeneity is large . 
in this situation , the herding ratio becomes significantly larger than one , taking on values similar to those measured empirically in the financial literature .we have also observed that the efficacy of herding depends strongly on the distribution of information .the role of several parameters of interest in the context of games with inductive agents has finally been analyzed .generically speaking , a finite learning rate , a shorter memory or a smaller interaction range may all contribute to reducing the herding ratio when the informational heterogeneity is large .the interest in forecasting games is based on the fact that they present a simple outlook and a rich phenomenology that allows one to shed some light on the process of information aggregation and its limits .it is however difficult to establish a direct contact between these toy models and financial markets , which still represent the main source of empirical data .one possible step in this direction that is worth exploring would consist in coupling the event to be forecasted , here , with the agents ' choices , as done for example in . would play in these models the role of a ` price ' . depending on the form of the payoff one would then observe a dynamical feedback between learning and ` price ' leading to a rich outlook possibly similar to that studied in minority games . | we investigate a toy model of inductive interacting agents aiming to forecast a continuous , exogenous random variable . private information on is spread heterogeneously across agents . herding turns out to be the preferred forecasting mechanism when heterogeneity is maximal . however , in such conditions aggregating information efficiently is hard even in the presence of learning , as the herding ratio rises significantly above the efficient - market expectation of and remarkably close to the empirically observed values . we also study how different parameters ( interaction range , learning rate , cost of information and score memory ) may affect this scenario and improve efficiency in the hard phase . |
detecting and measuring the properties of objects in astronomical images in an automated fashion is a fundamental step underlying a growing proportion of astrophysical research .there are many existing tasks , some quite sophisticated , for performing such analyses .regardless of the wavelength at which an image has been made however , each of these tasks has one thing in common : a threshold needs to be defined above which pixels will be believed to belong to real sources .defining an appropriate threshold is a complex issue and , owing to the unavoidable presence of noise , _ any _ chosen threshold will result in some true sources being overlooked and some false sources measured as real . varying the chosen threshold to one extreme or the other will minimise one of these types of error at the expense of maximising the other .clearly choosing a threshold to jointly minimise both types of error is not trivial , but even more problematic is that it is not even clear that one can , _ a priori _ , make a well defined estimate of the magnitude of each type of error .typically this is done by comparing measured source counts with existing estimates for the expected number of sources .this is an unsatisfactory solution for surveys reaching to new sensitivity limits , or at previously uninvestigated wavelengths ( where there can be no estimate of the expected number of sources from existing studies ) , and also for small area imaging where the properties of large populations are affected by clustering or small number statistics . throughout this paperwe shall use the terms ` source - pixel ' to mean a pixel in an image which is above some threshold , and thus assumed to be part of a true source .the term ` source ' shall be used to mean a contiguous collection of ` source - pixels ' which corresponds to an actual astronomical object , a star or galaxy for example , whose properties we are interested in measuring . in this work we concentrate on radio images , specifically images produced by radio interferometers .we emphasise , though , that all of the conclusions presented here are valid for any pixelized map where the ` null hypothesis ' is known at each pixel . throughout this paper the null hypothesis is taken to be the ` sky background ' at each pixel ( after the image is normalized ) .now consider the following possible criteria applied to some chosen threshold : ( 1 ) that there be no falsely discovered source - pixels ; ( 2 ) that the proportion of falsely discovered source - pixels be some small fraction of the total number of pixels ( background plus source ) ; ( 3 ) that the fraction of false positives ( i.e. the number of falsely discovered sources over the total number of discovered sources ) be small .the first of these can be achieved by using a very high threshold called the bonferroni threshold .this threshold is rarely used since , although guaranteeing no false detections , it detects so few real sources .the second criterion is most often applied in astronomy , and can be achieved by simply choosing the appropriate significance threshold .a threshold of , for example , ensures that of the total number of _ pixels _ are falsely discovered .unfortunately this is not the same as a constraint on the fraction of false detections compared to total number of _detections_. this quantity is a more meaningful measure to use in defining a threshold for the following reason . 
consider a threshold in an image composed of pixels ( ) and containing only gaussian noise .this would yield , on average , 1000 pixels above the threshold .if real sources are also present , these 1000 pixels appear as false source - pixels , and if it happens that only 2000 pixels are measured as source - pixels , then half the detections are spurious ! if many more true source - pixels are present , of course , this threshold may be quite adequate .the third criterion defines a more ideal threshold .such a threshold allows one to specify _ a priori _ the maximum number of false discoveries , on average , as a fraction of the total number of discoveries .such a method should be independent of the source distribution ( i.e. , it will adapt the threshold depending on the number and brightness of the sources ) .the false discovery rate ( fdr ) method does precisely this , selecting a threshold which controls the fraction of false detections .we have implemented the fdr technique in a task for detecting and measuring sources in images made with radio telescopes .radio images were chosen for the current analysis for several reasons , including previous experience in coding radio source detection tasks , but also since the conservative nature of constructing many radio source catalogues allows the value of this method to be emphasised .traditionally radio source catalogues are constructed in a fashion aimed at minimising spurious sources , accomplished by selecting a very conservative threshold , which is usually or even .this is partly driven by the difficulty of completely removing residual image artifacts such as sidelobes of bright sources , even after applying very sophisticated image reconstruction methods , and the desire to avoid classifying these as sources .many of the issues surrounding radio source detection are described by .while such a conservative approach may minimise false detections , it has the drawback of not detecting large numbers of real , fainter , sources important in many studies of the sub - mjy and populations , for example .an fdr defined threshold may allow many more sources to be included in a catalogue while providing a quantitative constraint to be placed on the fraction of false detections .table [ extasks ] briefly describes the algorithms used in some common radio source detection and measurement tasks . there is much similarity in the selection of tasks available within the two primary image analysis packages , aips ( astronomical image processing system ) and miriad ( multichannel image reconstruction , image analysis and display ) , as alluded to in the summaries given in this table , although the specifics of each such task certainly differ to a greater or lesser extent .in addition to these packages , aips++ also has an image measurement task , _ imagefitter _ , similar in concept to the imfit tasks in aips and miriad .a stand - alone task , sextractor , is used extensively in image analysis for detecting and measuring objects , primarily in images made at optical wavelengths .given the flexibility of the sextractor code provided by the many user - definable parameters , it is possible to use this task also for detecting objects in radio images .
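to make the distinction between a per - pixel significance cut and a constraint on the fraction of false detections concrete , the arithmetic quoted above can be reproduced in a few lines ; the image size and the sigma level used here are illustrative assumptions chosen to give roughly 1000 false pixels .

```python
from scipy.stats import norm

n_pixels = 1_000_000       # illustrative image size
sigma_cut = 3.09           # per-pixel threshold in units of the noise rms

p_tail = norm.sf(sigma_cut)                # chance a pure-noise pixel exceeds the cut
expected_false = n_pixels * p_tail         # ~1000 false source-pixels on average

detected = 2000                            # suppose this many pixels exceed the cut
print(expected_false / n_pixels)           # tiny fraction of all pixels (~0.1%)
print(expected_false / detected)           # but ~50% of the detections are spurious
```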
in the analysis below we compare the effectiveness of sfind 2.0 , which runs under the miriad package , with that of imsad ( also a miriad task , and which employs essentially the same algorithm as sad and vsad in aips ) and sextractor .section [ sfind2 ] describes the operation of the sfind 2.0 task and the implementation of the fdr method .section [ tests ] describes the monte - carlo construction of artificial images on which to compare sfind 2.0 , imsad and sextractor , the tests used in the comparison and their results .subsequent implementation using a portion of a real radio image is also presented .section [ disc ] discusses the relative merits of each task , the effectiveness of the fdr method , and raises some issues regarding the validity of the null hypothesis ( i.e. the background level ) and the correlation between neighbouring pixels .section [ conc ] presents our conclusions along with the strong suggestion that implementing fdr as a threshold defining method in other existing source detection tasks is worthwhile .
aips task & miriad task & algorithm
imfit / jmfit & imfit & fits multiple gaussians to all pixels in a defined area .
sad / vsad / happy & imsad & defines ` islands ' encompassing pixels above a user defined threshold , and fits multiple gaussians within these areas .
& sfind 2.0 & defines threshold using fdr to determine pixels belonging to sources , and fits those by a gaussian .
here we describe the algorithm used by sfind 2.0 , which implements fdr for identifying a detection threshold .we include the version number of the task simply to differentiate it from the earlier version of sfind , also implemented under miriad , as the source detection algorithm is significantly different .subsequent revisions of sfind will continue to use the fdr thresholding method .the elliptical gaussian fitting routine used to measure identified sources has not changed , however , and is the same as that used in imfit and imsad .an example of the use of the earlier version of sfind can be found in the source detection discussion of .the first step performed is to normalise the image .a gaussian is fit to the pixel histogram in regions of a user - specified size to establish the mean and standard deviation , , for each region .then for each region of the image , the mean is subtracted and the result is divided by .ideally this leaves an image with uniform noise characteristics , defined by a gaussian with zero mean and unit standard deviation . in practice the finite size of the regions used may result in some non - uniformity , although a judicious choice of size for these regions should minimise any such effect .we note that radio interferometer images often contain image artifacts such as residual sidelobes arising from the image - processing , sampling effects , and so on . with adequate sampling these effects should be statistically random with zero mean , and simply add to the overall image noise . next the fdr threshold is calculated for the whole image .the null hypothesis is taken to be that each pixel is drawn from a gaussian distribution with zero mean and unit standard deviation .this corresponds to the ` background pixels ' . in the absence of real sources ,each pixel has a probability , ( which varies with its normalised intensity ) , of being drawn from such a distribution . in images known to contain real sources , a low -value for a pixel ( calculated under the assumption that no sources are present ) is often used as an indicator that it is a ` source - pixel ' .
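a rough sketch of the normalisation step described above is given below ; the iterative sigma - clipped mean and rms per box is only a stand - in for the gaussian fit to the pixel histogram performed by sfind , and the box size and clipping level are illustrative assumptions .

```python
import numpy as np

def normalise_image(image, box=64, clip=3.0, n_iter=5):
    """Estimate the background mean and rms in box x box regions (here via
    iterative sigma clipping rather than a histogram fit) and return
    (image - mean) / rms, so the background has zero mean and unit variance."""
    out = np.empty(image.shape, dtype=float)
    ny, nx = image.shape
    for y0 in range(0, ny, box):
        for x0 in range(0, nx, box):
            region = image[y0:y0 + box, x0:x0 + box]
            data = region.ravel()
            mu, sigma = data.mean(), data.std()
            for _ in range(n_iter):              # reject bright (source) pixels
                data = data[np.abs(data - mu) < clip * sigma]
                if data.size < 10:
                    break
                mu, sigma = data.mean(), data.std()
            out[y0:y0 + box, x0:x0 + box] = (region - mu) / sigma
    return out
```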
the -values for all pixels in the image are calculated and ordered .the threshold is then defined by plotting the ordered -values as a function of ( where is the total number of pixels and is the index , from 1 to n ) and finding the -value , say , corresponding to the last point of intersection between these and a line of slope .here is the maximum fraction of falsely detected source - pixels to allow , on average ( over multiple possible instances of the noise ) , and if the statistical tests are fully independent ( the pixels are uncorrelated ) .if the tests are dependent ( the pixels are correlated ) then a larger , more conservative value of is required . since most radio images ( and indeed astronomical images in general ) show some degree of correlation between pixels , but tend not to be _ fully _ correlated , i.e. the intensity of a given pixel is not influenced by that of _ every _ other pixel , we have chosen to take an intermediate estimate for reflecting the level of correlation present in the image .this is related to the synthesised beam size , or point - spread function ( psf ) .if is the ( integer ) number of pixels representing the psf we define .this will be discussed further in section [ cnchoice ] .a diagrammatic example of the threshold calculation is shown in figure [ fdreg ] .it becomes obvious from this figure that increasing or decreasing the value chosen for corresponds to increasing or decreasing the resulting -value threshold , , and the number of pixels thus retained as ` source - pixels ' .the fdr formalism ensures that the average fraction of false ` source - pixels ' will never exceed .as described in , this explanation of implementing fdr does not explain or justify the validity of fdr .the reader is referred to for a heuristic justification , and to and for the detailed statistical proof .finally , once the fdr threshold is defined , the pixels with , corresponding to ` source - pixels ' , can be analysed .each of the source - pixels is investigated in turn as follows .a hill - climbing procedure starts from the source - pixel and finds the nearest local maximum from among the contiguous source - pixels . from this local peak , a collection of contiguous , monotonically decreasing pixels is selected to represent the source . at this point , it is possible to either ( 1 ) use all of the pixels around the peak which satisfy these criteria , or ( 2 ) use only those which are themselves above the fdr threshold .the latter is the default operation of sfind 2.0 , but the user can specify an option for choosing the former method as well . the former method , which allows pixels below the fdr threshold to be included in a source measurement , may be desirable for obtaining more reasonable source parameters for sources close to the threshold . in either case , the resulting collection of source - pixels is fit in a least - squares fashion by a two - dimensional elliptical gaussian .
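the threshold selection just described can be sketched as follows ( this is an illustration , not the sfind source code ) ; it assumes the image has already been normalised to zero mean and unit noise variance , takes the one - sided gaussian tail probability as the p - value of each pixel , and uses the intermediate correction factor built from the number of pixels per beam , as in the text .

```python
import numpy as np
from scipy.stats import norm

def fdr_threshold(norm_image, q=0.05, n_psf_pixels=30):
    """Return the p-value cut (and equivalent sigma level) chosen by the FDR
    procedure: the largest ordered p-value lying below a line of slope
    q / (c_N * N), where c_N = sum_{i=1..n_psf} 1/i models partial correlation.

    q            : maximum average fraction of falsely detected source-pixels
    n_psf_pixels : (integer) number of pixels per synthesised beam
    """
    p_sorted = np.sort(norm.sf(norm_image.ravel()))      # ordered p-values
    n = p_sorted.size
    c_n = np.sum(1.0 / np.arange(1, n_psf_pixels + 1))
    line = q * np.arange(1, n + 1) / (c_n * n)
    below = np.nonzero(p_sorted <= line)[0]
    if below.size == 0:
        return None, None                                 # nothing selected at this q
    p_cut = p_sorted[below[-1]]
    return p_cut, norm.isf(p_cut)

# pixels with p-values at or below p_cut are flagged as source-pixels:
# p_cut, sigma_cut = fdr_threshold(normalised_image, q=0.1)
# source_mask = norm.sf(normalised_image) <= p_cut
```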
if the fitting procedure does not converge , the source is rejected .this is typically the case when the potential source consists of too few pixels to well - constrain the fit .it is likely that most such rejections will be due to noise - spikes , which typically contain a small number of pixels , although some may be due to real sources lying just below the threshold such that only a few true source - pixels appear above it .if the fit is successful the source is characterised by the fitted gaussian parameters , and the pixels used in this process are flagged as already ` belonging ' to a source to prevent them from being investigated again in later iterations of this step .on completion , a source catalogue is written by the task , and images showing ( 1 ) the pixels above the fdr threshold , and ( 2 ) the normalised image , may optionally be produced .to compare the effectiveness of sfind 2.0 , imsad and sextractor , one hundred artificial images pixels in size were generated .each of these contained a different instance of random gaussian noise and the same catalogue of 72 point sources with known properties ( position and intensity ) .the intensity distribution of the sources spans a little more than 2 orders of magnitude , ranging from somewhat fainter than the noise level to well above it .many fewer bright sources were assigned than faint sources , in order to produce a realistic distribution of source intensities .the artificial images were convolved with a gaussian to mimic the effects of a radio telescope psf .the test images have pixels , and the gaussian chosen to represent the psf has fwhma of with a position angle of .the ( convolved ) artificial sources in the absence of noise can be seen in figure [ artimages ] , along with one of the test images . on each image, sfind 2.0 was run with , and , and the resulting lists of detected sources compared with the input catalogue . by way of an example, the sources detected in a single test of sfind 2.0 are shown in figure [ artannimages ] , which indicates by an ellipse the location , size and position angle of each detected source .this example demonstrates the difficulty of detecting faint sources , and the ability of noise to mimic the characteristics of faint sources . examined the simplest possible scenario , source - pixels placed on a regular grid in the presence of uncorrelated gaussian noise . herewe investigate a much more realistic situation . the ` source - pixels ' now lie in contiguous groups comprising ` real ' sources in the sense that the whole image ( background and sources ) has been convolved with a psf .the number of pixels in each source above a certain threshold will vary depending on the intensity of the source .we now confirm that the fdr method works consistently on these realistic images . 
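a hedged sketch of how test images of this kind can be generated is shown below ; the number of sources and the roughly two - decade intensity range follow the text , while the image size , the psf widths and the exact faint - heavy weighting are illustrative assumptions .

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
ny = nx = 512                      # illustrative image size
n_src, noise_rms = 72, 1.0         # 72 point sources over unit-rms noise

# peak intensities spanning a bit over two decades, weighted towards faint sources
u = 1.0 - rng.power(3, n_src)                # density peaks near 0 -> mostly faint
peak = 0.5 * noise_rms * 300.0 ** u          # from below the noise to well above it

# place the sources, convolve with a Gaussian "PSF", then add noise
psf_sigma = (3.4, 2.1)                       # illustrative, FWHM of a few pixels
beam_area = 2.0 * np.pi * psf_sigma[0] * psf_sigma[1]
truth = np.zeros((ny, nx))
ys = rng.integers(10, ny - 10, n_src)
xs = rng.integers(10, nx - 10, n_src)
truth[ys, xs] = peak * beam_area             # so the convolved peak equals `peak`
image = gaussian_filter(truth, psf_sigma) + rng.normal(0.0, noise_rms, (ny, nx))
```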
to verify the reliability of the fdr defined threshold ,the number of pixels detected above the fdr threshold in each test were recorded along with the number which were unassociated with any true source .the distribution of this fraction of false fdr pixels should never exceed the value specified for , and this can be seen in the histogram in figure [ fdrhist ] .this figure also shows how the distribution of falsely detected pixels changes with the form chosen for , emphasising that the assumption of complete independence of the pixels is not justified ( as expected ) , but neither is the image fully correlated , evidenced by the conservative level of false detections seen under this assumption .our choice for the form of , which is not in fact a result of the rigorous statistical proof , appears to be a feasible and reliable intermediate for such ` partially correlated ' images .the fdr formalism ensures that the average fraction of falsely detected _ pixels _ will be less than .the connection between numbers of pixels and numbers of sources is complex , however , and the same criterion can not be said to be true for the fraction of falsely detected _sources_. the number of source - pixels per source will vary according to both instrumental effects , such as the sampling and the psf , as well as intrinsic source sizes compared to the instrumental resolution and the source brightnesses compared to the noise level in an image .even if all sources are point - like , and hence should appear in the image as a psf , the number of source - pixels above a given threshold for a given source depends on its brightness , and the number of source - pixels per source would not be expected to be constant . to investigate the effect of this complex relation ,we explore empirically the results of applying fdr thresholding to our simulated images .the fraction of falsely detected _ sources _ in each image , as well as the fraction of true sources not detected , are shown in figure [ comparison ] as distributions for each tested value of . by construction ,a number of the artificial sources have intensities comparable to or lower than the noise level in the images , so not all sources will be able to be recovered in every image .this is reflected in the fact that somewhat more than of sources are missed ( by all tasks tested ) even with very liberal thresholds .the histograms in figure [ comparison](a ) show that for , where up to of pixels could be expected to be false , the fraction of false sources is not much more .the result for , is also quite reasonable , although for the outliers are further still , relatively speaking , from the expected fraction .while the strict constraint applicable to the fraction of false source - pixels no longer holds for false sources , it still seems to be quite a good estimator .for the case where only the peak pixel is required to be above the fdr threshold , figure [ comparison](c ) , the fractions of falsely detected sources are not so strongly constrained . for , the fraction may be almost twice that expected . in both cases , although with greater reliability in the former , this allows the fdr method to provide an estimate of the fraction of false sources to expect . 
even though the constraint may not be rigorous , and clearly the estimate will be much more reliable in the case where all source - pixels are required to be above the fdr threshold , the fdr method allows a realistic _ a priori _ estimate of the fraction of false detections to be made .this feature is not possible with the simple assumption of , say , a threshold . to test imsad and sextractor in the same fashion as sfind 2.0, a choice of threshold value as a multiple of the image noise level ( ) was required . simply selecting a canonical value of , or ,for example , would complicate the comparison between these tasks and sfind 2.0 , as this would be testing not only different source measurement routines but also potentially different thresholds .the values of selected for testing sfind 2.0 result in threshold levels which correspond approximately , ( since the noise level varies minimally from image to image ) , to , and .a threshold in these simulations would correspond to a value of . in a brief aside , it should be emphasised that this particular correspondence between a choice of and a particular -threshold is only valid for the noise and source characteristics of the images used in the present simulations . for images with different noise levels or different source intensity distributions, any particular value of will correspond to some different multiple of the local noise level .the primary advantage of specifying an fdr value over choosing a threshold , say , is that the fdr threshold is adaptive .the fdr threshold will assume a different value depending on the source intensity distribution relative to the background .this point is made very strongly in in the diagrams of their figure 4 .as a specific example in the context of the current simulations , we investigated additional simulated images containing the same noise as in the current simulations but containing sources having very different intensity distributions .we chose one intensity distribution such that every source was 10 times brighter than in the current simulations and one such that every source was 10 times fainter . in the brighter case , the fdr threshold for , ensuring that on average no more than of source - pixels would be falsely detected , corresponded not to , but to about .the reason here is that as the source distribution becomes brighter , many more pixels will have low -values . to retain the constant fraction of false pixels more background pixels must be included , so the threshold becomes lower . in the fainter case ,where the artificial sources are very close in intensity to the noise level , the same fdr threshold corresponds to about .of course in this case very few sources are detected , for obvious reasons , but the same constraint on the fraction of falsely detected pixels applies . here fewer pixels will have low -values , thus fewer background pixels are allowed , maintaining a constant fraction of false pixels , and the threshold increases .
in the brighter case the simple assumption of , say , a threshold would give a lower rate of false pixels , while in the fainter case it would give a higher fraction .the importance of these examples is to emphasise that fdr provides a consistent constraint on the fraction of false detections in an adaptive way , governed by the source intensity distribution relative to the background , which can not be reproduced by the simple assumption of , for example , a threshold .while the source distributions in most astronomical images typically lie between the two extremes presented for this illustration , the adaptability of the fdr thresholding method still presents itself as an important tool .returning now to the comparison between sfind 2.0 , imsad and sextractor , the imsad and sextractor thresholds were set to correspond to those derived from the values used in sfind 2.0 .the distributions of falsely detected and missed sources were similarly calculated .these are shown in figures [ comparison](e ) to [ comparison](h ) .one of the features of sextractor is the ability to set a minimum number of contiguous pixels required before a source is considered to be real , and obviously the number of detected sources varies strongly with this parameter . after some experimentation we set this parameter to 7 pixels , as this resulted in a distribution of false detections most similar to that seen with sfind 2.0 , for the case where only pixels above the fdr threshold are used in source measurements .values larger than 7 reduced the number of false detections at the expense of missing more true sources , and vice - versa . from this comparison, sfind 2.0 appears to miss somewhat fewer of the true sources than sextractor when sextractor is constrained to the same level of false detections .this is also true if sextractor is constrained to a similar distribution of false detections as obtained by sfind 2.0 for the case where only the peak pixel is required to be above the fdr threshold ( corresponding to 4 contiguous pixels ) . in both cases , allowing sextractor to detect more true sources by lowering the minimum pixel criterion introduces larger numbers of false detections .sfind 2.0 also performs favourably compared to imsad .while sfind 2.0 seems to miss a few percent more real sources than imsad , imsad seems to detect many more false sources than sfind 2.0 in either of its source - measurement modes . of primary importance in source measurement is the reliability of the source parameters measured .figure [ measvstrue ] shows one example from the one hundred tests comparing the true intensities and positions of the artificial sources with those measured by sfind 2.0 .similar results are obtained with imsad , which uses the same gaussian fitting routine .as expected , the measured values of intensity and position become less reliable as the source intensity becomes closer to the noise , although they are still not unreasonable . a comprehensive analysis of gaussian fitting in astronomical applications has been presented by , and the results of the gaussian fitting performed by sfind 2.0 ( and imsad ) are consistent with the errors expected .
additionally , the assumption that a source is point - like , or only slightly extended , and thus well fit by a two - dimensional elliptical gaussian , is clearly not always true .complex sources in radio images , as in any astronomical image , present difficulties for simple source detection algorithms such as the ones investigated here .it is not the aim of the current analysis to address these problems , except to mention that the parameters of such sources measured under the point - source assumption will suffer from much larger errors than indicated by the results of the gaussian fitting . as a final test, sfind 2.0 was used to identify sources in a real radio image , a small portion of the _ phoenix deep survey _ .this image contains sources with extended and complex morphologies as well as point sources .figure [ realfdr ] shows the results , with sfind 2.0 reliably identifying point sources and extended objects as well as the components of various blended sources and complex objects .the main aims of this analysis have been to ( 1 ) investigate the implementation of fdr thresholding in an astronomical source detection task , and ( 2 ) compare the rates of missed and falsely detected sources between this task and others commonly used .implementation of the fdr thresholding method is very straightforward ( as evidenced by the seven step idl example of , in their appendix b ) .the fdr method performs as expected in providing a statistically reliable estimate of the fraction of falsely detected _pixels_. performing source detection on a set of pixels introduces the transformation of pixel groups into sources .this ultimately results in the strong constraint on the false fraction of fdr - selected pixels becoming a less rigorous , but still useful and reliable estimate of the fraction of false sources .as already mentioned , this is still a more quantitative statement than can be made of the rate of false sources in the absence of the fdr method .it is possible that rigorous quantitative constraints on the fraction of false source detections may be obtained empirically for individual images or surveys . by performing monte - carlo source detection simulations with artificial images having noise properties similar to the ones under investigation ,the trend of falsely detected sources with may be reliably characterised .this has not been examined in the current analysis , but will be included in subsequent work with sfind 2.0 .
while constraints on the fraction of falsely detected sources may be possible , neither fdr nor any other thresholding method provides constraints on the numbers of true sources remaining undetected .a study is also ongoing into whether a more sophisticated fdr thresholding method for defining a source may be feasible .this would involve examining the combined size and brightness properties of groups of contiguous pixels to define a new -value .this would represent the likelihood that such a collection of pixels comes from a ` background distribution ' or null - hypothesis , corresponding to the properties exhibited by noise in various types of astronomical images .using this new -value an fdr threshold , now in the size - brightness parameter space , could be applied for defining a source catalogue .clearly much care will need to be taken to avoid discriminating against true sources which may lie in certain regions of the size - brightness plane , such as low surface - brightness galaxies .the assumption in many existing source - detection algorithms that sources are point - like already suffers from such discrimination , though , so even if such bias is unavoidable , some progress may still be achievable .the form assumed for in this analysis is not in fact a rigorous result of the formal fdr proof .instead it is a ` compromise ' estimate that seems mathematically reasonable , and gives reliable results in practice . to be strictly conservative, the form of given by equation [ cn ] should be adopted to ensure that the fraction of falsely detected ` source - pixels ' is strictly less than .this rigorous treatment , however , is dependent on the number of pixels present in the image .now consider analysis of a sub - region within an image . as the size of this sub - regionis changed , the number of pixels being considered similarly changes , and this will have the effect of changing the threshold level , and the resulting source catalogue .this , perhaps non - intuitive , aspect of the fdr formalism is the adaptive mechanism which allows it to be rigorous in constraining the fraction of false detections .there are additional complicating factors which must be taken into account when performing source detection .the null hypothesis assumed for the fdr method ( and indeed for all the source detection algorithms ) is that the background pixels have intensities drawn from a gaussian distribution ( or other well - characterised statistical distribution such as a poissonian ) .this is not strictly true for radio images , where residual image processing artifacts may affect the noise properties , albeit at a low level . 
in all cases, such deviations will result in a larger fraction of false pixels than expected , some of which may be clumped in a fashion sufficient to mimic , and be measured as , sources , thus increasing the fraction of falsely detected sources .this comment is simply to serve as a reminder to use caution when analysing images with complex noise properties .we have implemented the fdr method in an astronomical source detection task , sfind 2.0 , and compared it with two other tasks for detecting and measuring sources in radio telescope images .sfind 2.0 compares favourably to both in the fractions of falsely detected sources and undetected true sources .the fdr method reliably selects a threshold which constrains the fraction of false pixels with respect to the total number of ` source - pixels ' in realistic images .the fraction of falsely detected sources is not so strongly constrained , although quantitative estimates of this fraction are still reasonable .more investigation of the relationship between ` source - pixels ' and ` sources ' is warranted to determine if a more rigorous constraint can be established . with the ability to quantify the fraction of false detections provided by the fdr method, we strongly recommend that it is worthwhile implementing as a threshold defining method in existing source detection tasks .we would like to thank the referee for several suggestions which have improved this paper .amh acknowledges support provided by nasa through hubble fellowship grant hst - hf-01140.01-a awarded by the space telescope science institute ( stsci ) .amh and ajc acknowledge support provided by nasa through grant numbers go-07871.02 - 96a and nra-98 - 03-ltsa-039 from stsci , and aisr grant nag-5 - 9399 .stsci is operated by the association of universities for research in astronomy , inc ., under nasa contract nas5 - 26555 .
benjamini , y. , hochberg , y. 1995 , j. r. stat . b , 57 , 289
benjamini , y. , yekutieli , d. 2001 , ann . statist . , ( in press )
bertin , e. , arnouts , s. 1996 , , 117 , 393
condon , j. j. 1997 , , 109 , 166
hopkins , a. , afonso , j. , cram , l. , mobasher , b. 1999 , , 519 , l59
miller , c. j. , genovese , c. , nichol , r. , wasserman , l. , connolly , a. , reichart , d. , hopkins , a. , schneider , j. , moore , a. 2001 , aj ( accepted ) ( astro - ph/0107034 )
white , r. l. , becker , r. h. , helfand , d. j. , gregg , m. d. , 1997 , , 475 , 479
| the false discovery rate ( fdr ) method has recently been described by , along with several examples of astrophysical applications . fdr is a new statistical procedure due to for controlling the fraction of false positives when performing multiple hypothesis testing . the importance of this method to source detection algorithms is immediately clear . to explore the possibilities offered we have developed a new task for performing source detection in radio - telescope images , sfind 2.0 , which implements fdr . we compare sfind 2.0 with two other source detection and measurement tasks , imsad and sextractor , and comment on several issues arising from the nature of the correlation between nearby pixels and the necessary assumption of the null hypothesis . the strong suggestion is made that implementing fdr as a threshold defining method in other existing source - detection tasks is easy and worthwhile . we show that the constraint on the fraction of false detections as specified by fdr holds true even for highly correlated and realistic images .
for the detection of true sources , which are complex combinations of source - pixels , this constraint appears to be somewhat less strict . it is still reliable enough , however , for a priori estimates of the fraction of false source detections to be robust and realistic . |
efforts in overcoming the limit to optical communications imposed by fiber nonlinearity can be broadly grouped into two areas : optical and digital techniques .optical techniques include , for example , optical phase conjugation ( opc ) using twin waves or opc devices placed mid - span .digital techniques include transmitter- and receiver - side digital nonlinearity compensation ( nlc ) , simple nonlinear phase shifts , perturbation - based precompensation , adaptive filtering and optimum detection . with the exception of optimum detection ( a special case of receiver - side nlc for single span transmission ) the digital signal processing ( dsp ) techniques are algorithms which invert the propagation equations for the optical fiber , either exactly or with simplifying approximations .consider the model in fig .[ fig : config ] , which shows a transmission link with digital nlc at both the transmitter and the receiver . to date ,the best performing experimentally demonstrated digital technique for receiver - side digital nlc is the digital backpropagation ( dbp ) algorithm .this algorithm numerically solves the inverse of the optical fiber propagation equations to compensate the linear and nonlinear impairments introduced by the optical fiber transmission ; albeit not taking account of amplifier noise ( cf .zero - forcing equalization ) .the demonstrations of multi - channel nlc via receiver - side dbp which takes account of the inter - channel nonlinear distortions , have recently been repeated using similar digital techniques to predistort for nonlinearity at the transmitter digital precompensation ( dpc ) .we note that , as might be expected due to the symmetry of the transmission link , the experimentally demonstrated performance of the pre- and post - compensation algorithms is similar ; achieving a 100% increase in transmission reach . whether applying dbp or dpc , additive noise from in - line optical amplifiers is enhanced , which limits the performance of the nlc .one can , therefore , make a heuristic argument for dividing the dbp equally between transmitter and receiver , thus limiting the noise enhancement in the compensated waveform to the signal - noise interaction present at the center ( rather than the end ) of the transmission link .although split nlc ( dividing digital nlc between transmitter and receiver ) has previously been considered , experimental implementation was confined to the special case of simplified dsp ( single nonlinear phase shift ) and theoretical analysis considered only residual nlc after opc . in this letter , we assess the performance of split nlc via numerical simulations , and characterize performance in terms of achievable signal - to - noise ratio ( snr ) and mutual information ( mi ) .further , we confirm these results theoretically .to model the effect of digital nlc we used a coherent gaussian noise ( gn ) model of nonlinear interference including the effect of signal - ase ( amplified spontaneous emission ) noise interactions .the model treats the field propagating in the fiber as a summation of signal and noise fields , incorporating the signal - ase noise interaction as a form of cross channel interference .similar to the symbol snr ( signal - to - noise ratio ) at the receiver is approximated as where is the signal power , is the number of spans , is the ase noise power in the signal bandwidth from a single span amplifier , is a single span nonlinear interference factor for the self channel interference ( i.e. 
, signal - signal interactions ) , is a single span nonlinear interference factor for signal - squared ase noise interference , is the coherence factor for self channel interference and is a factor depending on the number of spans and the method used for digital nlc .the nonlinear interference terms with ase noise squared and cubed have been neglected as insignificant . in order to analytically calculate the snr when applying different nonlinear equalization methods , the parameter must be computed for dpc , dbp and split nlc ( , and , respectively ) . following the method in the appendix, it is found that the difference in snr at optimum signal launch power when applying split nlc versus dbp is given by where and are given in the appendix by eqs . and .choosing a 50% transmitter : receiver split ratio for nlc , and for a large number of spans , becomes for this work , . for from 0 to 0.3 ( a conservatively high value ) varies between 1.5 db and 1.95 db . in the large bandwidth limit, tends to zero .consider the point - to - point transmission link shown in fig .[ fig : config ] , consisting of an idealized optical transmitter and coherent receiver separated by spans of standard single mode fiber ( ssmf ) , followed by erbium doped fiber amplifiers ( edfa ) .the simulation parameters are shown in table [ tab : parameters ] .the transmitted signal was single channel 50 gbd polarization division multiplexed 256-_ary _ quadrature amplitude modulation ( pdm-256qam ) .this format was chosen as it is of sufficiently high cardinality to demonstrate increases in mi when using nlc for all transmission distances considered .the signal was sampled at 4 samples / symbol ( to take account of the signal broadening due to fiber nonlinearity ) and shaped using a root - raised cosine ( rrc ) filter . where dpc was considered , it was applied at this point using the split step fourier method ( ssfm ) to solve the manakov equation ( eq . ( 12 ) of ) .the optical fiber span was again modeled by solving the manakov equation using the ssfm .each fiber span was followed by an edfa whose gain exactly compensated the previous span loss . where required , the receiver dsp applied either frequency domain chromatic dispersion compensation ( linear case ) , or dbp .subsequently , a matched rrc filter was applied to the signal , and the signal was downsampled to 1 sample / symbol .
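the structure of the snr approximation used above can be illustrated numerically ; the sketch below uses a generic gn - type expression with made - up coefficient values ( they are not the values used in this work ) , and mimics ideal nlc by dropping the signal - signal term while keeping a residual signal - ase term whose span scaling is set by a scheme - dependent factor .

```python
import numpy as np

def snr_estimate(p_mw, n_spans, p_ase=2e-3, eta_ss=5e-3, zeta_sn=1e-4,
                 eps_s=0.05, phi=None, nlc=False):
    """Generic GN-style SNR estimate (all coefficient values are illustrative).

    p_mw : launch power per channel (mW); nlc=True assumes the signal-signal
    nonlinear term is fully compensated; phi sets how the residual signal-ASE
    interaction accumulates with span count (scheme dependent)."""
    if phi is None:
        phi = n_spans ** 2 / 2.0                        # DBP-like accumulation
    ase = n_spans * p_ase
    sig_sig = 0.0 if nlc else eta_ss * n_spans ** (1.0 + eps_s) * p_mw ** 3
    sig_ase = 3.0 * zeta_sn * phi * p_mw ** 2 * p_ase
    return p_mw / (ase + sig_sig + sig_ase)

powers = np.linspace(0.1, 20.0, 400)                    # mW
snr_db = 10 * np.log10([snr_estimate(p, n_spans=50, nlc=True) for p in powers])
print("optimum launch power (mW):", powers[np.argmax(snr_db)])
```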
to mitigate any residual phase rotation due to uncompensated nonlinear interference , carrier phase recovery was performed as described in .finally , the snr was estimated over symbols by comparing the transmitted and received symbols as in .for ssfm simulations , the mi was computed using monte carlo integration and is included as a figure of merit to provide insight into the gains in throughput possible when employing digital nlc .note that the analytical mi results are obtained using numerical integration .
( table [ tab : parameters ] : summary of system parameters used in fiber simulation . )
( figure [ fig : results ] : snr and mi ( bit / symbol ) versus transmission distance , and snr gain versus the fraction of nlc performed at the transmitter . )
the analytical expression for snr , , was evaluated using the methods outlined in the appendix for calculating for both linear signal equalization ( cdc ) and for each nlc technique : dpc , dbp and split nlc . in simulation , transmission distances were considered between 200 and 10000 km ( 2 - 100 spans ) , with the signal launch power varied in 1 db steps .
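the monte carlo mi estimate mentioned above can be sketched for an idealised case ; the snippet below assumes uniformly distributed square qam symbols received in circularly symmetric gaussian noise , which is only an approximation to the statistics of the actual received samples .

```python
import numpy as np

def qam_constellation(m):
    """Unit-energy square M-QAM constellation (M must be a square, e.g. 256)."""
    k = int(np.sqrt(m))
    pam = np.arange(-(k - 1), k, 2, dtype=float)
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def mi_awgn_mc(snr_db, m=256, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the mutual information (bit/symbol) of uniform
    M-QAM over an AWGN channel: the sample average of log2( p(y|x) / p(y) )."""
    rng = np.random.default_rng(seed)
    x_set = qam_constellation(m)
    sigma2 = 10 ** (-snr_db / 10.0)                     # noise variance per symbol
    x = rng.choice(x_set, n_samples)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_samples)
                                   + 1j * rng.standard_normal(n_samples))
    y = x + noise
    num = np.exp(-np.abs(y - x) ** 2 / sigma2)          # proportional to p(y|x)
    den = np.zeros(n_samples)
    for xp in x_set:                                    # p(y), up to the same constant
        den += np.exp(-np.abs(y - xp) ** 2 / sigma2)
    return np.mean(np.log2(num / (den / m)))

print(mi_awgn_mc(20.0))   # roughly 6.5 bit/symbol, well below the 8 bit maximum
```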
for each transmission distance , the mi and snr were determined at the optimum launch power . fig . [ fig : results](a ) shows how the maximum achievable snr varies with transmission distance when applying different digital nlc techniques .a 50% split ratio is used for split nlc .it should be noted that there is excellent agreement between the analytical expressions and the ssfm simulations , with an snr estimation accuracy better than 0.5 db for short distances , and better than 0.2 db for distances above 1000 km , where the gn model is known to have greater accuracy due to the high accumulated chromatic dispersion .an snr improvement for split nlc over both dbp and dpc at all distances is also observed , as predicted by .[ fig : results](b ) shows how the mi of the received signal degrades with distance when applying different digital techniques for equalization . the modulation format considered ( dp-256qam ) encodes a maximum of 16 bits of information .the advantage of digital nlc is clear as , even at short distances , this maximum mi cannot be achieved without either dpc , dbp or split nlc . in longer reach scenarios ( km ) there is a clear gain in mi when using split nlc compared with dpc or dbp , and this gain saturates for distances greater than approximately 2000 km to be approximately 1 bit / symbol .the results in fig .[ fig : results](c ) show the snr gain that can be achieved by dividing the nlc between transmitter and receiver with different ratios .note that the gain of dpc over dbp rapidly diminishes with transmission distance .further , it can be seen that a 50% nlc split ratio is optimum for all transmission distances .it should be noted that the ssfm simulations and the theoretical analysis represent a somewhat idealized model of an optical fiber transmission system .for example , polarization mode dispersion is known to negatively impact the performance of digital nlc , and yet is not considered in this model .therefore , these results should be interpreted as an optimistic estimation of performance using digital nlc .nevertheless , this work demonstrates that the current arrangements of digital nlc ( dpc or dbp ) can be substantially improved .we used a closed form approximation for the accumulation of the signal - ase interaction over multiple spans in order to analyse the potential snr gain when dividing nlc between transmitter and receiver .the optimum launch power , and hence snr gain , when using split nlc will increase by 1.5 db with respect to both dpc and dbp in the limit of long distance , high bandwidth transmission .split nlc is shown , both theoretically and by numerical simulation , to globally outperform both dpc and dbp for all transmission distances .there is scope to use this snr gain to reduce the complexity of nlc by dividing the dsp between transmitter and receiver ; a subject for further investigation .the following is a derivation of analytical expressions for in the case of both linear cdc at the receiver , and nonlinear compensation using pre- , post- or split - nlc .the nonlinear interference factors , and , were calculated using numerical integration of the gn model reference equation ( eq . ( 1 ) of ) .note but is more accurately given by numerical integration of ( eq . ( 7 ) of ) where the spectral shape , , is replaced by unity to represent the uniformity of the ase spectrum .
coherence factors and were calculated by obtaining the nonlinear interference factors for 100 spans by numerical integration and using where is the single span nonlinear interference factor and is the nonlinear interference factor for 100 spans . in each case , is substituted by or , as appropriate .note that the purpose of is to change the coherence of the interference terms , altering the accumulation of the nonlinear interference with number of spans , and that .the effect of nlc on the effective received snr is modeled by assuming that nlc effectively subtracts in power the nonlinear interference generated by the forward propagating field .this simplification is customary in the literature and can be seen as the result of two assumptions : i ) a perturbative first - order approximation , and ii ) uncorrelation of all the optical fields involved in the snr calculation . as shown in , dbp generates a first - order field , identical , but with opposite sign , to the forward - propagated field , provided that the linearly - propagated field ( zeroth - order solution ) along the fiber link is the same , hence the cancellation .however , due to the noise accumulation over the link , there is a mismatch between the linearly forward - propagated field and the backward - propagated field ( or the precompensated field ) . as a result , residual signal - ase interaction terms are still present after the application of either dbp or dpc , representing one of the main performance limitations . further assuming a weak nonlinear interaction between ase noise contributions along the link, the calculation of the signal - ase interaction terms can be performed by considering each ase noise contribution as separately interacting with the signal in each span . the nonlinear interference scaling coefficient , accounts for the noise generated due to this signal - ase interaction . in the case of linear chromatic dispersion compensation ( cdc ) , each contribution of ase noise interacts with the signal from the span following its addition , up until the end of the link . in the configuration analysed herein ( fig . [ fig : config ] ) , the first ase noise contribution interacts with the signal in the second span .likewise , in the case of dpc , signal - ase noise interference accumulates from the second span onwards , since the first noise source follows the first span .thus , for the cdc and dpc scenarios , takes the same form . for dbp , noise from the last amplifier will be backpropagated as if it were signal for spans .the noise from the penultimate amplifier will have interacted with the signal for one span but will be backpropagated as if it were signal over all spans .thus the signal noise interaction over the final span will be correctly compensated , leading to spans of excess nonlinear interference .thus the total signal - ase noise interference is given by the following sum over all spans . if the nlc is split between spans of dpc and spans of dbp such that the total number of spans is , then is given by . the advantage of splitting the compensation arises since and increase superlinearly with the number of spans . is minimized for and .the snr gain due to the split nlc can be quantified using an approximated closed - form expression for the summation , in each of and .
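before turning to that closed form , the span sums just described and the size of the resulting gain can be illustrated numerically . the summation limits below are reconstructed from the qualitative description above and are therefore an assumption rather than a transcription of the paper 's equations ; the 5 log10 ( ) mapping from the interference ratio to the snr gain follows from the statement that , at the optimum launch power , the overall ase power equals the signal - ase interaction power .

```python
import numpy as np

def phi_sum(n, eps=0.0):
    """sum_{i=1..n} i**(1+eps): assumed accumulation of the per-span
    signal-ASE interference over n spans."""
    i = np.arange(1, max(n, 0) + 1)
    return float(np.sum(i ** (1.0 + eps)))

def phi_split(n_tx, n_rx, eps=0.0):
    # assumed form: residual interaction left over from the DPC segment plus
    # excess interaction generated by DBP within its own segment
    return phi_sum(n_tx - 1, eps) + phi_sum(n_rx, eps)

n_spans, eps = 100, 0.0
phi_dbp = phi_sum(n_spans, eps)
best_phi, best_tx = min((phi_split(t, n_spans - t, eps), t)
                        for t in range(1, n_spans))
gain_db = 5.0 * np.log10(phi_dbp / best_phi)   # SNR at optimum power ~ 1/sqrt(phi)
print(f"best split: {best_tx}/{n_spans - best_tx} spans, gain {gain_db:.2f} dB")
# for large span counts and eps -> 0 this tends to 5*log10(2), i.e. about 1.5 dB
```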
using faulhaber 's formula ( ce 332 , pg . 1 ) , such a summation can be expressed as where the coefficients are known as the bernoulli numbers .a sufficiently accurate closed - form for can be derived by truncating to the first 2 terms .these terms rapidly dominate the higher order terms as increases , particularly considering that and .the snr gain , , for split compensation over dbp can be defined as the ratio between the snrs achieved by each compensation technique at optimum launch power .all nlc techniques remove the cubic terms in .thus , considering that maximizing the snr leads to the optimum launch power given by and that at the optimum power since the overall ase noise power is equal to the signal - ase interaction power , the change in snr is given by . substituting the first two terms from the approximation into , and choosing when calculating , for an asymptotically large number of spans , we obtain .the authors wish to thank prof .a. ellis for useful discussions and comments on an earlier draft of this letter .
x. liu , a. r. chraplyvy , p. j. winzer , r. w. tkach , and s. chandrasekhar , `` phase - conjugated twin waves for communication beyond the kerr nonlinearity limit , '' _ nat . photon _ , vol . 7 , no . 7 , pp . 560 - 568 , apr . 2013 .
a. d. ellis , m. e. mccarthy , m. a. z. al - khateeb , and s. sygletos , `` capacity limits of systems employing multiple optical phase conjugators , '' _ opt . express _ , vol . 23 , no . 16 , pp . 20381 - 20393 , aug .
e. temprana , _ et al ._ , `` two - fold transmission reach enhancement enabled by transmitter - side digital backpropagation and optical frequency comb - derived information carriers , '' _ opt . express _ , vol . 23 , no . 16 , pp . 20774 - 20783 , aug . 2015 .
a. d. ellis , _ et al ._ , `` the impact of phase conjugation on the nonlinear - shannon limit : the difference between optical and electrical phase conjugation , '' _ in proc ._ ieee summer topical meeting on nonlinear optical signal processing , nassau , bs , vol . 2 , pp . 209 - 210 , jul . 2015 .
d. marcuse , c. r. menyuk , and p. k. a. wai , `` application of the manakov - pmd equation to studies of signal propagation in optical fibers with randomly varying birefringence , '' _ j. lightwave technol ._ , vol . 15 , no . 9 , pp . 1735 - 1746 , sep . 1997 .
a. alvarado , e. agrell , d. lavery , r. maher , and p. bayvel , `` replacing the soft - decision fec limit paradigm in the design of optical communication systems , '' _ j. lightwave technol ._ , vol . 33 , no . 20 , pp . 4338 - 4352 , oct .
d. j. ives , p. bayvel , and s. j. savory , `` adapting transmitter power and modulation format to improve optical network performance utilizing the gaussian noise model of nonlinear impairments , '' _ j. lightwave technol ._ , vol . 32 , no . 21 , pp . 3485 - 3494 , nov . 2014 .
m. secondini , e. forestieri , and g. prati , `` achievable information rate in nonlinear wdm fiber - optic systems with arbitrary modulation formats and dispersion maps '' _ j. lightwave technol ._ , vol . 31 , no . 23 , pp . 3839 - 3852 , dec .
| in this letter we analyze the benefit of digital compensation of fiber nonlinearity , where the digital signal processing is divided between the transmitter and receiver . the application of the gaussian noise model indicates that , where there are two or more spans , it is always beneficial to split the nonlinearity compensation .
the theory is verified via numerical simulations , investigating transmission of single channel 50 gbd polarization division multiplexed 256-_ary _ quadrature amplitude modulation over 100 km standard single mode fiber spans , using lumped amplification . for this case , the additional increase in mutual information achieved over transmitter- or receiver - side nonlinearity compensation is approximately 1 bit for distances greater than 2000 km . further , it is shown , theoretically , that the snr gain for long distances and high bandwidth transmission is 1.5 db versus transmitter- or receiver - based nonlinearity compensation . coherent optical communications , quadrature amplitude modulation ( qam ) , digital backpropagation , nonlinearity compensation . |
network coding ( nc ) refers to mixing different information flows at the sender or intermediate nodes in a data communication network .it has been shown that nc can substantially improve the throughput of many wireless communication systems . as a result, it has become a promising candidate for delivering high data rate content in future wireless communication networks .for example , nc has been considered for delivering high data rate multimedia broadcast or multicast services ( mbms ) .in addition to being high data rate in nature , such applications also often have strict delay requirements. however , the higher throughput offered by nc does not necessarily translate into faster delivery of information to the application .in general , the mixed information needs to be disentangled or network decoded first . understanding the interplay between throughput and delay and devising nc schemes that strike a balance between the two are particularly important , and this has proven to be challenging .an important example that illustrates the tension between throughput and delay is random linear network coding ( rlnc ) in broadcast erasure channels . in rlnc , the sender combines a frame or block of packets using random coefficients from a finite field and broadcasts different combinations until all receivers have received linearly independent coded packets . in this case, rlnc achieves the best throughput ( block completion time ) among block - based nc schemes .however , the delay performance may not be desirable , as decoding at the receivers is generally only possible after independent coded packets are successfully received . in order to reduce the decoding delay in nc systems ,an attractive strategy is to employ instantly decodable nc ( idnc ) .as the name suggests , idnc aims to provide instant packet decoding at the receivers upon successful packet reception , a property that rlnc does not guarantee .a decoding delay occurs at a receiver when it is not targeted in an idnc transmission .that is , it receives a packet that contains either none or more than one of that receiver 's desired packets . compared to rlnc , idnc in broadcast erasure channels can have a lower throughput . in other words , idnc incurs a generally higher completion time for the broadcast of the same number of packets .however , it can provide a faster delivery of uncoded packets to the application layer , as required for mbms .therefore , a similar tension between throughput and delay can also be observed in idnc .inspired by the low - complexity xor - based encoding and decoding process of idnc and its potential application in mbms and unicast settings , in this paper we are interested in understanding the interplay between its throughput and delay over broadcast erasure channels and proposing novel idnc schemes that offer a better control of these performance metrics .the problem of maximizing the throughput for a deadline - constrained video - streaming scenario is considered in , where each packet has a delivery deadline and has to be decoded before the deadline , otherwise it expires .
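a minimal illustration of the instant - decodability property mentioned above is given below ; the packet contents and receiver states are made up for the example .

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# the sender holds packets p1 and p2; receiver 1 already has p1 and wants p2,
# while receiver 2 already has p2 and wants p1
p1, p2 = b"\x10\x20\x30", b"\x0a\x0b\x0c"
coded = xor_bytes(p1, p2)            # one broadcast transmission serves both

# each receiver decodes its missing packet instantly with a single XOR
assert xor_bytes(coded, p1) == p2    # receiver 1 recovers p2
assert xor_bytes(coded, p2) == p1    # receiver 2 recovers p1
```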
in this paper , however , we consider a block - based transmission , where all the packets in the block have to be received by all the receivers and there is no explicit packet deadline .furthermore , in this paper , no new packet arrival is considered in the system while the transmission of a block is in progress .in addition , this study is applicable where partial decoding is beneficial and can result in lower delays irrespective of the order in which packets are being decoded .examples of such applications can be found in sensor or emergency networks and multiple - description source coded systems , in which every decoded packet brings new information to the destination , irrespective of its order . in this context ,the closest works to ours are and .in particular , the authors in aimed to improve the decoding delay of a generalized idnc scheme .they showed that for a lower decoding delay , maximum number of receivers with the lowest packet erasure probabilities should be targeted in each idnc transmission .in separate works , the same authors aimed to improve the completion time of idnc .they showed that for this purpose , the receivers with the maximum number of missing packets with the highest erasure probabilities should be targeted in each idnc transmission .a close study of reveals that trying to improve either idnc s decoding delay or completion time on its own can result in undermining the other performance metric .in other words , while trying to improve the decoding delay , the receiver(s ) with the maximum number of missing packets may remain untargeted , which can increase the completion time . also trying to improve the completion timemay limit the total number of receivers that can be targeted in each idnc transmission , which can increase the decoding delay . to the best of our knowledge, there is no joint control of completion time and decoding delay for idnc schemes in the literature .thus , in this paper , our objective is to take a holistic approach , in which the completion time and decoding delay of idnc are taken into account at the same time .in addition , we have observed that the decoding delay across various receivers in idnc schemes of and can vary significantly .this may not be desirable in mbms or other applications which should guarantee a certain quality of service across all receivers .these observations lead us to the following open problems : _ is there an idnc scheme that can offer a balanced performance in terms of the completion time and decoding delay and can also provide a more uniform or fair decoding delay across all receivers for the broadcast of packets in erasure channels ? _ to address these questions in this paper , we propose a new idnc transmission scheme which builds upon the contributions in and . 
at its core, our proposed scheme recognizes that 1 ) the completion time of each individual receiver is determined not only by the number of packets it is missing , but also by the number of idnc transmissions in which it is not targeted ( while still needing a packet(s ) ) and 2 ) the overall idnc completion time is the maximum of individual completion times .therefore , our idnc transmission scheme gives priority to the receivers that have the highest expected completion time so far .more precisely , the priority of each receiver is the sum of two terms : the first term is its number of missing packets divided by its average packet reception probability .this is the expected number of transmissions to serve this receiver if it is targeted in all following transmissions .the second term is the decoding delay the receiver has experienced so far . under this scheme, a receiver with a small number of missing packets which has remained untargeted in a number of previous transmissions may take precedence over other receivers .hence , our scheme tends to equalize the decoding delay experience across the receivers .furthermore , we will extend our proposed scheme to the case of broadcast erasure channels with memory , where the packet erasures occur in bursts , due to deep fading and shadowing . by following the proposed channel models in , we model the bursts of erasures ( i.e. the memory of the channel ) by a simple two - state gilbert - elliott channel ( gec ) model and propose two algorithms that can offer an improved balance between the completion time and decoding delay of idnc for different ranges of the channel memory . with this introduction ,we summarize the contributions and findings of our paper as follows : first , we present a holistic viewpoint of idnc .we formulate the idnc optimal packet selection that provides an improved balance between the completion time and decoding delay for broadcast transmission over memoryless channels as an ssp problem . however , since finding the optimal packet selection in the proposed ssp scheme is computationally complex , we use the ssp formulation and its geometric structure to find some guidelines that can be used to propose a new heuristic packet selection algorithm that efficiently improves the balance between the completion time and decoding delay in idnc systems .second , we extend the proposed packet selection algorithm to erasure channels with memory and propose two different variations of the algorithm that take into account the channel memory conditions and improve the balance between the completion time and decoding delay by selecting the packet combinations more effectively based on the channel memory conditions compared to the algorithms that are ignorant to the channel memory . finally , by taking into account both the number of missing packets and the decoding delay of the receivers , the proposed algorithm provides a more uniform decoding delay experience across all receivers .the rest of this paper is organized as follows .the system model is presented in section ii .the idnc graph representation and packet generation is introduced in section iii .section iv , presents the ssp problem formulation . in sectionv , we present a geometric structure for the ssp problem that helps us to find the properties of the optimal packet selection policy . 
a heuristic algorithm for idnc packet selection is proposed in section vi .the proposed heuristic algorithm is then extended to erasure channels with memory in section vii , where also a new layered algorithm is introduced .section viii presents the simulation results .finally , section ix concludes the paper .the system model consists of a wireless sender that is required to deliver a block ( denoted by ) of source packets to a set ( denoted by ) of receivers .each receiver is interested in receiving all the packets of .the sender initially transmits the packets of the block uncoded in an _ initial transmission phase_. each sent packet is subject to erasure at receiver with the probability , which is assumed to be fixed during a block transmission period .each receiver listens to all transmitted packets and feeds back a positive or negative acknowledgment ( ack or nak ) for each received or lost packet . at the end of the initial transmission phase ,two `` feedback sets '' can be attributed to each receiver : 1 .the has set ( denoted by ) is defined as the set of packets correctly received by receiver .2 . the wants set ( denoted by ) is defined as the set of packets that are missed at receiver in the initial transmission phase of the current block . in other words .the senders then stores this information in the _ state feedback matrix _ ( sfm ) , \forall i \in \mathcalm , j\in \mathcal n ] and ] is defined as the final accumulative decoding delay vector , where is the final accumulative decoding delay experienced by receiver ( i.e. the accumulative decoding delay experienced by receiver until it receives all its missing packets ) .the best possible performance of idnc in terms of the oct and decoding delay can be achieved if in every single transmission all the receivers with non - empty wants sets are targeted . in this case , after each transmission , assuming that no erasure occurs , the remaining number of transmissions is reduced by one and the accumulative decoding delays experienced by the receivers are zero . under this scenario ,the ict of each receiver is equal to the size of its initial wants set , , and the oct of the system is equal to the maximum ict of the receivers ( the size of the largest initial wants set , i.e. ) .furthermore . however ,since it is not always possible to target all the receivers with non - empty wants sets in every single transmission , due to instant decodability constraint , the receivers that are not targeted will experience a decoding delay , and thus , their icts will be increased by the value of their final accumulative decoding delay ( i.e. the total number of the time - slots that they were not targeted ) . therefore , we can write the ict of receiver , denoted by , as as shown in , the ict of each receiver depends on the size of its initial wants set , , and the final accumulative decoding delay it experiences , .having defined the receivers icts , it can be easily inferred that oct of the system is equal to the maximum ict of the receivers , and can be expressed as it is worth noting that based on , minimizing the decoding delay of receiver is equivalent to minimizing its ict .furthermore , based on , minimizing the oct is equivalent to minimizing the largest icts .therefore , the problem of providing a balance between the decoding delay and oct can be translated into balancing between and of the receivers . 
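As a concrete illustration of the bookkeeping just described, the following minimal Python sketch computes the individual completion times and the overall completion time from the receivers' wants-set sizes and accumulated decoding delays. Variable names are assumptions introduced here; the relation ICT_i = |W_i| + D_i follows the idealised accounting in the text (each receiver targeted in every remaining transmission it needs, erasure effects aside).

```python
# Sketch (assumed names): completion-time bookkeeping for one IDNC block.
# wants_size[i]     : |W_i|, packets receiver i still misses after the initial phase.
# decoding_delay[i] : D_i, accumulated decoding delay of receiver i so far.

def individual_completion_times(wants_size, decoding_delay):
    """ICT_i = |W_i| + D_i for every receiver i (idealised, erasure-free accounting)."""
    return [w + d for w, d in zip(wants_size, decoding_delay)]

def overall_completion_time(wants_size, decoding_delay):
    """OCT = max_i ICT_i, the block completion time seen by the slowest receiver."""
    return max(individual_completion_times(wants_size, decoding_delay))

# Example: three receivers missing 2, 4 and 1 packets, with delays 1, 0 and 3.
print(overall_completion_time([2, 4, 1], [1, 0, 3]))  # -> 4
```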
in the next section, we will show that the packet selection problem that offers such balance between the oct and decoding delay of the receivers for the idnc can be formulated in the form of an ssp problem .the ssp problem is a special case of an infinite horizon markov decision process , which can model decision based stochastic dynamic systems with a terminating state .ssp problem was first used in the context of idnc in in order to select the packet combinations that result in minimum completion time . in ssp problem, different possible situations that the system could encounter are modeled as states ( where denotes the state space of the ssp problem ) . in each state , the system must select an action from an action space that will charge it an immediate cost ( denotes the action space of the ssp problem ) . in the general form ,the cost of a transition from state to state is modelled as a scalar that depends on , the taken action , and . under this scenario , in the ssp formulation , the expected cost is calculated as , where represents the probability of system moving from state to state once action is taken .the terminating condition of the system can be thus represented as a zero - cost _ absorbing goal state_. an ssp policy ] , ] , respectively .the values of , and at the starting state of the recovery transmission phase , , are represented by ] and ] .in addition , for each state , we define to be the set of receivers who still need one or more packets .it is worth noting that the value of the accumulative decoding delay vector at the absorbing state ; i.e. ] .now , the state - action transition probability for an action , can be defined based on the possibilities of the variations in from state to state . to define , here ,we first introduce the following three sets : where denotes all the receivers with non - empty wants sets at state and represents the set of all the targeted receivers in the maximal clique . here , the first set consists of the receivers who have been targeted by the clique and their have been decreased from state to state .this means that these receivers have successfully received an idnc packet , which addressed them by one of their missing packets .thus , the size of their wants sets is reduced and their accumulative decoding delays are remained unchanged .the second set includes the receivers who have not been targeted but have successfully received the transmitted packet . in this case is increased from state to state , since the wants sets of these receivers have remained unchanged and their accumulative decoding delays have increased due to successfully receiving either a non - innovative or a non - instantly decodable packet .the third set includes the receivers who have not received any packet due to packet erasure and as a result , their wants sets and accumulative decoding delays have remained unchanged , thus .based on the definitions of these three sets , can be expressed as follows : of the example sfm in , title="fig : " ] of the example sfm in , title="fig : " ] the best possible action is the action that addresses all the receivers with non - empty wants sets at state , denoted by , by one of their missing packets . under this scenario , assuming no erasure occurs , the wants sets of all the receivers are reduced from state to state and their accumulative decoding delays remain unchanged ( i.e. and ) . in this case , for each receiver we will have .this is the best performance that can be achieved for an idnc scheme . 
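To make the state-action transition probability above more tangible, the sketch below evaluates the probability of one particular successor state, given each receiver's erasure probability and which of the three receiver sets it falls into. The outcome labels and data layout are assumptions for illustration; the probabilities themselves follow the description above (a successful reception contributes 1 - p_i whether or not the receiver was targeted, an erasure contributes p_i).

```python
# Sketch (assumed names): probability of a specific successor state when an
# action (a maximal clique with targeted set T) is taken.
# p[i]       : packet erasure probability of receiver i.
# outcome[i] : 'served'  -> targeted and received (wants set shrinks),
#              'delayed' -> received but not served (decoding delay grows),
#              'erased'  -> nothing received (receiver state unchanged).

def transition_probability(p, outcome):
    prob = 1.0
    for i, o in outcome.items():
        if o in ('served', 'delayed'):
            prob *= 1.0 - p[i]   # packet successfully received by receiver i
        elif o == 'erased':
            prob *= p[i]         # packet lost on receiver i's channel
        else:
            raise ValueError(o)
    return prob

# Two receivers with erasure probabilities 0.1 and 0.3: the first is served,
# the second loses the packet.
print(transition_probability({1: 0.1, 2: 0.3}, {1: 'served', 2: 'erased'}))  # 0.9 * 0.3
```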
knowing that any transition ( due to any action ) takes one packet transmission , the cost of action on each receiver can be defined as .this results in three possible cost values , i.e. , associated with action on receiver that can be expressed as follows : * means that action does not incur any cost on receiver in terms of its wants set and accumulative decoding delay , if it successfully receives one of its missing packets . in this case , and .* means that receiver ( targeted / untargeted ) did not receive the coded packet due to packet erasure . in this case, there is no cost on the accumulative decoding delay , however , the wants set of receiver remains unchanged , as no missing packet was decoded . here, at least one more time - slot ( one transmission ) is required to be able to reduce the size of receiver s wants set . under this scenario , and .* means that receiver was not targeted by action and has successfully received either a non - instantly decodable or a non - innovative packet . in this case , there are costs on both the accumulative decoding delay and wants set of receiver , as it experiences an increase in its accumulative decoding delay and the size of its wants set remains unchanged .as a result and . based on the above discussion ,if receiver is targeted by action , i.e. , the cost will be thus , the expected cost given receiver is targeted by action can be calculated as however , if receiver is not targeted by action , i.e. , the cost will be thus , the expected cost given receiver is not targeted by action can be calculated as the total expected cost of action over all the receivers in can thus be defined as the optimal policy as presented in section [ ssp_problem ] can be expressed as \}}\end{aligned}\ ] ] where is the expectation operator over different transmission probabilities when action is taken .thus , the optimal action at state is the action that minimizes the cost as well as the expectation of the optimal value functions of the successor states .however , solving this ssp problem is computationally complex and requires exhaustive iterative techniques .furthermore , there is no closed - form solution to this problem .thus , instead of solving the ssp problem formulated in , we can study its properties and structure to draw the characteristics of the optimal policy . to this end , we will study the geometric structure of the ssp solution in the context of the proposed idnc scheme .in other words , our aim of the ssp formulation is not to use it as a solution , but to study its properties by the help of its geometric structure and find some guidelines for policies that can improve the balance between the oct and decoding delay in idnc systems .we then use these policies to design simple yet efficient heuristic algorithms in section [ sec : heuristicalg ] .in order to find some guidelines for the policies that can efficiently improve the balance between the oct and decoding delay in idnc systems , in this section , we study the geometric structure of the ssp problem . given the representation of the ssp problem in each state by the state vector of the receivers ] in this space all the states that have the state vectors ] , where based on this new vector definition , we can re - define our space such that the points are identified by the coordinates of the vectors instead of . 
in this case , the actions move the system within hyper - rectangles with sides either equal to 1 or in the -th dimension .it means if an action results in an increase in the accumulative decoding delay , then , however , if it addresses one of the receiver s missing packets , it leads to .moreover , if receiver does not receive the packet due to erasure , then . in other words : in the next section , by the help of the above - mentioned design guidelines , we will propose a heuristic packet selection algorithm .in this section , we propose a greedy algorithm to select the clique according to the findings in the previous section .we use norm here , but other norms are also possible .the proposed algorithm performs clique selection , using a maximum weight vertex search approach . for this search to be efficient in finding maximal cliques, the vertices weights must not only reflect the values of their inducing receivers , but also their adjacency to the vertices with high .we then define the weighted degree of vertex , denoted by , as : where was defined in .thus , a large weighted degree reflects its adjacency to a large number of vertices belonging to receivers with large values of .we finally design the vertex weight for vertex as : consequently , a vertex has a high weight if it both belongs to a receiver with large , and is connected to the receivers with large values .based on the above weight definition , we introduce our proposed packet selection algorithm as follows . in each state ,the algorithm starts by selecting the vertex with the maximum weight , denoted by , and adds it to the clique .note that at first , is an empty set . then at each following iteration, the algorithm first recomputes the new vertices weights within the subgraph connected to all previously selected vertices in , denoted by , then adds the new vertex with the maximum weight to it .the algorithm stops when there is no further vertex connected to all vertices in .we refer to this algorithm as _ maximum weight vertex search algorithm _ ( mwvs ) .the proposed algorithm is summarized in algorithm [ alg : mwvs ] . 1 .* initialize * + construct based on .while * do + compute using , and .+ select . + set .+ update subgraph this section , our goal is to extend our proposed mwvs scheme to the coded transmissions in erasure channels with memory . to model erasure channels with memory , we employ the well - known gilbert - elliott channel ( gec ) which is a markov model with a _ good _ and a _ bad _ state .when the channel is in the good state packets can be successfully received , and when the channel is in the bad state packets are lost ( e.g. , due to deep fades in the channel ) .the probability of moving from the good state to the bad state is and the probability of moving from the bad state to the good state is .steady - state probabilities are derived as and , where is the channel state of receiver in the previous transmission . here , without loss of generality , we assume that , which results in equiprobable states in the steady - state regime .other scenarios can be considered in a similar manner .following , we define the memory content of the gec as , which signifies the persistence of the channel in remaining in the same state .a small means a channel with little memory and a large means a channel with large memory .we assume that different receivers links are independent of each other with the same state transition probabilities . 
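A minimal sketch of the two-state Gilbert-Elliott channel just described is given below. The symbols are assumptions (the extracted text omits them): q denotes Pr(good -> bad), s denotes Pr(bad -> good), and the memory content is taken as mu = 1 - q - s, a common definition consistent with "persistence" of the channel state.

```python
import random

# Sketch of the Gilbert-Elliott erasure channel: 'good' = packet received,
# 'bad' = packet erased.  q = Pr(good -> bad), s = Pr(bad -> good) (assumed names).

def gec_steady_state(q, s):
    """Long-run probabilities of the good and bad states of the two-state chain."""
    pi_good = s / (q + s)
    return pi_good, 1.0 - pi_good

def simulate_gec(q, s, n_slots, start='good'):
    """Return the per-slot channel states over n_slots transmissions."""
    state, states = start, []
    for _ in range(n_slots):
        if state == 'good':
            state = 'bad' if random.random() < q else 'good'
        else:
            state = 'good' if random.random() < s else 'bad'
        states.append(state)
    return states

q = s = 0.05                    # q = s gives equiprobable steady-state probabilities
print(gec_steady_state(q, s))   # (0.5, 0.5)
print(1.0 - q - s)              # memory content mu = 0.9: a persistent channel
```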
here, the proposed mwvs algorithm in section [ sec : heuristicalg ] is modified so that it takes into account the channel memory conditions . in the modified framework , the positive or negative acknowledgment ( ack or nak )that each receiver feeds back for each received or lost packet can be utilized to infer the channel state of that receiver in the previous transmission .the proposed mwvs algorithm in section [ sec : heuristicalg ] can then be generalized for erasure channels with memory by defining the probability of successful reception by the receiver as the probability of moving to the good state in the current time - slot from its previous state , i.e. . so the proposed mwvs algorithm can be easily implemented in erasure channels with memory by replacing with in as in other words , the weight of each vertex in can now be recalculated based on the conditional reception probability of its inducing receiver , given its previous state , as ^ 2\theta_{i , j}(s)\end{aligned}\ ] ] however , for erasure channels with strong memory , the receivers have a strong tendency to stay in their previous states .it means if they have been in state in the previous time - slot , they are most likely to stay in state in the current time - slot , and vice versa , if they have been in state , they are most likely to stay in state . under this case , for the receivers in state b , will be very small and as a result the term in will be large .consequently , high weights will be given to the receivers that have been in state in the previous transmission ( also referred to as _ bad - channel receivers _( bcr ) ) .but it should be noted that targeting the bcrs most likely would not result in any decoding for them , as with a very high probability their channels will remain in state in the current transmission .however , addressing the receivers that were in sate in the previous transmission ( also referred to as _ good - channel receivers _ ( gcr ) ) can potentially result in the decoding of their missing packets .inspired by these scenarios , in the next sub - section , we will introduce a _ layered maximum weight vertex search algorithm _ ( referred to as mwvs - layered ) , which is specifically designed for erasure channels with persistent memory . here, our goal is to extend the proposed mwvs algorithm in section [ subsec : mwvsmemory ] for erasure channels with persistent memory . in order to do so, we follow the same approach as in .the proposed algorithm comprises two different layers of subgraphs .the first layer of subgraph , , consists of vertices of gcrs . in the first step , the mwvs algorithm is applied on the subgraph , and is obtained .then , in the second step , the algorithm finds by applying the mwvs algorithm another time on the second layer of subgraph , , consisting of bcrs that are adjacent to all the vertices of the chosen clique .thus , the final clique can be obtained by the union of the cliques from the two layers as .the steps of mwvs - layered algorithm are summarized in algorithm [ alg : lmwvs ] . 1 .* initialize * and + construct based on . + form and according to the channels previous states .2 . * while * do + compute using , and .+ select . + set .+ update subgraphs and .while * do + compute using .+ select . + set .+ update subgraph .4 . this section , we present the simulation results comparing the performance of our proposed mwvs and mwvs - layered algorithms and the schemes in over a wide range of channel memory conditions . 
furthermore , as our benchmark for the minimum oct performance, we will compare the oct of our proposed mwvs and mwvs - layered algorithms with the rlnc scheme .we start with our simulation results for memoryless erasure channels and compare the performance of our proposed mwvs algorithm with the schemes in and , denoted by `` min - oct '' and `` min - dd '' , respectively .furthermore , we have simulated the proposed scheme for and , denoted by mwvs ( ) " and mwvs ( ) " , respectively . corresponds to the case that the objective of the proposed scheme is to reduce the accumulative decoding delay and corresponds to the case where the objective of the proposed scheme is to reduce the oct of the system in each time slot .the simulation results of the proposed mwvs algorithm when equal weights are assigned to and , as in , are denoted by mwvs " . in our simulations for the broadcast memoryless erasure channels , we assume that packet erasures of different receivers change from block to block in the range $ ] with an average equal to 0.15 .the simulations are performed for different number of packets and receivers in the system .it should be noted that the presented simulation results in this section are the mean values , i.e. the oct results show the average oct of the transmission of packets over 500 instances of sfm . in terms of the decoding delay ,the mean decoding delay of different receivers are computed per block , and then these mean decoding delays are averaged over 500 instances of sfm . hence , the decoding delay results are actually the mean of mean decoding delays . figure [ fig : octvsdd](a ) depicts the oct and decoding delay tradeoff curves of different algorithms for various number of packets for receivers . moreover ,the oct and decoding delay tradeoff curves of these algorithms for various number of receivers for packets is presented in figure [ fig : octvsdd](b ) . from these figures, we first observe that the min - oct algorithm in that achieves the minimum oct among the idnc schemes in figures [ fig : octvsdd](a ) and [ fig : octvsdd](b ) , results in the worst decoding delay performance , and the min - dd algorithm in that achieves the minimum decoding delay performance , results in the worst oct performance .however , in these figures it is shown that our proposed mwvs algorithm provides an improved balance between the oct and decoding delay for the whole range of number of packets and receivers . furthermore ,as it was expected , we observe that the performance of the proposed mwvs algorithm with is the same as the performance of min - oct algorithm proposed in .also , it can be seen that the performance of the proposed algorithm with is very close to the performance of the min - dd algorithm proposed in .however , it is worth noting that the proposed mwvs algorithm when aims to reduce the accumulative decoding delay ( defined in definition 4 ) , while the min - dd algorithm in aims to reduce the decoding delay in each time - slot ( defined in definition 3 ) .figure [ fig : varddvsk ] illustrates the variance of the decoding delay versus the number of packets for receivers . from this figure , it can be seen that our proposed mwvs algorithm significantly outperforms the other algorithms in terms of the variance of the decoding delay .this can be translated into a better fairness in the decoding delay experienced by different receivers . 
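The evaluation statistics used above can be reproduced with a short sketch. One plausible reading of the averaging procedure is implemented below (names and data layout are assumptions): per-receiver mean decoding delays are formed over the simulated blocks, their average gives the reported "mean of mean" delay, and their variance across receivers is the fairness measure shown in the variance plot.

```python
# Sketch (assumed layout): delays[b][i] is the accumulative decoding delay of
# receiver i in simulated block b.

def per_receiver_means(delays):
    n_blocks, n_recv = len(delays), len(delays[0])
    return [sum(delays[b][i] for b in range(n_blocks)) / n_blocks
            for i in range(n_recv)]

def mean_and_variance(delays):
    means = per_receiver_means(delays)            # one mean delay per receiver
    mu = sum(means) / len(means)                  # reported mean decoding delay
    var = sum((m - mu) ** 2 for m in means) / len(means)  # fairness across receivers
    return mu, var

print(mean_and_variance([[2, 0, 5], [1, 1, 4]]))  # -> (approx. 2.17, approx. 2.89)
```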
for erasure channels with memory ,the full graph search and the layered graph search algorithms proposed in are used as our reference for the minimum decoding delay performance .these algorithms are denoted by `` min - dd '' and `` min - dd - layered '' in the figures , respectively . as our reference for the minimum oct performance for erasure channels with memory ,we have modified the algorithm in to become channel memory aware by replacing the probability of successful reception at receiver with .we refer to this scheme as `` min - oct '' .furthermore , we have extended this scheme to a two - layered algorithm , where the first layer consists of gcrs and the second layer consists of bcrs . in the first step ,the algorithm is applied on the first layer and a clique of gcrs is obtained .then , in the second step , the algorithm is applied to the second layer and a clique of bcrs that are adjacent to all the vertices of the chosen clique of gcrs is found .then , the final clique is obtained by the union of the cliques from the two layers . in our simulation results ,this scheme is referred to as `` min - oct - layered '' . for the broadcast erasure channels with memory , we assume for all the receivers , and the channel memory , , ranges from 0 ( memoryless ) to 0.98 ( very persistent memory ) .the simulation results are provided for a wide range of channel memory contents as well as different number of packets and receivers .figures [ fig : cdvsmemory](a ) and [ fig : cdvsmemory](b ) illustrate the oct of the receivers versus channel memory for packets and receivers , respectively . as can be seen from these figures , for low channel memory content ( roughly ranging from 0 to 0.45 ) ,the full graph algorithms outperform their layered graph counterparts in terms of oct .however , when the memory content of the channel is high ( roughly ranging from 0.45 - 0.98 ) , the layered graph techniques significantly outperform their full graph counterparts .the mean decoding delay performance versus channel memory is depicted in figure [ fig : ddvsmemory ] . 
from this figurewe can see that in terms of the decoding delay , the min - dd algorithm outperforms the min - dd - layered for memory content ranging from 0 to 0.5 , while the min - dd - layered outperforms min - dd for higher channel memory contents ( ranging from 0.5 to 0.98 ) .for all the other investigated schemes , the layered graph techniques always result in lower decoding delays compared to their full graph counterparts .this is due to the fact that in the layered graph techniques , the priority is always given to the gcrs to be addressed by one of their missing packets , and as shown in giving higher priorities to the receivers with higher probabilities of successful reception improves the decoding delay experienced by the receivers .furthermore , as shown in these figures , the proposed mwvs - layered scheme provides a better balance between the oct and decoding delay for the whole range of channel memory content .figure [ fig : memoryzeropointsix ] shows the oct and decoding delay tradeoff curves of the system for different number of packets for channel memory and .the results show that for the layered graph techniques outperform their full graph counterparts .again it can be seen that the min - oct - layered algorithm that achieves the lowest oct among the idnc schemes results in the worst mean decoding delay among the layered graph algorithms , and the min - dd - layered algorithm that achieves the lowest decoding delay results in the worst oct . however , for as we expected , the proposed mwvs - layered algorithm results in an improved balance between the oct and decoding delay .in this paper , we proposed a new holistic viewpoint of instantly decodable network coding ( idnc ) schemes that simultaneously takes into account both the overall completion time ( oct ) and decoding delay and improves the balance between these two performance metrics for broadcast transmission over erasure channels with a wide range of memory conditions .we formulated the optimal packet selection for such systems using an ssp technique .however , since solving the ssp problem in the proposed scheme is computationally complex , we further proposed two different heuristic algorithms that each improves this balance between the oct and decoding delay for a specific range of channel memory conditions . furthermore , it was shown that the proposed scheme offers a more uniform decoding delay experience across all receivers .extensive simulations were conducted to assess the performance of the proposed algorithms compared to the best known existing algorithms in the literature .the simulation results show that our proposed algorithms achieve an improved balance between the oct and decoding delay .x. li , c. wang , and x. lin , `` on the capacity of immediately - decodable coding schemes for wireless stored - video broadcast with hard deadline constraints , '' _ ieee journal on selected areas in communications _ ,29 , no . 5 , pp .10941105 , may 2011 .s. sorour and s. valaee , `` an adaptive network coded retransmission scheme for single - hop wireless multicast broadcast services , '' _ ieee / acm transactions on networking _ , vol .19 , no . 3 , pp .869878 , jun . 2011 .r. costa , d. munaretto , j. widmer , and j. barros , `` informed network coding for minimum decoding delay , '' in _ proc .ieee international conference on mobile ad hoc and sensor systems ( mass ) _ , 2008 , pp .8091 .yeow , a. t. 
hoang , and c .- k .tham , `` minimizing delay for multicast - streaming in wireless networks with network coding , '' in _ proc .ieee conference on computer communications ( infocom ) _ , apr .2009 , pp .190198 .j. k. sundararajan , p. sadeghi , and m. mdard , `` a feedback - based adaptive broadcast coding scheme for reducing in - order delivery delay , '' in _ proc .workshop on network coding , theory , and applications ( netcod ) _ , 2009 ,a. yazdi , s. sorour , s. valaee , and r. kim , `` optimum network coding for delay sensitive applications in wimax unicast , '' in _ proc .ieee conference on computer communications ( infocom ) _ , apr .2009 , pp .25762580 .p. sadeghi , r. shams , and d. traskov , `` an optimal adaptive network coding scheme for minimizing decoding delay in broadcast erasure channels , '' _ eurasip journal on wireless communications and networking _ , pp .114 , jan . 2010 .b. swapna , a. eryilmaz , and n. shroff , `` throughput - delay analysis of random linear network coding for wireless broadcasting , '' in _ proc .international symposium on network coding ( netcod ) _ , jun .2010 , pp . 16 .m. nistor , d. e. lucani , t. t. v. vinhoza , r. a. costa , and j. barros , `` on the delay distribution of random linear network coding , '' _ ieee journal on selected areas in communications _ , vol .29 , no . 5 , pp .10841093 , may 2011 .p. sadeghi and m. yu . instantly decodable versus random linear network coding : a comparative framework for throughput and decoding delay performance .[ online ] .available : \url{http://arxiv.org / abs/1208.2387}[\url\{http://arxiv.org / abs/1208.2387 } ] t. ho , m. mdard , r. koetter , d. karger , m. effros , j. shi , and b. leong , `` a random linear network coding approach to multicast , '' _ ieee transactions on information theory _52 , no .10 , pp . 44134430 , 2006 .s. wang , c. gong , x. wang , and m. liang , `` instantly decodable network coding schemes for in - order progressive retransmission , '' _ ieee communications letters _17 , no . 6 , pp . 10691072 , jun .a. le , a. s. tehrani , a. g. dimakis , and a. markopoulou , `` instantly decodable network codes for real - time applications , '' in _ proc .workshop on network coding , theory and applications ( netcod ) _ , 2013 .p. sadeghi , r. a. kennedy , p. rapajic , and r. shams , `` finite - state markov modeling of fading channels : a survey of principles and applications , '' _ ieee signal processing magazine _ , vol . 25 , no . 5 , pp . 5780 , sep .m. s. karim and p. sadeghi , `` decoding delay reduction in broadcast erasure channels with memory for network coding , '' in _ proc .ieee international symposium on personal , indoor and mobile radio communications ( pimrc ) _ , sep .2012 , pp .s. sorour , n. aboutorab , p. sadeghi , m. s. karim , t. al - naffouri , and m. alouini , `` delay reduction in persistent erasure channels for generalized instantly decodable network coding , '' in _ proc .ieee vehicular technology conference ( vtc ) _, jun . 2013 , pp . 15 . | this paper studies the complicated interplay of the completion time ( as a measure of throughput ) and the decoding delay performance in instantly decodable network coded ( idnc ) systems over wireless broadcast erasure channels with memory , and proposes two new algorithms that improve the balance between the completion time and decoding delay of broadcasting a block of packets . 
we first formulate the idnc packet selection problem that provides joint control of the completion time and decoding delay as a stochastic shortest path (ssp) problem. however, since finding the optimal packet selection policy using the ssp technique is computationally complex, we employ its geometric structure to find some guidelines and use them to propose two heuristic packet selection algorithms that can efficiently improve the balance between the completion time and decoding delay for broadcast erasure channels with a wide range of memory conditions. it is shown that each of the two proposed algorithms is superior for a specific range of memory conditions. furthermore, we show that the proposed algorithms achieve improved fairness in terms of the decoding delay across all receivers. instantly decodable network coding, decoding delay, completion time, broadcast, gilbert-elliott channels. |
modern portfolio theory formulates the asset allocation problem as an optimization model , with the objective of maximising the expected portfolio return subject to keeping the estimated risk below a pedefined level ( the `` risk budget '' ) . in theory, this approach should result in a carefully diversified asset allocation across various investable assets . in practice , however , optimal asset allocations computed from portfolio models are often seen as counterintuitive and ill - diversified , as they may contain large positions in just a few assets , and a large number of very small positions . to overcome this problem, fund managers often impose additional constraints in the form of strategic allocation targets .we have found however , that the imposition of allocation targets is artficial and unnecessary if both the parameter estimation and the numerics that go into a portfolio model are carried out very carefully .this paper is aimed primarily at practitioners . inspired by an essay by wilmott ,we give a list of seven sins in porfolio optimization that should be avoided at all cost .the best known investment model is the one - period mean - variance ( mv ) model of markowitz . for the purposes of this paper we restrict ourselves to this model , as it is both very simple and widely known among the readership and yet of fundamental interest in finance .a very similar discussion could be held about more complex multiperiod models that involve more general risk terms . for the sake of completeness and to define the notation , we start with a brief recap of the mv model . an investor wishes to actively manage a portfolio in risky assets .the investor holds fixed positions in asset over an investment period ] . is the random variable describing the return per unit position in asset over the investment period ] between and lies in ) , and if is a convex function ( that is , for any and $ ] , ) , then the problem is called a _convex optimization problem _ be a convex set . ] .if is symmetric positive definite and , the analytic solution of this problem is given as note that , e.g. the optimum lies at the boundary of the aforementioned ellipsoid .we introduce the sharpe ratio for in this context as where is a risk - free interest rate over the considered investment period .we will henceforth assume short investment periods on the order of days and set , but all the calculations easily extend to the case where .note that the sharpe ratio of the optimal portfolio does not depend on .we identified the model formulation , the estimation of parameters and the algorithmic solution as potential mine fields of portfolio optimization .another mine field concerns parameter uncertainty . in order to keep the technical difficulty of this paper to a minimum , we chose not to deal with this issue here , but we refer our readers to the extensive recent literature on robust optimization . among the countless ways to produce unsatisfactory results through portfolio modelling , we chose to discuss just seven of the most common mistakes that we have seen committed .this is a true classic that deserves the top spot here . a single negative eigenvalue ( even if close to zero ) can spoil it all . in practicethe covariance matrix is estimated using historic data .it is tempting to estimate the elements of the matrix using independent processes for each entry , e.g. 
moving averages with distinct update rates .this is a recipe for disaster .we illustrate the consequences : let be a symmetric real matrix .then there exists a real orthonormal matrix such that is a diagonal matrix .the diagonal entries of are the eigenvalues of .the columns of are the eigenvectors .let us assume that there exists an eigenvector associated with a negative eigenvalue .then .hence we can find a portfolio with a negative variance , that is , corresponds to a portfolio of negative risk .taking weights , where when and otherwise , and where , we obtain a portfolio with expected return that satisfies the risk budget , since this might entice one to take arbitrarily large positions in the erroneous belief that the risk budget will not be exceeded .some argue that the introduction of extra constraints such as the self - financing condition and non - shortselling conditions avoid the problem of large positions .however , the positions will still be completely nonsensical , and the extra constraints may not be applicable to all asset classes . although there exist some approaches to correct estimated and polluted covariances , see higham , we found that a careful analysis of the estimation process itself adds far more value to the trading strategy .the second sin is of more subtle nature and yet more dramatic . for almost rank deficient matrices the ellipsoid in fig .[ fig : volcon1 ] becomes very cigar - shaped .the eigenvectors associated with small eigenvalues will change strongly under even very small perturbations of the matrix . in a typical backtestthe entries of such a matrix are updated and hence the ellipsoid will rotate. even if has only positive eigenvalues they might be still too small .the inverse of is where and are the aforementioned matrices of eigenvectors and eigenvalues .this decomposition can be represented as sum of outer products , that is we assume .inserting into yields the position is a linear combination of _ principal portfolios _ .this term only gives the explicit solution for the standard model .the effect of ill - conditioning is equally important in problems with more constraints .this reveals the fundamental problem of the markowitz model : the weights scale as , that is , the optimizer will try to align the position with principal portfolios linked with small eigenvalues .michaud calls this effect the maximization of the estimation error as the eigenvectors associated with small eigenvalues are most sensitive to noise . in practice, it has often been observed that this results in rather extreme positions .there are numerous approaches to overcoming this behavior .a popular approach is to ignore eigenvalues smaller than a threshold in the sum for .often the threshold is induced by the wigner semicircle distribution .another powerful approach is to use shrinkage estimators , the most simple of which is the convex combination where is the identity matrix .so the eigenvalues of are .although matrix shrinkage towards does not change the eigenvectors , and hence eigenvectors are still equally sensitive to noise , the eigenvalues change ( and are in particular larger than ) .this weakens the impact of the last few eigenvectors , especially as tends to be close to orthogonal to them . 
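A minimal sketch of the shrinkage estimator just described follows: the convex combination (1 - alpha) * C + alpha * I, whose eigenvalues are (1 - alpha) * lambda_i + alpha. The shrinkage intensity alpha is an input to be chosen by the user; the option of scaling the identity target by the average sample variance is an added practical variant, not something prescribed by the text.

```python
import numpy as np

def shrink_covariance(sample_cov, alpha, scale_target=False):
    """Return (1 - alpha) * C + alpha * target, with target = I by default."""
    n = sample_cov.shape[0]
    target = np.eye(n)
    if scale_target:                          # optional practical variant (assumption)
        target *= np.trace(sample_cov) / n    # identity scaled to the average variance
    return (1.0 - alpha) * sample_cov + alpha * target

rng = np.random.default_rng(0)
C = np.cov(rng.normal(size=(60, 5)), rowvar=False)    # 60 observations, 5 assets
C_shrunk = shrink_covariance(C, alpha=0.2)
# Shrinkage lifts the smallest eigenvalue, taming the ill-conditioning discussed above.
print(np.linalg.eigvalsh(C).min(), np.linalg.eigvalsh(C_shrunk).min())
```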
a third approach that shows excellent practial resultswas developed by zuev , who used semi - definite programming to maximize the smallest eigenvalue among the covariance matrices that lie in a certain subset of the set of symmetric positive definite matrices .we have observed that some find it simply too tempting to bypass all the maths by using a more trivial intermediate stage in the optimization process . rather than solving the toy problem, some practitioners first identify the unit - vector that results in the maximal expected return .this vector is . in a second stagethey then identify the scaling factor that achieves to achieve . using this approachthe optimal vector is although this looks close to the analytic solution , dramatic differences exist : in the numerator of all information contained in is ignored . as a result ,this approach does not exploit diversification , one of the uncontested benefits of investing in a portfolio rather than a single asset .worse even , there is some danger lurking in the denominator .as soon as is aligned with an eigenvector corresponding to a small eigenvalue the denominator can be extremely small and blow up the position size and hence the portfolio .a modern solver may iterate through the feasible domain by taking dozens of intermediate steps that converge towards the unique global maximum of .however , the choice of those steps should be left to the solver .most portfolio optimization models are convex programming problems , a class of well - studied optimization problems with rich mathematical structure .non - convex optimization problems may have multiple local extrema and convergence to a global extrema ( it may be not unique ) can not be guaranteed as the solver may get attracted to any of the local extrema . when multiple optimization problems are solved sequentially , the solution may get attracted to a different local extremum each time , even when the model parameters change only slightly , and this can result in high trading costs .it is therefore often preferrable to use a convex portfolio model that avoids these costs or at least do extensive backtesting to estimate the impact of these costs on performance .a related problem is the failure to recognize convexity when it is present implicitly in a model , as this prevents one from using convex optimization software that would be able to solve the problem much efficiently and robustly than most nonconvex solvers . as an example , consider a portfolio manager who aims to maximize the sharpe ratio we note that the sharpe ratio is not defined for and the limit of for does not exist .further , the sharpe ratio function has no unique maximizer , since for all and .using model in conjunction with a nonconvex solver may lead to the evaluation of for points close to the origin , which causes numerical inaccuracy . to avoid this problem, we wish to avoid the origin and a reasonably large region around it . exploiting the fact that , we may rescale to satisfy . 
hence, the problem becomes the numerator of now being constrained to a fixed number , it plays no role in the maximization of and can be neglected , that is , the optimal decision vector can be found by solving the simpler problem this problem is still not convex , as the feasible domain is the boundary of the risk ellipsoid , but it provably has the same solution as the convex problem for generically we have , and if , then is an improved feasible solution for .this contradicts the optimality of and shows that the optimal must satisfy the constraint of the nonconvex problem automatically . in this fashion , solving the convex problem yields a maximizer of the nonconvex problem whilst avoiding the associated numerical problems .most portfolio managers solve a variant of with additional constraints for which the optimal solution is no longer given in closed form .a solution has to be computed algorithmically in this case , and the choice of a good solver is crucial .the most robust and powerful solvers require the problem to be reformulated in a specific standard form , which usually requires a lift - and - project approach that will be further described below .for example , matlab s ` quadprog ` is unable to solve problem directly without lifting the risk constraint into to the utility function . bypassing such steps by writing proprietary optimization softwareis not recommended , as the numerical challenges faced in such attempts are easily underestimated .a particularly popular route in this context is to use _ simulated annealing _ , because it is intuitive to understand and trivial to code .however , whilst simulated annealing has its place in highly nonconvex and unstructured global optimization problems , it is not designed to exploit any of the strong mathematical structures that underly portfolio problems , in particular convexity , and as a result it converges far less robustly and exceedingly more slowly than modern convex optimization solvers . to demonstrate the fallacy of simulated annealing we tried to implement a simple scheme . however , even the most basic convex and constrained problems are hard to approximate with such algorithms and require careful tuning . it remains a black art . in a multi - period setting in which the portfolio positions are frequently updated , it is of utmost importance to choose a robust solver that yields reproducible results free of random fluctuations that cause unnecessary transactions costs when rebalancing the portfolio .the issue of transaction costs is so important that is wise to reduce the fluctuations that occur as a function of the changing model parameters even further , by ways of regularization terms . the use of a fast , robust , reliable , deterministic algorithm is also hugely important in any back - testing framework for quantitative trading in which one wishes to study the statistical behavior of the trading strategy and avoid all avoidable sources of artificial randomness .most portfolio problems have reformulations as second order cone programming problems ( socps ) , or , occasionally semidefinite programming problems ( sdps ) .such problems can be solved in polynomial time via iterative schemes known as _ interior - point methods_. 
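Before turning to the conic solvers discussed next, a minimal numerical sketch of the convexified Sharpe-ratio maximisation is given below (assuming a zero risk-free rate and no additional constraints). It solves the rescaled problem via the linear system C x proportional to mu and then places the portfolio on the risk budget; the specific numbers are illustrative only.

```python
import numpy as np

def max_sharpe_weights(mu, C, sigma_max):
    """Solve min x'Cx s.t. mu'x = 1 (up to scaling) and rescale onto the risk budget."""
    y = np.linalg.solve(C, mu)                # direction proportional to C^{-1} mu
    scale = sigma_max / np.sqrt(y @ C @ y)    # enforce x'Cx = sigma_max^2
    return scale * y

mu = np.array([0.04, 0.02, 0.03])
C = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.01],
              [0.01, 0.01, 0.06]])
x = max_sharpe_weights(mu, C, sigma_max=0.15)
# The resulting Sharpe ratio is independent of the scaling, as noted above.
print(x, (mu @ x) / np.sqrt(x @ C @ x))
```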
practical implementations of such schemes are currently the leading codes for solving portfolio problems in terms of robustness , speed and reliability .leading codes include the following : * sdpt3 , see http://www.math.nus.edu.sg//sdpt3.html , is freely available , only available for matlab .* sedumi , see http://sedumi.ie.lehigh.edu/ , is very similar to sdpt3 . * mosek , see http://www.mosek.com is the leading commercial conic programming solver and has interfaces for the most common programming languages .lifting is a powerful technique to make hard problems look simple by introducing additional dimensions . using a classic example we demonstrate what it is all about :often problems of type are solved in a large loop over thousands of investment periods . for each period some of the estimates may change and the portfolio is rebalanced .it is certainly a good idea to take into account some costs when changing the position .most quants use quadratic costs as the resulting cost term is differentiable .however , such terms tend to overestimate costs for large trades .let us assume that the system currently holds the position and a new investment periods starts . the formulation above is not taking into account the position at all .this may result in large costs and spurious oscillations in the position.to address these problems we penalize deviations from , that is the positive parameters reflect the estimate costs per unit position .introducing auxiliary variables the problem can be reformulated as \label{lttf2}\\ \text{s.t . } & x^{\operatorname{t}}c x\leq\operatorname{\sigma^2_{max } } , \nonumber \\ & x_i - x^0_i \leq t_i,\nonumber \\ & x^0_i - x_i \leq t_i.\nonumber\end{aligned}\ ] ] problem is called a _ lifting _ of problem . through the introduction of extra variables ,liftings involve an inflation of the problem dimension .this seeming disadvantage is offset when the following occurs : * the lifted problem belongs to a problem class with lower complexity than the original problem .for example , in the above case , a nonsmooth problem turned into a smooth one ( the nondifferentiable absolute value terms in the constraint have disappeared ) .in other cases , nonconvex problems can be convexified through liftings . *the lifted problem belongs to a problem class for which efficient standard software already exists , avoiding the need to implement a custom designed algorithm . in the example above , a quadratic programm resulted , one of the most efficiently solved class of optimization problems .as long as problem has a non - empty feasible domain as it contains the trivial portfolio . in equity portfoliosthe position is often interpreted as a fraction of the investors s capital .being _ fully invested _ is then reflected by the constraint .the enhanced problem \label{equity}\\ \text{s.t . } & x^{\operatorname{t}}\operatorname{cov}(r , r)x\leq\operatorname{\sigma^2_{max}},\nonumber\\ & e^{\operatorname{t } } x = 1 \nonumber\end{aligned}\ ] ] has no solution if is smaller than the minimum variance of all fully invested portfolios , e.g. c. stein .inadmissibility of the usual estimator for the mean of a multivariate distribution . in j. neyman ( ed . ) , _ proc .berkeley symp .statist ._ , vol . 1 , pp .california press , 1956 .todd , k.c .toh and r.h .ttnc . on the implementation and usage of sdpt3 a matlab software package for semidenite - quadratic - linear programming , version 4.0 ._ http://www.math.nus.edu.sg/mattohkc / sdpt3.html_. p. 
wilmott. the commonest mistakes in quantitative finance: a dozen basic lessons in commonsense for quants and risk managers and the traders who rely on them. _frequently asked questions in quantitative finance, 2nd edition_, wiley, 2009. | although modern portfolio theory has been in existence for over 60 years, fund managers often struggle to get its models to produce reliable portfolio allocations without strongly constraining the decision vector by tight bands of strategic allocation targets. the two main root causes of this problem are inadequate parameter estimation and numerical artifacts. when both obstacles are overcome, portfolio models yield excellent allocations. in this paper, which is primarily aimed at practitioners, we discuss the most common mistakes in setting up portfolio models and in solving them algorithmically. primary 91g10. secondary 90c25, 90c90. portfolio theory, mean-variance optimization, conic optimization. |
it is well known that many of the stars in the solar neighbourhood exist in multiple systems . as the number of planetary surveys increases ,planets are regularly being found not only in single star systems , but binaries and triples as well .for example , recently a hot jupiter has been claimed in the triple system hd 188753 ( konacki 2005 ; but see also eggenberger et al .2007 ) , and lists 33 binary systems and 2 other hierarchical triples known to harbour exoplanets . as the majority of work on planetary dynamics has been for single star systems , the dynamics of bodies in these multiple stellar systems is of great interest . at first sight , it might not seem likely to expect long term stable planetary systems to exist in binary star systems , let alone triples , but numerical and observational work is starting to show otherwise . in recent years, much study has been devoted to the stability of planets and planetesimal discs in binary star systems .there are several investigations of individual systems ( e.g. work on cephei by , and ) as well as substantial amounts of work on more general limits for stability of smaller bodies in these systems .notably , approach this problem by using numerical simulation data to empirical fit critical semimajor axes for test particle stability as functions of binary mass ratio and eccentricity .these general studies are of great use in the investigation of observed systems and their stability properties , giving an effective and fast method of placing limits on stability in the large parameter space created by observational uncertainties .the aim of this work is to investigate test particle stability in hierarchical triple star systems , and to see if any similar boundaries can be defined for them . to do this , a new method for numerically integrating planets in such systemsis presented , following the ideas for binary systems presented by .although there have been empirical studies of the stability of hierarchical star systems themselves , there do not appear to have been studies of small bodies in such systems .there is a great deal of literature concerning periodic orbits in the general three and four body problems ( see e.g. ) , but these are almost entirely devoted to considerations of planetary satellites in single star systems , for example satellite transfer in the sun - earth - moon systems .also , while periodic orbits prove that stable solutions exist in these problems , they are of little practical use in determining general stability limits . the problem of planetary orbits in triple systems is more complex than for those in binary systems , as there are many different orbital configurations possible relative to the three stars .these are considered in section [ sec : orbits ] , and a method for classifying them is described in order to simplify the discussion in this paper .the derivation of a method to numerically integrate such planets is then given in section [ sec : maths ] . in section [ sec : stats ] is a brief overview of the statistical properties of known triple systems , as a basis for deciding the parameters of the systems used in the numerical simulations .the results of numerical investigations into stability properties are then presented in section [ sec : sp ] for one of the possible orbital types .the triple systems studied here are all hierarchical in nature .that is , they can all be considered to be a close binary orbited by a distant companion in effect , an inner binary pair and an outer binary pair . 
here, the inner binary stars are labelled as a and b , with a being the more massive , and the distant companion star as c. there are however three possible ways to pair the three stars into the hierarchy described above . the hierarchy by requiring that the instantaneous keplerian orbits of the inner and outer pair are bound , that the outer binary has a longer period than the inner binary and that the ratio of the two periods is the largest of the three possible pairings of the stars .this definition is adopted here .although other , non - hierarchical , types of motion are possible for triple stars ( see for example ) and observed ( see for example the trapezium systems listed in ) , they are not considered here .it is an open question whether non - hierarchical systems can sustain long - term stable planets .next some system of classifying planetary orbits is needed .the orbital motion of small particles under the influence of two or three masses has been studied to some extent through periodic orbits of the three and four body problems , as mentioned in the introduction . as a result of this many families and classification schemes exist . gives a good review , mentioning for example the ( a ) to ( r ) types designated for the copenhagen problem , dependent on the nature and location of the particles motion in both inertial and corotating frames .many other features have been used to describe orbital families , for example symmetry or a parameter used to generate the orbital family . however , as mentioned by , there is no overall method , and the descriptions would not be applicable to non - periodic orbits .orbital types in binary systems have generally been classified into three main groups , those around a single star ( circumstellar ) , those around both stars ( circumbinary ) and those in the middle ground as it were , coorbiting with one of the stars i.e. librating about the triangular lagrangian points , similar to trojan asteroids in the solar system .a convenient labelling of these was given by , who designated planets as either s - orbits ( satellite ) , p - orbits ( planet ) , or l - orbits ( librator ) for circumstellar , circumbinary and coorbital respectively .these ideas can be extended to hierarchical triple systems fairly simply .it is clear that there will be three types of circumstellar motion , one for each star , and as such orbits like this can be labelled s(a ) , s(b ) and s(c ) for their primary .the circumtriple orbit about the centre of mass of all three can be identified as a p orbit and labelled as such in analogy to dvorak s scheme .finally , planets which orbit the centre of mass of the inner binary are circumbinary but also share characteristics with the satellite orbit , and can be labelled with the combination s(ab)-p .these orbital types are shown in figure [ fig : types ] along with the binary cases for comparison , and listed in table [ tab : types ] .a superscript 2 has been given to those in binary systems and a 3 for those in triple systems for clarity .although these are the only types of motion studied here , for completeness the coorbital types are also be included. this type of motion can occur for both the inner and outer binary , and labels ( ab ) and ( abc ) can be used to designate this . 
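the hierarchy definition used above ( both pairings bound , outer period longer than the inner , period ratio maximal over the three possible pairings ) can be applied directly to an instantaneous configuration . a minimal sketch , with illustrative names and units such that g = 1 :

# identify the hierarchical pairing of a triple from instantaneous positions/velocities,
# following the definition above. positions/velocities are arrays of three 3-vectors.
import itertools
import numpy as np

def semimajor_axis(m_tot, r_rel, v_rel):
    """two-body semimajor axis from the relative orbital energy (negative means unbound)."""
    r = np.linalg.norm(r_rel)
    energy = 0.5 * np.dot(v_rel, v_rel) - m_tot / r     # specific energy, g = 1
    return -m_tot / (2.0 * energy)

def classify_hierarchy(masses, positions, velocities):
    best = None
    for i, j in itertools.combinations(range(3), 2):
        k = 3 - i - j                                    # the remaining (outer) star
        m_in = masses[i] + masses[j]
        a_in = semimajor_axis(m_in, positions[j] - positions[i], velocities[j] - velocities[i])
        r_bc = (masses[i]*positions[i] + masses[j]*positions[j]) / m_in
        v_bc = (masses[i]*velocities[i] + masses[j]*velocities[j]) / m_in
        a_out = semimajor_axis(m_in + masses[k], positions[k] - r_bc, velocities[k] - v_bc)
        if a_in <= 0 or a_out <= 0:
            continue                                     # one of the two orbits is unbound
        ratio = (a_out / a_in) ** 1.5 * np.sqrt(m_in / (m_in + masses[k]))   # p_out / p_in
        if ratio > 1 and (best is None or ratio > best[0]):
            best = (ratio, (i, j), k)
    return best        # (period ratio, inner pair indices, outer star index), or None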
instead of the single l - orbit typethey can be broken up into t and h orbits to indicated tadpole ( about one of the triangular lagrangian points ) or horseshoe ( about both triangular points and the collinear point ) .these are again illustrated in figure [ fig : types ] and included in table [ tab : types ] .these orbits are defined to be bound relative to their focus and hierarchical in the same sense as the definition given for the stellar system . a particle that orbits outside the extent of the inner binary but has a bound orbit with respect to star a and the binary centre of mass would be an s(ab)-p orbit and not an s(a ) orbit .this method of labelling clearly and concisely designates the exact nature of a planetary orbit , and extends a system already in use for binary stars .it can also be easily applied to systems with other levels of hierarchy , for example if star c was replaced with another close binary pair c and d additional classes s(d ) and s(cd)-p would be possible , and the outer p type could be relabelled p(abcd ) . .labels for the basic planetary orbital types in multiple star systems as defined in this work and compared to those of .[ cols= " < , < , < , < " , ] [ tab : hwfit2 ] a more detailed investigation of the inner boundary can be carried out in a similar manner to the outer edge , by fixing star c at 1 and in a circular 50 au orbit and varying the inner binaries mass ratio and eccentricity , as discussed earlier .the radius of the inner pair was kept at 1 au and their total mass as 2 .the eccentricity of this binary was varied from 0.0 to 0.6 in steps of 0.2 again , and the mass ratio varied from 1.0 to 2.0 in steps of 0.1 .figure [ fig : sp_crit_in ] shows the results from these simulations , with the locations of the first and last stable radii plotted as functions of the inner binaries eccentricity and mass ratio .also plotted again is the fit given by to the critical radius for p type planets in binary systems .as expected , the outer stability boundary seems unaffected by changes to the inner binary pair .there are some unstable points between the two stability boundaries , most notably around 3.2 au for the simulations with . find unstable islands appearing at the first beyond the critical semimajor axis in this configuration .however , the location of these unstable radii here is well beyond the first mmr in the stable region .in addition , running the and case without star c reveals that these locations are now stable .this would indicate that this is again an effect due to the combination of all three stars .because of this the inner edges location is simply taken as the same as the first stable radius .the location of this boundary appears to be a very weak function of mass ratio and an approximately linear function of eccentricity , and also agrees well with the predictions of . for this boundary ,they fit a function of the form where the constants are given in the third column of table [ tab : hwfit2 ] .the terms up to fourth order are included to fit a variation in the position of the critical radius at smaller mass ratios than those considered here . 
fitting this function to the results heredoes not produce well determined coefficients despite a low value of about 29 , as shown in the first column of the table .in fact , a better solution is obtained by only including the first four terms , as shown in the middle column of the table .the value for this fit is slightly higher at about 42 .this second fit is plotted in figure [ fig : sp_i_fit ] , and seems to describe the data well , although it seems to slightly overestimate the size of the stable region . despite the smaller values here the fitted parameters are less well determined than those for the outer edge .this is a reflection on the relative size of the test particle grid spacing compared to the size of the inner binaries orbit . herethe boundary is determined to within a tenth of the relative separation of the stars , while for the outer edge it was determined to within a thousandth of the size of the binary s orbit .the parameter space investigated so far is somewhat limited .the stars orbital longitudes have been ignored , assumed to be a minor influence on stability , the sets of simulations have always kept one star on a circular orbit , and all objects have been taken as coplanar .brief investigations of these three extensions to the parameter space can be made .firstly , the assumption that the initial longitudes of the stellar orbits has little effect on the stability boundaries was tested by running the inner edge simulations with mass ratios .33 to 0.40 with a different initial longitude of the inner binary pair .the results from these simulations were almost identical to those of the initial set , providing some evidence that this assumption is valid . next , the effect of both stars having non - circular orbits was investigated .the semimajor axes of the inner and outer binary were kept at 1 and 50 au , and the eccentricities of both varied in steps of 0.2 from 0.0 to 0.6 .the mass ratio of the outer binary only was varied as before .the results of these simulations are shown in figure [ fig : eccentricity ] .each panel shows the location of the stability radii and unstable points as for all values of and one given value of .there is almost no change in the position of the outer stability boundary , but as both stellar eccentricities and the mass ratio increase , the inner boundary starts to move outwards .the stellar eccentricities are still not varying to any significant extent and the additional instability must be due to the combined perturbations of all three stars .lastly , the effect of inclination was studied .the semimajor axes were kept as 1 and 50 au , was set to 0 , and the outer mass ratio and eccentricity varied as before .sets of simulations where then run for inclinations of the outer star of , and .the test particles were kept coplanar with the inner binary .this is a rather limited investigation , as any dependence on the longitude of ascending node has been ignored , and only a small range of inclinations included .however , higher inclinations will be subject to the kozai instability , causing large variations in the stellar orbits , which is expected to rapidly destabilise test particles . 
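the boundary fits discussed above are linear least - squares problems once the polynomial terms in mu and e are chosen ; the sketch below fits a four - term and a larger surface to gridded ( mu , e , a_crit ) data and compares the residual sums , in the spirit of the comparison in the text ( the text's chi - squared presumably also weights by measurement errors ) . the data array , file name and basis choices are placeholders .

# least-squares fit of a critical-radius surface a_crit(mu, e) = sum_i c_i * phi_i(mu, e).
# 'data' is a placeholder for the gridded simulation results (columns: mu, e, a_crit).
import numpy as np

def fit_surface(data, terms):
    mu, e, a_crit = data[:, 0], data[:, 1], data[:, 2]
    design = np.column_stack([f(mu, e) for f in terms])
    coeffs, *_ = np.linalg.lstsq(design, a_crit, rcond=None)
    chi2 = np.sum((design @ coeffs - a_crit) ** 2)       # unweighted residual sum
    return coeffs, chi2

four_terms = [lambda mu, e: np.ones_like(mu),
              lambda mu, e: mu,
              lambda mu, e: e,
              lambda mu, e: mu * e]
extra_terms = four_terms + [lambda mu, e: e**2,
                            lambda mu, e: mu * e**2,
                            lambda mu, e: mu**2,
                            lambda mu, e: mu**2 * e**2]

# data = np.loadtxt("boundary_grid.txt")    # placeholder file name
# for terms in (four_terms, extra_terms):
#     coeffs, chi2 = fit_surface(data, terms)
#     print(len(terms), "terms, chi^2 =", chi2, "coefficients:", coeffs)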
in these simulationsthe stellar mutual inclination remains fairly constant and their orbits are stable .figure [ fig : inclination ] shows the stability boundaries for these simulations , each panel comparing the different inclination results for a different value of .there is little change in the inner boundary but the outer edge moves somewhat .interestingly , for the case higher inclinations are more stable .if the test particles are started instead coplanar with the outer binary the stability is similar , although not identical .these results are consistent with the conclusions of , who show that inclination is not a significant effect on the stability of p type planets in binary system .the main achievement of this paper is the formulation of a symplectic integrator algorithm suitable for hierarchical triple systems .this extends the algorithm for binary systems presented by .the positions of the stars are followed in hierarchical jacobi coordinates , whilst the planets are referenced purely to their primary .each of the five distinct cases , namely circumtriple orbits , circumbinary orbits and circumstellar orbits around each of the stars in the hierarchical triple , requires a different splitting of the hamiltonian and hence a different formulation of the symplectic integration algorithm . here, we have given the mathematical details for each of the five cases , and presented a working code that implements the algorithm .as an application , a survey of the stability zones for circumbinary planets in hierarchical triples is presented . here , the planet orbits an inner binary , with a more distant companion star completing the stellar triple . using a set of numerical simulations, we found the extent of the stable zone which can support long - lived planetary orbits and provided fits to the inner and outer edges .the effect of low inclination on this boundary is minimal . a reasonable first approximation to a behaviour of a hierarchical tripleis to regard it as a superposition of the dynamics of the inner binary and a pseudo - binary consisting of the outer star and a point mass approximation to the inner binary .if it is considered as two decoupled binary systems , then the earlier work of holman & wiegert ( 1999 ) on binaries is applicable to triples , except in the cases of high eccentricities and close or massive stars .the implication here is that the addition of a stable third star does not distort the original binary stability boundaries .as mentioned , have shown that overlapping sub - resonances are the cause of the boundary in the binary case .it is reasonable to expect that in triples the same process is responsible , and the similarities between the binary and triple results support this theory .it is also expected that there is a regime in which the resonances from each sub - binary start to overlap as well , further destabilising the test particles .evidence of this is the deviation from the binary results when the stars are close , massive and very eccentric , when resonances would be both stronger and wider . since the parameter space investigated was chosen to reflect the observed systems it would seem to be a reflection on the characteristics of known triple stars that most lie in the decoupled regime .the relatively constant nature of the stellar orbits in the simulations is however a consequence of the test particle orbits being destabilised long before the stars are close enough to interact . 
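the integrator formulated in the paper splits the hamiltonian differently for each of the five orbit classes and works in hierarchical jacobi coordinates ; those details are not reproduced here . as a much simpler illustration of the underlying idea of symplectic splitting , the sketch below advances a point mass with a plain kick - drift - drift ... rather , kick - drift - kick ( stormer - verlet ) step , i.e. the composition of exact flows of the potential and kinetic parts of the hamiltonian .

# generic second-order symplectic step (kick-drift-kick) for h = p^2/(2m) + v(q).
# this only illustrates hamiltonian splitting; the paper's algorithm uses hierarchical
# jacobi coordinates and a different splitting for each orbit class.
import numpy as np

def verlet_step(q, p, m, grad_v, dt):
    p = p - 0.5 * dt * grad_v(q)      # half kick: exact flow of the potential part
    q = q + dt * p / m                # drift: exact flow of the kinetic part
    p = p - 0.5 * dt * grad_v(q)      # half kick
    return q, p

# example: kepler potential v(q) = -gm/|q| (gm = 1); the energy error stays bounded
gm = 1.0
grad_v = lambda q: gm * q / np.linalg.norm(q)**3
q, p = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(10000):
    q, p = verlet_step(q, p, 1.0, grad_v, 0.01)
print("energy:", 0.5 * p @ p - gm / np.linalg.norm(q))   # should stay close to -0.5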
by extension of all these arguments, it is expected that the binary criteria can be used to fairly accurately predict the stability zones in any hierarchical stellar system , no matter the number of stars .the results presented here can be used to estimate the number of known hierarchical triple systems that could harbour s(ab)-p planets . lists 54 systems with semimajor axis , eccentricity and masses for both the inner and outer components .the mutual inclinations of most are not well known , but there are nine systems listed for which this angle can be determined . for five of these it is less than , two are around and two are retrograde .although a small sample it suggests that there are systems that fall within the low inclination regime investigated here . using the criteria of and those found here for the position of the inner and outer critical semimajor axis the size of the coplanar stable region for each of these triples can be calculated .this can be considered an upper limit , since it is likely that very non - coplanar systems and those with significant eccentricities for both binary components will further destabilise planets .of the 54 systems 13 are completely unstable to circumbinary planets according to the four parameter fits ( compared to 11 using s criteria ) .figure [ fig : zone ] shows a plot of the width of the stable region for the remaining systems .interestingly , the majority seem to have very small stable zones , with 16 smaller than an au . whether this is a feature of triple systems , or an observation bias is not apparent. it does indicate though that circumbinary planets are unlikely to exist in at least 50 % of observable systems . .note that using the criteria of gives almost identical results . ]beust , h. 2003 , a&a , 400 , 1129 broucke , r. a. 2004 , ann . new york acad .sciences , 1019 , 408 chambers , j. e. , quintana , e. v. , duncan , m. j. , & lissauer , j. j. 2002 , aj , 123 , 2884 danby , j. m. a. 1988 , richmond , va . , u.s.a . : willmann - bell , 1988 .2nd ed .desidera , s. , & barbieri , m. 2007 , a&a , 462 , 345 dvorak , r. , pilat - lohinger , e. , funk , b. , & freistetter , f. 2003 , a&a , 398 , l1 dvorak , r. 1986 , a&a , 167 , 379 dvorak , r. 1984 , celestial mechanics , 34 , 369 eggenberger a. , udry s. , mazeh t. , segal y. , mayor m. , 2007 , a&a , in press eggleton , p. , & kiseleva , l. 1995 , apj , 455 , 640 haghighipour , n. 2006 , apj , 644 , 543 holman , m. j. , & wiegert , p. a. 1999 , aj , 117 , 621 konacki , m. 2005 , nature , 436 , 230 mudryk , l. r. , & wu , y. 2006 , apj , 639 , 423 murray c. d. , dermott s. f. 2000 , solar system dynamics , cambridge university press pilat - lohinger , e. , funk , b. , & dvorak , r. 2003 , a&a , 400 , 1085 press , w. h. , teukolsky , s. a. , vetterling , w. t. , & flannery , b. p. 2002, numerical recipes in c++ : the art of scientific computing , cambridge university press szebehely , v. 1977 , revista mexicana de astronomia y astrofisica , vol . 3 , 3 , 145 szebehely , v. 1967 , theory of orbits , chapter 9 , new york : academic press tokovinin , a. a. 1997 , a&as , 124 , 75 verrier , p. e. in prep . , phd thesis verrier , p. e. , & evans , n. w. 2006 , mnras , 368 , 1599 wisdom , j. , & holman , m. 
1991 , aj , 102 , 1528in this appendix are details of the symplectic integration method derived in section [ sec : maths ] for the orbital types p , s(a ) , s(b ) and s(c ) .the split hamiltonian in each case is presented , and the evolution under these is readily obtainable using the method given in section [ sec : maths ] . also presented are details of the testing of the implementation of this symplectic integrator .note that throughout this appendix is the total system mass and the total inner binary mass , including planets where relevant . for s(a ) orbits , the hamiltonian is where is the mass of star a and its planets , is the reduced mass of the inner binary , including the planets , is the reduced mass of the outer binary and . for s(b ) type planets only , the equations are where is the mass of star b and its planets , is the reduced mass of the inner binary , including the planets and is the reduced mass of the outer binary . here now , and and . the corresponding hamiltonian for s(c )type planets is where is the reduced mass of the inner binary stars and is the reduced mass of the outer binary , including the planets . here now , , and is the mass of star c and its planets . these differing orbital cases are all dealt with separately at present . to test the method and its implementation comparisons of the simulations of systems of various levels of complexity were made with the literature . as the equations given above allow to be set to zero , the conservation of the jacobi constant of test particles in the circular restricted three body problem was calculated and compared to that given by for their binary systems . the evolution of various triple star system was compared to studies of hierarchical triples in the literature , for example those given in . simulations of discs of test particles were also compared to those given by for circumbinary discs .more complex systems of planets and test particles were also compared to the results of a standard bulirsch - stoer integrator . in all casesexcellent agreement was found . | a symplectic integrator algorithm suitable for hierarchical triple systems is formulated and tested . the positions of the stars are followed in hierarchical jacobi coordinates , whilst the planets are referenced purely to their primary . the algorithm is fast , accurate and easily generalised to incorporate collisions . there are five distinct cases circumtriple orbits , circumbinary orbits and circumstellar orbits around each of the stars in the hierarchical triple which require a different formulation of the symplectic integration algorithm . as an application , a survey of the stability zones for planets in hierarchical triples is presented , with the case of a single planet orbiting the inner binary considered in detail . fits to the inner and outer edges of the stability zone are computed . considering the hierarchical triple as two decoupled binary systems , the earlier work of holman & wiegert on binaries is shown to be applicable to triples , except in the cases of high eccentricities and close or massive stars . application to triple stars with good data in the multiple star catalogue suggests that more than 50 % are unable to support circumbinary planets , as the stable zone is non - existent or very narrow . [ firstpage ] celestial mechanics planetary systems binaries : general methods : _ n_-body simulations |
dynamical systems which display scale - free behaviour have attracted much interest in recent years .equilibrium thermodynamic systems do only exhibit scale - free behaviour for a subset of the parameter space of measure zero ( the critical values of the parameters ) .nevertheless , in nature scale - free systems can be found in abundant variety ( earthquakes , avalanches in rice - piles , infected people in epidemics , jams in internet traffic , extinction events in biology , life - times of species and many more .see also and the references therein ) .the origin of this abundance lies probably in the broad variety of systems far from equilibrium that can be found in nature . with the onset of non - equilibrium dynamics ,new mechanisms come into play which seem to make scale - free behaviour a generic feature of many systems .however , unlike equilibrium thermodynamics , where scaling is thoroughly understood , for non - equilibrium dynamical systems there does not yet exist a unified theory of scale - free phenomena ( apart from non - equilibrium phase transitions ) .there do , however , exist several distinct classes of systems with generic scale - free dynamic .one of the first ideas to explain scale - free behaviour in a large class of dynamical systems was the notion of self - organized criticality ( soc ) proposed by bak , tang and wiesenfeld in 1987 .they proposed that certain systems with local interactions can , under the influence of a small , local driving force , self - organize into a state with diverging correlation length and therefore scale - free behaviour .this state is similar to the ordinary critical state that arises at the critical point in phase transitions , although no fine - tuning of parameters is necessary to reach it .since 1987 literally thousands of research papers have been written concerning soc ( for a review see ) , and many different dynamical systems have been called soc ( e.g. ) , mostly just because they showed some power - law distributed activity patterns .recently it has become clear that several soc models ( sandpile models , forest - fire models ) can be understood in terms of ordinary nonequilibrium critical phenomena ( like e.g. ) . driving rate and dissipation act as critical parameters. the critical value , however , is 0 .therefore , it suffices to choose a small driving rate and dissipation to fine - tune the system to the critical point , and this choice is usually implicit in the definition of the model .scale - free behaviour does not , however , depend crucially on some sort of critical phenomenon .a simple multiplicative stochastic process ( msp ) of the form where and are random variables , can produce a random variable with a probability - density function ( pdf ) with power - law tail . in processes of this type, the power - law appears under relatively weak conditions on the pdf s of and , thus making the intermittend behaviour a generic feature of such models .applications of eq .( [ eq : multstoch ] ) can be found in population dynamics with external sources , epidemics , price volatility in economics , and others .another class of models with a very simple and robust mechanism to produce scale - free behaviour has been introduced recently by newman and sneppen .these so called coherent - noise models consist of a large array of agents which are forced to reorganize under externally imposed stress . 
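the multiplicative stochastic process referred to above ( its equation is elided in the text ) is usually taken to be of the kesten type , x_{t+1} = a_t x_t + b_t with random a_t and b_t ; under weak conditions on the distributions of a_t and b_t the stationary distribution of x has a power - law tail . a minimal numerical sketch , with purely illustrative distribution choices :

# kesten-type multiplicative stochastic process x_{t+1} = a_t * x_t + b_t.
# with e[log a] < 0 but a occasionally > 1, the stationary pdf develops a power-law
# tail; the lognormal/uniform choices below are illustrative only (here the predicted
# tail exponent is -2*mean/sigma^2 = 0.8).
import numpy as np

rng = np.random.default_rng(1)
n_steps, x = 500_000, 1.0
samples = np.empty(n_steps)
for t in range(n_steps):
    a = rng.lognormal(mean=-0.1, sigma=0.5)   # multiplicative factor, e[log a] < 0
    b = rng.uniform(0.0, 1.0)                 # additive term keeps x away from zero
    x = a * x + b
    samples[t] = x

# crude tail-exponent estimate from the empirical survival function of the largest values
tail = np.sort(samples)[-5000:]
slope = np.polyfit(np.log(tail), np.log(np.arange(len(tail), 0, -1)), 1)[0]
print("estimated tail exponent (p[x > s] ~ s^slope):", slope)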
in their simplest form ,coherent - noise models do not have any interactions between the agents they consist of , and hence , certainly do not display criticallity .nevertheless , these models show a power - law distribution of the reorganization events with a wide range of different exponents , depending on the special implementation of the basic mechanism .moreover they display power - law distributions in several other quantities , e.g. , the life - time distribution of the agents .these models have been used to study earthquakes , rice piles , and biological extinction .coherent - noise models have a feature that usually is not present in soc models and is never present in msp s , which is the existence of aftershocks . in most coherent - noise modelsthe probability for a big event to occur is very much increased immediately after a previous big event and then decays with a power - law .this leads to a fractal pattern of events that are followed by smaller ones which themselves are followed by even smaller ones and so on . in most soc models and all msp s , on the contrary, the state of the system is statistically identical before and after a big event .therefore in these models no aftershocks are visible .the existence or non - existence of aftershock events should be easily testable in natural systems .this could provide a means to decide what mechanism is most likely to be the cause for scale - free behaviour in different situations .but to achieve this it is important to have a deep understanding of the decay - pattern of the aftershock events .the goal of the present paper is to investigate in detail the aftershock dynamics of coherent - noise models .we concentrate mainly on the original model introduced by newman and sneppen because there can be obtained several analytical results .we find a power - law decrease in time of the aftershocks probability to appear , as has been found already in .but unlike stated there , we can show that the exponent does indeed depend on the microscopic details of the simulation .we find a wide range of exponents , from 0 to values well above 1 , whereas in the authors report only the value 1 .we will now describe the model introduced by newman and sneppen .the system consists of units or agents. every agent has a threshold again external stress .the thresholds are initially chosen at random from some probability distribution .the dynamics of the system is as follows : 1 .a stress is drawn from some distribution .all agents with are given new random thresholds , again from the distribution .a small fraction of the agents is selected at random and also given new thresholds .3 . the next time - step begins with ( i ) .step ( ii ) is necessary to prevent the model from grinding to a halt . without this random reorganization the thresholds of the agentswould after some time be well above the mean of the stress distribution and average stress could not hit any agents anymore .the most common choices for the threshold and stress distributions are a uniform threshold distribution and some stress distribution that is falling off quickly , like the exponential or the gaussian distribution . 
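the dynamics just described translates almost line by line into a simulation . a minimal sketch with uniform thresholds on [0,1] , exponentially distributed stress and mutation rate f ( all parameter values illustrative ) :

# minimal simulation of the newman-sneppen coherent-noise model (infinite-growth version):
# (i) draw a stress eta and reset every agent whose threshold lies below eta,
# (ii) additionally reset a small random fraction f of the agents.
import numpy as np

rng = np.random.default_rng(2)
n_agents, f, sigma, n_steps = 10000, 1e-3, 0.05, 100000
thresholds = rng.random(n_agents)               # uniform threshold distribution
event_sizes = np.empty(n_steps)

for t in range(n_steps):
    eta = rng.exponential(sigma)                # exponentially distributed stress
    hit = thresholds < eta
    event_sizes[t] = hit.sum() / n_agents       # event size as a fraction of the system
    thresholds[hit] = rng.random(hit.sum())
    mutate = rng.random(n_agents) < f
    thresholds[mutate] = rng.random(mutate.sum())

# the distribution of event_sizes should show an approximate power law over several decades
print("mean event size:", event_sizes.mean(), "largest event:", event_sizes.max())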
under these conditions( with reasonably small ) it is guaranteed that the distribution of reorganization events that arises through the dynamics of the system will be a power - law .there are several possibilities to extend the model to make it more general , without loss of the basic features .two extensions that have been studied by sneppen and newman are * a lattice version where the agents are put on a lattice and with every agent hit by stress its nearest neighbours undergo reorganization , even if their threshold is above the current stress level . * a multi - trait version where , instead of a single stress , there are different types of stress , i.e. the stress becomes a -dimensional vector .accordingly , every agent has a vector of thresholds .an agent has to move in this model whenever at least one of the components of the threshold vector is exceeded by the corresponding component of the stress vector .an extension that is especially important for the application of coherent noise models to biological evolution and the dynamics of mass extinctions has been studied recently by wilke and martinetz . in biologyit is not a good assumption to keep the number of agents ( in this case species ) constant in time .rather , species which go extinct should be removed , and there should be a steady regrowth of new species . in a generalized model , the system size is allowed to vary .agents that are hit by stress are removed from the system completely , but at the end of every time - step a number of new agents is introduced into the system . here is a function of the actual system size , the maximal system size and some growth rate .wilke and martinetz have studied in detail the function which resembles logistic growth . in the limit their model reduces to the original one by newman and sneppen . in the followingwe will refer to the original model as the infinite - growth version and to the model introduced by wilke and martinetz as the finite - growth version.we base our analysis of the aftershock structure on the meassurement - procedure proposed by sneppen and newman .they drew a histogram of all the times whenever an event of size some constant happened after an initial event of size some constant , for all events .consequently , we measure the frequency of events larger than occuring exactly time - steps after an initial event larger than , for all times .this means that we consider sequences of events in the aftermath of initial large events .normalized appropriately , our measurement gives just the probability to find an event at time after some arbitrarily chosen event .for this to make sense in the context of aftershocks we usually have . throughout the rest of this paper we use and as percentage of the maximal system size .therefore a value for example means that we are looking for initial events which span the whole system .we will denote the probability to find an event of size at time after a previous large event by . in order to keep the notation simplewe omit the constant . 
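the measurement procedure just described can be applied directly to the event - size series produced by a simulation such as the one sketched after the model definition : count events of size at least s1 occurring exactly t steps after every event of size at least s0 , and normalize by the number of initial events .

# aftershock statistics p(t): probability of an event of size >= s1 exactly t steps
# after an event of size >= s0, estimated from a series of event sizes.
import numpy as np

def aftershock_probability(event_sizes, s0, s1, t_max):
    initial = np.flatnonzero(event_sizes >= s0)
    initial = initial[initial + t_max < len(event_sizes)]   # keep only full windows
    counts = np.zeros(t_max)
    for i in initial:
        counts += event_sizes[i + 1 : i + 1 + t_max] >= s1
    return counts / max(len(initial), 1)                    # p(t) for t = 1 .. t_max

# example (s0 and s1 as fractions of the maximal system size, as in the text):
# p = aftershock_probability(event_sizes, s0=0.9, s1=0.05, t_max=1000)
# a log-log plot of p against t should show the power-law decay discussed below.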
it will be clear from the context what we use in different situations .note that is not a probability distribution , but a function of time .therefore , every increase or decrease of in time will indicate correlations between the initial event and the subsequent aftershocks .for we expect all correlations to disappear , and hence to tend towards a constant .it is possible to obtain some analytical results about the probability if we restrict ourself to the model with infinite growth and a special choice for the threshold and stress distributions . if not indicated otherwise , throughout the rest of this section we assume to be uniform between 0 and 1 , and the stress distribution to be exponential : .furthermore , we focus on the case .that means that we are looking at the events in the aftermath of an initial event of infinite size , an event that spans the whole system .this is a reasonable situation because we use a uniform threshold distribution . in this casethere is a finite probability to generate a stress which exceedes the largest threshold , thus causing the whole system to reorganize .for some of the examples presented in section [ sec : numerical_results ] the probability to find an infinite event is even higher than .this probability can be considered relatively large in a system where one has to do about time - steps to get a good statistics . the exact way to calculate is the following .one has to compute the distribution which is the distribution that arises if after the big event at time a sequence of stress values is generated during the following time steps. then the equation has to be solved . that gives the quantity , the threshold that has to be exceeded by the stress at time to generate an event . from the stress distributionone can then calculate the corresponding probability .finally the average over all possible sequences has to be taken to end up with the exact solution for .obviously there is no hope doing this analytically . instead of the exact solution for we can calculate a mean - field solution if we average over all possible sequences before we solve eq .( [ exact_cal_p ] ) .note that in this context , the notion mean - field does not stand for the average state of the system , which does not tell us anything about aftershocks , but for the average fluctuations typically found in a time - intervall .these average fluctuations are a measure for the return to the average state , after a large event has caused a significant departure from it . consequently , the mean - field solution is time - dependent . for ,the time - dependent mean - field threshold distribution converges to the average threshold distribution , denoted by in in appendix [ a : master - equation ] , we show that the averaging over all fluctuations in a time intervall of time - steps equals to times iterating the master - equation . therefore , to calculate the mean - field solution for we have to insert , the -th iterate of the master - equation , into eq .( [ exact_cal_p ] ) .the details of this calculation are given in appendix [ a : master - iteration ] .in this paragraph we will calculate the dependency of the exponent on under the assumption that the probability to find aftershocks decays indeed as a power - law , i.e. 
that we can assume .a fairly simple argument shows that for the probability to decrease as a power - law the exponent must be proportional to for not too small .again we concentrate on exponentially distributed stress only .we begin with an approximation of the quantity , which is the average threshold at time above which a stress value must lie to trigger an event of size . in the mean - field approximation , is defined by the equation because and are normalized , we can rewrite this equation ( again we assume to be uniform between 0 and 1 ) : for the most reasonable stress distributions the distribution of the agents forms a plateau in the region close to . therefore for sufficient large we can approximate the integral in eq .( [ reversed_integral ] ) by substituting with its value at , which is .( [ reversed_integral ] ) then becomes the values are a function of .we define and find for : the probability now becomes the principal idea to derive a relation between and is as follows .the function is obviously independent of and .we make the ansatz , rearrange eq .( [ p_texp ] ) for and then get a condition on and since they should cancel exactly .hence we have to solve the equation where is the constant of proportionality .we take the logarithm on both sides to get and finally this is of the form , where and for every function of the form , the constants and are unique , as can be seen if we write a change in results in a rescaling of the variable , while a change in results in a rescaling of the whole function .consequently , neither nor can depend on or .this can be seen as follows . if , e.g. , depended on , then a change in would rescale the function .but this function is independent of according to its definition ( eq . ( [ rtdef ] ) ) .hence must be independent of in itself .a similar argument holds for the variable .therefore , and must cancel exactly in eq .( [ c_1def ] ) .this leads to the condition up to now we have not done any assumptions about the size of the first big event after which we are measuring the subsequent aftershocks . therefore the proportionality should hold in general , as long as is not too small . if we additionally assume the inital event to have infinite size ( ) we can easily calculate the constant in eq .( [ p_texp2 ] ) .the meaning of this constant is the probability to get an event of size immediately after the initial big event , as can be seen by setting : for the case the distribution of thresholds is uniform and thus with eqs .( [ p_texp ] ) , ( [ p_texp2 ] ) , ( [ c_1def ] ) , and ( [ const_a ] ) we can write the probability as in section [ sec : numerical_results ] we will find numerically that , and therefore . for two limiting cases we can deduce the behaviour of the exponent regardless of the stress distribution .we begin with the case , . from eq .( [ tau_prop_s1 ] ) we find that as under the assumption of an exponential stress distribution .but this result is more general .for the probability reads simply and hence is constant in time .consequently we have for any stress distribution . from continuitywe have as .a similar argument holds when either or approaches 0 . for ,the probability reduces to the mean probability to find an event of size .hence . for ,the probability becomes 1 , because an event of size at least zero can be found in every time step .hence also in this case . 
from continuitywe have again as either or .in the previous section we have focused on the behaviour of the system in the aftermath of an infinite event .this situation is not only analytically tractable , but it also makes it simpler to obtain good numerical results . if we want to measure the probability to find aftershocks following events exceeding some finite but large , we have to wait a long time for every single measurement since the number of those large events vanishes with a power - law .this makes it hard to get a good statistics within a reasonable amount of computing time .if , on the other hand , we focus on the situation of an infinite initial event , we can simply initialize the agents with the uniform threshold distribution , take the measurement up to the time we are interested in and repeat this procedure until the desired accuracy is reached .unless stated otherwise , the results reported below have been obtained in this way , and with exponentially distributed stress .the -th iteration of the master - equation should give exactly the average distribution of the agent s thresholds at time . in fig .[ fig_rho_t ] it can be seen that this is indeed the case .the points , which represent simulation results , lie exactly on the solid lines , which stem from the exact analytical calculation .the mean - field approximation for should be valid if the agent s distribution at time does not fluctuate too much about the average distribution .since there are many more small events than big ones the fluctuations should occur primarily in the region of small . consequently we expect the mean - field approximation to be valid for large , and to break down for small .[ fig_mf1 ] shows that already for moderately large the mean - field approximation captures the behaviour of , with a slight tendency to underestimate the results of the measurement .note that the statistics is becoming worse with increasing due to the rapidly decreasing probability to find any events of size for large . in fig .[ fig_limf ] a measurement of the probability is presented for a number of simulations with different values of the parameter .as it can be seen , the parameter does not affect the exponent of the power - law , but limits the region where scaling can be observed .note the difference between the effect seen here and typical cut off effects in the theory of phase transitions .the quantity is not a probability distribution , but a time dependent probability , which tends towards a constant for .therefore , we do not see an exponential decrease at the cut off timescale . instead , the probability levels out to the time - averaged value , which is the average probability to find events of size .in section [ sect : approx_tau ] we showed that , under the condition of a sufficiently large .simulations indicate that the constant of proportionality is just 1 , which means the constant in eq .( [ c_1def ] ) equals .if this hypothesis is true , eq . ( [ p_tfinal ] ) becomes this means , a rescaling of the form should yield a functional dependency .[ fig_scaledp ] shows the results of such a rescaling for different and .all the data - points lie exactly on top of each other in the region where the statistics is good enough ( about ) .we find that for up to 0.1 , eq .( [ eq : conj1 ] ) is very accurate for between about 0.1 and 1 . 
with smaller , eq .( [ eq : conj1 ] ) holds even for much smaller .the situation becomes more complicated if we study the sequence of aftershocks caused by an initial event of finite size .the probability to find an event of size after some initial event of size decreases with a power - law , but the exponent is not a simple function of .rather , it depends on as well . in fig .[ fig_finites0 ] we have displayed the results of a measurement with and several different , ranging from to 1 . the curve for has been obtained with the method described at the beginning of this section .we find that the aftershocks decay pattern for continuously approaches the one for as .this shows that it is indeed justified to study the system in the aftermath of an infinite initial event and then to extrapolate to finite but large initial events .note that in fig .[ fig_finites0 ] , is so small that eq .( [ eq : conj1 ] ) does not hold anymore .sneppen and newman have argued that the decay pattern of the aftershocks is independent of the respective stress distribution .our results do not support that . despite the fact that the exponent of the power - law seems to be independent of in the case of exponential stress , as we could show above, the exponent is not independent of the functional dependency of the stress distribution .if we impose , for example , gaussian stress with mean 0 and variance , we find ( fig . [ fig_gauss.1 ] ) exponents larger than 1 for moderate .we do event find a qualitatively new behaviour of the system .the exponent is getting larger with increasing , as opposed to the exponent getting smaller for exponential stress .of course , this can only be true for intermediate .ultimately , we must have for . finally , we present some results about systems with finite growth. in these systems , there exists some competitive dynamics between the removal of agents with small thresholds through stress and their regrowth .aftershocks appear in the infinite growth model because the reorganization event leaves a larger proportion of agents in the region of small thresholds , thus increasing the probability for succeding large events . in the finite growth version , on the contrary , this can happen only if the regrowth is faster than the constant removal of agents with small thresholds through average stress .if the regrowth is too slow , the probability to find large events actually is decreased in the aftermath of an initial large event .the interplay between these two mechanisms is shown in fig .[ fig_grow ] .the regrowth of species is done according to eq .( [ eq : logistic_growth ] ) .for a small growth - rate , the probability to find aftershocks is reduced significantly , and it aproaches the equilibrium value after about 100 time - steps . with larger ,the probability increases in time until a maximum well above the equilibrium level is reached , and then decreases again .the maximum moves to the left to earlier times with increasing .when is so large that the maximum coincides with the post initial event , the original power - law is restored .note that , as in the case of infinite growth , the measurement depends on the choice of the parameters and .consequently , with a different set of parameters the curves will look different , and the maximum will appear at a different time .nevertheless , we find that the qualitative behaviour is not altered if we change or . instead of logistic growth, we can also think of linear growth , i.e. , where is again the growth rate . 
in order to keep the system finite ,we stop the regrowth whenever the system size exceedes the maximal system size .in such a system , aftershocks can be seen for much smaller growth rates ( fig .[ fig_growl ] ) .note that apart from the growth rate , all other settings are identical in fig .[ fig_grow ] and fig .[ fig_growl ] .linear growth refills the system much quicker than logistic growth with the same growth rate .therefore the time intervall in which aftershocks are suppressed is much shorter for linear growth .we could show in the present paper that the decay pattern of the aftershock events depends on the details of the measurement .although the qualitative features remain the same for different parameters and ( e.g. a power - law decrease in the infinite - growth version ) , the quantitative features vary to a large extend .the exponent of the power - law is significantly affected by an alteration of or .therefore the measurement - procedure proposed by sneppen and newman can reveal the complex structure of aftershock - events only if the change of the measured decay pattern with varying and is recorded over a reasonably large intervall of different values .this should be considered in a possible comparison between the aftershocks decay pattern from a model and from experimental data .a more in - depth analysis could probably be achieved with the formalism of multifractality ( see e.g. ) .we found the aftershocks decay pattern to vary with different stress distributions .this is in clear contrast to sneppen and newman .they reported a power - law with exponent for the infinite growth version , independent of the respective stress distribution they used .the question remains why sneppen and newman measured such an exponent in all their simulations .the answer to this lies in the fact that they did only simulations with .for the reasons explained at the beginning of section [ sec : numerical_results ] , under this condition one has to choose a relatively small , and accordingly , a very small .this causes the measurement almost inevitably to lie in the intermediate region between the limiting cases of sec .[ sect : limiting_cases ] . in this region , for the most reasonable stress distributions and a large array of different values for and , we find indeed exponents around 1 .the application of coherent - noise models to earthquakes has been discussed in .two very important observations regarding earthquakes , the gutenberg - richter law and omori s law , can be explained easily with a coherent - noise model .the gutenberg - richter law states that earthquake magnitudes are distributed according to a power - law .omori s law , which interests us here , is a similar statement for the aftershocks decay pattern of earthquakes . in the data from real earthquakes , the number of events larger than a certain magnitude decreases as in the aftermath of a large initial earthquake .consequently , we can only apply infinite - growth coherent - noise models to earthquakes .but this is certainly no drawback , since we expect the thresholds against movement at various points along a fault ( with which we identify the agents of the coherent - noise model ) to reorganize almost instantaneously during an earthquake .the exponent is not universal , but can vary significantly , e.g. 
, in from to .this would cause problems if the statement was true that for coherent - noise models we have .but as we could show above , the exponent can assume values in the observed range , depending on the stress distribution , the size of the initial event , and the lower cut - off ( which we called for earthquakes and throughout the rest of the paper ) .for a further comparison , it should be interesting to study the dependency of the exponent on variation of the cut - off in real data .we are only aware of a single work where that has been done .interestingly , the authors do not find a clear dependency .nevertheless , this is not a strong evidence against coherent - noise models , since the aftershock series analysed in consists mainly of very large earthquakes with magnitude between 6 and about 8 , which does not allow a sufficient variation of .statistical variations in the exponent are probably larger for this aftershock series than the possible variations because of an assumed dependency .numerical simulations of the finite - growth version have revealed a much more complex structure of aftershock events than present in the infinite - growth version .the competition between regrowth of agents and agent removal through reorganization events leads to a pattern where the probability to find events after an initial large event is suppressed for short times , enhanced for intermediate times and then falls off to the background level for long times .this observation can be important for the application of coherent - noise models to biological extinction .it might be possible to identify a time of reduced and a time of enhanced extinction activity in the aftermath of a mass extinction event in the fossil record .this would be a good indication for biological extinction to be dominated by external influences ( coherent - noise point of view ) rather than by coevolution ( soc point of view ) .we thank mark newman for interesting conversations about coherent - noise models .in this appendix we are interested in the average state a coherent - noise system will be found in several time - steps after some initial state with threshold distribution .our calculations will lead to a rederivation of the master - equation for coherent - noise systems .although a master - equation has already been given for the infinite - growth version and has been generalized to the finite - growth version , these master - equations have not been derived in a stringent way , but just have been written down intuitively .our calculation will confirm the main terms of the previously used equations , but we will find an additional correcting term that becomes important for large . consider the case of infinite growth . at time the threshold - distribution may be .we construct the distribution , which is the distribution that arises at time if a stress is generated at time . a stress will cause a proportion of the agents to move .we have to distinguish two regions . for ,all agents are removed .then they are redistributed according to .a small fraction of the agents is then mutated , which results in for , the redistribution of the agents gives . 
with the subsequent mutationwe obtain : we take the average over to get the distribution that will on average be found one time - step after : \notag\\ & \qquad + \int\limits_0^x \!d\eta\ , p_{\rm stress } ( \eta ) \rho_{t_0 } ( x)(1-f ) \notag\\ & = p_{\rm thresh}(x)\big[f+(1-f)\int\limits_0^\infty \!d\eta\ , p_{\rm stress}(\eta ) \int\limits_0^\eta \rho_{t_0}(x')dx'\big ] \notag\\ & \qquad + \rho_{t_0 } ( x)(1-f)(1-p_{\rm move}(x))\,.\end{aligned}\ ] ] here , is the probability for an agent with threshold to get hit by stress , viz . . to proceedfurther we have to interchange the order of integration in the remaining double integral .note that , and therefore \notag\\ & \qquad + \rho_{t_0 } ( x)(1-f)(1-p_{\rm move}(x ) ) \notag\\ & = p_{\rm thresh}(x)\big[f+(1-f)\int\limits_0^\infty \!dx'\ , \rho_{t_0}(x')p_{\rm move}(x')\big ] \notag\\ & \qquad + \rho_{t_0 } ( x)(1-f)(1-p_{\rm move}(x ) ) \notag\\ & = p_{\rm thresh}(x)\int\limits_0^\infty \!dx'\,\big ( f+(1-f ) p_{\rm move}(x')\big)\rho_{t_0}(x ' ) \notag\\ & \qquad + \rho_{t_0 } ( x)(1-f)(1-p_{\rm move}(x ) ) \end{aligned}\ ] ] we are thus led to the master - equation where is the normalization constant .we notice the appearance of the term which was not present in the master - equation used by sneppen and newman .the term arises if one takes into account the fact that the agents which are hit by stress get new thresholds before the mutation takes place .therefore every agent with threshold has the probability to get two new thresholds in one time - step .but obviously this is exactly the same as geting only one new threshold .consequently , the term has to be present to avoid double - counting of those agents which are hit both by stress and by mutation .nevertheless , this term does not affect the scaling behaviour of the system , because the derivation of the event size distribution in has been done under the assumption .( [ master_cont ] ) gives the average state of the system one time - step after the initial state .if we are interested in the average state time - steps after the initial state , we have to repeat the calculations in eqs .( [ rhostep])-([master_cont ] ) times .since all averages in these calculations can be taken independently , this is exactly the same as times iterating the master - equation ( [ master - equation ] ) .we assume that at time a big event takes place and produces the distribution .if we apply the master - equation ( [ master - equation ] ) times to this distribution , we will end up with a distribution that tells us the average state of the system at time after the big event .in the following we will use and write for the normalization constant that appears on the right - hand side of eq .( [ master - equation ] ) at time .the average distribution at time then becomes we integrate both sides of eq .( [ master - t ] ) and find a recursion relation for the constants : all integrals can be calculated analytically for a special choice of the threshold and stress distributions . as threshold distribution ,we choose the uniform distribution , and as stress distribution we choose the exponential distribution .furthermore , we assume that the initial event was so large as to span the whole system , i.e. . 
under the above assumptions there is basically one integral that appears in eq .( [ at - recursion ] ) , which is and eq .( [ at - recursion ] ) becomes with the aid of the binomial expansion we find we are now in the position to calculate the probability that an event of size occurs at time after the initial big event .the minimal stress value that suffices on average to generate such an event is the solution to the equation the corresponding probability is .we invert this expression and insert it into eq .( [ eta_mineq ] ) .the resulting equation determines the probability : the integrals that appear after inserting into the above equation are very similar to the integral defined in eq .( [ i_ndef ] ) .we define this integral can be taken in the same fashion as the calculation of in eq .( [ i_ncomp ] ) .we find eq . ( [ p - eq ] ) now becomes all the quantities which appear in this equation are given above in analytic form .therefore solving eq .( [ p - eq2 ] ) is simply a problem of root - finding . with a computer - algebra program such as mathematica, the recursion relation for the constants as well as the sums that appear in the quantities and can be evaluated analytically if we restrict ourselves to moderate .then the only numerical calculation involved in the computation of is the calculation of the root of eq .( [ p - eq2 ] ) .99 ma zongjin , fu zhengxiang , zhang yingzhen , wang chengmin , zhang guomin , liu defu , _ earthquake prediction : nine major earthquakes in china ( 1966 - 1976 ) _ , seismological press beijing and springer - verlag berlin heidelberg , 1990 .v. frette , k. chistensen , a. malthe - srenssen , j. feder , t. jssang , and p. meakin , nature * 379 * , 49 ( 1996 ) . c. j. rhodes and r. m. anderson , nature * 381 * , 600 - 602 ( 1996 ) .m. takayasu , h. takayasu , and t. sato , physica a*233 * , 824 ( 1996 ) .r. v. sol and j. bascompte , proc .london b*263 * , 161 ( 1996 ) .t. h. keitt and p. a. marquet , j. theor . biol . * 182 * , 161 ( 1996 ) .p. dutta and p. m. horn , rev .phys . * 53 * , 497 ( 1981 ) .m. e. fischer , rev .phys . * 46 * , 597 ( 1974 ) .k. g. wilson , rev . of mod .phys . * 47 * , 773 ( 1975 ) .p. bak , c. tang , and k. wiesenfeld , phys .* 59 * , 381 ( 1987 ) .p. bak , c. tang , and k. wiesenfeld , phys .a * 38 * , 364 ( 1988 ) .p. bak and m. creutz , in _ fractals in science _ , ed . by a. bunde and s. havlin , ( springer , berlin , 1994 ) .p. bak and k. sneppen , phys .* 71 * , 4083 .s. clar , b. drossel , and f. schwabl , j. phys .: cond . mat .* 8 * , 6803 ( 1996 ) .b. drossel and f. schwabl , phys .* 69 * , 1629 ( 1992 ) .z. olami , h. j. s. feder , and k. christensen , phys .68 , 1244 ( 1992 ) .m. paczuski , s. maslov , and p. bak , phys .e*53 * , 414 ( 1996 ) .a. vespignani and s. zapperi , phys .78 , 4793 ( 1997 ) .p. grassberger and a. de la torre , ann. phys . * 122 * , 373 ( 1979 ) .e. w. montroll and m. f. schlessinger , proc .usa 79 , 3380 - 3383 ( 1982 ) .m. levy and s. solomon , int .j. of mod .c * 7 * 595 ( 1996 ) .s. solomon and m. levy , int .j. of mod .c*7 * 745 ( 1996 ) .d. sornette and r. cont , j. phys .i france * 7 * , 431 ( 1997 ) .d. sornette , cond - mat/9708231 .d. sornette , physica a ( in press ) , also cond - mat/9709101 .r. mantegna and h. e. stanley , nature * 376 * , 46 ( 1995 ) m. e. j. newman and k. sneppen , phys .e * 54 * , 6226 ( 1996 ) . k. sneppen and m. e. j. newman , physica d*110 * , 209 ( 1997 ) .m. e. j. newman , proc .london b*263 * , 1605 ( 1996 ) .m. e. j. 
newman , physica d*107 * , 292 ( 1997 ) . m. e. j. newman , j. theor . biol .( in press ) , also adap - org/9702003 . c. wilke and t. martinetz , phys . rev . e*56 * , 7128 ( 1997 ) .r. pastor - satorras , phys .e*56 * , 5284 ( 1997 ) .b. gutenberg and c. f. richter , _ seismicity of the earth _, princeton university press , 1954 .b. gutenberg and c. f. richter , ann .di geofis .* 9 * , 1 ( 1956 ) .f. omori , journal of the college of science , imperial university of tokyo * 7 * , 111 ( 1894 ) . h. m. hastings , g. sugihara , _ fractals : a user s guide for the natural sciences _ , oxford university press , 1993 . | the decay pattern of aftershocks in the so - called coherent - noise models [ m. e. j. newman and k. sneppen , phys . rev . e*54 * , 6226 ( 1996 ) ] is studied in detail . analytical and numerical results show that the probability to find a large event at time after an initial major event decreases as for small , with the exponent ranging from 0 to values well above 1 . this is in contrast to sneppen und newman , who stated that the exponent is about 1 , independent of the microscopic details of the simulation . numerical simulations of an extended model [ c. wilke , t. martinetz , phys . rev . e*56 * , 7128 ( 1997 ) ] show that the power - law is only a generic feature of the original dynamics and does not necessarily appear in a more general context . finally , the implications of the results to the modeling of earthquakes are discussed . , , |
the purpose of this paper is to develop the geometric foundations for multisymplectic - momentum integrators for variational partial differential equations ( pdes ) .these integrators are the pde generalizations of symplectic integrators that are popular for hamiltonian odes ( see , for example , the articles in marsden , patrick and shadwick [ 1996 ] , and especially the review article of mclachlan and scovel [ 1996 ] ) in that they are covariant spacetime integrators which preserve the geometric structures of the system . because of the covariance of our method which we shall describe below , the resulting integrators are spacetime localizable in the context of hyperbolic pdes , and generalize the notion of symplecticity and symmetry preservation in the context of elliptic problems . herein , we shall primarily focus on spacetime integrators ; however , we shall remark on the connection of our method with the finite element method for elliptic problems , as well as the gregory and lin [ 1991 ] method in optimal control .historically , in the setting of odes , there have been many approaches devised for constructing symplectic integrators , beginning with the original derivations based on generating functions ( see de vogelaere [ 1956 ] ) and proceeding to symplectic runge - kutta algorithms , the shake algorithm , and many others .in fact , in many areas of molecular dynamics , symplectic integrators such as the verlet algorithm and variants thereof are quite popular , as are symplectic integrators for the integration of the solar system . in these domains , integrators that are either symplectic or which are adaptations of symplectic integrators , are amongst the mostwidely used .a fundamentally new approach to symplectic integration is that of veselov [ 1988 ] , [ 1991 ] who developed a discrete mechanics based on a discretization of hamilton s principle .their method leads in a natural way to symplectic - momentum integrators which include the shake and verlet integrators as special cases ( see wendlandt and marsden [ 1997 ] ) .in addition , veselov integrators often have amazing properties with regard to preservation of integrable structures , as has been shown by moser and veselov [ 1991 ] .this aspect has yet to be exploited numerically , but it seems to be quite important . the approach we take in this paper is to develop a veselov - type discretization for pde s in variational form .the relevant geometry for this situation is multisymplectic geometry ( see gotay , isenberg , and marsden [ 1997 ] and marsden and shkoller [ 1997 ] ) and we develop it in a variational framework . as we have mentioned , this naturally leads to multisymplectic - momentum integrators .it is well - known that such integrators can not in general preserve the hamiltonian _ exactly _ ( ge and marsden [ 1988 ] ) .however , these integrators have , under appropriate circumstances , very good energy performance in the sense of the conservation of a nearby hamiltonian up to exponentially small errors , assuming small time steps , due to a result of neishtadt [ 1984 ] .see also dragt and finn [ 1979 ] , and simo and gonzales [ 1993 ] .this is related to backward error analysis ; see sanz - serna and calvo [ 1994 ] , calvo and hairer [ 1995 ] , and the recent work of hyman , newman and coworkers and references therein. 
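the energy behaviour alluded to here is easy to see in a toy computation . the following sketch ( not taken from the paper ; the model , step size , and step count are my own choices ) integrates the pendulum with a non - symplectic and with a symplectic first - order method and records the worst energy error ; the non - symplectic method drifts systematically , while the symplectic one keeps the error bounded over very long times , in line with the backward - error - analysis picture just described .

```python
# toy comparison (not the paper's method): explicit euler versus symplectic
# (semi-implicit) euler for the pendulum  H(q, p) = p**2/2 - cos(q).
import numpy as np

def explicit_euler(q, p, h):
    return q + h * p, p - h * np.sin(q)

def symplectic_euler(q, p, h):
    p_new = p - h * np.sin(q)        # kick with the old position
    return q + h * p_new, p_new      # then drift with the new momentum

def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

h, steps = 0.1, 20000
for stepper in (explicit_euler, symplectic_euler):
    q, p = 1.0, 0.0
    e0, drift = energy(q, p), 0.0
    for _ in range(steps):
        q, p = stepper(q, p, h)
        drift = max(drift, abs(energy(q, p) - e0))
    print(stepper.__name__, "max |E - E0| =", drift)
```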
it would be quite interesting to develop the links with neishtadt s analysis more thoroughly .an important part of our approach is to understand how the symplectic nature of the integrators is implied by the variational structure . in this waywe are able to identify the symplectic and momentum conserving properties after discretizing the variational principle itself .inspired by a paper of wald [ 1993 ] , we obtain a formal method for locating the symplectic or multisymplectic structures directly from the action function and its derivatives .we present the method in the context of ordinary lagrangian mechanics , and apply it to discrete lagrangian mechanics , and both continuous and discrete multisymplectic field theory .while in these contexts our variational method merely uncovers the well - known differential - geometric structures , our method forms an excellent pedagogical approach to those theories .[ [ outline - of - paper . ] ] outline of paper .+ + + + + + + + + + + + + + + + + * in this section we sketch the three main aspects of our variational approach in the familiar context of particle mechanics .we show that the usual symplectic -form on the tangent bundle of the configuration manifold arises naturally as the boundary term in the first variational principle .we then show that application of to the variational principle restricted to the space of solutions of the euler - lagrange equations produces the familiar concept of conservation of the symplectic form ; this statement is obtained variationally in a non - dynamic context ; that is , we do not require an evolutionary flow .we then show that if the action function is left invariant by a symmetry group , then noether s theorem follows directly and simply from the variational principle as well .* here we use our variational approach to construct discretization schemes for mechanics which preserve the discrete symplectic form and the associated discrete momentum mappings . *this section defines the three aspects of our variational approach in the multisymplectic field - theoretic setting . unlike the traditional approach of defining the canonical multisymplectic form on the dual of the first jet bundle and then pulling back to the lagrangian side using the covariant legendre transform ,we obtain the geometric structure by staying entirely on the lagrangian side .we prove the covariant analogue of the fact that the flow of conservative systems consists of symplectic maps ; we call this result the _ multisymplectic form formula_. after variationally proving a covariant version of noether s theorem , we show that one can use the multisymplectic form formula to recover the usual notion of symplecticity of the flow in an infinite - dimensional space of fields by making a spacetime split .we demonstrate this machinery using a nonlinear wave equation as an example .* in this section we develop discrete field theories from which the covariant integrators follow .we define discrete analogues of the first jet bundle of the configuration bundle whose sections are the fields of interest , and proceed to define the discrete action sum .we then apply our variational algorithm to this discrete action function to produce the discrete euler - lagrange equations and the discrete multisymplectic forms . as a consequence of our methodology, we show that the solutions of the discrete euler - lagrange equations satisfy the discrete version of the multisymplectic form formula as well as the discrete version of our generalized noether s theorem . 
using our nonlinear wave equation example , we develop various multisymplectic - momentum integrators for the sine - gordon equations , and compare our resulting numerical scheme with the energy - conserving methods of li and vu - quoc [ 1995 ] and guo , pascual , rodriguez , and vazquez [ 1986 ] .results are presented for long - time simulations of kink - antikink solutions for over 5000 soliton collisions . *this section contains some important remarks concerning the variational integrator methodology .for example , we discuss integrators for reduced systems , the role of grid uniformity , and the interesting connections with the finite - element methods for elliptic problems .we also make some comments on future work .[ [ hamiltons - principle . ] ] hamilton s principle .+ + + + + + + + + + + + + + + + + + + + + we begin by recalling a problem going back to euler , lagrange and hamilton in the period 17401830 .consider an -dimensional configuration manifold with its tangent bundle .we denote coordinates on by and those on by .consider a lagrangian . construct the corresponding action functional on curves in by integration of along the tangent to the curve . in coordinate notation , this reads the action functional depends on and , but this is not explicit in the notation . * _ _ * hamilton s principle seeks the curves for which the functional is stationary under variations of with fixed endpoints ; namely , we seek curves which satisfy for all , where is a smooth family of curves with and . using integration by parts ,the calculation for this is simply the last term in ( [ 2 ] ) vanishes since , so that the requirement ( [ 1b ] ) for to be stationary yields the * _ _ * euler - lagrange equations recall that is called * _ _ * regular when the symmetric matrix ] .the value of on that curve is denoted by , and again called the * _ _ * action .thus , we define the map by where . the fundamental equation ( [ 7 ] ) becomes where is an arbitrary curve in such that and .we have thus derived the equation taking the exterior derivative of ( [ 8 ] ) yields the fundamental fact that the flow of is symplectic : which is equivalent to this leads to the following : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ using the variational principle , the fact that the evolution is symplectic is a consequence of the equation , applied to the action restricted to the space of solutions of the variational principle . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in passing , we note that ( [ 8 ] ) also provides the differential - geometric equations for .indeed , one time derivative of ( [ 8 ] ) and using ( [ actiont.equation ] ) gives , so that if we define . 
thus , we quite naturally find that .of course , this set up also leads directly to hamilton - jacobi theory , which was one of the ways in which symplectic integrators were developed ( see mclachlan and scovel [ 1996 ] and references therein . )however , we shall not pursue the hamilton - jacobi aspect of the theory here .[ [ momentum - maps . ] ] momentum maps .+ + + + + + + + + + + + + + suppose that a lie group , with lie algebra , acts on , and hence on curves in , in such a way that the action is invariant .clearly , leaves the set of solutions of the variational principle invariant , so the action of restricts to , and the group action commutes with . denoting the infinitesimal generator of on by , we have by ( [ 8 ] ) , for , define by .then ( [ 9 ] ) says that is an integral of the flow of .we have arrived at a version of noether s theorem ( rather close to the original derivation of noether ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ using the variational principle , noether s theorem results from the infinitesimal invariance of the action restricted to space of solutions of the variational principle . the conserved momentum associated to a lie algebra element is , where is the lagrange one - form ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ [ reformulation - in - terms - of - first - variations . ] ] reformulation in terms of first variations .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we have just seen that symplecticity of the flow and noether s theorem result from restricting the action to the space of solutions .one tacit assumption is that the space of solutions is a manifold in some appropriate sense .this is a potential problem , since solution spaces for field theories are known to have singularities ( see , e.g. 
, arms , marsden and moncrief [ 1982 ] ) .more seriously there is the problem of finding a multisymplectic analogue of the statement that the lagrangian flow map is symplectic , since for multisymplectic field theory one obtains an evolution picture only after splitting spacetime into space and time and adopting the `` function space '' point of view .having the general formalism depend either on a spacetime split or an analysis of the associated cauchy problem would be contrary to the general thrust of this article .we now give a formal argument , in the context of lagrangian mechanics , which shows how both these problems can be simultaneously avoided .given a solution , a * _ _ * first variation at is a vector field on such that is also a solution curve ( i.e. a curve in ) .we think of the solution space as being a ( possibly ) singular subset of the smooth space of all putative curves in , and the restriction of to as being the derivative of some curve in at .when is a manifold , a first variation is a vector at tangent to .temporarily define where by abuse of notation is the one form on defined by then is defined by and we have the equation so if and are first variations at , we obtain we have the identity ),\ ] ] which we will use to evaluate ( [ 54 ] ) at the curve .let denote the flow of , define , and make similar definitions with replacing .for the first term of ( [ 50 ] ) , we have which vanishes , since is zero along for every .similarly the second term of ( [ 50 ] ) at also vanishes , while the third term vanishes since .consequently , symplecticity of the lagrangian flow may be written for all first variations and .this formulation is valid whether or not the solution space is a manifold , and it does not explicitly refer to any temporal notion . similarly , noether s theorem may be written in this way .summarizing , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ using the variational principle , the analogue of the evolution is symplectic is the equation restricted to first variations of the space of solutions of the variational principle .the analogue of noether s theorem is infinitesimal invariance of restricted to first variations of the space of solutions of the variational principle . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the variational route to the differential - geometric formalism has obvious pedagogical advantages .more than that , however , it systematizes searching for the corresponding formalism in other contexts .we shall in the next sections show how this works in the context of discrete mechanics , classical field theory and multisymplectic geometry .the discrete lagrangian formalism in veselov [ 1988 ] , [ 1991 ] fits nicely into our variational framework .veselov uses for the discrete version of the tangent bundle of a configuration space ; heuristically , given some a priori choice of time interval , a point corresponds to the tangent vector . define a * _ _ * discrete lagrangian to be a smooth map , and the corresponding action to be the variational principle is to extremize for variations holding the endpoints and fixed .this variational principle determines a `` discrete flow '' by , where is found from the * _ _ * discrete euler - lagrange equations ( del equations ) : in this section we work out the basic differential - geometric objects of this discrete mechanics directly from the variational point of view , consistent with our philosophy in the last section .a mathematically significant aspect of this theory is how it relates to integrable systems , a point taken up by moser and veselov [ 1991 ] .we will not explore this aspect in any detail in this paper , although later , we will briefly discuss the reduction process and we shall test an integrator for an integrable pde , the sine - gordon equation . [ [ the - lagrange-1-form . ] ] the lagrange -form .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we begin by calculating for variations that do not fix the endpoints : it is the last two terms that arise from the boundary variations ( i.e. these are the ones that are zero if the boundary is fixed ) , and so these are the terms amongst which we expect to find the discrete analogue of the lagrange -form. actually , interpretation of the boundary terms gives the _ two _ -forms on and and we regard _ the pair _ as being the analogue of the one form in this situation .[ [ symplecticity - of - the - flow . ] ] symplecticity of the flow .+ + + + + + + + + + + + + + + + + + + + + + + + + + we parameterize the solutions of the variational principle by the initial conditions , and restrict to that solution space. then equation ( [ 12 ] ) becomes we should be able to obtain the symplecticity of by determining what the equation means for the right - hand - side of ( [ 15 ] ) . 
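to make the del equations concrete , here is a minimal sketch ( my own construction , not the authors' code ) for a pendulum with the trapezoidal discrete lagrangian l_d(q0 , q1) = h [ ((q1 - q0)/h)^2 / 2 - (v(q0) + v(q1))/2 ] . the slot derivatives of l_d are formed numerically , and the generally implicit del equation is solved for q_{k+1} by a few newton steps ; for this particular l_d the equation happens to be explicit and reduces to the verlet update .

```python
# sketch of stepping the discrete euler-lagrange (del) equations
#   D2 Ld(q_{k-1}, q_k) + D1 Ld(q_k, q_{k+1}) = 0
# for a pendulum, V(q) = -cos(q), with a trapezoidal discrete lagrangian.
import numpy as np

h = 0.05
V  = lambda q: -np.cos(q)                                  # pendulum potential
Ld = lambda a, b: h * (((b - a) / h)**2 / 2.0 - (V(a) + V(b)) / 2.0)

def D1(a, b, eps=1e-7):          # derivative of Ld in its first slot
    return (Ld(a + eps, b) - Ld(a - eps, b)) / (2 * eps)

def D2(a, b, eps=1e-7):          # derivative of Ld in its second slot
    return (Ld(a, b + eps) - Ld(a, b - eps)) / (2 * eps)

def del_step(q_prev, q_cur):
    """solve D2 Ld(q_prev, q_cur) + D1 Ld(q_cur, q_next) = 0 for q_next."""
    q_next = 2 * q_cur - q_prev                            # verlet-like initial guess
    for _ in range(20):                                    # newton iteration on the residual
        r  = D2(q_prev, q_cur) + D1(q_cur, q_next)
        dr = (D1(q_cur, q_next + 1e-6) - D1(q_cur, q_next - 1e-6)) / 2e-6
        q_next -= r / dr
        if abs(r) < 1e-9:
            break
    return q_next

q = [0.0, 0.05]                  # two configurations make up the discrete state
for _ in range(1000):
    q.append(del_step(q[-2], q[-1]))
print("final discrete state (q_{N-1}, q_N):", q[-2], q[-1])
```

as in the text , two consecutive configurations play the role of the discrete state , and the map (q_{k-1} , q_k) -> (q_k , q_{k+1}) is the discrete flow whose symplecticity is discussed next .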
at first, this does not appear to work , since gives which apparently says that pulls a certain -form back to a different -form .the situation is aided by the observation that , from ( [ 13 ] ) and ( [ 14 ] ) , and consequently , thus , there are _ two _ generally distinct -forms , but ( up to sign ) only _ one _ -form . if we make the definition then ( [ 16 ] ) becomes .equation ( [ 13 ] ) , in coordinates , gives which agrees with the discrete symplectic form found in veselov [ 1988 ] , [ 1991 ] .[ [ noethers - theorem . ] ] noether s theorem .+ + + + + + + + + + + + + + + + + + suppose a lie group with lie algebra acts on , and hence diagonally on , and that is -invariant .clearly , is also -invariant and sends critical points of to themselves .thus , the action of restricts to the space of solutions , the map is -equivariant , and from ( [ 15 ] ) , for , or equivalently , using the equivariance of , since is -invariant , ( [ 20 ] ) gives , which in turn converts ( [ 21 ] ) to the conservation equation defining the discrete momentum to be we see that ( [ 24 ] ) becomes conservation of momentum .a virtually identical derivation of this discrete noether theorem is found in marsden and wendlant [ 1997 ] .[ [ reduction . ] ] reduction .+ + + + + + + + + + as we mentioned above , this formalism lends itself to a discrete version of the theory of lagrangian reduction ( see marsden and scheurle [ 1993a , b ] , holm , marsden and ratiu [ 1998a ] and cendra , marsden and ratiu [ 1998 ] ) .this theory is not the focus of this article , so we shall defer a brief discussion of it until the conclusions .[ [ multisymplectic - geometry . ] ] multisymplectic geometry .+ + + + + + + + + + + + + + + + + + + + + + + + + we now review some aspects of multisymplectic geometry , following gotay , isenberg and marsden [ 1997 ] and marsden and shkoller [ 1997 ] .we let be a fiber bundle over an oriented manifold .denote the first jet bundle over by or and identify it with the _ affine _ bundle over whose fiber over consists of aff , those linear mappings satisfying we let and the fiber dimension of be .coordinates on are denoted , and fiber coordinates on are denoted by .these induce coordinates on the fibers of .if is a section of , its tangent map at , denoted , is an element of .thus , the map defines a section of regarded as a bundle over .this section is denoted or and is called the first jet of . in coordinates , is given by where .higher order jet bundles of , , then follow as .analogous to the tangent map of the projection , , we may define the jet map of this projection which takes onto let so that . then we define the subbundle of over which consists of second - order jets so that on each fiber in coordinates , if is given by , and is given by , then is a second - order jet if .thus , the second jet of , , given in coordinates by the map , is an example of a second - order jet .the * _ _ * dual jet bundle is the _ vector _ bundle over whose fiber at is the set of _ affine _ maps from to , the bundle of -forms on .a smooth section of is therefore an affine bundle map of to covering .fiber coordinates on are , which correspond to the affine map given in coordinates by where .analogous to the canonical one- and two - forms on a cotangent bundle , there exist canonical ( )- and ( )-forms on the dual jet bundle . in coordinates , with ,these forms are given by a lagrangian density is a smooth bundle map over . 
in coordinates , we write the corresponding covariant legendre transformation for is a fiber preserving map over , , expressed intrinsically as the first order vertical taylor approximation to : where .a straightforward calculation shows that the covariant legendre transformation is given in coordinates by we can then define the * _ _ * cartan form as the -form on given by and the -form by with local coordinate expressions { \displaystyle \omega_\mathcal{l } = dy^a \wedge d\left ( \frac{\partial l } { \partial { v^a}_\mu } \right ) \wedge d^nx_\mu - d\left [ l - \frac{\partial l}{\partial { v^a}_\mu } { v^a}_\mu \right ] \wedge d^{n+1}x . }\end{array } \label{d1a}\ ] ] this is the differential - geometric formulation of the multisymplectic structure .subsequently , we shall show how we may obtain this structure directly from the variational principle , staying entirely on the lagrangian side .[ [ the - multisymplectic - form - formula . ] ] the multisymplectic form formula .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this subsection we prove a formula that is the multisymplectic counterpart to the fact that in finite - dimensional mechanics , the flow of a mechanical system consists of symplectic maps .again , we do this by studying the action function .let be a smooth manifold with ( piecewise ) smooth closed boundary .we define the set of smooth maps for each , we set and so that is a diffeomorphism . .we may then define the infinite - dimensional manifold to be the closure of in either a hilbert space or banach space topology .for example , the manifold may be given the topology of a hilbert manifold of bundle mappings , , ( considered a bundle with fiber a point ) for any integer , so that the hilbert sections in are those whose distributional derivatives up to order are square - integrable in any chart . 
with our condition on ,the sobolev embedding theorem makes such mappings well defined .alternately , one may wish to consider the banach manifold as the closure of in the usual -norm , or more generally , in a holder space -norm .see palais [ 1968 ] and ebin and marsden [ 1970 ] for a detailed account of manifolds of mappings .the choice of topology for will not play a crucial role in this paper .let be the lie group of -bundle automorphisms covering diffeomorphisms , with lie algebra .we define the * _ _ * action by furthermore , if , then .the * _ _ * tangent space to the manifold at a point is the set defined by of course , when these objects are topologized as we have described , the definition of the tangent space becomes a theorem , but as we have mentioned , this functional analytic aspect plays a minor role in what follows .given vectors we may extend them to vector fields on by fixing vector fields such that and , and letting and .thus , the flow of on is given by where covering is the flow of .the definition of the bracket of vector fields using their flows , then shows that (\rho ) = [ v , w ] \circ ( \rho \circ \rho_x^{-1 } ) .\ ] ] whenever it is contextually clear , we shall , for convenience , write for .the * _ _ * action function on is defined as follows : let be an arbitrary smooth path in such that , and let be given by we say that is a * _ _ * stationary point , critical point , or extremum of if then , where we have used the fact that for all holonomic sections of ( see corollary [ cor4.2 ] below ) , and that using the cartan formula , we obtain that \nonumber \\ & & \qquad + \int_{\partial u_x } j^1({\phi \circ \phi_x^{-1}})^ * [ { j^1(v ) } \intprod \theta_\mathcal{l } ] . \label{s6}\end{aligned}\ ] ] hence , a necessary condition for to be an extremum of is that the first term in ( [ s6 ] ) vanish .one may readily verify that the integrand of the first term in ( [ s6 ] ) is equal to zero whenever is replaced by which is either -vertical or tangent to ( see marsden and shkoller [ 1997 ] ) , so that using a standard argument from the calculus of variations , ] since such vectors arise by differentiating = 0 ] , we have ) = \int_{\partial u_x } { j^1}({\phi \circ \phi_x^{-1}})^*[{j^1(v)},{j^1(w ) } ] \intprod \theta_\mathcal{l}.\ ] ] now \intprod \theta_\mathcal{l } = { \mathfrak l}_{j^1(v ) } ( { j^1(w)}\intprod \theta_\mathcal{l } ) - { j^1(w ) } \intprod { \mathfrak l}_{j^1(v ) } \theta_\mathcal{l},\ ] ] so that \\ & & + \int_{\partial u_x } { j^1}({\phi \circ \phi_x^{-1}})^ * [ { j^1(v)}\intprod { \mathfrak l}_{j^1(w ) } \theta_\mathcal{l } - { \mathfrak l}_{j^1(v)}({j^1(w)}\intprod \theta_\mathcal{l})].\end{aligned}\ ] ] but and hence , } \\ & = & \int_{\partial u_x } { j^1}({\phi \circ \phi_x^{-1}})^ * ( { j^1(v)}\intprod { j^1(w)}\intprod \omega_\mathcal{l } ) \\ & & - \int_{\partial u_x } { j^1}({\phi \circ \phi_x^{-1}})^ * d ( { j^1(v)}\intprod { j^1(w)}\intprod \theta_\mathcal{l}).\end{aligned}\ ] ] the last term once again vanishes by stokes theorem together with the fact that is empty , and we obtain that we now use ( [ dalpha ] ) on . a similar computation as above yields \nonumber\ ] ] which vanishes for all and .similarly , for all and . finally , for all .hence , since we obtain the formula ( [ s11 ] ) . [ [ symplecticity - revisited . 
] ] symplecticity revisited .+ + + + + + + + + + + + + + + + + + + + + + + + let be a compact oriented connected boundaryless -manifold which we think of as our reference cauchy surface , and consider the space of embeddings of into , emb ; again , although it is unnecessary for this paper , we may topologize emb by completing the space in the appropriate or -norm .let be an -dimensional manifold .for any fiber bundle , we shall , in addition to , use the corresponding script letter to denote the space of sections of .the space of sections of a fiber bundle is an infinite - dimensional manifold ; in fact , it can be precisely defined and topologized as the manifold of the previous section , where the diffeomorphisms on the base manifold are taken to be the identity map , so that the tangent space to at is given simply by where denotes the vertical tangent bundle of .we let be the vector bundle over whose fiber at , , is the set of linear mappings from to .then the cotangent space to at is defined as integration provides the natural pairing of with : in practice , the manifold will either be or some ( )-dimensional subset of , or the -dimensional manifold , where for each , .we shall use the notation for the bundle , and for sections of this bundle . for the remainder of this section, we shall set the manifold introduced earlier to .the infinite - dimensional manifold is called the * _ _ * -configuration space , its tangent bundle is called the * _ _ * -tangent space , and its cotangent bundle is called the * _ _ * -phase space . just as we described in section [ mechanics ] , the cotangent bundle has a canonical -form and a canonical -form .these differential forms are given by where , , and is the cotangent bundle projection map .an infinitesimal slicing of the bundle consists of together with a vector field which is everywhere transverse to , and covers which is everywhere transverse to .the existence of an infinitesimal slicing allows us to invariantly decompose the temporal from the spatial derivatives of the fields .let , , and let be the inclusion map .then we may define the map taking to over by in our notation , is the collection of restrictions of holonomic sections of to , while are the holonomic sections of .it is easy to see that is an isomorphism ; it then follows that is an isomorphism of with , since is completely determined by .this bundle map is called the jet decomposition map , and its inverse is called the jet reconstruction map . using this map , we can define the instantaneous lagrangian .the instantaneous lagrangian is given by \label{t3}\ ] ] for all . the instantaneous lagrangian has an instantaneous legendre transform which is defined in the usual way by vertical fiber differentiation of ( see , for example , abraham and marsden [ 1978 ] ) . using the instantaneous legendre transformation, we can pull - back the canonical - and -forms on . 
denote , respectively , the instantaneous lagrange - and -forms on by alternatively , we may define using theorem [ thm2.1 ] , in which case no reference to the cotangent bundle is necessary .we will show that our covariant multisymplectic form formula can be used to recover the fact that the flow of the euler - lagrange equations in the bundle is symplectic with respect to .to do so , we must relate the multisymplectic cartan ( )-form on with the symplectic -form on .[ thm3.2 ] let be the canonical -form on given by ,\ ] ] where , .( a ) if the -form on is defined by , then for , .\ ] ] ( b)let the diffeomorphism be a slicing of such that for , where is given by . for any ,let so that for each , , and let . then [ [ proof.-1 ] ] proof .+ + + + + + part ( a ) follows from the cartan formula together with stokes theorem using an argument like that in the proof of theorem [ thm1 ] . for part ( b ) , we recall that the multisymplectic form formula on states that for any subset with smooth closed boundary and vectors , , =0.\ ] ] let } \sigma_\lambda.\ ] ] then , so that ( [ t8 ] ) can be written as \\ & & \qquad - \int_{\sigma_{\lambda_1 } } { j^1}(\phi \circ i_{\tau_{\lambda_1}})^ * [ { j^1v_{\tau_{\lambda_1 } } } \intprod { j^1w_{\tau_{\lambda_1 } } } \intprod \omega_\mathcal{l } ] \\ & = & \omega_{\tau_{\lambda_1}}^\mathcal{l}(j^1v_{\tau_{\lambda_1}},j^1w_{\tau_{\lambda_1}})- \omega_{\tau_{\lambda_2}}^\mathcal{l}(j^1v_{\tau_{\lambda_2}},j^1w_{\tau_{\lambda_2}}),\end{aligned}\ ] ] which proves ( [ t7 ] ) . [ thm3.3 ] the identity holds .[ [ proof.-2 ] ] proof .+ + + + + + let , which we identify with , where is a -vertical vector .choose a coordinate chart which is adapted to the slicing so that .with , we see that now , from ( [ t3 ] ) we get \\ & = & \int_{\sigma_\tau } \frac{\partial l}{\partial { v^a}_0 } ( \phi^b , { \phi^b}_{,\mu})dy^a\otimes d^nx_0,\end{aligned}\ ] ] where we arrived at the last equality using the fact that in this adapted chart . since , we see that , and this completes the proof .let the instantaneous energy associated with be given by and define the `` time''-dependent lagrangian vector field by since over emb is infinite - dimensional and is only weakly nondegenerate , the second - order vector field does not , in general , exist .in the case that it does , we obtain the following result .assume exists and let be its semiflow , defined on some subset of the bundle over emb .fix so that where and . then .[ [ proof.-3 ] ] proof .+ + + + + + this follows immediately from theorem [ thm3.2](b ) and theorem [ thm3.3 ] and the fact that induces an isomorphism between and . [ [ example - nonlinear - wave - equation .] ] example : nonlinear wave equation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to illustrate the geometry that we have developed , let us consider the scalar nonlinear wave equation given by where is the laplace - beltrami operator and is a real - valued function of one variable . for concreteness ,fix so that the spacetime manifold , the configuration bundle , and the first jet bundle . 
equation ( [ t11 ] ) is governed by the lagrangian density + n(\phi ) \right\ } dx^1 \wedge dx^0 .\ ] ] using coordinates for , we write the multisymplectic -form for this nonlinear wave equation on in coordinates as a short computation verifies that solutions of ( [ t11 ] ) are elements of , or that = 0 ] , and since this must hold for all infinitesimal generators at , the integrand must also vanish so that = 0 , \label{s23}\ ] ] which is precisely a restatement of the covariant noether theorem .we now generalize the veselov discretization given in section ( 3 ) to multisymplectic field theory , by discretizing the spacetime .for simplicity we restrict to the discrete analogue of ; i.e. .thus , we take and the fiber bundle to be for some smooth manifold . [ [ notation . ] ] notation .+ + + + + + + + + the development in this section is aided by a small amount of notation and terminology .elements of over the base point are written as and the projection acts on by .the fiber over is denoted .a * _ _ * triangle of is an ordered triple of the form the first component of is the * _ _ * first vertex of the triangle , denoted , and similarly for the * _ _ * second and * _ _ * third vertices .the set of all triangles in is denoted . by abuse of notation the same symbolis used for a triangle and the ( unordered ) set of its vertices .a point is * _ _ * touched by a triangle if it is a vertex of that triangle .if , then is an * _ _ * interior point of if contains all three triangles of that touch .the * _ _ * interior of is the collection of the interior points of .the * _ _ * closure of is the union of all triangles touching interior points of .a * _ _ * boundary point of is a point in and which is not an interior point .the * _ _ * boundary of is the set of boundary points of , so that generally , properly contains the union of its interior and boundary , and we call * _ _ * regular if it is exactly that union .a * _ _ * section of is a map such that .= 0.9 [ [ multisymplectic - phase - space . ] ] multisymplectic phase space .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + we define the * _ _ * first jet bundle , but may not lead to a good numerical scheme .later , we shall also use four vertices together with averaging to define the partial derivatives of the fields .] of to be heuristically ( see figure ( [ 25 ] ) ) , corresponds to some grid of elements in continuous spacetime , say , and corresponds to , where is `` inside '' the triangle bounded by , and is some smooth section of interpolating the field values .the * _ _ * first jet extension of a section of is the map defined by given a vector field on , we denote its restriction to the fiber by , and similarly for vector fields on .the * _ _ * first jet extension of a vector field on is the vector field on defined by for any triangle .= 0.5 [ [ the - variational - principle . ] ] the variational principle .+ + + + + + + + + + + + + + + + + + + + + + + + + + let us posit a * _ _ * discrete lagrangian . 
given a triangle , define the function by so that we may view the lagrangian as being a choice of a function for each triangle of .the variables on the domain of will be labeled , irrespective of the particular .let be regular and let be the set of sections of on , so is the manifold .the * _ _ * action will assign real numbers to sections in by the rule given and a vector field , there is the 1-parameter family of sections where denotes the flow of on .the * _ _ * variational principle is to seek those for which for all vector fields .[ [ the - discrete - euler - lagrange - equations . ] ] the discrete euler - lagrange equations .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the variational principle gives certain field equations , the * _ _ * discrete euler - lagrange field equations ( delf equations ) , as follows .focus upon some , and abuse notation by writing .the action , written with its summands containing explicitly , is ( see figure ( [ 26 ] ) ) so by differentiating in , the delf equations are for all .equivalently , these equations may be written for all .[ [ the - discrete - cartan - form . ] ] the discrete cartan form .+ + + + + + + + + + + + + + + + + + + + + + + + + now suppose we allow nonzero variations on the boundary , so we consider the effect on of a vector field which does not necessarily vanish on .for each find the triangles in touching .there is at least one such triangle since ; there are not three such triangles since . for each such triangle , occurs as the vertex , for one or two of , and those expressions from the list yielding one or two numbers .the contribution to from the boundary is the sum of all such numbers . to bring this into a recognizable format ,we take our cue from discrete lagrangian mechanics , which featured _ two _ -forms . here the above list suggests the _ three _-forms on , the first of which we define to be and being defined analogously . with these notations, the contribution to from the boundary can be written , where is the -form on the space of sections defined by (\delta)\right).\ ] ] in comparing ( [ 31 ] ) with ( [ s21 ] ) , the analogy with the multisymplectic formalism of section ( 4 ) is immediate .[ [ the - discrete - multisymplectic - form - formula . ] ] the discrete multisymplectic form formula .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a triangle in , we define the projection by in this notation , it is easily verified that ( [ 31 ] ) takes the convenient form a * _ _ * first - variation at a solution of the delf equations is a vertical vector field such that the associated flow maps to other solutions of the delf equations . set . since one obtains so that only two of the three -forms , are essentially distinct .exactly as in section ( 2 ) , the equation , when specialized to two first - variations and now gives , by taking one exterior derivative of ( [ 43 ] ) , which in turn is equivalent to (\delta)\right)=0.\ ] ] again , the analogy with the multisymplectic form formula for continuous spacetime ( [ s11 ] ) is immediate . [ [ the - discrete - noether - theorem . ] ] the discrete noether theorem .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + suppose that a lie group with lie algera acts on by vertical symmetries in such a way that the lagrangian is -invariant .then acts on and in the obvious ways . 
since there are three lagrange -forms , there are three momentum maps , , each one a -valued function on triangles in , and defined by for any .invariance of and ( [ 35 ] ) imply that so , as in the case of the -forms , only two of the three momenta are essentially distinct . for any ,the infinitesimal generator is a first - variation , so invariance of , namely , becomes . by left insertion into ( [ 31 ] ) , this becomes the discrete version of noether s theorem : = 0.7 [ [ conservation - in - a - space - and - time - split . ] ] conservation in a space and time split .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to understand the significance of ( [ 32 ] ) and ( [ 38 ] ) consider a discrete field theory with space a discrete version of the circle and time the real line , as depicted in figure ( [ 39 ] ) , where space is split into space and time , with `` constant time '' being constant and the `` space index '' being cyclic . applying ( [ 38 ] ) to the region shown in the figure , noether s theorem takes the conservation form similarly , the discrete multisymplectic form formula also takes a conservation form .when there is spatial boundary , the discrete noether theorem and the discrete multisymplectic form formulas automatically account for it , and thus form nontrivial generalizations of these conservation results . furthermore , as in the continuous case , we can achieve `` evolution type '' symplectic systems ( i.e. discrete moser - veselov mechanical systems ) if we define as the space of fields at constant , so , and take as the discrete lagrangian ,[q_j^1])\equiv\sum_{i=1}^n l(q_i^0,q_i^1,q_{i+1}^1).\ ] ] then the moser - veselov del evolution - type equations ( [ 40 ] ) are equivalent to the delf equations ( [ 41 ] ) , the multisymplectic form formula implies symplecticity of the moser - veselov evolution map , and conservation of momentum gives identical results in both the `` field '' and `` evolution '' pictures .[ [ example - nonlinear - wave - equation.-1 ] ] example : nonlinear wave equation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to illustrate the discretization method we have developed , let us consider the lagrangian ( [ lag ] ) of section ( 4 ) , which describes the nonlinear sine - gordon wave equation .this is a completely integrable system with an extremely interesting hierarchy of soliton solutions , which we shall investigate by developing for it a variational multisymplectic - momentum integrator ; see the recent article by palais [ 1997 ] for a wonderful discussion on soliton theory .to discretize the continuous lagrangian , we visualize each triangle as having base length and height , and we think of the discrete jet as corresponding to the continuous jet where is a the center of the triangle . for insertion into the nonlinear term instead of .] this leads to the discrete lagrangian with corresponding delf equations when ( wave equation ) this gives the explicit method which is stable whenever the courant stability condition is satisfied . [[ extensions - jets - from - rectangles - and - other - polygons . 
] ] extensions : jets from rectangles and other polygons .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our choice of discrete jet bundle is obviously not restricted to triangles , and can be extended to rectangles or more general polygons ( left of figure ( [ 60 ] ) ) .a * _ _ * rectangle is a quadruple of the form , a point is an * _ _ * interior point of a subset of rectangles if contains all four rectangles touching that point , the discrete lagrangian depends on variables , and the delf equations become the extension to polygons with even higher numbers of sides is straightforward ; one example is illustrated on the right of figure ( [ 60 ] ) .= 0.4 the motivation for consideration of these extensions is enhancing the stability of the triangle - based method in the nonlinear wave example just above .[ [ example - nonlinear - wave - equation - rectangles . ] ] example : nonlinear wave equation , rectangles .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + think of each rectangle as having length and height , and each discrete jet being associated to the continuous jet where is a the center of the rectangle .this leads to the discrete lagrangian if , for brevity , we set then one verifies that the delf equations become -\biggl[\frac { y_{{i\,j+1 } } -2\,y_{{ij}}+y_{{i\,j-1}}}{{h}^{2}}\biggr]\\ & & \qquad\qquad\mbox { } + \frac14\biggl[n^\prime(\bar y_{ij})+ n^\prime(\bar y_{i\,j-1})+ n^\prime(\bar y_{i-1\,j-1})+n^\prime(\bar y_{i-1\,j } ) \biggr]=0,\end{aligned}\ ] ] which , if we make the definitions ,&\end{aligned}\ ] ] is ( more compactly ) -\frac1{h^2}\partial^2_hy_{ij } + \bar n^\prime(\bar y_{ij})=0.\label{101}\ ] ] these are implicit equations which must be solved for , , given , , ; rearranging , an iterative form equivalent to ( [ 101 ] ) is in the case of the sine - gordon equation the values of the field ought to be considered as lying in , by virtue of the vertical symmetry .soliton solutions for example will have a jump of and the method will fail unless field values at close - together spacetime points are differenced modulo . as a result it becomes important to calculate using integral multiples of small field - dependent quantities , so that it is clear when to discard multiples of , and for this the above iterative form is inconvenient .but if we define then there is the following iterative form , again equivalent to ( [ 101 ] ) one can also modify ( [ 100 ] ) so as to treat space and time symmetrically , which leads to the discrete lagrangian and one verifies that the delf equations become \nonumber\\ & & \qquad\mbox{}-\frac1{h^2}\biggl [ \frac14\partial^2_hy_{i+1\,j}+\frac12\partial^2_hy_{ij}+ \frac14\partial^2_hy_{i-1\,j}\biggr ] + \bar n^\prime(\bar y_{ij})=0,\label{102}\end{aligned}\ ] ] an equivalent iterative form of which is = 2.5 in = 2.5 in = 2.5 in = 2.5 in while the focus of this article is not the numerical implementation of the integrators which we have derived , we have , nevertheless , undertaken some preliminary numerical investigations of our multisymplectic methods in the context of the sine - gordon equation with periodic boundary conditions .[ [ the - rectangle - based - multisymplectic - method . 
] ] the rectangle - based multisymplectic method .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the top half of figure ( [ 200 ] ) shows a simulation of the collision of `` kink '' and `` antikink '' solitons for the sine - gordon equation , using the rectangle - based multisymplectic method ( [ 102 ] ) . in the bottom half of that figurewe show the result of running that simulation until the solitons have undergone about collisions ; shortly after this the simulation stops because the iteration ( [ 111 ] ) diverges .the anomalous spatial variations in the waveform of the bottom left of figure ( [ 200 ] ) have period spatial grid divisions and are shown in finer scale on the bottom right of that figure .these variations are reminiscent of those found in ablowitz , herbst and schober [ 1996 ] for the completely integrable discretization of hirota , where the variations are attributed to independent evolution of waveforms supported on even vs. odd grid points .observation of ( [ 102 ] ) indicates what is wrong : the nonlinear term contributes to ( [ 102 ] ) in a way that will average out these variations , and consequently , once they have begun , ( [ 102 ] ) tends to continue such variations via the linear wave equation . in ablowitz et ., the situation is rectified when the number of spatial grid points is not even , and this is the case for ( [ 102 ] ) as well .this is indicated on the left of figure ( [ 202 ] ) , which shows the waveform after about soliton collisions when rather than .figure ( [ 203 ] ) summarizes the evolution of energy error ] ] for that simulation .[ [ initial - data . ] ] initial data .+ + + + + + + + + + + + + for the two - soliton - collision simulations , we used the following initial data : ( except where noted ) , where and spatial grid points ( except figure ( [ 200 ] ) where ) .the circle that is space should be visualized as having circumference .let where , , and then is a kink solution if space has a circumference of .this kink and an oppositely moving antikink ( but placed on the last quarter of space ) made up the initial field , so that , , where while where = 2.5 in = 2.5 in = 2.5 in = 2.5 in [ [ comparison - with - energy - conserving - methods . ] ] comparison with energy - conserving methods .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as an example of how our method compares with an existing method , we considered the energy - conserving method of vu - quoc and li [ 1993 ] , page 354 : -\frac1{h^2}\partial^2_hy_{ij}\nonumber\\ & & \qquad\mbox{}+\frac12\left ( \frac{n(y_{i\,j+1})-n(y_{ij})}{y_{i\,j+1}-y_{ij } } + \frac{n(y_{ij})-n(y_{i\,j-1})}{y_{ij}-y_{i\,j-1 } } \right)=0.\label{155}\end{aligned}\ ] ] this has an iterative form similar to ( [ 111 ] ) and is quite comparable with ( [ 101 ] ) and ( [ 102 ] ) in terms of the computation required .our method seems to preserve the soliton waveform better than ( [ 155 ] ) , as is indicated by comparison of the left and right figure ( [ 202 ] ) . 
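the explicit formulas for this initial data are garbled in the present copy , so the following sketch instead uses the standard travelling - kink solution of sine - gordon , phi(x , t) = 4 arctan exp( gamma (x - x0 - u t) ) with gamma = 1/sqrt(1 - u^2) , to build a kink / antikink pair on a periodic grid and to sample it at two consecutive time levels , as a three - level scheme requires ; the grid sizes and time step below are assumptions , not necessarily the values used for the figures .

```python
# sketch of kink-antikink initial data for sine-gordon on a periodic grid;
# the numerical values below are assumptions, not the paper's exact choices.
import numpy as np

N, hx, k = 255, 0.2, 0.02        # grid points, spatial step, time step (assumed)
u = 0.5                          # kink speed (assumed)
Lx = N * hx                      # circumference of the spatial circle
x = np.arange(N) * hx
gamma = 1.0 / np.sqrt(1.0 - u**2)

def kink(t, x0, v):
    """standard travelling sine-gordon kink centred at x0 with speed v."""
    return 4.0 * np.arctan(np.exp(gamma * (x - x0 - v * t)))

def initial_field(t):
    # kink on the first quarter of space moving right, antikink on the last
    # quarter moving left; the combination returns to 0, so it is periodic
    return kink(t, Lx / 4.0, u) - kink(t, 3.0 * Lx / 4.0, -u)

# two consecutive time levels, as needed by a three-level scheme
phi_prev, phi_now = initial_field(-k), initial_field(0.0)
```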
in regards to the closely related papers vu - quoc and li [ 1993 ] and li and vu - quoc [ 1995 ], we could not verify in our simulations that their method conserves energy , nor could we verify their _ proof _ that their method conserves energy .so , as a further check , we implemented the following energy - conserving method of guo , pascual , rodriguez , and vazquez [ 1986 ] : which conserves the discrete energy this method diverged after just 345 soliton collisions .= 2.5 in = 2.5 in = 2.5 in = 2.5 in as can be seen from ( [ 500 ] ) , the nonlinear potential enters as a difference over two grid spacings , which suggests that halving the time step might result in a more fair comparison with the methods ( [ 102 ] ) or ( [ 155 ] ) . with this advantage ,method ( [ 500 ] ) was able to simulate 5000 soliton collisions , with a waveform degradation similar to the energy - conserving method ( [ 155 ] ) , as shown at the bottom right of figure ( [ 705 ] ) .the same figure also shows that , although the energy behavior of ( [ 500 ] ) is excellent for short time simulations , it drifts significantly over long times , and the final energy error has a peculiar appearance .figure ( [ 706 ] ) shows the time evolution of the waveform through the soliton collision that occurs just before the simulation stops .apparently , at the soliton collisions , significant high frequency oscillations are present , and these are causing the jumps in the energy error in the bottom left plot of figure ( [ 705 ] ) .this error then accumulates due to the energy - conserving property of the method .in these simulations , so as to guard against the possibility that this behavior of the energy was due to inadequately solving the implicit equation ( [ 500 ] ) , we imposed a minimum limit of 3 iterations in the corresponding iterative loop , whereas this loop would otherwise have converged after just 1 iteration .= 5.5 in = 5.5 in = 5.5 in = 5.5 in [ [ comparison - with - the - triangle - based - multisymplectic - method . ] ] comparison with the triangle - based multisymplectic method .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the discrete second derivatives in the method ( [ 500 ] ) are the same as in the triangle - based multisymplectic method ( [ explicit ] ) ; these derivatives are simpler than either our rectangle - based multisymplectic method ( [ 102 ] ) or the energy - conserving method of vu - quoc and li ( [ 155 ] ) . 
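for reference , a sketch of an explicit three - level update of this centered - difference type , applied to sine - gordon ( n(y) = 1 - cos y , so n'(y) = sin y ) , is given below together with a simple trapezoid - type discrete energy used as a diagnostic . the precise update formulas and the discrete energy of guo et al. are garbled in this copy , so the expressions here are the standard ones and not necessarily the authors' .

```python
# sketch of an explicit centered-difference update for sine-gordon with a
# simple discrete-energy diagnostic; standard formulas, assumed parameters.
import numpy as np

N, hx, k = 255, 0.2, 0.02
x, Lx = np.arange(N) * 0.2, N * 0.2

def lap(y):                          # periodic centered second difference
    return (np.roll(y, -1) - 2 * y + np.roll(y, 1)) / hx**2

def step(y_prev, y_now):
    # y_{j+1} = 2 y_j - y_{j-1} + k^2 ( d2_h y_j - sin y_j )
    return 2 * y_now - y_prev + k**2 * (lap(y_now) - np.sin(y_now))

def energy(y_prev, y_now):
    # trapezoid-type discrete energy: kinetic + gradient + potential terms
    yt = (y_now - y_prev) / k
    yx = (np.roll(y_now, -1) - y_now) / hx
    return hx * np.sum(0.5 * yt**2 + 0.5 * yx**2 + (1.0 - np.cos(y_now)))

# kink-antikink pair initially at rest (periodic-compatible: field returns to 0)
phi_now = 4 * np.arctan(np.exp(x - Lx / 4)) - 4 * np.arctan(np.exp(x - 3 * Lx / 4))
phi_prev = phi_now.copy()

e0, drift = energy(phi_prev, phi_now), 0.0
for n in range(20000):
    phi_prev, phi_now = phi_now, step(phi_prev, phi_now)
    drift = max(drift, abs(energy(phi_prev, phi_now) - e0))
print("max relative energy drift:", drift / e0)
```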
to explore this we implemented the triangle - based multisymplectic method ( [ explicit ] ) .even with the less complicated discrete second derivatives our triangle - based multisymplectic method simulated 5000 soliton collisions with comparable energy ] ] and waveform preservation properties as the rectangle - based multisymplectic method ( [ 102 ] ) , as shown in figure ( [ 703 ] ) .figure ( [ 790 ] ) shows the time evolution of the waveform through the soliton collision just before the simulation stops , and may be compared to figure ( [ 706 ] ) .as can be seen , the high frequency oscillations that are present during the soliton collisions are smaller and more smooth for the triangle - based multisymplectic method than for the energy - conserving method ( [ 500 ] ) .a similar statement is true irrespective which of the two multisymplectic or two energy conserving methods we tested , and is true all along the waveform , irrespective of whether or not a soliton collision is occurring .= 2.5 in = 2.5 in = 2.5 in = 2.5 in [ [ summary . ] ] summary .+ + + + + + + + our multisymplectic methods are finite difference methods that are computationally competitive with existing finite difference methods .our methods show promise for long - time simulations of conservative partial differential equations , in that , for long - time simulations of the sine - gordon equation , our method 1 ) had superior energy - conserving behavior , _ even when compared with energy - conserving methods _ ; 2 ) better preserved the waveform than energy - conserving methods ; and 3 ) exhibited superior stability , in that our methods excited smaller and more smooth high frequency oscillations than energy - conserving methods . however , further numerical investigation is certainly necessary to make any lasting conclusions about the long - time behavior of our integrator .[ [ the - programs . ] ] the programs .+ + + + + + + + + + + + + the programs that were used in the preceding simulations are `` c '' language implementations of the various methods .a simple tridiagonal lud method was used to solve the linear equations ( e.g. the left side of ( [ 111 ] ) ) , as in vu - quoc and li [ 1993 ] , page 379 .an order extrapolator was used to provide a seed for the implicit step .all calculations were performed in double precision while the implicit step was terminated when the fields ceased to change to single precision ; the program s output was in single precision .the extrapolation usually provided a seed accurate enough so that the methods became practically _explicit _ , in that for many of the time - steps the first or second run through the iterative loops solving the implicit equations solved those equations to single precision .however , in the absence of a regular spacetime grid the expenses of the extrapolation and solving the linear equation would grow .our programs are freely available at url ` http://www.cds.caltech.edu/cds ` .here we make a few miscellaneous comments and remark on some work planned for the future . as mentioned in the text , it is useful to have a discrete counterpart to the lagrangian reduction of marsden and scheurle [ 1993a , b ] , holm , marsden and ratiu [ 1998a ] and cendra , marsden and ratiu [ 1998 ] .we sketch briefly how this theory might proceed .this reduction can be done for both the case of `` particle mechanics '' and for field theory . 
for particle mechanics ,the simplest case to start with is an invariant ( say left ) lagrangian on the tangent bundle of a lie group : .the reduced lagrangian is and the corresponding euler poincar equations have a variational principle of lagrange dalembert type in that there are constraints on the allowed variations .this situation is described in marsden and ratiu [ 1994 ] .the discrete analogue of this would be to replace a discrete lagrangian by a reduced discrete lagrangian related to by in this situation , the algorithm from to reduces to one from to and it is generated by in a way that is similar to that for .in addition , the discrete variational principle for which states that one should find critical points of with respect to to implicitly define the map , reduces naturally to the following principle : find critical points of with respect to variations of and of the form and where and denote left and right translation and where . in other words ,one sets to zero , the derivative of the sum with respect to at for a curve in that passes through the identity at .this defines ( with caveats of regularity as before ) a map of to itself , which is the reduced algorithm .this algorithm can then be used to advance points in itself , by advancing each component equally , reproducing the algorithm on .in addition , this can be used with the adjoint or coadjoint action to advance points in or to approximate the euler poincar or lie poisson dynamics .these equations for a discrete map , say generated by on are called the * _ _ * discrete euler poincar equations as they are the discrete analogue of the euler poincar equations on .notice that , at least in theory , computation can be done for this map first and then the dynamics on is easily reconstructed by simply advancing each pair as follows : , where .if one identifies the discrete lagrangians with generating functions ( as explained in wendlandt and marsden [ 1997 ] ) then the reduced lagrangian generates the reduced algorithm in the sense of ge and marsden [ 1988 ] , and this in turn is closely related to the lie hamilton jacobi theory .next , consider the more general case of with its discretization with a group action ( assumed to be free and proper ) by a lie group .the reduction of by the action of is , which is a bundle over with fiber isomorphic to .the discrete analogue of this is which is a bundle over with fiber isomorphic to itself .the projection map is given by \mapsto ( [ q_1 ] , [ q_2]) ] denotes the relevant equivalence class .notice that in the case in which this bundle is `` all fiber '' .the reduced discrete euler - lagrange equations are similar to those in the continuous case , in which one has shape equations couples with a version of the discrete euler poincar equations .of course all of the machinery in the continuous case can be contemplated here too , such as stability theory , geometric phases , etc .in addition , it would be useful to generalize this lagrangian reduction theory to the multisymplectic case .all of these topics are planned for other papers .consider an autonomous , continuous lagrangian where , for simplicity , is an open submanifold of euclidean space .imagine some _ not necessarily uniform _ temporal grid ( ) of , so that .in this situation , it is natural to consider the discrete action this action principle deviates from the action principle ( [ 850 ] ) of section 3 in that the discrete lagrangian density depends explicitly on .of course nonautonomous continuous lagrangians also yield 
-dependent discrete lagrangian densities , irrespective of uniformity of the grid .thus , nonuniform temporal grids or nonautonomous lagrangians give rise to discrete lagrangian densities which are more general those those we have considered in section ( 3 ) . for field theories , the lagrangian in the action ( [ 30 ] ) depends on the spacetime variables already , through its explicit dependence on the triangle .however , it is only in the context of a uniform grid that we have experimented numerically and only in that context that we have discussed the significance of the discrete multisymplectic form formula and the discrete noether theorem . using ( [ 800 ] ) as an example , will now indicate why the issue of grid uniformity may not be serious .the del equations corresponding to the action ( [ 800 ] ) are and this gives evolution maps defined so that when ( [ 802 ] ) holds . for the canonical 1-forms corresponding to ( [ 13 ] ) and ( [ 14 ] ) we have the -dependent one forms and and equations ( [ 16 ] ) and ( [ 2010 ] ) become respectively . together , these two equations give and if we set then ( [ 807 ] ) chain together to imply .this appears less than adequate since it merely says that the pull back by the evolution of a certain 2-form is , in general , a different 2-form .the significant point to note , however , is that _ this situation may be repaired at any simply by choosing . _it is easily verified that the analogous statement is true with respect to momentum preservation via the discrete noether theorem .specifically , imagine integrating a symmetric autonomous mechanical system in a timestep adaptive way with equations ( [ 802 ] ) . as the integration proceeds, various timesteps are chosen , and if momentum is monitored it will show a dependence on those choices . _a momentum - preserving symplectic simulation may be obtained by simply choosing the last timestep to be of equal duration to the first ._ this is the highly desirable situation which gives us some confidence that grid uniformity is a nonissue .there is one caveat : symplectic integration algorithms are evolutions which are high frequency perturbations of the actual system , the frequency being the inverse of the timestep , which is generally far smaller than the time scale of any process in the simulation .however , timestep adaptation schemes will make choices on a much larger time scale than the timestep itself , and then drift in the energy will appear on this larger time scale .a meaningful long - time simulation can not be expected in the unfortunate case that the timestep adaptation makes repeated choices in a way that resonates with some process of the system being simulated .the sphere can not be generally uniformly subdivided into spherical triangles ; however , a good approximately uniform grid is obtained as follows : start from an inscribed icosahedron which produces a uniform subdivision into twenty spherical isosceles triangles ; these are further subdivided by halving their sides and joining the resulting points by short geodesics .the variational approach we have developed allows us to examine the multisymplectic structure of elliptic boundary value problems as well .for a given lagrangian , we form the associated action function , and by computing its first variation , we obtain the unique multisymplectic form of the elliptic operator .the multisymplectic form formula contains information on how symplecticity interacts with spatial boundaries . 
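returning to the spherical grid construction mentioned above , the following sketch builds an approximately uniform triangulation of the sphere by subdividing an inscribed icosahedron and projecting the new points back to the sphere ; the function name , the refinement count and the crude detection of faces as triples of mutually nearest vertices are illustrative assumptions , not the authors ' implementation .

```python
import itertools
import numpy as np

def icosphere(refinements=1):
    """Approximately uniform spherical triangulation from a subdivided icosahedron."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    verts = []
    for a, b in itertools.product((-1.0, 1.0), repeat=2):   # 12 icosahedron vertices
        verts += [(0.0, a, b * phi), (a, b * phi, 0.0), (b * phi, 0.0, a)]
    verts = np.array(verts)
    verts /= np.linalg.norm(verts, axis=1, keepdims=True)
    # faces = triples of mutually adjacent vertices (nearest-neighbour distance)
    edge = min(np.linalg.norm(verts[0] - v) for v in verts[1:])
    close = lambda i, j: abs(np.linalg.norm(verts[i] - verts[j]) - edge) < 1e-6
    faces = [(i, j, k) for i, j, k in itertools.combinations(range(12), 3)
             if close(i, j) and close(j, k) and close(i, k)]
    tris = [verts[list(f)] for f in faces]
    for _ in range(refinements):                             # halve sides, reproject
        new = []
        for a, b, c in tris:
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            new += [np.array(t) for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))]
        tris = [t / np.linalg.norm(t, axis=1, keepdims=True) for t in new]
    return tris

tris = icosphere(2)
areas = [0.5 * np.linalg.norm(np.cross(t[1] - t[0], t[2] - t[0])) for t in tris]
print(len(tris), max(areas) / min(areas))   # 320 triangles, mild spread of areas
```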
in the case of two spatial dimensions , , , we see that equation ( [ t15 ] ) gives us the conservation law where the vector .furthermore , using our generalized noether theory , we may define momentum - mappings of the elliptic operator associated with its symmetries .it turns out that for important problems of spatial complexity arising in , for example , pattern formation systems , the covariant noether current intrinsically contains the constrained toral variational principles whose solutions are the complex patterns ( see marsden and shkoller [ 1997 ] ) .there is an interesting connection between our variational construction of multisymplectic - momentum integrators and the finite element method ( fem ) for elliptic boundary value problems .fem is also a variationally derived numerical scheme , fundamentally differing from our approach in the following way : whereas we form a discrete action sum and compute its first variation to obtain the discrete euler - lagrange equations , in fem , it is the original continuum action function which is used together with a projection of the fields and their variations onto appropriately chosen finite - dimensional spaces .one varies the projected fields and integrates such variations over the spatial domain to recover the discrete equations . in general, the two discretization schemes do not agree , but for certain classes of finite element bases with particular integral approximations , the resulting discrete equations match the discrete euler - lagrange equations obtained by our method , and are hence naturally multisymplectic .to illustrate this concept , we consider the gregory and lin method of solving two - point boundary value problems in optimal control . in this scheme ,the discrete equations are obtained using a finite element method with a basis of linear interpolants . over each one - dimensional element ,let and be the two linear interpolating functions . as usual, we define the action function by .discretizing the interval $ ] into uniform elements , we may write the action with fields projected onto the linear basis as since the euler - lagrange equations are obtained by linearizing the action and hence the lagrangian , and as the functions are linear , one may easily check that by evaluating the integrals in the linearized equations using a trapezoidal rule , the discrete euler - lagrange equations given in ( [ 12 ] ) are obtained .thus , the gregory and lin method is actually a multisymplectic - momentum algorithm .fluid problems are not literally covered by the theory presented here because their symmetry groups ( particle relabeling symmetries ) are not vertical .a generalization is needed to cover this case and we propose to work out such a generalization in a future paper , along with numerical implementation , especially for geophysical fluid problems in which conservation laws such as conservation of enstrophy and kelvin theorems more generally are quite important . it remains to link the approaches here with other types of integrators , such as volume preserving integrators ( see , eg , kang and shang [ 1995 ] , quispel [ 1995 ] ) and reversible integrators ( see , eg , stoffer [ 1995 ] ) . in particularsince volume manifolds may be regarded as multisymplectic manifolds , it seems reasonable that there is an interesting link .one of the very nice things about the veselov construction is the way it handles constraints , both theoretically and numerically ( see wendlandt and marsden [ 1997 ] ) . 
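to make the earlier remark about the gregory and lin scheme concrete , the sketch below assembles the action with a linear finite element basis and trapezoidal quadrature , and checks that its variation with respect to an interior nodal value coincides with the discrete euler - lagrange residual of the corresponding trapezoidal discrete lagrangian ; the one - dimensional lagrangian , the nodal values and the step size are assumptions chosen only for this check .

```python
import numpy as np

def V(q):                      # sample potential, an assumption for illustration
    return 0.5 * q**2

def lagrangian(q, v):          # continuum Lagrangian L(q, qdot)
    return 0.5 * v**2 - V(q)

def element_action(q0, q1, h):
    # linear interpolant on the element => constant velocity (q1 - q0)/h ;
    # trapezoidal quadrature of the continuum Lagrangian over the element
    v = (q1 - q0) / h
    return 0.5 * h * (lagrangian(q0, v) + lagrangian(q1, v))

def assembled_action(qs, h):
    return sum(element_action(qs[k], qs[k + 1], h) for k in range(len(qs) - 1))

# derivative of the assembled action with respect to an interior nodal value ...
h, qs, k, eps = 0.1, np.array([0.0, 0.30, 0.55, 0.70]), 2, 1e-6
qp, qm = qs.copy(), qs.copy()
qp[k] += eps
qm[k] -= eps
fem_residual = (assembled_action(qp, h) - assembled_action(qm, h)) / (2 * eps)

# ... agrees with the discrete Euler-Lagrange residual of the trapezoidal
# discrete Lagrangian, written out here for this particular L (V'(q) = q)
del_residual = (2 * qs[k] - qs[k - 1] - qs[k + 1]) / h - h * qs[k]

print(fem_residual, del_residual)   # agree up to the finite-difference error
```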
for field theoriesone would like to have a similar theory .for example , it is interesting that for fluids , the incompressibility constraint can be expressed as a pointwise constraint on the first jet of the particle placement field , namely that its jacobian be unity .when viewed this way , it appears as a holonomic constraint and it should be amenable to the present approach . under reduction by the particle relabeling group ,such a constraint of course becomes the divergence free constraint and one would like to understand how these constraints behave under both reduction and discretization .we would like to extend our gratitude to darryl holm , tudor ratiu and jeff wendlandt for their time , encouragement and invaluable input .work of j. marsden was supported by the california institute of technology and nsf grant dms 9633161 .work by g. patrick was partially supported by nserc grant ogp0105716 and that of s. shkoller was partially supported by the cecil and ida m. green foundation and doe .we also thank the control and dynamical systems department at caltech for providing a valuable setting for part of this work .arms , j.m .marsden , and v. moncrief [ 1982 ] the structure of the space solutions of einstein s equations : ii several killings fields and the einstein - yang - mills equations ._ ann . of phys . _* 144 * , 81106 .arms , j.m .marsden , and v. moncrief [ 1982 ] the structure of the space solutions of einstein s equations : ii several killings fields and the einstein - yang - mills equations ._ ann . of phys . _* 144 * , 81106 .holm , d. d. , j.e .marsden and t. ratiu [ 1998b ] the euler - poincar equations in geophysical fluid dynamics , in _ proceedings of the isaac newton institute programme on the mathematics of atmospheric and ocean dynamics _ , cambridge university press ( to appear ) .marsden , j.e . and j.m . wendlandt [ 1997 ] mechanical systems with symmetry , variational principles and integration algorithms ._ current and future directions in applied mathematics _ , edited by m. alber , b. hu , and j. rosenthal , birkhuser , 219261 | this paper presents a geometric - variational approach to continuous and discrete mechanics and field theories . using multisymplectic geometry , we show that the existence of the fundamental geometric structures as well as their preservation along solutions can be obtained directly from the variational principle . in particular , we prove that a unique multisymplectic structure is obtained by taking the derivative of an action function , and use this structure to prove covariant generalizations of conservation of symplecticity and noether s theorem . natural discretization schemes for pdes , which have these important preservation properties , then follow by choosing a discrete action functional . in the case of mechanics , we recover the variational symplectic integrators of veselov type , while for pdes we obtain covariant spacetime integrators which conserve the corresponding discrete multisymplectic form as well as the discrete momentum mappings corresponding to symmetries . we show that the usual notion of symplecticity along an infinite - dimensional space of fields can be naturally obtained by making a spacetime split . all of the aspects of our method are demonstrated with a nonlinear sine - gordon equation , including computational results and a comparison with other discretization schemes . |
massive multiple - input multiple - output ( mimo ) technique exploiting hundreds of antennas at the base station ( bs ) , is one of the key enabler for future 5 g wireless cellular systems . to achieve the theoretical performance gains in massive mimo systems ,accurate channel state information at the transmitter ( csit ) is crucial . for csit acquisition , frequency - division duplexing ( fdd )requires direct feedback of the csi from the users to the bs , but such process is unnecessary for time division duplexing ( tdd ) since the csit can be obtained from the uplink channel estimation by leveraging the channel reciprocity . while many of massive mimo works consider the tdd mode due to this reason , fdd has many benefits over tdd ( especially in delay - sensitive or traffic - symmetric applications ) and also dominates current cellular networks .thus , it is of importance to come up with solutions to the csit acquisition problem for fdd massive mimo systems .conventional csit acquisition for fdd mimo systems consists of two separate steps : channel estimation in the downlink and feedback of csi in the uplink .first , the bs transmits orthogonal pilots in the downlink , and each user estimates its own channel using the pilot observation . commonly used channel estimation algorithms include least squares ( ls ) and minimum mean square error ( mmse ) .then , the estimated channel is fed back to the bs via dedicated uplink channels .since the number of pilots grows with the number of transmit antennas at the bs , overhead of downlink pilot signaling becomes overwhelming for massive mimo systems .also , the overhead of csi feedback is a serious concern due to the same reason . in order to address these issues ,various approaches have been proposed in recent years - . in and ,authors propose to reduce the downlink training ovehead by carefully designing the training pilots . in , an approach to reduce the csi feedback overhead whenthe bs antennas are highly correlated has been proposed . in ,an approach based on compressive sensing ( cs ) has been proposed to reduce both the downlink training overhead and uplink csi feedback overhead .while this approach is promising when the channel matrices of different users are sparse and partially share common support , such is not true when these assumptions are violated . in this letter ,we propose a joint csit acquisition scheme based on low - rank matrix completion for fdd massive mimo systems .specifically , the bs transmits pilots for downlink channel training and the scheduled users _ directly _ feed back the pilot observation to the bs without performing the individual channel estimation .then , the joint recovery of the csi for all users is performed at the bs based on the low - rank matrix completion algorithm , whereby the low - rank property of the massive mimo channel matrix caused by correlation among users is exploited . 
in this way, the overhead of downlink channel training as well as uplink channel feedback can be reduced , which will be verified by simulation results ._ notation _ : lower - case and upper - case boldface letters denote vectors and matrices , respectively ; , and denote the transpose , conjugate transpose , and inverse of a matrix , respectively ; is the right moore - penrose pseudoinverse ; denotes the rank of ; and denote the vectorization and unvectorization , respectively ; denotes the kronecker product ; denotes the identity matrix of size ; is the - ; is the nuclear norm denoting the sum of singular values of .we consider the downlink of fdd massive mimo system with antennas at the bs and users with single receive antenna .the bs transmits pilots at the -th channel use ( ) . at the -th user , the pilot observation during channel usescan be expressed as where ] , where and denote the antenna spacing at the bs and carrier wavelength , respectively .in conventional csit acquisition schemes , the channel vector of each user is estimated individually using classical algorithms such as ls or mmse , and then the estimated csi is fed back to the bs .for example , ls algorithm generates the estimated channel vector . in our work , we propose a joint csit acquisition scheme , where each user directly feeds back its own pilot observation to the bs for the joint mimo channel recovery of all users .the aggregate pilot observation ^t\in\mathcal{c}^{k\times t} ] is the mimo channel matrix to be recovered , and ^t ] . as , we have . for massive mimo systems , and are usually large but the number of resolvable paths is relatively small due to the limited number of clusters around the bs , , so that . that is , the rank of of size is much smaller than its dimension . in the sequel, we call this property as low - rank property " of the massive mimo channel matrix . the pilot observation at the bs can be expressed as where is the uplink rayleigh fading channel matrix whose entries follows , and is the uplink noise matrix whose entries follow . to recover the downlink channel matrix at the bs, we firstly estimate the aggregate pilot observation by then , by exploiting the low - rank property of , the joint mimo channel recovery problem at the bs can be formulated as a rank minimization problem : note that this problem is non - convex and np - hard .one possible solution to avoid the computional difficulty is to use the nuclear norm minimization problem note that this problem can be solved by semidefinite programming ( sdp ) , but the computational complexity of the solver ( e.g. , sedumi ) is still high especially when the problem dimension is large in massive mimo systems . to alleviate the computational complexity, we need to reformulate the problem .firstly , we vectorize ( [ eq4 ] ) as where , , and . then , the joint mimo channel recovery problem can be reformulated as a low - rank matrix completion problem : where . without the low - rank constraint , it is clear that the solution to the unconstrained optimization problem can be easily obtained by using the classical gradient descent algorithm or newton s algorithm . however , when the low - rank constraint is added , novel algorithm must be developed to solve the constrained optimization problem ( [ eq10 ] ) . the solution to ( [ eq10 ] ) can be obtained by using singular value projection ( svp ) based algorithms or riemannian pursuit ( rp ) algorithms . in this letter , we use the modified version of the svp - based algorithm . 
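before turning to the algorithmic details , the following self - contained sketch illustrates the low - rank property and the aggregate observation model discussed above ; the shared - scatterer channel construction , the pilot matrix , the dimensions and the noise level are assumptions made for illustration and are not the simulation setup of the letter .

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, P = 64, 16, 4               # BS antennas, users, resolvable paths (assumed)
d_over_lam = 0.5                  # antenna spacing over carrier wavelength (assumed)

def steering(theta):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lam * m * np.sin(theta))

thetas = rng.uniform(-np.pi / 3, np.pi / 3, size=P)      # common scatterer directions
A = np.stack([steering(t) for t in thetas], axis=1)      # M x P steering matrix
G = (rng.standard_normal((K, P)) + 1j * rng.standard_normal((K, P))) / np.sqrt(2)
H = G @ A.T                                               # K x M channel matrix

print(np.linalg.matrix_rank(H), min(K, M))                # rank P << min(K, M)

# downlink training: pilot matrix X (M x T) with T << M; users feed back Y
T = 24
X = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2 * M)
N = 1e-2 * (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))
Y = H @ X + N                                             # aggregate pilot observation
```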
for traditional svp - based algorithms such as svp - based gradient decent algorithm( svp - g ) and svp - based newton s algorithm ( svp - n ) , the solution satisfying the low - rank constraint can be achieved by svp at every iteration . in the -th iteration , the current result of linear searchis projected onto a low - rank matrix , which is defined as , where is the most significant singular values of .the resulting low - rank matrix will be the starting point of a linear search for the next iteration . ; ; .+ initialization : , , + , , . , % svp - n , % svp - g .however , as the cost function in ( [ eq10 ] ) is a quadratic convex function of , svp - n simply converges after one iteration ( see appendix a ) .since the svp operation is performed only once ( i.e. , the low - rank constraint will be used only once ) , the performance of svp - n is generally not appealing . on the other hand ,svp - g executes svp in every iteration and hence a better solution can be achieved at the cost of slow convergence . to combine the advantages of svp - n and svp - g, we propose the svp - based hybrid low - rank marix completion algorithm ( svp - h ) as shown in * algorithm 1 * , where svp - n is used in the first iteration ( step 4 ) to realize fast convergence and svp - g is used for the rest iterations ( step 6 ) to achieve high accuracy . during the -th iteration , the solution is obtained through a line search along the negative gradient or newton s direction ( step 8) .after that , the unvectorized solution of is projected onto a low - rank matrix via svp ( step 9 ) . the vectorized solution of used as the starting point of a linear search for the next iteration .note that in the proposed svp - h algorithm , the search direction for svp - g is the gradient , while the search direction for svp - n is the newton s direction . the optimal step size is chosen to minimize .that is , denoting , then the derivative of is combining this together with , we have , and thus the optimal step size is given by the existing algorithms to solve the sdp problem ( casted from ( [ eq9 ] ) ) have high complexity .if the general - purpose sdp algorithm such as sedumi is employed , the complexity would be burdensome . for the svp - g algorithm , in each iteration, the matrix multiplication to compute the search direction has the complexity , since .the computation of the step size is complex but we can simply assume a constant step size by allowing marginal decrease in the convergence speed .the svp operation requires the complexity .thus , the complexity of svp - g algorithm is , where is the number of iterations . for the svp - n algorithm , since , the matrix multiplication to compute the solution has the complexity ( see appendix a ) .there is no need to compute the optimal step size for svp - n since the step size for svp - n is a constant ( ) as shown in appendix a. thus , the complexity of svp - n is . finally , we can conclude that the proposed svp - h algorithm has the complexity , which is much lower than that of existing sdp algorithms .in this section , we investigate the performance of the proposed joint csit acquisition scheme as well as the svp - h algorithm .the simulation parameters are set as : , , ; , ; , .the overhead for downlink channel training as well as uplink channel feedback are channel uses . 
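the next sketch shows the mechanics of the svp step ( a truncated svd ) and of an svp - g style projected - gradient iteration on a toy instance with the same structure as above ; the fixed step size , the iteration count and the data are assumptions , and no claim is made that this toy setup reproduces the svp - h schedule , the optimal step size rule , or the recovery guarantees discussed in the letter .

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, T, r = 16, 64, 32, 4

def svp(Z, r):
    """Rank-r projection by truncated SVD (the 'SVP' step)."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r, :]

# toy data with the same structure as above: Y = H X + noise, rank(H) = r
H_true = (rng.standard_normal((K, r)) + 1j * rng.standard_normal((K, r))) @ \
         (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M))) / M
X = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(M)
Y = H_true @ X + 1e-3 * (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))

# SVP-G style projected-gradient iteration on f(H) = ||Y - H X||_F^2
mu = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)   # conservative fixed step (assumption)
H = np.zeros((K, M), dtype=complex)
for it in range(200):
    grad = -2.0 * (Y - H @ X) @ X.conj().T
    H = svp(H - mu * grad, r)

# data-fit residual; recovery of H itself is a separate question that depends
# on the pilot design and on RIP-type conditions from the SVP literature
print(np.linalg.norm(Y - H @ X) / np.linalg.norm(Y))
```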
in fig .[ svp_itr ] , we compare the normalized mean squared error ( nmse ) performance of the conventional csit acquisition scheme and the proposed joint csit acquisition scheme .note that the nmse of the joint orthogonal matching pursuit ( j - omp ) algorithm based compressive csit estimation and feedback scheme proposed in is also presented for comparison .the uplink channel is assumed to suffer from rayleigh fading , and both the downlink and uplink signal - to - noise ratio ( snr ) are set to 25 db . as a conventional channel estimation scheme at the user side , we use the widely used ls algorithm .in addition , the proposed joint csit acquisition using conventional svp - n , svp - g algorithms and the proposed svp - h algorithm for joint mimo channel recovery at the bs side are also compared in fig .[ svp_itr ] . due to the utilization of correlations among users and the resulting low - rank property of mimo channel matrix ,it is clear that the proposed schemes using svp - g , svp - n , and svp - h outperform the conventional one using ls .the nmse performance of j - omp is not good because the angular - domain channel matrix in our system model does not satisfy the sparse channel assumption , and the imperfect uplink channel degrades the performance of j - omp .we also observe that both svp - h and svp - g achieve much smaller nmse than svp - n achieves because they repeatedly exploit the low - rank constraint as mentioned in section iii - b .in addition , svp - h converges faster than svp - g due to the utilization of fast convergent svp - n in the first iteration .that is to say , the proposed svp - h algorithm achieves accurate csit and fast convergence .[ svp_n_c_t ] we next investigate the overhead reduction of the proposed joint csit acquisition scheme against the channel use . note that the nmse performance of sdp - based method and rp algorithm have also been shown in fig .[ svp_n_c_t ] for comparison .we can observe that both approaches perform similar to the proposed svp - h algorithm .we can also observe that the channel use required for the proposed scheme using svp - h is much smaller than that required for the conventional scheme .for example , to achieve the targeted nmse , the channel use required for the conventional scheme is , while that required for the proposed scheme is .this clearly indicates that the proposed scheme can reduce the overhead of downlink channel training and uplink channel feedback .in this letter , we investigate a novel csit acquisition scheme for fdd massive mimo systems by exploiting the property that the channel matrix of massive mimo system has low - rank structure . using this property, we formulate the joint csit acquisition scheme as a low - rank matrix completion problem .simulations have verified that the proposed svp - h algorithm can achieve accurate csit with fast convergence .by substituting into ( [ eq14 ] ) , we can compute the step size of svp - n the solution in the -th iteration is given by which is a constant vector and independent of the iteration index .thus , svp - n algorithm obtains the solution after the first iteration .f. zhu , f. gao , m. yao , and h. zou , joint information- and jamming- beamforming for physical layer security with full duplex base station , " _ ieee trans . signal process .24 , pp . 6391 - 6401 , dec .f. rusek , d. persson , b. lau , e. larsson , t. marzetta , o. edfors , and f. tufvesson , scaling up mimo : opportunities and challenges with very large arrays , " _ ieee signal process . 
mag ._ , vol . 30 , no .40 - 60 , jan . 2013 .j. choi , d. j. love , and p. bidigare , downlink training techniques for fdd massive mimo systems : open - loop and closed - loop training with memory , " _ ieee j. sel . areassignal process ._ , vol . 8 , no .802 - 814 , oct . 2014 .n. song , m. d. zoltowski , and d. j. love , downlink training codebook design and hybrid precoding in fdd massive mimo systems , " in _ proc .ieee global commun .( ieee globecom14 ) _ , dec . 2014 , pp . 1631 - 1636 .b. lee , j. choi , j. seol , d. j. love , and b. shim , antenna grouping based feedback reduction for fdd - based massive mimo systems , " in _ proc .conf . on commun .( ieee icc14 ) _ , jun .2014 , pp .4477 - 4482 .y. shi , j. zhang , and k. b. letaief , low - rank matrix completion via riemannian pursuit for topological interference management , " in _ proc .information theory ( ieee isit15 ) _ , jun .2015 , pp .1831 - 1835 .t. d. bie and n. cristianini , fast sdp relaxations of graph cut clustering , transduction , and other combinatorial problems , " _ journal of machine learning research _ , vol . 7 , pp .1409 - 1436 , dec . 2006 .r. meka , p. jain , and i. s. dhillon , guaranteed rank minimization via singular value projection , " in _ proc .neural information processing systems ( nips10 ) _ , 2010 . | channel state information at the transmitter ( csit ) is essential for frequency - division duplexing ( fdd ) massive mimo systems , but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback . in this letter , we propose a joint csit acquisition scheme to reduce the overhead . particularly , unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station ( bs ) , we propose that all scheduled users directly feed back the pilot observation to the bs , and then joint csit recovery can be realized at the bs . we further formulate the joint csit recovery problem as a low - rank matrix completion problem by utilizing the low - rank property of the massive mimo channel matrix , which is caused by the correlation among users . finally , we propose a hybrid low - rank matrix completion algorithm based on the singular value projection to solve this problem . simulations demonstrate that the proposed scheme can provide accurate csit with lower overhead than conventional schemes . massive mimo , fdd , csit , low - rank matrix completion . |
the goal of quantum state tomography , or quantum state estimation , is to arrive at a best guess of the unknown state of a quantum system , based on data collected from measuring a number of identical copies of the state .an accurate guess is needed in all aspects of quantum information or quantum computation , ranging from the characterization of an unknown quantum communication channel , to a check of a quantum gate implementation , or to the verification of a state preparation procedure in the lab . whether the guess from a particular tomography recipe can be considered the best , or most accurate , depends on one s figure - of - merit , which should be chosen according to the quantum information processing task at hand . in many situations ,one is interested not in the state of the system itself , but in a quantity computed from it , _e.g. _ , the amount of entanglement in the state . in such cases , rather than reporting a best guess for , one expects to get a more accurate answer by directly estimating the quantity of interest from the data , as done in a related procedure carrying the name of `` parameter estimation . '' in other situations , one is interested in a range of quantities related to , and reporting a best guess for the state itself [ _ e.g. _ , by maximizing the likelihood for the data over all physical states , as is done for the maximum - likelihood estimator ( mle ) ] can be a convenient and self - consistent way of summarizing and interpreting the data . in the latter case , one might assess the accuracy of the estimate obtained from a particular tomography scheme by examining some measure of closeness between the estimate , and the true state , for a variety of known true states ( _ e.g. _ , from sources that have previously been fully characterized ) .a poorer , but possibly useful , gauge is to look at the accuracy of the prediction of one of the quantities of interest computed from . reference compares the performance between an estimator from a procedure the authors refer to as linear inversion ( lin ) , and two other estimators , ( the standard mle ) , and ( from a procedure known as `` free least squares '' ( fls ) ) .the article assesses the three estimators by looking at the accuracy in the prediction for target fidelity , a relevant quantity when the source is supposed to produce a certain target state .the fidelity takes value between and for physical and . in ref ., the quality of an estimator is measured by the target fidelity , the fidelity between the target state and the estimator , which is compared with the actual true value computed for the true state . here , the `` true '' state yields the probabilities that are used for the generation of the simulated data from which the various estimators are derived .the authors of ref . perform this comparison for different and , for many repeated simulations of the measurement data ( and hence , many , one for each data ) .they draw the conclusion that is always the best , because it is an unbiased estimator , _i.e. _ , fluctuations from different runs of the same experiment lead to fluctuations in the predicted value , but all centered about the true value ; and , on the other hand , give predictions that are biased , _i.e. _ , have a systematic shift away from ( see fig . 1 of ref . and fig .[ fig : noisyghz ] below ) .the authors go on to point out that any estimation procedure that always produces a physical ( _ i.e. 
_ , nonnegative ) state will unavoidably be biased ; their , coming from an unbiased estimation procedure , is not guaranteed to be a physical density operator , and in fact , generically has negative eigenvalues . here , the qualifiers _ biased _ and_ unbiased _ have the technical meaning that is discussed below in the context of eq .( [ twopieces ] ) .contrary to their connotation in common parlance , they are not synonyms of `` bad '' and `` good . ''one must not fall into the trap of regarding a biased estimator as automatically inferior to an unbiased one .indeed , it is well known in classical statistics that unbiased estimators are not always the best choice . instead , minimizing the mean squared error ( mse , a popular measure of estimation accuracy ) is key , and this is often not accomplished by minimizing the bias .in fact , we will show that the lin approach yields mses that are comparable to ( and sometimes worse than ) what one obtains from the mle procedure ; yet the lin technique forces us to give up physicality , which leads to many severe problems and highly restricts the usefulness of the estimator .the mle itself also does not and was never designed to minimize the mse , but it does a comparably good job as while enforcing physicality .we hence see little utility at all in employing the lin strategy .below , we remind the reader why , in the quantum context , it is usually not a good idea to treat relative frequencies obtained from the data as probabilities , as is prescribed by the lin procedure of ref . .then , we explain why focusing on reducing bias only , and not the overall mse , constitutes a conceptual misunderstanding .lastly , we compare the mses obtained from the mle and the lin approaches and observe that one can easily find examples in which the biased mles have smaller mses than the unbiased lin estimators .before we begin , a brief note on notation is in order .the tomography measurement is described by a positive - operator - valued measure ( povm ) , or , if we use a more descriptive name , a probability - operator measurement ( pom ) : it comprises a set of outcomes , one for each detector , with for all and .the probability of getting a click in detector , corresponding to outcome , is given by the born rule , .in the tomography experiment , identically prepared copies of the ( unknown ) state are measured using the pom . the data consist of a particular sequence of detector clicks , summarized by the set of relative frequencies , where is the number of clicks in detector . from ,one estimates the probabilities using a chosen procedure like mle or lin , and from these ( if one can , _e.g. _ , in the case of tomographically complete poms ) , one constructs the estimator .lin , as proposed in ref ., sets the estimated probabilities equal to the relative frequencies of the observed data , and then obtains by `` linear inversion '' of the born rule . 
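as a concrete illustration of these points , here is a small simulation for the single - qubit tetrahedron ( sic ) measurement , which is also discussed further below ; the true state , the number of copies and the crude ` project onto the bloch ball ' estimator ( which is not the mle ) are assumptions made for illustration . since the euclidean projection onto the bloch ball is the closest physical state in hilbert - schmidt distance , and the true state lies in that convex set , the projected ( biased ) estimator never has a larger bloch - vector mse than the unbiased lin estimator .

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
r_true = np.array([0.0, 0.0, 0.95])        # nearly pure true state (assumption)
p = (1.0 + a @ r_true) / 4.0               # Born-rule tetrahedron probabilities

N, runs = 100, 10000
unphysical, se_lin, se_proj = 0, 0.0, 0.0
for _ in range(runs):
    f = rng.multinomial(N, p) / N          # relative frequencies
    r_lin = 3.0 * (f @ a)                  # linear inversion of the Born rule
    if np.linalg.norm(r_lin) > 1.0:        # outside the Bloch ball => negative eigenvalue
        unphysical += 1
    r_proj = r_lin / max(1.0, np.linalg.norm(r_lin))   # crude physical projection
    se_lin += np.sum((r_lin - r_true) ** 2)
    se_proj += np.sum((r_proj - r_true) ** 2)

print("fraction of unphysical LIN estimates:", unphysical / runs)
print("Bloch-vector MSE, LIN vs projected:", se_lin / runs, se_proj / runs)
```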
while relative frequencies will be close to probabilities when there is a lot of data , they are most certainly not the same thing : relative frequencies satisfy only one constraint , that of unit sum : ; probabilities ( for pom ) that arise from a physical state through the born rule satisfy further constraints imposed by the positivity of .the latter constraints can be easily stated for the case of measuring a qubit state with the symmetric informationally complete pom ( sic pom ) , the tetrahedron measurement , where requires the four tetrahedron probabilities to satisfy , in addition to the unit - sum constraint . since the relative frequencies do not themselves satisfy these physicality constraints , is hence not necessarily a physical state , as is also emphasized in ref .( and many other existing references in the literature ) . that is not necessarily nonnegative , is not a minor nuisance : many quantities associated with a physical state are ill - defined for and can no longer be computed ,_ e.g. _ , entropy , negativity , and the fidelity with another state .other quantities , such as the purity or the expectation value of an observable , are computable for , but the numbers so obtained do not mean purity , expectation value , etc .hence , may not just lack a reasonable physical interpretation , but may also not be useful at all . in the case of uncomputable quantities , the proposal of ref . is to be content with the bounds that can be computed from linear approximations .these bounds , however , also lack a physical meaning if they are evaluated for . while one might choose not to be too concerned if is only slightly unphysical ( however one may want to quantify that statement ) , or if an unphysical occurs only rarely , getting an unphysical can be generic in certain situations .for example , imagine a qubit state measured with the tetrahedron measurement , and suppose that the true state is orthogonal to one of the tetrahedron outcomes ( say the one labeled by ) .then , the only relative frequencies that can give a physical are and , _ i.e. _ , the detector counts for all outcomes , other than the tetrahedron leg orthogonal to the true state , must be exactly equal .this is not even possible if the total number of counts is different from a multiple of 3 .mse-fig1.pdf lest the reader complains that the above is a pathological case , another situation where one sees a stark contrast between frequencies and probabilities can be found in the commonly used `` bb84-like '' measurements , _i.e. _ , measure the pauli and on a single qubit with equal probability . in an optical implementation ,where the qubit is the photon polarization , the usual way this measurement is implemented is by having a 50 - 50 beam - splitter direct the incoming photons into two possible paths , one carrying out the measurement , the other the measurement ; see fig .[ fig : bb84 ] .now , the probabilities for such a measurement , by the very nature of the measurement structure , satisfy , alongside the positivity constraint .the relative frequencies , however , obey no such constraints : despite the 50 - 50 nature of the beam - splitter , one hardly ever encounters the situation where _ exactly _ half the photons travel down the path , and half down the other .the procedure of finding from such relative frequencies will then typically be internally inconsistent , and yields no solution .one common fix used to circumvent the above problem requires one to ignore the counts in one of the detectors , _e.g. 
_ , the one measuring the eigenstate for ( outcome in fig .[ fig : bb84 ] ) . to ensure that the relative frequencies comply with the constraint of , in imitation of the probabilities , one replaces the actual count obtained by , and modifies the total number to be .however , this ad - hockery , which involves discarding data , does not guarantee that the relative frequencies satisfy the remaining positivity constraint on the probabilities . the simulations in ref .mimic the tomography of four - qubit states using product pauli poms , _i.e. _ , measure \equiv o_{1}\otimes o_{2}\otimes o_{3}\otimes o_{4}} ] sum to , as do the corresponding probabilities .however , these constraints are not the only ones needed to ensure internal consistency .there is , for example , the problem that frequencies obtained when measuring , say , ] give two values for the expectation values of the two - qubit observable =x\otimes1\otimes1\otimes y ] and ] , ] as well as ] and the observables obtained by permuting the four qubits .the ghz state is an eigenstate of every one of these observables .hence , there are no statistical fluctuations when their expectation values are estimated from the data obtained from the product pauli pom . | because of the constraint that the estimators be _ bona fide _ physical states , any quantum state tomography scheme including the widely used maximum likelihood estimation yields estimators that may have a bias , although they are consistent estimators . schwemmer _ et al . _ ( arxiv:1310.8465 [ quant - ph ] ) illustrate this by observing a systematic underestimation of the fidelity and an overestimation of entanglement in estimators obtained from simulated data . further , these authors argue that the simple method of linear inversion overcomes this ( perceived ) problem of bias , and there is the suggestion to abandon time - tested estimation procedures in favor of linear inversion . here , we discuss the pros and cons of using biased and unbiased estimators for quantum state tomography . we conclude that the little occasional benefit from the unbiased linear - inversion estimation does not justify the high price of using unphysical estimators , which are typically the case in that scheme . |
we consider solving the laplace - beltrami problem on a smooth two dimensional surface embedded into a three dimensional space partitioned into a mesh consisting of shape regular tetrahedra .the mesh does not respect the surface and thus the surface cuts through the elements .following olshanskii , reusken , and grande we construct a galerkin method by using the restrictions of continuous piecewise linears defined on the tetrahedra to the surface . the resulting discrete method may be severely ill - conditioned and the main purpose of this paper is to suggest a remedy for this problem based on adding a consistent stabilization term to the original bilinear form .the stabilization term we consider here controls jumps in the normal gradient on the faces of the tetrahedra and provides a certain control of the derivative of the discrete functions in the direction normal to the surface .similar terms have recently been used for stabilization of cut finite element methods for fictitious domain methods , , , and . notethat none of these references involve any partial differential equations on surfaces only regular boundary and interface conditions . in principle , it is possible , in this situation , to deal with the ill conditioning problem in the linear algebra using a scaling , see .starting from a stable method has clear advantages in more complex applications that may need stabilization anyway , such as problems with hyperbolic character or coupled bulk - surface problems .it is also not clear that matrix based algebraic scaling procedures is possible in all situations and thus alternative approaches must be investigated . using the additional stability we first prove an optimal estimate for the condition number , independent of the location of the surface , in terms of the mesh size of the underlying tetrahedra .the key step in the proof is certain discrete poincar estimates that are also of general interest .then we prove a priori error estimates in the energy and norms . in a companion paper, we will consider the more challenging problems of the surface helmholtz equation and show error estimates for a stabilized method under a suitable condition on the product of the mesh size and the wave number .finally , we refer to , , , and for general background on finite element methods for partial differential equations on surfaces .the outline of the reminder of this paper is as follows : in section 2 we formulate the model problem and the finite element method , in section 3 we summarize some preliminary results involving lifting of functions from the discrete surface to the continuous surface , in section 4 we prove an optimal bound on the condition number of the stabilized method , in section 5 we prove a priori error estimates in the energy and norms , and finally in section 6 we present numerical investigations confirming our theoretical results .let be a smooth -dimensional closed surface embedded in , or , with signed distance function such that the exterior surface unit normal is given by .let be the nearest point projection mapping onto , i.e. , is the point on that minimizes the euclidian distance to . for let be the tubular neighborhood of .then and there is a such that for each there is a unique .using we may extend any function defined on to by defining we consider the following problem : find such that where is a given function such that . 
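to fix ideas , here is a tiny numerical sketch of the closest point projection and of the normal extension for the unit sphere ; the sample surface function and the finite - difference gradient are illustrative assumptions . it checks that the extension has no derivative in the normal direction , so its full gradient already coincides with the tangential gradient .

```python
import numpy as np

def closest_point(x):                 # unit sphere centred at the origin
    return x / np.linalg.norm(x)

def u_surface(y):                     # sample function defined on the sphere (assumption)
    return y[0] * y[1]

def u_ext(x):                         # extension u^e = u o p, constant along normals
    return u_surface(closest_point(x))

def grad(f, x, eps=1e-6):             # central finite differences, for illustration only
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([0.3, -0.8, 1.1])        # a point in a tubular neighbourhood of the sphere
n = x / np.linalg.norm(x)             # normal direction of the level sets of the distance
P = np.eye(3) - np.outer(n, n)        # tangential projection
g = grad(u_ext, x)

print(np.dot(n, g))                   # ~ 0 : the extension has no normal derivative
print(P @ g - g)                      # ~ 0 : grad u^e is already tangential
```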
here is the laplace - beltrami operator defined by where is the tangent gradient with the projection of onto the tangent plane of at , defined by where is the identity matrix , and the gradient .let and be the inner product and norm on the set equipped with the appropriate measure .let be the sobolev spaces on with norm where and the norm for a matrix is based on the pointwise frobenius norm .we then have the weak problem : find such that where it follows from the lax - milgram lemma that the weak problem has a unique solution for such that . for smooth surfaces we also have the elliptic regularity estimate here and below denotes less or equal up to a positive constant .let be a quasi uniform partition into shape regular tetrahedra for and triangles for with mesh parameter of a polygonal domain in completely containing .let be the space of continuous piecewise linear polynomials defined on .let be an approximation of the distance function and let be the zero levelset then is piecewise linear and we define the exterior normal to be the exact exterior unit normal to .we consider a family of such surfaces such that ( a ) , ( b ) the closest point mapping , is a bijection , and ( c ) the following estimates hold for .these properties are , for instance , satisfied if is the lagrange interpolant of and is small enough .let and be the continuous piecewise linear functions defined on with average zero .the finite element method on takes the form : find such that here the bilinear form is defined by with and ,[{\boldsymbol n}_f \cdot \nabla w])_f\ ] ] where denotes the set of internal interfaces in , = ( { \boldsymbol n}_f \cdot \nabla v)^+ - ( { \boldsymbol n}_f \cdot \nabla v)^-$ ] with , is the jump in the normal gradient across the face , denotes a fixed unit normal to the face , and is a constant of .the tangent gradients are defined using the normal to the discrete surface and the right hand side is given by introducing the mesh dependent norm where we note that is indeed a norm on ( for fixed ) since if we have . to see thiswe first note that if then must be a linear polynomial with , , and the center of gravity of , here since , secondly if and thus since is a closed surface and thus can not be normal to everywhere .thus it follows from the lax - milgram lemma that there exists a unique solution to ( [ eq : fem ] ) .in this section we collect some essentially standard results , see , , and , related to lifting of functions from the discrete surface to the exact surface . for each let be the distance function to the hyperplane , with normal , that contains and be the associated nearest point projection onto the hyperplane .then we define the mapping where and we defined for .we note that is an invertible mapping , , and is a partition of .the derivative of is given by where we used the identity . here and we have the identity where are the principal curvatures of with corresponding principal curvature vectors , see lemma 14.7 .thus we note that is tangential to and that for small enough we have the estimate .in particular , on we have and thus we obtain the simplified expression where we used the fact that is a tangential tensor to , i.e. . 
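the geometric step of the method , namely extracting the discrete surface from the piecewise linear level set element by element , can be sketched as follows ; degenerate cuts where the level set passes exactly through a vertex are ignored , and this is an illustration of the intersection step only , not of the stabilized assembly .

```python
import numpy as np

def cut_tet_by_levelset(V, phi, tol=1e-12):
    """Intersection of the zero level set of a linear field with one tetrahedron.
    V: (4,3) vertex coordinates, phi: (4,) nodal level-set values.
    Returns a list of triangles (each a (3,3) array); empty if the element is not cut.
    Vertices lying exactly on the level set are not handled (simplification)."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        if phi[i] * phi[j] < -tol:                       # sign change along the edge
            t = phi[i] / (phi[i] - phi[j])               # linear interpolation parameter
            pts.append((1 - t) * V[i] + t * V[j])
    if len(pts) < 3:
        return []
    pts = np.array(pts)
    # the cut is a planar convex polygon: order its vertices by angle around the centroid
    a = np.linalg.solve(np.c_[V, np.ones(4)], phi)[:3]   # gradient of the linear field
    n = a / np.linalg.norm(a)
    e1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-8:
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    c = pts.mean(axis=0)
    ang = np.arctan2((pts - c) @ e2, (pts - c) @ e1)
    pts = pts[np.argsort(ang)]
    return [np.array([pts[0], pts[k], pts[k + 1]]) for k in range(1, len(pts) - 1)]

# example: reference tetrahedron cut by the plane x + y + z = 0.5
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
phi = V.sum(axis=1) - 0.5
tris = cut_tet_by_levelset(V, phi)
print(len(tris), sum(0.5 * np.linalg.norm(np.cross(t[1] - t[0], t[2] - t[0])) for t in tris))
```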
the mapping maps the tangent and normal spaces of at onto the tangent and normal spaces of at .we define the lift of a function to by the tangent gradient of is given by using the fact that and preserves the normal and tangent directions we finally obtain the identity where we introduced the notation here is a symmetric matrix with eigenvalues , which are all strictly greater than zero on for small enough , and is nonsymmetric with singular values , which are also strictly greater than zero .we will need the following estimates for : the first estimate follows directly from the definition of , the second can be proved using the the fact that eigenvalues and singular values discussed above are all strictly greater than zero . to prove the third we proceed as follows where we collected the terms involving the distance function in the last term and used the assumption ( [ eq : rhobounds ] ) that .next for the remaining term we write and then we note that the identity holds . using the bound estimate follows .the surface measure on is related to the surface measure on by the identity where is the determinant of which is given by using this identity we obtain the estimate where we finally used the bound . in summarywe have the following estimates for the determinant this section we derive several discrete poincar estimates .we begin with the standard poincar inequality on for functions in with average zero and a constant uniform in for small enough .then we show estimates that essentially quantifies the improved control of the solution and its gradient provided by the gradient jump stabilization term . in order to prepare for the proof of our main estimates lemma [ lemmacondb ] and lemma [ lemmacondbb ]we first prove a poincar inequality for piecewise constant functions defined on in lemma [ lemmaconda ] and then in lemma [ lemmacondaa ] we quantify the improved control of the total gradient provided by the stabilization term .the proof of lemma [ lemmaconda ] builds on the idea of using a covering of in terms of sets consisting of a uniformly bounded number of elements . on these sets a local poincar estimate holds for functions with local average zero .the local averages can then be approximated by a smooth function for which a standard poincar estimate on the exact surface can finally be applied .this approach is used in to prove korn s inequality in a tubular neighborhood of a smooth surface .in contrast to our proof handles discrete functions and the fact that is a polygon that changes with the mesh size .[ lemmapoincaresigmah ] let be the average . then the following estimate holds for with small enough .let be the average of .using the fact that is the constant that minimizes and then changing coordinates to followed by the standard poincar estimate on we obtain where we mapped back to in the last step .this concludes the proof .[ lemmaconda ] let be a piecewise constant function on and let be the average on . then the following estimate holds \|^2_f\ ] ] for with small enough .let and let for .next we let we note that there is a uniform bound on the number of elements in since the mesh is quasiuniform . 
furthermore, if is a set of points on such that is a covering of then is a covering of .next let be smooth , nonnegative , have compact support , and be constant equal to in a neighborhood of .define and then and we have the estimates on all of the sets we have the local poincar estimate \|_f^2\ ] ] where is the average of over and is the set of interior faces in .we finally let be defined by with average with these definitions we have the estimates \|_f^2 + \sum_{{\boldsymbol x}\in \mathcal{x}_h } h\| a_{{\boldsymbol x } } - a \|^2_{d_{h,{\boldsymbol x}}}\end{aligned}\ ] ] where we used the fact that is the constant that minimizes in ( [ eq : conda : aaa ] ) and the local poincar estimate ( [ eq : localpoincare ] ) to estimate the first term and the estimate together with the fact that is a constant to estimate the second term in ( [ eq : conda : bbb ] ) .next we split the second term in ( [ eq : conda : ccc ] ) by adding and subtracting the constant and the function in each term in the sum as follows we proceed with estimates of terms .[ [ term - boldsymbol - i ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + using the fact that is constant on , the bound , the definition of , and the identity , we obtain \|_f^2 \end{aligned}\ ] ] where we used cauchy - schwarz in ( [ lemma : a : i : a ] ) , the bounds ( [ eq : varphibounds ] ) for in ( [ lemma : a : i : b ] ) , an element wise inverse inequality in ( [ lemma : a : i : c ] ) , and finally the local poincar estimate ( [ eq : localpoincare ] ) in ( [ lemma : a : i : d ] ) .thus we have the estimate \|_f^2\ ] ] [ [ term - boldsymbol - iboldsymbol - i ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + using the estimate followed by the fundamental theorem of calculus we obtain for each we have and thus we find that \|_f^2\end{aligned}\ ] ] where we used the local poincar estimate ( [ eq : localpoincare ] ) . taking the supremum over we obtain \|_f^2\ ] ] where is the set of interior faces contained in the set .thus we conclude that \|_f^2\ ] ] [ [ term - boldsymbol - iboldsymbol - iboldsymbol - i ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + using the standard poincar estimate on we obtain \|_f^2\end{aligned}\ ] ] where we used ( [ eq : maxatilde ] ) .starting from the bound ( [ eq : conda : ccc ] ) and using the bounds of terms and for the second term we arrive at \|_f^2 \lesssim \sum_{f \in { \mathcal{f}}_{h } } h^{-1}\|[v]\|_f^2\ ] ] where , in the last inequality , we used the fact that there is a covering such that we have a uniform bound on the number of sets that each edge belongs to . to construct such a coveringwe let and then and we note that .thus is a covering of .next we note that there is a constant such that since the mesh is quasiuniform . 
the covering number of is uniformly bounded by the number of points in the set .[ lemmacondaa ] the following estimate holds for with small enough .we have for all .choosing such that , it follows from lemma [ lemmaconda ] that next , to estimate we first map to and then use the fact that is a closed surface followed by finite dimensionality of the constant functions to conclude that where we mapped back to the discrete surface and used the estimate .finally , for with sufficiently small , a kick back argument leads to the estimate next , writing , we have combining the estimates ( [ eq : condaaa ] ) , ( [ eq : condaab ] ) , ( [ eq : condaac ] ) , and ( [ eq : condaad ] ) , we obtain the desired result .we are now ready to state and prove our poincar inequality .[ lemmacondb ] the following estimate holds for with small enough . using the same notation as in lemma [ lemmaconda ]we first show that there is a constant such that for each and , , there exists an element such that we say that such an element has a large intersection with .assume that there is no such element .then there is a sequence such that since there is a uniform bound on the number of elements in we obtain the estimate which is a contradiction since since .furthermore , the following estimate holds where we introduced the notation \|_f^2\ ] ] to prove ( [ eq : condba ] ) , let and be two elements sharing a common face .then we have the following identity { \boldsymbol n}_f \cdot ( { \boldsymbol x}-{\boldsymbol x}_f)\ ] ] where is the center of gravity of the face .thus { \boldsymbol n}_f \cdot ( { \boldsymbol x}-{\boldsymbol x}_f)\|^2_{k_2 } \\\label{condterm2 } & \lesssim \| v_1 \|^2_{k_1 } + h^{3 } \| [ { \boldsymbol n}_f \cdot \nabla v ] \|^2_f\end{aligned}\ ] ] iterating this bound and summing over all elements in we obtain here may be estimated using the inverse bound which holds due to quasiuniformity and the fact that the intersection with of satisfies . combining ( [ eq : condb : b ] ) and ( [ eq : condb : c ] ) we obtain ( [ eq : condba ] ) . using a covering of witha uniformly bounded covering number as constructed in lemma [ lemmaconda ] , we obtain where we used the poincar inequality , see lemma [ lemmapoincaresigmah ] .this concludes the proof .we conclude this section with a version of the previous lemma which involves the norm instead of the energy norm .[ lemmacondbb ] the following estimate holds for with small enough . starting from ( [ eq: conda : b ] ) we have and thus we need to estimate .using the same notation as above we obtain where we used the fact that has a large intersection with so that an inverse estimate holds for the tangential derivative and we also added and subtracted where is the tangential gradient to .we proceed with estimates of and .[ [ term - i_boldsymbol - x ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + using the definition of the tangential derivative we obtain the estimate where is the ( constant ) normal associated with element .now since for any . using ( [ eq : rhobounds ] )we conclude that the first two terms are and using the fundamental theorem of calculus we have the estimate for the third term .thus we have the estimate which gives where we used an inverse estimate at last . 
[[ term - ii_boldsymbol - x ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we let act on identity ( [ eq : condb : aa ] ) , which gives { \boldsymbol p}_{\sigma_{h,{\boldsymbol x } } } { \boldsymbol n}_f\ ] ] and thus we have the estimate \|^2_f\ ] ] iterating this bound and summing over all the elements in , again using the fact that the number of elements in is uniformly bounded , we arrive at using ( [ eq : condbb : aa ] ) we obtain where we used the estimate which holds since element has a large intersection with .collecting the estimates ( [ eq : condbb : c ] ) , ( [ eq : condbb : i ] ) , and ( [ eq : condbb : ii ] ) we have combining ( [ eq : condbb : a ] ) and ( [ eq : condbb : d ] ) yields and the lemma follows for , with small enough , using a kick back argument . herewe derive the inverse inequality needed in the proof of the condition number estimate .[ lemmacondc ] the following estimate holds for with small enough . using the fact that isconstant we note that , which leads to the estimate where we used an element wise inverse inequality at last .furthermore , using standard inverse inequalities , we obtain the following estimate for the jump term to derive an estimate of the condition number of the stiffness matrix we use the poincar inequality in lemma [ lemmacondb ] and the inverse estimate in lemma [ lemmacondc ] together with the approach in .let be the standard piecewise linear basis functions associated with the nodes in and let be the stiffness matrix with elements .we recall that the condition number is defined by where for and for .the expansion defines an isomorphism that maps to and satisfies the following well known estimates the following estimate of the condition number of the stiffness matrix holds for with small enough .we need to estimate and .starting with we have where we used the estimate together with ( [ inverse ] ) and ( [ rneqv ] ) .thus latexmath:[\[\label{aest } next we turn to the estimate of . using ( [ rneqv ] ) and( [ poincare ] ) , we get and thus we conclude that . setting we obtain ) and ( [ ainvest ] ) of and the theorem follows .in this section we derive a priori error estimates in the energy and -norms .the main technical difficulty is to handle the fact that the surface is approximated by a discrete surface .our approach essentially follows , , and . in order to define an interpolation operator we note that the extension of satisfies the stability estimate with constant only dependent on the curvature of the surface .we let denote the standard scott - zhang interpolation operator and recall the interpolation error estimate where is the union of the neighboring elements of .we also define an interpolation operator as follows introducing the energy norm associated with the exact surface we have the following approximation property .[ lem : approx ] the following estimate holds for with small enough .we first recall the element wise trace inequality which holds with a uniform constant independent of the intersection , see lemma 4.2 in . to estimate the first term we change domain of integration from to and then use the trace inequality ( [ eq : trace ] ) as follows where we used the interpolation estimate ( [ interpolstandard ] ) followed by the stability estimate ( [ eq : ext_stab ] ) for the extension operator . 
observing that with we arrive at second term can be directly estimated using the elementwise trace inequality followed by ( [ interpolstandard ] ) .[ thmenergy ] the following a priori error estimate holds for with small enough . adding and subtracting an interpolant , defined by ( [ pihl ] ) , and , and using the triangle inequality we have here the first two terms can be immediately estimated using the interpolation error estimate ( [ interpol ] ) . for the third and fourth we have the following identity estimating the right hand side we obtain [ [ term - boldsymbol - i-1 ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the first term can be directly estimated using the interpolation inequality ( [ interpol ] ) .[ [ term - boldsymbol - iboldsymbol - i-1 ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + changing domain of integration we obtain the estimate where we used the estimates ( [ detbbounds ] ) , the poincar inequality in lemma [ lemmapoincaresigmah ] on , and finally we mapped from to . [ [ term - boldsymbol - iboldsymbol - iboldsymbol - i-1 ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + changing domain of integration we obtain where at last we used the stability estimate for the method .furthermore , we used ( [ bestimates ] ) and ( [ detbbounds ] ) to show the following estimate finally , collecting the estimates of terms the proof follows . the following a priori error estimate holds for with small enough .recall that satisfies and define such that then and we have the estimate [ [ term - boldsymbol - i-2 ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be the solution to the dual problem .then it follows from the lax - milgram lemma that there exists a unique solution in and we also have the elliptic regularity estimate . in order to estimate multiply the dual problem by , integrate using green s formula , and add and subtract suitable terms these terms may now be estimated using cauchy - schwarz , the energy norm estimate ( [ eqenergy ] ) , together with the estimates of terms and in the proof of theorem [ thmenergy ] .we note in particular that here the first term is estimated by observing that and the second term using lemma [ lem : approx ] .[ [ term - boldsymbol - iboldsymbol - i-2 ] ] term + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + using the fact that we obtain combining the estimates of and we obtain the desired estimate .in order to assess the effect of our stabilization method on the condition number , we discretize a circle , solve the eigenvalue problem of the laplace - beltrami operator and sort these in ascending order .the problem is posed so that the integral of the solution is set to zero by use of a lagrange multiplier ( for well - posedness ) .the first eigenvalue ( corresponding to the multiplier ) is then negative . 
in the unstabilized methodthe next eigenvalue is zero with eigenfunction equal to the discrete piecewise linear distance function used to define the circle .this is due to our choice to define the circle using a level set function on the same mesh used for computations ; this particular problem can be avoided using a finer mesh for the level set function .we illustrate the effect by showing the zero isoline of the level set function ( thick line ) together with the isolines of the eigenfunction corresponding to the zero eigenvalue in figure [ zeroeigen ] . to avoid this effect , we base the condition number on the quotient between the largest eigenvalue and the first positive eigenvalue . for the unstabilized method ,the first nonzero eigenvalue is thus the third , whereas for the stabilized method it is the second . in the stabilized method we used for all computations . in fig .[ mesh2d ] we show the initial position of the circle in a 2d mesh .the circle is then moved to the left , to end up a distance to the left of its original position . for each increment ,we plot the condition numbers of the two methods , shown in fig .[ cond2d ] .note the large variation in condition number of the unstabilized method . for our convergence / conditioning comparison , we discretize a sphere of radius with center at and with a load corresponding to the exact solution compute on to solve for , and define an approximate as a plot of the approximate ( unstabilized ) solution on a coarse meshis given in figure [ solution ] , shown on the planes intersected by the level set function .we compare the error for the stabilized ( using different values for ) and unstabilized methods in figure [ errors ] , where ndof stands for the total number of degrees of freedom on the active tetrahedra , so that .note that the error constant is slightly worse for the stabilized methods but that all choices converge at the optimal rate of .the numbers underlying figure [ errors ] are given in table [ table : conv ] , where stands for the number of unknowns , the errors are listed for different , and is the rate of convergence . in figure [ conditioning ]we show the condition number computed for the same problem ( with the same approach as in section [ condest2d ] ) with different choices for and also for the preconditioning by diagonal scaling suggested in .no stabilization results in a condition number that grows faster than the standard rate of .the condition number is most improved by diagonal scaling .note , however , that diagonal scaling does not remedy the zero eigenvalue induced by the level set .the numbers underlying figure [ conditioning ] are given in table [ table : cond ] ..errors and convergence for different [table : conv ] [ cols=">,>,>,>,>,>,>,>,>",options="header " , ]this research was supported in part by epsrc , uk , grant no .ep / j002313/1 , the swedish foundation for strategic research grant no . am13 - 0029 , and the swedish research council grants nos .2011 - 4992 and 2013 - 4708 .10 g. dziuk .finite elements for the beltrami operator on arbitrary surfaces . in _ partial differential equations and calculus of variations _ , volume 1357 of _ lecture notes in math ._ , pages 142155 .springer , berlin , 1988 . | we consider solving the laplace - beltrami problem on a smooth two dimensional surface embedded into a three dimensional space meshed with tetrahedra . the mesh does not respect the surface and thus the surface cuts through the elements . 
we consider a galerkin method based on using the restrictions of continuous piecewise linears defined on the tetrahedra to the surface as trial and test functions . the resulting discrete method may be severely ill - conditioned , and the main purpose of this paper is to suggest a remedy for this problem based on adding a consistent stabilization term to the original bilinear form . we show optimal estimates for the condition number of the stabilized method independent of the location of the surface . we also prove optimal a priori error estimates for the stabilized method . laplace beltrami , embedded surface , tangential calculus . |
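as a small computational companion to the convergence and conditioning study reported above ( tables [ table : conv ] and [ table : cond ] , figures [ errors ] and [ conditioning ] ) , the sketch below shows how the convergence rate and the condition - number growth exponent are typically extracted from such tables . the error and condition - number values used here are placeholder numbers , not the data of the paper ; only the rate formula and the h^{-2 } scaling check are the standard ones .

```python
import math

# placeholder data: mesh sizes h, errors e, condition numbers c
# (replace with the measured values from the tables above)
h = [0.4, 0.2, 0.1, 0.05]
e = [2.0e-2, 5.2e-3, 1.3e-3, 3.3e-4]
c = [1.1e3, 4.3e3, 1.7e4, 6.9e4]

for i in range(1, len(h)):
    # observed convergence rate between two consecutive refinements
    rate = math.log(e[i - 1] / e[i]) / math.log(h[i - 1] / h[i])
    # observed growth exponent of the condition number; a value close to 2
    # corresponds to the standard scaling kappa ~ h**(-2)
    growth = math.log(c[i] / c[i - 1]) / math.log(h[i - 1] / h[i])
    print(f"h={h[i]:.3g}  rate={rate:.2f}  cond growth exponent={growth:.2f}")
```

for optimal second - order convergence in the l2 norm one expects the printed rate to approach 2 , and for the stabilized method the condition - number growth exponent should stay close to 2 independently of how the surface cuts the background mesh .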
for both mathematical and physical reasons one would like to be able to understand anisotropic and inhomogeneous cosmological models in general . as a first step spatially homogeneous spacetimes can be studied . for the case where the matter model is a perfect fluid a lot of resultshave been already obtained which are summarized in , see also for a critical discussion .usually in observational cosmology it is assumed that there exists an era which is `` matter - dominated '' where the matter model is a perfect fluid with zero pressure , i.e. the dust model . a kinetic description via collisionless matter enables to study the stability of this model in the following sense .suppose we have an expanding universe where the particles have certain velocity dispersion .one might think that due to the expansion the velocity dispersion will decay . that this is true has been shown in , and for locally rotationally symmetric ( lrs ) models in the cases of bianchi i , ii and iii .these results have been generalized ( in a different direction ) to lrs bianchi ix and in a context which also goes beyond collisionless matter in and .+ the lrs models can be diagonalized in a suitable frame where they then stay diagonal if one makes some extra assumptions on the distribution function .a natural question is , what happens if the model is not diagonal ? in this paper we will treat the late time dynamics of the _ non - diagonal _ bianchi i case assuming small data .that this makes sense was established in where it was shown that geodesic completeness holds for the general bianchi i - vlasov case . in a sense which will be specified later , we assume that the universe is close to isotropic and that the velocity dispersion of the particles is bounded .then we conclude that the universe will isotropize and have a dust - like behaviour asymptotically .in fact these two properties are intimately linked in the proof , something which is not expected to happen in other bianchi types , except in the case of a positive cosmological constant where it has been shown that isotropization and asymptotic dust - like behaviour occurs in all bianchi models except bianchi ix and without the lrs assumption .+ we hope to be able to extend this result in the near future a ) to other bianchi types , where the spacetime does not ( necessarily ) isotropize , but some other solution still may act as an attractor b ) to remove the small data assumption(s ) .a _ bianchi spacetime _ is defined to be a spatially homogeneous spacetime whose isometry group possesses a three - dimensional subgroup that acts simply transitively on the spacelike orbits .they can be classified by the structure constants of the lie algebra associated to the lie group .we will only consider the simplest case where the structure constants vanish , i.e. the case of bianchi i with the abelian group of translations in as the lie group , where the metric has the following form using gauss coordinates : we will use the 3 + 1 decomposition of the einstein equations as made in .we use gauss coordinates , which implies that the lapse function is the identity and the shift vector vanishes , so comparing our metric with ( 2.28 ) of we have that and .the only non - trivial christoffel symbols are the following ( see ( 2.44)-(2.49 ) of ) : in terms of coordinate expressions we have then that ( ) where , and are the energy density , matter current and energy momentum tensor respectively . 
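for orientation , the bianchi i metric in gauss coordinates referred to above takes the standard form ( written out here as a sketch , assuming signature - + + + , lapse equal to one and vanishing shift , as stated above ) :
\[
ds^2 \;=\; -dt^2 + g_{ab}(t)\, dx^a\, dx^b , \qquad a,b = 1,2,3 ,
\]
and , since g_{ab } depends on t only , a direct computation gives the nonvanishing christoffel symbols
\[
\Gamma^0_{ab} \;=\; \tfrac{1}{2}\,\dot g_{ab} , \qquad
\Gamma^a_{0b} \;=\; \tfrac{1}{2}\, g^{ac}\,\dot g_{cb} , \qquad
\Gamma^a_{bc} \;=\; 0 .
\]
in particular the purely spatial christoffel symbols vanish , which is what is used below when the momentum constraint is evaluated .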
in the 3 + 1 formulation the second fundamental form is used and rewriting its definition as in ( 2.29 ) of we have : using the einstein equations as in ( 2.34 ) of and the fact that does not depend on the spatial variables , we have : where we have used the notations , .it has been assumed that the cosmological constant vanishes . from the constraint equations ( 2.26)-(2.27 ) of : and we see that no matter current is present since the `` spatial '' christoffel symbols vanish : from ( [ c ] ) we see that never vanishes except for the minkowski spacetime .we exclude this case and assume now without loss of generality that for all time .if this is not true for a given solution it may be arranged by doing the transformation .+ for the matter model we will take the point of view of kinetic theory .this means that we have a collection of particles ( in a cosmological context the particles are galaxies or clusters of galaxies ) which are described statistically by a non - negative distribution function which is the density of particles at a given spacetime point with given four - momentum .we will assume that all the particles have equal mass ( one can relax this condition if necessary , see ) .we want that our matter model is compatible with our symmetry assumption , so we will also assume that does not depend on .in addition to that we will assume that there are no collisions between the particles .in this case the distribution function satisfies the vlasov equation ( see ( 3.38 ) of ) : where f is defined on the set determined by the equation called the mass shell .the energy momentum tensor is ( compare with ( 3.37 ) of ) : here and . for this kind of matter all the energy conditions hold . in particular .our system of equations consists of the equations ( [ a])-([cu ] ) . for a given bianchi i geometry the vlasov equationcan be solved explicitly with the result that if is expressed in terms of the covariant components then it is independent of time .this has the consequence that if is some fixed time and then we can express the non - trivial components of the energy momentum tensor as follows : a useful relation concerns the determinant of the metric ( ( 2.30 ) of ) : =-2 h \end{aligned}\ ] ] taking the trace of the mixed version of the second fundamental form ( 2.36 of ) : with ( [ c ] ) one can eliminate the energy density and ( [ i m ] ) reads : we can decompose the second fundamental form introducing as the trace - free part : then and ( [ in ] ) takes the following form : it is obvious that from the constraint equation ( [ c ] ) and ( [ yes ] ) it also follows that from ( [ ie ] ) and ( [ up ] ) we conclude that let us use the following notation : this quantity is related to the so called _ shear parameter_ , which is bounded by the cosmic microwave background radiation and is a dimensionless measure of the anisotropy of the universe ( see chapter 5.2.2 of ) . using this notation and with the help of ( [ a ] ) , ( [ b ] ) and ( [ yes ] ) we have \end{aligned}\ ] ] from the constraint equation ( [ c ] ) in generalwe see that .+ we see that in the vacuum case and . actually in this case one can write explicitly the solution known as _ kasner solution _ : where the constants are called the _ kasner exponents _ which satisfy the two _ kasner relations _given by : the mean curvature in this case is let be the eigenvalues of with respect to , i.e. , the solutions of we define as the _ generalized kasner exponents_. they satisfy the first but not in general not the second kasner relation . 
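for reference , the kasner solution and the two kasner relations mentioned above can be written out as follows ( the standard textbook form of the vacuum bianchi i solution ) :
\[
ds^2 = -dt^2 + t^{2p_1}(dx^1)^2 + t^{2p_2}(dx^2)^2 + t^{2p_3}(dx^3)^2 ,
\qquad
p_1+p_2+p_3 = 1 , \quad p_1^2+p_2^2+p_3^2 = 1 .
\]
the generalized kasner exponents are commonly defined by dividing the eigenvalues of the second fundamental form ( with respect to the induced metric ) by its trace , which is consistent with the statement above that they always satisfy the first relation but in general not the second .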
a special case of bianchii is the friedmann model with flat spatial geometry .the metric in this case is : where is the scale factor . in this case which means that and also that .+ moreover in the dust case ( the einstein - dust system can be thought of as singular case of the einstein - vlasov system , see chapter 3.4 of for more information ) and we can solve ( [ yes ] ) obtaining : we can also write down the explicit solution of the metric in this case : this is also called the einstein - de sitter model .let us look at the dust case not necessarily isotropic .the general solution is known and one can see from the solution ( 11 - 1.12 ) of that the spacetime will isotropize .nevertheless we will analyze this case with care , since we will show that the general case behaves asymptotically like it assuming small data . in the dust case : we are interested in the asymptotic behaviour at late times and will assume that is small , i.e. : .then it follows from ( [ yes ] ) : integration leads to using this inequality in ( [ dd ] ) and integrating : we can put the equality ( [ hd ] ) in the following form : where is now we will use the fact that we can choose freely the time origin setting . note that for the general bianchi i symmetric einstein - vlasov - system we know that takes all values in the range ( lemma 2.1 of )we then obtain using ( [ yoy ] ) it is clear that .note that with our time origin choice in ( [ if ] ) , so our result for is the following : \end{aligned}\ ] ] this equation in ( [ dd ] ) leads after integration to : from ( [ hi ] ) and ( [ 2 ] ) we can conclude that : now from ( [ tf ] ) we see that the eigenvalues ( [ ev ] ) of the second fundamental form with respect to the induced metric are also the solutions of \delta^i_j)\end{aligned}\ ] ] let us define the eigenvalues of with respect to by , we have that : note that . 
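for later comparison , a sketch of the explicit einstein - de sitter solution referred to above ( in geometrized units , with the big bang at t = 0 and a suitable normalization of the scale factor ) reads
\[
a(t) = t^{2/3} , \qquad \frac{\dot a}{a} = \frac{2}{3t} , \qquad \rho(t) = \frac{1}{6\pi t^2} ,
\]
so that the spatial metric components grow like t^{4/3 } ; this is the same power of t that appears in the asymptotic form of g_{ab } derived below for the general small - data case .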
from ( [ hi ] ) and ( [ 3 ] )we can see that the spacetime isotropizes at late times , in the sense that where are the generalized kasner exponents .+ now using ( [ det ] ) and ( [ hi ] ) we have : so the hope would be to show that can be bounded by a constant .let us define we have then with ( [ a ] ) that : making similar computations as in we arrive at : \vert \bar{g}_{ab}(s ) \vert ds\end{aligned}\ ] ] and with gronwall s inequality we obtain : \}\leq c\end{aligned}\ ] ] therefore is bounded for all .the same holds for by similar computations .thus : from this we can conclude that : or looking again at the derivative of and putting the facts which have been obtained together , we see that : this is enough to conclude that : \\ g^{ab}=t^{-\frac{4}{3}}[\mathcal{g}^{ab}+o(t^{-2})]\end{aligned}\ ] ] where is the limit of as goes to infinity .for the general case the basic equations are ( [ yes ] ) , a modified version of ( [ b ] ) where is a small and positive quantity which is introduced for technical reasons and ( [ no ] ) + 2t^{-\gamma+\frac{4}{3}}\label{bb } \sigma^{ab } \\ & & \dot{f}=h[f(1-\frac{3}{2}f-\frac{8\pi } { h^2 } \operatorname{tr}s)-\frac{16\pi}{h^3 } s_{ab}\sigma^{ab}]\label{cc}\end{aligned}\ ] ] we have a number ( different from zero ) of particles at possibly different momenta and we will define as the supremum of the absolute value of these momenta at a given time : consider any solution of the einstein - vlasov system with bianchi i - symmetry and with initial data .assume that and are sufficiently small .then at late times one can make the following estimates : * remark * ( [ pp ] ) implies that asymptotically there is a dust - like behaviour ( see ( [ pp ] ) ) . * proof * we will use a bootstrap argument ( see chapter 10.3 of for more information ) .let us look at the interval .our bootstrap assumptions are the following : where a and b are positive constants which we can choose as small as we want .+ + _ 1 .estimate of _+ + integrating ( [ aa ] ) we obtain : with \end{aligned}\ ] ] where the inequality comes from ( [ a1 ] ) and ( [ c ] ) .consider now an orthonormal frame and denote the components of the spatial part of the energy - momentum tensor in this frame by .the components can be bounded by so we have that from which follows that : now returning back to ( [ h ] ) and setting as in the dust case we have besides that + _ 2 . estimate of _ + + now we will use the second equation ( [ bb ] ) to obtain an estimate for the metric .one can show that in the sense of quadratic forms the following is true : then with the definitions of and and the estimates ( [ a1 ] ) and ( [ ho ] ) we have : : now from the last inequality of the first step ( [ hh ] ) equation ( [ bb ] ) leads to : ^{-1}=-\eta t^{-\gamma}\bar{g}^{ab}t^{-1}\end{aligned}\ ] ] and have to be chosen in such a way that is positive , then from which follows that or now from ( [ neg ] ) we have : using the fact that is constant along the geodesics we can conclude that : since we can choose and independently as small as we want . in order to improve ( [ a2 ] ) has to be smaller then . using the notation the last inequality can be expressed as where .+ + _ 3 .estimate of _+ + until now we have an estimate for and for in the interval .now we have to improve the estimate for coming from the bootstrap assumption .the desired estimate is .if this is the case ( case i ) the bootstrap argument will work and there is nothing more to do ._ case i _let us suppose now the opposite , that . 
then define as the smallest number not smaller than with the property that . in this case, we have to distinguish between the case that ( case iia ) and ( case iib ) .let us look now at ( [ cc ] ) , in particular at the terms in square brackets . by using ( [ a1 ] ) where is a positive small constant . using ( [ c ] ) , ( [ a2 ] ) and ( [ ss ] ) : with the cauchy - schwarz inequality , ( [ c ] ) , ( [ pp ] ) and supposing that in the interval $ ] : note that although may be a big quantity , since and are independent we can make smaller to `` correct '' this . using ( [ hh ] ) andthe last three inequalities in ( [ cc ] ) leads to : where . now setting we end up with : which means that _ case iia ._ in this case , so ( [ b2 ] ) means since we can choose as small as we want .so it follows that in this case : situation which is schematically depicted in the following figure ._ _ case iib ._ in the case iib we can use the fact that by continuity holds and then the here is also a quantity which we can choose as small as we want and then it follows that in this case we can choose to be smaller then , so _ case iibresults of the bootstrap argument _+ we have arrived at the statement that at least for a small interval and assuming ( [ a1])-([a2 ] ) we obtain the estimates : thus both assumptions ( [ a1 ] ) and ( [ a2 ] ) have been improved and using a bootstrap argument we know that the estimates obtained are valid for assuming small data .( [ hh ] ) can be expressed as : and is then also valid for the whole interval .+ + _ 5 . improving the estimate of _ + we want to improve ( [ yy ] ) , but before that we need an inequality in the other direction . from ( [ cc ] )we have : implementing now ( [ yy])-([xx ] ) in this inequality , in particular having in mind the b-dependence of the estimate of ( [ b ] ) : using now ( [ ho ] ) : from which follows and is strictly positive .choosing now small enough we have the following estimate : putting the last term of ( [ cc ] ) with the help of ( [ xx ] ) and ( [ u ] ) in the following manner : we can improve ( [ yy ] ) implementing ( [ yy])-([zz ] ) in ( [ cc ] ) with the result : + with these results we obtain the following theorem . [ t2 ] consider the same assumptions as in the previous theorem .then and \\\label{ll } g^{ab}=t^{-\frac{4}{3}}[\mathcal{g}^{ab}+o(t^{-2 } ) ] \end{aligned}\ ] ] where and are independent of . 
*proof * the conclusions made in the dust case after ( [ 2 ] ) only depend on the estimate of and and apply directly to the general case .the result concerning the asymptotics of the metric implies `` asymptotic freezing '' in the expanding direction in the following sense .consider the metric ( 4.12 ) of , then by comparison with ( [ lp ] ) one sees that the off - diagonal degrees of freedom , and tend to constants and thus are not important for the dynamics .see for the importance of asymptotic freezing in the `` other '' direction which has been studied , namely the initial singularity , and for consequences of that in a quantum version .+ as already mentioned in the introduction there exist several results concerning the bianchi i - symmetric einstein - vlasov system ( without cosmological constant ) .almost all of them assume additional symmetries namely the lrs and the reflection symmetry condition ( see for a precise definition of these symmetries ) .however there exist some results where only the reflection symmetry is assumed .concerning the expanding direction there is theorem 5.4 of .our theorems can be seen as a generalization of that theorem since we obtain the same the result , but a ) we also obtain how fast the expressions converge b ) we obtain an asymptotic expression for the spatial metric c ) we do not assume any of the additional symmetries mentioned .however we used a different kind of restriction namely the small data assumptions .+ in any case we think it interesting to study the non - diagonal case because although this time there was not an essential difference in the result with respect to the diagonal case , we do not expect that this will always be the case , especially when analyzing the initial singularity . in possible dynamical behaviour towards the past has been determined assuming only the reflection symmetry and already there surprising new features like the existence of heteroclinic networks arose .+ the situation of several ( tilted ) fluids leads naturally to the non - diagonal case .the bianchi i - symmetry implies the absence of a matter current such that a single _ tilted _ fluid is not compatible with this assumption .however there can be several tilted fluids such that the total current vanishes . in and has been considered in the case of two fluids .what was found is that isotropization occurs if at least one of the two fluids has a speed of sound which is less or equal the speed of light .it is shown in particular for the case of two pressure free fluids in , which can be seen as a singular solution of the einstein - vlasov system .+ of course there are many other ways of generalizing results which have been obtained for ( single ) perfect fluids .see for instance and references therein for the inclusion of a maxwell field in the bianchi i case . in presence of a cosmological constantthe results of have been generalized even to the einstein - vlasov - maxwell case .a natural generalization of the vlasov equation is the case where the collision term is not zero , i.e. the boltzmann equation . 
for this casedust - like asymptotics have already been obtained in for an isotropic spacetime with a cosmological constant and provides a basis for a possible extension to the asymptotics in the case of bianchi i with lrs symmetry .+ finally we would like to mention that non - diagonal bianchi i spacetimes are not only of interest in the context of cosmology , see for instance a recent work on the so called ultra - local limit .* acknowledgements * + the author would like to thank alan d. rendall for helping along all the steps of this project , from proposing the problem to actually helping to solve it .i am also thankful for many discussions and a lot of concrete advice. this work has been funded by the deutsche forschungsgemeinschaft via the sfb 647-project b7 .s. calogero and j.m .heinzle 2009 bianchi cosmologies with anisotropic matter : locally rotationally symmetric models ( preprint arxiv : 0911.0667v1 ) s. calogero and j.m .heinzle 2010 oscillations toward the singularity of lrs bianchi type ix cosmological models with vlasov matter ( to appear in siam journal for appl .dyn . syst . ) b. cropp and m. visser 2010 any spacetime has a bianchi type i spacetime as a limit ( preprint arxiv : 1008.4639v1 ) t. damour , m. henneaux and h. nicolai 2003 cosmological billiards . class .quantum grav .20 r145-r200 j.m .heinzle and c. uggla 2006 dynamics of the spatially homogeneous bianchi type i einstein - vlasov equations .23 , 3463 - 3490 j.m .heinzle and c. uggla 2009 mixmaster : fact and belief . class .quantum grav .26 , 075016 a. kleinschmidt , m. koehn and h. nicolai 2009 supersymmetric quantum cosmological billiards .phys . rev .d 80:061701 v.g .leblanc 1997 asymptotic states of magnetic bianchi i cosmologies . class .quantum grav .14 , 2281 - 2301 h. lee 2004 asymptotic behaviour of the einstein - vlasov system with a positive cosmological constant .cambridge phil .137 , 495 - 509 n. noutchegueme and d. dongo 2006 global existence of solutions for the einstein - boltzmann system in a bianchi type i spacetime for arbitrarily large initial data . class .quantum grav .23 , 2979 - 3004 n. noutchegueme and e.m .tetsadjio 2009 global dynamics for a collisionless charged plasma in bianchi spacetimes . class .quantum grav .26 , 195001 o. heckmann and e. schcking 1962 gravitation : an introduction to current research , ed .l. witten .academic press , new york . chap .rendall 1994 cosmic censorship for some spatially homogeneous cosmological models .233 , 82 - 96 .rendall 1996 the initial singularity in solutions of the einstein - vlasov system of bianchi type i. j. math phys .37 , 438 - 451 .a.d . rendall and k.p .tod 1998 dynamics of spatially homogeneous solutions of the einstein - vlasov equations which are locally rotationally symmetric . class .quantum grav .16 , 1705 - 1726 a.d .rendall and c. uggla 2000 dynamics of spatially homogeneous locally rotationally symmetric solutions of the einstein - vlasov equations . class .quantum grav .17 , 4697 - 4713 a.d .rendall 2002 cosmological models and centre manifold theory .rel . grav .34 , 1277 - 1294 a.d .rendall 2008 partial differential equations in general relativity .oxford university press , oxford .p. sandin and c. uggla 2008 bianchi type i models with two tilted fluids . class .quantum grav .25 , 225013 p. sandin 2009 tilted two - fluid bianchi type i models .41 , 2707 - 2724 e. takou 2009 global properties of the solutions of the einstein - boltzmann system with cosmological constant in the robertson - walker space - time . 
commun .sci . 7 , 399 - 410 j. wainwright and g.f.r .ellis 1997 dynamical systems in cosmology .cambridge university press , cambridge . | assuming that the space - time is close to isotropic in the sense that the shear parameter is small and that the maximal velocity of the particles is bounded , we have been able to show that for non - diagonal bianchi i - symmetric spacetimes with collisionless matter the asymptotic behaviour at late times is close to the special case of dust . we also have been able to show that all the kasner exponents converge to and an asymptotic expression for the induced metric has been obtained . the key was a bootstrap argument . the sign conventions of are used . in particular , we use metric signature + + + and geometrized units , i.e. the gravitational constant g and the speed of light c are set equal to one . also the einstein summation convention that repeated indices are to be summed over is used . latin indices run from one to three . will be an arbitrary constant and will denote a small and strictly positive constant . they both may appear several times in different equations or inequalities without being the same constant . a dot above a letter will denote a derivative with respect to the cosmological ( gaussian ) time . |
we consider some well known families of two - player , zero - sum , turn - based , perfect information games that can be viewed as specical cases of shapley s stochastic games .they have appeared under various names in the literature in the last 50 years and variants of them have been rediscovered many times by various research communities . for brevity , in this paper we shall refer to them by the name of the researcher who first ( as far as we know ) singled them out . * * condon games * ( a.k.a .simple stochastic games ) .a condon game is given by a directed graph with a partition of the vertices into ( vertices beloning to player 1 , ( vertices belonging to player 2 ) , ( random vertices ) , and a special terminal vertex * 1*. vertices of have exactly two outgong arcs , the terminal vertex * 1 * has none , while all vertices in have at least one outgoing arc . between moves ,a pebble is resting at one of the vertices .if belongs to a player , this player should strategically pick an outgoing arc from and move the pebble along this edge to another vertex .if is a vertex in , nature picks an outgoing arc from uniformly at random and moves the pebble along this arc .the objective of the game for player 1 is to reach * 1 * and should play so as to maximize his probability of doing so .the objective for player 2 is to prevent player 1 from reaching * 1*. * * gillette games * . a gillette game is given by a finite set of states , partioned into ( states belonging to player 1 ) and ( states belonging to player 2 ) .to each state is associated a finite set of possible actions . to each such actionis associated a real - valued _ reward _ and a probability distribution on states . at any point in time of play ,the game is in a particular state .the player to move chooses an action strategically and the corresponding award is paid by player 2 to player 1 .then , nature chooses the next state at random according to the probability distribution associated with the action .the play continues forever and the accumulated reward may therefore be unbounded .fortunately , there are ways of associating a finite payoff to the players in spite of this and more ways than one ( so is not just one game , but really a family of games ) : for _ discounted _ gillette games , we fix a _ discount factor _ and define the payoff to player 1 to be where is the reward incurred at stage of the game .we shall denote the resulting game . for _ undiscounted _ gillette gamewe define the payoff to player 1 to be the _ limiting average _payoff we shall denote the resulting game .undiscounted gillette games have recently been referred to as _ stochastic mean - payoff _ games in the computer science literature .a natural restriction of gillette games is to _ deterministic _ transitions ( i.e. , all probability distributions put all probability mass on one state ) .this class of games has been studied in the computer science literature under the names of cyclic games and mean - payoff games .a _ strategy _ for a game is a ( possibly randomized ) procedure for selecting which arc or action to take , given the history of the play so far .a _ pure , positional strategy _ is the very special case of this where the choice is deterministic and only depends on the current vertex ( or state ) , i.e. 
, a pure , positional strategy is simply a map from vertices ( for gillette games , states ) to vertices ( for gillette games , actions ) .a strategy for player 1 is said to be _ optimal _ if for all vertices ( states ) it holds that , where is the set of strategies for player 1 ( player 2 ) and is the probability that player 1 will end up in * 1 * ( for the case of condon games ) or the expected payoff of player 1 ( for the case of gillette games ) when players play using the strategy profile and the play starts in vertex ( state ) .similarly , a strategy for player 2 is said to be optimal if for all games described here , a proof of liggett and lippman ( fixing a bug of a proof of gillette ) shows that there are optimal , pure , positional strategies and that a pair of such strategies form an exact nash equilibrium of the game .these facts imply that when testing whether conditions ( [ opt1 ] ) and ( [ opt2 ] ) holds , it is enough to take the inifima and suprema over the finite set of pure , positional strategies of the players . in this paper , we consider _ solving _ games . by solving a game we mean the task of computing a pair of optimal pure , positional strategies , given a description of the game as input .to be able to finitely represent the games , we assume that the discount factor , rewards and probabilities are rational numbers and given as fractions. it is well known that condon games can be seen as a special case of undiscounted gillette games ( as described in the proof of lemma 4 below ) , but a priori , solving gillette games could be harder .a recent paper by chatterjee and henzinger shows that solving so - called stochastic parity games reduces to solving undiscounted gillette games .this motivates the study of the complexity of the latter task .we show that the extra expressive power ( compared to condon games ) of having rewards during the game in fact does not change the computational complexity of solving the games .more precisely , our main theorem is : the following tasks are polynomial time equivalent : 1 . solving condon games ( a.k.a .. , simple stochastic games ) 2 . solving undiscounted gillette games ( a.k.a , stochastic mean - payoff games ) with rewards and probabilities represented in binary notation .3 . solving undiscounted gillette games with rewards and probabilities represented in unary notation .4 . 
solving discounted gillette games with discount factor , rewards and probabilities represented in binary notation .in particular , there is a pseudopolynomial time algorithm for solving undiscounted gillette games if and only if there is a polynomial time algorithm for this task .the theorem follows from the lemmas 2,3,4 below and the fact that solving games with numbers in the input represented in unary trivially reduces to solving games with numbers in the input represented in binary .the proof techniques are fairly standard ( although coming from two different communities ) , but we find it worth pointing out that they together imply the theorem above since it is relevant , did not seem to be known , and may even be considered slightly surprising , as deterministic undiscounted gillette games can be solved in pseudopolynomial time , while solving them in polynomial time remains a challenging open problem .an even more challenging problem is solving simple stochastic games in polynomial time , so our theorem may be interpreted as a hardness result .note that a `` missing bullet '' in the theorem is solving discounted gillette games given in unary notation .it is in fact known that this can be done in polynomial time ( even if only the discount factor is given in unary while rewards and probabilities are given in binary ) , see littman ( * ? ? ?* theorem 3.4 ) .[ limit ] let be a gillette game with states and all transition probabilities and rewards being fractions with integral numerators and denominators , all of absolute value at most .let and let ] .this does not influence the optimal strategies .vertices of include all states of ( belonging to the same player in as in ) , and , in addition , a random vertex for each possible action of each state of .we also add a `` trapping '' vertex * 0 * with a single arc to itself .it does not matter which player it belongs to .we construct the arcs of by adding , for each ( state , action ) pair the `` gadget '' indicated in figure [ fig ] . reducing discounted gillette games to condon games , height=226 ] to be precise, if the action has reward and leads to states with probability weights , we include in an arc from to , arcs from to with probability weights , an arc from to * 0 * with probability weight and finally an arc from to the terminal * 1 * with probability weight . there is clearly a 1 - 1 correspondence between pure stationary strategies in and in .thus , we are done if we show that the optimal strategies coincide . to see this , fix a strategy profile for the two players and consider play starting in any vertex . by construction , if the expected reward of the play in is , the probability that the play in ends up in * 1 * is exactly .therefore , the two games are strategically equivalent .solving condon games polynomially reduces to solving undiscounted gillette games with _ unary _ representation of rewards and probabilities .we are given a condon game ( a `` plain '' one , using the terminology of the previous proof ) and must construct an undiscounted gillette game .states of will coincide with vertices of , with the states of including the special terminals * 1*. vertices belonging to a player in belongs to the same player in .for each outgoing arc of , we add an action in with reward 0 , and with a deterministic transition to the endpoint of the arc of . 
random vertices of can be assigned to either player in , but he will only be given a single `` dummy choice '' : if the random vertex has arcs to and , we add a single action in with reward and transitions into , , both with probability weight . the terminal * 1 * can be assigned to either player in , but again he will be given only a dummy choice : we add a single action with reward 1 from * 1 * and with a transition back into * 1 * with probability weight .there is clearly a 1 - 1 correspondence between pure stationary strategies in and strategies in .thus , we are done if we show that the optimal strategies coincide . to see this , fix a strategy profile for the two players and consider play starting in any vertex . by construction , if the probability of the play ending up in * 1 * in is , the expected limiting average reward of the play in is also .therefore , the two games are strategically equivalent , and we are done .undiscounted gillette games can be seen as generalizations of condon games and yet they are computationally equivalent .it is interesting to ask if further generalizations of gillette games are also equivalent to solving condon games .it seems natural to restrict attention to cases where it is known that optimal , positional strategies exists .this precludes general stochastic games ( but see ) .an interesting class of games generalizing undiscounted gillette games was considered by filar .filar s games allow simultaneous moves by the two players . however ,for any position , the probability distribution on the next position can depend on the action of one player only .filar shows that his games are guaranteed to have optimal , positional strategies .the optimal strategies are not necessarily pure , but the probabilities they assign to actions are guaranteed to be rational numbers if rewards and probabilities are rational numbers .so , we ask : is solving filar games polynomial time equivalent to solving condon games ? krishnendu chatterjee , marcin jurdziski , and thomas a. henzinger . quantitative stochastic parity games . in _soda 04 : proceedings of the fifteenth annual acm - siam symposium on discrete algorithms _ , pages 121130 , philadelphia , pa , usa , 2004 .society for industrial and applied mathematics .d. gillette .stochastic games with zero stop probabilities . in m.dresher , a.w .tucker , and p. wolfe , editors , _ contributions to the theory of games iii _ ,volume 39 of _ annals of mathematics studies _ ,pages 179187 .princeton university press , 1957 . | we consider some well known families of two - player , zero - sum , turn - based , perfect information games that can be viewed as specical cases of shapley s stochastic games . we show that the following tasks are polynomial time equivalent : * solving simple stochastic games , * solving stochastic mean - payoff games with rewards and probabilities given in unary , and * solving stochastic mean - payoff games with rewards and probabilities given in binary . |
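as a computational aside to the games discussed above ( and not part of the reductions in the paper ) , the sketch below implements the standard value - iteration computation of optimal values and pure positional strategies for a turn - based discounted game . it assumes the common convention that the discounted payoff is the sum of the beta^t - weighted stage rewards ; the two - state example game at the bottom is made up purely for illustration .

```python
def solve_discounted_game(states, beta, tol=1e-9):
    """states: dict s -> (owner, actions); owner is 'max' (player 1) or 'min'
    (player 2); actions is a list of (reward, transitions) pairs, where
    transitions maps successor states to probabilities.
    payoff convention assumed here: sum over t of beta**t * r_t."""
    v = {s: 0.0 for s in states}
    while True:
        new_v = {}
        delta = 0.0
        for s, (owner, actions) in states.items():
            vals = [r + beta * sum(p * v[u] for u, p in trans.items())
                    for r, trans in actions]
            new_v[s] = max(vals) if owner == 'max' else min(vals)
            delta = max(delta, abs(new_v[s] - v[s]))
        v = new_v
        if delta < tol:          # the update is a contraction with modulus beta
            break
    # read off a pure positional strategy from the (approximate) fixed point
    strategy = {}
    for s, (owner, actions) in states.items():
        vals = [r + beta * sum(p * v[u] for u, p in trans.items())
                for r, trans in actions]
        pick = max if owner == 'max' else min
        strategy[s] = pick(range(len(vals)), key=vals.__getitem__)
    return v, strategy

# a made-up two-state example: player 1 owns state 'a', player 2 owns 'b'
game = {
    'a': ('max', [(1.0, {'a': 0.5, 'b': 0.5}), (0.0, {'b': 1.0})]),
    'b': ('min', [(2.0, {'a': 1.0}), (0.5, {'b': 1.0})]),
}
values, strategy = solve_discounted_game(game, beta=0.9)
print(values, strategy)
```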
the kolkata paise restaurant ( kpr ) problem is a repeated game , played between a large number of agents having no interaction amongst themselves . in the kpr problem , prospective customers ( agents ) choose from restaurants each evening simultaneously ( in parallel decision mode ) ; is fixed . each restaurant has the same price for a meal but a different rank ( agreed upon by all customers ) and can serve only one customer any evening . information regarding the customer distributions for earlier evenings is available to everyone . each customer s objective is to go to the restaurant with the highest possible rank while avoiding the crowd so as to be able to get dinner there . if more than one customer arrives at any restaurant on any evening , one of them is randomly chosen ( each of them is treated anonymously ) and is served . the rest do not get dinner that evening . in kolkata , there were very cheap and fixed rate `` paise restaurants '' that were popular among the daily laborers in the city . during lunch hours , the laborers used to walk ( to save the transport costs ) to one of these restaurants and would miss lunch if they got to a restaurant where there were too many customers . walking down to the next restaurant would mean failing to report back to work on time ! paise is the smallest indian coin and there were indeed some well - known rankings of these restaurants , as some of them would offer tastier items compared to the others . a more general example of such a problem would be when the society provides hospitals ( and beds ) in every locality but the local patients go to hospitals of better rank ( commonly perceived ) elsewhere , thereby competing with the local patients of those hospitals . unavailability of treatment in time may be considered as lack of the service for those people and consequently as ( social ) wastage of service by those unattended hospitals . a social planner s ( or dictator s ) solution to the kpr problem is the following : the planner ( or dictator ) asks everyone to form a queue and then assigns each one a restaurant with rank matching the sequence of the person in the queue on the first evening . then each person is told to go to the next ranked restaurant in the following evening ( for the person in the last ranked restaurant this means going to the first ranked restaurant ) . this shift process then continues for successive evenings . call this solution the _ fair social norm_. this is clearly one of the most efficient solutions ( with utilization fraction of the services by the restaurants equal to unity ) and the system arrives at this solution immediately ( from the first evening itself ) . however , in reality this can not be the true solution of the kpr problem , where each agent decides on his own ( in parallel or democratically ) every evening , based on complete information about past events . in this game , the customers try to evolve a learning strategy to eventually get dinners at the best possible ranked restaurant , avoiding the crowd . it is seen that the evolution of these strategies takes considerable time to converge and even then the eventual utilization fraction is far below unity .
the kpr problemhave some basic features similar to the minority game problem in that diversity is encourage ( compared to herding behavior ) in both , while it differs from ( two - choice ) minority games in terms of the macroscopic size of the choices .as already shown in ref , a simple random - choice algorithm , if adapted by all the agents , can lead to a reasonable value of utilization fraction ( ) .compared to this , several seemingly `` more intelligent '' stochastic algorithms lead to lower utilization of the services . studied a few more such `` smarter '' algorithms , having several attractive features ( including analytical estimate possibilities ) , but still failing to improve the overall utilization fraction beyond its random choice value . herewe develop a stochastic strategy , which maintains a naive tendency ( probability decreasing with past crowd size ) to stick to any agent s own past choice ( successful or not ) , leading to a maximum , so far , value of the utilization fraction ( ) in the kpr problem .we also estimate here analytically the values for several of such strategies .let the symmetric stochastic strategy chosen by each agent be such that at any time , the probability to arrive at the -th ranked restaurant is given by ,\hspace{.1 in } z=\sum_{k=1}^n\left[k^{\alpha}\exp\left(-\frac{n_k(t-1)}{t}\right)\right],\label{generalstoch}\ ] ] where denotes the number of agents arriving at the -th ranked restaurant in period , is a scaling factor and is an exponent . note that under ( [ generalstoch ] ) the probability of selecting a particular restaurant increases with its rank and decreases with its popularity in the immediate past ( given by the number ) .certain properties of the strategies given by ( [ generalstoch ] ) are the following : 1 . for and , corresponds to the complete random choice case for which we know that the utilization fraction is around , that is on an average there is 63% utilization of the restaurants ( see appendix a ) .2 . for and ,the agents avoid those restaurants visited last evening and choose again randomly from the remaining restaurants . with appropriate simulation it was shown that the distribution of the fraction of utilization of the restaurants is gaussian around ( see subsection 2.2 ) . for any natural number and , an agent goes to the -th ranked restaurant with probability ; which means in the limit in ( [ generalstoch ] ) gives .let us discuss the results for such a strategy here .if an agent selects any restaurant with equal probability then probability that a single restaurant is chosen by agents is given by therefore , the probability that a restaurant with rank is not chosen by any of the agents will be given by where hence therefore the average fraction of agents getting dinner in the -th ranked restaurant is given by ) versus rank of the restaurants ( ) for different values .the inset shows the distribution of the fraction agent getting dinner any evening for different values . ] and the numerical estimates of is shown in fig .( [ fig1 ] ) . 
naturally for , the problem corresponding to random choice , giving and for , giving as already obtained analytically earlier ( see appendix b ) .we consider here the case ( see also )where each agent chooses on any evening ( ) randomly among the restaurants in which nobody had gone in the last evening ( ) .this correspond to the case where and in eq .( [ generalstoch ] ) .our numerical simulation results for the distribution of the fraction of utilized restaurants is again gaussian with a most probable value at .this can be explained in the following way : as the fraction of restaurants visited by the agents in the last evening is avoided by the agents this evening , the number of available restaurants is for this evening and is chosen randomly by all the agents .hence , when fitted to eq .( [ eq : poisson_mm ] in appendix a ) , .therefore , following eq .( [ eq : poisson_mm ] ) , we can write the equation for as =\bar { f } .\ ] ] the solution of this equation gives .this result agrees well with the numerical results for this limit ( , ) . in this sectionwe start with the following stochastic strategy : if an agent goes to restaurant in period ( ) then the agent goes to the same restaurant in the next period with probability and to any other restaurant with probability . in this process , the average utilization fraction is and the distribution is a gaussian around ( see fig . [ fig2 ] ). an approximate estimate of : let denote the fraction of restaurants where exactly agents appeared on any evening and assume that for .therefore , , and hence . giventhe strategy , fraction of agents will make attempts to leave their respective restaurants in the next evening , while no intrinsic activity will occur on the restaurants where , no body came ( ) or only one came ( ) in the previous evening .these fraction of agents will now get equally divided ( each in the remaining restaurants ) . of these ,the fraction going to the vacant restaurants ( in the earlier evening ) is .hence the new fraction of vacant restaurants is now . in restaurants havingexactly two agents ( percent in the last evening ) , some vacancy will be created due to this process , and this is equal to .steady state implies that and hence using we get , giving and . of course, the above calculation is approximate as none of the restaurant is assumed to get more than two costumers on any evening ( for ) .the advantage in assuming , and only to be non vanishing on any evening is that the activity of redistribution on the next evening starts from this fraction of the restaurants .this of course affects and for the next evening and for steady state these changes must balance .the computer simulation results also conform that for and hence the above approximation does not lead to serious error .in this section we assume that agents have two possible exogenously given values of : or .we start by taking some random allocation of over the set of agents .the strategy followed by each agent thereafter is the following : if an agent starts with an and fails to get dinner for the successive evenings then , in the next evening , the agent shifts to .the steady state distribution of the values in the population of agents do not depend on the initial allocation of values in the population ( see fig .[ fig3 ] ) . however , as in obvious , for large values of , the stability of the distribution disappears . .the same for will be given by just complementary function . 
]in the kpr problem if the rational agents interact then a _ fair social norm _ that can evolve is a periodically organized state with periodicity where each agent in turn gets served in all the restaurants and all agents get served every evening . can we find deterministic strategies ( in the absence of a dictator ) such that the society achieves this fair social norm ?there is one variant of pavlov s win shift lose stay strategy ( see ) that can be adopted to achieve the fair social norm and another variant that can be adopted to achieve the fair social norm in an asymptotic sense .of course , these strategies are deterministic in nature .the fair strategy works as follows : 1 . at time( evening ) , agents can choose any restaurants either randomly or deterministically .2 . if at time agent was in a restaurant ranked and was served then , at time , the agent moves to the restaurant ranked if and moves to the restaurant ranked if .3 . if agent was in a restaurant ranked at time and was not served then , at time , the agent goes to the same restaurant .it is easy to verify that this strategy gives a convergence to the fair social norm in less than or equal to periods .moreover , after convergence is achieved , the fair social norm is retained ever after .the difficulty with this strategy is that a myopic agent will find it hard to justify the action of going to the restaurant ranked last after getting served in the best ranked restaurant .however , if the agent is not that myopic and observes the past history of strategies played by all the agents and can figure out that this one evening loss is a tacit commitment devise for this kind of symmetric strategies to work then this voluntary loss is not that implausible .therefore one needs to run experiments before arguing for or against this kind of symmetric deterministic strategies .more importantly the fair strategy can be modified to take care of this justification problem provided one wants to achieve the fair social norm in an asymptotic sense .the asymptotically fair strategy works as follows : 1 . at time( evening ) , agents can choose any restaurants either randomly or deterministically .if at time agent was in a restaurant ranked and was served then , at time , the agent moves to the restaurant ranked if and goes to the same restaurant if .if agent was in a restaurant ranked at time and was not served then , at time , the agent goes to the restaurant ranked .we consider the kpr problem where the decision made by each agent in each time period is independent and is based on the information about the rank of the restaurants and their occupancy given by the numbers .we consider here in sec . several stochastic strategies where each agent chooses the -th ranked restaurant with probability given by eq .( [ generalstoch ] ) .the utilization fraction of the -th ranked restaurants on every evening is studied and their average ( over ) distributions are shown in fig .[ fig1 ] for some special cases . from numerical studies ,we find their distributions to be gaussian with the most probable utilization fraction , and for the cases with , ; , ; and , respectively .for the stochastic crowd - avoiding strategy discussed on sec . , we get the best utilization fraction .the analytical estimates for in these limits are also given and they agree very well with the numerical observations . 
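a small simulation sketch of the first ( fair ) strategy described above follows . the update rule used here is one natural reading of the description : an agent served at a restaurant of rank k moves to rank k - 1 on the next evening , the agent served at the top - ranked restaurant moves to the lowest rank n , and an agent who was not served stays put . the claim above is that this reaches the fair social norm , with every agent served every evening , within at most n evenings .

```python
import random

def fair_strategy_evolution(N=10, evenings=30, seed=1):
    """fraction of agents served on each evening under the 'fair strategy';
    assumes N agents and N restaurants ranked 1 (best) to N (worst).
    the served->move-down / not-served->stay rule below is an inferred
    reading of the (partly implicit) description in the text."""
    rng = random.Random(seed)
    position = [rng.randint(1, N) for _ in range(N)]   # evening-0 choices
    history = []
    for _ in range(evenings):
        arrivals = {}
        for agent, k in enumerate(position):
            arrivals.setdefault(k, []).append(agent)
        # each visited restaurant serves one randomly chosen arriving agent
        served = {rng.choice(agents) for agents in arrivals.values()}
        history.append(len(served) / N)
        new_position = []
        for agent, k in enumerate(position):
            if agent in served:
                new_position.append(N if k == 1 else k - 1)   # move to a better rank, wrap at the top
            else:
                new_position.append(k)                        # stay and try again
        position = new_position
    return history

print(fair_strategy_evolution())
```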
finally , we suggest ways to achieve the fair social norm either exactly in the presence of incentive problem or asymptotically in the absence of such incentive problem . implementing or achieving such a norm in a decentralized way is impossible when limit .the kpr problem has similarity with the minority game problem as in both the games , herding behavior is punished and diversity s encouraged .also , both involves learning of the agents from the past successes etc . of course , kpr has some simple exact solution limits , a few of which are discussed here . in none of these cases considered here ,learning strategies are individualistic ; rather all the agents choose following the probability given by eq .( [ generalstoch ] ) . in a few different limits of such a learning strategy ,the average utilization fraction and their distributions are obtained and compared with the analytic estimates , which are reasonably close .needless to mention , the real challenge is to design algorithms of learning mixed strategies ( e.g. , from the pool discussed here ) by the agents so that the fair social norm emerges eventually even when every one decides on the basis of their own information independently .as we have seen , some naive strategies give better values of compared to most of the `` smarter '' strategies like strict crowd - avoiding strategies ( sec ) etc .this observation in fact compares well with earlier observation in minority games ( see e.g. , ) .it may be noted that all the stochastic strategies , being parallel in computational mode , have the advantage that they converge to solution at smaller time steps ( or weakly dependent on ) while for deterministic strategies the convergence time is typically of order of , which renders such strategies useless in the truly macroscopic ( ) limits .however , deterministic strategies are useful when is small and rational agents can design appropriate punishment schemes for the deviators ( see ) . in brief, the study of the kpr problem shows that while a dictated solution leads to one of the best possible solution to the problem , with each agent getting his dinner at the best ranked restaurant with a period of evenings , and with best possible value of ( ) starting from the first evening itself .the parallel decision strategies ( employing evolving algorithms by the agents , and past informations , e.g. , of ) , which are necessarily parallel among the agents and stochastic ( as in democracy ) , are less efficient ( ; the best one discussed here in sec . , giving only ) .we also note that most of the `` smarter '' strategies lead to much lower efficiency .is there an upper bound for the value of utilization fraction ( less than unity ; easily achieved in the dictated solution ) for such stochastic strategies employed in parallel ( democratically ) by the agents in kpr ?if so , what is this upper bound value ? 
also, what is the learning time required to arrive at such a solution (compared to the zero waiting time for arriving at the most efficient dictated solution) in kpr? these are the questions to be investigated in the future. the authors would like to thank anindya sundar chakrabarti and satya ranjan chakravarty for useful comments and discussions. suppose there are agents and restaurants. an agent can select any restaurant with equal probability. therefore, the probability that a single restaurant is chosen by agents is given by a poisson distribution in the limit: therefore the fraction of restaurants not chosen by any agent is given by , and that implies that the average fraction of restaurants occupied on any evening is given by in the kpr problem. in this case, an agent goes to the -th ranked restaurant with probability ; that is, given by ([generalstoch]) in the limit , . starting with restaurants and agents, we make pairs of restaurants, and each pair has restaurants ranked and where . therefore, an agent chooses any pair of restaurants with uniform probability, or equivalently agents choose randomly from pairs of restaurants. therefore the fraction of pairs selected by the agents is (from eq. ([eq:poisson_mm])). also, the expected number of restaurants occupied in a pair of restaurants with ranks and by a pair of agents is . therefore, the fraction of restaurants occupied by pairs of agents is . hence, the actual fraction of restaurants occupied by the agents is . ghosh, a., chakrabarti, a. s., chakrabarti, b. k., 2010, _kolkata paise restaurant problem in some uniform learning strategy limits_, in econophysics & economics of games, social choices & quantitative techniques, new economic windows, eds. b. basu, b. k. chakrabarti, s. r. chakravarty, k. gangopadhyay, springer, milan, pages 3-9. | we study the dynamics of a few stochastic learning strategies for the ``kolkata paise restaurant'' problem, where agents choose among equally priced but differently ranked restaurants every evening such that each agent tries to get dinner in the best restaurant (each serving only one customer, with the rest arriving there going without dinner that evening). we consider the learning strategies to be similar for all the agents and assume that each follows the same probabilistic or stochastic strategy dependent on the information of the past successes in the game. we show that some ``naive'' strategies lead to much better utilization of the services than some relatively ``smarter'' strategies. we also show that a service utilization fraction as high as can result for a stochastic strategy, where each agent sticks to his past choice (independent of whether success was achieved or not; with probability decreasing inversely in the past crowd size). the numerical results for the utilization fraction of the services in some limiting cases are analytically examined. |
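the limiting result derived in the appendix above (the fraction of occupied restaurants tends to 1 - 1/e, about 0.63, when every agent picks a restaurant uniformly at random) is easy to reproduce numerically; the following monte carlo sketch is only an illustration and the sample sizes are arbitrary.

```python
import math
import random

def random_choice_utilization(n, evenings=2000, seed=1):
    """Monte Carlo check of the appendix result: when each of the n agents
    picks one of the n restaurants uniformly at random, the expected fraction
    of restaurants that receive at least one agent tends to 1 - 1/e."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(evenings):
        occupied = {rng.randrange(n) for _ in range(n)}
        total += len(occupied) / n
    return total / evenings

if __name__ == "__main__":
    print("simulated f :", round(random_choice_utilization(1000), 4))
    print("1 - 1/e     :", round(1.0 - math.exp(-1.0), 4))   # ~0.6321
```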
a number of investigations aimed at checking the absolute stability of the half-life of different radioactive isotopes and at searching for possible time variations of the half-life constants under the action of known and possibly unknown natural factors have been carried out during the last years. in work it was shown that, after many years of investigations with scintillation and semiconductor gamma-ray detectors, a conclusion was made about changes of the decay rate of radioactive elements with 24-hour and 27-day periodicities. decay-rate data from many years of measurements for the (alpha-decay), measured in the brookhaven national laboratory (usa, 1982-1986), and for the (beta- and gamma-decays), measured in the physikalisch-technische-bundesanstalt (germany, 1983-1998), were analyzed in works , , . decay-rate variations with a one-year period and a maximum amplitude of .15% in january-february were found in both data sets. the authors considered assumed seasonal variations of the detector system characteristics and/or a direct annual modulation of the count rate caused by some unknown factor depending on the sun-earth distance as possible causes of such count-rate variations. the weak point of long-time count-rate monitoring experiments is the possible influence of meteorological, climatic and geophysical factors on the count rate of a source-detector pair. this shortcoming can be practically totally avoided in measurements based on a direct registration of the nuclear lifetime between birth and decay. such a method allows us to answer the question of a possible change with time of just the nuclear decay constant. besides that, a direct registration of the nuclear lifetime allows us to study the exponential decay law. some theoretical models predicted that the decay curve does not exactly follow an exponential law in the short- and long-time regions, in particular due to the so-called quantum zeno effect , , , . experimentally the zeno effect was demonstrated in a repeatedly measured two-level system undergoing rabi transitions, but it has not been observed in spontaneous decays. a very important condition for measuring possible deviations from the exponential law is that all investigated nuclei have the same age. a primary task of the investigation presented in this work was to test the constancy of the half-life of during several years. it decays with a 164.3 half-life by emitting the 7.687 mev alpha-particle. this isotope appears mainly in the excited state ( %) in the beta-decay. half-lives of the excited levels do not exceed 0.2 ps and they de-excite instantly on the scale of the half-life. energies of the most intensive gamma-lines are equal to 609.3 kev (46.1% per decay), 1120 kev (15.0%) and 1765 kev (15.9%). so, the beta-particle and gamma-quantum are emitted at the moment of birth of the nucleus (start) and the alpha-particle is emitted at the decay moment (stop). measurement of ``start-stop'' time intervals allows one to construct the decay curve over an observation time and to determine the half-life from its shape.
the isotope is an intermediate daughter product in the series. it gives almost all the alpha-activity of the series. this isotope can be produced at a constant rate if an intermediate isotope ( y) of the series is used as a source. the time for equilibrium to be reached in the partial series equals days and is determined by the longest-lived isotope, (3.82 days). the decay rate does not change after this time if the source is hermetically sealed to prevent an escape of the radon. two test sources with activities of bq and bq were prepared in the v.g. khlopin radium institute (st. petersburg, russia) in march 2008. a thin transparent radium spot is deposited in the center of a polished plastic scintillator disc of 18 mm diameter and 0.8 mm thickness. the side of the disc with the spot is covered by a similar disc hermetically glued on the periphery (2.5 mm width). the plastic scintillator (ps) registers all charged particles generated in the decay series. the ps light yield for alpha-particle absorption is of that for an electron of the same energy. due to this fact the alpha-spectra mix with the beta-spectra if the scintillator is thick enough to absorb the complete energy of an electron. to prevent this effect and separate these spectra, the scintillator discs were made thin enough that electrons lose only part of their energy. two setups named tau-1 and tau-2 were made to measure the radiations of these sources. a schematic sectional view of the tau-1 with the electronics is shown in fig.[fig1]. [figure caption: schematic cross-sectional view of the tau-1 setup and electronics block diagram.] the detector d1 contains the photomultiplier (feu-85), which views the source disc through a plastic light guide. a teflon reflector is placed under the source to improve light collection. the whole assembly is packed in a stainless steel body. the detector d1 is placed on the butt-end of the scintillation detector d2 with a low-background nai(tl) crystal of 80 mm diameter and 160 mm length. the crystal has a stainless steel cover and a quartz entrance window. the crystal with photomultiplier (feu-110) is packed in a copper body. the two detectors are mounted vertically in a low-background shield made from pb(10 cm) + fe(15 cm) + w(3 cm). the tau-1 is located in the underground laboratory ``kapriz'' of the baksan neutrino observatory of inr ras at 1000 m of water equivalent depth. the cosmic-ray background is decreased by times due to the rock thickness in comparison with the ground surface one. the walls of the laboratory are covered by a low-background concrete, thus decreasing by times the gamma-ray background from the natural radioactivity of the rock. the total background suppression inside the shield is equal to times. signals from the photomultipliers are read by charge-sensitive preamplifiers (csa) and fed to two inputs of a digitizer (digital oscilloscope - do) la-n20-12pci which is inserted into a personal computer (pc). pulses are digitized with 6.25 mhz frequency. the do pulse recording is started by a signal from the d2, which registers the decay's gamma-quanta. a d2 signal opens the record of a sequence with 655.36 total duration, where the first 81.92 is a ``prehistory'' and the last 573.44 is a ``history''. the duration of a ``history'' exceeds three half-lives. the ps disc source in the tau-2 installation is fixed at the end of an air light guide having a smooth wall made of vm-2000 light-reflecting film. the light guide is put into a 0.8 mm thick stainless steel rectangular frame with inner dimensions of mm. an open butt-end of the
frame is welded into the bottom of a cylindrical stainless steel body of 45 mm diameter and 165 mm length where the photomultiplier (feu-85) is placed. this constitutes the detector d1 of the tau-2. two scintillation detectors (d2a and d2b) made of large nai(tl) crystals ( mm) are used for the registration of the gamma-quanta. each crystal has a stainless steel cover and a quartz entrance window. photomultipliers (feu-49) are used for the light registration. the d1 is installed into the gap between d2a and d2b. the scheme of pulse registration in the tau-2 is similar to the one of tau-1, but the signals from d2a and d2b are preliminarily summed in an additional summing amplifier. the tau-2 is located in the low-background room of the deepest underground laboratory nlgz-4900 of the bno inr ras at 4900 m of water equivalent depth. the cosmic-ray background is decreased by times by the rock thickness in comparison with the ground surface one. the room walls are made of polyethylene (25 cm) + cd(0.1 cm) + pb(15 cm). the detectors are surrounded with an additional lead shield of 15 cm thickness aimed at absorbing the radiation from decays of in the air, in equilibrium with the radon in the room. the working characteristics of each installation were tested preliminarily by using a standard multichannel analyzer ai-1024. _*tau-1 installation*_ an amplitude spectrum from the d1, collected during 600 s, is shown in fig.[fig2]. [figure captions: d1 detector pulse amplitude spectrum collected in 600 s; d2 detector pulse amplitude spectra (a) of the source (30 min collection time) and (b) of the background normalized to 30 min (72 hours collection time).] the integral count rate is equal to 179 s^{-1}. a wide peak above the 500th channel corresponds to alpha-particles. the count rate within these channels is equal to 21.2 s^{-1}. an amplitude spectrum from the d2, collected during 30 min, is shown in fig.[fig3]a. the main part of the spectrum is due to gamma-quanta from the source. a spectrum of the d2 background was measured over 72 hours when the d1 was removed from the shield. its spectrum is shown in fig.[fig3]b. the integral count rate in the spectrum in fig.[fig3]a is 4.90 s^{-1} (1.64 s^{-1} at energies above 400 kev) and that in fig.[fig3]b is 0.20 s^{-1}. _*tau-2 installation*_ the integral count rate of the d1 is equal to 70 s^{-1}. the background count rate of the d2a detector at energies above 400 kev is equal to 2.3 s^{-1}. the total count rate with the source is equal to 5.0 s^{-1}. the light collection coefficient of the d1 in the tau-2 was found to be 0.51 of the one in the tau-1. the tau-1 and tau-2 installations were placed into underground low-background shields to provide stable environmental conditions and a low cosmic-ray background. this allows us to decrease possible variations of the electronics characteristics and to exclude random d1-d2 coincidences caused by d2 registration of background muons and gamma-quanta.
control of the do operation in the regimes of on-line event separation, data collection and visualization was carried out by a specially designed pc program. this program allows the do to record only coinciding d1-d2 events, with two pulses in the d1 ``history'' and one pulse in the d2 ``history''. the latter is in prompt coincidence with the first d1 pulse. an example of such an event is shown in fig.[fig4]. [figure caption: an example of a coinciding d1-d2 event with two pulses in the d1 ``history'' (blue) and one pulse in the d2 ``history'' (red).] the number of such events makes up % of the total amount. their count rates are equal to s^{-1} for tau-1 and s^{-1} for tau-2 for an energy threshold of 400 kev in the channels of the d2 detector. spectra of the d2 pulses (gamma-quanta) and the second d1 pulses (alpha-particles) for the tau-1 are shown in fig.[fig5](a,b). [figure caption: spectra of the d2 gamma-quantum pulses (a) and the second d1 alpha-particle pulses (b) of the tau-1 setup.] the double peak of the fig.[fig5]b spectrum can be explained by the different light outputs of the two ps disks. the lower-energy peak was found to move towards the origin of coordinates with time. this effect could be connected with a degradation of the scintillation properties of the ps surface under the active spot due to diffusion of chemical components or decay products into a surface layer of the source-deposited disc. the rate of direct data recording is 5.8 gb. such a large information flux complicates the processes of data collection and off-line processing. the number of pulses and their delays for each event are analysed on-line to decrease the usage of pc memory. ``wrong'' events are excluded. only the arrival time and the amplitude of the pulses of the ``right'' events are recorded in the pc memory. the rate of information accumulation thereby decreased to 8.5 mb. measurements with the tau-1 setup started in april 2008 and the ones with tau-2 started in july 2009. a continuous data set is divided into a sequence of equal intervals. a spectrum of delays between the first and second d1 pulses is constructed for each interval. this spectrum is approximated by an exponential function viewed as corresponding to the decay curve. a value of the half-life is found from this curve. the series of values is analysed to find out any regularity or variation. the durations of the analyzed data sets of tau-1 and tau-2 were equal to 1038 days and 562 days, respectively.
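the per-interval analysis described above (building the spectrum of start-stop delays and fitting an exponential decay curve to it) can be sketched as follows. this is an illustrative reconstruction, not the experiment's actual analysis code; it assumes the delays are expressed in microseconds, takes the recorded ``history'' window of 573.44 as the fit range, and uses the nominal 164.3 half-life only to generate synthetic test data.

```python
import numpy as np

def half_life_from_delays(delays_us, t_max=573.44, n_bins=128):
    """Estimate the half-life from a sample of start-stop delay times
    (assumed to be in microseconds): build the delay spectrum and fit an
    exponential decay curve.  A simple weighted linear fit of log(counts)
    is used here; the experiment's actual fitting procedure may differ."""
    counts, edges = np.histogram(delays_us, bins=n_bins, range=(0.0, t_max))
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0                        # skip empty bins (log of zero)
    # log N(t) = log A - (ln 2 / T_half) * t ; weight bins by sqrt(counts)
    slope, _ = np.polyfit(centers[mask], np.log(counts[mask]),
                          deg=1, w=np.sqrt(counts[mask]))
    return np.log(2.0) / (-slope)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_t_half = 164.3                      # nominal value, assumed microseconds
    sample = rng.exponential(true_t_half / np.log(2.0), size=200_000)
    sample = sample[sample < 573.44]         # keep only delays inside the window
    print("fitted half-life:", round(half_life_from_delays(sample), 2))
```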
the half-life value obtained for a decay curve constructed with the whole set of tau-1 data was found to be . this curve is shown in fig.[fig6]. [figure caption: summary exponential curve constructed for the whole data set collected over 1038 days by tau-1.] the half-life value obtained for the tau-2 data was found to be . the difference between the values is connected with uncertainties of the frequency calibrations of the digitizers. each interval of the whole data set was chosen to be 7 days for the analysis of the behavior of the half-life constant over a long time period. the sequences of values for tau-1 and tau-2 are shown in fig.[fig7] (a) and (b), respectively. [figure captions: time dependence of the half-life determined for one-week collection interval data sets from tau-1 (a) and tau-2 (b); dependence of for the tau-1 data.] to search for possible annual variations, the data were normalized to the averaged values and compared with a periodic function . [figure captions: normalized graph of the time dependence of the half-life for tau-1 in comparison with the function , days; normalized graph of the time dependence of the half-life for tau-2 in comparison with the function , days.] the maximum value was reached at . a selection of the value corresponding to the minimum was done for this value, and was found to be . the values of , and days were found for tau-2 in a similar way. the corresponding graphs for the tau-1 and tau-2 data are shown in fig.[fig9] and fig.[fig10]. the estimated values of are the same for both setups, but the phases differ by 167 days. the mean square error of for degrees of freedom can be estimated as . it was found that for the tau-1 ( and ) and for the tau-2 ( and ). [figure caption: the decay dependence for the detector tau-2; points are the experimental data; curve _1_ is the result of the approximation by the exponential law; curve _2_ is the result of the approximation using eq. (1).] the low reliability of the obtained results and the disagreement between the values allow one to determine only an upper limit on the amplitude of a possible annual variation of the decay constant, which was found to be (90% c.l.). an interval of data spacing of 1 day was chosen in order to search for possible short-period variations. no significant frequency peaks in the range day have been found in the fourier spectra constructed from the two such series. we have tested the theoretical prediction of a possible violation of the radioactive decay exponential law at small values of the nuclear lifetime. for this purpose the total data set of the detector tau-2 was used. the exponential curve was constructed for the time interval . the half-life of the nucleus was determined using data for the time interval . then the residual part of the data was used for the analysis of the deviations between the data and the calculated exponential curve. the deviation was searched for in the form: limiting the expansion to n = 2, we find coefficients of about 4.01 and 3 for the time interval . the results of the analysis are shown in fig. 11. the points in the figure are the experimental data for the time interval . curve _1_ is the part of the exponential curve calculated for the time interval . curve _2_ is the result of the approximation of the data for the time interval , using eq. ([eq1]). two underground installations aimed at monitoring the time stability of the half-life have been constructed.
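the search for an annual modulation described above (comparing the normalized weekly series with a periodic term and scanning for the phase that minimizes the chi-square) can be illustrated with the following sketch; the scan grid, the least-squares amplitude estimate and the synthetic test series are our own assumptions and are not taken from the experiment.

```python
import numpy as np

def annual_variation_scan(t_days, values, period=365.0):
    """Scan the phase phi of a periodic term A*cos(2*pi*(t - phi)/period)
    fitted to the normalized half-life series, mimicking the search for an
    annual variation described above.  For each trial phase the best
    amplitude is obtained by linear least squares; returns (phase,
    amplitude, chi-square) of the best fit.  Illustrative sketch only."""
    y = np.asarray(values) / np.mean(values) - 1.0      # normalized deviations
    t = np.asarray(t_days)
    best = None
    for phi in np.arange(0.0, period, 1.0):
        basis = np.cos(2.0 * np.pi * (t - phi) / period)
        a = np.dot(basis, y) / np.dot(basis, basis)     # least-squares amplitude
        chi2 = np.sum((y - a * basis) ** 2)
        if best is None or chi2 < best[2]:
            best = (phi, a, chi2)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, 1038, 7.0)                         # one point per week
    series = 164.3 * (1.0 + 0.001 * rng.standard_normal(t.size))
    phi, a, chi2 = annual_variation_scan(t, series)
    print(f"best phase {phi:.0f} d, amplitude {100 * a:.3f} %")
```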
baurov, yu.g. sobolev, yu.v. ryabov, v.f. kushniruk, ``experimental investigations of changes in the rate of beta decay of radioactive elements'', phys. of atomic nucl., v. *70*, no. 11, 1825 (2007). jere h. jenkins et al., ``analysis of experiments exhibiting time-varying nuclear decay rates: systematic effects or new physics?'', arxiv:1106.1678 [nucl-ex]; ``evidence for time-varying nuclear decay rates: experimental results and their implications for new physics'', arxiv:1106.1470 [nucl-ex]. | a method and results of an experimental test of the time stability of the half-life of alpha-decay nuclei are presented. two underground installations aimed at monitoring the time stability have been constructed. the time of measurement exceeds 1038 days for one setup and 562 days for the other. it was found that the amplitude of a possible annual variation of the half-life does not exceed 0.2% of the mean value. a limit on the deviation of the decay curve from the exponential law in the range has been found. |
as is well known, shannon's information theory deals mainly with the representation and transmission of information. in the development of both source and channel coding theorems, especially for their converses, fano's inequality serves as the key tool. let and be two random variables following and . define , then where is the cardinality of and is the binary entropy function . as an inference of this result, the following theorem gives an upper bound on the size of a code as a function of the average error probability. [th:fano_converse] every -code (average probability of error ) for a random transformation satisfies where , . although simple, these two theorems are insightful and easy to compute both in theory and numerically. the classical coding theorems are mainly based on the asymptotic equipartition property (aep) and the typical / joint-typical decoder. however, application of the aep requires infinitely long codewords, where the error probability goes either to 0 or 1 as the code length goes to infinity. although these coding theorems provide fundamental limits for modern communications, research on finite blocklength coding schemes is more important in engineering applications. given the block length, upper bounds on the achievable error probability and the achievable code size were obtained in . most importantly, a tight approximation for the achievable maximal rate given the error probability and code length was presented. in this paper, we consider the entropy of one random vector conditioned on another, and the corresponding probability of error in guessing one from the other, by proposing an extended fano's inequality. the extended fano's inequality has better performance by taking advantage of a more careful consideration of the error patterns. it suits coding in the finite blocklength regime better and is useful in bounding the mutual information between random vectors, and the codebook size given the block length and an average symbol error probability constraint. in the following part of this paper, we first present the extended fano's inequality in section [sec:gener_fano]. lower bounds on the mutual information between two random vectors and an upper bound on the codebook size given the block length and error probability are given in section [sec:converse]. an application of the obtained result to the -ary symmetric channels (qsc) is presented in section [sec:app], which shows that the extended fano's inequality is tight for such channels. finally, we conclude the paper in section [sec:conclusion]. throughout the paper, vectors are indicated by bold symbols. although fano's inequality has been used widely in the past few years, it can be improved by treating the error events more carefully. in this section, a refinement of fano's inequality is presented, which is tighter and more applicable for finite blocklength coding design. fano's inequality extension [th:gner_fano]: suppose that and are two -dimensional random vectors where and take values on the same finite set with cardinality . then the conditional entropy satisfies where is the discrete entropy function. is the error distribution, where the error probabilities are for .
is the generalized hamming distance, defined as the number of symbols in that are different from the corresponding symbols in . define the error random variable as if for . according to the chain rule of the joint entropy, can be expressed in the following ways, [dr:fano_g] particularly, in ([dr:fano_g].a), it is clear that . then we have where (a) follows from the fact that entropy increases if its conditioning is removed, i.e., . particularly, we have . according to its definition, we have when considering , we know that there are discordant symbol pairs between and . for each fixed , every symbol in which belongs to a discordant pair has possible choices other than the one in . thus has choices. besides, there are selections for the positions of the error symbols for each given . therefore, the total number of possible codewords is , which means particularly, note that since there is no uncertainty in determining from if they are the same. then, ([dr:condi_err]) can be written as by combining ([dr:hx_y]) and ([hey]), the proof of the theorem is completed. in fact, the error distribution is easy to calculate, especially for some special channels. for example, the discrete -ary symmetric channel is shown in fig.[fig:qbc]. in this situation, . [figure caption: -ary symmetric channel.] it is clear that theorem [th:gner_fano] is a generalization of fano's inequality. specifically, when the block length is 1, i.e., , we have , , and . in this case, theorem [th:gner_fano] reduces to which is exactly the same as fano's inequality. as a variant of theorem [th:gner_fano], the following theorem presents the conditional entropy in terms of relative entropy. [th:gner_fanorela] suppose that and are two -dimensional random vectors where and take values on the same finite set with cardinality . then the conditional entropy satisfies where is the discrete relative entropy function. the error probabilities are for . denote , and is a probability distribution with . firstly, we know from the binomial theorem that is a probability distribution. let and ; we know that is a probability distribution where . according to theorem [th:gner_fano], where (a) holds because for . by the definition of , it is clear that it reflects the error performance of the channel and is totally determined by the channel itself. on the contrary, is a distribution where each error pattern is assumed to appear equiprobably. in this situation, the probability that there are error symbols in the codeword is . thus is the distance between the actual error pattern distribution and the uniform error pattern distribution. particularly, if the channel is an error-free one, i.e., and for , we have . according to theorem [th:gner_fanorela], we get , which is reasonable given the assumption on the channel. in this sense, theorem [th:gner_fanorela] is tight. fano's inequality has been playing an important role in the history of information theory because it builds a close connection between conditional entropy and error probability. the extended fano's inequality given in theorem [th:gner_fano] is especially applicable in finite blocklength coding. it also presents the relationship between conditional entropy and error probabilities, which are defined as follows. _block error probability_ is the average error probability of a block (codeword), i.e., . then we have . thus, we have for any . _symbol error probability_ is the average error probability of a symbol, i.e.
, , which can be expressed by . in many communication systems, especially those using error-correcting channel codes, a block error does not imply a system failure. on the contrary, the error can be corrected or part of the block can still be used with some performance degradation. in this case, the symbol error is more useful than the block error. particularly, a corollary following from our result, as shown below, addresses this problem. [cor:1] suppose that and are two -dimensional random vectors where and take values on the same finite set with cardinality . then the conditional entropy satisfies where is a probability distribution with . according to theorem [th:gner_fano], one has since the distribution can be expressed as , which is a binomial distribution with symbol error probability . is a measure of the distance between the error probability distribution and the binomial distribution with parameter . if one takes , corollary [cor:1] reduces to , which is a frequently used form of fano's inequality. it is seen that the extended fano's inequality builds a natural connection between conditional entropy and symbol error and is especially applicable for finite-length coding. based on the proposed generalized fano's inequality, the following lower bounds on the mutual information between and can be obtained. [th:gner_fano_ixy] suppose that and are two -dimensional random vectors that satisfy the following. 1. and take values on the same finite set with cardinality . 2. either or is equiprobable. 3. the error probabilities are for . denote the error distribution as . then the mutual information between and satisfies where is the entropy function. if is equiprobable, . on the other hand, the mutual information is given by . together with theorem [th:gner_fano], note that and are totally symmetric in ([dr:gener_ixy1]). therefore, if is assumed to be equiprobable at the beginning of the proof, one can get the same result. thus theorem [th:gner_fano_ixy] is proved. by using theorem [th:gner_fanorela], the mutual information between and can be bounded by the following corollary. [cor:4] suppose that and are two -dimensional random vectors that satisfy the following. 1. and take values on the same finite set with cardinality . 2. either or is equiprobable. 3. the error probabilities are and . then the mutual information between and satisfies where is the discrete relative entropy function and is a probability distribution with . the distribution means that the symbol in takes any value on with equal probability, regardless of what is sent in . so it is a purely random distribution in which and are independent of each other. the most desirable coding scheme is one whose error distribution is farthest from , which also ensures a larger coding rate. suppose is a finite alphabet with cardinality . let us consider the input and output alphabets with and a channel to be a sequence of conditional probabilities . we denote a codebook with codewords by . a decoder is a random transformation where indicates that the decoder declares an error. if the messages are equiprobable, the average error probability is defined as . a codebook with codewords and a decoder whose average probability of error is smaller than are called an -code.
an upper bound on the size of a code as a function of the average probability of symbol error follows from corollary [cor:1]. [th:conver_1] every -code for a random transformation satisfies where is the error distribution with for , and is a probability distribution with . since the messages are equiprobable, we have . according to corollary [cor:1], solving from ([dr:th_con1]), one can get ([rt:conver_1]), which completes the proof. consider information transmission over a memoryless discrete -ary symmetric channel with a channel code with crossover probability , as shown in fig.[fig:qbc]. in this case, the probability of symbol error is . then the error probabilities are and the block error probability is . using the extended fano's inequality in theorem [th:gner_fano], we have . it is easy to see that the conditional entropy in theory is . by corollary [cor:4], the mutual information is lower bounded by , while the capacity of the memoryless qsc is given by and the relative entropy can be derived as . for a given average symbol error probability constraint , the upper bound on the maximum codebook size given by theorem [th:conver_1] is + n epsilon log(q-1). on the other hand, by fano's inequality we have with given by ([eq:pb]). then the lower bound on the mutual information is , and finally the upper bound on the codebook size is . suppose the qsc parameters are and ; we calculated the bounds on the conditional entropy, the mutual information and the codebook size by our proposed results and by fano's inequality. firstly, the upper bound on the conditional entropy is presented in fig.[fig:hxy]. specifically, is obtained by the extended fano's inequality ([sim:hexy]), is calculated according to fano's inequality ([sim:hfxy]) and is the conditional entropy in theory ([sim:hxy]). it is clear that theorem [th:gner_fano] is tighter than fano's inequality. particularly, we have for the qsc. this is because the error distributions are the same for any , so holds. besides, the error pattern is uniformly distributed for a given , regardless of , and holds. therefore, the upper bound is tight. however, for fano's inequality, there are relaxations both in to and in to . similarly, the lower bound on the mutual information given by ([sim:iexy]) coincides with the one in theory, given by ([sim:ixy]), and is better than that given by fano's inequality ([sim:ifxy]). when we use the upper bound on the codebook size in theorem [th:conver_1], it should be noted that it is presented as a function of the symbol error probability. in fact, is always larger than . therefore, we use the same fraction of them in the calculation of the bounds to make the comparison meaningful, i.e., for ([sim:mexy]) and for ([sim:mxy]). it is also seen from fig.[fig:ixy] that our newly developed result is tighter. the performances of theorem [th:gner_fano_ixy] and theorem [th:conver_1] versus the qsc parameter are shown in fig.[fig:epslong], where the block length is chosen as . as shown, the mutual information bound is tight and our results are much better than fano's inequality. in the calculation of the upper bounds on the codebook size, the error probability constraints are also chosen as and so that they are comparable.
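the tightness of the extended bound for the memoryless qsc is easy to check numerically: with the binomial error-pattern distribution, the bound of theorem [th:gner_fano] coincides with the exact conditional entropy, while the classical fano bound based on the block error probability is looser. the sketch below uses illustrative values n = 10, q = 4 and symbol error probability 0.1 (the actual parameter values used for the figures are not reproduced here) and logarithms to base 2.

```python
import math

def h2(p):
    """binary entropy function in bits"""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def qsc_bounds(n, q, eps):
    """Exact conditional entropy H(X|Y) of a memoryless q-ary symmetric
    channel with symbol error probability eps, blocklength n and uniform
    input, compared with the extended Fano bound (error pattern distribution
    p_j = C(n,j) eps^j (1-eps)^(n-j)) and with the classical Fano bound that
    uses only the block error probability."""
    exact = n * (h2(eps) + eps * math.log2(q - 1))
    p = [math.comb(n, j) * eps**j * (1 - eps)**(n - j) for j in range(n + 1)]
    h_e = -sum(pj * math.log2(pj) for pj in p if pj > 0)
    extended = h_e + sum(p[j] * math.log2(math.comb(n, j) * (q - 1)**j)
                         for j in range(1, n + 1))
    p_block = 1.0 - (1.0 - eps)**n
    classical = h2(p_block) + p_block * math.log2(q**n - 1)
    return exact, extended, classical

if __name__ == "__main__":
    for line in zip(("exact H(X|Y)", "extended Fano", "classical Fano"),
                    qsc_bounds(n=10, q=4, eps=0.1)):
        print("%-15s %.4f bits" % line)   # the first two lines coincide
```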
in this paper, we revisited fano's inequality and extended it to a general form. particularly, the relation between the conditional entropy and the error probability of two random vectors was considered, rather than that between two random variables. this makes the developed results more suitable for source / channel coding in the finite blocklength regime. by investigating the block error pattern in more detail, the conditional entropy of the original random vector given the received one is upper bounded more tightly by the extended fano's inequality. furthermore, the extended fano's inequality is completely tight for some symmetric channels such as the -ary symmetric channels. converse results are also presented in terms of lower bounds on the mutual information and an upper bound on the codebook size under the blocklength and symbol error constraints, which also have better performance. this work was partially supported by the inc research grant of the chinese university of hong kong and the china major state basic research development program (973 program) no. 2012cb316100(2). c. e. shannon, ``a mathematical theory of communication,'' _bell syst. tech. j._, pp. 623-656, oct. 1948. r. m. fano, _class notes for transmission of information_, course 6.574, cambridge, ma: mit. r. w. yeung, _information theory and network coding_, berlin / new york: springer, 2008. t. s. han and s. verdu, ``generalizing the fano inequality,'' _ieee trans. inform. theory_, vol. 40, no. 7, pp. 1247-1250, jul. 1994. s. w. ho and s. verdu, ``on the interplay between conditional entropy and error probability,'' _ieee trans. inform. theory_, vol. 56, no. 12, pp. 5930-5942, dec. 2010. d. l. tebbe and s. j. dwyer, iii, ``uncertainty and probability of error,'' _ieee trans. inform. theory_, pp. 516-518, may 1968. | fano's inequality reveals the relation between the conditional entropy and the probability of error. it has been the key tool in proving the converse of coding theorems in the past sixty years. in this paper, an extended fano's inequality is proposed, which is tighter and more applicable for coding in the finite blocklength regime. lower bounds on the mutual information and an upper bound on the codebook size are also given, which are shown to be tighter than the original fano's inequality. especially, the extended fano's inequality is tight for some symmetric channels such as the -ary symmetric channels (qsc). fano's inequality, finite blocklength regime, channel coding, shannon theory. |
optimal stopping problems arise in various areas ranging from the classical sequential testing / change - point detection problems to applications in finance .although all formulations reduce to the problem of maximizing / minimizing the expected payoff over a set of stopping times , the solution methods are mostly problem - specific ; they depend significantly on the underlying process , payoff function and time - horizon .this paper pursues a common tool for the class of _ infinite - time horizon _ optimal stopping problems for _ spectrally negative lvy processes _ , or lvy processes with only negative jumps . by extending the classical continuous diffusion model to the lvy model, one can achieve richer and more realistic models .in mathematical finance , the continuity of paths is empirically rejected and can not explain , for example , the volatility smile and non - zero credit spreads for short - maturity corporate bonds .these issues can often be alleviated by introducing jumps ; see , e.g. .naturally , however , the optimal stopping problem becomes more challenging and can not enjoy a number of results obtained under the continuity of paths . in the case of one - dimensional continuous diffusion ,a full characterization of the value function is known and some practical methods have been developed ( see e.g. ) .most of these results rely heavily on the continuity assumption ; once jumps are involved , only problem - specific approaches are currently available . despite these differences, there exists a common tool known as the _ scale function _ for both continuous diffusion and spectrally negative lvy processes . the scale function forthe former enables one to transform a problem of any arbitrary diffusion process to that of a standard brownian motion . for the latter ,the scale function has been playing a central role in expressing fluctuation identities for a general spectrally negative lvy process ( see ) . by taking advantage of the potential measure that can be expressed using the scale function, one can obtain the overshoot distribution at the first exit time , which is generally a big hurdle that typically makes the problem intractable .the objective of this paper is to pursue , with the help of the scale function , a common technique for the class of optimal stopping problems for spectrally negative lvy processes .focusing on the first time it down - crosses a fixed threshold , we express the corresponding expected payoff in terms of the scale function .this semi - explicit form enables us to differentiate and take limits thanks to the smoothness and asymptotic properties of the scale function as obtained , for example , in . 
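before turning to the formal setup, it may help to recall that the q-scale function is characterized by its laplace transform, int_0^infty e^{-beta x} W^{(q)}(x) dx = 1/(psi(beta) - q) for beta above the right inverse Phi(q) of the laplace exponent (the relation is restated later in the paper), and that this relation can be checked, or exploited for numerical inversion, directly. the following sketch performs the check for the jump-free case X_t = mu t + sigma B_t, an assumption made only because its scale function has a simple closed form; the parameter values are arbitrary.

```python
import numpy as np

# illustrative parameters for a jump-free spectrally negative process
# X_t = mu*t + sigma*B_t (chosen only for this sketch: its scale function
# has a simple closed form)
mu, sigma, q = 1.0, 1.5, 0.05

def psi(beta):
    """laplace exponent of the drifted brownian motion"""
    return mu * beta + 0.5 * sigma**2 * beta**2

delta = np.sqrt(mu**2 + 2.0 * q * sigma**2) / sigma**2

def W_q(x):
    """closed-form q-scale function of the drifted brownian motion"""
    return (2.0 / (sigma**2 * delta)) * np.exp(-mu * x / sigma**2) * np.sinh(delta * x)

# check the defining relation  int_0^inf e^{-beta x} W^{(q)}(x) dx = 1/(psi(beta)-q)
beta = 2.0                                   # any beta with psi(beta) > q
x = np.linspace(0.0, 60.0, 600_001)          # truncated domain; integrand decays fast
f = np.exp(-beta * x) * W_q(x)
lhs = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule
print("numerical transform :", round(float(lhs), 6))
print("1/(psi(beta) - q)   :", round(1.0 / (psi(beta) - q), 6))
```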
by differentiating the expected payoff with respect to the threshold level , we obtain the _ first - order condition _ as well as the candidate optimal level that makes it vanish .we also obtain the _ continuous / smooth fit condition _ when the process is of bounded variation or when it contains a diffusion component .these conditions are in fact equivalent and can be obtained generally under mild conditions .the spectrally negative lvy model has been drawing much attention recently as a generalization of the classical black - scholes model in mathematical finance and also as a generalization of the cramr - lundberg model in insurance .a number of authors have succeeded in extending the classical results to the spectrally negative lvy model by way of scale functions .we refer the reader to for stochastic games , for the optimal dividend problem , for american and russian options , and for credit risk .in particular , egami and yamazaki modeled and obtained the optimal timing of capital reinforcement . as an application of the results obtained in this paper, we give a short proof of the mckean optimal stopping ( perpetual american put option ) problem with additional running rewards , as well as an extension and its analytical solution to .the rest of the paper is organized as follows . in section [ sec : problem ] , we review the optimal stopping problem for spectrally negative lvy processes , and then express the expected value corresponding to the first down - crossing time in terms of the scale function. in section [ sec : fit ] , we obtain the first - order condition as well as the continuous / smooth fit condition and show their equivalence . in section [ section_extension ] ,we solve the mckean optimal stopping problem and an extension to .we conclude the paper in section [ section_conclusion ] .let be a probability space hosting a spectrally negative lvy process characterized uniquely by the _ laplace exponent _ = c \beta + \frac{1}{2}\sigma^2 \beta^2 + \int _ { ( 0,\infty)}(e^{-\beta z}-1+\beta z 1_{\{0< z<1\}})\,\pi({{\rm d}}z ) , \quad { \beta \in \mathbb{r } } , \label{laplace_exp}\end{aligned}\ ] ] where , and is a measure on such that here is the conditional probability where and is its expectation .it is well - known that is zero at the origin , convex on and has a right - continuous inverse : in particular , when we can rewrite where the process has paths of bounded variation if and only if and holds .it is also assumed that is not a negative subordinator ( decreasing a.s . ) .namely , we require to be strictly positive if and holds .let be the filtration generated by and be a set of -stopping times .we shall consider a general optimal stopping problem of the form : , \quad x \in { \mathbb{r}}\label{general_problem}\end{aligned}\ ] ] for some discount factor and locally - bounded measurable functions which represent , respectively , the payoff received at a given stopping time and the running reward up to .typically , its optimal stopping time is given by the _ first down - crossing time _ of the form let us denote the corresponding expected payoff by , \quad x , a \in { \mathbb{r } } , \ ] ] which can be decomposed into where , for every , , \\\gamma_2(x;a ) & : = { \mathbb{e}}^x \left [ e^{-q \tau_a } ( g(x_{\tau_a})-g(a ) ) 1_{\{x_{\tau_a } < a , \ , \tau_a < \infty \}}\right ] , \\\gamma_3(x;a ) & : = { \mathbb{e}}^{x}\left [ \int_0^{\tau_{a } } e^{-qt } h(x_t ) { { \rm d}}t \right ] . 
\end{split } \label{definition_gamma}\end{aligned}\ ] ] shortly below , we express each term via the scale function .this paper does not consider the _first up - crossing time _ defined by because , for the spectrally negative lvy case , the process always creeps upward ( a.s . on ) , and the expression of the expected value is much simplified .we focus on a more interesting and challenging case where the optimal stopping time is conjectured to be a first down - crossing time .associated with every spectrally negative lvy process , there exists a ( q-)scale function that is continuous and strictly increasing on and is uniquely determined by fix . if is the first time the process goes above and is the first time it goes below zero as a special case of , then we have = \frac { w^{(q)}(x ) } { w^{(q)}(a ) } \quad \textrm{and } \quad { \mathbb{e}}^x \left [ e^{-q \tau_0 } 1_{\left\ { \tau_a^+ > \tau_0 , \ , \tau_0 < \infty \right\}}\right ] = z^{(q)}(x ) - z^{(q)}(a ) \frac { w^{(q)}(x ) } { w^{(q)}(a)},\end{aligned}\ ] ] where here we have .\ ] ] we also have = z^{(q)}(x ) - \frac q { \phi(q ) } w^{(q)}(x ) , \quad x > 0 . \label{laplace_tau_0}\end{aligned}\ ] ] in particular , is continuously differentiable on if does not have atoms and is twice - differentiable on if ; see , e.g. , . throughout this paper , we assume the former .we assume that does not have atoms .fix .the scale function increases exponentially ; there exists a ( scaled ) version of the scale function that satisfies and moreover is increasing , and as is clear from , regarding its behavior in the neighborhood of zero , it is known that see lemmas 4.3 - 4.4 of . for a comprehensive account of the scale function ,see .see for numerical methods for computing the scale function .we now express in terms of the scale function . for the rest of the paper , because , we must have . first , the following is immediate by .[ lemma_gamma_1 ] for every , we have .\end{aligned}\ ] ] for and , we use the potential measure written in terms of the scale function . by using theorem 1 of ( see also ) , we have , for every and , & = \int_{b \cap [ a,\infty ) } \left [ \frac { w^{(q)}(x - a ) w^{(q ) } ( a - y ) } { w^{(q)}(a - a ) } - 1_{\ { x \geq y \ } } w^{(q ) } ( x - y ) \right ] { { \rm d}}y.\end{aligned}\ ] ] by taking via the dominated convergence theorem , we can obtain in .for the problem to be well - defined , we assume throughout the paper the following so that is finite . for a complete proof of lemma [ lemma_gamma_3 ] below , see .[ assump_h ] we assume that .[ lemma_gamma_3]for all , we have for , we first define , for every , [ lemma_lipschitz ] fix .suppose 1 . is in some neighborhood of and 2 . satisfies then . see appendix [ proof_lemma_lipschitz ] . for every , we also define by - , and hence the finiteness of also implies that of for any . using these notations , lemma [ lemma_gamma_3 ] together with the compensation formulashows the following .[ lemma_gamma_2 ] if ( 1)-(2 ) of lemma [ lemma_lipschitz ] hold for a given , then let be the poisson random measure associated with and for all .we also let and for any . by the compensation formula ( see , e.g. 
, ) , \\ & = { \mathbb{e}}^x \left [ \int_0^\infty \int_0^\infty n({{\rm d}}t , { { \rm d}}u ) e^ { -q t } ( g(x_{t-}-u)-g(a))_+1_{\{x_{t- } - u \leq a , \ ; \underline{x}_{t- } > a \}}\right ] \\ & = { \mathbb{e}}^x \left [ \int_0^\infty e^ { -q t } { { \rm d}}t \int_0^\infty \pi ( { { \rm d}}u ) ( g(x_{t-}-u)-g(a))_+ 1_{\{x_{t- } - u \leq a , \ ; \underline{x}_{t- } > a \}}\right ] \\ & = \int_0^\infty \pi ( { { \rm d}}u ) { \mathbb{e}}^x \left [ \int_0^\infty e^ { -q t } ( g(x_{t-}-u)-g(a))_+1_{\{x_{t- } - u \leq a , \ ; \underline{x}_{t- } > a \ } } { { \rm d}}t \right ] \\ & = \int_0^\infty \pi ( { { \rm d}}u ) { \mathbb{e}}^x \left [ \int_0^{\tau_a } e^ { -q t } ( g(x_{t-}-u)-g(a))_+1_{\{x_{t- } \leq a + u\ } } { { \rm d}}t \right].\end{aligned}\ ] ] by setting or equivalently in lemma [ lemma_gamma_3 ] , \\ & = w^{(q ) } ( x - a ) \int_0^u e^{- \phi(q ) y } ( g(y+a - u ) - g(a))_+{{\rm d}}y - \int_a^x w^{(q ) } ( x - y ) ( g(y - u)-g(a))_+ 1_{\{y \leq a+u\ } } { { \rm d}}y \\ & = w^{(q ) } ( x - a ) \int_0^u e^{- \phi(q ) y } ( g(y+a - u)-g(a))_+{{\rm d}}y - \int_0^{u \wedge ( x - a ) } w^{(q ) } ( x - z - a ) ( g(z+a - u)-g(a))_+ { { \rm d}}z.\end{aligned}\ ] ] by substituting this , we have \\ & = \int_0^\infty \pi ( { { \rm d}}u ) \left [ w^{(q ) } ( x - a ) \int_0^u e^{- \phi(q ) y } ( g(y+a - u)-g(a))_+ { { \rm d}}y \right . \\ & \qquad \left .- \int_0^{u \wedge ( x - a ) } w^{(q ) } ( x - z - a ) ( g(z+a - u)-g(a))_+ { { \rm d}}z \right],\end{aligned}\ ] ] which is finite by lemma [ lemma_lipschitz ] and .similarly , we can obtain ] and on ; see section [ section_extension ] . for continuous fit , we need to obtain if these limits exist .define also , if it exists .it is easy to see that the result for is immediate by the dominated convergence theorem thanks to lemma [ lemma_lipschitz ] and - . [ lemma_rho ] given ( 1)-(2 ) of lemma [ lemma_lipschitz ] for a given , we have 1 . , 2 . .now lemma [ lemma_rho ] and show this together with shows the following .[ lemma_continuous_fit ] fix and suppose ( 1)-(2 ) of lemma [ lemma_lipschitz ] hold . 1 .if is of bounded variation , the continuous fit condition holds if and only if 2 . if is of unbounded variation ( including the case ) , it is automatically satisfied . for the case is of unbounded variation with , we shall pursue smooth fit condition at .the following lemma says in this case that the derivative can go into the integral sign and we can further interchange the limit .[ lemma_for_smooth_fit ] fix .if and suppose ( 1)-(2 ) of lemma [ lemma_lipschitz ] hold , then { { \rm d}}z , \quad x > a , \label{varrho_derivative_g}\end{aligned}\ ] ] and see appendix [ proof_lemma_for_smooth_fit ] . in the case of unbounded variation with , it is expected that holds but does not .this is because and the limit can not go into the integral .we are now ready to obtain for .[ lemma_smooth_fit ] fix .suppose and ( 1)-(2 ) of lemma [ lemma_lipschitz ] hold .then , 1 . , 2 . , 3 . 
.\(1 ) it is immediate by lemma [ lemma_gamma_1 ] .( 2 ) by , by taking via , we have the claim .( 3 ) using in particular , we have \\ & = w^{(q ) ' } ( 0 + ) \int_0^\infty e^{- \phi(q ) y } h(y+a ) { { \rm d}}y.\end{aligned}\ ] ] by the lemma above , we obtain or equivalently , by virtue of , the smooth fit condition at is equivalent to .[ lemma_smooth_fit2 ] fix .suppose and ( 1)-(2 ) of lemma [ lemma_lipschitz ] hold .then , the smooth fit condition holds if and only if we summarize the results obtained in propositions [ lemma_continuous_fit]-[lemma_smooth_fit2 ] in table [ table : summary ] ..summary of continuous- and smooth - fit conditions .[ cols="^,^,^",options="header " , ] under specification [ assumption_g_h ] , there exists at most one that satisfies because + * verification of optimality : * we let be the unique root of if it exists and set it zero otherwise . as in the case of the mckeanoptimal stopping problem , we only need to show ( iii ) in section [ subsection_obtaining ] because ( i ) holds by and lemma [ lemma_about_i ] and ( ii ) by proposition [ proposition_harmonic ] .if , we have for every .because for every , we shall show that this is negative . because , we must have and hence where the first inequality holds because and are increasing and , the second holds because is nonpositive , and the third holds because and is nonpositive .this together with shows the result .for the rest of the proof , we refer the reader to the proof of proposition 4.1 in . if , then is the optimal stopping time and the value function is given by for every .if , then the value function is given by for every .we have discussed the optimal stopping problem for spectrally negative lvy processes . by expressing the expected payoff via the scale function, we achieved the first - order condition as well as the continuous / smooth fit condition and showed their equivalence .the results obtained here can be applied to a wide range of optimal stopping problems for spectrally negative lvy processes . as examples, we gave a short proof for the perpetual american option pricing problem and solved an extension to egami and yamazaki .for future research , it would be interesting to pursue similar results for optimal stopping games .typically , the equilibrium strategies are given by stopping times of threshold type as in .similarly to the results obtained in this paper , the expected payoff admits expressions in terms of the scale function and hence the first - order condition and the continuous / smooth fit can be obtained analytically .another direction is the extension to a general lvy process with both positive and negative jumps .this can be obtained in terms of the wiener - hopf factor alternatively to the scale function .finally , the results can be extended to a number of variants of optimal stopping such as optimal switching , impulse control and multiple stopping .we choose that satisfies and fix . by the mean value theorem , there exists such that because , for every , we have the taylor expansion implies that , for every , hence uniformly in by on the other hand , by , which is finite by and how is chosen .this allows us to apply the dominated convergence theorem , and we obtain the proof for the left - derivative is similar , and this completes the proof of . :define { { \rm d}}y , \quad z \in { \mathbb{r}}\ ; \textrm{and } \ ; u > 0 . 
\label{q_tilde}\end{aligned}\ ] ] then , by , we have .we use the same as in the proof of above and fix and such that \wedge \delta.\end{aligned}\ ] ] it is then clear that . we shall split and show that these two terms on the right - hand side are bounded in on . for every fixed ,our assumptions imply that is on . by the mean value theorem, there exists such that given at which is differentiable and also satisfying , differentiating obtains because ( or ) and is differentiable at , - g'(a+\xi ) \int_{a+\xi}^{u+a+\xi } w^{(q)}(x - y ) { { \rm d}}y \right| \\ & \leq w^{(q)}(x - a-\xi ) \left| g(a+\xi)-g(a+\xi - u ) - u g'(a+\xi ) \right| \\ & \qquad + \left| g'(a+\xi ) \int_{a+\xi}^{u+a+\xi } ( w^{(q)}(x - a-\xi)-w^{(q)}(x - y ) ) { { \rm d}}y \right| \\ & \leq w^{(q)}(x - a ) \left| g(a+\xi)-g(a+\xi - u ) - u g'(a+\xi ) \right|u |g'(a+\xi)| \left| w^{(q)}(x - a-\xi)-w^{(q)}(x - u - a-\xi ) \right| \\ & \leq f_1(a;u , x ) + f_2(a;u , x)\end{aligned}\ ] ] where first , is finite because , for every , we have and which is -integrable over by . on the other hand , by and because implies , we have and hence which is finite by .we now obtain the bound for the second term of the right - hand side of . for every ( which implies ) , we have where , \\b_2(a , c;u , x ) & : = \frac { |g(a+c ) - g(a)| } c \int_{a+c}^{(u+a ) \wedge x } w^{(q)}(x - y ) { { \rm d}}y.\end{aligned}\ ] ] for the former , we have here the first inequality holds because .for the second inequality , it holds trivially when the maximum is attained for some . if it is attained at for some .then , because ( thanks to ) and for the latter , by the property of in the neighborhood of , how is chosen and , we obtain therefore , which is bounded in on by and . it is known as in that guarantees that is twice continuously differentiable and hence is continuous on .furthermore , implies and implies .therefore , there exists such that because we have on the other hand , by , these bounds on and also bound , and hence by the dominated convergence theorem and because , the left - derivative can be obtained in the same way .this proves .choosing and as in the proof of lemma [ lemma_lipschitz ] , we obtain by because , by , \\ & \leq e^{\phi(q)(x - a ) } \big(lu + \frac { 1 - e^{-\phi(q ) u } } { \psi'(\phi(q ) ) } \big),\end{aligned}\ ] ] we have with which is finite thanks to and by applying the taylor expansion to to the first integral .hence ( 1 ) is obtained .the proof of the existence of that satisfies ( 2 ) is immediate because and by . | this paper is concerned with a class of infinite - time horizon optimal stopping problems for spectrally negative lvy processes . focusing on strategies of threshold type , we write explicit expressions for the corresponding expected payoff via the scale function , and further pursue optimal candidate threshold levels . we obtain and show the equivalence of the continuous / smooth fit condition and the first - order condition for maximization over threshold levels . this together with problem - specific information about the payoff function can prove optimality over all stopping times . as examples , we give a short proof of the mckean optimal stopping problem ( perpetual american put option ) and solve an extension to egami and yamazaki . |
consider a typical design problem with more than one objectives ( design criteria ) .for example , we may want to design a network that provides the maximum capacity with the minimum cost , or we may want to design a radiation therapy for a patient that maximizes the dose to the tumor and minimizes the dose to the healthy organs .in such _ multiobjective _ ( or _ multicriteria _ ) problems there is typically no solution that optimizes simultaneously all the objectives , but rather a set of so - called _ pareto optimal _ solutions , i.e. , solutions that are not dominated by any other solution in all the objectives .the trade - off between the different objectives is captured by the _ trade - off _ or _pareto curve _( surface for three or more objectives ) , the set of values of the objective functions for all the pareto optimal solutions .multiobjective problems are prevalent in many fields , e.g. , engineering , economics , management , healthcare , etc .there is extensive research in this area published in different fields ; see for some books and surveys . in a multiobjective problem, we would ideally like to compute the pareto curve and present it to the decision maker to select the solution that strikes the ` right ' balance between the objectives according to his / her preferences ( and different users may prefer different points on the curve ) .the problem is that the pareto curve has typically an enormous number of points , or even an infinite number for continuous problems ( with no closed form description ) , and thus we can not compute all of them .we can only compute a limited number of solutions ( points ) , and of course we want the computed points to provide a good approximation to the pareto curve so that the decision maker can get a good sense of the range of possibilities in the design space .we measure the quality of the approximation provided by a computed solution set using a ( multiplicative ) approximation ratio , as in the case of approximation algorithms for single - objective problems .assume ( as usual in approximation algorithms ) that the objective functions take positive values .a set of solutions is an _-pareto set _ if the members of approximately dominate within every other solution , i.e. , for every solution there is a solution such that is within a factor or better of in all the objectives . often , after computing a finite set of solutions and their corresponding points in objective space ( i.e. , their vectors of objective values ) , we ` connect the dots ' , taking in effect also the convex combinations of the solution points . in problems where the solution pointsform a convex set ( examples include multiobjective flows , linear programming , convex programming ) , this convexification is justified and provides a much better approximation of the pareto curve than the original set of individual points .a set of solutions is called an _-convex pareto set _ if the convex hull of the solution points corresponding to approximately dominates within all the solution points . even for applications with nonconvex solution sets ,sometimes solutions that are dominated by convex combinations of other solutions are considered inferior , and one is interested only in solutions that are not thus dominated , i.e. , in solutions whose objective values are on the ( undominated ) boundary of the convex hull of all solution points , the so - called _ convex pareto set_. 
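the approximate-dominance notion used in these definitions is easy to state operationally; the following small checker (our own illustration, assuming minimization objectives and writing the approximation factor explicitly as 1 + eps) tests whether a candidate set is an eps-pareto set of a given finite set of solution points.

```python
def is_eps_pareto_set(subset, all_points, eps):
    """Check the definition quoted above, for minimization objectives:
    `subset` is an eps-Pareto set of `all_points` if every point of
    `all_points` is (1+eps)-dominated by some member of `subset` in
    every coordinate."""
    return all(
        any(all(qi <= (1.0 + eps) * pi for qi, pi in zip(q, p)) for q in subset)
        for p in all_points
    )

if __name__ == "__main__":
    pts = [(1.0, 9.0), (1.05, 8.6), (2.0, 5.0), (4.0, 2.2), (4.1, 2.0)]
    print(is_eps_pareto_set([(1.0, 9.0), (2.0, 5.0), (4.1, 2.0)], pts, eps=0.10))  # True
    print(is_eps_pareto_set([(1.0, 9.0), (4.1, 2.0)], pts, eps=0.10))              # False
```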
note that every instance of a multiobjective problem has a unique pareto set and a unique convex pareto set , but in general it has many different -pareto sets and -convex pareto sets , and furthermore these can have drastically different sizes .it is known that for every multiobjective problem with a fixed number of polynomially computable objective functions , there exist an -pareto set ( and also -convex pareto set ) of polynomial size , in particular of size , where is the bit complexity of the objective functions ( i.e. , the functions take values in the range ] .we prove that the worst - case performance ratio of the chord algorithm for computing an -convex pareto set is .the upper bound implies in particular that for problems with polynomially computable objective functions and a polynomial - time ( exact or approximate ) comb routine , the chord algorithm runs in polynomial time in the input size and .we show furthermore that there is no algorithm with constant performance ratio . in particular ,every algorithm ( even randomized ) has performance ratio at least .similar results hold for the approximation of convex curves with respect to the hausdorff distance .that is , the performance ratio of the chord algorithm for approximating a convex curve of length within hausdorff distance , is .furthermore , every algorithm has worst - case performance ratio at least .we also analyze the expected performance of the chord algorithm for some natural probability distributions .given that the algorithm is used in practice in various contexts with good performance , and since worst - case instances are often pathological and extreme , it is interesting to analyze the average case performance of the algorithm .indeed , we show that the performance on the average is exponentially better . note that chord is a simple natural greedy algorithm , and is not tuned to any particular distribution .we consider instances generated by a class of product distributions that are `` approximately '' uniform and prove that the expected performance ratio of the chord algorithm is ( upper and lower bound ) . againsimilar results hold for the hausdorff distance. * related work .* there is extensive work on multiobjective optimization , as well as on approximation of curves in various contexts .we have discussed already the main related references .the problem addressed by the chord algorithm fits within the general framework of determining the shape by probing .most of the work in this area concerns the exact reconstruction , and the analytical works on approximation ( e.g. , ) compute only the worst - case cost of the algorithm in terms of ( showing bounds of the form ) .there does not seem to be any prior work comparing the cost of the algorithm to the optimal cost for the instance at hand , i.e. , the approximation ratio , which is the usual way of measuring the performance of approximation algorithms .the closest work in multiobjective optimization is our prior work on the approximation of convex pareto curves using a different cost metric .both of the metrics are important and reflect different aspects of the use of the approximation in the decision making process .consider a problem , say with two objectives , suppose we make several calls , say , to the routine , compute a number of solution points , connect them and present the resulting curve to the decision maker to visualize the range of possibilities , i.e. , get an idea of the true convex pareto curve .( the process may not end there , e.g. 
, the decision maker may narrow the range of interest , followed by computation of a better approximation for the narrower range , and so forth ) . in this scenario ,we want to achieve as small an error as possible , using as small a number of calls as we can , ideally , as close as possible to the minimum number that is absolutely needed for the instance . in thissetting , the cost of the algorithm is measured by the number of calls ( i.e. , the computational effort ) ; this is the cost metric that we study in this paper , and the performance ratio is as usual the ratio of the cost to the optimum cost .consider now a scenario where the decision maker does not just inspect visually the curve , but will look more closely at a set of solutions to select one ; for instance a physician in the radiotherapy example will consider carefully a small number of possible treatments in detail to decide which one to follow . since human time is much more limited than computational time ( and more valuable , even small constant factors matter a lot ) , the primary metric in this scenario is the number of selected solutions that is presented to the decision maker for closer investigation ( we want to be as close as possible to ) , while the computational time , i.e. , the number of calls , is less important and can be much larger ( as long as it is feasible of course ) .this second cost metric ( the size of the selected set ) is studied in for the convex pareto curve ( and in in the nonconvex case ) . among other results ,it is shown there that for all bi - objective problems with an exact routine and a continuous convex space , an optimal -convex pareto set ( i.e. , one with solutions ) can be computed in polynomial time using calls to in general , ( though more efficient algorithms are obtained for specific important problems such as bi - objective lp ) .for discrete problems , a -approximation to the minimum size can be obtained in polynomial time , and the factor is inherent . as remarked above , both cost metrics are important for different stages of the decision making .recall also that , as noted earlier , the chord algorithm runs in polynomial time , and furthermore , one can show that its solution set can be post - processed to select a subset that is a -convex pareto set of size at most .* structure of the paper . * the rest of the paper is organized as follows : section [ sec : prelim ] describes the model and states our main results , section [ sec : worst ] concerns the worst - case analysis , and section [ sec : average ] the average - case analysis .section [ sec : concl ] concludes the paper and suggests the most relevant future research directions .this section is structured as follows : after giving basic notation , in section [ ssec : defs ] we describe the relevant definitions and framework from multiobjective optimization . in section [ ssec : chord - description ] we provide a formal description of the chord algorithm in tandem with an intuitive explanation .finally , in section [ ssec : results ] we state our results on the performance of the algorithm as well as our general lower bounds .* notation .* we start by introducing some notation used throughout the paper . for , we will denote : = \{1,2,\ldots , n\} ] . for , we denote by the line segment with endpoints and , denotes its length , is the triangle defined by and is the internal angle of formed by and . we will use and as the two coordinates of the plane . 
if is a point on the plane , we use and to denote its coordinates ; that is , .we will typically use the symbol to denote the ( absolute value of the ) slope of a line , unless otherwise specified .sometimes we will add an appropriate subscript , i.e. , we will use to denote the slope of the line defined by and . for a lebesgue measurable set will denote its area by .we describe the general framework of a bi - objective problem to which our results are applicable .a bi - objective optimization problem has a set of valid instances , and every instance has an associated set of feasible solutions , usually called the solution or decision space .there are two objective functions , each of which maps every instance solution pair to a real number .the problem specifies for each objective whether it is to be maximized or minimized .consider the plane whose coordinates correspond to the two objectives .every solution is mapped to a point on this plane .we denote the objective functions by and and we use to denote the objective space ( i.e. , the set of -vectors of objective values of the feasible solutions for the given instance ) . as usual in approximation , we assume that the objective functions are polynomial time computable and take values in ] , where is polynomially bounded in the size of the input . note that this framework covers all discrete combinatorial optimization problems of interest ( e.g. , shortest paths , spanning tree , matching , etc ) , but also contains many continuous problems ( e.g. , linear and convex programs , etc ) . throughout this paper we assume , for the sake of concreteness , that both objective functions and are to be minimized .all our results hold also for the case of maximization or mixed objectives .let .we say that _ dominates _ if ( coordinate - wise ) .we say that _ ( ) if .let .the _ pareto set _ of , denoted by , is the subset of undominated points in ( i.e. , iff and no other point in dominates ) .the convex pareto set of a , denoted by , is the minimum subset of whose convex combinations dominate ( every point in ) .we also use the term _ lower envelope _ of to denote the pareto set of its convex hull , i.e. , .in particular , if is convex its lower envelope is identified with its pareto set .if is finite , its lower envelope is a convex polygonal chain with vertices the points of .note that , for any set , the lower envelope is a convex and monotone decreasing planar curve .for we will denote by the subset of with endpoints , . an _ -convex pareto set _ of ( henceforth -cp ) is a subset of whose convex combinations -cover ( every point in ) .note that such a set need not contain points dominated by convex combinations of other points , as they are redundant .if a set contains no redundant points , we call it non - redundant .let , and , be a non - redundant set . by definition, is an -cp for if and only if the polygonal chain -covers .we define the _ ratio distance _ from a point to a point as .( note the asymmetry in the definition . ) intuitively , it is the minimum value of such that -covers .we also define the ratio distance between sets of points . if , then . 
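To make these definitions concrete for finite instances, the following sketch (again an addition for illustration, with names of my choosing) computes the pareto set and the convex pareto set, i.e. the vertices of the lower envelope, of a finite point set, and evaluates a ratio distance between point sets. Since the paper's exact formula is not reproduced in this extract, the ratio-distance convention used here is an assumption on my part: RD(p, q) is taken to be the least eps >= 0 such that p (1+eps)-covers q, extended to sets in max-min form.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # both objectives minimized, strictly positive values


def pareto_set(points: List[Point]) -> List[Point]:
    """Undominated points, returned in increasing x (hence decreasing y)."""
    front, best_y = [], float("inf")
    for x, y in sorted(set(points)):
        if y < best_y:
            front.append((x, y))
            best_y = y
    return front


def _cross(o: Point, a: Point, b: Point) -> float:
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def convex_pareto_set(points: List[Point]) -> List[Point]:
    """Vertices of the lower envelope: Pareto points not dominated by convex combinations."""
    hull: List[Point] = []
    for p in pareto_set(points):
        # keep only left turns so the chain stays convex from below
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull


def rd(p: Point, q: Point) -> float:
    """Assumed convention: least eps >= 0 such that p (1+eps)-covers q."""
    return max(p[0] / q[0], p[1] / q[1], 1.0) - 1.0


def rd_sets(approx: List[Point], targets: List[Point]) -> float:
    """Worst coverage error of `targets` by the points of `approx` (max-min form)."""
    return max(min(rd(a, t) for a in approx) for t in targets)


if __name__ == "__main__":
    pts = [(1.0, 9.0), (2.0, 8.5), (3.0, 4.0), (4.0, 4.5), (6.0, 2.0), (9.0, 1.0)]
    print(convex_pareto_set(pts))               # [(1.0, 9.0), (3.0, 4.0), (6.0, 2.0), (9.0, 1.0)]
    print(rd_sets([(1.0, 9.0), (9.0, 1.0)], pts))  # 1.25
```

The max-min value returned by the last function is the set-to-set ratio distance just defined, under the assumed orientation of the coverage.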
as a corollary of this definition ,the set is an -cp for if and only if .the above definitions apply to any set .let be a bi - objective optimization problem in the aforementioned framework .for an instance of , the set corresponds to the objective space ( for the given instance ) .we do not assume that the objective space is convex ; it may well be discrete or a continuous non - convex set .it should be stressed that the objective space is not given explicitly , but rather implicitly through the instance . in particular , we access the objective space of via an oracle that ( exactly or approximately ) minimizes non - negative linear combinations of the objectives . that is , the oracle takes as input a parameter and outputs a point ( i.e. , a feasible point ) that ( exactly or approximately ) minimizes the combined objective function .we use the convention that , for , the oracle minimizes the objective . formally , for , we denote by the problem of optimizing the combined objective over .let be an accuracy parameter .then , for , we will denote by a routine that returns a point that optimizes up to a factor of , i.e. , .in other words , the routine is an `` approximate optimization oracle '' for the objective space .we say that the problem has a polynomial time approximation scheme ( ptas ) , if for any instance of and any there exists a routine ( as specified above ) that runs in time polynomial in the size of the instance .as shown in , for any bi - objective problem in the aforementioned framework , there is a ptas for constructing an -convex pareto set if and only if there is a ptas for the problem .we now provide a geometric characterization of that will be crucial throughout this paper .consider the point returned by and the corresponding line through with slope , i.e. , .then there exists no solution point ( i.e. , no point in ) below the line .geometrically , this means that we sweep a line of absolute slope , until it touches ( exactly or approximately ) the undominated boundary ( lower envelope ) of the objective space . for ,the routine returns a point on the lower envelope , while for it returns a ( potentially ) dominated point of `` close '' to the boundary ( where the notion of `` closeness '' is quantitatively defined by the aforementioned condition ) .see figure [ fig : comb ] for an illustration .if is the ( feasible ) point in returned by , then we write .we assume that either ( i.e. , we have an exact routine ) , or we have a ptas . for , i.e. , when the optimization is exact , we omit the subscript and denote the comb routine simply by . we will denote by the size of an optimum -convex pareto set for the given instance , i.e. 
, an -convex pareto set with the minimum number of points .note that , obviously every algorithm that constructs an -cp , must certainly make at the very least calls to , just to get points which are needed at a minimum to form an -cp ; this holds even if the algorithm somehow manages to always be lucky and call with the right values of that identify the points of an optimal -cp .having obtained the points of an optimal -cp , another many calls to comb with the slopes of the edges of the polygonal line defined by the points , suffice to verify that the points form an -cp .hence , the `` offline '' optimum number of calls is at most .let be the number of calls required by the chord algorithm on instance .the _ worst - case performance ratio _ of the algorithm is defined to be .if the inputs are drawn from some probability distribution , then we will use the _ expected performance ratio _ ] . by definition , approximating the original curve within hausdorff distance is equivalent to approximating the new curve within hausdorff distance error .a simple calculation shows that for a convex curve in the unit square ] are mutually independent .[ def : delta - balanced distn ] let be a bounded lebesgue - measurable subset of , and let be a distribution over .the distribution is called -_balanced _ , , if for all lebesgue measurable subsets , ] , is the number of iterations and equals the number of vertices in the instance .we define a set of points ordered in increasing and decreasing . our instance will be the convex polygonal line with vertices the points of .we set and .the set of points is defined recursively as follows : 1 .the point has and .2 . for ] , let be the intersection of the line the line parallel to through .the error of is exactly .observe that . by a simple geometric argument we obtain , which yields the second statement .for the first statement , we show inductively that the recursion tree built by the algorithm for is a path of length and at depth , for ], we denote .let be the -projection of on , so that . recall that denotes the intersection of the line the line parallel to through .if is the -projection of on , we have .( see figure [ chord - fig - hd2 ] for an illustration of these definitions . )we start with the following claim : [ lem : hd - induction ] for all ] .we will prove it for .we similarly exploit the similarity of the triangles and , from which we get where the second equality follows from the collinearity of .observe that the third term in ( [ eq : one ] ) is equal to by construction .now note that , because of the parallelogram , we have .hence , ( [ eq : one ] ) and the induction hypothesis imply which completes the proof of the claim . since ( as noted in the proof of claim [ lem : hd - induction ] ) , it follows that for all ] , at depth , the chord subroutine selects point .we prove the aforementioned statement by induction on the depth of the tree .recall that the chord algorithm initially finds the extreme points and . for ( first recursive call ) ,the algorithm selects a point of the lower envelope with maximum horizontal distance from . by construction ,all the points ( of the lower envelope ) in the line segment have the same ( maximum ) distance from ( since is parallel to ) .hence , any of those points may be potentially selected .since is a black - box oracle , we may assume that indeed is selected .the maximum error after is selected equals .hence , the algorithm will not terminate after it has selected . 
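As a brief aside before the inductive step of this construction: the performance ratio defined earlier in this section compares the number of comb calls made by the chord algorithm to the size of a smallest eps-convex pareto set. For finite instances one can at least compute a feasible (not necessarily minimum) eps-convex pareto set by a simple greedy sweep along the lower envelope, which is handy as a baseline in experiments. The sketch below is an added illustration with names of my choosing; the greedy result only upper-bounds the true optimum, and the ratio-distance convention is the same assumed one as in the earlier sketch.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # both objectives minimized, strictly positive values


def rd_from_segment(l: Point, r: Point, q: Point) -> float:
    """Least eps >= 0 such that some point of segment lr (1+eps)-covers q (assumed convention)."""
    ax, ay = l[0] / q[0], l[1] / q[1]
    bx, by = r[0] / q[0], r[1] / q[1]
    # max(s_x/q_x, s_y/q_y) along the segment is convex piecewise-linear in the parameter,
    # so its minimum is attained where the two linear pieces cross, or at an endpoint.
    denom = (bx - ax) - (by - ay)
    t_star = 0.0 if denom == 0 else min(1.0, max(0.0, (ay - ax) / denom))
    best = min(max((1 - t) * ax + t * bx, (1 - t) * ay + t * by) for t in (0.0, t_star, 1.0))
    return max(best - 1.0, 0.0)


def greedy_eps_cp(envelope: List[Point], eps: float) -> List[Point]:
    """Feasible eps-convex Pareto subset of the lower-envelope vertices (given in increasing x).

    Greedily extends each chord as far to the right as possible while it still
    eps-covers every skipped vertex; the result upper-bounds the optimum size.
    """
    if len(envelope) <= 2:
        return list(envelope)
    chosen, i = [envelope[0]], 0
    while i < len(envelope) - 1:
        j = i + 1
        for k in range(len(envelope) - 1, i, -1):     # try the farthest jump first
            if all(rd_from_segment(envelope[i], envelope[k], envelope[m]) <= eps
                   for m in range(i + 1, k)):
                j = k
                break
        chosen.append(envelope[j])
        i = j
    return chosen


if __name__ == "__main__":
    env = [(1.0, 9.0), (2.0, 6.0), (3.0, 4.0), (4.5, 3.0), (6.0, 2.0), (9.0, 1.0)]
    print(greedy_eps_cp(env, eps=0.25))   # [(1.0, 9.0), (4.5, 3.0), (9.0, 1.0)]
```

The lower-bound construction now continues with the inductive step.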
for the inductive step ,we assume that the recursion tree is a path up to depth ] , the horizontal approximation error is we remark that the error term in the rhs of ( [ eqn : hd - vs - rd ] ) leads to the `` constant factor loss '' , i.e. , the fact that the chord algorithm under the ratio distance picks ( as opposed to ) points .( also note that , since the ratio distance is a lower bound for the horizontal distance , the chord algorithm under the former metric will select at most points . )suppose we invoke the chord algorithm with desired error of , i.e. , we want to reconstruct the lower envelope exactly . then the algorithm will select the points in order of increasing .it is also clear that the error of the approximation decreases monotonically with the number of calls to .hence , to complete the proof , it suffices to show that after the algorithm has selected , the ratio distance error will be bigger than . to do this , we appeal to lemma [ lem : hd - vs - rd ] . the ratio distance error of the set is .an application of ( the second inequality of ) ( [ eqn : hd - vs - rd ] ) for , gives for the first term of the rhs , it follows from ( [ eqn : hd - error2 ] ) by substitution that and we similarly get hence , where consider the error term .we will show that which concludes the proof .the first summand is negligible compared to , as long as to bound the second summand from above we need a lower bound on the slope .recall ( equation ( [ eq : xistar ] ) in lemma [ lem : hd - induction ] ) that .it is also not hard to verify that also recall that hence , it is straightforward to check that for the chosen values of the parameters , , hence the second summand will also be significantly smaller than . in conclusion , and lemma [ claim : chd - rd ] follows .this also completes the proof of theorem [ thm : rd - lb - ws ] .we can show an information theoretic lower bound against any algorithm that uses as a black box with respect to the ratio distance metric . even though the bound we obtain is exponentially weaker than that attained by the chord algorithm , it rules out the possibility of a constant factor approximation in this model .in particular , we show : [ thm : rd - lb - ws - gen ] any algorithm ( even randomized ) with oracle access to a routine , has performance ratio with respect to the ratio distance .let be a general algorithm with oracle access to .the algorithm is given the desired error , and it wants to compute an -cp set . to do this , it queries the routine on a sequence of ( absolute ) slopes and terminates when it has obtained a `` certificate '' that the set of points forms an -cp set .the queries to can be adaptive , i.e. , the -th query of the algorithm can depend on the information ( about the input instance ) the algorithm has obtained from the previous queries . on the other hand , the adversary also specifies the input instance adaptively , based on the queries made by the algorithm . we will define a family of instances and an error with the following properties : 1 .each instance has 2 . in order for an algorithm to have a certificate it found an -cp set for an ( adversarially chosen ) instance , it needs to make at least calls to .our construction uses the lower bound example for the chord algorithm from section [ ssec : worst - lower - chord ] essentially as a black box .consider the instance ( note that for we have that and ) and let be the corresponding set of points , where and and . our family of input instances consists of all `` prefixes '' of , i.e. 
, , where , ] , and that where the first inequality holds from ( [ eqn : hd - vs - rd ] ) , and the rest follow by construction .this proves property above .we now proceed to prove .we claim that , given an arbitrary ( unknown ) instance , in order for an algorithm to have a certificate it discovered an -cp set , must uniquely identify the instance . in turn, this task requires calls .indeed , consider an unknown instance and let be the rightmost point of ( excluding ) the algorithm has discovered up to the current points of its execution . at this point, the information available to the algorithm is that .hence , the error the algorithm can guarantee for its current approximation is the first inequality above follows from lemma [ lem : hd - vs - rd ] .also note that the term in the rhs is minimized for and ( by the same analysis as in lemma [ claim : chd - rd ] ) the minimum value is bigger than .it remains to show that an adversary can force any algorithm to make at least queries to until it has identified an unknown instance of .clearly , identifying an unknown instance , i.e. , finding the index such that , is equivalent to identifying the rightmost point of to the left of .first , we can assume the algorithm is given the extreme points beforehand .a general algorithm is allowed to query the routine for any slope .suppose that .then , for , the routine returns : ( i ) if , ( ii ) if , and ( iii ) if .. that is , the information obtained from a query is whether , or .so , for our class of instances , a general deterministic algorithm is equivalent to a ternary decision tree with the corresponding structure .the tree has many leaf nodes and there are at most leaves at depth .every internal node of the tree corresponds to a query to the routine , hence the depth measures the worst - case number of queries made by the algorithm .it is straightforward that any such tree has depth and the theorem follows .( if we allow randomization , the algorithm is a randomized decision tree .the expected depth of any such tree remains which yields the theorem for randomized algorithms as well . )we next show that , for the horizontal distance metric ( or vertical distance by symmetry ) , any algorithm in our model has an unbounded performance ratio , even for instances that lie in the unit square and for desired error .[ thm : hd - lb - gen - ws ] any algorithm with oracle access to a routine has an unbounded performance ratio with respect to the horizontal distance , even on instances that lie in the unit square and for approximation error .let be an algorithm that , given the desired error , computes an -approximation with respect to the horizontal distance .fix .we show that an adaptive adversary can force the algorithm to make queries to even when queries suffice .that is , the performance ratio of the algorithm is ; since can be arbitrary the theorem follows .our family of ( adversarially constructed ) instances all lie in the unit square and are obtained by a modification of the lower bound instance for chord under the horizontal distance ( see step 1 in section [ ssec : worst - lower - chord ] ) . since the horizontal distance is invariant under translation , we can shift our instances so that the points and are the leftmost and rightmost points of the convex pareto set . after this shift ,the point is identified with the origin .consider the initial triangle , where , and , and let be the first query made by the algorithm .given the value of , the adversary adds a vertex to the instance so that . 
the strategy of the adversary is the following : the point belongs to , i.e. , . if , then the point has otherwise , .let be the line with slope through and be its intersection with .the current information available to the algorithm is that the point is a feasible point and that there are no points below the line .hence , the horizontal distance error it can certify is on the other hand , if the true convex pareto set was , the set would attain error note that after its first query , the algorithm knows that the error to the left of is , hence it needs to focus on the triangle .therefore , it is no loss of generality to assume that . given , the adversary adds a vertex to the instance so that . its strategy is to place on and to set if and otherwiselet be the line with slope through and be its intersection with .the information available to the algorithm after its second query is that the point is a feasible point and that there are no points below the line .hence , the horizontal distance error it can certify is on the other hand , if the true convex pareto set was , the set would attain error similarly , the adversary argument continues by induction on .the induction hypothesis is the following : after its -th query , the algorithm has computed a set of points , , ] , which implies and . to prove theorem [ thm : ws - ub - area ]we will need a few lemmas .we start by proving correctness , i.e. , we show that , upon termination , the algorithm computes an -cp set for . this statement may be quite intuitive , but it requires a proof .the following claim formally states some basic properties of the algorithm : [ clm : sandwich ] let be the triangle processed by the chord algorithm at some recursive step. then the following conditions are satisfied : ( i ) ; in particular , and , ( ii ) , and lies below the line , and ( iii ) . that is , whenever the chord routine is called with parameters ( see table [ table : chord ] ) , the points and are feasible solutions of the lower envelope and the segment of the lower envelope between them is entirely contained in the triangle .consider the node ( corresponding to ) in the recursion tree built by the algorithm .we will prove the claim by induction on the depth of the node .the base case corresponds to either an empty tree or a single node ( root ) .the chord routine is initially called for the triangle . conditions ( i ) and ( ii )are thus clearly satisfied .it follows from the definition of the routine that there exist no solution points strictly to the left of and strictly below .hence , by monotonicity and convexity , we have , i.e. , condition ( iii ) is also satisfied .this establishes the base case . for the induction step , suppose the claim holds true for every node up to depth . we will prove it for any node of depth .indeed , let be a depth node and let be s parent node in the recursion tree . by the induction hypothesis , we have that ( i ) ; and ( ii ) , and lies below the line and ( iii ) .we want to show the analogous properties for .we can assume without loss of generality that is s left child ( the other case being symmetric ) .then , it follows by construction ( see table [ table : chord ] ) that where .we claim that .indeed , note that ( as follows from property ( ii ) of the induction hypothesis ) . 
by monotonicity and convexity of the lower envelope , combined with property ( iii ) of the induction hypothesis , the claim follows .now note that and .hence , property ( i ) of the inductive step is satisfied .since has non - positive slope ( as follows from ( ii ) of the inductive hypothesis ) , lies to the left and above ; similarly , since has negative slope , lies to the left and above .we also have that , where the first inequality follows from the fact that . property ( ii )follows from the aforementioned . by definition of the chord routine, we have and there are no solution points below . by convexity , we get that lies below .hence , property ( iii ) also follows .this proves the induction and the claim . by exploiting the above claim, we can prove correctness : [ lem : chord - finds - eps - convex ] the set of points computed by the chord algorithm is an -cp set .let be the set of feasible points in output by the algorithm , where the points of are ordered in increasing order of their -coordinate ( decreasing order of their -coordinate ) .note that all the s are in convex position .we have that .so , it suffices to show that , for all , .since the algorithm terminates with the set , it follows that , for all , the chord routine was called by the algorithm for the adjacent feasible points and returned without adding a new feasible point between them .let be the corresponding third point ( argument to the chord routine ) .then , by claim [ clm : sandwich ] , we have that is between and in both coordinates and below the segment and moreover .since the chord routine returns without adding a new point on input , it follows that either or the point satisfies . in the former case , since , we obtain as desired . in the latter case , we have .that is , we claim that is a point of ( by claim [ clm : sandwich ] ) with maximum ratio distance from .since the feasible points in lie between and its parallel line through , the claim follows .this completes the proof of lemma [ lem : chord - finds - eps - convex ] . to bound from above the performance ratiowe need a few more lemmas .our first lemma quantifies the area shrinkage property .it is notable that this is a statement independent of .[ lem : area - half ] let be the triangle processed by the chord algorithm at some recursive step. denote .let , be the triangles corresponding to the two new subproblems .then , we have let be the projections of on and respectively ; see figure [ fig : chord - area ] for an illustration .let \ ] ] where the equality holds because the triangles and are similar ( recalling that ) . hence , we get we have where ( [ uses : y ] ) follows from ( [ eq : y ] ) . by using ( [ eq :y2 ] ) and expanding we obtain as desired .our second lemma gives a convenient lower bound on the value of the optimum .[ lem : opt - lb ] consider the recursion tree built by the algorithm and let be the set of lowest internal nodes , i.e. 
, the internal nodes whose children are leaves .then .recall that , by convention , there is no node in the tree if , for a triangle , the chord routine terminates without calling .each lowest internal node of the tree corresponds to a triangle with the property that the ratio distance of the convex pareto set from the line segment is strictly greater than ( as otherwise , the node would be a leaf ) .each such triangle must contain a point of an optimal -cp set .any two nodes of are not ancestor of each other , and therefore the corresponding triangles are disjoint ( neighboring triangles can only intersect at an endpoint ) .thus , each one of them must contain a distinct point of the optimal -cp set , and hence the lemma follows .finally , we need a lemma that relates the ratio distance within a triangle to its area . we stress that the lemma applies only to triangles considered by the algorithm .[ lemma : small - area - means - done ] consider a triangle considered in some iteration of the chord algorithm such that .let . if , then .the basic idea of the argument is that the worst - case for the area - error tradeoff is ( essentially ) when , and the triangle is right and isosceles ( i.e. , ) .we now provide a formal proof .let be a triangle considered by the algorithm .the points and we have , and the point lies below the line .note the latter imply that ( see figure [ fig : area - distance ] . )we will relate the area to the ratio distance .consider the intersection of the lines and , where denotes the origin .then we have that . from the definition of the ratio distanceit follows that for any point it holds ( in fact , is the unique minimizer ) .let be the projection of on .it follows that for the area we have that . since is either right or obtuse, the length of its largest base is at least twice the length of the corresponding height , i.e. , .hence , . by expanding and using ( [ eq : star ] ) , we get and the lemma follows . at this pointwe have all the tools we need to complete the proof of theorem [ thm : ws - ub - area ] . lemma [ lem : chord - finds - eps - convex ] gives correctness . to bound the performance ratio we proceed as follows : first , by lemma [ lem : area - half ] ,when a node of the tree is at depth , the corresponding triangle will have area at most .hence , by lemma [ lemma : small - area - means - done ] ( noting that ) , it follows that the depth of the recursion tree is .every internal tree node is an ancestor of a node in .the chord algorithm makes one query for every node of the tree , hence .lemma [ lem : opt - lb ] now implies that , which concludes the proof . in this subsection , we prove the asymptotically tight upper bound of on the worst - case performance ratio of the chord algorithm .the analysis is more subtle in this case and builds on the intuition obtained from the simple analysis of the previous subsection .the proof bounds in effect the length of paths in the recursion tree that consist of nodes with a single child .it shows that if we consider any -convex pareto set , the segment of the lower envelope between any two consecutive elements of can not contain too many points of the solution produced by the chord algorithm .[ thm : ws - ub - tight ] the worst - case performance of the chord algorithm ( with respect to the ratio distance ) is .we begin by analyzing the case ( i.e. 
, the special case that one intermediate point suffices and is required for an -approximation ) and then handle the general case .it turns out that this special case captures most of the difficulty in the analysis .let ( leftmost ) and ( rightmost ) be the extreme points of the convex pareto curve as computed by the algorithm .we consider the case , i.e. , ( i ) the set is _ not _ an -cp and ( ii ) there exists a solution point such that is an -cp .fix with .we will prove that , for an appropriate choice of the constant in the big - theta , the chord algorithm introduces at most points in either of the intervals ] .suppose , for the sake of contradiction , that the chord algorithm adds more points than that in the segment ( the proof for being symmetric . ) we say that , in some iteration of the chord algorithm , a triangle is _ active _ , if it contains the optimal point .in each iteration , the chord algorithm has an active triangle which contains the optimal point . outside that triangle, the algorithm has constructed an -approximation .we note that the chord algorithm may in principle go back and forth between the two sides of ; i.e. , in some iterations the line parallel to the chord touches the lower envelope to the left of and in other iterations to the right .let be the initial triangle .we focus our attention on the ( not necessarily consecutive ) iterations of the chord algorithm that add points to the left of .we index these iterations in increasing order with ] , it follows that .thus , we get [ claim : tu1 ] and .let be the intersection of the segment with the line and the intersection of with . by definition , , and .let be the intersection of with the line from parallel to . from the similar triangles and , we have .therefore , . from the similar triangles and , the latter ratio is equal to , yielding the first equality for in the claim .the second equality follows from the similar triangles and , since is parallel to .the second equality implies then the expression for . from the claim we have , andsince lies left of , we have .therefore , let be the intersections of with the lines and respectively ; see figure [ tight - upper ] . clearly , , and hence . thus , . therefore , [ claim : tu2 ] and .since the last iteration of the chord algorithm adds a new point , the segment does not -cover all the pareto points left of in the active triangle .these points are all in the triangle .the ratio distance of any point in this triangle from is upper bounded by both and by .it follows that and .thus , we get and [ claim : tu3 ] . if then the claim follows from inequality ( [ g - bound ] ) .so suppose that .the point is at ratio distance at most from the line since is an -convex pareto set .therefore , is at most excess ratio distance from the line , because is parallel to and is to the right of . since , it follows that and hence .therefore , since ( as is left of ) , we conclude that thus , we have a lower bound on the product of the s from inequality ( [ p - bound ] ) and on the product of the s from claim [ claim : tu3 ] .it is easy to see ( and is well - known ) that for a fixed product of the s , the product of the s is maximized if all factors are equal .we include a proof for convenience .let for .the maximum of subject to is achieved when all the are equal .suppose .then is maximized subject to , when is minimized , which happens when by the arithmetic - geometric mean inequality ( with equality iff . 
for general ,if the s maximize subject to then we must have for all pairs , because otherwise replacing by their geometric mean will increase .thus , for any value of , the product is maximized when for all and . since , we must have because .therefore , , hence , which implies that .we now proceed to analyze the general case , essentially by reducing it to the aforementioned special case .suppose that the optimal solution has an arbitrary number of points , i.e. , has the form .charge the points computed by the chord algorithm to the edges of the optimal solution as follows : if a point belongs to the portion of the lower envelope between the points and ( where we let and , then we charge the point to the edge ; if the chord algorithm generates a point of the optimal solution then we can charge it to either one of the adjacent edges .we claim that every edge of is charged with at most points of the chord algorithm , where is the same number as in the above analysis for the case . to see this , consider any edge of .let be the first point generated by the chord algorithm that is charged to this edge , i.e. , is the first point that lies between and .we claim that the chord algorithm will generate at most more points in each of the two portions and of the lower envelope .the argument for the two portions is symmetric .consider the portion .the proof that the chord algorithm will introduce at most points in this portion is identical to the proof we gave above for the case , with in place of and in place of .the only fact about the assumption that was used there was that the edge -covers the portion of the lower envelope between and .it is certainly true here that the segment -covers the portion , since the edge -covers . hence ,by the same arguments , the chord algorithm will generate at most points between and .thus , the algorithm will generate no more than points overall , and hence its performance ratio is .this completes the proof ._ we briefly sketch the differences in the algorithm and its analysis for the case of an approximate comb routine .first , in this case , the description of the chord algorithm ( table [ table : chord ] ) has to be slightly modified ; this is needed to guarantee that the set of computed points is indeed an -cp .in particular , in the chord routine , we need to check whether for an appropriate .in particular , we choose such that , where is the accuracy of the approximate comb routine , i.e. , the routine .consider the case that the routine always returns feasible points that belong to a scaled version of the lower envelope .the same analysis as in the current section establishes that the chord algorithm performs at most calls to in this setting .if is `` close '' to ( say , ) the first term is clearly .hence , to prove the desired upper bound , it suffices to show that .( it is clear that , but in principle it may be the case that is arbitrarily larger . )this is provided to us by a planar geometric lemma from ( lemma 5.1 ) which states that if then .selecting suffices for the above and completes our sketched description . _in section [ ssec : average - upper ] we present our average case upper bounds and in section [ ssec : average - lower ] we give the corresponding lower bound . 
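Before turning to the average-case analysis organized as described here, it may help to have a compact, runnable sketch of the procedure analyzed so far. The code below is an added illustration, not the authors' implementation: it fixes one concrete parameterization of the comb oracle (stated in the comments, since the paper's exact convention is not reproduced in this extract), assumes an exact oracle, uses the same assumed ratio-distance convention as the earlier sketches, and omits the bookkeeping of the paper's exact chord routine (e.g. the third argument carried by each recursive call).

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) = (objective 1, objective 2), both minimized, > 0


def comb(points: List[Point], lam: float) -> Point:
    """Stand-in for the black-box Comb routine on an explicit finite objective space.

    Convention chosen for this sketch: Comb(lam) returns a point minimizing lam*x + y,
    i.e. it sweeps a line of absolute slope lam until it supports the lower envelope.
    (The paper's exact parameterization of the oracle may differ.)
    """
    return min(points, key=lambda p: lam * p[0] + p[1])


def rd_from_segment(l: Point, r: Point, q: Point) -> float:
    """Least eps >= 0 such that some point of the chord lr (1+eps)-covers q (assumed convention)."""
    ax, ay = l[0] / q[0], l[1] / q[1]
    bx, by = r[0] / q[0], r[1] / q[1]
    denom = (bx - ax) - (by - ay)
    t_star = 0.0 if denom == 0 else min(1.0, max(0.0, (ay - ax) / denom))
    best = min(max((1 - t) * ax + t * bx, (1 - t) * ay + t * by) for t in (0.0, t_star, 1.0))
    return max(best - 1.0, 0.0)


def chord_algorithm(points: List[Point], eps: float) -> Tuple[List[Point], int]:
    """Illustrative reconstruction of the chord algorithm for a finite instance.

    Returns the selected points (in increasing x) and the number of Comb calls made.
    """
    calls = 0

    def query(lam: float) -> Point:
        nonlocal calls
        calls += 1
        return comb(points, lam)

    # extreme points of the lower envelope (equivalent to Comb with extreme slopes)
    left = min(points, key=lambda p: (p[0], p[1]))
    right = min(points, key=lambda p: (p[1], p[0]))
    calls += 2

    def recurse(l: Point, r: Point) -> List[Point]:
        if l == r or r[0] == l[0]:
            return []
        lam = (l[1] - r[1]) / (r[0] - l[0])      # absolute slope of the chord lr
        q = query(lam)
        # stop if the chord already eps-covers the point the oracle found below it
        if q in (l, r) or rd_from_segment(l, r, q) <= eps:
            return []
        return recurse(l, q) + [q] + recurse(q, r)

    return [left] + recurse(left, right) + [right], calls


if __name__ == "__main__":
    # a small convex instance plus a few dominated points
    instance = [(1.0, 16.0), (2.0, 8.0), (4.0, 4.0), (8.0, 2.0), (16.0, 1.0),
                (5.0, 9.0), (10.0, 6.0)]
    sol, n_calls = chord_algorithm(instance, eps=0.1)
    print(sol, n_calls)   # all five envelope vertices are selected here, using 9 Comb calls
```

With this reference sketch in hand, we turn to the average-case analysis.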
in section [ ssec : ppp - upper ]we start by proving our upper bound for random instances drawn from a poisson point process ( ppp ) .the analysis for the case of unconcentrated product distributions is somewhat more involved and is given in section [ ssec : prod - upper ] .* overview of the proofs . *the analysis for both cases has the same overall structure , however each case has its difficulties .we start by giving a high - level overview of the arguments . for the sake of simplicity ,in the following intuitive explanation , let denote : ( i ) the expected number of points in the instance for a ppp and ( ii ) the actual number of points for a product distribution .similarly to the simple proof of section [ sssec : worst - upper - simple ] for worst - case instances , to analyze our distributional instances we resort to an indirect measure of progress , namely the area of the triangles maintained in the algorithm s recursion tree .we think that this feature of our analysis is quite interesting and indicates that this measure is quite robust . in a little more detail , we first show ( see lemma [ lem : area recursion ] for the case of ppp ) that every subdivision performed by the algorithm decreases the area between the upper and lower approximations by a significant amount ( roughly at an exponential rate ) with high probability .it then follows that at depth of the recursion tree , each `` surviving triangle '' contains an expected number of at most points with high probability .we use this fact , together with a charging argument in the same spirit as in the worst - case , to argue that the expected performance ratio is in this case . to analyze the expected performance ratio in the complementary event , we break it into a `` good '' event , under which the ratio is with high probability , and a `` bad '' event , where it is potentially unbounded ( in the poisson case ) or at most ( for the case of product distributions ) .the potential unboundedness of the performance ratio in the poisson case creates complications in bounding the expected ratio of the algorithm over the entire space .we overcome this difficulty by bounding the upper tail of the poisson distribution ( see claim [ claim : ppp - tail ] ) . in the case of product distributions , the worst case bound of onthe competitive ratio is sufficient to conclude the proof , but the technical challenges present themselves in a different form . here ,the `` contents '' of a triangle being processed by the algorithm depend on the information coming from the previous recursive calls making the analysis more involved .we overcome this by understanding the nature of the information provided from the conditioning . * on the choice of parameters . *a simple but crucial observation concerns the interesting range for the parameters of the distributions .suppose that we run the chord algorithm with desired error on some random instance that lies entirely in the set ^ 2 ] . clearly , \pr[{\cal e } ] \leq { \mathop{\textstyle \sum}}_{i = k^*}^{+ \infty } i \cdot \pr[x = i ] = { \mathop{\textstyle \sum}}_{i = k^*}^{+ \infty } i \cdot { e^{- \nu } \nu^i \over i ! } = \nu { \mathop{\textstyle \sum}}_{i = k^*-1}^{+ \infty } { e^{- \nu } \nu^{i } \over i ! } = \nu \pr[x \ge k^*-1].\ ] ] we distinguish two cases . 
if , then chebyshev s inequality yields \le { 1 \over \nu^2} ] .first note we can clearly assume that and .let and .let be the right triangle of maximum area whose hypotenuse is parallel to .( this is the shaded triangle in figure [ fig : ub - ac ] . )we claim that indeed , it is clear that is a vertex of and that either or ( or both ) are vertices .hence , one of the edges of has length at least . since is the slope of the hypotenuse , the other edge has length at least .if there is a feasible point in , the chord algorithm will find it by calling and terminate ( since such a point forms an -cp set ) .let be the number of random points that land in the triangle .note that is a random variable , with .we can write = { \mathbb{e}}[{\mathrm{chord}}_{{\epsilon}}(t_1 ) \mid x^{\ast } = 0 ] \pr[x^{\ast}=0 ] + { \mathbb{e}}[{\mathrm{chord}}_{{\epsilon}}(t_1 ) \mid x^{\ast } \ge 1 ] \pr[x^{\ast } \ge 1].\ ] ] observe that the second term is bounded from above by a constant , hence it suffices to bound the first term . recall that the number of calls performed by the chord algorithm is at most twice the number of feasible points in the root triangle .therefore , we can write \pr[x^{\ast}=0 ] \le 2 { \mathbb{e}}[y_1 \mid x^{\ast } = 0 ] \pr[x^{\ast}=0].\ ] ] recall that is a random variable , where and note that we can assume wlog that ( since otherwise the expected number of feasible points in is at most and we are done ) .hence , by claim [ claim : ppp - tail ] the rhs above is bounded by \right\}.\ ] ] thus , to complete the proof it suffices to show that = o(1) ] which implies that and .we also have that ( since otherwise the set is an -cp ) which gives therefore , the main result of this section is the following theorem , which combined with proposition [ fact : dens - ppp ] , yields the desired upper bound of on the expected performance ratio .[ th : competitive ratio ppp ] let be the triangle at the root of the chord algorithm s recursion tree , and suppose that points are inserted into according to a poisson point process with intensity .the expected performance ratio of the chord algorithm on this instance is .the proof of theorem [ th : competitive ratio ppp ] will require a sequence of lemmas . throughout this section , we will denote . recall that the number of queries performed by the chord algorithm is bounded from above by twice the total number of points in the triangle .since the expected number of points in is , we have : [ lem : super pessimistic competitive ratio ] the expected performance ratio of the chord algorithm is . hence , we will henceforth assume that is bounded from below by a sufficiently large positive constant .( if this is not the case , the expected total number of points inside is and the desired bound clearly holds . )our first main lemma in this section is an average case analogue of our lemma [ lem : area - half ] : the lemma says that the area of the triangles maintained by the algorithm decreases _ geometrically _ ( as opposed to linearly , as is the case for arbitrary inputs ) at every recursive step ( with high probability ) . intuitively , this geometric decrease is what causes the performance ratio to drop by an exponential in expectation .[ lem : area recursion ] let be the triangle processed by the chord algorithm at some recursive step .denote .let and .for all , with probability at least conditioning on the information available to the algorithm , it follows from the properties of the chord algorithm ( see e.g. 
, claim [ clm : sandwich ] ) that , before the routine on input is invoked , the following information is known to the algorithm , conditioning on the history ( see figure [ fig : chord - avg ] ) : * there exist solution points at the locations , for all ] .* there is no point below the line , for all ] .note that the chord algorithm makes a query to for every node in the tree . also recall that the number of queries performed by the algorithm on a triangle containing a total number of points is at most .hence , the expected total number of queries made can be bounded as follows : & \le & |{\cal v}_{d^{\ast}}| + 2 \cdot { \mathop{\textstyle \sum}}_{t \in { \cal l}_{d^{\ast } } } \mathbb{e}[x_t~|~{\cal f}]\\ & \le & |{\cal v}_{d^{\ast}}| + 2 \left|{\cal l}_{d^{\ast}}\right| \cdot \nu \cdot s^{\ast \ast}\\ & \le & |{\cal v}_{d^{\ast}}| + 4 |{\cal l}'_{d^{\ast}}| \cdot \nu \cdot s^{\ast \ast}\\ & \le & |{\cal l}'_{d^{\ast}}| \cdot ( 2d^{\ast}+ 4 \nu \cdot s^{\ast \ast}),\end{aligned}\ ] ] where the third inequality uses the fact that .so , conditioning on the information , the expected performance ratio of the algorithm is & \le & \mathbb{e}\left [ { { \mathrm{chord}}_{{\epsilon } } \over |{\cal l}'_{d^{\ast}}|}~\vline~{\cal f}\right ] \\ & \le & ( 2d^{\ast}+ 4 \nu \cdot s^{\ast \ast } ) \\ & = & o \left ( \log \log ( \nu s_1 ) \right).\end{aligned}\ ] ] integrating over all possible in concludes the proof of lemma [ lem : competitive ratio in the good event ] . from lemma [ lem : competitive ratio in the good event ]it follows that =o\left ( \log \log ( \nu s_1 ) \right),\ ] ] and from the preceding discussion we have that \le { 2 \over ( { \ln ( \nu s_1)})^{c-1}}. ] , such that the expected performance ratio of the algorithm conditioning on is .let be the event that all the triangles at the level of the recursion tree of the algorithm ( if any ) have area at most . with the same technique , but using different parameters in the argument , we can also establish the following : [ lem : pessimistic competitive ratio ] for , there exists an event , with \ge 1-{2 \over ( { \nu s_1})^{c'-1}} ] , it follows that \le { 2 \over ( { \ln ( \nu s_1)})^{c-1}}.\ ] ] we bound the expectation of the performance ratio using the law of total probability as follows : & \le & \mathbb{e}\left [ { { \mathrm{chord}}_{{\epsilon } } \over { \mathrm{opt}}_{{\epsilon}}}~\vline~{\cal a}\right ] \cdot \pr[{\cal a } ] + \mathbb{e}\left [ { { \mathrm{chord}}_{{\epsilon } } \over { \mathrm{opt}}_{{\epsilon}}}~\vline~{\cal c}\right ] \cdot \pr[{\cal c } ] \nonumber \\ & + & \mathbb{e}\left [ { { \mathrm{chord}}_{{\epsilon } } \over { \mathrm{opt}}_{{\epsilon}}}~\vline~\overline{{\cal a}\cup { \cal c}}\right ] \cdot \pr\left[\overline{{\cal a}\cup { \cal c}}\right ] \nonumber \\ & \le & o\left ( \log \log ( \nu s_1 ) \right ) \cdot \left ( 1-{2 \over ( { \ln ( \nu s_1)})^{c-1}}\right ) + o\left ( \log(\nu s_1 ) \right ) \cdot { 2 \over ( { \ln ( \nu s_1)})^{c-1 } } \nonumber \\ & + & \mathbb{e}\left [ { { \mathrm{chord}}_{{\epsilon } } \over { \mathrm{opt}}_{{\epsilon}}}~\vline~\overline{{\cal a}\cup { \cal c}}\right ] \cdot \pr\left[\overline{{\cal a}\cup { \cal c}}\right ] .\label{eqn : ena } \end{aligned}\ ] ] where ( [ eqn : ena ] ) follows from lemmas [ lem : good case competitive ratio ] and [ lem : pessimistic competitive ratio ] . 
to conclude, we need to bound the last summand in the above expression .note first that .hence , \le { 2 \over ( { \nu s_1})^{c'-1}}.\ ] ] we again use the fact that the number of queries made by the chord algorithm ( hence , also the performance ratio ) is bounded by twice the total number of points in the triangle at the root of the recursion tree .this number follows a poisson distribution with parameter .hence , we have \cdot \pr\left[\overline{{\cal a}\cup { \cal c}}\right ] \le 2 \cdot \mathbb{e}\left [ x~\vline~\overline{{\cal a}\cup { \cal c}}\right ] \cdot \pr\left[\overline{{\cal a}\cup { \cal c}}\right].\ ] ] to bound the right hand side of the above inequality we use claim [ claim : ppp - tail ] and obtain : \cdot \pr \left [ \overline{{\cal a } \cup { \cal c } } \right ] & \le & \max\left\{{1 \over \nu s_1 } , o((\nu s_1)^3 ) \cdot \pr\left[\overline{{\cal a}\cup { \cal c}}\right ] \right\ } \\ &\le & \max \left\ { { 1\over \nu s_1 } , o((\nu s_1)^3 ) { 2 \over ( { \nu s_1})^{c'-1}}\right\}.\end{aligned}\ ] ] choosing , the above rhs becomes .plugging this into with gives = o\left ( \log \log ( \nu s_1 ) \right).\end{aligned}\ ] ] this concludes the proof of theorem [ th : competitive ratio ppp ] .we start by proving the analogue of proposition [ fact : dens - ppp ] .[ fact : dens - unif ] let be at the root of the chord algorithm s recursion tree and suppose that points are inserted into independently from a -balanced distribution , where . let , and . if , then = o(1) ] . the last summand is bounded by which is at most for all by monotonicity .the latter quantity equals , where , which is easily seen to be . recalling that and we deduce that the main result of this section is devoted to the proof of the following theorem : [ th : competitive ratio product delta - balanced distribution ]let be the triangle at the root of the chord algorithm s recursion tree , and suppose that points are inserted into independently from a -balanced distribution , where .the expected performance ratio of the chord algorithm on this instance is .combined with proposition [ fact : dens - unif ] , the theorem yields the desired upper bound of .the proof has the same overall structure as the proof of theorem [ th : competitive ratio ppp ] , but the details are more elaborate .we emphasize below the required modifications to the argument .since the performance ratio of the chord algorithm is at most on any instance with points , we will assume that is lower bounded by a sufficiently large absolute constant ( suffices for our analysis ) .we start by giving an area shrinkage lemma , similar to lemma [ lem : area recursion ] .( see figure [ fig : chord - avg ] for an illustration . )[ lem : area recursion product distn ] let be a triangle processed by the chord algorithm at recursion depth at most . denote .let , and . for all , with probabilityat least conditioning on the information available to the algorithm , we follow the proof of lemma [ lem : area recursion ] with the appropriate modifications .let be the triangle maintained by the algorithm at some node of the recursion tree , and suppose that is at recursion depth at most .the information available to the algorithm when it processes ( before it makes the query ) is : * the location of all points , ] . 
given this information , the probability that , among the remaining points ( whose location is unknown ) , none falls inside a triangle of area inside , is at most indeed ,let be the subset of the root triangle which is available for the location of ; this is the convex set below the line , to the right of all lines , for ] .the probability that a point whose location is unknown falls inside is \over { \cal d}[{t}^{\ast}_i ] } \ge { ( 1-\gamma ) { \cal u}\left [ t \right ] \over { \cal u}[{t}^{\ast}_i ] / ( 1-\gamma ) } \ge { ( 1-\gamma ) { \cal u}\left [ t \right ] \over { \cal u } [ { t_1}]/ ( 1-\gamma ) } = { ( 1-\gamma)^2 } { s(t ) \over s_1}.\ ] ] choosing , the probability becomes hence , the probability that is empty is at most the proof of the lemma is concluded by identical arguments as in the proof of lemma [ lem : area recursion ] . using lemma [ lem : area recursion product distn ] and the union bound , we can show that , with probability at least the following are satisfied : * all triangles maintained by the algorithm at depth of its recursion tree have area at most * for every node ( triangle ) in the first levels of the recursion tree where is defined as in the statement of lemma [ lem : area recursion product distn ] .the proof of the second assertion above follows immediately from lemma [ lem : area recursion product distn ] and the union bound .the first assertion is shown similarly to the analogous assertion of theorem [ th : competitive ratio ppp ] .for the above we assumed that . now let us call the event that the above assertions are satisfied .we can show the following .[ lem : competitive ratio in the good event - product ] suppose .conditioning on the event , the expected performance ratio of the chord algorithm is .the proof is in the same spirit to the proof of lemma [ lem : competitive ratio in the good event ] , but more care is needed .we need to argue that , under , the expected number of points falling inside a triangle at depth of the recursion tree is . using rationale similar to that used in the proof of lemma [ lem : area recursion product distn ] above , we have the following : let be the triangle maintained by the algorithm at a node at depth of the recursion tree .let also be a point whose location is unknown to the algorithm ( conditioning on the information known to the algorithm after processing the first levels of the recursion tree ) .the probability that the point falls inside is \over { \cal d}[{t}^{\ast}_i ] } \le { { \cal u}\left [ t_i \right ] / ( 1-\gamma ) \over ( 1-\gamma ) { \cal u}[{t}^{\ast}_i ] } , \ ] ] where is the region below the line , above the lines , for all in the first levels of the recursion tree , and above the lines , for all in the first levels of the recursion tree . 
to upper bound the probability that falls inside , we need a lower bound on the size of the area .such bound can be obtained by noticing that where the summation ranges over all in the first levels of the recursion tree .hence , where we used that .hence , the probability that a point falls inside is at most / ( 1-\gamma ) \over ( 1-\gamma ) { \cal u}[{t}^{\ast}_i ] } \le { 1 \over ( 1-\gamma)^2 } \cdot { s_1 \cdot { e \cdot c \cdot \ln \ln n\over n } \over s_1/2 } \le { 2 \cdot e \cdot c \cdot \ln \ln n \over ( 1-\gamma)^2 n}.\ ] ] therefore , the expected number of points falling in is at most the final part of the proof is a charging argument identical to the one in lemma [ lem : competitive ratio in the good event ] .we have thus established the following : [ lem : good case competitive ratio - product ] for , there exists an event , with \ge 1-{2 \over \left ( \ln n \right)^{{{0.5 c ( 1-\gamma)^2}}-1}}, ] , such that the expected performance ratio of the chord algorithm conditioning on is .the proof is similar to the proof of lemma [ lem : good case competitive ratio - product ] , except that the bound is now a bit trickier . for ,let be the event that * all the triangles maintained by the algorithm at depth of its recursion tree have area at most * for every node inside the first levels of the recursion tree where is defined as in the statement of lemma [ lem : area recursion product distn ] . using arguments similar to those in the proof of lemma [ lem : area recursion product distn ] and the union bound , we obtain that \ge 1-{2 \over n^{{{0.5 c ' ( 1-\gamma)^2}}-1}}.\ ] ] now let and be the recursion tree of the algorithm pruned at level .we also define the set as in the proof of lemma [ lem : competitive ratio in the good event ] , ( but with replaced by ) . as in that proof , any -cp set needs to use at least points .moreover , the total number of nodes inside is at most whenever the algorithm processes a triangle , a planar region of area at most is removed from ( the root triangle ) .therefore , after finishing the processing of the first levels of the tree , a total area of at most is removed from .we distinguish two cases . if , then the size of the optimum is at least points . since there is a total of points ( and the algorithm never performs more than calls ) , it follows that in this case the performance ratio is . on the other hand ,if , then the total area that has been removed from is at most hence , the remaining area is at least , assuming . given this bound it follows that the expected number of points inside a triangle at level of the recursion tree is at most using the aforementioned and noting that the performance ratio `` paid '' within the first levels of the recursion tree is at most , we can conclude the proof via arguments parallel to those in the proof of lemma [ lem : competitive ratio in the good event ] .now let us choose and . from lemmas [ lem : good case competitive ratio - product ] and[ lem : pessimistic competitive ratio - product ] we have that \ge 1- { 2 \over ( \ln n)^3}~~\text{and}~~\pr[{\cal b}_{c ' } ] \ge 1- { 2 \over n}.\ ] ] given this , we can conclude the proof theorem [ th : competitive ratio product delta - balanced distribution ] .the argument is the same as the end of the proof of theorem [ th : competitive ratio ppp ] , except that we can now trivially bound the performance ratio of the algorithm by in the event . 
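Both random models used above are straightforward to simulate, which is convenient if one wants to observe the doubly logarithmic behavior empirically. The sketch below is added for illustration; the names and, in particular, the specific balanced density are my own choices. It samples an instance from a homogeneous poisson point process restricted to a triangle, and an instance of n i.i.d. points from one simple gamma-balanced product distribution on the unit square.

```python
import math
import random
from typing import List, Tuple

Point = Tuple[float, float]


def _poisson(lam: float) -> int:
    """Knuth's method; adequate for the moderate intensities of a quick simulation."""
    if lam <= 0:
        return 0
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1


def ppp_in_triangle(a: Point, b: Point, c: Point, nu: float) -> List[Point]:
    """Homogeneous Poisson point process of intensity nu restricted to triangle abc."""
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    pts = []
    for _ in range(_poisson(nu * area)):
        r1, r2 = math.sqrt(random.random()), random.random()   # uniform point in the triangle
        x = (1 - r1) * a[0] + r1 * (1 - r2) * b[0] + r1 * r2 * c[0]
        y = (1 - r1) * a[1] + r1 * (1 - r2) * b[1] + r1 * r2 * c[1]
        pts.append((x, y))
    return pts


def _balanced_marginal(gamma: float) -> float:
    """One simple gamma-balanced density on [0, 1]: (1-gamma) on [0, 1/2), (1+gamma) on [1/2, 1]."""
    u, half = random.random(), (1.0 - gamma) / 2.0
    return u / (1.0 - gamma) if u < half else 0.5 + (u - half) / (1.0 + gamma)


def balanced_product_instance(n: int, gamma: float) -> List[Point]:
    """n i.i.d. points from a product of gamma-balanced marginals on the unit square."""
    return [(_balanced_marginal(gamma), _balanced_marginal(gamma)) for _ in range(n)]


if __name__ == "__main__":
    random.seed(0)
    tri = ((0.0, 1.0), (1.0, 0.0), (0.0, 0.0))
    print(len(ppp_in_triangle(*tri, nu=200.0)))      # on the order of nu * area = 100 points
    print(balanced_product_instance(3, gamma=0.2))
```

Feeding either sample, together with the extreme points of the region, to the chord sketch given earlier and averaging the number of comb calls over many trials is a quick way to see the much milder growth predicted by the average-case bounds.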
in this sectionwe show that our upper bounds on the expected performance of the algorithm are tight up to constant factors .in particular , for the case of the poisson point process we prove : [ thm : lower bound ppp ] let be the triangle at the root of the chord algorithm s recursion tree , and suppose that points are inserted into according to a poisson point process with intensity .there exists an infinite family of instances ( parameterized by and ) on which the expected performance ratio of the chord algorithm is . in particular, we can select the parameters so that , which yields a lower bound of .the lower bound construction is reminiscent to the worst - case instance of section [ ssec : worst - lower ] . in particular, the initial triangle ( at the root of the recursion tree ) will be right and . to avoid clutter in the expressions, we present the proof for the case .the generalization for all values of is straightforward .we fix , and and select the intensity of the poisson process to be .note that for this setting of the parameters we have that , we thus obtain an lower bound .given the endpoints , it is clear that with probability .hence , it suffices to show that the chord algorithm makes calls in expectation before it terminates . to show this ,we are in turn going to prove that for being a sufficiently small positive constant , with constant probability , there exists a path of length in the recursion tree of the algorithm .as shown in lemma [ lem : hd - vs - rd ] , for such instances the ratio distance is very well approximated by the horizontal distance .in particular , consider the triangle ( see figure [ fig : chord - avg - lb ] ) , where .for any point , we have that hence , as long as ( the slope of ) is sufficiently large , we can essentially use the horizontal distance metric as a proxy for the ratio distance .indeed , this will be the case for our lower bound instance below .we now proceed to formally describe the construction .the path of length will be defined by the triangles with , i.e. , the ones corresponding to the right subproblem in every subdivision performed by the algorithm . for notational convenience, we shift the coordinate system to the point , so that , and .( note that the horizontal distance is invariant under translation . )we label the triangles in the path by , , , and we let the vertices of triangle be , , and .suppose that when the chord algorithm processes the triangle , the routine returns the point on a line parallel to ( as in figure [ fig : chord - avg - lb ] ) .note that and .let .let .the theorem follows easily from the next lemma : [ lem : long path ] let be a sufficiently small positive constant . with probability at least , for all ] . by the similarity of the triangles and we have from the properties of the poisson point process we have the following : conditioning on the information available to the algorithm when it processes the triangle , if a measurable region inside has area , then the number of points inside this region follows a poisson distribution with parameter . hence , with probability at least , any such region contains at least one point .hence , with probability at least we have that . note that . using and the induction hypothesis, this implies that , with probability at least , hence as desired . 
on the other hand ,if the area of a region is no more than the probability that a point is contained in that region is at most .similarly , this implies that , with probability at least , by the properties of the poisson point process it follows that the point is uniformly distributed on the segment .hence , with probability at least , it holds a union bound concludes the proof .we now show how the theorem follows from the above lemma .first note that , for all it holds hence , by lemma [ lem : long path ] and the choice of , it is easy to check that with probability at least , we have and latter condition implies that the horizontal distance is a very good approximation to the ratio distance .the latter , combined with the first condition , implies that the node ( corresponding to the triangle ) is not a leaf of the recursion tree .that is , all the triangles survive in the recursion tree , since the chord algorithm does not have a certificate that the points already computed are enough to form an -cp set .this concludes the proof of the theorem . an analogous result can be shown similarly for the case of points drawn from a balanced distributionwe studied the chord algorithm , a simple popular greedy algorithm that is used ( under different names ) for the approximation of convex curves in various areas .we analyzed the performance ratio of the algorithm , i.e. , the ratio of the cost of the algorithm over the minimum possible cost required to achieve a desired accuracy for an instance , with respect to the hausdorff and the ratio distance .we showed sharp upper and lower bounds , both in a worst case and in an average setting . in the worst casethe chord algorithm is roughly at most a logarithmic factor away from optimal , while in the average case it is at most a doubly logarithmic factor away .we showed also that no algorithm can achieve a constant ratio in the worst - case , in particular , at least a doubly logarithmic factor is unavoidable .we leave as an interesting open problem to determine if there is an algorithm with a better performance than the chord algorithm ( both in the worst - case and in average case settings ) , and to determine what is the best ratio that can be achieved .another interesting direction of further research is to analyze the performance of the chord algorithm in three and higher dimensions , and to characterize what is the best performance ratios that can be achieved by any algorithm . | the chord algorithm is a popular , simple method for the succinct approximation of curves , which is widely used , under different names , in a variety of areas , such as , multiobjective and parametric optimization , computational geometry , and graphics . we analyze the performance of the chord algorithm , as compared to the optimal approximation that achieves a desired accuracy with the minimum number of points . we prove sharp upper and lower bounds , both in the worst case and average case setting . |
the first papers about simple population models with complex dynamics are .the main method of analysis of these models is the reduction to discrete equations : the logistic and ricker s equation . in ,liu and gopalsamy investigated the following equation with piecewise constant argument ) \big\},\:\:\:t>0,\ ] ] where are positive constants and ] .we show that for critical values of the parameters the solutions of the differential equations ( [ e7]),([e8]),([e9 ] ) show intermittency which is `` almost periodic '' behavior interrupted by chaotic motions . in the population models ,we consider , where denotes the size of the population at time and is a positive integer , let say , the average value of a population .thus , does not represent the size of the population and it can be negative .let us start with equation ( [ e7 ] ) .if , , it takes the following form then , and , hence if one makes the change of variable in ( [ e13 ] ) , then obtains where .the right - hand side of equation ( [ e14 ] ) is the logistic map when , generates chaos through period - doubling ( see for more details ) . since this map is obtained from the solution of the equation ( [ e11 ] ) and , it is obvious that equation ( [ e7 ] ) can generate chaos for . in , one can find that has intermittent behavior at .therefore , equation ( [ e12 ] ) , and consequently , ( [ e7 ] ) , displays intermittency , too .let , and .one can see the intermittency phenomena for equation ( [ e7 ] ) in figure [ fig : intermittency x ] . ) ) \,x([t])$ ] with , , .,width=360 ] let us consider another equation ( [ e8 ] ) .if , then and hence , now the transformation , , in ( [ e17 ] ) yields where is a negative integer . the right - hand side of equation ( [ e18 ] ) is a function of the form in their article , may and oster discussed the behavior of the following discrete - time equation : they proved that for certain values of equation ( [ mo ] ) has fixed points of period for and it generates chaos .below we will try to extend their results to equation ( [ e160 ] ) for and .now let us consider the fixed points of , and for different values of with .consider the value which is borrowed from , the mapping is tangent to line , as shown in figure [ periodic2teget ] .and when and .,width=360 ] when , the mapping has extra fixed points which are denoted by black stars in figure [ periodic3 ]. then , there exist period three points which are not period one and two. consequently , equation ( [ e8 ] ) admits the chaos through li and yorke theorem .and when and .,width=360 ] for values of just above , the system displays intermittency . in common ,simulations of the corresponding discrete equation ( [ e18 ] ) are realized , but we propose to see the complex behavior in its original form .thus , to compute for let us apply the following program .first , we fix with a negative integer .then , we calculate the sequence , by using ( [ e18 ] ) and , then , and .substituting values of and in ( [ e160 ] ) , we obtain the solution of equation ( [ e110 ] ) . when , and , the result of simulation is seen in the figure [ fig : intermittency x(t ) ] .for , , .,width=360 ] let and . in figure [ chaos_2 ] , one can see that equation ( [ e160 ] ) generates chaos . 
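as a complement to the simulations shown in the figures , the following python sketch reproduces the kind of behavior described above for the reduced map : the logistic map is iterated for a parameter value just below the tangent bifurcation of the period - three window at 1 + sqrt(8) ( approximately 3.8284 ) , where intermittent , nearly period - three motion interrupted by chaotic bursts is well documented , and the lengths of the laminar phases are recorded . the parameter value , the initial condition and the laminar - phase criterion are illustrative choices and are not the values used in the text .

def laminar_phases(r=3.8282, x0=0.35, n_iter=20000, burn_in=1000, tol=1e-3):
    # iterate the logistic map x -> r*x*(1-x) and record the lengths of the
    # laminar (nearly period-3) phases, detected by |x_{k+3} - x_k| < tol
    x, orbit = x0, []
    for k in range(n_iter):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            orbit.append(x)
    phases, current = [], 0
    for k in range(len(orbit) - 3):
        if abs(orbit[k + 3] - orbit[k]) < tol:
            current += 1
        elif current > 0:
            phases.append(current)    # a chaotic burst terminates the laminar phase
            current = 0
    return phases

lengths = laminar_phases()
print(len(lengths), "laminar phases, mean length", sum(lengths) / max(1, len(lengths)))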
our last system is as follows , with the advanced argument x([t+1]) , where and are constants . the corresponding discrete - time equation is . letting and , the last equation can be written in the following form , where is a negative integer . the right - hand side of equation ( [ e21 ] ) is a function of the form where . similarly to the equation ( [ e15 ] ) , the last one generates complex dynamics .
we discuss the complex behavior of different types of logistic equations with piecewise constant argument of delay and advance types . the idea of anticipation and piecewise constant argument are used together . transformations of the space and time variables are used to obtain proper discrete - time equations . the parameter values of the discrete - time equations which cause chaos and intermittency are utilized to get analogues for continuous solutions . simulations of the continuous dynamics are given .
m. u. akhmet , h. öktem , s. pickl , g. w. weber , an anticipatory extension of malthusian model , seventh international conference on computing anticipatory systems , computing anticipatory system , casys05 , 2006 , pp . 260 - 264 . | we consider the logistic equation with different types of the piecewise constant argument . it is proved that the equation generates chaos and intermittency . li - yorke chaos is obtained as well as the chaos through period - doubling route . basic plots are presented to show the complexity of the behavior . logistic equation ; piecewise constant argument ; chaos ; intermittency . |
in this article we consider the inverse problem of computing the refractive index of a medium from ultrasound time - of - flight ( tof ) measurements . on the one sidethis task is a tomographic problem of its own and often called the _inverse kinematic problem _ which has important applications , e.g. in seismics . on the other side the knowledge of the sound speed , resp .refractive index , of an object under consideration is essential for inverse problems in inhomogeneous media such as photoacoustic or ultrasound vector tomography .the idea is very simple : an ultrasound signal is emitted at a transmitter and its travel time is acquired at a detector , see figure [ motivation ] .of course the tof depends of the sound speed in the medium . the _ refractive index _ , where denotes the constant sound speed of a reference medium like water or air outside the object , causes refractions of the ultrasound beam . assuming a constant ( e.g. by setting ) in applications such as photoacoustic or vector tomographyhence might cause severe artifacts .we emphasized this in figure [ motivation ] .there , the blue triangle would be detected at the wrong place if we assume that the ultrasound signal travels along a straight line and that there is no refraction at all . + to a detector .if , then the signal no longer travels along straight lines and reconstructions which neglect refraction show artifacts .e.g. the blue triangle would be detected at the wrong ( dotted ) place if we assumed straight lines as signal paths . ]let us briefly illuminate the situation in vector tomography .norton derived as mathematical model to compute a solenoidal vector field from tof measurements the doppler transform along straight lines where denotes the vector of direction of the line which connects the points and .hence , to solve the inverse problem of computing from data it is necessary to know the sound speed . but _fermat s pronciple _ says that the propagation paths are geodesic curves of the riemannian metric [ n - metric ] s^2 = ^2 ( x ) |x|^2 leading to an improved model where is a geodesic curve associated with the metric ( [ n - metric ] ) and connecting the points and .the problem now arises that the integrand determines the integration curve turning the inverse problem into a nonlinear one .also in photoacoustic tomography a variable sound speed leads to quite other analysis and numerics , see e.g. .hence , following fermat s principle , then instead of the euclidean space with the euclidean metric tensor , we have to consider the riemannian manifold with metric tensor .+ this is motivation to investigate the following inverse problem : given tof measurements for transmitter / detector pairs , compute the index of refraction satisfying [ ip - refractive ] r ( ) = u^meas , where the ray transform is given as [ ray - transform ] r ( ) ( x , ) : = _ ^_x , s = _^0 ( ^_x,(t ) ) |^_x,(t)| t . here, denotes a geodesic curve of the metric ( [ n - metric ] ) with and for a tangent vector at . 
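to make the data in ( [ ray - transform ] ) concrete : once a propagation path is available as a discretized curve , the time of flight is simply the integral of the refractive index with respect to euclidean arc length along that curve . the short python sketch below evaluates this integral for a polygonal approximation of a geodesic with the trapezoidal rule ; the function names are illustrative and not taken from the text .

import numpy as np

def time_of_flight(path, n):
    # approximate the ray transform along one curve: path is an (m, 2) array of points
    # sampling the geodesic, n(p) returns the refractive index at the point p
    path = np.asarray(path, dtype=float)
    tof = 0.0
    for p, q in zip(path[:-1], path[1:]):
        seg = np.linalg.norm(q - p)          # euclidean length of the segment
        tof += 0.5 * (n(p) + n(q)) * seg     # trapezoidal rule for n with respect to arc length
    return tof

# straight ray through a homogeneous medium (n = 1): the travel time equals the length
print(time_of_flight([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]], lambda p: 1.0))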
+this inverse problem and its research has a long lasting history .we summarize some important references in this context .herglotz was among the first researchers who has taken inhomogeneities into account .he investigated the earth s inner structure by considering travel times of seismic waves .mukhumetov proved that the determination of simple metrics in two dimensions from travel times is possible .the extension to the three - dimensional case was achieved independently by romanov and mukhumetov .general results on the inverse kinematic problem have been proven by stefanov and uhlmann in , chung et al . and sharafutdinov ; a further uniqueness and stability result can be found in .the 2d problem for anisotropic metrics was solved by pestov and uhlmann ; the approach contained therein is constructive .the question of a unique solution , the so called _ boundary rigidity problem _ , is not entirely solved by now .first results in 2d were achieved by michel , croke and otal .pestov and uhlmann showed uniqueness for simple , two - dimensional riemannian manifolds .a microlocal treatment can be found in stefanov and uhlmann .local and semi - global results were presented by croke et al . , stefanov and uhlmann , gromov et al . , croke , and lassas et al . , partly for special metrics only . for more references concerning analytical results for the inverse kinematic problem and the boundary rigidity problemwe refer to the book of sharafutdinov and the references therein .based on the pestov - uhlmann reconstruction formulas from monard derived a numerical solver for the linear geodesic ray transform .another numerical solution scheme which relies on beylkin s theory is presented in .the influence of refraction to reconstruction results in 2d emission tomography have been studied in , a numerical solver for the geodesic ray transform based on b - splines is presented in . a further numerical solver for the inversion of ( [ ray - transform ] )is found in klibanov and romanov ; here the linearization is done by replacing the geodesic curves by straight lines .+ the novelty of our article is twofold : on the one hand we consider the nonlinear problem and our numerical solver linearizes in each iteration step using the old iterate to compute the geodesic curves . on the other hand we use an explicit representation of the geodesic backprojection operator andshow how to implement it . to thisend the construction of a so called _ geodesic projection _ was necessary .+ _ outline . _ in section 2 we provide essential results from riemannian geometry which are necessary for our further considerations as well as the mathematical model for the inverse problem .additionally we collect some mathematical properties of the nonlinear forward operator ( [ ray - transform ] ) .the iterative solver which we develop in this article demands for evaluation of integrals along geodesic curves .the computation of these curves is done using the method of characteristics which is outlined in section 2.5 .the regularizing solution scheme is subject of section 3 .we formulate an appropriate tikhonov functional and linearize for its minimization .the derivative of the so arising functional contains the backprojection operator of the geodesic ray tranform .we give an explicit expression of this operator using the concept of _ phase functions _ and _ geodesic projection _ yielding in that sense an analogon to the conventional 2d backprojektion operator in euclidean geometry ( section 3.1 ) . 
the iterative minimization scheme and its implementationis described in section 3.2 .section 4 finally contains numerical evaluations of the method for several refractive indices with exact and noisy data .section 5 concludes the article .we collect some fundamental results from differential geometry which are useful for our later considerations . throughout the article we assume to be a compact and convex domain which is seen as a submanifold of .+ [ def:2_riemmetrik ] on we define a _riemannian metric _ as a differentiable mapping , such that is a positive definite , symmetric bilinear form on the tangent space in .we have for * , * , if and * for diffentiable vector fields is a differentiable mapping . here is the tangent bundle on .+ a representation of the metric with respect to local coordinates is given by g_x = _ i , j=1^n g_ij(x ) dx^i|_x dx^j|_x .[ gl:2_defimetriksum ] the third condition is then equivalent to the requirement that the coefficient functions are differentiable independently of the chart .the local coordinates are called _metric tensor_. + if it is convenient , then we use the einstein notation .that means , that we sum up over doubled indices .the representation then becomes the tuple is called _riemannian manifold_. + let be the metric tensor of euclidean geometry ( ) .then is a riemannian manifold and is called _euclidean space_. + we introduce a specific metric tensor which plays a crucial role when studying ultrasound wave propagation in an inhomogeneous medium .let be the speed of sound at and be the sound speed of a reference medium ( e.g. air or water ) .then , denotes the _ index of refraction_. especially we assume in .+ [ lem:2_messgeometrie ] for let [metric - n ] g_x^ = g_ij^(x ) dx^i dx^j with metric tensor where the index of refraction is supposed to be positive and differentiable .then is a riemannian manifold .the element of length is then given as the submanifold can be canonically embedded into such that we can choose the identity as chart which is differentiable .the metric tensor satisfies the requirements of definition [ def:2_riemmetrik ] , since it is symmetric ( ) , differentiable ( the component functions are differentiable ) and positive definite ( ) .+ for simplicity we set for the rest of the article .our aim is to model the propagation of ultrasound waves in a medium with variable sound speed by geodesic curves associated with the metric tensor ( [ metric - n ] ) .this is due fermat s principle ( see section 2.3 ) .we summarize the basics of geodesics . for detailswe refer to standard textbooks such as .let be a riemannian manifold , ] is called _ maximal _ if it can not be extended to a segment ] .+ if the metric is simple , then obviously every geodesic is also a shortest curve in the sense of definition [ d - distance ] .our modelling bases on an important physically axiom , _principle_. 
it can be summarized as follows : + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a wave signal , which moves from one point to another , always follows the locally shortest path , such that the time of flight is at its minimum .this means that the acceleration in every point disappears in path direction .+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as a consequence of this axiom it follows that the signals move along geodesic curves . according to this axiom we are going to model ultrasound beams in an inhomogeneous medium with refractive index as geodesic curves associated with the metric [ metric - fermat ] g^ ( x ) = ^2 ( x ) ( _ ij ) .we have now all ingredients together to describe the mathematical model of our measurement process .[ def:2_laufzeitabbildung ] let be a compact riemannian manifold , where is the metric ( [ metric - fermat ] ) .we call the mapping [ tof - mapping ] u : t^0 m ^+ , ( x , ) _ _ -(x,)^0 ( _ x,(t ) ) dt _ time - of - flight mapping _( tof mapping ) , where is parametrised with respect to arc length and : \gamma_{x,\xi}^{\tn}(\tau ) \cap \partial m\neq \emptyset \right\}\ ] ] is the moment where intersects the boundary for the first time . + the following definition addresses the practical situation that we have measurements at the boundary . in this casewe have to distinguish whether the ultrasound wave enters or leaves the domain .+ we have ( red arrow ) at and is the tangential vector ( dotted arrow ) in .there is an `` outflow '' with measured data . in the case of an `` inflow '' , , the geodesic would be outside from and hence . ]let we call _ outflow _ and _ inflow _ , where is the outer normal at in .we have and are compact manifolds .+ the situation is illustrated in figure [ fig:2_messrichtung ] .the following lemma is proven in .+ [ l - tau - smooth ] let be a compact , dissipative riemannian manifold . then the mapping is a smooth function .+ the data acquired by tof measurements can now be modeled as integrals of along geodesics associated with the metric . to this endlet and the solution of the geodesic equation ( [ gl:2_geoddgl ] ) with respect to the metric .the mapping with u^meas(x , ) = 0 & ( x , ) _ -m + _ ^_x , ( y ) l(y ) & ( x , ) _+ m + , [ gl:2_randwerte ] assigns a geodesic curve starting in with tangent its travel time until it leaves the domain .this is why we call the _ tof mapping_. 
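the case distinction in ( [ gl:2_randwerte ] ) is straightforward to express in code : a boundary pair is an inflow pair when the direction points into the domain with respect to the outer normal and then carries no data , while an outflow pair carries the travel time of the geodesic arriving there . the snippet below uses the unit square as an illustrative domain ; the helper travel_time stands for any routine returning that travel time ( for instance a numerical geodesic tracer ) and is not part of the text .

import numpy as np

def outer_normal_unit_square(x):
    # outer normal of the unit square at a boundary point (illustrative choice of domain)
    if np.isclose(x[0], 0.0):
        return np.array([-1.0, 0.0])
    if np.isclose(x[0], 1.0):
        return np.array([1.0, 0.0])
    if np.isclose(x[1], 0.0):
        return np.array([0.0, -1.0])
    return np.array([0.0, 1.0])

def u_meas(x, xi, travel_time):
    # measured data at a boundary pair (x, xi): zero for inflow directions,
    # the travel time of the geodesic arriving at (x, xi) for outflow directions;
    # travel_time is a placeholder for any routine returning that value
    if np.dot(outer_normal_unit_square(x), xi) < 0.0:    # direction points inward: inflow
        return 0.0
    return travel_time(x, xi)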
the inverse problem consists of computing the refractive index from .the forward operator is given by the ray transform r ( ) ( x , ) : = _ ^_x , ( z ) l(z ) , ( x , ) _+ m .[gl : operatorgleichung ] for we furthermore define the _ linearized ray transform _ where is the solution of with respect to the metric and initial values and .if , we obviously have the inverse problem of determining the refractive index from tof measurements finally means to find a solution of [ ip - tof ] r ( ) = u^meas .note that the crucial point is that the curve along which integrates depends on the integrand turning ( [ ip - tof ] ) into a highly nonlinear , ill - posed problem . from now onwe assume that the refractive index and is a compact , dissipative riemannian manifold ( cdrm ) .we recall that this implies that for any two points there exists a unique , maximal geodesic connecting and which at the same time is the shortest path between these points . for proving the continuity of is useful to introduce the phase function following the outlines of guillemin and sternberg .compare also ( * ? ? ?* section 2 ) . + for and , , we denote by \to\rr^2 ] .we obtain & & r(a ) - r_a_k(a ) _l^2(_+ m ) + & = & _ ^a_x , a(z ) l(z ) - _ ^a_k_x , a(z ) l(z ) _l^2(_+ m ) + & = & _ 0 ^ 1 a(^a_x,(t))|^a_x,(t ) | t - _ 0 ^ 1 a(^a_k_x,(t ) ) | ^a_k_x,(t ) | t _ l^2(_+ m ) + & & _ y m |a(y)| _ 0 ^ 1 |^a_x,(t ) - ^a_k_x,(t ) | t _ l^2(m ) .let be arbitrary . for large we deduce from lemma [ kor:2_stetigegeodaeten ] and lemma [ l - tau - smooth ] } \|\dot \gamma^a_{x,\xi}(t ) - \dot \gamma^{a_k}_{x,\xi}(t ) \|< \frac{\varepsilon}{2\sup_{y\in m } |a(y)| } .\ ] ] a sufficiently large furthermore assures that putting all this together we conclude r(a ) - r(a_k)_l^2(_+ m ) & & r(a ) - r_a_k(a ) _l^2(_+ m ) + r_a_k(a - a_k ) _ l^2(_+ m ) + & & /2 + / 2 = if only is sufficiently large .this proves the theorem .+ unfortunately by now there is no proof that is dense in or not .+ for completeness we give two existence and uniqueness results which can be found in .+ let such that it induces a simple riemannian metric .* if the measured data is generated by a refractive index , then the operator equation has a unique solution . *if the measured data is generated by a refractive index , then the nonlinear operator equation has a unique solution .+ our numerical solution scheme for ( [ ip - tof ] ) is iterative and demands for evaluating the forward operator , i.e. the computation of the tof measurements for a given metric , in each iteration step .we do this using the fact that the tof function ( [ tof - mapping ] ) satisfies a partial differential equation where the differential operator is the so called _ geodesic vector field_. [ def:3_geodaetischerfluss ] let be a riemannian manifold and the tof mapping ( [ tof - mapping ] ) .we call defined by _ geodesic vector field_. + there s a fundamental connection between the geodesic vector field and the refractive index regarding our measure geometry .[ sat:3_transportgleichung ] let be a riemannian manifold with the metric tensor given as .then for all the transport equation holds true. a proof can be found in ( * ? ? ?* section 1.2 ) .[ bem:3_fluss ] the differential operator has a specific geometrical meaning . 
let be the solution of then we have for a geodesic .this is the _ geodesic flow _ associated with the vector field .+ [ kor:3_transportgleichung ] let be positive and the metric on defined by with metric tensor then hu(x , ) & = & ^i ( x , ) + ^-1(x ) ( ( x ) ^2 - 2 ^i , ( x ) ) ( x , ) + & = & ( x ) [ gl:3_transportgleichung ] holds true for all . + let . for christoffel symbols are given by ( ^1_ij(x ) ) _ i , j=1 ^ 2 = ^-1(x ) & + & - .for the partial derivatives we have ^1_jk(x)^j^k & = & ^-1(x ) _ 1 & _ 2 & + & - _ 1 + _ 2 + & = & ^-1(x ) ( _ 1 ^ 2 + 2 _ 1 _ 2 - _ 2 ^ 2 ) + & = & ^-1(x ) ( 2_1 ^ 2 + 2 _ 1 _ 2 - ^2 ) + & = & ^-1(x ) ( 2 _ 1 - ^2 ) + & = & ^-1(x ) ( 2 _ 1 , ( x ) - ^2 ) . in the same way we compute andthe proposition follows immediately .+ in theorem [ sat:3_transportgleichung ] we transformed the inverse problem ( [ ip - tof ] ) into a parameter identification problem for the transport equation with boundary conditions .again we recognize that the problem is highly nonlinear , because the refractive index appears as source - term as well as parameter in the differential operator . + according to the transport equation we want to develop a method to compute the geodesic curves for a given refractive index . to this endwe use the method of characteristics .let be the graph of and a curve in with , a compact interval .differentiating with respect to yields then is a normal of the curve at since & & _ x u ( x(t),(t ) ) + _ u ( x(t),(t ) ) + -1 ( x ( t ) , ( t ) , x^i(t ) + ^i(t ) ) + & = & _ x u ( x(t),(t ) ) x(t ) + _ u(x(t),(t ) ) ( t ) - x^i(t ) + & & - ^i(t ) + & = & 0 .from we get that is a tangent vector , because & & _ x u ( x(t),(t ) ) + _ u ( x(t),(t ) ) + -1 ( ( t ) , ( -^i_jk(x(t ) ) ^j(t ) ^k(t))_i , ( x(t ) ) ) + & = & _ x u(x(t),(t ) ) - ^i_jk(x(t))^i(t)^j(t ) - ( x(t ) ) + & = & hu(x(t),(t ) ) - ( x(t ) ) + & = & 0 .if we choose a boundary point and a direction , we finally obtain the initial value problem x_i(t ) = _ i(t ) & t i , i=1,2 + _ i(t ) = -^i_jk(x(t ) ) ^j(t ) ^k(t ) & t i , i=1,2 + z(t ) = ( x(t ) ) & t i + & + x(0 ) = x_0 & + ( 0 ) = _ 0 & + z(0 ) = 0 & . [ gl:3_awp ]the solution is a characteristic , i.e. a curve on , which is the union of all characteristics .but the characteristic curves of are just the geodesics of . solving ( [ gl:3_awp ] )thus follows a geodesic curve starting at the boundary until it meets again say at .then is the tof of the ultrasound signal propagating from in direction until it leaves the boundary at .we intend to solve the nonlinear inverse problem ( [ ip - tof ] ) iteratively by a steepest descent method for a tikhonov functional where we use the linearized operator . in a first subsectionwe define this functional , the iteration scheme and some of its properties . the second subsection then is devoted to deal with implementation issues of this method . + again we assume to be compact . for and consider the tikhonov functional defined by [ def:4_tikhonovfunktional ] j_(f ) = r(f ) - u^meas _y^2 + f - 1 _ l^p(m)^p .here we used the notation .the penalty term thereby ensures that the solution gives a refractive index that does not vary too strongly from , i.e. a homogeneous medium .this is a supposed a priori information about the exact solution which we want to incorporate .furthermore this term of course yields stability of the solution process .the choice furthermore allows also for sparse solutions . 
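evaluating the forward operator inside the tikhonov functional just introduced amounts to solving the initial value problem ( [ gl:3_awp ] ) once per source / direction pair . the python sketch below does this with a fixed - step classical runge - kutta scheme for the conformal metric with tensor n^2 ( delta_ij ) , for which the christoffel symbols take the closed form gamma^i_jk = delta^i_j d_k ( ln n ) + delta^i_k d_j ( ln n ) - delta_jk d_i ( ln n ) ; the travel time is accumulated as the integral of n with respect to euclidean arc length . the gaussian bump used as refractive index , the unit square as domain and the fixed step size are illustrative assumptions only ( the implementation described later uses a runge - kutta method with step size control and the phantoms of the numerical section ) .

import numpy as np

def n_bump(p):
    # illustrative refractive index: homogeneous background plus one gaussian bump
    x, y = p
    return 1.0 + 0.2 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)

def grad_n_bump(p):
    x, y = p
    g = 0.2 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
    return np.array([-2.0 * (x - 0.5) * g / 0.02, -2.0 * (y - 0.5) * g / 0.02])

def geodesic_rhs(state, n, grad_n):
    # state = (x1, x2, xi1, xi2); geodesic equation of the metric with tensor n(x)^2 * identity,
    # whose christoffel symbols give  xi' = -2 (grad(ln n) . xi) xi + |xi|^2 grad(ln n)
    x, xi = state[:2], state[2:]
    gln = grad_n(x) / n(x)
    return np.concatenate([xi, -2.0 * np.dot(gln, xi) * xi + np.dot(xi, xi) * gln])

def trace_geodesic(x0, xi0, n=n_bump, grad_n=grad_n_bump, h=1e-3, max_steps=20000):
    # classical fixed-step runge-kutta integration of the characteristic system;
    # the travel time is accumulated as the integral of n times euclidean arc length
    state = np.concatenate([np.asarray(x0, dtype=float), np.asarray(xi0, dtype=float)])
    rhs = lambda s: geodesic_rhs(s, n, grad_n)
    path, tof = [state[:2].copy()], 0.0
    for _ in range(max_steps):
        k1 = rhs(state); k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2); k4 = rhs(state + h * k3)
        new = state + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        tof += 0.5 * (n(state[:2]) + n(new[:2])) * np.linalg.norm(new[:2] - state[:2])
        path.append(new[:2].copy())
        state = new
        if not (0.0 <= state[0] <= 1.0 and 0.0 <= state[1] <= 1.0):
            break                                  # the ray has left the unit square
    return np.array(path), tof

path, tof = trace_geodesic([0.0, 0.3], [1.0, 0.0])
print(len(path), "points, time of flight", tof)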
because of the nonlinearity we can not expect that there exists a unique minimizer of .usually a stationary point of is searched by the condition . therewe face a further problem : the computation of a gteaux derivative of .the computation is still object of current research .this is why we linearize the problem by using for fixed instead of the nonlinear operator . in this waya second tikhonov functional comes up which is defined by with fixed . since is linear , it is simple to compute the subdifferential .the last ingredient we need for doing so is the adjoint of .+ [ sat:4_adjoperator ] let be fixed. then the adjoint operator is given by [ ra - adjoint ] r_a^*(y ) = _ s^1 ( ( , ( y , ) ) ) | ( , ( y , ) ) | , ym , where \to \partial_+ \omega m ] and set for an integer .for the number of grid points we have and such that number of degrees of freedom increases like . note that in view of ( [ gl:2_geoddgl ] ) it is important that is differentiable on .this is achieved by extending the gradient of continuously to the edges of .+ the next step is the discretization of the measure data . in practical applications we of course have only a finite number of tof measurements .that means the ray transform is given for a finite number of pairs .this is why we define finite sets of source points and ray directions for .hence any geodesic curve for which we acquire measure data is characterized by an element of the set where . for the implementation of algorithm [ alg:4_iteradapmin ] we usethen the discrete tikhonov functional the according set of geodesic curves is given by and the initial value problem is solved by a runge - kutta method with stepsize control .+ the next important ingredient is the implementation of the backprojection operator for given .we recall that our implementation is similar to that of the backprojection operator in standard 2d computerized tomography , but there are two crucial issues to address : + * the determinant can not be computed , since the geodesic projection as well as the phase function are not explicitly known .two possible substitutions are to set the determinant equal to or to use the expression for the euclidean geometry what we have done in our computations .this perfectly fits to our a priori assumption that varies only slightly from .+ * in standard ct for a given point it is easy to compute the corresponding boundary point such that a line , outgoing from in direction of meets .for the determination of such that the geodesic meets a given is a difficult task .+ we developed the following algorithm to compute the backprojection operator for given -th iterate in algoritm [ alg:4_iteradapmin ] as it was used in our implementation .the unit vectors are discretized by for given .+ [ alg:4_adjoperator ] _ input : _ set of geodesic curves associated with and the corresponding tof measuremensts with , , , reconstruction point . *set .* iterate for * * compute and .* * project the point in direction on the boundary point and determine , which is the nearest neighbor to .* * determine the unique such that is minimal . * * if , set and go to step ( s1 ) . * * if , then determine such that for ( unique ) . * * if , then * * * check if . if yes then set and go to step ( s1f ) . * * * otherwise interpolate linearly between and and go to step ( s1 ) . * * if , then * * * check if .if yes then set and go to step ( s1 g ) . 
* * * otherwise interpolate linearly between and and go to step ( s1 ) .* approximate the backprojection by _ output : _ backprojection .+ the algorithm works as follows : at first we project to the boundary in direction in the euclidean sense .this gives ( s1b ) .next we choose from evey geodesic starting in that one whose trace has minimal distance from .the corresponding index of the tangent is denoted by ( s1c ) .the next step consists of computing the intersection point of with the line yielding .if the geodesic runs to the left of ( in the sense of ) , i.e. , we increment and repeat the procedure ( s1fi ) .if the geodesics , are located on different sides of ( in the sense of ) we use linear interpolation to compute ( s1fii ) .if we proceed in the same way ( s1 g ) . finally the backprojection is computed using the trapezoidal sum .please note that both concepts , the usage of the trapezoidal sum with respect to the projections as well as linear interpolation , are adopted from the standard backprojection step for fbp algorithm in 2d computerized tomography , see e.g. .algorithm [ alg:4_adjoperator ] is emphasized in figure [ fig:4_adjoperator ] . + * if a point is located close to the boundary , then it is possible that there are no neighboring geodesics . in this casewe interpolate to zero .* it is importand to fix what runs left or right of means .our choice is with respect to the line but is not necessarily the best choice . * when incrementing or decrementing the index we have to compute modulo . *the interpolation of can be done in several ways .here we interpolate linearly with respect to the distance of to the intersection points and . .the red dotted line is the geodesic curve , which runs exactly throug bit is not in the measured data , . ]in this section we demonstrate the performance of our method on the basis of several test examples using synthetic data .we implemented algorithms [ alg:4_iteradapmin ] , [ alg:4_abstieglintikh ] and [ alg:4_adjoperator ] as shown in section 3 . to solve the forward operator , i.e. to compute the ray transform for given and we applied an extrapolation step control method based on the classical runge - kutta method of 4th order .thereby the tolerance parameter was chosen as , the initial stepsize as and the stopping index was .since the geodesic curves can be computed independently from each other , the computation of as well as of the synthetic tof data can be parallelized what we have done using 30 cores .the objective functional is minimized subject of in each iteration step using the steepest descent method given in algorithm [ alg:4_abstieglintikh ] . in the first examplewe investigate the behavior of the reconstruction results with respect to a varying regularization parameter and for fixed norm . in every step we chose as a constant .the exact refractive index is given by the sound speed `` peaks '' as [ sound - peaks ] c(x ) = 1/(x ) = 1 + _ i=1 ^ 3_i(x)_i(x ) with and for given center points , radii and amplitudes , . in our experiments we set q_1 = + , q_2 = - + - , q_3 = + - for the center points , r_1 = , r_2 = , r_3 = for the radii and _ 1 = , _ 2 = - , _ 3 = for the amplitudes . in each iteration stepwe chose ( number of detectors ) and ( number of signals per detector ) , such that in every step we have to compute geodesics .the computation of a full set of geodesics lasts about 30 seconds .the evaluation of the adjoint lasts about one minute .we compared results for regularization parameters , . 
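the experiments just described rely on algorithms [ alg:4_iteradapmin ] , [ alg:4_abstieglintikh ] and [ alg:4_adjoperator ] . for readers who prefer a purely algebraic view of one descent step , the following python sketch discretizes the linearized ray transform as a system matrix ( each row integrates a pixel image along one frozen geodesic polyline , for instance one produced by the tracer sketched above ) and uses its transpose in place of the backprojection of algorithm [ alg:4_adjoperator ] . this matrix - transpose adjoint and the plain l^2 penalty are deliberate simplifications and not the implementation described in the text , which evaluates the continuous backprojection formula via the geodesic projection and allows general l^p penalties .

import numpy as np

def system_matrix(geodesics, grid_n):
    # rows = measured geodesics, columns = pixels of a grid_n x grid_n image of the unit
    # square; entry (j, c) is the arc length of geodesic j inside pixel c
    h = 1.0 / grid_n
    A = np.zeros((len(geodesics), grid_n * grid_n))
    for j, path in enumerate(geodesics):
        for p, q in zip(path[:-1], path[1:]):
            mid = 0.5 * (p + q)
            if not (0.0 <= mid[0] <= 1.0 and 0.0 <= mid[1] <= 1.0):
                continue
            ix = min(int(mid[0] / h), grid_n - 1)
            iy = min(int(mid[1] / h), grid_n - 1)
            A[j, ix * grid_n + iy] += np.linalg.norm(q - p)
    return A

def descent_step(f, A, u_meas, beta=1e-3, alpha=1e-4):
    # one steepest descent step for J(f) = ||A f - u_meas||^2 + alpha ||f - 1||^2 ;
    # A.T applied to the residual plays the role of the discrete backprojection
    residual = A @ f - u_meas
    grad = 2.0 * (A.T @ residual) + 2.0 * alpha * (f - 1.0)
    return f - beta * grad

# outer loop of the solver: trace the geodesics for the current iterate, rebuild A,
# take one or a few descent steps, then re-linearize with the updated iterate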
as iteration step size in algorithm [ alg:4_abstieglintikh ]we used .the unit square was discretized using the step size yielding reconstruction points .+ figure [ fig:4_ex1zf ] shows the values for different regularization parameters .one can see that all functional values at first increase and then decrease ( except at the first one ) to a minimum which is less than the initial value . on the basis of these curvesone is able to determine _ optimal stopping indices _ , , where figure [ fig:4_ex1sgv ] shows for along with the exact solution .the errors are plotted in figure [ fig:4_ex1sgvdiff ] .one realizes that for smaller values of there are higher fluctuations in the reconstructions . for all reconstructions the peaksare well detected , but their quantity is smaller if increases .for example in the peak we have and the reconstruction with gives , but for we have ( compare figure [ fig:4_ex1sgs ] ). a bigger leads to a higher weighting of the regularization term which penalizes aberrations from . the weak amplitude at is only detected if is big enough .otherwise the peak can not be recognized because of the high fluctuations . for different regularization parameters plotted against iteration index .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] for different regularization parameters plotted against iteration index .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] + for different regularization parameters plotted against iteration index .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] for different regularization parameters plotted against iteration index .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] + for different regularization parameters plotted against iteration index .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] + and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] + and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] and reconstructions for different regularization parameters .top left : , top right : , middle left : , middle right : , bottom left : , bottom right : original sound speed.,title="fig : " ] for different regularization parameters . _top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom : _ .,title="fig : " ] for different regularization parameters ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom : _ .,title="fig : " ] + for different regularization parameters . 
_top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom : _ .,title="fig : " ] for different regularization parameters ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom : _ .,title="fig : " ] + for different regularization parameters ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom : _ .,title="fig : " ] and reconstructions for different regularization parameters ._ left : _ original , _ middle : _ _ right : _ .,title="fig : " ] and reconstructions for different regularization parameters ._ left : _ original , _ middle : _ _ right : _ .,title="fig : " ] and reconstructions for different regularization parameters . _left : _ original , _ middle : _ _ right : _ .,title="fig : " ] for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] + for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] + for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] for different -norms ._ top left : _ , _ top right : _ , _ middle left : _ , _ middle right : _ , _ bottom left : _ , _ bottom right : _ exact sound speed.,title="fig : " ] .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] + .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] + .top left : , top right : , middle left : , middle right : , bottom : .,title="fig : " ] again we consider the exact speed of sound ( [ sound - peaks ] ) but now we calculate reconstructions with different -norms .more explicitly we set in case we implemented the soft threshold method as presented in with in this case .for all other we set . in this series of reconstructions we chose and yielding geodesics to be computed in each iteration setp .the computation of a full set of geodesics lasts about seconds , the evaluation of about 20 seconds . in every iteration stepwe make one descent step with step size parameter .the uinit square was discretized again using .the optimal stopping indices , were the reconstructions are visualized in figure [ fig:4_ex2peaksvogel ] compared with the exact . as expected the and norms lead to the best reconstructions because and thus is sparse .particularily the most part of the reconstruction is identical to . 
for the choices , and one discovers increasing smoothness but also fluctuations of the solution .it is interesting that the small peak at seems to cause severe artifacts at the boundary for .this is also emphasized in figure [ fig:4_ex2peaksdraufdiff ] where the errors are plotted .indeed all peaks are detected correctly ( the maximal error is about ) , but for higher norms the fluctuations in the euclidean areas , i.e. areas where , increase significantly .particularily the error is as large as the detected peaks .the artifacts close to the boundary become also obvious when comparing the traces of the geodesics obtained for and to those of the exact solution , see figure [ fig:4_ex2peaksgeodaeten ] .the left picture shows a good match of the reconstructed and approximated set of geodesics , whereas one clearly recognizes big aberrations in the picture to the right .+ , the red curves are geodesics of the reconstructions .left picture : , , right picture : , .,title="fig : " ] , the red curves are geodesics of the reconstructions .left picture : , , right picture : , .,title="fig : " ] next we consider a sound speed which is not sparse .let with and the parameters , .the riemannian manifold has then constant , positive gaussian curvature .reconstructions for -norms with , can be seen in figure [ fig : constant_curvature ] .the regularization parameters were and , respectively .the stopping indices were and , respectively .as expected the reconstruction for is more accurate then the sparse reconstruction for .we furthermore realize that at the center the reconstruction deteriorates .this comes from the specific metric which generates a _ cold spot _ in the center , that means a small region where almost no geodesic curve , i.e. ultrasound signal , intersects .this fact is clearly visible when we consider the associated geodesic curves for and ( figure [ fig : constant_curvature_geodesics ] ) .one sees that only few geodesics pass the center of the disk .+ , _ middle : _ , _ right : _ exact sound speed.,title="fig : " ] , _ middle : _ , _ right : _ exact sound speed.,title="fig : " ] , _ middle : _ , _ right : _ exact sound speed.,title="fig : " ] ( red curves ) and ( black curves ) .the _ cold spot _ in the center is clearly visible . ] at last we show a numerical test using noise contaminated data .we perform this test by means of the exact solution with , as in subsection [ c - peaks ] and and the parameters , .the measure data additionally have been contaminated by unifromly distributed noise , with relative error ( i.e. 
relative noise ) . figure [ fig : noise ] shows reconstructions with exact as well as with noisy data . the parameters for the reconstruction with noisy data are , , and .
[ fig : noise : ( left picture ) , reconstruction from exact data ( middle picture ) and from noisy data ( right picture ) ]
in the article we propose a numerical solution scheme for the computation of the refractive index of a medium from boundary time - of - flight measurements in 2d . the method relies on the minimization of a tikhonov functional that penalizes aberrations from . the minimization is done by a steepest descent method where the linearization of the forward operator was achieved by using the old iterate as refractive index to compute the propagation paths . the ultrasound signals were assumed to propagate along geodesics of the riemannian metric which is due to fermat s principle . we were able to prove that every sequence generated by our iterative scheme has weak limit points . of course this is a little unsatisfactory ; it would be great if one could prove that these limit points are minimizers of . one way to do this might be to use the concept of surrogate functionals , see e.g. . in fact if we define by then obviously our algorithm [ alg:4_iteradapmin ] reads as ; if we assume that at least for all close to , then can be interpreted as a ( local ) surrogate functional . the investigation of convergence as well as the derivation of the gâteaux derivative is subject of current research .
+ the numerical experiments show a good performance of the method , also in the case of sparse solutions .
+ another result of the article is the explicit representation of the backprojection operator for a non - euclidean geometry as well as its numerical realization . we showed the analogy to the conventional ( euclidean ) backprojection operator as it is known from 2d computerized tomography .
+ finally we would like to mention that the results of this article do not only affect seismics or phase contrast tof tomography , but also other tomographic problems in inhomogeneous media such as vector and tensor field tomography . we are indebted to the deutsche forschungsgemeinschaft ( german science foundation , dfg ) which funded this project under schu 1978/7 - 1 . | the article deals with a classical inverse problem : the computation of the refractive index of a medium from ultrasound time - of - flight ( tof ) measurements . this problem is very popular in seismics , but it also arises in tomographic problems in inhomogeneous media . for example ultrasound vector field tomography needs a priori knowledge of the sound speed . according to fermat s principle ultrasound signals travel along geodesic curves of a riemannian metric which is associated with the refractive index . the inverse problem thus consists of determining the index of refraction from integrals along geodesic curves associated with the integrand , leading to a nonlinear problem . in this article we describe a numerical solution scheme for this problem based on an iterative minimization method for an appropriate tikhonov functional . the outcome of the method is a stable approximation of the sought index of refraction as well as a corresponding set of geodesic curves .
we prove some analytical convergence results for this method and demonstrate its performance by means of several numerical experiments . another novelty in this article is the explicit representation of the backprojection operator for the ray transform in riemannian geometry and its numerical realization relying on a corresponding phase function that is determined by the metric . this gives a natural extension of the conventional backprojection from 2d computerized tomography to inhomogeneous geometries . keywords : refractive index , ray transform , riemannian metric , fermat s principle , geodesic curve , backprojection operator , tikhonov functional . msc : 45g10 , 53a35 , 58c35 , 65r20 , 65r32 |
in condensed matter physics , the task of obtaining different mechanical properties of materials , simulated atomistically with a large number of atoms under _ ab initio _ methods , is an almost prohibitive one , in terms of computational effort with the current computer architectures .it might even at times be impossible .because of this , producing a `` classical '' interatomic potential as a substitute for the genuine quantum - mechanical interaction of the particles is highly desirable .the usual procedure is to fit some empirical interatomic potential function , depending on parameters , requiring either agreement with certain macroscopic properties ( structural , thermodynamical , etc . ) or simply agreement between the predicted and observed energies and atomic forces .a standard algorithm based on force information is the force matching method . in this work we present a methodology for fitting interatomic potentials to _ ab initio _ data , using the particle swarm optimization ( pso ) algorithm .the objective function to be minimized is the total prediction error in the energies for the configurations provided , thus the algorithm does not require any information besides the atomic positions for each configuration and their corresponding _ ab initio _ energies . in particular it does not require the atomic forces , as in other fitting procedures such as force matching methods .we implemented two families of interatomic potentials , pair potentials and embedded atom potentials . among the former , we tested the well - known lennard - jones potential , given by ,\ ] ] and the 6-parameters `` generic '' potential as implemented in moldy , from the family of embedded atom potentials , having the general form we implemented the sutton - chen potential , where the pair functions and the embedding function are given by particle swarm optimization ( pso ) algorithm is based on the idea of distributing the search procedure among a large number of `` agents '' , which act independently of each other .each agent moves through the search space with a simple dynamics , reacting to fictitious forces drawing it towards its own _ current best _ solution and the _ global best _solution for the whole swarm . in this way ,when an agent finds a better solution than the current global best , it becomes the new global best and all the other agents react instantly , the swarm is directed towards the new solution .for a set of particles represented by their positions , the velocity for the -th particle and the -th step is and the position is given by we employed the following choice of pso parameters : =0.7 , =1.4 and =1.4 , after a few trial convergence runs .for a potential function where we wish to find the parameters from a set of positions and energies satisfying the relation with we can define an objective function which is just the total prediction squared error , of the form and then for the set of parameters that correctly fit the potential we have . 
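the velocity and position updates just stated translate almost directly into code . the sketch below is a minimal serial particle swarm optimizer in python using the quoted coefficients ( inertia 0.7 , cognitive and social weights 1.4 ) ; it omits the refinements discussed further down ( periodic re - randomization of the swarm , constraints on the distance scale parameter , mpi parallelization ) , and all function and variable names are illustrative .

import numpy as np

def pso_minimize(objective, bounds, n_particles=500, n_steps=2000,
                 omega=0.7, c1=1.4, c2=1.4, seed=0):
    # minimal serial particle swarm optimizer; bounds is a list of (lo, hi) pairs,
    # one per parameter of the potential
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    x = lo + (hi - lo) * rng.random((n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                          # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                        # global best
    g_val = pbest_val.min()
    for _ in range(n_steps):
        r1 = rng.random(x.shape); r2 = rng.random(x.shape)
        v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside the search box
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        if vals.min() < g_val:
            g_val, g = vals.min(), x[vals.argmin()].copy()
    return g, g_val

# toy check on a quadratic bowl with minimum at (1, -2)
best, err = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                         [(-5.0, 5.0), (-5.0, 5.0)], n_particles=50, n_steps=200)
print(best, err)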
then the problem may be solved numerically with the pso algorithm minimising the function .we have included some improvements on the pso implementation , particular to our problem .for instance , we perturbed the swarm every time the procedure gets stuck in a minimum for steps ( proportional to the number of parameters in the potential , usually ) , completely randomizing their positions .on the other hand , we exploit the fact that for several families of potentials there is a scale parameter for the interatomic distance , let us call it , such that the potential depends on only through .this is the case for the parameter in the lennard - jones potentials , for the , , , and parameters in the generic potential from moldy , and also for the parameter in the sutton - chen variant of the embedded atom potentials .this distance scale parameter can be constrained to be between the minimum observed distance and a multiple of this value ( typically 10 times ) , which considerably reduces the search space .parallelization was achieved simply by distributing the pso particles evenly among the different processors using the message passing interface ( mpi ) framework , at each step sharing the global best between all processors .in order to test the consistency of our procedure , we randomly generate a set of 20 configurations and we compute their energy according to the standard lennard - jones parameters for argon , = 0.0103048 ev and = 3.41 .the resulting set has a standard deviation of energy of 0.41063 ev .then , with the information of positions and energies ( in a parallel run using 64 cores and 500 pso particles ) , the time needed to find the minimum prediction error was 212.6 s. we can see that the algorithm converge quickly for each parameter , recovering their exact values at 1300 steps ( the prediction error reached is below 10 mev / atom ) . coordinate for the case of a lennard - jones potential as a function of optimization step . ]coordinate for the case of a lennard - jones potential as a function of optimization step . ] for the 6-parameters pair potential using the same set of positions and energies obtained for the previous lennard - jones test , the time needed to find the minimum prediction error was 3159.9 s , again using 64 cores and 500 pso particles . in this casethe error for the converged set of parameters falls below 8 mev / atom at 9000 steps .we repeated the same approach for the embedded atom potential , this time using the standard sutton - chen parameters for copper , =0.0123820 ev , =3.61 , =9 , =6 and =39.432 .we used 4 configurations as input , and we stopped the minimization procedure after 193015 steps ( execution time was 23 hours with 64 cores and 800 pso particles ) , when we reached a prediction error of about 0.8 mev / atom and the following fitted parameters : =0.0145749 ev , =3.5834 , =8.82683 , =5.67465 , and =37.028 .in order to test our procedure on a more realist scenario and assess the quality of the fitted potentials we performed _ab initio _ microcanonical molecular dynamics simulations of copper at different temperatures ( covering its solid , liquid and superheated phases ) .all molecular dynamics calculations were performed using density functional theory ( dft ) as implemented in vasp .we used perdew - burke - ernzerhof ( pbe ) generalized gradient approximation ( gga ) pseudopotentials with an energy cutoff of 204.9 ev and -point expansion around the point only . 
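the configurations and energies produced this way enter the fit only through the objective function defined at the beginning of this part . the following python sketch makes that objective explicit for the lennard - jones case : the predicted energy of a configuration is the pair sum over all atoms and the objective is the total squared deviation from the reference ab initio energies . periodic boundary conditions , cutoffs and the actual data sets used above are not reproduced ; combined with the pso sketch given earlier , this suffices for a toy version of the fit .

import numpy as np

def lj_energy(positions, eps, sigma):
    # total lennard-jones energy of one configuration; positions is an (n_atoms, 3) array,
    # treated as an open cluster (no periodic boundaries or cutoff in this sketch)
    r = np.asarray(positions, dtype=float)
    e = 0.0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            d = np.linalg.norm(r[i] - r[j])
            sr6 = (sigma / d) ** 6
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

def prediction_error(params, configs, energies):
    # objective of the fit: total squared error between predicted and ab initio energies
    eps, sigma = params
    pred = np.array([lj_energy(c, eps, sigma) for c in configs])
    return float(np.sum((pred - np.asarray(energies)) ** 2))

# example of use with the pso sketch given earlier, bounding the scale parameter sigma
# between the minimum observed interatomic distance d_min and ten times that value:
# best, err = pso_minimize(lambda p: prediction_error(p, configs, energies),
#                          bounds=[(1e-4, 1.0), (d_min, 10.0 * d_min)])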
from these simulations , we generated 13229 different atomic configurations with their respective energies , mixed from solid , liquid and superheated state simulations . among themwe chose a subset of 30 with maximum standard deviation of the energy ( namely 0.24 ev / atom ) , in order to increase the transferability of the fitted potential .these configurations were used as input to the fitting procedure .we found the sutton - chen potential parameters presented in table [ tbl_cu ] , with a prediction error of 5.19 mev / atom ..sutton - chen potential parameters for cu , fitted from ab initio data . [ cols="^,^,^,^,^,^",options="header " , ] we tested these parameters by performing classical molecular dynamics simulations using the lpmd code , with a 4x4x4 fcc simulation cell ( 256 atoms ) . fig .[ fig_cu_gdr ] shows the radial distribution function produced by our fitted cu potential for liquid at =1500 k. it reproduces exactly all features ( positions of minima and maxima , heights of the maxima ) found in a previous ab initio fitting study . for liquid copper at =1500 k. ][ fig_cu_msd ] shows the mean square displacement for liquid at =1500 k. from this we obtained a diffusion coefficient =0.276924 /ps , lower than the experimental value reported by meyer , 0.45 /ps at =1520 k. = 1500 k. ] the quality of the potential in reproducing thermal properties was assessed by computing the melting point , using the microcanonical z - method . in this method , for constant volumethe curve is drawn by performing different molecular dynamics simulations at different initial kinetic energies ( in every simulation the system starts with the ideal crystalline configuration ) .the discontinuity in the isochore signals the melting point .[ fig_cu_z ] shows the isochoric curve for different energies around the melting point , where the lowest point of the rightmost branch correspond to an upper estimate of the melting temperature , in our case approx .1700 k ( the experimental value is =1356.6 k ) .the highest point is the critical superheating temperature , around 2020 k. for comparison we also included the isochoric curve calculated with the potential parameters by sutton and chen , which gives around 2000 k for the same system size and number of simulation steps .we have shown that it is possible to use a parallel algorithm based on particle swarm optimization to fit interatomic potentials to _ ab initio _ energies only .our procedure has been tested by fitting both pair potentials and embedded atom potentials , up to a prediction error of the order of 1 mev / atom , using between 5 and 30 different configurations .the implementation code is parallelized using message passing interface ( mpi ) libraries .we demonstrated the capabilities of our method by fitting a set of sutton - chen parameters for copper using _ ab initio _ data from three thermodynamic phases .this fitted potential is able to reproduce the radial distribution function , although it underestimates the diffusion coefficient for liquid copper at =1500 k ( with respect to experimental data ) .it also yields a better prediction of the melting point than the standard sutton - chen parameters . | we present a methodology for fitting interatomic potentials to ab initio data , using the particle swarm optimization ( pso ) algorithm , needing only a set of positions and energies as input . 
the prediction error of the energies associated with the fitted parameters can be close to 1 mev / atom or lower , for reference energies having a standard deviation of about 0.5 ev / atom . we tested our method by fitting a sutton - chen potential for copper from _ ab initio _ data ; the resulting potential recovers structural and dynamical properties and gives better agreement between the predicted melting point and the experimental value than the standard sutton - chen parameters do . |
the concept of hierarchy has been used for systems described with networks for a significant time and for different purposes .it has been considered in context of elections , investigation of road network geometry and neural networks .more recently , the concept of hierarchy has been used for social systems , artificial neural networks , financial markets and communication networks .it has also been used on a more theoretical level , to distinguish and explain specific network properties .while the concept of hierarchy has been extensively used , it is continuously lacking a definition .there are papers that attempt to remedy that by introducing more formal definition and measures .the paper by corominas - murtra introduces definition of causal graphs and perfect hierarchical structure , and introduces measure of hierarchy that is based on similarity of a given graph to a perfect hierarchy .the paper by mones et . distinguishes between three types of hierarchy : order ( as in ) , nested ( as in ) and flow ( as in that paper and ) .the order hierarchy is a simple ordering of elements of a set along one dimensional axis .the nested hierarchy is tied to the community structure and multi - scale organization of the network .the flow hierarchy is basically a causal structure , with ( using tree analogy ) the `` root '' being the origin of all signals or flows , and `` leaves '' being simply receivers . in this paper , we focus on the flow hierarchy . as pointed out in order hierarchy can be seen as simple flow hierarchy , and nested hierarchy can be represented by a flow hierarchy . in our work , we intend not to make another definition of hierarchy , but to measure a single aspect of it its depth .we introduce two depth measures for directed networks and discuss their behavior and differences between them .we also show how the existence of cycles can be taken into account and how they influence depth measures .we define depth measures for directed networks , that are composed of vertices connected by arcs ( directed edges ) . in general, they are applicable to any directed network , even if it contains cycles .the depth is defined in the context of flow hierarchies , and is therefore most meaningful for networks that have a certain overall direction of all arcs .the _ rooted depth _ is a value defined for each vertex in a network , and is the distance of that vertex from a root .use of this definition requires a fixed root vertex .let us start simple and first consider a tree , where root is well defined .we are considering directed networks and consider tree where arcs are all directed from root and towards leaves .the root is a vertex with only outgoing arcs , while leaves have only incoming arcs .each vertex is at certain depth in the network , which we define as the distance from root to the vertex .this means that all vertices are organized into distinct depth levels .+ let us now generalize to any graph with a specified root , which we will call _ rooted network_. the root is the only vertex that has only outgoing arcs , and no incoming arcs ( ) .leaves are vertices without outgoing arcs , but some incoming arcs .the definition remains same without any changes , the distance being the length of shortest path . 
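the rooted depth of a rooted network therefore reduces to a breadth - first search from the root ; a minimal sketch ( assuming a networkx directed graph , with function names of our choosing ) :

```python
import networkx as nx

def roots(g):
    # roots have only outgoing arcs (no incoming arcs)
    return [v for v in g.nodes if g.in_degree(v) == 0 and g.out_degree(v) > 0]

def rooted_depth(g, root):
    # depth level of every vertex reachable from the given root:
    # length of the shortest directed path from the root to that vertex
    return nx.single_source_shortest_path_length(g, root)
```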
because of the existence of multiple paths of different lengths , it is possible however , that an arc connects higher depth vertex with lower depth vertex ( figure [ ksjh_fig_simplehierarchy ] ) .because the definition uses shortest path , it effectively ignores any arcs that are not part of a shortest path .this definition works for networks with loops and effectively ignores them .it is easy to understand , as at least part of the loop has to be not a shortest path from root to somewhere , so it simply is ignored .+ if we define any vertex that has no incoming arcs as a root , we may treat any directed graph as a sum of several rooted networks ( figure [ ksjh_fig_acyclic ] ) .each root vertex has a part of network reachable form it , forming its own _ rooted component _ , which may encompass whole network , but it not necessarily does .given multiple roots , a given vertex has several possibly different depths , depending which root we will look at .reasonable solution is to average the depths from different roots to obtain a single depth value for a vertex .additional issue comes from fact , that existence of loops means that no roots may exist in the whole network . in such case , the definition does not work , unless we perform loop collapse ( see below ) . + the _ relative depth _ is defined as a value assigned to each vertex , such that any arc always connects from a vertex with lower depth to a vertex with higher depth ( ) .this basically means that we treat the arcs as a `` has depth lesser than '' relation on the set of vertices of the graph , and thus define the depth levels for vertices .such definition is ambiguous , and to give precise values , we add that the difference must be minimal but at least ( ) .depth values assigned according to such definition correspond to difference between vertex depths of and being equal to longest directed path between them , provided such path exists ( figure [ ksjh_fig_alternative ] ) . in absence of the path , the difference can be any value .note that since the values are derived from relations , there is no set `` zero level '' and the values can be set relative to an arbitrary reference value . 
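for an acyclic graph the relative depth levels can be obtained by visiting the vertices in topological order and giving each vertex one plus the largest depth among its predecessors , which reproduces the longest - path property stated above ( the least - depth vertices are pinned to zero here , the convention adopted next in the text ) . a sketch assuming networkx :

```python
import networkx as nx

def relative_depth(g):
    # relative depth levels for a directed acyclic graph: every arc goes from a
    # lower to a strictly higher level and the level differences are minimal
    depth = {}
    for v in nx.topological_sort(g):
        preds = list(g.predecessors(v))
        depth[v] = 0 if not preds else 1 + max(depth[u] for u in preds)
    return depth
```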
for convenience ,it is reasonable to set vertex with least depth to , thus recovering non - negative depth levels like in the rooted depth definition .the definition of relative depth sets a strict inequality relations between vertices that have directed paths between them , meaning that they can indirectly interact in some way if the network is treated as a topology of interactions .the definition works only for acyclic graphs .any directed loop is unresolvable , just like an inequality does not have any solution .+ we can extend the definition to graph containing cycles if we force all vertices belonging to a given cycle to have the same depth value .this means each cycle is at fixed depth level , regardless of the directed arcs it contains .this is however a logical rule if all vertices can influence all other vertices in the set , all should be at same depth .effectively , from the point of view of the definition , a cycle is a single complex vertex .we define _ loop collapse _ to be a process , where all cycles are substituted with a complex single vertices ( see figure [ ksjh_fig_collapse ] ) .note that the loop collapse is virtual the vertices are not really collapsed , they are only treated so for the single purpose of assigning the depth level .all vertices that are component of a collapsed loop are assigned same depth .if all loops are collapsed , then we recover a directed acyclic graph , and relative depth definition can be used to calculate depth levels .note that it is possible to use loop collapse independently from relative depth definition , for example using it alongside rooted depth definition , in which case it changes the depth levels .+ both definitions attempt to measure the vertical size of the hierarchy or a directed network . despite aiming for the same ,they significantly differ from each other on several points.// the _ information reduction _ is present in both cases .rooted depth uses only shortest path , which means that it effectively ignores any arcs that are not part of a shortest path , which essentially reduces the whole network to a spanning tree of shortest paths .the relative depth on the other hand effectively ignores arcs that are not part of a longest path .relative depth definition essentially takes into account only arcs that are part of transitive reduction of a relation defined by arcs ( which by definition of relative depth is transitive ) .all `` shortcuts '' are ignored , often including the shortest paths .+ both definitions are different in the aspect of _locality_. rooted depth is defined globally and does not describe the local situation consistently depth levels viewed from local perspective of single vertex may be completely not related to the graph structure .the only rule is that outgoing arcs of given vertex always connect to vertices with depth no larger than and incoming arcs are form vertices with depths no smaller than .the relation between depth levels does not otherwise reflect arc structure , but the depth level of accurately describes how close the vertex is to the root .moreover the root is always at known , fixed depth level . on the other hand relative depthis defined locally and as such is relevant to local structure .arcs always go from lesser to greater depth levels , although the actual values are effect of global topology .the depth values do not reflect how close to the eventual root ( the `` top '' ) is the vertex .a vertex at depth could be connected directly from the root . 
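the loop collapse just introduced can be realized through the condensation of the graph into its strongly connected components , each original vertex inheriting the depth of its component ; the comparison of the two measures continues below . a sketch , again assuming networkx :

```python
import networkx as nx

def collapsed_relative_depth(g):
    # (virtual) loop collapse: compute relative depth on the condensation
    # (one node per strongly connected component) and map it back
    cond = nx.condensation(g)              # acyclic graph of the sccs
    depth = {}
    for c in nx.topological_sort(cond):
        preds = list(cond.predecessors(c))
        depth[c] = 0 if not preds else 1 + max(depth[p] for p in preds)
    mapping = cond.graph["mapping"]        # original vertex -> scc index
    return {v: depth[mapping[v]] for v in g.nodes}
```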
moreover ,the depth of a root is not fixed , and if several roots exist , may be found at any depth .overall , the rooted depth describes the global position more accurately , but fails locally .the relative depth vice versa .+ _ vulnerability _ of both definitions are also different .rooted depth requires existence of root vertices and can not work without them , while the definition of relative depth does nt work in presence of directed loops . in both casesthe loop collapse allows them to work , but nonexistence of root is rare for a network that would not be almost wholly loop - collapsed into single vertex ( at least for random graphs ) .both definitions of depth are volatile in regard to addition or removal of arcs .a single removal or addition could change depth levels of significant parts of the whole network . adding new links can only decrease depth levels for rooted depth , and increase them for relative depth , and vice versa .+ the rooted depth can take into account the _ multiple aspects _ of depth .if multiple roots are present , then the depth levels relative to each root can be treated separately , not averaged into a single number , or alternatively averaged with certain weights depending on the root or its rooted component .if the depth is considered in the context of a hierarchy , this can differentiate between positions within different hierarchies .relative depth can not distinguish multiple aspects , as it has to be defined with a single number for each vertex .it is however _ consistent _ , as that number is relatively well defined , without multiple aspects and issues tied to averaging .overall , both definitions have their strong and weak points . as they describe the complex topology by simple numbers , they reduce information in different ways , which yields different properties .the choice of depth measure will therefore depend on the aspects of the system being investigated and what one wishes to describe with it .we have investigated the depth measure values in directed random networks .we investigated a directed erdos - renyi graph , where probability of an arc existing between an ordered pair of vertices is constant .opposite arcs ( e.g. 
and ) can exist and their existence is decided independently .note that the parameter is a directed degree means that on average vertices have outgoing arcs as well as incoming arcs .if we looked at those links as undirected the total degree would be twice as high .+ first , we attempt to investigate most general properties of the depth depending on the network parameters .the average depth is a very simple measure , allowing us to look across different network parameters .we are interested in the total depth of the network however , not necessarily the average value of over all vertices .similar to physical depth , being distance from surface to bottom , not some average over a volume , we decide to measure as average od depth level over leaves in the network .for rooted depth and single - root network , this means where is the number of leaves , the sum over is over all leaves and is the path length from the root to leaf .if there is more than one root , the depth level of vertices is average over all roots _ it has directed paths from _ , which means where is the number of leaves , the sum over is over all leaves , is number of roots that have path to leaf , the sum over is over all roots that have path to leaf and is the path length from the root to leaf .the averaging od depth level for each vertex makes sense when we are investigating that single vertex .if we are looking at whole network however , it is more reasonable to look at pairs root - leaf and averaging over that .this would yield a definition where is number of pairs root - leaf connected by directed path , the sum is over all pairs that are connected by a path and is the lengh of that path .equations [ ksjh_eq_depth1 ] and [ ksjh_eq_depth1pair ] define slightly different values . in the first equation, if vertex is under small number of roots , the depth levels generated by them are contributing more to the total compared to a vertex under large number of roots , which averages over more different depths .if all vertices have directed paths from all roots , then both equations are equivalent as and therefore it can be extracted from under the sum over and is all pairs have paths between them , thus recovering equation [ ksjh_eq_depth1pair ] . as we are calculating a global property, we decided to use the definition described by equation [ ksjh_eq_depth1pair ] as more appropriate on the global level .+ the dependence of average _ rooted depth _ on average node degree is shown at figure [ ksjh_fig_erdos1 ] .note that in case where no roots or leaves could be found , we assumed depth equal to 0 for practical reasons .we explain the results at figure [ ksjh_fig_erdos1 ] as follows .below , the network is not percolated and consist of single vertices ( ) and small clusters ( responsible for increasing value of with ) . loops are very rare . at network becomes percolated ( note that this is directed percolation ) , creating the giant cluster .this corresponds with the maximum of the average depth level .the average depth is tied ot the diameter of the giant cluster as well as the amount of still present disconnected clusters . above adding more arcs means creating loops , since the network is already percolated .loops introduced randomly cause decrease of the component diameter and thus depth , which corresponds to the first drop of after the peak . 
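the root - leaf pair average used above , together with the directed erdos - renyi ensemble , can be sketched as follows ( the discussion of the high - density regime continues below ) ; networkx and the convention of returning 0 when no root - leaf pair exists are assumptions consistent with the text :

```python
import networkx as nx

def network_rooted_depth(g):
    # average shortest-path length over all (root, leaf) pairs joined by a directed path
    root_list = [v for v in g.nodes if g.in_degree(v) == 0]
    leaf_list = [v for v in g.nodes if g.out_degree(v) == 0]
    total, pairs = 0, 0
    for r in root_list:
        dist = nx.single_source_shortest_path_length(g, r)
        for l in leaf_list:
            if l in dist and l != r:
                total += dist[l]
                pairs += 1
    return total / pairs if pairs else 0.0

def random_digraph(n, mean_degree, seed=None):
    # directed erdos-renyi graph: each ordered pair gets an arc with probability <k>/n
    return nx.gnp_random_graph(n, mean_degree / n, seed=seed, directed=True)
```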
increasing number of loops also may cause situations where no roots or no leaves exist in the network , causing the average depth to drop towards ( the assumed value for network without roots or leaves ) .this starts to happen only at high and is responsible for second drop of the ( around for ) .figure [ ksjh_fig_topology ] shows how example network looks like at different .+ it is notable that the shape of the dependence of network depth on density shown on figure [ ksjh_fig_erdos1 ] is very similar to global reaching centrality measure introduced by mones et.al . and calculated for directed random graphs .the similarity can be explained , by both depth and grc being measures for hierarchies .below percolation threshold , the graph consists of many small clusters , which yield very low average depth , at the same moment giving very low grc due to narrow local reaching centrality distributions . significantly above percolation threshold, the presence of many loops mean that the network has large strongly connected component , that has relatively low depth as well as again producing narrow lrc distribution and in effect low grc . at percolation threshold ,the giant connected component is critically sparsely connected , meaning no loops and in effect a high depth ( long shortest paths ) .on other hand , without loops the vertices have a broad distribution of lrc , resulting in high grc .both measures achieve high values for a large , extended directed network without loops , which explains the correlation .+ because the rooted depth can not be resolved in case there are no roots or leaves in the network , and also to have better comparison between rooted depth and further relative depth , we decided to also investigate rooted depth while using the loop - collapse .the results are somewhat different and found at figure [ ksjh_fig_erdos2 ] .the main differenc is the plateau after the peak , with value .this corresponds to a network with a dense `` core '' containing many interleaving loops , and thus collapsed to a single complex vertex , with single vertex `` roots '' and `` leaves '' sticking out of this core . without the loop - collapse ,the loops were partially converted to `` vertical '' and assigned some differing depths ( differently from perspective of each root ) , the denser the core , the smaller depths . with loop - collapse ,the value of for network with the dense core is fixed at , resulting in plateau .note that since loop - collapse virtually removes all loops , there are always roots and leaves and in worst case the sole remaining complex vertex is both root and leaf and thus has .also note that all the values ( , , ) are calculated for the real , full network .loop - collapse is used only to calculate depth levels .+ the results are similar , although there are differences .below , the behavior is the same , as lack of loops means loop - collapse does nt do anything . after the maximum, the average depth declines much faster , because addition of loops causes loop - collapse , thus decreasing the effective size of the giant component much quicker .the value then stabilizes at , because the typical topology of loop - collapsed network at this point consists of single complex network representing the majority of original vertices , with few single roots and leaves directly attached to it . 
asthe becomes even higher , the network starts to entirely collapse to single complex network , causing to approach for very high .this explanation is reinforced by the size of the loop - collapsed network , that is very close to original size below percolation threshold , and then exponentially decreases towards single vertex ( figure [ ksjh_fig_size1 ] ) .the measurement of _ relative depth _ can be performed similarly .since we need to use loop - collapse , we always recover roots and leaves .if our relative depth levels are scaled so that the lowest value is ( similarly to how roots in rooted depth have depth ) , we can calculate average depth of leaves ( the `` bottom '' ) and treat that as the depth of the network where is average over leaves .note however , that unlike in rooted depth , roots ( the `` surface '' ) can be at different depth levels themselves .we can thus calculate the depth of whole network as difference between average depth of leaves and roots ( the `` bottom '' and `` surface '' ) .the first equation ( [ ksjh_eq_depth2r ] ) describes the depth of the network globally , taking into account the extreme value ( for roots ) .the second equation ( [ ksjh_eq_depth2 ] ) is a more locally relevant , showing what is the likely depth in a part of the network .it could be compared to rooted depth where we only take `` highest '' root into account and normal rooted depth that considers all roots .both equations yield similar results , presented in figure [ ksjh_fig_erdos3 ] .in general , it can be seen that relative depth behaves similarly to rooted depth .for small the value is low .the peak is around percolation threshold , corresponding to the maximally stretched treelike structure of the giant component , and then the decline of average depth correspond to the gradual increase in density and number of loops , which flattens the network when if comes to depth ( remember we use loop - collapse ) .the one difference between rooted and relative depth discrenible on the plots is the absence of the sudden drop for higher .this can be explained by the fact , that when we could nt find either roots or leaves in rooted depth calculation , we assumed total depth equal to . if we look at figure [ ksjh_fig_topology ] , we see that in the `` core with roots and leaves '' stadium , loss of last root effectively makes the value ( the assumed value ) , despite one arc earlier having depth or even more .this accounts for the fast drop of depth for rooted depth not using loop - collapse .since relative depth always uses loop collapse , the `` core with roots and leaves '' would simply change into depth structure , where either the `` core '' would assume role of leaf / leaves ( loss of last leaf ) or role of root ( loss of last root ) .the transition is smooth , thus no sudden drops on the graphs for relative depth could be found . 
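as one concrete realization of the leaf - and root - averaged relative depth used above ( computed on the loop - collapsed graph ; note that with the minimal longest - path level assignment used in the earlier sketches all roots sit at level zero , so the subtlety of roots appearing at different depths discussed in the text is ignored in this simplification ) :

```python
def network_relative_depth(g, depth):
    # depth: per-vertex relative levels, e.g. from collapsed_relative_depth(g);
    # network depth = average level of leaves minus average level of roots
    root_list = [v for v in g.nodes if g.in_degree(v) == 0]
    leaf_list = [v for v in g.nodes if g.out_degree(v) == 0]
    if not root_list or not leaf_list:
        return 0.0
    leaf_avg = sum(depth[v] for v in leaf_list) / len(leaf_list)
    root_avg = sum(depth[v] for v in root_list) / len(root_list)
    return leaf_avg - root_avg
```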
trying to describe the behavior of the depth measures for different network parameters we also investigated the height of the peak for different network sizes and different definitions .figure [ ksjh_fig_peaks ] shows results for rooted depth of networks without loop - collapse , with loop - collapse and for relative depth .we found power dependence od rooted depth on size , with exponent to ) .we expect these values to be related to the diameter ( maximum shortest path length ) of the percolation cluster in random graph .the relative depth peaks also behave as a power of network size , albeit with significantly lower exponent .we have defined two depth measures for flow hierarchies , generalized to any type of directed network . _ rooted depth _ is defined as shortest path from one of network s roots , while _ relative depth _ is defined through relations between vertices and is effectively equal the longest path from the root .the differences between these measures are discussed rooted depth ignores arcs that are not shortest paths , while relative depth ignores arcs that are not longest paths .rooted depth has more global meaning , while relative one is more meaningful locally .we have investigated the two measures on a random erdos - renyi networks , showing how they behave on purely random network , thus allowing better understanding of values obtained in other types of networks .both measures behave similarly to a global reaching centrality measure of how hierarchical the network is . +the research leading to these results has received funding from the european union seventh framework programme ( fp7/2007 - 2013 ) under grant agreement no 317534 ( the sophocles project ) .00 a. shimbel , communication in a hierarchical network , bulletin of mathematical biophysics 14 ( 1952 ) 141 - 151 .a. okabe , h. yomono , statistical methods for evaluating the geometrical hierarchy of a network , geographical analysis 20 ( 1988 ) 122 - 139 .willcox , understanding hierarchical neural network behaviour : a renormalization group approach , journal of physics a 24 ( 1991 ) 2655 - 2664 .e. bonabeau , g. theraulaz , j .- l .deneubourg , phase diagram of a model of self - organizing hierarchies , physica a 217 ( 1995 ) 373 - 392 .sousa , d. stauffer , reinvestigation of self - organizing social hierarchies , international journal of modern physics c 11 ( 2000 ) 1063 - 1066 .d. stauffer , phase transition in hierarchy model of bonabeau , international journal of modern physics c 14 ( 2003 ) 237 - 239 .gallos , self - organizing social hierarchies on scale - free networks , international journal of modern physics c 16 ( 2005 ) 1329 - 1336 .s. valverde , r.v .sole , self - organization versus hierarchy in open - source social networks , physical review e 76 ( 2007 ) 046118 .y. yamashita , j. tani , emergence of functional hierarchy in a multiple timescale neural network model : a humanoid robot experiment , plos computational biology 4 issue 11 ( 2008 ) 1000220 m. tumminello , f. lillo , r.n .mantegna , correlation , hierarchies , and networks in financial markets , journal of economic behavior & organization 75 ( 2010 ) 4058 .y. wang , m. iliofotou , m. faloutsos , b. wu , analyzing interaction communication networks in enterprises and identifying hierarchies , proceedings of the 2011 ieee 1st international network science workshop , nsw 2011 ( 2011 ) , 17 - 24 ( art .6004653 ) e. ravasz , a .-barabasi , hierarchical organization in complex networks , physical review e 67 ( 2003 ) 026112 .a. trusina , s. 
maslov , p. minnhagen , k. sneppen , hierarchy measures in complex networks , physical review letters 92 ( 2004 ) 178702 .e. mones , l. vicsek , t. vicsek , hierarchy measure for complex networks , plos one 7(3 ) ( 2012 ) e33799 .doi:10.1371/journal.pone.0033799 b. corominas - murtra , c. rodriguez - caso , j. goni , measuring the hierarchy of feedforward networks , chaos 21 ( 2011 ) 016108 .e. mones , hierarchy in directed random networks , physical review e 87 ( 2013 ) 022817 . | we explore depth measures for flow hierarchy in directed networks . we define two measures rooted depth and relative depth , and discuss differences between them . we investigate how the two measures behave in random erdos - renyi graphs of different sizes and densities and explain obtained results . hierarchy , complex networks , depth , directed graph , random graphs |
several works ( e.g. , ratinov and roth , 2009 ; cohen and sarawagi , 2004 ) have shown that injecting dictionary matches as features in a sequence tagger results in significant gains in ner performance . however , building these dictionaries requires a huge amount of human effort and it is often difficult to get good coverage for many named entity types .the problem is more severe when we consider named entity types such as gene , virus and disease , because of the large ( and growing ) number of names in use , the fact that the names are heavily abbreviated and multiple names are used to refer to the same entity . also , these dictionaries can only be built by domain experts , making the process very expensive .this paper describes an approach for automatic construction of dictionaries for ner using large amounts of unlabeled data and a small number of seed examples .our approach consists of two steps .first , we collect a high recall , low precision list of _ candidate phrases _ from the large unlabeled data collection for every named entity type using simple rules . in the second step , we construct an accurate dictionary of named entities by removing the noisy candidates from the list obtained in the first step .this is done by learning a classifier using the lower dimensional , real - valued cca embeddings of the candidate phrases as features and training it using a small number of labeled examples .the classifier we use is a binary svm which predicts whether a candidate phrase is a named entity or not .we compare our method to a widely used semi - supervised algorithm based on co - training .the dictionaries are first evaluated on virus and disease ner by using them directly in dictionary based taggers .we also give results comparing the dictionaries produced by the two semi - supervised approaches with dictionaries that are compiled manually .the effectiveness of the dictionaries are also measured by injecting dictionary matches as features in a conditional random field ( crf ) based tagger .the results indicate that our approach with minimal supervision produces dictionaries that are comparable to dictionaries compiled manually .finally , we also compare the quality of the candidate phrase embeddings with word embeddings by adding them as features in a crf based sequence tagger .we first give background on canonical correlation analysis ( cca ) , and then give background on crfs for the ner problem . the input to cca consists of paired observations where are the feature representations for the two views of a data point .cca simultaneously learns projection matrices ( is a small number ) which are used to obtain the lower dimensional representations where . are chosen to maximize the correlation between and .consider the setting where we have a label for the data point along with it s two views and either view is sufficient to make accurate predictions . and give strong theoretical guarantees when the lower dimensional embeddings from cca are used for predicting the label of the data point .this setting is similar to the one considered in co - training but there is no assumption of independence between the two views of the data point .also , it is an exact algorithm unlike the algorithm given in . since we are using lower dimensional embeddings of the data point for prediction , we can learn a predictor with fewer labeled examples .crf based sequence taggers have been used for a number of ner tasks ( e.g. , mccallum and li , 2003 ) and in particular for biomedical ner ( e.g. 
, mcdonald and pereira , 2005 ; burr settles , 2004 ) because they allow a great deal of flexibility in the features which can be included .the input to a crf tagger is a sentence ( ) where are words in the sentence .the output is a sequence of tags where . is the tag given to the first word in a named entity , is the tag given to all words except the first word in a named entity and is the tag given to all other words .we used the standard ner baseline features ( e.g. , dhillon et al ., 2011 ; ratinov and roth , 2009 ) which include : * current word and its lexical features which include whether the word is capitalized and whether all the characters are capitalized .prefix and suffixes of the word were also added . * word tokens in window of size two around the current word which include and also the capitalization pattern in the window . *previous two predictions and .the effectiveness of the dictionaries are evaluated by adding dictionary matches as features along with the baseline features in the crf tagger .we also compared the quality of the candidate phrase embeddings with the word - level embeddings by adding them as features along with the baseline features in the crf tagger .this section describes the two steps in our approach : obtaining candidate phrases and classifying them .we used the full text of 110,369 biomedical publications in the biomed central corpus to get the high recall , low precision list of candidate phrases .the advantages of using this huge collection of publications are obvious : almost all ( including rare ) named entities related to the biomedical domain will be mentioned and contains more recent developments than a structured resource like wikipedia .the challenge however is that these publications are unstructured and hence it is a difficult task to construct accurate dictionaries using them with minimal supervision .the list of virus candidate phrases were obtained by extracting phrases that occur between `` the '' and `` virus '' in the simple pattern `` the ... virus '' during a single pass over the unlabeled document collection .this noisy list had a lot of virus names such as _ influenza _ , _ human immunodeficiency _ and _ epstein - barr _ along with phrases that are not virus names , like _ mutant _ , _ same _ , _ new _ , and so on .a similar rule like `` the ... disease '' did not give a good coverage of disease names since it is not the common way of how diseases are mentioned in publications .so we took a different approach to obtain the noisy list of disease names .we collected every sentence in the unlabeled data collection that has the word `` disease '' in it and extracted noun phrases following the patterns `` diseases like .... '' , `` diseases such as .... '' , `` diseases including .... '' , `` diagnosed with .... '' , `` patients with .... '' and `` suffering from .... '' .having found the list of candidate phrases , we now describe how noisy words are filtered out from them .we gather ( _ spelling _ , _ context _ ) pairs for every instance of a candidate phrase in the unlabeled data collection . 
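the pattern - based extraction of candidate phrases described above can be approximated with a few regular expressions ; these are a hypothetical rendering of the rules ( `` the ... virus '' and the disease trigger phrases ) , not the exact patterns or noun - phrase chunker used by the authors . the two views of each candidate are defined next .

```python
import re

VIRUS_PATTERN = re.compile(r"\bthe\s+((?:[\w-]+\s+){1,4}?)virus\b", re.IGNORECASE)

DISEASE_TRIGGERS = [r"diseases\s+(?:like|such\s+as|including)\s+",
                    r"diagnosed\s+with\s+",
                    r"patients\s+with\s+",
                    r"suffering\s+from\s+"]
DISEASE_PATTERN = re.compile("(?:" + "|".join(DISEASE_TRIGGERS) + r")((?:[\w-]+\s*){1,5})",
                             re.IGNORECASE)

def virus_candidates(sentence):
    # phrases occurring between "the" and "virus"
    return [m.group(1).strip() for m in VIRUS_PATTERN.finditer(sentence)]

def disease_candidates(sentence):
    # spans following the disease trigger patterns (a crude stand-in for the
    # noun-phrase extraction described in the text)
    return [m.group(1).strip() for m in DISEASE_PATTERN.finditer(sentence)]
```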
_spelling _ refers to the candidate phrase itself while _ context _ includes three words each to the left and the right of the candidate phrase in the sentence .the _ spelling _ and the _ context _ of the candidate phrase provide a natural split into two views which multi - view algorithms like co - training and cca can exploit .the only supervision in our method is to provide a few _ spelling _ seed examples ( 10 in the case of virus , 18 in the case of disease ) , for example , _ human immunodeficiency _ is a virus and _ mutant _ is not a virus .we use cca described in the previous section to obtain lower dimensional embeddings for the candidate phrases using the ( _ spelling _ , _ context _ ) views . unlike previous works such as and, we use cca to learn embeddings for candidate phrases instead of all words in the vocabulary so that we do nt miss named entities which have two or more words .let the number of ( _ spelling _ , _ context _ ) pairs be ( sum of total number of instances of every candidate phrase in the unlabeled data collection ) .first , we map the _ spelling _ and _ context _ to high - dimensional feature vectors . for the _spelling _ view , we define a feature for every candidate phrase and also a boolean feature which indicates whether the phrase is capitalized or not . for the _ context _ view, we use features similar to where a feature for every word in the _ context _ in conjunction with its position is defined .each of the ( _ spelling _ , _ context _ ) pairs are mapped to a pair of high - dimensional feature vectors to get paired observations with ( are the feature space dimensions of the _ spelling _ and _ context _ view respectively ) . using cca, we learn the projection matrices ( and ) and obtain _ spelling _ view projections .the k - dimensional _ spelling _ view projection of any instance of a candidate phrase is used as it s embedding .the k - dimensional candidate phrase embeddings are used as features to learn a binary svm with the seed _ spelling _ examples given in figure 1 as training data .the binary svm predicts whether a candidate phrase is a named entity or not . since the value of k is small , a small number of labeled examples are sufficient to train an accurate classifierthe learned svm is used to filter out the noisy phrases from the list of candidate phrases obtained in the previous step .to summarize , our approach for classifying candidate phrases has the following steps : * input : ( _ spelling _ , _ context _ ) pairs , _ spelling _ seed examples .* each of the ( _ spelling _ , _ context _ ) pairs are mapped to a pair of high - dimensional feature vectors to get paired observations with . * using cca , we learn the projection matrices and obtain _ spelling _ view projections . *the embedding of a candidate phrase is given by the k - dimensional _ spelling _ view projection of any instance of the candidate phrase .* we learn a binary svm with the candidate phrase embeddings as features and the _ spelling _ seed examples given in figure 1 as training data . using this svm, we predict whether a candidate phrase is a named entity or not .we discuss here briefly the dl - cotrain algorithm which is based on co - training , to classify candidate phrases .we compare our approach using cca embeddings with this approach . 
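before turning to the dl - cotrain baseline described next , the classification steps summarized above can be sketched compactly : project the paired view matrices with cca and train a linear svm on the spelling - view embeddings . scikit - learn is used here as a stand - in for the exact cca solver and the sparse feature construction used in the experiments , so the classes , dense inputs and parameter values are illustrative assumptions only .

```python
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

def phrase_embeddings(spelling_features, context_features, k=20):
    # learn k-dimensional projections from paired (spelling, context) observations;
    # the spelling-view projection of a candidate phrase is used as its embedding.
    # dense feature matrices are assumed here, whereas the real views are
    # high-dimensional and sparse.
    cca = CCA(n_components=k)
    spelling_proj, _ = cca.fit_transform(spelling_features, context_features)
    return spelling_proj, cca

def train_entity_classifier(embeddings, labels, c=1.0):
    # binary svm deciding whether a candidate phrase is a named entity (labels +1/-1)
    clf = LinearSVC(C=c)
    clf.fit(embeddings, labels)
    return clf
```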
here , two decision list of rules are learned simultaneously one using the _ spelling _ view and the other using the _ context _ view .the rules using the _ spelling _ view are of the form : full - string = human immunodeficiency virus , full - string = mutant not_a_virus and so on . in the _ context _ view , we used bigram rules where we considered all possible bigrams using the _ context_. the rules are of two types : one which gives a positive label , for example , full - string = human immunodeficiency virus and the other which gives a negative label , for example , full - string = mutant not_a_virus .the dl - cotrain algorithm is as follows : input : ( _ spelling _ , _ context _ ) pairs for every instance of a candidate phrase in the corpus , specifying the number of rules to be added in every iteration , precision threshold , _ spelling _ seed examples .algorithm : 1 .initialize the _ spelling _ decision list using the _ spelling _ seed examples given in figure 1 and set . 2 .label the entire input collection using the learned decision list of _ spelling _ rules .add new _ context _ rules of each type to the decision list of _ context _ rules using the current labeled data .the rules are added using the same criterion as given in , i.e. , among the rules whose strength is greater than the precision threshold , the ones which are seen more often with the corresponding label in the input data collection are added .label the entire input collection using the learned decision list of _ context _ rules .add new spelling rules of each type to the decision list of _ spelling _ rules using the current labeled data .the rules are added using the same criterion as in step 3 .set .if rules were added in the previous iteration , return to step 2 .the algorithm is run until no new rules are left to be added .the _ spelling _ decision list along with its strength is used to construct the dictionaries .the phrases present in the _ spelling _ rules which give a positive label and whose strength is greater than the precision threshold , were added to the dictionary of named entities .we found the parameters m and difficult to tune and they could significantly affect the performance of the algorithm .we give more details regarding this in the experiments section .previously , introduced a multi - view , semi - supervised algorithm based on co - training for collecting names of people , organizations and locations .this algorithm makes a strong independence assumption about the data and employs many heuristics to greedily optimize an objective function .this greedy approach also introduces new parameters that are often difficult to tune . in other works such as and external structured resources like wikipediahave been used to construct dictionaries . even though these methods are fairly successful they suffer from a number of drawbacks especially in the biomedical domain .the main drawback of these approaches is that it is very difficult to accurately disambiguate ambiguous entities especially when the entities are abbreviations .for example , _ dm _ is the abbreviation for the disease _ diabetes mellitus _ and the disambiguation page for _ dm _ in wikipedia associates it to more than 50 categories since _ dm _ can be expanded to _ doctor of management _ , _ dichroic mirror _ , and so on , each of it belonging to a different category . 
due tothe rapid growth of wikipedia , the number of entities that have disambiguation pages is growing fast and it is increasingly difficult to retrieve the article we want . also , it is tough to understand these approaches from a theoretical standpoint . used cca to learn word embeddings and added them as features in a sequence tagger .they show that cca learns better word embeddings than cw embeddings , hierarchical log - linear ( hlbl ) embeddings and embeddings learned from many other techniques for ner and chunking .unlike pca , a widely used dimensionality reduction technique , cca is invariant to linear transformations of the data .our approach is motivated by the theoretical result in which is developed in the co - training setting .we directly use the cca embeddings to predict the label of a data point instead of using them as features in a sequence tagger . also , we learn cca embeddings for candidate phrases instead of all words in the vocabulary since named entities often contain more than one word .learn a multi - class svm using the cca word embeddings to predict the pos tag of a word type .we extend this technique to ner by learning a binary svm using the cca embeddings of a high recall , low precision list of candidate phrases to predict whether a candidate phrase is a named entity or not .in this section , we give experimental results on virus and disease ner .the noisy lists of both virus and disease names were obtained from the biomed central corpus .this corpus was also used to get the collection of ( _ spelling _ , _ context _ ) pairs which are the input to the cca procedure and the dl - cotrain algorithm described in the previous section .we obtained cca embeddings for the most frequently occurring word types in this collection along with every word type present in the training and development data of the virus and the disease ner dataset .these word embeddings are similar to the ones described in and .we used the virus annotations in the genia corpus for our experiments .the dataset contains 18,546 annotated sentences .we randomly selected 8,546 sentences for training and the remaining sentences were randomly split equally into development and testing sentences .the training sentences are used only for experiments with the sequence taggers . previously , tested their hmm - based named entity recognizer on this data . 
for disease ner, we used the recent disease corpus and used the same training , development and test data split given by them .we used a sentence segmenter to get sentence segmented data and stanford tokenizer to tokenize the data .similar to , all the different disease categories were flattened into one single category of disease mentions .the development data was used to tune the hyperparameters and the methods were evaluated on the test data .[ t ] [ cols="^,^,^,^,^,^,^ " , ] first , we compare the dictionaries compiled using different methods by using them directly in a dictionary - based tagger .this is a simple and informative way to understand the quality of the dictionaries before using them in a crf - tagger .since these taggers can be trained using a handful of training examples , we can use them to build ner systems even when there are no labeled sentences to train .the input to a dictionary tagger is a list of named entities and a sentence .if there is an exact match between a phrase in the input list to the words in the given sentence then it is tagged as a named entity .all other words are labeled as non - entities .we evaluated the performance of the following methods for building dictionaries : * * candidate list * : this dictionary contains all the candidate phrases that were obtained using the method described in section 3.1 .the noisy list of virus candidates and disease candidates had 3,100 and 60,080 entries respectively . ** manual * : manually constructed dictionaries , which requires a large amount of human effort , are employed for the task .we used the list of virus names given in wikipedia .unfortunately , abbreviations of virus names are not present in this list and we could not find any other more complete list of virus names .hence , we constructed abbreviations by concatenating the first letters of all the strings in a virus name , for every virus name given in the wikipedia list . + for diseases , we used the list of disease names given in the unified medical language system ( umls ) metathesaurus .this dictionary has been widely used in disease ner ( e.g. , dogan and lu , 2012 ; leaman et al . , 2010 ) + . 
* * co - training * : the dictionaries are constructed using the dl - cotrain algorithm described previously .the parameters used were and as given in .the phrases present in the _ spelling _ rules which give a positive label and whose strength is greater than the precision threshold , were added to the dictionary of named entities .+ + in our experiment to construct a dictionary of virus names , the algorithm stopped after just 12 iterations and hence the dictionary had only 390 virus names .this was because there were no _ spelling _ rules with strength greater than to be added .we tried varying both the parameters but in all cases , the algorithm did not progress after a few iterations .we adopted a simple heuristic to increase the coverage of virus names by using the strength of the _ spelling _ rules obtained after the iteration .all _ spelling _ rules that give a positive label and which has a strength greater than were added to the decision list of _ spelling _ rules .the phrases present in these rules are added to the dictionary .we picked the parameter from the set [ 0.1 , 0.2 , 0.3 , 0.4 , 0.5 , 0.6 , 0.7 , 0.8 , 0.9 ] using the development data .+ the co - training algorithm for constructing the dictionary of disease names ran for close to 50 iterations and hence we obtained better coverage for disease names .we still used the same heuristic of adding more named entities using the strength of the rule since it performed better . * * cca * : using the cca embeddings of the candidate phrases as features we learned a binary svm to predict whether a candidate phrase is a named entity or not .we considered using 10 to 30 dimensions of candidate phrase embeddings and the regularizer was picked from the set [ 0.0001 , 0.001 , 0.01 , 0.1 , 1 , 10 , 100 ] .both the regularizer and the number of dimensions to be used were tuned using the development data .table 1 gives the results of the dictionary based taggers using the different methods described above .as expected , when the noisy list of candidate phrases are used as dictionaries the recall of the system is quite high but the precision is very low .the low precision of the wikipedia virus lists was due to the heuristic used to obtain abbreviations which produced a few noisy abbreviations but this heuristic was crucial to get a high recall .the list of disease names from umls gives a low recall because the list does not contain many disease abbreviations and composite disease mentions such as _ breast and ovarian cancer_. 
the presence of ambiguous abbreviations affected the accuracy of this dictionary .the virus dictionary constructed using the cca embeddings was very accurate and the false positives were mainly due to ambiguous phrases , for example , in the phrase _ hiv replication _ , _ hiv _ which usually refers to the name of a virus is tagged as a rna molecule .the accuracy of the disease dictionary produced using cca embeddings was mainly affected by noisy abbreviations .we can see that the dictionaries obtained using cca embeddings perform better than the dictionaries obtained from co - training on both disease and virus ner even after improving the co - training algorithm s coverage using the heuristic described in this section .it is important to note that the dictionaries constructed using the cca embeddings and a small number of labeled examples performs competitively with dictionaries that are entirely built by domain experts .these results show that by using the cca based approach we can build ner systems that give reasonable performance even for difficult named entity types with almost no supervision .we did two sets of experiments using a crf tagger .in the first experiment , we add dictionary features to the crf tagger while in the second experiment we add the embeddings as features to the crf tagger .the same baseline model is used in both the experiments whose features are described in section 2.2 .for both the crf experiments the regularizers from the set [ 0.0001 , 0.001 , 0.01 , 0.1 , 1.0 , 10.0 ] were considered and it was tuned on the development set . here , we inject dictionary matches as features ( e.g. , ratinov and roth , 2009 ; cohen and sarawagi , 2004 ) in the crf tagger . given a dictionary of named entities , every word in the input sentencehas a dictionary feature associated with it .when there is an exact match between a phrase in the dictionary with the words in the input sentence , the dictionary feature of the first word in the named entity is set to and the dictionary feature of the remaining words in the named entity is set to .the dictionary feature of all the other words in the input sentence which are not part of any named entity in the dictionary is set to .the effectiveness of the dictionaries constructed from various methods are compared by adding dictionary match features to the crf tagger .these dictionary match features were added along with the baseline features .figure 2 indicates that the dictionary features in general are helpful to the crf model .we can see that the dictionaries produced from our approach using cca are much more helpful than the dictionaries produced from co - training especially when there are fewer labeled sentences to train .similar to the dictionary tagger experiments discussed previously , the dictionaries produced from our approach performs competitively with dictionaries that are entirely built by domain experts .the quality of the candidate phrase embeddings are compared with word embeddings by adding the embeddings as features in the crf tagger . along with the baseline features , * cca - word * model adds word embeddings as features while the * cca - phrase * model adds candidate phrase embeddings as features . *cca - word * model is similar to the one used in .we considered adding 10 , 20 , 30 , 40 and 50 dimensional word embeddings as features for every training data size and the best performing model on the development data was picked for the experiments on the test data . 
for candidate phrase embeddings we used the same number of dimensions that was used for training the svms to construct the best dictionary .when candidate phrase embeddings are obtained using cca , we do not have embeddings for words which are not in the list of candidate phrases .also , a candidate phrase having more than one word has a joint representation , i.e. , the phrase `` human immunodeficiency '' has a lower dimensional representation while the words `` human '' and `` immunodeficiency '' do not have their own lower dimensional representations ( assuming they are not part of the candidate list ) .to overcome this issue , we used a simple technique to differentiate between candidate phrases and the rest of the words .let be the highest real valued candidate phrase embedding and the candidate phrase embedding be a dimensional real valued vector .if a candidate phrase occurs in a sentence , the embeddings of that candidate phrase are added as features to the first word of that candidate phrase .if the candidate phrase has more than one word , the other words in the candidate phrase are given an embedding of dimension with each dimension having the value .all the other words are given an embedding of dimension with each dimension having the value .figure 3 shows that almost always the candidate phrase embeddings help the crf model .it is also interesting to note that sometimes the word - level embeddings have an adverse affect on the performance of the crf model .the * cca - phrase * model performs significantly better than the other two models when there are fewer labeled sentences to train and the separation of the candidate phrases from the other words seems to have helped the crf model .we described an approach for automatic construction of dictionaries for ner using minimal supervision . compared to the previous approaches ,our method is free from overly - stringent assumptions about the data , uses svd that can be solved exactly and achieves better empirical performance .our approach which uses a small number of seed examples performs competitively with dictionaries that are compiled manually .we are grateful to alexander rush , alexandre passos and the anonymous reviewers for their useful feedback .this work was supported by the intelligence advanced research projects activity ( iarpa ) via department of interior national business center ( doi / nbc ) contract number d11pc20153 . the u.s .government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon .the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements , either expressed or implied , of iarpa , doi / nbc , or the u.s . government . | this paper describes an approach for automatic construction of dictionaries for named entity recognition ( ner ) using large amounts of unlabeled data and a few seed examples . we use canonical correlation analysis ( cca ) to obtain lower dimensional embeddings ( representations ) for _ candidate phrases _ and classify these phrases using a small number of labeled examples . our method achieves 16.5% and 11.3% f-1 score improvement over co - training on disease and virus ner respectively . we also show that by adding candidate phrase embeddings as features in a sequence tagger gives better performance compared to using word embeddings . |
conventionally , wireless communication nodes operate in half duplex ( hd ) mode under which they transmit and receive signals over orthogonal frequency or time resources. recent advances , nevertheless , suggest that full duplex ( fd ) communications that allows simultaneous transmission and reception of signal over the same radio channel be possible .in addition to the immediate benefit of essentially doubling the bandwidth , full duplex communications also find applications in simultaneous wireless information and power transfer ( swipt ) .much interest has turned to full - duplex relaying in which information is sent from a source node to a destination node through an intermediate relaying node which is powered by means of wireless energy harvesting . in the literature , the studies on relay aided swipt largely considered hd relaying and adopted a time - switched relaying ( tsr ) approach .authors in considered swipt in miso multicasting systems , in considered swipt in miso broadcasting systems , and in miso secrecy systems , where the joint transmit beamforming and receive power splitting problem for minimising the transmit power of the base station ( bs ) subject to signal - to - noise ratio ( snr ) and energy harvesting constraints at the receiver was investigated .in contrast to the existing results , this paper studies the joint optimization of the two - way beamforming matrix for swipt in a multiple - input multiple - output ( mimo ) amplify - and - forward ( af ) full - duplex relay system employing a power splitter ( ps ) , where the sum rate is maximized subject to the energy harvesting and total power constraints . _notations_we use to represent a complex matrix with dimension of . also , we use to denote the conjugate transpose , while is the trace operation , and denotes the frobenius norm .in addition , returns the absolute value of a scalar , and denotes that the hermitian matrix is positive semidefinite .the expectation operator is denoted by we define as the orthogonal projection onto the column space of ; and as the orthogonal projection onto the orthogonal complement of the column space of us consider swipt in a three - node mimo relay network consisting of two sources and wanting to exchange information with the aid of an af relay , as shown in fig .[ alex ] . in our model, all the nodes are assumed to operate in fd mode , and we also assume that there is no direct link between and so communication between them must be done via . both and transmit their messages simultaneously to with transmit power and , respectively . in the broadcast phase ,the relay employs linear processing with an amplification matrix to process the received signal and broadcasts the processed signal to the nodes with the harvested power .we assume that each source node is equipped with a pair of transmitter - receiver antennas for signal transmission and reception respectively .we use m and m to denote the number of transmit and receive antennas at , respectively .we use and to , respectively , denote the directional channel vectors between the source node s transmit antenna to s receive antennas , and that between the relay s transmit antenna(s ) to source node s receive antenna .the concurrent transmission and reception of signals at the nodes produces self - interference ( si ) which inhibits the performance of a full duplex system .we consider using existing si cancellation mechanisms in the literature to mitigate the si ( e.g. 
, antenna isolation , analog and digital cancellation , and etc . ) . due to imperfect channel estimation ,however , the si can not be cancelled completely . we therefore denote and as the si channels at the corresponding nodes . for simplicity , we model the residual si ( rsi ) channel as a gaussian distribution random variable with zero mean and variance , for .we further assume that the relay is equipped with a ps device which splits the received signal power at the relay for energy harvesting , amplification and forwarding of the received signal .in particular , the received signal at the relay is split such that a portion of the received signal power at the relay is fed to the information receiver ( ir ) and the remaining portion of the power to the energy receiver ( er ) at the relay .when the source nodes transmit their signals to the relay , the af relay employs a short delay to perform linear processing .it is assumed that the processing delay at the relay is given by a duration , which denotes the processing time required to implement the full duplex operation . typically takes integer values .we assume that the delay is short enough compared to a time slot which has a large number of data symbols , and thus its effect on the achievable rate is negligible . at timeinstant the received signal ] at the relay can be , respectively , written as & = \mathbf{h}_{ar}{s_a}[n ] + \mathbf{h}_{br}{s_b}[n]+ \mathbf{h}_{rr } \mathbf{x}_r[n ] + \mathbf{n}_r[n],\label{y1}\\ \mathbf{x}_r[n ] & = \mathbf{w y}^{\rm ir}_r(n-\tau),\label{x_r}\end{aligned}\ ] ] where ] it becomes \!\!\ ! & = & \!\!\ ! \rho ( \mathbf{h^{\dagger}}_{ra } \mathbf{w } \mathbf{h}_{br } s_b [ n-\tau ] + \mathbf{h^{\dagger}}_{ra}\mathbf{wn}_r[n])\nonumber\\ \!\!\ ! & + & \!\!\ ! \mathbf{h^{\dagger}}_{ra } \mathbf{w}n_p[n ] + h_{aa}s_a[n]+ n_a[n].\label{y^{sa1}}\end{aligned}\ ] ] the received signal - to - interference - plus - noise ratio ( sinr ) at node , denoted as can be expressed as similarly , the received sinr at node can be written as the achievable rates are then given by and at nodes and , respectively .the signal split to the er at is given as + \mathbf{h}_{br}{s_b}[n]+ \mathbf{h}_{rr } \mathbf{x}_r[n ] + \mathbf{n}_r[n]),\ ] ] where denotes the energy conversion efficiency of the er at the relay which accounts for the loss in energy transducer for converting the harvested energy to electrical energy to be stored . in this paper , for simplicity , we assume .thus , the harvested energy at the relay is given by + \delta_r),\label{q1a}\ ] ] where and is the additive white gaussian noise ( awgn ) with zero mean and unit variance at the relay .note that the conventional hd relay communication system requires two phases for and to exchange information .fd relay systems on the other hand reduce the whole operation to only one phase , hence increasing the spectrum efficiency . for simplicity , we assume that the transmit power at the source nodes are intelligently selected by the sources . therefore , in this work , we do not consider optimization at the source nodes . to ensure a continuous information transfer between the two sources ,the harvested energy at the relay should be above a given threshold so that a useful level of harvested energy is reached . 
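as a small numerical sketch of the power - splitting bookkeeping just described , the snippet below accounts for the fraction of the received power that is diverted to the energy receiver at the relay ; the split convention ( the complementary fraction going to the er ) , the unit noise variance , the channel dimensions and the unit conversion efficiency are assumptions , and the expression is only a simplified stand - in for the harvested - energy formula ( [ q1a ] ) .

import numpy as np

def harvested_energy(h_ar, h_br, P_a, P_b, rho, sigma_r2=1.0, eta=1.0):
    """average power delivered to the energy receiver at the relay (assumed model)."""
    received = P_a * np.linalg.norm(h_ar) ** 2 \
             + P_b * np.linalg.norm(h_br) ** 2 + sigma_r2
    # the information receiver keeps a fraction rho of the power,
    # the energy receiver harvests the remaining (1 - rho) fraction
    return eta * (1.0 - rho) * received

rng = np.random.default_rng(0)
Mr = 3  # number of receive antennas at the relay (assumed)
h_ar = (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)
h_br = (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)

for rho in (0.2, 0.5, 0.8):
    q = harvested_energy(h_ar, h_br, P_a=1.0, P_b=1.0, rho=rho)
    print(f"rho = {rho:.1f}  harvested power ~= {q:.3f}")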
as a result , we formulate the joint relay beamforming and receive ps ratio ( ) optimization problem as a maximization problem of the sum rate .mathematically , this problem is formulated as where is the maximum transmit power at the relay and is the minimum amount of harvested energy required to maintain the relay s operation .in this section , our aim is to maximize the sum - rate of the proposed fd mimo two - way af - relaying channel .considering the fact that each source only transmits a single data stream and the network coding principle encourages mixing rather than separating the data streams from the two sources , we decompose as , where is the transmit beamforming vector and denotes the receive beamforming vector at the relay . then the zf condition is simplified to or equivalently because in general .we further assume without loss of optimality that .therefore , the optimization problem in ( [ problem ] ) can be rewritten as ( [ y6 ] ) ( see top of next page ) [ y6 ] where and observe in ( [ y6 ] ) that is mainly involved in and , so it has to balance the signals received from the sources .according to the result obtained in , can be parameterized by as where is a non - negative real - valued scaler .it should be made clear that ( [ alex1 ] ) is not a complete characterization of because it is also involved in the zf constraint but this parameterization makes the problem more tractable .thus , given we can optimize for fixed ps ratio . then perform a 1-d search to find the optimal . for given and ,the optimal receive ps ratio can be determined .firstly , using the monotonicity between sinr and the rate , ( [ y6 ] ) can be rewritten as problem ( [ rho1 ] ) is a linear - fractional programming problem , and can be converted into a linear programming problem .the receive ps ratio is determined by the equation set below : using the procedure in , the optimal can be found by we check whether the above solution satisfies the constraint ( [ rho1 ] ) . if it does , then it is the optimal solution .otherwise the energy harvesting constraint should be met with equality thus giving the optimal receive ps ratio given by here , we first study how to optimize for given and then we perform a 1-d search on to find the optimal which guarantees an optimal as defined in ( [ alex1 ] ) for the given for convenience , we define a semidefinite matrix . then problem ( [ y6 ] ) becomes where is given in ( [ fw ] ) ( see next page ) . clearly , is not a concave function , making the problem challenging . to solve ( [ fw ] ), we propose to use the difference of convex programming ( dc ) to find a local optimum point . to this end, we express as a difference of two concave functions and , i.e. , where and .note that is a concave function while is a convex function .the main idea is to approximate by a linear function .the linearization ( first - order approximation ) of around the point is given in ( [ main^_prob ] ) . then the dc programming is applied to sequentially solve the following convex problem : we solve ( [ main_prob2 ] ) by : * choosing an initial point . 
* for , solve ( [ main_prob2 ] ) until convergence .notice that in ( [ main_prob2 ] ) , we have ignored the rank1 constraint on this constraint is guaranteed to be satisfied by the results in theorem 2 in and also in when m .thus , the decomposition of leads to the optimal solution .given , the optimal receive beamforming can be obtained by performing a 1-d search on to find the maximum which maximizes for a fixed value of .see algorithm 1 .the bounds of the rate search interval are obtained as follows .the lower bound is obviously zero while the upper bound is defined as the achievable sum - rate at zero rsi . with optimal , the optimal can be obtained from ( [ alex1 ] ) .now , the original beamforming and receive ps optimization in problem ( [ y6 ] ) can be solved by an iterative technique shown in algorithm 2 .algorithm 2 continually updates the objective function in ( [ y6 ] ) until convergence .set set as numerals and ( any value ) .set obtain by considering ( [ lola ] ) .set = non - negative scaler and obtain in ( [ alex1 ] ) . at step ,set until where is the searching step size .set , * repeat * \i ) set sum - rate \ii ) obtain the optimal by solving ( [ main_prob2 ] ) .\iii ) update the value of with the bisection search method : if ( ii ) is feasible , set ; otherwise , .* until * where is a small positive number .thus we get the optimal which maximizes ( [ alex1 ] ) to give obtain by comparing .initialise * repeat * \1 ) solve ( [ rho1 ] ) to obtain optimal \2 ) solve ( [ max1 ] ) using algorithm [ algorithm1 ] to obtain this section , we evaluate the performance of the proposed algorithm through computer simulations assuming flat rayleigh fading environments . in fig .[ sumr_eupisco ] , we show the sum - rate results against versus the transmit power budget ( db ) for various harvested energy constraint . the proposed scheme ( ` joint opt ' in the figure ) is compared with those of the fixed receive beamforming vector ( ) ( ` frbv'= 0.583 ) at optimal ps coefficient ( ) . remarkably , the proposed scheme yields higher sum - rate compared to the sum - rate of the frbv schemes which essentially necessitates joint optimization .the impact of the rsi on the sum - rate is studied in fig .[ si_eupisco ] .results show that an increase in the rsi results in a corresponding decrease in the achievable sum - rate .in this paper , we investigated the joint beamforming optimization for swipt in fd mimo two - way relay channel and proposed an algorithm which maximizes the sum - rate subject to the relay transmit power and harvested energy constraints . using dc and a 1-d search , we jointly optimized the receive beamforming vector , the transmit beamforming vector , and receive ps ratio to maximize the sum - rate .simulation results confirm the importance of joint optimization .a. a. nasir , x. zhou , s. durrani , and r. a. kennedy , `` relaying protocols for wireless energy harvesting and information processing , '' _ ieee trans .wireless commun .12 , no . 7 , pp .36223636 . jul .2013 .x. ji , b. zheng , y. cai , and l. zou , `` on the study of half duplex asymmetric two way relay transmission using an amplify and forward relay , '' _ ieee trans . on vehicular technology _ ,16491664 , may . 2012 . 
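for reference , the overall flow of the iterative procedure in algorithms 1 and 2 above can be summarized by the following skeleton ; the two inner updates are placeholders ( the closed - form ps solution , the dc step and the bisection search are not reproduced here ) , and the toy objective is invented .

# skeletal form of the alternating optimization: the ps ratio and the relay
# beamforming are updated in turn until the sum rate stops improving.
# `update_rho` and `update_beamforming` stand in for the closed-form ps
# update and for the dc-programming / 1-d search step, respectively.

def update_rho(w):
    return 0.5                      # placeholder for the closed-form ps update

def update_beamforming(rho):
    return rho                      # placeholder for the dc step + 1-d search

def sum_rate(w, rho):
    return 1.0 - (w - 0.4) ** 2 - (rho - 0.6) ** 2   # toy surrogate objective

def alternate(max_iter=50, tol=1e-6):
    w, rho, prev = 0.0, 0.5, -float("inf")
    for _ in range(max_iter):
        rho = update_rho(w)          # step 1: optimal ps ratio for fixed beamforming
        w = update_beamforming(rho)  # step 2: optimal beamforming for fixed ps ratio
        cur = sum_rate(w, rho)
        if cur - prev < tol:         # stop when the objective no longer improves
            break
        prev = cur
    return w, rho, prev

print(alternate())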
| in this paper , we investigate the problem of two - relay beamforming optimization to maximize the achievable sum - rate of a simultaneous wireless information and power transfer ( swipt ) system with a full - duplex ( fd ) multiple - input multiple - output ( mimo ) amplify - and - forward ( af ) relay . in particular , we address the optimal joint design of the receiver power splitting ( ps ) ratio and the beamforming matrix at the relay given the channel state information ( csi ) . our contribution is an iterative algorithm and one - dimensional ( 1-d ) search to achieve the joint optimization . simulation results are provided to demonstrate the effectiveness of the proposed algorithm . |
in the present work a method for solving the problem of time - dependent electromagnetic wave propagation through an isotropic inhomogeneous medium is developed . several ideas concerning such mathematical notions as bicomplex and biquaternionic reformulations of electromagnetic models , hyperbolic vekua - type equations and transmutation operators from the theory of ordinary linear differential equationsare combined in our approach which results in a simple and practical representation for solutions .we observe that the 1 + 1 maxwell system for inhomogeneous media can be transformed into a hyperbolic vekua equation .this gives us the possibility to obtain the exact solution of the problem of a normally incident electromagnetic time - dependent plane wave propagated through an inhomogeneous layer in terms of a couple of darboux - associated transmutation operators .this is a new representation for a solution of the classic problem .application of the recent results on the analytic approximation of such operators allows us to write down the electromagnetic wave in an approximate analytic form which is then used for numerical computation .the numerical implementation of the proposed approach reduces to a certain recursive integration and solution of an approximation problem , and can be based on the usage of corresponding standard routines of such packages as matlab . in section [ sect2 ]we recall a biquaternionic reformulation of the maxwell system and use it to relate the 1 + 1 maxwell system for inhomogeneous media with a hyperbolic vekua equation .we establish the equivalence between the electromagnetic transmission problem and an initial - value problem for the vekua equation . in section [ sect3 ]we obtain the exact solution of the problem in terms of a couple of the transmutation operators . in section [ sect4 ] the analytic approximation of the exact solution is obtained in the case when the incident wave is a partial sum of a trigonometric series .other types of initial data ( and hence modulations ) are discussed in section [ sect5 ] .section [ sect6 ] contains several exactly solved examples used as test problems for the resulting numerical method . in section [ sect7 ]we formulate some additional conclusions .the algebraic formalism in studying electromagnetic phenomena plays an important role since maxwell s original treatise in which hamilton s quaternions were present . in his phd thesis of 1919lanczos wrote the maxwell system for a vacuum in the form of a single biquaternionic equation .this elegant form of maxwell system was rediscovered in several posterior publications , see , e.g. , . 
in ( see also ) the maxwell system describing electromagnetic phenomena in isotropic inhomogeneous media was written as a single biquaternionic equation .the maxwell system has the form here and are real - valued functions of spatial coordinates , and are real - valued vector fields depending on and spatial variables , the real - valued scalar function and vector function characterize the distribution of sources of the electromagnetic field .the following biquaternionic equation obtained in is equivalent to this system here where is the wave propagation velocity and is the the intrinsic impedance of the medium .all magnitudes in bold are understood as purely vectorial biquaternions , and the asterisk denotes the complex conjugation ( with respect to the complex imaginary unit ) .the operator is the main quaternionic differential operator introduced by hamilton himself and sometimes called the moisil - theodoresco operator .it is defined on continuously differentiable biquaternion - valued functions of the real variables , and according to the rule where and are basic quaternionic units . in with the aid of the representation of the maxwell system in the form ( [ maxwell quat ] ) it was observed that in the sourceless situation ( i.e. , and are identically zeros ) and when all the magnitudes involved are independent of two spatial coordinates , say , and , and , the maxwell system is equivalent to the following vekua - type equation where , is a hyperbolic imaginary unit , commuting with , is a bicomplex - valued function of the real variables and , and , are complex valued ( containing the imaginary unit ) .the function is real valued and depends on only .the conjugation with respect to is denoted by the bar , .the maxwell system in this case can be written in the form where , , .the relation between ( [ maxwell1 + 1 ] ) and ( [ bicompw ] ) involves the change of the independent variable .the function in ( [ bicompw ] ) is related to and by the equality where and below the tilde means that a function of is written as a function of , .the function is written in terms of and as follows let us consider the problem of a normally incident plane wave transmission through an inhomogeneous medium ( see , e.g. , ( * ? ? ?* chapter 8) ) .the electromagnetic field and is supposed to be known at , .\label{init cond}\ ] ] we assume and to be continuously differentiable functions . the problem ( [ maxwell1 + 1 ] ) , ( [ init cond ] ) can be reformulated in terms of the function ( [ relation w ] ) .find a solution of ( [ bicompw ] ) satisfying the condition where is a given continuously differentiable function .first , let us consider the elementary problem the hyperbolic cauchy - riemann system ( [ cr hyper ] ) was studied in several publications ( see , e.g. , , , and more recent ) .its general solution can be written in the form where and are arbitrary continuously differentiable scalar functions , .for the scalar components of we introduce the notations when we obtain .hence the unique solution of the cauchy problem ( [ cr hyper ] ) , ( [ initial condition w ] ) has the form where in there was established a relation between solutions of ( [ cr hyper ] ) and solutions of ( [ bicompw ] ) . 
any solution of ( [ bicompw ] )can be represented in the form + jt_{1/f}\left [ \mathcal{i}\left ( w(\xi , t)\right ) \right ] \label{w transmut w}\ ] ] where is a solution of ( [ cr hyper ] ) , and are darboux - associated transmutation operators defined in , see also and , and applied with respect to the variable .both operators have the form of volterra integral operators , with continuously differentiable kernels .moreover , the operators and preserve the value at giving additionally to ( [ w transmut w ] ) the relation .this together with ( [ elementary sol ] ) allows us to write down the unique solution of the problem ( [ bicompw ] ) , ( [ initial condition ] ) in the form + \frac{j}{2}t_{1/f}\left [ w_{0}^{+}(t+\xi)-w_{0}^{-}(t-\xi)\right ] .\label{solution initial problem}\ ] ] in what follows we use the convenience of this representation and the recent results , on the construction of the operators and .the single - wave approximation of the solution of ( [ bicompw ] ) , ( [ initial condition ] ) ( see ( * ? ? ?* subsection 8.5.2 ) ) can be obtained from the representation ( [ solution initial problem ] ) by removing integrals from the definition of and or in other words , replacing and by the identity operator .initial data interesting in practical problems correspond to modulated electromagnetic waves which are represented as partial sums of trigonometric series ( other types of initial data are discussed in the next section ) .in other words , consider initial data of the form this leads to a similar form for the initial data in ( [ initial condition ] ) , where the bicomplex numbers are related to , as follows due to ( [ solution initial problem ] ) , the unique solution of the problem ( [ bicompw ] ) , ( [ initial condition ] ) with given by ( [ w0exp ] ) can be written in the form + \gamma _ { m}^{-}t_{f}\left [ e^{-i(\omega_{0}+m\omega)\xi}\right ] \right)\right .\\ & + j\left.\sum_{m =- m}^{m}e^{i(\omega_{0}+m\omega)t}\left ( \gamma_{m}^{+}t_{1/f}\left [ e^{i(\omega_{0}+m\omega)\xi}\right ] -\gamma_{m}^{-}t_{1/f}\left [ e^{-i(\omega_{0}+m\omega)\xi}\right ] \right )\right ) \end{split}\label{wxit}\ ] ] where .although the explicit form of the operators and is usually unknown , in it was shown how their kernels can be approximated by means of generalized wave polynomials .in particular , for the images of the functions the following approximate representations are valid & \cong e^{\pm i(\omega_{0}+m\omega)\xi}+2\sum_{n=0}^{n}a_{n}\sum_{\text{even } k=0}^{n}\binom{n}{k}\varphi_{n - k}(\xi)\int_{0}^{\xi}\tau^{k}\cos(\omega_{0}+m\omega)\tau\,d\tau\\ & \quad\pm2i\sum_{n=1}^{n}b_{n}\sum_{\text{odd } k=1}^{n}\binom{n}{k}\varphi_{n - k}(\xi)\int_{0}^{\xi}\tau^{k}\sin(\omega_{0}+m\omega)\tau\,d\tau \end{split } \label{tfexp}\ ] ] and & \cong e^{\pm i(\omega_{0}+m\omega)\xi}-2\sum_{n=0}^{n}b_{n}\sum_{\text{even } k=0}^{n}\binom{n}{k}\psi_{n - k}(\xi)\int_{0}^{\xi}\tau^{k}\cos(\omega_{0}+m\omega ) \tau\,d\tau\\ & \quad\mp2i\sum_{n=1}^{n}a_{n}\sum_{\text{odd } k=1}^{n}\binom{n}{k}\psi _ { n - k}(\xi)\int_{0}^{\xi}\tau^{k}\sin(\omega_{0}+m\omega)\tau\,d\tau . \end{split } \label{t1/fexp}\ ] ] here all integrals are easily calculated in a closed form , the functions and are defined as follows .consider two sequences of recursive integrals and the two families of functions and are constructed according to the rules and finally , the coefficients and are obtained by solving an approximation problem described in . 
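a small numerical sketch of how such sequences of recursive integrals can be accumulated on a grid is given below ; the particular weights ( powers of the function f ) and the way the families of functions used in ( [ tfexp ] ) and ( [ t1/fexp ] ) are assembled from them follow the cited construction and are not restated here , so the concrete choice of weights is only illustrative .

import numpy as np

# accumulate recursive integrals of the form
#   X^(0) = 1,   X^(n)(x) = n * integral_0^x X^(n-1)(s) * weights[n](s) ds
# with a cumulative trapezoid rule; the alternation of the weights below
# (f^2 and f^-2) is an illustrative assumption.

def recursive_integrals(xi, weights, n_max):
    X = [np.ones_like(xi)]
    for n in range(1, n_max + 1):
        integrand = X[-1] * weights[n % len(weights)](xi)
        # cumulative trapezoid rule starting from 0
        cum = np.concatenate(([0.0], np.cumsum(np.diff(xi) * 0.5 *
                                                (integrand[1:] + integrand[:-1]))))
        X.append(n * cum)
    return X

xi = np.linspace(0.0, 1.0, 2001)
weights = [lambda s: (1.0 + 0.3 * s) ** 2,      # illustrative, non-vanishing f
           lambda s: (1.0 + 0.3 * s) ** (-2)]
X = recursive_integrals(xi, weights, n_max=4)
print([float(Xn[-1]) for Xn in X])              # values of X^(n) at the endpoint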
as an important feature of the representations( [ tfexp ] ) and ( [ t1/fexp ] ) in there were obtained estimates of their accuracy uniform with respect to the parameter .note that the direct evaluation of expressions and requires algebraic operations for each pair of and leading to the computation complexity for the evaluation of .the change of summation order in and allows one to evaluate with the computation complexity of .for example , consider the coefficients can be precomputed once for every in operations and the outer sum requires operations .the proposed method is not restricted exclusively to initial data which can be represented or closely approximated by .when one can efficiently calculate ( at least numerically ) the indefinite integrals the approximations of the transmutation operators can be used .one of the examples of such initial data arises in digital signal transmission . for widely used modulations like phase - shift keying or qam, the transmitted signal can be represented as \mathbf{1}_{[nf_s , ( n+1)f_s)}(t),\ ] ] where is the characteristic function of the interval , is the carrier frequency , is the symbol rate and the coefficients and encode transmitted information .other examples include gaussian rf pulses and linear frequency modulation ( also known as chirp modulation ) used for radars .we refer the reader to for further details . returning to , consider ] on can change in and the coefficients and by and respectively and change functions by .[ ex1 ] let us consider the system ( [ maxwell1 + 1 ] ) with the permittivity of the form where and are some real numbers , such that does not vanish on the interval of interest and . then .hence in this case the vekua equation ( [ bicompw ] ) has the form where the coefficient is constant , .the vekua equation ( [ bicompw ] ) is a special case of the main vekua equation ( see , ) . for its solutionsone has that satisfies the equation where and , and satisfies the equation where .the relation between and is akin to the relation between harmonic conjugate functions and can be found in , . in the case of equation ( [ vekuaconst ] )consider a solution of the equation in the form where and are arbitrary real constants .then a solution of ( [ vekuaconst ] ) such that can be chosen in the form ( see , ) this leads to the following solution of the maxwell system and or in terms of the variables and , {\mu}\sqrt{\alpha x+\beta}e^{\frac{i\alpha t}{2\sqrt{\mu}}}\left ( \frac{\sqrt{\mu}}{\alpha}a\log\frac{\alpha x+\beta } { \beta}+b\right)\ ] ] and{\mu}\sqrt{\alpha x+\beta}}\left ( \frac{\sqrt{\mu}}{\alpha}a\log\frac{\alpha x+\beta}{\beta}+b+\frac{2a\sqrt{\mu}}{\alpha}\right ) .\ ] ] thus the functions and are the exact solutions of the maxwell system with given by and with the initial conditions {\mu}\sqrt{\beta}e^{\frac{i\alpha t}{2\sqrt{\mu}}}b\quad \text{and}\quad\mathcal{h}(0,t)= -\frac{e^{\frac{i\alpha t}{2\sqrt{\mu}}}}{\sqrt[4]{\mu } \sqrt{\beta}}\left ( b+\frac{2a\sqrt{\mu}}{\alpha}\right).\ ] ] below we present the results of numerical solution of the same system by the proposed method . for the numerical experiment we considered an interval ] for both and . for the initial condition we took the sum of four terms , each of the form having , .since the expression for reduces to , we took , and obtained initial conditions .for this example the optimal was equal to 13 and the whole computation time was seconds . 
on figure[ fig : ex2 ] we present the graphs of the initial condition and the computed .the absolute errors of the computed and were less than and respectively .[ ex3 ] for this example let us consider the system with the permittivity of the form where and , are such that on the interval of interest .then .hence in we show how one can construct the transmutation operators and when , .the procedure can be easily generalized for functions of the form , .hence for values of of the form , one can explicitly construct the pair of transmutation operators and and obtain the solution of . for the numerical experiment we took , , and . for such parametersthe function is equal to and the integral kernels of the transmutation operators and are given by ( see ) we considered the interval ] for and took the gaussian pulse , as the initial condition . for such initial conditionthe expression can be evaluated in the terms of function .the approximate solution was computed using and .we used 2001 points to represent the permittivity .the formal powers were computed as it was explained in example [ ex1 ] while to evaluate the indefinite integrals , we approximated the integrands as splines and used the function from matlab .the main reason for such choice is that despite the uniform mesh was taken for both and , the resulting mesh for may not be uniform leading to rather large set of values which can be taken by and .all computations were performed in machine precision in matlab 2012 . on figure[ fig : ex3 ] we show the obtained graphs of and .the absolute errors of the computed solutions were less than and , respectively .a method for solving the problem of electromagnetic wave propagation through an inhomogeneous medium is developed .it is based on a simple transformation of the maxwell system into a hyperbolic vekua equation and on the solution of this equation by means of approximate transmutation operators . in spite of elaborate mathematical results which are behind of the proposed method ,the final representations for approximate solutions of the electromagnetic problem have a sufficiently simple form , their numerical implementation is straightforward and can use standard routines of such packages as matlab .h. campos , v. v. kravchenko , l. m. mndez , _ complete families of solutions for the dirac equation : an application of bicomplex pseudoanalytic function theory and transmutation operators _ , adv .clifford algebr . 22( 2012 ) , issue 3 , 577594 .v. v. kravchenko , _ quaternionic equation for electromagnetic fields in inhomogeneous media_. in : progress in analysis , v. 1 , eds .h. begehr , r. gilbert and m. wah wong , 361366 , world scientific , 2003 .l. a. ostrovsky , a. i. potapov , _ introduction into the theory of modulated waves _ , fizmatlit , moscow , 2003 ( in russian ) ,the revised and extended translation of _ modulated waves : theory and applications _ , johns hopkins university press , 2002 . | the time - dependent maxwell system describing electromagnetic wave propagation in inhomogeneous isotropic media in the one - dimensional case reduces to a vekua - type equation for bicomplex - valued functions of a hyperbolic variable , see . using this relation we solve the problem of the transmission through an inhomogeneous layer of a normally incident electromagnetic time - dependent plane wave . 
the solution is written in terms of a pair of darboux - associated transmutation operators , and combined with the recent results on their construction , can be used for efficient computation of the transmitted modulated signals . we develop the corresponding numerical method and illustrate its performance with examples . |
quantum machine learning aims at merging the methods from quantum information processing and pattern recognition to provide new solutions for problems in the areas of pattern recognition and image understanding . in the first respect , the research in this area is focused on the application of the methods of quantum information processing for solving problems related to classification and clustering . one of the possible directions in this field is to provide a representation of computational models using quantum mechanical concepts . from the other perspective , the methods for classification developed in computer engineering are used to find solutions for problems like quantum state discrimination , which are tightly connected with the recent developments in quantum cryptography . + using quantum states for the purpose of representing patterns is naturally motivated by the possibility of exploiting quantum algorithms to speed up the computationally intensive parts of the classification process . in particular , it has been demonstrated that quantum algorithms can be used to improve the time complexity of the neighbor _ _ ( ) method . using the algorithms presented in it is possible to obtain polynomial reductions in query complexity in comparison to the corresponding classical algorithm . another motivation comes from the possibility of using quantum - inspired algorithms for the purpose of solving classical problems . such an approach has been exploited by various authors . in the authors propose an extension of gaussian mixture models from the statistical mechanics point of view . in their approach the probability density functions of conventional gaussian mixture models are expressed by using density matrix representations . on the other hand , in the authors utilize the quantum representation of images to construct measurements used for classification . such an approach might be particularly useful for the physical implementation of the classification procedure on quantum machines . in the last few years , many efforts to apply the quantum formalism to non - microscopic contexts and to signal processing have been made . moreover , some attempts to connect quantum information to pattern recognition can be found in . an exhaustive survey and bibliography of the developments concerning applications of quantum computing in computational intelligence are provided in . even if these results seem to suggest some possible computational advantages of an approach of this sort , an extensive and universally recognized treatment of the topic is still missing . the main contribution of our work is the introduction of a new framework to encode the classification process by means of the mathematical language of density matrices . we show that this representation leads to two different developments : it enables us to provide a representation of the _ nearest mean classifier _ ( nmc ) in terms of quantum objects ; and it can be used to introduce a _ quantum classifier _ ( qc ) that can provide a significant improvement in performance on a classical computer with respect to the nmc . the paper is organized as follows . in section [ sec : pre ] basic notions of quantum information and pattern recognition are introduced . in section [ sec : pdp ] we formalize the correspondence between arbitrary two - feature patterns and pure density operators and we define the notion of _ density pattern _ .
in section [ sec : cp ] we provide a representation of the nmc by using density patterns and by introducing an _ ad hoc _ definition of distance between quantum states . section [ qcp ] is devoted to describing a new quantum classifier ( qc ) that has no classical counterpart in the standard classification process ; numerical simulations for both the qc and the nmc are presented . in section [ sec : general ] a geometrical idea to generalize the model to arbitrary -feature patterns is proposed . finally , in section [ sec : concl ] concluding remarks and further developments are discussed . in standard quantum information theory , the states of physical systems are described by unit vectors and their evolution is expressed in terms of unitary matrices ( _ i.e. _ quantum gates ) . however , this representation applies to an ideal case only , because it does not take into account some unavoidable physical phenomena , such as interactions with the environment and irreversible transformations . in modern quantum information theory , another approach is adopted : the states of physical systems are described by density operators also called _ mixed states _ and their evolution is described by quantum operations . the space of density operators for an -dimensional system consists of positive semidefinite matrices with unit trace . a quantum state can be _ pure _ or _ mixed _ . we say that a state of a physical system is pure if it represents `` maximal '' information about the system , _ i.e. _ information that can not be improved by further observations . a probabilistic mixture of pure states is said to be a _ mixed _ state . generally , both pure and mixed states are represented by density operators , that is , positive hermitian operators ( with unit trace ) living in a -dimensional complex hilbert space . formally , a density operator is pure iff and it is mixed iff . + if we confine ourselves to the -dimensional hilbert space , a suitable representation of an arbitrary density operator is provided by where are the pauli matrices . this expression turns out to be useful in order to provide a geometrical representation of . indeed , each density operator can be geometrically represented as a point of a radius - one sphere centered in the origin ( the so - called _ bloch sphere _ ) , whose coordinates ( _ i.e. _ _ pauli components _ ) are ( with ) . by using the generalized pauli matrices it is also possible to provide a geometrical representation for an arbitrary -dimensional density operator , as will be shown in section [ sec : general ] . again , restricting to a -dimensional hilbert space , the points on the surface of the bloch sphere represent pure states , while the inner points represent mixed states . the quantum formalism turns out to be very useful not only in the microscopic scenario but also to encode classical data . this has naturally suggested several attempts to represent the standard framework of machine learning through the quantum formalism . in particular , pattern recognition is the scientific discipline which deals with theories and methodologies for designing algorithms and machines capable of automatically recognizing `` objects '' ( i.e. patterns ) in noisy environments . some typical applications are multimedia document classification , remote - sensing image classification , and people identification using biometric traits such as fingerprints . a pattern is a representation of an object . the object could be concrete ( i.e.
, an animal , and the pattern recognition task could be to identify the kind of animal ) or an abstract one ( i.e. a facial expression , and the task could be to identify the emotion expressed by the facial expression ) .the pattern is characterized via a set of measurements called _ _features__. features can assume the forms of categories , structures , names , graphs , or , most commonly , a vector of real number ( feature vector ) . intuitively , a class is the set of all similar patterns . for the sake of simplicity , and without loss of generality , we assume that each object belongs to one and only one class , and we will limit our attention to 2-class problems .for example , in the domain of ` cats and dogs ' we can consider the classes ( the class of all cats ) and ( the class of all dogs ) . the pattern at hand is either a cat or a dog , and a possible representation of the pattern could consist in the height of the pet and the length of its tail . in this way ,the feature vector is the pattern representing a pet whose height and length of the tail are and respectively .+ now , let us consider an object whose membership class is unknown .the basic aim of the classification process is to establish which class belongs to . to reach this goal ,standard pattern recognition designs a _ classifier _ that , given the feature vector , has to determine the true class of the pattern .the classifier should take into account all the available information about the task at hand ( i.e. , information about the statistical distributions of the patterns and information obtained from a set of patterns whose true class is known ) .this set of patterns is called ` training set ' , and it will be used to define the behavior of the classifier . + if no information about the statistical distributions of the patterns is available , an easy classification algorithm that could be used is the _nearest mean classifier _ ( nmc ) , or minimum distance classifier .the nmc * computes the centroids of each class , using the patterns on the training set where is the number of patterns of the training set belonging to the class ; * assigns the unknown pattern to the class with the closest centroid . in the next section we provide a representation of arbitrary 2d patterns by means of density matrices , while in section [ sec : cp ] we introduce a representation of nmc in terms of quantum objects .let be a generic pattern , _i.e. _ a point in . by means of this representation ,we consider all the features of as perfectly known .therefore , represents a maximal kind of information , and its natural quantum counterpart is provided by a pure state . 
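before moving to the quantum encoding , a compact sketch of the nearest mean classifier recalled above ( class centroids estimated on the training set , assignment to the closest centroid ) may be useful ; the two - class toy data below are invented .

import numpy as np

# nearest mean classifier: estimate one centroid per class on the training
# set, then assign each test pattern to the class of the closest centroid.

def nmc_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nmc_predict(centroids, X):
    labels = list(centroids)
    D = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
    return np.array(labels)[D.argmin(axis=1)]

# toy two-class data (invented)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([3, 3], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centroids = nmc_fit(X, y)
print(nmc_predict(centroids, np.array([[0.5, 0.2], [2.8, 3.1]])))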
for the sake of simplicity , we will confine ourselves to an arbitrary two - feature pattern indicated by in this section , a one - to - one correspondence between each pattern and its corresponding pure density operator is provided . + the pattern can be represented as a point in . the stereographic projection allows one to unequivocally map any point of the surface of a radius - one sphere ( except for the north pole ) onto a point of as the inverse of the stereographic projection is given by therefore , by using the bloch representation given by eq . ( [ bl ] ) and placing we obtain the following definition . [ dp ] given an arbitrary pattern , the density pattern ( dp ) associated to is the following pure density operator it is easy to check that . hence , always represents a pure state for any value of the features and . + following the standard definition of the bloch sphere , it can be verified that where and are pauli matrices . let us consider the pattern . the corresponding reads the introduction of the density pattern leads to two different developments . the first is shown in the next section and consists in the representation of the nmc in quantum terms . moreover , in section [ qcp ] , starting from the framework of density patterns , it will be possible to introduce a quantum classifier that exhibits better performance than the nmc . as introduced in section [ sec : pre ] , the nmc is based on the computation of the minimum euclidean distance between the pattern to be classified and the centroids of each class . in the previous section , a quantum counterpart of an arbitrary `` classical '' pattern was provided . in order to obtain a quantum counterpart of the standard classification process , we need to provide a suitable definition of distance between dps . in addition to satisfying the standard metric conditions , the distance also needs to satisfy the _ preservation of the order _ : given three arbitrary patterns such that , if are the dps related to respectively , then in order to fulfill all the previous conditions , we obtain the following definition . [ ntr ] the normalized trace distance between two arbitrary density patterns and is given by the formula where is the standard trace distance , , with representing the eigenvalues of , and is a normalization factor given by , with and representing the third pauli components of and , respectively . [ ntd ] given two arbitrary patterns and and their respective density patterns and , we have that it can be verified that the eigenvalues of the matrix are given by using the definition of trace distance , we have by applying formula ( [ r123 ] ) to both and , we obtain that using proposition [ ntd ] , one can see that the normalized trace distance satisfies the standard metric conditions and the preservation of the order . due to the computational advantage of a quantum algorithm able to calculate the euclidean distance faster , the equivalence between the normalized trace distance and the euclidean distance turns out to be potentially beneficial for the classification process we are going to introduce . let us now consider two classes , and , and the respective centroids and do not represent true centroids , but centroids estimated on the training set . ] and the classification process based on the nmc consists of finding the space regions given by the points closest to the first centroid or to the second centroid .
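a minimal numerical sketch of the construction above is the following ; the inverse stereographic projection formulas are the standard ones and are assumed to coincide with those of eq . ( [ sp1 ] ) , and only the plain ( unnormalized ) trace distance is computed , so the printed values illustrate the objects involved rather than reproduce definition [ ntr ] .

import numpy as np

# density pattern of a two-feature pattern (x1, x2): the point is mapped onto
# the bloch sphere with the (assumed standard) inverse stereographic
# projection and the 2x2 pure density matrix is built from the pauli matrices.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(x1, x2):
    d = x1 ** 2 + x2 ** 2 + 1.0
    return np.array([2 * x1 / d, 2 * x2 / d, (x1 ** 2 + x2 ** 2 - 1.0) / d])

def density_pattern(x1, x2):
    r = bloch_vector(x1, x2)
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

def trace_distance(rho, tau):
    eigs = np.linalg.eigvalsh(rho - tau)
    return 0.5 * np.abs(eigs).sum()

a, b = (1.0, 2.0), (-0.5, 0.7)
rho_a, rho_b = density_pattern(*a), density_pattern(*b)
print("purity check:", np.trace(rho_a @ rho_a).real)      # ~1 for a pure state
print("trace distance:", trace_distance(rho_a, rho_b))
print("euclidean distance:", np.hypot(a[0] - b[0], a[1] - b[1]))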
the patterns belonging to the first region are assigned to the class , while patterns belonging to the second region are assigned to the class . the points equidistant from both centroids represent the _ discriminant function _ ( df ) , given by thus , an arbitrary pattern is assigned to the class ( or ) if ( or ) . + let us notice that eq . ( [ df ] ) is obtained by imposing the equality between the euclidean distances and . similarly , we obtain the quantum counterpart of the classical discriminant function . let and be the dps related to the centroids and , respectively . then , the _ quantum discriminant function ( qdf ) _ is defined as where * , * , are pauli components of and respectively , * * in order to find the , we use the equality between the normalized trace distances and , where is a generic dp with pauli components , , . we have the equality reads in view of the fact that , and are pure states , we use the conditions and we get this completes the proof . + similarly to the classical case , we assign the dp to the class ( or ) if ( or ) . geometrically , eq . ( [ qdf ] ) represents the surface equidistant from the dps and . + let us remark that , if we express the pauli components , and in terms of the classical features by eq . ( [ r123 ] ) , then eq . ( [ qdf ] ) exactly corresponds to eq . ( [ df ] ) . as a consequence , given an arbitrary pattern , if ( or ) then its relative dp will satisfy ( or , respectively ) . + the comparison between the classical and quantum discrimination functions for the _ moon _ dataset is presented in fig . [ fig : discr - compare ] . the plots in figs . [ fig : class - moon ] and [ fig : quant - moon ] present the classical and quantum discrimination , respectively . it is worth noting that the correspondence between a pattern expressed as a feature vector ( according to the standard pattern recognition approach ) and a pattern expressed as a density operator is quite general . indeed , it is not related to a particular classification algorithm ( the nmc , in the previous case ) nor to the specific metric at hand ( the euclidean one ) . therefore , it is possible to develop a similar correspondence by using other kinds of metrics and/or classification algorithms , different from the nmc , adopting exactly the same approach . this result suggests potential developments which consist in finding a quantum algorithm able to implement the normalized trace distance between density patterns ; this would amount to implementing the nmc on a quantum computer , with the consequent well - known advantages . the next section is devoted to exploring another development , which consists in using the framework of density patterns in order to introduce a purely quantum classification process ( without any classical counterpart ) that is more convenient than the nmc on a classical computer . in section [ sec : cp ] we have shown that the nmc can be expressed by means of the quantum formalism , where each pattern is replaced by a corresponding density pattern and the euclidean distance is replaced by the normalized trace distance . representing classical data in terms of quantum objects seems to be particularly promising in quantum machine learning . quoting lloyd et al. ,
`` estimating distances between vectors in -dimensional vector spaces takes time on a quantum computer . sampling and estimating distances between vectors on a classical computer is apparently exponentially hard '' . this convenience was already exploited in machine learning for similar tasks . hence , finding a quantum algorithm for pattern classification using our proposed encoding could be particularly beneficial to speed up the classification process , and it suggests interesting developments that , however , are beyond the scope of this paper . what we propose in this section is to present some illustrative examples to show how , on a classical computer , our formulation can lead to meaningful improvements with respect to the standard nmc . we also show that these improvements could be further enhanced by combining classical and quantum procedures . in order to get a real advantage in the classification process we can not confine ourselves to a pure representation of the classical procedure in quantum terms . for this reason , we introduce a purely quantum representation in which we consider a new definition of centroid . the basic idea is to define a _ quantum centroid _ not as the stereographic projection of the classical centroid , but as a convex combination of density patterns . * ( quantum centroid ) * given a dataset with + let us consider the respective set of density patterns the quantum centroid is defined as : generally , is a mixed state that has no intuitive counterpart in the standard representation of pattern recognition , but it turns out to be convenient in the classification process . indeed , the quantum centroid includes some further information that the classical centroid generally discards . in fact , the classical centroid does not involve all the information about the distribution of a given dataset , _ i.e. _ the classical centroid is invariant under uniform scaling transformations of the data . consequently , the classical centroid does not take into account any dispersion phenomena ; standard pattern recognition compensates for this lack by involving the covariance matrix . + on the other hand , the quantum centroid is not invariant under uniform scaling .
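a minimal sketch of the quantum centroid and of the resulting decision rule is given below ; the density - pattern and trace - distance helpers from the previous sketch are repeated so the block runs on its own , the uniform weighting of the convex combination is assumed , and the toy training points are invented .

import numpy as np

# each training point becomes a pure density pattern, a class centroid is the
# uniform mixture of those density patterns, and a new point is assigned to
# the class whose quantum centroid is closest in trace distance.

def density_pattern(x1, x2):
    d = x1 ** 2 + x2 ** 2 + 1.0
    r1, r2, r3 = 2 * x1 / d, 2 * x2 / d, (x1 ** 2 + x2 ** 2 - 1.0) / d
    return 0.5 * np.array([[1 + r3, r1 - 1j * r2], [r1 + 1j * r2, 1 - r3]])

def trace_distance(rho, tau):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - tau)).sum()

def quantum_centroid(points):
    return np.mean([density_pattern(*p) for p in points], axis=0)

def qc_predict(centroids, point):
    rho = density_pattern(*point)
    return min(centroids, key=lambda c: trace_distance(rho, centroids[c]))

class_a = [(0.1, 0.2), (0.3, -0.1), (0.0, 0.0)]     # toy training points
class_b = [(2.1, 2.0), (1.8, 2.4), (2.5, 1.9)]
centroids = {"a": quantum_centroid(class_a), "b": quantum_centroid(class_b)}
print(qc_predict(centroids, (0.2, 0.1)), qc_predict(centroids, (2.0, 2.2)))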
let us consider the set of points where and let be the respective classical centroid . a uniform rescaling of the points of the dataset corresponds to moving each point along the line joining it with , whose generic expression is given by : let be a generic point on this line . obviously , a uniform rescaling of by a real factor is represented by the map : even if the classical centroid does not depend on the rescaling factor , the expression of the quantum centroid is : which , clearly , is dependent on . according to the same framework used in section [ sec : cp ] , given two classes and of real data , let and be the respective quantum centroids . given a pattern and its respective density pattern , is assigned to the class ( or ) if ( or , respectively ) . let us remark that we do not need any normalization parameter to be added to the trace distance , because the exact correspondence with the euclidean distance is no longer a necessary requirement in this framework . from now on we refer to the classification process based on density patterns , quantum centroids and trace distances as the _ quantum classifier _ ( qc ) . we have shown that the quantum centroid is not independent of the dispersion of the patterns and , intuitively , it could contain some additional information with respect to the classical centroid . consequently , it is reasonable to expect that the qc could provide better performance than the nmc . the next subsection will be devoted to exploiting this advantage by means of numerical simulations on different datasets . + before presenting the experimental results , let us briefly recall what the `` convenience '' of one classification process over another consists in . in order to evaluate the performance of a supervised learning algorithm , for each class it is typical to refer to the respective confusion matrix . it is based on four possible kinds of outcome after the classification of a certain pattern : * true positive ( tp ) : pattern correctly assigned to its class ; * true negative ( tn ) : pattern correctly assigned to another class ; * false positive ( fp ) : pattern incorrectly assigned to its class ; * false negative ( fn ) : pattern incorrectly assigned to another class . according to the above , it is possible to recall the following definitions able to evaluate the performance of an algorithm with tp . similarly for tn , fp and fn . ] . + true positive rate ( tpr ) , or sensitivity or recall : ; false positive rate ( fpr ) : ; true negative rate ( tnr ) : ; false negative rate ( fnr ) : . + let us consider a dataset of elements allocated in different classes . we also recall the following basic statistical notions : * error : * accuracy : * precision : further , another statistical index that is very useful to indicate the reliability of a classification process is given by cohen 's , that is , where and . the value of is such that , where the case corresponds to a perfect classification procedure .
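for concreteness , the confusion - matrix indexes just recalled can be computed as follows ; since several symbols are omitted in the text , the standard two - class definitions of error , accuracy , precision and cohen 's kappa are used here .

import numpy as np

# confusion-matrix bookkeeping for a two-class problem: error, accuracy,
# precision and cohen's kappa computed from true vs. predicted labels.

def scores(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    pr = tp / (tp + fp) if tp + fp else 0.0
    # cohen's kappa: observed agreement vs. agreement expected by chance
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - pe) / (1 - pe) if pe < 1 else 1.0
    return {"error": 1 - acc, "accuracy": acc, "precision": pr, "kappa": kappa}

print(scores([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))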
in this subsection we implement the qc on different datasets and we show the difference between the qc and the nmc in terms of the values of error , accuracy , precision and the other statistical indexes summarized above . + we will show how our quantum classification procedure offers an advantage over the nmc on a classical computer by using different datasets . + we refer to the following very popular two - feature datasets , extracted from common machine learning repositories : the _ gaussian _ and the _ moon _ datasets , composed of patterns allocated in two different classes , the _ banana _ dataset , composed of patterns allocated in two classes , and the _ 3classgaussian _ , composed of patterns allocated in three classes . this dataset consists of patterns allocated in two classes ( with equal size ) , following gaussian distributions whose means are , and covariance matrices are , , respectively . + as depicted in figure 2 , the classes appear particularly mixed and the qc is able to classify a number of true positive patterns that is significantly larger than the nmc . hence , the error of the qc is ( about ) smaller than the error of the nmc . in particular , the qc turns out to be strongly beneficial in the classification of the patterns of the second class . further , also the values related to accuracy , precision and the other statistical indexes exhibit relevant improvements with respect to the nmc . + on the other hand , there are some patterns correctly classified by the nmc which are neglected by the qc . on this basis , exploiting their complementarity , it makes sense to consider a combination of both classifiers . the so - called _ oracle _ is a hypothetical selection rule that , for each pattern , is able to select the most appropriate classifier . its aim is to show the potential of an ensemble of classifiers ( in this case , the qc and the nmc ) if we were able to select the most appropriate classifier depending on the test pattern . fig . 2(d ) shows the effect of the oracle , whose performances are summarized in table 1 . these performances represent the theoretical upper bound of the ensemble composed of the qc and the nmc . + we denote the variables listed in the tables as follows : e = error ; ei = error on the class i ; ac = accuracy ; pr = precision ; k = cohen 's k ; tpr = true positive rate ; fpr = false positive rate ; tnr = true negative rate ; fnr = false negative rate . let us remark that : _ i _ ) the values listed in the table refer to the mean values over the classes ; _ ii _ ) in case the number of classes is equal to , is , and .

            e       e1      e2      pr      k       tpr     fpr
  nmc       0.445   0.41    0.48    0.555   0.11    0.555   0.445
  qc        0.24    0.28    0.2     0.762   0.52    0.76    0.24
  nmc - qc  0.13    0.14    0.12    0.87    0.74    0.87    0.13

this dataset consists of patterns equally distributed in two classes . in this case , the correctly classified patterns of the first class are exactly the same for both classifiers , but the qc turns out to be beneficial in the classification of the second class . + differently from the gaussian dataset , for this dataset the patterns correctly classified by the nmc are a proper subset of the ones correctly classified by the qc . on this basis , the qc is fully convenient with respect to the nmc and a combination of the two classifiers is useless .
the corresponding results for the _ moon _ dataset are reported in table [ gd ] . even if the previous examples have shown how the qc can be particularly beneficial with respect to the nmc , according to the well - known _ no free lunch theorem _ there is no classifier whose performance is better than the others for every dataset . this paper is focused on the comparison between the nmc and the qc because these methods are exclusively based on the pattern - centroid distance . however , a wider comparison between the qc and other commonly used classifiers ( such as the lda - linear discriminant analysis - and the qda - quadratic discriminant analysis - ) is left for future work , where also other quantum metrics ( such as the fidelity , the bures distance , etc . ) instead of the trace distance will be considered in order to provide an adaptive version of the quantum classifier . in section [ sec : pdp ] we provided a representation of an arbitrary two - feature pattern in terms of a point on the surface of the bloch sphere , _ i.e. _ a density operator . a geometrical extension of this model to the case of -feature patterns , inspired by the quantum framework , is possible . + in this section we introduce a method for representing an arbitrary -dimensional real pattern as a point in the radius - one hypersphere centered in the origin . a quantum system described by a density operator in an -dimensional hilbert space can be represented by a linear combination of the -dimensional identity i and -square matrices ( _ i.e. _ _ generalized pauli matrices _ ) : where the real numbers are the pauli components of . hence , by eq . ( [ rn ] ) , a density operator acting on an -dimensional hilbert space can be geometrically represented as a -dimensional point in the bloch hypersphere , with . therefore , by using the generalization of the stereographic projection we obtain the vector , that is the correspondent of in . in fact , the generalization of eqs . ( [ sp])([sp1 ] ) is given by hence , by eq . ( [ gsp ] ) , a -dimensional density matrix is determined by three pauli components and it can be mapped onto a -dimensional real vector . analogously , a -dimensional density matrix is determined by eight pauli components and it can be mapped onto a real vector . generally , an -dimensional density matrix is determined by pauli components and it can be mapped onto an -dimensional real vector . now , let us consider an arbitrary vector with . in this case eq . ( [ gsp1 ] ) can not be applied , because in order to represent in an -dimensional hilbert space it is sufficient to involve only pauli components ( instead of all the pauli components of the -dimensional space ) . hence , we need to project the bloch hypersphere onto the hypersphere . we perform this projection by using eq . ( [ gsp1 ] ) and by assigning fixed values to a number of pauli components equal to . in this way , we obtain a representation in that involves pauli components and finally allows the representation of an -dimensional real vector . let us consider a vector ; by eq . ( [ gsp1 ] ) we can map onto a vector hence , we need to consider a -dimensional hilbert space . then , an arbitrary density operator can be written as with pauli components such that and generalized pauli matrices .
in this case is the set of eight matrices also known as the _ gell - mann matrices _ , namely consequently , the generic form of a density operator in the -dimensional hilbert space is given by then , for any it is possible to associate an -dimensional bloch vector . however , by taking for we obtain that , by eq . ( [ gsp1 ] ) , can be seen as a point projected in where the generalization introduced above allows the representation of arbitrary patterns as points also the classification procedure introduced in section [ sec : cp ] can be naturally extended to an arbitrary -feature pattern , where the normalized trace distance between two dps and can be expressed using eq . ( [ gsp ] ) in terms of the respective pauli components as ^ 2}}{(1-r_{a_{n+1}})(1-r_{b_{n+1}})}.\ ] ] analogously , the qc can also be naturally extended to a -dimensional problem ( without loss of generality ) by introducing a -dimensional quantum centroid . in this work a quantum representation of the standard objects used in pattern recognition has been provided . in particular , we have introduced a one - to - one correspondence between two - feature patterns and pure density operators by using the concept of _ density patterns _ . starting from this representation , we have first described the nmc in terms of quantum objects by introducing an _ ad hoc _ definition of _ normalized trace distance _ . we have found a quantum version of the discrimination function by means of pauli components . the equation of this surface was obtained by using the normalized trace distance between density patterns and , geometrically , it corresponds to a surface that intersects the bloch sphere . this result could be considered potentially useful because it suggests looking for an appropriate quantum algorithm able to implement the normalized trace distance between density patterns . in this way , we could obtain a replacement of the nmc on a quantum computer , with a consequent significant reduction of the computational complexity of the process . secondly , the definition of a _ quantum centroid _ that has no classical counterpart makes it possible to introduce a purely quantum classifier . the convenience of using this new quantum centroid lies in the fact that it seems to contain some additional information with respect to the classical one , because the former also takes into account the distribution of the patterns . the main practical result of the paper consists in showing how the quantum classifier achieves a meaningful reduction of the error and an improvement of the accuracy , precision and other statistical parameters of the algorithm with respect to the nmc . further developments will be devoted to comparing our quantum classifier with other kinds of commonly used classical classifiers . finally , we have presented a generalization of our model that allows us to express arbitrary -feature patterns as points on the hypersphere , obtained by using the generalized stereographic projection . however , even if it is possible to associate points of a -hypersphere to -feature patterns , those points do not generally represent density operators .
previous studies identified conditions that guarantee a one-to-one correspondence between points on particular regions of the hypersphere and density matrices . a full development of our work is therefore intimately connected to the study of the geometrical properties of the generalized bloch sphere . this work has been partly supported by the project `` computational quantum structures at the service of pattern recognition : modeling uncertainty '' [ crp-59872 ] funded by regione autonoma della sardegna , l.r . 7/2007 ( 2012 ) . d. aerts , b. dhooghe . classical logical versus quantum conceptual thought : examples in economics , decision theory and concept theory . _ lecture notes in comput . sci . _ , * 5494 * : 128 - 142 . springer , berlin ( 2009 ) . | we introduce a framework suitable for describing pattern recognition tasks using the mathematical language of density matrices . in particular , we provide a one-to-one correspondence between patterns and pure density operators . this correspondence enables us to : ( i ) represent the nearest mean classifier ( nmc ) in terms of quantum objects , and ( ii ) introduce a quantum classifier ( qc ) . by comparing the qc with the nmc on different 2d datasets , we show that the first classifier can provide additional information that is particularly beneficial on a classical computer with respect to the second classifier .
the recent euro - zone crisis remind us that ` risk analysis ' is always an essential part of the theory of portfolio management .markowitz(1952 ) , first proposed volatility ( or standard deviation ) as the risk measure .as volatility provides an idea about average loss ( or gain ) of portfolio ; more new risk measures like ` _ _ value at risk _ _ ' ( var ) and ` _ _ expected short fall _ _ ' ( esf ) for extreme losses are also popular .although basel ii regulatory framework requires inclusion of var and esf , till date volatility plays an important role in finance .for example , volatility can be traded directly on the open market through _ exchange traded fund _ of vix index and indirectly through derivatives .it is essential to measure the volatility and identify the main sources of volatility. often the number of financial instruments of well diversified mutual funds or pension funds are more than thousands .such funds invest in foreign countries on a regular basis ; sometimes in more than fifty to sixty different countries . however , portfolio managers are concerned about the stationarity in long time - series data and interested about only recent volatility which consider daily returns of a month and sometimes even less . as the number of sectors , or countries , or components , of a portfolio is greater than the number of days of return , the rank of the portfolio covariance matrix is less than full ; such cases yield non - unique solutions .generally it is known as the ill - posed " problem . in this paper, we mainly focus on estimating and analysing the sources of portfolio volatility under ill - posed " condition .we address the mean - variance optimization problem which admits an analytical solution for the optimal weights of a given portfolio .this optimization procedure requires an estimate of the portfolio covariance matrix .but due to the ill - posed " structure of the problem , regular sample covariance would not work here .therefore , we propose a regularized plug - in bayes estimator for the portfolio covariance matrix and use the optimized weights for the given portfolio . using this setupwe evaluate its out - sample performance using a monte carlo algorithm . in a 3-series paper , ledoit and wolf ( 2003 , 2004a, 2004b) showed the use of shrinkage estimator on actual stock market data keeping all other steps of optimization process the same . by doingso they reduced the tracking error relative to a ad - hoc index . as a resultit substantially increased the realized information ratio of active portfolio managers .ledoit and wolf ( 2004a ) suggested a distribution - free approach to regularizing the covariance matrix , in this paper however we impose a probability structure on the covariance matrix for obvious reasons .the proposed regularized plug - in bayes estimator in this paper bears a direct relationship with the empirical bayes estimator suggested by ledoit and wolf ( 2004a ) , for the covariance matrix . also , golsnoy and okhrin ( 2007) showed the improvement of portfolio selection by using the multivariate shrinkage estimator for the optimal portfolio weights .recently , das and dey ( 2010 ) introduced some bayesian properties of multivariate gamma distribution for covariance matrix . in this paperwe use this bayesian approach to regularize the estimation problem . 
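as a concrete illustration of the `` ill-posed '' issue and of the role of regularization , the sketch below builds a singular sample covariance from fewer daily returns than assets , shrinks it toward a scaled identity , and plugs the result into the closed-form minimum-variance weights ( the simplest instance of the mean-variance solution ) . the synthetic returns , the fixed shrinkage intensity and the function names are illustrative assumptions ; in the paper the amount of shrinkage follows from the posterior distribution developed later , rather than from an arbitrary constant .

```python
import numpy as np

def min_variance_weights(cov):
    """closed-form minimum-variance weights w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
p, n = 60, 21                                  # more assets than daily returns
returns = rng.normal(0.0005, 0.01, size=(n, p))
S = np.cov(returns, rowvar=False)              # rank <= n - 1 < p: the "ill-posed" case
print(np.linalg.matrix_rank(S))                # 20, so S alone cannot be inverted

mu = np.trace(S) / p                           # shrinkage target: mu * identity
rho = 0.5                                      # illustrative shrinkage intensity
sigma_reg = rho * mu * np.eye(p) + (1.0 - rho) * S
w = min_variance_weights(sigma_reg)            # well-defined thanks to regularization
print(round(w.sum(), 6), round(float(w @ sigma_reg @ w), 8))
```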
under certain conditions ,the posterior distribution of the portfolio covariance matrix is proper and has a closed form inverted multivariate gamma distribution .consequently , the solution of covariance matrix is unique .the rest of the article is organized as follows . in section [ postcov ], we discuss the posterior distribution of portfolio covariance matrix and the condition under which it is proper . in section [ riskana ], we present a parallelizable monte carlo algorithm to obtain posterior inference on the risk . in section [ empstudy ] ,we demonstrate the method with two empirical data sets .the methodology of inference is applied initially to a small dataset , consisting of different asset classes .next we consider a portfolio consisting of the stocks from national stock exchange of india " ( nsei ) .section [ concl ] concludes the paper .suppose , is a real symmetric sample portfolio covariance matrix of order with variables . the corresponding population portfolio covariance matrix , such that for a diagonal matrix with diagonal elements and , has non - positive off - diagonal elements . hence due to bapat s condition , bapat ( 1989 ) , with characteristic function as =|i_p - i\beta \sigma t|^{-\alpha},\ ] ] has the density function has an infinitely divisible multivariate gamma distribution with parameters , and a positive definite matrix .note that if , has a degenerate distribution .if we choose and then follows a wishart distribution , i.e. , ( see anderson , ( 1984 ) , pp 252 ) . if , then is less than full rank , and the sampling distribution of is degenerate and that is no valid statistical inference can be implemented for such cases .das and dey ( 2010 ) showed that if has prior as inverted multivariate gamma distribution , i.e. , if then the posterior distribution of is note that as long as , the posterior distribution is proper .suppose , i.e. , , where ; then the sampling distribution of is degenerate .however , if we choose the prior degrees of freedom parameter to be such that that is then the posterior distribution of is proper .hence , we will be able to carry out bayesian inference .if we choose and then the prior of is inverse wishart distribution , and posterior distribution of is for details see anderson ( 1984 ) .the 3-series paper of ledoit and wolf ( 2003 , 2004a , 2004b ) , established the reasons for the sample covariance matrix failing to provide a good estimate for the portfolio covariance structure , and showed the need for regularization even when the problem is not ill - posed " . however , there was a lack of scope for implementing inferential procedures on their structural framework , regarding quantities like _ conditional contribution to total risk_. the main reason behind this , being the problem of assigning a suitable model for the anticipated return distribution . in this paperan assumption for the covariance matrix , has an underlying assumption of normal / normal - component - mixture distribution on the anticipated returns .the justification for the assumption is provided by the bayesian formulation as in gelman et al . .the marginal posterior distribution for anticipated return would be .this would provide a superior apprach to modelling the anticipated return in comparision to using a normal distribution . 
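before turning to the choice of the prior degrees of freedom , the conjugate update described above can be exercised numerically in the wishart / inverse-wishart special case mentioned in the text ( the general inverted multivariate gamma prior has no off-the-shelf sampler ) . the prior parameters below are arbitrary illustrative choices .

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
p, n = 10, 6                                # p > n - 1: the scatter matrix is singular
X = rng.normal(size=(n, p))
S = (n - 1) * np.cov(X, rowvar=False)       # scatter matrix, rank n - 1 < p

nu0, Psi0 = p + 2, np.eye(p)                # proper inverse-wishart prior IW(nu0, Psi0)

# conjugate update: Sigma | S  ~  inverse-wishart(nu0 + n - 1, Psi0 + S)
draws = invwishart.rvs(df=nu0 + n - 1, scale=Psi0 + S, size=2000, random_state=1)
post_mean = draws.mean(axis=0)
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(post_mean))   # e.g. 5 vs 10
```

even though the sample scatter matrix is rank-deficient , every posterior draw and the posterior mean are full rank , which is what makes the downstream portfolio optimization well defined .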
if , then we choose the prior degrees of freedom as for .this ensures posterior distribution to be proper .the posterior mode of is where .clearly posterior mode of is a shrinkage estimator which is a weighted average of prior distribution s mode and sample covariance estimator .das and dey ( 2010 ) showed posterior mode is also a bayes estimator under a kullback - leibler type loss function .therefore under properly chosen prior degrees of freedom ( or ) and positive definite , posterior mode of is also a bayes shrinkage estimator which regularizes the solution .the posterior distribution of is proper , which helps us to regularize the portfolio optimization procedure while conducting our empirical study in section [ empstudy ] . under the assumption of =\sigma ] and ] = \frac{2}{n-1}\big[\frac{1}{p}\sum_{i=1}^{p}\sigma_{ii}^2\big] ] .result 4 follows from the definition of fr norm and the proofs of result 1,2 and 3 are presented in section [ app ] .now we examine the denominator of in ( [ dhdest1 ] ) , ^ 2}{p}\big\rbrace+\frac{2}{n-1}\big [ \frac{1}{p}\sum_{i=1}^{p}\sigma_{ii}^2\big ] , \nonumber \\ \delta^2 & = & \frac{1}{p^2}\big[(p-1)\sum_{i=1}^{p}\sigma_{ii}^2+\mathop{\sum \sum}_{i\neq j}^{p}\sigma_{ii}\sigma_{jj}(p\;\rho_{ij}^2 - 1)\big]+\frac{2}{n-1}\big[\frac{1}{p}\sum_{i=1}^{p}\sigma_{ii}^2\big ] .\label{deltaeqn}\end{aligned}\ ] ] multiplying both sides of equation [ deltaeqn ] by we get , \nonumber\\ & = & a_1+a_2 , \nonumber\end{aligned}\ ] ] where , is a function of the variances , and \end{aligned}\ ] ] is a function of the variances and the covariances .we consider from , and if which reduces to , as , which implies . now , we consider from ( [ numer ] ) therefore , ( [ denomo ] ) provides a lower - bound for the denominator of .now we consider the form of , from results 2 and 4 , \big[\sum_{i=1}^{p}\sigma_{ii}\big]\label{varst}\\ & \leq & 2p^2 \big\lbrace \mathop{\mathrm{max}}_{i } \sigma_{ii}^2\big\rbrace \big\lbrace \mathop{\mathrm{max}}_{i } \sigma_{ii}\big\rbrace\nonumber\\ & \rightarrow & ( n-1)p^2\beta^2\mu = o(p^2).\nonumber\\\end{aligned}\ ] ] it provides an upper - bound for the numerator of .hence we combine these findings in the following theorem .if s , is the sample portfolio covariance matrix , such that , then for ( [ smode ] ) and ( [ dhdest1 ] ) we have , provided .theorem 1 establishes the functional equivalence of the distribution - free approach and the bayesian approach .note that from ( [ smode ] ) * note 2 : * theorem 1 implies that the ` degree of shrinkage ' towards the target for ( [ ledoitest ] ) , is less than a constant times ; whereas in ( [ smode ] ) , the degree of shrinkage towards the target , is exactly .equation ( [ varst ] ) is a function of the variances , therefore we propose the following modification to the regularization , where , .we have two choices of weights for the regularization . using ( [ orderbayes ] ), we can choose as the bayesian weights and compare the performance with the asymptotic weights for presented in ledoit and wolf ( 2003 ) .the asymptotic weight is obtained by minimizing ] included in the portfolio constructed using the ledoit and wolf weights , presents an even higher ] has increased , which implies increase in portfolio volatility - risk .x20microns preserves a negative cctr and shows more concentration near the posterior mean . in the second period ( jan - jun14 ) abgship has a negative cctr and shows a considerable decrease in cctr from the first period . 
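the per-security statements in this empirical discussion are summaries of the posterior distribution of each security's cctr . a compact sketch of how such summaries are obtained by monte carlo is given below ; the synthetic data , the equal weights and the inverse-wishart posterior ( standing in for the inverted multivariate gamma posterior of the paper ) are illustrative assumptions .

```python
import numpy as np
from scipy.stats import invwishart

def cctr(w, sigma):
    """conditional contribution to total risk: cctr_i = w_i (sigma w)_i / sqrt(w' sigma w);
    the p entries sum to the portfolio volatility sqrt(w' sigma w)."""
    sw = sigma @ w
    return w * sw / np.sqrt(w @ sw)

rng = np.random.default_rng(2)
p, n = 8, 15
X = rng.normal(0.0, 0.01, size=(n, p))
S = (n - 1) * np.cov(X, rowvar=False)
w = np.full(p, 1.0 / p)                          # fixed portfolio weights

# posterior draws of the covariance matrix (inverse-wishart stand-in, see previous sketch)
draws = invwishart.rvs(df=(p + 2) + n - 1, scale=np.eye(p) * 1e-4 + S,
                       size=5000, random_state=2)
cctr_draws = np.array([cctr(w, sig) for sig in draws])   # embarrassingly parallel step
print(cctr_draws.mean(axis=0).round(5))                  # posterior mean cctr per security
print((cctr_draws > 0).mean(axis=0).round(3))            # estimated P(cctr_i > 0 | data)
```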
the stocks astral and bajajhldng show decreased cctr , their distributions being more concentrated about the posterior mean , in comparision to the first period .the stock bhushanstl , however shows no considerable change over the two periods .finally , using the asymptotic weights , instead of the bayesian weights for the ledoit wolf method with the same shrinkage target as indicated in as in ( [ lw - weight ] ) , we can compute the value for , providing us with a comparable , but inferior result in comparision with the proposed portfolio weights .the proposed weights also provide a portfolio of smaller size that lowers transaction cost for the investor .in this paper we discussed about bayesian approach to regularize the ` ill - posed ' covariance estimation problem , establishing equivalence of the bayesian technique with the existing non - parametric techniques .the method also analyzes the sources of risk by estimating the probability of cctr being positive for any particular security .as cctr sums up to total volatility , it provides each source s contribution to total volatility .regular method only estimates cctr of a source , but it does not estimate the probability of cctr to be significantly greater ( or less ) than zero .this paper discussed the methodology to do so .the existing and relatively new measures of risk , like esf and var , are used to analyze the performance of the proposed portfolio weights in comparison to the traditional methods .we presented a parallelizable and easy to implement monte carlo method to carry out inference regarding the individual contribution of securities towards total risk .we further presented two empirical studies , the first showed that during the crisis of 2008 , a portfolio consisting of five asset classes experienced large volatility risk due to significant increase in the contribution of stock and hybrid bond . 
during the same period` bond ' was the asset class which contributed least on an average to the risk exposure of the portfolio .secondly , the performance of the regularization technique under relatively higher dimensions , under computationally efficient bayesian analysis , to produce effective construction of a portfolio and inference regarding its risk exposure .* proof of result 1 : * ^ 2}{p } & = & \sum_{i=1}^{p } \sum_{j=1}^{p } \sigma_{ij}^2 - \frac{1}{p}\big [ \sum_{i=1}^{p } \sigma_{ii}\big]^2\\ & = & \frac{1}{p}\big [ p\sum_{i=1}^{p } \sigma_{ii}^2 + p\mathop{\sum \sum}_{i\neq j}^{p } \sigma_{ij}^2- \big\lbrace \sum_{i=1}^{p } \sigma_{ii}^2 + \mathop{\sum \sum}_{i\neq j}^{p } \sigma_{ii}\sigma_{jj}\big\rbrace\big]\\ & = & \frac{1}{p}\big [ ( p-1)\sum_{i=1}^{p } \sigma_{ii}^2+\mathop{\sum \sum}_{i\neq j}^{p } ( p\ ; \sigma_{ij}^2-\sigma_{ii}\sigma_{jj})\big]\\ & = & \frac{1}{p}\big[(p-1)\sum_{i=1}^{p } \sigma_{ii}^2+\mathop{\sum \sum}_{i\neq j}^{p } \sigma_{ii } \sigma_{jj}(p\ ; \rho_{ij}^2 - 1)\big].\end{aligned}\ ] ] consequently we have , .\ ] ] * * proof of result 2:**note that ,the following result is true for the covariance matrix , s , &=&\mathrm{tr}[\mathrm{e}(s^2)]\\ & = & \mathrm{tr}\big[\mathrm{var}(s)+[\mathrm{e}(s)]^2 \big]\\ & = & \mathrm{tr}\big[(n-1)((\sigma_{ij}^2+\sigma_{ii}\sigma_{jj}))_{i=1\ldots p , j=1\ldots p}+(n-1)^2\sigma^2 \big]\\ & = & ( n-1)\big[2\sum_{i=1}^{p}\sigma_{ii}^2+(n-1)\ ; \mathrm{tr}(\sigma^2)\big].\end{aligned}\ ] ] using the result obtained above we have , \\ & = & \frac{1}{p } \mathrm{e}\big[\mathrm{tr}\big\lbrace \big(\frac{s}{n-1 } -\sigma \big)\big(\frac{s}{n-1 } -\sigma \big)^{\prime}\big\rbrace \big]\\ & = & \frac{1}{p } \mathrm{e}\big[\mathrm{tr } \big(\frac{s^2}{(n-1)^2}\big ) -2 \mathrm{tr}\big(\frac{s}{n-1}\sigma \big ) + \mathrm{tr } \big(\sigma^{2}\big)\big\rbrace \big].\\\end{aligned}\ ] ] now using the fact that =\ ; \mathrm{tr}[\mathrm{e}(.)] ] , and the result above , we have \\ & = & \frac{1}{p } \big [ \frac{(n-1)\lbrace 2\sum_{i=1}^{p } \sigma_{ii}^2 + ( n-1 ) \mathrm{tr}(\sigma^2)\rbrace}{(n-1)^2}-\frac{2}{n-1}(n-1)\mathrm{tr}(\sigma^2)+\mathrm{tr}(\sigma^2)\big]\\ & = & \frac{2}{n-1}\big [ \frac{1}{p}\sum_{i=1}^{p}\sigma_{ii}^2\big].\end{aligned}\ ] ] * proof of result 3 : * \\ & = & \frac{1}{p}\mathrm{e}\big [ \mathrm{tr } \big\lbrace \big(\frac{s}{n-1}-\mu\mathrm{i}\big)\big(\frac{s}{n-1}-\mu\mathrm{i}\big)^{\prime}\big\rbrace\big]\\ & = & \frac{1}{p}\mathrm{e}\big[\mathrm{tr } \big(\frac{s^2}{(n-1)^2}\big)-\frac{2\mu}{n-1}\mathrm{tr}(s)+\mu^2\mathrm{tr}\mathrm{i}\big]\\ & = & \frac{1}{p}\mathrm{tr}\big [ \mathrm{e}\big(\frac{s^2}{(n-1)^2}\big)-\frac{2\mu}{n-1}(n-1)\sigma+p\mu^2\big]\\ & = & \frac{1}{p}\big [ \frac{(n-1)\lbrace 2\sum_{i=1}^{p } \sigma_{ii}^2 + ( n-1 ) \mathrm{tr}(\sigma^2)\rbrace}{(n-1)^2}-2\frac{[\mathrm{tr}(\sigma)]^2}{p}+\frac{[\mathrm{tr}(\sigma)]^2}{p } \big]\\ & = & \frac{2}{n-1}\big [ \frac{1}{p}\sum_{i=1}^{p}\sigma_{ii}^2\big ] + \frac{1}{p}\big\lbrace\mathrm{tr}(\sigma^2 ) -\frac{[\mathrm{tr}(\sigma)]^2}{p}\big\rbrace=\ ; \alpha^2+\beta^2.\end{aligned}\ ] ] * proof of result 5 : * ~=~ \mathop{\mathrm{argmin}}_{\rho \geq 0 } \big|\big|\sigma_3-\sigma\big|\big|^2\\ & = & \mathop{\mathrm{argmin}}_{\rho \geq 0 } \big|\big|\rho \lambda^{\prime}\mathrm{i_{p}}+(1-\rho)\frac{s}{n-1}-\sigma\big|\big|^2~=~\mathop{\mathrm{argmin}}_{\rho \geq 0 } \big|\big|\rho \mathop{\mathrm{diag}}_{i}~\lbrace s_{11},s_{22},\ldots , s_{pp}\rbrace+(1-\rho)\frac{s}{n-1}-\sigma\big|\big|^2.\\\end{aligned}\ ] ] now 
considering we have , &= & \frac{1}{p}\mathrm{e}\big[\mathrm{tr}\big\lbrace\big(\rho \mathop{\mathrm{diag}}_{i}~\lbrace s_{11},s_{22},\ldots , s_{pp}\rbrace+(1-\rho)\frac{s}{n-1}-\sigma\big)\big(\rho \mathop{\mathrm{diag}}_{i}~\lbrace s_{11},s_{22},\ldots , s_{pp}\rbrace+(1-\rho)\frac{s}{n-1}-\sigma\big)^{\prime}\big\rbrace\big]\\ & = & \frac{1}{p } \mathrm{tr } \big(\mathrm{e}\big[\rho^2\mathop{\mathrm{diag}}_{i}~\lbrace s_{ii}\rbrace^2 + 2\rho(1-\rho)\mathop{\mathrm{diag}}_{i}~\lbrace s_{ii}\rbrace\frac{s}{n-1}-2\rho\mathop{\mathrm{diag}}_{i}~\lbrace s_{ii}\rbrace\sigma-2(1-\rho)\sigma\frac{s}{n-1}+\sigma^2\big]\big).\\\end{aligned}\ ] ] since , implies , =(n-1)\sigma ] , then to minimize we have the normal equation , by differentating w.r.t , and rearranging terms , for co - efficient of \rho & = & [ ( n-2)^2 + 1]\sum_{i=1}^{p}\sigma_{ii}^2-(n-2)\sum_{i=1}^{p}\sum_{i=1}^{p}\sigma_{ii}-(n-1)\sum_{i=1}^{p}\sum_{j=1}^{p}\sigma_{ij}^2,\\\end{aligned}\ ] ] which is the required result . 99 golosnoy v. , okhrin y. multivariate shrinkage for optimal portfolio weights . the european journal of finance , 2007 ; 13:441 - 458 .ledoit o. , wolf m. honey , i shrunk the sample covariance matrix .journal of portfolio management , 2004 ; 30:4:110 - 119 .ledoit o. , wolf , m. a well - conditioned estimator for large - dimensional covariance matrices .journal of multivariate analysis , 2004 ; 88:365 - 411 .ledoit o. , wolf m. improved estimation of the covariance matrix of stock returns with an application to portfolio selection .journal of empirical finance , 2003 ; 10:5:603 - 621 .markowitz harry m. portfolio selection .journal of finance , 1952 ; 77 - 91 .das s. , dey d. k. on bayesian inference for generalized multivariate gamma distribution .statistics and probability letters , 2010 ; 80 : 14921499 .bapat , r. b. infinite dividibility of multivariate gamma distributions and m - matrices .sankhya : the indian journal of statistics , 1989 ; 51:1:73 - 78 .anderson , t. w. an introduction to multivariate statistical analysis 2 ed .wiley , 1984 .ruppert d. statistics and finance an introduction .springer , 2004 .menchero j. , davis .b. risk contribution is exposure time s volatility time s correlation : decomposing risk using the x - sigma - rho formula .the journal of portfolio management , 1989 ; 37:2:97 - 106 .baigent g. g. x - sigma - rho and market efficiency. international journal of finance and banking , 2014 ; 1:1:39 - 43 .matloff n. the art of r programming : a tour of statistical software design .the journal of portfolio management , 2011 ; isbn : 9781593274108 : 347 e. graham , knuth .big o micron and big omega and big theta .sigact news , 1976 ; : 18 - 24 rachev svetlozar t. , john s. , hsu j. , biliana s. bayesian methods in finance ( frank j. fabozzi series ) .wiley , 2008 .andrew gelman , john b. carlin , hal s. stern , david b. dunson , aki vehtari , donald b. rubin .bayesian data analysis .chapman and hall / crc , 2004 .david l. , wolfgang b. ghyp : a package on the generalized hyperbolic distribution and its special cases .r package v 1.5.6 , 2013 .< http://cran.r - project.org / package = ghyp>..portfolio weights for five asset classes [ portweighttable ] [ cols=">,>,>,>,>,>",options="header " , ] | it is important for a portfolio manager to estimate and analyse recent portfolio volatility to keep the portfolio s risk within limit . 
though the number of financial instruments in the portfolio can be very large , sometimes more than a thousand , the daily returns considered for analysis cover only a month or even less . in this case the rank of the portfolio covariance matrix is less than full , hence the solution is not unique . this is typically known as the `` ill-posed '' problem . in this paper we discuss a bayesian approach to regularize the problem . one additional advantage of this approach is that it analyzes the sources of risk by estimating the probability of a positive ` conditional contribution to total risk ' ( cctr ) . the cctr values of all sources sum up to the portfolio's total volatility risk . existing methods only estimate the cctr of a source , and do not estimate the probability of the cctr being significantly greater ( or less ) than zero . this paper presents a bayesian methodology to do so . we use a parallelizable and easy-to-use monte carlo ( mc ) approach to achieve our objective . estimation of various risk measures , such as value at risk and expected shortfall , becomes a by-product of this monte carlo approach . * keywords * : monte carlo , parallel computation , risk analysis , shrinkage method , volatility
massive multiple - input multiple - output ( mimo ) systems employing simple linear precoding and combining schemes offer significant performance gains in terms of bandwidth , power , and energy efficiency compared to conventional multiuser mimo systems as impairments such as fading , noise , and interference are averaged out for very large numbers of base station ( bs ) antennas .furthermore , in time - division duplex ( tdd ) systems , channel reciprocity can be exploited to estimate the downlink channels via uplink training so that the training overhead scales only linearly with the number of users and is independent of the number of bs antennas .however , if the pilot sequences employed in different cells are not orthogonal , so - called pilot contamination occurs and impairs the channel estimates , which ultimately limits the achievable performance of massive mimo systems . since secrecy and privacy are critical concerns for the design of future communication systems , it is of interest to investigate how the large number of spatial degrees of freedom in massive mimo systems can be exploited for secrecy enhancement .if the eavesdropper ( eve ) remains passive to hide its existence , neither the transmitter ( alice ) nor the legitimate receiver ( bob ) will be able to learn eve s channel state information ( csi ) .in this situation , it is advantageous to inject artificial noise ( an ) at the transmitter to degrade eve s channel and to use linear precoding to avoid impairment to bob s channel as was shown in - and , for single user and single - cell multiuser systems , respectively .however , in multi - cell massive mimo systems , multi - cell interference and pilot contamination will hamper alice s ability to degrade eve s channel and to protect bob s channel .this problem was studied first in for simple matched - filter ( mf ) data precoding and null - space ( ns ) and random an precoding .however , it is well known that mf data precoding suffers from a large loss in the achievable information rate compared to other linear data precoders such as zero - forcing ( zf ) and regularized channel inversion ( rci ) precoders as the number of mobile terminals ( mts ) increases . since it is expected that this loss in information rate also translates into a loss in secrecy rate , studying the secrecy performance of zf and rci data precoders in massive mimo systemsis of interest .furthermore , while ns an precoding was shown to achieve a better performance compared to random an precoding , it also entails a much higher complexity .similarly , the improved performance of zf and rci data precoding compared to mf data precoding comes at the expense of a higher complexity .hence , the design of novel data and an precoders which allow a flexible tradeoff between complexity and secrecy performance is desirable .related work on physical layer security in massive mimo systems includes where the authors use the channel between alice and bob as secrete key and show that the complexity required by eve to decode alice s message is at least of the same order as a worst - case lattice problem .physical layer security in a downlink multi - cell mimo system was considered in - . however , unlike our work , perfect knowledge of eve s channel was assumed , an injection was not considered , and pilot contamination was not taken into account . furthermore , zf and rci data precoding were analyzed in the large system limit in . 
however , neither pilot contamination nor an were taken into account and the secrecy rate was not analyzed . using a concept that was originally conceived for code division multiple access ( cdma ) uplink systems in and later extended to mimo systems in , reduced complexitylinear data precoders that are based on matrix polynomials were investigated for use in massive mimo systems in - .however , - did not take into account the effect of an leakage for precoder design and did not study the secrecy performance .hence , the results presented in - are not directly applicable to the system studied in this paper . in this paper, we consider secure downlink transmission in a multi - cell massive mimo system employing linear data and an precoding in the presence of a passive multi - antenna eavesdropper .we study the achievable ergodic secrecy rate of such systems for different linear precoding schemes taking into account the effects of uplink channel estimation , pilot contamination , multi - cell interference , and path - loss .the main contributions of this paper are summarized as follows : * we study the performance - complexity tradeoff of selfish and collaborative data and an precoders .selfish precoders require only the csi of the mts in the local cell but cause inter - cell interference and inter - cell an leakage .in contrast , collaborative precoders require the csi between the local bs and the mts in all cells , but reduce inter - cell interference and inter - cell an leakage. however , since the additional csi required for the collaborative precoders can be estimated directly by the local bs , the additional overhead and complexity incurred compared to selfish precoders is limited .* we derive novel closed - form expressions for the asymptotic ergodic secrecy rate which facilitate the performance comparison of different combinations of linear data precoders ( i.e. , mf , selfish and collaborative zf / rci ) and an precoders ( i.e. , random , selfish and collaborative ns ) , and provide significant insight for system design and optimization . * in order to avoid the computational complexity and potential stability issues in fixed point implementations entailed by the large - scale matrix inversions required for zf and rci data precoding and ns an precoding , we propose polynomial ( poly ) data and an precoders and optimize their coefficients . unlike and , which considered polynomial data precoders for massive mimo systems without an generation , we use free probability theory to obtain the poly coefficients .this allows us to express the poly coefficients as simple functions of the channel and system parameters .simulation results reveal that these precoders are able to closely approach the performance of selfish rci data and ns an precoders , respectively .the remainder of this paper is organized as follows . in section [ s2 ] , we outline the considered system model and review some basic results from . in sections [ s3 ] and [ s4 ] , the considered linear data and an precoders are investigated , respectively . in section [ s5 ] , the ergodic secrecy rates of different linear precoders are compared analytically for a simple path - loss model . simulation and numerical results are presented in section [ s6 ] , and some conclusions are drawn in section [ s7 ] ._ notation : _ superscripts and stand for the transpose and conjugate transpose , respectively . 
is the -dimensional identity matrix .the expectation operation and the variance of a random variable are denoted by ] , respectively . denotes a diagonal matrix with the elements of vector on the main diagonal . and denote trace and rank of a matrix , respectively . represents the space of all matrices with complex - valued elements . denotes a circularly symmetric complex gaussian vector with zero mean and covariance matrix .{kl} ] .in this section , we introduce the considered system model as well as the adopted channel estimation scheme , and review some ergodic secrecy rate results .we consider the downlink of a multi - cell massive mimo system with cells and a frequency reuse factor of one , i.e. , all bss use the same spectrum .each cell includes one -antenna bs , single - antenna mts , and potentially an -antenna eavesdropper .the eavesdroppers try to hide their existence and hence remain passive . as a result, the bss can not estimate the eavesdroppers csi . to overcome this limitation, each bs generates an to mask its information - carrying signal and to prevent eavesdropping . in the following ,the mt , , in the cell , , is the mt of interest and we assume that an eavesdropper tries to decode the signal intended for this mt . we note that neither the bss nor the mts are assumed to know which mt is targeted by the eavesdropper .the signal vector , , transmitted by the bs in the cell ( also referred to as the bs in the following ) is given by where and denote the data and an vectors for the mts in the cell , respectively . \in \mathbb{c}^{n_t \times k} ] are the data and an precoding matrices , respectively , and the efficient design of these matrices is the main scope of this paper .thereby , the structure of both types of precoding matrices does not depend on which mt is targeted by the eavesdropper .the an precoding matrix has rank , i.e. , dimensions of the -dimensional signal space spanned by the bs antennas are exploited for jamming of the eavesdropper .the data and an precoding matrices are normalized as and , i.e. , their average power per dimension is one .the average powers and allocated to the information - carrying signal for each mt and each an signal , respectively , can be written as and , where is the total transmit power and ] and ^t \in \mathbb{c}^{k \times n_t} ] , is a normalization constant , and is a regularization constant .the corresponding sinr of the mt in the cell is provided in the following proposition ._ proposition 2 _ : for crci data precoding , the received sinr at the mt in the cell is given by where with .the proof is similar to that for the sinr for the srci data precoder given in appendix a and omitted here for brevity .furthermore , the optimal regularization constant maximizing the sinr ( and thus the secrecy rate ) in ( [ gammacrcinpc ] ) is obtained as , and the corresponding maximum sinr is given by on the other hand , for , the crci precoder in ( [ crci ] ) reduces to the collaborative zf ( czf ) precoder .the corresponding received sinr is provided in the following corollary ._ corollary 2 _ : assuming , for czf data precoding , the received sinr at the mt in the cell is given by in ( [ czf ] ) is obtained by letting in ( [ gammacrcinpc ] ) ._ remark 1 : _ selfish data precoders require estimation of in - cell csi , i.e. , , only .in contrast , collaborative data precoders require estimation of both in - cell and inter - cell csi at the bs , i.e. , . 
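as a concrete reference point before the polynomial precoders introduced next , the sketch below implements a single-cell selfish rci data precoder with perfect csi and checks the power normalization tr { f f^h } = k and the near-diagonal effective channel . the regularization constant used here is an arbitrary illustrative value , whereas the paper's optimal choice maximizes the sinr of proposition 1 and depends on the noise , an leakage and interference terms .

```python
import numpy as np

def srci_precoder(H, kappa):
    """selfish rci data precoder F = beta * H^H (H H^H + kappa I_K)^{-1},
    algebraically equal to (H^H H + kappa I_Nt)^{-1} H^H but cheaper since K << Nt;
    beta is chosen such that tr(F F^H) = K (unit average power per dimension)."""
    K = H.shape[0]
    G = H.conj().T @ np.linalg.inv(H @ H.conj().T + kappa * np.eye(K))
    beta = np.sqrt(K / np.real(np.trace(G @ G.conj().T)))
    return beta * G

rng = np.random.default_rng(3)
Nt, K = 64, 8
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
F = srci_precoder(H, kappa=K / 10.0)           # kappa -> 0 recovers the szf precoder
print(round(np.real(np.trace(F @ F.conj().T)), 3))   # K: power normalization holds
print(np.abs(H @ F).round(2)[:3, :3])                # near-diagonal effective channel
```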
furthermore , since collaborative data precoders attempt to avoid interference not only to in - cell users but also to out - of - cell users , more bs antennas are needed to achieve high performance .this is evident from corollaries 1 and 2 , which reveal that and are necessary for szf and czf data precoding , respectively . on the other hand ,if successful , trying to avoid out - of - cell interference is beneficial for the overall performance .hence , whether selfish or collaborative precoders are preferable depends on the parameters of the considered system , cf .sections [ s5 ] and [ s6 ] .the rci and zf data precoders introduced in the previous section achieve a higher performance than simple mf data precoding .however , they require a matrix inversion which entails a high computational complexity for the large values of and desired in massive mimo . hence , in this section , we propose a low - complexity poly data precoder which avoids the matrix inversion . as the goal is a low - complexity design ,we focus on selfish poly precoders , although the extension to collaborative designs is possible .the proposed poly precoder , , for the bs can be expressed as where , and ^t ] with error vector where includes gaussian noise , inter - cell interference , and inter - cell an leakage .furthermore , is a normalization constant at the receiver , which does not impact detection performance .the optimal coefficient vector minimizes for a given power budget for the information - carrying signal , i.e. , where we use the notation .the optimal coefficient vector , , is provided in the following theorem ._ theorem 1 _ : for , the optimal coefficient vector minimizing the asymptotic average mse of the users in the cell for the poly precoder in ( [ w ] ) is given by where ^t ] , ] .furthermore , denotes the -order moment of the sum of the eigenvalues of , i.e. , , which converges to for ( * ? ? ?* theorem 2 ) .finally , is chosen such that holds .please refer to appendix b. we note that does not depend on instantaneous channel estimates , and hence , can be computed offline .we compare the computational complexity of the considered data precoders in terms of the number of floating point operations ( flops ) .each flop represents one scalar complex addition or multiplication .we assume that the coherence time of the channel is symbol intervals of which are used for training and are used for data transmission .hence , the complexity required for precoding in one coherence interval is comprised of the complexity required for generating one precoding matrix and precoded vectors .a similar complexity analysis was conducted in ( * ? ? ?* section iv ) for selfish data precoders without an injection at the bs .since the an injection does not affect the structure of the data precoders , we can directly adapt the results from ( * ? ? ?* section iv ) to the case at hand .in particular , the selfish mf , the szf / srci , and the czf / crci precoders require , , and flops per coherence interval , see ( * ? ? ? * section iv ) .in contrast , for the poly data precoder , we obtain for the overall computational complexity flops , which assumes implementation of the precoding operation by horner s rule ( * ? ? ? 
* section iv ) .the above complexity expressions reveal that the additional complexity introduced by collaborative data precoders compared to selfish data precoders is at most a factor of .in addition , the complexity savings achieved with the poly data precoder compared to the szf / srci data precoders increase with increasing for a given .we note however that , regardless of their complexity , poly data precoders are attractive as they avoid the stability issues that may arise in fixed point implementation of large matrix inverses .in this section , we investigate the performance of selfish and collaborative ns ( s / cns ) and random an precoders .in addition , a novel poly an precoder is derived . to the best of the authors knowledge ,poly an precoding has not been considered in the literature before . for a given dimensionality of the an precoder , ,the secrecy rate depends on the an precoder only via the an leakage , , given in ( [ eq8 ] ) , which affects the sinr of the mt .furthermore , the optimal poly data precoder coefficients in ( [ muopt ] ) are affected by the an precoder via the leakage term . in this subsection , for , we will provide closed - form expressions for and for the sns , cns , and random an precoders .the sns an precoder of the bs is given by which has rank and exists only if .we divide the corresponding an leakage into an inter - cell an leakage and an intra - cell an leakage , where . for the sns an precoder , is obtained as =\mathbb{e}\bigg[{\rm tr}\left\{{\bf a}_{m } { \bf a}^h_m\right\}\bigg]\sum_{m \neq n}^m \beta^k_{mn}= ( n_t - k)\sum_{m \neq n}^m \beta^k_{mn},\ ] ] where we exploited ( * ? ? ? * lemma 11 ) and the independence of and .in contrast , the intra - cell an leakage power is given by =\beta^k_{nn}\mathbb{e}\bigg[\tilde{\bf h}^k_{nn } { \bf a}_{n } { \bf a}^h_n ( \tilde{\bf h}^k_{nn})^h\bigg]=(n_t - k)\beta^k_{nn}\frac{1 + p_\tau \tau \sum_{m \neq n}^m \beta^k_{nm}}{1+p_\tau \tau \sum_{m=1}^m \beta^k_{nm}},\ ] ] as the sns an precoder matrix lies in the null space of the estimated channels of all mts in the cell . similarly , the an leakage relevant for computation of the poly data precoder is obtained as for the cns an precoder at the bs , the an is designed to lie in the null space of the estimated channels between all mts and the bs , i.e. , which has rank and exists only if . the corresponding an leakage to the mt in the cell is given by = ( n_t - mk ) \sum_{m=1}^m \beta^k_{mn}\frac{1+p_\tau \tau \sum_{l \neq m}^m \beta^k_{ml}}{1+p_\tau \tau \sum_{l=1}^m \beta^k_{ml}}.\ ] ] furthermore , the cns an precoder results in the same as the sns an precoder , cf .( [ aaa ] ) . for the random precoder ,all elements of are i.i.d .random variables independent of the channel , i.e. , has rank .hence , and , , are mutually independent , and we obtain =n_t\sum_{m=1}^m \beta^k_{mn}.\ ] ] furthermore , we obtain ._ remark 2 : _ if the power and time allocated to channel estimation are very small , i.e. 
, , the s / cns an precoders yield the same and as the random an precoder .this suggests that in this regime all considered an precoders achieve a similar sinr performance for a given mt .however , for , the s / cns an precoders cause less an leakage resulting in an improved sinr performance compared to the random precoder at the expense of a higher complexity .to mitigate the high computational complexity imposed by the matrix inversion required for the s / cns an precoders , while achieving an improved performance compared to the random an precoder , we propose a poly an precoder .similar to the poly data precoder , we concentrate on the selfish design because of the desired low complexity , and hence , set .the proposed poly an precoder is given by where ^t ] and ] denotes the inter - cell interference factor . for this simplified model , and in ( [ cup ] )simplify to and .furthermore , the sinr expressions of the linear data precoders considered in section [ s3a ] and the mf precoder considered in can be simplified considerably and are provided in table [ tablea ] , where we use the normalized an leakage .the expressions for the normalized an leakage , the asymptotic average an leakage , and the dimensionality of the considered linear an precoders are given in table [ tableb ] .[ cols="^,^",options="header " , ] [ tableb ] in this subsection , we compare the performances achieved with szf , czf , and mf data precoders for a given an precoder , i.e. , and are fixed . since the upper bound on the capacity of the eavesdropper channel is independent of the adopted data precoder , cf .section [ s2c ] , we compare the considered data precoders based on their sinrs . exploiting the results in table [ tablea ] , we obtain the following relations between , , and : hence , for , we require , and for , we need ] , mf data precoding is always preferable regardless of the value of .similarly , if exceeds ] , it is not a priori clear which an precoder has the best performance . in fact , our numerical results in section [ s6 ] confirm that it depends on the system parameters ( e.g. , , , , and ) which an precoder is preferable . in this subsection , we provide closed - form results for the ergodic secrecy rate for szf , czf , and mf data precoding for the simplified path - loss model in ( [ eq21 ] ) .thereby , the simplified path - loss model is extended also to the eavesdropper , i.e. , and , , is assumed . combining ( [ secnk ] ) , ( [ cup ] ) , and the results in table [ tablea ] , we obtain the following lower bounds for the ergodic secrecy rate of the mt in the cell : ^+ & \quad { \rm for~mf},\\\bigg[\log_2 \left(\frac{(\tilde{q}+1/p_t)\beta+(a-\theta-\tilde{q})\beta \phi+c\theta(1-\beta)\phi}{(\tilde{q}+1/p_t)\beta+(a-\theta-\tilde{q})\beta \phi+(c-1)\theta(1-\beta)\phi}\cdot \frac{-\chi \phi+\chi}{(1-\chi)\phi+\chi}\right)\bigg]^+ & \quad { \rm for~szf},\\\bigg[\log_2 \left(\frac{(\tilde{q}+1/p_t)\beta+(a - a\theta-\tilde{q})\beta \phi+c\theta(1-m\beta)\phi}{(\tilde{q}+1/p_t)\beta+(a - a\theta-\tilde{q})\beta \phi+(c-1)\theta(1-m\beta)\phi}\cdot \frac{-\chi \phi+\chi}{(1-\chi)\phi+\chi}\right)\bigg]^+ & \quad { \rm for~czf},\end{cases}\ ] ] where , and and are given in table [ tableb ] for the considered an precoders .( [ rmfzf ] ) is easy to evaluate and reveals how the ergodic secrecy rate of the three considered data precoders depends on the various system parameters . 
to gain more insight , we determine the maximum value of which admits a non - zero secrecy rate .this value is denoted by in the following , and can be shown to be a decreasing function of for all conidered data precoders .hence , we find by setting in ( [ rmfzf ] ) and letting .this leads to eq .( [ alphas ] ) reveals that for a given an precoder , independent of the system parameters , the mf data precoder can always tolerate a larger number of eavesdropper antennas than the szf data precoder , which in turn can always tolerate a larger number of eavesdropper antennas than the czf data precoder .this can be explained by the fact that the high an transmit power required to combat a large number of eavesdropper antennas drives the receiver of the desired mt into the noise - limited regime , where the mf data precoder has a superior performance compared to the s / czf data precoders . on the other hand ,since depends on both and , it is not a priori clear which an precoder can tolerate the largest number of eavesdropper antennas . for a lightly loaded network with small and small , according to table [ tableb ] , we have for all three an precoders .hence , in this case , we expect the cns an precoder to outperform the sns and random an precoders as it achieves a smaller . on the other hand , for a heavily loaded network with large and ,the value of of the cns an precoder is compromised by its small value of and sns and even random an precoders are expected to achieve a larger .in this section , we evaluate the performance of the considered secure multi - cell massive mimo system .we consider cellular systems with and hexagonal cells , respectively , and to gain insight for system design , we adopt the simplified path - loss model introduced in section [ s5 ] , i.e. , the severeness of the inter - cell interference is only characterized by the parameter ] , and ] and ] , we obtain for the first term of the denominator of ( [ rnk ] ) , =\frac{1+p_\tau \tau \sum_{m \neq n}^m \beta^k_{nm}}{1+p_\tau \tau \sum_{m=1}^m \beta^k_{nm}} ] , the definition of given in theorem 1 , the definition of in ( [ w ] ) , the definition , and . in the following ,we simplify the right hand side ( rhs ) of ( [ x1 ] ) term by term . to this end, we denote the first three terms on the rhs of ( [ x1 ] ) by , , and , respectively . using a result from free probability theory , the first term converges to ( * ? ? ?* theorem 1 ) , \label{eq49}\ ] ] as matrix is free from .similarly , the third term converges to .\ ] ] furthermore , the second term can be rewritten as \nonumber\\ & \stackrel{{\rm ( { b})}}{=}&\varsigma^2 p n_t { \rm tr}\left\{{\bf d}_{nn } \boldsymbol{\delta}_n\right\ } , \label{eq51}\end{aligned}\ ] ] where ( a ) follows again from ( * ? ? ?* theorem 1 ) and ( b ) results from ={\rm tr}\left\{{\bf d}_{nn } \boldsymbol{\delta}_n\right\} ] , and ^t ] , where is the lagrangian multiplier . 
taking the gradient of the lagrangian function with respect to , and setting the result to zero , we obtain for the optimal coefficient vector : \boldsymbol{\mu } = \frac{{\rm tr}\left\{{\bf d}^{1/2}_{nn}\right\ } } { \varsigma \sqrt{p } { \rm tr}\left\{{\bf d}_{nn}\right\ } } \lim_{k \to \infty } \frac{1}{k } \mathbb{e}\left[{\bf c}^t_1 \boldsymbol{\lambda}\right ] .\label{eq56}\ ] ] furthermore , taking the derivative of with respect to and equating it to zero , and multiplying both sides of ( [ eq56 ] ) by and applying ( [ eq55 ] ) , we obtain the expressions involving , , and in ( [ eq56 ] ) can be further simplified .for example , we obtain {m , n}\bigg]= \lim_{k \to \infty } \mathbb{e}\bigg[\frac{1}{k } \sum_{k=1}^k \lambda^{m+n-1}_k\bigg] ] , the constraint in ( [ opt2 ] ) , and a similar approach as was used to arrive at ( [ aaa ] ) , the objective function in ( [ opt2 ] ) can be simplified as =q\mathbb{e}\bigg[{\rm tr}\left\ { { \bf d}_{nn } \hat{{\bf h}}_{nn } { \bf a}_n { \bf a}^h_n \hat{\bf h}^h_{nn}\right\}\bigg ] + ( 1-\phi)p_t{\rm tr}\{{\bf d}_{nn } \boldsymbol{\delta}_n\}.\ ] ] defining vandermode matrix , where {i , j}=\lambda_i^{j-1} ] with lagrangian multiplier .the optimal coefficient vector is then obtained by taking the gradient of the lagrangian function with respect to and setting it to zero : \boldsymbol{\nu}=\lim_{k \to \infty } \mathbb{e}\bigg[{\bf c}^t_2 \left(\boldsymbol{\lambda}+\epsilon { \bf i}_k\right ) \boldsymbol{\lambda}\bigg],\ ] ] where we used . simplifying the terms in ( [ nuopt ] ) by exploiting a similar approach as in appendix b, we obtain the result in theorem 2 .20 f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , `` scaling up mimo : opportunities and challenges with very large arrays , '' _ ieee sig .proc . mag ._ , vol . 30 , no .40 - 46 , jan . 2013 .h. q. ngo , e. g. larsson , and t. l. marzetta , `` energy and spectral efficiency of very large multiuser mimo systems , '' _ ieee trans .commun . _ , vol .1436 - 1449 , apr . 2013 .j. jose , a. ashikhmin , t. l. marzetta , and s. vishwanath , `` pilot contamination and precoding in multi - cell tdd systems , '' _ ieee trans .wireless commun .10 , no . 8 , pp .2640 - 2651 , aug .a. mukherjee , s. a. a. fakoorian , j. huang , and a. l. swindlehurst , `` principles of physical layer security in multiuser wireless networks : a survey , '' _ ieee commun .surveys & tutorials _ , vol .3 , third quarter , 2014 .f. oggier and b. hassibi , `` the secrecy capacity of the mimo wiretap channel , '' _ ieee trans .inform . theory _57 , no . 8 , pp.4961 - 4972 , aug .s. goel and r. negi , `` guaranteeing secrecy using artificial noise , '' _ ieee trans .wireless commun ._ , vol . 7 , no . 6 , pp. 2180 - 2189 , june 2008 . x. zhou and m. r. mckay , `` secure transmission with artificial noise over fading channels : achievable rate and optimal power allocation , '' _ ieee trans . veh .3831 - 3842 , july 2010 .w. liao , t. chang , w. ma , and c. chi , `` qos - based transmit beamforming in the presence of eavesdroppers : an optimized artificial - noise - aided approach , '' _ ieee trans .1202 - 1216 , march 2011 .g. geraci , h. s. dhillon , j. g. andrews , j. yuan , i. b. collings , `` physical layer security in downlink multi - antenna cellular networks , '' _ ieee trans .62 , no . 6 , pp .2006 - 2021 , jun . 2014 .g. geraci , m. egan , j. yuan , a. 
razi , and i.b .collings , `` secrecy sum - rates for multi - user mimo regularized channel inversion precoding , '' _ ieee trans .3472 - 3482 , nov .g. geraci , j. yuan , and i. b. collings , `` large system analysis of linear precoding in miso broadcast channels with confidential messages,''__ieee journal on sel .areas in commun . _9 , pp . 1660 - 1671 , sept .2013 .v. k. nguyen and j. s. evans , `` multiuser transmit beamforming via regularized channel inversion : a large system analysis , '' in _ proc . ieee global communications conference _ , new orleans , lo , us , pp . 1 - 4 , dec . 2008. s. zarei , w. gerstacker , r. r. muller , and r. schober , `` low - complexity linear precoding for downlink large - scale mimo systems , '' in _ proc .personal , indoor and mobile radio commun .( pimrc ) _ , sept .2013 .a. kammoun , a. mller , e. bjrnson , and m. debbah , `` linear precoding based on polynomial expansion : large - scale multi - cell mimo systems , '' _ ieee journal of sel .topics in sig ._ , vol . 8 , no .861 - 875 , oct . 2014 .j. evans and d. n. c. tse , `` large system performance of linear multiuser receivers in multipath fading channels , '' _ ieee trans .inform . theory _46 , no . 6 , pp . 2059 - 2018 , sept .r. hunger , `` floating point operations in matrix - vector calculus , '' technische universitt mnchen , associate institute for signal processing , tech . rep . , 2007 .a. m. tulino and s. verdu , `` random matrix theory and wireless communications , '' _ foundations and trends in communications and information theory _ , vol . 1 , no. 1 , pp . 1 - 182 , jun . 2004 . | in this paper , we consider secure downlink transmission in a multi - cell massive multiple - input multiple - output ( mimo ) system where the numbers of base station ( bs ) antennas , mobile terminals , and eavesdropper antennas are asymptotically large . the channel state information of the eavesdropper is assumed to be unavailable at the bs and hence , linear precoding of data and artificial noise ( an ) are employed for secrecy enhancement . four different data precoders ( i.e. , selfish zero - forcing ( zf)/regularized channel inversion ( rci ) and collaborative zf / rci precoders ) and three different an precoders ( i.e. , random , selfish / collaborative null - space based precoders ) are investigated and the corresponding achievable ergodic secrecy rates are analyzed . our analysis includes the effects of uplink channel estimation , pilot contamination , multi - cell interference , and path - loss . furthermore , to strike a balance between complexity and performance , linear precoders that are based on matrix polynomials are proposed for both data and an precoding . the polynomial coefficients of the data and an precoders are optimized respectively for minimization of the sum mean squared error of and the an leakage to the mobile terminals in the cell of interest using tools from free probability and random matrix theory . our analytical and simulation results provide interesting insights for the design of secure multi - cell massive mimo systems and reveal that the proposed polynomial data and an precoders closely approach the performance of selfish rci data and null - space based an precoders , respectively . |
public disclosure of datasets containing _ micro-data _ , i.e. , information on specific individuals , is an increasingly frequent practice . such datasets are collected in a number of different ways , including surveys , transaction recorders , positioning data loggers , mobile applications , and communication network probes . they yield fine-grained data about large populations that has proven critical to seminal studies in a number of research fields . however , preserving user privacy in publicly accessible micro-data datasets is currently an open problem . publishing an incorrectly anonymized dataset may disclose sensitive information about specific users . this has been repeatedly proven in the past . one of the first and best known attempts at re-identification of badly anonymized datasets was carried out by then mit graduate student latanya sweeney in 1996 . by using a database of medical records released by an insurance company and the voter roll for the city of cambridge ( ma ) , purchased for 20 us dollars , dr . sweeney could successfully re-identify the full medical history of the then governor of massachusetts , william weld . she even sent the governor his full health records , including diagnoses and prescriptions , to his office . a later , yet equally famous experiment was performed by narayanan _ et al . _ on a dataset released by netflix for a data-mining contest , which was cross-correlated with a web scraping of the popular imdb website . the authors were able to match two users from both datasets , revealing , e.g. , their political views . mobile traffic datasets include micro-data collected at different locations of the cellular network infrastructure , concerning the movements and traffic generated by thousands to millions of subscribers , typically for long timespans in the order of weeks or months . they have become a paramount instrument in large-scale analyses across disciplines such as sociology , demography , epidemiology , or computer science . unfortunately , mobile traffic datasets may also be prone to attacks on individual privacy . specifically , they suffer from the following two issues . 1 . * elevated uniqueness . * mobile subscribers have very distinctive patterns that often make them unique even within a very large population . zang and bolot showed that 50% of the mobile subscribers in a 25 million-strong dataset could be uniquely detected with minimal knowledge about their movement patterns , namely the three locations they visit most frequently . the result was corroborated by de montjoye _ et al . _ , who demonstrated how an individual can be pinpointed among 1.5 million other mobile customers with a probability almost equal to one , by just knowing five random spatiotemporal points contained in his mobile traffic data . uniqueness does not imply identifiability , since the sole knowledge of a unique subscriber trajectory cannot disclose the subscriber's identity . building that correspondence requires instead sensitive side information and cross-database analyses similar to those carried out on medical or netflix records . to date , there has been no actual demonstration of subscriber re-identification from mobile traffic datasets using such techniques , and our study does not change that situation .
still , uniqueness may be a first step towards re - identification , and whether this represents a threat to user privacy is an open topic for discussion .in such a context , the standard , safe approach to ensure data confidentiality relies on non - technical solutions , i.e. , non - disclosure agreements that well define the scope of the activities ( e.g. , fundamental research only ) carried out on the datasets , and that prevent open disclosure of the data or results without prior verification by the relevant authorities .this is , for instance , the solution adopted in the case of the mobile traffic information we will consider in sec.[sec : datasets ] .clearly , this practice can strongly limit the availability of mobile traffic datasets , as well as the reproducibility of related research .mitigating the uniqueness of subscriber trajectories becomes then a very desirable facility that can entail more privacy - preserving datasets , and favor their open circulation .it is however at this point that the second problem of mobile traffic datasets comes into play . 1 .* low anonymizability . *the legacy solution to reduce uniqueness in micro - data datasets is generalization and suppression .however , previous studies showed that blurring users in the crowd , by reducing the spatial and temporal granularity of the data , is hardly a solution in the case of mobile traffic datasets .zang and bolot found that reliable anonymization is attained only under very coarse spatial aggregation , namely when the mobile subscriber location granularity is reduced to the city level .similarly , de montjoye _ et al . _ proved that a power - law relationship exists between uniqueness and spatiotemporal aggregation of mobile traffic .this implies that privacy is increasingly hard to ensure as the resolution of a dataset is reduced . in conclusion ,not only mobile traffic datasets yield highly unique trjectories , but the latter are also hard to anonymize . ensuring individual privacy risks to lower the level of detail of such datasets to the point that they are not informative anymore . in this work ,we aim at better investigating the reasons behind such inconvenient properties of mobile traffic datasets .we focus on anonymizability , since it is a more revealing feature : multiple datasets that feature similar trajectory uniqueness may be more or less difficult to anonymize . attaining our objective brings along the following contributions : ( i ) we define a measure of the level of anonymizability of mobile traffic datasets , in sec.[sec : metric ] ; ( ii ) we provide a first assessment of the anonymizability of two large - scale mobile traffic datasets , in sec.[sec : datasets ] ; ( iii )we unveil the cause of elevate uniqueness and poor anonymizability in such datasets , i.e. , the heavy tail of the temporal diversity among subscriber mobility patterns , in sec.[sec : results ] . finally , sec.[sec : conc ] concludes the paper .in this section , we first define in a formal way the problem of user uniqueness in mobile traffic datasets , in sec.[sub : problem ] .then , we introduce the proposed measure of anonymizability , in sec.[sub : measure ] . in order to properly define the problem we target , we need to introduce the notion of mobile traffic fingerprint that is at the base of the mobile traffic dataset format .we also need to specify the type of anonymity we consider in our case , -anonymity .next , we discuss these aspects of the problem . [tab : db_std ] .standard micro - data database format . 
[ cols="<,<,<,<,<,<,<",options="header " , ] [ tab : db_mt ] a & ,8 & ,14 & ,17 & + b & ,8 & ,15 & ,15 & & ,15 & ,16 & ,17 + c & ,7 & ,20 & + & & + traditional micro - data databases are structured into matrices where each row maps to one individual , and each column to an _attribute_. an example is provided in tab.[tab : db_std ] .individuals are associated to one _ identifier _ , i.e. , a value that uniquely pinpoints the user across datasets ( e.g. , his complete name , social number , or passport number ) .since identifiers allow direct identification and immediate cross - database correlation , they are never disclosed .instead , they are replaced by a _ pseudo - identifier _ , which is again unique for each individual , but changes across datasets ( e.g. , a random string substituting the actual identifier ) . then , standard re - identification attacks leverage _ quasi - identifiers _ , i.e. , a sequence of known attributes of one user ( e.g. , the age , gender , zip code , etc . ) to recognize the user in the dataset . if successful , the attacker has then access to the complete record of the target user .this knowledge can directly include sensitive attributes , i.e. , items that should not be disclosed because they may pertain to the personal sphere of the individual ( e.g. , diseases , political or religious views , sexual orientation , etc . ) .it can also be exploited for further cross - database correlation so as to extract additional private information about the user .the same model directly applies to the case of mobile traffic datasets .however , the database semantics make all the difference here : while mobile users are the obvious individuals whose privacy we want to protect , attributes are now sequences of spatiotemporal samples .each sample is the result of an event that the cellular network associated to the user .an illustration is provided in fig.[fig : map_def ] , which portrays the trajectories of three mobile customers , denoted with pseudo - identifiers , , and , respectively , across an urban area .user interacts with the radio access infrastructure at 8 am , while he is in cell along his trajectory .then , he triggers additional mobile traffic activities at 2 pm , while located in a cell in the city center , and at 5 pm , from a cell in the south - east city outskirts .the same goes for users and .all these spatiotemporal samples are recorded by the mobile operator and constitute the _ mobile traffic fingerprint _ of the user .the resulting database has a format such as that in tab.[tab : db_mt ] , where subscriber identifiers are replaced by pseudo - identifiers , and each element of a user s fingerprint is a cell and hourly timestamp pair . in order to preserve user privacy in micro - data, one has to ensure that no individual can be uniquely pinpointed in a dataset .this principle has led to the definition of multiple notions of non - uniqueness , such as -anonymity , -diversity and -closeness . among those ,-anonymity is the baseline criterion , to which -diversity or -closeness add further security layers that cope with sensitive attributes or cross - database correlation .more precisely , -anonymity ensures that , for each individual , the set of attributes ( or its quasi - identifier subset ) is identical to that of at least other - 1 users . 
in other words ,each individual is always hidden in a crowd of , and thus he can not be uniquely identified among such other users .granting -anonymity in micro - data databases implies generalizing and suppressing data .as an example , in order to ensure -anonymity on the age and zip code attributes for the first user in tab.[tab : db_std ] , one can aggregate the age in twenty - year ranges , and the zip codes in three - number ranges : both the first and second user end up with a ` ( 20,40 ) ` age and ` 770 * * ` zip code , which makes them both -anonymous . clearly , the process is lossy , since the information granularity is reduced .many efficient algorithms have been proposed that achieve -anonymity in legacy micro - data databases , while minimizing information loss .also in mobile traffic datasets , -anonymity is regarded as a best practice , and data aggregation is the common approach to achieve it . in this case, one has to ensure that the fingerprint of each subscriber is identical to that of at least other - 1 mobile users in the same dataset .we remark that previous works have typically considered a model of attacker who only has partial knowledge of the subscribers fingerprints , e.g. , most popular locations or random samples . in order to counter such a attack model , a partial -anonymization , targeting the limited information owned be the attacker , would be sufficient .however , we are interested in a general solution , so we do not make any assumption on the precise knowledge of the attacker , which can be diverse and possibly broad . thus, -anonymizing the whole fingerprint of each subscriber in the mobile traffic dataset is the only way to deterministically ensure mobile user privacy .both spatial and temporal aggregations can be leveraged to attain this goal .examples are provided in fig.[fig : map_arr-2h ] and fig.[fig : map_half-12h ] . in fig.[fig : map_arr-2h ] , cells are aggregated in large sets that roughly map to the nine major neighborhoods of the urban area ; also , time is aggregated in two - hour intervals . 
the reduction of spatiotemporal granularity allows -anonymizing mobile users and : both have now a fingerprint composed by samples ` ( v,8 - 9 ) ` , ` ( iii,14 - 15 ) ` , and ` ( vii,16 - 17 ) ` .user has instead a different footprint , with samples ` ( iv,6 - 7 ) ` and ` ( iii,20 - 21 ) ` .if we need to -anonymize all three mobile customers in the example , then a further generalization is required , as in fig.[fig : map_half-12h ] .there , the metropolitan region is divided in west and east halves , and only two time intervals , before and after noon , are considered .the result is that all subscribers , , and have identical fingerprints ` ( west,1 - 12 ) ` and ` ( east,13 - 24 ) ` .clearly , this level of anonymization comes at a high cost in terms of information loss , as the location data is very coarse both in space and time .this is precisely the problem of low anonymizability of mobile traffic datasets unveiled by previous works : even guaranteeing -anonymization in a very large population requires severe reductions of the spatiotemporal granularity , which limits the usability of the data .we intend to devise a measure of anonymizability that is based on the -anonymity criterion .thus , our proposed measure evaluates the effort , in terms of data aggregation , needed to make a user indistinguishable from - 1 other subscribers .we start by defining the distance between two spatiotemporal samples in the mobile traffic fingerprints of two mobile users . each sample is composed of a spatial information ( e.g. , the cell location ) and a temporal information ( e.g. , the timestamp ) .the distance must keep into account both dimensions . a generic formulation of the distance between the -th sample of s fingerprint , , and the -th sample of s fingerprint , , is here , and are functions that determine the distance along the spatial and temporal dimensions , respectively .the former thus operates on the spatial information in the two samples , and , and the latter on the temporal information , and .the factors and weigth the spatial and temporal contributions in ( [ eqn : sampledist ] ) . in the following , we will assume that the two dimensional have the same importance , thus .we shape the and functions by considering that both spatial and temporal aggregations induce a loss of information that is linear with the decrease of granularity .however , above a given spatial or temporal threshold , the information loss is so severe that the data is not usable anymore . as a result, the functions can be expressed as and in ( [ eqn : deltas ] ) , is the _ taxicab distance _ between the spatial components of the samples , whose coordinates are denoted as and in a valid map projection system .both functions fulfill the properties of distances , i.e. , are positive definite , symmetric , and satisfy the triangle inequality .they range from ( samples are identical from a spatial or temporal viewpoint ) to ( samples are at or beyond the maximum meaningful aggregation threshold ) . concerning the values of the thresholds , in the following we will consider that the aggregation limits beyond which the information deprivation is excessive are 20 km for the spatial dimension ( i.e. , the size of a city , beyond which all intra - urban movements are lost ) and 8 hours ( beyond which the night , working hours , and evening periods are merged together ) . 
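To make the sample distance concrete, below is a minimal Python sketch of the distance between two spatiotemporal samples as described above. The taxicab spatial distance, the linear-then-saturating behaviour of the two components, the 20 km and 8 h thresholds, and the equal weights follow the text; the exact normalization (dividing by the threshold so that each component lies in [0, 1]) and the coordinate/time units are assumptions, not the authors' verbatim expressions.

```python
def spatial_delta(p, q, d_max_km=20.0):
    """Spatial component: taxicab distance between projected coordinates (km),
    linear up to d_max_km and saturating at 1 beyond it (assumed normalization)."""
    taxicab = abs(p[0] - q[0]) + abs(p[1] - q[1])
    return min(taxicab / d_max_km, 1.0)


def temporal_delta(t1, t2, t_max_h=8.0):
    """Temporal component: absolute time difference (hours),
    linear up to t_max_h and saturating at 1 beyond it (assumed normalization)."""
    return min(abs(t1 - t2) / t_max_h, 1.0)


def sample_distance(s_a, s_b, alpha=0.5, beta=0.5):
    """Distance between two samples, each given as ((x_km, y_km), t_hours).
    alpha = beta = 0.5 gives the two dimensions equal importance, as in the text."""
    (loc_a, t_a), (loc_b, t_b) = s_a, s_b
    return alpha * spatial_delta(loc_a, loc_b) + beta * temporal_delta(t_a, t_b)
```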
the sample distance in ( [ eqn : sampledist ] ) can be used to define the distance among the whole fingerprints of two mobile subscribers and , as here , and are the cardinalities of the fingerprints of and , respectively . the expression in ( [ eqn : fingerdist ] ) takes the longer fingerprint between the two , and finds , for each sample , the sample at minimum distance in the shorter fingerprint .the resulting is the average among all such sample distances , and , .the measure of anonymizability of a generic mobile user can be mapped , under the -anonymity criterion , to the average distance of his fingerprint from those of the nearest - 1 other users .formally where is the set of users with the smallest fingerprint distances to that of .the expression in ( [ eqn : delta ] ) returns a measure $ ] that indicates how hard it is to hide subscriber in a the crowd of users .if , then the user is already -anonymized in the dataset . if , the user is completely isolated , i.e. , no sample in the fingerprints of all other subscribers is within the spatial and temporal thresholds , and , from any samples of s fingerprint .we employ the proposed measure to assess the level of anonymizability of fingerprints present in two mobile traffic datasets released by orange in the framework of the data for development challenge . in order to allow for a fair comparison , we preprocessed the datasets so as to make them more homogeneous .* * ivory coast .* released for the 2012 challenge , this dataset describes five months of call detail records ( cdr ) over the whole the african nation of ivory coast .we used the high spatial resolution dataset , containing the complete spatio temporal trajectories for a subset of 50,000 randomly selected users that are changed every two weeks .thus , the dataset contains information about 10 2-weeks periods overall .we performed a preliminary screening , discarding the most disperse trajectories , keeping the users that have at least one spatio - temporal point per day .then , we merged all the user that met this criteria in a single dataset , so as to achieve a meaningful size of around 82,000 users .this dataset is indicated as ` d4d - civ ` in the following . ** senegal . *the 2014 challenge dataset is derived from cdr collected over the whole senegal for one year .we used the fine - grained mobility dataset , containing a randomly selected subset of around 300,000 users over a rolling 2-week period , for a total of 25 periods .we did not filter out subscribers , since the dataset is already limited to users that are active for more than 75% of the 2-week time span . in our study, we consider one representative 2-week period among those available . this dataset is referred to as ` d4d - sen ` in the following . in both the mobile traffic datasets ,the information about the user position is provided as a latitude and longitude pair .we projected the latter in a two - dimensional coordinate system using the lambert azimuthal equal - area projection .we then discretize the resulting positions on a 100-m regular grid , which represents the maximum spatial granularity we consider .as far as the temporal dimension is concerned , the maximum precision granted by both datasets is one minute , and this is also our finest time granularity .the measure of anoymizability in ( [ eqn : delta ] ) can be intended as a dissimilarity measure , and employed in legacy definitions used to understand micro - data database sparsity , e.g. , ` ( , ) -sparsity ` . 
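Before turning to the empirical results, the fingerprint distance and the anonymizability measure defined above can be sketched as follows, reusing `sample_distance` from the previous snippet. Averaging over the longer fingerprint and over the k-1 nearest other fingerprints follows the description in the text; treat this as an illustrative reading rather than the authors' exact implementation.

```python
def fingerprint_distance(f_a, f_b):
    """Distance between two fingerprints (lists of spatiotemporal samples):
    for each sample of the longer fingerprint, take the minimum sample distance
    to the shorter fingerprint, then average over those minima."""
    longer, shorter = (f_a, f_b) if len(f_a) >= len(f_b) else (f_b, f_a)
    return sum(min(sample_distance(s, t) for t in shorter) for s in longer) / len(longer)


def anonymizability(user_fp, other_fps, k=2):
    """Measure of how hard it is to hide a user in a crowd of k:
    the mean fingerprint distance to the k-1 closest other users.
    0 means the user is already k-anonymous; 1 means completely isolated."""
    dists = sorted(fingerprint_distance(user_fp, fp) for fp in other_fps)
    return sum(dists[: k - 1]) / (k - 1)
```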
however , these definitions are less informative than the complete distribution of the anonymizability measure .thus , in this section , we employ cumulative distribution functions ( cdf ) of the measure in ( [ eqn : delta ] ) in order to assess the anonymizability of the two datasets presented before .-anonymity criterion , in the ` d4d - civ ` and ` d4d - sen ` mobile traffic datasets . ]our basic result is shown in fig.[fig : d4d_cdf ] .the plot portrays the cdf of the anonymizability measure computed on all users in the two reference mobile traffic datasets , ` d4d - civ ` and ` d4d - sen ` , when considering -anonymity as the privacy criterion .we observe that the two curves are quite similar , and both are at zero in the x - axis origin .this means that no single mobile subscriber is -anonymous in either of the original datasets .since similar observations were made on different data , our results seem to confirm that the elevate uniqueness of subscriber trajectories is an intrinsic property of any mobile traffic dataset , and not just a specificity of those we analyse in this study .more interestingly , the probability mass gathered in both cases in the - range , i.e. , it is quite close to the origin .this is good news , since it implies that the average aggregation effort needed to achieve -anonymity is not elevate .as an example , 50% of the users in the ` d4d - civ ` dataset have a measure or less , which maps , on average , to a combined spatiotemporal aggregation of less than one km and little more than 20 minutes .in other words , the result seems to suggest that half of the individuals in the dataset can be -anonymized if the spatial granularity is decreased to 1 km , and the temporal precision is reduced to around 20 minutes .similar considerations hold in the ` d4d - sen ` case , where , e.g. , 80% of the dataset population has a measure or less .such a measure is the result of average spatial and temporal distances of 1.7 km and 41 minutes from -anonymity .one may wonder how more stringent privacy requirements affect these results .fig.[fig : d4d_vark ] shows the evolution of the anonymizability of the two datasets when varies from to .as expected , higher values of require that a user is hidden in a larger crowd , and thus shift the distributions towards the right , implying the need for a more coarse aggregation .however , quite surprisingly , the shift is not dramatic : -anonymity does not appear much more difficult to reach than -anonymity .unfortunately , the easy anonymizability suggested by the distributions is only apparent .fig.[fig : d4d_agg ] depicts the impact of spatiotemporal generalization on anonymizability : each curve maps to a different level of aggregation , from meters and minute ( the finer granularity ) to km and 8 hours .as one could expect , the curves are pushed towards smaller values of the anonymizability measure .however , the reduction of spatiotemporal precision does not have the desired magnitude , and even a coarse - grained citywide , 8-hour aggregation can not -anonymize but 30% of the mobile users .this result is again in agreement with previous studies , and confirms that mobile traffic datasets are difficult to anonymize .we are interested in understanding the reasons behind the incongruity above , i.e. , the fact that spatiotemporal aggregation yields such poor performance , even if the average effort needed to attain -anonymity is in theory not elevate . 
to attain our goal, we proceed along two directions .first , we separate the spatial and temporal dimensions of the measure in ( [ eqn : delta ] ) , so as to understand their precise contribution to the dataset anonymizability .second , we measure the statistical dispersion of the fingerprint distances along the two dimensions : the rationale is that we observed the average distance among fingerprints to be quite small , thus the reason of the low anonymizability must lie in the deviation of sample distances around that mean . formally , we consider , for each user in the dataset , the set of - 1 other subscribers whose fingerprints are the closest to that of , according to ( [ eqn : delta ] ) . then , we disaggregate all the fingerprint distances between and the users into sample distances , as per ( [ eqn : fingerdist ] ) . finally , we separately collect the spatial and temporal components of all such sample distances , in ( [ eqn : sampledist ] ) , into ordered sets and .the resulting sets can be treated as disjoint distributions of the distances , along the spatial and temporal dimensions , between the fingerprint of a generic individual and those of the - 1 other users that show the most similar patterns to his .examples of the spatial and temporal distance distributions we obtain in the case of -anonymity are shown in fig.[fig : d4d_dis1]-[fig : d4d_dis5 ] .each plot refers to one random user in the ` d4d - civ ` or ` d4d - sen ` dataset , and portrays the cdf of the spatial ( ) and temporal ( ) component distance , as well as that of the total sample distance ( ) . we can remark that temporal components typically bring a significantly larger contribution to the total fingerprint distance than spatial ones .in fact , a significant portion of the spatial components is at zero distance , i.e. , is immediately -anonymous in the original dataset .the same is not true for the temporal components .a rigorous confirmation is provided in fig.[fig : d4d_diswei ] , which shows the distribution of the temporal - to - spatial component ratios , i.e. , , for all subscribers in the two reference datasets .the cdf is skewed towards high values , and for half of mobile subscribers in both ` d4d - civ ` or ` d4d - sen ` datasets temporal components contribute to 80% or more of the total sample distance .we conclude that the temporal component of a mobile traffic fingerprint is much harder to anonymize than the spatial one . 
in other words , _an individual generates mobile traffic activity is easily masked , but hiding _ when _ he carries out such activity it is not so .not only temporal components weight much more than spatial ones in the fingerprint distance , but they also seem to show longer tails in fig.[fig : d4d_dis1]-[fig : d4d_dis5 ] .longer tails imply the presence of more samples with a large distance : this , in turn , significantly increases the level of aggregation needed to achieve -anyonimity , as the latter is only granted once all samples in the fingerprint have zero distance from those in the second fingerprint .we rigorously evaluate the presence of a long tail of hard - to - anonymize samples by means of two complementary metrics , still separating their spatial and temporal components .the first metric is the gini coefficient , which measures the dispersion of a distribution around its mean .considering an ordered set , the coefficient is computed as where is the cardinality of .we compute the gini coefficient on the sets and , for all users .the second metric is the tail weight index , which quantifies the weight of the tail of a distribution with empirical cdf as in the expression above , is the inverse function of the empirical cdf and is the inverse function of a standard normal cdf .we compute again the tail weight index on the distributions obtained from both and , for all .fig.[fig : d4d_ginitail ] shows the results returned by the two metrics in the ` d4d - civ ` or ` d4d - sen ` datasets .no significant differences emerge among the two mobile traffic datasets . in both cases , the gini coefficient , in fig.[fig : d4d_gini_sen ] and fig.[fig : d4d_gini_civ ] ,has , for all mobile user fingerprints ( ) , high values around that denote significant dispersion around the mean .however , two opposite behaviors are observed for the spatial ( ) and temporal ( ) components .the former show cases where no dispersion at all is recorded ( coefficient close to zero ) , and cases where the distribution is very sparse . the latter has the same behavior as the overall distance , with values clustered around .the result ( i ) corroborates the observation that the overall anonymizability is driven by distances along the temporal dimension , and ( ii ) imputes the latter to the complete absence of easy - to - anonymize short tails in the distribution of temporal distances .fig.[fig : d4d_tail_sen ] and fig.[fig : d4d_tail_civ ] show instead the cdf of tail weight indices . here , the result is even more clear : the tail of temporal component distances is typically much longer than that of spatial ones , and in between those of exponential and heavy - tailed distributions .once more , the temporal component tail fundamentally shapes that of the overall fingerprint distance .at the light of all previous observations , we confirm the findings of previous works on user privacy preservation in mobile traffic datasets .namely , the two datasets we analysed do not grant -anonymity , not even for the minimum = 2 .moreover , our reference datasets show poor anonymizability , i.e. , require important spatial and temporal generalization in order to slightly improve user privacy .the fact that these properties have been independently verified across diverse datasets of mobile traffic suggests that the elevate uniqueness of trajectories and low anonymizability are intrinsic properties of this type of datasets . 
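As an aside, the dispersion metric used in the preceding analysis, the Gini coefficient of an ordered set of component distances, can be computed with a standard formulation such as the one below. The paper's exact expression is not reproduced here, so this is an assumed, commonly used variant rather than the authors' formula.

```python
def gini(values):
    """Gini coefficient of a set of non-negative distances (assumed standard
    formulation): 0 = no dispersion around the mean, values near 1 = strong
    concentration of the total mass in few elements."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```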
in our case , even a citywide ,8-hour aggregation is not sufficient to ensure complete -anonymity to all subscribers .the result is even worse than that observed in previous studies : the difference is due to the fact that we consider the anonymization of complete subscriber fingerprints , whereas past works focus on simpler obfuscation of summaries or subsets of the fingerprints . on the one hand ,the typical mobile user fingerprint in such datasets is composed of many spatiotemporal samples that are easily hidden among those of other users in the dataset .this leads to fingerprints that appear easily anonymizable , since their samples can be matched , _ on average _ , with minimal spatial and temporal aggregation .on the other hand , mobile traffic fingerprints tend to have a non - negligible number of elements that are much more difficult to anonymize than the average sample .these elements , which determine a characteristic dispersion and long - tail behavior in the distribution of fingerprint sample distances , are mainly due to a significant diversity along the temporal dimension . in other words, mobile users may have similar spatial fingerprints , but their temporal patterns typically contain a non - negligible number of dissimilar points .it is the presence of these hard - to - anonymize elements in the fingerprint that makes spatiotemporal aggregation scarcely effective in attaining anonymity .indeed , in order to anonymize a user , one needs to aggregate over space and time , until all his long - tail samples are hidden within the fingerprints of other subscribers . as a result ,even significant reductions of granularity ( and consequent information losses ) may not be sufficient to ensure non - uniqueness in mobile traffic datasets . as a concluding remark , we recall that such uniqueness does not implies direct identifiability of mobile users , which is much harder to achieve and requires , in any case , cross - correlation with non - anonymized datasets . instead , uniqueness is a first step towards re - identification .understanding its nature can help developing mobile traffic datasets that are even more privacy - preserving , and thus more easily accessible .h. zang and j. bolot _ anonymization of location data does not work : a large - scale measurement study_. proceedings of the 17th annual international conference on mobile computing and networking .acm , 2011 . | preserving user privacy is paramount when it comes to publicly disclosed datasets that contain fine - grained data about large populations . the problem is especially critical in the case of mobile traffic datasets collected by cellular operators , as they feature elevate subscriber trajectory uniqueness and they are resistant to anonymization through spatiotemporal generalization . in this work , we investigate the -anonymizability of trajectories in two large - scale mobile traffic datasets , by means of a novel dedicated measure . our results are in agreement with those of previous analyses , however they also provide additional insights on the reasons behind the poor anonimizability of mobile traffic datasets . as such , our study is a step forward in the direction of a more robust dataset anonymization . |
we show in this paper by elementary means that the support of the capacity achieving input measure for multiple - input multiple - output ( mimo ) rayleigh fading channels subject to average power constraint with coherence time is bounded .a generalization of the result to coherence intervals of size seems to be highly non - trivial and will probably require a substantial extension of the techniques used here supplemented by some results and methods from the `` hard analysis '' .+ previous fundamental achievements , e.g. , follow the same procedure which can be traced back to the classic paper by smith . the basic tools are the karush - kuhn - tucker ( kkt ) conditions from the theory of convex optimization supported by an application of the identity theorem ( also known as the uniqueness theorem ) from complex analysis . our approach is based on the kkt conditions too but avoids the usage of the identity theorem .+ in abou - faycal , trott , and shamai proved , using these techniques , that for a one - dimensional rayleigh fading channel the optimal input measure subjected to an average power constraint to be discrete with a finite number of mass points . in chan , hranilovic , andkschischang showed for a mimo rayleigh block - fading channel with i.i.d .channel matrix coefficients that the optimum input distribution subjected to peak and average power constraint contains a finite number of mass points with respect to a specific norm .in addition fozunbal , mclaughlin , and schafer argued in that a bounded support of the capacity maximizer implies its singularity with respect to the borel - lebesgue measure .the approach in is based on the identity theorem for holomorphic functions in several complex variables and use the assumption that an open set in fulfills the hypothesis of the identity theorem in .we show in section [ sec - discussion ] by a simple example that the conclusion of the identity theorem fails in this setting. consequently , these results are not rigorous . since , in contrast to the complex analysis in one variable , it is still an open difficult problem to characterize the families of sets for which the identity theorem for holomorphic functions in several complex variables holds we can not hope to understand the properties of the capacity maximizers in the present setting by an reduction to uniqueness properties of holomorphic functions in higher dimensions .therefore , it is likely that we will be forced to develop or apply `` real - analytic '' tools for tackling this important communication - theoretic problem .+ the paper is organized as follows : section [ b ] provides some basic definitions and is followed by section [ sec - bounded - support ] which contains the main result of this paper . as mentioned above , in section [ sec - discussion ] we give an elementary example that shows that the application of the identity theorem in higher dimensions is , in general , not admissible if we want to understand the properties of capacity maximizers of rayleigh fading channels .+ _ notation ._ throughout the paper we will denote the set of complex -by- matrices by and will freely identify this set with . 
stands for the logarithm to the base .capital letters are reserved for random variables .we consider a rayleigh fading channel with the coherence time which is described by with coefficient matrices , + and , where the the channel h is assumed to be complex circularly symmetric gaussian with zero mean and with covariance matrix and the additive noise coefficients are assumed to be i.i.d .complex circularly symmetric gaussian with .let be the set of probability measures on + .then the set with the average power constraint of the transmitted signal is weak * compact as it was shown in and .+ if is the set of conditional probability measures on + we can determine the channel by a set , where is absolutely continuous with respect to borel - lebesgue measure . for the rayleigh fading channel the conditional probability density of the received signals conditioned on the input symbol is given by }}{\pi^{m}\textrm{det } ( \sigma^2_z{\mathbf{1}}_{m } + ( { \mathbf{1}}_m\otimes x^h)\sigma ( { \mathbf{1}}_m\otimes x ) ) } \ ] ] with covariance matrix of let be a probability measure and define then the mutual information of the channel with no csi at the receiver is given by the mutual information is a weak * continuous functional on the weak * compact and convex set ( see ) . thus the functional achieves its maximum on by the following let be a weak * continuous real - valued functional on a weak * compact subset of .then is bounded on and achieves its maximum on . the mutual information is strictly concave functional on up to _ equivalence _ of measures .hereby , two measures are called equivalent if .so its maximum on is achieved by a unique input distribution up to equivalence defined above .hence , with there exists a measure that achieves the capacity of the channel and is unique up to equivalence of measures .the aim of this paper is to show that subjected to an average power constraint the capacity achieving distribution of the channel has an bounded support .the purpose of this section is to show that the support of the capacity achieving input measure for the channel given in ( [ channel - def ] ) , with coherence time , is bounded . + for with we set with .+ [ lemma1 ] let with and with be given . then with and and are the minimum and maximum eigenvalues of the covariance matrix . by the defining relation ( [ int - measure ] ) we have next we define + whereas the maximum of the function is achieved on because of the compactness of . hence ,+ for we obtain }}{\pi^{m}\pi}.\ ] ] for every we have where and are the minimum and maximum eigenvalues of the hermitian and strictly positive covariance matrix . by the definition of we have hence, it follows that for two operators with and a positive operator we have due to the fact that the operators in ( [ ineq ] ) are hermitian and positive and the same holds for and because the function is operator monotone for all positive operators , we have \geq \\ \textrm{tr}\left[((\sigma^2_z + \lambda_{min}x^hx ) { \mathbf{1}}_m)^{-1}yy^h\right ] \\\geq \textrm{tr}\left[(\sigma^2_z{\mathbf{1}}_{m } + ( { \mathbf{1}}_m\otimes x^h)\sigma({\mathbf{1}}_m\otimes x ) ) ^{-1}yy^h \right].\end{gathered}\ ] ] with ( [ ineq - max ] ) it follows that for }}{\pi^{m}\pi}\ ] ] inserting this into ( [ ineq - shell ] ) yields }.\ ] ] therewith we get & p(y|x)f_(y)dy & + & p(y|x)dy & + & = - p(y|x ) dy & + & = - p(y|x ) dy & + & = - & + & - & + & = - & & a:=. 
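For reference, the channel law and the mutual information functional used in the remainder of this section can be restated in a cleaned-up form. This is a reconstruction from the fragments above: in particular, the exponent of the Gaussian density and the use of M for the dimension of the received signal are assumptions, not a verbatim reproduction of the authors' equations.

```latex
% C(x): conditional covariance of the received signal given input x (reconstruction)
C(x) \;=\; \sigma_Z^{2}\,\mathbf{1}_{M}
        \;+\; (\mathbf{1}_{m}\otimes x^{H})\,\Sigma\,(\mathbf{1}_{m}\otimes x),
\qquad
p(y\,|\,x) \;=\; \frac{\exp\!\bigl[-\,y^{H}C(x)^{-1}y\bigr]}{\pi^{M}\,\det C(x)} .

% induced output density and mutual information of an input measure mu
f_{\mu}(y) \;=\; \int p(y\,|\,x)\,\mathrm{d}\mu(x),
\qquad
I(\mu) \;=\; \int\!\!\int p(y\,|\,x)\,
     \log\frac{p(y\,|\,x)}{f_{\mu}(y)}\;\mathrm{d}y\,\mathrm{d}\mu(x).
```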
& determining the capacity achieving input distribution subjected to average power constraint is a convex optimization problem .necessary conditions for the optimal input distribution can be derived from the local _ karush - kuhn - tucker _ conditions .together with the fact that the mutual information is a concave functional and the convexity of the constraint functional we obtain ( see and ) , that achieves capacity if and only if with equality if , where denotes the lagrange multiplier and is the constraint under consideration .it is fairly standard fact that \nonumber\end{gathered}\ ] ] and ( [ loc - kkt ] ) can be therefore rewritten as [ loc - kkt3 ] & ( x^2 - a)+c(a)+ ( e)^m + & + & + ( ^2_z_m + ( _ m x^h ) ( _ m x ) ) + & + & + p(y|x)f_(y)dy 0 & with equality if . let [ kkt ] & kkt(x):= ( x^2 - a)+c(a)+ ( e)^m + & + & + ( ^2_z _m + ( _ m x^h ) ( _ m x ) ) + & + & + p(y|x)f_(y)dy & then ( [ loc - kkt ] ) can be rephrased as for and if .+ the following theorem gives a sufficient condition for the boundedness of the support of the capacity achieving measure in terms of the lagrange multiplier .[ lemma - bounded - input ] let be given and let be a capacity achieving input measure subject to the average power constraint for the channel ( [ channel - def ] ) .then implies that is bounded .the proof is by contradiction .suppose that and that is not bounded . by our assumptions we can find with the following properties : applying _ lemma _ [ lemma1 ] to the function defined in ( [ kkt ] ) we obtain the following inequality .[ kktb ] & kkt(x ) ( x^2 - a)+c(a)+ ( e)^m + & + & + ( ^2_z_m + ( _ m x^h ) ( _ m x))+ & + & + a - & + & = x^2 ( - ) -a+ c(a)+(e)^m + & + & + ( ^2_z _ m + ( _ m x^h ) ( _ m x))+ & + & + a - & combining the karush - kuhn - tucker conditions and ( [ kktb ] ) we obtain that for any & 0=kkt(x ) & + & x^2 ( - ) -a+c(a)+(e)^m + & + & + ( ^2_z_m + ( _ m x^h ) ( _ m x))+ & + & + a - & but this last inequality with our assumption that is not bounded , ( [ fac_greater_zero ] ) , and the fact that & x^2 ( - ) x & + & & + & ( ^2_z _ m + ( _ m x^h ) ( _ m x ) ) x & implies that , which is the desired contradiction . in view of lemma [ lemma - bounded - input ]our remaining goal is to show that for each .for example in abou - faycal , trott , and shamai showed this in the scalar case .our proof of the corresponding result in mimo case below is strongly motivated by their approach via fano s inequality .[ gamma>0 ] for the channel given in ( [ channel - def ] ) we have for each .as mentioned above the proof is an extension of the argument given in .the capacity functional is a non - decreasing and concave function of the argument .it was observed in using _ global _ karush - kuhn - tucker conditions that is the slope of the tangent line to at ( cf . , section iii.b and appendix ii.a ) .thus , since is non - decreasing and concave , it can be shown that implies for all need not be differentiable .however , is differentiable a.e . due to the monotonicity and concavity .the proof that for all follows a standard line of reasoning from the real analysis and is skipped due to the space limitation .the full argument will be given elsewhere . ] .consequently , we can rule out the possibility that by showing the existence of a sequence of input measures such that the corresponding sequence of mutual informations approaches .+ we will be done if there is such that for each we can find distinct and disjoint measurable sets such that for all . 
because a simple application of fano s inequality with block length shows then that for the input measures ( is the point measure concentrated on ) we have now we define where denotes the surface area of the unit sphere in and are the smallest and the largest eigenvalues of and let be given .+ we will now present the construction of the vectors and the decoding sets . let with be fixed and consider a large positive real number that will be specified later .set for where .+ let denote the smallest eigenvalue of .for we set and where as shown in the proof of lemma [ lemma1 ] we have using ( [ gamma-1 ] ) and transforming to spherical coordinates in obtain where denotes the surface area of the unit sphere in and .after the substitution in the integral on the rhs of the inequality ( [ gamma-2 ] ) we arrive at in what follows we use the abbreviation the defining relation ( [ gamma-0 ] ) and our assumption that ensure that . using this and ( [ gamma-3 ] )we are led to for all .now , since , , and it is clear that and from ( [ gamma-4 ] ) we have for all .thus if we choose our sufficiently large ( [ gamma-5 ] ) and these limit relations ensure that for all .moreover it is clear that the sequence of second moments of the measures can be made arbitrarily large for large .this concludes our proof by the remarks given at the beginning of the argument .now , we can summarize our results obtained so far in the following fashion : [ theorem - bounded - input ] we consider the channel defined by ( [ channel - def ] ) .then the support of the capacity achieving input measure is bounded .simply apply lemma [ gamma>0 ] and lemma [ lemma - bounded - input ] .with the embedding function with and and the transformed channel we get an extension of the function & kkt(x ) : m(n1,c ) r & + & & + & kkt(z ) : m(2n1,c ) c & + & & + & kkt(z):=(z^tz - a)+c(a)-(|z)d . & and are obtained by changing the channel matrix and the channel output according the transformation of the input under ( in p. 2081 ,moreover it is easily seen using fubini s theorem from measure theory and morera s theorem from the complex analysis in several variables ( cf . ) that this extension of the function is holomorphic .but , unfortunately , it is _ not _ true that the identity theorem ( also known as the uniqueness theorem ) holds for open sets in as the following standard example shows : + _ example ._ we consider the simplest non - trivial case .let denote the standard basis of and let be defined as where denotes the transpose and are the coordinates of with respect to the basis .clearly , is holomorphic and the set of zeros of is in what follows we identify with . is , by definition , open in the natural topology on ( but it is _ not _ open in the natural topology of , it is a closed linear subspace of ) , and the function is , apparently , not identically zero on . + note that this example with the identical arguments shows also that the conclusion of the identity theorem is not valid for open balls , say , in . 
if is any open ball in then for all but , again , on .the reason is , as before , that an open ball in ( with the natural topology of ) is not open in the topology of .+ this last example shows that the proof of proposition 4.3 in is not correct , since it assumes the validity of the identity theorem in exactly this setting .it is this proposition 4.3 in which would allow us to conclude that the support of the capacity achieving input measure contains no open sets ( in ) provided we know that this support is bounded .+ actually , the authors of this paper are convinced that we need different mathematical techniques to tackle the problem of characterization of the optimal inputs for multiple antenna rayleigh fading systems not relying on the identity theorem .one reason for this opinion is the fact that the characterization of sets for which the identity theorem holds ( so called sets of uniqueness ) in the setting of several complex variables is a long standing challenging open problem in complex analysis .we have shown that for a rayleigh fading channel with coherence time the support of the capacity achieving input measure is bounded .our method of proof does not allow to extend the results to the case .in fact the techniques we have used have to be substantially sharpened and supplemented by additional new tools .furthermore we have shown that the approach based on the application of the identity theorem from the complex analysis in several variables is not admissible .therefore , it seems highly likely for us that the techniques needed should be `` real - analytic '' in spirit .this work is supported by the deutsche forschungsgemeinschaft dfg via project bo 1734/16 - 1 entwurf von geometrisch - algebraischen und analytischen methoden zur optimierung von mimo kommunikationssystemen .chan , s. hranilovic and f.r .kschischang , `` capacity - achieving probability measure for conditionally gaussian channels with bounded inputs '' , _ ieee trans .inform . theory _ , vol .51(6 ) , pp . 2073 - 2088 , june 2005 m. fozunbal , s.w .mclaughlin , r.w .schafer , `` capacity analysis for continuous - alphabet channels with side information , part i : a general framework '' , _ ieee trans . inform .51(9 ) , pp . 3075 - 3084 , sept .2005 m. fozunbal , s.w .mclaughlin , r.w .schafer , `` capacity analysis for continuous - alphabet channels with side information , part ii : mimo channels '' , _ ieee trans .inform . theory _51(9 ) , pp . 3086 - 3101 , sept . | we consider transmission over a wireless multiple antenna communication system operating in a rayleigh flat fading environment with no channel state information at the receiver and the transmitter with coherence time . we show that , subject to the average power constraint , the support of the capacity achieving input distribution is bounded . moreover , we show by a simple example concerning the identity theorem ( or uniqueness theorem ) from the complex analysis in several variables that some of the existing results in the field are not rigorous . |
community detection in large graphs is getting attention as an important application of social network analysis ( sna ) , the ability to detect closely knit communities opens several applications from targeting ads to recommender systems . in this workwe try to derive a very simple and efficient algorithm for community detection based on a size parameter .being able to specify the minimum and maximum size of communities to detect can be a critical factor in the sna area , some networks tend to form very small and dense communities while other networks form larger groups .the first section of this report discusses some existing algorithms for community detection in social graphs , then we introduce the idea behind the entropy walker and present our algorithm .the final sections show some examples of the algorithm being used in some toy examples and analyzes the scaling of the method for large graphs .several algorithms have been developed for community detection in large graphs .clutsering methods based in k - means need to know in advance the number of communities to find in the network . in practicethis is not possible as the number of communities is usually unknown and furthermore due to social interactions the number of communities in a network might change over time making it very hard to set up as a parameter .the modularity optimization algorithm [ b08 ] automatically detects the number of communities but it does nt allow for overlapping communities .this is also inpractical for social networks as most nodes will be members of several different social circles .bigclam [ lesk13 ] is a fast algorithm to detect overlaping communities , it s based in non - negative matrix factorization but it needs to know the number of communities to detect , as mentioned before this is an important limitation .[ mca13 ] presents an algorithm to find social circles in networks but is based on node parameters `` features '' , we would like to perform the extraction of communities based in network structure only .the idea of random walks being used to detect communities is also used in the mcl algorithm [ vdon99 ] however mcl ca nt control the size of the communities being detected and it needs to perform operations on the complete matrix of the graph limiting its use to small and medium sized networks .we define a `` tour '' as a random walk of length `` s '' .the basic idea of the algorithm is to perform several tours starting from random nodes and to detect communities based on the result of those tours .`` s '' should be longer than the minimum number of members that we want for a community and it serves as an upper bound for the maximum number of members in a community .it is likely for a random walker to get `` trapped '' inside nodes of a community , going back and forth between them because there are more inter - community edges than edges that will take the walker outside of the community . even if the random walker goes outside the community chances are it might come back . the algorithm will filter the random walks that are nt likely to have found a community calculating the entropy of the tour [ sha48 ] . 
tours with high entropy are unlikely to contain a community because they visit mostly different nodes .they are probably paths or bridges between communities and might be of interest for some other applications .the entropy is computed using the very popular shannon formula : where is just the probability of the node in the tour , in other words its frequency in the tour over the sum of all node frequencies .a threshold parameter establishes the maximum entropy for a tour to be accepted as a fraction of the maximum possible entropy that can be computed assuming a random walk that never visits the same node more than once .we call this parameter for entropy threshold . when is 1 all the tours are accepted , lowering increases the amount of rejected tours .the graph in figure 1 shows the percentage of accepted tours for different values of using the food network as an example . the parameter can be tuned based on two different goals .one possibility is to use it to limit the total number of tours to store in memory for very large graphs , a second use , more logical , is to set how dense a community has to be to be considered .this second use that is data dependant is probably the recommended one .this is an example of a very low entropy tour from the food network : ] we can see how the first tour can be converted in a community with the top ingredients being used for the same kind of dishes , the second tour has a wide array of ingredients and ca nt be considered a community .maybe a bridge between different communities . as we have mentioned extracting the high entropy tours from a networkmight also be an interesting application . after accepting or rejecting a tour based on its entropythe algorithm will try to see if this tour is new or if it is similar to an already seen tour .locality sensitive hashing ( lsh ) can be used to make similar tours hash to the same bucket avoiding the need to compare new tours with the existing ones .if lsh maps the tour to a bucket where a tour is already stored then both tours are merged adding the frequencies of the nodes present in both tours .this greatly reduces the number of tours that need to be stored in memory and avoids the problem of two very similar tours being detected as different communities . in some applicationsthe most frequent nodes in a tour can be used as the key to a hash function to determine the bucket number for the node .this is a simplification of lsh using only one minhash computed from the most frequent nodes in a tour .when this is not possible or does nt work standard lsh can be used .now we describe the parameters used in the algorithm : the algorithm uses several parameters to fine - tune its behaviour : 0.25 cm the algorithm will perform random tours and check the entropy of each tour . if the tour entropy is below the threshold then the tour will be stored in a hash table along with a counter merging the tour with the already existing one if the bucket is not empty .it s easy to notice that this process can be parallelized and that several million tours can be performed efficiently .the memory cost to store the tours depends on the algorithm parameters . 
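The two mechanisms just described, entropy-based filtering of tours and the simplified one-minhash LSH bucketing with frequency merging, could look roughly like the following Python sketch. The base-2 logarithm, the choice of the top five most frequent nodes as the bucket key, and the default threshold value are illustrative assumptions and not taken from the paper's implementation.

```python
import math
import random
from collections import Counter


def random_tour(adjacency, length):
    """One random walk ("tour") of the given length over an adjacency dict
    mapping each node to a non-empty list of neighbours."""
    node = random.choice(list(adjacency))
    tour = [node]
    for _ in range(length - 1):
        node = random.choice(adjacency[node])
        tour.append(node)
    return tour


def tour_entropy(tour):
    """Shannon entropy of the node-visit distribution of a tour."""
    counts = Counter(tour)
    n = len(tour)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def accept_tour(tour, c_e=0.75):
    """Keep a tour only if its entropy is at most c_e times the entropy of a
    walk of the same length that never revisits a node (log2 of the length)."""
    return tour_entropy(tour) <= c_e * math.log2(len(tour))


def tour_key(tour, top_k=5):
    """Simplified LSH key: the tour's top_k most frequent nodes (top_k assumed),
    as an order-free set so similar tours hash to the same bucket."""
    top = sorted(Counter(tour).items(), key=lambda kv: (-kv[1], kv[0]))[:top_k]
    return frozenset(node for node, _ in top)


def store_tour(buckets, tour, top_k=5):
    """Merge an accepted tour into its bucket: add node frequencies and
    increment the counter of how often this candidate community was found."""
    key = tour_key(tour, top_k)
    freqs, hits = buckets.get(key, (Counter(), 0))
    freqs.update(tour)
    buckets[key] = (freqs, hits + 1)
```

A driver would then simulate many tours of a fixed length (for instance 30 hops, as in the experiments reported below), discard the ones that fail the entropy test, and merge the rest into the bucket table.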
when the ( entropy threshold ) parameter is low the algorithm with detect only a few very dense communities and tours with frequency 1 can be considered a community .when the parameter is higher the algorithm will check many tours and it might make sense to discard the tours with lower frequencies keeping the ones that have been repetedly matched .it is known that montecarlo random walks can be used to compute pagerank and/or eigenvector centrality , the procedure used to detect communities can be used to compute at the same time a centrality score for the network nodes .so the first conclusion is that node centrality can be computed at the same time as the community detection algorithm runs , just adding 1 to a counter every time a node is visited by a tour and then normalizing the cummulative score .the effect of entropy filter is show in fig2 .we can see that some nodes produce peaks for entropy thresholds below 1.00 , this means that the centrality of those nodes is higher in the entropy filtered sets compared to the plain random walks without filtering .these peaks can be detected computing the delta between the eigenvector centrality and the tour computed centrality . from these peakswe can detect nodes that are both central to the network and to the small communities where they belong , this gives an index of in - community centrality .testing the procedure on the facebook ego network the peaks matched nodes that had a high degree of connections with the members in their communities .something interesting to notice is that the algorithm can be run starting always from the same node , in the style of a personalized pagerank , when that happens we get as a result the social circles of a given user .this is in some way similar to the algorithm used by twitter to recommend users to follow[gup12 ] the difference is that instead of computing a score for each node we compute scores for each random walk ( tour ) performed by the simulation . for example we can run the algorithm from the tomato ingredient to see what goes well with tomato : instantaneous delicious recipes !this section presents some analysis and graphs about the behaviour of the algorithm .it is interesting to analyze the number of tours that the algorithm will keep in memory as the network grows larger for a constant fixed entropy threshold .we found that the number of tours analyzed does not grow as the size of the network and is strongly dependant on network structure . with only a few nodes small communitiesare common in a graph with high clustering , as the network grows larger the number of small communities quickly goes down .this can be explained because a random walker has now more options and is less likely to get trapped inside a community .then after more nodes are added a threshold is passed and small communities emerge again .this curious behaviour in the formation of small communities as the network grows larger resulted an interesting find and can be useful to refine generic models for network growth .the emergence of small communities in large networks is strongly related to the clustering coefficient of the network . when the clustering coefficient is very los there are not enough edges to form dense communities so small communities will not form in random networks . 
in the same wayif the clustering coefficient is too high then the random walker can visit almost any node from any node and thus will not get trapped inside a small community , the whole network is the only existing community .the following graph shows the number of tours detected for a fixed entropy threshold depending on the clustering coefficient of networks synthetically generated using the barabasi - albert model[bar99 ] .as the clustering coefficient gets larger the number of nodes in a tour has to be increased to detect communities .in our example we run the algorithm against the eastern food network composed by different ingredients using in the eastern cuisine .the idea is that the algorithm should be able to find groups of ingredients that are frequently used together .using at 0.75 and simulating 150.000 tours of 30 hops the algorithm processed a total of 8308 tours to find clusters with 5 to 10 nodes in less than 5 seconds and these were the top results .the number between parentheses reflects the number of times the same community was detected , so the higher the number the stronger the community .we can see that the algorithm quickly detects the ingredients for most deserts or breakfast - type preparations .in total the algorithm detected 141 overlapping communities .the following result looks like a good recipe to try : as a point of comparision we run the modularity optimization algorithm [ blon08 ] as implemented in gephi and got the following communities : [ lemon , egg , orange , almond , orangejuice , cream , raisin , cinnamon , honey , butter , milk , vanilla , walnut ] [ coriander , pepper , blackpepper , chicken , thyme , cayenne , cilantro , dill , cumin , bellpepper , chickenbroth , ginger , turmeric , carrot ] [ garlic , parsley , onion , lemonjuice , beef , lamb , tomato , cucumber , bread , oliveoil , mint , vinegar , yogurt , potato ] as we can see the modularity algorithm does a very good job but it lists all the ingredients that are similar together and is not very helpful to detect smaller groups that go very well together , for example communities of 3 or 4 ingredients . the algorithm presented here would create the following top 10 communities of 3 ingredients : the graph of the communities found by gephi looks like this [ figure2 ] as we can see the results help to create new recipes starting with ingredients that go well together frequently .something interesting is that by allowing overlapping communities we can see that some ingredients are partially in different groups .for example ginger is used for both savory and deserts .the modularity algorithm is forced to choose only one cluster for ginger but in our algorithm we can find it in different communities .we also run the algorithm in a very large dump of a social network with a total of about 5 million nodes .the algorithm runs in constant time regardless of the size of the graph as it always simulates a constant number of random walks , the only difference in runtime is due to the time needed to access the adjacency list of each node and that is independant of the clustering algorithm .besides the runtime analysis we weree curious to investigate what kind of small communities the algorithm would find in a large social network .we run a modularity clustering phase first and then the entropy walker algorithm . 
after running the entropy walker algorithm we found that 100 we see that the entropy walker algorithm finds small dense communities inside the big communities created by the modularity algorithm .figure 4 shows an accepted random walk inside a modularity class .figure 5 shows the shape of one of the accepted random walk , we can see the community is actually a clique so the algorithm is finding cliques or structures similar to cliques for the parametrized size of components that depend on the length of the random walks .in a streaming model the graph is constantly updated via the addition and deletion of nodes and edges . in this modelthe algorithm can be kept running continuously producing `` infinite '' tours .as the graph is updated communities that were previously detected might disappear and new communities can emerge .an algorithm like the count - min sketch [ mutu05 ] can be used to keep in memory a list of only the top communities discovered so far . if a new very tight community forms it will be eventually found by the algorithm several times entering the top ranking . besides keeping the top communitiesthe streaming model can be used to detect communities that pass the entropy filter and the count - min sketch can be used to only list those communities that have repeated a number of times .several strategies to prune old communities from memory can be used .the entropy walker is a very simple algorithm , the core is just a montecarlo simulation of random walks in a graph .the algorithm uses two very simple tricks to be able to compute communities from these random walks , first it is able to keep or discard a tour by calculating its entropy reasoning that a tour that gets trapped inside a community will visit several times the same nodes resulting in a low - entropy tour .the second trick is the use of lsh and the ability to merge similar tours into a single one to reduce memory consumption and be able to detect the same community even if the nodes have been visited in different order and with different frequencies .the algorithm can run very quickly consuming very little memory even for massive graphs , it can be kept running continusly in a streamming model where the graph is constantly updated , this setup is perfect for the anlysis of large social networks . | this report presents a very simple algorithm for overlaping community - detection in large graphs under constraints such as the minimum and maximum number of members allowed . the algorithm is based on the simulation of random walks and measures the entropy of each random walk to detect the discovery of a community . |
the science surrounding the origin of life presents an obvious difficulty to scientists : our data consist of a single example .it is unknown whether the events that followed the origin of life are deterministic consequences of life or unique to this specific origin .one question of interest is whether all life , or specifically all self - replicators , are evolvable .evolvability ( the ability to evolve ) has many similar , but differing definitions ; here , we define it as the ability to increase in fitness .we also distinguish between two possibilities for such fitness increases : _ optimization_defined as the ability to improve an already - present phenotypic trait , and _ innovation_the ability to evolve novel phenotypic traits .a common assumption is that life originated with self - replicating rna molecules .thus , most empirical studies have focused on rna replicators , either emergent replicators or those created by experimenters .experiments ( both computational and biochemical ) have also explored the evolvability of rna replicators , usually involving extensive mapping of their fitness landscapes .while rna is an enticing candidate for a pre - biotic molecule , the so - called rna world hypothesis " has its own problems foremost the asphaltization " that tends to befall formose carbohydrates under the expected conditions of an early earth which has led researchers to explore other origins of life not necessarily rna - based , or even to move the origin of life from earth to mars . in recent years, more and more theoretical models concerning the origin of life have focused on exploring the abstract concepts that could possibly be involved in any potential origination , independent of a particular biochemical system .the field of artificial life is ideally suited to study various possibilities for the origin of life as it imagines life as it could be , not just as it is .the question of the random emergence of replicators has been addressed in various digital systems before , while theoretical models have explored the factors that lead to differing evolvability .artificial life tools have also been used to explore the potential of evolvability in different systems .recently , adami used information theory to calculate the likelihood of the random emergence of a self - replicator in a sense , the progenitor to life without regards to a specific biochemistry .adami and labar tested this theory with avida by generating billions of random avidian sequences and checking for their ability to replicate themselves . such an investigation is akin to studying the chance emergence of self - replicating rna molecules . in previous workwe explored the relationship between evolvability and self - replication using these emergent replicators , and found that almost all emergent replicators were evolvable , both in terms of optimization for replication and in terms of evolving beneficial phenotypic innovations .we also discovered that these replicators came in two forms : proto - replicators and self - replicators .proto - replicators deterministically copy themselves inaccurately , but eventually evolve into self - replicators ; self - replicators , on the other hand , produce an exact copy of themselves in the absence of stochastic mutation .we also noted the possibility of an optimization - innovation trade - off in some of these replicators , especially the default avida ancestor ( a hand - written self - replicator specifically designed for evolution experiments ) . 
here , we extend our previous study and test a fundamental question concerning life s origins : how does the genetic composition of the first replicator determine the future evolution of life ?one extreme possibility is that all emergent replicators have similar genetic compositions and thus the future evolution of life will occur in a similar manner , no matter the progenitor . on the opposite end of the spectrum is the possibility that every emergent replicator is sufficiently different from every other replicator in genetic composition . in this case , the future outcome of life may be entirely dependent on which replicator emerges first . in experiments with the artificial life system avida , we find that the interplay between the genetic composition of the first emergent replicator and future evolutionary outcomes is between these two extremes .emergent replicators can be classified into two distinct classes based on a functional analysis of their replication machinery .these classes differ in their ability to optimize their replication ability .however , those replication classes that display high evolvability towards optimizing replication also demonstrate low evolvability towards evolving novel phenotypic traits , and vice versa .finally , we show that this difference in evolvability is due to differences in the replication machinery between the different replicator classesin order to study the interplay between emergent replicators and evolvability , we used the avida digital evolution platform . in avida , simple computer programs ( avidians " ) in a population compete for the memory space and cpu time needed to replicate themselves .each avidian consists of a genome of computer instructions , where each locus in the genome can be one of 26 possible instructions .contained with each genome are the instructions necessary for the avidian to allocate a daughter genome , copy its genome into this new daughter genome , and divide off the daughter genome .as the replication process is mechanistic ( i.e. , avidians execute their genome s instructions , including those to replicate , sequentially ) , the speed at which replication occurs is also genome dependent. therefore , fitness is genome - dependent , as an organism s ultimate success is determined by its replication speed in these simple environments .as avidians directly copy and pass their genomes to their daughters , avida populations also have heredity . during the copying process, errors may be introduced , resulting in mutations and population variation .therefore , avida populations possess heredity , variation , and differential fitness : they are an _ instantiation _ ( as opposed to a simulation ) of darwinian evolution this simpler instantiation of evolution by natural selection has allowed for the exploration of many topics hard to test in biological systems , see e.g. . avida has been explained in greater detail elsewhere ( see for a full description ) ; here , we will cover the details relevant for this study .an avidian consists of a variety of elements : a genome of instructions , a read - head " that marks which instruction should be copied , a write - head " that denotes where the read - head - marked instruction should be copied to , and three registers to hold integer numbers ( ax , bx , and cx ) , among other elements . 
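To make the description of an avidian concrete, here is a toy data structure reflecting the elements just listed (a circular genome over a 26-instruction alphabet, a read-head, a write-head, and the AX/BX/CX registers). This is only an illustrative sketch, not Avida's actual implementation; the single-letter opcodes and the example genome are placeholders.

```python
from dataclasses import dataclass, field
from string import ascii_lowercase

# the text describes a 26-instruction alphabet; single letters 'a'..'z' are used
# here purely as placeholders for the real instruction names (assumption)
INSTRUCTION_ALPHABET = list(ascii_lowercase)

@dataclass
class Avidian:
    genome: list                       # circular list of single-letter instructions
    read_head: int = 0                 # which locus is currently being copied
    write_head: int = 0                # where the copied instruction is written
    registers: dict = field(default_factory=lambda: {"AX": 0, "BX": 0, "CX": 0})

    def step_heads(self):
        """Advance both heads one locus, wrapping around the circular genome."""
        n = len(self.genome)
        self.read_head = (self.read_head + 1) % n
        self.write_head = (self.write_head + 1) % n

ancestor = Avidian(genome=list("wzcagcccccccccc"))  # a hypothetical length-15 genome
print(len(ancestor.genome), ancestor.registers)
```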
in order to undergo reproduction, an avidian needs to perform four operations in the following order .first , it must execute an _ h - allocate _ instruction which allocates a fixed number of _ nop - a _ instructions to the end of its genome ( here 15 , as we work exclusively with length 15 genomes ) .these _ nop - a _ instructions are inert by themselves , and serve as placeholders to be replaced by actual information . allocation of this memory prepares the daughter genome space to receive the information from the parent .next , the read - head and write - head must be set 15 instructions apart , allowing for the instructions in the parent genome to be copied into the daughter genome .these operations are algorithmically similar to creating a dna replication fork at the origin of replication , preparing for the assembly of a copied sequence . following this step, the genome must find a way of looping over the _ h - copy _ instruction in the genome to actually copy instructions from the parent into the daughter genome , similar to the action of a dna polymerase fusing the new ( daughter ) nucleotides on the former parent strand .this copying can be done by either looping through the entire genome ( using the circular nature of the genome to re - use a single _ h - copy _ command as many times as necessary to copy all instructions ) or else by continuously looping over a smaller set of instructions in the genome ( called the copy loop " ) .the latter algorithm requires marking the set of instructions ( the replication gene " ) so as to control the forking of execution flow .finally , an avidian must execute an _ h - divide _ instruction to divide the duplicated genome into two avidians , and thus successfully reproduce . in the experimental design used here , we used two different mechanisms to produce mutations during replication .the first mechanism produces at most one _ point mutation _ at a random locus in the genome at a rate of 0.15 mutations per genome per generation upon successful division .the second mechanism is a deterministic ( or incipient " ) mutation , which occurs when the instructions in an avidian cause it to copy instructions into the daughter genome in the wrong place . often , this results in a daughter genome with one or two _ nop - a _ instructions at the beginning if these two loci were skipped over during the copying process , remaining instead in their pristine form .in many cases , such a faulty copy algorithm results in the offspring being non - viable ; however , in some cases viable offspring are produced . sometimes , this incipient mutation mechanism results in the phenomenon of _ proto - replicators _ , defined as those replicators that deterministically ( i.e. , reproducibly because genetically controlled ) make an offspring different from itself .replicators that instead deterministically make an identical offspring are called self - replicators . in the standard experimental evolution scenario ,avidians are placed into a landscape that rewards a variety of phenotypic traits .these traits commonly refer to the ability to perform certain boolean logic calculations on binary numbers that the environment provides . during an avidian s lifespan , they can input ( read from ) and output ( write to ) binary numbers from / to the environment . whenever a number is written , the avida world checks if a boolean logic calculation was performed .successful performance results in an increase in the replication speed of that individual s descendants . 
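Before turning to the reward environment, the first of the two mutation mechanisms described above (at most one point mutation per genome per generation, applied upon division at rate 0.15) can be sketched as follows. The genome encoding and the substitute-drawing rule are illustrative assumptions; note that the substituted instruction may coincide with the original, as in a simple uniform substitution model.

```python
import random

INSTRUCTIONS = [chr(c) for c in range(ord("a"), ord("z") + 1)]  # 26 possible instructions

def divide_with_point_mutation(parent_genome, rate=0.15, rng=random):
    """Copy a parent genome and, with probability `rate`, substitute one random locus.

    Mirrors the first mechanism in the text: at most one point mutation per genome
    per generation, applied upon successful division.
    """
    child = list(parent_genome)
    if rng.random() < rate:
        locus = rng.randrange(len(child))
        child[locus] = rng.choice(INSTRUCTIONS)  # may redraw the original instruction
    return child

parent = list("wzcagcccccccccc")  # a hypothetical length-15 genome
children = [divide_with_point_mutation(parent) for _ in range(10)]
print(sum(c != parent for c in children), "of 10 offspring carry a point mutation")
```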
in the standard avida design ( also the one used here in the experiments to explore evolvability in the sense of innovation ) that environment rewards the performance of nine different calculations .this environment is usually referred to as the logic-9 " environment and rewards calculations such as not and equals .the more complex the calculation performed , the greater the replication speed increase . the performance of such logic calculations can be viewed as an algorithmic analogue of performing different metabolizing reactions using different sugar resources ( the binary numbers provided by the environment ) . to study the evolvability of emergent replicators , we first had to generate a collection of emergent replicators .we re - analyzed a list of 10 random avidian length-15 genomes generated previously . in order to discover replicators of fixed genome size, we set the offspring size range parameter in the main avida configuration file to 1.0 , which guarantees that the _ allocate _ command allocates exactly as much space as needed by the daughter genome . for our focus on replicators of length 15, this guaranteed that exactly 15 instructions are allocated for the daughter to receive next , we looked at the relative abundance of proto - replicators compared to true self - replicators . we re - analyzed the above set of replicators , but made one additional parameter change : we set require exact copy to 1 .any replicator that registered as having zero fitness in this analysis would deterministically copy its genome inaccurately , and would be classified as proto - replicators . any replicators that could reproduce themselves accurately under this treatmentwere classified as self - replicators . in order to test the evolvability of the these replicators , we performed three different experimental tests . for all experiments, we used a population size of 3600 .individuals offspring were placed in any random cell in the 60x60 grid , thus mimicking a well - mixed environment .point mutations were applied upon division and the genomic mutation rate was 0.15 mutations per generation .genome size was fixed at 15 instructions .the first experiment tested the replicators evolvability in the sense of replication optimization ( optimization experiments ) . in this experiment , each replicator was used to seed 10 populations ; these populations evolved for 10 generations .phenotypic traits were not under selection in these experiments , that is , evolution only optimized the replication machinery .next , we performed evolution experiments ( referred to as the innovation " experiments ) where we rewarded individuals for evolved phenotypic traits ( i.e. , logic tasks ) besides any increase in replicatory prowess .we ran the innovation experiments for 10 generations ( an order of magnitude longer than the optimization experiments ) .these experiments were designed to decrease the likelihood that our data resulted from one of the replicator classes taking longer to evolve complex traits . finally , we repeated the innovation experiments , but used each ancestral replicator s fittest descendant from the optimization experiments to initialize the populations ( trade - off experiments ) .these experiments tested the presence of an optimization innovation trade - off .once we determined the set of replicators , we first tried to determine if there were similarities in the genomes of the replicators .we realized that many of the genomes had similar instruction motifs within the sequence . 
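Returning briefly to the logic-9 environment described above, the check performed when an avidian writes a number can be illustrated as below. Only a handful of the nine tasks are shown, the 32-bit word size is an assumption, and the reward multipliers (which the text only says grow with task complexity) are not modeled.

```python
MASK = 0xFFFFFFFF  # logic tasks are assumed here to act on 32-bit integers

def performed_tasks(inputs, output):
    """Return which of a few logic-9 style tasks the written `output` corresponds to."""
    a, b = inputs
    tasks = {
        "not":  output in ((~a) & MASK, (~b) & MASK),
        "and":  output == (a & b),
        "or":   output == (a | b),
        "nand": output == ((~(a & b)) & MASK),
        "equ":  output == ((~(a ^ b)) & MASK),  # bitwise 'equals', the most complex task
    }
    return [name for name, ok in tasks.items() if ok]

a, b = 0b1100, 0b1010
print(performed_tasks((a, b), a & b))               # -> ['and']
print(performed_tasks((a, b), (~(a ^ b)) & MASK))   # -> ['equ']
```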
to test whether different replication algorithms existed in different replicators , we used avida s trace function to analyze the step - by - step execution of the replicators genomes .this allowed us to cluster the emergent replicators into two main distinct functional replicator classes . to examine the results from the evolution treatments, we analyzed the most abundant genotype at the end of each experiment .the data collected included the final evolved fitness and the number of evolved phenotypic traits .all avida analysis was performed using avida s analyze mode with settings matching those under which the relevant experiments were performed .statistical analyses were performed using the numpy , scipy , and pandas python modules .figures were generated using the matplotlib python module .among the one billion randomly - generated genomes , we found 75 genomes that could replicate themselves when genome size was fixed .of these replicators , 22 were true self - replicators in the sense that they could perfectly copy their genomes even when mutations were turned off .the remaining 53 replicators were proto - replicators " in the sense that they could _ not _ produce a perfect copy of their genome when the mutation rate was set to zero .this deterministic miscopying is due to the specific nature of a proto - replicator s genome ( see methods ) .however , these replicators still produced viable offspring that would eventually lead to a self - replicator .the discovery of these proto - replicators extends the definition of proto - replicators from to include fixed - length proto - replicators . upon examination of these replicators, we detected the presence of distinct instruction motifs in their genomes .these motifs consist of instructions involved in genome replication and suggested that different replicators used different replication mechanisms . to explore this possibility, we performed a step - by - step analysis of each replicator s lifestyle by looking at the execution of their genome s instructions .avidians must perform four steps to successfully reproduce : allocate a blank daughter genome , separate their read- and write - heads , copy their genome into the daughter genome through some looping process , and divide off the daughter genome ( see methods for a fuller description ) .we were able to cluster the replicators into two replication classes based on a difference in two traits : ( 1 ) how they separated their read- and write - heads to copy their genome and ( 2 ) how they looped through their genome in order to copy their genome .we named the first class of replicators hc " replicators because of the hc instruction motif they all share ( see table [ avidainst ] for the avida instructions and their descriptions ) .this class contains 62 replicators ( 9 self - replicators ) .only 8 of these replicators have a standard copy loop ( see methods for a definition of a copy loop ) , which appears at the end of their genomes ; these copy loops were marked by the presence of a _ mov - head _( g ) instruction . | the role of historical contingency in the origin of life is one of the great unknowns in modern science . only one example of life exists one that proceeded from a single self - replicating organism ( or a set of replicating hyper - cycles ) to the vast complexity we see today in earth s biosphere . we know that emergent life has the potential to evolve great increases in complexity , but it is unknown if evolvability is automatic given any self - replicating organism . 
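The paper clusters replicators by tracing their execution step by step; as a much cruder illustration, the shared "hc" instruction motif mentioned above could be screened for directly on the genome strings. The genomes below are made-up stand-ins, and this substring screen is only a sketch of the idea, not the analysis actually performed.

```python
def classify_by_motif(genomes, motif="hc"):
    """Split genome strings by whether they contain a given instruction motif."""
    with_motif = [g for g in genomes if motif in g]
    without = [g for g in genomes if motif not in g]
    return with_motif, without

toy_replicators = ["vhcgab", "whcfza", "wfgcab"]  # hypothetical short genome strings
hc_class, other_class = classify_by_motif(toy_replicators)
print(len(hc_class), "hc-like,", len(other_class), "other")
```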
at the same time , it is difficult to test such questions in biochemical systems . laboratory studies with rna replicators have had some success with exploring the capacities of simple self - replicators , but these experiments are still limited in both capabilities and scope . here , we use the digital evolution system avida to explore the interaction between emergent replicators ( rare randomly - assembled self - replicators ) and evolvability . we find that we can classify fixed - length emergent replicators in avida into two classes based on functional analysis . one class is more evolvable in the sense of optimizing their replication abilities . however , the other class is more evolvable in the sense of acquiring evolutionary innovations . we tie this trade - off in evolvability to the structure of the respective classes replication machinery , and speculate on the relevance of these results to biochemical replicators . department of microbiology & molecular genetics + beacon center for the study of evolution in action + program in ecology , evolutionary biology , and behavior + department of integrative biology + department of computer science and engineering + department of physics and astronomy + michigan state university , east lansing , mi 48824 + |
one of the most promising configurations for thermoacoustic devices is the traveling wave configuration which generally consists of a closed - loop tube. this design however , has one disadvantage : the possibility of a time - averaged mass flow , or gedeon streaming , to occur due to the looped geometry .this streaming leads to unwanted convective heat transport and reduces the efficiency of closed - loop thermoacoustic devices .a commonly used solution is the application of a jet pump. this is a tapered tube section which , due to the asymmetry in the hydrodynamic end effects , establishes a time - averaged pressure drop . by balancing the time - averaged pressure drop across the jet pump with that which exists across the regenerator of a thermoacoustic device, the gedeon streaming can be canceled. fig .[ fig : jetpumpgeom ] shows a schematic of a typical conical jet pump geometry with its corresponding dimensions .the two openings both have a different radius : for the big opening ( left side ) and for the small opening .together with the jet pump length , , the jet pump taper angle is defined .furthermore , at the small opening a curvature is applied to increase the time - averaged pressure drop compared to a sharp contraction . to estimate the performance of a jet pump ,a quasi steady model has been proposed by backhaus and swift. this model is based on minor losses in steady flow and assumes that the oscillatory flow can be approximated as two steady flows. given the pressure drop generated by a pipe transition in steady flow, and assuming a pure sinusoidal velocity in the jet pump , the time - averaged pressure drop across the jet pump can be estimated as , \label{eq : backhaus}\end{aligned}\ ] ] where is the velocity amplitude at the small exit of the jet pump .the subscripts `` '' and `` '' indicate the small and big opening of the jet pump , respectively . is the minor loss coefficient for abrupt expansion and is the minor loss coefficient for contraction , both are well documented for steady flows. with the jet pump being an additional flow resistance , acoustic power will be dissipated due to viscous dissipation and vortex formation .using the same quasi steady approach ,backhaus and swift derived an equation to estimate the amount of acoustic power dissipation associated with the jet pump as .\label{eq : backhaus_de}\end{aligned}\ ] ] an optimal jet pump should establish the required amount of time - averaged pressure drop to cancel any gedeon streaming with minimal acoustic power dissipation .this requires maximizing the difference in minor losses due to contraction and expansion while at the same time minimizing the sum of the minor loss coefficients .previous studies have shown however , that the accuracy of this quasi steady approximation is limited and that there are other factors influencing the performance of a jet pump. petculescu and wilen experimentally studied the influence of taper angle and curvature on minor loss coefficients. good agreement was obtained between steady flow and oscillatory experiments for taper angles up to .however , the minor loss coefficients determined were found to be strongly dependent on the taper angle used .a qualitative comparison between their findings and the current results will be provided in section [ sec : varalpha_varrs ] .smith and swift investigated a single diameter transition , corresponding to one end of a jet pump. 
the measured pressure drop and acoustic power dissipation was found to be dependent on the dimensionless stroke length , the dimensionless curvature and the acoustic reynolds number . recently , tang et al . investigated the performance of jet pumps numerically, but assumed a priori the quasisteady approximation to be valid by modeling the flow as two separate steady flows .the negative effect of flow separation on the jet pump performance was identified which is in line with the current work .although some jet pump measurements are available in literature , a systematic parameter study that directly relates variations in wave amplitude and geometry to a jet pump s performance has , to the authors knowledge , not yet been addressed .a first step towards the investigation of a jet pump s performance in oscillatory flows has been made in previous work by scaling the jet pump geometry using two different keulegan carpenter numbers and correlating this to the jet pump performance. four different flow regimes were distinguished as a function of the wave amplitude .[ fig : flowregimes ] shows the observed vorticity fields in the vicinity of a jet pump at when . in order of increasing wave amplitudethese flow regimes can be described as follows . at low wave amplitudes ,an oscillatory vortex pair exists on both sides of the jet pump but no vortex shedding is observed ( fig .[ fig : flowregime1 ] ) . when the amplitude is increased , one - sided vortex propagation at the location of the small jet pump opening starts ( fig .[ fig : flowregime2 ] ) . in this flow regime ,vortex rings are shed from the jet pump waist to the right while on the big side of the jet pump an oscillatory vortex pair is still visible .vortex shedding from the big side of the jet pump begins to occur at increased wave amplitudes ( fig .[ fig : flowregime3 ] ) . herevortex rings are shed from both jet pump openings directed outwards , but no interaction between the two sides is visible .ultimately , a further increase in wave amplitude leads to interaction between the two jet pump openings ( fig .[ fig : flowregime4 ] ) .vortices being shed from the small jet pump opening now propagate in alternating directions and flow separation inside the jet pump occurs . albeit the described flow regimes were distinguished using the proposed scaling , the data set was too limited to determine whether the observed flow separation ( fig . [ fig : flowregime4 ] ) was geometrically initiated by an increase in the taper angle or by a decrease in the jet pump length . in this paper ,a parameter study is performed using a computational fluid dynamics ( cfd ) model to further identify the relation between the four different flow regimes and the jet pump geometry .the influence of various geometric parameters including the jet pump taper angle , length , curvature and waist diameter on the jet pump performance is investigated .based on the presented results , the existing scaling parameters are further extended .furthermore , a comparison with the quasi steady model is provided to determine under what conditions the approximation is applicable . based on this parameter study ,design guidelines for future jet pump design are proposed . 
after a description of the used cfd model in section [ sec : modeling ] ,the various investigated jet pump geometries are introduced in section [ sec : geom_varalpha_varrs ] .the resulting flow regimes will be distinguished in section [ sec : flowregimes ] and subsequently linked to the jet pump performance in section [ sec : varalpha_varrs ] .finally , the influence of the jet pump curvature is investigated in section [ sec : var_rc ] before drawing final conclusions on the design guidelines in section [ sec : conclusions ] .a two - dimensional axisymmetric cfd model is developed using the commercial software package ansys cfx version 14.5, identical to the model used in previous work. in the following , the numerical model with the corresponding boundary conditions will be repeated briefly and the methods to derive the jet pump performance will be explained .the different jet pump geometries are confined in an outer tube with radius .the length of the outer tube on either side of the jet pump is when .this length is scaled relative to the wavelength to avoid the jet pump being placed at a velocity node for simulations performed at other frequencies . in all cases , air at a mean pressure of and a mean temperature of is used as the working fluid .the unsteady , fully compressible navier - stokes equations are discretized using a high resolution advection scheme in space and a second order backward euler scheme in time. the system of equations is closed using the ideal gas law as the equation of state and the energy transport is calculated using the total energy equation , including viscous work terms .based on the critical reynolds number defined in section [ sec : results ] , all presented results fall within the laminar regime so no additional turbulence modeling is applied .the time - step is chosen such that each wave period is discretized using 1000 time - steps .a total of ten wave periods are simulated to achieve a steady periodic solution and the last five wave periods are used for further analysis .the acoustic wave is generated on the left boundary of the domain using a sinusoidal velocity boundary condition with a specified frequency and velocity amplitude . to control the wave propagation over the right boundary of the computational domain , a dedicated time - domain impedance boundary condition is developed and implemented in ansys cfx .the approach is based on the work of polifke et al . and allows the application of a complex reflection coefficient at the boundary. a detailed validation and explanation of the implementation can be found in the work of van der poel , which was carried out as part of the current research. in all cases described here , a reflection coefficient of is specified to simulate a traveling wave on the right side of the jet pump . the combination of the velocity boundary condition and the time - domain impedance boundary condition results in a time - averaged volume flow that is on average less than of the acoustic volume flow rate . on the radial boundary of the outer tube ( at ) , a slip adiabatic wall boundary condition is imposed as the pipe losses in this part of the domain are currently not of interest . to correctly simulate the minor losses in the jet pump , a no - slip adiabatic wall boundary condition is used at the walls of the jet pump .the choice for a two - dimensional axisymmetric model to simulate flow separation might need some additional explanation as in planar diffusers the flow separation can be asymmetric. 
a clear distinction should be made between flow separation in planar and conical diffusers .the steady flow separation in conical diffusers has been investigated previously and no visible asymmetry of the flow was reported under laminar conditions. furthermore , the oscillatory nature of the flow prevents the flow from developing asymmetric instabilities , even in planar diffuser geometries. these observations motivate the applicability of a two - dimensional axisymmetric model for the current situation of pure oscillatory flow through a conical geometry .the used spatial discretization is validated in previous work and will be only briefly described here. the computational mesh consists of an unstructured part within from the jet pump and a structured mesh in the rest of the domain . in both partsquadrilateral elements are used . in the jet pump region , a maximum element size of is used which is refined up to near the jet pump waist . moreover , to be able to accurately resolve the flow separation , a refinement in the viscous boundary layer is applied such that a minimum of 10 elements are located within one viscous penetration depth distance from the jet pump wall . for each simulated wave frequency, the mesh distribution is adjusted according to the viscous penetration depth , . in the quadrilateral part of the mesh , a fixed radial mesh resolution ofis used whereas the mesh size in the axial direction grows from at a distance of from the jet pump up to at the extremities of the domain . in additional to the mesh validation performed in previous work, the accuracy of the employed computational mesh has been investigated specifically for the prediction of flow separation . the jet pump geometry with a taper angle of ( no . 4 in table [ tab : geom_varalpha ] ) has been used at a frequency of and at a wave amplitude where flow separation is expected ( , eq . [ eq : x1dsalpha ] ) .the mesh size in the jet pump region is in two steps refined from to . at the same time , the mesh size in the jet pump waist region is refined from to .additionally , the effect of a mesh refinement in the viscous boundary layer is investigated by increasing the number of elements from 10 to 20 .the difference in the dimensionless pressure drop ( eq .[ eq : dp2_star ] ) and dimensionless acoustic power dissipation ( eq . [ eq : de2_star ] ) between the investigated meshes is less than 0.04 and 0.05 , respectively .furthermore , both the time and location where the flow first separates do not vary more than and between the different mesh resolutions . in order to determine the jet pump performance from the transient cfd solution ,some additional analysis steps are performed .the first order amplitudes of all physical quantities are calculated by a discrete fourier transformation for the specified wave frequency using the solution from the last five simulated wave periods . 
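As a small aside on the near-wall mesh sizing used above, the viscous penetration depth can be computed from the standard relation δν = sqrt(2μ/(ρω)). The fluid properties and frequency below are illustrative values, not the paper's operating conditions.

```python
import numpy as np

# illustrative air properties at roughly ambient conditions (assumed values)
mu  = 1.8e-5   # dynamic viscosity [Pa s]
rho = 1.2      # density [kg/m^3]
f   = 100.0    # wave frequency [Hz], illustrative

omega    = 2 * np.pi * f
delta_nu = np.sqrt(2 * mu / (rho * omega))  # viscous penetration depth

# the mesh places at least 10 elements within one delta_nu of the jet pump wall
n_wall_elements = 10
wall_cell_size  = delta_nu / n_wall_elements
print(f"delta_nu = {delta_nu*1e3:.3f} mm, near-wall cell size = {wall_cell_size*1e6:.1f} um")
```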
to obtain the time - averaged quantities ,the cfd solution is averaged over the last five wave periods in order to eliminate the first order contribution .the streaming velocity is calculated using a density - weighted time - average : .the velocity amplitude in the jet pump waist , , is calculated using an area - weighted average over the cross - section .subsequently , the local acoustic displacement amplitude in the jet pump waist can be calculated under the assumption of a sinusoidal jet pump velocity : to scale the displacement amplitude to the jet pump geometry , a keulegan carpenter number is defined based on the jet pump waist diameter : where is the jet pump waist diameter .this is also referred to as a `` dimensionless stroke length '' or `` dimensionless displacement amplitude '' and can be rewritten to where is an acoustic reynolds number based on diameter and the stokes number .the described computational model is used to investigate a large variety of conical jet pump geometries and wave amplitudes . by varying either the jet pump length , the size of one of the two openings or the local curvature at the jet pump waist , 19 different geometries are considered .the different investigated geometries are all variations based on one reference geometry ( denoted by `` _ _ ref _ _ '' in all tables ) .the reference geometry is a jet pump with a taper angle , all other reference dimensions are denoted in table [ tab : geom_ref ] ..dimensions of the reference jet pump geometry .[ cols= " < , < " , ] + when comparing the measured dimensionless pressure drop using these jet pump samples , it is found that an additional contribution to the keulegan carpenter number is required to properly scale the results from different curvatures .[ fig : dp2_kcd[varrc ] ] shows the dimensionless pressure drop as a function of for the geometries with varied curvature .the investigated curvatures are indicated by different symbols depicted in the corresponding legend .the crosses ( ) represent the reference geometry where .the shape of the dimensionless pressure drop curve is found to be inversely proportional to the dimensionless curvature , especially for low values of . for large curvatures , the effect onthe dimensionless pressure drop becomes less apparent .holman et al . studied the effect of edge curvature on the onset of a synthetic jet. a formation criterion of the form was derived based on the vorticity flux through the jet .the applicability of this criterion to jet pumps with variable curvature is investigated in the following .it is observed that especially the initial increase in , corresponding to the onset of vortex shedding , can be properly scaled using with .given that the onset of vortex shedding for a jet pump geometry with is found at , the additional scaling parameter can be estimated .[ fig : dp2_kcd - alt1[varrc ] ] shows the dimensionless pressure drop as a function of the adjusted scaling parameter .while the initial increase in for the various curvatures all collapse to a single curve , the decay in due to the occurrence of flow separation can not properly be accounted for by any scaling of the form . to overcome this , an alternative scaling strategy for proposed of the form . 
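As a brief aside, the waist displacement amplitude and Keulegan–Carpenter number introduced earlier in this section can be evaluated as follows. The definition is taken from context as ξ1 = u1/ω and KC = ξ1/Ds (the exact prefactor is an assumption), and the sketch also checks the equivalent Re_D/S² form mentioned in the text; the numerical inputs are illustrative.

```python
import numpy as np

def keulegan_carpenter(u1, f, D, nu=1.5e-5):
    """Dimensionless displacement amplitude based on the jet pump waist diameter.

    xi_1 = u_1 / omega for a sinusoidal waist velocity, KC_D = xi_1 / D (assumed
    prefactor), and the equivalent Re_D / S^2 with Re_D = u_1 D / nu and Stokes
    number S = sqrt(omega D^2 / nu).
    """
    omega = 2 * np.pi * f
    xi1 = u1 / omega
    kc = xi1 / D
    re_d = u1 * D / nu
    stokes_sq = omega * D**2 / nu
    return kc, re_d / stokes_sq

kc, kc_alt = keulegan_carpenter(u1=5.0, f=100.0, D=7e-3)
print(f"KC_D = {kc:.3f}  (Re_D / S^2 = {kc_alt:.3f})")
```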
using a least squares approach , the fitting parameter is determined .note that represents the fraction of the dimensionless curvature with respect to a `` smooth '' contraction .[ fig : dp2_kcd - alt2[varrc ] ] shows again the dimensionless pressure drop data , but now plotted against .the case with no curvature ( ) does not appear in this figure as it will lead to infinite values on the horizontal axis .both the maxima of the dimensionless pressure drop as well as the decay at higher amplitudes are well aligned using this alternative scaling approach .what is remarkable is that the maximum measured dimensionless pressure drop decreases with increasing curvature , which is in contrast with the steady flow theory where the minor loss coefficient for contraction is expected to fall until .the maximum achieved dimensionless pressure drop ranges from for a sharp contraction ( ) to for in an almost linear manner .furthermore , the effect of curvature on the dimensionless pressure drop is not only reversed , but the variation is larger than what is predicted by the quasi steady model ( dashed lines ) .an explanation of this behavior is the effect of curvature on the expansion phase , which is typically neglected in a quasi steady approximation of the jet pump performance .not only will the minor loss coefficient for contraction at the jet pump waist decrease , but so does the minor loss coefficient for expansion .this was first experimentally determined by smith under steady flow conditions. a decrease in the expansion minor loss coefficient at the jet pump waist will lead to a decrease in the dimensionless pressure drop according to eq .[ eq : backhaus ] .additionally , this is reflected in the measured dimensionless acoustic power dissipation , which is shown in fig .[ fig : de2[varrc ] ] .the minimum achieved dimensionless acoustic power dissipation decreases more than six times between ( not shown ) and .a similar effect has been reported when rounding the orifice of an synthetic jet. combining the effect the curvature has on both the pressure drop and acoustic power dissipation using the jet pump effectiveness , shows that a higher dimensionless curvature leads to a better jet pump performance .this is depicted in fig .[ fig : eta[varrc ] ] where the jet pump effectiveness for five investigated curvatures is shown .similar to what was presented in section [ sec : flowregimes ] , the observed jet pump behavior can be explained by studying the flow field and distinguishing between the four different flow regimes .[ fig : varrc_flowregimes ] shows the distribution of the flow regimes in the ( , ) space .the four flow regimes are represented by different symbols in accordance to the symbols used in fig .[ fig : flowregimes_varalpha+varrs ] . a clear influence of the dimensionless curvature is visible on the value of where vortex shedding is first observed ( transition from to ) as well as on the value where flow separation is initiated ( transition from to ) .the boundary between one - sided and two - sided vortex shedding ( transition from to ) is not determined by the jet pump waist curvature at all . as the local curvature at the big opening is not changed, no dependency on is to be expected . 
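The least-squares determination of the fitting exponent mentioned at the start of this passage can be illustrated schematically. The power-law form and the synthetic data below are assumptions made purely to show the fitting step; they do not reproduce the paper's actual scaling function or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic, illustrative data: peak KC values versus dimensionless curvature ratio
rng = np.random.default_rng(1)
curvature_ratio = np.array([0.1, 0.2, 0.3, 0.5, 1.0])
true_beta = 0.4
kc_peak = 10.0 * curvature_ratio**true_beta * (1 + 0.02 * rng.standard_normal(5))

def model(x, a, beta):
    # assumed power-law form a * x**beta, used only to demonstrate the fit
    return a * x**beta

(a_fit, beta_fit), _ = curve_fit(model, curvature_ratio, kc_peak, p0=(1.0, 0.5))
print(f"fitted exponent beta = {beta_fit:.3f} (true value {true_beta})")
```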
by substituting the original defined flow regime bounds from section [ sec : flowregimes ] in the adjusted scaling parameter , the bounds between the flow regimes ( indicated by symbols as used in fig .[ fig : flowregimes_varalpha+varrs ] and fig .[ fig : varrc_flowregimes ] ) can be represented in the ( , ) space : where the first expression corresponds to , the second expression corresponds to and the last expression corresponds to for the cases where .the bounds are shown in fig .[ fig : varrc_flowregimes ] by the dashed lines .the bottom curve represents the initiation of vortex shedding ( eq . [ eq : flowbound1 ] ) and the upper curve the transition to flow separation ( eq . [ eq : flowbound4 ] ) , both a function of .the horizontal dashed line in fig . [ fig : varrc_flowregimes ] indicates the boundary between one - sided and two - sided vortex shedding .based on the data available the determined bounds do separate the flow regimes depicted in fig .[ fig : varrc_flowregimes ] well , although the defined onset of vortex shedding ( ) is subject to discussion .given the fact that the initial increase in caused by the onset of vortex shedding was well described by adopting a scaling of the form proposed by holman et al., a flow regime bound based on the latter could be more appropriate : this is represented in fig .[ fig : varrc_flowregimes ] by a gray solid line and does indeed separate the two flow regimes .it should be noted that the data available is not sufficient to determine whether eq .[ eq : flowbound1 ] or eq .[ eq : flowbound1a ] is more appropriate .nevertheless , eq . [ eq : flowbound1a ] predicts vortex shedding for at which is also the formation criterion observed experimentally for a sharp edged synthetic jet. this exactly describes the difference in the transitional keulegan carpenter number , observed in section [ sec : flowregimes ] , between vortex shedding from the jet pump waist ( ) and vortex shedding from the big jet pump opening ( ) .a lower dimensionless curvature leads to a lower value of where vortex shedding is initiated .consequently , vortex shedding at the sharp edged big opening will take place at a lower value of the _ local _ keulegan carpenter number than the vortex shedding from the jet pump waist . , ) space for jet pump geometries with varied curvature ( table [ tab : geom_varrc ] ) and a taper angle .symbols are in accordance with fig .[ fig : flowregimes_varalpha+varrs ] with the open circles showing the position where is maximum for each investigated curvature .the dashed lines represent the bounds of the different flow regimes determined according to eq .[ eq : flowbound1][eq : flowbound4 ] and the gray solid line represents the onset of vortex shedding according to eq .[ eq : flowbound1a ] ( color online ) . ]a cfd based parameter study is performed to investigate the influence of various geometric parameters on the performance of jet pumps for thermoacoustic applications .a total of 19 different jet pump geometries are used and for each geometry a number of wave amplitudes are simulated , resulting in a total of 197 simulations . in correspondence with previous work ,four flow regimes are distinguished and separated in a fixed variable space. this space spans the jet pump taper angle , the dimensionless curvature and the keulegan carpenter number . 
at a certain value of , single - sided vortex propagation from the jet pump waistis initiated and the flow field becomes asymmetric between the left and right side of the jet pump .this onset is found at for a dimensionless curvature of .two different expressions are proposed to account for the effect of , but more data is required to decisively choose one of the two . at some point ,flow separation inside the jet pump can be distinguished and vortices are shed from the jet pump waist in alternating directions leading to a flow field that is more symmetric .this transition is found to be strongly dependent on the jet pump taper angle and occurs when .the amount of asymmetry in the flow field has a direct consequence on the measured time - averaged pressure drop . when no vortex shedding occurs , no dimensionless time - averaged pressure drop is observed .furthermore , as soon as the flow separation is observed , the dimensionless pressure drop has already decayed significantly compared to the maximum value .for all the performed simulations , the maximum jet pump effectiveness is observed when either single - sided or two - sided vortex propagation occurs .consequently , the practical operation area of jet pumps with and is bound between the onset of vortex shedding from the jet pump waist and the occurrence of flow separation .this can be used as a guideline for future jet pump design .the design considerations presented in this paper , are supposed to provide better insight into the validity of the quasi steady approximation that is widely used for the design of jet pumps for thermoacoustic applications .the quasi steady approximation is in most cases an ideal representation of the jet pump performance and is valid only in a small operation area . outside this region, the actual jet pump performance is expected to be significantly lower and a correction using the presented results is advised .although the current parameter study represents a wide range of jet pump geometries , it is restricted to jet pumps with a single , linear tapered hole . as such , this work can serve as a basis for further jet pump geometry optimization in terms of effectiveness , compactness and robustness .extensions to multiple holes or different shaped jet pump walls might affect the jet pump performance and the occurrence of flow separation .furthermore , the influence of turbulence on the flow separation requires further attention . if the flow separation regime could be shifted to higher keulegan carpenter numbers without a decrease in effectiveness , the operation area of a jet pump can be extended , greatly enhancing its robustness .i. idelchik , `` resistance to flow through orifices with sudden change in velocity and flow area . '' in a. ginevskiy and a. kolesnikov , editors , _ handbook of hydraulic resistance _, chapter 4 , 223275 , ( begell house , new york , 2007 ) , 4th edition .j. p. oosterhuis , s. bhler , d. wilcox and t. h. van der meer , `` computational fluid dynamics analysis of the oscillatory flow in a jet pump : the influence of taper angle . '' , 391395 , ( cnrs ipul , riga , latvia , 2014 ) .j. p. oosterhuis , s. bhler , d. wilcox and t. h. van der meer , `` a numerical investigation on the vortex formation and flow separation of the oscillatory flow in jet pumps . ''j. acoust ., * 137 * , 17221731 ( 2015 ) .k. tang , y. feng , s.h .jin , t. jin , and m. 
li , `` performance comparison of jet pumps with rectangular and circular tapered channels for a loop - structured traveling - wave thermoacoustic engine . ''energ . , * 148 * , 305313 ( 2015 ) . | the oscillatory flow through tapered cylindrical tube sections ( jet pumps ) is characterized by a numerical parameter study . the shape of a jet pump results in asymmetric hydrodynamic end effects which cause a time - averaged pressure drop to occur under oscillatory flow conditions . hence , jet pumps are used as streaming suppressors in closed - loop thermoacoustic devices . a two - dimensional axisymmetric computational fluid dynamics model is used to calculate the performance of a large number of conical jet pump geometries in terms of time - averaged pressure drop and acoustic power dissipation . the investigated geometrical parameters include the jet pump length , taper angle , waist diameter and waist curvature . in correspondence with previous work , four flow regimes are observed which characterize the jet pump performance and dimensionless parameters are introduced to scale the performance of the various jet pump geometries . the simulation results are compared to an existing quasi steady theory and it is shown that this theory is only applicable in a small operation region . based on the scaling parameters , an optimum operation region is defined and design guidelines are proposed which can be directly used for future jet pump design . + * pacs numbers : * 43.35.ud , 43.25.nm , 43.20.mv , 47.32.ff = 1 |
the knapsack problem is one of the famous tasks in combinatorial optimization .items and each of them has a profit of measuring the usefulness of this item during the trip and a size .a natural constraint is that the total size of all selected items must not exceed the capacity of the knapsack .the aim of the hitchhiker is to select a subset of items while maximizing the overall profit under the capacity constraint . ] in the knapsack problem ( kp ) we are given a set of items .every item has a profit and a size .further there is a capacity of the knapsack .the task is to choose a subset of , such that the total profit of is maximized and the total size of is at most . within the d - dimensional knapsack problem ( d - kp ) a set a of items anda number of dimensions is given .every item has a profit and for dimension the size .further for every dimension there is a capacity .the goal is to find a subset of , such that the total profit of is maximized and for every dimension the total size of is at most the capacity .further we consider the multiple knapsack problem ( mkp ) where beside items a number of knapsacks is given .every item has a profit and a size and each knapsack has a capacity .the task is to choose disjoint subsets of such that the total profit of the selected items is maximized and each subset can be assigned to a different knapsack without exceeding its capacity by the sizes of the selected items .surveys on the knapsack problem and several of its variants can be found in books by kellerer et al . and by martello et al . .the knapsack problem arises in resource allocation where there are financial constraints , e.g. capital budgeting .capital budgeting problems have been introduced in the 1950s by lorie and savage and also by manne and markowitz and a survey can be found in . from a computational point of viewthe knapsack problem is intractable .this motivates us to consider the fixed - parameter tractability and the existence of kernelizations of knapsack problems . beside the standard parameter ,i.e. the threshold value for the profit in the decision version of these problems , and the number of items , knapsack problems offer a large number of interesting parameters . among theseare the sizes , the profits , the number of different sizes , the number of different profits , the number of dimensions , the number of knapsacks , and combined parameters on these .such parameters were considered for fixed - parameter tractability of the subset sum problem , which can be regarded as a special case of the knapsack problem , in and in the field of kernelizaton in .this paper is organized as follows . 
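To make the definitions above concrete before the outline continues, here is a minimal exhaustive solver for the d-dimensional knapsack problem; with a single dimension it reduces to the ordinary KP. It is only a sketch for tiny instances (it enumerates all 2^n subsets) and the instance data are made up.

```python
from itertools import combinations

def solve_dkp_bruteforce(profits, sizes, capacities):
    """Exhaustive solver for d-KP (small instances only).

    sizes[i][j] is the size of item i in dimension j, capacities[j] the capacity
    of dimension j; the best feasible subset and its total profit are returned.
    """
    n, d = len(profits), len(capacities)
    best_profit, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(sum(sizes[i][j] for i in subset) <= capacities[j] for j in range(d)):
                profit = sum(profits[i] for i in subset)
                if profit > best_profit:
                    best_profit, best_subset = profit, subset
    return best_profit, best_subset

profits    = [6, 5, 8, 9]
sizes      = [[2, 3], [3, 1], [4, 2], [5, 4]]  # two dimensions
capacities = [8, 6]
print(solve_dkp_bruteforce(profits, sizes, capacities))  # -> (14, (0, 2))
```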
in section [ sec - pre ] , we give preliminaries on fixed - parameter tractability and kernelizations , which are two equivalent concepts within parameterized complexity theory .we give a characterization for the special case of polynomial fixed - parameter tractability , which in the case of integer - valued problems is a super - class of the set of problems allowing polynomial time algorithms .we show that a parameterized problem can be solved by a polynomial fpt - algorithm if and only if it is decidable and has a kernel of constant size .this implies a tool to show kernels of several knapsack problems .further we cite a useful theorem for finding kernels of knapsack problems with respect to parameter by compressing large integer values to smaller ones .we also give results on the connection between the existence of parameterized algorithms , approximation algorithms , and pseudo - polynomial algorithms . in section [ sec - kp ], we consider the knapsack problem .we apply known results as well as our characterizations to show fixed - parameter tractability and the existence of kernelizations . in section [ sec - mkp ] ,we look at the d - dimensional knapsack problem .we show that the problem is not pseudo - polynomial in general by a pseudo - polynomial reduction from independent set , but pseudo - polynomial for every fixed number of dimensions .we give several parameterized algorithms and conclude bounds on possible kernelizations . in section [ def - sec - mkp ] , we consider the multiple knapsack problem .we give a dynamic programming solution and a pseudo - polynomial reduction from 3-partition in order to show that the problem is not pseudo - polynomial in general , but for every fixed number of knapsacks .further we give parameterized algorithms and bounds on possible kernelizations for several parameters . in the final section [ sec - con ]we give some conclusions and an outlook for further research directions .in this section we recall basic notations for common algorithm design techniques for hard problems from the textbooks , , , and . within parameterized complexitywe consider a two dimensional analysis of the computational complexity of a problem . denoting the input by , the two considered dimensions are the input size and the value of a parameter , see and for surveys .let be a decision problem and the set of all instances of .parameterization _ or _ parameter _ of is a mapping that is polynomial time computable .the value of the parameter is expected to be small for all inputs .parameterized problem _ is a pair , where is a decision problem and is a parameterization of .for we will also use the abbreviation - .an algorithm is an _ fpt - algorithm with respect to _ , if there is a computable function and a constant such that for every instance the running time of on is at most or equivalently at most , see . for the casewhere is also a polynomial , is denoted as _polynomial fpt - algorithm with respect to . 
a parameterized problem belongs to the class and is called _ fixed - parameter tractable _ , if there is an fpt - algorithm with respect to which decides .typical running times of an fpt - algorithm w.r.t .parameter are and .a parameterized problem belongs to the class and is _ polynomial fixed - parameter tractable _ ) , if there is a polynomial fpt - algorithm with respect to which decides .please note that polynomial fixed - parameter tractability does not necessarily imply polynomial time computability for the decision problem in general .a reason for this is that within integer - valued problems there are parameter values which are larger than any polynomial in the instance size .an example is parameter for problem knapsack in section [ sec - para - kp ] . on the other hand , for small parameters polynomialfixed - parameter tractability leads to polynomial time computability .let be some parameterized problem and be some constant such that for every instance of it holds .then the existence of a polynomial fpt - algorithm with respect to implies a polynomial time algorithm for .let be some polynomial fpt - algorithm with respect to for .then has a running time of for two constants , .since we obtain a running time which is polynomial in . in order to state lower bounds we give the following corollary .[ cor - pfpt - p ] let be some parameterized problem and be some constant such that is np - hard and for every instance of it holds . then there is no polynomial fpt - algorithm with respect to for , unless .an algorithm is an _ xp - algorithm with respect to _ , if there are two computable functions such that for every instance the running time of on is at most a parameterized problem belongs to the class and is called _ slicewise polynomial _ , if there is an xp - algorithm with respect to which decides .typical running times of an xp - algorithm w.r.t .parameter are and . in order to show fixed - parameter intractability, it is useful to show the hardness with respect to one of the classes ] be the set of all positive integers between and .we apply bip versions for our knapsack problems to obtain parameterized algorithms ( theorem [ maintheorem ] ) .dynamic programming solutions for max kp are well known .the following two results can be found in the textbook .[ t - dy2 ] max kp can be solved in time .[ t - dy1 ] max kp can be solved in time , where is an upper bound on the value of an optimal solution . since for unary numbers the value ofthe number is equal to the length of the number the running times of the two cited dynamic programming solutions is even polynomial .thus max kp can be solved in polynomial time if all numbers are given in unary . in this paperwe assume that all numbers are encoded in binary .although max kp is a well known example for a pseudo - polynomial problem we want to give this result for the sake of completeness .[ th - si - bug - pseudo ] max kp is pseudo - polynomial .we consider the running time of the algorithm which proves theorem [ t - dy1 ] . in the running time is polynomial in the input size and part is polynomial in the value of the largest occurring number in every input .( alternatively we can apply the running time of the algorithm cited in theorem [ t - dy2 ] . )since max kp is an integer - valued problem defined on inputs of various informations , it makes sense to consider parameterized versions of the problem . 
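A minimal sketch of the classic capacity-indexed dynamic programme cited above, illustrating the pseudo-polynomial behaviour discussed in this section: the running time is polynomial in n and in the value of the capacity C, not in its binary encoding length. The instance data are made up.

```python
def max_kp_dp(profits, sizes, capacity):
    """Classic dynamic programme for max KP in O(n * C) time and O(C) space.

    table[c] holds the best total profit achievable with total size at most c.
    """
    table = [0] * (capacity + 1)
    for p, s in zip(profits, sizes):
        for c in range(capacity, s - 1, -1):  # reverse order: each item used at most once
            table[c] = max(table[c], table[c - s] + p)
    return table[capacity]

print(max_kp_dp(profits=[6, 5, 8, 9], sizes=[2, 3, 4, 5], capacity=8))  # -> 15
```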
by adding a threshold value for the profit to the instance and choosing a parameter from this instance , we define the following parameterized problem .-knapsack ( -kp ) a set of items , for every item , there is a size of and a profit of .further there is a capacity for the knapsack and a positive integer . is there a subset such that the total profit of is at least and the total size of is at most . for some instance of -kp its size be bounded by next we give parameterized algorithms for the knapsack problem .the parameter counts the number of distinct item sizes within knapsack instance ..overview of parameterized algorithms for kp [ cols="<,<,<,^,<,<",options="header " , ] [ maintheorem2-mkp ] there exist kernelizations for the parameterized multiple knapsack problem such that the upper bounds for the sizes of a possible kernel in table [ tab - mkp - ker - survey ] hold true . for parameter proceed as in the proof of theorem [ maintheorem2 ] which shows a kernel of size for -kp . in the case of-mkp we have to scale inequalities of type ( [ eq - ker - n-1 ] ) on variables and one inequality of type ( [ eq - ker - n-2 ] ) on variables by theorem [ th - ft87 ] .for the obtained instance we can bound by the number of items and numbers of value at most and numbers of value at most . thus -mkp has a kernel of size we can assume ( cf .beginning of section [ def - sec - mkp ] ) and since every item is assigned to at most one knapsack ( cf .( [ lp - mkp3 ] ) ) we know .thus we obtain a kernel of size for -mkp . for parameter obtain by theorem [ maintheorem - mkp ] and theorem [ th - me ] a kernel of constant size . for the remaining six parameters of table [ tab - mkp - ker - survey ] the upper bounds follow by theorem [ maintheorem - mkp ] and theorem [ th - ker ] .we have considered the max knapsack problem and its two generalizations max multidimensional knapsack and max multiple knapsack .the parameterized decision versions of all three problems allow several parameterized algorithms . from a practical point of view choosingthe standard parameterization is not very useful , since a large profit of the subset violates the aim that a good parameterization is small for every input .so for kp we suggest it is better to choose the capacity as a parameter , i.e. , since common values of are low enough such that the polynomial fpt - algorithm is practical .the same holds for d - kp and mkp .further one has a good parameter , if it is smaller than the input size but measures the structure of the instance .this is the case for the parameter number of items within all three considered knapsack problems . the special case of the max knapsack problem , where for all items is known as the subset sum problem .for this case we know that and we conclude the existence of fpt - algorithms with respect to parameter , , and .kernels for the subset sum problem w.r.t . and the number of different sizes are examined in . the closely related minimization problem \label{lp}\end{aligned}\ ] ] is known as the change making problem , whose parameterized complexity is discussed in . in our future work , we want to find better fpt - algorithms , especially for d - kp and mkp .we also want to consider the following additional parameters .* for kp * , , , , , and for d - kp , and * , , , , , and for mkp . 
also from a theoretical point of viewit is interesting to increase the number of parameters for which the parameterized complexity of the considered problems is known .for example if our problem is ]-hard the question arises , whether d - kp becomes tractable w.r.t .parameter .more general , one also might consider more combined parameters , i.e.parameters that consists of two or more parts of the input .for d - kp combined parameters including are of our interest .the existence of polynomial kernels for knapsack problems seems to be nearly uninvestigated .recently a polynomial kernel for kp using rational sizes and profits is constructed in by theorem [ th - ft87 ] .this result also holds for integer sizes and profits ( cf .theorem [ maintheorem2 ] ) . by considering polynomial fpt - algorithms we could show some lower bounds for kernels for kp ( cf .table [ tab - kp - ker - survey ] ) .we want to consider further kernels for d - kp and mkp , try to improve the sizes of known kernels , and give lower bounds for the sizes of kernels .a further task is to extend the results to more knapsack problems , e.g. max - min knapsack problem and restricted versions of the presented problems , e.g. multiple knapsack with identical capacities ( mkp - i ) , see .we also want to consider the existence of parameterized approximation algorithms for knapsack problems , see for a survey .we would like to thank klaus jansen and steffen goebbels for useful discussions .m. etscheid , s. kratsch , m. mnich , and rglin .polynomial kernels for weighted problems . in _ proceedings of mathematical foundations of computer science _ ,volume 9235 of _ lncs _ , pages 287298 .springer - verlag , 2015 .fellows , s. gaspers , and f.a .parameterizing by the number of numbers . in _ proceedings of the symposium on parameterized and exact computation _ ,volume 6478 of _ lecture notes in computer science _ , pages 123134 .springer - verlag , 2010 .f. gurski , j. rethmann , and e. yilmaz . computing partitions with applications to capital budgeting problems . in _operations research proceedings ( or 2015 ) , selected papers_. springer - verlag , 2016 . to appear .k. jansen . a fast approximation scheme for the multiple knapsack problem . in _ proceedings of the conference on current trends in theory and practice of computer science _ ,volume 7147 of _ lncs _ , pages 313324 .springer - verlag , 2012 .j. nederlof , e. j. van leeuwen , and r. van der zwaan .reducing a target interval to a few exact queries . in _ proceedings of mathematical foundations of computer science _ , volume 7464 of _ lncs _ , pages 718727 .springer - verlag , 2012 .r. niedermeier .reflections on multivariate algorithmics and problem parameterization . in _ proceedings of the annual symposium of theoretical aspects of computer science _ ,volume 5 of _ lipics _ , pages 1732 .schloss dagstuhl - leibniz - zentrum fuer informatik , 2010 . | the knapsack problem ( kp ) is a very famous np - hard problem in combinatorial optimization . also its generalization to multiple dimensions named d - dimensional knapsack problem ( d - kp ) and to multiple knapsacks named multiple knapsack problem ( mkp ) are well known problems . since kp , d - kp , and mkp are integer - valued problems defined on inputs of various informations , we study the fixed - parameter tractability of these problems . 
the idea behind fixed - parameter tractability is to split the complexity into two parts - one part that depends purely on the size of the input , and one part that depends on some parameter of the problem that tends to be small in practice . further we consider the closely related question , whether the sizes and the values can be reduced , such that their bit - length is bounded polynomially or even constantly in a given parameter , i.e. the existence of kernelizations is studied . we discuss the following parameters : the number of items , the threshold value for the profit , the sizes , the profits , the number of dimensions , and the number of knapsacks . we also consider the connection of parameterized knapsack problems to linear programming , approximation , and pseudo - polynomial algorithms . * keywords : * knapsack problem ; d - dimensional knapsack problem ; multiple knapsack problem ; parameterized complexity ; kernelization |
in recent years there have been a slew of textbooks , review articles , and lecture notes designed either as an introduction to numerical relativity , or as a convenient place to understand the state of the art of the field , not to mention classic texts on time evolution problems .these resources already serve their purpose beautifully .so , happy as i was to be asked to teach introductory material and provide written lecture notes , the obvious question is ; does the world need another set of introductory notes to hyperbolic systems ?i ve tried to come to a solution that presents the heart - and - soul of the topic as concisely as possible . in this classi will review concepts in the analysis of time - evolution partial differential equations , pdes , proving results only sparsely .the main aim is to collect together , in the form of a tool - box , the necessary weapons for treating a given system of pdes .i hope that where i have shamelessly copied , the authors of existing texts will accept my imitation as flattery . in the second lecturei treat electromagnetism as a model for general relativity , and apply each of the tools to demonstrate how they are used in practice .i expect that this application will be enlightening .i highlight the effect of gauge freedom on the pdes analysis , for a large family of gauges . to my knowledgesuch a treatment has not appeared elsewhere , although free - evolution formulations of electromagnetism have of course been studied in the literature . in physics andapplied mathematics we are frequently presented with systems of pdes .well - posedness is a fundamental property of a pde problem .it is the requirement that there be a unique solution that depends continuously , in some norm , on given data for the problem . without it ,one has simply not built a reasonable mathematical abstraction of the physical problem at hand .the model is without predictive power , because small changes in the given data might result in either arbitrarily large changes in the outcome or that there is no solution at all .if we are given a complicated system , like the field equations of general relativity , we will probably have to find solutions numerically .but if the formulation as a pde problem is ill - posed , _ no _ numerical approach can be successful !afterall , how can one construct an approximation scheme that converges to the continuum solution if the solution does nt exist ?therefore one might find it surprising that research in numerical relativity has been performed with problems that are ill - posed . spontaneously on hearingthat such versions of general relativity exist , you might think that this sounds a bit like a way of saying that the theory is broken , or somehow deficient .that impression is wrong .the answer is that for theories with gauge freedom , the precise formulation of the field equations as a system of pdes affects well - posedness . and it took time for this fact to be recognized in the context of numerical relativity . in the second lecture we will carefully investigate this for electromagnetism . the most crude way of classifying a pde is into one of the three classes , elliptic , parabolic or hyperbolic , names originally inspired by the conic - sections .the class of a pde determines what type of data has any chance of producing a well - posed problem . 
from the intuitive point of view of the physicist, one might summarize their character as follows : _ elliptic pdes _ have no intrinsic notion of time , and often arise as the steady - state , or end - state solutions of dynamical evolution , for example in electrostatics .the solutions to well - posed elliptic problems are typically `` as smooth as the coefficients allow '' .the prototype of a well - posed elliptic problem is the boundary value problem for the laplace equation ._ parabolic pdes _ describe diffusive processes .they have an intrinsic notion of time , but signals travel at infinite speed . even non - smooth initial data immediately become smooth as they evolve .the prototype of a well - posed parabolic problem is the initial value problem for the heat equation ._ hyperbolic pdes _ are the best .they describe processes which are in some sense causal ; there is an intrinsic notion of time , and crucially signals travel with _finite _ speed .discontinuities in initial data for a hyperbolic pde will often be propagated , or may even form from smooth initial data .the prototype of a well - posed hyperbolic problem is the initial value problem for the wave equation .notice that all of the prototype well - posed problems specify both a simple pde , the type of data , and the domain that is appropriate .one can also concoct pdes of mixed character , so this classification is certainly not complete .numerical relativists have to face all three , and occasionally mixed classes .but in this lecture we will focus exclusively on hyperbolic problems and well - posedness of the initial , and initial boundary value problems ._ the initial value problem : _ consider a system of pdes , which can be written , with state vector . herei employ the summation convention , denote , and assume that .the highest derivative terms are called the principal part .i will sometimes refer to as the principal matrix , although it is really a shorthand for three matrices , since i m assuming that we have three spatial dimensions .the remaining terms on the right - hand - side of are called non - principal . in this lecturewe will assume that the matrices and are constant in both time and space .we therefore call it a _ linear , constant coefficient system_. the initial value , or cauchy problem , is the following : specify data at time , with spatial coordinates . 
what is the solution at later times ?in other words data is specified _ everywhere _ in space ; the domain of the solution is in this sense unbounded .naturally many pdes of interest are not linear or do not have constant coefficients .that said , local properties of more complicated systems are determined by the behavior of the system in linear approximation , which justifies the restriction ._ well - posedness : _ if there exist constants and , such that for all initial data we have the estimate , norm , then the initial value problem for is called well - posed .notice that we restrict to initial data that are bounded in ._ strong hyperbolicity : _ given an arbitrary unit spatial vector , we say that the matrix is the principal symbol of the system .the system is called weakly hyperbolic if for every unit spatial vector , the principal symbol has real eigenvalues .if furthermore for every unit spatial vector , the principal symbol has a complete set of eigenvectors and there exists a constant , independent of , such that is formed with the eigenvectors of as columns , and we have the usual definition of the matrix norm , the system is called strongly hyperbolic . if a system is strongly hyperbolic and the multiplicity of the eigenvalues does not depend on we say that it is strongly hyperbolic of constant multiplicity .notice that if the eigenvectors of the principal symbol depend continuously on then the second condition is automatically satisfied , because varies over a compact set . in most applications we will have continuous dependence , andso checking for strong hyperbolicity amounts to doing a little linear algebra . _characteristic variables : _ the components of the vector are called the characteristic variables in the direction . up to non - principal terms and derivativestransverse to the direction the characteristic variables satisfy advection equations with speeds equal to the eigenvalues of the principal symbol . for this reason we will sometimes call the eigenvalues the speeds of the system . 
to see thisconsider , here has the eigenvalues of the principal symbol on the diagonal , we denote longitudinal derivatives , and transverse derivatives by an upper case latin index ._ well - posed strongly hyperbolic : _ the main result for the initial value problem for is that it is well - posed if and only if the system is strongly hyperbolic .we will need to fourier transform in space , and use the convention , the time derivative of the variables after fourier transform is , where we write .so in fourier space the general solution is assume that the system is strongly hyperbolic .the key to the proof of well - posedness is the use of a symmetrizer .we define the hermitian , positive definite symmetrizer by , which satisfies the crucial property , note that our choice for the definition of is not the most general that yields ; instead we could have chosen , with hermitian , positive definite , and commuting with .we do not require the most general here , and so make do with this restriction .define the norm by , computing the time derivative of the norm , a couple of lines gives the inequality , and integrating we have the well - posedness estimate , in the new norm , latexmath:[\ ] ] i leave it as an exercise for you to convince yourself that this is really a boundary condition on some combination of the electric and magnetic fields , actually in the terminology of teukolsky .laplace - fourier transforming the boundary conditions and substituting the general solution into them , we can solve for the constants and obtain the solution at the boundary , where now i ve abandoned most of the shorthands so that you can see how it really looks .all that remains is to show that each of the variables is bounded in and , which i leave as an exercise , but point you towards which shows the necessary estimates on all of the terms present here .the essential point is that we do nt have to worry about the numerators in any of the fractions , because they are obviously bounded , and since the real part of is positive , terms like appearing in the denominators are bounded away from zero . recalling that by construction corresponds directly to constraint violation , and noting that , it is clear that the constraint preserving boundary conditions work when is chosen .these calculations are largely performed in the mathematica notebook maxwell_lf.nb which accompanies the lecture . with the lorenz gauge , , well - posednesscan be shown for constraint preserving boundary conditions using the energy method with a special choice of symmetrizer , or alternatively with the kreiss - winicour cascade approach . to my knowledgethis is the first time that boundary stability has been demonstrated for arbitrary hyperbolic gauge conditions inside the family . in this lecturewe looked at different formulations of electromagnetism suitable for free - evolution .we saw that for every strongly hyperbolic pure gauge , we could build a formulation which was itself strongly hyperbolic .we saw furthermore that system is system is symmetric hyperbolic for all of these gauge choices .working then in the high - frequency frozen coefficient approximation , we used the laplace - fourier method to investigate boundary stability with constraint preserving boundary conditions .some parts of the calculations were not very explicitly presented . 
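The linear-algebra core of the strong-hyperbolicity and symmetrizer checks from the first lecture can nevertheless be illustrated in a few lines. The sketch below is a stand-alone numerical example of ours, not one of the accompanying Mathematica notebooks: it takes the principal symbol of the first-order reduction of the 1+1 wave equation, written for the pair of first derivatives of the scalar field, verifies real speeds and a complete set of eigenvectors, and checks the key property of the symmetrizer built from the inverse eigenvector matrix.

```python
# Strong-hyperbolicity check for a 2x2 principal symbol: the first-order
# reduction of the 1+1d wave equation, written for the pair (du/dt, du/dx),
# has principal symbol P = [[0, 1], [1, 0]] in the direction s = +1.
# This is a stand-alone illustration, not one of the lecture's notebooks.
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# 1. real eigenvalues (the characteristic speeds) and the eigenvector matrix T
speeds, T = np.linalg.eig(P)
assert np.allclose(np.imag(speeds), 0.0)
print("speeds:", np.real(speeds))                 # -> [ 1. -1.]

# 2. completeness: T invertible with a bounded condition number (here orthogonal)
Tinv = np.linalg.inv(T)
print("condition number of T:", np.linalg.cond(T))

# 3. symmetrizer S = (T^-1)^H T^-1: Hermitian, positive definite, and S P equals
#    its own Hermitian conjugate; this is the property used in the energy estimate.
S = Tinv.conj().T @ Tinv
assert np.all(np.linalg.eigvalsh(S) > 0)
assert np.allclose(S @ P, (S @ P).conj().T)
print("symmetrizer check passed: S P is Hermitian")
```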
to understand the ins - and - outs i recommend that you study the mathematica notebooks in tandem with the lecture notes .they are available at the school s website .some of the calculations presented in this lecture used the mathematica package _ xtensor _ for abstract tensor calculations .i would like to thank the organizers of the nrhep2 school for giving me the rewarding opportunity to present these lectures .i have benefited greatly from discussions with bernd brgmann , ronny richter , milton ruiz and andreas weyhausen .i am especially grateful to olivier sarbach for carefully reading the lecture notes and offering very helpful criticism .i am supported by the dfg grant sfb / transregio .frans pretorius .binary black hole coalescence . in w.b. burton , editor , _ physics of relativistic objects in compact binaries : from birth to coalescence _ ,volume 359 , pages 305369 .springer , netherlands , 2009 .lee lindblom , mark a. scheel , lawrence e. kidder , harald p. pfeiffer , deirdre shoemaker , and saul a. teukolsky .controlling the growth of constraints in hyperbolic evolution systems ., 69:124025 , 2004 . | these lecture notes accompany two classes given at the nrhep2 school . in the first lecture i introduce the basic concepts used for analyzing well - posedness , that is the existence of a unique solution depending continuously on given data , of evolution partial differential equations . i show how strong hyperbolicity guarantees well - posedness of the initial value problem . symmetric hyperbolic systems are shown to render the initial boundary value problem well - posed with maximally dissipative boundary conditions . i discuss the laplace - fourier method for analyzing the initial boundary value problem . finally i state how these notions extend to systems that are first order in time and second order in space . in the second lecture i discuss the effect that the gauge freedom of electromagnetism has on the pde status of the initial value problem . i focus on gauge choices , strong - hyperbolicity and the construction of constraint preserving boundary conditions . i show that strongly hyperbolic pure gauges can be used to build strongly hyperbolic formulations . i examine which of these formulations is additionally symmetric hyperbolic and finally demonstrate that the system can be made boundary stable . |
for several experimental setups , a description that completely specifies the nature of each device is unsatisfactory .for example , in a realistic scenario the assumption that the provider of the devices is fully reliable is often overoptimistic : imperfections unavoidably affect the implementation , thus turning it away from its ideal description .a device independent description of an experimental setup does not make any assumption on the involved devices , which are regarded as `` black boxes '' , while only the knowledge of the correlations between preparations , measurements and outcomes is considered . in this scenario ,a natural question is whether it is possible to derive some properties of the non - characterized devices instead of assuming them , building only upon the knowledge of these correlations . in generalone could be interested in bounding the dimension of the systems prepared by a non - characterized device ; one could also ask whether a source is intrinsically quantum or can be described classically .the framework of device independent dimension witnesses ( didws ) provides an effective answer to these questions , suitable for experimental implementation and for application in different contexts , such as quantum key distribution ( qkd ) or quantum random access codes ( qracs ) .didws were first introduced in in the context of non - local correlations for multi - partite systems .subsequently , the problem of didws was related to that of qracs in , and in it was reformulated from a dynamical viewpoint allowing one to obtain lower bounds on the dimensionality of the system from the evolution of expectation values . a general formalism for tackling the problem of didws in a prepare and measure scenariowas recently developed in .the derived formalism allows one to establish lower bounds on the classical and quantum dimension necessary to reproduce the observed correlations .shortly after , the photon experimental implementations followed , making use of polarization and orbital angular momentum degrees of freedom or polarization and spatial modes to generate ensembles of classical and quantum states , and certifying their dimensionality as well as their quantum nature .didws also allow reformulating several applications in a device - independent framework .for example , dimension witnesses can be used to share a secret key between two honest parties . in ,the authors present a qkd protocol whose security against individual attacks in a semi - device independent scenario is based on didws .the scenario is called semi - device independent because no assumption is made on the devices used by the honest parties , except that they prepare and measure systems of a given dimension .another application is given by qracs , that make it possible to encode a sequence of qubits in a shorter one in such a way that the receiver of the message can guess any of the original qubits with maximum probability of success .in qracs were considered in the semi - device independent scenario , with a view to their application in randomness expansion protocols . 
clearlyany experimental implementation of didws is unavoidably affected by losses - that can be modeled as a constraint on the measurements - and can reduce the value of the dimension witness , thus making it impossible to witness the dimension of a system .based on these considerations , it is relevant to understand whether it is possible to perform reliable dimension witnessing in realistic scenarios and , in particular , with non - optimal detection efficiency .we refer to this problem as the _ robustness of device independent dimension witnesses_. despite its relevance for experimental implementations and practical applications , this problem has not been addressed in previous literature .the aim of this work is to fill this gap .we consider the case where shared randomness between preparations and measurements is allowed .our main result is to provide the threshold in the detection efficiency that can be tolerated in dimension witnessing , in the case where one is interested in the dimension of the system as well as in the case where one s concern is to discriminate between its quantum or classical nature .the paper is structured as follows . in section [sect : didw ] we introduce the sets of quantum and classical correlations and the concept of dimension witness as a tool to discriminate whether a given correlation matrix belongs to these sets .section [ sect : sets ] discusses some properties of the sets of classical and quantum correlations . in section [sect : robustness ] we provide a threshold in the detection efficiency that is allowed in witnessing the dimensionality of a system or in discriminating between its classical or quantum nature , as a function of the dimension of the system .we summarize our results and discuss some further developments - such as dimension witnessing in the absence of correlations between preparations and measurements or entangled assisted dimension witnessing - in section [ sect : conclusion ] .let us first fix the notation . given a hilbert space ,we denote with the space of linear operators . a quantum state in represented by a density matrix , namely a positive semi - definite matrix such that = 1 ] for any . here , the notion of classicality has to be understood in an operational sense : in our scenario , the observed correlations can be reproduced by a classical variable taking possible values if , and only if , they can be reproduced by measurements on pairwise commuting states acting on a hilbert space of dimension ( this will become clearer after lemma [ thm : classicalpovms ] below ) .a general quantum measurement is represented by a positive - operator valued measure ( povm ) , namely a set of positive semi - definite hermitian matrices such that .a povm is said to be classical when = 0 ] .the general setup introduced in ref . for performing device independent dimension witnessing is given by a preparing device ( let us say on alice s side ) and a measuring device ( on bob s side ) as in fig .[ fig : setup ] . .alice ( on the left hand side ) owns a preparing device which sends the state to bob whenever alice presses button ] , giving the outcome ] and sends a fixed state to bob .bob chooses the value of index ] . 
after repeating the experimentseveral times ( we consider here the asymptotic case ) , they collect the statistics about indexes obtaining the conditional probabilities .note that we also implicitly assume that we are dealing with independent and identically distributed events .we now introduce the set ( the set ) of correlations achievable with quantum ( classical ) preparations . for any define the _ set of quantum correlations _ as the set of correlations with ] and ] , namely \}. \end{aligned}\ ] ] for any we define the _ set of classical correlations _ as the set of correlations with ] and ] , namely \}. \end{aligned}\ ] ] we write and omitting the parameters whenever they are clear from the context .[ rmk : sr ] we notice that , when shared randomness is allowed between quantum ( classical ) preparations and measurements , the set of achievable correlations is given by ( ) , where for any set we denote with the convex hull of .the following lemma shows that it is not restrictive to consider only classical povms , that is , measurements consisting of commuting operators , in the definitions of classical correlations .[ thm : classicalpovms ] for any correlation there exist a classical set of states and a set of classical povms such that ] for any . take where is an orthonormal basis with respect to which the s are diagonal ( it is straightforward to verify that for any and for any ) .we have ] , ] such that for some which depends on .interestingly , in many situations the value of the bound in the definition of a dimension witness varies depending on whether one is interested in classical or quantum ensembles of states .this gives a second application for dimension witnesses , namely quantum certification : if the system dimension is assumed , dimension witnesses allow certifying its quantum nature .it is precisely this quantum certification what makes dimension witnesses useful for quantum information protocols . motivated by remark [ rmk : sr ] , for any when [ when we say that is a classical ( quantum ) dimension witness for dimension in the presence of shared randomness . given a set of states and a set of povms , we define with and ] .now we show that .indeed can be obtained by alice and bob making use of the quantum set of states and of the set of quantum povms , with and which proves that .finally , we verify that if alice and bob make use of classical sets of states and povms and do not have access to shared randomness there is no way to achieve the probability distribution given by eq . . indeed , to have perfect discrimination between and with povm ] , one must take and orthogonal - let us say without loss of generality and , and and .due to the hypothesis of classicality of the sets of states , must be a convex combination of and .then , in order to have as in eq ., one has to choose . finally , the only possible choice for is and , which is incompatible with the remaining entries of in eq .this proves that . the relations between the sets of quantum and classical correlations are schematically depicted in fig .[ fig : sets ] . of classical correlations without shared randomness ; the rectangle represents the set of classical correlations with shared randomness ; the ellipsoid represents the set of quantum correlations with shared randomness . 
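The qubit example used in the argument above, two perfectly distinguishable preparations plus a third unbiased one measured in two bases, can be checked numerically. The snippet below is our reconstruction of the flavour of that argument rather than a transcription of the paper's equation: it computes the quantum statistics and then scans classical bit models without shared randomness, showing that any model reproducing the two deterministic entries of the first measurement misses the remaining entries by a finite amount, in line with the claim in the text.

```python
# Quantum qubit strategy (|0>, |1>, |+> measured in the Z and X bases) versus
# classical bit models without shared randomness.  States, measurements and the
# scan below are an illustrative reconstruction, not the paper's exact equation.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

states = [proj(ket0), proj(ket1), proj(ketp)]                  # rho_1, rho_2, rho_3
povms = [[proj(ket0), proj(ket1)], [proj(ketp), proj(ketm)]]   # y = 1, 2

# quantum statistics p(b = 0 | x, y) = tr(rho_x M^y_0)
Q = np.array([[np.real(np.trace(r @ M[0])) for M in povms] for r in states])
print(Q)        # rows x = 1, 2, 3  ->  [[1. 0.5] [0. 0.5] [0.5 1.]]

# Classical bit, no shared randomness: preparation x emits symbol 0 with
# probability q_x; measurement y answers b = 0 with probability r_y(0 | symbol).
# Matching the deterministic entries Q[0, 0] = 1 and Q[1, 0] = 0 forces the
# first two preparations and the first response to be deterministic (this is
# the argument in the text), so only q_3 and the second response remain free.
grid = np.linspace(0.0, 1.0, 101)
q3, a2, b2 = np.meshgrid(grid, grid, grid, indexing="ij")  # r_2(0|0)=a2, r_2(0|1)=b2
best = np.inf
for q1, q2, r10, r11 in [(1.0, 0.0, 1.0, 0.0), (0.0, 1.0, 0.0, 1.0)]:
    preds = [q3 * r10 + (1 - q3) * r11,          # p(0 | 3, 1), target 1/2
             q1 * a2 + (1 - q1) * b2,            # p(0 | 1, 2), target 1/2
             q2 * a2 + (1 - q2) * b2,            # p(0 | 2, 2), target 1/2
             q3 * a2 + (1 - q3) * b2]            # p(0 | 3, 2), target 1
    targets = [0.5, 0.5, 0.5, 1.0]
    gap = np.max([np.abs(p - t) for p, t in zip(preds, targets)], axis=0)
    best = min(best, gap.min())
print("best classical fit, largest entry error:", best)     # ~0.25, never 0
```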
]in practical applications , losses ( due to imperfections in the experimental implementations or artificially introduced by a malicious provider ) can noticeably affect the effectiveness of dimension witnessing .the main result of this section is to provide a threshold value for the detection efficiency that allows one to witness the dimension of the systems prepared by a source or to discriminate between its quantum or classical nature .the task is to determine whether a given conditional probability distribution belongs to a particular convex set , namely or ( see remark [ rmk : sr ] ) .the situation is illustrated in figure [ fig : task ] . and of the sets of quantum and classical correlations are represented as in figure [ fig : sets ] . in the presence of loss ,only a subset of the possible correlations is attainable .the subset , surrounded by bold line in the figure , is parametrized by detection efficiency .the task is to find the threshold value in such that dimension witnessing is still possible .for example , when the task is to discriminate between the quantum or classical nature of a source , one is interested in achieving correlations in the dark area of the figure , and our goal is to determine the values of such that this area is not null . ]the experimental implementation is constrained to be lossy , namely it can be modeled considering an ideal preparing device followed by a measurement device with non - ideal detection efficiency .this means that any povm on bob s side is replaced by a povm with detection efficiency , namely we notice that each lossy povm has one outcome more than the ideal one , corresponding to the no - click event . in a general model, the detection efficiency may be different for any povm .nevertheless , in the following we assume that they have the same detection efficiency , which is a reasonable assumption if the detectors have the same physical implementation .analogously given a set of povms we will denote with the corresponding set of lossy povms . upon defining with ] , ] normalized as in eq .one has one has \\ & = \eta \ , w(r , p^{(1 ) } ) , \end{aligned}\ ] ] where the first equality follows from eq . and the second from the fact that due to the normalization given in eq . .in particular from lemma [ thm : lossy ] it follows that for any linear dimension witness one has due to lemma [ thm : lossy ] , it is possible to recast the optimization of dimension witnesses in the presence of loss to the optimization in the ideal case . then due to lemma [ thm : pure ] it is not restrictive to carry out the optimization with pure states and no shared randomness .consider the case where , , and .using the technique discussed in appendix [ sect : optw ] one can verify that the witness given by eq . with the following coefficients is the most robust to non - ideal detection efficiency .this fact should not be surprising , as we notice that this witness relies on only out of outcomes . according to ,we denote it . in ( see also ) it was conjectured that for any dimension the dimension witness is tight in the absence of loss .now we provide upper and lower bounds for the maximal value where the maximization is over any set of states and any set of povms with .[ thm : recbound ] for any dimension we have .the statement follows from the recursive expression , where and noticing that and can be optimized independently .a tight upper bound for was provided in . 
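Returning for a moment to Lemma [thm:lossy], the scaling it expresses is easy to verify numerically. In the snippet below the qubit preparations, the two measurements and the witness coefficients are arbitrary illustrative choices; since the paper's normalization condition is not reproduced here, we simply take a witness that places no weight on the no-click outcome, for which the scaling is immediate. Building the lossy POVMs explicitly and re-evaluating the witness returns exactly the detection efficiency times the ideal value.

```python
# Direct check of the scaling used above: building the lossy POVMs explicitly
# (each click element scaled by eta, plus a no-click element (1 - eta) * I) and
# re-evaluating a witness that puts no weight on the no-click outcome returns
# exactly eta times the ideal value.  All data below are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(d=2):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

states = [random_pure_state() for _ in range(3)]             # N = 3 preparations
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]               # y = 1
X = [0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]),               # y = 2
     0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])]
povms = [Z, X]
coeff = rng.normal(size=(3, 2, 2))                           # witness coefficients

def witness(povm_sets):
    # only the two "click" outcomes of each measurement carry witness weight
    return sum(coeff[x, y, b] * np.real(np.trace(states[x] @ povm_sets[y][b]))
               for x in range(3) for y in range(2) for b in range(2))

def lossy(povm, eta):
    return [eta * povm[0], eta * povm[1], (1.0 - eta) * np.eye(2)]

eta = 0.7
povms_eta = [lossy(p, eta) for p in povms]
assert all(np.allclose(sum(p), np.eye(2)) for p in povms_eta)    # still valid POVMs
print(witness(povms), witness(povms_eta),
      np.isclose(witness(povms_eta), eta * witness(povms)))      # -> True
```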
in the following lemmawe provide a constructive proof suitable for generalization to higher dimensions .[ thm:2bound ] for dimension we have .the statement follows from standard optimization with lagrange multipliers method and from the straightforward observation that given two normalized pure states and , if a pure state can be decomposed as follows then .making use of lemmas [ thm : recbound ] and [ thm:2bound ] , we provide upper and lower bounds on as follows where the second inequality follows from the non discriminability of states in dimension ( see ) .we now make use of these facts to provide our main result , namely a lower threshold for the detection efficiency required to reliably dimension witnessing .we consider the problem of lower bounding the dimension of a system prepared by a non - characterized source in proposition [ thm : etac ] , as well as the problem of discriminating between the quantum or classical nature of a source in proposition [ thm : etad ] .[ thm : etac ] for any there exists a dimension witnessing setup such that it is possible to discriminate between the quantum and classical nature of a -dimensional system using povms with detection efficiency whenever furthermore one has we provide a constructive proof of the statement .take , , and , and we show that satisfies the thesis .we notice that the maximum value of attainable with classical states is given by . then is the minimum value of the detection efficiency such that can discriminate a quantum system from a classical one .due to lemma [ thm : lossy ] we have eq . . from eq .the lower and upper bounds for given in eq . straightforwardly follow .notice that in eq . can be numerically evaluated with the techniques discussed in appendix [ sect : optid ] .figure [ fig : etac ] plots the value of for different values of the dimension of the hilbert space .the threshold in the detection efficiency when is , going asymptotically to with as . as in eq . as a function of the dimension , obtained through numerical optimization of with algorithm [ thm : algo2 ] .the lower bound ( lower line ) and upper bound ( upper line ) given by eq . are also plotted .as expected , the upper bound is tight for .the detection efficiency asymptotically goes to as since its upper and lower bound do the same . ][ thm : etad ] for any there exists a dimension witnessing setup such that it is possible to lower bound the dimension of a -dimensional system using povms with detection efficiency whenever furthermore one has we provide a constructive proof of the statement .take , , and , and we show that satisfies the thesis .we notice that the maximum value of attainable in any dimension is given by . then is the minimum value of the detection efficiency such that can lower bound the dimension of a dimensional system . due to lemma[ thm : lossy ] we have eq . .from eq . the lower bound to given by eq . straightforwardly follows .notice that in eq . can be numerically evaluated with the techniques discussed in appendix [ sect : optid ] .figure [ fig : etad ] plots the value of for different values of the dimension of the hilbert space .the threshold in the detection efficiency when is , going asymptotically to with as .we notice that grows faster than , thus showing that for fixed dimension , the discrimination between the quantum or classical nature of the source is more robust to loss than lower bounding the dimension of the prepared states . as in eq . 
as a function of the dimension , obtained through numerical optimization of with algorithm [ thm : algo2 ] .the lower bound ( lower line ) given by eq . is also plotted .as expected , the lower bound is tight for .the detection efficiency asymptotically goes to as since its lower bound does the same ( and is a trivial upper bound ) . ]in this work we addressed the problem whether a lossy setup can provide a reliable lower bound on the dimension of a classical or quantum system .first we provided some relevant properties of the sets of classical and quantum correlations attainable in a dimension witnessing setup .then we introduced analytical and numerical tools to address the problem of the robustness of didws , and we provided the amount of loss that can be tolerated in dimension witnessing .the presented results are of relevance for experimental implementations of didws , and can be naturally applied to semi - device independent qkd and qracs .we notice that , while we provided analytical proofs of our main results , i.e. propositions [ thm : etac ] and [ thm : etad ] , their optimality as a bound relies on numerical evidences . in particular , they are optimal if the dimension witness is indeed the most robust to loss for any , which is suggested by numerical evidence obtained with the techniques of appendix [ sect : optw ] and appendix [ sect : optid ] .thus , a legitimate question is whether the bounds provided in propositions [ thm : etac ] and [ thm : etad ] are indeed optimal .moreover , it is possible to consider models of loss more general than the one considered here , e.g. one in which a different detection efficiency is associated to any povm . a natural generalization of the problem of didws , in the ideal as well as in the lossy scenario , is that in the absence of correlations between the preparations and the measurements . in this case , as discussed in this work , the relevant sets of correlations are and , which are non - convex as shown in section [ sect : didw ] .the non convexity of the relevant sets allows the exploitation of non - linear witnesses - as opposed to what we did in the present work .an intriguing but still open question is whether there are situations in which this exploitation allows to dimension witness for any non - null value of the detection efficiency .another natural generalization of the problem of didws is that of entangled assisted didws , namely when entanglement is allowed to be shared between the preparing device on alice s side and the measuring device on bob s side .this problem is similar to that of super - dense coding .consider again fig . [fig : setup ] . in the simplest super - dense coding scenario ,alice presses one button out of , while bob always performs the same povm ( ) obtaining one out of outcomes .the dimension of the hilbert space is , but a pair of maximally entangled qubits is shared between the parties . 
in this case , the results of imply that a classical system of dimension ( quart ) can be sent from alice to bob by sending a qubit ( corresponding to half of the entangled pair ) .consider the general scenario where now the two parties are allowed to share entangled particles .the super - dense coding protocol automatically ensures that by sending a qubit alice and bob can always achieve the same value of any didw as attained by a classical quart .remarkably , the super - dense coding protocol turns out not to be optimal , as we identified more complex protocols beating it .in particular , we found a situation for which , upon performing unitary operations on her part of the entangled pair and subsequently sending it to bob , alice can achieve correlations that can not be reproduced upon sending a quart .this thus proves the existence of communication contexts in which sending half of a maximally entangled pair is a more powerful resource than a classical quart .this observation is analogous to that done in , where it was shown that entangled assisted qracs ( where an entangled pair of qubits is shared between the parties ) outperform the best of known qracs . for these reasonswe believe that the problem of entangled assisted didws deserves further investigation .we are grateful to nicolas brunner , stefano facchini , and marcin pawowski for very useful discussions and suggestions .m. d. thanks anne gstottner and the human resources staff at icfo for their invaluable support .this work was funded by the spanish fis2010 - 14830 project and generalitat de catalunya , the european percent erc starting grant and q - essence project , and the japanese society for the promotion of science ( jsps ) .given a linear dimension witness the following algorithm converges to a local maximum of .[ thm : algo1 ] for any set of pure states and any set of povms , 1 .let { |\psi_i^{(n)}\rangle} ] , 3 .normalize , 4 .normalize with .as for all steepest - ascent algorithm , there is no protection against the possibility of convergence toward a local , rather than a global , maximum .hence one should run the algorithm for different initial ensembles in order to get some confidence that the observed maximum is the global maximum ( although this can never be guaranteed with certainty ) .any initial set of states and any initial set of povms can be used as a starting point , except for a subset corresponding to minima of .these minima are unstable fix - points of the iteration , so even small perturbations let the iteration converge to some maxima .the parameter controls the length of each iterative step , so for too large , an overshooting can occur .this can be kept under control by evaluating at the end of each step : if it decreases instead of increasing , we are warned that we have taken too large . referring to fig .[ fig : setup ] , the simplest non - trivial scenario one can consider is the one with preparations and povms each with outcomes , one of which corresponding to no - click event . in this case one has several tight classical didws . applying algorithm [ thm : algo1 ]we verified that among them the most robust to loss is given by eq . with coefficients given by eq . .the following lemma proves that the povms maximizing for any dimension are such that one of their elements is a projector on a pure state , thus generalizing a result from . for any fixed set of pure statesdefine , , and . then clearly , and for any . 
from eq .it follows immediately that the optimal set of povms is such that $ ] .the optimum of is achieved when is the sum of the eigenvectors of corresponding to positive eigenvalues . upon denoting with the eigenvalues of , the weyl inequality ( see for example ) holds for any . since and for any and for any , the thesis follows immediately .the same remarks made about algorithm [ thm : algo1 ] hold true for algorithm [ thm : algo2 ] .nevertheless , we verified that in practical applications algorithm [ thm : algo2 ] always seems to converge to a global , not a local maximum .this can be explained considering that without loss of generality it optimizes over a smaller set of povms when compared to algorithm [ thm : algo1 ] .moreover , we noticed that the optimal sets of states and povms are real , namely there exists a basis with respect to which states and povm elements have all real matrix entries .a similar observation was done in in the context of bell s inequalities .n. brunner , s. pironio , a. acn , n. gisin , a. a. mthot , and v. scarani , phys .lett . * 100 * , 210503 ( 2008 ) .s. wehner , m. christandl , and a. c. doherty , phys . rev .a * 78 * , 062112 ( 2008 ) .m. m. wolf and d. perez - garca , phys .lett . * 102 * , 190504 ( 2009 ) .r. gallego , n. brunner , c. hadley , and a. acn , phys .lett . * 105 * , 230501 ( 2010 ) .m. hendrych , r. gallego , m. miuda , n. brunner , a. acn , and j. p. torres , nature phys .* 8 * , 588 - 591 ( 2012 ) .h. ahrens , p. badziag , a. cabello , and m. bourennane , arxiv : quant - ph/1111.1277 . m. pawowski and n. brunner , phys .a * 84 * , 010302 ( 2011 ) .li , m. pawowski , z .- q .yin , g .- c .guo , and z .- f .han , arxiv : quant - ph/1109.5259 . m. pawowski and m. ukowski , phys .a * 81 * , 042326 ( 2010 ) .i. l. chuang and m. a. nielsen , _ quantum information and communication _ ( cambridge , cambridge university press , 2000 ) .this is not the case in the hybrid scenario where different types of detectors ( e.g. photodetectors and homodyne measurements ) are used .a similar scenario was proposed for example in the context of bell inequalities .d. cavalcanti , n. brunner , p. skrzypczyk , a. salles , v. scarani , phys .a * 84 * , 022105 ( 2011 ) .masanes , arxiv : quant - ph/0210073 c. h. bennett and s. j. wiesner , phys .lett . * 69 * , 2881 ( 1992 ) .masanes , arxiv : quant - ph/0512100 .r. bhatia , _ positive definite matrices _ ( princeton university press , princeton , 2006 ) .t. franz , f. furrer , and r. f. werner , phys .* 106 * , 250502 ( 2011 ) . | device independent dimension witnesses provide a lower bound on the dimensionality of classical and quantum systems in a `` black box '' scenario where only correlations between preparations , measurements and outcomes are considered . we address the problem of the robustness of dimension witnesses , namely that to witness the dimension of a system or to discriminate between its quantum or classical nature , even in the presence of loss . we consider the case when shared randomness is allowed between preparations and measurements and provide a threshold in the detection efficiency such that dimension witnessing can still be performed . |
next generation wireless communication networks are required to provide ubiquitous and high data rate communication with guaranteed quality of service ( qos ) .these requirements have led to a tremendous need for energy in both transmitter(s ) and receiver(s ) . in practice ,portable mobile devices are typically powered by capacity limited batteries which require frequent recharging . besides, battery technology has developed very slowly over the past decades and the battery capacities available in the near future will be unable to improve this situation .consequently , energy harvesting based mobile communication system design has become a prominent approach for addressing this issue . in particular , it enables self - sustainability for energy limited communication networks .in addition to conventional energy harvesting sources such as solar , wind , and biomass , wireless power transfer has been proposed as an emerging alternative energy source , where the receivers scavenge energy from the ambient radio frequency ( rf ) signals .in fact , wireless power transfer technology not only eliminates the need of power cords and chargers , but also facilitates one - to - many charging due to the broadcast nature of wireless channels .more importantly , it enables the possibility of simultaneous wireless information and power transfer ( swipt ) leading to many interesting and challenging new research problems which have to be solved to bridge the gap between theory and practice . in ,the authors investigated the fundamental trade - off between harvested energy and wireless channel capacity across a pair of coupled inductor circuit in the presence of additive white gaussian noise .then , in , the study was extended to multiple antenna wireless broadcast systems . in ,the energy efficiency of multi - carrier systems with swipt was revealed .specifically , it was shown in that integrating an energy harvester into a conventional information receiver improves the energy efficiency of a communication network . in ,robust beamforming design for swipt systems with physical layer security was investigated .the results in indicate that both the information rate and the amount of harvested energy at the receivers can be significantly increased at the expense of an increase in the transmit power .however , despite the promising results in the literature , the performance of wireless power / energy transfer systems is still limited by the distance between the transmitter and the receiver due to the high signal attenuation associated with path loss .coordinated multipoint ( comp ) transmission is an important technique for extending service coverage , improving spectral efficiency , and mitigating interference .a possible deployment scenario for comp networks is to split the functionalities of the base stations between a central processor ( cp ) and a set of remote radio heads ( rrhs ) . in particular, the cp performs the power hungry and computationally intensive baseband signal processing while the rrhs are responsible for all radio frequency ( rf ) operations such as analog filtering and power amplification . besides, the rrhs are distributed across the network and connected to the cp via backhaul links .this system architecture is known as cloud computing network . 
as a result, the comp systems architecture inherently provides spatial diversity for combating path loss and shadowing .it has been shown that a significant system performance gain can be achieved when full cooperation is enabled in comp systems . however , in practice , the enormous signalling overhead incurred by the information exchange between the cp and the rrhs may be infeasible when the capacity of the backhaul link is limited .hence , resource allocation for comp networks with finite backhaul capacity has attracted much attention in the research community . in , the authors studied the energy efficiency of comp multi - cell networks with capacity constrained backhaul links . in and ,iterative sparse beamforming algorithms were proposed to reduce the load of the backhaul links while providing reliable communication to the users .however , the energy sources of the receivers in were assumed to be perpetual and this assumption may not be valid for power - constrained portable devices . on the other hand , the signals transmitted by the rrhs could be exploited for energy harvesting by the power - constrained receivers for extending their lifetimes . however , the resource allocation algorithm design for comp swipt systems has not been solved sofar , and will be tackled in this paper . motivated by the aforementioned observations , we formulate the resource allocation algorithm design for multiuser comp communication networks with swipt as a non - convex optimization problem .we jointly minimize the total network transmit power and the maximum capacity consumption per backhaul link while ensuring quality of service ( qos ) for reliable communication and efficient wireless power transfer .in particular , we propose an iterative algorithm which provides a local optimal solution for the considered optimization problem .we use boldface capital and lower case letters to denote matrices and vectors , respectively . , , and represent the hermitian transpose , trace , and rank of matrix ; and indicate that is a positive definite and a positive semidefinite matrix , respectively ; denotes the vectorization of matrix by stacking its columns from left to right to form a column vector ; is the identity matrix ; denotes the set of all matrices with complex entries ; denotes the set of all hermitian matrices ; denotes a diagonal matrix with the diagonal elements given by ; and denote the absolute value of a complex scalar and the -norm of a vector , respectively .in particular , is known as the -norm of a vector and denotes the number of non - zero entries in the vector ; the circularly symmetric complex gaussian ( cscg ) distribution is denoted by with mean and variance ; stands for distributed as " ; {x} ] and , respectively .indeed , when both the power budget and the number of antennas at the rrhs are sufficiently large , full cooperation may not be beneficial . in this case , conveying the data of each ir to one rrh may be sufficient for providing the qos requirements for reliable communication and efficient power transfer . hence , backaul system resources can be saved . besides , it can be seen from figure [ fig : backhaul_nt_subfig2 ] that the system with co - located antennas requires the smallest amount of total system backhaul capacity since the data of each ir is conveyed only to a single rrh . 
however , the superior performance of the co - located antenna system in terms of total network backhaul capacity consumption incurs the highest capacity consumption per backhaul link among all the schemes , cf .figure [ fig : backhaul_nt_subfig1 ] . in figure[ fig : pt_nt ] , we study the average total transmit power versus total number of transmit antennas for different resource allocation schemes . it can be observed that the total transmit power decreases monotonically with increasing number of transmit antennas .this is due to the fact that the degrees of freedom for resource allocation increase with the number of transmit antennas , which enables a more power efficient resource allocation .besides , the proposed algorithm consumes a lower transmit power compared to the optimal exhaustive search scheme .this is because the exhaustive search scheme consumes a smaller backhaul capacity at the expense of a higher transmit power .furthermore , the system with co - located antennas consumes a higher transmit power than the proposed scheme and the full cooperation scheme in all considered scenarios which reveals the power saving potential of comp due to its inherent spatial diversity . on the other hand, it is expected that the full cooperation scheme is able to achieve the lowest average total transmit power at the expense of an exceedingly large backhaul capacity consumption , cf .figure [ fig : backhaul_nt ] . in figure[ fig : hp_nt ] , we study the average total harvested power versus the total number of transmit antennas in the network for different resource allocation schemes . we compare the average total harvested power of all resource allocation schemes with a lower bound which is computed by assuming that constraint c5 is satisfied with equality for all ers .as can be observed , the total average harvested powers in all considered scenarios are monotonically non - increasing with respect to the number of transmit antennas .this is because the extra degrees of freedom offered by the increasing number of antennas improve the efficiency of resource allocation . in particular ,the direction of beamforming matrix can be more accurately steered towards the irs which reduces the power allocation to and the leakage of power to the ers .this also explains the lower harvested power for the full cooperation scheme and the system with co - located antennas since they both exploit all transmit antennas in the network for joint transmission . 
on the other hand ,the highest amount of radiated power can be harvested for the exhaustive search scheme at the expense of a higher total transmit power .in this paper , we studied the resource allocation algorithm design for comp multiuser communication systems with swipt .the algorithm design was formulated as a non - convex combinatorial optimization problem with the objective to jointly minimize the total network transmit power and the maximum capacity consumption of the backhaul links .the proposed problem formulation took into account qos requirements for communication reliability and power transfer .a suboptimal iterative resource allocation algorithm was proposed for obtaining a locally optimal solution of the considered problem .simulation results showed that the proposed suboptimal iterative resource allocation scheme performs close to the optimal exhaustive search scheme and provides a substantial reduction in backhaul capacity consumption compared to full cooperation .besides , our results unveiled the potential power savings enabled by comp networks compared to centralized systems with multiple antennas co - located for swipt .it can be verified that ( [ eqn : sdp_relaxation ] ) satisfies slater s constraint qualification and is jointly convex with respect to the optimization variables .thus , strong duality holds and solving the dual problem is equivalent to solving the primal problem . for the dual problem ,we need the lagrangian function of the primal problem in ( [ eqn : sdp_relaxation ] ) which is given by where.\notag\end{aligned}\ ] ] here , denotes the collection of terms that only involve variables that are independent of . is the dual variable matrix for constraint c8 . , , , , , , and are the scalar dual variables for constraints c1c7 , respectively .then , the dual problem of ( [ eqn : sdp_relaxation ] ) is given by to .for the sake of notational simplicity , we define and as the set of optimal primal and dual variables of ( [ eqn : sdp_relaxation ] ) , respectively .now , we consider the following karush - kuhn - tucker ( kkt ) conditions which are useful in the proof : where is obtained by substituting the optimal dual variables into ( [ eqn : a_k ] ) . in ( [ eqn : kkt - complementarity ] ) indicates that for , the columns of are in the null space of .therefore , if , then the optimal beamforming matrix must be a rank - one matrix .we now show by contradiction that is a positive definite matrix with probability one in order to reveal the structure of .let us focus on the dual problem in ( [ eqn : dual ] ) .for a given set of optimal dual variables , , power supply variables , , and auxiliary variable , the dual problem in ( [ eqn : dual ] ) can be written as suppose is not positive definite , then we can choose as one of the optimal solutions of ( [ eqn : dual2 ] ) , where is a scaling parameter and is the eigenvector corresponding to one of the non - positive eigenvalues of .we substitute into ( [ eqn : dual2 ] ) which leads to on the other hand , since the channel vectors of and are assumed to be statistically independent , it follows that by setting , the term and the dual optimal value becomes unbounded from below . besides , the optimal value of the primal problem is non - negative for , strong duality does not hold which leads to a contradiction .therefore , is a positive definite matrix with probability one , i.e. , . 
by exploiting ( [ eqn : lagrangian_gradient ] ) and a basic inequality for the rank of matrices , we have thus , is either or .furthermore , is required to satisfy the minimum sinr requirement of ir in c1 for .hence , and hold with probability one .in other words , the optimal joint beamformer can be obtained by performing eigenvalue decomposition of and selecting the principal eigenvector as the beamformer .d. w. k. ng , e. s. lo , and r. schober , `` wireless information and power transfer : energy efficiency optimization in ofdma systems , '' _ ieee trans .wireless commun ._ , vol . 12 ,6352 6370 , dec .d. w. k. ng , e. s. lo , and r. schober , `` robust beamforming for secure communication in systems with wireless information and power transfer , '' _ ieee trans .wireless commun ._ , vol .pp , apr . 2014 .d. lee , h. seo , b. clerckx , e. hardouin , d. mazzarese , s. nagata , and k. sayana , `` transmission and reception in lte - advanced : deployment scenarios and operational challenges , '' _ ieee commun . magazine _ , vol .50 , pp . 148155 , feb .r. irmer , h. droste , p. marsch , m. grieger , g. fettweis , s. brueck , h. p. mayer , l. thiele , and v. jungnickel , `` coordinated multipoint : concepts , performance , and field trial results , '' _ ieee commun . magazine _ , vol .102111 , feb . 2011 .d. w. k. ng , e. s. lo , and r. schober , `` energy - efficient resource allocation in multi - cell ofdma systems with limited backhaul capacity , '' _ ieee trans .wireless commun ._ , vol . 11 , pp36183631 , oct . | this paper studies the resource allocation algorithm design for multiuser coordinated multipoint ( comp ) networks with simultaneous wireless information and power transfer ( swipt ) . in particular , remote radio heads ( rrhs ) are connected to a central processor ( cp ) via capacity - limited backhaul links to facilitate comp joint transmission . besides , the cp transfers energy to the rrhs for more efficient network operation . the considered resource allocation algorithm design is formulated as a non - convex optimization problem with a minimum required signal - to - interference - plus - noise ratio ( sinr ) constraint at multiple information receivers and a minimum required power transfer constraint at the energy harvesting receivers . by optimizing the transmit beamforming vectors at the cp and energy sharing between the cp and the rrhs , we aim at jointly minimizing the total network transmit power and the maximum capacity consumption per backhaul link . the resulting non - convex optimization problem is np - hard . in light of the intractability of the problem , we reformulate it by replacing the non - convex objective function with its convex hull , which enables the derivation of an efficient iterative resource allocation algorithm . in each iteration , a non - convex optimization problem is solved by semi - definite programming ( sdp ) relaxation and the proposed iterative algorithm converges to a local optimal solution of the original problem . simulation results illustrate that our proposed algorithm achieves a close - to - optimal performance and provides a significant reduction in backhaul capacity consumption compared to full cooperation . besides , the considered comp network is shown to provide superior system performance as far as power consumption is concerned compared to a traditional system with multiple antennas co - located . |
fluid and diffusion limits for queuing systems have been applied successfully toward performance analysis and optimization of various queuing systems .we are concerned here with performance analysis in steady - state and , more specifically , with brownian steady - state approximations for continuous time markov chains ( ctmcs ) .the framework of diffusion limits begins with a sequence of ctmcs , and properly scaled and centered versions for some sequence that arises from the specific structure of the model . with appropriate assumptions on the parameters of the ctmc , and on the sequence of initial conditions , one typically proceeds to establish process convergence in the appropriate function space where is a diffusion process .if each of the as well as are ergodic , and is a continuous function such that is uniformly integrable , one can subsequently conclude that \rightarrow{\mathbb{e}}\bigl[f\bigl({\widehat{x}}(\infty ) \bigr)\bigr ] \qquad\mbox{as } n\rightarrow\infty , \ ] ] where and have , respectively , the steady - state distributions of and . a relatively general framework toward proving the required uniform integrability has been developed in and applied there to generalized jackson networks ;see also .it was subsequently applied successfully to other queueing systems .this so - called _ interchange of limits _ establishes that ={\mathbb{e}}\bigl[f\bigl ( { \widehat{x}}(\infty ) \bigr)\bigr]+o(1)\label{eq : gapintro},\ ] ] and supports using ] .a central benefit of the limit approach to approximations is the relative tractability of the diffusion relative to the original ctmc .the convergence rate embedded in the term is not , however , precisely captured by these convergence arguments . in this paper , we prove that an appropriately defined sequence of diffusion models , that are as tractable as the diffusion limit , provides accurate approximations for the steady - state of the ctmcs with an approximation gap that shrinks at a rate of . our approach does not require process convergence as in ( [ eq : fclt ] ) .we proceed to an informal exposition of the results and key ideas .the markov chains that we consider have a semi - martingale representation where is a local martingale with respect to a properly defined filtration .we define a fluid model by ( heuristically ) removing the martingale term , that is , if the fm has a unique stationary point satisfying , it subsequently makes sense to center around and consider the centered and scaled process .the process satisfies the equation where . 
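A toy example may help fix ideas before the formal development continues. The sketch below is our own illustration; the chain, its rates and all parameters are chosen for convenience and are not taken from the paper. For a birth-death chain of M/M/infinity type with arrival rate proportional to n and unit per-customer service rate, the fluid model has a unique stationary point, the diffusion model frozen there is an Ornstein-Uhlenbeck process, and simulating the chain shows that the centred, square-root-of-n scaled steady-state fluctuations recover the predicted stationary variance.

```python
# Toy illustration, not from the paper: an M/M/infinity-type birth-death chain
# X^n with arrival rate n * lam and unit per-customer service rate.  The fluid
# model is x' = lam - x, with unique stationary point x* = lam, and the diffusion
# model frozen at x* is an Ornstein-Uhlenbeck process with stationary variance
# lam.  The centred, sqrt(n)-scaled steady-state fluctuations should match it.
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0

def scaled_steady_state_variance(n, horizon=200.0, burn_in=20.0):
    """Gillespie simulation; time-weighted variance of (X^n - n*lam)/sqrt(n)."""
    x, t = int(n * lam), 0.0                 # start at the fluid stationary point
    weights, values = [], []
    while t < horizon:
        birth, death = n * lam, float(x)     # transition rates out of state x
        total = birth + death
        dt = rng.exponential(1.0 / total)
        if t > burn_in:
            weights.append(min(dt, horizon - t))
            values.append((x - n * lam) / np.sqrt(n))
        t += dt
        x += 1 if rng.random() < birth / total else -1
    w = np.array(weights) / np.sum(weights)
    v = np.array(values)
    return np.sum(w * v**2) - np.sum(w * v)**2

for n in [25, 100, 400]:
    print(f"n = {n:4d}:  scaled variance ~ {scaled_steady_state_variance(n):.2f}"
          f"   (diffusion model predicts {lam})")
```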
under appropriate conditions , a strong approximation for given by the diffusion process where is a standard brownian motion and arises naturally from the markov - chain transition functions and is intimately related to the predictable quadratic variation of the martingale .strong - approximations theory predicts an approximation gap that is logarithmic in where is the time horizon ; see remark [ rem : strong ] .a cruder approximation is obtained by replacing the ( state dependent ) diffusion coefficient with its value at the stationary point of the fm , , to obtain the diffusion process specified by the equation our main finding is that this straightforward heuristic derivation of the dm building on a stationary point of the fluid model to construct a simplified diffusion model may provide , insofar as steady - state analysis is concerned , an impressively accurate approximation .more precisely , but still proceeding informally at this stage , we prove the following .let be the generator of the diffusion . if there exists a function together with finite positive constants and a compact set ( all not depending on ) such that then -{\mathbb{e}}\bigl[f\bigl ( { \widehat{x}}^n(\infty)\bigr)\bigr]={\mathcal{o}}(1/\sqrt{n})\ ] ] for all functions with . the uniform lyapunov requirement ul must be proved on a case - by - case basis , and we illustrate this via two examples in section [ sec : examples ] .the requirement ul restricts the scope of our results to ( sequences of ) chains in which the corresponding dm is exponentially ergodic .the sequence of poisson equations ( associated with the sequence of dms ) is central to our proofs .let be the steady - state distribution of the diffusion model and be that of the scaled ctmc .let be such that .[ the requirement that is not necessary and is imposed in this discussion for expositional purposes . ]we will show that a solution exists for the dm s poisson equation based on it s rule one expects that ={\mathbb{e}}_{\pi^n } \bigl[u^n_f \bigl({\widehat{y}}^n(0)\bigr ) \bigr]+{\mathbb{e}}_{\pi^n } \biggl[\int _ 0^t { \mathcal{a}}^n u^n_f \bigl({\widehat{y}}^n(s)\bigr)\,ds \biggr].\ ] ] since the dm has , by construction , a diffusion coefficient that does not depend on the state , the poisson equation is ( for each ) a linear pde , and we are able to build on existing theory to identify gradient estimates that are uniform in the index .these gradient estimates facilitate proving that ={\mathbb{e}}_{\nu^n } \bigl[u^n_f\bigl ( { \widehat{x}}^n(0)\bigr ) \bigr]+{\mathbb{e}}_{\nu^n } \biggl[\int _ 0^t { \mathcal{a}}^n u^n_f \bigl({\widehat{x}}^n(s)\bigr)\,ds \biggr]+t{\mathcal{o}}(1/\sqrt{n}).\ ] ] informally speaking , this shows that `` almost solves '' the poisson equation for the ctmc .stationarity then allows us to conclude that =-t{\mathbb{e}}_{\nu ^n } \biggl[\int_0^t f\bigl({\widehat{x}}^n(s)\bigr)\,ds \biggr]=t{\mathcal{o}}(1/\sqrt{n}),\ ] ] and , in particular , that recalling that , it then follows that in the process of proving these results , we explore connections between the stability of the ctmc and that of the corresponding fm and dm .refined properties of the poisson equation in the context of diffusion approximations for diffusions with a fast component are used in . 
in the spirit of this paper , derivative bounds for certain dirichlet problemsare used in to study _ universal _approximations for the birth - and - death process underlying the so - called erlang - a queue .the proofs there are based on the study of excursions but are closely related to ours ; we revisit the erlang - a queue in section [ sec : examples ] .the use of gradient estimates in conjunction with martingale arguments is also the theme in where these are used to study optimality gaps in the control of a multi - class queue .the poisson equation is replaced there with the pde associated with the hjb equation . _notation_. unless stated otherwise , all convergence statements are for .we use to denote the euclidean norm of in ( the dimension will be clear form the context ) . for two nonnegative sequences and we write if . throughout we adopt the convention that .we let and denote its closure by . following standard notation ,we let be the space of -times continuously differentiable functions from to , and for we let and denote the gradient and the hessian of , respectively .given a markov process on a complete and separable metric space , we let be the probability distribution under which for and = { \mathbb{e}}[\cdot|\xi(0 ) = x] ] to be the expectation operator w.r.t .this distribution . a probability distribution defined on said to be a stationary distribution if for every bounded continuous function = { \mathbb{e}}_{\pi}\bigl[f \bigl(\xi(0)\bigr)\bigr]\qquad \mbox{for all } t\geq0.\ ] ] it is said to be the steady - state distribution if for every such function and all , \rightarrow{\mathbb{e}}_{\pi } \bigl[f\bigl(\xi(0)\bigr)\bigr ] \qquad\mbox{as } t\rightarrow\infty.\ ] ] given a probability distribution and a nonnegative function , we define ( which can be infinite ) . for a general ( not necessarily nonnegative ) function , we define as above whenever . finally , whereas our results are not concerned with process - convergence , we will be making connections to the functional central limit theorem .all the processes that we study are assumed to be right continuous with left limits ( rcll ) , and will be used for convergence in the space of such functions unless otherwise stated . for rcll processes we use and let .we consider a sequence of continuous time markov chains ( ctmcs ) .the chain moves on a countable state space according to transition rates for .given a nonrandom initial condition , the dynamics of are constructed as follows : where and are independent unit - rate poisson processes ; see , section 6.4 . letting , we rewrite where provided that is nonexplosive , is a local martingale with respect to the filtration see , theorem 6.4.1 .the local ( predictable ) quadratic variation of is given by where in essence , and are defined only for values in .we henceforth assume that they are extended to and , with some abuse of notation , denote by and these extensions .the requirements that we impose on these extensions will be clear in what follows ._ fluid models_. given , we define the fluid model by or , in differential form , if is lipschitz continuous , the fluid model has a solution .we will assume that there exists a unique satisfying this requirement is intimately linked to our lyapunov requirement ; see lemma [ asum : l ] . 
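a schematic gillespie - style simulation ( illustrative , and equivalent in law to the random - time - change construction above for simulation purposes ) of a density - dependent birth - death ctmc next to its fluid model . the rates , an m / m / infinity - type example with arrivals at rate n*lam and departures at rate mu times the state , and all parameter values are made up for illustration .

```python
# Illustrative Gillespie-style simulation of a density-dependent birth-death CTMC
# and its fluid model dx/dt = f(x) with f(x) = lam - mu*x (stationary point lam/mu).
import numpy as np

rng = np.random.default_rng(1)
n, lam, mu, horizon = 200, 1.0, 1.0, 10.0   # made-up parameters

def simulate_ctmc(x0):
    """Simulate the unscaled chain X^n (births at rate n*lam, deaths at rate mu*x)."""
    t, x = 0.0, x0
    while t < horizon:
        birth, death = n * lam, mu * x
        total = birth + death
        t += rng.exponential(1.0 / total)       # time to the next jump
        x += 1 if rng.random() < birth / total else -1
    return x

def fluid(t, x0):
    """Closed-form solution of the fluid ODE for this linear drift."""
    return lam / mu + (x0 - lam / mu) * np.exp(-mu * t)

x_final = simulate_ctmc(x0=0)
print("scaled chain at the horizon :", x_final / n)
print("fluid model at the horizon  :", fluid(horizon, 0.0))
print("fluid stationary point      :", lam / mu)
```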
_ centered and scaled process ._ define the processes and denote by the state space of .letting we have the martingale has the local predictable quadratic variation process where _ assumptions ._ we assume that the jump sizes are bounded uniformly in : and that is sufficiently large so that . the sequence is assumed to be uniformly lipschitz , and is assumed to have linear growth around .formally , there exist constants , such that , for all , and the requirements ( [ eq : flip ] ) and guarantee , in particular , that . condition ( [ eq : alip ] ) is equivalently stated in terms of the ( unscaled ) as we further assume that _ is positive definite for each _ and that where is itself positive definite . the matrix is not used in specifying the diffusion model in section [ sec : main ] , but the assumption of convergence is used in our proofs , most notably in that of theorem [ thmm : expo_ergo ] . in various settings ,including our own examples in section [ sec : examples ] , in which case the convergence requirement is trivially satisfied .the requirement that the continuous extension satisfies the uniform lipschitz requirement ( [ eq : flip ] ) is a restriction .it excludes , for example , single - server queueing systems ; we revisit this point in section [ sec : conclusions ] . for each , is nonexplosive , irreducible , positive recurrent and satisfies ( [ eq : boundedjumps])([eq : quadraticvar_fluid ] ) .[ asum : base ] positive recurrence and irreducibility imply ergodicity of and , in particular , the existence of a steady - state distribution ( which is also the unique stationary distribution ) . in certain cases ,positive recurrence of need not be a priori assumed ; see theorem [ thmm : ctmc_lyap ] and remark [ rem : allinone ] .assumption [ asum : base ] is imposed for the remainder of this paper .recall that is a stationary point for the _ fluid model _ and that .fix a probability space and a -dimensional brownian motion , and let be the strong solution to the sde the existence and uniqueness of a strong solution follow from the lipschitz continuity and linear growth of and the constant diffusion coefficient ; see , for example , , theorems 5.2.5 and 5.2.9 .[ rem : strong]the strong approximation for is a diffusion obtained ( heuristically at first ) by taking the `` density '' of the quadratic variation in ( [ eq : m_quad_var ] ) as the diffusion coefficient , to define the process the process provides a `` good '' approximation for the dynamics of the ctmc in the sense that where are random variables with exponential tails ( uniformly in ) ; see , for example , , chapters 7.5 and 11.3 . given the cruder ( state independent ) diffusion coefficient , the dm is not likely to be as precise , over finite horizons , as the strong approximation . in terms of tractability , however , the analysis of steady - state is simpler for the dm , insofar as its steady - state distribution ( when it exists ) involves linear pdes ; see , for example , , chapter 4.9 .our main result , theorem [ thmm : main ] , shows that this increased tractability co - exists with an impressive steady - state - approximation accuracy .suppose that , in addition , assumption [ asum : base ] uniformly on compact subsets of . if , then where is the strong solution to the sde with and is as in ( [ eq : quadraticvar_fluid ] ) ; see , theorem 6.5.4 . given ( [ eq : betaconv ] ) , requirements ( 5.9 ) and ( 5.14 ) of that theorem are trivially satisfied here due to the bounded jumps . 
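as an aside , a one - dimensional euler - maruyama sketch ( illustrative , not the paper's code ) of a diffusion model of the kind just defined : state - dependent drift but a constant diffusion coefficient frozen at the stationary point of the fluid model . the linear drift and the value of the coefficient continue the made - up m / m / infinity example of the previous sketch , for which the diffusion model is an ornstein - uhlenbeck process .

```python
# Illustrative Euler-Maruyama simulation of a diffusion model with constant
# diffusion coefficient.  Drift -mu*y and coefficient 2*lam continue the made-up
# M/M/infinity example, so the model is an OU process with stationary variance lam/mu.
import numpy as np

rng = np.random.default_rng(2)
mu, lam = 1.0, 1.0
sigma = np.sqrt(2.0 * lam)        # square root of the (constant) diffusion coefficient

def drift(y):
    return -mu * y

def euler_maruyama(y0, horizon=200.0, dt=1e-3):
    steps = int(horizon / dt)
    y = np.empty(steps + 1)
    y[0] = y0
    for k in range(steps):
        y[k + 1] = y[k] + drift(y[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return y

path = euler_maruyama(0.0)
burn_in = 10_000                  # discard the first 10 time units
print("empirical long-run variance :", path[burn_in:].var())
print("OU stationary variance      :", sigma**2 / (2 * mu))   # = lam/mu, roughly matched above
```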
the final requirement in , theorem 6.5.4 , that has almost surely , follows immediately from the fact that is a strong solution .further , it is easily proved that .thus , within a diffusion - limit framework , the dm is consistent with the diffusion limit in the sense that and converge to the same limit . for functions ,the generator of coincides with the second order differential operator defined , for such functions , by see , for example , , proposition 5.4.2 .we next state the uniform lyapunov assumption .we say that is a _ norm - like _ function if as .a function is said to be sub - exponential if and there exist constants and such that and [ asum : l ] there exist a sub - exponential norm - like function and finite positive constants ( not depending on ) such that and , for each and all , < \infty , \qquad t \geq0 .\label{eq : finite_integral}\ ] ] assumption [ asum : l ] is imposed for the remainder of this paper .the requirement that is made without loss of generality .if a norm - like function satisfies ul , there exists re - defined constants and such that satisfies ul .all polynomials satisfy ( [ eq : expobound1 ] ) and ( [ eq : expobound])the former is used only in the proof of lemma [ lem : fromdisctocont ] , and the latter is used in the derivations of gradient bounds following the statement of theorem [ thmm : pde ] .requirement ( [ eq : finite_integral ] ) is relatively unrestrictive as it is imposed on each individual ( rather than uniformly in ) .lyapunov conditions are frequently used in the context of stability of continuous time markov processes ( corresponding to fixed here ) ; see .the requirement of a uniform lyapunov condition imposed on a family of markov processes is less common ( see for a related example ) . in section [ sec : examples ]we study two examples for which all the requirements of assumption [ asum : l ] are met . with assumption [ asum : l ] , the existence and uniqueness of a steady - state distribution , , for follows from , sections 4 and 6 , as does the fact that is exponentially ergodic and that , for each , for all functions with ; see , theorem 4.2 . for that satisfies ( [ eq : expobound1 ] ) we have , for all and , that =v(x)+ { \mathbb{e}}_x \biggl[\int_0^{t } \mathcal{a}^n v\bigl({\widehat{y}}^n(s)\bigr)\,ds \biggr ] ;\label{eq : dynkin}\ ] ] see , for example , , theorem 6.3 .ul then guarantees that \leq v(x)+ { \mathbb{e}}_x \biggl[\int_0^t \bigl(- \delta v\bigl({\widehat{y}}^n(s)\bigr)+b\bigr)\,ds \biggr]\label{eq : tbound}\ ] ] for all and and , consequently , that for all functions with ; see also , corollary 2 .important for our analysis is the following consequence of assumption [ asum : l ] .[ thmm : expo_ergo ] let be the steady - state distribution of . then there exist finite positive constants and such that -\pi^n(f ) \bigr|\leq\mathcal{m}e^{-\mut},\qquad t\geq 0.\ ] ] bounds on the convergence rate of exponentially ergodic markov processes to their steady - state distribution have been studied extensively in recent literature .our proof builds specifically on .the constants and are related to a minorization condition for the discrete - time process . in the standard application , these constants may depend on . 
to obtain constants that can be used for all must argue that a minorization condition is satisfied uniformly in ; the proof of theorem [ thmm : expo_ergo ] is postponed to section [ sec : expo_ergo ] .theorem [ thmm : expo_ergo ] has the following important implication : fixing a function with and , we have for all , that \bigr{\vert}\leq\mathcal{m } v(x)e^{-\mu t},\qquad t\geq0,\ ] ] so that \bigr { \vert}\,ds\leq \mathcal{m } v(x)\int_0^{\infty}e^{-\mu s } \,ds = c v(x)<\infty\ ] ] for all , where the constant does not depend on or .we conclude that \,ds\ ] ] is a well - defined function of and that , for all , also , for any fixed and , \,ds-\int_0^{\infty } { \mathbb{e}}_x\bigl[f\bigl({\widehat{y}}^n(s)\bigr)\bigr]\,ds\biggr{\vert}=0.\label { eq : poisson_convergence}\ ] ] define and the introduction of is motivated by the analysis of the ( sequence of ) poisson equations , specifically by the gradient estimates that require bounds on local fluctuations of ; see the derivations following theorem [ thmm : pde ] .our main result , stated next , establishes that the steady - state distribution of the markov chain and the dm are suitably close provided that moments of the former are uniformly bounded .[ thmm : main ] fix that satisfies assumption [ asum : l ] and a function such that and .let and be , respectively , the steady - state distributions of and . if then theorem [ thmm : main ] and the remaining results of this section are proved in section [ sec : ito ] . if satisfies but , consider instead the function . then .by ( [ eq : fbound ] ) , and , in turn , .further , satisfies that . finally , if satisfies assumption [ asum : l ] , so does the function .thus the results that follow hold for functions with regardless of whether or not . in general , proving requirement ( [ eq : requirement ] ) ( which implies , in particular , tightness of the sequence of steady - state distributions ) is far from trivial .as we show next ( [ eq : requirement ] ) can be argued in advance in our setting .one expects that , as grows , the property ( [ eq : tbound ] ) of the dm will be approximately valid for the ctmc allowing to draw an implication similar to ( [ eq : fbound ] ) with there replaced by .the next theorem shows that this intuition is valid provided that satisfies additional simple properties .given a function , define for , {2,1,b_x ( \bar{\ell}/\sqrt{n } ) } = \sup_{y , z\in b_x ( \bar{\ell}/\sqrt{n } ) } \frac{|d^2\psi(y)-d^2\psi ( z)|}{|y - z|},\label{eq : square_bracket}\vadjust{\goodbreak}\ ] ] where the right - hand side may be infinite .[ thmm : ctmc_lyap ] let be as in assumption [ asum : l ] .suppose , in addition , that there exists a finite positive constant such that , for each , and all , {2,1,b_x(\bar{\ell}/\sqrt { n})}\bigr ) \bigl(1+|x|\bigr)\leq cv(x).\label{eq : dmtoctmc2}\ ] ] then , for all sufficiently large , and all , \leq v(x)+ { \mathbb{e}}_x \biggl[\int_0^t \biggl(- \frac { \delta } { 2}v\bigl(\widehat{x}^n(s)\bigr)+b \biggr ) \,ds\biggr],\qquad t\geq0 , \label{eq : lyap_ctmc}\ ] ] where is as in assumption [ asum : l ] . 
consequently , is ergodic for all such and , furthermore , if , condition ( [ eq : dmtoctmc2 ] ) can be replaced with using taylor s theorem we have , for all , that {2,1,b_x(\bar{\ell}/\sqrt { n})}&\leq & \sup_{\eta\in b_x(\bar{\ell}/\sqrt{n})}2\bigl(1+| \eta|\bigr)\bigl|d^3v(\eta)\bigr| \\ & \leq & 2 c \bigl(\sup_{\eta\in b_x(\bar{\ell}/\sqrt{n})}v(\eta ) \bigr)\leq 2c_3cv(x),\end{aligned}\ ] ] where the last inequality follows from the sub - exponential property ( [ eq : expobound ] ) of and . note that ( [ eq:3diff1 ] ) is satisfied by any polynomial .fix that satisfies assumption [ asum : l ] .suppose that there exists that , itself , satisfies assumption [ asum : l ] as well as ( [ eq : dmtoctmc2 ] ) and then , and , in particular , ( [ eq : requirement ] ) holds for .[cor : tightness ] suppose that and satisfies assumption [ asum : l ] and ( [ eq:3diff1 ] ) .if there exists such that and satisfies ( [ eq : finite_integral ] ) , then we can take in corollary [ cor : tightness ] .indeed , for an integer , with and as in assumption [ asum : l ] and as in ( [ eq:3diff1 ] ). thus if is sub - exponential and satisfies ul and ( [ eq:3diff1 ] ) , so does .[ rem : simplecase ] combined , theorem [ thmm : main ] and corollary [ cor : tightness ] establish the following : if there exist functions and both satisfying assumption [ asum : l ] such that ( [ eq : dmtoctmc2 ] ) holds for and , then we _ simultaneously _ have : ( i ) the positive recurrence of for sufficiently large , ( ii ) the moment bound in ( [ eq : requirement ] ) ( which implies , in particular , the tightness of ) and ( iii ) the approximation gap . with the exception of the simple requirement ( [ eq : finite_integral ] ) ,this reduces the requirements to properties of the dm .[ rem : allinone ] we conclude this section with an observation pertaining to the connection between the stability of the fm and the dm .suppose that there exist a norm - like function and a constant such that letting we have so that the fm is stable in the sense that , for each and any initial condition , as .moreover , \\[-8pt ] \nonumber & \leq&-\eta\bigl(v(y)-v(0)\bigr)+\bigl|\bar { a}^n(0)\bigr|\bigl|d^2v(y)\bigr|.\end{aligned}\ ] ] the following is an immediate consequence .let be a sub - exponential norm - like function satisfying ( [ eq : finite_integral ] ) and ( [ eq : fmlyap ] ) . if then satisfies ul and , in turn , assumption [ asum : l ] .[ lem : fmtodm ]in what follows , fixing a set , denotes the space of twice continuously differentiable functions from to . for , recall that and denote the gradient and the hessian of , respectively .the space is then the subspace of members of which have second derivatives that are lipschitz continuous on .that is , a twice continuously differentiable function is in if {2,1,{\mathcal{b}}}=\sup_{x , y\in{\mathcal{b } } , x\neq y } \frac { |d^2u(x)-d^2u(y)|}{|x - y|}<\infty.\ ] ] [ in equation ( [ eq : square_bracket ] ) the set is taken to be . ]we define where stands for the boundary of , and we let . 
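a numerical sanity check ( illustrative only ) of a lyapunov drift inequality of the form discussed above , a v(y) <= - delta v(y) + b , for the one - dimensional ou - type diffusion model used in the earlier sketches and the simple test function v(y) = 1 + y^2 . the paper's lyapunov functions and constants are different ; this only illustrates the shape of the requirement .

```python
# Illustrative check of A V(y) <= -delta*V(y) + b for a 1-d diffusion model with
# drift f(y) = -mu*y and constant diffusion coefficient sigma2, using V(y) = 1 + y**2.
import numpy as np

mu, sigma2 = 1.0, 2.0           # made-up drift slope and diffusion coefficient
delta, b = mu, sigma2 + mu      # candidate constants, chosen by hand for this V

def V(y):
    return 1.0 + y**2

def generator_V(y):
    # A V(y) = f(y) * V'(y) + (sigma2 / 2) * V''(y)
    return (-mu * y) * (2.0 * y) + 0.5 * sigma2 * 2.0

grid = np.linspace(-50.0, 50.0, 2001)
gap = generator_V(grid) + delta * V(grid) - b
print("max of A V(y) + delta*V(y) - b over the grid:", gap.max())   # <= 0, so the drift condition holds
```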
we define{j,{\mathcal{b}}}^*+\sup_{x , y\in{\mathcal{b}},x\neq y}\,d_{x , y}^{3 } \frac{|d^2u(x)-d^2 u(y)|}{|x - y| } , \label{eq : ustardefin}\ ] ] where {j,{\mathcal{b}}}^*=\sup_{x\in { \mathcal{b}}}d_x^j |d^j u(x)| ] , and we say that the function is locally lipschitz if for all , where is as in ( [ eq : mbxdefin ] ) .[ thmm : pde ] fix that satisfies assumption [ asum : l ] and a locally lipschitz function with and .then , for each , the poisson equation has a unique solution given by \,dt.\label { eq : poissonsolution}\ ] ] moreover , there exist a finite positive constant ( not depending on ) such that consequently , for all and , and {2,1,{\mathcal{b}}_x}\leq8\theta \bigl(\bigl|u_f^n\bigr|_{0,{\mathcal{b}}_x}+|f|^{(2)}_{0,1,\mathcal{b}_x } \bigr)\bigl ( 1+|x|\bigr)^3.\label{eq : pde2}\ ] ] several observations are useful for what follows : recall ( [ eq : ubound ] ) that for some constant . by the assumed sub - exponentiality of for all , where is as in ( [ eq : expobound ] ) . in turn , for a function with [ see ( [ eq : barf ] ) ] and for all , so that for all .defining we have , by theorem [ thmm : pde ] ( and assuming , without loss of generality that ) , that for all and , {2,1,b_x(\bar{\ell}/\sqrt { n})}&\leq & c_v(x).\nonumber\end{aligned}\ ] ] proof of theorem [ thmm : pde ] we first prove that in ( [ eq : poissonsolution ] ) solves the poisson equation ( [ eq : poisson ] ) .since is fixed throughout we omit it from the notation .fixing , let be the solution to dirichlet problem in the boundary condition , is as in ( [ eq : poissonsolution ] ) .the existence and uniqueness of a solution follows directly from , theorem 6.13 , recalling that is lipschitz continuous and is a constant matrix and hence trivially lipschitz .theorem 6.13 of requires that is continuous in on .this follows exactly as in part ( c ) of , theorem 1 , using ( [ eq : poisson_convergence ] ) .we omit the detailed argument .it follows that ,\ ] ] where ; see , proposition 5.7.2 and lemma 5.7.4 .we have that with as in ( [ eq : poissonsolution ] ) .this assertion is proved as in , theorem 1 , part ( d ) . since is arbitrarywe conclude that , solves the poisson equation ( [ eq : poisson ] ) . to establish the gradient estimates observe that , since is bounded in , there exists a constant ( not depending on ) such that ( with the notation in , theorem 6.2 ) . from the positive definiteness of , and since for a positive definite , it follows that there exists a constant such that for all and all . finally , following the notation in , theorem 6.2 , {0,1,{\mathcal{b}}_x}^{(1 ) } \\ & = & \bigl[{\widehat{f}}^n\bigr]_{0,{\mathcal{b}}_x}^{(1)}+\sup _ { y , z\in{\mathcal{b}}_x } d_{y , z}^2 \frac { |{\widehat{f}}^n(y)-{\widehat{f}}^n(z)|}{|y - z| } \\ & = & \sup_{y\in{\mathcal{b}}_x } d_y\bigl|{\widehat{f}}^n(y)\bigr|+\sup _ { y , z } d_{y , z}^2 \frac { |{\widehat{f}}^n(y)-{\widehat{f}}^n(z)|}{|y - z| } \\ & \leq & 2 k_f,\end{aligned}\ ] ] where is as in ( [ eq : flip ] ) . in turn , by , theorem 6.2 , that where depends only on and the constant in ( [ eq : lambda ] ) ( for there , we take ) .bounds ( [ eq : pde0])([eq : pde2 ] ) now follow from the definition of applied to points in the subset of .specifically , for each , {1,{\mathcal{b}}_x}^ * \leq\bigl|u_f^n\bigr|_{2,1,\mathcal{b}_x}^*.\ ] ] noting that for all we have , for all such ( in particular for itself ) , that equations ( [ eq : pde1 ] ) and ( [ eq : pde2 ] ) are argued similarly .the following simple lemma is proved in the . 
given a function write for the coordinate of and for the coordinate of .[ lem : ito ] let be such that , for all and , \\[-8pt ] \nonumber & & \hspace*{80pt}{}+ [ \psi ] _ { 2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n } ) } \bigr ) \bigl(1+\bigl | \widehat{x}^n ( s)\bigr| \bigr ) \,ds \biggr]<\infty.\end{aligned}\ ] ] then , for all and , = \psi(x)+{\mathbb{e}}_x \biggl[\int_0^t { \mathcal{a}}^n \psi \bigl(\widehat{x}^n(s)\bigr)\,ds \biggr]+a_{\psi}^{n , x}(t)+d_{\psi } ^{n , x}(t),\label{eq : almostito}\ ] ] where is as in ( [ eq : diff_gen ] ) and , for all and , {2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n})}\bigl|\bar { a}^n\bigl(\widehat{x}^n(s)\bigr)\bigr| \,ds \biggr ] , \\d_{\psi}^{n , x}(t)&=&\frac{1}{2}{\mathbb{e}}_x \biggl[\sum_{i , j}^d \int _ 0^t \psi _ { ij}\bigl ( \widehat{x}^n(s)\bigr ) \bigl(\bar{a}_{ij}^n \bigl(\widehat{x}^n(s)\bigr)-\bar { a}_{ij}^n(0 ) \bigr)\,ds \biggr].\end{aligned}\ ] ] below is as in ( [ eq : barf ] ) and as in ( [ eq : cfdefin ] ) .[ cor : interim ] fix that satisfies assumption [ asum : l ] and a function such that .then there exists a finite positive constant ( not depending on ) , such that , for all and , - u_f^n(x)-{\mathbb{e}}_x \biggl [ \int_0^t { \mathcal{a}}^n u_f^n\bigl(\widehat{x}^n(s ) \bigr)\,ds \biggr ] \biggr| \\ & & \qquad\leq c \biggl({\mathbb{e}}_x \biggl[\int_0^t \frac{c_v(\widehat{x}^n(s))}{\sqrt{n } } \biggl(1+\frac{|\widehat { x}^n(s)|}{\sqrt { n } } \biggr)\,ds \biggr ] \biggr).\end{aligned}\ ] ] by ( [ eq : gradient_est ] ) we have , for , that {2,1,b_x ( { \bar{\ell}}/{\sqrt{n } } ) } \bigr ) \bigl(1+|x|\bigr ) & \leq&3 c_v(x ) \bigl(1+|x|\bigr ) \\ & \leq&\varepsilon\bigl(1+|x|\bigr)^4v(x)\end{aligned}\ ] ] for some finite positive constant . by assumption [ asum : l ] , specifically ( [ eq : finite_integral ] ) , <\infty,\ ] ] so that satisfies the requirements of lemma [ lem : ito ] , and we have that \nonumber \\ & \leq&\frac{k_a}{2\sqrt{n}}{\mathbb{e}}_x \biggl [ \int _ 0^t\bigl |d^2 u^n_f \bigl(\widehat{x}^n(s)\bigr)\bigr|\bigl|\widehat{x}^n(s)\bigr|\,ds \biggr ] \\ &\leq & \frac{k_a}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int_0^t c_v\bigl(\widehat { x}^n(s)\bigr)\,ds \biggr ] .\nonumber\end{aligned}\ ] ] the second inequality follows from ( [ eq : alip ] ) .the last inequality follows from ( [ eq : gradient_est ] ) .next , {2,1,b_{\widehat { x}^n(s)}(\bar { \ell } /\sqrt{n})}\bigl|\bar{a}^n\bigl(\widehat{x}^n(s)\bigr)\bigr|\,ds \biggr ] \nonumber\\ & \leq & \frac{\bar{\ell}}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int _0^t \bigl[u^n_f \bigr]_{2,1,b_{\widehat{x}^n ( s)}(\bar{\ell}/\sqrt{n})}\bigl|\bar{a}^n(0)\bigr|\,ds \biggr ] \\ & & { } + \frac{\bar{\ell}}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int_0^t \bigl[u^n_f\bigr]_{2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n})}\bigl|\bar { a}^n \bigl(\widehat{x}^n ( s)\bigr)-\bar { a}^n(0)\bigr|\,ds \biggr ] .\nonumber\end{aligned}\ ] ] using ( [ eq : alip ] ) , ( [ eq : quadraticvar_fluid ] ) and ( [ eq : gradient_est ] ) we conclude that ,\ ] ] which completes the proof .we are ready to prove theorem [ thmm : main ] .proof of theorem [ thmm : main ] as is a stationary distribution we have , by ( [ eq : ubound ] ) and ( [ eq : requirement ] ) , that ={\mathbb{e}}_{\nu^n}\bigl[u_f^n \bigl(\widehat { x}^n(0)\bigr)\bigr]\leq c\nu ^n(v)<\infty\ ] ] for all sufficiently large and all . 
recalling that , corollary [ cor : interim ] guarantees the existence of a finite positive constant ( not depending on ) such that \biggr{\vert}&\leq&\vartheta { \mathbb{e}}_{\nu^n } \biggl[\int_0^t \frac{c_v(\widehat{x}^n ( s))}{\sqrt { n } } \biggl(1+\frac{|\widehat{x}^n(s)|}{\sqrt{n } } \biggr)\,ds \biggr ] \nonumber \\[-8pt ] \\[-8pt ] \nonumber & = & \vartheta t{\mathbb{e}}_{\nu^n } \biggl[\frac { c_v(\widehat{x}^n(0))}{\sqrt{n } } \biggl(1 + \frac{|\widehat { x}^n(0)|}{\sqrt{n } } \biggr ) \biggr]\end{aligned}\ ] ] for all , where the interchange of integral and expectation is justified by the nonnegativity of the integrands . using again ( [ eq : requirement ] ) and the nonnegativity of we have , for all , that \leq{\mathbb{e}}_{\nu^n } \biggl [ \int _ 0^t v\bigl(\widehat{x}^n(s ) \bigr)\,ds \biggr]= t\nu^n(v)<\infty.\ ] ] this justifies replacing integral and expectation in ( [ eq : inter1 ] ) to conclude that , with , \biggr{\vert}\leq\vartheta{\mathbb{e}}_{\nu^n } \biggl [ \frac { c_v(\widehat{x}^n ( 0))}{\sqrt{n } } \biggl(1+\frac{|\widehat{x}^n(0)|}{\sqrt{n } } \biggr ) \biggr ] \\ & = & { \mathcal{o}}(1/\sqrt{n})\end{aligned}\ ] ] for a ( re - defined ) constant as required , where the last equality follows from ( [ eq : requirement ] ) recalling the definition of in ( [ eq : cfdefin ] ) . proof of theorem [ thmm : ctmc_lyap ] let be as in assumption [ asum : l ] . applying lemma [ lem : ito ] as in the proof of corollary [ cor : interim ] we have that {2,1,b_{\widehat{x}^n ( s ) } ( { \bar{\ell}}/{\sqrt{n } } ) } \bigl| \bar{a}^n(0)\bigr|\,ds \biggr ] \\ & & { } + \frac{\bar{\ell}}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int_0^t [ v]_{2,1,b_{\widehat{x}^n(s ) } ( { \bar{\ell}}/{\sqrt { n } } ) } \bigl|\bar { a}^n\bigl(\widehat{x}^n(s ) \bigr)-\bar{a}^n(0)\bigr|\,ds \biggr ] \\ & \leq&{\mathbb{e}}_x \biggl[\int_0^t \frac{\delta}{4 } v\bigl(\widehat { x}^n(s)\bigr)\,ds \biggr]\end{aligned}\ ] ] for all sufficiently large .the last inequality follows noting that , by ( [ eq : alip ] ) , ( [ eq : quadraticvar_fluid ] ) and ( [ eq : dmtoctmc2 ] ) , there exists a finite positive constant such that {2,1,b_{\widehat{x}^n(s ) } ( { \bar{\ell}}/{\sqrt { n } } ) } \bigl|\bar { a}^n(0)\bigr|\leq c v\bigl ( \widehat{x}^n(s)\bigr)\ ] ] and {2,1,b_{\widehat{x}^n(s ) } ( { \bar{\ell}}/{\sqrt { n } } ) } \bigl|\bar { a}^n\bigl(\widehat{x}^n(s ) \bigr)-\bar{a}^n(0)\bigr|\leq\frac{ck_a}{\sqrt { n}}v\bigl(\widehat{x}^n(s ) \bigr),\ ] ] where is as in ( [ eq : alip ] ) .similarly one argues , using ( [ eq : alip ] ) and ( [ eq : dmtoctmc2 ] ) , that for all sufficiently large , \\ & \leq&{\mathbb{e}}_x \biggl[\int_0^t \frac{\delta}{4 } v\bigl(\widehat { x}^n(s)\bigr)\,ds \biggr],\end{aligned}\ ] ] to conclude from assumption [ asum : l ] and lemma [ lem : ito ] that \leq v(x)+{\mathbb{e}}_x \biggl[\int_0^t \biggl(-\frac{\delta } { 2}v\bigl(\widehat{x}^n(s)\bigr)+b \biggr)\,ds \biggr].\ ] ] in turn , ( [ eq : lyap_ctmc ] ) holds for all sufficiently large .this guarantees that is ergodic for all such ; see , for example , , theorem 8.13. 
using ( [ eq : lyap_ctmc ] ) and the nonnegativity of , we have for all sufficiently large and all that \leq\frac{1}{t}2\delta ^{-1}\bigl(v(x)+b t \bigr).\ ] ] letting be the steady - state distribution of we have , for each , that = \lim_{t\rightarrow\infty } \frac{1}{t}{\mathbb{e}}_x \biggl[\int _ 0^tv\bigl(\widehat{x}^n(s)\bigr ) \wedge m \,ds \biggr]\leq2\delta^{-1}b.\ ] ] the result now follows from the nonnegativity of and the monotone convergence theorem .lyapunov functions that satisfy assumption [ asum : l ] must be identified on a case - by - case basis .for the first example the erlang - a queue this is a straightforward task . for the second example a queue with many servers and phase - type service time distribution this task is substantially more difficult , but recent work provides us with the required function .we consider a sequence of queues with a single pool of i.i.d .servers that serve one class of impatient i.i.d . customers .arrivals follow a poisson process ( with rate in the queue ) , service times are exponentially distributed with rate and customers patience times are exponentially distributed with rate . in the queue , there are servers in the server pool .let be the total number of jobs in the queue ( waiting or in service ) at time .then is a birth and death process with state space , birth rate in all states and death rate in state where , for the remainder of the paper , we use , ._ we assume that _ so that positive recurrence of follows easily .the drift is then specified here by and is trivially extended here to the real line by allowing to take real values ( including negative values ) .the fm is then given by there exists a unique point in which . at this point so that the dm for the erlang - a queue is subsequently given by where and .it is easily verified that there exists such that when and if .fixing and taking we have that for all and note that is trivially sub - exponential .further , for all sufficiently large , so that the conditions of lemma [ lem : fmtodm ] are satisfied and , in turn , ul holds for the dm .further , for each , where is the number of arrivals by time . condition ( [ eq : finite_integral ] ) then follows from basic properties of the poisson process .we have the following consequence .fix and positive .then , satisfies assumption [ asum : l ] for the dm of the erlang - a queue .fixing and choosing sufficiently large , we can take in corollary [ cor : tightness ] ; see remark [ rem : simplecase ] .the following is now a direct consequence of theorem [ thmm : main ] and corollary [ cor : tightness ] .[ thmm : erlanga ] consider a sequence of erlang - a queues as above and let be such that for some .then above , we did not impose any restrictions on the way in which the number of servers , , scales with so that one may interpret our dm as a _ universal _approximation for the erlang - a queue .universality for this queue ( and its contrast with the assumption of a so - called operational regime ) are discussed at length in ; see also the references therein .a similar result is proved there for the erlang - a queue using an approach that , while having important similarities to the approach we take here , is based on approximating the excursions of the process above and below . in this one - dimensional markov chain ,the poisson equation we use here is ( informally ) a `` pasting '' of the dirichlet problems studied in . in their greatest generality ,the results of are not a special case of theorem [ thmm : erlanga ] above . 
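an illustrative numerical comparison ( not from the paper ) for the erlang - a example : the exact stationary distribution of the birth - death ctmc against the stationary density of a piecewise - linear diffusion model of the kind described above . all parameter values ( mu = lam = 1 , theta = 0.5 , n servers , arrival rate n*lam , so that the fluid stationary point sits exactly at the server level ) are made up , and the centering and scaling below are one concrete instance of the general framework rather than the paper's exact setup .

```python
# Illustrative Erlang-A comparison: exact CTMC stationary distribution versus the
# stationary density of a diffusion model with drift -mu*y for y < 0, -theta*y for
# y > 0, and constant diffusion coefficient 2*lam (all values made up).
import numpy as np

n = 100                               # n servers, arrival rate n*lam
lam, mu, theta = 1.0, 1.0, 0.5
arrival = n * lam

# Exact stationary distribution of the birth-death chain (balance equations),
# truncated at a state far in the tail.
kmax = 5 * n
k = np.arange(1, kmax + 1)
death = mu * np.minimum(k, n) + theta * np.maximum(k - n, 0)
log_pi = np.concatenate(([0.0], np.cumsum(np.log(arrival) - np.log(death))))
pi = np.exp(log_pi - log_pi.max())
pi /= pi.sum()
states = np.arange(kmax + 1)
ctmc_mean = np.sum(pi * (states - n) / np.sqrt(n))   # centered at n, scaled by sqrt(n)

# Stationary density of the diffusion model (a "piecewise Gaussian").
sigma2 = 2.0 * lam
y = np.linspace(-15.0, 25.0, 40001)
dy = y[1] - y[0]
dens = np.exp(np.where(y < 0, -mu * y**2, -theta * y**2) / sigma2)
dens /= dens.sum() * dy
dm_mean = np.sum(y * dens) * dy

print("steady-state mean, scaled CTMC     :", ctmc_mean)
print("steady-state mean, diffusion model :", dm_mean)
```

for moderate n the two values are typically close , which is the kind of steady - state gap the results above quantify .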
in authors allow the service rate to vary with .this is facilitated by the excursion approach taken there but violates the assumptions required to apply our results , particulary , the uniform lipschitz continuity of .moreover , the approach in seems to be easily extendable to the case with in which case the dm is not exponentially ergodic and assumption [ asum : l ] is not satisfied .we next consider the single class queue .this is a generalization of the erlang - a queue where the exponential service time is replaced by a phase - type service - time ; see for a detailed construction .we repeat here only the essential details .let be the number of service phases , and let be the average length of phase .we assume that , corresponding to all customers commencing their service at phase 1 ( the diffusion limits in cover the general case where is an arbitrary probability vector ) . having completed phase a job transitions into phase with probability .the triplet defines the phase - type service - time distribution .let note that .as before , the patience rate is .we consider a sequence of such queues indexed by the _ arrival rate . let let be the number of customers in the first phase of their service and waiting in the queue at time . for ,let be the number of customers in phase of service at time .the process is then a ctmc . for simplicity of expositionwe assume here that is integer valued for each and that the number of servers satisfies .this implies , trivially , that which corresponds to the so - called halfin whitt many - server regime and allows us subsequently to build on the results of and that study diffusion limits in this regime .the analysis below is easily extended to the case for some .define and the scaled and centered process as in ( [ eq : hxdefin ] ) .then , this is written , in matrix notation , as and , for , the functions and satisfy ( [ eq : flip ] ) and ( [ eq : alip ] ) . assumption[ asum : base ] holds in this example as the chain is trivially nonexplosive and irreducible .the positive recurrence follows immediately from the fact that .the diffusion model is given by with as in ( [ eq : ph_drift_matrix ] ) and diffusion coefficient as in ( [ eq : ph_diff1])([eq : ph_diff2 ] ) .note ( [ eq : ph_drift_matrix])([eq : ph_diff2 ] ) that and do not , in fact , depend here on .the existence of a quadratic lyapunov function , , for then follows from , theorem 3this function is specified in equation ( 5.24 ) there .( to extend this argument to the general case with , note that in is still a lyapunov function for each if we perturb by a constant and by a term that shrinks proportional to . ) with a careful choice of the smoothing function there , the function ( for any constant ) is also sub - exponential .finally , ( [ eq : finite_integral ] ) is argued as in the erlang - a case using crude bounds on the poisson arrivals .the function thus satisfies assumption [ asum : l ] .it is easily verified that ) and satisfies ( [ eq:3diff1 ] ) so that , as in remark [ rem : simplecase ] , satisfies assumption [ asum : l ] with re - defined constants and . 
choosing sufficiently large guarantees that .the following is then an immediate consequence of theorem [ thmm : main ] and corollary [ cor : tightness ] .consider the sequence of phase - type queues as above , and let be such that for some .then , as in remark [ rem : allinone ] , we have a lyapunov function that allows us to establish simultaneously the stability of the markov chain for each sufficiently large , the uniform integrability of moments and the approximation gap .it is worth noting that the fact that was already established , by alternative means and for more general ( multiclass ) phase - type queues , in .the main step in this proof is a uniform minorization condition for a time - discretized version of .once this is established ( see lemma [ lem : minorization ] below ) , we build on to complete the argument .the proofs of the lemmas that are stated in this section appear in the .we first consider a linear transformation of .specifically , let be the unique square root of the matrix ; see , theorem 7.2.6 .in particular , .the matrix is itself invertible and its inverse is the square root of the inverse of ; see , page 406 .let and define then is a -dimensional brownian motion with drift , that is , where .we next consider the discrete - time analogues of both and .let let and be the corresponding one - step transition functions . below is the family of borel sets in .[ lem : minorization ] fixing , there exist a probability measure with and a constant ( both not depending on ) such that there consequently exists a constant ( not depending on ) such that the following translates the lyapunov property ul into the discrete time setting .[ lem : fromdisctocont ] let be as in assumption [ asum : l ] . then there exist finite positive constants and ( not depending on ) such that for all and all , \leq(1- \gamma)v(x)+\bar{b}\mathbh{1}_{\overline { b}_0(k)}(x).\label{eq : discretedrift}\ ] ] using the fact that as , ( [ eq : discretedrift ] ) implies that there exist finite positive constants , and such that \leq \cases { \lambda v(x),&\quad \vspace*{2pt}\cr m , & \quad}\ ] ] the following is then a direct consequence of , theorem 1.1 .assumptions ( a1)(a3 ) there hold by lemmas [ lem : minorization ] , [ lem : fromdisctocont ] and by ( [ eq : baxendalcond ] ) .there exist constants and ( not depending on ) such that for each , -\pi^n(f ) \bigr|\leq\mathbb{m}e^{-\mu m}.\ ] ] with these we are ready for the proof of theorem [ thmm : expo_ergo ] .proof of theorem [ thmm : expo_ergo ] the proof of the theorem now follows as in , page 536 .specifically , let -\pi^n(f)\bigr| & = & \sup_{|f|\leq v}\bigl| { \mathbb{e}}_x\bigl[f\bigl({\widehat{y}}^n\bigl(\lfloor t\rfloor+s\bigr ) \bigr)\bigr]-\pi^n(f)\bigr| \\ & = & \sup_{|f|\leq v}\bigl|{\mathbb{p}}_{{\widehat{y}}^n}^s(x , dy ) \bigl ( { \mathbb{e}}_x\bigl[f\bigl({\widehat{y}}^n\bigl(\lfloor t\rfloor \bigr ) \bigr)\bigr]-\pi^n(f)\bigr)\bigr| \\ & \leq&\int_{y}{\mathbb{p}}_{{\widehat{y}}^n}^s(x , dy)\sup _ { |f|\leq v}\bigl|{\mathbb{e}}_y\bigl[f\bigl({\widehat{y}}^n\bigl ( \lfloor t\rfloor\bigr)\bigr)\bigr]-\pi^n ( f)\bigr| \\ & \leq & \mathbb{m}e^{-\mu\lfloor t\rfloor } { \mathbb{e}}_x\bigl[v\bigl ( { \widehat{y}}^n(s)\bigr)\bigr ] \\ & \leq & \mathbb{m } e^{\mu } e^{-\mu t } \bigl(v(x)+b\bigr),\end{aligned}\ ] ] where is the transition probability function of in time units .in the last inequality we used ( [ eq : tbound ] ) and the fact that . 
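as a side note to the proof above , a small numpy sketch ( illustrative ) of the symmetric positive definite square root used in the linear transformation : diagonalize , take square roots of the eigenvalues , and transform back . the matrix below is made up .

```python
# Illustrative computation of the unique symmetric PSD square root of a symmetric
# positive definite matrix, and of its inverse (the square root of the inverse).
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                    # made-up symmetric positive definite matrix

eigvals, Q = np.linalg.eigh(A)
sqrt_A = Q @ np.diag(np.sqrt(eigvals)) @ Q.T
inv_sqrt_A = Q @ np.diag(1.0 / np.sqrt(eigvals)) @ Q.T

print(np.allclose(sqrt_A @ sqrt_A, A))                          # True
print(np.allclose(inv_sqrt_A @ inv_sqrt_A, np.linalg.inv(A)))   # True
```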
finally , since , the theorem holds with the constants and models are useful in the approximation of markov chains . we proved that , under a uniform lyapunov condition , the steady - state of some multidimensional ctmcs can be approximated with impressive accuracy by the steady - state of a relatively tractable diffusion _ model_. the existence of a diffusion limit that satisfies the lyapunov requirement as is the case for the phase - type queue considered in section [ sec : ph]can facilitate the application of our results .the distinction between the diffusion model and diffusion limit is , however , important . a central motivation behind this work is to bypass the need for diffusion limits with the objective of providing steady - state diffusion approximation whose precision does not depend on assumption with regards to limiting values of underlying parameters .that is , we ultimately seek to provide `` limit - free '' ( or _ universal _ ) approximations . a uniform lyapunov condition , as we require in assumption [ asum : l ] , need not hold in general .informally , one expects such a condition to hold if the scale parameter has limited effect on the drift of the process around the fms stationary point .many - server queues with abandonment , as those we use to illustrate our results , seem to satisfy this characterizations : diffusion limits ( regardless of the parameter regime , determining how the number of servers scales with ) are generalizations of the ou process .it remains to identify the broadest characterization of markov chains for which a uniform lyapunov condition can be expected to hold .in addition , the following extensions seem important : _ state - space collapse_. a fundamental phenomenon in diffusion limits for multi - class queueing systems is that of state - space collapse ( ssc ) . with ssc , the diffusion limit `` lives '' on a state - space that is of lower dimension relative to the ctmc : some coordinates of the ctmc become , asymptotically , deterministic functions of others . for example ,if one allows for arbitrary initial - phase vectors in the example of section [ sec : ph ] , the number of customers in queue with initial phase is asymptotically equal to ; see . to exploit state - space collapse within the diffusion - model framework used in this paper , one must develop bounds ( rather than convergence results ) for state - space collapse . _ single server queues and reflection ._ a key challenge with single - server queueing systems is that of reflection . such reflection may violate our assumptions on .consider , for example , the queue this is a single - server version of the erlang - a queue discussed in section [ sec : examples ] .suppose that the arrival rate and service rate in the queue satisfy , and ( for ) .let be the patience parameter .then so that .also , and , in particular as .clearly , ( [ eq : flip ] ) is violated .it is fair to conjecture that similar results as ours can be proved in such settings provided that the reflection is explicitly captured in the dm .extending our results to dms with reflection seems to present a challenge insofar as the theory of pdes that arise from the poisson equation for such networks is less developed and poses a challenge in terms of the gradient bounds that are central to our analysis here ; see , for example , , where the poisson equation for constrained diffusion is discussed as well as , in the context of ergodic control , .proof of lemma [ lem : ito ] fix . 
by its rule applied to the pure jump process we have that \\[-8pt ] \nonumber & & { } + \sum_{s\leq t } \biggl[\psi\bigl ( \widehat{x}^n(s)\bigr)-\psi\bigl(\widehat{x}^n ( s- ) \bigr)-\sum_{i=1}^d \psi_i \bigl(\widehat{x}^n(s-)\bigr)\delta\widehat{x}^n_i(s ) \biggr ] .\label{eq : itojump}\end{aligned}\ ] ] from the linear growth of and from ( [ eq : integrability_cond ] ) , it then follows that <\infty.\ ] ] we can then apply lvy s formula for ctmcs ( see , e.g. , , exercise i.2.e2 ) to get that is a martingale with respect to the filtration in ( [ eq : filtration ] ) and , in turn , for all , ={\mathbb{e}}_x \biggl[\sum_{i=1}^d \int_0^t \psi_i\bigl(\widehat { x}^n(s)\bigr){\widehat{f}}_i^n\bigl ( \widehat{x}^n(s)\bigr)\,ds \biggr].\ ] ] to treat the second line of ( [ eq : itojump ] ) , we decompose it into and \\[-8pt ] \nonumber & & \hspace*{60pt}\qquad { } -\frac{1}{2}\sum_{i , j}^d \psi_{ij}\bigl(\widehat{x}^n ( s-)\bigr)\delta \widehat{x}^n_i(s)\delta\widehat{x}^n_j(s ) \biggr].\end{aligned}\ ] ] we treat ( [ eqd ] ) first . by ( [ eq : alip ] ) , so that , by ( [ eq : integrability_cond ] ) , < \infty , \qquad t \geq0 , x\in\widehat{e}^n,\ ] ] and applying lvy s formula once again , we obtain \\ & & \qquad = \frac{1}{2}{\mathbb{e}}_x \biggl[\sum _ { i , j}^d\sum_{\ell } \int _ 0^t \psi_{ij}\bigl ( \widehat{x}^n(s)\bigr)\ell_i\ell_j \frac{1}{n}\beta _ { \ell } ^n\bigl(\sqrt { n } \widehat{x}^n(s)+\bar{x}^n_{\infty}\bigr)\,ds \biggr ] \\ & & \qquad = \frac{1}{2 } { \mathbb{e}}_x \biggl[\sum _ { i , j}^d \int_0^t \psi_{ij}\bigl(\widehat{x}^n(s)\bigr)\bar{a}_{ij}^n \bigl(\widehat { x}^n(s)\bigr)\,ds \biggr ] \\ & & \qquad = \frac{1}{2}{\mathbb{e}}_x \biggl[\sum _ { i , j}^d \int_0^t \psi_{ij}\bigl(\widehat{x}^n ( s)\bigr)\bar { a}_{ij}^n(0)\,ds \biggr ] \\ & & \qquad\quad { } + \frac{1}{2}{\mathbb{e}}_x \biggl[\sum _ { i ,j}^d \int_0^t \psi_{ij}\bigl(\widehat{x}^n(s)\bigr ) \bigl ( \bar{a}^n\bigl(\widehat { x}^n(s)\bigr)-\bar { a}_{ij}^n(0)\bigr)\,ds \biggr].\end{aligned}\ ] ] the second item in the last line is in the statement of the lemma .we have proven thus far that \\ & & \qquad = \psi(x)+{\mathbb{e}}_x \biggl[\sum_{i=1}^d \int_0^t \psi_i\bigl(\widehat { x}^n(s)\bigr){\widehat{f}}_i^n\bigl ( \widehat{x}^n ( s)\bigr)\,ds \biggr ] \\ & & \quad\qquad{}+ \frac{1}{2 } { \mathbb{e}}_x \biggl[\sum_{i , j}^d \int _ 0^t \psi_{ij}\bigl ( \widehat{x}^n ( s)\bigr)\bar { a}_{ij}^n(0)\,ds \biggr ] + d_{\psi}^{n , x}(t)+a_{\psi } ^{n , x}(t ) \\ & & \qquad = \psi(x)+{\mathbb{e}}_x \biggl[\int_0^t \mathcal{a}^n\psi\bigl(\widehat{x}^n ( s)\bigr)\,ds \biggr]+d_{\psi}^{n , x}(t)+a_{\psi}^{n , x}(t),\end{aligned}\ ] ] where is as in the statement of the lemma and ] .thus here note that .let .note that {2,1,b_{x}(\bar{\ell}/\sqrt{n})}$ ] for with . 
since , we have that \\ & & \qquad\leq \frac{\bar{\ell}}{\sqrt{n}}\frac{1}{2 n}{\mathbb{e}}_x \biggl[\sum _ { i , j}^d \int_0^t [ \psi]_{2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n})}\sum_{\ell } |\ell _ i||\ell _j|\beta_{\ell}^n \bigl(x^n(s)\bigr)\,ds \biggr ] \\ & & \qquad \leq \frac{\bar{\ell}}{\sqrt{n}}\frac{1}{2 n}{\mathbb{e}}_x \biggl[\int _ 0^t [ \psi ] _ { 2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt { n})}\bigl|a^n \bigl(x^n(s)\bigr)\bigr|\,ds \biggr ] \\ & & \qquad = \frac{\bar{\ell}}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int_0^t [ \psi ] _ { 2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n})}\bigl|\bar { a}^n\bigl(\widehat{x}^n ( s ) \bigr)\bigr|\,ds \biggr ] < \infty,\end{aligned}\ ] ] where the finiteness follows from ( [ eq : alip ] ) and condition ( [ eq : integrability_cond ] ). we can apply lvy s formula one final time to conclude that \bigr|&= & \biggl{\vert}\frac{1}{2 n}\sum _ { i , j}^d { \mathbb{e}}_x \biggl [ \int _0^t \sum_{\ell } \widetilde{\psi}_{ij}\bigl(\widehat{x}^n(s),\widehat { x}^n(s)+\ell/\sqrt { n}\bigr)\ell _ i\ell_j \beta_{\ell}^n\bigl(x^n(s)\bigr)\,ds\biggr]\biggr { \vert}\\ & \leq&\frac{\bar{\ell}}{2\sqrt{n}}{\mathbb{e}}_x \biggl[\int_0^t [ \psi ] _ { 2,1,b_{\widehat{x}^n(s)}(\bar{\ell}/\sqrt{n})}\bigl|\bar { a}^n\bigl(\widehat{x}^n ( s ) \bigr)\bigr|\,ds \biggr]\end{aligned}\ ] ] as required . toward the proof of lemma [ lem : minorization ]we first prove that inherits the lipschitz continuity of .[ lem : psd ] there exists a finite positive constant ( not depending on ) such that since , for each , is symmetric positive definite as is , these matrices have strictly positive eigenvalues ; see , for example , , theorem 7.2.1 . also , the eigenvalues of the square - root matrix are the square roots of the eigenvalues of . since , the eigenvalues of , , converge to those of , .the eigenvalues of the inverses and are given by the reciprocals and , in turn , satisfy .in particular and ( where , following common notation , is the spectral norm of ; see , section 5.1 . since the matrices are symmetric this norm is equal to the spectral radius of the matrix , that is , to its maximal eigenvalue ) . by definition of the matrix normit then holds that for some finite positive constant where the last inequality follows from the fact argued above .similarly , for a finite positive constant . finally , using ( [ eq : flip ] ) we have that which completes the proof .proof of lemma [ lem : minorization ] we consider first the chain .fix and let .let .by ( [ eq : interim21 ] ) , there exists a constant not depending on such that by lemma [ lem : psd ] there exist and not depending on such that for all and . also , since it satisfies also a linear growth condition uniformly in . using , theorem 3.1 and ( [ eq : bark1 ] ) we have that for some where is the transition density of from to in time .in particular , where is here the lebesgue measure and using the invariance of lebesgue measure under invertible linear transformations we have for any that where is here the determinant of the positive definite matrix , and we use the simple fact that .since , it also holds that so that there exists ( not depending on ) such that let . defining the measure we conclude that the result for follows immediately from the above .indeed , which completes the proof. proof of lemma [ lem : fromdisctocont ] this argument is almost identical to the proof in , page 27 . 
under condition ( [ eq :expobound1 ] ) , dynkin s formula holds up to , that is , = v(y)+ { \mathbb{e}}_y \biggl[\int_0^t \mathcal { a}^n v\bigl({\widehat{y}}^n(s)\bigr)\,ds \biggr];\ ] ] see , for example , , theorem 6.3 . setting \quad\mbox{and}\quad h(t)={\mathbb{e}}_y\bigl[\mathcal{a}^nv\bigl({\widehat{y}}^n(t)\bigr)\bigr]+\delta g(t),\ ] ] we have that ( and as in assumption [ asum : l ] ) and solving this differential equation we get setting and we have the statement of the lemma .the author is grateful to junfei huang and to an anonymous referee for their careful reading of this paper and for their numerous insightful comments . | motivated by queues with many servers , we study brownian steady - state approximations for continuous time markov chains ( ctmcs ) . our approximations are based on _ diffusion models _ ( rather than a diffusion limit ) whose steady - state , we prove , approximates that of the markov chain with notable precision . strong approximations provide such `` limitless '' approximations for process dynamics . our focus here is on steady - state distributions , and the diffusion model that we propose is tractable relative to strong approximations . within an asymptotic framework , in which a scale parameter is taken large , a uniform ( in the scale parameter ) lyapunov condition imposed on the sequence of diffusion models guarantees that the gap between the steady - state moments of the diffusion and those of the properly centered and scaled ctmcs shrinks at a rate of . our proofs build on gradient estimates for solutions of the poisson equations associated with the ( sequence of ) diffusion models and on elementary martingale arguments . as a by - product of our analysis , we explore connections between lyapunov functions for the fluid model , the diffusion model and the ctmc . |
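before moving on to the next article , one more illustrative check ( not taken from the preceding paper ) of the poisson - equation machinery it relies on : for a one - dimensional ou - type diffusion model with drift - mu*y and constant diffusion coefficient sigma2 , and the centered test function f(y) = y ( so pi(f) = 0 by symmetry ) , the equation ( sigma2 / 2 ) u'' + ( - mu*y ) u' = -( f(y) - pi(f) ) is solved by u(y) = y / mu , and the snippet verifies this on a grid .

```python
# Illustrative verification that u(y) = y/mu solves the Poisson equation
# (sigma2/2) u'' + (-mu*y) u' = -(f(y) - pi(f)) for f(y) = y and an OU-type
# diffusion model (made-up parameters).
import numpy as np

mu, sigma2 = 1.0, 2.0
y = np.linspace(-5.0, 5.0, 2001)
h = y[1] - y[0]

u = y / mu                                   # candidate solution
u_prime = np.gradient(u, h)
u_second = np.gradient(u_prime, h)

residual = 0.5 * sigma2 * u_second + (-mu * y) * u_prime + y   # A u + (f - pi(f)), should vanish
print("max |residual|:", np.abs(residual).max())
```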
it is difficult to count the incidence of crime at the local level .typically , the only systematically collected crime data are those compiled by the local police department for submission to the federal bureau of investigation s ( fbi ) uniform crime reporting ( ucr ) program .these numbers are best suited for tracking the amount of crime that is reported to the police and how often these reported crimes are cleared by arrest or exceptional means ( fbi , 2009 ) .however , they often carry considerable weight in assessing how well a police department is performing , how safe a community is , and whether a police department needs more or different kinds of resources ( maltz , 1999:2 ) .news media and law enforcement agencies routinely report levels and trends in crimes known to the police as `` crimes '' and `` criminal behavior . '' in prepared testimony to the u.s .house of representatives , carbon ( 2012:16 ) writes that `` [ t]he ucr is the national ` report card ' on serious crime ; what gets reported through the ucr is how we , collectively view crime in this country . '' sometimes , these assessments veer into explicit crime rate rankings of cities ( rosenfeld and lauritsen [ 2008 ] discusses the scope of this problem ) a practice that has been condemned by the american society of criminology ( 2007 ) .criminologists sometimes use and compare point estimates of crime rates for different jurisdications at various levels of aggregation , and they report relationships between crime and various social and economic indicators as if there was no uncertainty in those estimates beyond sampling variation . in our view, there is no inherent problem with considering whether a crime rate is higher in one place than another at the same time or whether a crime rate is higher at one time than another in the same place . to be absolutely clear, the problem arises when ambiguities in the statistics undermine the validity of the comparison .for example , a major obstacle to using police - based crime statistics to infer within - community changes in crime over time or between - community crime differences arises from the well - known fact that many crimes are not reported to the police ( baumer and lauritsen , 2010 ; james and council , 2008 ) .therefore , when crime rates vary across space or time , it is hard to know how much of that change is caused by shifts in real criminal behavior or changes in the reporting behavior of victims ( biderman and reiss , 1967 ; maltz , 1975 ; eck and riccio , 1979 ; blumstein et al . , 1991 , 1992 ; also for a similar idea in state sat rankings see wainer , 1986 ) . consider a simple anecdote that illustrates our concerns .a recent newspaper article in the _ charlotte observer _ reported that `` the number of crimes dropped 7.1 percent last year , a development that charlotte police chief rodney monroe credited largely to officers keeping a close eye on potential criminals before they struck '' ( lyttle , 2012 ) .the comparison expressed in this news coverage implicitly makes the strong and untestable assumption that the reporting behavior of victims stayed the same and all of the change in the number of crimes known to the police from one year to the next is due to changes in criminal behavior ( eck and riccio , 1979 ; blumstein et al . 
, 1991 ; brier and fienberg , 1980 ; nelson , 1979 ; skogan , 1974 ; rosenfeld and lauritsen , 2008 ; bialik , 2010 ) .analytically , the same problems exist when criminologists try to explain variation in crime rates across different cities with identified explanatory variables like police patrol practices , dropout rates , home foreclosures , and unemployment ; they also arise when researchers try to measure and explain short - term changes in crime rates within the same jurisdiction . whether the analysis involves simple year - over - year percent change comparisons for different cities or more complex statistical models for cross - sectional and panel data sets , the analytical ambiguities are the same .in fact , variation in crime reporting patterns injects considerable ambiguity into the interpretation of police - based crime statistics ( eck and riccio , 1979 ; blumstein et al . , 1991 ; levitt , 1998 ) .recent work by baumer and lauritsen ( 2010 ) building on a long series of detailed crime reporting statistics from the national crime survey ( ncs ) and its successor , the national crime victimization survey ( ncvs ) makes the compelling point that there may be a causal relationship between the mobilization of the police and the likelihood that a community s citizens will report victimization experiences to the police .police departments that cultivate strong working community partnerships may actually increase reporting of certain crimes simply because people believe the police will take useful actions when those crimes are reported : `` police notification rates are indicators of public confidence in the police and the legitimacy of the criminal justice system , and increasing police - public communication is a key goal of community - oriented policing strategies to reduce crime and the fear of crime '' ( baumer and lauritsen , 2010:132 ) . even changes in the number of police in a particular area may affect crime reporting practices of the citizenry ( levitt , 1998 ) .variation in reporting rates can create the illusion of a shift in crime even if real crime levels are perfectly stable ( eck and riccio , 1979 ; maltz , 1975 ) . in fact , residential burglary reporting rates do exhibit year - to - year volatility .
from 2010 to 2011 , the rate at which residential burglary victimizations were reported to the police dropped from 59% to 52% ( truman , 2011:10 ; truman and planty , 2012:9 ) .if these kinds of changes occur at the local level as well , they could easily explain a good deal of the variation we typically see from one year to the next in local , police - based robbery and burglary statistics . in this paper, we conduct a case study of local level crime measurement while trying to pay close attention to some important sources of ambiguity . specifically , our objective is to estimate the incidence of residential burglary for each of the 10 most populous cities in north carolina in 2009 , 2010 , and 2011 .the analysis is informed by data measuring the likelihood that residential burglary victimizations are reported to the police .we focus on residential burglaries in the 10 largest north carolina cities because : ( 1 ) residential burglary is a clear , well - defined crime about which the public expresses considerable fear and concern ( blumstein and rosenfeld , 2008:18 - 20 ) ; ( 2 ) unlike most other states , north carolina law enforcement agencies publicly report residential burglaries known to the police separately from non - residential burglary ; ( 3 ) residential burglaries are household - level victimizations which correspond closely to the household - level structure of the ncvs ( the ncvs does not measure reporting behaviors for commercial burglaries ) ; and ( 4 ) conducting the analysis across cities and over time allows us to comment directly on the kinds of comparisons that are often conducted with police - based crime statistics .we are not the first to consider this issue ( see , for example , maltz , 1975 ; eck and riccio , 1979 ; blumstein et al . , 1991 ; levitt , 1998 ; lohr and prasad , 2003 ; westat , 2010 ; raghunathan et al . , 2007 ) .nonetheless , the interval estimates or bounds we propose in this paper consider several key sources of uncertainty and stand in contrast to the excessively definitive point estimates that are often the subject of public discourse about crime .the first issue to address is why would we expect different rates of reporting crimes to the police in different jurisdictions ( cities , counties , states ) ?since most police work is reactive rather than proactive , questions about the relationship between public perceptions of the police and the propensity of citizens to report crime victimizations to the police loom large .if the public perceives the police as indifferent and unlikely to do anything to help , the likelihood of crimes being reported to the police could be affected ( baumer and lauritsen , 2010 ) .concerns that the police are unresponsive to the needs of the community can lead to a phenomenon called `` legal cynicism . ''kirk and colleagues ( kirk and matsuda 2011:444 ; kirk and papachristos 2011 ) have argued that legal cynicism is a `` cultural frame in which the law and the agents of its enforcement are viewed as illegitimate , unresponsive , and ill equipped to ensure public safety . '' in addition , legal cynicism is understood to be `` an emergent property of neighborhoods in contrast to a property solely of individuals '' in part because it is formed not only in reaction to one s own personal experiences with legal actors and institutions but through interaction with others in the community ( kirk and matsuda 2011:448 ) . 
according to this view , culture , and legal cynicism as part of it ,is not perceived as a set of values , goals , or `` things worth striving for '' ( merton 1968:187 ) but rather as a repertoire or toolkit to use in understanding the world ( swidler 1986 ) .there are two consequences that follow from the level of legal cynicism in a community .first , if legal institutions like the police are perceived as illegitimate then citizens are less willing to comply with laws with the result that there is going to be more actual crime .for example , fagan and tyler ( 2005 ) found that adolescents who perceived a lack of procedural justice among authorities also exhibited higher levels of legal cynicism .adolescents rated higher in legal cynicism ( i.e. , expressing agreement with statements like `` laws are made to be broken '' ) were also higher in self - reported delinquency than those less cynical . in a survey of adult respondents , reisig , wolfe , and holtfreter ( 2011 ) reported that self - reported criminal offending was significantly related to their measure of legal cynicism net of other controls including self - control .finally , kirk and papachristos ( 2011 ) found that legal cynicism in chicago neighborhoods explained why they had persistently high levels of homicide in spite of declines in both poverty and general violence .in addition to these studies of legal cynicism , there are numerous studies which have shown a link between measures of legitimacy of legal institutions such as the courts and police and a higher probability of violating the law ( paternoster et al .1997 ; tyler 2006 ; papachristos , meares , and fagan 2011 ) .a second consequence of legal cynicism of central concern in this paper is that citizens are not likely to cooperate with the police , including reporting a crime when it occurs .when citizens believe that the police are not likely to be responsive or will do little to help people like them , then we would expect more crimes to go unreported and offenders to go unarrested . 
the perception that it would do no good to cooperate with the police is an integral part of the cultural system described by anderson ( 1999:323 ) as the `` code of the street '' : `` [ t]he most public manifestation of this alienation is the code of the street , a kind of adaptation to a lost sense of security of the local inner - city neighborhood and , by extension , a profound lack of faith in the police and judicial system . '' several studies have found that community members , both adults and juveniles , are unlikely to cooperate with the police , including reporting crime and providing information , when law enforcement is seen as illegitimate ( sunshine and tyler 2003 ; tyler and fagan 2008 ; slocum et al . ) .kirk and matsuda ( 2011 ) found a lower risk of arrest in neighborhoods with high levels of legal cynicism . in sum , an increasing array of conceptual and empirical work has linked a perceived lack of responsiveness on the part of legal actors to both more crime and less reporting of crime .communities characterized by high levels of legal cynicism or a sudden change in the level of legal cynicism ( because of perceived mishandling of an event ) may exhibit not only a higher level of crime but also a greater unwillingness of citizens to report a crime to the police .given the reactive nature of most police work , there are sound reasons for believing that citizens lack of cooperation and faith in the police are reflected in a lower rate of official police - based crime statistics , though the actual rate may be higher .although legal cynicism accounts for some of the variation in the rate at which citizens report a crime to the police , other factors are also involved .the point here is not to offer legal cynicism as the only hypothesis or even test this conjecture as a hypothesis .rather , it is to provide some justification that there are credible _ a priori _ reasons to believe there is systematic variation in the reporting of crimes across jurisdictions ( and over time within the same jurisdiction ) and that such variation is one source of the ambiguity in police statistics .
in the following sections , we give formal expression to this ambiguity using an approach which brings the fragile nature of police - based crime statistics to center stage .we begin our analysis by examining the number of residential burglaries reported by the police to the north carolina state bureau of investigation s ( sbi ) 2010 uniform crime reports for the 10 most populous city - level jurisdictions in north carolina during the 2009 - 2011 period ( state bureau of investigation , 2012 ) .the sbi statistics count both attempted and completed residential burglaries .we verified that each of these 10 cities participated in the sbi s ucr program for each month of the 2009 - 2011 calendar years .table 1 identifies the 10 cities included in our study ( column 1 ) along with the frequency of residential burglaries reported by each city s police department in 2009 ( column 2 ) , 2010 ( column 3 ) , and 2011 ( column 4 ) .we denote residential burglaries reported by the police to the sbi - ucr program as . [ table 1 : residential burglaries counted in the ucr ] a prominent aspect of the evidence in table 6 is that there is real variation in the number of persons per household in large north carolina cities . on average ,asheville and wilmington have smaller household sizes ( persons per household ) while cary and high point have the highest density households ( persons per household ) .it is possible , then , that two cities could have an identical rate of residential burglary when that rate is expressed in terms of population size but have different residential burglary rates when expressed in terms of the number of households in the city .we now consider how much our inferences about residential burglary rates depend upon the issues we have considered in this paper .we do nt expect much sensitivity to the uncertainty of the hierarchy rule since the adjustments are small .it is less clear how sensitive our results will be to uncertainty about the size of the population ( including whether we scale by the number of persons or the number of households ) and uncertainty about the fraction of residential burglaries reported to the police .
in order to estimate the actual rate of residential burglaries per 100,000 persons ( ), we define its lower bound as : while the upper bound is : this interval estimate identifies the outer limits of what is possible in terms of the residential burglary rate per 100,000 persons assuming that and form the proper bounds on the size of the population , that each city s reporting rate falls within the 95% confidence interval of the ncvs - estimated reporting rate , and that our adjustments for the fbi s hierarchy rule are accurate .we combine these estimates with the standard residential burglary rates per 100,000 population based on the information from tables 1 and 5 to produce the point and interval estimates in figure 1 .for each of the 10 cities in figure 1 there are 2 sets of estimates : ( 1 ) the upper and lower bound ( interval ) estimates of the actual residential burglary rate ( ) for 2009 - 2011 ; and ( 2 ) the `` standard '' ( point ) estimates of the residential burglary rate per 100,000 population ( based on tables 1 and 5 ) for 2009 - 2011 .comparisons between the point estimates over time within the same city implicitly assume that the reporting rate , is the same across the years while comparisons between point estimates of different cities implicitly assume that the reporting rate is constant between the cities being compared ( at the time they are compared ) .while these assumptions seem implausibly strong , they are commonly invoked for both journalistic and research purposes .we also consider the impact of adjusting for the number of households instead of the number of people and then placing residential burglaries on a scale per 1,000 households in each city during each year ( figure 2 ) .this analysis relies on the information presented in tables 4 and 6 . to estimate the lower bound on the residential burglary rate per 1,000 households we obtain : andthe upper bound is given by : broadly speaking , the two sets of rate comparisons in figures 1 and 2 seem to tell similar stories . from this analysis ,our major conclusion is that the major source of uncertainty in estimating residential burglary rates in these 10 north carolina cities over the 2009 - 2011 time frame is the reporting rate , and to a lesser extent , the size of the population .it is worth considering a couple of example implications of our results .suppose we set out to compare the burglary rates between charlotte and wilmington in 2009 .using the standard approach for comparing the two cities , we would find that charlotte had a residential burglary rate of 1,051 per 100,000 persons while wilmington s rate is 1,184 ( nearly 13% higher ) .some might use this evidence to say that wilmington had a higher residential burglary rate than charlotte in 2009 .but upon further analysis , we find that the actual rate of residential burglaries per 100,000 population could plausibly lie in the ] interval in wilmington . 
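the bound construction just described is simple interval arithmetic , and a small sketch may help make it concrete . the following python fragment is ours and purely illustrative : the counts , reporting - rate bounds , population bounds , and hierarchy - rule adjustment are hypothetical placeholders rather than values from our tables , and the function names are not part of any existing software . the logic is only that the lower bound pairs the recorded count with the largest plausible reporting rate and denominator , while the upper bound pairs it with the smallest .

```python
# illustrative sketch of the interval arithmetic described above; all inputs
# below are hypothetical placeholders, not values taken from our tables.

def burglary_rate_bounds(recorded, report_rate_lo, report_rate_hi,
                         pop_lo, pop_hi, hierarchy_adj=1.0, per=100_000):
    """interval estimate of the 'actual' residential burglary rate.

    recorded      : residential burglaries known to the police (ucr count)
    report_rate_* : bounds on the probability that a burglary is reported
    pop_*         : bounds on the population (or household) denominator
    hierarchy_adj : multiplicative adjustment for the fbi hierarchy rule
    """
    adjusted = recorded * hierarchy_adj
    lo = per * (adjusted / report_rate_hi) / pop_hi  # fewest inferred crimes, largest denominator
    hi = per * (adjusted / report_rate_lo) / pop_lo  # most inferred crimes, smallest denominator
    return lo, hi

def overlaps(a, b):
    """if two rate intervals overlap, the sign of the difference is not identified."""
    return max(a[0], b[0]) <= min(a[1], b[1])

city_a = burglary_rate_bounds(7500, 0.49, 0.59, 700_000, 740_000)
city_b = burglary_rate_bounds(1200, 0.49, 0.59, 100_000, 108_000)
print(city_a, city_b, overlaps(city_a, city_b))
```

scaling by households instead of persons only changes the denominator bounds ( and the `` per '' constant ) passed to the same function .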
since these intervals overlap ( see figure 1 ) , the sign of the difference between charlotte and wilmington is not identified .this lack of identifiability becomes even more prominent when we focus on household incidence of burglary .for charlotte , the bounds on the residential burglary rate per 1,000 households in 2009 are ] .this is a marked increase in the degree of overlap between the charlotte and wilmington interval estimates .we can attribute most of this difference to the fact that charlotte households had an average of 2.46 persons in 2006 - 2010 while wilmington households were smaller on average ( 2.19 persons ) over the same time period . in short , to speak about a clear difference in these two cities is an example of unwarranted certitude .it is possible that charlotte s burglary rate is higher , lower , or the same as wilmington s .considering plausible sources of uncertainty in the comparison , the data are simply not strong enough to tell us .a comparison of charlotte and raleigh on the other hand leads us to a stronger set of conclusions . in 2011 , for example , charlotte s estimated residential burglary rate per 100,000 population based exclusively on the information in tables 1 and 5 was 818 while raleigh s rate was 597 ( a difference ) . we conclude that the difference between the burglary rates in charlotte and raleigh can not be explained by the uncertainties considered in this paper .the residential burglary rate interval for charlotte in 2011 is ] .as figure 2 shows , there is a similar pattern for burglary incidence scaled by the number of households . since these intervalsdo not overlap , it seems credible to argue that charlotte s rate is higher than raleigh s rate in 2011 .figures 1 and 2 are also helpful for displaying the over - time change within cities .using the standard crime rate estimator , we can see that the point estimates of charlotte s residential burglary rate reveal what appears to be a meaningful decline from 2009 ( 1,051 ) to 2011 ( 818 ) ( a drop of about ) .the problem is that there is some evidence that reporting rates could have also changed a good deal over the same time period .what this means is that the burglary rate interval for charlotte ( accounting for the hierarchy rule , reporting rate uncertainty , and population size uncertainty ) is ] in 2011 .since these intervals overlap we can not discern whether the charlotte burglary rate increased , decreased , or stayed the same over this time period .a counterexample is provided by durham . in 2009 ,the standard burglary rate estimate was 1,281 ; in 2011 that rate estimate increased to 1,472 an increase of .our interval estimates suggest that this increase was real ; in 2009 , the interval was ] . in the case of durham , we can confidently conclude that residential burglaries increased why that increase occurred , of course , is a different question .a good deal of contemporary discussion about local crime patterns in the u.s . is marred by unwarranted certitude about the numbers and rates underlying that discussion .criminal justice officials , journalists , and even academic criminologists count crimes known to the police while ignoring key sources of uncertainty about those numbers . 
since the late 1960 s and early 1970 s , for example , it has been common criminological knowledge that many crimes are not reported to the police but somehow that knowledge ends up playing only a tangential role ( if any role at all ) in our public discourse about crime patterns at the local level .part of the problem is that there has been little methodological attention to the task of expressing and transmitting uncertainty about crime patterns to policy and lay audiences ( manski , 2003:21 ) .based on manski s work on bounds and partial identification , however , we think it will be useful for criminologists to begin reporting crime patterns in terms of a range of uncertainty that expresses both what is known and unknown about the numbers that are used to measure those patterns .a key feature of the methods used here is that they explicitly abandon the goal of obtaining point estimates in favor of a more realistic and reasonable goal of obtaining interval estimates .our approach provides one path by which criminologists can _ begin _ to reasonably express both what is known and unknown with current publicly available datasets .another feature of our approach is that we move away from the `` incredible certitude '' problem described by charles manski ( 2011 ) the practice of developing unqualified and unjustifiably `` certain '' inferences based on weak data .criminologists are often asked by the media to comment on small year - to - year movements in police - based crime statistics . in our conversations with other criminologists ,we have noted that many feel quite uncomfortable characterizing this or that small movement in the crime rate .the analysis in this paper illustrates why these feelings of apprehension are justified .as eck and riccio ( 1979 ) observed over 30 years ago , a movement of a few percentage points in the police statistics may or may not reflect real changes in crime .we think our approach to this problem is useful because it allows us to transmit our uncertainty especially to lay audiences in systematic ways that have not been obvious in the past .still , there are limitations .first and foremost , we believe our bounds on the probability that a residential burglary is reported to the police ( ) are a reasonable starting point but improving our understanding of this interval would be constructive .this highlights an important direction for future research : achieving disciplinary consensus on the likely bounds for crime reporting probabilities should be a high priority .one reviewer of a previous version of this manuscript criticized our reporting rate intervals as being too narrow .that reviewer found it inconceivable that the local and national estimates would exhibit any particular comparability .most of the data that can be used to check on this were collected in the 1970 s in a series of city crime surveys conducted by the national criminal justice information and statistics service ( 1974 , 1975 , 1976 ; see also levitt , 1998 ) and a research report by lauritsen and schaum ( 2005 ) .while there is not much local data to go on , it appears from the weight of this evidence that most cities have residential burglary reporting rates that are within a reasonably proximate range of the national estimates of their time .the reviewer s comment nonetheless highlights the need for greater understanding of how closely local reporting rates track what is observed nationally . 
and if the reviewer turns out to be correct then the burglary rate intervals estimated in our work will be too narrow ; a result which amplifies rather than diminishes our arguments .we have considered several examples where point estimates based on conventional methods prove to be highly misleading . using those methods one would draw the conclusion that one city had a higher rate than another city or that a city s rate changed in a meaningful way from one year to another .our analysis shows that in some of these comparisons , a plausible rival hypothesis can not be excluded : it is possible that the burglary rates are the same while only the reporting rate differs .since the reporting rate , , is not identified we can only make assumptions about its value ; the data can not be used to resolve this ambiguity . only information that reduces our uncertainty about the rate at which residential burglaries are reported to the police in the two cities will resolve it .the good news is that the development of this kind of information is feasible .the national research council along with the bjs has recently considered a range of possibilities for improving on the small - area estimation capabilities of the ncvs ( groves and cork , 2008 ; bureau of justice statistics , 2010 ) .most of the attention has focused on small - area estimation of victimization rates but further refinement of reporting rate estimates should also be a priority .this is not a new idea ; eck and riccio ( 1979 ) emphasized the possibilities of this approach decades ago , yet combining this emphasis with a focus on interval estimation of crime rates may prove to be a viable way forward .a key benefit of this kind of information would be a substantial reduction of the uncertainty that is evident in our figures 1 - 2 .it is noteworthy that we are able to make useful statements about residential burglary rates for north carolina cities because the state reporting program clearly identifies residential burglaries known to the police .this is not done in the fbi s uniform crime report which presents counts of all burglaries , both residential and commercial , known to the police in a single number . and we encounter difficulty using the fbi s burglary numbers since the ncvs only measures reporting behaviors for residential burglaries .expansion of our approach to other crimes will require careful consideration of how the crimes described in the ucr relate to the victimization incidents counted in the ncvs .there is a well - developed literature on this topic ( see , for example , blumstein et al . , 1991 , 1992 ; lynch and addington , 2007 ) but there will be some difficulties in ensuring that the reporting probability gleaned from the ncvs maps onto ucr crime categories in a meaningful way .
in our view , the field will be well served by taking on these challenges .we encountered a few other ambiguities in addition to the reporting probability ; namely , uncertainty due to the ucr s hierarchy rule , the size of the population and the question of whether to scale by the number of households or by the number of persons ( gibbs and erickson , 1976 ) .it is surprising how large some of the differences in population estimates were , and this uncertainty should be considered in more detail .we verified that each of the jurisdictions we studied participated in the state uniform crime reporting program each month of each year during the 2009 - 2011 calendar years ( but greensboro did not participate in the federal program in 2011 ) ; still we clearly have no way to verify the accuracy of the numbers reported by the police departments ( westat , 2010:vii - viii ) .this issue is always a threat to analyses that rely on police - based crime statistics and our study is no exception . in our view, it will be useful for criminologists to : ( 1 ) be aware of the kinds of uncertainties discussed in this paper ; ( 2 ) develop better information about uncertain parameters such as the probability of victimizations being reported to the police at the local level ; ( 3 ) create analytic methods that will formally incorporate and transmit key sources of uncertainty in the measurement of crime rates ; and ( 4 ) explore ways of conducting sensitivity analysis to assess the fragility of our results .a fifth priority should be a program of research to consider how identification problems such as those discussed in this paper can be addressed within the framework of statistical models commonly used to estimate effects of social and economic changes on crime rates . logically , there is no difference between a comparison of burglary rates in charlotte and raleigh from 2009 to 2010 and the kinds of panel regression , difference - in - difference , and pooled - cross - sectional time series estimators commonly used to identify causal relationships in crime data .all of the uncertainties discussed here are present in space - and - time crime regressions commonly estimated by criminologists . yet the issues discussed in this paper loom as major sources of uncertainty for these models .we view our approach as an initial , constructive , and necessary step in the direction of a more balanced and informative use of aggregate crime statistics . addington , lynn a. ( 2007 ) . using nibrs to study methodological sources of divergence between the ucr and ncvs . in _ understanding crime statistics : revisiting the divergence of the ncvs and ucr _ , pp .225 - 250 .new york : cambridge university press .american society of criminology ( 2007 ) . http://www.asc41.com / policies / policypositions.html[official policy position of the executive board of the american society of criminology with respect to the use of uniform crime reports data . ] columbus , oh : american society of criminology .baumer , eric p. and janet l. lauritsen ( 2010 ) .reporting crime to the police 1973 - 2005 : a multivariate analysis of long - term trends in the national crime survey ( ncs ) and national crime victimization survey ( ncvs ) ._ criminology _ , 48:131 - 185 .pepper , john v. ( 2001 ) .how do response problems affect survey measurement of trends in drug use ? in _ informing america s policy on illegal drugs : what we do nt know keeps hurting us _ , pp .321 - 347 .washington , dc : national academy press .raghunathan , trivellore e.
, dawei xie , nathaniel schenker , van l. parsons , william w. davis , kevin w. dodd and eric j. feuer ( 2007 ) . combining information from two surveys to estimate county - level prevalence rates of cancer risk factors and screening. _ journal of the american statistical association _ , 102:474 - 486 .slocum , lee ann , terrance j. taylor , bradley t. brick , and finn - aage esbensen .neighborhood structural characteristics , individual - level attitudes , and youths crime reporting intentions . _ criminology _ , 48:1063 - 1100 . | we consider the problem of estimating the incidence of residential burglaries that occur over a well - defined period of time within the 10 most populous cities in north carolina . our analysis typifies some of the general issues that arise in estimating and comparing local crime rates over time and for different cities . typically , the only information we have about crime incidence within any particular city is what that city s police department tells us , and the police can only count and describe the crimes that come to their attention . to address this , our study combines information from police - based residential burglary counts and the national crime victimization survey to obtain interval estimates of residential burglary incidence at the local level . we use those estimates as a basis for commenting on the fragility of between - city and over - time comparisons that often appear in public discourse about crime patterns .
the construction of tests for hypotheses on the coefficient vector in linear regression models with dependent errors is highly practically relevant and has received lots of attention in the statistics and econometrics literature .the main challenge is to obtain tests with good size and power properties in situations where the nuisance parameter governing the dependence structure of the errors is high- or possibly infinite - dimensional and allows for strong correlations .the large majority of available procedures are autocorrelation - corrected f - type tests , based on nonparametric covariance estimators trying to take into account the autocorrelation in the disturbances .these tests can roughly be categorized into two groups , the distinction depending on the choice of a bandwidth parameter in the construction of the covariance estimator .the first group of such tests , so - called ` hac tests ' , incorporates bandwidth parameters that lead to consistent covariance estimators , and to an asymptotic -distribution of the corresponding test statistics under the null hypothesis , the quantiles of which are used for testing . concerning ` hac tests ' , important contributions in the econometrics literature are , , , and .it is safe to say that the covariance estimators introduced in the latter two articles currently constitute the gold standard for obtaining ` hac tests ' .in contrast to the estimator suggested earlier by - structurally times a standard kernel spectral density estimator ( , , , and section 7.9 ) evaluated at frequency - the covariance estimators suggested in and both incorporate an additional prewhitening step based on an auxiliary vector autoregressive ( var ) model , as well as a data - dependent bandwidth parameter . a distinguishing feature of the estimators introduced by on the one hand and on the other hand is the choice of the bandwidth parameter : used an approach introduced by , where the bandwidth parameter is chosen based on auxiliary parametric models .in contrast to that , suggested a nonparametric approach for choosing the bandwidth parameter . even though simulation studies have shownthat the inclusion of a prewhitening step and the data - dependent choice of the bandwidth parameter can improve the finite sample properties of ` hac tests ' , the more sophisticated ` hac tests ' so obtained still suffer from size distortions and power deficiencies .for this reason , , and suggested to choose the bandwidth parameter as a fixed proportion of the sample size .this framework leads to an inconsistent covariance estimator and to a non - standard limiting distribution of the corresponding test statistic under the null , the quantiles of which are used to obtain so called ` fixed - b tests ' . in simulation studies it has been observed that ` fixed - b tests ' still suffer from size distortions in finite samples , but less so than ` hac tests ' . however , this is at the expense of some loss in power .similar as in ` hac testing ' simulation results in and suggest that the finite sample properties of ` fixed - b tests ' can be improved by incorporating a prewhitening step . in the latter paperit was furthermore shown that the asymptotic distribution under the null of the test suggested by is the same whether or not prewhitening is used . a number of recent studies ( , , , ) tried to use higher order expansions to uncover the mechanism leading to size distortions and power deficiencies of ` hac tests ' and ` fixed - b tests ' . 
these higher - order asymptotic results ( and also the first - order results discussed above )are pointwise in the sense that they are obtained under the assumption of a fixed underlying data - generating - process .hence , while they inform us about the limit of the rejection probability and the rate of convergence to this limit for a fixed underlying data - generating - process , they do not inform us about the _ size _ of the test or its limit as sample size increases , nor about the _ power function _ or its asymptotic behavior .size and power properties of tests in regression models with dependent errors were recently studied in : in a general finite sample setup and under high - level conditions on the structure of the test and the covariance model , they derived conditions on the design matrix under which a concentration mechanism due to strong dependencies leads to extreme size distortions or power deficiencies .furthermore , they suggested an adjustment - procedure to obtain a modified test with improved size and power properties .specializing their general theory to a covariance model that includes at least all covariance matrices corresponding to stationary autoregressive processes of order one ( ar(1 ) ) , they investigated finite sample properties of ` hac tests ' and ` fixed - b tests ' based on _ non - prewhitened _ covariance estimators with _ data - independent _ bandwidth parameters ( covering _ inter alia _ the procedures in , sections 3 - 5 of , , , , , but _ not _ the methods considered by , or ) .in this setup demonstrated that these tests break down in terms of their size or power behavior for generic design matrices . despite this negative result, they also showed that the adjustment procedure can often solve these problems , if elements of the covariance model which are close to being singular can be well approximated by ar(1 ) covariance matrices . did not consider tests based on prewhitened covariance estimators or data - dependent bandwidth parameters .therefore the question remains , whether the more sophisticated ` hac tests ' typically used in practice ( i.e. , tests based on the estimators by or ) and the prewhitened ` fixed - b tests ' ( i.e. , tests as considered in ) also suffer from extreme size distortions and power deficiencies , or if prewhitening and the use of data - dependent bandwidth parameters can indeed resolve or at least substantially alleviate these problems . 
in the present paper we investigate finite sample properties of tests based on prewhitened covariance estimators or data - dependent bandwidth parameters .in particular our analysis covers tests based on prewhitened covariance estimators using auxiliary ar(1 ) models for the construction of the bandwidth parameter as discussed in , tests based on prewhitened covariance estimators as discussed in , and prewhitened ` fixed - b ' tests as discussed in .we show that the tests considered , albeit being structurally much more complex , exhibit a similar behavior as their _ non - prewhitened _ counterparts with _ data - independent _ bandwidth parameters : first , we establish conditions on the design matrix under which the tests considered have ( i ) size equal to one , or ( ii ) size not smaller than one half , or ( iii ) nuisance - minimal power equal to zero , respectively .we then demonstrate that at least one of these conditions is generically satisfied , showing that the tests considered break down for generic design matrices .motivated by this negative result , we introduce an adjustment procedure . under the assumption that elements of the covariance model which are close to being singular can be well approximated by ar(1 ) covariance matrices , we show that the adjustment procedure , if applicable , leads to tests that do not suffer from extreme size distortions or power deficiencies .finally , it is shown that the adjustment procedure is applicable under generic conditions on the design matrix , unless the regression includes the intercept _ and _ the hypothesis to be tested restricts the corresponding coefficient . on a technical levelwe employ the general theory developed in .we remark , however , that the genericity results in particular do not follow from this general theory .rather they are obtained by carefully exploiting the specific structure of the procedures under consideration .the paper is organized as follows : the framework is introduced in section [ framework ] . in section [ tpwc ]we introduce the test statistics , covariance estimators , and bandwidth parameters we analyze . in section [ neg ]we establish our negative result and its genericity . in section [ pos ]we discuss the adjustment - procedure and its generic applicability .section [ concl ] concludes .the proofs are collected in appendices [ aa]-[ad ] .consider the linear regression model where is a ( real ) dimensional non - stochastic design matrix satisfying , and . here, denotes the unknown regression parameter vector , and the disturbance vector is gaussian , has mean zero and its unknown covariance matrix is given by .the parameter satisfies and is assumed to be an element of a prescribed ( non - void ) set of positive definite and symmetric matrices , which we shall refer to as the _ covariance model_. throughout we impose the assumption on that the parameters and can be uniquely determined from .[ stationary ] the leading case we have in mind is the situation where are consecutive elements of a weakly stationary process . in such a setup a covariance modelis typically obtained from a prescribed ( non - void ) set of spectral densities . assuming that no element of vanishes identically almost everywhere , the covariance model corresponding to is then given by with and where denotes the imaginary unit .every such is positive definite and symmetric .furthermore , since is a correlation matrix , and can uniquely be determined from . 
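a small numerical illustration of this construction may be helpful , since the display formulas are not reproduced in this version of the text : the entries of the correlation matrix are normalized autocovariances obtained by integrating the spectral density over the frequency interval . the sketch below is ours , not code from the paper ; it builds such a matrix for an ar(1 ) spectral density and checks it against the familiar closed form whose entries are powers of the autoregressive parameter .

```python
import numpy as np

def corr_matrix_from_spectral_density(f, n, grid_size=20_000):
    """n x n correlation matrix with entries gamma(|s-t|)/gamma(0), where
    gamma(h) is the integral of cos(h*lam)*f(lam) over [-pi, pi]
    (f is assumed even and nonnegative, so the imaginary part vanishes)."""
    lam = np.linspace(-np.pi, np.pi, grid_size)
    vals = f(lam)
    # constant factors cancel in the normalization, so a plain mean suffices
    gamma = np.array([np.mean(np.cos(h * lam) * vals) for h in range(n)])
    gamma /= gamma[0]
    idx = np.arange(n)
    return gamma[np.abs(idx[:, None] - idx[None, :])]

# ar(1) spectral density with parameter rho (the variance normalization is irrelevant here)
rho = 0.8
f_ar1 = lambda lam: 1.0 / (1.0 - 2.0 * rho * np.cos(lam) + rho ** 2)

n = 6
C = corr_matrix_from_spectral_density(f_ar1, n)
closed_form = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
print(np.max(np.abs(C - closed_form)))  # numerically close to zero
```

the same routine applies to any spectral density in the class under consideration , e.g. that of an arma process , which is one way to see how rich the resulting covariance model can be .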
as outlined in the introduction the tests we shall investigate in this paperare particularly geared towards setups where is a nonparametric class of spectral densities , i.e. , where the corresponding set is rich .a typical example is the class , which consists of all spectral densities of linear processes the coefficients of which satisfy a certain summability condition , i.e. , spectral densities of the form where , for a fixed , the summability condition is satisfied .we observe that contains in particular all correlation matrices corresponding to spectral densities of stationary autoregressive moving average models of arbitrary large order . the linear model described ininduces a collection of distributions on , the sample space of . denoting a gaussian probability measure with mean and covariance matrix by and denoting the regression manifold by , the induced collection of distributionsis given by since every is positive definite by definition , each element of the set in the previous display is absolutely continuous with respect to ( w.r.t . )lebesgue measure on . in this setupwe shall consider the problem of testing a linear hypothesis on the parameter vector , i.e. , the problem of testing the null against the alternative , where is a matrix of rank and .define the affine space and let adopting these definitions , the above testing problem can be written as where it is emphasized that the testing problem is a compound one .it is immediately clear that size and power properties of tests in this setup depend in a crucial way on the richness of the covariance model .before we close this section by introducing some further terminological and notational conventions , some comments on how the above assumptions can be relaxed are in order : we remark that even though our setup assumes a non - stochastic design matrix , the results immediately carry over to a setting where the data generating processes of the design and the disturbances are independent of each other .in such a setup our results deliver size and power properties conditional on the design .the gaussianity assumption might seem to be restrictive .however , as in section 5.5 of , we mention that the negative results given in section [ neg ] of the present paper immediately extend in a trivial way without imposing the gaussianity assumption on the error vector in , as long as the assumptions on the feasible error distributions are weak enough to ensure that the implied set of distributions for contains the set in equation , but possibly contains also other distributions .furthermore , by applying an invariance argument ( explained in section 5.5 ) one can easily show that all statements about the null - behavior of the procedures under consideration derived in the present paper carry over to the more general distributional setup where is assumed to be elliptically distributed .this is to be understood as having the same distribution as , where , , is a random vector uniformly distributed on the unit sphere , and is a random variable distributed independently of and which is positive with probability one .we next collect some further terminology and notation used throughout the whole paper .a ( non - randomized ) _ test _ is the indicator function of a set , i.e. , the corresponding _rejection region_. the _ size _ of such a test ( rejection region ) is the supremum over all rejection probabilities under the null hypothesis , i.e. , throughout the paper we let , where is the design matrix appearing in and . 
the corresponding ordinary least squares ( ols ) residual vectoris denoted by .the subscript is omitted whenever this does not cause confusion .random vectors and random variables are always written in bold capital and bold lower case letters , respectively .we use as a generic symbol for a probability measure and denote by the corresponding expectation operator .lebesgue measure on will be denoted by .the euclidean norm is denoted by , while denotes the euclidean distance of the point to the set . for a vector in euclidean spacewe define the symbol to denote for , the sign being chosen in such a way that the first nonzero component of is positive , and we set . the -th standard basis vector in is denoted by .let denote the transpose of a matrix and let denote the space spanned by its columns .for a linear subspace of we let denote its orthogonal complement and we let denote the orthogonal projection onto .the set of real matrices of dimension is denoted by .lebesgue measure on this set equipped with its borel -algebra is denoted by .we use the convention that the adjoint of a dimensional matrix , i.e. , , equals one . given a vector the symbol denotes the diagonal matrix with main diagonal .we define i.e. , the set of design matrices of full rank , and whenever we define which is canonically identified ( as a set ) with the set of design matrices of full column rank the first column of which is the intercept .in the present section we formally describe the construction of tests based on prewhitened covariance estimators .these tests ( cf . remark [ rdef ] below and the discussion preceding it ) reject for large values of a statistic where and the quantity appearing in the definition of above denotes a ( var- ) prewhitened nonparametric estimator of that incorporates a bandwidth parameter which might depend on the data .such an estimator is completely specified by three core ingredients : first , a _ kernel _ , i.e. , an even function satisfying , such as , e.g. , the bartlett or parzen kernel ; second , a ( non - negative ) possibly data - dependent _ bandwidth parameter _ ; and third , a deterministic _ prewhitening order _ , i.e. , an integer satisfying ( cf .remark [ defp ] ) .specific choices of are discussed in detail in section [ bw ] .all possible combinations of , and we analyze are specified in assumption [ weightspwrb ] of section [ comb ] .once these core ingredients have been chosen , one obtains a prewhitened estimator , which is computed at an observation following the steps ( 1 ) - ( 3 ) outlined subsequently ( cf . also ) .we here assume that the quantities involved ( e.g. , inverse matrices ) are well defined , cf .remark [ wdpsi ] below , and follow the convention in the literature and leave the estimator undefined at else . using this convention is obtained as follows : 1 . to prewhiten the data a var(p ) modelis fitted via ordinary least squares to the columns of .one so obtains the var(p ) residual matrix with columns the -dimensional var(p)-ols estimator is given by where and the -th column of equals for . in matrix formwe clearly have .2 . then , one computes the quantities and defines the preliminary estimate where in case one sets for and for .3 . 
finally , the preliminary estimate is ` recolored ' using the transformation . [ wdpsi ] the construction of outlined above clearly assumes that ( i ) is well defined , which is equivalent to ; that ( ii ) is well defined , which depends on the specific choice of ( cf .section [ bw ] ) ; and that ( iii ) is invertible .[ defp ] by assumption , all possible var orders we consider must satisfy .this is done to rule out degenerate cases : for if , then would follow , because of .hence the covariance estimator would nowhere be well defined for such a choice , because ( i ) in remark [ wdpsi ] would then clearly be violated at every observation . [ varols ] in the present paper we focus on var prewhitening based on the ols estimator .this is in line with the original suggestions by , as well as with .alternatively , for , suggested to use an eigenvalue adjusted version of the ols estimator , the adjustment being applied if the matrix is close to being singular .we shall focus on the unadjusted ols estimator for the following reasons : reported that the finite sample properties show little sensitivity to this eigenvalue adjustment .furthermore , it is the unadjusted estimator that is often used in implementations of the method suggested by in software packages for statistical and econometric computing ( e.g. , its implementation in the ` r ` package ` sandwich ` by , or its implementation in ` eviews ` , e.g. , , p. 784 . ) .we remark , however , that one can obtain a negative result similar to theorem [ thmlrvpw ] , and a positive result concerning an adjustment procedure similar to theorem [ tu_3 ] , also for tests based on prewhitened estimators with eigenvalue adjustment .furthermore , we conjecture that it is possible to prove ( similar to proposition [ generic ] ) the genericity of such a negative result , and to show that one can ( similar to proposition [ genericadj ] ) generically resolve this problem by using the adjustment procedure .we leave the question of which estimator to choose for prewhitening to future research . in a typical asymptotic analysis of tests based on prewhitened covariance estimators the event is asymptotically negligible ( since converges to a positive definite , or almost everywhere positive definite matrix ) .hence there is no need to be specific about the definition of the test statistic for , and one can work directly with the statistic which is left undefined for . in a finite sample setup , however , one has to think about the definition of the test statistic also for . our decision to assign the value to the test statistic for is of course completely arbitrary . that this assignment does not affect our results at all is discussed in detail in the following remark .[ rdef ] given that the estimator is based on a triple , , that satisfies assumption [ weightspwrb ] introduced below ( which is assumed in all of our main results , and which is satisfied for covariance estimators using auxiliary ar(1 ) models for the construction of the bandwidth parameter as considered in , for covariance estimators as considered in , and for covariance estimators as considered in ) , it follows from lemma [ n*pw ] that is either a -null set , or that it coincides with .
in the first case , which is generic under weak dimensionality constraints as shown in lemma [ nulln*pw ] , the definition of the test statistic on hence does not influence the rejection probabilities , because our model is dominated by ( contains only positive definite matrices ) .therefore , size and power properties are not affected by the definition of the test statistic for .in the second case , i.e. , if coincides with , the statistic in is nowhere well defined , and hence , regardless of which value is assigned to it for observations , the resulting test statistic is constant , and thus the test breaks down trivially . in the following we describe the bandwidth parameters that are typically used in step 2 in the construction of the prewhitened estimator as discussed above : the parametric approach ( based on auxiliary ar(1 ) models ) suggested by and , the nonparametric approach introduced by , and a data - independent approach which was already investigated in in simulation studies and which has recently been theoretically investigated by .since the bandwidth parameter is computed in step 2 in the construction of , we assume that , and are given and that step 1 has already been successfully completed , i.e. , all operations in step 1 are well defined at , in particular is available for the construction of . if not , we leave the bandwidth parameter ( and hence the covariance estimator ) undefined at .we also implicitly assume that the quantities and operations appearing in the procedures outlined subsequently are well defined and leave the bandwidth parameter ( and hence the covariance estimator ) undefined else .a detailed structural analysis of the subset of the sample space where a prewhitened estimator is well defined is then later given in lemma [ npwrb ] in section [ struct ] . finally , we emphasize that the bandwidth parameters discussed subsequently all require the choice of additional tuning parameters .these tuning parameters are typically chosen independently of and , an assumption we shall maintain throughout the whole paper ( but see remark [ rtuning ] for some generalizations ) .let be such that and for , i.e. , is a _weights vector_. based on this weights vector the bandwidth parameter is now obtained as follows : first , univariate ar(1 ) models are fitted via ols to for , giving where we note that holds as a consequence of and .then , one calculates finally , bandwidth parameters are obtained via where to obtain a bandwidth parameter , one has to fix the constants , and and where .typically the choice of these constants and the choice of depends on certain characteristics of ( for specific choices see , section 6 , in particular p. 834 ) . ( a schematic numerical sketch of the estimator , combined with a bandwidth parameter of this type , is given below . )
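the following python sketch puts the pieces together : steps ( 1 ) - ( 3 ) of the prewhitened estimator from the previous section , combined with an ar(1 ) - based plug - in bandwidth of the kind just described for the bartlett kernel . since the display formulas are not reproduced in this version of the text , several choices below are assumptions on our part rather than the paper s exact definitions : the series being whitened is taken to be the regressor - weighted ols residuals ( as is standard in this literature ) , the kernel is the bartlett kernel , and the plug - in constants are the usual bartlett - kernel choices from the literature . the code is illustrative only .

```python
import numpy as np

def bartlett(x):
    x = np.abs(x)
    return np.where(x < 1.0, 1.0 - x, 0.0)

def prewhitened_hac(X, u_hat, p=1, weights=None):
    """schematic var(p)-prewhitened kernel estimator of the long-run variance of
    v_t = x_t * u_t, with an ar(1) plug-in bandwidth for the bartlett kernel;
    an illustrative implementation, not the paper's exact definition."""
    n, k = X.shape
    V = X * u_hat[:, None]                      # n x k matrix of moment contributions

    # step 1: fit a var(p) to v_t by ols (no intercept) and keep the residuals
    Y = V[p:]                                                   # (n - p) x k
    Z = np.hstack([V[p - j - 1:n - j - 1] for j in range(p)])   # lagged values
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)                   # (k * p) x k
    E = Y - Z @ A                                               # prewhitened residuals
    m = len(E)

    # ar(1) plug-in bandwidth computed from the prewhitened residuals
    # (andrews-1991-style formula with the usual bartlett-kernel constants)
    if weights is None:
        weights = np.ones(k)
    num = den = 0.0
    for i in range(k):
        e = E[:, i]
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
        sig2 = np.mean((e[1:] - rho * e[:-1]) ** 2)
        num += weights[i] * 4.0 * rho**2 * sig2**2 / ((1 - rho)**6 * (1 + rho)**2)
        den += weights[i] * sig2**2 / (1 - rho)**4
    M = 1.1447 * ((num / den) * m) ** (1.0 / 3.0)

    # step 2: kernel-weighted sum of autocovariances of the prewhitened residuals
    Psi_star = E.T @ E / m
    for j in range(1, m):
        w = bartlett(j / M)
        if w == 0.0:
            break
        G = E[j:].T @ E[:-j] / m                # sum over t of e_t e_{t-j}' / m
        Psi_star += w * (G + G.T)

    # step 3: 'recolor' with the fitted var coefficient matrices
    A_sum = sum(A[j * k:(j + 1) * k].T for j in range(p))
    D = np.linalg.inv(np.eye(k) - A_sum)
    return D @ Psi_star @ D.T, M
```

the resulting matrix would then enter the quadratic form of the test statistic from the preceding section ( the exact normalization conventions are not reproduced here ) ; note also that , in line with remark [ varols ] , the var coefficients are the plain ols estimates without any eigenvalue adjustment . the specific constants recommended in practice for the various kernels are discussed next .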
for example , if is the bartlett kernel one uses , and , or if is the quadratic - spectral kernel one would use , and .since we do not need such a specific dependence to derive our theoretical results , we do not impose any further assumptions on these constants beyond being positive ( and independent of and ) .we shall denote by the set of all bandwidth parameters that can be obtained as special cases of the method in the present section , by appropriately choosing - functionally independently of and - a weights vector , constants , and a .since , and are fixed quantities , the tuning parameters , for and might also depend on them , although we do not signify this in our notation .a similar remark applies to the constants appearing in section [ mnw94 ] and in section [ mr ] .although we do not provide any details , we furthermore remark that one can extend our analysis to bandwidth parameters as above , but based on estimators other than , e.g. , all estimators satisfying assumption 4 of such as the yule - walker estimator or variants of the ols estimator .let be as in section [ ma92 ] and let for be real numbers such that .for example , suggested to use rectangular weights , i.e. , where denotes the floor function .define for every a bandwidth parameter is then obtained via where is a positive integer , where and are positive real numbers and where .these numbers are constants independent of and and have to be chosen by the user .the choice typically depends on the kernel ( for the specific choices we refer the reader to , section 3 ) . as in the previous section, we do not impose any assumptions beyond positivity ( and independence of and ) on the constants .furthermore , we shall denote by the set of all bandwidth parameters that can be obtained as special cases of the method in the present section , by appropriately choosing - functionally independently of and - a weights vector , numbers for , a positive integer , and . ( i ) the method described here is the ` real - bandwidth ' approach suggested in , as opposed to the ` integer - bandwidth ' approach . in the latter approach one would use instead of .both approaches are asymptotically equivalent ( , theorem 2 ) for most kernels ( including the bartlett kernel which is suggested in ) .therefore , they are equally plausible in terms of their theoretical foundation . for the sake of simplicity and comparability with the bandwidth parameter as suggested by , which is not an integer in general , we have chosen to focus on the ` real - bandwidth ' approach .+ ( ii ) , p. 637, in principle also allow for ( in their notation ) in the definition of their estimator .we do not allow for such a choice .however , note that implies .this is a data - independent bandwidth parameter .these parameters are separately treated in section [ mr ] .+ and studied properties of prewhitened ` fixed - b tests ' . here one sets where ]
, instead of assumption [ aar(1 ) ] the covariance model satisfies inspection of the proof of theorem [ thmlrvpw ] then shows that a version of theorem [ thmlrvpw ] holds , in which all references to are deleted in parts 1 - 4 .for example , part 4 of this version of theorem [ thmlrvpw ] reads as follows : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` suppose that .suppose further that and holds .then holds for every and every .in particular , the size of the test is equal to one . ''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this statement covers ( in particular ) the important special case of testing a restriction on the mean in a location model .we make the following observations concerning this version of theorem [ thmlrvpw ] : * since is not necessarily an element of the singular boundary of the covariance model considered here , the result just described does not contain `` size equal to one''- or `` nuisance - minimal - power equal to zero''-statements that arise from covariance matrices approaching .note , however , that the original theorem [ thmlrvpw ] implies by a continuity argument that if is small ( compared to sample size ) , then considerable size distortions or power deficiencies will nevertheless be present for covariance matrices in that are close to .* consider the case where , i.e. , the regression contains an intercept , and where the hypothesis does not involve the intercept , i.e. , : then we see that parts 1 - 4 of the version of theorem [ thmlrvpw ] just obtained do not apply .in fact , in this case we can establish a positive result concerning a test based on with , and based on a non - standard critical value that depends on .this positive result , together with its restrictions , is discussed in remark [ rposbound ] . 
given a hypothesis the four sufficient conditions provided in the preceding theorem are conditions on the design matrix .they depend on observable quantities only .how these conditions can be checked is discussed in the subsequent paragraph : the first three parts of the theorem operate under the local assumption that the multivariate polynomial does not vanish at the point or , respectively .the multivariate polynomial is explicitly constructed in the proof of lemma [ n*pw ] .therefore , the condition that it does not vanish at specific data points can readily be checked .some additional conditions needed in parts 1 - 3 of the theorem are formulated in terms of and , which are in fact independent of the specific chosen and therefore easy to calculate .part 3 of the theorem requires the existence of or ( which is immaterial if as discussed in the preceding remark ) . againthe existence of the gradients is independent of the specific choice of .sufficient conditions for the existence of the gradient , under the assumption that is continuously differentiable on the complement of a finite number of points , are provided in lemma [ gradientt ] in appendix [ aneg ] .these conditions amount to checking whether or not or , respectively , is an element of a certain set determined by consisting of finitely many points .in contrast to parts 1 - 3 , the fourth part of the theorem operates under the global assumption that the multivariate polynomial is not the zero polynomial . since the polynomial is explicitly constructed in the proof of lemma [ n*pw ] , the global assumption can either be checked analytically , or by using standard algorithms for polynomial identity testing .in addition to this global assumption , the fourth part needs additional assumptions on the structure of and the hypothesis which can of course be easily checked by the user .the preceding theorem has given sufficient conditions on the design matrix , under which the test considered breaks down in terms of its size and/or power behavior .however , for a given hypothesis there exist elements of to which the theorem is not applicable . as a consequence , the question remains to ` how many ' elements of the theorem can be applied once has been fixed .this question is studied subsequently .it is shown that generically in the space of all design matrices at least one of the four conditions of theorem [ thmlrvpw ] applies .the first part of the proposition establishes this genericity result in the class of all design matrices of full column rank , i.e. , .the remaining parts establish the genericity result in case and the first column of is the intercept , i.e. , with .before we state the proposition , we introduce two assumptions on the kernel .the first assumption is satisfied by all kernels typically used in practice .[ cd ] the kernel is continuously differentiable on the complement of , a set consisting of finitely many elements .the second assumption , which is used in some statements of the second part of the genericity result , imposes compactness of the support of the kernel .this is satisfied by many kernels used in practice , e.g. 
, the bartlett kernel or the parzen kernel , but is not satisfied by the quadratic - spectral kernel .[ cdl ] the support of is compact .the genericity result is now as follows , where several quantities are equipped with the additional subindex to stress their dependence on the design matrix .[ generic ] fix a hypothesis such that .let , , satisfy assumption [ weightspwrb ] .for let be the test statistic defined in with and let be arbitrary ( the sets defined below do not depend on the choice of ) .fix a critical value such that .then , the following holds . 1 .suppose that , define and similarly define and .then , and are -null sets . if or if satisfies assumption [ cd ] , then and are -null sets .if assumption [ aar(1 ) ] holds , then the set of all design matrices for which the first three parts of theorem [ thmlrvpw ] do not apply is a subset of and hence is a -null set if or if satisfies assumption [ cd ] ; it thus is a ` negligible ' subset of in view of the fact that differs from only by a -null set .2 . let and assume further that , where .define then , is a -null set .furthermore , is a -null set under each of the following conditions : 1 .2 . and satisfies assumptions [ cd ] and [ cdl ] . , is odd , for some and satisfies assumptions [ cd ] and [ cdl ] .4 . satisfies assumption [ cd ] and on .+ suppose that the first column of consists of zeros and that assumption [ aar(1 ) ] holds .then , the set of all matrices such that the first three parts of theorem [ thmlrvpw ] do not apply to the design matrix is a subset of and hence is a -null set if one of the conditions in ( a)-(d ) holds ; it thus is a ` negligible ' subset of in view of the fact that differs from only by a -null set .3 . suppose , that the first column of is nonzero and that assumption [ aar(1 ) ] holds .then theorem [ thmlrvpw ] ( part 4 ) applies to the design matrix for every satisfying .[ rgeneric ] ( i ) if and holds , the first part of lemma [ nulln*pw ] shows that the test trivially breaks down , since for every element of the test statistic is then constant on .therefore , the assumption on in the first two parts of the proposition can in general not be substantially improved .\(ii ) in the second part of the proposition , the analogously defined sets and clearly satisfy and .\(iii ) in the third part of the proposition , if does not satisfy , then the test breaks down in a trivial way , since is then constant .the first part of the preceding genericity result shows that if , or if the kernel satisfies assumption [ cd ] , then theorem [ thmlrvpw ] can be applied to generic elements of , i.e. , to all elements besides a -null set . since all kernels used in practice , in particular the kernels emphasized in and , i.e. , the quadratic - spectral kernel and the bartlett kernel , respectively , satisfy assumption [ cd ] , this additional restriction on is practically immaterialthe second part of the proposition considers the situation where the first column of the design matrix is the intercept , which in addition is assumed not to be involved in the hypothesis in the sense that the first column of is zero .in this situation it is shown that theorem [ thmlrvpw ] can generically be applied to design matrices of the form with , under certain sets of conditions on the triple , , .we first discuss conditions ( a)-(c ) : 1 . in case no additional condition is needed for establishing generic applicability of theorem [ thmlrvpw ] .2 . 
if , generic applicability of the negative result follows if the kernel satisfies assumptions [ cd ] and [ cdl ] , which applies to many kernels used in practice , but not to the quadratic - spectral kernel which is emphasized in .3 . in case , the result shows that the procedure breaks down generically if is odd , for some and satisfies assumptions [ cd ] and [ cdl ] .this seems to be restrictive .however , the recommended procedure in is obtained by choosing the bartlett kernel , and , because in part 2 the first column of is the intercept .therefore , we see that the recommended procedure in satisfies this condition . summarizing , we see that the conditions ( a)-(c ) in the proposition cover the recommended choices of , and in and . for all procedures that are not covered by conditions ( a)-(c ) , e.g. , the procedure in based on the quadratic - spectral kernel , one can typically obtain the genericity result by applying condition ( d ) , which ( under assumption [ cd ] ) is always satisfied apart from at most one exceptional critical value .this is seen as follows : clearly , condition ( d ) depends on the critical value .we see that if assumption [ cd ] is satisfied , then ( d ) can be violated for at most a single .if this happens to coincide with , the condition is not satisfied and we can not draw the desired conclusion for this specific value of .moreover , we immediately see that the condition must then be satisfied for any other choice , say .therefore , generic applicability of the negative result follows for any value in that case .this shows that even if one chooses a triple , , that does not allow for an application of ( a)-(c ) , one can not expect to obtain a procedure that has good finite sample size and power properties , because for all but at most one exceptional critical value the corresponding test is guaranteed to break down generically .the third part of the proposition considers the case where the first column of the design matrix is the intercept , and where the coefficient corresponding to the intercept is restricted by the hypothesis . in this caseit follows that one can either apply part 4 of theorem [ thmlrvpw ] , or the test statistic is constant and hence the test breaks down in a trivial way ( cf .remark [ rgeneric ] ) .in the previous section we have established a ( generically applicable ) negative result concerning tests as in based on a prewhitened covariance estimator . in the present sectionwe first present a positive result concerning these tests under a _ non - generic _ condition on the design matrix .then we introduce an adjustment procedure and establish a condition on the design matrix under which the adjustment procedure leads to improved tests .finally we prove that this condition holds generically in the set of all design matrices .both , the positive result concerning tests as in based on a prewhitened covariance estimator , and the results concerning the adjustment procedure are established under the following assumption on the covariance model .[ approxaar(1 ) ] the set is norm - bounded and satisfies .furthermore , for every sequence that converges to satisfying there exists a corresponding sequence such that as .[ rapproxaar(1 ) ] ( i ) we first note that assumption [ approxaar(1 ) ] is stronger than assumption [ aar(1 ) ] .therefore , under the former assumption the negative result established in section [ neg ] concerning tests as in based on a prewhitened covariance estimator _ does _ apply a fortiori . 
as a consequence ,if satisfies assumption [ approxaar(1 ) ] , then positive results concerning size and power properties of tests of the form can only be established under non - generic assumptions on the design matrix .however , as we shall show , positive results can generically be established for an _ adjusted _ version of such tests .\(ii ) boundedness of is typically satisfied in our setup , as it is always satisfied if consists only of correlation matrices .\(iii ) the last part of the assumption states that elements of that are ` close ' to being singular can be well approximated by ar(1 ) correlation matrices .this , together with being a subset of , readily implies that the singular boundary of must coincide with .therefore , we see that the assumption rules out the existence of rank deficient elements of with rank strictly greater than one . as an example , this rules out the case where is the correlation model corresponding to all stationary autoregressive processes of order less than or equal to two ( cf .lemma g.2 in ) .if this is not ruled out , however , further obstructions to good size and power properties can arise along suitable sequences approximating these boundary points ( cf . section 3.2.3 in ) .the possibility of establishing positive results in settings like that is beyond the scope of the present paper and will be discussed elsewhere .\(iv ) we note that assumption [ approxaar(1 ) ] is clearly satisfied for every covariance model of the form , where is a closed set consisting of positive definite correlation matrices . as an example , let be fixed and let denote the set of all correlation matrices corresponding to stationary moving average processes of an order not exceeding , i.e. , where denotes the -dimensional correlation matrix corresponding to the spectral density ( cf .equation ) . then satisfies assumption [ approxaar(1 ) ] , because every element of the closure of is a positive definite correlation matrix ( the latter statement follows from equation , compactness of the unit sphere in , and the dominated convergence theorem ) . under assumption [ approxaar(1 ) ] we shall subsequently establish a positive result concerning tests based on a test statistic as in with . in light of part ( i ) of the preceding remarkwe already know that such a positive result can only be established under non - generic conditions on the design matrix .in particular , the subsequent positive result considers the non - generic case where - besides , a condition that is generically satisfied under a mild constraint on ( cf .lemma [ nulln*pw ] ) - the column span of the design matrix includes the vectors and and where holds .[ excpw ] suppose that the triple , , satisfies assumption [ weightspwrb ] , and that satisfies assumption [ approxaar(1 ) ] .let be the test statistic defined in equation with .let be the rejection region , where is a real number satisfying .suppose further that , and .then , the following holds : 1 .the size of the rejection region is strictly less than , i.e. , furthermore , 2 .the infimal power is bounded away from zero , i.e. , 3 .for every holds for and for any sequence satisfying with a singular matrix .furthermore , for every sequence holds for whenever , , and is a closed subset of .[ the very last statement even holds if one of the conditions and is violated .] 4 . 
for every , , there exists a , , such that . under the maintained assumptions on the hypothesis and the design , proposition [ excpw ] shows that given any level of significance , a critical value can be chosen in such a way that the test obtained holds its size , while its nuisance - minimal power at every point in the alternative is bounded away from zero . as theorem [ thmlrvpw ] in combination with proposition [ generic ] shows , this is impossible for generic elements of the space of all design matrices . additionally , part 3 of the proposition shows that the power approaches one in certain parts of the parameter space corresponding to the alternative hypothesis . these parts are characterized by being bounded away from zero and with being singular , or and with positive definite , and where in both cases is the parameter vector corresponding to ( note that is bounded from above and below by multiples of , where the constants involved are positive and depend only on , and ) .

[ rposbound ] suppose that instead of assumption [ approxaar(1 ) ] it is known that the covariance model satisfies the following variant of assumption [ approxaar(1 ) ] that rules out ar(1 ) correlation matrices with arbitrarily close to :

the covariance model is norm - bounded and there exists an ] the -th coordinate of , by {\cdot j } = m_{\cdot j} ] the row of . in case we write {i } = m_i ] . since is a matrix of full column rank by assumption , we clearly have . from the definition of see that , i.e. , is not well defined , if and only if one of the following conditions is satisfied ( cf . remark [ wdpsi ] ) : using we see that the coordinates of and of are values of certain multivariate polynomials defined on evaluated at the point . since ( i ) is equivalent to this shows that ( i ) is equivalent to , say , where is a multivariate polynomial which is clearly independent of . using this equivalence , condition ( ii ) is seen to be equivalent to and . because of we have where .
using this together with similar arguments as above we see that pre - multiplying by results in a matrix , the entries of which are values of certain multivariate polynomials , defined on , evaluated at the point follows that the second equation in ( ii ) can be replaced by ^k \det\left(i_k-\sum_{l = 1}^p \hat{a}^{(p)}_{l , x}(y)\right ) = 0,\ ] ] where is a multivariate polynomial which is independent of either . summarizing our observations concerning ( i ) and ( ii )we see that & \hspace{1.5 cm } \cup { \left\ { y \in { \mathbb{r}}^n : g_1(y , x)g_2(y , x ) \neq 0 \text { and } m(y ) \text { not w.d . } \right\}}.\end{aligned}\ ] ] the set in the second line of the previous display depends on the specific bandwidth .hence , we have to distinguish three cases : suppose first that , i.e. , is a constant which is functionally independent of , and thus everywhere well defined on .define , so that is a multivariate polynomial . noting that then proves the statement in case , because is independent of .next we consider the case , where we write instead of , because the argument and the resulting function do not depend on and .we partition where and are disjoint and defined as the equality in is readily seen from the definition of .we want to obtain more suitable characterizations of and and proceed in two steps : ( i ) first , we claim that if and only if {ij } [ \hat{z}_x(y)]_{i(j-1)}\right]^2 - \left [ \sum_{j = 1}^{n - p-1 } [ \hat{z}_x(y)]_{i j}^2\right]^2 \right ) = 0.\ ] ] to see this assume that holds : suppose that is not well defined .the latter occurs if and only if {{i^*}j}^2=0 ] . therefore , the factor corresponding to index vanishes and thus the product defining the second equation in vanishes .that implies that the product vanishes is obvious . to prove the other direction assume that and that the product vanishesthis implies that at least one factor with index , say , equals zero , which implies that either is not well defined or holds .this proves the claim .secondly , we recall that if , then . using an argument as above it is then easy to see that pre - multiplied by gives a matrix , the entries of which are values of certain multivariate polynomials defined on evaluated at the point .thus , if we multiply the second equation in by the -th power of the expression in the previous display we see that equation can equivalently be written as where is a multivariate polynomial that is independent of . summarizing, we have shown that ( ii ) first we observe that if and only if {ij } - \hat{\rho}_i(y ) [ \hat{z}_x(y)]_{i(j-1)}\right)^2 = 0,\ ] ] where we recall that by assumption is functionally independent of and . because implies that is well defined for , which is equivalent to {{i}j}^2\neq0 ] . by lemma [ npwrb ]we see that is a multivariate polynomial that does not depend on .[ auxconstr ] let , and let be a hypothesis .suppose that the triple , , satisfies assumption [ weightspwrb ] .assume that the tuple satisfies for some : 1 . has exactly nonzero columns with indices .2 . for , and .3 . if , then . otherwise ,{\cdot j_i } : i = 1 , \hdots , t \right\ } } ) = \operatorname{span}({\left\ { [ \hat{v}_x(y)]_{\cdot j_i } : i = 2 , \hdots , t+1 \right\ } } ) = { \mathbb{r}}^k.\ ] ] 1 .2 . under each of the following three conditions it follows that ( or equivalently ) : 1 . ; 2 . , and every row vector of the matrix obtained from by deleting its last column is nonzero [ this is in particular satisfied if ; 3 . 
, and either each coordinate of is non - negative , or each coordinate of is non - positive .3 . for every such that , the tuple is an element of that satisfies ( a1 ) , ( a2 ) and ( a3 ) .if and either {{1j_{i } } } > 0 ] for holds , then there exists a regular matrix such that the first columns of and , respectively , coincide and such that ( or equivalently ) .denote the column vectors of by for . if , then by ( a3 ) the set and , respectively , spans . using ( a3 ) , we now show that this is automatically satisfied in case . to prove this claim , we first recall that x_{j \cdot}' ] for , or we have {{1j_{i } } } < 0 ] for because satisfies assumptions ( a1 ) and ( a2 ) .therefore , it is obvious from equation , together with the assumed {{1j_{i } } } > 0 ] for ) , that by choosing large enough , we can enforce that all nonzero columns of are coordinate - wise positive ( negative ) . consider ( i ) : using equation we see that {\cdot j_2} ] is a column of , together with ( a consequence of ) , which shows that it is not the last column of .since all coordinates of {\cdot j_2} ] can be written as \\ = & ~~ \frac{n}{e_{+}'\mathbf{g}_m } \operatorname{grad}t ( e_{+ } + \mu_0^ * ) s_m^{-1/2 } \pi_{\operatorname{span}(e_{+})^{\bot}}\mathbf{g}_m + { \vert { \frac{n}{e_{+}'\mathbf{g}_m } s_m^{-1/2 } \pi_{\operatorname{span}(e_{+})^{\bot}}\mathbf{g}_m } \vert } \\ & ~~~~~~~ \times ~~~~~ { \vert { \left(\frac{n}{e_{+}'\mathbf{g}_m } \pi_{\operatorname{span}(e_{+})^{\bot}}\mathbf{g}_m\right ) } \vert}^{-1 } q\left(\frac{n}{e_{+}'\mathbf{g}_m } \pi_{\operatorname{span}(e_{+})^{\bot}}\mathbf{g}_m\right).\end{aligned}\ ] ] to derive the almost sure limit as of the expression in the previous display we first observe that converges point - wise to because of ( ii ) . from thatit follows that converges point - wise to and that converges point - wise to zero .an application of the continuous mapping theorem hence shows that almost surely as , which immediately implies almost surely as as a consequence of equation together with .we also observe that ( i ) above implies point - wise and thus , using the continuous mapping theorem again , we see that almost surely as ( where the limiting random vector is well defined almost - surely ) .this finally shows that \rightarrow \frac{\sqrt{n}}{e_{+ } ' \mathbf{g } } \operatorname{grad}t ( e_{+ } + \mu_0^ * ) d^ * \mathbf{g},\ ] ] almost surely .we already know from equation that , where is an orthogonal matrix .furthermore maps onto , and does not vanish everywhere on .hence , we see that the probability that the limiting random variable in the previous display takes on the value vanishes because is a gaussian random variable with mean zero and positive variance .hence , equation together with portmanteau theorem shows that the covariance between the gaussian mean - zero random variables and is given by where the equality follows from ( iii ) .therefore , and are independent . since the probability to the right in equation equals the probability that the random variables and have the same sign it is now obvious that the limit equals . 1 . if and , then exists .2 . suppose that , that is continuously differentiable on the non - void complement of a closed set , and that .assume further that one of the following conditions is satisfied : 1 . and for .2 . and has compact support .+ then exists .3 . 
if , then for every we have where is a multivariate polynomial ( explicitly constructed in the proof ) that does not depend on the hypothesis .we first verify parts 1 and 2 .let us start by deriving a convenient expression for under the assumption .by lemma [ n*pw ] the assumption is equivalent to .an application of lemma [ a567pw ] shows that satisfies assumption 5 in .an application of part ( ii ) of this assumption shows that .we can therefore use equation together with and ( both following from assumption 5 in ) to see that implies where in deriving the second equality we made use of the representation of developed in equation .the function is linear and hence totally differentiable on .furthermore , in the proof of lemma [ n*pw ] it is shown that the coordinates of the matrix are multivariate rational functions ( without singularities ) on .in particular the coordinates of are continuously partially differentiable on . to show that exists at a given point is therefore sufficient to show that each off - diagonal element ( recall that the diagonal is constant ) of is continuously partially differentiable on an open neighborhood of in .recall that the -th off - diagonal element ( ) of evaluated at some is given by if the sufficient condition above is obviously satisfied , because in this case is constant and therefore is constant on .this proves part 1 of the lemma .consider now part 2 . by considering separately the cases and , we observe that is continuously partially differentiable in an open neighborhood of any element of satisfying .we start with condition ( a ) .let satisfy and .fix an . by assumption continuously differentiable on an open neighborhood of .hence there exists an open neighborhood of in on which is strictly greater than zero and such that is continuously partially differentiable on .it hence follows that is continuously partially differentiable on because it coincides with on this set . to establish existence of the gradient under condition ( b )let satisfy and .let be as before and recall that the support of is compact by assumption .since is continuous on , there exists an open neighborhood of in such that for every point in this neighborhood we either have or that is not contained in the support of .it follows that the function is constant equal to , and thus is in particular continuously partially differentiable , on this neighborhood . to prove the third part of the lemma consider first the case , where we dropped the index because the argument and the resulting polynomial do not depend on it .suppose satisfies . then is well defined and by definition if and only if holds , where and are positive constants .this can equivalently be written as which , after multiplying both sides of the equation by ( which is nonzero ) , is seen to be equivalent to \\ & \hspace{2 cm } - ~~ \delta^ * { \sum \limits_{i = 1}^{k}}\omega_i \left[\hat{\sigma}^4_i(y ) ( 1+\hat{\rho}_i(y))^2 ( 1-\hat{\rho}_i(y))^2 \prod_{j \neq i}^k ( 1-\hat{\rho}_j(y))^6(1+\hat{\rho}_j(y))^2 \right ] = 0 \end{aligned}\ ] ] by multiplying both sides of this equation by a suitably large power of the products of the denominators of ( which are nonzero ) , we can write the preceding equation equivalently as where each for is a multivariate polynomial . in a final stepwe multiply both sides of the equation by a suitably large power of the non - vanishing factor in equation to obtain an equivalent equation of the form where each is a multivariate polynomial . 
therefore , the condition can be equivalently stated as finally , we note that the multivariate polynomial does not depend on the hypothesis .this proves the last part of the lemma in case .the proof of the case is almost identical and therefore we omit it .we finally note that similar arguments can be used to prove the statement in case , but we omit details .we first prove that the sets , , and do not depend on the specific choice of .this follows from an invariance argument .consider for example the set , which is by definition a subset of .every element of this superset satisfies .hence , for every such the corresponding test statistic is invariant w.r.t . by lemma [ a567pw ] .it now immediately follows that does not depend on the specific choice of .the same argument shows that the statement in part 2 condition ( d ) is independent of the specific choice of .we shall now prove the three main parts of the proposition and start with the first : \1 ) we begin with the statement concerning . under the maintained assumptions we know from part 5 of lemma [ n*pw ]that is a multivariate polynomial .this immediately implies that is a multivariate polynomial , showing that is an algebraic superset of .it hence suffices to show that the set in the previous display is a - null set , or equivalently to show that . to this endwe shall use lemma [ auxconstr ] ( with ) to construct a matrix such that .let be an auxiliary matrix the column vectors of which span , where is the vector obtained from by selecting the coordinates with indices for ( we shall need a similar construction for later on ) .note that this selection is feasible because holds as a consequence of the assumption - \mathbf{1}_{\mathbb{m}_{am}}(m ) \geq 0 ] ( for and ) is a multivariate polynomial , and that is a multivariate polynomial as well .it is easily seen that the coordinates of as a function of are multivariate polynomials . putting this togetherwe have shown that where is a multivariate polynomial . therefore , if we can show that we obtain that is a -null set . in the proof of the part concerning abovewe have already constructed an that satisfies which implies .together with equation this shows that holds for this specific . butthis immediately shows that and we are done .clearly , we can use an almost identical argument to prove the statement concerning . under assumption [ aar(1 ) ] the set of matrices for which the first three cases of theorem [ thmlrvpw ] do not apply is obviously a subset of .hence the first part of the proposition follows .\2 ) we start with the statement concerning . under the maintained assumptions we know from part 5 of lemma [ n*pw ]that is a multivariate polynomial .this immediately implies that is a multivariate polynomial , which shows that is an algebraic superset of .it hence suffices to show that the set in the previous display is a - null set , or equivalently to show that .again , we shall use lemma [ auxconstr ] with to construct a matrix such that .the situation here is more complicated than in the first part , because the first column of the design matrix we seek has to be the intercept . for our constructionwe need some additional ingredients : by definition if is odd , and if is even .if is odd set , where is the vector obtained from by selecting the coordinates for and , where for was defined in part 1 above .this selection is feasible , because by assumption we have , which , since is odd , gives , implying that , because of . 
if is even set , the vector obtained from by selecting the coordinates for .next , define we claim that and satisfy , is linearly independent of and is orthogonal to .the latter property is clearly always satisfied , regardless of whether is even or odd .we thus only have to verify the first two conditions .we start with the case odd . herewe have and therefore .furthermore can not equal for some , because the last and the last but one coordinate of are unequal . for even , where equals either ( if is even ) or ( if is odd ) , therefore holds and thus .furthermore can not equal for some , because the second and third coordinate of are unequal .this proves the claim . using these properties, we see that and where we have used that is orthogonal to both and to derive the third equality .we shall now define an auxiliary matrix .let denote a -dimensional matrix such that , and such that the remaining columns for are linearly independent and orthogonal to .since and are linearly independent , we have . for later usewe observe that where the first equality follows immediately from for being linearly independent and orthogonal to .this immediately shows define the two -vectors and .let be such that for , and if the index , then let if = 1 ] . by constructionthe matrix is of the form .we claim that . to see this denote the set of indices such that = -1 ] by .the sum of squares can be written as {j^*_i } - x_{j^*_i \cdot } \beta)^2 + \sum_{j \in \mathcal{i}_- } ( -1 - r_- \beta)^2 + \sum_{j \in \mathcal{i}_+ } ( 1 - r_+ \beta)^2 \\ & = \sum_{i = 1}^{k+1 } ( v_i - l_{i \cdot } \beta)^2 + \sum_{j \in \mathcal{i}_- } ( -1 - r_- ' \beta)^2 + \sum_{j \in \mathcal{i}_+ } ( 1 - r_+ ' \beta)^2 \\ & = { \vert { v - l\beta } \vert}^2 + \sum_{j \in \mathcal{i}_- } ( -1 - r_- ' \beta)^2 + \sum_{j \in \mathcal{i}_+ } ( 1 - r_+ ' \beta)^2 \end{aligned}\ ] ] if we now plug in and note that and we see that this immediately proves the claim .hence , the residual vector satisfies = \begin{cases } u_i & \text { if } j = j^*_i \text { for some } i = 1 , \hdots , k+1 \\ 0 & \text{else}. \end{cases}\ ] ] this immediately entails that equals }),\ ] ] where the indices of the nonzero columns of this matrix are precisely for , because the first column of is and for , the latter following since and for by definition .in deriving the dimension of } ] , which together with the assumption implies . assumption ( a3 ) is satisfied because . if we are done .consider the case .we show that condition ( cam ) is satisfied .but this is obvious , because the assumption immediately implies .suppose .we apply part 4 of lemma [ auxconstr ] . for thiswe claim that either {1j_i^ * } > 0 ] for . assuming that this claim is true, the lemma shows that there exists a regular matrix such that , the first column of is and , and we are done . to prove the claim recall that by construction holds , which shows that either for or for .furthermore , the first column of is the vector .equation now shows that {1j_i^ * } = u_i ] for a dimensionality argument implies existence of normalized vectors , that are functionally independent of , linearly independent and orthogonal to , , , and ( for every ) .recall that and are the first two elements of the canonical basis of .hence , the first two coordinates of for are zero .these orthogonality properties readily imply and for . 
inserting zero coordinates and rows , respectively, we shall now suitably embed and into and .define as and as where , a number that does not exceed by assumption .we emphasize that by construction does _ not _ depend on .furthermore , if we delete from and those coordinates that correspond to the zero coordinates that have been inserted to obtain from , we obtain the vectors and .therefore , it follows from equation that is orthogonal to , that and that for every we have as an immediate consequence we obtain where we recall that all coordinates of are nonzero and therefore has precisely nonzero columns .we now intend to apply lemma [ auxconstr ] with , acting as if was the underlying design , was the hypothesis to be tested and with the triple , , which obviously satisfies assumption [ weightspwrb ] with respect to ( a matrix with columns ) , since by assumption we have ( note that due to interpreting as the underlying design , corresponds to the ` k ' in lemma [ auxconstr ] ) .we note first that removing the first or last row of does not reduce its rank , because ( a vector all coordinates of which are nonzero ) is orthogonal to every column of this matrix ( cf . the argument in the beginning of the proof of lemma [ auxconstr ] ) . using hence see that assumptions ( a1)-(a3 ) in lemma [ auxconstr ] are satisfied by construction .now consider the case . by definition an element of ( acting as if was the underlying design matrix ) .therefore , condition ( ckv ) is satisfied for arbitrary , and follows . consider the case where . since and because of it follows that .therefore , condition ( cam ) in lemma [ auxconstr ] is satisfied and therefore for arbitrary . it remains to consider the case where .it suffices to find a such that is well defined ( see the proof of part 2 of lemma [ auxconstr ] ) .the latter statement is equivalent to the denominator in the fraction appearing in the definition of being nonzero , i.e. , where {\cdot j } [ \hat{z}_{\bar{x}}(y(\delta^*))]_{\cdot ( j-|i| ) } ' \bar{\omega } ~~~ \text { for } |i| = 0 , \hdots , n - p-1.\ ] ] by definition and we recall that which implies via equation that equals the only coordinate of this vector that depends on is , the latter equation following from and . since , the denominator appearing in the definition of interpreted as a function of is now seen to be a polynomial of degree in .hence , there must exist a such that the denominator does not vanish .it follows that .concerning the second statement we observe that implies , and therefore we have for -almost every . by assumptionthe first column of is zero .therefore , for -almost every we have , and , i.e. , for -almost every scenario ( 1 ) in theorem [ tu_3 ] applies .as above , it suffices to construct a pair and ( recall that ) , such that , where and . here , the matrix is dimensional . by assumption satisfies . to construct the matrix we can thus use the same argument as was used to construct in the proof of the first statement ( replacing )the matrix so obtained has ( after a permutation of its columns ) the same structure as has the matrix constructed in the proof of the first statement .we can therefore use almost the same arguments to conclude that for some and as constructed in the first part of the proof . | we analytically investigate size and power properties of a popular family of procedures for testing linear restrictions on the coefficient vector in a linear regression model with temporally dependent errors . 
the tests considered are autocorrelation - corrected f - type tests based on prewhitened nonparametric covariance estimators that possibly incorporate a data - dependent bandwidth parameter , e.g. , estimators as considered in , , or . for design matrices that are generic in a measure theoretic sense we prove that these tests either suffer from extreme size distortions or from strong power deficiencies . despite this negative result we demonstrate that a simple adjustment procedure based on artificial regressors can often resolve this problem . * ams mathematics subject classification 2010 * : 62f03 , 62j05 , 62f35 , 62m10 , 62m15 ; * keywords * : autocorrelation robustness , hac test , fixed - b test , prewhitening , size distortion , power deficiency , artificial regressors .
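for readers who want to see the main ingredients named above side by side , the following sketch writes out the bartlett , parzen and quadratic - spectral kernels ( the first two with compact support , the last without , cf . assumption [ cdl ] ) together with a basic prewhitened kernel long - run variance estimator in the spirit of andrews and monahan . the function names and the deliberately simplified ar(1 ) prewhitening / recoloring step are assumptions of this illustration ; it is not the exact estimator or bandwidth rule studied in the paper .

```python
# Minimal sketch (assumed names, simplified AR(1) prewhitening/recoloring):
# the Bartlett, Parzen and quadratic-spectral kernels, and a basic
# prewhitened kernel long-run variance estimator for a 1-d series.
import numpy as np

def bartlett(x):
    x = np.abs(x)
    return np.where(x <= 1.0, 1.0 - x, 0.0)              # compact support

def parzen(x):
    x = np.abs(x)
    return np.where(x <= 0.5, 1 - 6 * x**2 + 6 * x**3,
           np.where(x <= 1.0, 2 * (1 - x)**3, 0.0))       # compact support

def quadratic_spectral(x):
    x = np.asarray(x, dtype=float)
    z = 6.0 * np.pi * x / 5.0
    out = np.ones_like(x)                                 # k_QS(0) = 1
    nz = x != 0
    out[nz] = 3.0 / z[nz]**2 * (np.sin(z[nz]) / z[nz] - np.cos(z[nz]))
    return out                                            # support is all of R

def prewhitened_lrv(v, kernel, bandwidth):
    """AR(1)-prewhitened kernel estimator of the long-run variance of v."""
    v = np.asarray(v, dtype=float)
    a = np.dot(v[1:], v[:-1]) / np.dot(v[:-1], v[:-1])    # AR(1) coefficient
    e = v[1:] - a * v[:-1]                                # whitened residuals
    n = len(e)
    gamma = lambda j: np.dot(e[j:], e[:n - j]) / n
    w = kernel(np.arange(1, n) / bandwidth)               # kernel weights
    lrv_e = gamma(0) + 2.0 * sum(w[j - 1] * gamma(j) for j in range(1, n))
    return lrv_e / (1.0 - a) ** 2                         # recoloring step

u = np.random.default_rng(0).standard_normal(200)         # toy usage
print(prewhitened_lrv(u, bartlett, 5.0))
```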
seeking a suitable way to thank the organizers of this unique conference and volume , we chose to present a nascent work in which rigor and speculation are hopefully well balanced . accordingly , it has two parts :

+ i ) quantum oblivion ( qo ) refers to a very simple yet hitherto unnoticed type of quantum interaction , where momentum appears not to be conserved . the problem s resolution lies within a brief critical interval , during which more than one interaction takes place . rapid self - cancellation , however , leaves only one interaction completed . while the paradox s resolution is novel , the interaction itself turns out to underlie several well - known quantum peculiarities . this gives a new , realistic twist to one of qm s most uncanny hallmarks , namely , the _ counterfactual_. although rigorously derived from standard qm , oblivion offers some new experimental predictions , as well as new insights into other quantum oddities .

+ ii ) quantum hesitation ( qh ) proceeds further into theorizing as to _ how _ qo takes place . it reformulates oblivion within time - symmetric interpretations of qm , mainly aharonov s two - state - vector formalism ( tsvf ) . we assume that , beneath several momentary interactions , out of which only one is completed , there were several possible _ histories _ , of which only one has been finalized . from the tsvf we then take one of its most exotic features , namely , weak values unbounded by the eigenvalue spectrum . this allows too - large / too - small / negative weak values appearing under special pre- and post - selections . such `` unphysical '' values are assumed to evolve along both time directions , over the same spacetime trajectory , eventually making some interactions `` unhappen '' while prompting a single one to `` complete its happening , '' until all conservation laws are satisfied over the entire spacetime region .

+ naturally , the qo phenomenon and the qh hypothesis should be considered separately . their combination , however , strives to make quantum mechanics more comprehensible and realistic , at the price of admitting that spacetime has some aspects still unaccounted for in current physical theory . it is the quantum effects themselves , regardless of any interpretation , which are best illustrated by the cynical yiddish aphorism quoted in this section s title . in classical physics , grandmother s wheels , which she never had , trivially play no role in her dynamics . a quantum grandmother , in contrast , manages somehow to employ them , even to the point of outrunning vehicles . this is the quantum counterfactual , illustrated by the following examples :

+ i ) in the elitzur - vaidman experiment , a bomb is prepared and positioned such that , if struck by an appropriately prepared photon , it will explode . surprisingly , even when no explosion happens , its mere _ potentiality _ suffices to disturb the photon s interference : a non - explosive bomb leaves it undisturbed .

+ ii ) hence , in any interaction of a particle with more than one possible absorber , each absorber s capability of absorbing the particle takes part in determining its final position ( `` collapse '' ) . consider , e.g.
, a position measurement of a photon whose wave - function spreads from the source towards a circle of remote detectors : every detector s _ non - clicking _ incrementally contribute to the photon s eventual position , just like the single detector that does click .+ iii ) such counterfactuals underlie also quantum non - locality .bell s theorem implies the following counterfactual : had bob chosen to measure his particle s spin along an axis other than the one he actually chose , then alice s spin measurement would give the opposite value .+ this peculiar status of counterfactuals in qm stems from the uncertainty principle .consider the simple setting in fig .[ fig1 ] . a beam - splitter ( bs )is positioned between four equidistant mirrors , left / right / up / down .we know only that , within the device , a single photon crosses the bs back and forth .this situation , with four possible positions and momenta of the photon , encapsulates the uncertainty , well - known from the standard mach - zehnder interferometer ( mzi ) . ] to see that , loosen mirror to enable it to measure the photon s possible interaction with it . in of the cases ,the detector will remain silent .this means , with certainty , that the photon hits only and then reflected back to bs , where it splits towards _ both _ and , then reflected back , reunited by bs and returns _ only _ to .we can therefore leave the detector in and be sure that , despite the photon s endless oscillations , it will _ never _ hit this mirror .what has happened here ?apparently nothing .a possible event did not occur . yet the fact that it _could _ occur endows it a physical say .for if we turn also mirror into a detector , then again in cases it will not click , but now , sooner or later , is bound to click !the second non - event gave rise to a real event , which until that moment , was impossible .these four possible positions and momenta are typical quantum counterfactuals . as such , they present an acute duality : + i ) as long as they are not verified , they take an essential causal part in the system s dynamics .+ ii ) once , however , one of them is verified , all others vanish without a trace . + how , then , can non - events play part in the quantum process ?the challenge s acuity is reflected in the various radical moves it has elicited from the reveal interpretations of qm , of which two famous schools have desperately resort to the extreme opposites : + _ abandonment of ontology : copenhagen_. since counterfactuals are facts of our knowledge , just like actual facts , let us define qm as dealing only with knowledge , information , etc ., rather than with objective reality .+ _ excess ontology : many worlds_. 
the counterfactual does occur , but in a different world , split from ours at the instant of measurement .+ in what follows we offer an explanation to quantum counterfactuals which is purely physical , derived from quantum theory alone : at the sub - quantum level , grandmother nature utilizes her proverbial wheels by taking advantage of an otherwise - unhappy consequence of her age : _ amnesia_.we begin with a very simple yet surprising quantum interaction where one particle emerges from it visibly affected , while the other seems to `` remember '' nothing about it .let an electron and a positron , with spin states and momenta , , enter two stern - gerlach magnets ( sgms ) ( drawn for simplicity as beam - splitters ) positioned at and respectively ( see fig .[ fig2 ] ) .the sgms split the particles paths according to their spins in the -direction : and let technical care be taken such that , if the particles turn out to reside in the intersecting paths , they would meet , at or , ending up in annihilation .two nearby detectors , are set to measure the photons emitted upon pair annihilation , thereupon they would change their states to or .let us follow the time evolution of these particles .initially , at , the total wave - function is the separable state : latexmath:[\[\label{initial } depending on the two particles positions at times or , they may ( not ) annihilate and consequently ( not ) release photons , which would in turn ( not ) trigger one of the detectors . at , then , the superposition is still unchanged , as in [ initial ] .but at , either photons are emitted , indicating that the system ended up in _ or not _ , and then |ready_1\rangle|ready_2\rangle,\ ] ] which is entangled : one component of it is a definite state , while the other is a superposition in itself . ] similarly at : if a photon pair is emitted , we know that the particles ended up in paths and , i.e. , otherwise we find the non - entangled state which is peculiar . the positron is observably affected : if we time - reverse its splitting , it may fail to return to its source .its momentum has thus changed . not so with the electron : it remains superposed , hence its time - reversibility remains intact ( fig .[ fig3 ] ) .+ summarizing , one party of the interaction `` remembers '' it through momentum change , while the other remains `` oblivious , '' apparently violating momentum conservation .this is quantum oblivion ( qo ) . ]it is obviously the intermediate time - interval that conceals the momentum conservation in qo .the details , however , are no less interesting . the two particles , during this interval , become entangled in their positions and momenta .suppose , e.g. that we reunite the two halves of each particle s wave - function through the original bs , to see whether they return to their source .either one of the particles may fail to do that , on which case the other must remain intact . similarly for their positions .this is entanglement , identical to that of the electron - positron pair in hardy s experiment , .it is during this interval that the positions and momenta of the two particles become , in contrast with ordinary quantum measurement , first entangled and then non - entangled ( for greater detail of this effect and its implications see ) .it remains to be shown that something similar occurs also to the two macroscopic detectors that finalize a experiment .this is done next . 
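the state bookkeeping of the preceding paragraphs can be reproduced with a few lines of linear algebra . in the following sketch the path labels , and the choice of which crossings allow annihilation , are conventions adopted here for illustration only : conditioning on `` no annihilation '' at the first crossing leaves the pair entangled , and conditioning on the second leaves the electron superposed while the positron has collapsed onto its non - intersecting path .

```python
# Toy state-vector bookkeeping for the oblivion interaction of fig. 2.
# Path labels and which crossings allow annihilation are choices made
# here for illustration; only the qualitative conclusion matters.
import numpy as np

# Electron paths {0,1}, positron paths {0,1}; positron path 0 crosses
# electron path 0 at t1 and electron path 1 at t2.
psi = np.ones((2, 2)) / 2.0            # (|e0>+|e1>)(|p0>+|p1>)/2

def condition_on_no_annihilation(state, e_path, p_path):
    state = state.copy()
    state[e_path, p_path] = 0.0        # that joint configuration is removed
    return state / np.linalg.norm(state)

def schmidt_rank(state, tol=1e-10):
    return int(np.sum(np.linalg.svd(state, compute_uv=False) > tol))

after_t1 = condition_on_no_annihilation(psi, 0, 0)
print(schmidt_rank(after_t1))          # 2 -> entangled during the interval

after_t2 = condition_on_no_annihilation(after_t1, 1, 0)
print(schmidt_rank(after_t2))          # 1 -> product state again
print(after_t2)                        # electron still (|e0>+|e1>)/sqrt(2),
                                       # positron collapsed onto path 1
```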
rather than a curious effect of a specific interaction , oblivionis present in every routine quantum measurement .its elucidation can therefore shed new light on the nature of measurement , further enabling some novel varieties thereof .+ ordinary quantum measurement requires a basic preparation often considered trivial .consider e.g. , a particle undergoing simple detection ( as routinely employed during spin measurements ) .the detector s pointer , positioned at a specific location , reveals the particle s arrival by receiving momentum from it .this , by definition , requires the pointer to have considerable momentum certainty ( usually ) . in return , however , the pointer must have _position uncertainty_. let this tradeoff be illustrated with a slight modification of our first experiment . in the original version ( fig .[ fig2 ] ) , the two possible interactions were annihilations , which were mutually exclusive .for the present purpose , however , let us replace annihilation by mere ( elastic ) collision ( fig .[ fig4 ] ) . in other words , two superposed atoms and like the electron and positron in fig .[ fig2 ] , but instead of annihilating , they just collide .this can now happen on _ both _ possible occasions at and , namely at the two locations where can reside . with annihilations thus dropped, the outcome is even more interesting .suppose that the detector on path remains silent ( fig .[ fig4]b ) : we are now certain that the two atoms have collided , but remain oblivious about this collision s location . what we thus measure is an _ ordinary momentum exchange _ : both atoms momenta have been reversed along the horizontal axis . here ,oblivion is small , affecting only the two atoms positions at the time of the collision . yet , because has vanished from the distant location on , the final outcome is a coarse position measurement of . s position .[ fig4 ] ] the situation is much more surprising when we _fail _ to detect and in the paths to which they would be diverted in case of collision , namely , , , .we are now certain that s superposition is reduced to path while has returned to its initial superposition over and ( fig . [in other words , undergoes momentum oblivion , which is again an ordinary position measurement of , but this time the measurement is interaction - free , offering a special opportunity to observe how a counterfactual takes an integral part in the quantum evolution . to summarize : we have studied an asymmetric interaction between two atoms , where two halves of s wave - function interact with one half of .two momentum exchanges between the atoms can occur : either + i ) turns out to have collided with .this amounts to undergoing position measurement .the price exacted by the uncertainty principle is a minor position oblivion of and .+ or + ii ) turns out to have not collided with .this again amounts to undergoing position measurement- collapsing it to the remote path .hence its momentum ( interference ) is visibly disturbed .this time , however , if serves as a detector s pointer , its oblivion is amplified to a macroscopic scale , making the measurement interaction - free .+ as both ( i ) and ( ii ) occur under unusually high space- and time - resolution , they enable a novel study of the critical interval . 
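the interaction - free character of outcome ( ii ) is the familiar elitzur - vaidman mechanism , and the probabilities involved follow from elementary beam - splitter algebra . the sketch below ( the beam - splitter phase convention and the port names are choices made here ) shows that with an opaque object in one arm the otherwise dark port clicks in a quarter of the runs , announcing the object s position even though the photon never reached it .

```python
# Minimal amplitude bookkeeping for interaction-free measurement in a
# balanced Mach-Zehnder interferometer (the mechanism invoked above).
# Beam-splitter phase convention and port names are choices made here.
import numpy as np

BS = np.array([[1.0, 1.0j],
               [1.0j, 1.0]]) / np.sqrt(2.0)   # symmetric beam splitter

def mzi_probabilities(blocked):
    amp = BS @ np.array([1.0, 0.0])            # split into the two arms
    p_absorbed = 0.0
    if blocked:                                # object sits in arm 1
        p_absorbed = abs(amp[1]) ** 2
        amp = np.array([amp[0], 0.0])          # that amplitude is removed
    out = BS @ amp                             # recombine
    return {"bright port": abs(out[1]) ** 2,
            "dark port": abs(out[0]) ** 2,
            "absorbed": p_absorbed}

print(mzi_probabilities(blocked=False))  # dark port never clicks
print(mzi_probabilities(blocked=True))   # dark port clicks with prob. 1/4:
                                         # an interaction-free detection
```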
during this interval ,entanglement between the two atoms has ensued , as they have assumed new possible locations ,\ ] ] which have remained undistinguished until the macroscopic detectors indicated that no annihilation has occurred .this has broken the symmetry between the two wave - functions , finalizing the interaction and sealing the oblivion .the generalization is therefore natural : during every quantum measurement , the detector s pointer interacts with the particle in the same asymmetric manner as atoms and above : part of the particle s wave - function with the whole wave - function of the pointer . to make the analogy complete , recall that in reality the pointer s superposition is continuous rather than discrete .as the pointer thus resides over a wide array of locations , momentum measurement becomes much more precise .not only is quantum oblivion essential for every quantum measurement , it is also present beneath several well - known variants thereof . in have demonstrated qo s underlying ifm , hardy s paradox , weak measurements , partial measurement , the aharonov - bohm and the quantum zeno effects .this passage from discrete to continuous superposition also opens the door for several interesting interventions , studied in .let us summarize the oblivion effect with the commonest and most basic example .every time a detector s pointer is set to measure the momentum of a particle that hits it , the following happens .+ i ) the detector s pointer , prepared with ( almost ) precise zero momentum , is consequently superposed in space .+ ii ) therefore , not only one particle - pointer interaction takes place ; rather , several interactions occur , one after another , in all the pointer s superposed locations , as the particle proceeds along them during the critical interval .+ iii ) in all these interactions , the photon and the pointer momentarily `` measure '' each other s state , + iv ) and each of these mutual `` measurements '' mixes position and momentum , hence being very inaccurate .+ v ) yet none of them is amplified to a full measurement .a complex superposition of correlated states thus builds up during the critical interval .+ vi ) only then does the pointer s state undergo amplification into a full measurement , as follows : all its possible positions fall prey to oblivion , while all possible momenta add up to a precise value .+ vii ) all the above equally holds for ifm , where the momentum measured is zero .+ our counterfactuals , then , become demystified . during the ci ,grandmother s wheels do appear and vanish time and again beneath the quantum noise . in the present case , these wheels are the pointer s possible locations .it is only because we choose to measure the pointer s momentum that we give up the options to extract these positions , that have now turned into counterfactuals .otherwise , as shown in the previous section , had we chosen to measure them instead of the momentum , other quantum surprises would emerge .like two tunnels dug underneath towards one another until merging into one , the above line of investigation turns out to complement another research program , much older and renowned for its surprising predictions , and also verified time and again in laboratories . 
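points ( i)-(vi ) of the summary above can be made quantitative with a schematic von neumann pointer . in the sketch below ( the gaussian pointer , the unit momentum kick and the convention hbar = 1 are assumptions of the illustration , not quantities taken from the text ) a pointer prepared with a small momentum spread records the kick essentially perfectly but must be delocalized in position , while a broad - momentum pointer stays localized and registers almost nothing , i.e. , the weak regime .

```python
# A schematic von Neumann pointer, to make points (i)-(vi) above concrete.
# The Gaussian pointer, the unit kick and hbar = 1 are assumptions of this
# sketch, not quantities taken from the text.
import numpy as np

p = np.linspace(-20, 20, 4001)                 # pointer momentum grid

def pointer(sigma_p, kick=0.0):
    chi = np.exp(-(p - kick) ** 2 / (4 * sigma_p ** 2))
    return chi / np.linalg.norm(chi)

for sigma_p in (0.1, 1.0, 5.0):
    overlap = abs(pointer(sigma_p) @ pointer(sigma_p, kick=1.0))
    dx = 1.0 / (2 * sigma_p)                   # minimum position spread
    print(sigma_p, round(overlap, 3), dx)
# small sigma_p: overlap ~ 0, the kick is fully recorded (strong
# measurement), but the pointer is delocalized over dx ~ 1/(2 sigma_p);
# large sigma_p: overlap ~ 1, the kick is barely recorded (weak regime),
# and the pointer can be well localized.
```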
in what followswe introduce the two - state vector formalism and its offshoot , weak measurement , before proceeding to describe how the two tunnels meet .tsvf originates from the work of aharonov , bergman and lebowitz .it asserts that every quantum system is determined by two wave - functions : one ( also known as the pre - selected wave - function ) evolves forward in time while the other ( post - selected ) evolves backward .the forward- and backward - evolving wave - functions , and respectively , define the so called two - state vector .the two wave - functions are equally important for describing the present of the quantum system via the weak value of any operator defined by : here a logical catch ensues : _ `` the state between two measurements '' can not be revealed by measurement ._ weak measurement was conceived in order to bypass this obstacle as well as to test other tsvf s predictions .this led to numerous intriguing works , both theoretical and experimental .apart from its time - symmetry , tsvf reveals even subtler symmetries with respect to values usually considered inherently positive .consider the surprising values for mass and momentum in the following two experiments + i ) the three boxes paradox is a by - now familiar surprise yielded by tsvf , of which the underlying logic can serve as an introduction to the idea of qh .a particle is prepared with equal probability to reside in one out of three boxes : later it is post - selected in the state what is the particle s state between these two strong measurements ? by definition , _ projective measurement is unsuitable for answering this question _ , as it reveals the state upon the intermediate measurement .it is _ weak _ measurement , again , that comes to help .tsvf predicts the following weak values of the projection operators , for : therefore , the total number of particles is , as it should be , but it is a sum of two ordinary particles plus one odd .the last equation denotes negative weak value for the very exitance in the third box . in order to fully grasp the paradoxical nature of this term ,let us consider within the context of its standard versions : + if `` probability '' means `` the particle certainly resides within this box '' ; + and `` probability '' means `` the particle has never resided within this box '' ; + then `` probability '' means `` the particle certainly _un_resides within this box . ''+ as absurd as third expression may sound , this is its simplest non - mathematical meaning , the alternative being dismissing it as meaningless .the choice in to trust the mathematics , has led to assigning a negative sign to every interaction involving the third box , as long as it is weak enough .obviously , this can not be related to the particle s charge , as it played no role from the beginning .the remaining choice is mass .the simplest way to prove this prediction is through the particle s momentum : a collision with another particle must give the latter a `` pull '' rather than a `` push , '' even though their initial velocities were opposite . in passing ,it is worth comparing this step of the tsvf to dirac s choice to trust the mathematics upon encountering the negative value for the electron s charge , following the dual solution for his famous equation . 
that choice has later led to the discovery of the positronthis may be the case with aharonov s present choice as well .can this prediction be put to test ?a preliminary version has been carried out by steinberg s group + ii ) the importance of negative weak values becomes highly visible through hardy s experiment .two mzis overlap in one corner ( see fig . [ fig5 ] ) .they are tuned such that electron entering the first mzi will always arrive at detector , while a positron entering the second mzi will always arrive at detector . therefore ,when the electron and positron simultaneously traverse the setup , they might annihilate at the intersection or make their partner reach the `` forbidden '' detector for the electron / positrion respectively . in case no annihilation was detected we can exclude the case they booth took the overlapping path and their state becomes _ i.e. _ , at least one of the particles took the non - overlapping ( ) state .the interferometers were tuned such that clicks for the electron constructive interference state , clicks for the position constructive interference state state , and similarly for and .therefore , observing clicks at and is equivalent to post - selection of the state this post - selection is rather peculiar since a click at naively tells us that the positron took its overlapping path , while click at naively tells us that the electron took its overlapping path .this scenario , however , is impossible , because we know annihilation did not occur .the paradox can be resolved within the tsvf .when we calculate the weak values of the various projection operators we find out that and while this leads us to conclude that although the number of pairs is , we have two `` positive '' pairs and one `` negative '' pair- a pair of particles with opposite properties . the pair in the `` no - no '' path creates a negative `` weak potential '' , that is , when weakly interacting with any other particle in the intermediate time , its effect will have a negative sign .moreover , we can see now the cancelation of positive and negative weak values by considering projections on the non - overlapping paths : ] \iii ) we can now derive from these experiments precisely the conclusion needed for our thesis : _ the negative values that they reveal exist also in ordinary measurements , only perfectly hidden_. + consider then , a photon split by a bs , into and beams , then split the beam further into two beams ( this setup was recently suggested by vaidman to answer the question `` where have the photons been ? ''perform weak measurements on all three thirds of the wave - function . 
now delay all three parts in separate boxes .this delay enables you to choose between two options : either + a ) reunite the three beams for interference , and post - select destructive interference with negative sign of the third .you will get the results of in the above three boxes experiment , namely , two photon and one negative - mass photon .+ or + b ) reunite only the two split back into the original beam , then perform strong position measurements on each beam .you will get a photon on one side and nothing on the other .no post - selection can reveal nothing unusual in the earlier weak measurements .what was so far a trivial `` nothing '' thus turns out to be something much more profound , namely , a perfect mutual cancellation of positive and negative masses .similarly for the real photon detected on the other half : it turns out to be the sum of two odd weak values .so far , our discussion went strictly within standard quantum formalism .oblivion straightforwardly follows under a finer time - resolution of the quantum interaction . encouraged by tsvf, we now take a step further asking : how does oblivion evolve in space and time ?this is admittedly a dissembling question , pretending not to know that quantum measurement itself still poses unresolved issues ; that `` collapse of the wave - function '' is hotly debated , etc .we thus enter the daunting minefield of interpretations of qm , where each choice carries its penalty . equally , however, these difficulties may serve as positive incentives : to the extent that qo and tsvf illuminate the nature of quantum measurement , it is worth rephrasing older questions in the terms of these frameworks .our proposal focuses on aspect ( ii ) of the quantum counterfactuals dual nature ( section 2 ) , namely , their perfect self - annihilation . recall that any possibility to observe a counterfactual after the measurement , such as retrieving information about a particle s momentum after its position was measured , entails straightforward violation of the uncertainty principle .the challenge , therefore , is to explain not only how the counterfactual exerts such an essential role , but also how , having done that , it manages to disappear without a trace .the first clue is already in our possession : qo has shown that counterfactuals can be observed , although indirectly beneath great quantum noise , during the critical interval ( sec .3.2 ) .we now return to aharonov s two state - vector formalism described in sec .4.1 above , where causality is highly time - symmetric at the quantum level .elsewhere we have explained our endorsement of this approach .first , it is appealingly simple .take for example the epr experiment : one experimenter s choice of measurement obviously affects the outcome of another measurement chosen at that moment , far away , by the other experimenter .this appears disturbing , however , only as long as one assumes the familiar causation , going from the past emission of the particles to the later measurements .if , in contrast , one allows causal relations to somehow go back and forth between the past event and its successors , much of the mystery dissolves ._ what appears to be nonlocal in space , becomes perfectly local in spacetime_. similar explanations , equally elegant , await many other quantum paradoxes , from schrdinger s cat to the quantum liar paradox . 
to be sure , the above `` somehow '' , by which causality runs along both time directions , still needs much clarification .the model also takes the price of admitting that there is something to spacetime that still lies beyond physics comprehension .these issues will be discussed in consecutive works . yetwe find the advantages of time - symmetric quantum causality compelling enough to pursue them , even before they mature into a fully - fledged physical theory .our second clue is therefore : _ quantum peculiarities can be better explained by allowing causality to be time - symmetric at the quantum level_. while time - symmetric causality has been conceived by several quantum physicists ( most notably cramer s transactional interpretation ) , surprising experimental derivation of it are solely due to the tsvf .these are the odd weak values described in section 4.2 above . although they could be derived from standard quantum theory, the fact is that they never crossed anybody s mind unless guided by tsvf .paradoxically , it is these exotic features that go beyond interpretation , having won several laboratory verifications worldwide . herethen is our third clue : _ unusual quantum values that appear to be mere noise due to the pointer s uncertainty , turn out to be physically real , revealed by weak measurements. they may be unusually large , unusually small , and negative , even in the case of the mass , which in classical physics is known to be only positive_. we have what we need . the established oblivion effect ( section 3 ) , plusthe hypothesis of time - symmetric quantum causality ( section 4.1 ) , plus the latter s more specific discovery of odd values ( section 4.2 ) , merge into the following coherent picture : oblivion is an evolution not only of states , but of histories .`` collapse , '' whereby quantum counterfactuals first play an essential role and then succumb to oblivion , occurs by ( i ) retarded and advanced actions complementing one another along the same spacetime trajectory , and ( ii ) negative values taking place alongside with positive ones . to prove this assertion, we have to show that its most exotic ingredient , namely the odd values , rather than being fringe phenomena , are in fact omnipresent .as paradoxical as this may sound , the mathematical proof is fairly straightforward .as in the case of oblivion , we now show that weak values , rather than an exotic curiosity , are part and parcel of every quantum value .the proof relies on two ranges of continuum .suppose that we gradually move ( i ) from weak measurements to strong ones , and/or ( ii ) from special pre- and post - selections to non - special ones .would odd weak values vanish ?this indeed appears to be the case , but in reality they become stronger , only counterbalanced by opposite weak values so as to give the familiar quantum values . 
moreover , as one of us has shown , there exists a continuous mapping between weak and projective measurement , allowing one to decompose the latter into a long sequence of weak measurements . performing a sequence of weak measurements on a single particle amounts to a biased random walk with incremental steps . finally , the measured state is randomly driven into a definite vector out of the measurement basis . ergo , upon gradually moving from weak to strong measurements , odd values do not diminish , but rather increase . it is only through their addition with other weak values that they add up or cancel out , either completing or canceling each other to give the familiar quantum values , including . the above idea seems to be independently emerging in the current literature on weak values . the main argument of this work is nothing short of astonishing , but it perfectly converges with ours : weak values take part in the formulation of the most salient features of the quantum realm . + i ) contextuality : when you measure a particle s spin along a certain direction and get the value `` up , '' you can be sure that , had the sgm been oriented to the opposite direction , the result would have been `` down . '' similarly for other directions : the outcome should deviate from your `` up '' by the angle difference between the directions . this is common logic . quantum contextuality , however , poses an odd restriction on these counterfactual outcomes : they have no objective existence , because they depend on the very orientation you chose first . in other words , there can not be some objective `` up / down '' along a certain spin direction which has existed prior to your choice , such that the angle you choose would be related to it . on the contrary , it is the direction that you chose for whatever reason that serves as the basis for all other possible choices . although contextuality and non - locality of quantum phenomena are dominated by strong values , the work of pusey , as well as our previous works , have recently shown , respectively , that interfering weak values can well account for the contextuality and non - locality of qm . + ii ) interference : classical waves interfere in strict accordance with everyday intuition . single - particle interference , in contrast , is paradoxical in that the location of a single particle after passing the interference device is strictly determined by counterfactual paths . here too , mori and tsutumi have recently argued that weak values take part in the formation of this quantum phenomenon . their `` weak trajectories , '' which the single particle is supposed not to have traversed , are shown on one hand to be detectable by weak measurement and on the other hand to add up to give the wave - function s familiar undulatory motion . this article recounts a search for a better understanding of the unique causal efficacy of quantum events that failed to occur . we have revisited two opposite characteristics unique to quantum physical values . in terms of information theory , classical values constitute , by definition , signals . quantum values , in contrast , are sometimes equivalent to lack of signals , namely , silence , and sometimes to a surplus of signals , namely , noise .
perhaps , then , instead of ignoring the former and trying to filter out the latter , it is time to take them as complementary to signals , even as their constituents ?we have therefore studied two such quantum phenomena .+ _ interaction - free measurement _ indicates that a detector s silence is as causally potent as its click . seeking to understand this potency , we pointed out quantum oblivion as the basic mechanism underlying it . upon further study , qo turned out to underlie every standard quantum measurement .+ _ weak values_ appear amid enormous quantum noise inflicting the quantum weak measurement .tsvf , however , has extracted from them a great deal of novel information about the quantum state .they reveal , e.g. negative weak values emerging in many quantum states .recently , it has been argued that these weak values take part also in more common quantum phenomena , such as contextuality and interference .+ our synthesis of these two lines of research has led to the model of quantum hesitation : _ the weak values , taking part in the time - symmetric evolution of the quantum state , are responsible for the very phenomenon of quantum measurement which we take as real_. following the tsvf , we suggest that the measurement s outcome , randomly coming out of several possible ones , is the sum of the two state vectors going back and forth in time .this outcome may therefore emerge from weak values , even odd ones , brought by these two state vectors .similarly for the even more intriguing disappearance of the other possible outcomes : it may be negative weak values , contributed by one state vector , which precisely cancel the positive weak value of the other state vector .hence , perhaps , the apparently innocuous `` , '' which nevertheless endows quantum grandmother such intriguing mobility .+ can this account boast greater rigor than just one more interpretation of qm ?in conclusion we want to point out what we consider to be the greatest difficulty hindering such an advance .the above account invokes two distinct causal chains contacting between past and future events , _ traversing the same spacetime trajectory_. such a far - reaching idea may invoke more new questions than answers for the old ones .the most acute problem concerns the nature of time : is there more to physical time then the geometrical characteristics assigned to it by special and general relativity ? or does it allow some yet unknown `` becoming '' that involves a privileged `` now '' moving from past to future ? + the two major time - symmetric interpretations of quantum mechanics , namely tsvf and the transactional interpretation , openly admit that these issues are sorely obscure .recently both schools produced novel accounts that deal with time s nature in radical ways .as our own contribution to these issues has been only to stress their acuity , we can only hope for this kind of theorizing to be even more radical in the future .vaidman l 2009 counterfactuals in quantum mechanics , in compendium of quantum physics : concepts , experiments , history and philosophy greenberger d , hentschel k and weinert f ( eds . ) ( berlin : springer - verlag ) p 132 ( _ preprint _arxiv:0709.0340 ) aharonov y and cohen e 2015 weak values and quantum nonlocality , to be published in `` quantum nonlocality and reality '' , mary bell and gao shan ( eds . 
) , cup , ( _ preprint _ http://www.ijqf.org/wps/wp-content/uploads/2015/01/aharonovcohen.pdf ) | among the ( in)famous differences between classical and quantum mechanics , quantum counterfactuals seem to be the most intriguing . at the same time , they seem to underlie many quantum oddities . in this article , we propose a simple explanation for counterfactuals , on two levels . quantum oblivion ( qo ) is a fundamental type of quantum interaction that we prove to be the origin of quantum counterfactuals . it also turns out to underlie several well - known quantum effects . this phenomenon is discussed in the first part of the article , yielding some novel predictions . in the second part , a hypothesis is offered regarding the unique spacetime evolution underlying qo , termed quantum hesitation ( qh ) . the hypothesis invokes advanced actions and interfering weak values , as derived first by the two - state - vector formalism ( tsvf ) . here too , weak values are argued to underlie the familiar `` strong '' quantum values . with these , an event that appears to have never occurred can exert causal effects and then succumb to qo by another time - evolution involving negative weak values that eliminate them . we conclude with briefly discussing the implications of these ideas on the nature of time . |
the background caused by scattering of neutrons off the irradiated sample is a serious issue in neutron capture experiments . through subsequent neutron interactions with the materials surrounding the sample ,secondary reaction products are created such as rays and/or charged particles which may be detected alongside the capture rays emitted from the sample , contributing to the total background .this particular contribution , referred to as the _ neutron background _, is most notable for the samples characterized by a large neutron scattering - to - capture cross section ratio . in general , neutron background is characteristic of environments which are strongly affected by the neutron scattering .it is intensified by the presence of any neutron - sensitive material in the immediate vicinity of the detectors , and especially by the detector proximity to the walls of the experimental hall .the neutron background is determined by two distinct components , one being the sample itself , serving as the primary neutron scatterer , and other being the sample - independent _ neutron sensitivity _ related to the entire experimental setup .the neutron sensitivity may be generally defined as the detector response to reaction products created by the interaction of scattered neutrons with the surrounding materials . a nontrivial effect of the neutron sensitivity on the neutron background and , consequently , the entire capture measurement has been demonstrated by koehler et al . in capture measurement on , where the reduction of the neutron sensitivity of the experimental setup has led to significant improvements in the acquired capture data . at the neutron time - of - flight facility n_tof at cern ,neutron sensitivity considerations have been followed since the start of its operation .this was reflected through the development of specially optimized c ( deuterated benzene ) liquid scintillation detectors , exhibiting a very low intrinsic neutron sensitivity . however , the neutron background at the n_tof facility is heavily affected by the surrounding massive walls , serving as the prime candidates for the enhanced neutron scattering . furthermore , the much higher neutron energies available from the n_tof spallation source introduce an additional contribution to the neutron background , when compared to the neutron sources based on electron linacs , where the neutron energies are usually limited to mev .details on the n_tof facility can be found in refs . .recently , geant4 simulations were developed for determining the neutron background in the measurements with c detectors at n_tof .the results of these simulations were first applied in the analysis of the experimental capture data for and the analysis of the integral cross section measurement of the () reaction .an earlier capture measurement by guber et al . has already revealed that previous experimental results and adopted evaluations of the capture cross section have been heavily affected by the neutron background , that was in the past inadequately suppressed or accounted for . 
at n_tof the neutron background was accurately determined by means of dedicated simulations benchmarked against the available measurements , and was subtracted from the data . the aim of this paper is to demonstrate that deriving the neutron background from the neutron sensitivity ( sections [ sensitivity ] and [ resonance ] ) or even the dedicated measurements ( section [ natcarbon ] ) is not a trivial issue and requires a suitable procedure . we address the issue by developing an improved method for determining the neutron background , which is based on an advanced treatment of the simulated neutron sensitivity ( section [ novel ] ) . the improvements regard both the event tracking in the simulations and the subsequent data analysis . in particular , we propose to study the neutron sensitivity by keeping track of the total time delays between the neutron scattering off the sample and the detection of counts caused by the neutron - induced reactions . the limitations of the method are addressed in section [ uranium ] . section [ summary ] summarizes the results and conclusions of this work . a detailed mathematical formalism underlying the proposed method is reported throughout the appendices , , and . when comparing the neutron background to the neutron sensitivity , a clear distinction has to be made concerning the neutron energies . the _ primary neutron energy _ is the true energy of the neutron ( from the incident neutron beam ) that has caused the reaction or the chain of reactions leading to the neutron background . the _ reconstructed energy _ is the energy determined from the total time delay between the neutron production and the detection of secondary particles generated by the neutron - induced reactions . in case of the _ prompt counts _ , caused by the reaction products immediately produced in the sample ( e.g. rays from neutron capture ) , the reconstructed energy is equal to the neutron kinetic energy , due to the total time delay being equal to the neutron time - of - flight . in case of neutron scattering inside the experimental hall or some other delay mechanism , such as the decay of radioactive products created by neutron - induced reactions , the total time delay may be large and may significantly affect the reconstructed neutron energy . for these _ delayed counts _ contributing to the neutron background , the reconstructed energy will be lower than the primary neutron energy , often by orders of magnitude . while the reconstructed energy is experimentally accessible , the primary neutron energy is not , and can only be determined by simulations . the neutron background estimation methods laid out in sections [ sensitivity ] , [ resonance ] and [ natcarbon ] neglect the difference between the primary neutron energy ( before the scattering ) , the scattering energy ( sampled in the simulations ) and the reconstructed energy . hence , throughout these sections the notation will be used as the universal one for the neutron energy . we follow this approach for consistency with refs . , freely combining the considerations strictly valid either for , or . starting from section [ novel ] , these distinctions will be explicitly taken into account . in that regard , it should be noted that the neutron sensitivity has conventionally been expressed in terms of the scattering neutron energy . on the other hand , the neutron background as appearing in the experiments is a function of the reconstructed energy , suggesting at once an incompatibility between the two .
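the relation between the primary and the reconstructed energy follows directly from the time - of - flight relation . the short sketch below illustrates how an additional time delay pushes the reconstructed energy well below the primary one ; the flight - path length is left as a free parameter and is here set to a purely illustrative value , not the exact n_tof one , and relativistic kinematics is used throughout :

```python
import numpy as np

M_N_C2 = 939.565e6       # neutron rest energy in eV
C_LIGHT = 299_792_458.0  # speed of light in m/s

def tof_from_energy(e_kin_ev, flight_path_m):
    """Time of flight of a neutron of kinetic energy e_kin_ev over flight_path_m."""
    gamma = 1.0 + e_kin_ev / M_N_C2
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return flight_path_m / (beta * C_LIGHT)

def energy_from_tof(t_s, flight_path_m):
    """Reconstructed kinetic energy from the total time delay, assuming pure time of flight."""
    beta = flight_path_m / (t_s * C_LIGHT)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * M_N_C2

L = 185.0                                      # illustrative flight path in metres (assumed value)
for e_primary in (1e3, 1e5, 1e7):              # primary neutron energies in eV
    t_prompt = tof_from_energy(e_primary, L)   # prompt counts: total delay = time of flight
    for extra_delay in (0.0, 1e-5, 1e-3):      # additional delay from scattering or decay, in s
        e_rec = energy_from_tof(t_prompt + extra_delay, L)
        print(f"E_primary = {e_primary:9.1e} eV, extra delay = {extra_delay:7.1e} s "
              f"-> E_reconstructed = {e_rec:9.2e} eV")

# Prompt counts (zero extra delay) reproduce the primary energy; any additional delay
# shifts the reconstructed energy below the primary one, by orders of magnitude once
# the delay becomes comparable to or longer than the time of flight itself.
```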
in order to calculate the neutron background from the neutron sensitivity , one needs to determine the neutron detection efficiency , i.e. the efficiency for detecting a neutron through the detection of particles produced in secondary neutron reactions . this is commonly achieved by running the dedicated simulations , wherein the neutrons are isotropically and isolethargically generated from a point source at the sample position . we note that the pulse height weighting technique has to be applied in calculating the efficiency , in order to compensate for the lack of correlations between rays in the simulated cascades following neutron captures . this issue has already been addressed in ref . furthermore , the central role of applying the pulse height weighting technique to the simulated capture data was unambiguously confirmed in ref . by comparing the simulated and the experimental capture data for . a detailed description of the pulse height weighting technique applied at n_tof may be found in ref . . we adopt the definition of the neutron sensitivity from , which uses the ratio , taking into account the maximum detection efficiency as an additional constant factor . in order to be able to use the weighted neutron detection efficiency , we further generalize the definition of the neutron sensitivity , by introducing the average weighting factor : the weighted neutron detection efficiency is : with as the appropriate weighting factors from the pulse height weighting technique , dependent on the energy deposited in the detectors . is the number of detected counts caused by neutrons of scattering neutron energy , while is the total number of neutrons simulated at this energy . the average weighting factor is obtained by taking into account all neutron energies sampled : it may be noted that without weighting ( for all counts ) the generalized neutron sensitivity reverts to the original ratio . the weighted efficiency has been calculated for two c detectors used at n_tof . one is the modified version of a commercial bicron detector and the other one was custom built at forschungszentrum karlsruhe and denoted as the fzk detector . for the sake of simplicity and clarity , in this paper we will only show the results for the bicron detector , with the condition kev , as usually imposed on the experimental data . furthermore , the reader s attention may be drawn to noticeable fluctuations apparent in multiple figures presented throughout this paper . with the exception of clearly recognizable resonances in the displayed spectra , the fluctuations are purely statistical in nature , a simple consequence of a finite runtime dedicated to the computationally intensive simulations . they are also naturally enhanced by the application of the pulse height weighting technique and by a fine binning that was selected for displaying the data , in order to preserve the clear appearance of some of the very narrow resonances . in accordance with the laid out considerations , fig . [ fig1 ] shows the weighted neutron detection efficiency of the bicron detector . the generalized neutron sensitivity , offset by a constant factor , is also shown , since it will be used later on . to estimate the neutron background in terms of the weighted counts per neutron bunch , the yield of elastically scattered neutrons ( i.e.
the scattering probability ) and the neutron flux ( normalized to the number of neutrons per neutron bunch ) have to be taken into account . for purely illustrative purposes , we will assume a simple relation for the yield : where is the areal density of the sample in number of atoms per unit surface . and denote the elastic scattering cross section and the total cross section , respectively . this expression does not take into account the multiple scattering effects , the angular distribution of scattered neutrons nor the contribution from inelastic reactions , which may all be accounted for in dedicated simulations , as in ref . the weighted neutron background may be expressed as : this estimate is compared in fig . [ fig2 ] against the true neutron background for , obtained from dedicated simulations . outside the resonance region the scale of the true background is , indeed , very well reproduced by the estimated one . however , under the capture resonances the background calculated from the neutron detection efficiency is clearly overestimated . this is precisely because the estimated background is expressed in terms of the primary neutron energy instead of the reconstructed energy , thus missing the time delays following the neutron scattering off the sample . since the background overestimation seems to be more pronounced for strong resonances , subtracting the neutron background estimated from the neutron detection efficiency may significantly affect the measured capture resonances . this is particularly troublesome because the strongest capture resonances constitute the dominant contribution to the maxwellian averaged cross sections ( macs ) , which represent the basic input for astrophysical models of stellar nucleosynthesis . ni and the one estimated from the weighted neutron detection efficiency of the bicron detector . ] as is shown in fig . [ fig2 ] , the neutron background estimated from the neutron sensitivity of the experimental setup exhibits strong resonances due to the elastic scattering cross section of a given sample . since each capture resonance is accompanied by the corresponding resonant component in the elastic scattering cross section , an erroneous estimate of the neutron background may heavily affect the capture data . here we discuss the correct method of estimating the neutron background , focusing on the illustrative example of the strong 15.3 kev resonance in the ( ) reaction . the first estimate of the neutron background under the capture resonances relies on simple neutron sensitivity considerations . to demonstrate the limitations of this method , we benchmark it against the true neutron background under the 15.3 kev resonance ( obtained by dedicated simulations ; shown in fig . [ fig2 ] ) . we assume that the capture yield under the resonance is determined by its radiative width : . the reaction yield is translated into the number of the detected capture counts through the average detection efficiency as : . by the same reasoning , the yield of elastically scattered neutrons is determined by the neutron width as : . the neutron background counts are similarly affected by the normalized weighted neutron detection efficiency , so that : .
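before specializing to a single resonance , the efficiency - based estimate just described can be sketched in code : the scattering yield , the flux and the weighted efficiency are multiplied at the primary neutron energy . the yield is taken here in the standard single - interaction form built from the areal density and the elastic and total cross sections ( assumed , since the exact expression of the illustrative yield relation is not reproduced above ) , and all sample- and facility - specific ingredients are left as placeholder callables :

```python
import numpy as np

def scattering_yield(e_ev, n_areal, sigma_el, sigma_tot):
    """Assumed single-interaction yield of elastically scattered neutrons:
    fraction of beam neutrons interacting in a sample of areal density n_areal
    (atoms/barn), times the elastic-to-total cross-section ratio. No multiple
    scattering, no angular distribution, no inelastic channels."""
    s_el, s_tot = sigma_el(e_ev), sigma_tot(e_ev)
    return (1.0 - np.exp(-n_areal * s_tot)) * s_el / s_tot

def estimated_background(e_ev, n_areal, sigma_el, sigma_tot, flux, eff_weighted):
    """Naive background estimate (weighted counts per bunch and per energy bin):
    scattering yield x neutron flux x weighted detection efficiency, all evaluated
    at the *primary* neutron energy -- exactly the approximation criticised in the text."""
    return (scattering_yield(e_ev, n_areal, sigma_el, sigma_tot)
            * flux(e_ev) * eff_weighted(e_ev))

# --- purely illustrative placeholders (not evaluated nuclear data) -------------
sigma_el  = lambda e: 5.0 + 0.0 * e          # barn, flat elastic cross section
sigma_tot = lambda e: 7.0 + 0.0 * e          # barn, flat total cross section
flux      = lambda e: 1e5                    # neutrons per bunch per lethargy unit
eff_w     = lambda e: 1e-3 * np.log10(e)     # weighted efficiency, slowly rising

for e in np.logspace(1, 6, 6):               # 10 eV ... 1 MeV
    b = estimated_background(e, n_areal=0.01, sigma_el=sigma_el,
                             sigma_tot=sigma_tot, flux=flux, eff_weighted=eff_w)
    print(f"E = {e:9.1e} eV : estimated weighted background = {b:.3e} counts/bunch")
```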
in order to establish the link with the adopted definition of the generalized neutron sensitivity , we replace the average detection efficiency by the maximum one .the relative contribution of the neutron background to the total number of counts measured at the given resonance may then be estimated as : for the resonance at 15.3 kev the values from endf / b - vii.1 are : = 1.104 ev and = 1354.062 ev . with the neutron sensitivity for the bicron detector of , determined by averaging the data from fig .[ fig1 ] within the resonance range from 14 kev to 17 kev , the ratio from eq .( [ eq1 ] ) amounts to approximately 68% . on the contrary ,dedicated simulations of the neutron background indicate that when expressed in terms of the primary ( note : _ not _ reconstructed ! ) neutron energy , the neutron background between 14 kev and 17 kev amounts only to 24% of the total detected counts .the difference relative to the result of eq .( [ eq1 ] ) is due to the following reasons : ( 1 ) in eq .( [ eq1 ] ) the maximum detection efficiency was used instead of the average one ( which , in fact , lowers the estimated value ) ; ( 2 ) in small part because the neutron sensitivity was calculated assuming an isotropic distribution of scattered neutrons ( isotropic in laboratory frame ) , instead of a more realistic one ; ( 3 ) most importantly , the elastic scattering cross section has a different shape than the neutron capture cross section , showing pronounced interference patterns and more extended resonance tails , strongly affecting not only the partial contribution to the yield of scattered neutrons within the limited energy range between 14 kev to 17 kev , but also the proportionality between the overall scattering yield and the neutron width , relative to the capture counterparts ( ) .these simple considerations already indicate that a simplified approach , such as the one of eq .( [ eq1 ] ) , may lead to large errors in the estimates of the neutron background .ni within the 14 kev 100 kev range , compared to the spectrum of primary energies for the neutrons that have caused it . ]a very different value for the relative contribution of the neutron background is obtained if we consider the reconstructed neutron energy .the true neutron background , integrated between 14 kev and 17 kev , makes only 6.2% of the total resonance area .moreover , this reduced value consists of both the prompt and the delayed component , while the neutron sensitivity considerations apply only to the prompt component .( the delayed component was defined by considering events with more than 1% relative difference between the primary neutron energy and the reconstructed energy , i.e. . increasing this relative difference to 10%did not produce any notable changes between the two components . ) if treated separately , the delayed component , which can not be estimated on the basis of simple neutron sensitivity considerations , contributes 2.6% to the 15.3 kev resonance , while the prompt component amounts to only 3.6% , in contrast with the initial value of 24% from the neutron background expressed as a function of the primary neutron energy .ni caused by neutrons with primary energies between 14 kev and 100 kev . 
] in order to understand the difference between the value of 24% from the primary neutron energies and the value of 6.2% from the reconstructed energies , we focus on the origin of the neutron background within the 14 kev - 100 kev range , where two strong capture resonances ( 15.3 kev and 63.3 kev ) are located . figure [ fig5 ] compares the total neutron background in that region with the energy spectrum of primary neutrons that have generated it . neutrons all the way up to 10 mev contribute notably to the background below 100 kev , while the contribution from neutrons of higher energy is negligible . on the other hand , fig . [ fig6 ] shows the background produced by neutrons of primary energies between 14 kev and 100 kev . the comparison of figs . [ fig5 ] and [ fig6 ] shows that a large fraction of the background in the 14 kev - 100 kev range is produced by higher - energy neutrons . simultaneously , the neutrons of primary energy from the considered range produce a background mostly contained at lower reconstructed energies , down to 0.1 ev . only a small fraction of the background remains in the same energy range ( in particular , only 15% , as the ratio between previously quoted values of 3.6% and 24% ) . from both figures , together with fig . [ fig2 ] , we reach the following conclusion : contrary to the past assumption , not even the prompt component of the neutron background under the capture resonances may be safely estimated from neutron sensitivity considerations alone . instead , complete simulations of the neutron propagation throughout the experimental hall must be performed , taking the full temporal evolution of the neutron - induced reactions into account . another common method used for estimating the neutron background without relying on simulations , except for possible higher order corrections , consists in scaling the neutron background measured with a neutron scatterer , such as a sample of natural carbon . this method is based on the assumption that the measured background is proportional to the yield of elastically scattered neutrons for a given sample : , with the constant of proportionality assumed to be equal for both the scatterer and the sample under investigation . under such assumptions the weighted background related to a given sample ( we use the example background related to the sample ) may be estimated from the weighted background , measured with the carbon sample , as : where and are the yields of elastically scattered neutrons for and , respectively . ni and the one estimated from the weighted neutron background of sample . ] figure [ fig8 ] compares the actual neutron background for with the one estimated by eq . ( [ carbon ] ) , using the simulated neutron background for . under strong resonances the neutron background is again overestimated , for the same reasons that were covered in section [ resonance ] . however , a startling agreement may be observed outside the resonant region of , which may be surprising at first , since the principle of scaling the backgrounds for different samples by the portion of elastically scattered neutrons ( at a given primary neutron energy ) can only be applied to the prompt components . though , in principle , an excellent agreement outside the resonant region may not be _ a priori _ expected , it may be understood from the fact that the sources of the neutron background other than the sample itself ( i.e. the experimental components and the walls of the experimental area ) are equal for both samples . if figs .
[ fig2 ] and [ fig8 ] have shown anything , it is that in estimating the neutron background directly from the elastic scattering cross section , it is by far more appropriate to use the smoothed modification of the cross section , in which the ( strong ) scattering resonances have been flattened and replaced by the smooth sections connecting to the cross section at surrounding energies . unfortunately , fig . [ fig8 ] may betray an overly optimistic picture , since for estimating the neutron background of , the simulated neutron background of was used , instead of the experimental one . the reason is that the measured carbon data from experimental area 1 of the n_tof facility reveal the pure neutron background only at the reconstructed energies above 1 kev . below this energy the measurements are strongly affected by the detection of rays from the decay of radioactive residuals produced by the inelastic () reaction , opening above the reaction threshold of 13.6 mev . this is an intrinsic feature of itself and can not be projected to the other samples . for this reason , in experimental area 1 the neutron background for ( or any other sample ) can not be estimated in the energy range below 1 kev directly from the measured carbon data . at the same time , any reliable extrapolation toward lower energies is extremely hard , especially since the shape of the background below 10 ev decidedly departs from the one set above 1 kev . the considerations up to this point reveal the limitations of the currently available methods for determining the neutron background . herein , we propose an improved approach to estimating the neutron background , based on simulations of the complete experimental setup , barring the sample itself . these simulations need to be performed only once for a given experimental setup , with the output results reflecting the detector response to the scattered neutrons being adjusted to the scattering properties of a particular sample by means of a proper normalization technique . the simulations are basically identical to those used for the neutron sensitivity in section [ sensitivity ] and are described in ref . . in short , the overall experimental setup is irradiated by neutrons isolethargically and isotropically emitted from a point source at the sample position . note that the neutrons are emitted as if they had already been scattered off the sample . for this reason , we will treat them as the _ scattered neutrons _ , as opposed to the _ primary neutrons _ from the neutron beam . the distinction and its importance are elaborated in . each count detected by any of the two c detectors is characterized by the following set of parameters extracted from simulations : .
here is the initial neutron energy , is the energy deposited in the detector , is the time delay between the neutron emission from the sample position and the detection of secondary particles , while and are conventionally defined polar and azimuthal angles of the initial neutron emission , relative to the primary beam direction . the most important feature of the improved approach is that the neutron background is calculated from the time delays , and expressed as a function of the reconstructed energy , instead of the initially sampled energy . in addition , a more involved data analysis will be included in the procedure . figure [ fig11 ] shows examples of the detector response to neutrons isotropically scattered in the laboratory frame , for several selected scattering energies , as functions of the time delay . the detector response function and its relation to the neutron background are explained in detail in . fig . [ fig11 ] shows the angle - integrated response . the counts at very large time delays are caused by the detection of particles ( mostly rays ) emitted in the decay of long - lived radionuclides produced by neutron activation . only counts with a time delay of up to contribute significantly to the neutron background affecting the n_tof measurements ( since ms is the width of the data acquisition window at the n_tof facility ; the exact value depends on the adopted sampling rate of the data acquisition system ) . the later counts may only contribute ( with sharply decreasing probability ) through the wrap - around process in subsequent neutron pulses . the backbone of the new method is the proper normalization of the simulated data , which consists in weighting each detected count by the appropriate weighting factors , dependent on more parameters than just the reconstructed energy , which appears as the main argument of the neutron background . since the cosine of the scattering angle plays a dominant role , in order to abbreviate relevant expressions we use the following notation : together with an abbreviation for the set of all relevant parameters : we report here the central expression for the weighting factors , while the complete mathematical formalism underlying the method is treated in detail in a series of appendices ( , , ) . in particular , an overview of considerations leading to the correct expression for the weighting factors is presented in , with the detailed derivation given in . adopting the notation from eqs . ( [ short_cos ] ) and ( [ star ] ) , the expression for the weighting factors may be written as : here is the weighting function from the pulse height weighting technique . is the yield of elastically scattered neutrons , given by eq . ( [ yield ] ) , but dependent on the _ primary neutron energy _ from before the scattering off the sample . the primary neutron energy is given as a function of the _ neutron scattering energy _ and the scattering angle by eq . ( [ primary ] ) in . the neutron flux is given in units of lethargy . the angular correction factor given by eq . ( [ eq3 ] ) in translates the isotropically simulated angular distribution of the scattered neutrons into the more realistic laboratory distribution : which is the relativistically transformed angular distribution of neutrons isotropically scattered in the neutron - nucleus center of mass frame .
finally , is the total number of simulated neutrons , with and as the minimum and maximum sampling energy , respectively . we remark that the derivative may be easily calculated from eq . ( [ primary ] ) . ni ( top panel ) and ( bottom panel ) , compared with the background obtained by the newly proposed estimation method . the estimated background obtained by properly normalizing the simulated data , on the basis of eq . ( [ weighting ] ) , is compared against the one obtained by a simplified normalization based on eq . ( [ approx ] ) . ] since the weighting factors from eq . ( [ weighting ] ) are dependent on more parameters than appear as the arguments of the final distribution , the weighting procedure can not be directly applied to an overall distribution of unweighted counts . rather , it must be applied to a set of discrete data , on a count - to - count basis . a separate appendix presents the simple formalism establishing the link between the continuous distributions and the associated set of discrete data . figure [ fig10 ] shows the neutron background for and , estimated using the new method , i.e. applying the weighting factors from eq . ( [ weighting ] ) . an excellent agreement with the neutron background determined from the dedicated simulations for the two samples is observed in both cases . while eq . ( [ weighting ] ) represents the fully relativistic approach , reasonable results can also be obtained with a simplified approach in which the angular corrections and the difference in neutron energy before and after the scattering are ignored : in the reconstructed - energy range , the ratio between the average proper weighting factor from eq . ( [ weighting ] ) and the average simplified factor from eq . ( [ approx ] ) is very close to 1 for both and . this may be easily understood from the fact that the major contribution to these particular backgrounds comes from essentially nonrelativistic neutrons of energies below 10 mev , with only a minor contribution from higher energies . however , while the differences between the proper and the simplified weights may average out , they may become significant on the level of single counts ( being more pronounced for lighter nuclei , since the boost from the neutron - nucleus center of mass frame into the laboratory frame depends on the mass of the target ) . this is illustrated in fig . [ fig12 ] for the counts from the reconstructed - energy range , for both and . evidently , the corrections are mostly limited to % in case of and to % in case of , but both distributions show long tails far beyond the central parts . from eq . ( [ weighting ] ) and the simplified weighting factors from eq . ( [ approx ] ) , for and . ] the advantage of the method proposed herein is the universality of the simulated data , which need to be obtained only once for a particular experimental setup .
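the count - by - count character of the procedure is easily expressed in code . the sketch below is only schematic : it assumes that the simulated counts are available as records of ( scattering energy , deposited energy , time delay , scattering - angle cosine ) , and it bundles the sample- and facility - specific ingredients of eq . ( [ weighting ] ) the pulse - height weighting function , the scattering yield , the flux , the angular correction and the kinematic factors into user - supplied callables , so that only the structure of the normalization is shown , not the exact expression :

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SimulatedCount:
    """One detected count from the neutron-sensitivity simulation."""
    e_scatter: float   # sampled (scattering) neutron energy in eV
    e_dep: float       # energy deposited in the detector in eV
    t_delay: float     # delay between scattering and detection in s
    cos_theta: float   # cosine of the laboratory scattering angle

def weighted_background(counts, n_sim, e_min, e_max,
                        phwt_weight, scatter_yield, flux_lethargy,
                        angular_corr, primary_energy, dE0_dEn,
                        reconstructed_energy, bins):
    """Histogram of the estimated neutron background versus *reconstructed* energy.
    Each simulated count is weighted individually: pulse-height weighting factor,
    times the probability that a primary-beam neutron is scattered into the sampled
    (energy, angle) cell, divided by the number of neutrons simulated there."""
    e_rec, w = [], []
    lethargy_range = np.log(e_max / e_min)              # isolethargic sampling width
    for c in counts:
        e0 = primary_energy(c.e_scatter, c.cos_theta)   # energy before scattering
        weight = (phwt_weight(c.e_dep)
                  * scatter_yield(e0) * flux_lethargy(e0)
                  * angular_corr(c.e_scatter, c.cos_theta)
                  * dE0_dEn(c.e_scatter, c.cos_theta)
                  * lethargy_range / n_sim)
        e_rec.append(reconstructed_energy(c.t_delay, c.e_scatter))
        w.append(weight)
    return np.histogram(e_rec, bins=bins, weights=w)

# usage with placeholder callables standing in for the real ingredients:
counts = [SimulatedCount(1e5, 6e5, 2.1e-5, 0.3), SimulatedCount(3e6, 9e5, 4.0e-6, -0.7)]
hist, edges = weighted_background(
    counts, n_sim=10**8, e_min=2.5e-2, e_max=1e10,      # thermal up to 10 GeV
    phwt_weight=lambda e_dep: e_dep / 1e6,
    scatter_yield=lambda e0: 0.02,
    flux_lethargy=lambda e0: 1e5,
    angular_corr=lambda e, xi: 1.0,                     # isotropic approximation
    primary_energy=lambda e, xi: e,                     # placeholder kinematics
    dE0_dEn=lambda e, xi: 1.0,                          # placeholder derivative
    reconstructed_energy=lambda t, e: e,                # placeholder TOF reconstruction
    bins=np.logspace(-2, 7, 10))
print(hist)
```

only the callables change from sample to sample , which is precisely the universality emphasized above .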
estimating the neutron background for a given sample requires only the proper selection of the elastic scattering cross section and the introduction of an appropriate nuclear mass into eqs . ( [ primary ] ) and ( [ eq4 ] ) from [ considerations ] , used for evaluating the central eq . ( [ weighting ] ) . in this section we present an additional complication that may affect the neutron background . the method proposed in section [ novel ] considers only the elastic neutron scattering off the sample . in case of an additional background component , related to the sample properties other than the elastic scattering , there is no alternative to running the dedicated simulations specifically adapted to the particular sample . this is discussed in the example of the neutron background in a measurement of the ( ) cross section at n_tof . figure [ fig9 ] shows the simulated neutron background for a sample , clearly separating a portion of the background caused exclusively by the neutron scattering off the sample . the presence of a strong additional component is immediately evident . according to the simulations , this component is caused by high - energy neutrons ( above 600 kev ) inducing fission reactions on . a variety of radioactive fission fragments is produced in the process , most so short - lived that their decay falls within the ms data acquisition window of the n_tof facility . the products of these decays ( mostly rays ) are then detected alongside the capture rays , contributing to the neutron background . the detection of these secondary products is similar to the detection of rays from the () reaction on the carbon sample . this background component , although related to the activation of the sample , can not be measured separately in beam - off runs , since the half - lives of the produced radioactive isotopes span just a few neutron bunches . the sudden increase in the total neutron background above several hundreds of kev is due to the prompt rays released by neutron inelastic scattering and neutron - induced fission . u , compared against the background caused exclusively by neutrons elastically scattered off the sample . two background estimates are also shown : one obtained from the neutron detection efficiency of the experimental setup , the other one by the improved estimation technique ( see the main text for details ) . ] figure [ fig9 ] also shows the neutron background estimated by the older technique represented by eq . ( [ estimate ] ) , comparing it against the background obtained by the new technique from section [ novel ] . the superiority of the new technique in reconstructing the portion of the background caused by the neutron scattering is clearly evident . note that in the energy range above several hundred kev the angular distribution of elastically scattered neutrons from departs from isotropy . while the dedicated simulations properly account for the angular distribution of the scattered neutrons , the new technique at its basic level explicitly assumes isotropic scattering in the neutron - nucleus center of mass frame , leading to a more pronounced discrepancy between the true and the estimated background . however , the correction for the realistic angular distribution may be introduced directly through the angular correction factor .
assuming the angular distribution of neutrons scattered in the center of mass frame to be known and normalized to unity ( with as the neutron scattering angle in the neutron - nucleus center of mass frame ) ,the angular correction factor simply becomes : thus accommodating even the most general form of the angular distribution .of course , needs to be treated here as a function of , which is achieved directly by applying eq .( [ eq4 ] ) from [ considerations ] . in conclusion ,the neutron background for is dominated by an additional component caused by the radioactive decay of short - lived fission products ( together with the fission neutrons further enhancing the neutron background ) .however , it should be noted that the gravity of this issue is strongly related to the length of the particular neutron flight - path ( m for experimental area 1 at n_tof ) , which determines the time - energy correlation .thus , the only way to correctly estimate the background is by means of dedicated simulations , taking into account the full framework of the known neutron reactions induced both within and without the sample , and closely following their complete temporal evolution . nevertheless , the newly proposed method from section [ novel ] may still be used for a fast and reliable estimation of the neutron background originating exclusively from elastic neutron scattering off the sample .following ref . , we have performed a close investigation of the relationship between the neutron background in neutron capture measurements and the neutron sensitivity related to the experimental setup .the neutron background for a sample was used in order to illustrate the difficulties in estimating the neutron background from neutron sensitivity considerations , i.e. from the neutron detection efficiency related to the experimental setup . as opposed to the neutron sensitivity being a function of the primary neutron energy ,the neutron background is a function of the reconstructed neutron energy , which is affected by the temporal evolution of the neutron - induced reactions . as a consequence, the neutron background may be overestimated under the capture resonances when the estimation is attempted on a basis of the neutron sensitivity , or even on a basis of the surrogate measurements with the neutron scatterer of natural carbon .the reason is that correlations between the primary neutron energy and the reconstructed energies are neglected in considerations based only on the neutron sensitivity . 
outside the resonance region a good agreement is found between the true neutron background and the background estimated from the neutron detection efficiency related to the experimental setup , and from the neutron background for natural carbon ( we remind that in the experimental area 1 of the n_tof facility the pure neutron background for the carbon sample is experimentally available only above 1 kev ) . therefore , these methods may still be used , provided that a smoothed elastic scattering cross section is used , instead of the resonant one . an improved neutron background estimation technique was presented , relying on the calculation of the neutron sensitivity as a function of the reconstructed neutron energy . supplemented by an advanced data analysis procedure , taking into account the fully relativistic kinematics of the neutron scattering , the proposed procedure yields excellent agreement between the true and the estimated neutron backgrounds for and . the above considerations apply only to the background caused by elastically scattered neutrons . in the presence of reactions leading to the production of short - lived radioactive nuclides within the sample itself , an additional background component may be present . this has been illustrated in the case of the sample , for which neutron - induced fission at primary neutron energies above 600 kev translates into a strong background component at reconstructed neutron energies in the thermal and epithermal region , through the detection of radioactive residuals produced by fission . the same case can also be made for the carbon sample measured in the experimental area 1 of the n_tof facility , if one considers the () reaction as interfering with the neutron background measurements . in all these cases , particularly in measurements of capture cross sections of actinides , the only way of properly estimating the total neutron - induced background is by means of detailed monte carlo simulations . + * acknowledgements * + this work was supported by the croatian science foundation under project no . the geant4 simulations have been run at the laboratory for advanced computing , faculty of science , university of zagreb . let us suppose we have the function of the detector response to _ primary _ neutrons of energy , scattered by an angle ( we adopt the notation from eq . ( [ short_cos ] ) for the cosine of the scattering angle ) , being a distribution of time delays between the neutron scattering off the sample ( at ) and the detection of counts caused by reactions of the scattered neutrons .
in parallel ,let us consider the function of the detector response to neutrons _ scattered _ with the energy .evidently , the following must hold : since the effect must be the same whether we regard the primary neutron of energy as the ultimate source of the detected counts , or the same neutron after the scattering by the angle , with the associated scattering energy .let us now consider the contribution to the detected counts caused by the primary neutrons of energy which were scattered by the angle , where the counts themselves are detected at time after the primary neutron production ( at ) .the total time delay is given by the neutron time - of - flight along the flight - path between the neutron source ( in particular , the n_tof spallation target ) and the irradiated sample , and the time delay between the neutron scattering off the sample and detecting the count : the detector response to the neutrons characterized by and determines the number of the detected counts as : with as the neutron flux , as the yield of the sample - scattered neutrons , given by eq .( [ yield ] ) , and as the angular distribution of the neutrons scattered in the laboratory frame , such as the one from eq .( [ ang ] ) .( throughout the rest of the paper the neutron flux is given in units of lethargy , requiring the transition . )the detector response function is extracted from the dedicated simulations . however , when the neutrons are simulated not as coming from the primary beam , but as already having been scattered off the sample which is a backbone of the improved method from section [ novel ] then the simulations yield the detector response function , instead of . fortunately , the relation from eq . ( [ relation ] ) overcomes this difficulty , allowing to rewrite eq .( [ counts ] ) and to express the neutron background ( untreated by the pulse height weighting technique ) as the function of the total time delay : evidently , attempting to follow this procedure would be a formidable computational task , requiring the identification of the detector response as a function of no less than three variables , with a sufficient statistical accuracy. however , the data analysis procedure laid down in section [ novel ] relying on the proper weighting of the individual counts circumvents this problem by immediately building the integral from eq .( [ background ] ) , instead of first requiring the extraction of the detailed multidimensional detector response function .the ultimate confirmation of this claim comes in form of eq .( [ ultimate ] ) from .in the simulations the neutrons are treated as if already having been scattered off the sample .though they are simulated isotropically in the laboratory frame , it is more justifiable to assume that scattering is isotropic in the center of mass frame of the primary neutron and the target nucleus .furthermore , if the primary neutron beam was assumed to be isolethargic , the energy distribution of the scattered neutrons would not be such .therefore , the detected counts caused by an over - idealized stream of scattered neutrons need to be weighted in a manner which will make them appear as if they were caused by an isolethargic beam of primary neutrons , which have been scattered isotropically in the neutron - nucleus center of mass frame . 
after this correction , the isolethargic distribution may be reliably corrected by the actual energy dependence of the neutron flux , applying the correction to the energy distribution of the primary neutron beam , rather than to the distribution of scattered neutrons . to apply this correction properly , we will have to treat the primary neutron flux as a function of the _ primary energy _ that the neutron must have had in the primary beam in order to be scattered by the scattering angle and with the _ scattering energy _ . we remind that and have been independently sampled in the simulations : the scattering angle isotropically , the scattering energy isolethargically . adopting the notation from eq . ( [ short_cos ] ) and employing the relativistic scattering kinematics , the expression for the primary energy $E_0$ , as a function of the scattering energy $E_n$ and the scattering - angle cosine $\xi$ , may be obtained : $$E_0=\frac{E_n c^2}{\xi^2 E_n\big(E_n+2mc^2\big)-\big[E_n-(M-m)c^2\big]^2}\times\Big\{(M+m)\big[E_n-(M-m)c^2\big]+\big(E_n+2mc^2\big)\,\xi\,\big[\sqrt{M^2-m^2\big(1-\xi^2\big)}-m\,\xi\big]\Big\}$$ with $m$ as the mass of the neutron , $M$ as the mass of the scattering nucleus and $c$ as the speed of light in vacuum . the isotropically simulated angular distribution of the scattered neutrons can be translated into the more realistic laboratory distribution ( which is the relativistically transformed distribution of the neutrons isotropically scattered in the neutron - nucleus center of mass frame ) by means of the angular correction factor , as used in eq . ( [ ang ] ) : here is the neutron scattering angle in the center of mass frame , relative to the initial beam direction . the term is found by employing the relativistic scattering kinematics : the terms required for the evaluation of eq . ( [ eq4 ] ) are given by : here is a conventional relativistic term for the center of mass speed in the laboratory frame ( , with as the actual speed ) . additionally , and are the momentum and total energy of the scattered neutron in the laboratory frame . though by plugging eqs . ( [ eq4 ] ) to ( [ eq7 ] ) into eq . ( [ eq3 ] ) , a correction factor may be analytically calculated , the exact expression is long and tedious . therefore , the derivative from eq . ( [ eq3 ] ) is best calculated numerically . furthermore , the reader may note that the scattered neutron energy from eq . ( [ eq7 ] ) should correspond to , where the kinetic energy of the scattered neutron has been directly sampled in the simulations . however , the correction factor must be calculated for a fixed primary energy , instead of . hence , during the calculation of the correction factor , the momentum from eq . ( [ eq6 ] ) and energy from eq . ( [ eq7 ] ) must be treated as functions of and , and have to be varied accordingly . for brevity of expressions , we adopt the notation from eq . ( [ star ] ) for the set of all relevant parameters . the first step in determining the weighting factors , dependent on any combination of these parameters , is the normalization of the data by the total number of neutrons simulated with scattering energy and scattering angle .
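as a cross - check of the kinematic relation above , the expression can be coded directly and verified against its limits : forward scattering must return the scattering energy itself , and for low energies it must reproduce the familiar classical elastic - scattering result . the sketch below uses an illustrative , approximate nuclear mass and is not tied to any particular evaluation :

```python
import numpy as np

M_N_C2 = 939.565e6   # neutron rest energy in eV

def primary_energy(e_n, xi, M_c2, m_c2=M_N_C2):
    """Primary (pre-scattering) kinetic energy E_0 of a neutron observed with kinetic
    energy e_n after elastic scattering by cos(angle) = xi in the laboratory frame,
    off a nucleus of rest energy M_c2 (relativistic kinematics). Masses enter as
    rest energies, so all quantities are in the same units (here eV)."""
    M, m = M_c2, m_c2
    brace = ((M + m) * (e_n - (M - m))
             + (e_n + 2.0 * m) * xi * (np.sqrt(M**2 - m**2 * (1.0 - xi**2)) - m * xi))
    denom = xi**2 * e_n * (e_n + 2.0 * m) - (e_n - (M - m))**2
    return e_n * brace / denom

def primary_energy_classical(e_n, xi, M_c2, m_c2=M_N_C2):
    """Nonrelativistic comparison: E_0 = E_n (M+m)^2 / (m*xi + sqrt(M^2 - m^2(1-xi^2)))^2."""
    M, m = M_c2, m_c2
    return e_n * (M + m)**2 / (m * xi + np.sqrt(M**2 - m**2 * (1.0 - xi**2)))**2

M_NUCLEUS_C2 = 57.935 * 931.494e6   # illustrative rest energy of a medium-mass nucleus, in eV

# Forward scattering (xi = 1) must leave the energy unchanged:
print(primary_energy(1.0e6, 1.0, M_NUCLEUS_C2))          # -> 1.0e6

# Low-energy check at 90 degrees: relativistic and classical results should agree
# to within corrections of order E_n / (m c^2):
print(primary_energy(1.0e4, 0.0, M_NUCLEUS_C2),
      primary_energy_classical(1.0e4, 0.0, M_NUCLEUS_C2))
```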
to obtain the contribution to the detected counts per single neutron bunch, the statistical effect of a single scattered neutron isolated by the previous normalization must be amplified by the number of primary neutrons ( those from the primary beam ) that can be scattered by the angle , with the energy .furthermore , it is necessary to account for the probability that the primary neutron of energy will indeed be scattered .this probability may be expressed as the yield of elastically scattered neutrons from eq .( [ yield ] ) .finally , since we are applying the pulse height weighting technique , the most evident weighting factor is given by the weighting function , dependent on the energy deposited in the detector .combined , the previous considerations give rise to the total weighting factor that has to be applied to each detected count : since the scattered neutrons have been simulated isolethargically and isotropically ( in the laboratory frame ) , the term is simply determined as : where is the total number of simulated neutrons , with and as the minimum and maximum sampling energies , respectively .the angular distribution of neutrons scattered from the primary beam under the assumption of isotropic scattering in the neutron - nucleus center of mass frame is already known from eq .( [ ang ] ) . with the neutron neutron flux conveniently given in units of lethargy , the term is also easily expressed : combined , eqs .( [ term1 ] ) and ( [ term2 ] ) lead to : since the primary neutron energy may be calculated for any combination of the scattering energy and angle ( up to the possible kinematic limitations , dependent on the projectile and target masses ) , and are independent , making the derivative from eq .( [ ratio ] ) a partial one : finally , plugging eqs .( [ ratio ] ) and ( [ der ] ) back into eq .( [ w_start ] ) yields the full expression for the overall weighting factor : we remind that the derivative may be easily calculated from eq .( [ primary ] ) .we present a convenient formalism establishing the link between discrete data and continuous distributions that are built from these data .this formalism is especially useful when each datum must be applied its own weighting factors , dependent on more parameters or other parameters than those that appear as the arguments of the final distribution .we demonstrate the formalism by immediately applying it to the estimated neutron background from section [ novel ] . for this purpose ,let us consider the contribution to the detected counts weighted by the weighting factors , dependent on any combination of the physical parameters .as before , we adopt the notation from eqs .( [ short_cos ] ) and ( [ star ] ) . within the formalism of continuous distributions ,the contribution to the detected counts from an element of the parameter space defined by , , and , may be written as : with as the elementary contribution to the unweighted counts . on the other hand , when building the distribution from discrete data , the contribution appears as a union of all appropriately weighted counts from an associated element of the parameter space : with as the total number of detected counts from the parameter space volume . in eqs .( [ rep1 ] ) and ( [ rep2 ] ) holds a prominent place because we are expressing the final distribution ( the neutron background from section [ novel ] ) as a function of the reconstructed energy . 
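as an illustration of the bookkeeping described above, the sketch below samples scattered neutrons isolethargically in energy and isotropically in the laboratory angle, evaluates the corresponding sampled density, combines the named ingredients into a schematic per-count weight, and finally bins the weighted counts into a distribution of the reconstructed energy. all physics inputs (flux, yield, angular factor, pulse-height weighting function), the sampling range and the number of simulated neutrons are hypothetical placeholders, and the derivative of the primary energy with respect to the scattering energy that enters the full expression is left out; the sketch shows the structure of the calculation only, not the actual functions used in the analysis.

import math
import random

E_MIN, E_MAX = 1.0, 1.0e7      # assumed sampling range of the scattering energy (eV)
N_SIM = 100_000                # total number of simulated scattered neutrons

def sample_scattered_neutron():
    """isolethargic scattering energy (flat in ln E) and isotropic laboratory angle."""
    u, v = random.random(), random.random()
    return E_MIN * (E_MAX / E_MIN) ** u, 2.0 * v - 1.0

def sampled_density(e_scat):
    """density of simulated neutrons per unit energy and per unit cos(angle):
    N_SIM / (2 * E * ln(E_MAX / E_MIN))."""
    return N_SIM / (2.0 * e_scat * math.log(E_MAX / E_MIN))

# placeholder physics inputs; hypothetical stand-ins, not the paper's data
def flux_per_lethargy(e_primary):      return 1.0e5    # primary-beam flux, per lethargy
def elastic_yield(e_primary):          return 0.05     # probability of scattering off the sample
def lab_angular_factor(e_primary, c):  return 0.5      # laboratory-frame angular distribution
def phwt_weight(e_deposited):          return e_deposited / 1.0e6   # pulse-height weighting

def count_weight(e_primary, e_scat, cos_lab, e_dep):
    """schematic per-count weight: the product of the primary-beam, yield, angular and
    pulse-height factors named in the text, divided by the sampled density (the derivative
    of the primary energy with respect to the scattering energy is omitted here)."""
    num = (flux_per_lethargy(e_primary) / e_primary    # per-lethargy flux converted to per-energy
           * elastic_yield(e_primary)
           * lab_angular_factor(e_primary, cos_lab)
           * phwt_weight(e_dep))
    return num / sampled_density(e_scat)

def weighted_distribution(counts, edges):
    """build the continuous-looking distribution from discrete counts: each detected count
    contributes its weight to the bin of its reconstructed energy, divided by the bin width."""
    hist = [0.0] * (len(edges) - 1)
    for e_reco, w in counts:
        for b in range(len(hist)):
            if edges[b] <= e_reco < edges[b + 1]:
                hist[b] += w / (edges[b + 1] - edges[b])
                break
    return hist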
combining eqs .( [ rep1 ] ) and ( [ rep2 ] ) leads to a direct correspondence between the set of discrete data and their representation by the continuous distribution : 00 p. e. koehler , r. r. winters , k. h. guber , et al .c 62 ( 2000 ) 055803 .r. plag , m. heil , f. kppeler , et .al . , nucl .instr . and meth . a 496 ( 2003 ) 425 .c. guerrero , a. tsinganis , e. berthoumieux , et al . , eur .j. a 49 ( 2013 ) 27 . c. wei , e. chiaveri , s. girod , et al .instr . and meth . a 799 ( 2015 ) 90 .s. barros , i. bergstrm , v. vlachoudis and c. wei , j. instrum .10 ( 2015 ) p09003 .s. agostinelli , j. allison , k. amako , et al .instr . and meth . a 506 ( 2003 ) 250 .p. ugec , n. colonna , d. bosnar , et al .instr . and meth . a 760 ( 2014 ) 57 .p. ugec , m. barbagallo , n. colonna , et al .c 89 ( 2014 ) 014605 .p. ugec , n. colonna , d. bosnar , et al .c 90 ( 2014 ) 021601(r ) .k. h. guber , h. derrien , l. c. leal , et al ., phys . rev .c 82 ( 2010 ) 057601 .f. gunsing , e. berthoumieux , g. aerts , et al .c 85 ( 2012 ) 064601 .r. l. macklin , j. h. gibbons , phys .159 ( 1967 ) 1007 .u. abbondanno , g. aerts , h. alvarez , et al .instr . and meth . a 521 ( 2004 )m. b. chadwick , m. herman , p. obloinsk , et al .data sheets 112 ( 2011 ) 2887 . the n_tof collaboration , _ neutron capture cross section measurements of , and at n_tof _ , proposal to the isolde and neutron time - of - flight committee , cern - intc-2009 - 025 / intc - p-269 20/04/2009 .f. mingrone , c. massimi , s. altstadt , et al .data sheets 119 ( 2014 ) 18 . | the relation between the neutron background in neutron capture measurements and the neutron sensitivity related to the experimental setup is examined . it is pointed out that a proper estimate of the neutron background may only be obtained by means of dedicated simulations taking into account the full framework of the neutron - induced reactions and their complete temporal evolution . no other presently available method seems to provide reliable results , in particular under the capture resonances . an improved neutron background estimation technique is proposed , the main improvement regarding the treatment of the neutron sensitivity , taking into account the temporal evolution of the neutron - induced reactions . the technique is complemented by an advanced data analysis procedure based on relativistic kinematics of neutron scattering . the analysis procedure allows for the calculation of the neutron background in capture measurements , without requiring the time - consuming simulations to be adapted to each particular sample . a suggestion is made on how to improve the neutron background estimates if neutron background simulations are not available . neutron sensitivity , neutron background , geant4 simulations , neutron time - of - flight , n_tof , neutron capture , neutron scattering |
in recent times cellular automata based simulations of traffic flow have gained considerable importance . by extending the range of rules from nearest neighbours to a range of 5 grid sites and introducing 6 discrete velocities nagel and schreckenberg have found a striking resemblance of simulation and realistic traffic behaviour . for schadschneider and schreckenberghave found an analytic solution . for higher ,these analytic approaches lead to good approximations for the average behavior .further analytic results can be found in .nagel pointed out the strong connections between particle hopping models and fluid - dynamical approaches for traffic flow .much less is known about modelling of multi - lane traffic .this statement is not only true for particle hopping models for traffic flow , but for traffic flow theory in general . queueing models are not truly multilane , but emulate multiple lanes by switching the order of vehicles on one lane whenever a passing would have occurred in reality .fluid - dynamical models incorporate multi - lane traffic only by parametrization , although sometimes based on kinetic theory .traditional car - following theory ( see ) by and large never dealt with multi - lane traffic .modern microscopic traffic simulation models ( e.g. ) obviously handle multi - lane traffic by necessity .cremer and coworkers even treat multi - lane traffic in the context of cellular automata models . yet, all these papers approach the problem by using heuristic rules of human behavior , without checking which of these rules exactly cause which kind of behavior . in validations then ( e.g. ) , it often enough turns out that certain features of the model are not realistic ; and because of the heuristic approach it is difficult to decide which rules have to be changed or added in order to correct the problem . 
for that reason, a more systematic approach seems justified .our approach here is to search for _ minimal _ sets of rules which reproduce certain macroscopic facts .the advantage is that relations between rules and macroscopic behavior can be more easily identified ; and as a welcome side - effect one also obtains higher computational speed .we again choose particle hopping models as starting point for this investigation because their highly discrete nature reduces the number of free parameters even further .it is clear that a similar analysis could be applied to continuous microscopic models , hopefully benefiting from the results obtained in this and following papers .nagatani examined a two lane system with completely deterministic rules and , where cars either move forward or change lanes .a very unrealistic feature of this model are states where blocks of several cars oscillate between lanes without moving forward at all .he corrected this by introducing randomness into the lane changing .latour has developed the two lane model which served as the basis for the one discussed here .rickert used a more elaborate rule set for two lane traffic which reproduced the phenomenon of increased flow with an imposed speed limit .for the convenience of the reader we would like to outline the single lane model .the system consists of a one dimensional grid of sites with periodic boundary conditions .a site can either be empty , or occupied by a vehicle of velocity zero to .the velocity is equivalent to the number of sites that a vehicle advances in one update provided that there are no obstacles ahead .vehicles move only in one direction .the index denotes the number of a vehicle , its position , its current velocity , its maximum speed .we now allow for different esired velocities to include an inhomogeneous fleet ] , the number of the preceding vehicle , the width of the gap to the predecessor . at the beginning of each time step the rules are applied to all vehicles simultaneously ( parallel update , in contrast to sequential updates which yield slightly different results ) .then the vehicles are advanced according to their new velocities . * * if * * then * ( * s1 * ) * * if * * then * ( * s2 * ) * * if * * and * * then * ( * s3 * ) * s1 * represents a linear acceleration until the vehicle has reached its maximum velocity .* s2 * ensures that vehicles having predecessors in their way slow down in order not to run into them . in * s3* a random generator is used to decelerate a vehicle with a certain probability modelling erratic driver behaviour .the free flow average velocity is ( for ) .the single lane model is not capable of modelling realistic traffic mainly for one reason : a realistic fleet is usually composed of vehicles types having different desired velocities . introducing such different vehicle types in the single lane model only results in _ platooning _ with slow vehicles being followed by faster ones and the average velocity reduced to the free flow velocity of the slowest vehicle .we introduce a two lane model consisting of two parallel single lane models with periodic boundary conditions and four additional rules defining the exchange of vehicles between the lanes .the update step is split into two sub - steps : 1 .check the exchange of vehicles between the two lanes according to the new rule set .vehicles are only moved _sideways_. they do not _ advance_. 
note that in reality this sub - step regarded by itself seems unfeasible since vehicles are usually incapable of purely transversal motion .only together with the second sub - step our update rules make physically sense .+ this first sub - step is implemented as strict parallel update with each vehicle making its decision based upon the configuration at the beginning of the time step .2 . perform independent single lane updates on both lanes according to the single lane update rules . in this second sub - stepthe resulting configuration of the first sub - step is used .the most important parameters of the two lane model are as follows : symmetry : : : the rule set defining the lane changing of vehicles can be both symmetric and asymmetric .the symmetric model is interesting for theoretical considerations whereas the the asymmetric model is more realistic .stochasticity : : : the single lane model proved that a strictly deterministic model is not realistic : the model did not show the desired spontaneous formation of jams . in the case of the two lane model the lack of stochasticity in combination with the parallel update results in strange behaviour of slow platoons occupying either lane : since none of the vehicles has reached its maximum velocity and all evaluate the other lane to be better there is collective change sidewise which is usually reversed over and over again until the platoon dissolves or the platoon is passed by other vehicles .+ we introduce stochasticity into the two lane rule set to reduce the effective number of lane changes and thus dissolve those platoons .the simulation also revealed that is effect is also important in the asymmetric free flow case ( see [ pingpong ] ) .direction of causality : : : in the single lane model a vehicle only looks ahead (= downstream = in the direction of vehicle flow ) so that causality can only travel upstream (= in the opposite direction of vehicle flow ) . a reasonable lane changingrule must include a check of sites _ upstream _ in order not to disturb the traffic of the destination lane .this would result in causality travelling downstream .a somewhat generic starting point for modeling passing rules is the following : ( t1 ) you look ahead if somebody is in your way .( t2 ) you look on the other lane if it is any better there .( t3 ) you look back on the other lane if you would get in somebody else s way .technically , we keep using for the number of empty sites ahead in the same lane , and we add the definitions of for the forward gap on the other lane , and for the backward gap on the other lane .note that if there is a vehicle on a neighbouring site both return -1 .the generic multi - lane model then reads as follows .a vehicle changes to the other lane if all of the following conditions are fulfilled : * ( * t1 * ) , * ( * t2 * ) , * ( * t3 * ) , and * ( * t4 * ) . , , and are the parameters which decide how far you look ahead on your lane , ahead on the other lane , or back on the other lane , respectively . according to the before mentioned characteristics we associate the parameters of our rule set : [ cols="^,^,^",options="header " , ]as an example , we start with , , , and .that means that both and are essentially proportional to the velocity , whereas looking back is not : it depends mostly on the expected velocity of other cars , not on one s own . in the symmetric version of this model ,cars remain on their lane as long as they do nt `` see '' anybody else .if they see somebody ahead on their own lane ( i.e. 
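to make the two sub-steps and the lane-changing rules concrete, the following is a minimal sketch of one parallel update of the two-lane model, with vehicles encoded by their current velocity on two periodic rows of sites. it is an illustration only: the lattice size, the braking and lane-changing probabilities and, in particular, the concrete look-ahead and look-back parameters (here l = l_o = v + 1 and l_o,back = v_max) are assumptions made for the example rather than values taken from the paper, and no attempt is made at the efficiency needed for the large systems simulated below.

import random

L_SITES = 200          # sites per lane (illustrative; the systems used below are much longer)
V_MAX = 5
P_BRAKE = 0.5          # randomization of the single-lane rules
P_CHANGE = 1.0         # lane-changing probability (rule t4)
EMPTY = -1             # an empty site; otherwise the site holds the vehicle's velocity

def gaps(lanes, lane, pos):
    """forward gap on the own lane, forward and backward gaps on the other lane; a vehicle
    on the neighbouring site of the other lane makes both other-lane gaps -1."""
    own, other = lanes[lane], lanes[1 - lane]
    def fwd(row, p):
        for d in range(1, L_SITES):
            if row[(p + d) % L_SITES] != EMPTY:
                return d - 1
        return L_SITES
    def back(row, p):
        for d in range(1, L_SITES):
            if row[(p - d) % L_SITES] != EMPTY:
                return d - 1
        return L_SITES
    if other[pos] != EMPTY:
        return fwd(own, pos), -1, -1
    return fwd(own, pos), fwd(other, pos), back(other, pos)

def lane_change_substep(lanes):
    """sub-step 1: purely sideways moves, decided in parallel on the old configuration."""
    decisions = []
    for lane in (0, 1):
        for pos, v in enumerate(lanes[lane]):
            if v == EMPTY:
                continue
            gap, gap_o, gap_back = gaps(lanes, lane, pos)
            l_ahead, l_other, l_back = v + 1, v + 1, V_MAX       # assumed parameter choice
            if (gap < l_ahead and gap_o > l_other and gap_back > l_back
                    and random.random() < P_CHANGE):
                decisions.append((lane, pos, v))
    for lane, pos, v in decisions:
        lanes[lane][pos] = EMPTY
        lanes[1 - lane][pos] = v

def single_lane_substep(lanes):
    """sub-step 2: independent single-lane updates (rules s1-s3) followed by movement."""
    for lane in (0, 1):
        row, new_row = lanes[lane], [EMPTY] * L_SITES
        for pos, v in enumerate(row):
            if v == EMPTY:
                continue
            gap = gaps(lanes, lane, pos)[0]
            v = min(v + 1, V_MAX)                 # s1: accelerate
            v = min(v, gap)                       # s2: do not run into the predecessor
            if v > 0 and random.random() < P_BRAKE:
                v -= 1                            # s3: random deceleration
            new_row[(pos + v) % L_SITES] = v
        lanes[lane] = new_row

def step(lanes):
    lane_change_substep(lanes)
    single_lane_substep(lanes)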
) , then they check on the other lane if they can switch lanes and do so if possible . afterwards ,if they are satisfied , they remain on this lane until they become dissatisfied again . in the asymmetric version , cars always try to return to the right lane , independent of their situation on the left lane .space - time - plots both of the symmetric and the asymmetric version are shown in figs .[ spacetime_symm_lb5 ] and [ spacetime_asym_lb5 ] . for these plots, we simulated a system with a length of 12,000 sites of which we plot 400 sites in 400 consecutive time - steps .the density is 0.09 which is slightly above the density of maximum flow ( see below ) .vehicles go from left to right ( spatial axis ) and from top to bottom ( time axis ) .traffic jams appear as solid areas with steep positive inclination whereas free flow areas are light and have a more shallow negative inclination .each plot is split into two parts : the left part containing the left lane and the right part containing the right lane , respectively .note that plot [ spacetime_asym_lb5 ] ( left lane ) gives a good impression of the great number of lane changes through the high frequency of short vehicle life lines appearing and disappearing : these are vehicles that temporarily leave the right lane to avoid an obstacle .they go back to their old lane as soon as the obstacle has been passed .it will be confirmed quantitatively that indeed the rate of lane changes is much higher for the asymmetric model than for the symmetric model . before going on, we would like to describe our standard simulations set - up for the following observations .note that quantitative simulation results were obtained with a much larger system than the qualitative space - time plots .we simulated a system of length with closed boundary conditions , i.e. traffic was running in a loop .we started with random initial conditions , i.e. cars were randomly distributed on both lanes around the complete loop with initial velocity .since the system is closed , the average density per lane is now fixed at where the `` 2 '' stands for the number of lanes .the simulation was then started , 1000 time steps were executed to let the transients die out , and then the data extraction was started .the flow which is found in the fundamental diagrams is both space and time averaged , i.e. values for lane change frequency and ping pong lane change frequency ( see below ) are obtained by the same averaging procedure except that statistics are gathered every time step , since _ by definition _ping pong lane changes occur in subsequent time steps .we usually used , and the same procedure was repeated for each density found in the plots . with a resolution of an average plot took about 22 hours of computation time on a sparc 10 workstation . by comparing these models with each other and with earlier results ,we make the following observations ( fig . [ fig_flow_lb5_c1_0 ] unless otherwise noted ) : \(i ) both for the symmetric and the asymmetric version , maximum flow is higher than twice the maximum flow of the single lane model ( fig .[ fig_single_multi ] ) . 
which means that , in spite of the additional disturbances which the lane changing behavior introduces into the traffic flow , the general effects are beneficial , probably by diminishing large deviations from `` good '' flow patterns .\(ii ) both for the symmetric and the asymmetric version , the combined 2-lane flow reaches a maximum at , which is at or near a sharp bend of the flow curve .\(iii ) for the asymmetric model , flow on the left lane keeps increasing slightly for , but this is over - compensated by the decreasing flow on the right lane .\(ii ) and ( iii ) together lead one to the speculation that maximum flow in the asymmetric case here actually is connected to a `` critical '' flow on the right lane and a `` sub - critical '' flow on the left lane .any addition of density beyond here leads to occasional break - downs on the right lane and thus to a much lower flow there .obviously , such interpretations would have to be clarified by further investigations , and the word `` critical '' would have to be used with more care , such as is pointed out in for the single lane case .\(iv ) for both lanes combined , the curves for symmetric and asymmetric traffic actually look fairly similar .if the above interpretation is right , this means that the overall density of maximum flow is a fairly robust quantity , but one can stabilize one lane at a much higher density if this density is taken from the other lane .\(v ) at very low densities in the asymmetric case , flow on the left lane , , only slowly builds up .this is to be expected , since at least two cars have to be close to each other to force one of them on the left lane , leading to a mean field solution of for .\(vi ) for , flows on both lanes in the asymmetric models are fairly similar and similar to the lane flows in the symmetric models .to get some further insight into the lane changing dynamics , fig .[ fig_change_lb5 ] shows the frequency of lane changing both for the asymmetric and the symmetric model .\(i ) note that in the asymmetric case there is a sharp bend in the curve , which is not found for the symmetric case .this bend is also near , giving further indication that the dynamics above and below are different .\(ii ) for the symmetric case , lane changing occurs with less than half the frequency compared to the asymmetric case .\(iii ) in the symmetric case , the lane changing frequency per site for small densities increases approximately quadratically up to rather high densities , whereas the same quantity for the asymmetric model grows approximately linearly already for fairly low densities .suggests that for the symmetric case a mean field description of interaction , , would be valid up to comparably high densities .obvious that this does not work .since the vehicles have a strong tendency to be on the right lane , already a density of 0.04 per lane would be a density of 0.08 if everybody were on the right lane .yet , is known to be already a density of high interaction in single lane traffic .since this high interaction tends to spread vehicles out , each additional vehicle simply adds its own share of lane changes , making the relation roughly linear .\(iv ) the maximum number of lane changes occurs at densities much higher than .the lane changing probability _ per vehicle _ , however , reaches a maximum below the critical density ( fig .[ fig_lanechangespercar ] ) .an artifact of the so far described algorithm is easily recognizable when one starts with all cars on the same lane , say the right one . 
assuming fairly high density , then all cars see somebody in front of them , but nobody on the left lane . in consequence , everybody decides to change to the left lane , so that _ all _ cars end up on the left lane . here , they now all decide to change to the right lane again , etc ., such that these coordinated lane changes go on for a long time ( `` cooperative ping - pong effect '' ) .this effect has already been observed by nagatani for the much simpler two - lane model .one way around this is to randomize the lane changing decision .the decision rules remain the same as above , but even if rules t1 to t3 lead to a yes , it is only accepted with probability . with this fourth lane changing rule ,patterns like the above are quickly destroyed . in order to quantify the effects of a different ,simulations with were run .the observations can be summarized as follows : \(i ) the flow - density curves are only marginally changed ( fig . [ fig_flow_lb5 ] ) .\(ii ) the frequency of lane changes is decreased in general , but , except for in the asymmetric case , by much less than the factor of two which one would naively expect ( fig .[ fig_change_lb5 ] ) .that means that usually there is a _ dynamic _reason for the lane change , that is , if it is not done in one time step due , then it is re - tried in the following time step , etc .\(iii ) to better quantify in how far a actually changes the pattern of vehicles changing lanes back and forth in consecutive time steps , we also determined the frequency of `` ping pong lane changes '' , where a car makes two lane changes in two consecutive iterations .obviously , there are left - right - left ( lrl ) and right - left - right ( rlr ) ping pong lane changes .[ fig_pingpong_lb5 ] shows that reducing the probability to change lanes , , from 1 to 1/2 has indeed a beneficial effect : the number of ping pong lane changes decreases by about a factor of five .yet , for the symmetric case , the frequency of ping pong lane changes is more than an order of magnitude lower in both cases anyway .this indicates that in simulations starting from random initial conditions , the cooperative effect as described further above as cooperative ping - pong effect does not really play a role for the statistical frequency , because this effect should be the same for the symmetric and the asymmetric model . instead ,the cause of the ping pong lane changes in the asymmetric model is as follows : assume just two cars on the road , with a gap of 5 between them .with respect to velocity , both cars are in the free driving regime , and their velocities will fluctuate between 4 and 5 . now assume that the following car has velocity 5 from the last movement .that means that it looks 6 sites ahead , sees the other car , and changes to the left lane .then , assume that in the velocity update , the leading car obtains velocity 5 and the following car obtains velocity 4 .then , after the movement step , there is now a gap of 6 between both cars , and in the lane changing step , the follower changes back to the right lane . and this can happen over and over again in the asymmetric model , but will not happen in the symmetric model : once the following car in the above situation has changed to the left lane , it will remain there until it runs into another car on the left lane . 
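the ping pong lane changes discussed above are easy to count once one remembers, for every vehicle, the lane it occupied in the preceding time steps. the sketch below shows such bookkeeping for the left-right-left and right-left-right patterns; it assumes that vehicles carry persistent identifiers and a per-step record of their lane, which is an implementation convenience rather than anything prescribed by the model.

def update_pingpong_stats(prev_lane, curr_lane, next_lane, stats):
    """count a ping pong lane change: two lane changes in consecutive iterations that
    return the vehicle to its original lane (lanes encoded as 0 = right, 1 = left)."""
    if prev_lane != curr_lane and curr_lane != next_lane and prev_lane == next_lane:
        key = "rlr" if prev_lane == 0 else "lrl"
        stats[key] = stats.get(key, 0) + 1

def count_pingpongs(lane_history):
    """lane_history[i] is the list of lanes occupied by vehicle i, one entry per time step."""
    stats = {}
    for lanes in lane_history:
        for t in range(1, len(lanes) - 1):
            update_pingpong_stats(lanes[t - 1], lanes[t], lanes[t + 1], stats)
    return stats

# example: one vehicle bouncing right-left-right once, another never changing lanes
print(count_pingpongs([[0, 1, 0, 0, 0], [1, 1, 1, 1, 1]]))   # -> {'rlr': 1}

recording the vehicle's velocity alongside its lane at each step lets the same bookkeeping separate changes made at low and at high speed.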
to investigate this second kind of ping pong lane changes we ran simulations recording whether a ping pong lane change was made at low velocities or high velocities .[ fig_slowfastpingpong ] shows a very distinct peak for _ fast _ ping pong changes at low densities whereas _slow _ ping pong changes have a lower peak at higher densities similar to that of the symmetric case .this gives a strong indication that most lane changes are actually caused by the `` tailgating effect '' as described above , which is an artifact of the rules .it is , though , to be expected that this behavior does not have a strong influence on the overall dynamics : it mostly happens in the free driving regime ; as soon as , for example , another car is nearby in the left lane , it is suppressed by the looking back and forward on the other lane .we would like to mention two other parameter combinations .they are presented because they generate artifacts which contradict the common sense one would apply to the phenomena of traffic flow .\(i ) in the first case the lookahead is reduced to instead of . while this change is negligible for vehicles at higher velocities it becomes crucial to a vehicles stopped in a jam : assuming the current velocity to be zero the vehicle looks _ zero _ sites ahead and decides to remain in the current lane due to the non fulfilled rule .this state will persist until the predecessor moves even if the other lane is _ completely _ free ! fig .[ fig_model1 ] shows the impact the reduced look ahead on overall flow : for density there is no perceptible flow in the right lane which corresponds to a traffic jam that occupies more or less the whole right lane .\(ii ) in the second case the look - back is reduced to .vehicles no longer check whether their lane changing could have a disadvantageous effect on the other lane which corresponds to a very egoistic driver behaviour .[ fig_flow_lbx_c1_0 ] shows flow density relationships for look - back and .it is obvious that the decrease in look - back also decreases the maximum flow at critical densities .moreover seems to split the curves of the symmetric and asymmetric cases which used to be almost identical for : the lack of look - back is much more disadvantageous for asymmetric than for symmetric rules . in figs .[ spacetime_symm_lb0 ] and [ spacetime_asym_lb0 ] we used for the symmetric and asymmetric rule sets with one plot per lane .it is clearly visible ( compare to figs .[ spacetime_symm_lb5 ] and [ spacetime_asym_lb5 ] ) how completely disrupts the laminar flow regime .vehicles change lanes without looking back ; and due to the formulation of the model this does not cause accidents , but causes the obstructed vehicles to make sudden stops . since these stops are caused more or less randomly , the regime becomes much more randomly disturbed than before , somewhat reminiscent of the asymmetric stochastic exclusion process ( see ) . as seen before the effect is even more drastic for the asymmetric rule set since the number of lane changes is higher than in the symmetric case . in fig .[ spacetime_asym_lb0 ] with dynamics are dominated by small traffic jams caused by lane changes , while in fig .[ spacetime_symm_lb5 ] there are still some fairly laminar areas .compared to reality ( e.g. ) , the lane change frequency in the asymmetric models presented here is by about a factor of 10 too high . 
using would correct this number , but is dynamically not a good fix : it would mean that a driver follows a slower car in the average for 10 seconds before he decides to change lanes .besides , it was shown that about 90% of the lane changes in the asymmetric models here are produced by an artificial `` tailgating dance '' , where a follower changes lanes back and forth when following another car .it remains an open question in how far artifacts like this can be corrected by the current modeling approach or if it will be necessary to , e.g. , introduce memory : if one remembers to just have changed lane from right to left , one will probably stay on that lane for some time before changing back .another defect of the models presented in this paper is that the maximum flow regime is most probably represented incorrectly . both measurements ( e.g. or fig .3.6 in ) or everyday observation show that real traffic shows a `` density inversion '' long before maximum flow , that is , more cars drive on the left lanes than on the right lanes .this effect is more pronounced for countries with higher speed limits .let us denote by the density of maximum flow of the _ single lane _ case .it follows for the real world two lane case that at a certain point the left lane will have a density higher than this density whereas the right lane has a density lower than . when further increasing the overall density , then the flow on the left lane will decrease whereas it still increases on the right lane .it is unclear if the net flow here increases or decreases ; but it should become clear that instabilities here are caused by the left lane first .this is in contrast to the models of this paper , where the right lane reaches the critical density first .work dealing with this problem is currently in progress .also the effect of different maximum velocities will be addressed in later papers .we have presented straightforward extensions of the cellular automata approach to traffic flow so that it includes two - lane traffic .the basic scheme introduced here is fairly general , essentially consisting of two rules : look ahead in your own lane for obstructions , and look in the other lane if there is enough space .the flow - density relations of several realizations of this scheme have been investigated in detail ; possible artifacts for certain parameter choices have been pointed out . in general , there seem to be two important lessons to be drawn from our investigations : * checking for enough space on the other lane ( `` look - back '' ) is important if one wants to maintain the dynamics consisting of laminar traffic plus start stop waves which is so typical for traffic . * especially in countries with high speed limits , observations show a density inversion near maximum flow , that is , the density is higher on the left lane than on the right lane .this effect is not reproduced by our models ( work in progress ) . yet, in general , it seems that the approach to multi lane traffic using simple discrete models is a useful one for understanding fundamental relations between microscopic rules and macroscopic measurements .we thank a. bachem and c. barrett for supporting mr s and kn s work as part of the traffic simulation efforts in cologne ( `` verkehrsverbund nrw '' ) and los alamos ( transims ) .we also thank them , plus ch .gawron , t. pfenning , s. rasmussen , and p. 
wagner for help and discussions .computing time on the parsytec gcel-3 of the zentrum fr paralleles rechnen kln , on the paragon xp / s 5 and xp / s 10 of the zentralinstitut fr angewandte mathematik ( zam ) of the forschungszentrum jlich and on the sgi-1 of the regionales rechenzentrum kln is gratefully acknowledged .we further thank all persons in charge of maintaining the above mentioned machines .m. mcdonald , m.a .brackstone , simulation of lane usage characteristics on 3 lane motorways , paper no . 94at051 , in : proceedings of the 27th international symposium on automotive technology and automation ( isata ) ( automotive automation ltd , croydon , england , 1994 ) , p. 365 . | we examine a simple two lane cellular automaton based upon the single lane ca introduced by nagel and schreckenberg . we point out important parameters defining the shape of the fundamental diagram . moreover we investigate the importance of stochastic elements with respect to real life traffic . 0.3 |
volume - phase holographic ( vph ) gratings potentially have many advantages over classical surface - relief gratings ( barden , arns , & colburn 1998 ; see also barden et al . 2000 ) , and are planned to be used in a number of forthcoming instruments ( e.g. , aa ; bridges et al .while applications to optical spectrographs only are currently being considered , vph gratings will also be useful to near - infrared spectrographs if the performance at low temperatures is satisfactory .in particular , its diffraction efficiency and angular dispersion should be confirmed .contraction of dichromated gelatin with decreasing temperature could cause variations in the line density and profile of diffraction efficiency ( the thickness of the gelatin layer is one of the parameters defining diffraction efficiency ) . since cooling andheating cycles might cause some deterioration of a vph grating and reduce its life time , we also need to see whether these characteristics vary with the successive cycles . in this paper , results from measurements of a sample grating at 200 and at room temperature are presented .a picture of the grating investigated is shown in figure [ vph ] .this grating was manufactured by ralcon development lab , and its diameter is about 25 cm .the line density is 385 lines / mm and thus the peak of diffraction efficiency is around 1.3 m at the bragg condition when the incident angle of an input beam to the normal of the grating surface is 15 .the measurements are performed at wavelengths from 0.9 m to 1.6 m . the target temperature , size and line density of the grating , and wavelengths investigatedare nearly the same as those adopted for the fibre - multi object spectrograph ( fmos ; e.g. , kimura et al . 2003 ) , which is one of the next generation instruments for the 8.2 m subaru telescope with commissioning expected in 2004 : this instrument will be observing at wavelengths from 0.9 m to 1.8 m , and a vph grating will be used as an anti - dispersing element in the near - infrared spectrograph which is operated at 200 to reduce thermal noise .before starting measurements using the cryogenic test facility ( see next section ) , we investigated the diffraction efficiency of the vph grating at a room temperature .the measurements were performed manually on the optical bench .the procedures and results obtained are summarised below . in figure [ config ] , the overall configuration of the optical components used for the measurements is indicated ( the detailed information for the main components are listed in table [ comps ] ) .light exiting from the monochromator is collimated and used as an input beam to illuminate the central portion of the vph grating .the spectral band - width of this input beam is set by adjusting the width of the output slit of the monochromator . 
the slit width and the corresponding spectral band - widthwere set to 0.5 mm and 0.01 m , respectively , throughout the measurements ; the beam diameter was set to 2 cm by using an iris at the exit of the lamp house .the input beam is diffracted by the grating and the camera , composed of lenses and a near - infrared detector ( 320 256 ingaas array ) , is scanned so as to capture the diffracted beam .the output slit of the monochromator is thus re - imaged on the detector .since the detector has some sensitivity at visible wavelengths , a visible blocking filter which is transparent at wavelengths longer than 0.75 m is inserted after the monochromator to reduce contamination of visible light from a higher order .the basic measurement procedures are as follows .first , the brightness of the lamp and the wavelength of light exiting from the monochromator are fixed ( the brightness of the lamp is kept constant by a stabilised power supply during the measurement cycle at a given wavelength ) , and the total intensity included in the image of the slit is measured without the vph grating . then , the vph grating is inserted at an angle to the optical axis , and the intensities of the zero and first order ( ) diffracted light are measured .the diffraction angle is also recorded .next , the grating is set at a different incident angle and the intensities of the diffracted light and diffraction angles are measured .after these measurements are repeated for all the incident angles of interest , a different wavelength is chosen and the same sequence is repeated . the brightness of the lamp can be changed when moving from one wavelength to another : a higher brightness was used at shorter wavelengths because the system throughput is lower . in figure[ eff_c ] , the diffraction efficiencies measured are plotted against wavelengths for the cases where incident angles are 15 ( upper panel ) and 20 ( lower panel ) .open and solid dots show the efficiency profiles for the zero and first order ( ) diffracted light , respectively .since random errors are dominated by fluctuations of the bias level of the detector on a short time scale ( sec ) , the error bars plotted are calculated from a typical value of the fluctuations .it is found from this figure that particularly for the incident angle of 15 , the peak of the diffraction efficiency reaches 80% and the efficiency exceeds 50% over the wavelength range from 0.9 m to 1.6 m .the profiles can be well reproduced by theoretical calculations based on the coupled wave analysis ( kogelnik 1969 ) , which are shown by the solid lines in the figure . in these calculations ,a thickness of the dichromated gelatin layer of 12 m and a refractive index modulation amplitude of 0.05 are assumed .energy losses by surface reflections at boundaries between glass and air ( 10% ) are also included .although the energy losses at boundaries between glass and gelatin are likely to be much smaller because their refractive indices are very similar , they might explain that the measured diffraction efficiency tends to be slightly lower than the theoretical calculation .( the energy lost by internal absorption of the dichromated gelatin layer is estimated to be 1% below 1.8 m ; e.g. , barden et al .in the following , we describe the measurements at 200 as well as those at a room temperature , both of which were performed using the cryogenic test facility as shown below . 
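before turning to the cryogenic measurements, the comparison with the coupled wave analysis above can be reproduced at the level of a back-of-the-envelope estimate. the sketch below evaluates the first-order bragg wavelength for the quoted line density and incidence angle, and kogelnik's on-bragg diffraction efficiency for an unslanted transmission grating using the gelatin thickness and index modulation quoted above; the mean gelatin refractive index and the treatment of the surface-reflection losses as a single multiplicative factor are simplifying assumptions of the sketch, not values taken from the measurements.

import math

LINE_DENSITY = 385e3        # lines per metre
THETA_AIR = math.radians(15.0)
D_GEL = 12e-6               # thickness of the dichromated gelatin layer (m)
DELTA_N = 0.05              # refractive index modulation amplitude
N_GEL = 1.27                # assumed mean refractive index of the gelatin
REFLECTION_LOSS = 0.10      # quoted glass-air reflection losses, lumped into one factor

# first-order bragg condition for a symmetric transmission grating: lambda = 2 sin(theta) / rho
bragg_wavelength = 2.0 * math.sin(THETA_AIR) / LINE_DENSITY
print(f"bragg wavelength ~ {bragg_wavelength * 1e6:.2f} um")   # ~1.34 um, i.e. around 1.3 um

# kogelnik's on-bragg efficiency for an unslanted, lossless transmission grating:
# eta = sin^2( pi * delta_n * d / (lambda * cos(theta_inside)) )
theta_in = math.asin(math.sin(THETA_AIR) / N_GEL)
nu = math.pi * DELTA_N * D_GEL / (bragg_wavelength * math.cos(theta_in))
eta_peak = math.sin(nu) ** 2 * (1.0 - REFLECTION_LOSS)
print(f"estimated peak first-order efficiency ~ {eta_peak:.2f}")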
throughout these measurements, we used a different vph grating from that used in the pre - test at room temperature , although both have the same specifications . in figures[ config1 ] , schematic views of the fore - optics and the optics inside the cryogenic chamber are indicated .pictures of these facilities are shown in figure [ pic ] .the fore - optics and light path before the window of the cryogenic chamber and the camera are the same as those used in the warm pre - test .the slit width and the spectral band - width were set to 0.1 mm and 2.0 m , respectively , throughout the measurements , and the beam diameter was set to 2 cm by using an iris before the window .the light path in the cryogenic chamber is described as follows : the input beam illuminates the central portion of the vph grating and is diffracted by the grating .the diffracted light is captured by scanning the pick - off arm and is delivered to the camera on the top of the chamber by 3 pick - off mirrors .the output slit of the monochromator is thus re - imaged onto the detector .this procedure enables measurements to be made at a variety of incident and diffraction angles without having to mount the detector inside the cryostat .the measurement procedures are the same as those in the pre - test , except that all the measurements were performed with the vph grating in place .we initially perform the measurements at a room temperature ( 280 ) . then, we repeat the measurements at 200 before returning to 280 to repeat the cycle . when we cool the vph grating , we monitor the temperature of the grating with a sensor on the surface , close to the edge of the grating but unilluminated by the input beam .when the temperature reaches , we switch off the compressor and cold heads before starting the measurements .although we do not have any thermostatic systems to maintain a given temperature , it takes several hours for the temperature of the grating to start increasing and go above 200 after the compressor and cold heads are switched off .thus the temperature of the grating stays approximately at 200 5 for the duration of the measurement cycle .the following results were obtained when the incident angle was set to , which gives the peak of diffraction efficiency around 1.3 m when satisfying the bragg condition .we note that the same trends are obtained from measurements when different incident angles are adopted . in the upper panel of figure [ eff ] ,differences of diffraction efficiencies in a sequence of measurements at 200 from those obtained in the first warm test ( 280 ) are plotted against wavelengths .the error bars are calculated from the typical fluctuation of the bias level of the detector .the errors are larger at shorter wavelengths because the system throughput is lower so that the bias level fluctuation is larger compared to the intensity of the slit images ( the brightness of the lamp is kept constant with a stabilised power supply throughout the measurements at all the wavelengths ) .it is found that the differences are close to zero at all the wavelengths , suggesting that there is no significant variation in profile of diffraction efficiency such as a global decrease in the efficiency or a lateral shift of the peak . 
in the lower panels ,the differences in diffraction efficiency are averaged over the wavelength range , and the averaged difference from the first warm test is plotted against cycle number .open triangles and solid dots represent the measurements at 200 and those at 280 , respectively .the error bars indicate the standard deviation of a distribution of the differences around the average value .these results suggest that the diffraction efficiency of a vph grating is nearly independent of temperature , at least between 200 and 280 , and that no significant deterioration is caused by a small number of heating and cooling cycles . in the upper panel of figure [ dispersion ] , difference of diffraction angle from the prediction for the line density of 385 lines / mm ( the nominal line density of the vph grating )is plotted against wavelength ; solid line ( zero level ) corresponds to the relationship between diffraction angle and wavelength for the line density of 385 lines / mm . dotted and dashed lines indicate the relationships predicted for a line density of 375 and 395 lines / mm , respectively .the data points show the actual measurements at 280 . as the arrow shows , if the gelatin layer shrinks with decreasing temperature , the line density would increase and the data points would shift upwards on this plot . in the lower panel ,the difference of diffraction angle from that measured in the first warm test is plotted against the number of cycles .the symbols have the same meanings as those in figure [ eff ] .again , each data point represents the difference averaged over the wavelength range , and the error bars indicate the standard deviation of the distribution of the differences around the average value .the lower panel suggests that at 200 , the diffraction angle is slightly larger than that at 280 .this is equivalent to a slight increase of the line density of the grating , and the simplest explanation for this is a shrinkage of the grating with decreasing temperature . by using this increment of diffraction angle ( 0.1 ) ,the amount of shrinkage is estimated to be % of the diameter of the grating , which is consistent with the amount of shrinkage of the glass substrate expected when the temperature is decreased by 80 ( the amount of shrinkage of gelatin would be larger by an order of magnitude ) .one needs to keep in mind , however , that only a small portion of the vph grating was illuminated throughout the measurements ( the beam diameter was 2 cm while the diameter of the vph grating is 25 cm ) and thus a variation of the line density might be difficult to be detected . investigating larger portions over the vphgrating would be an important future work .other cryogenic tests of vph gratings are also in progress by the golem group at brera astronomical observatory ( bianco et al .their preliminary results suggest that diffraction efficiency is significantly reduced ( 20% around the peak ) at 200 compared with room temperature , which is inconsistent with our results .one consideration here is that there is a significant difference in speed of the cooling and heating processes between the tests of the golem group and ours . in the golem case , they require only 1 hour to cool a vph grating down to their target temperature from a room temperature ( zerbi 2001 ) . 
with the cryogenic chamber used for our experiments , it takes about 15 hours to cool down to 200 from 280 .this may imply that rapid cooling and/or heating can cause some deterioration of a vph grating .further experiments are required in this area .in this paper , results from the cryogenic tests of a vph grating at 200 are presented .the aims of these tests were to see whether diffraction efficiency and angular dispersion of a vph grating are significantly different at a low temperature from those at a room temperature , and to see how many cooling and heating cycles the grating can withstand .we have completed 5 cycles between room temperature and 200 , and find that diffraction efficiency and angular dispersion are nearly independent of temperature .this result indicates that vph gratings can be used in spectrographs cooled down to 200 such as fmos without any significant deterioration of the performance . in future, we will be trying more cycles between 200 and 280 to mimic more realistic situations of astronomical use .measurements at a much lower temperature ( e.g. , 80 ) will also be necessary to see whether vph gratings are applicable to spectrographs for use in the -band .we will report on these issues in a forthcoming paper .we thank colleagues in durham for their assistance with this work , particularly paul clark , john bate , and the members of the mechanical workshop .we are also grateful to the anonymous referee for the comments to improve our paper .this work was funded by pparc rolling grant ( ppa / g / o/2000/00485 ) .kimura , m. , maihara , t. , ohta , k. , iwamuro , f. , eto , s. , iino , m. , mochida , d. , shima , t. , karoji , h. , noumaru , j. , akiyama , m. , brzeski , j. , gillingham , p. r. , moore , a. m. , smith , g. , dalton , g. b. , tosh , i. a. j. , murray , g. j. , robertson , d. j. , & tamura , n. 2003 , proc .4841 , 974 with that at 280 . in the upper panel ,differences of diffraction efficiencies in a sequence of measurements at 200 from those obtained in the first warm test ( 280 ) are plotted against wavelengths .the error bars are calculated from the typical fluctuation of the bias level of the detector . in the lower panels ,the differences in diffraction efficiency as shown above are averaged over the wavelength range , and the averaged difference from the first warm test is plotted against cycle number .open triangles and solid dots represent the data at 200 and those at 280 , respectively .the error bars indicate the standard deviation of a distribution of the differences around the average value.,width=302 ] | we present results from cryogenic tests of a volume - phase holographic ( vph ) grating at 200 measured at near - infrared wavelengths . the aims of these tests were to see whether the diffraction efficiency and angular dispersion of a vph grating are significantly different at a low temperature from those at a room temperature , and to see how many cooling and heating cycles the grating can withstand . we have completed 5 cycles between room temperature and 200 , and find that the performance is nearly independent of temperature , at least over the temperature range which we are investigating . in future , we will not only try more cycles between these temperatures but also perform measurements at a much lower temperature ( e.g. , ) . |
over the years we watched ourselves working back and forth between writing equations for clocks and signals on a blackboard and working with lasers , lenses , and electronics on a work bench . in the course of this experience we noticed the role in physics of memories , both the memories of the investigators and the memories of the digital computers they employ , and our eyes opened to unsuspected vistas .we speak of _ memory _ as belonging to a _ party _ , which can be a person , a computing machine , _ etc_. as we mean it , a memory is a device in which symbols are recorded and manipulated . by _ memory _ we mean no static photograph , but a dynamic device in which the symbols recorded can undergo changes from moment to moment . by _ symbols _ we mean what is recorded in a memory of a party , distinct from whatever propagates externally from one party to another party , which we call a _ signal_. the elemental symbol carries a binary distinction : the bit .what we call a party or a symbol or a signal depends on the level of description , which can be finer or coarser . by a change in level of description ,what is termed `` a memory '' belonging to a single party can become several memories belonging to distinct parties , with communications among them , and _vice versa_. thus the distinction between _ symbol _ and _ signal _ is relative to the memory of a party , and both the memory and the party are relative to a level of description . as noted in sec .[ subsec:5.1 ] changes in levels of descriptions will be seen to correspond to morphisms of graphs .regardless of how one imagines mathematical entities , their expression in symbols is physical , _e.g. _ as ink on paper or voltages in a computer memory .symbols in formulas and symbols of evidence from experiments live in memories .thinking of symbols as physical attributes of memory , with associated dynamics and rhythms , offers a physical analog of gdel coding : one can inquire into the timing and location of symbols , both symbols of theory expressing classical or quantum states and also symbols expressing evidence extracted from experiments .a familiar blackboard " picture of memory is the turing - machine tape , divided into squares ; on each square a symbol `` 0 '' or a symbol `` 1 '' can be written or erased . now lift up this abstraction to recall that the physical mechanism of computer memory is a single device what engineers call a clocked set - reset flip - flop that recognizes a binary symbol carried by a signal , and as part and parcel of the act of recognition also acts as a memory device by recording the symbol .the flip - flop works as a damped inverted pendulum , a hinge if you will , with its exposure to signals from outside cycled by a driven adjustable pendulum , in effect a clock .noticing that a physical implementation of a turing machine depends on the flip - flop allows one to see symbols as physical objects .then one can inquire into the motion of symbols , and into the relation of that motion to concepts of spatial and temporal order .the paired - pendulum mechanism acts as a physical unit of computation and also , through its participation in the machinery of radar , as a physical unit of geometry .we make a distinction between _ recognizing a symbol in a signal _ and _ measuring the signal_. 
in recognition , the hinge in the memory of a party falls one way or the other to express the symbol ; further , the hinge - position - as - symbol can be copied to flip - flops in the memories of other parties .in contrast , measurement is idiosyncratic , characterized by error bars , and no two instances of a measurement can be expected to agree exactly .the results of a measurement , though idiosyncratic , can be expressed in symbols ( digitized ) , but only after waiting for hinges to fall one way or the other , and with a ( usually small ) risk of confusion . out of the buzzing world of experience , fingers on knobs , tweaking adjustments to bring optics into alignment _etc_. , comes , one way or another , _ evidence _ from an experiment . by _ evidence_ we mean expressions in mathematical language taken as reflecting experience on the work bench .theory , quantum or otherwise , offers explanations , such as explanations in terms of quantum state vectors and operators or explanations in terms of a general - relativistic 4-manifold .an explanation asserts ( rightly wrongly ) properties of evidence .experience evades direct comparison with theory , but in memories symbols for evidence reflecting experience can be compared against assertions about evidence implied by explanations .( see fig .[ fig:1 ] . ) recognizing the role of memory as the holder of evidence written in symbols splits the question of the relation between theory and experience into two questions : 1 .how well does evidence in a memory reflect the experience of an investigator ? 2 .how well does an assertion about evidence implied by an explanation fit actual evidence ? mathematical structures ( _ e.g. _ axioms ) for explanations have been much studied .we raise the parallel question : what mathematical structures are to be found or invented to express evidence ? in this report we concentrate on structures of evidence recordable in the memories of communicating parties , to do with the timing ( not the content ) of their communications .one party communicates a symbol from its memory to the memory of another party via a signal . in propagating from one party to another ,a signal deforms unpredictably , so the recognition of a symbol carried by a signal must be insensitive to a range of deformations .the damped inverted pendulum of the paired - pendulum recognition mechanism offers this insensitivity , provided that the _ rhythm of communication meshes the arrival of the part of a signal that carries a symbol with the phase of symbol recognition_. the receiver must look at the signal when the symbol is present , within some leeway but not too much earlier or too much later . 
by its dependence on the meshing of the part of a signal that carries a symbol with a receiving party s phase of recognition , the paired - pendulum mechanism of the flip - flop shapes evidence recordable from a communications network by imposing discrete phases of the adjustable pendulum for signal reception , leading to a single form of evidence , regardless of whether explanations for evidenceare stated in quantum terms or in terms of general relativity .the symbol recognized in a signal can not be a function of the signal alone .to communicate , two parties must share some axioms in common , and also _ share a rhythm _ that meshes their clocks with the signal propagating from one to the other .the rhythm , once acquired , must be maintained , and its maintenance depends on reaching beyond the logic of symbol recognition : the rhythm of symbol exchange is maintained not by recognitions but by _ measurements of signal arrivals relative to pendulum phases_. these measurements are subject to idiosyncrasies of each party , on which the other party must rely : an _ intimacy _ necessary to the communication of a symbol from one party to another .radar as the instrument by which spacetime is conceived will be shown to have an analog in the timing of symbols communicated among memories of a synchronized network .( indeed a working radar depends on the communication of symbols , such as those identifying targets . )evidence of the timing of signals transmitted and received in a network of communicating parties , recorded in their memories , has a mathematical form independent of metric assumptions involved in explanations based on the special or general theory of relativity .we show this form in a record format which we relate functorially to colored , directed graphs .the graphs expressing records of communication networks , such the global positioning system ( gps ) , assume neither a general - relativistic geometry , nor quantum states . because of this freedom from additional assumptions needed for one or the other form of explanation, we will show how the graphical expression of evidence offers a platform on which to negotiate the joint participation of quantum theory and general relativity in explanations of evidence from networks of communicating parties .we think of a memory as belonging to a communicating _ party _ , a person or a machine . as a first cut , model a _ party _ by a turing machine moved by a _ driven adjustable pendulum_a clock with a faster - slower lever . following turing , we think of the history of a party s memory as segmented into moments interspersed by moves , but , unlike turing s history of a memory as a sequence of snap shots at successive moments , we need to inquire into what happens during a move in which the symbols in memory can change . thus we view turing s `` move '' not as something structureless but as a phase of positive duration , during which there can be measurements of clock readings . picture the clock that moves a turing machine as moving its hand cyclically around a circle marked in subdivisions of the unit interval , so that a reading of the hand position is the clock reading modulo integers .take the phase ` move ' to be an interval of the circle that includes the position `` 12 oclock '' at the top of dial and the phase ` moment ' as a disjoint interval that includes the `` 6 oclock '' position at the bottom of the dial . 
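a toy rendering of this clock-phase picture is given below: the clock reading modulo one is mapped to the phase 'move' or 'moment' according to whether the hand sits in a small interval around the top or the bottom of the dial. the widths and centres of the two intervals are arbitrary choices for the illustration; the description above only requires that they be disjoint.

def phase(clock_reading, width=0.2):
    """phase of a party's clock: the reading modulo 1 falls either in a 'move' interval
    around the top of the dial (0.0), a 'moment' interval around the bottom (0.5),
    or in neither (the remaining leeway of the cycle)."""
    x = clock_reading % 1.0
    if x < width / 2 or x > 1.0 - width / 2:
        return "move"
    if abs(x - 0.5) < width / 2:
        return "moment"
    return "between"

print([phase(t / 8) for t in range(8)])
# -> ['move', 'between', 'between', 'between', 'moment', 'between', 'between', 'between']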
at the level of description appropriate to an engineer who probes the operation of a computer memory ,the memory itself becomes a network of communicating `` sub - parties , '' each with a piece of the computer memory .symbols are conveyed by signals from one piece of memory to another .because of uncontrolled deformations as the signal propagates and because , on the workbench , no two things ever get built quite alike , the signal that carries a symbol to a receiving sub - party is subject to unpredictable deformations ; and beyond these practicalities , lower limits to signal variability are implied by quantum theory . for this reason , the mechanism for recognizing a symbol carried by a signal must be made insensitive to small variations in the signal ; _i.e. _ , the signal has to be allowed a certain _ leeway _ in both its shape and its timing . in terms of differential equations , recognizing a single symbolregardless of a certain variation in the signal requires an attractor leading to each symbol , with the implication that between attractors there are unstable equilibria .the insensitivity to variations in the signal requires damping , in conflict with any quantum explanation that invokes only the unitary evolution of a schrdinger equation . physically , the simplest memory element for recording a choice between two symbols consists of paired pendulums , one inverted and damped , with two stable positions and an unstable equilibrium between them , the other the adjustable pendulum of a clock , swinging through phases as part of a rhythm of communication , opening and shutting a gate to allow a signal to flip over or not to flip over the inverted pendulum that holds a bit .a `` bit '' is thus a snap shot of a livelier creature a recognition - and - memory device that not only can display a `` 0 '' or a `` 1 '' but , when the rhythm of its operation is disturbed , can teeter in an unstable equilibrium like a flipped coin landing on edge , where it can hang , lingering , with no sharp limit on how long it can take to show a clear head or tail .we are to think of a bit not as a 0 or 1 on a turing tape but as the position of the inverted pendulum at a moment . in computer hardware , the inverted pendulum gated by a clockis called a clocked set - reset ( s - r ) flip - flop . without adequate maintenance of the rhythm of communicationthe part of a signal in which a bit is to be recognized can arrive at a receiving party in a race with the closing of the gate , resulting in `` runt signal '' squeaking through the gate , big enough to push the inverted pendulum ( think of a hinge ) part way but not all the way over , leaving the hinge hung up in an unstable `` in between '' state .we say the signal straddles a timing boundary .known to engineers concerned with the synchronization of digital communications , such hang - up causes logical confusion .computation requires acts of copying symbols : a symbol in flip - flop a at one moment is copied into two flip - flops , say b and c , at a later moment , so that whatever bit value was in a at the earlier moment appears in both b and c at the later moment both hold 0 or both hold 1 ; one speaks of `` fan - out . ''if flip - flop a hangs up in an unstable equilibrium , then flip - flops b and c not only may hang up , but can `` fall differently '' so that the symbol in b , instead of matching that in c , conflicts with it . 
in revealing conflicts in response to an unstable condition of a flip - flop a , the fan - out from a to flip - flops b and c also offers a means of detecting unstable conditions , which has been used to show a roughly exponential decline with waiting time of the probability of disagreement between b and c , resulting in the measurement of a _ half - life _ of the instability .( for silicon integrated circuits we found to be close to 1 ns .modern gallium arsenide circuits operate much faster , and efforts to shorten their half - life are underway , but so far their ratio of half - life to cycle period is not much less than that for silicon . ) in quantum explanations , one describes the inverted pendulum by a wave function , putting planck s constant into the relation between the short time constant required for rhythmic operation of the flip - flop and the long time that must be waited for it to settle down when subject to the straddling of timing boundaries and the ensuing runt pulses .=-1 in its use to decide a race among more than two signals , the teetering hinge of a flip - flop has a noteworthy consequence .consider the case of a three - way race among signals , , and arriving at a clock .a world line in a general - relativistic explanation of this clock corresponds on the workbench not to one device but to several interconnected devices .each of the three signals fans out to allow three separate pairwise comparisons of `` which came before which '' . in a close race , teetering in all three pairwise comparisons can result in finding : , and , violating the transitivity of an ordering relation , and suggesting a limit on the validity of even local temporal ordering .making sense out of temporal order requires distinguishing the question of which cycle a symbol recognition occurred from the question of when within a cycle did a signal arrive.=-1 remarks : 1 . to reduce the risk of disagreement between b and c, it suffices to wait after the setting of a to the reading of b and c. the literature on digital circuits discusses the related use of `` arbiters''of which there are two types , one that might take forever , the other that might generate confusion .2 . weeks after a given day , gps publishes corrections to coordinates for events that it issued on that day , derived from subsequent cross comparisons among its clock readings recorded at the transmission and reception of radio signals .although the process of comparing and correcting may yet be greatly speeded , not only does the delay in communicating comparisons limit how quickly one can determine what the clock readings `` should have been , '' but an additional delay is imposed by the balancing instrument used to convert analog measurements to digital signals suitable for communication .for theoretical purposes , we assume the conceptually simplest ( but not the most used ) scheme for digital communications , called _ synchronous communication _ , which offers the fastest response . 
in synchronous communication a receiver recognizes symbols one by one ( without use of sample - and - hold techniques ) . synchronous communication from a party a to a party b , moved by clocks a and b , respectively , requires that a symbol be transmitted from clock a while a s clock hand is in the 12 oclock `` move '' phase and must arrive at b while b s clock hand is also in a 12 oclock `` move '' phase . clocks , including the atomic clocks used to generate international atomic time ( tai ) , drift unpredictably in rate , leading eventually to unbounded phase drift between two nearby clocks , with the result that clocks function only in a network of comparisons that guide adjustments of clock rates over some ( possibly small ) range . in addition , communications involve other perturbing circumstances , including doppler shifts among parties in motion . unless the clock of a receiving party can be maintained so the phases of reception are aligned with the arrivals of symbol - carrying signals , the recognition of a symbol carried by a signal fails . suppose that the conditions of phasing allowing synchronous communication between two parties have been brought into being , a story in itself . to maintain these conditions over a succession of symbols requires more or less continual adjustment of the motion of the clocks : their accelerations , their rates of ticking , or both . in all cases the adjustment is guided by departures from nominal behavior of the arriving symbols relative to an imagined center of the phase of reception , much as steering an automobile toward the center of a lane depends on noticing and responding to its departure from the center . how then to determine the departures in the clock reading of a receiving party at a signal arrival ? let the reading of the clock of a receiving party relative to the 12 oclock center of the receptive move , modulo integers , be symbolized by . in order to guide the adjustment necessary to the maintenance of synchronous communication , the offset symbolized by has to be made to act on a `` lever '' ( as in the lever on the back of a wind - up clock by which its rate of ticking is adjusted ) . ( if the receiver clock needs to be slowed down relative to the arriving symbols , and speeded up if . ) in hardware , the symbol `` '' never appears . for example , one way to guide adjustment is by `` bang - bang '' control that responds to whether the part of a signal that carries a symbol arrives before or after a nominal clock reading within the receptive phase . for this a logical and gate is used not as part of a device for recognizing symbols but as a measuring device in a feedback loop that controls the rate of ticking of the receiving party . the and gate is opened at the beginning of the cycle , before the arrival of the signal , but turned off at the nominal reading . if the signal arrives well before the turn - off it passes through the and gate to put a pulse of charge on a capacitor .
a running average of the charge on the capacitor controls the faster - slower lever of the party s clock .close races between the arriving signal and the turn - off produce runt pulses without causing any logical confusion , for the runt pulses never need to be recognized as symbols ; instead the pulses , runt or not , pile up like gravel that is shoveled without the stones being counted .the point is that the fine - grained determination of clock reading within a receptive phase at the arrival of a symbol can not be _ recognized _ as a symbol but requires something distinctly different , which we call _ measurement _ , as follows .symbol recognition depends not only on leeway but also on avoiding `` straddling of boundaries . ''to recognize a symbol , such as the arrival of a pawn on a square of a chess board , the act of looking must be coordinated with the arrival so that a party looks while the pawn is in the square and not sliding over a boundary that it straddles as it moves . it is these conditions of `` no straddle '' and `` leeway '' that allow two parties to agree exactly in their recognitions of symbols .in contrast to the recognition of symbols , we speak of _ measurement _ as in the determination of a mass in a balance , for which no two parties can expect to agree exactly ; instead one speaks of error bars .the idiosyncratic variations among parties resulting in error bars are inescapable precisely because of the straddling of boundaries and the lack of leeway .although distinct , ` measuring ' and ` recognizing ' depend on one another .for example , in measuring using a balance instrument , i have to recognize weight a , weight b , and that `` they balance '' or `` the balance tips toward a , '' and if i am wrong in such noticing , my measuring makes no sense .indeed , in spite of their neglect in physics education , recognitions are essential to logic , without which physics collapses . going the other way , recognitions basic to logical communication turn out to take place in rhythms that require maintenance adjustments to clock rates guided by measurements . with the distinction between recognizing and measuring in mind , we return to the issue of determining a departure from a desired clock hand position within a phase at the arrival of ( the center of ) a symbol carried by a signal . in its use to recognize a symbolthe mechanism of an inverted pendulum must be insensitive to the very timing variations of interest within the leeway of the phase of reception .when finer - grained distinctions necessary to determining the clock hand position are implemented , straddling of boundaries is unavoidable , and the distinctions can not be _ recognitions _ but depend on _ measurements_. 
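the feedback just described can be caricatured in a few lines . the sketch below ( python ) is a toy discrete - time version of ours , not the authors circuit : only the sign of the measured arrival offset is used , as with the gated pulses , a running average stands in for the charge on the capacitor , and the faster - slower lever is nudged accordingly . the gain , smoothing weight and jitter are illustrative assumptions .

import random

rate = 1.03        # receiver clock rate in ticks per symbol ; starts slightly off
avg = 0.0          # running average standing in for the charge on the capacitor
GAIN = 0.002       # how strongly the faster-slower lever responds (assumed)
SMOOTH = 0.1       # weight of each new bang-bang measurement (assumed)

reading = 0.0
for n in range(2000):
    # one symbol arrives per unit of the transmitter's time ; the receiver's reading
    # advances by its own rate , plus a little jitter
    reading += rate + random.gauss(0.0, 0.01)
    offset = reading - round(reading)          # departure from the nominal (integral) reading
    sign = 1.0 if offset > 0 else -1.0         # bang-bang : only "early or late" is measured
    avg = (1 - SMOOTH) * avg + SMOOTH * sign
    rate -= GAIN * avg                         # slow the clock if it runs ahead , speed it if behind

print("adjusted rate:", round(rate, 4))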
altogether we arrive at : _ fine - grained `` local clock readings '' necessary to maintaining rhythms essential to the communication of logical symbols constitute measurements , idiosyncratic in that no exact agreement can be expected between any two measuring parties . _ only in special situations can a receiving party recognize in a signal the symbol intended by a transmitting party . for example , when two people converse , each person s ear hears the symbols that the other s mouth puts into a spoken signal . for communication the two parties have to share not only concepts but a rhythm , and the establishment of that rhythm requires reaching beyond logical recognitions to rely on necessarily idiosyncratic measurements that guide the adjustments needed to maintain the rhythm . the conditions of shared concepts and a shared rhythm necessary to communication can reasonably be called _ intimacy_. without this intimacy of communication in which symbols are conveyed , there can be no logic , no mathematics , and no physics . in working back and forth between experiments on the optics bench and writing quantum states on the blackboard , we saw lens holders and lenses and lasers on the optics bench , but _ no _ quantum state vectors . but _ must _ state vectors be invisible on the bench ? nobody can lay formulas on top of an optics bench to see if they fit . to be compared with mathematically expressed explanations , raw experience with lenses and mirrors has first to be reflected into _ evidence written in symbols of a mathematical system based on axioms _ , recorded in a memory . so our question became : can mathematically expressed evidence in a record ever determine its own explanation ? the answer hinges on a striking property of quantum theory .
in pre - quantum physics , including general relativity , the mathematical system available for expressing evidence involves the same axioms as that for expressing explanations . quantum theory differs by invoking two distinct mathematical systems , hilbert - space constructions for _ explanations _ , and a distinct system of probability measures for assertions about _ evidence _ implied by an explanation . ( the table contrasting these two systems is not reproduced here . ) such graph fragments can be pasted together by condensing a signal edge of the graph for party a for transmission to another party b and the signal edge for reception of a s transmission by party b into a single edge from the transmission move of a to the reception move of b , as follows . a vertex at the head of a signal edge from a move of a at count is overlaid on a vertex colored a at the tail of an edge to reception at a move of party b ; the vertex is removed and the signal arrows joined head to tail into a single directed edge . this is illustrated in fig . [ fig:3 ] . such graphs are essentially _ occurrence graphs _ , specialized to exhibit a distinct trail for each party , with edges for signals linking parties . when `` analog '' measurements with their idiosyncrasies that color the occurrence graphs are forgotten , the occurrence graph for a network of communicating parties can exhibit symmetry , illustrated in fig . [ fig:4 ] . in some interesting cases , forgetting the coloring by fine - grained clock readings and rate settings , an occurrence graph can be `` wrapped around '' to form a marked graph , as in figs . [ fig:5 ] and [ fig:6 ] . figure [ fig:7 ] shows an example of an occurrence graph for a network in which one set of parties is in motion relative to another set of parties . occurrence graphs , marked graphs , and , more generally , petri nets form categories with interesting graph morphisms . going the other way , one studies morphisms among network histories , aided by the functor from network histories to occurrence graphs . ( figure caption : redrawn as a role - activity graph , with a vertical trail for each of four parties . a circle at the top of a trail is identified with the circle at the bottom of the trail , and vertices ( square boxes ) connected by a horizontal line are identified . vertical edges are understood to be downward - directed . ) ( figure caption : redrawn as a role - activity graph , with a vertical trail for each of four parties . a circle at the top of a trail is identified with the circle at the bottom of the trail , and vertices ( square boxes ) connected by a horizontal line are identified . vertical edges are understood to be downward - directed . ) ( figure caption : moving past one another . solid boxes indicate a meeting between a party of one set and a party of another set . all edges are directed downward . ) the graphs are objects of respective categories in which morphisms include ( 1 ) isomorphisms from one induced subgraph to another ; ( 2 ) inclusions ; and ( 3 ) epimorphisms in which certain stretches of clock image over several vertices for moves , along with neighbor - to - neighbor signals , map to a single vertex . example : view main memory as one party and view auxiliary memory as a second party ; then map the two parties into a single `` turing - machine '' party .
by another such map , illustrated in fig .[ fig:8 ] , a vertex at which two signal edges meet a party can be seen as a condensation of a pattern involving two parties , each with a vertex involving only one signal edge .occurrence graphs of this form of `` no more than one signal per party vertex '' map to virtual braid diagrams , and it will be interesting to see what interpretation , if any , to make of virtual - braid isotopies .a nice path to study more complex synchronization methods , including those used in gps , employs the two - step procedure of choosing a sensible form for records of timing recordable in the memories of communicating parties , and then translating those records to graphs .we expect different synchronization methods to produce different formats for records , which in turn will imply different special properties of the occurrence graphs to which they map .for that reason the basic starting point is the notion of the records recordable in the memories of communicating parties .a noteworthy property that can be defined by a network history and read from the corresponding occurrence graphs is what we call _ echo count _ , which is an integer - valued measure relevant to communications , defined to be the difference between the cycle count of a transmitting clock a at the transmission to clock b and the cycle count at a at which an echo from b is possible .let ec be the echo count for transmission from a during cycle count to echo back from b to a. note that : 1 .the echo count can vary along a history in which one party receives a sequence of echoes from another party .2 . except in special cases the echo count _ not _ symmetric . for instance , clock of a can run twice as fast as clock of b , resulting in ec 2ec .so far we have concentrated on colored directed graphs as reference systems for evidence . herewe put in a word about explanations of such evidence , by looking at what happens if one chooses to introduce the additional assumptions , not required for expressing the evidence , but needed for explanations .we start with general relativity . in order to explain synchronous communication of symbols from one clock to another in the language of general relativity we follow convention by modeling a clock as a smooth embedding from a real interval into 4-dimensional manifold with a smooth metric tensor field of lorentzian signature and time orientation , such that the tangent vector is everywhere timelike with respect to and future - pointing . to express the positive duration of phases of moves , recall the distinction between an embedding as a curve that is , a function from to , and the image of this curve as a 1-dimensional submanifold of , denoted .for lack of a better word , we call such an image a _ thread_. 
think of a dial position attached to each point of the thread for a party , and picture the thread for a clock that takes part in synchronous communication as striped by the 12 oclock phases in which transmission and reception are allowable . the form of a general relativistic explanation of evidence presented in the colored occurrence graphs is then a corresponding network of threads , with timelike threads for parties and lightlike threads for signals from thread to thread . such a network of threads in a manifold with metric maps to an assertion of evidence ; however , as in the case of the trace as a map from evidence to an assertion of evidence in quantum theory , the map from a network of threads to a colored occurrence graph is _ not _ injective : for any given colored occurrence graph displaying evidence , there is a freedom to change the metric tensor and make a corresponding change in the convention for relating physical clocks to proper clocks , leaving unchanged the assertion of evidence implied by the explanation . indeed , in applications such as gps , one needs to invoke non - gravitational forces , and these forces have to be estimated from their effects on evidence , bringing in a much larger realm for free choice of explanation . consider a spacetime manifold and two parties and as non - intersecting threads colored by their respective clock readings . a change in clock rate is expressed by a change in the coloring along the thread . if the manifold is flat , it is always possible to adjust the clock rates in such a way that : 1 . synchronous communication can take place from to and from to ; 2 . an event of can be chosen freely as a transmission event , provided the clock is reset , as represented by re - coloring the thread for so that the event corresponds to an integral clock reading ; 3 . given a clock as a thread colored by its reading , along with an integer , there exists a clock allowing for synchronous communication at echo distance ec , independent of cycle count . the same holds in a curved spacetime if the clocks are not too far apart , which is the case if for each event of the thread for a there is an event of the thread for b within a radar neighborhood of with respect to the thread for a , and _ vice versa _ . in 1967 , the 13th general conference on weights and measures specified the international system ( si ) unit of time , the second , in terms of a cesium atomic clock rather than the motion of the earth . specifically , a second was defined as the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of cesium-133 atoms in their ground state , supposing the atoms are undisturbed by external fields . two commercially available cesium clocks functioning well can vary in rate by about 1 part in , and primary cesium standards approach 1 part in . nonetheless , as clocks improve in their reproducibility , the size of discrepancies that matter keeps shrinking . for example , the national institute of standards and technology ( nist ) detects a rate shift between two optical clocks of when one clock is lifted against the earth s gravity by 33 cm , and this shift is proposed as a basis for mapping the earth s gravitational field . because the size of discrepancies that matter keeps shrinking in step with improvements of precision , we continue to experience the circumstance that `` no two clocks tick alike .
'' for this reason the choice of cesium or any other clock design can only be a partial specification of the clocks used to generate coordinated universal time ( utc ) . utc actually makes use of a global system of signaling between clocks , comparing clock readings at the arrival of signals , deciding what these readings `` would be '' if the clocks were proper clocks and the general relativistic metric tensor were that assumed , and issuing ex - post corrections to readings of clocks reported by national laboratories . a big part in this inter - clock communication is played by the global positioning system ( gps ) . thus in practice , a `` standard clock '' is not local to any single physical clock , but instead is a creature of a network of communicating clocks governed by a scheme of comparisons of signal arrivals that guide adjustments of clock rates , or , what is the same in its effect on recorded times , corrections . the second depends on ( a ) a network of clock - driven communications , and ( b ) the assumption of general relativity and of some particular choices of metric tensor field within that theory . by sorting out a reference system for evidence distinct from assumptions of geometry , that is , from a choice of metric tensor field , we offer the opportunity to put the choice - making aspect of utc up on the table where it can be considered more clearly , by virtue of a reference system for evidence independent of the dynamical and indeed chaotic nature of the metric tensor field of general relativity . it follows from perlick s work that spacetime curvature imposes a lower bound on the duration of phases of moves in a network of synchronous communication by use of signals that propagate at the speed of light . we note however , as follows from the remarks above on paired clocks , that there is no such bound if only two clocks constitute the network . the tightness of synchronization in a network of communicating parties is indicated by the greatest required phase duration : the less the phase duration , the tighter the synchronization . the tightness of synchronization possible when the network operation is restricted to a subregion is apt to be greater than for the network over the whole region . for this reason there can be no network that is universally tightest over all subregions . applied to coordinate - generating networks such as gps , the implication is that for the highest precision over a limited spacetime region , the scheme of synchronization must be specially adapted to the limited region of interest . for the future it will be interesting to study the possibility of adapting clock networks to achieve the tightest synchronization possible for particular uses in which a limited region of spacetime is at issue .
in a curved spacetime such as that appropriate to explain the global positioning system , there are no killing vector fields and indeed no exact isometries linking two disjoint spacetime regions . yet there can be occurrence graphs for a clock network that , once idiosyncratic clock readings and rate settings are forgotten , exhibit exact isomorphisms from one graph fragment to another . one can make an analogy to isomorphisms among square tiles laid over a region of a `` potatoid , '' where variations in the thickness of the grout take up the slack , as illustrated in fig . [ fig:9 ] . such isomorphisms come as close as one can get to resolving the need in quantum theory to speak of repeated occurrences of the preparation of an experiment . quantum mechanical explanations imply probability distributions for clock readings of a receiving party at the arrival of a signal within a receptive phase . ( indeed the distributions can not be confined to a receptive phase , leading to occasional logical failures that can be reduced in their disruptive effects by well - known error - correction techniques , based on redundancy , but never reduced to the vanishing point . ) but what to make of a probability measure for clock readings ? experimentally , one compares an asserted probability with relative frequencies of clock readings at signal arrivals . for this one has to identify many disjoint fragments of an occurrence graph as pertaining to repetitions of a single quantum `` preparation . '' assuming a flat spacetime , this identification perhaps presents no problem . in contrast , when one wants to work with quantum theory adjoined to a curved spacetime of general relativity , the situation becomes more interesting . in particular in a spacetime appropriate for gps , lacking any exact isometries from one spacetime region to another , the `` uncertainty in clock readings '' picks up a component from the general - relativistic curvature , in addition to any uncertainty asserted by quantum theory . experimenting with pendulums and balances with our own hands made us aware of a gap between a frequency `` '' on the blackboard and the rate of swinging of one pendulum compared to another that we could experience on the workbench . the key in learning to navigate between the bench and the blackboard was to see the physical device , the paired - pendulum mechanism of the flip - flop , that both recognizes and records a symbol . by burying the flip - flop under the abstract notions of spacetime and of quantum states , theoretical physics has lost track of the rhythms and their maintenance essential to extracting information from the bench and using that information to control experiments . in this report we take a first step toward bringing the rhythms and their maintenance back into physics as a background against which all else in physics takes place . this background applies regardless of the mode of explanation , and in particular regardless of whether one explains the evidence extracted from experiments by invoking quantum theory or by invoking general relativity . because quantum mechanical explanations put planck s constant into limits of behavior of the flip - flop , and the flip - flop works also in the acquisition of evidence to be explained by general relativity , one glimpses as a question for the future a possible role of planck s constant in general relativity . the flip - flop mimics gödel coding by coding whatever symbols it recognizes in a system of numerics endowed with axioms of arithmetic .
grasping that symbols expressed are necessarily physical prepares one to trace the influence of physical symbols on the statements possible in physics .so far what has been uncovered includes the following : 1 . among other things ,a symbol can express a pattern of other symbols , so that any description in physics , whether evidence or explanation , involves making a choice of level of detail .because of the separation of axioms needed to symbolize evidence from axioms needed to symbolize explanations , no quantum state can be determined from evidence without reaching beyond the evidence to exercise an irreducible element of free choice , _ i.e. _ to make a _ guess_. 3 .the communication of recognizable symbols requires a rhythm , and the rhythm requires maintenance guided not by recognitions , but by measurements idiosyncratic to the party making them , on which other parties in a communications network must rely . seeing a physical mechanism for recognizing and recording a symbol opens avenues of exploration .questions of `` who can know what and when can they know it ? '' become colored by clock phases imposed by the pair - pendulum mechanism on which the background of symbol exchange depends . we offer a restructuring of _ clock , signal _ , and _ time _ , incorporating attention to the recognition and recording of symbols .this structure differs from that invoked by the iau by bringing concepts into alignment with practice .recall that einstein defined spacetime in terms of light signals exchanged among clocks .we see spacetime coordinates as implemented by devices based on the paired - pendulum mechanism for symbol recognition ( as is the implemented turing machine ) . _ time _ amounts to relations between the ordering by one clock of a communications network to ordering by another clock , with the result that no isolated clock can `` tell time . ''the ticking of clock a is influenced by the ticking of other clocks with which clock a communicates .our graph pictures of evidence formalize this structure , in which records of ` digital ' symbols are made in rhythms guided by idiosyncratic ` analog ' measurements .the concept of a physical basis for recognition invites application to biology .we note that in an organism the propagation of signals goes very slowly relative to that in electronics , so that the single oscillator that drives the clocking throughout a digital computer likely has no biological analog ; instead , we conjecture that a nervous system , whether that of a worm or of a person , involves rhythms in which independently adjusted oscillators take part . recalling the impossibility of a `` universally tightest '' communication network in the context of general relativity , we would be interested to join other in inquiring into constraints on coordination of such rhythms .questions abound concerning the role of quantum explanations in biology . to this topicwe contribute a suggestion that dna can be viewed as a classical code for setting up situations , for example involving photosynthesis , describable quantum mechanically .this paper grew out of a talk given at the workshop on topological quantum information , organized by l. h. kauffman and s. j. 
lomonaco , jr ., on may 16 and 17 , 2011 , held at the centro di ricerca matematica ennio de giorgi ( crm ) in pisa , italy .we thank the crm for hosting the workshop , at which several conversations took place that stimulated the work reported here ; we also thank the crm for financial help .we thank prof .kauffman for recognizing some condensed occurrence diagrams as virtual knot diagrams .99[references ] t. l. booth , _ digital networks and computer systems _ , 2nd ed .( wiley , new york , 1978 ) , p. 223 .a. m. turing , `` on computable numbers with an application to the entscheidungsproblem , '' proc .london math .soc . , series 2 , * 42 * , 230265 ( 193637 ) .c. e. shannon , `` a mathematical theory of communication , '' the bell system technical journal , vol . 27 , pp .379423 , 623656 , july , october , 1948 . t. j. chaney and c. e. molnar , `` anomalous behavior of synchronizer and arbiter circuits , '' ieee trans .computers * c-22 * , no . 4 , 421422 ( 1973 ) . h. j. gray , _ digital computer engineering _( prentice hall , englewood cliffs , nj , 1963 ) , pp . 198201 . j. h. anderson and m. g. gouda , `` a new explanation of the glitch phenomenon , '' acta informatica * 28 * , no . 4 , 297309 ( 1991 ) . f. h. madjid and j. m. myers , `` matched detectors as definers of force , '' ann .physics * 319 * , 251273 ( 2005 ) .b. cheney and r. savara , `` metastability in scfl , '' in _ technical digest of the 17th annual ieee gallium arsenide integrated circuits symposium , 1995 _ , ( ieee , san diego , ca , 1995 ) , pp .. h. meyr and g. ascheid , _ synchronization in digital communications_(wiley , new york , 1990 ) .h. meyr , m. moeneclaey , and s. a. fechtel , _ digital communication receivers : synchronization , channel estimation , and signal processing_(wiley , new york , 1998 ) .j. m. myers and f. h. madjid , `` a proof that measured data and equations of quantum mechanics can be linked only by guesswork , '' in s. j. lomonaco , jr . and h. e. brandt , eds . , _ quantum computation and information _ , contemporary mathematics series , vol . 305 ( american mathematical society , providence , ri , 2002 ) , pp . 221244 .j. m. myers and f. h. madjid , `` ambiguity in quantum - theoretical descriptions of experiments , '' in k. mahdavi and d. koslover , eds ., _ advances in quantum computation _ , contemporary mathematics series , vol . 482 ( american mathematical society , providence , ri , 2009 ) , pp . 107123. m. soffel , s. a. klioner , g. petit , p. wolf , s. m. kopeikin , p. bretagnon , v. a. brumberg , n. capitaine , t. damour , t. fukushima , b. guinot , t .- y .huang , l. lindegren , c. ma , k. nordtvedt , j. c. ries , p. k. seidelmann , d. vokrouhlicky , c. m. will , and c. xu , `` the iau 2000 resolutions for astrometry , celestial mechanics , and metrology in the relativistic framework : explanatory supplement , '' the astronomical journal * 126 * , 26872706 ( 2003 ) .a. w. holt , `` introduction to occurrence systems , '' in e. l. jacks , ed . , _ associative information techniques _ , proceedings of the symposium held at general motors research laboratories , 1968 ( american elsevier , new york , 1971 ) , pp . 175203. f. commoner , a. w. holt , s. even , and a. pnueli , `` marked directed graphs , '' journal of computer and system sciences * 5*(5 ) , 511523 ( 1971 ) . j. l. peterson , _ petri net theory and the modeling of systems _( prentice - hall , englewood cliffs , nj , 1981 ) .a. w. 
holt , role / activity models and petri nets , " second semi - annual technical report , project on development of theoretical foundations for description and analysis of discrete information systems , ntis ad - a008 385 ( massachusetts computers associates , wakefield , ma , 1975 ) . l. kauffman , `` introduction to virtual knot theory , '' arxiv:1101.0665v1 ( 2011 ) . v. perlick , `` on the radar method in general - relativistic spacetimes , '' arxiv:0708.0170v1 ( 2007 ) . c. w. chou , d. b. hume , t. rosenband , and d. j. wineland , `` optical clocks and relativity , '' science * 329 * , 16301633 ( 2010 ) .a. einstein , _ ber die spezielle und die allgemeine relativittstheorie _ ,24th ed .( springer , berlin , 2008 ) ; english trans . by r. w. lawson as _ relativity , the special and general theory : a popular exposition _ ( bonanza books , new york , 1961 ) | preoccupied with measurement , physics has neglected the need , before anything can be measured , to _ recognize _ what it is that is to be measured . the recognition of symbols employs a known physical mechanism . the elemental mechanism a damped inverted pendulum joined by a driven adjustable pendulum ( in effect a clock)both recognizes a binary distinction and records a single bit . referred to by engineers as a `` clocked flip - flop , '' this paired - pendulum mechanism pervades scientific investigation . it shapes evidence by imposing discrete phases of allowable _ leeway _ in clock readings ; and it generates a mathematical form of evidence that neither assumes a geometry nor assumes quantum states , and so separates _ statements of evidence _ from further assumptions required to _ explain _ that evidence , whether the explanations are made in quantum terms or in terms of general relativity . cleansed of unnecessary assumptions , these expressions of evidence form a platform on which to consider the working together of general relativity and quantum theory as explanatory language for evidence from clock networks , such as the global positioning system . quantum theory puts planck s constant into explanations of the required timing leeway , while explanations of leeway also draw on the theory of general relativity , prompting the question : does planck s constant in the timing leeway put the long known tension between quantum theory and general relativity in a new light ? |
a fundamental question in the theory of evolution has been how cooperation can emerge between selfish members .another question is why evolution takes place in terms of intermittent bursts of activities , which are the characteristics of dynamical systems in a ` critical ' state . here, we propose a generalized bak - sneppen ( bs ) model , which may solve the above two puzzles simultaneously .we take an approach of evolutionary game theory and use the prisoner s dilemma ( pd ) games to mimic the interactions among members .each member is identified by its stochastic strategy , specified by its ( history independent ) cooperation probability ( cp ) . here, a ` member ' can represent an individual in a species , an agent in an economical system or a species in an ecological system .the fitness of a member is given by the payoffs of the games with its neighbors .we then apply bs dynamics and replace the least fit member and its neighbors by new members with random cps .the neighbors of the non - cooperator are likely to vanish due to its low payoff , but the non - cooperator itself can also be removed through the bs mechanism . as the non - cooperators disappear , the overall cp increases , and a new comer ( with a random cp ) will have a lower cp than the increased average .therefore , the new comer tends to cause its neighbor to be the least fit , and the replacement activity likely occurs at or near the new comer s site .this invokes the spatio - temporal correlation between the least fit sites and can explain why replacements are episodic as well as how cooperation emerges .evolutionary game theory has been one of the most powerful tools in studying the dynamics of evolution .however , a simple straightforward application of game theory can not explain the strong cooperation between selfish " replicators observed in nature and society .for the evolution to construct a new , upper level of organization , cooperation amongst the majority of the population is needed .however , the game theoretical description of interactions between members usually leads to defections as evolutionarily stable strategies .natural selection , which has been a fundamental principle of evolution , prefers the species that beat off the others and oppose cooperation .there have been numerous studies looking for natural mechanisms for the evolution of cooperation among competitive members .recently , nowak presented a state - of - art review on the evolution of cooperation and discussed five known mechanisms : kin selection , direct reciprocity , indirect reciprocity , network reciprocity , and group selection .extensive studies provide the exact conditions for the emergence of cooperation for each of the five mechanisms .however , such conditions do not seem to be general enough to explain the cooperative phenomena observed everywhere .for example , for network reciprocity , the benefit - to - cost ratio of a cooperative behavior should be larger than the average degree , but this seems to be a rather strong assumption because the degrees are quite large in most cases in real population structures .also , there have been a great deal of studies on self - organized criticality in game theory , but their dynamics leading to the critical states are not directly connected to the emergence of cooperation . 
here , we consider an evolutionary game on networks and show that cooperation can emerge when the benefit to cost ratio is larger than just 1 if we use the bs process . when cooperators interact with defectors , they tend to disappear , giving rise to an assortment of cooperators . furthermore , this behavior emerges in the long run even with a small `` chain - death '' rate , , where the number of neighbors that get replaced is less than one . for a uniform or random arrangement of cooperators and defectors , more cooperators than defectors disappear for small , but in the long run , the bs process builds a self - organized structure so that the number of cooperators in the population increases . an influential model aimed to mimic the interactions between competitive members in a population is the pd game . it is one of the matrix games between two players who have two possible decisions , cooperation ( ) or defection ( ) . we consider a case in which the payoffs are calculated by the cost and the benefit of a cooperative behavior . if one player defects while the other cooperates , the defector receives the benefit b without any cost whereas the cooperator pays the cost and its payoff becomes -c . for mutual cooperation , both get the benefit but pay the cost , and their payoffs become b - c , while the payoffs for mutual defection are 0 . when we add c to all elements so that payoff can be directly interpreted as ( non - negative ) fitness , the payoff matrix becomes ( b , 0 ; b + 1 , 1 ) , with rows for cooperation and defection of the focal player and columns for cooperation and defection of the opponent , where we set c = 1 without loss of generality . with conventional competition processes , the matrix game shown above does not , in general , predict the evolution of cooperation . the birth - death process always predicts an evolution of defection . cooperation can emerge for death - birth or imitation processes in a structured population , but only with an ( unrealistically ) large value of the benefit - to - cost ratio for real populations . here , we consider the pd game interaction , but introduce the bs mechanism as the competition process , and assume that the least fit member and its neighbors are prone to disappear . each member is characterized by its strategy that determines when to choose the ` decisions ' or . we consider the history - independent stochastic strategies , and the phenotype of a member , say the member , is represented by its cp . the history - independent pure ( deterministic ) strategies , the `` always '' and the `` always '' , correspond to the limits of and , respectively . the fitness of a member is given by the sum of payoffs from its neighbors , and the member dies out if its total payoff is the minimum . the vacated site is occupied by a new member with a new cp , which is drawn randomly from 0 to 1 . neighbors of the least fit site may also be harmed in the process of establishing the steady interaction with the new comer . hence , we replace the neighbors of the least fit site by new members with the `` chain - death '' probability . ( figure caption : ( a ) the cp and ( b ) the rf . we show the configurations for the initial 6000 time steps of a system with and . both the cp and the rf are represented by colors , 0 by red and 1 by green . the least fit sites ( black sites in ( b ) ) and their two neighbors are where the replacement activity occurs . comparison between the configurations in ( a ) and their equivalents in ( b ) reveals that the least fit sites are located where their neighbors are less cooperative [ relatively red in ( a ) ] . the disappearance of the `` red '' neighbors beside the least fit site by the bs - mechanism shifts the overall system to green ( more cooperative ) with time . )
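before turning to the quantitative analysis , a minimal simulation sketch ( python ) of the dynamics just described may help . it is ours , not the authors code : it uses expected rather than sampled payoffs for the stochastic strategies , assumes a ring of nearest neighbors , and sets the cost to 1 so that b is the benefit ; the particular values of b , the chain - death rate and the run length are illustrative .

import random

N, b, p, T = 128, 1.7, 0.1, 20000          # ring size , benefit , chain-death rate , time steps (assumed values)
cp = [random.random() for _ in range(N)]   # cooperation probability of each member

def pair_payoff(ci, cj):
    # expected payoff to a member with cp ci against a neighbor with cp cj ,
    # using the matrix ( b , 0 ; b + 1 , 1 ) discussed above
    return ci * cj * b + (1 - ci) * cj * (b + 1) + (1 - ci) * (1 - cj)

def fitness(i):
    return pair_payoff(cp[i], cp[(i - 1) % N]) + pair_payoff(cp[i], cp[(i + 1) % N])

for t in range(T):
    worst = min(range(N), key=fitness)         # the least fit member
    cp[worst] = random.random()                # replaced by a new member with a random cp
    for nb in ((worst - 1) % N, (worst + 1) % N):
        if random.random() < p:                # neighbors die with the chain-death probability
            cp[nb] = random.random()

print("mean cp after", T, "steps:", sum(cp) / N)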
the disappearance of the `` red '' neighbors beside the least fit site by the bs - mechanism shifts the overall system to green ( more cooperative ) with time . , for ( a , c ) and ( b , d ) for systems with , 64 , 128 , and 256 . in ( a ) and ( b ), the overall behaviors of mcp are shown while the initial transient characteristics are shown in ( c ) and ( d ) . for , mcp decreases first and then increases while it monotonically increases for . , width=302 ] for a quantitative analysis , we measure the mean cp ( mcp ) , , of the populations and show the results in fig . 2 . here , represents the ensemble average over many different realizations of random initial configurations .note that the mcp also represents the overall fitness of the population because it is linearly related to mcp : f(t ) = = 2 + 2 ( b-1 ) c(t ) . in fig .[ f.2 ] , the mcps for four different system sizes , , 64 , 126 , and 256 , are shown for two different values of , , and 1 .we use for all figures in this paper , and all data are obtained from numerical simulations .because we have assigned a random cp initially , the mcp starts from 0.5 at . for ,the mcp decreases at the beginning and then increases to the steady values while it monotonically increases from the beginning for , as shown in figs .2(c ) and ( d ) .note that we have two different elements in mcp changes .replacement of the least fit member ( which is likely to have a high cp ) tends to cause the mcp to decrease while the replacement of its neighbors ( which probably have low cps ) likely results in an increased mcp .the competition between these two elements governs the early dynamics of the mcp .it can decrease initially when , where is the number of neighbors .for a sufficiently large system , there would be a site , , whose cp , , is arbitrarily close to one while those of its neighbors , , are almost zero .hence , the expectation of mcp changes , would be }$ ] and becomes negative for at the beginning . however , as time proceeds , the cp values develop spatio - temporal correlations , and they govern the long - time dynamics . initially , the isolated high - cp cooperators are likely to be the least fit member , and they are removed as time proceeds .then , surviving cooperators remain in the groups , and , thus , have high fitness .now , low - cp defectors can be the least fit member , especially when they are next to a very low - cp member .the replacement of these low - cp member by new members with random cps causes the mcp to increase .therefore , at a later time , the mcp easily becomes larger than the initial 0.5 even for .now , a new comer with a random cp will have a lower cp than the increased average of mcp .this in turn causes the least fit site to be likely located next to the new comer s site , resulting in avalanches of replacement activities ., their neighbors , , and members that are replaced , , are shown together with mcp , , for ( a , c ) , and ( b , d ) . in ( a ) and ( b ), the initial transient behaviors are shown while overall behaviors are shown in ( c ) and ( d ) .the system size is used for all cases ., width=302 ]we start from the population with random strategies .hence , there is no correlation between the cps initially , and we may understand the initial dynamics through the mean - field calculation .we first define the mean cp of the replacement sites ( before the replacement ) , c_rep = ( c_min + 2c_nei ) , where mean - field dynamics can be easily analyzed . 
here , is the average of the cps for the least fit members , and is that for the neighbors of the least fit members . on average , cps of sitesare updated each time . since the average of the newly assigned random cooperation rate is 0.5 , satisfies , = = 0.5 - c_rep .[ e.crep ] we measure and present them in fig .[ f.crep.t ] , together with the cps of the least fit members , , that of the replaced members , , and the mcp , , for and .the curves are , indeed , well described by eq .( [ e.crep ] ) . if we represent the numerical solutions of eq .( [ e.crep ] ) in the figure , they can not be distinguished from the curves from the simulations because they are almost identical . from fig .[ f.crep.t ] , we also see that enters its steady value in a relatively short period of time compared to and rapidly converges to its steady - state value of 0.5 . for , the initial is more than half and hence decreases to the steady value of 0.5 while it increases from the value below 0.5 for . for a sufficiently large system, the initial value of would be 1 while is 0 .hence , the initial value of would be , which is more than 0.5 for . in this transient time of , the dynamics of mcp , , would be mainly determined by the dynamics of .therefore , initially decreases for as does .however , after reaches a steady value , the correlation of the replacement sites mainly governs the dynamics , and begins to increase .let be the least fit member at time ; then , at time , is always updated , and are updated with probability . after replacement ,if the sum of the cps at these three sites , ( at the time ) , is small , at least one of , or sites , is likely to have small fitness .therefore , they will be easily replaced in a relatively short time . in other words , a new born member with small has a short lifetime and contributes less to the than those with large .this mechanism makes increase up to ( almost ) , and hence , the system becomes cooperative overall .thus , according to our model , the emergence of cooperation is intrinsically related to the dynamics leading to self - organized criticality ( soc ) .in the steady states for five different system sizes of , 32 , 64 , 128 and 256 with ( a ) and ( b ) . the system size dependence of the effective lower and upper thresholds and ( defined in the text ) are shown in ( c ) .legends of ( a ) are also applied to ( b ) . ]we now show that our model , in fact , drives the population into a soc state as in the original bs model .we measure the distributions of avalanche sizes and distances between successive least fit sites in the steady states and show that they follow power - law distributions .following bak and sneppen , we would like to define the size of an avalanche as the number of subsequent replacements at the least fit sites below the lower threshold in its fitness value .the fitness distributions share some characteristics of the bs model although their overall shapes are quite different .a crucial similarity is that the fitness distribution in the steady state becomes zero for fitness smaller than a lower threshold as the system size goes to infinity .the fitness distributions in the steady states for five different system sizes are shown in figs .[ f.4](a ) and ( b ) for and .as the system sizes increase , the peak positions of the fitness distribution move to the right to high values , and the peak widths become narrow . 
to estimate the threshold values and , we define the effective lower [ upper ] threshold [ as the value below [ above ] which the integrated distribution is 5 percent .we plot them against in fig .[ f.4](c ) for two different chain - death rates , and .there are no noticeable differences in the thermodynamic values for the two values . using linear fitting, we get rough estimates of the threshold values , and , for both values . of avalanche sizes .distributions with three different values of , , , and are measured in systems of in their steady states .data with show a most persistent straight line in the log - log scale fit , indicating the lower threshold for the system with .the black line is the least - squares fit of the data for and is given in a form of with .( b ) a distribution of the distances between successive minimum fitness sites in the steady states for the system of with . the black line is the least squares fit of the data in the form of with . ] for the avalanche size distribution , we need a more precise value of .we measure with several different values of around the estimated value .if the system is really in a soc state , we expect the avalanche size distribution to show a power - law distribution , for the exact value of for the given system .figure [ f.5](a ) shows the distribution of avalanche sizes in a system of size .we plot against on a log - log scale with three different values of around the value estimated from fig .[ f.4](c ) to pinpoint the threshold . for shown in fig .[ f.5](a ) , the avalanche size distribution is well fit by a power - law with .it remains as a line in the log - log plot up to an avalanche size about 20000 , indicating power - law distributions .the exponent obtained from a least - square fit of the form is .this value is consistent with the known exponent of the 1d bs model .the power law indicates that the evolution occurs in a dynamical criticality .we measure the avalanche distributions for other and and found the critical exponent to be independent of the benefit - to - cost ratio or the chain - death probability .we also measured the distance distribution between successive least fit sites . denoting the distance between successive minimum fitness sites by , we plot in fig . [ f.5](b ) .the distance distribution is measured in the steady states for the system of with . 
when the distribution is plotted against on a log - log scale , it also becomes a line , indicating power - law distributions with the slop .this exponent is also consistent with the known exponent of the 1d bs model .it is notable that our model belongs to the same universality class as the bs model in spite of the complexity in computing the fitness of members and the non - trivial dynamics of the population - fitness changes .we have considered the bs mechanism as a reproduction process with fitness given by a pd game payoff on a network structure .our observation may have more natural implication in economical systems because the bs process with chain bankruptcy is a more feasible scenario .it might be worthwhile analyzing weekly or monthly bankruptcy data and see if they follow a power - law distribution as our study suggests .we have simulated our model with other values of the benefit - to - cost ratio and see that cooperation emerges in a wide range of chain - death rates , as long as is larger than 1 .in contrast to a common belief , cooperation can emerge even with parameters that a population with random strategies decreases cooperation .this is possible because the bs mechanism builds dynamical correlations that suppress the long - term survival of non - cooperators even in the region where mean - field calculation predicts a decrease in cooperators .the same dynamical correlation leads to soc in replacement activities with the same exponents as the original bs model .the strategy space presented here is rather small .mixed but only history independent strategies are considered on a very simple population structure , a 1d lattice . however , we speculate that our main results , the emergence of cooperation and soc , are robust under variations in the population structure or the strategy space extension .in fact , the preliminary results with the extended strategy space show that the emergence of cooperation appears more easily and rapidly when the reactive strategies are included .this work was supported by the national research foundation of korea grant funded by the korean government(mest ) ( nrf-2010 - 0022474 ) .j. would like to thank kias for the support during the visit . | cooperation and self - organized criticality are two main keywords in current studies of evolution . we propose a generalized bak - sneppen model and provide a natural mechanism which accounts for both phenomena simultaneously . we use the prisoner s dilemma games to mimic the interactions among the members in the population . each member is identified by its cooperation probability , and its fitness is given by the payoffs from neighbors . the least fit member with the minimum payoff is replaced by a new member with a random cooperation probability . when the neighbors of the least fit one are also replaced with a non - zero probability , a strong cooperation emerges . the bak - sneppen process builds a self - organized structure so that the cooperation can emerge even in the parameter region where a uniform or random population decreases the number of cooperators . the emergence of cooperation is due to the same dynamical correlation that leads to self - organized criticality in replacement activities . |
millions of people edit wikipedia pages ; however , on average we find that only contribute to of their content . such a heterogeneous level of activity is reminiscent of the well - known and widely applicable law postulated by pareto , which states that of the effects are induced by of the causes .the example of wikipedia users reported here highlights how heterogeneous the activity of their users is , with both activity and degree following a power - law distribution . indeed , heavy - tailed distributions following a power - law have been observed in a variety of social systems ever since pareto reported his observation of the extreme inequality of wealth distribution in italy back in 1896 . in recent years , due to ubiquitous computerization , networking and obsessive data collection , reports of heavy - tailed distributions have almost become routine .following simple distributions such as those of wealth and income , certain structural properties of social systems were also found to follow heavy - tailed distributions .more specifically , the distribution of the number of ties of a person ( degree ) has been shown to fall in this group for a vast and still growing number of social networks .power - law degree distributions , called scale - free , represent one of the three general properties of social networks ( short distances and high clustering being the other two ) .a power - law degree distribution is not only the least intuitive and surprising property , but also the most well - studied and debated feature of networks since it was extensively found in the late 90s . immediately following the empirical measurements , a number of plausible models aiming at explaining the emergence of these distributions have been proposed .many models reproduce heterogeneous connectivity by amplifying small differences in connectivity , frequently stochastically emerging , using some kind of multiplicative process or `` preferential attachment '' .other models propose different optimization strategies leading to scale - free .a common attribute of all these models is that fat - tailed distributions emerge out of some kind of interaction between the basic system s elements .in fact , the question is not whether there exists a mechanism that could produce scale - free networks similar to the ones observed , but which of the many mechanisms suggested are more likely to actually play a significant role in each network formation .the data presented here suggest that there is a different underlying cause for heavy - tailed degree distributions which does not involve interactions between people .we investigate distinct social networks focusing on the relationship between users activity and degree , specifically , the number of posts , messages , or actions of a user , i.e. _ activity _ and the number of users establishing a link with her / him , i.e. the incoming degree , or _ degree _ , for short . both degree in the social network and the activity of a user exhibit power - law distributions , and , where and are the scale - free degree and activity exponents , respectively .
positively skewed distributions of human activity were recently reported in and we extend this result here for a number of datasets .more importantly , in all instances we find that activity causally determines degree of the same user , suggesting that the broad distribution of one could result from the broad distribution of the other .it is important to note that the studied actions are not likely to be driven by interaction with other people .activity and degree , as measured here , are taken from two different networks developed by the same pool of users , and so there is no reason to expect that they should depend on each other in some trivial fashion .surprisingly , however , the number of potential followers of a user ( degree distribution ) appears to be entirely random except for its mean value , which is tightly controlled by the volume of activity of that user .our observations convincingly point at the intrinsic activity of people as the driving force behind the evolution of the examined social systems and particularly the heterogeneity in user connectivity .the observed degree distribution in social systems may merely be a manifestation of the similarly wide distribution of human activity related to the system construction .these wide distributions in social collaborative networks can not be explained by interactive models since the observed actions are not likely to be caused by actions of other people .we have analyzed the activity of individuals collaboratively working over time on the construction of extensive electronic data sets : wikipedia in four different languages ( _ http://www.wikipedia.org_ ) , and a collaborative news - sharing web - site ( _ http://www.news2.ru_ ) .these datasets represent various domains of human activity and contain records of a vast number of individual user contributions to the collaboratively generated content ( see method ) . for each person , we analyze two properties defined in two independent layers : activity and degree . for instance , in wikipedia , the activity performed by users includes posting of new material and discussions about them .this is the activity layer .simultaneously , by tracing users contributing to other users personal or talk pages , we recover the underlying network of wikipedia contributors personal communication or social network .the resulting network reliably represents actual interactions of wikipedia users and thus defines the social network layer . the number of incoming connections , i.e. others reaching out to the user in this network , represents the degree . in principle , activity and degree as defined here are unrelated .similarly , news2.ru possesses the same two - layer structure of activity and degree ( see method ) .we start by analyzing the distributions of various types of activities performed by users in these systems .very few of the most active users perform the vast majority of the work so that the activity levels frequently span five orders of magnitude ( fig .[ fig1]a , b ) .for instance , when analyzing the activity on a given wikipedia page , only 5% of users contribute of the edits ( fig .[ fig5 ] in method ) .this surprising result is similar to the rule postulated by pareto to describe the unequal distribution of wealth .indeed , a power - law faithfully characterizes the activity distributions in fig .the exponent of the activity distribution for spanish language wikipedia is ( fig .
[ fig1]a ) , while the activity distribution for voting in stories in news2.ru is ( fig .[ fig1]b , detailed fitting procedure in method ) . the activity distributions in fig .[ fig1]a represent the number of users as a function of the number of wikipedia edits in four languages .interestingly , different populations performing similar activity in separate instances of similarly - built social systems exhibit identical activity distributions .figure [ fig1]b shows several different activities performed by the same population of users at the social news aggregator news2.ru .these activities differ in their complexity .we consider submission of posts to be the most difficult and time consuming of the four activity types because it typically requires the user to locate the content on - line , evaluate its quality and publish it at the news2.ru web site by filling a form with multiple fields .considering the task complexity , writing comments is arguably an easier task than posting .there are on average nearly three comments for every published post .these two content - generating tasks are followed by ranking of posts and comments .the differences in the underlying complexity of the tasks seem to explain the difference in the range and slope of the observed distributions plotted in fig .[ fig1]b .we further observe the social networks emerging in each of these systems .these networks serve different functions . in wikipedia they arise due to the direct interaction required to coordinate common tasks .in particular , we derive social networks from the record of edits of personal user pages by other users - a common way of personal communication in wikipedia ( the web site rules forbid activity - related confidential communication between its editors ) . in news2.ru the social network emerges through declaration of personal attitudes - a user may indicate that he / she likes , dislikes or is neutral to any other user .another social network arises from a set of explicit ( directed ) declarations of friendship between news2.ru users .figure [ fig1 ] c and d present the degree distributions in these networks .broad distributions are measured and present in each system , suggesting a scale - free behavior in their degree distribution .the exponent of the degree distribution for spanish wikipedia is ( fig . [ fig1]c ) , and for the degree distribution in news2.ru is ( fig .[ fig1]d ) .the present data suggest a simple explanation of the origin of degree distributions .we first observe that the number of incoming links aggregated by a person in all these social networks is highly correlated to the individual s activity .the correlation between the degree and the activity measurements is presented in table [ table1 ] .it is measured here as the correlation of the log - values to capture the gross relationship of these two variables across different orders of magnitude .more importantly , the dependence analysis below suggests that the broad distribution of activity is the driving force of scale - free degree as will be discussed next .it is important to emphasize that in order to avoid direct and rather obvious correlation between different aspects of activity of the same person , we test the correlation of an individual s activity to her degree determined by actions of his / her followers rather than his / her own .it is possible that these actions are driven by reciprocity , i.e. , a person is simultaneously active in the community and in constructing her social network inspiring others to link back to her .
to determine the precise nature of the relationship , we analyze the joint distribution of degree and activity , ( fig .[ fig2]a ) .we find that the mean degree for a given level of activity follows a smooth monotonic function of ( fig .[ fig2]b ) , whereas the opposite is not true , i.e. , the mean activity does not seem to be tightly determined by degree ( fig .[ fig2]c ) . a similarly tight relationship exists for the standard deviation of the degree distribution for specific values of the activity ( fig .[ fig2]d ) , but , again , the reverse is not true ( fig .[ fig2]e ) .the conditional mean and standard deviation of degree ( conditioned on activity ) show a tight relationship with approximately unit slope ( fig .[ fig2]f ) .however , the , values conditioned on degree are more variable ( fig . [ fig2]g ) . based on these observations we hypothesize that the conditional degree distribution may be scale invariant with scale entirely determined by activity : . here, this functional dependence of scale can be estimated as the mean activity for a given : .indeed , we observe that the conditional degree distribution appears to follow a geometric distribution for all : this theoretical distribution provides a remarkably accurate fit to the first two sample moments of degree for a given level of activity as shown in fig .we plot the standard deviation versus mean degree for given activity for four wikipedia databases .the curves follow a smooth , monotonically increasing functional form which is almost identical for all datasets ( as one would expect for activity conditioning degree ) .when the analysis is repeated for activity conditioned on degree the variables do not appear to follow a tight relationship .the tight relationship between versus conditioned on activity follows asymptotically a straight line with unit slope , which follows exactly the geometric distribution eq .( [ geo ] ) . in fig .[ fig3 ] , we compare the data to the analytic relationship between mean and standard deviation for geometric distribution eq .( [ geo ] ) : and , where is the parameter of geometric distribution .the data fit this theoretical curve surprisingly well for the four displayed languages of wikipedia ( in average ) .the previous findings can be understood with the following hypothesis h1 : , activity deterministically affects the mean degree , but degree is otherwise random ( fig .[ fig4]a ) .note that for positive discrete variables like the degree with a given mean ,the highest entropy or least informative and most random distribution is achieved by the geometric distribution as we find above .the geometric distribution is analogous to exponential distribution in statistical mechanics , which maximizes entropy for continuum variables with fix mean .we also tested the inverse hypothesis h2 : , degree deterministically affects mean activity , , and activity is otherwise random . 
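the mean – standard deviation relation for the geometric distribution invoked above is easy to verify numerically . for a geometric distribution on 0 , 1 , 2 , ... with mean mu , the standard deviation equals sqrt(mu*(mu+1)) , which approaches the unit - slope line sigma approximately equal to mu for large means , as stated in the text . the snippet below checks this with sampled conditional moments ; the particular activity values and the assumption that the conditional mean degree is simply proportional to activity are illustrative ( numpy s geometric sampler counts trials rather than failures , so one is subtracted to move the support to 0 , 1 , 2 , ... ) .

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative activity levels; the mean degree for a given activity is assumed
# here to be simply proportional to the activity (the measured curve is monotonic
# but its exact form is not reproduced in this sketch)
activities = np.array([2.0, 5.0, 20.0, 100.0, 500.0])

for a in activities:
    mu = a                        # assumed conditional mean degree E(k | a)
    p = 1.0 / (mu + 1.0)          # geometric parameter giving that mean
    k = rng.geometric(p, size=200000) - 1     # support 0, 1, 2, ...
    theory_sigma = np.sqrt(mu * (mu + 1.0))
    print(f"a={a:7.1f}  mean={k.mean():9.2f}  std={k.std():9.2f}  "
          f"sqrt(mu*(mu+1))={theory_sigma:9.2f}")
```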
the goodness - of - fit of these two analytic models to histograms of h1 : activity degree or h2 : degree activity was measured with the -square statistics averaged over activity or degree respectively .the likelihood that the observed distributions match h1 or h2 was assessed using surrogate data generated with monte - carlo sampling to estimate the chance occurrence of these averaged -square values .the results for the spanish language wikipedia data indicate that we can not dismiss the correctness of h1 ( fig .[ fig4]b ) with a confidence of higher than 95% ( ) but that h2 can be soundly dismissed ( the chance of the corresponding -square value occurring at random is ) .the same is true for all other datasets ( see table [ table1 ] ) . in all datasetsthe likelihood of h1 is several orders of magnitudes larger than h2 and thus we accept model h1 , which states that activity determines degree . given the explicit model of a geometric distribution for of hypothesis h1 , and the observed power - law distribution for activity , , one can explicitly derive the expected degree distribution.the conditional degree distribution closely matches a geometric distribution ( fig .[ fig3 ] ) . for large mean values , say , it can be very well approximated by its continuous equivalent , the exponential distribution i.e. .therefore : thus the exponent is predicted to be where defines for large as shown in figure [ fig2]b .the observed exponents closely follow these predicted exponents for all datasets ( table [ table1 ] ) .the causal inference argument provided here is borrowed from ideas recently developed in causal inference . there , a deterministic functional dependence of cause on mean effect is hypothesized and deviations from this mean effect are assumed to have fixed standard deviation but to be otherwise random . with two variables for which one wishes to establish causal direction ,the model is evaluated in both directions and the more likely one is postulated to indicate the correct causal dependence , as we have done here .this approach has been demonstrated to give the correct causal dependence for a large number of known causal relationships , and theoretical results indicate that there is only an exceedingly small class of functional relationships and distributions for which this procedure would give the incorrect answer .such an identifiability proof does not yet exist for the present case where the standard deviation is not constant .nevertheless , our explicit model of a deterministic effect of human activity on the success of establishing social links is the simplest possible explanation for the data available to us . 
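the comparison of hypotheses h1 and h2 described above can be sketched as follows . the snippet uses illustrative stand - in data ( degrees grouped into a few activity bins ) , an assumed minimum expected count for the chi - square sum , and 1000 surrogate datasets ; the real analysis uses the measured histograms and bins of the actual datasets . only the h1 branch is shown here ; the h2 branch is obtained by exchanging the roles of activity and degree .

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_chi2(deg_by_bin):
    # average chi-square distance, over activity bins, between the observed degree
    # histogram and a geometric distribution with the bin's mean degree (H1)
    values = []
    for k in deg_by_bin:
        mu = k.mean()
        p = 1.0 / (mu + 1.0)
        support = np.arange(k.max() + 1)
        observed = np.bincount(k, minlength=len(support))
        expected = len(k) * p * (1.0 - p) ** support    # geometric pmf on 0, 1, 2, ...
        mask = expected > 5.0                           # assumed validity cut
        values.append(np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask]))
    return float(np.mean(values))

# illustrative stand-in data: degrees grouped into three activity bins
deg_by_bin = [rng.geometric(1.0 / (mu + 1.0), size=2000) - 1 for mu in (3.0, 10.0, 40.0)]
observed_stat = avg_chi2(deg_by_bin)

# surrogate ensemble under H1: same bin sizes and means, degrees redrawn geometrically
surrogates = []
for _ in range(1000):
    fake = [rng.geometric(1.0 / (k.mean() + 1.0), size=len(k)) - 1 for k in deg_by_bin]
    surrogates.append(avg_chi2(fake))

p_value = float(np.mean(np.array(surrogates) >= observed_stat))
print("averaged chi-square:", observed_stat, " p-value under H1:", p_value)
```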
for a different dataset a different probabilistic model may be better suited .the individual activity of people deterministically affects the mean success at establishing links in a social network , and the specific degree of a given user is otherwise random following a maximum entropy attachment ( mea ) model .the mea model is exemplified in fig .[ fig4]a and consists of the following steps : introduce a node with links , where is drawn from a probability given by the activity of the node .the activity has an intrinsic power - law distribution .then , link the links at random following the maximum entropy principle with the concomitant geometric distribution .this mechanism contrasts with the preferential attachment mechanism where each link attaches to a node with a probability proportional to the number of links of that node .a possible mechanism by which a geometric distribution could arise is based on the notion of `` success '' . in this model , the activity of users aims to achieve a specific outcome ( a wikipedia project ) , and each new incoming link can aid in achieving this desired outcome ; once the goal is achieved the user stops collecting links .the probability of the desired event in this model is .hence , those users working so very hard may have an exceedingly unlikely event they are aiming for .but eventually , they too will succeed , and will turn their attention away from the on - line social network .the present data indicates that the degree distribution is maximally random except for what can be determined solely from the volume of a user s activity .does this mean that the precise content of a user s actions ( the meaning and quality of the edits in wikipedia , messages , etc ) is immaterial in determining his / her success in establishing relationships ?one can only hope that small deviations from this maximum entropy attachment model will become more pronounced with increasing data - set sizes , which can then point us to the benefits of well thought out and carefully executed actions , especially in specialized large - scale collaborative projects like wikipedia .whether the dynamics of preferential attachment is consistent with the maximum entropy distribution of degree remains to be established .what is certain is that distributions of levels of activities in all tested populations are heavy - tailed , indicating highly varying levels of involvement of users in collaborative efforts .we showed here that this fact alone is sufficient to produce the heavy - tailed distribution of degree observed throughout social networks .therefore , previous interactive models may not be necessary .the present result shifts the burden of proof to explaining the origin of the incredible diversity in human effort observed here spanning five orders of magnitude .the number of actions contained in the datasets ranges from hundreds of thousands to hundreds of millions of user actions .
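the maximum entropy attachment recipe spelled out above can be turned into a short generative sketch . the activity exponent , the proportionality constant and the linear scaling of the conditional mean degree with activity ( beta = 1 ) are illustrative assumptions ; under these assumptions , integrating the geometric conditional distribution over the power - law activities gives a degree exponent of 1 + ( gamma_a - 1 ) / beta , which the snippet checks with a crude tail estimate .

```python
import numpy as np

rng = np.random.default_rng(3)

n_users = 200000
gamma_a = 2.3        # assumed activity exponent (illustrative)
beta = 1.0           # assumed scaling of the conditional mean degree with activity
a_min = 1.0

# continuous power-law activities by inverse-transform sampling
u = rng.random(n_users)
activity = a_min * (1.0 - u) ** (-1.0 / (gamma_a - 1.0))

# maximum-entropy attachment: degree is geometric with mean proportional to activity
mean_k = 2.0 * activity ** beta
p = 1.0 / (mean_k + 1.0)
degree = rng.geometric(p) - 1

# crude tail-exponent estimate (continuous maximum-likelihood form) above a cutoff
k_min = 50
tail = degree[degree >= k_min].astype(float)
gamma_k_hat = 1.0 + len(tail) / np.sum(np.log(tail / k_min))

print("predicted degree exponent:", 1.0 + (gamma_a - 1.0) / beta)
print("estimated degree exponent:", gamma_k_hat)
```

the discreteness corrections to the tail estimator are ignored here ; the point of the sketch is only that a power - law degree distribution emerges without any interaction between the nodes .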
from the editing on wikipedia , to the votes , to commentaries on news2.ru , these actions represent different and natural underlying dynamics of social networks , since they range from collaborative interaction ( wikipedia ) to discussions about different interesting aspects of human behavior ( news2.ru ) , which are intrinsic properties of the social nature of the web .we have collected details about user activity in the wikipedia project and reconstructed the underlying social network .in addition to the widely used term and category pages , wikipedia provides special pages associated with specific contributing authors and discussion ( talk ) pages maintained alongside each of these pages .these user pages are widely used by wikipedia contributors for coordination behind the scenes of the project .in fact , interaction via user and discussion pages dominates all other communication methods .however , communication via personal user pages ( and the corresponding discussion pages ) differs from the topic - associated talk pages in that it is explicit person - to - person communication rather than general topic specific , usually impersonal communication . by tracing users contributing to other user s personal or talk pages , we recover the underlying network of wikipedia contributor s personal communication .not surprisingly , as presented in the next section , the obtained social networks show a scale - free degree distribution , typically observed in a variety of social networks analyzed so far .the other data set is a de - identified record of activities of the social news aggregator news2.ru .the record contains all actions performed by the community members over more than three years of collaborative selection and discussion of news - related content .these user - related actions include such events as submission of news articles and comments , as well as preference - revealing actions such as voting for articles ( `` digg '' or `` bury '' , using digg.com language ) and for other users comments .in addition to the trace of user activity , the data contains an explicit social network layer . each user may publicly declare his / her ( positive , neutral or negative ) attitude to any other user .considering the personal flavor of the rather emotional way people interact through commentary threads , this list of attitudes when aggregated can be perceived as a social network .in addition , users maintain a list of friends , usually including users most favorable to them . these networks are directional , which allows us to focus on the incoming links , since they can not be controlled by the target individual , but only by his / her friends .each of these systems represents a different approach to collaborative content creation .the wikipedia editors interact to create the same content collaboratively so that the content contributed by one user can be complemented , altered or completely removed by others .news2.ru represents a mixed case in which the content is contributed individually , but collaboratively ranked .given these fundamental differences in user activity and network dynamics , the similarities between these systems reported below are particularly revealing . to get the exponents and of the power - law distributions , we present a rigorous statistical test based on maximum likelihood methods .take the degree distribution as an example .we fit the degree distribution assuming a power law within a given interval .
for this , we use a generalized power - law form where and are the boundaries of the fitting interval , and the hurwitz function is given by .we use the maximum likelihood method , following the rigorous analysis of clauset et al .the fit was done in an interval where the lower boundary was . for each value we fix the upper boundary to , where is the maximal degree .we calculate the slopes in successive intervals by continuously increasing and varying the value of . in this way , we sample a large number of possible intervals . for each one of them , we calculate the maximum likelihood estimator through the numerical solution of , where are all the degrees that fall within the fitting interval , and is the total number of nodes with degrees in this interval . the optimum interval was determined through the kolmogorov - smirnov ( ks ) test . for the goodness - of - fit test , we use the monte carlo method described in . for each possible fitting interval , we calculate the kolmogorov - smirnov statistics for the obtained cumulative distribution function .then we choose the interval with the minimal as the best fitting interval and take the in this interval as the final result .as to the standard error estimation , we adopt the method in . the standard error on , which is derived from the width of the likelihood maximum , is , where is the number of data .although the fitting method mentioned above is rigorous , it is only suitable for fitting probability density distributions .when we fit the data , we use another fitting method .the procedure for determining the fitting interval is similar . in each fitting interval , the fittings were done using ordinary least squares methods .the goodness of fitting was estimated through the coefficient of determination , , where . the value of is used as a measure of how reliably the fitted line describes the observed points , and is often described as the ratio of variation that can be explained by the fitted curve over the total variation .we assume that any value above represents an accepted fitting .the final result is the average of the accepted exponents . in fig . [ fig5 ] , each dot represents a distinct wikipedia project page .the horizontal axis measures the total number of edits for each project .the vertical axis represents the fraction of contributors to that project who performed of edits on that project .this fraction drops fast ( as a power law ) as the number of edits grows .this suggests that the largest projects are dominated by a few very dedicated users .perhaps more representative are the mean values ; the vertical line indicates the average number of edits and the horizontal line marks the fraction of users contributing of the work in the average across projects ( approximately ) . the accuracy of fit of the data to the theoretical geometric distribution is measured as the goodness - of - fit to the conditional histogram . as an example , consider h1 for the spanish wikipedia data : for the theoretical distribution we use for each activity the mean degree as shown in fig .[ fig2]b . the value is then averaged over all activity bins shown in that figure .
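the interval - scanned maximum likelihood fit described in the method above can be sketched as follows . a grid search over candidate exponents stands in for the numerical solution of the likelihood equation ( which involves the hurwitz zeta function ) , the normalization is computed as a finite sum over the truncated support , and the fitting interval is selected by the smallest kolmogorov - smirnov distance ; the synthetic sample , the exponent grid and the range of scanned lower cutoffs are illustrative .

```python
import numpy as np

def fit_truncated_power_law(data, k_lo, k_hi, gammas=np.arange(1.2, 4.0, 0.01)):
    # ML fit of P(k) = k**-gamma / Z(gamma) on the integer interval [k_lo, k_hi];
    # the grid search over gammas stands in for solving the likelihood equation
    k = data[(data >= k_lo) & (data <= k_hi)].astype(float)
    if len(k) < 50:
        return None
    support = np.arange(k_lo, k_hi + 1, dtype=float)
    logZ = np.log(np.power.outer(support, -gammas).sum(axis=0))
    loglik = -gammas * np.log(k).sum() - len(k) * logZ
    gamma = float(gammas[np.argmax(loglik)])
    # Kolmogorov-Smirnov distance between model and empirical CDF on the interval
    pmf = support ** -gamma
    pmf /= pmf.sum()
    model_cdf = np.cumsum(pmf)
    emp_cdf = np.searchsorted(np.sort(k), support, side="right") / len(k)
    return gamma, float(np.abs(emp_cdf - model_cdf).max()), len(k)

def best_fit(data, max_lower=50):
    # scan lower cutoffs and keep the exponent with the smallest KS distance
    k_hi = int(data.max())
    best = None
    for k_lo in range(1, max_lower + 1):
        res = fit_truncated_power_law(data, k_lo, k_hi)
        if res is not None and (best is None or res[1] < best[1][1]):
            best = (k_lo, res)
    return best

# illustrative usage on a synthetic sample with a power-law tail
rng = np.random.default_rng(2)
sample = np.round((1.0 - rng.random(50000)) ** (-1.0 / 1.5)).astype(int)
sample = sample[sample >= 1]
print(best_fit(sample))
```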
to testif this observed average is consistent with chance assuming h1 we generate surrogate data following h1 : for each given activity , we generate the same amount of random numbers from a geometric distribution with the same mean values , calculate the values and again , average across activities .we draw such samples and obtain a distribution of average ( fig .[ fig4]b ) .the chance that the for the spanish wikipedia data occurred by chance ( p - value ) is the fraction of times the surrogate data provided a value larger than the one observed ( red line in fig.[fig4]b ) .the analysis for h2 is analogous using the data as shown in fig .the resulting p - values for all datasets can be found in table i. 99 v. pareto , _cours deconomie politique _( f. rouge , luzanne , 1896 ) . b. a. huberman and l. a. adamic , internet : growth dynamics of the world - wide web , _ nature _ * 401 * , 131 ( 1999 ) .r. l. axtell , zipf distribution of u.s .firm sizes , _ science _ * 293 * , 1818 ( 2001 ) . c. castellano , s. fortunato , and v. loreto , statistical physics of social dynamics , _ rev . mod .phys_. * 81 * , 591 ( 2009 ) .barabsi , the origin of bursts and heavy tails in human dynamics , _ nature _ * 435 * , 227 ( 2005 ) .d. rybski , s. v. buldyrev , s. havlin , f. liljeros , and h. a. makse , scaling laws of human interaction activity , _ proc .natl . acad .usa _ * 106 * , 12640 ( 2009 ) .v. m. yakovenko and j. b. rosser , statistical mechanics of money , wealth , and income , _ rev .phys . _ * 81 * , 1703 ( 2009 ) .barabsi and r. albert , emergence of scaling in random networks , _ science _ * 286 * , 509 ( 1999 ) .g. caldarelli , a. capocci , p. de los rios , and m.a .muoz , scale - free networks from varying vertex intrinsic fitness , _ phys ._ * 89 * , 258702 ( 2002 ) .a. vzquez , a. flammini , a. maritan , and a. vespignani , modeling of protein interaction networks , _ complexus _ * 1 * , 38 ( 2003 ) . g. bianconi and a .-barabsi , competition and multiscaling in evolving networks , _ europhys .lett . _ * 54 * , 436 ( 2001 ) .g. caldarelli , _ scale - free networks : complex webs in nature and technology _ ( oxford univ press , oxford , 2007 ) .d. watts and s. strogatz , collective dynamics of small - world networks , _ nature _ * 393 * , 440 ( 1998 ) .m. faloutsos , p. faloutsos , and c. faloutsos , on power - law relationships of the internet topology , _ proceedings of the conference on applications , technologies , architectures , and protocols for computer communication _( acm , new york , 1999 ) , pp 251262 .m. e. j. newman , the structure and function of complex networks , _ siam rev ._ * 45 * , 167 ( 2003 ) .m. mitzenmacher , a brief history of generative models for power - law and lognormal distributions , _ internet mathematics _ * 1 * , 226 ( 2004 ). g. u. yule , a mathematical theory of evolution , based on the conclusions of dr .j. c. willis , f.r.s ., _ philos .b _ * 213 * , 21 ( 1925 ) . h. a. simon , on a class of skew distribution functions , _ biometrika _ * 42 * , 425 ( 1955 ) .b. mandelbrot , _ communication theory , ed .w. jackson _ ( butterworth , london , 1953 ) , pp .r. m. dsouza , c. borgs , j. t. chayes , n. berger , and r. d. kleinberg , emergence of tempered preferential attachment from optimization , _ proc .usa _ * 104 * , 6112 ( 2007 ) .f. papadopoulos , m. kitsak , m. .serrano , m. bogu , and d. krioukov , popularity versus similarity in growing networks , _ nature _ * 489 * , 537 ( 2012 ) .j. leskovec and e. 
horvitz , planetary - scale views on a large instant - messaging network , _ proceedings of the 17th international conference on world wide web _ , pp .915924 ( 2008 ) .perra , n. , gonalves , b. , pastor - satorras , r. & vespignani , a. activity driven modeling of time varying networks ._ scientific reports _ * 2 * , 469 ( 2012 ) .r. kumar , j. novak , p. raghavan , and a. tomkins , structure and evolution of blogspace , _ communications of the acm - the blogosphere _ * 47 * , 35 ( 2004 ) .w. h. hsu , j. lancaster , m. s. r. paradesi , and t. weninger , structural link analysis from user profiles and friends networks : a feature construction approach , _ proceedings of the international conference on weblogs and social media _ , pp 7580 ( 2007 ) .d. liben - nowell and j. kleinberg , tracing information flow on a global scale using internet chain - letter data , _ proc .usa _ * 105 * , 4633 ( 2008 ) . f. topse , information theoretical optimization technique , _ kybernetika _ * 15 * , 8 ( 1979 ) .j. pearl , causal inference in statistics : an overview , _ statistics surveys _ * 3 * , 96 ( 2009 ) .p. hoyer , d. janzing , j. mooij , j. peters , and b. schlkopf , nonlinear causal discovery with additive noise models , _ proceedings of the conference neural information processing systems _ , ( 2009 ) .k. zhang and a. hyvrinen , on the identifiability of the post - nonlinear causal model , _ proceedings of the 25th conference on uncertainty in artificial intelligence _( auai press , arlington , 2009 ) , pp .d. janzing , j. mooij , k. zhang , j. lemeire , j. zscheischler , p. daniuis , b. steudel , and b. schlkopf , information - geometric approach to inferring causal directions , _ artificial intelligence _ * 182 * , 1 ( 2012 ) .a. clauset , c. r. shaliz , and m. e. j. newman , power - law distribution in empirical data , _ siam rev ._ * 51 * , 661 ( 2009 ) .l. k. gallos , p. barttfeld , s. havlin , m. sigman , and h. a. makse , collective behavior in the spatial spreading of obesity , _ sci .* 2 * , 454 ( 2012 ) .we thank g. khazankin , research institute of physiology sb rams for kindly providing access to invaluable data on news2.ru user activity . the research is supported by nsf emerging frontiers , arl , fp7 project socionical and multiplex , cnpq , capes , and funcap .h.a.m . , s.h . and j.s.a .designed research . l.m .prepared data . l.m ., l.c.p . and s.d.s.r .analyzed the data .all authors wrote , reviewed and approved the manuscript .competing financial interests : the authors declare no competing financial interests . fig .1 . probability distribution of activities and degree . 
( a ) probability density function of wikipedia contributors as a function of the number of performed page edits in four languages .( b ) probability density function of news2.ru for five different activities . lines indicate power - law fitting for spanish and stories with the maximum likelihood methods .( c ) probability distribution of degree for social networks as a function of number of links between wikipedia contributors .degree represents the number of links other users establish with a given user .( d ) distribution for networks of relationship ( positive / negative ) between users of the news2.ru web portal and users friendships . fig . 2 . analysis of the joint distribution of activity and degree .( a ) scatter plot of degree and activity for each user in the wikipedia spanish dataset .( b ) mean degree for given activity .( c ) mean activity for given degrees .( d ) standard deviation of degree for given activity .( e ) for given degree .( f ) relationship between standard deviation of degree and the mean value for given activity .inset is the theoretical fit of geometric distributions for spanish wikipedia .( g ) versus for given degree . fig . 3 . test of `` maximum entropy attachment model '' via the geometric distribution . theoretical relationship of mean and standard deviation for geometric distribution ( solid curve ) and data points for wikipedia in four languages . fig . 4 . causal hypotheses and test result . ( a ) schematic diagrams for hypotheses h1 and h2 .h1 : mean degree is determined by activity through function .then degree is randomly distributed according to the conditional probability distribution .h2 is the other way around .( b ) and ( c ) results of monte - carlo simulation with samples following h1 and h2 for the spanish wikipedia data .the vertical red lines show the goodness - of - fit of the actual data to h1 and h2 , respectively .the empirical analysis clearly favors h1 over h2 . | the probability distribution of the number of ties of an individual in a social network follows a scale - free power - law . however , how this distribution arises has not been conclusively demonstrated in direct analyses of people s actions in social networks . here , we perform a causal inference analysis and find an underlying cause for this phenomenon . our analysis indicates that the heavy - tailed degree distribution is causally determined by the similarly skewed distribution of human activity . specifically , the degree of an individual is entirely random - following a `` maximum entropy attachment '' model - except for its mean value which depends deterministically on the volume of the users activity . this relation can not be explained by interactive models , like preferential attachment , since the observed actions are not likely to be caused by interactions with other people . |
in recent times many models have been proposed to study the dynamics of granular materials like sand in different practical situations ( see e.g. for an overview ) . in many applications one has to store this kind of material in view of its later use , so that filling and emptying a container are crucial processes .granular materials adapt their shape to the container ( like a fluid does ) , but in general the free surface of a heap strongly depends on its formation process , for example on the intensity and dislocation of the source .moreover , the pressure on the bottom of the structure does not grow linearly with the height of the pile , since part of it is released against the walls through arcs of grains , a fact that can even produce silo explosions and collapse .in this paper we deal with the simple problem of pouring granular matter at low intensity into a silo of given cross - section : it is known from the experiments ( see e.g. ) that if the source is independent of time , the free surface of the growing heap evolves towards a well - defined profile which then retains its shape while growing with a constant velocity . the two - layer model of hadeler and kuttler , which basically describes the formation of sandpiles over an open bounded plane table , is a system of two partial differential equations for a standing layer and a small rolling layer of grains running down the slope .it can be adapted in a natural way to the case of the silo problem ( see again and more specifically ) by adding a suitable boundary condition on the silo walls . if denotes the vertical source of material and the final time , then the model takes the form the nonlinear term which appears in both the equations with opposite sign expresses the exchange term between the two layers , being the maximal ( _ critical _ ) slope that the material can support without flowing down , and respectively the mobility and the collision rate parameters .the boundary condition comes from the total mass conservation law ( when ) , which suggests but the first equation in ( [ hk ] ) is an advection equation for in the direction of , so that boundary conditions can not be imposed for the outgoing direction of its flow and condition ( [ bc ] ) reduces to a pure homogeneous neumann condition on .existence and uniqueness results for the solution of a system similar to ( [ hk ] ) under general assumptions on the data have been recently discussed in .if the source is constant in time the free profile is expected to evolve towards a _similarity solution _ , according to the following definition .we call a pair of functions a similarity solution of system ( [ hk ] ) in if there exists a positive constant such that the functions solve the system ( disregarding the initial condition ) .it can be considered as a sort of equilibrium for the model : the rolling layer is constant in time , while the free surface keeps growing by a rigid translation of its shape at a constant rate . in the toy one - dimensional ( 1d ) case for the cross - section , these quasi - stationary profiles can be expressed by closed integral formulas ( see and next section ) in terms of the source and of the other problem parameters .
in two dimensions ( 2d ) it can be proved that similarity solutions exist , but their expressions are known only in special cases .that is why in section 3 we discuss a finite element ( fe ) characterization of such solutions in the general case .finite difference ( fd ) numerical schemes for the model of hadeler and kuttler have been studied in and in the case of growing sandpiles on a bounded open table , and in on a table partially bounded by vertical walls . in section 4 we will adapt such schemes to the present problem of silos in order to show through the experiments of section 5 that the growing heaps generated by the evolving model perfectly match the similarity solutions .we recall the basic theorem of existence for similarity solutions in the case of a constant in time source term .( ) assume ; then there exists a similarity solution for problem ( [ hk ] ) ( in the sense of definition 1 ) , with unique up to an additive constant and . the main idea in the proof is to consider the basic properties of the flux function . if one looks for it in the form of a gradient ( ) , then its potential should solve the semidefinite neumann problem for the laplacian with , which has a solution ( unique up to an additive constant ) due to the zero - mean property of . then we can derive from , and also deduce : so that can be determined up to an additive constant .formula ( [ vel ] ) says that the growth velocity of the similarity profile coincides with the average precipitation , that is with the mean value of the source intensity and is independent of the other parameters .theorem 1 does not say anything about uniqueness : in principle other solution pairs could exist such that is not a gradient . anyway , numerical experiments of section 5 show that the solutions given by the previous theorem are the only significant ( physical ) ones , since the evolving profiles tend asymptotically to them . in 1d the previous result yields explicit expressions for similarity solutions .if for example coincides with the interval , one finds ( see for details ) : where such expressions give several pieces of information about the solutions : * if for any , then , and , that is the free surface grows remaining flat , as expected ; * if the source is not identically zero , then everywhere , even at the boundary , confirming what was already stated about condition ( [ bc ] ) ; * , that is the standing layer never exceeds the critical slope ; * the rolling layer thickness is directly proportional to the source intensity and inversely proportional to . in higher dimensions explicit formulas for similarity solutions can not be deduced in general , and we will see in the next section how to detect them numerically .
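a short numerical sketch of the 1d construction just described may be useful . since the model equations are not reproduced above , the sketch assumes the standard form of the two - layer system , u_t = gamma*v*(alpha - |u_x|) and v_t = (mu*v*u_x)_x - gamma*v*(alpha - |u_x|) + f ; with this assumed closure a similarity solution ( rolling layer constant in time , standing layer growing at the mean rate V of the source ) requires the flux phi = mu*v*u_x to satisfy phi' = V - f with zero flux at the walls , after which v and u_x follow pointwise . all parameter values and the source profile are illustrative .

```python
import numpy as np

# assumed two-layer closure (the equations are stripped in the text above):
#   u_t = gamma * v * (alpha - |u_x|)
#   v_t = (mu * v * u_x)_x - gamma * v * (alpha - |u_x|) + f
# similarity solution: u_t = V (mean of f), v_t = 0, zero flux at the walls.

L, n = 1.0, 400
x = np.linspace(0.0, L, n + 1)
dx = x[1] - x[0]
alpha, mu, gamma = 1.0, 1.0, 1.0                        # illustrative parameters
f = np.where(np.abs(x - 0.5 * L) < 0.1 * L, 5.0, 0.0)   # illustrative central source

# growth velocity = average of the source (trapezoidal rule, so the flux closes to 0)
V = np.sum(0.5 * (f[1:] + f[:-1]) * dx) / L

# flux Phi(x) = integral of (V - f) from 0 to x, with Phi(0) = Phi(L) = 0
g = V - f
Phi = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dx)))

# pointwise inversion of  mu*v*u' = Phi  and  gamma*v*(alpha - |u'|) = V
slope = np.sign(Phi) * gamma * alpha * np.abs(Phi) / (mu * V + gamma * np.abs(Phi))
v = (mu * V + gamma * np.abs(Phi)) / (gamma * alpha * mu)

# standing-layer similarity profile (defined up to an additive constant)
u0 = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
u0 -= u0.min()

print("growth velocity V =", V)
print("max |u0'| =", np.abs(slope).max(), "<= critical slope alpha =", alpha)
```

with a uniform source the flux vanishes identically and the sketch returns a flat surface with a constant rolling layer , and the standing - layer slope never exceeds the critical value , consistent with the qualitative properties listed above .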
here we just report the two special cases of a central point source , for 1d and 2d radial cross - sections respectively , where similarity solutions can be explicitly computed .* example 1.*( ) assume and ( where denotes the usual dirac function centered in ) , that is there is a point source placed over the middle of the silo .similarity solutions then take the form ( see figure [ sol1ddelta ] ) : * example 2.*( ) let be the ball of radius centered at the origin , and a point source over its center ; then similarity solutions are radial functions , and radial symmetry arguments yield ( see figure [ sol3d ] ) : previous examples show that the typical profile of a growing symmetric heap of grains in a silo under the effect of a central source is different from the classical conical shape which would emerge on an open table without walls . in both dimensions the standing layer now assumes a strictly convex ( logarithmic ) profile . the maximal slope of the pile changes instead with the dimension : in 2d it is reached right in the center , and coincides with the critical slope , whereas in 1d it depends on the parameters and on the size of the container .in particular , when the ratio ( grains roll slowly and are easily trapped ) the slope of the pile always remains very close to the critical angle . on the contrary , when is large the grains move very fast from the beginning , and larger variations of the slope can emerge . for what concerns the rolling layer in 2d , figure [ sol3d ] shows the emergence of a singularity in the center , in accordance with the fact that its expression comes from the solution of a potential problem with a central dirac source .for the sake of simplicity from now on we assume .the proof of theorem 1 in the previous section shows that in order to characterize the similarity profiles one needs to solve in the elliptic neumann problem ( [ neum ] ) . from a numerical point of view , this can be done for example by using a finite element approach .if denotes a regular triangulation of of size , and and are the finite element spaces of respectively piecewise linear and piecewise constant functions on , the galerkin method requires solving the discrete variational problem however , if is not a polygonal domain it has to be replaced in ( [ discr_neum ] ) by a suitable set defined as the union of the triangular elements of ; will be close to , but in general the right - hand side will not retain its zero - mean property on , and the discrete semidefinite problem will not be solvable at all . a way to overcome this difficulty is to replace in ( [ discr_neum ] ) by the function , with by construction , , and for any , so that ( [ discr_neum ] ) becomes solvable ( see for details ) .then by definition will be a piecewise constant vector on , and its norm an element of the discrete space .hence , from ( [ vfromw ] ) , now , since , the gradient of on any triangle is given by it remains to compute . in 1d its value at any node can be determined by direct integration of the piecewise constant function from 0 to ( which corresponds to choosing the particular solution vanishing at the origin ) : in higher dimensions a different strategy can be used : plugging into ( [ discr_neum ] ) , becomes the solution of the discrete variational neumann problem ( with given ) . in this section we want to show that the previously characterized similarity solutions asymptotically arise as profiles of the growing heaps in the dynamic process of filling the silo .
in order to do that we implemented a numerical scheme for the complete system ( [ hk ] ) , adapting to this case the finite difference scheme used in for the growing sandpiles . in 1d , if and denotes the space discretization step , a uniform mesh is described by the nodes , for .if is the time step , our explicit scheme reads for the internal nodes as where denote respectively the approximate values at time in of the solutions and of the upwind flux derivative in the direction determined by the sign of , that is ( in each node the spatial derivative is defined as the term of maximal absolute value between the backward and the forward first differences ) . to complete the scheme we added initial conditions ( ) and boundary terms induced by the neumann condition on .the extension of this approach to the 2d case is straightforward if we restrict ourselves to square or rectangular cross - sections for the silo .it is enough to decompose the flux term as , and to repeat the 1d approach in each direction . in order to study the asymptotic behavior of the growing heaps , in the numerical tests the scheme was stopped when the relative growth per iteration of the standing layer was approximately the same at each node , revealing the emergence of a similarity profile .this profile was then translated to the base of the silo and compared with the computed similarity solution .for the 1d cross - section case we assumed and tested different choices of the source term . in each example , for a given uniform partition ( of step ) of , we compared the exact similarity solutions given by formulas ( [ sim1 ] ) ( the couple , with ) , the discrete similarity solutions computed by the fe approach of section 3 ( , with ) , and the stabilized profiles determined by the fd scheme for the evolutive problem described in section 4 ( , with the first one shifted towards zero , that is , where is the iterate selected by the stopping criterion ) .figure [ grow1d ] shows three examples of growing heaps according to different source supports ( centered symmetric , close to the boundary , disconnected ) . in all the cases the formation of a similarity solution can be seen ( in dotted lines ) .we found first order convergence of the approximate similarity solutions to the exact ones for the fe method in uniform norm , and approximately the same order for the asymptotic convergence of the growing profiles computed by the fd scheme to the quasi - stationary ones . in table 1 we report the values found for the symmetric centered support case of figure [ sol1dcentral ] .other tests gave similar results .note that the stabilized rolling layer is everywhere positive , with a small depression in the central region corresponding to the source support .when its length tends to zero one recovers the situation of the point source of example 1 , that is assumes the known logarithmic profile and the depression region of disappears .
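the explicit upwind scheme described above can be sketched in 1d with the same assumed closure as in the previous snippet . the one - sided differencing of the flux follows the sign of the local slope , and the modulus of the slope in the exchange term is taken as the larger of the backward and forward differences , as stated in the text ; the time step , parameters and source are illustrative and no attempt is made at a sharp stability bound .

```python
import numpy as np

# explicit scheme for the assumed two-layer system (same closure as in the previous
# sketch); parameters, the source and the time step are illustrative.

L, n = 1.0, 200
x = np.linspace(0.0, L, n + 1)
dx = x[1] - x[0]
alpha, mu, gamma = 1.0, 1.0, 1.0
f = np.where(np.abs(x - 0.5 * L) < 0.1 * L, 5.0, 0.0)
dt = 0.2 * dx

u = np.zeros(n + 1)      # standing layer
v = np.zeros(n + 1)      # rolling layer

def slope_mod(u):
    # |u_x| at each node: the larger in modulus of backward and forward differences
    fwd = np.zeros_like(u)
    bwd = np.zeros_like(u)
    fwd[:-1] = (u[1:] - u[:-1]) / dx
    bwd[1:] = (u[1:] - u[:-1]) / dx
    return np.abs(np.where(np.abs(fwd) > np.abs(bwd), fwd, bwd))

for step in range(20000):
    ux = np.gradient(u, dx)
    flux = mu * v * ux
    flux[0] = flux[-1] = 0.0                 # homogeneous neumann (zero flux) walls
    # one-sided difference of the flux in the direction given by the sign of u_x
    div = np.empty_like(u)
    div[1:-1] = np.where(ux[1:-1] > 0.0,
                         (flux[2:] - flux[1:-1]) / dx,
                         (flux[1:-1] - flux[:-2]) / dx)
    div[0] = (flux[1] - flux[0]) / dx
    div[-1] = (flux[-1] - flux[-2]) / dx
    exchange = gamma * v * (alpha - slope_mod(u))
    u = u + dt * exchange
    v = np.maximum(v + dt * (div - exchange + f), 0.0)

poured = np.sum(0.5 * (f[1:] + f[:-1]) * dx) * dt * 20000
stored = np.sum(0.5 * ((u + v)[1:] + (u + v)[:-1]) * dx)
print("poured mass:", poured, " stored mass:", stored)
```

after the transient the standing layer grows at a nearly uniform rate and its shape , once shifted to the base of the silo , can be compared with the similarity profile computed in the previous sketch .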
in the more realistic case of a spatial silo , that is when the cross - section is a 2d domain , the similarity solutions can only be approximated , so that we just estimated the quantities and , where as before denote the fe solutions of the stationary model and the fd solutions of the evolutive model after the stopping criterion applies , computed over the same mesh introduced in .we restricted our tests to the case of a rectangular domain decomposed through a uniform mesh , in order to use the same set of nodes for the two schemes .if , for example , a mesh with equispaced nodes ( with for the fd scheme can be used as well as a base for a uniform courant fe triangulation over .the experiments gave results similar to those of the 1d case .figures [ cen2d ] and [ scon2d ] illustrate the results corresponding to a source supported in a small ball in the center of the silo or in the union of two disconnected balls , showing the profiles of the standing layers and the level lines of the rolling layers .the correspondence of the similarity solutions ( above ) to the evolving profiles ( below ) appears evident .figure [ riemp ] shows the growing heap in the square silo for the first example at four successive time steps . | the problem of filling a silo of given bounded cross - section with granular matter can be described by the two - layer model of hadeler and kuttler . in this paper we discuss how similarity quasi - static solutions for this model can be numerically characterized by the direct finite element solution of a semidefinite elliptic neumann problem . we also discuss a finite difference scheme for the dynamical model through which we can show that the growing profiles of the heaps in the silo evolve in finite time towards such similarity solutions . _ keywords _ : granular matter , finite difference schemes , finite element schemes |
study of passage through an isolated resonance in a multi - frequency quasi - linear hamiltonian system can be reduced to the case of one - frequency system ( see , e.g. , ) .the corresponding hamiltonian has the form here , mod are conjugate canonical variables , is a slow time , , and is a small parameter , .equations of motion are for we get an unperturbed system with the hamiltonian and action - angle variables , .the function is the frequency of the unperturbed motion .for some value of the slow time , where there is a resonance , vanishes : .we assume that the resonance is non - degenerate : . here`` prime '' denotes the derivative with respect to .let , for definiteness , .we assume that is the only resonant moment of the slow time : is different from 0 at .action is an adiabatic invariant : its changes along trajectory of ( [ motion ] ) are small over long time intervals . for motionfar from the resonance value oscillates with an amplitude .passage through a narrow neighbourhood of the resonance leads to a change in of order ( so called jump of the adiabatic invariant ) .there is an asymptotic formula for this jump ( , ) .let and be values of along a trajectory of ( [ motion ] ) at moments of slow time and , where .then here is the value of on the considered trajectory at .there are formulas for change of the angle ( phase ) due to passage through the resonance as well .one can replace in the left hand side of ( [ difference ] ) with values of the improved adiabatic invariant , but the error estimate in ( [ difference ] ) still will be .it was suggested in to eliminate an asymmetry in ( [ difference ] ) by replacing in the right hand side with , where is the value of on the considered trajectory at .a numerical simulation in shows that this symmetrization indeed improves the accuracy of formula ( [ difference ] ) for replaced with considerably .it is conjectured in on the basis of the numerical simulation that the error term in the modified formula is . in the current paperwe prove this conjecture by means of a hamiltonian adiabatic perturbation theory .we show that this improvement of accuracy occurs due to cancellations of many terms in formulas of the perturbation theory considered up to terms of 4th order in .we obtain also formulas which describe change of phase due to passage through resonance with the same accuracy . as a result, we obtain formulas which allow to predict motion in post - resonance region with accuracy , provided that the motion in the pre - resonance region is known . in the last section we provide a numerical verification of these formulas .we consider hamiltonian system ( [ motion ] ) with hamilton s function ( [ ham ] ) .we assume that the function is of class for , where and are some open intervals in .we assume that is in and that the frequency does not vanish in other than at . at the resonance state , , but .let , be a solution of ( [ motion ] ) on a time interval ] , substitute with formulas ( [ j ] ) and ( [ dsdphi ] ) : for ] .here , , , and is a smooth function .the same estimate is valid if is replaced with and ] , where , , , are smooth functions . then also this implies the result of the lemma .let we have the identity : we should prove that , , and .here we use the sum , , , , . 
for convenience ,we consider as the main part of expression of , then consider the others .\alpha^1(j_l,{\varphi}_l,\tau_l ) -\omega(\tau_l)\alpha^1(i_*,{\varphi}_{a_l},\tau_*)}{\omega(\tau_l)\omega(\tau_l)}\\ & & -{\varepsilon}\frac{\big[\omega(\tau_r)-\frac12\omega''_*(\tau_r-\tau_*)^2 + o(\tau_r-\tau_*)^3\big]\alpha^1(j_r,{\varphi}_r,\tau_r)-\omega(\tau_r)\alpha^1(i_*,{\varphi}_{a_r},\tau_*)}{\omega(\tau_r)\omega(\tau_r)}\\ & = & { \varepsilon}\frac{\omega(\tau_l)\big[\alpha^1(j_l,{\varphi}_l,\tau_l)-\alpha^1(i_*,{\varphi}_{a_l},\tau_*)\big]-\frac{\omega''_*(\tau_l-\tau_*)^2}2\alpha^1(j_l,{\varphi}_l,\tau_l)+o(\tau_l-\tau_*)^3}{\omega(\tau_l)\omega(\tau_l)}\\ & & -{\varepsilon}\frac{\omega(\tau_r)\big[\alpha^1(j_r,{\varphi}_r,\tau_r)-\alpha^1(i_*,{\varphi}_{a_r},\tau_*)\big]-\frac{\omega''_*(\tau_r-\tau_*)^2}2\alpha^1(j_r,{\varphi}_r,\tau_r)+o(\tau_r-\tau_*)^3}{\omega(\tau_r)\omega(\tau_r)}\\ & = & { \varepsilon}\frac{\alpha^1(j_l,{\varphi}_l,\tau_l)-\alpha^1(i_*,{\varphi}_{a_l},\tau_*)}{\omega(\tau_l ) } -{\varepsilon}\frac{\omega''_*(\tau_l-\tau_*)^2\alpha^1(j_l,{\varphi}_l,\tau_l)}{2\omega(\tau_l)\omega(\tau_l)}\\ & & -{\varepsilon}\frac{\alpha^1(j_r,{\varphi}_r,\tau_r)-\alpha^1(i_*,{\varphi}_{a_r},\tau_*)}{\omega(\tau_r ) } + { \varepsilon}\frac{\omega''_*(\tau_r-\tau_*)^2\alpha^1(j_r,{\varphi}_r,\tau_r)}{2\omega(\tau_r)\omega(\tau_r)}+o({\varepsilon}^{\frac32})\\ & = & \frac{{\varepsilon}}{\omega(\tau_l)}\big[\big(\alpha^1(j_l,{\varphi}_l,\tau_l)-\alpha^1(i_*,{\varphi}_{a_l},\tau_*)\big)+\big(\alpha^1(j_r,{\varphi}_r,\tau_r)-\alpha^1(i_*,{\varphi}_{a_r},\tau_*)\big)\big]\\ & & -\frac{\varepsilon}2\frac{\omega''_*(\tau_l-\tau_*)^2}{\omega(\tau_l)}\left(\frac{\alpha^1(j_l,{\varphi}_l,\tau_l)}{\omega(\tau_l)}+\frac{\alpha^1(j_r,{\varphi}_r,\tau_r)}{\omega(\tau_r)}\right)+o({\varepsilon}^{\frac32})\\ & = & o(\sqrt{\varepsilon})e_{1+}+o({\varepsilon}^{\frac32})\left(\frac{\alpha^1(j_l,{\varphi}_l,\tau_l)}{\omega(\tau_l)}+\frac{\alpha^1(j_l,{\varphi}_l,\tau_l)+o(\sqrt{\varepsilon})}{\omega(\tau_r)}\right)+o({\varepsilon}^{\frac32})\\ & = & o(\sqrt{\varepsilon})e_{1+}+o({\varepsilon}^{\frac32})\left(\alpha^1(j_l,{\varphi}_l,\tau_l)\cdot\frac{\omega(\tau_l)+\omega(\tau_r)}{\omega(\tau_l)\omega(\tau_r)}\right)+o({\varepsilon}^{\frac32})\\ & = & o(\sqrt{\varepsilon})e_{1+}+o({\varepsilon}^{\frac32})\end{aligned}\ ] ] here +\big[\alpha^1(j_r,{\varphi}_r,\tau_r)-\alpha^1(i_*,{\varphi}_{a_r},\tau_*)\big] ] .similarly to , we can obtain from lemma [ est_comb ] .therefore , it is true that .similarly , we can derive that therefore , we simply use lemma [ varbi ] and lemma [ cancnear ] in order to get estimate : for combined term , we consider , where and also for , we apply lemma [ err_omega ] , then lemma [ varbj ] : +{\varepsilon}^5\frac{\check\gamma(j,{\varphi},\tau)}{(\tau-\tau_*)^7}-{\varepsilon}^5\frac{\widetilde\gamma(i_*,{\varphi}_a,\tau_*)}{\omega^7(\tau)}+o\left(\frac{{\varepsilon}^5}{(\tau-\tau_*)^6}\right)\\ & = & \frac{{\varepsilon}^5\delta_1(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^8}\int\limits_{\tau_*}^{\tau}\frac{\partial h_1(i_*,{\varphi}_a,\tau_*)}{\partial{\varphi}}\,{\mathrm{d}}\tau_1 \\ & & { } + \frac{{\varepsilon}^4\delta_2(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^5 } + \frac{{\varepsilon}^5\delta_3(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^8}\int\limits_{\tau_*}^{\tau}\frac{\partial h_1(i_*,{\varphi}_a,\tau_*)}{\partial i}\,{\mathrm{d}}\tau_1 + \frac{{\varepsilon}^5\delta_4(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^7 } \\ & & { } + 
o\left(\frac{{\varepsilon}^7}{(\tau-\tau_*)^{10}}\right ) + o\left(\frac{{\varepsilon}^6}{(\tau-\tau_*)^8}\right ) + o\left(\frac{{\varepsilon}^5}{(\tau-\tau_*)^6}\right ) + o\left(\frac{{\varepsilon}^4}{(\tau-\tau_*)^4}\right ) + o\left(\frac{{\varepsilon}^3}{(\tau-\tau_*)^2}\right)\\ & & { } + \frac{{\varepsilon}^6\delta_5(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^9 } + \frac{{\varepsilon}^7\delta_6(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^{11 } } + \frac{{\varepsilon}^8\delta_7(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^{13 } } + \frac{{\varepsilon}^9\delta_8(i_*,{\varphi}_a,\tau_*)}{(\tau-\tau_*)^{15}}.\end{aligned}\ ] ] here is a smooth function . also using lemma [ est_int ] for estimate of integrals and lemma [ cancfar ] for cancellation in symetric intervals , we obtain : for term , we apply an integration by parts : and similarly therefore , the estimate is obtained . by joining estimate of terms together , and taking into accountthat , as well as the identity where , we have finished the proof of the first formula of theorem [ thm1 ] .let we have making use of lemmas [ err_omega ] , [ est ] , [ est_frac ] , [ est_int ] , [ varbi ] , [ varbj ] , one can show that , , , , .this leads to the first formula of theorem [ thm2 ] with the sign " .the proof of the formula with the sign " is completely analogous .[ replacenear ] for and smooth function , for , lemma [ varbj ] can be simplified as . thus with lemmas [ err_omega ] and [ est_frac ] , applying the result of ( a ) and lemma [ varbj ], we obtain here and is smooth functions .[ replacefar ] let be a twice continuously differentiable function .then for , , for \cup[t_r , t_+] ] , for ] , we get similarly for ] , and 11 values of , \{0.02 , 0.015 , 0.01 , 0.007 , 0.005 , 0.003 , 0.002 , 0.0015 , 0.001 , 0.0007 , 0.0005}. we calculate }|i_+^{\rm{numer}}-i_+^{\rm{theor}}|,\quad e_{{\varphi}_+}({\varepsilon})=\max_{{\varphi}_-\in[0,2\pi]}|{\varphi}_+^{\rm{numer}}-{\varphi}_+^{\rm{theor}}|\,,\ ] ] and plot values and as functions of in figures [ figurei ] and [ figurephi ] .linear least squares fit of the data in figures [ figurei ] and [ figurephi ] gives slopes and , respectively . the ideal results for accuracy would be and .thus the numerical simulation indicates that the accuracy is , as expected .kevorkian , j. and cole , j. d. : _ multiple scale and singular perturbation methods_. applied mathematical sciences , vol . 114 .springer - verlag , 1996 .chirikov , b. v. : _ the passage of a nonlinear oscillatory system through resonance_. sov ., dokl . 4 , 1959 , pp .390 - 394 .bosley , d. l. : _ an improved matching procedure for transient resonance layers in weakly nonlinear oscillatory systems_. siam j. appl .math , vol .56 , no . 2 , 1996 , pp .420 - 445 .alekseev , p. a. : _ on change of action at passage through a resonance in a quasilinear hamiltonian system_. m.sc .thesis , moscow university , 2007 . | we consider a quasi - linear hamiltonian system with one and a half degrees of freedom . the hamiltonian of this system differs by a small , , perturbing term from the hamiltonian of a linear oscillatory system . we consider passage through a resonance : the frequency of the latter system slowly changes with time and passes through 0 . the speed of this passage is of order of . we provide asymptotic formulas that describe effects of passage through a resonance with an accuracy . this is an improvement of known results by chirikov ( 1959 ) , kevorkian ( 1971 , 1974 ) and bosley ( 1996 ) . 
the problem under consideration is a model problem that describes passage through an isolated resonance in multi - frequency quasi - linear hamiltonian systems . |
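The accuracy check described in the numerical section above — computing the maxima over ${\varphi}_-\in[0,2\pi]$ of $|I_+^{\rm numer}-I_+^{\rm theor}|$ and $|{\varphi}_+^{\rm numer}-{\varphi}_+^{\rm theor}|$ for a grid of ${\varepsilon}$ values and reading off the convergence order from a log–log fit — can be sketched in a few lines. This is a schematic illustration, not the authors' code: only the ${\varepsilon}$ grid is taken from the text, the error arrays are placeholders to be filled from the numerical integration, and the fitted slopes are then compared with the accuracy exponents predicted by the theorems (the specific target values are elided in the extracted text).

```python
import numpy as np

# epsilon grid used in the numerical experiment (from the text)
eps = np.array([0.02, 0.015, 0.01, 0.007, 0.005, 0.003,
                0.002, 0.0015, 0.001, 0.0007, 0.0005])

# Placeholders: e_I[i] and e_phi[i] should hold the maxima over phi_- in [0, 2*pi]
# of |I_+^numer - I_+^theor| and |phi_+^numer - phi_+^theor| at eps[i].
e_I = np.empty_like(eps)
e_phi = np.empty_like(eps)

def loglog_slope(x, y):
    """Least-squares slope of log y versus log x (the observed convergence order)."""
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return slope

# After filling e_I and e_phi from the integration:
# print(loglog_slope(eps, e_I), loglog_slope(eps, e_phi))
```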
the structure , strength and stability of ligand - receptor bonds have been investigated extensively over several decades .the concepts ( and definitions ) of ligand and receptor have also evolved during this period . in the words of martin karplus , `` the ligand can be as small as an electron , an atom or diatomic molecule and as large as a protein '' . in principle , the definition of a ligand can be extended even further to a stiff filament formed by a hierarchical organization of proteins ; the microtubule ( mt ) , a nano - tube in eukaryotic cells , is an example of such filaments .thus , if such a stiff straight filament is tethered to a flat rigid wall by wall - anchored proteins that bind specifically to the filament , the attachment can be viewed as a ` ligand - receptor bond ' where the filament is the analog of a ligand .inspired by this generalized definition of ligands and receptors , we study here a very special type of _ non - covalent _ filament - wall attachment from the perspective of ligand - receptor bonds .the tether protein that we consider here are ` active ' in the sense that these consume chemical fuel for their mechanical function .the theoretical model developed in this paper is motivated by the typical attachment formed by a mt with the cortex of a living eukaryotic cell where the two are linked by dynein molecules ; dynein is a motor protein powered by input chemical energy extracted from atp hydrolysis .there are equispaced dynein - binding sites on the surface of a mt where the head of a dynein can bind specifically ; the tail of the dyneins are anchored on the cortex .thus , dyneins function as tethers linking the mt with the wall ; unbinding of _ all _ the dynein heads from the mt would rupture the mt - cortex attachment .however , the model developed in this paper captures only some of the key features of the real mt - cortex attachments like , for example , the polymerization - depolymerization of mt .the wall in our model mimics the cell cortex .it is worth pointing out that the mt - wall model developed here differs fundamentally from another mt - wall model reported earlier in ref. ( from now onwards referred to as the ssc model ) .unlike the active ( i , e . , energy consuming ) linkers , which here mimic dynein motors , the dominant tethers in the ssc model are passive . moreover , the wall in the ssc model represents a kinetochore , a proteinous complex on the surface of a chromosome , whereas the wall in the model developed in this paper represents the cell cortex .although the formation of the mt - wall attachment itself is an interesting phenomenon , we do not study it here . instead ,in this paper we consider a pre - formed attachment to investigate its strength and stability using a kinetic model that mimics the protocols of dynamic force spectroscopy .two distinct protocols are routinely used in force spectroscopy for measuring the strength and stability of ligand - receptor bonds . in the _ force - clamp _ protocol, a time - independent load tension is applied against the bond ; the time duration after which the bond is just broken is the life time of the bond . 
since the underlying physical process is dominated by thermal fluctuations different values of life timeare observed upon repetition of the experiment .therefore , the _ stability _ of the bond is characterized by the life time distribution ( ltd ) .on the other hand , in the _ force - ramp _ protocol the magnitude of the time - dependent load tension is ramped up at a pre - decided rate till the bond just gets ruptured ; the rupture force distribution ( rfd ) characterizes the _ strength _ of the bond . for a _ slip - bond _ the mean life time ( mlt ) decreases monotonically with the increasing magnitude of the tension in the force - clamp experiment . in contrast , a nonmotonic variation ( an initial increase followed by decrease ) of the mlt with increasing load tension in the force - clamp experiments is a characteristic feature of _ catch - bonds _ . although the force - ramp is a more natural protocol for dynamic force spectroscopy the results provide somewhat indirect evidence for catch - bonds .we mimic both the force - clamp and force - ramp protocols in our computer simulations of the theoretical model to compute the ltd and rfd for the motor - linked mt - wall attachment . the resultsestablish that this attachment is , effectively , like a slip - bond .this is in sharp contrast to results obtained from the ssc model ; the latter exhibits a catch - bond - like behavior as observed also in experiments performed _ in - vitro_. to our knowledge , similar dynamic force spectroscopic experiments have not been attempted so far on mt - cell cortex attachments . nevertheless , our generic theoretical model and the results are likely to motivate such experiments in near future .the model is quite general . in principle , the same model can serve , after appropriate minor adaptation , as a minimal model for many other attachments formed by a polymerizing - depolymerizing filament with a wall where the specific interaction of the two arises from a set of active linkers .we model the mt as a strictly one - dimensional stiff filament . the plus end of the mt is oriented along the + x - direction of the one - dimensional coordinate system chosen for the model .the rigid wall facing the mt filament is perpendicular to the x - axis .as stated earlier , our model is motivated by the cortex - mt attachment where dynein motors tether the mt to the cortex . since a dynein motor , fuelled by atp ,has a natural tendency to walk towards the minus end of the mt , the molecular motors in our model are also assumed to be minus - end directed , while consuming input energy , in the absence of external load tension .we assume that each motor is permanently anchored onto the rigid wall and can not detach from it while the head of a motor can attach to the mt and a motor already attached to the mt can also detach from it .we model the linkage between a motor and the rigid wall by an elastic element that is approximated as a hookean spring .the wall is assumed to execute one - dimensional diffusion along the x - coordinate with a diffusion constant .and represents the distance of mt tip and dynein motor from cell cortex . 
in the insetmt interact with cell cortex through the cortical dynein motor and dynein motor is attached with cell cortex using different protein .dynein are attached with cell cortex through a spring with spring constant .external force is applied on axoneme ( orange ) from which mts ( green ) are generated.,scaledwidth=50.0% ] although each dynein motor has two heads , each capable of binding specific sites on the mt , we denote the position of a motor by the location of its midpoint .the two heads are , effectively , connected at the midpoint by a hinge ; the midpoint is also linked to the point of anchoring on the wall by an elastic element that is assumed to be a hookean spring . in the one - dimensional coordinate system the origin is fixed on the wall . with respect to this origin denotes the position of the _ midpoint _ of an arbitrary molecular motor at time while denotes the corresponding position the mt tip . in this model , can change because of the following processes : + ( i ) stepping of a motor while the wall remains frozen in its current position , and + ( ii ) diffusion of the wall while motor does not step out of its current binding site on the mt .we assume that is the maximum distance upto which a motor can bind to the mt .so the motor can bind only if its head is located between and ( i.e. , ) . is the total number of motors ( the subscript ` d ' indicates minus - ended directed motor like , for example , dynein ) that can simultaneously attach to the mt whereas denotes the number of motors actually attached at any arbitrary instant of time ( i.e. , . for any given unbound motor, denotes the rate of binding of its head to the mt .therefore , at any arbitrary instant of time , the rate of a binding event , i.e. , the rate at which any of the unbound motor binds to the mt is because of being anchored on the fixed rigid wall , the motors bound to the mt can not walk freely along the mt track . moreover , because of the upper cutoff imposed on the maximum possible elongation of the spring , a motor can move only upto a maximum distance .if the rest position of motor head is , the spring force acting on the motor is given by here we assume external load to be equally shared by motors so that each motor feels the load thus , the effective force felt by single motor head that is bound to mt and located at a position is where is given by ( [ eq - spring ] ) .let denote the rate of unbinding of the head of a motor in the absence of load force .when a load force tends to rupture the attachment , the unbinding rate increases .following kramers ( or bell ) theory we assume an approximate exponential dependence the characteristic ` detachment force ' can be expressed as where is the extension of the energy barrier between the bound and unbound state of a motor. 
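Before the remaining rates are introduced, here is a minimal numerical sketch of the single-motor force dependence defined so far: the Hookean restoring force of the anchoring spring, the equal sharing of the external load among the bound motors, and the Bell/Kramers exponential enhancement of the unbinding rate. The parameter values are illustrative stand-ins in the spirit of Table [parameter_values]; the sign convention of the spring force and the way it combines with the shared load into the effective single-motor force follow the (elided) expressions in the text and are only assumptions here.

```python
import numpy as np

# Illustrative values in the spirit of Table [parameter_values]
k_spring = 1000.0   # spring constant of the motor-wall linkage
y_0 = 10.0          # rest length of the spring
f_d = 1.0           # characteristic detachment force in the Bell factor
eps_0 = 3.0         # load-free unbinding rate of a motor head

def spring_force(y):
    """Hookean restoring force on a motor head at position y (sign convention assumed)."""
    return -k_spring * (y - y_0)

def load_share(f_ext, n_bound):
    """External load shared equally among the n_bound motors currently attached."""
    return f_ext / n_bound if n_bound > 0 else 0.0

def unbinding_rate(f_motor):
    """Bell/Kramers-type unbinding rate: load-free rate enhanced exponentially by load."""
    return eps_0 * np.exp(f_motor / f_d)
```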
the effective rate of unbinding of a single motor head from the mt is given by suppose the rate of forward hopping of a motor towards the minus end of of the mt is .but the forward stepping of motor will be opposed by the spring force .therefore , the effective rate of forward stepping of a motor is given by here is a constant parameter ( ) .the characteristic force can be expressed as where is the length of a single subunit of mt .based on the well known experimentally observed facts , we assume that a motor can also step towards the positive end of mt under sufficiently high load force ; the rate of stepping in that ` reverse ' direction is given by where the rate of stepping of a motor towards the plus end of the mt in the absence of load force is very small because the natural direction of these motors is the minus end of mt .the form ( [ k_r ] ) captures the intuitive expectation that would increase with increasing spring force . the probability that ( the midpoint of ) a motor is located at and mt tip is at , while the total number of motors are bound to the mt simultaneously at that instant of time , is given by .note that is a continuous variable whereas can take only non - negative integer values .let be the conditional probability that , given a mt - bound motor located at site , there is another mt - bound motor at site on the mt .then is the conditional probability that , given a motor at site , the site is empty .let be the probability that site is not occupied by any motor , irrespective of the state of occupation of any other site . under mfa ,the equations governing the time evolution of is given by where and , the length of each subunit of mt is also the spacing between the successive motor - binding sites on the mt .the rates of polymerization and de - polymerization of a mt tip are given by and , respectively .the rate of depolymerization of mt is suppressed by externally applied tension .we assume that the mt - bound minus - end directed motors at the tip ( plus - end ) of the mt prevents mt protofilaments from curling outwards , thereby slowing down depolymerization rate : /f_{\star}\biggr ) .\ ] ] where is the characteristic load force at which the mt depolymerization rate is an exponentially small fraction of .the kronecker delta function ensures that the external force affects the depolymerization rate only if the motor is bound to the tip of the mt .the force balance equation is given by where is a gaussian white noise .let be the probability of finding mt tip at position at time .time evolution of the govern by the fokker - planck equation this fokker - planck equation can also be re - cast as an equation of continuity for the probability density with the probability current density which is given by .\label{flux}\ ] ] for the calculation of the lifetime of the mt - wall attachment , we have assumed that initially there is no gap between mt tip and cell wall i.e .we have imposed reflecting boundary at the cell wall because mt tip can not penetrate the cell wall .so we have placed an absorbing boundary at so that ; this boundary condition is motivated by our calculation of the life time of the mt - wall attachment where the lifetime is essentially a first - passage time .l*2cr parameter & values + spacing between binding sites on mt & 8/13 + maximal distance dynein can bind & 32 + rate of polymerization of mt & 30 + rate of load - free depolymerization of mt & 350 + rate of binding of motor to mt & 3 + rate of unbinding of motor from mt & 3 + rate of forward 
stepping of motor & 6 + rate of backward stepping of motor & 0.1 + characteristic depolymerization force & 1 + characterstic spring force of motor & 1 + rest length of the spring & 10 + spring constant & 1000 + diffusion constant & 700 + effective drag coefficient & 6 + by using a method of discretization proposed originally by wang , peskin and elston , we discretize space into contiguous discrete cells , each of length , where denotes the position of the center of the -th cell . the potential is thus replaced by the discretized counterpart {j } \label{discrete}\ ] ] because of this discretization , the fokker - planck equation ( [ fokker_eq_tip ] ) is replaced a master equation where the discrete jumps from the center of one cell to those of its adjacent cells in the forward or backward directions are given by the transition rates and respectively , where .we have carried out simulations of this discretized version of the model using gillespie algorithm . in each time step six types of event are possible , namely , binding/ unbinding , forward / backward hopping of each motor and forward / backward movement of the mt tip . in the simulationa motor can attach to a site only if it it empty .similarly , a motor can step forward or backward provided the target site is empty .we have generated trajectories up to time steps and , after averaging over the trajectories we get the results of our interest .the common parameter values used in the simulation are listed in table [ parameter_values ] .in the two subsections we present the results of our simulations under force - clamp and force - ramp conditions .the main quantities of interest in these two cases are the ltd and rfd , respectively .\(a ) ( b ) in the fig.[fig_dyn_clamp](a ) the probability distributions of the lifetimes are plotted for three different fixed values of the externally applied tension .the corresponding values of the mean lifetimes are plotted against in fig.[fig_dyn_clamp](b ) .each of the distributions of lifetimes shown in the three panels of fig.[fig_dyn_clamp](a ) , for which has a fixed value , has been fitted to the function ( see the lines fitted to the data ) to extract the numerical value of the fitting parameter which has a dimension of time . 
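A schematic version of this fitting step is sketched below, assuming the lifetime distribution at each clamped load is exponential (so that the fitted scale is the mean lifetime) and that the mean lifetime decays exponentially with the load, as the slip-bond behaviour reported below suggests. The exact functional forms used for ( [ prob_clamp ] ) and ( [ kappa(F) ] ) are elided in the extracted text, so the forms here are plausible stand-ins, and `lifetimes_by_force` stands for the simulated force-clamp data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import expon

def mean_lifetime(lifetimes):
    """Fit an exponential to the clamped-force lifetimes and return its mean (scale)."""
    _, scale = expon.fit(lifetimes, floc=0.0)
    return scale

def slip_bond(f, tau_0, f_0):
    """Assumed slip-bond form: mean lifetime decaying exponentially with the load."""
    return tau_0 * np.exp(-f / f_0)

def force_dependence(forces, lifetimes_by_force):
    """Extract the load dependence of the mean lifetime from force-clamp simulations."""
    taus = np.array([mean_lifetime(lifetimes_by_force[f]) for f in forces])
    (tau_0, f_0), _ = curve_fit(slip_bond, np.asarray(forces, dtype=float), taus,
                                p0=(taus.max(), 1.0))
    return taus, tau_0, f_0
```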
by repeating this fitting process also for several other values of ( for which the distributions of the lifetimes have not been presented in this paper ) , we have extracted the -dependence of the fitting parameter .the `` best fit function '' is plotted in the inset in the lower left corner of fig.[fig_dyn_clamp](b ) .the function , in turn , is well approximated by the functional form with min and pn .the monotonic decrease of the mean lifetime with indicates a slip - bond - like behaviour of the mt - motor attachment .moreover , the best fits ( [ prob_clamp ] ) and ( [ kappa(f ) ] would suggest a functional form the simulation data for corresponding to pn , indeed , fit well with the form ( [ lifetime ] ) , if the values min and pn extracted above from fig.[fig_dyn_clamp](a ) are used ( see the blue line fit in fig.[fig_dyn_clamp](b ) ) .but , for pn , this fit shows increasing deviation from the simulation data with decreasing .however , in the regime pn , the functional form ( [ lifetime ] ) would still be consistent with the numerical data for if one uses min and 4.32 pn for the two fitting parameters ( see the green dashed line in fig.[fig_dyn_clamp](b ) ] .thus , the simulation data indicate two different regimes in both of which decreases exponentially with , but the decrease is sharper in the higher tension regime that corresponds to pn .a possible interpretation of these two apparent regimes is that in the exponential of the expression for there is a correction term proportional to ; such corrections emerge naturally from systematic taylor expansion in the extended bell models .from the log - log plot of against in the inset at the upper right corner of the fig.[fig_dyn_clamp](b ) we conclude that the mean lifetime increases nonlinearly with the number of motors following with .the effects of the motors on the lifetime of the mt - surface attachment is not simply additive .recall that , at any arbitrary instant of time , the load force is equally shared by the number of motors bound to the mt .therefore , unbinding of a motor from the mt increases the load on those still attached to it .if the redistributed share of the load is too high to resist , more motors are likely to unbind from the mt thereby causing further increase in the load share of the remaining motors still bound to the mt .the larger is the total number , less catastrophic is the effect of such `` avalanche '' of unbinding of the motors and hence the nonlinear increase in the mean lifetime of the mt - wall attachment .\(a ) ( b ) is applied on a suitable handle ( orange cylinder ) attached to the minus end of the same mt ., scaledwidth=40.0% ] the distribution of the rupture force obtained from our simulation using the ramp force is shown in the fig.[fig_dyn_ramp](a ) . at any constant loading rate ,the distribution exhibits a single peak ; the asymmetric bell - shape is the well known typical shape of rfd for common ligand - receptor bonds .at very small forces the bonds are unlikely to get enough time to rupture .similarly , at very large forces the likelihood of bond rupture is also small because the bond would have ruptured already at some intermediate value of the force .therefore , is expected to be small at both the extremes ; naturally , it would exhibit a peak corresponding to some intermediate value of force .these intuitive arguments not only explain the qualitative shape of in fig.[fig_dyn_ramp](a ) but also the observation that most probable rupture force ( i.e. 
, the force that corresponds to the peak in the distribution of the rupture forces ) is larger in case of a faster loading rate .the simulation data fits well with the function \label{prob_ramp}\ ] ] with a dimensionless fitting parameter and , given by ( [ kappa(f ) ] ) .the survival probability is defined as the probability that till time the mt tip has not reached the absorbing boundary at .as is well known , the survival probability is related to the detachment rate by \label{surv_ramp}\ ] ] in the fig.[fig_dyn_ramp](b ) the simulation data for the survival probability are compared with the corresponding function obtained using ( [ surv_ramp ] ) .the agreement is excellent thereby establishing consistency of the data obtained for the different quantities from our computer simulations . as expected at ( i.e. , ) and as ( i.e. , ) .the survival probability remains high upto certain force after which it falls rapidly .it is worth pointing out that one can account for the skewed shape of the curves in fig.[fig_dyn_ramp](a ) mathematically exploiting increase of and sharp decrease of in the relation .extending the earlier generalizations of the concept of a ligand , we have treated a microtubule ( mt ) as a ` ligand ' that is tethered to a ` receptor ' wall by a group of minus - end directed molecular motors .the tails of the motors are permanently anchored on the wall while their motor heads can bind to- and unbind from the mt .this model of mt - wall attachment captures only a few key ingredients of the mt - cortex attachments in eukaryotic cells , particularly those formed during chromosome segregation .this minimal model incorporates the polymerization and depolymerization kinetics of mt .but , for the sake of simplicity , it does not include the processes of ` catastrophe ' and ` rescue ' that are caused by the dynamic instability of mt filaments citedesai97 although these can be captured in an extended version of this model .we consider a pre - formed mt - wall attachment and carry out computer simulations to study statistical properties of its rupture under conditions that mimic the protocols of force - clamp and force - ramp experiments _ in - vitro _ .the simulation results that we report are interpreted in the light of the theory of single - molecule force spectrocopy , popularized by bell and some of its later generalizations . to our knowledge ,the _ in - vitro_ experimental set up ( schematically depicted in fig.[fig_mtdynein ] ) that comes closest to our model is that used by laan et al . ; the wall in our model corresponds to the microfabricated vertical barrier that mimics the cell cortex in their experiment .dynein motors were anchored on the barrier just as we have depicted in fig.[fig_xm ] for our model . in the experiment these dyneins captured the mt that grew from a centrosome which was fixed on a horizontal glass surface . a slightly different _ in - vitro _ experimental set us was used by hendricks et al . ; in this experiment a dynein coated bead was used to mimic the cell cortex .however , within the broader context of the role of mt - cortex interaction in positioning of the mitotic spindle , the authors of refs. , demonstrated only the stabilization of the mt by the dynein tethers .in contrast , the main aim of our study is to focus on the ltd and rfd for the pre - formed attachment of a mt with a wall tethered by minus - end directed motors .this work has been supported by a j.c .bose national fellowship ( dc ) and by the `` prof .s. 
sampath chair '' professorship ( dc ) . | a microtubule ( mt ) is a tubular stiff filament formed by a hierarchical organization of tubulin proteins . we develop a stochastic kinetic model for studying the strength and stability of a pre - formed attachment of a mt with a rigid wall where the mt is tethered to the wall by a group of motor proteins . such an attachment , formed by the specific interactions between the mt and the motors , is an analog of ligand - receptor bonds ; the mt and the motors anchored on the wall being the counterparts of the ligand and receptors , respectively . however , unlike other ligands , the length of a mt can change with time because of its polymerization - depolymerization kinetics . the simple model developed here is motivated by the mts linked to the cell cortex by dynein motors . we present the theory for both force - ramp and force - clamp conditions . in the force - ramp protocol we investigate the strength of the attachment by assuming imposition of a time - dependent external load tension that increases linearly with time till the attachment gets ruptured ; we calculate the distribution of the rupture forces that fluctuates from one loading to another . in the force - clamp protocol , to test the stability , we compute the distribution of the lifetimes of the attachments under externally applied time - independent load tension ; the results establish the mt - wall attachment to be an analog of a slip - bond . |
neural networks have been successfully implemented in a plethora of prediction tasks ranging from speech interpretation to facial recognition . because of ground - breaking work in optimization techniques ( such as batch normalization [ 5 ] ) and model architecture ( convolutional , deep belief , and lstm networks ) , it is now tractable to use dnn methods to effectively learn a better feature representation compared to hand - crafted methods .however , one area where such methods have not been utilized is the space of adversarial multiagent systems - specifically when the multiagent behavior comes in the form of trajectories .there are two reasons for this : i ) procuring large volumes of data where deep methods are effective is difficult to obtain , and ii ) forming an initial representation of the raw trajectories so that deep neural networks are effective is challenging . in this paper, we explore the effectiveness of deep neural networks on a large volume of basketball tracking data , which contains the locations of multiple agents in an adversarial domain . to throughly explore this problem , we focus on the following task : given the trajectories of the players and ball in the previous five seconds, can we accurately predict the likelihood that a player with role will make the shot ? "we express this as a ten - class prediction problem by considering the underlying role of the player . for this paper, player role refers to the position of the player such as a point guard . since we are utilizing an image - based technique , the cnn will not know which role shoots the ball , especially if there are multiple offensive players nearby .thus , solving as a ten - class problem is ideal for our deep - learning approach .our work contains three main contributions .first , we create trajectories for the offense , ball , and defense as an eleven channel image .each channel corresponds to the five offensive and defensive players , as well as the ball . in order to encode the direction of the trajectories , we fade the paths of the ball and players . in our case ,an instance is a possession that results in an shot attempt .second , we apply a combined convolutional neural network ( cnn ) and feed forward network ( ffn ) model on an adversarial multiagent trajectory based prediction problem .third , we gain insight into the nature of shot positions and the importance of certain features in predicting whether a shot will result in a basket .our results show that it is possible to solve this problem with relative significance .the best performing model , the cnn+fnn model , obtains an error rate of 26% .in addition , it is able to accurately create heat maps by shot location for each player role . during training we found that one particular feature , whether the player received a pass just before the shot , is highly predictive .other features , which are un - surprisingly important , are the number of defenders around the shooter and location of the ball at the time of the shot .with the rise of deep neural networks , sports prediction experts have new tools for analyzing players , match - ups , and team strategy in these adversarial multiagent systems .. trajectory data was not available at the time , so much previous work on basketball data using neural networks have used statistical features such as : the number of games won and the number of points scored .for example , bauer et .al . 
[ 7 ] , use statistics from 620 nba games and a neural network in order to predict the winner of a game .another interested in predicting game outcomes is mccabe [ 11 ] . on the other hand , nalisnick [ 12 ] in his blogdiscusses predicting basketball shots based upon the type of shot ( layups versus free throws and three - point shots ) and where the ball was shot from . in other sports related papers , chang et.al . [ 4 ] use a neural network to predict winners of soccer games in the 2006 world cup . also , wickramaratna et.al .[ 16 ] predict goal events in video footage of soccer games . although aforementioned basketball work did not have access to raw trajectory data , lucey et.al .[ 9 ] use the same dataset provided by stats llc . for some of their work involving basketball .they explore how to get an open shot in basketball using trajectory data to find that the number of times defensive players swapped roles / positions was predictive of scoring. however , they explore open versus pressured shots ( rather than shot making prediction ) , do not represent the data as an image , and do not implement neural networks for their findings .other trajectory work includes using conditional random fields to predict ball ownership from only player positions [ 18 ] , as well as predicting the next action of the ball owner via pass , shot , or dribble [ 20 ] .goldsberry et .[ 1 ] use non - negative matrix factorization to identify different types of shooters using trajectory data . because of a lack of defensive statistics in the sport , goldsberry et .create counterpoints ( defensive points ) to better quantify defensive plays [ 2][3 ] .al . [ 13 ] make use of trajectory data by segmenting a game of basketball into phases ( offense , defense , and time - outs ) to then analyze team behavior during these phases .wang and zemel [ 17 ] use trajectory data representations and recurrent neural networks ( rather than cnn s ) to predict plays . because of the nature of our problem ,predicting shot making at the time of the shot , there is not an obvious choice of labeling to use for a recurrent network .they also fade the trajectories as the players move through time .similar to us , they create images of the trajectory data of the players on the court .our images differ in that we train our network on the image of a five second play and entire possession , while their training set is based on individual frames represented as individual positions rather than full trajectories .they use the standard rgb channels , which we found is not as effective as mapping eleven channels to player roles and the ball for our proposed classification problem . also , the images they create solely concentrate on offensive players and do not include defensive positions . the final model that we implement , the combined network , utilizes both image and other statistical features .there is work that utilizes both image and text data with a combined model .recently , bengio et .al . [ 19 ] , fei fei et . al . [ 6 ] , ng et . al . [ 14 ] , erham et . al . 
[ 15 ] , and mao et10 ] all explore the idea of captioning images , which requires the use of generative models for text and a model for recognizing images .however , to the best of our knowledge , we have not seen visual data that incorporates a fading an entire trajectory for use in a cnn .the dataset was collected via sportsvu by stats llc .sportsvu is a tracking system that uses 6 cameras in order to track all player locations ( including referees and the ball ) .the particular data used in this study was from the 2012 - 2013 nba season and includes thirteen teams , which have approximately forty games each .each game consists of at least four quarters , with a few containing overtime periods .the sportvu system records the positions of the players , ball , and referees 25 times per second . at each recorded frame, the data contains the game time , the absolute time , player and team anonymized identification numbers , the location of all players given as ( x , y ) coordinates , the role of the player , and some event data ( i.e. passes , shots made / missed , etc . ) .it also contains referee positions , which are unimportant for this study , and the three - dimensional ball location .this dataset is very unique in that before sportvu , there was very little data available of player movements on the court and none known that provides frame - by - frame player locations . since it is likely that most events in basketball can be determined by the movements of the players and the ball , having the trajectory data along with the event data should provide a powerful mixture of data types for prediction tasks .there are a few ways to extract typical shot plays from the raw data .one is to choose a flat amount of time for a shooting possession . in our casewe choose to include five seconds of a typical possession . to obtain clean plays , those that lasted less than 5 seconds due to possession changes andthose in which multiple shots were taken were thrown out . after throwing out these cases ,we were left with 70,000 five second plays .the other way of obtaining play data would be to take the entire possession .thus , rather than having plays be limited to five seconds , possessions can be much longer or shorter . since the raw data does not contain labels for possession , we had to do this ourselves . to identify possession ,we calculate the distance between the ball and each of the players .the player closest to the ball would be deemed the ball possessor .since this approach may break during passes and other events during a game , we end the play of the possession when the ball is closer to the defensive team for 12 frames ( roughly 0.5 seconds ) .the procedure yields 72,200 possession examples .since the classification problem is also dependent upon a player s role , a player s role must be chosen for each play .one way to do this would be to identify the role of a player at the beginning of the play and hold it constant .however , a player s role at the beginning of the play is usually not the same as a player s role at the time of the shot .therefore , we decide to label a player s role via their position at end of the play in which the ball is shot .in terms of applying deep neural networks to multiagent trajectories , we first have to form an initial representation .a natural choice for representing these trajectories is in the form of an image . given that the basketball court is feet , we can form a pixel image . 
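Before turning to the image types, the possession-extraction rule described above can be sketched as follows: the ball is attributed to the nearest player frame by frame, and a possession is terminated once the ball has been closer to the defence for 12 consecutive frames (roughly 0.5 s at 25 Hz). The array layout below is illustrative and is not the format of the SportVU files.

```python
import numpy as np

def possession_end(ball_xy, offense_xy, defense_xy, patience=12):
    """Frame index at which the offensive possession ends.

    ball_xy:    (T, 2) ball positions
    offense_xy: (T, 5, 2) offensive player positions
    defense_xy: (T, 5, 2) defensive player positions
    """
    streak = 0
    for t in range(ball_xy.shape[0]):
        d_off = np.linalg.norm(offense_xy[t] - ball_xy[t], axis=1).min()
        d_def = np.linalg.norm(defense_xy[t] - ball_xy[t], axis=1).min()
        streak = streak + 1 if d_def < d_off else 0
        if streak >= patience:           # ball closer to the defence for ~0.5 s
            return t - patience + 1      # end the possession where the streak began
    return ball_xy.shape[0]              # possession runs to the end of the clip
```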
in terms of the type of image we use , there are multiple choices : i ) grayscale ( where we identify that a player was a specific location by making that pixel location 1 ) , ii ) rgb ( we can represent the home team trajectories in the red channel , the away team in the blue channel and the ball in the green channel , and the occurrence of a player / ball at that pixel location can be representing by a 1 ) , and iii ) 11-channel image ( where each agent has their own separate channel ) .examples of the grayscale and rgb approach are shown in figure 1 .the 11-channel approach requires some type of alignment . in this paper, we apply the ` role - representation ' which was first deployed by lucey et al . [the intuition behind this approach is that for each trajectory , the role of that player is known ( i.e. , point - guard , shooting guard , center , power - forward , small - forward ) .this is found by aligning to a pre - defined template which is learnt in the training phase .figure 1 shows examples of the methods we use to represent our data for our cnn . the grayscale image , which appears on the left , can accurately depict the various trajectories in our systemhowever , because the image is grayscale , the cnn will treat each trajectory the same .since we have an adversarial multiagent system in which defensive and offensive behavior in trajectory data can lead to different conclusions , grayscale is not the best option for representation .therefore , to increase the distinction between our agents , we represent the trajectories with an rgb scale .we choose red to be offense , blue to be defense , and green to be the ball .this approach takes advantage of multiple channels to allow the model to better distinguish our adversarial agents .although the ball may be part of the offensive agent structure , we decide to place the ball in a channel by itself since the ball is the most important agent .this approach , although better than the gray images , lacks in distinguishing player roles . since we classify our made and missed shots along with the role of the player that shoots the ball , a cnn will have trouble distinguishing the different roles on the court .therefore , for our final representation , we decide to separate all agents into their own channel so that each role is properly distinguished by their own channel .the above ideas nearly creates ideal images ; however , it does not include time during a play .since each trajectory is equal brightness from beginning to end , it may be difficult to identify where the ball was shot from and player locations at the end of the play .therefore , we implement a fading variable at each time frame . we subtract a parameterized amount from each channel of the image to create a faded image as seen in figures 1 and 2 .thus , it becomes trivial to distinguish the end of the possession from the beginning and leads to better model performance .we create two options with our final faded image data : five seconds before the ball was shot and the total possession .five seconds likely performs better since full possession data contains more temporal information than cnn s can handle . in longer possessions , trajectories tend to cross more , which confuses the cnn model by adversely affecting the fading since new trajectories will be higher in value .in order to fully utilize the power of this dataset , we implement a variety of networks for our prediction task . 
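Before the models are introduced, here is a sketch of the eleven-channel faded trajectory image described above, rasterised at one pixel per foot of the 94×50 ft court. Each of the five offensive players, five defensive players and the ball writes into its own channel, with earlier frames written at lower intensity so that the end of the possession is the brightest part of each trajectory. The paper describes the fading as subtracting a fixed amount per frame; the linear ramp below is a simple stand-in for that schedule, and the exact image size and fading strength are assumptions.

```python
import numpy as np

COURT_X, COURT_Y = 94, 50          # court size in feet; one pixel per foot assumed

def render_play(agent_xy, min_intensity=0.2):
    """Build an (11, COURT_Y, COURT_X) faded trajectory image.

    agent_xy: (T, 11, 2) positions over T frames of the 5 offensive players,
              5 defensive players and the ball (one channel each), in feet.
    """
    n_frames = agent_xy.shape[0]
    img = np.zeros((11, COURT_Y, COURT_X), dtype=np.float32)
    fade = np.linspace(min_intensity, 1.0, n_frames)   # oldest frame dimmest
    for t in range(n_frames):
        for c in range(11):
            x, y = agent_xy[t, c]
            ix = int(np.clip(x, 0, COURT_X - 1))
            iy = int(np.clip(y, 0, COURT_Y - 1))
            # keep the most recent (brightest) value where a trajectory revisits a pixel
            img[c, iy, ix] = max(img[c, iy, ix], fade[t])
    return img
```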
for our base model, we use logistic regression with 197 hand - crafted features detailed later . to improve upon this basic model, we use a multi - layer ffn with the same features and utilize batch normalization during training .because of the nature of these two models , we could only include the positions of the players at the time of the shot .therefore , we craft images to include the position of the players throughout the possession. we then apply a cnn to these new image features .finally , we create a combined model that adopts both images and the original ffn features for training .for the first models , features with the knowledge of the game in mind were crafted .the list of features includes : * player and ball positions at the time of the shot * game time and quarter time left * player speeds over either five seconds or an entire possession * distances and angles ( with respect to the hoop ) between players * whether the shooting player received a pass two seconds or less before the shot * number of defenders in front of the shooter ( of the shooter ) and within six feed based upon the angles calculated between players * individual time of ball possession for each offensive player logistic regression and fnn both use the same calculated features .in addition , only the cnn does not incorporate the above features . for our model, we use a cnn that consists of three full convolutional layers , each with 32 3x3 filters and a max - pooling layer with a pool - size of 2x2 following each convolutional layer .after the last pooling layer there is a fully connected layer with 400 neurons , and finally an output layer consisting of a softmax function and ten neurons .in addition , we use the relu function for our nonlinearity at each convolutional layer and the fully connected layer .we also implement alexnet and vgg-16 , but we did not garner significant improvement from either model . the final network implemented is a combination of both the feed forward and convolutional networks . for this model , we use both the feed forward features and the fading trajectory images from the cnn . the idea behind the combined network is to have the model identify trajectory patterns in the images along with statistics that are known to be important for a typical basketball game .the cnn and ffn parts of the combined network have the exact same architecture as the stand - alone versions of each model .the final layers just before the softmax layer of each stand - alone network are then fully - connected to a feed - forward layer that consists of 1,000 neurons .this layer is then fed into the final softmax layer to give predictions . after performing experiments and measuring log loss , we found that adding layers to this final network or adding additional neurons to this layer did not improve our final results .all of the models ( fnn , cnn , and fnn + cnn ) use the typical log loss function as the cost function with a softmax function at the output layer .the weights are initialized with a general rule of where is the number of incoming units into the layer . for training, we implement the batch stochastic gradient method utilizing batch normalization .batch normalization is used on the convolutional and feed forward layers of the model . in addition , we train the models on a nvidia titan x using theano . for the cnn and cnn+fnn networkswe utilize our eleven channel images with fading . 
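For concreteness, a sketch of the stand-alone CNN described above — three convolutional layers of 32 3×3 filters, each followed by batch normalization, ReLU and 2×2 max-pooling, a 400-unit fully connected layer, and a 10-way softmax trained with log loss and mini-batch SGD — is given below. It is written in Keras as a stand-in for the original Theano implementation; the "same" padding and the input size are assumptions.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(50, 94, 11), n_classes=10):
    """Stand-alone CNN: 3 x [conv 32@3x3 + BN + ReLU + 2x2 max-pool], FC-400, softmax-10."""
    m = models.Sequential()
    m.add(layers.Conv2D(32, 3, padding="same", input_shape=input_shape))
    m.add(layers.BatchNormalization())
    m.add(layers.Activation("relu"))
    m.add(layers.MaxPooling2D(pool_size=2))
    for _ in range(2):
        m.add(layers.Conv2D(32, 3, padding="same"))
        m.add(layers.BatchNormalization())
        m.add(layers.Activation("relu"))
        m.add(layers.MaxPooling2D(pool_size=2))
    m.add(layers.Flatten())
    m.add(layers.Dense(400))
    m.add(layers.BatchNormalization())
    m.add(layers.Activation("relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))
    m.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
    return m
```

In the combined network, the 400-unit CNN representation and the last feed-forward layer are concatenated into the 1,000-unit fully connected layer before the softmax; that wiring requires the functional API and is omitted from the sketch.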
to justify our representation, we predict our classification problem with each crafted image set with the previously mentioned cnn architecture .figure 3 displays the log loss and error rate for each of our representations .evaluating our representations by log loss and error rate show a dramatic difference between our images .the eleven channel method is understandably the best choice since having eleven channels gives the cnn much more role information .figure 4 gives a summary of the results of the three different models .it is intriguing how poorly the convolutional neural network captures the adversarial multiagent properties without the addition of the hand - crafted features used in the ffn .theoretically , the image data contains all pertinent information related to the play .therefore , in our problem , hand - crafted features clearly remain essential for successful prediction .since the combined model surpasses the performance of either model separated , we presume that the networks find different features during training .while the performance of the eleven channel images is an improvement ( a 10% gain ) , there is still much potential progress .other methods include making each trajectory a different color in rgb space , varying the strength of the fading effect , and including extra channels of heat maps depicting the final ball position .therefore , successfully representing trajectory data in sport that outperforms traditional metrics is a nontrivial problem and is not further explored here .the remaining analyses are based on the combined model .in addition to assessing the accuracy of our model , we explore a basic heat map of basketball shots based upon the raw data . at the very least, we expect that our complete model should be able to recreate a heat map created via raw data .we make the heat map by taking a count of shots made against missed within a square foot of the basketball court . since our classification model gives probabilities of making a shot ( rather than a binary variable ) , we take the maximum probability to create a heat map equivalent to the raw data map . in the raw data heat map ,figure 6 , we note that the best probability of making a shot lies on top of the basket .as we get farther back , the probability decreases with two dead zones ( lines of dark blue signifying very low probability ) : one right outside the paint and another just inside the three point line .the model results , figure 5 , expectedly prefers shots near the basket .the model also predicts a larger high value area surrounding the basket , which extends further into the paint of the court .the model dead - zones are less arc - like as well .there is a single dark - blue arc around the paint / basket , but the model predicts more pocket like behavior for missed shots than the raw data heat map . to further explore our results ,we create heat maps solely based on the role of the player ( to break down scoring chances by agent ) .as before , each role represents an offensive player . 
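The raw-data heat map described above is a binned make-fraction over the court; a sketch with one-square-foot bins is given below, where `shot_xy` and `made` stand for shot locations and outcomes extracted from the event data. The corresponding model heat map is built analogously from the model's predicted probabilities rather than the empirical fraction.

```python
import numpy as np

def shot_heatmap(shot_xy, made, court_x=94, court_y=50):
    """Fraction of made shots per one-square-foot bin of the court.

    shot_xy: (N, 2) shot locations in feet; made: (N,) boolean outcomes.
    """
    bins = [np.arange(court_x + 1), np.arange(court_y + 1)]
    attempts, _, _ = np.histogram2d(shot_xy[:, 0], shot_xy[:, 1], bins=bins)
    makes, _, _ = np.histogram2d(shot_xy[made, 0], shot_xy[made, 1], bins=bins)
    with np.errstate(invalid="ignore", divide="ignore"):
        return makes / attempts            # NaN where no shot was attempted
```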
in figure 7we present a few player roles and their representative heat maps .role 3 is obviously the center position from their respective shot selection and role 5 is the left guard .note that the model predicts a much smaller area of midrange scoring probability than from the raw data for role 3 .the model heat map for role 3 strictly covers the paint , while the raw data has significantly higher shot probabilities outside of the paint .role 5 , which depicts a guard has very similar heat maps for both the raw and model predictions .the only notable differences are the larger hot area at the basket and the pocket of low probability shots just outside the paint detailed in the model heat map .the probabilities that the combined model predicts for each shot may also provide useful insight into the game and our model s interpretation of high versus low value shots .we create these histograms by finding which examples the model gives the highest probability as a shot made or missed by player with role .we then group these examples together , and the probability of making a shot is reported in the histogram . in the histogram figure 8we see that the majority of shots have a low probability .this agrees with common basketball knowledge because many of a guard s shots are beyond the paint .on the other hand , a center lives primarily under the basket and in the paint .therefore , many of their shots are much more likely . watching a game live , a center getting a clean pass right under the basketoften results in a successful goal .in addition , most roles tend to follow the probability pattern of role 5 with the exception of role 3 ( the center ) , which has a wider distribution and higher average probability of making a shot .a brief glance at nba statistics agrees with this interpretation as the players with the highest shooting percentage ( barring free throws ) tend to be centers . the large number of low probability shots is not limited to the combined model .the ffn has many low probability shots while the cnn does not .we probe this phenomena by leaving single features out of the ffn to find the ones responsible for causing low probability shots .we found that one feature in particular , pass - received , is the root cause .this feature detects whether the shooter received a pass two seconds before the shot was taken .we found that when a pass was received by the shooter during the two seconds before the shot the likelihood of missing the shot is much higher .in fact , when a player received a pass , 94% of the shots are missed .although counter - intuitive at first , two seconds is a small window to shoot . 
since two secondsis the maximum time for this particular feature , many of the shooters may have shot immediately after the pass .therefore , many of these shots are hurried explaining why a majority of them are missed .this one feature led to a nearly 10% decrease in the error rate .we also exhibit figure 9 to provide additional visual context for our model probabilities .these figures depict the final positions of all players on the court at the time of the shot .thus we can see how `` open '' the shot maker is at the time of the shot and the relative position of both the offensive and defensive players .the offensive players are blue , defensive players red , and the ball is green .each offensive and defensive player has the letter `` o '' and `` d '' respectively followed by a number signifying the role of that player at that time .there are some times where the model makes some pretty questionable predictions .for example , the three - point shot that is exhibited in the top left of figure 9 with a 0.892 probability is much too high .even unguarded the best three - point shooters in the game can not hit 89.2% .thankfully , these examples are very rare in our model . for the most part ,three - point shots are rated extremely low by the model garnering probabilities of less than 20% .in addition to three - point shots to having a generally low probability , shots that were well - covered by defenders had a much lower probability of success .this is an unsurprising well - known result , but it does add validity to our model .in addition , shots that are open and close to the basket are heavily favored in our model .for example , in the right pictures , both players have open shots right under the basket with the defense well out of position . since our combined model predicts shots with more accuracy , we are curious to find the features that deep learning with a cnn observes with raw trajectory data .our images contain both more temporal and spatial data since we fade images and do not include all multiagent interactions ( since it is difficult to label all inter - player events ) .to investigate our raw trajectory approach , we use a simple gradient ascent method to create an image that most strongly reacts to the filters in our network .for example , in a problem that contains cats and dogs , some filters may be looking for long whiskers to help identify a cat .in addition , these filters help determine whether the network has been well - trained .networks having filters with little to no interpretation or appear random are usually poorly trained .figure 10 shows a few representative images that we craft with the gradient ascent method . in the first image of figure 10 from the left, we see that the model is searching for defensive positioning ( light blue ) and the ball ( green ) .there are clusters of these positions , which shows that the model is searching for the role that shoots the ball via ball positioning and the number of defensive players near the ball . 
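A schematic version of the gradient-ascent visualisation behind figure 10: starting from a random input image, repeatedly step along the gradient that increases the mean activation of a chosen convolutional filter. It is written against the Keras sketch given earlier and uses TensorFlow's autodiff; the step size, iteration count and layer indexing are arbitrary choices, not the authors' settings.

```python
import tensorflow as tf

def visualize_filter(model, layer_index, filter_index,
                     shape=(1, 50, 94, 11), steps=100, step_size=1.0):
    """Gradient ascent on the input image to maximise one filter's mean activation."""
    layer = model.get_layer(index=layer_index)            # e.g. one of the Conv2D layers
    extractor = tf.keras.Model(inputs=model.inputs, outputs=layer.output)
    img = tf.Variable(tf.random.uniform(shape))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activation = extractor(img)
            loss = tf.reduce_mean(activation[..., filter_index])
        grad = tape.gradient(loss, img)
        img.assign_add(step_size * tf.math.l2_normalize(grad))
    return img.numpy()[0]
```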
on the other hand ,the filter on the far right only depicts ball locations and does not include defensive positioning .the second figure from the left is actually very similar to the far right filter ; however , it includes much more than simple ball information .this filter is specifically looking for ball location as well as offensive / defensive player locations near the ball .thus , the cnn model is likely more effective in capturing defensive structure and paths for shot prediction , which is difficult to construct by hand .for further research , it would be very interesting to identify time dependency in basketball plays . in our image data ,we subtract a flat amount at each equally spaced frame to cause the fading effect .however , this assumes that the data in time is linearly related .since this is not necessarily true , designing a recurrent model to find this temporal dependency could be a very interesting problem . instead of having a fading effect in the image data, we can design an lstm that takes a moving window of player and ball trajectories .one last aspect that was not taken into account during this study was the identities of teams and particular players .the focus of this research was to gather more insight on the average shooting plays of teams in the nba .however , teams in the nba have drastically different strategies .for example , the golden state warriors tend to rely on a three - point strategy while bigger teams , such the thunder , build their offensive strategy around being inside the paint .thus , new knowledge on basketball could be gathered if models were applied to different teams and possibly identify some overall team strategies .such a more fine - grained analysis will require much more data .[ 1 ] a. miller , l. bornn , r. adams , and k. goldsberry . ( 2014 ) `` factorized point process intensities : a spatial analysis of professional basketball . ''_ international conference on machine learning_. [ 4 ] k.y .huang and w.l .( 2010 ) `` a neural network method for prediction of 2006 world cup football game . '' _ the 2010 international joint conference on .institute of electrical and electronics engineers . _ 74 [ 5 ] s. ioffe and c. szegedy .( 2015 ) `` batch normalization : accelerating deep network training by reducing internal covariate shift . '' _ international conference on machine learning_. [ 8 ] p. lucey , a. bialkowski , p. carr , s. morgan , i. matthews and y. sheikh . `` representing and discovering adversarial team behaviors using player roles . '' in _ ieee international conference on computer vision and pattern recognition . _ [ 9 ] p. lucey , a. bialkowski , p. carr , y. yue and i. matthews .( 2014 ) `` how to get an open shot : analyzing team movement in basketball using tracking data . '' in _ mitsloan sports analytics conference_. [ 14 ] r. socher , a. karpathy , q. le , c. manning , a. ng .( 2014 ) `` grounded compositional semantics for finding and describing images with sentences . '' _ transactions of the association for computational linguistics 2 : 207 - 218 . _ [ 18 ] x. wei , l. sha , p. lucey , p. carr , s. sridharan and i. matthews .( 2015 ) `` predicting ball ownership in basketball from a monocular view using only player trajectories . ''_ knowledge discover and data mining workshop on large - scale sports analytics _ ,sidney , australia .[ 19 ] k. xu , j. lei ba , r. kiros , k. cho , a. courville , r. salakhutdinov , r. zemel , and y. 
bengio .`` show , attend and tell : neural image caption generation with visual attention . ''_ international conference on machine learning ( icml-15 ) . | in this paper , we predict the likelihood of a player making a shot in basketball from multiagent trajectories . previous approaches to similar problems center on hand - crafting features to capture domain specific knowledge . although intuitive , recent work in deep learning has shown this approach is prone to missing important predictive features . to circumvent this issue , we present a convolutional neural network ( cnn ) approach where we initially represent the multiagent behavior as an image . to encode the adversarial nature of basketball , we use a multi - channel image which we then feed into a cnn . additionally , to capture the temporal aspect of the trajectories we use `` fading . '' by using gradient ascent , we were able to discover what the cnn filters look for during training . last , we find that a combined fnn+cnn is the best performing network with an error rate of 26% . |
the need for a coherent theory of physics and mathematics together arises from considerations of the basic relationship between physics and mathematics. why is mathematics relevant to physics .one way to see the problem is based on the widely held platonic view of mathematics .if mathematical systems have an abstract ideal existence outside of space and time and physics describes the property of systems in space and time , then why should the two be related at all ? yet it is clear that they are very closely related .the problem of the relationship between the foundations of mathematics and physics is not new .some recent work on the subject is described in and in .in particular the work of tegmark is is quite explicit in that it suggests that the physical universe is a mathematical universe .another approach to this problem is to work towards construction of a theory that treats physics and mathematics together as one coherent whole .such a theory would be expected to show why mathematics is so important to physics by describing details of the relation between mathematical and physical systems . in this papera possible approach to a coherent theory of physics and mathematics is described .the approach is based on the field of reference frames that follows from the properties of quantum mechanical representations of real and complex numbers .the use , here , of reference frames is similar in many ways to that used by different workers in areas of physics . in general, a reference frame provides a background or basis for descriptions of systems .in special relativity , reference frames for describing physical dynamics are based on choices of inertial coordinate systems . in quantum cryptography ,polarization directions are used to define reference frames for sending and receiving messages encoded in qubit string states .the use of reference frames here differs from those noted above in that the frames are not based on a preexisting space and time as a background .instead they are based on a mathematical parameterization of quantum theory representations of real and complex numbers . in particular , each frame in the field is based on a quantum theory representation , of the real and complex numbers where can be viewed as a set of equivalence classes of cauchy sequences of quantum states of qukit strings . 
is a set of pairs of these equivalence classes .the parameter is the base ( for qubits ) , denotes a basis choice for the states of qukit strings that are values of rational numbers , and denotes an iteration stage .the existence of iterations follows from the observation that the representations of real and complex numbers are based on qukit string states .these are elements of a hilbert space that is itself defined as a vector space over a field of complex numbers .consequently one can use the real and complex numbers constructed in a stage frame as the base of a stage frame .each reference frame contains a considerable number of mathematical structures .besides and a frame contains representations of all mathematical systems that can be described as structures based on and however frames do not contain physical theories as mathematical structures based on the reason is that the frames do not contain any representations of space and time .the goal of this paper is to take a first step in remedying this defect by expansion of the domain of each frame to include discrete space and time lattices .the lattices , in a frame , are such that the number of points in each dimension is given by the spacing and for each lattice , and are fixed with an arbitrary nonnegative integer .it follows that each dimension component of the location of each point in a lattice is a rational number expressible as a finite string of base digits .representations of physical systems of different types are also present in each frame . however , the emphasis here is on strings of qukits , present in each frame .these strings are considered to be hybrid systems in that they are both physical systems and mathematical systems . as mathematical systems , the quantum states of each string , in some basis ,represent a set of rational numbers .as physical systems the motion of strings in a frame is described relative to a space and time lattice in the frame .this dual role is somewhat similar to the concept that information is physical .considerable space in the paper is devoted to how observers in a stage ( parent ) frame view the contents of a stage frame .for an observer , in a frame the numbers in the real and complex number base of the frame are abstract and have no structure .the only requirement is that they satisfy the set of axioms for real or complex number systems .points of lattices in the frame are also regarded as abstract and without structure .the only requirement is that the lattices satisfy some relevant geometric axioms . 
the view of the contents of a stage frame as seen by an observer in a stage frame , is quite different .elements of the stage frame that sees as abstract and with no structure , are seen by to have structure .numbers in are seen by to be equivalence classes of cauchy sequences of states of stage hybrid systems .space points of stage lattices with space and one time dimension are seen in a stage j-1 frame to be tuples of hybrid systems with the location of each point given by a state of the tuple .time points are seen to be hybrid systems whose states correspond to the possible lattice time values .all this and more is discussed in the rest of the paper .the next section is a brief summary of quantum theory representations and the resulting frame fields .section [ pfctpm ] describes a possible approach to a coherent theory of physics and mathematics as the inclusion of space and time lattices in each frame of the frame field .properties of the lattices in the frames are described .qukit strings as hybrid systems are discussed in the next section .their mathematical properties as rational number systems with states as values of rational numbers are described . also included is a general hamiltonian description of the rational number states as energy eigenvalues and a schrdinger equation description of the dynamics of these systems .section [ fevpf ] describes frame entities as viewed from a parent frame .included is a description of real and complex numbers , quantum states and hilbert spaces , and space and time lattices .section [ lhs ] discusses in more detail a stage views of stage lattice points and locations as tuples of hybrid systems and point locations as states of the tuples .dynamics of these tuples in stage is briefly described as is the parent frame view of the dynamics of physical systems in general .the last section is a discussion of several points .the most important one is that frame field description given here leads to a field of different descriptions of the physical universe , one for each frame , whereas there is just one .this leads to the need to find some way to merge or collapse the frame field to correspond to the accepted view of the physical universe .this is discussed in the section as are some other points .whatever one thinks of the ideas and systems described in this work , it is good to keep the following points in mind .one point is that the existence of the reference frame field is based on properties of states of qukit string systems representing values of rational numbers .however the presence of a frame field is more general in that it is not limited to states of qukit strings .reference frame fields arise for any quantum representation of rational numbers where the values of the rational numbers , as states of some system , are elements of a vector space over the field of complex numbers .another point is that the three dimensional reference frame field described here exists only for quantum theory representations of the natural numbers , the integers , and the rational numbers . neither the basis degree of freedom , nor the iteration stages , are present in classical representations .this is the case even for classical representations based on base digit or kit strings .the reason is that states of digit strings are not elements of a vector space over a complex number field . 
finally ,although understandable , it is somewhat of a mystery why so much effort in physics has gone into the description of various aspects of quantum geometry and space time and so little into quantum representations of numbers .this is especially the case when one considers that natural numbers , integers , rational numbers , and probably real and complex numbers , are even more fundamental to physics than is geometry .in earlier work quantum theory representations of natural numbers , integers , and rational numbers , were described by states of finite length qukit strings that include one qubit . to keep the description as simple as possible ,the strings are considered to be finite sets of qukits and one qubit with the qukits and qubit parameterized by integer labels .the natural ordering of the integers serves to order the set into a string .this purely mathematical representation of qukit strings makes no use of physical representation of qukit strings as extended systems in space and/or time .physical representations are described later on in section [ qsshs ] after the introduction of space and time lattices into each frame of the frame field . the qukit ( ) string states are given by where is a valued function with domain and denotes the sign. the string location of the sign qubit is given by where is any nonnegative integer .this expresses the range of possible locations of the sign qubit from one end of the string to the other . by convention has the sign qubit at the right end of the string , at the left end .the qubit can occupy the same integer location as a qukit .the reason for the subscript will be clarified later on .a compact notation is used where the location of the sign qubit is also the location of the point . as examples ,the base numbers are represented here by respectively .strings are characterized by the values of for each and the string states give a unified quantum theory representation of natural numbers and integers in and for numbers in ; for numbers in , and there are no restrictions for here is the set of rational numbers expressible as where is any integer whose absolute value is and and is the set of nonnegative integers in the correspondence between the numbers and the states is given by the observation that each corresponds to an integer also , as noted , is the location of the point measured from the right end of the string . since one is dealing with quantum states of qukit strings , states with leading and trailing are included . in this casethere are many states that are all arithmetically equal even though they are orthogonal quantum mechanically .for example even though the two states are orthogonal .the set of states so defined form a basis set that spans a fock space of states .a fock space is used because the basis set includes states of strings of different lengths .linear superposition states in the space have the form the and sums are over all positive integers and from to , and the sum is over all functions with domain . 
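as a concrete illustration of the string states just described , the small python sketch below ( entirely ours ; the chosen representation and the function names are hypothetical ) encodes a basis - state label as a sign , a string of base - $k$ digits and the position of the radix point , and shows that two labels differing only by leading or trailing zeros have the same arithmetic value even though they would label orthogonal basis states .

```python
from fractions import Fraction

def rational_value(digits, point, sign=+1, base=2):
    """Arithmetic value of a basis-state label: a sign, a string of base-`base`
    digits (most significant first) and the number of digits to the left of the
    radix point.  Leading and trailing zeros change the label but not the value."""
    value = Fraction(0)
    for i, d in enumerate(digits):
        value += Fraction(d) * Fraction(base) ** (point - 1 - i)
    return sign * value

# two distinct labels (hence orthogonal basis states) with the same value, 5/2:
a = rational_value([1, 0, 1], point=2)         # "10.1" in base 2
b = rational_value([0, 1, 0, 1, 0], point=3)   # "010.10" in base 2
print(a, b, a == b)                            # 5/2 5/2 True
```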
a sequence of such states , with the positive integers as its domain , is a cauchy sequence if it satisfies the cauchy condition : for each $\ell$ there is an $h$ such that , for all $m , n > h$ ,
$$ \bigl|\,|s_m\rangle -_{A} |s_n\rangle\,\bigr|_{A} \;\le_{A}\; |0.0\cdots 0\,1_{-\ell}\rangle_{k,g}\,. $$
here the state on the left is the basis state that is the base - $k$ arithmetic absolute value of the state resulting from the arithmetic subtraction of one element of the sequence from another . the cauchy condition says that this state is arithmetically less than or equal to the state on the right , the basis state whose rational value is $k^{-\ell}$ , with $\ell$ allowed to increase without bound . it follows from the description of the lattices given earlier that the points of a lattice with a given number of space dimensions and one time dimension can be taken to be tuples of rational numbers . the value of each number in the tuple is expressible as a finite string of base digits . one may hope that the structure of space and time , as a continuum or as some other structure , will emerge when one finds a way to merge the frames in the frame field . so far each frame contains in its domain space and time lattices , and strings of qukits that are numbers . it is reasonable to expect that it also contains various types of physical systems . for the purposes of this paper the types of included physical systems do not play an important role , as the main emphasis here is on qukit string systems . also , in this first paper descriptions of system dynamics will be limited to nonrelativistic dynamics . it follows from this that each frame includes a description of the dynamics of physical systems based on the space and time lattices in the frame . the kinematics and dynamics of the systems are expressed by theories that are present in the frame as mathematical structures over the real and complex number base of the frame . this is the case irrespective of whether the physical systems are particles , fields , or strings , or have any other form . one reason space and time are described as discrete lattices instead of as continua is that it is not clear what the appropriate limit of the discrete description is . as is well known , there are many different descriptions of space and time present in the literature . the majority of these descriptions arise from the need to combine quantum mechanics with general relativity . they include use of various quantum geometries , and space time as a foam and as a spin network as in loop quantum gravity . these are in addition to the often used assumption of a fixed flat space and time continuum that serves as a background arena for the dynamics of all physical systems , from cosmological to microscopic . space and time may also be emergent in an asymptotic sense . the fact that there are many different lattices in a frame , each characterized by different values of the point spacing and the number of points , and that each can serve as a background space and time , is not a problem . this is no different than the fact that one can use many different space and time lattices , with different spacings and numbers of points , to describe the discrete dynamics of systems . so far the domain of each frame contains space and time lattices , many types of physical systems ( such as electrons , nuclei , atoms , etc . ) , and physical theories as mathematical structures based on the real and complex numbers . these theories describe the kinematics and dynamics of these systems on the lattices . also included are qukit strings . states of these strings were seen to be values of rational numbers . these were used to describe real and complex numbers as cauchy sequences of these states . here it is proposed to consider these strings as systems that can either be numbers , i.e. mathematical systems , or physical systems . because of this dual role , they are referred to as hybrid systems .
as such they will be seen to play an important role .support for this proposal is based on the observation that the description of qukit strings as both numbers and physical systems is not much different than the usual view in physics regarding qubits and strings of qubits . as a unit of quantum information, the states of a qubit can be and which denote single digit binary numbers .the states can also be as spin projection states of a physical spin system . in the same way strings of qubits are binary numbers in quantum computation , or they can represent physical systems such as spins or atoms in a linear ion trap . to be blunt about it , ``information is physical '' , and information is mathematical .also it is reasonable to expect that the domain of a coherent theory of physics and mathematics together would contain systems that are both mathematical systems and physical systems .the hybrid systems are an example of this in that they are number systems , which are mathematical systems , and they are physical systems .let denote a hybrid system in that contains qukits and one qubit . is any nonnegative integer , is any nonnegative integer is the iteration stage of any frame containing these systems , and is the base ( or dimension of the hilbert state space ) of the in the string system .note that can be different from the base of the frame containing these systems .the different in the system are distinguished by labels in the integer interval .$ ] the qubit has label where the canonical ordering of the integers serves to order the and qubit into a string system . the presence of the sign qubit is needed if the states of the hybrid systems are to be values of rational numbers . since the qubit also corresponds to the point , the value of gives the location of the sign and point in note that there is a change of emphasis from the usual description of numbers . in the usual description, strings of base digits , such as with are called rational numbers . herestates , such as are called _ values _ of rational numbers .the hybrid system will also be referred to as a rational number system .the reason is that the set of all basis states of a hybrid system correspond to a set of values of rational numbers .the type of number , represented is characterized by the value of and state of the sign qubit in a rational number instead of a rational number system .this would agree with the usual physical description of systems .for example a physical system of a certain type is a proton , not a proton system .however referring to as a rational number instead of a rational number system seems so at odds with the usual use of the term that it is not done here . ]as was noted , the states of the systems in a frame are elements of a hilbert space in the frame .the choice of a basis set or gauge fixes the states of that are values of rational numbers .these states are represented as often the subscripts on the states will be dropped as they will not be needed for the discussion .the description of the hybrid systems as strings of qukits is one of several possible structures .for example , as physical systems that move and interact on a space lattice the strings could be open with free ends or closed loops . 
in this case , aspects of string theory may be useful in describing the physics of the strings .whatever structure the hybrid systems have , it would be expected that , as bound systems , they have a spectrum of energy eigenstates described by some hamiltonian if the rational number states of the hybrid system , are energy eigenstates , then one has the eigenvalue equation where is the energy eigenvalue of the state the superscript on allows for the possibility that the hamiltonian depends on the type of hybrid system .the gauge variable has been removed because the requirement that eq .[ shenergy ] is satisfied for some choice of fixes the gauge or basis to be the eigenstates of since is not known , neither is the dependence of on the existence of a hamiltonian for the hybrid systems means that there is energy associated with the values of rational numbers represented as states of hybrid systems . from thisit follows that there are potentially many different energies associated with each rational number value .this is a consequence of the fact that each rational number value has many string state representations that differ by the number of leading and trailing one way to resolve this problem is to let the energy of a hybrid system state with no leading or trailing be the energy value for the rational number represented by the state . in this way one has , for each a unique energy associated with the value of the rational number shown by the state .one consequence of this association of energy to rational number values is that to each cauchy sequence of rational number states of hybrid systems there corresponds a sequence of energies .the energy of the state in the sequence is given by it is not known at this point if the sequence of energies associated with a cauchy sequence of hybrid system states converges or not . even if energy sequences converge for cauchy sequences in an equivalence class , the question remains whether or not the energy sequences converge to the same limit for all sequences in the equivalence class .the above description is valid for one hybrid system . in order to describe more than one of these systems , another parameter , is needed whose values distinguish the states of the different systems . to this end the states of a system are expanded by including a parameter as in in this case the state of two is given by where .this allows for the states of the two systems to have the same and values .pairs of hybrid systems are of special interest because states of these pairs correspond to values of complex rational numbers .the state of one of the pairs is the real component and the other is the imaginary component . since these components have different mathematical properties , the corresponding states in the pairs of states of hybrid systems must be distinguished in some way .one method is to distinguish the hybrid systems in the pairs by an index added to as in in this case states of and are values of the real and imaginary components of rational numbers . 
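the eigenvalue relation referred to above as eq . [ shenergy ] , and the energy sequence attached to a cauchy sequence of rational number states , can be transcribed roughly as follows ; the symbols are ours and are meant only as a reading aid :
$$ H^{(h)}\,|s\rangle_{k} \;=\; E^{(h)}(s)\,|s\rangle_{k}\,,\qquad \{\,|s_n\rangle_{k}\,\}_{n\ge1}\ \longmapsto\ \{\,E^{(h)}(s_n)\,\}_{n\ge1}\,, $$
with the superscript $(h)$ labelling the type of hybrid system . whether the energy sequence on the right converges , and whether its limit is the same for all sequences in an equivalence class , are the open questions raised above . for the pairs of hybrid systems just introduced for complex rational values , each member of the pair carries its own copy of this relation .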
in this casecomplex numbers are cauchy sequences of states of pairs , of hybrid systems .as might be expected , the kinematics and dynamics of hybrid systems in a frame are described relative to a space and time lattice in the frame .for example a schrdinger equation description of two hybrid systems interacting with one another is given by is the discrete forward time derivative where here is the state of the two hybrid systems at time the hamiltonian can be expressed as the sum of a hamiltonian for the separate systems and an interaction part as in for two hybrid systems where the first term of is the kinetic energy operator for the system .the second term is the hamiltonian for the internal states of the system .it is given by eq .[ shenergy ] .also is the mass of . and are the discrete forward and backward space derivatives , and is planck s constant .the dot product indicates the usual sum over the product of the components in the derivatives .note that a possible dependence of the mass of on has been included .the question arises regarding how one should view -tuple hybrid systems as physical systems .should they be regarded as independent systems each with its own hamiltonian ( in eq .[ h0hint ] ) or as systems bound together with energy eigenstates that are quite different from those of the single hybrid systems in isolation .one way to shed light on this question is to examine physical representations of number tuples in computers .there -tuples of numbers are represented as strings of bits or of qubits ( spin systems ) bound to a background matrix of potential wells where each well contains one qubit .the locations of the qubits in the background matrix determines their assembly into strings and into tuples of strings .here it is assumed that tuples of hybrid systems consist of systems bound together in some fashion .details of the binding , and its effect on the states of the individual in the -tuple are not known at this point .however it will be assumed that the effect is negligible . in this casethe energy of each component state in the tuple will be assumed to be the same as that for an individual system .then , the energy of the state is the sum of the energies of the individual component states .also the energy is assumed to be independent of the values .this picture is supported by the actual states of computers and their computations .the background potential well matrix that contains the -tuples of qubit string states is tied to the computer .since the computer itself is a physical system , it can be translated , rotated , or given a constant velocity boost . in all these transformationsthe states of the qubit strings in the -tuples and the space relations of the qubit strings to one another is unchanged .these parameters would be changed if two computers collided with one another with sufficient energy to disrupt the internal workings .this picture of each frame containing physical systems and a plethora of different hybrid systems and their tuples may seem objectionable .however , one should recall that here one is working in a possible domain of a coherent theory of physics and mathematics together . 
in this casethe domain might be expected to include many types of hybrid systems that have both physical and mathematical properties .this is in addition to the presence of physical systems and mathematical systems .so far it has been seen that each frame , in the frame field contains a set of space and time lattices where the number of points in each dimension and the point spacing satisfy eq .[ mdelta ] for some and .the frame also contains qukit string systems as hybrid systems and various tuples of these systems . here need have no relation to each frame also contains physical theories as mathematical structures based on the real and complex number base of the frame .these theories describe the kinematics and dynamics of physical systems on the space and time lattices in the frame . for quantum systemsthese theories include hilbert spaces as vector spaces over since this is true for every frame , it is true for a frame and for a parent frame this raises the question of how entities in a frame are seen by an observer in a frame at an adjacent iteration stage .as was noted in section [ firf ] , it is assumed that ancestor frames and their contents are not visible to observers in descendant frames ; however , descendant frames and their contents are visible to observers in ancestor frames .it follows that observers in frame can not see frame or any of its contents .however , observers in can see and its contents .one consequence of the relations between frames at different iteration stages is that entities in a frame , that are seen by an observer in the frame as featureless and with no structure , correspond to entities in a parent frame that have structure .for example , elements of are seen by an observer in as abstract , featureless objects with no properties other than those derived from the relevant axiom sets .however , to observers in a parent frame numbers in are seen as equivalence classes ( pairs of equivalence classes for ) of cauchy sequences of states of base qukit string systems .thus entities that are abstract and featureless in a frame have structure as elements of a parent frame .it is useful to represent these two in - frame views by superscripts and .thus and denote the stage and stage frame views of the number base of frame they are often referred to in the following as parent frame images of .the distinction between elements of a frame and their images in a parent frame exists for other frame entities as well .the state in corresponds to the state in which is the parent frame image of in the above is a featureless abstract complex number in whereas as an element of is an equivalence class of cauchy sequences of hybrid system states .the use of stage superscripts and subscripts applies to other frame entities , such as hybrid systems , physical systems , and space and time lattices.a hybrid system in has as a parent frame image .states of are vectors in the hilbert space in states of are vectors in the two states are different in that is an eigenstate of an operator whose corresponding eigenvalue is a rational real number with no structure .the state of is an eigenstate of a number value operator whose corresponding eigenvalues are rational real numbers in corresponding to these eigenvalues are equivalence classes of cauchy sequences of states of hybrid systems in a stage frame . 
here are fixed and and for the term in the sequence .this accounts for the fact that each state in the sequence is a state of a different hybrid system with held fixed .the eigenvalue equivalence class is a base real rational number .since it is an element of , it contains a constant sequence of hybrid system states if and only if all prime factors of are factors of if this is the case then one can equate the equivalence class to a single state of a hybrid system to conclude that the eigenvalue associated with is a state of a hybrid system in stage frame .is the view of a physical system as seen by an observer in a stage frame .the image of this view in a stage frame is denoted by the difference between the two is that physical properties of as eigenvalues of operators over are featureless abstract real numbers .properties of as operator eigenvalues , are equivalence classes of cauchy sequences of hybrid system states .a frame independent description would be expected to appear only asymptotically when the frames in the field are merged . ] if has prime factors that are not prime factors of ( such as and ) , the eigenvalue for the eigenstate is still a real rational number . however , as an equivalence class of cauchy sequences of base hybrid system states , ] it does not contain a constant sequence of hybrid system states .instead it contains a sequence that corresponds to an infinite repetition of a base hybrid system state ( just like the decimal expansion of ) .a similar representation holds for parent frame views of point locations of lattices .let denote a lattice of space dimensions and one time dimension in a frame the components ( with ) of the space locations of the lattice points , are such that is a rational real number .the lattice points are abstract and have no structure other than that imparted by the values as rational real numbers in the have no structure other than the requirement that they are both rational and real numbers .the view or image of from the position of an observer in a stage parent frame is denoted by the image points and locations of points in are denoted by and the space point locations are different from the in that they have more structure .this follows from the fact that they are tuples of rational real numbers in it follows from this that each component , of a space point location , in is an equivalence class of cauchy sequences of states of hybrid systems .since each of the equivalence classes is a real number equivalent of a rational number value , the equivalence class includes many ( numerically ) constant sequences of hybrid system states .the existence of constant sequences follows from the observation that the subscript of the lattice image is the same as that for .the states in the different sequences differ by the presence of different numbers of leading and trailing a useful way to select a unique constant sequence is to require that all states in the sequence be a unique state of the hybrid system where the subscripts are the same as those for replacement of the sequence by its single component state gives the result that , for each is a state of it follows from this that the locations of space points in as viewed from a stage frame , are seen as states of a -tuple of hybrid systems in the stage frame .these states correspond to -tuples of rational number values .a similar representation of the time points in the lattices is possible for the real rational number time values . 
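the criterion invoked above for the existence of a constant sequence rests on a familiar number - theoretic fact : a rational value has a finite base - $k$ digit expansion exactly when every prime factor of its reduced denominator divides $k$ . the short python check below is our own illustration of that fact ( the function name is hypothetical ) ; $1/3$ never terminates in base 10 , just like its decimal expansion , but does terminate in base 3 . this is the fact behind the distinction drawn above between equivalence classes that do and do not contain constant sequences .

```python
from math import gcd

def terminates_in_base(p, q, k):
    """True iff p/q (q > 0) has a finite base-k digit expansion, i.e. every
    prime factor of the reduced denominator divides k."""
    q //= gcd(p, q)
    while q > 1:
        g = gcd(q, k)
        if g == 1:
            return False
        while q % g == 0:
            q //= g
    return True

print(terminates_in_base(1, 3, 10))  # False: 1/3 = 0.333... in base 10
print(terminates_in_base(1, 3, 3))   # True:  1/3 = 0.1 in base 3
print(terminates_in_base(3, 8, 2))   # True:  3/8 = 0.011 in base 2
```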
in this casethe time values of a lattice are seen in a parent frame as rational number states of a hybrid system .the above description of a stage frame view of lattices gives point locations of as states of a tuple of hybrid systems for the space part and states of another hybrid system for the time part .the superscripts and denote the hybrid systems associated with space and time points respectively .this strongly suggests that each space point image in the space part of a parent frame image , of be identified with a tuple , of hybrid systems and each time point image in the time part be identified with a single hybrid system , for each image point the location is given by the state of the tuple of hybrid systems in a parent frame that is associated with the space point image .similarly the location of each image time point in is given by the state of the hybrid system , associated with the point .this shows that set of all parent frame images of the space points of become a set of tuples of parent frame hybrid systems , with the state of each tuple corresponding to the image space point location in the parent frame images of the time points of become a set of hybrid systems each is in a different state corresponding to the different possible lattice time values .figure [ pactpm3 ] illustrates the situation described above for lattices with one space and one time dimension .the stage and lattice points are shown by the intersection of the grid lines . for the stage lattice the points correspond to rational number pairs whose locations are given by the values of the pairs of numbers . for the stage image latticethe points correspond to pairs of hybrid systems , one for the space dimension and one for the time dimension .point locations are given by the states of the hybrid system pairs .non relativistic world paths for physical systems and its stage image are also shown .it is of interest to compare the view here with that in .tegmark s explicitly stated view is that real numbers , as labels of space and time points , are distinct from the points themselves .this is similar to the setup here in that points of parent frame images of lattices are tuples of hybrid systems , and locations or labels of the points of parent frame images of lattice are states of tuples of hybrid systems .the differing views of hybrid systems as either number systems or physical systems may seem strange when viewed from a perspective outside the frame field and in the usual physical universe .however it is appropriate for a coherent theory of physics and mathematics together as such a theory might have systems that represent different entities , depending on how they are viewed .the description of parent frame images of lattice space points and their locations as tuples of hybrid systems and states of the tuples , means that the image of each point has a mass .the mass is equal to that of the tuple of hybrid systems associated with each point image .the ( rest ) masses of all space points in an image lattice should be the same as the tuples of hybrid systems associated with each point are the all the same .however each of the tuples is in a different image state that corresponds to the different locations of each point image .each component of corresponds to a hybrid system state of a component hybrid system in the tuple ( note the subscript ) .each of these component hybrid system states is an energy eigenstate of a hybrid system hamiltonian .the corresponding energy eigenvalue , is defined by eq . 
[ shenergy ] .here is positive as the lattice component locations are all if the component hybrid systems in a tuple do not interact with one another , then the energy associated with a parent frame image lattice location is the sum of the component energies . in this case the energy associated with the location image , , of is given by denotes the tensor product of all component states if the component systems in a tuple do interact with one another then the situation becomes more complex as the hamiltonian and energy eigenvalues must take account of the interactions . at this point the specific dependence of the energy on the parent frame image of the lattice point locationsis not known as it depends on the properties of the hybrid system hamiltonian .nevertheless the existence of energies associated with locations of points of parent frame images of a space lattice is intriguing .one should note , though , that this association of energy to space points holds only for parent frame images .it does not extend to lattices when viewed by an observer in this aspect is one reason why one needs to do more work , particularly on the merging of frames in the frame field .it is quite possible that , in the case of a cyclic frame field , some aspects of the association of energy with space points in the same frame will be preserved .note that for cyclic frame fields the restriction that an observer can not see ancestor frames or their contents must be relaxed .the reason is that ancestor frames are also descendant frames .the description of the motion of hybrid systems and other physical systems in a stage frame , as seen from a parent stage frame , is interesting .the reason is that the dynamics and kinematics of systems are based on a parent frame image lattice , whose space points and point locations are tuples , , and tensor product states of the tuples of hybrid systems .the time points and point locations are hybrid systems , and states of these systems .the and superscripts allow for the possibility that the hybrid systems associated with space point images may be different from those associated with time images .it follows that the stage frame description of the motion and dynamics of stage systems is described relative to the states of certain hybrid systems in the parent frame . to understand thisconsider a simple example where denotes the state of some physical system in a stage frame at position and at time the pair denote a point in a stage frame space and time lattice which , for simplicity , consists of one space and one time dimension .the stage frame time evolution of the state is given by a discrete schrdinger equation, here is the discrete forward time derivative defined by as a sum of kinetic and potential terms , the hamiltonian , has the form to keep things simple , the description is restricted to just one system interacting with an external potential . in this case also is planck s constant and is the mass of system . in the above ,the forward and backward discrete derivatives and are defined similar to the forward time derivative .one has description of the time development and hamiltonian for a system is a description in a stage frame . 
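to make the discrete derivatives concrete , here is a minimal one - dimensional python sketch of a single forward - time step of the kind of discrete schrödinger equation described above , for one system in an external potential . everything about it ( grid size , units , the explicit euler update , the handling of the lattice edges ) is our choice for illustration ; in particular an explicit forward - time update is not unitary and is shown only to exhibit the forward time derivative and the forward - backward kinetic term .

```python
import numpy as np

def forward_backward_laplacian(psi, dx):
    """D_+ . D_- applied to psi: the three-point discrete Laplacian."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return lap

def schrodinger_step(psi, V, dx, dt, m=1.0, hbar=1.0):
    """One step of  i*hbar*(psi(t+dt)-psi(t))/dt = H psi(t),
    with H = -(hbar^2/2m) D_+ . D_-  +  V(x)."""
    H_psi = -(hbar**2 / (2.0 * m)) * forward_backward_laplacian(psi, dx) + V * psi
    return psi - 1j * (dt / hbar) * H_psi

# toy usage: a Gaussian packet on a 200-point lattice in a harmonic potential
x = np.linspace(-10.0, 10.0, 200)
dx = x[1] - x[0]
psi = np.exp(-0.5 * (x - 2.0) ** 2).astype(complex)
V = 0.5 * x**2
for _ in range(10):
    psi = schrodinger_step(psi, V, dx, dt=1e-4)
```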
viewed from a parent stage frame the image of the schrdinger equation , eq .[ schrj ] describes the motion of the system in the image lattice since the space and time point locations in the image lattice are states of hybrid systems and , the image schrdinger equation describes the motion of system relative to these states .the image equation is given by the image state , is the same state in the hilbert space as is in here and are shorthand notations for the hybrid system states , and of and the hamiltonian is given by the potential is a function of the states of the value of is a real number in that is expected to be the same as that of in the forward and backward discrete derivatives are expressed by equations similar to eq .[ deltafb]: in these equations and denote the hybrid system states and where the subscript denotes arithmetic addition and subtraction .for example if in binary then this use of states of and as point locations of an image space and time lattice must be reconciled with the observation that these hybrid systems are dynamical systems that move and interact with one another and with other physical systems. if the and and their states are to serve as points and point locations of a space and time lattice image , they must be dynamically very stable and resistant to change .this suggests that these systems must be very massive and that their interactions with each other and other systems is such that state changes occur very rarely , possibly on the order of cosmological time intervals .is described in a stage frame . here need have no relation to ] the reason for these restrictions on the properties of space and time hybrid systems is that one expects the space and time used to describe motion of systems to be quite stable and to change at most very slowly .changes , if any , would be expected to be similar to those predicted by the einstein equations of general relativity .it is to be emphasized that the work presented so far is only a beginning to the development of a complete framework for a coherent theory of physics and mathematics together . not only that but one must also find a way to reconcile the multiplicity of universes , one for each frame , to the view that there is only one physical universe . one way to achieve reconciliation is to drop the single universe view and to relate the multiplicity of frame representations of physics and mathematics to the different many physical universes view of physics .these include physical universes in existing in different bubbles of space time , and other descriptions of multiple universes including the everett wheeler description . 
whether any of these are relevant here or not will have to await future work .if one sticks to the single physical universe view then the frames with their different universes and space and time representations need to be merged or collapsed to arrive in some limit at the existing physical universe with one space time .this applies in particular to the iteration stage and gauge degrees of freedom as their presence is limited to the quantum representation of numbers .one expected consequence of the merging is that it will result in the emergence of a single background space time as an asymptotic limit of the merging of the space time lattices in the different frames .whether the ultimate space time background is a continuum , a foam , or has some other form , should be determined by details of the merging process .in addition the merging may affect other entities in the frames .physical systems , denoted collectively as , may become the observed physical systems in the space time background .in addition , hybrid systems may split into either physical systems or mathematical systems .one potentially useful approach to frame merging is the use of gauge theory techniques to merge frames in different iteration stages .one hopes that some aspects of the standard model in physics will be useful here .this will require inclusion of relativistic treatment of systems and quantum fields in the frames .this look into a possible future approach emphasizes how much there is to accomplish .nevertheless one may hope that the work presented here is a beginning to the development of a coherent theory .the expansion of the frames in the frame field to include , not only mathematical systems , but also space and time lattices , and hybrid systems that are both mathematical systems and physical systems , seems reasonable from the viewpoint of a coherent theory of physics and mathematics together .one might expect such a theory to contain systems that can be either physical systems or mathematical systems .the use of massive hybrid systems to be stage frame images of points of space and time lattices in stage frames suggests that there must be different types of hybrid systems .for example , stage theoretical predictions of the values of some physical quantity are , in general , real numbers in their images in a stage parent frame are equivalence classes of cauchy sequences of states of hybrid systems where are dependent on positions in the cauchy sequences . if the predicted values are rational numbers expressible as a finite string of base digits , then the stage values can be expressed as states of rather than equivalence classes of sequences of states .if the images of properties of the physical quantities are to be reflected in the properties of the hybrid systems , then different types of hybrid systems , such as must be associated with different physical quantities . 
whether these descriptions of parent frame images as hybrid systems will remain or will have to be modified remains to be seen . however , it should be recalled that these images are based on the dual role played by values of rational numbers , both as mathematical systems and as locations of components of points in the lattices . recall that the notion of a point in a lattice is separated from the location of the point , just as the notion of a number , as a mathematical system , is separated from the value of a number . this use of number and number value is different from the usual use in mathematics , in that such expressions are usually considered as rational numbers and not as values of rational numbers . here a rational number , as a hybrid system , is similar to the usual mathematical concept of a set of rational numbers as a model of the rational number axioms . in conclusion , it is worth reiterating the last paragraphs at the end of the introduction . whatever one thinks of the ideas presented in the paper , the following points should be kept in mind . two of the three dimensions of the field of reference frames are present only for quantum theory representations of the real and complex numbers . these are the gauge or basis degree of freedom and the iteration stage degree of freedom . they are not present in classical descriptions . the number base degree of freedom is present for both quantum and classical representations based on rational number representations by digit strings . the presence of the gauge and iteration degrees of freedom in the quantum representation described here is independent of the description of rational number values as states of qukit string systems . any quantum representation of the rational numbers , such as states of integer pairs , where the states are elements of a vector space , will result in a frame field with gauge and iteration degrees of freedom . finally , the importance of numbers to physics and mathematics should be emphasized . it is hoped that more work on combining quantum physics and the quantum theory of numbers will be done . the need for this is based on the observation that natural numbers , integers , rational numbers , and probably real and complex numbers , are even more fundamental to physics than is geometry .

this work was supported by the u.s . department of energy , office of nuclear physics , under contract no . de - ac02 - 06ch11357 .

e. wigner , the unreasonable effectiveness of mathematics in the natural sciences , _ commun . pure and applied math . _ * 13 * , 1 ( 1960 ) ; reprinted in e. wigner , _ symmetries and reflections _ ( indiana univ . press , bloomington , in , 1966 ) , pp . 222 - 237 .

p. benioff , properties of frame fields based on quantum theory representations of real and complex numbers , in k. mahdavi and d. koslover , eds . , _ advances in quantum computation _ , contemporary mathematics 482 ( proceedings of a conference held september 10 - 13 , 2007 , university of texas , tyler , tx ) , pp . 125 - 163 ; arxiv:0709.2664 .

n. seiberg , emergent spacetime , in _ the quantum structure of space and time _ , proceedings of the 23rd solvay conference , brussels , belgium , dec . 1 - 3 , 2005 , d. gross , m. henneaux , and a. sevrin , eds . , world scientific , new jersey .
this work is based on the field of reference frames , built from quantum representations of real and complex numbers , that was described in earlier work . here the frame domains are expanded to include space and time lattices . strings of qukits are described as hybrid systems , as they are both mathematical and physical systems . as mathematical systems they represent numbers . as physical systems , in each frame the strings have a discrete schrödinger dynamics on the lattices . the frame field has an iterative structure such that the contents of a stage j frame have images in a stage j-1 ( parent ) frame . a discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames . the resulting association of energy with images of lattice point locations , as hybrid system states , is discussed . representations and images of other physical systems in the different frames are also described .
ideally , a good cosmological -body simulation should resolve in some detail the internal structure of individual galaxies ( on mass scales well below that of a single galaxy and on length scales of a few kiloparsecs or less ) while at the same time representing the growth of density perturbations on the scale of clusters and even superclusters of galaxies . galaxy morphologies are observed to correlate with their membership in clusters . the virgo cluster is estimated to be an appreciable , if not dominant , source of tides on the local group 15 to 20 mpc away . and when studying the nonlinear growth of galaxy - sized perturbations , one should not discount the possible coupling to modes with wavelengths as large as 50 mpc . achieving the required dynamic range is a difficult task given the limitations of present - day computers . generally , numerical codes have been able to obtain a large dynamic range either in mass or in length , but not in both . particle - particle ( pp ) methods and tree codes ( appel 1981 , 1985 ) have essentially unlimited length resolution , bounded only by the need to soften two - body encounters when modeling a collisionless physical system , but their use for simulations with more than a few tens , respectively hundreds , of thousands of particles is currently restricted to special - purpose hardware ( such as grape chips ) or massively parallel computers . particle - mesh ( pm ) methods ( hereafter he ) , by contrast , can integrate the orbits of millions of interacting particles even on a more modest single - processor mainframe or high - end workstation . the forces are computed by solving poisson's equation on a cartesian grid . barring complications such as a memory latency that increases with problem size , the computational cost per time step is linear in the number of particles and , for the fft - based field solver , of order $n_g \log n_g$ in the number $n_g$ of grid cells ( which is typically of the same order as the number of particles ) . the dynamic range in length , however , is limited by the maximum size of the grid , typically 256 or 512 cells on a side in three dimensions on a current single - processor computer . parallel processing can naturally push this limit towards higher values . one can circumvent this obstacle by decomposing the inter - particle forces into a long - range part , calculated by the pm technique , and a short - range part which can be evaluated in a number of different ways . for some time , a popular approach has been to compute the short - range forces by direct summation over all sufficiently close pairs of particles . this technique , known as the particle - particle , particle - mesh ( p m ) method ( he 8) , was originally applied to plasma simulations where the electrostatic repulsion between charges of the same sign makes it difficult for large density contrasts to develop . its use with gravitating systems suffers from the tendency for ever more particles to condense into small volumes , causing the cost of the pp summation to become prohibitive as the system evolves . increasing the total number of particles and resolving smaller scales ( on which density perturbations become nonlinear at earlier times , at least in the presently favored `` bottom - up '' scenario ) both exacerbate the problem . we are unaware of any p m simulations using more than about particles .
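for orientation , the cost scalings being contrasted above are the standard ones for these methods ( quoted here from general experience , not from this paper ) :
$$ T_{\rm PP}\ \propto\ N^{2}\,,\qquad T_{\rm tree}\ \propto\ N\log N\,,\qquad T_{\rm PM}\ \propto\ N \;+\; N_{g}\log N_{g}\,, $$
per time step , where $N$ is the number of particles and $N_{g}$ the number of grid cells .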
one may be able to raise this limit by using a tree method to compute the short - range forces , as in xu s ( 1995 ) tpm , but in principle the difficulty remains .most other attempts to enhance the spatial resolution of pm codes rely on the introduction of local grid refinements .this idea lies at the root of couchman s ( 1991 ) adaptive p m ( ap m ) ( which however still uses direct summation where the number of particles is small enough ) , of numerous hierarchical pm codes ( ; ; ; , hereafter anc ; ; , hereafter jdc ; ) and of various related approaches ( ; ) . as pointed out by various authors , and most recently by suisalu and saar ( 1995b ) ,the choice between pm , p m , and tree codes is not only a question of the spatial and mass resolution that can be achieved , but also of the behavior of each method with respect to two - body gravitational collisions .these are to be avoided when the physical system being modeled is essentially collisionless , as is the case for example for cosmological dark matter .pm codes usually prove to be less collisional , although it should be emphasized that this is at least in part a consequence of their spatial resolution being weaker than their mass resolution , and need not carry over to hierarchical pm methods unless precautions are taken to ensure that the mass granularity remains adequate at all times .the purpose of this article is to document in some detail our implementation of a dynamically adaptive multiple - mesh code and the tests we performed to validate it .so far , only a brief report of an earlier stage of development ( ) and a description of the methodology but not of the tests ( ) have appeared .our code , like that of jdc , is able to track the formation and subsequent motion of density concentrations by dynamically adjusting the number , nesting depth and location of the subgrids . in our case ,the entire grid structure is chosen afresh on every step . since our first intended application was a cosmological problem the formation of the local group that requires the freedom to specify external tidal fields other than would be implied by periodic image charges , our code presents the relatively uncommon combination of an expanding system of coordinates with isolating ( more properly non - periodic ) boundary conditions .this introduces a number of complications not usually discussed in the literature on pm codes , and of which we shall give an account here .( our method is also applicable to non - cosmological problems , for which isolating boundary conditions are an asset and the aforementioned complications do not arise . )a future article ( ) will detail the results of our simulations of group formation .the general outline of this paper is as follows .section [ s : method ] describes in detail the method used .the tests and their results are presented in section [ s : tests ] .we conclude , in section [ s : conclusions ] , with our assessment of the strengths and limitations of this code .our code integrates the equations of motion for a set of gravitating particles .these equations can be derived from the lagrangian here the indices and span the set of particles in the system , is the mass of particle , its position at time , and describes the law of pairwise interaction between particles. 
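before the rescaled coordinates are introduced , it may help to note the conventional shape of a lagrangian with the ingredients just listed ( particle masses , positions , a pairwise interaction law and an external potential ) . the sketch below is ours ; the sign conventions , the factor of one half on the double sum , and the omission of the background - density term mentioned in the next passage are assumptions , not the authors' equation :
$$ L \;=\; \sum_i \tfrac12\,m_i\,\dot{\mathbf r}_i^{\,2} \;-\;\tfrac12\sum_i\sum_{j\ne i} m_i\,m_j\,u(\mathbf r_i,\mathbf r_j,t) \;-\;\sum_i m_i\,\Phi_{\rm ext}(\mathbf r_i,t)\,, $$
with $u(\mathbf r,\mathbf r') \to -1/|\mathbf r-\mathbf r'|$ for the exact newtonian pair interaction in units with $G = 1$ .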
ideally ( we choose units such that ) but the numerical method forces us to use an approximation which , due to adaptive grid refinement , can depend explicitly on time and on the individual coordinates and , not just on their difference .we could have included the term proportional to ( where is an arbitrary constant ) within the external potential , but for cosmological applications it is useful to show it explicitly .it is often convenient to apply a time - dependent rescaling of the coordinate system , with new coordinates defined by .an equivalent lagrangian is then for notational convenience , we define note that in principle is arbitrary , reflecting our freedom to choose the coordinate system . for cosmological applicationsone normally chooses to be the solution of friedman s equation for some cosmological model , in which case is the background density and the hubble constant for that model . later in this paperwe shall impose the additional restriction , which holds for cosmological models in the matter - dominated era. the second term in the lagrangian ( [ q : lag - x ] ) can be regarded as involving an effective potential which satisfies .similarly , in the continuum limit and for a coulomb interaction ( ) , the third term satisfies , where the proper density is given by and ( respectively ) denotes differentiation with respect to the first ( respectively the second ) of the two position vectors on which depends . the usual derivation of the equations of motion from the lagrangian ( [ q : lag - x ] ) yields : for later convenience, we define this allows the equation of motion ( [ q : motion ] ) to be written more compactly as it is also useful to recast the equation in terms of a generalized time coordinate : with a common choice is the so - called conformal time , ; another ( ) is for some constant .naturally , any sufficiently differentiable monotonic function of is acceptable , and one is free to construct such a function _ ad hoc _ for each individual simulation .we avail ourselves of this freedom .we lay down a rectangular grid of by by nodes with uniform spacings , and .the values of , and must be acceptable to the fast fourier transform ( fft ) routines used .( all our tests were done with a power of 2 , and with , as this is the most common situation in practice and the only one we needed for our applications . ) at every step , a density is assigned to each grid point using the cloud - in - cell ( cic ) algorithm .ffts are then used to solve for the potential on the same grid , and an acceleration is obtained for each particle by differentiating the grid potential , then using cic interpolation .we use a two - point finite - difference formula to compute the derivatives of the potential at grid nodes .this choice means that truncation errors in the force law scale with the square of the grid spacing ( he , 5 - 4 ) .the discrete green s function we use is obtained by sampling at grid points ( and fourier transforming the result ) .this differs from the more common approach in cosmological pm codes ( ) of basing the green s function on the seven - point finite difference approximation to the laplacian ; our choice has the merit of guaranteeing the truncation of the interaction law at large separations , as required for a correct implementation of isolated boundary conditions ( ) . 
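the force pipeline just described ( cic deposit , fft solution of poisson's equation , two - point gradient , cic interpolation back to the particles ) can be summarised in a short python sketch . the sketch below is our own simplification , not the authors' code : it uses a green's function sampled from $-G/r$ on a grid doubled in each dimension ( the doubling construction is described in more detail in the next passage ) , assigns an arbitrary finite value at zero separation , omits the cosmological background and comoving factors , and assumes all particles lie safely inside the allowed region so the index arithmetic never leaves the grid .

```python
import numpy as np

def cic_deposit(pos, mass, n, dx):
    """Cloud-in-cell assignment of particle masses to an n^3 grid of nodes."""
    rho = np.zeros((n, n, n))
    s = pos / dx                        # positions in units of the cell size
    i0 = np.floor(s).astype(int)        # "lower" node of each particle's cell
    f = s - i0                          # fractional offsets within the cell
    for ox in (0, 1):
        for oy in (0, 1):
            for oz in (0, 1):
                w = (np.where(ox, f[:, 0], 1 - f[:, 0]) *
                     np.where(oy, f[:, 1], 1 - f[:, 1]) *
                     np.where(oz, f[:, 2], 1 - f[:, 2]))
                np.add.at(rho, (i0[:, 0] + ox, i0[:, 1] + oy, i0[:, 2] + oz), mass * w)
    return rho / dx**3                  # mass density at the nodes

def cic_gather(field, pos, dx):
    """Interpolate a node-centred field to the particles with the same CIC weights."""
    s = pos / dx
    i0 = np.floor(s).astype(int)
    f = s - i0
    out = np.zeros(len(pos))
    for ox in (0, 1):
        for oy in (0, 1):
            for oz in (0, 1):
                w = (np.where(ox, f[:, 0], 1 - f[:, 0]) *
                     np.where(oy, f[:, 1], 1 - f[:, 1]) *
                     np.where(oz, f[:, 2], 1 - f[:, 2]))
                out += field[i0[:, 0] + ox, i0[:, 1] + oy, i0[:, 2] + oz] * w
    return out

def isolated_potential(rho, dx, G=1.0):
    """FFT solution of Poisson's equation with isolated boundary conditions.

    The grid is doubled in each dimension and the density zero-padded; the
    Green's function is obtained by sampling -G/r at the grid separations, so
    the system cannot interact with its periodic images."""
    n, m = rho.shape[0], 2 * rho.shape[0]
    d = np.minimum(np.arange(m), m - np.arange(m)) * dx
    rx, ry, rz = np.meshgrid(d, d, d, indexing="ij")
    r = np.sqrt(rx**2 + ry**2 + rz**2)
    green = np.where(r > 0, -G / np.where(r > 0, r, 1.0), -G / dx)  # finite value at r = 0
    rho_pad = np.zeros((m, m, m))
    rho_pad[:n, :n, :n] = rho            # particles live only in the principal octant
    phi = np.fft.ifftn(np.fft.fftn(rho_pad) * np.fft.fftn(green)).real
    return phi[:n, :n, :n] * dx**3       # discrete sum over cells -> volume integral

def accelerations(pos, mass, n, dx):
    """Two-point gradient of the grid potential, interpolated back to the particles."""
    phi = isolated_potential(cic_deposit(pos, mass, n, dx), dx)
    acc = np.zeros(pos.shape)
    for axis in range(3):
        g = np.zeros_like(phi)
        lo = [slice(None)] * 3; hi = [slice(None)] * 3; mid = [slice(None)] * 3
        lo[axis], hi[axis], mid[axis] = slice(0, -2), slice(2, None), slice(1, -1)
        g[tuple(mid)] = -(phi[tuple(hi)] - phi[tuple(lo)]) / (2.0 * dx)
        acc[:, axis] = cic_gather(g, pos, dx)
    return acc
```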
unlike in the p m method , where the grid - based part of the force must possess a good degree of translational invariance to avoid introducing terms in the direct particle - particle contribution that depend explicitly on the position of each pair relative to the grid , we are not compelled to soften the interaction at small scales .to do so would cost us precious spatial resolution ; already with our choice of green s function the effective force turns out to be softened at separations grid cells .a consequence of our decision not to soften the force law any further is that the -minimization procedure of he 8 - 3 - 3 , which requires the reference force to have no harmonics on scales smaller than the grid spacing , is not appropriate for us .the choice of the shape of the interaction law at small separations and of the charge assignment and force interpolation scheme is a matter of compromise between accuracy and computational cost .our choices could undoubtedly be improved upon , but we preferred to concentrate our efforts on the more innovative aspects of our method .isolated boundary conditions are implemented by conceptually doubling the size of the grid in each dimension , and padding the density array with an appropriate constant value ( normally zero ) outside the principal octant , which contains the particles ( he , 6 - 5 - 4 ) .if the green s function is truncated so that the interaction falls to zero at separations larger than the system ( before doubling ) , this completely suppresses any interaction between the system and its periodic images .the computed potential , however , will only match that of an isolated system within the principal octant . since we need to apply a gradient operator to the potential in order to compute the forces, there is a layer ( one cell thick for our approximation to the gradient ) in which we would not be able to compute them .we therefore allow no particles in this outer layer .( we are free to increase the thickness of this layer .it has been convenient to do so in some of the tests discussed below , to achieve an integral ratio of cell counts in comparisons between different grid resolutions . )we do not use james ( 1977 ) method to impose isolated boundary conditions without doubling the grid by calculating appropriate screening charges on the surface , since strictly speaking the procedure is only justified when a finite - difference approximation to the laplacian is used in solving poisson s equation .the fft technique does not satisfy this condition , and would require screening charges throughout the volume of the box . in cosmological applications ,the space surrounding the system is not to be thought of as empty , but rather as containing a background of uniform density .( departures from uniformity can be represented through an appropriate choice for the tidal potential . 
)we must therefore add this background , convolved with the same ( cic ) charge assignment function that is used for the particles .its contribution to the density at any node on the boundary of the region where particles are allowed is then proportional to the number of cells adjacent to this node that lie outside the particle region : for nodes on a face , on an edge , at a vertex .we implement the term of the equation of motion ( equation [ q : motion ] ) by subtracting from the density at all points before solving for the potential .if coincides with the background cosmological density , the source function for the potential outside the principal octant is exactly zero .this is the most common case , and the one that presents the fewest conceptual and practical difficulties . with periodic boundary conditions it would also be the only possible case since the mean density outside the simulation box must always equal that inside it . with isolated conditions a mismatch is permissible , andcould be used for example in applying an expanding or contracting grid to follow the evolution of an isolated system without a cosmological density background .the varying would then be adjusted to match the expansion or contraction of the simulated system , providing better resolution at lower cost during the collapse phase .a very significant difference between periodic and isolated boundary conditions is that the latter allow the exchange of mass , momentum , energy , angular momentum between the particles and the exterior . with periodic boundary conditions, there is effectively no exterior .particle flows are a significant concern in cosmological applications , where matter may both leave and enter the computational box during the simulation . in a typical cosmological model ,the mass variance is of order unity in a sphere of radius mpc .one expects particles at the edge of a sphere of radius to be displaced with respect to the center of mass of the sphere by about , a significant length when compared to the size of a group or cluster of galaxies .a simulation of characteristic size may reasonably be expected to exhibit a twofold increase or decrease in the total mass within the box during a run . on smaller scalesthe variance is even larger , at least for the currently fashionable `` bottom - up '' scenarios of structure formation . exiting particlesare easily handled by removing them from the simulation , but incoming particles have to be injected according to some prescription that fits the physical problem at hand . in the cosmological case , as long as the density perturbations grow linearly on the scale of the box , a reasonable prescription can be based on extrapolations from the linear theory . at late times , in the strongly non - linear regime , simply extrapolating from the linear solution causes large amounts of material to be injected which in reality would collapse into bound objects outside the box and remain outside . 
in other words , the zeldovich approximation is clearly inappropriate beyond the time at which caustics form .it would lead to a gross overestimate of the total mass flow into the box .an adequate model of the inflow in the nonlinear regime therefore requires an actual simulation of the mass flows in a larger region .this has become normal practice in simulations with tree codes , and hierarchical grid methods such as ours also lend themselves well to this approach .the main difficulty is that in order to keep the total number of particles manageable , one must essentially run a preliminary simulation simply to find out where the mass that flows into the region of interest originated and sample it with a finer granularity in the initial conditions .this may lead to problems of contamination by more massive `` background '' particles if the small - scale structure that is not resolved by the preliminary simulation turns out to have a significant impact on the dynamics .furthermore , the presence of particles of different masses in the system could lead to spurious mass segregation effects .for these reasons , we attempted to keep the overall size of the computational box as small as possible , handling the tidal fields on larger scales , as well as any mass flows ( as long as they remained moderate ) as externally imposed boundary conditions .this turned out not to be a particularly successful design choice , and we can not recommend it to others .injecting too many , or too few , particles can have a destabilizing effect .the first term on the right hand side of equation [ q : motion ] has , in the usual case , the effect of accelerating particles towards the boundary of the box .this is balanced by the second term , which represents the attraction between the gravitating particles .if more particles leave the box than are injected , the first term will tend to dominate and cause even more particles to be ejected ; conversely , if too many particles are added the material will tend to collapse towards the center .this may be taken to represent a physical effect : a void is expected to expand faster than the universal average , an over - density more slowly .but unless the algorithm for replenishing the box with particles is well thought out , a runaway instability may occur .in practice we have found it expedient to couple the injection of particles to the outflow so as to maintain a constant mass within the system .we then monitor the cumulative mass of particles so recycled and compare it with the total mass in the box . 
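a minimal sketch of the mass - recycling bookkeeping just described , assuming a cubic box [0 , box]^3 and a simple reflective reinjection rule ; the paper only states that injection is coupled to the outflow so as to keep the mass constant , so the exact reinjection recipe , the helper names and the 10% guideline quoted in the next paragraph are illustrative here .

```python
import numpy as np

def recycle_outflow(pos, vel, mass, box, recycled_mass=0.0):
    """Reinject particles that have left the cube [0, box]^3 so that the
    total mass inside the volume stays constant, and keep a running total
    of the recycled mass for the diagnostic described in the text.
    The reflective reinjection used here is an illustrative choice only."""
    outside = np.any((pos < 0.0) | (pos > box), axis=1)
    for i in np.where(outside)[0]:
        for ax in range(3):
            if pos[i, ax] < 0.0:
                pos[i, ax] = -pos[i, ax]              # fold back inside
                vel[i, ax] *= -1.0
            elif pos[i, ax] > box:
                pos[i, ax] = 2.0 * box - pos[i, ax]
                vel[i, ax] *= -1.0
        recycled_mass += mass[i]
    fraction = recycled_mass / mass.sum()
    if fraction > 0.10:   # threshold in the spirit of the text's guideline
        print(f"warning: recycled mass fraction {fraction:.1%}; "
              "a larger simulation volume is probably needed")
    return pos, vel, recycled_mass
```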
if the ratio becomes too large ( greater than 10% or so ) , we take it as an indication that one needs to simulate a larger volume of space .as the volume that needs to be simulated grows larger , our original motivation for using isolated boundary conditions becomes weaker .it turns out that the method we described in this paragraph works much better if the system is located in a region of lower density than the cosmic mean , as the mass fraction that undergoes reinjection is smaller in that case .this allows us to simulate a number of interesting systems , but is not suitable for statistical studies involving a random selection of initial conditions .momentum , angular momentum , and energy can be carried by the particles that flow in and out of the system and by direct interaction with the external potential and the uniform density we impose in the space that surrounds the particle region .the latter unfortunately has the cubic symmetry of the computational box rather than the more convenient spherical symmetry that would be required for an exact cancellation of the induced tides .consequently , these tides are always present except in the pure non - cosmological case with a constant , where the background density vanishes . whether this is a serious drawback depends on the physical problem being studied .if necessary , one can enlarge the computational box ( this may be required in any case to keep mass flows under check ) , include a compensating term in , or both .the idea of adding subgrids in regions where better spatial resolution is required is not a new one .past approaches have differed on whether to add mass resolution at the same time by splitting the particles into a larger number of less massive ones , on how to minimize errors at the interface between the finer and the coarser grid , and on whether the solution on the coarser grid should be modified to take into account the results from the finer grid . unlike villumsen ( 1989 ) and splinter ( 1996 ), we do not automatically introduce a new set of less massive particles on each subgrid .we take the view that our initial mass granularity already matches the resolution we wish to achieve , and simply increase the force resolution ( by adding subgrids ) when and where the particle density is high enough to make this permissible and worthwhile .this allows the decision of where to place subgrids to be made on a step by step basis , and spares us the need to perform a first simulation without subgrids to find out where particles of smaller mass should be placed in the initial conditions .an additional advantage of having a single set of particles of equal mass is that we need not worry about mass segregation effects .a drawback is that maintaining equivalent resolution within a larger computational volume requires more particles .of course our code does support multiple particle masses , and as will become clear below the criteria for subgrid placement can be tuned to favor tracking the lower - mass particles ; we may therefore decide to experiment with particles of different masses in future . in our schemethe various grids are merely devices , introduced independently on each integration step , to compute inter - particle forces . in this respectwe are closer in spirit to couchman s ap m than to most other multiple - grid approaches .ours is effectively ( in the terminology of anc ) a particle - multiple - mesh ( pm ) scheme .this distinction will have important consequences below . 
since there is only one set of particles that exist independently of any subgrids , the equivalent of a back - reaction from the subgrid solution to the parent grid is automatically included : it is mediated by the particles themselves .we typically decrease the grid spacing by a factor of two for each additional level of subgrids .each subgrid thus covers a substantial number of cells of its parent grid .other integral refinement factors are allowed by our formulation ( subject to the condition that the boundaries of any subgrid must coincide with cell boundaries of the parent grid ) but become increasingly inefficient in terms of volume coverage and we have not tried them in practice .they would also exacerbate any `` ringing '' at the interface between a subgrid and its parent grid ; since this is one of the most delicate aspects of any hierarchical grid scheme , it is probably not advisable to use refinement factors larger than two under any circumstances .a particle passing from a region of low resolution to one of higher resolution effectively undergoes a sudden change in its spatial extent .( in pm codes , particles are best thought of as extending over about one grid cell or more , depending on the shape of the interaction law and on the charge assignment scheme . ) in its current state , our code takes no particular precautions against any transients that may result .our comparison tests between runs that use adaptive subgridding and runs with uniform resolution show that such transients are not important enough to produce significant discrepancies in the results .this is fortunate , since otherwise we would have had to associate a time - dependent smoothing length with every particle , which would complicate the method . that the smoothing length would have to be associated with the particle rather than with the location relative to a subgrid follows both from our choice to treat the particles as primary and the grids as auxiliary objects and from the fact that new subgrids may be added , removed or relocated to follow the flow at every step .our strategy for computing the forces between subgrid particles differs slightly from that of villumsen .his approach was to recalculate the potential on the entire parent grid after having set the density in the volume occupied by the subgrid to a constant value equal to the mean density within the subgrid , and use this solution as the tidal field on the subgrid particles from the rest of the system .the forces between subgrid particles were then computed in the usual way by solving for the potential on the subgrid .instead , we first compute forces for all particles on the parent grid , then generate the density array on the subgrid ( whose nodes must be aligned with those of the parent grid ) as well as on a coarser grid that has the same spacing as the parent grid and nodes in the same locations but covers only the volume of the subgrid .
since coordinates on the subgrid are affected by the same expansion factor as on the parent grid , we must subtract the same .we solve for the forces on all subgrid particles from both these density arrays , and add the difference to the forces we had previously calculated on the parent grid .this avoids double counting and means in effect that pairwise forces between particles that both lie in the subgrid are always evaluated at the higher of the two resolutions .the coarse fft on the subgrid involves far fewer grid points , so our approach is more efficient than villumsen s .it is also more flexible , in that we only need to store the accumulated force for every particle and the density and potential for a single subgrid at a time , independently of the number and nesting depth of subgrids .the cost of actually storing a force vector for every particle may seem high when one knows that in a traditional pm code with leapfrog time stepping the particle accelerations can be interpolated when the velocities are updated and need not be kept afterwards ; but saving them allows us to use them to adjust the time step and to compute subgrid corrections to the energy conservation test . on the other hand , we could have supported independent time steps had we chosen to save the potential on each subgrid instead .this , however , would have caused the storage requirements to scale with the subgrid nesting depth .let be the set of particles in the subgrid , its complement .schematically , the acceleration on particle is computed as follows . for , for , where is the approximation to the coulomb potential on the parent grid , the approximation on the subgrid , and that on the subset of the parent grid that covers the subgrid .( in reality , the sums are evaluated by fft and include the term . ) to a very good degree , provided only that the grid nodes are in the same locations .( this is an essential requirement : we have found that violating it has deleterious effects on the solution . )the two rightmost terms in the display of equation ( [ q : acc - sgr-1 ] ) therefore cancel each other .note that we add together force contributions from the various grids rather than the potentials ; this allows the arbitrary additive constant in the potential to differ from grid to grid without affecting our solution .we impose isolated boundary conditions on the subgrid , with an external background density ( ) equal to that used on the parent grid .it may seem more appropriate to use the mean density within the subgrid , or better yet the mean density in the immediate vicinity of the subgrid ; but one should note that some ringing is to be expected no matter what the exact scheme adopted since the density at the boundary of the subgrid always has some short - wavelength component that is resolved on the subgrid but not on the parent grid . the usual approach in codes of this type is to try to limit the ringing by introducing around the subgrid a buffer region , in which particles contribute to the potential but are not affected by it , and/or a transition region in which the potentials of the parent grid and of the subgrid are blended linearly .the transition region implements the variable smoothing length we alluded to at the end of the previous subsection .
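since the displayed equations [ q : acc - pgr-1 ] and [ q : acc - sgr-1 ] did not survive extraction , the following sketch restates the force - accumulation logic in code form . pm_force stands for the whole grid solver ( charge assignment , fft , force interpolation ) , and the subgrid objects and their attributes are hypothetical ; this is a schematic of the bookkeeping , not the authors implementation .

```python
def accumulate_forces(particles, parent_grid, subgrids, pm_force):
    """Force accumulation as described in the text: every particle first
    receives the parent-grid acceleration; particles lying in a subgrid
    then receive the difference between a fine solve on the subgrid and a
    coarse solve on the aligned covering grid, so that pairs that both lie
    in the subgrid interact at the finer resolution without double counting.
    pm_force(grid, members, targets) is assumed to return an (N, 3) array
    of grid-based accelerations on `targets` sourced by `members`."""
    acc = pm_force(parent_grid, members=particles, targets=particles)
    for sub in subgrids:
        inside = sub.contains(particles)          # boolean mask of subgrid members
        fine = pm_force(sub.fine_grid, members=inside, targets=inside)
        coarse = pm_force(sub.coarse_grid, members=inside, targets=inside)
        acc[inside] += fine - coarse              # cancels the coarse contribution
    return acc
```

only the accumulated accelerations and one subgrid s arrays need to exist at any time , which is the storage advantage noted above .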
as already noted, it is of limited usefulness in our case since it does nt guard against transients induced by the sudden activation of a new subgrid .the possible motivation for introducing a buffer region can be understood by considering the case of a particle located on the subgrid but near its edge , and subject to the forces of two nearby equidistant particles , also on the subgrid , and just outside it . in the exact solution , and exert forces of equal magnitude on . without a buffer region ,the calculated force from is stronger. this can create spurious small - wavelength perturbations as the pair tends to collapse on its own rather than as a triplet .the introduction of a buffer region would allow the balance to be preserved between the attractions of and .our implementation of the buffer region is as follows . extending the notation introduced in the previous subsection , we have split into the buffer region and a distant exterior .equation [ q : acc - sgr-1 ] is modified in that the sum over becomes a sum over and the sums over become sums over .we normally set the thickness of this buffer region to be equivalent to three cells of the parent grid : the grid - based force does not differ significantly from the exact coulomb force at separations this large , so that .since the thickness of the buffer region is fixed in units of the spacing of the parent grid , large refinement factors result in the buffer region `` eating away '' most of the volume of the subgrid .this is one of the reasons why we only experimented with a refinement factor of 2 at each level .an essential feature of this buffer region is that it enables us to use multiple adjacent subgrids to cover extended structures that are worthy of higher resolution but do not fit within a single grid .this could occur either because of their size or because they are located too close to other high - density objects already covered by their own subgrids .( we do not allow arbitrary overlap of subgrids , for both simplicity and efficiency . ) a problem that might arise when the boundary between two adjacent subgrids cuts through the middle of a high density structure is that the force between two nearby particles separated by the boundary would be calculated with the ( poorer ) resolution of the parent grid .thanks to the buffer region around each subgrid , this difficulty is avoided : the force exerted by either particle on the other will be computed at subgrid resolution by virtue of each particle lying in the buffer region of the other s subgrid . in the overlap region between adjacent subgrids, it matters little whether the border region is implemented as described above or in the following alternative way , by subjecting particles in to the subgrid forces without including them in the subgrid potential .this corresponds to applying equation [ q : acc - pgr-1 ] to particles only , and equation [ q : acc - sgr-1 ] to particles .our first implementation used the latter approach , and comparison tests show only minute differences between the results of both variants .a difficulty arises wherever the buffer region has finite width and does not overlap with an adjacent subgrid. 
then the balancing of the forces by and over comes at the cost of a violation of newton s third law : the force exerted by on , being computed on the parent grid , does not balance the force by on .the net result is that will be accelerated outwards , away from the subgrid .( if the alternative implementation of the buffer region is adopted , the net effect has opposite sign and tends to push the particles inwards . ) the consequences for the orbit of a bound clump can be spectacular , as the following worst - case example , illustrated by figure [ f : tf4 g ] , shows .we launch a 4096-particle realization of a truncated isothermal sphere ( with a half - mass radius of 0.02 units , corresponding roughly to half the cell width on the top grid ) with a mean velocity of 0.1 units towards a region covered by a fixed subgrid ( the inner bounds of which are indicated by dotted lines in the figure ) with refinement factor of two and a buffer region of width 0 ( solid curve ) or 2 ( dashed curve ) parent grid cells . for this example we adopted the alternative implementation of the buffer region , which causes the particles to be accelerated inwards .the initial velocity is along the axis , perpendicular to the faces of the subgrid .the figure shows the coordinate of the center of mass of the sphere as a function of time . when the buffer region is suppressed ( width 0 , solid curve )the clump enters and exits the subgrid without changes to its bulk velocity .( the apparent turnaround occurs when the clump reaches the edge of the computational volume and some particles leave the box .the solid curve reflects the mean velocity of only those particles that remain in the box . )the presence of a finite buffer region , by contrast , causes the clump to accelerate as it enters the subgrid , and to bounce back as it tries to exit on the other side .the reason it does not simply lose the momentum it gained when entering the subgrid is that the momentum change depends on the difference between the forces calculated on the subgrid and on the parent grid , and that the passage through the more highly resolved subgrid has allowed the clump core to relax to a more sharply peaked density profile , increasing the force mismatch .the main features of this behavior are quite generic : a more carefully constructed clump with a larger number of particles that was allowed to settle into a numerical equilibrium before being launched through the subgrid underwent similar , if slightly milder , accelerations and rebounds .clearly it is better for us to suppress the buffer region where there is no adjacent subgrid .the violation of momentum conservation that a finite border region implies can have a drastic influence on the orbit of a bound clump .this would not be much of a concern if we could guarantee that bound clumps are always entirely covered by subgrids at the same resolution ; the problem does not arise for smooth distributions of matter where short - range forces do not predominate .( in section [ s : gunn - gott - test ] we demonstrate this fact through a test of smooth infall onto a localized seed mass perturbation . ) in principle , an adaptive algorithm for subgrid placement based on the density should tend to ensure this .however , one may want to place additional restrictions on which regions are followed at higher resolution , in which case the guarantee does not hold and we must take the precaution of suppressing the buffer region in the absence of an adjacent subgrid .
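in code , the suppression rule just motivated only changes which particles enter the fine and coarse solves of the earlier sketch . a possible ( entirely hypothetical ) form of that selection :

```python
def subgrid_members(sub, particles, buffer_cells=3):
    """Particles entering the fine/coarse solves for one subgrid: its own
    particles plus those within `buffer_cells` parent-grid cells of a face,
    but only on faces that touch another active subgrid; elsewhere the
    buffer is suppressed, as argued in the text.  Helper names and the
    per-face interface are assumptions made for this sketch."""
    pad = [buffer_cells if sub.has_active_neighbour(face) else 0
           for face in range(6)]                  # one entry per cube face
    return sub.contains(particles, pad=pad)
```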
note that the problem is less severe when expanding coordinates are used , as in cosmological applications , since the peculiar velocity acquired while crossing a subgrid interface will be damped away .why has this same difficulty not been recognized by other authors , notably anc and splinter ( 1996 ) ?the reason may be traced to a fundamental difference in philosophy between our code and theirs .the very idea of a particle on a subgrid interacting symmetrically with a particle on the parent grid is a consequence of our decision to regard the particles , rather than the density field on the grids , as primary . in a scheme where every particle is associated with only one grid , andparticularly when the dynamics of the parent grid particles are unaffected by the subgrid ( the so - called `` one - way interface '' schemes ) , momentum conservation should be examined separately on each grid : on the parent grid , no violation is expected , while on the subgrid the effects of the parent grid particles are treated as an external field and the buffer and transition regions are used to smoothly bring any particles exiting the subgrid under the control of the parent grid field alone , as test particles , rather than reflecting them back into the subgrid . thus , the apparent contradiction between these authors use of a buffer region two cells wide and our restriction of the buffer region to the sole case of adjacent subgrids does not signal an error either on our part or on theirs , but is merely a manifestation of our different view of the respective roles of particles and grids .a distinguishing feature of our code is that the activation of subgrids is entirely automated : the user need only set a few parameters ( maximum depth , and various optional thresholds ) , after which the code decides at every step where to place subgrids .tuning the criteria for subgrid placement is not an easy task ; our current choices can undoubtedly be improved upon .here we shall describe some of the criteria we have implemented .not all of them have proved very useful in practice .our subgrids have fixed shape and size , and may not overlap .consequently , we found it simplest to introduce a uniform tiling of subgrids covering the entire volume of the parent grid. each subgrid may be either active or inactive .a subgrid is activated whenever our requirements for subgridding are satisfied within its volume .we are free to choose the origin of the tiling independently for every parent grid at every step .we make use of this freedom to reduce the number of active subgrids .typically we try eight different possible origins , and adopt one that requires the activation of fewest subgrids .the first criterion for subgrid activation is that the particle number density be sufficiently high in at least part of the covered region : there is no point in the grid ever being much finer than the smallest distance between particles . to activate a subgrid, we require that among the corresponding cells of the parent grid either ( a ) at least one cell contains more than particles ; or ( b ) one cell contains at least particles and its 26 immediate neighbors together contain a further particles or more . 
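a compact version of the activation test follows , with the thresholds chosen according to the poisson - significance recipe motivated in the next paragraph ; the exact threshold formulae , the helper names and the default value of the significance parameter are illustrative , since in the code they are tunable inputs .

```python
import numpy as np

def should_activate(cell_counts, nu=4.0):
    """Decide whether a candidate subgrid should be activated, given the
    per-cell particle counts (a small 3-d integer array covering the
    candidate region on the parent grid).  With a mean of lam particles
    per cell, a group of k cells is deemed significant if it holds more
    than k*lam + nu*sqrt(k*lam) particles."""
    lam = max(cell_counts.mean(), 1.0)      # floor the mean in nearly empty regions

    # criterion (a): a single unusually crowded cell
    n1 = lam + nu * np.sqrt(lam)
    if cell_counts.max() > n1:
        return True

    # criterion (b): a cell plus its 26 immediate neighbours (a 3x3x3 block)
    n27 = 27.0 * lam + nu * np.sqrt(27.0 * lam)
    nx, ny, nz = cell_counts.shape
    for i in range(nx - 2):
        for j in range(ny - 2):
            for k in range(nz - 2):
                block = cell_counts[i:i + 3, j:j + 3, k:k + 3]
                if block[1, 1, 1] >= n1 and block.sum() >= n27:
                    return True
    return False
```

in practice a test of this kind is applied to every tile of the subgrid tiling , for each of the trial origins mentioned above .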
, and are specified as input parameters to the code .the main reason for considering cubes of cells as well as single cells is that one needs to detect the collapse of a high - density region before it is too advanced : the additional resolution is already required during the later stages of collapse .we do not try to detect larger , lower - contrast potentially collapsing structures on larger scales since for separations larger than about three cells the forces are computed accurately on the parent grid . one way of choosing and is the following .let be a measure of the mean number of particles per cell in some larger region , such as the entire candidate subgrid .( this measure need not be unbiased ; in fact , if the mean particle number density is very low we found it necessary to artificially increase by setting it to the value 1 .otherwise our subgrid placement criterion would be satisfied far too easily in low - density regions . )if the particle counts were distributed according to poisson statistics , one would expect a group of cells to contain on average particles , with a standard deviation around that mean . by setting and , where is a tunable parameter, we try to ensure that only particularly significant deviations from homogeneity trigger subgrid activation .we have found that this approach works reasonably well during the early stages of nonlinear evolution , where it causes higher resolution to be applied to the regions we are most interested in .however , as the simulation progresses we have needed to increase in order to limit the proliferation of subgrids , as our goal was merely to achieve high resolution in a few large peaks near the center of the computational box . clearly the criteria can and should be tuned according to the needs of individual applications ; no single choice is optimal for all cases .we follow the usual practice in pm codes of advancing the positions and velocities of the particles in leapfrog fashion : an alternative formula applies on a starting step , when both the positions and the velocities are known at the same time : likewise , one can synchronize positions and velocities at the final step by advancing the velocities from to then updating the positions as in large simulations , it is particularly important to avoid taking more time steps than required to obtain accurate results .the time step should therefore be allowed to vary . in the standard leapfrog formulation ,the cancellation of second - order terms in equation ( [ q : lf - v ] ) depends on being evaluated at the midpoint of the velocity step .the cancellation of the second - order term in the formula to update the positions is less important since the are known ; however , it contributes to lowering the operation count for the method .various approaches can be used to change the time step in the middle of a simulation .one is to use a different midpoint in equation ( [ q : lf - v ] ) and calculate the corresponding . the other , which is the one we adopted , is to keep constant by an appropriate choice of the function .we construct this function and its first and second derivatives , which are needed to evaluate and according to equations ( [ q : a ] ) and ( [ q : b ] ) , step by step as the simulation progresses .( in practice , we do not allow to vary too quickly , since that tends to spoil the results .we shrink by at most 25% on each step , and let it grow even more slowly .situations in which this forces a larger than desired have fortunately been very rare in our runs .
)our preferred criterion for determining the time step is the courant condition : where the index spans the set of all particles , is the grid spacing of the finest grid used to compute the forces on particle , and is a constant coefficient . should be taken as being the mean velocity over the time step , and is a function of both the velocity before the step is taken and of the acceleration .the latter can be important if the initial velocity is uncharacteristically small , for example when all the particles are initially at rest .the equation for the maximum acceptable for each particle is actually a quartic .we do not solve it exactly , preferring to estimate a close lower bound on the solution for the particles with the tightest constraints .this way of taking the accelerations into account fulfills the same function as other authors use of the maximum density on a subgrid to estimate a local dynamical time . in some cosmological simulations with a large number of particles , we found that adopting the smallest found in this fashion was leading to time steps much shorter than warranted by our physical understanding of the dynamical time scales involved ( based on the maximum density ) .this turned out to be due to a small number of high velocity particles : the tail of the velocity distribution produced by the violent relaxation of a newly collapsed object , which is naturally better sampled as the number of particles in the simulation increases . since these particles represent a small fraction of the mass in the typically high - density regions in which they are found , their motion has almost no impact on the mean gravitational field , which evolves on much longer time scales .this led us to modify the time step criterion for these simulations by pretending that the velocity of each particle before the time step is negligible and basing the choice of solely on the accelerations .this is not unreasonable for cosmological applications since in the linear regime the velocities and peculiar accelerations are related and in virialized clumps they are a better indication of the dynamical time scale than the velocity of the fastest particle ( which is not the same as the local velocity dispersion ) .moreover , expanding coordinates have the property that peculiar velocities are damped away and need to be continually regenerated by the forces . as a further precaution, we have analyzed the velocities of the particles at a late stage in a simulation run with this relaxed time step criterion and found , as intended , that the number of particles that violated the stricter courant condition was small .the relaxed criterion is known to fail , however , for highly ordered collapse ( such as that of a homogeneous sphere ) . in that casethe largest acceleration does not provide a good estimate of the time scale over which the mean field evolves : based as it is on the more extended distribution at the beginning of the step , it systematically overestimates the maximum allowable time increment .accordingly , we only resort to the relaxed criterion , with some reluctance , for runs in expanding coordinates , with large numbers of particles , and where different mass shells collapse at different times .our tests on smaller cosmological runs have shown the results to be substantially identical , but with the relaxed condition requiring a much smaller number of steps .we use a single value of for all particles on all subgrids at any given step . 
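a hedged sketch of such a step - size choice follows : the `` mean velocity over the step '' is approximated here by |v| + |a| dt / 2 , which gives a quadratic rather than the quartic mentioned in the text , and the shrink / growth limits are the ones quoted above ; coefficient values and names are illustrative .

```python
import numpy as np

def choose_timestep(v, a, h, dt_prev, c=0.25, relaxed=False):
    """Pick a single global time step from a Courant-like condition.
    v, a : (N, 3) velocities and accelerations
    h    : (N,)  spacing of the finest grid used for each particle
    Solves (|v| + 0.5*|a|*dt)*dt = c*h for the smallest positive dt over
    all particles; with relaxed=True the pre-step velocities are ignored
    and only the accelerations constrain the step."""
    speed = np.linalg.norm(v, axis=1)
    accel = np.linalg.norm(a, axis=1)
    if relaxed:
        speed = np.zeros_like(speed)
    disc = np.sqrt(speed**2 + 2.0 * accel * c * h)
    dt = np.min(2.0 * c * h / (speed + disc + 1e-300))
    # shrink by at most 25% per step, and grow even more slowly
    return float(np.clip(dt, 0.75 * dt_prev, 1.10 * dt_prev))
```

with relaxed=true only the accelerations constrain the step , which corresponds to the modified criterion described above for large cosmological runs .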
a far better approach in principle , but much more complicated to implement , would be to adopt multiple time steps and to distinguish between the time steps used for advancing individual particles and those for updating the potentials on subgrids .independent time steps for the particles would require a way of estimating the accelerations at positions and times slightly different from those for which the potential has been calculated . for this , it may be necessary to calculate the time derivative of the subgrid potential as well as the potential itself .in addition to the constraints resulting from the rate of change of the subgrid density , the frequency with which the subgrid potential is recalculated can also be affected by particles flowing out of the region in which forces can be interpolated from the subgrid potential : in our scheme , such events change the mapping between particles and subgrids and require recalculation of all the subgrid potentials involved if the total momentum is to be conserved .these are our main reasons for adopting a single time step for all grids .the other main reason is that a single time step allows us to process subgrids one by one , accumulating the force contributions on each particle without needing to store the solutions for the potential on many nested subgrids simultaneously .almost any other time step scheme requires these subgrid potentials to be available simultaneously .the coefficients in the leapfrog formulae ( equations [ q : lf - v ] and [ q : lf - x ] ) can depend on time . for good accuracy , their variation in a time stepmust be small .this constraint is clearly unrelated to the grid spacing , and must be imposed separately .it is equivalent to the requirement that .conservation laws provide an important diagnostic of how accurately the equations of motion have been integrated in a given simulation .good conservation of physical invariants is not a sufficient condition for a good integration , but it is a necessary one . in the absence of coordinate expansion , subgrids , and external fields , a pm code such as ours would be expected to conserve total momentum algebraically , energy a little less well , and angular momentum only in the limit where the interparticle separation is much larger than the grid spacing ( he 76 ) .cosmic expansion causes the usual equation of conservation of the total energy , ( where is the kinetic , the potential energy ) , to be replaced with the layzer - irvine ( ; ) equation , which can be derived from the well - known property of the hamiltonian : computing the canonical momentum from equation [ q : lag - x ] , it follows in the usual way that we define the kinetic and potential energies as : then our exclusion of the external potential from the definition of is somewhat arbitrary . in practice , one wants to separate the terms associated with the mutual interactions of particles , which do measure the accuracy of the integrator , from the correction terms that merely account for known , external sources and sinks of energy . in applying equation [ q : dham - dt - law ] ,one notes that in the last term does not vanish if depends explicitly on time ( as may occur in practice when a new subgrid is activated and the force resolution increases locally ) , or if is not constant . 
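before following the algebra that leads to the paper s equations [ q : c ] and [ q : cprime ] in the next paragraph , it may help to recall the standard textbook form of the layzer - irvine relation , since the displayed equations were lost in extraction ; the paper s version additionally carries the external - field and time - dependent - interaction terms discussed here , so this is only the reference form , not a quotation of the paper .

```latex
% standard Layzer-Irvine (cosmic energy) relation for peculiar motions;
% T is the peculiar kinetic energy, W the potential energy of the
% density fluctuations, and a(t) the expansion factor:
\frac{d}{dt}\left(T + W\right) = -\,\frac{\dot a}{a}\,\left(2T + W\right),
\qquad
C \equiv T + W + \int \frac{\dot a}{a}\,\left(2T + W\right)\,dt = \mathrm{const}.
```

in non - expanding coordinates the integral term vanishes and the criterion reduces to ordinary energy conservation , as noted below .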
for simplicity , we shall assume from now on , since that is the only case of actual practical interest to us .straightforward algebra leads to the two equations : which although physically equivalent are evaluated numerically in slightly different ways when is not constant .the corresponding conserved quantities are : in proper - length coordinates ( ) , the first integral term ( or ) vanishes and both criteria reduce to the usual law of conservation of energy .the last two terms represent respectively the work done by external fields and time dependence in the interaction law .one may question the appropriateness of including the latter effect , since its origin lies in the numerical method used rather than in the underlying physics. should not contributions from the term be counted as violations of energy conservation by the code ?in reply we point out that much of that term may result from fluctuations in the zero level of the gravitational potential on subgrids , which we did not attempt to suppress since they have no bearing on the computed forces . and in any case the contribution should be evaluated so that its magnitude relative to the other terms may be assessed .it is unrealistic to expect a pm code to conserve and arbitrarily well when gravitational instability causes the individual terms ( , ) to grow by orders of magnitude . like many other authors before us , we deem and adequately conserved if their variation is a small fraction ( typically 13% ) of the change in or , respectively . in our definition of ( equation [ q : w ] ) we excluded the self - energy terms .but the natural way of calculating the potential in a pm code effectively includes these terms ( the restriction can be ignored since the scheme avoids self - forces ) ; we subtract them explicitly as part of our energy conservation test .we adopted a momentum - conserving scheme for calculating the forces .the total momentum should therefore be unaffected by particle - particle interactions on any given grid , and ( given the precautions we have taken in our handling of the buffer region surrounding each subgrid ) be only weakly modified by interactions across the boundary between adjacent subgrids .however , it is still affected by all other interactions between particles and external fields , including the constant - density background outside the computational box .( this field gives rise to forces only when isolated boundary conditions are used , which is the reason why it is not normally discussed in the literature . ) the canonical momentum of particle satisfies summation over all particles and time integration yield a conserved quantity in the special case when ( i ) has no spatial dependence , ( ii ) or , and ( iii ) , the total momentum is constant . if one s aim is to measure the deviation of the results from those expected when ( such a deviation can occur in the presence of subgrids ) , one should naturally omit the corresponding term from equation [ q : pc ] .likewise , the conservation of angular momentum amounts to the constancy of angular momentum is only conserved if ( i ) and ( ii ) is a function of and alone .the exact coulomb force law satisfies the first condition , but grid - based approximations to it do not .
as in the analogous case for the momentum ,if the aim is to measure deviations from the ideal behavior where ( i ) is satisfied , then one should compare the magnitude of the corresponding term to itself , and to the magnitude of the external torque term when present .our first tests are meant to see how accurately the code calculates the accelerations of particles given a known mass configuration .the simplest case is that of the forces due to a single point mass , which can be compared to the value predicted by coulomb s law .figure [ f : tf1b.dff ] illustrates this comparison .it shows the accelerations induced by a single particle at the center of a grid with isolated boundary conditions .a single subgrid , twice as fine as the top grid , was centered on the massive particle .two sets of points are readily distinguished , corresponding to test particles inside and outside the subgrid .as expected , the acceleration is systematically underestimated since the effective potential is softer than that of a point mass . also as expected , the subgrid extends the range over which the force is accurate to within a few percent ; the accuracy threshold ( a little over 3% in this test ) could be lowered by increasing the size of the subgrid , the spacing being equal .our second test checks that the accelerations computed on a set of adjacent subgrids agree with those one would obtain on a single , larger grid with the same spacing as the subgrids .we compare forces on individual particles randomly chosen to sample a uniform distribution .the long - range forces , which are adequately resolved on the parent grid , tend to cancel , so that the system is dominated by short - range interactions that will be most affected by the grid refinement .our reference forces are from a non - subgridded run with cells and particles .we compare these to those obtained from three different runs with only cells on the parent grid , in which all possible first - level subgrids are activated .the only difference between the three runs is in the width of the buffer region where adjacent subgrids overlap .the results are illustrated in figure [ f : df ] .the magnitude of the force difference is plotted against that of the reference force for a randomly selected 1% of the particles ( to avoid overcrowding the plot ) .panels ( a ) , ( b ) , and ( c ) correspond respectively to a buffer width of 0 , 1 , and 2 parent grid cells .the solid diagonal line represents ; the last panel also contains a dotted line where .the results show that when the buffer is sufficiently wide ( at least two cells ; we like to use three cells in production runs with larger grids ) , the forces computed by the subgrid method are mostly within 1% of the forces from a single grid of equivalent resolution .whenever the buffer region is suppressed , an appreciable fraction of the particles exhibits force errors larger than 10% .here it is essential to understand what we mean by `` errors '' .figure [ f : df ] only compares the forces computed using a set of abutting subgrids of equal resolution with those from a single , larger grid with the same spacing , and shows that some overlap is both required and sufficient to avoid loss of resolution at the boundary between such abutting subgrids .it does _ not _ compare forces computed at different grid spacings , and does not address the issue of convergence to a `` perfect '' solution in the limit of an infinitely fine grid .we know that at the interface between a subgrid and a coarser parent grid the 
accuracy of the forces within the outer two or three cell layers of the subgrid will be less than that of a uniform fine grid regardless of the thickness of the buffer region .( it will , however , be no worse than on the parent grid alone . )we compensate for this by making the subgrids cover a slightly larger volume than the region where the higher resolution is required .the collapse of a homogeneous self - gravitating pressure - less ellipsoid initially at rest can be described by a small set of coupled ordinary differential equations for the lengths of the principal axes , which can be integrated in a straightforward way ( ) .this solution provides a convenient test of our code .the test corresponds to that of collapse to a sheet or filament in a cosmological code with periodic boundary conditions , but is more appropriate to our use of isolated boundary conditions . in particular , the plane - wave test of efstathiou et al .( 1985 ) is only easy to interpret and compare with a semi - analytic solution when periodic boundary conditions are used , and these lie outside the scope of the present article .another test , in section [ s : voidtest ] below , shows the behavior of our code when integrating the collapse of a moving sheetlike ridge in the presence of cosmological expansion .we followed the collapse of a homogeneous ellipsoid with a ratio between the lengths of the principal axes .the experiment was repeated for two different choices of the grid spacing in order to verify convergence towards the correct solution . for the higher resolution , we performed two runs , one using a single grid of cells and one with a top - level grid of only cells but up to two levels of subgrids .each subgrid also has only cells ; when necessary , multiple adjacent subgrids were automatically generated by the code to cover the entire ellipsoid .the number of particles is the same in all three runs , about .figure [ f : ts7-overview ] shows the short axis of the ellipsoid as a function of time .the solid curve represents the analytic solution , the dot - dash curve the solution computed on the coarse grid , while the remaining two dashed curves show the results of the two high - resolution runs .the horizontal dotted line corresponds to the grid spacing in the high resolution runs ; that of the low resolution grid is four times larger .figure [ f : ts7-blowup ] presents a blown - up view of the same results around the time of collapse of the ellipsoid to a spindle .one sees from these figures that increasing the resolution does indeed yield a better agreement between the -body and analytic solutions .furthermore , the results with subgridding are nearly identical to those obtained with a single grid of equivalent resolution . in the high - resolution runs ,the minimum radius of the spindle is comparable to the grid spacing . at the lower resolution ,it is significantly less than the grid spacing .we did not investigate the reasons for this in detail , but it is likely that the convergence is also affected by the number of particles used to represent the ellipsoid . 
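the semi - analytic reference solution used in this test can be reproduced with a few lines of code . the sketch below integrates the standard equations for a homogeneous , pressure - less , self - gravitating ellipsoid collapsing from rest ( the interior force is linear in the coordinates , so each semi - axis obeys a simple ordinary differential equation ) ; the units , tolerances , stopping rule and example axis ratio are illustrative choices , and this is the textbook system rather than the authors own integrator .

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

def ellipsoid_collapse(a0, mass=1.0, G=1.0, t_end=3.0):
    """Collapse of a homogeneous pressureless ellipsoid starting at rest
    with semi-axes a0 = (a1, a2, a3).  Each axis obeys
        d^2 a_i / dt^2 = -2 pi G rho a_i A_i(a1, a2, a3),
    with the usual dimensionless coefficients A_i (their sum is 2) and
    rho = 3 M / (4 pi a1 a2 a3)."""
    def coeff(a, i):
        integrand = lambda u: 1.0 / ((a[i]**2 + u) *
                                     np.sqrt((a[0]**2 + u) *
                                             (a[1]**2 + u) *
                                             (a[2]**2 + u)))
        val, _ = quad(integrand, 0.0, np.inf)
        return a[0] * a[1] * a[2] * val

    def rhs(t, y):
        a, adot = y[:3], y[3:]
        rho = 3.0 * mass / (4.0 * np.pi * a[0] * a[1] * a[2])
        acc = [-2.0 * np.pi * G * rho * a[i] * coeff(a, i) for i in range(3)]
        return np.concatenate([adot, acc])

    def collapse(t, y):               # stop when the shortest axis is tiny
        return y[:3].min() - 0.01 * min(a0)
    collapse.terminal = True

    y0 = np.concatenate([np.asarray(a0, float), np.zeros(3)])
    return solve_ivp(rhs, (0.0, t_end), y0, events=collapse,
                     rtol=1e-8, atol=1e-10, dense_output=True)
```

for example , ellipsoid_collapse((1.0, 1.0, 0.5)) returns a solution object whose third component , sol.y[2] , traces the short axis as a function of sol.t ; the axis ratio used here is a placeholder , not the one adopted in the paper .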
in these tests we have kept this number constant .the two runs without subgrids also used a small , constant value of the time step .collapse occurs after 1524 and 1644 steps respectively .the subgridded run , by contrast , used the courant condition to adjust the time step ; only 125 steps were required to reach the point of collapse .the good agreement between the high - resolution results obtained with both choices of time step confirms the validity of our implementation of adaptive time steps .we have also repeated the low - resolution run using our adaptive time step scheme ; on figure [ f : ts7-blowup ] the results would be indistinguishable from those obtained with a fixed time step .an interesting test of the behavior of mass flows into a subgrid is the case of accretion from a uniform - density background onto a point mass ( or any other compact , spherically symmetric density profile ). a regular lattice of particles with zero initial peculiar velocity was laid down in a computational box of unit side to be evolved with isolated boundary conditions in an , cosmological model .we added at the center of the box a single particle of mass , where is the mean density of the other particles and of the external background , , and .we followed the collapse to a time , corresponding to an expansion factor .a scaling solution is available ( ; ; ; ) : .this simple law is expected to hold only for those shells that enclose a mass much larger than that of the initial seed and that have evolved for a few crossing times after their initial turnaround .the crossing time for a shell is of the same order as its turnaround time , and its proper radius is a factor times the turnaround radius . using a linear theory approximation ( in which the density perturbation grows in place ) until the turnaround at a mean enclosed overdensity , we find that for a given initial enclosed overdensity the turnaround occurs at , the subsequent virialization at , and the final comoving radius of the shell is , where is the initial comoving radius . requiring , only shells with are expected to show self - similar behavior , and then only for .one can also evaluate bertschinger s ( 1985 ) equation ( 2.7 ) with , , ( these values lead to ) and make direct use of his numerically determined similarity solution .a twist of our simulations is that shells with are prevented from falling in by the fact that they are not entirely included within the simulation volume .accordingly , the accretion will be starved for , and the last complete shell should virialize at .our main reason for performing this test was to compare the solutions obtained with various treatments of the border region around a subgrid .we performed four runs : one with a uniform grid , the other three with a grid and a nested subgrid with a linear refinement factor of 2 .( after subtracting edge cells as discussed in [ s : bc ] , the computational box was covered respectively by and cells . ) in one run the border region was suppressed , as suggested by our tests with a bound clump crossing the boundary , while the other two both used a border width of two parent grid cells , once with the particles in the border region contributing to the subgrid potential without feeling its effects and once with the border particles being subjected to the subgrid potential without contributing to it .
in the last two cases , the net effect of the force asymmetries on overdense regions results in an acceleration pointing respectively away from the subgrid and towards it .figure [ f : tf6d - rho ] shows the logarithmic density profiles for the runs we just described .the differences between the results of all these runs are minute ; we find it difficult to ascribe any significance to such small deviations .this is reassuring : it suggests that the exact way in which subgrid borders are treated does not unduly affect the profiles of collapsed peaks .the dotted line connects open triangles that correspond to the values given by bertschinger ( 1985 ) in his table 4 . at small radii, his solution tends towards the expected , and ours is in good agreement with his .the first few caustics can easily be identified .peebles ( 1987 ; see also , hereafter pmhj ) has proposed and used the following test for cosmological pm codes : integrating the evolution of a spherically symmetric under - dense region surrounding by a compensating over - dense shell .the interior of such a void expands faster than the universe as a whole , and in so doing compresses the surrounding shell , making its profile higher and narrower .the evolution can be computed analytically until the time at which the first density cusp appears . for peebles profile , with , this occurs at in a flat universe .( here is the initial value of the expansion factor . )we integrated such a void , realized with particles , on a grid and compared our results to the analytic prediction at .figure [ f : ts6aa ] can be directly compared to the corresponding figure 4 of pmhj .a significant difference between our code and theirs lies in their use of a staggered grid for the force ( ) .the staggered grid increases the spatial resolution twofold ( which is why these authors used it ) at the cost of inducing self - forces on the particles ( for which reason we did not follow their example ) . because of this difference , our results obtained with particles on a grid correspond to their results with the same number of particles on a grid .a comparison reveals differences of detail , particularly in the structure of the outer parts of the shell , but our results match the analytic solution as closely as theirs .we take this occasion to illustrate the energy conservation properties of the code .figure [ f : ts6aa - econs ] shows the evolution with time of the conserved quantities and of equations [ q : c ] and [ q : cprime ] ( top ) , as well as that of the ratio of the changes in and in the total potential energy from the start of the run ( bottom ) .apart from an initial transient due to rounding in the print - out from which the plot was made , an examination of the ratio shows the energy to be conserved to about 12% accuracy throughout the run .we have developed a useful and versatile tool for the simulation of collisionless systems of gravitating particles .our code is particularly suited to situations that call for both fine - grained mass resolution everywhere and high spatial resolution in selected regions of high density the exact locations of which need not be known in advance .our method is a development of the well - known particle - mesh technique .tests indicate that in comparable conditions our code performs substantially like similar codes described in the published literature . 
when subgrids are introduced to increase the spatial resolution locally , the results in the regions covered by the subgrids are in very good agreement with those of a conventional pm code with a single , finer grid .if the number of subgridded regions is sufficiently small , our approach requires less computing time and less memory than the equivalent single - grid approach .furthermore , by not increasing the resolution in regions where the particle number density is low , we avoid making the behavior of the code collisional , which would be undesirable for applications to collisionless systems .a significant advantage of pm ( and its variants p m , ap m , etc . ) for cosmological applications is that comoving coordinates and periodic boundary conditions can be supported in a natural way .however , periodic boundary conditions are not always a physically appropriate approximation .in particular , if one s interest is in systems not much smaller than the size of the computational cube , periodic boundary conditions introduce a much stronger coupling between the external tidal field and the internal dynamics of the system than one would expect to occur in the aperiodic real universe .periodic boundary conditions are only appropriate when the individual structures of interest are much smaller than the computational volume , in which case the tides due to periodic images are small ; but then the relatively small dynamic range of conventional pm methods severely limits the resolution that can be achieved . by introducing hierarchical subgrids , we are able to extend that dynamic range ; however , it is also tempting to construct the simulation in such a way that the system of interest fills as much of the computational box as possible .this is what led us to implement isolated boundary conditions . in principle, the additional cost of imposing isolated boundary conditions ( which is relatively small when hierarchical grids are used since isolated conditions have to be imposed on the subgrids in any case ) is offset by the greater freedom one has to apply an arbitrary time - dependent external tidal field and to adopt a system of expanding coordinates that matches the evolution of the simulated object without necessarily coinciding with the expansion of the global cosmological model .of course our method can also be used in a more conventional way , with periodic boundary conditions on the top grid ; in fact , numerous simplifications occur when this is done .in addition , our code should be well - suited to non - cosmological dynamical simulations , _e.g. _ , of brief interactions between already formed galaxies , star clusters , and so on .our approach also has a few limitations .the most important is shared by all particle simulation methods : it is generally impossible to improve the spatial resolution without a corresponding refinement of the time resolution : more time steps need to be taken . 
other difficulties of note are the non - radial nature of forces at small separations ( which could be cured , at some cost in computing time , by softening the interaction law and increasing the depth of subgridding to compensate for this softening ) , the complexity and difficulty of tuning the subgrid placement algorithm , and the fact that subgrids of fixed shape will almost inevitably also cover some regions where the particle number density is low .the behavior of the code can become collisional in such regions ; this should not affect the computed structure of high - density condensations , but may well be a source of high - velocity particles that lead to shorter time steps according to the courant condition .an enhancement to our current code could be to refrain from applying the refined forces to such low - density cells .alternatively , it could prove useful to introduce individual time - varying softening lengths associated with individual particles , to represent the fact that particles in this method are to be thought of as possessing a finite extent that depends on the local number density .another desirable improvement is the adoption of different time steps at different levels of grid refinement ; the savings that can be achieved from this depend on the detailed structure of the tree of subgrids , and will be greatest for deep trees with most of their branches at the finer resolution levels .this work was partially supported by grants nsf - ast-86 - 57647 , nsf - ast-91 - 19475 , and nasa - nagw-2224 .some calculations were conducted using the resources of the cornell theory center , which receives major funding from the u.s . national science foundation and the state of new york , with additional support from the advanced research projects agency , the national center for research resources at the national institutes of health , ibm corporation , and other members of the center s corporate partnership program .we thank adrian melott , richard james , and the referee randall splinter for interesting and valuable remarks on an earlier version of this paper . | this article describes a new , fully adaptive particle - multiple - mesh ( pm ) numerical simulation code developed primarily for cosmological applications . the code integrates the equations of motion of a set of particles subject to their mutual gravitational interaction and to an optional , arbitrary external field . the interactions between particles are computed using a hierarchy of nested grids constructed anew at each integration step to enhance the spatial resolution in high - density regions of interest . as the code is aimed at simulations of relatively small volumes of space ( not much larger than a single group of galaxies ) with independent control over the external tidal fields , significant effort has gone into supporting isolated boundary conditions at the top grid level . this makes our method also applicable to non - cosmological problems , at the cost of some complications which we discuss . we point out the implications of some differences between our approach and those of other authors of similar codes , in particular with respect to the handling of the interface between regions of different spatial resolution . we present a selection of tests performed to verify the correctness and performance of our implementation . the conclusion suggests possible further improvements in the areas of independent time steps and particle softening lengths . |
solar magnetic activity is widely believed to be associated with dynamo action somewhere in the solar convective shell .identification of particular details of solar dynamo with surface manifestations of solar dynamo accessible for observations remains however a disputable problem .the point is that many important details of dynamo action being hidden in solar interior can not be observed directly , and we have to learn about them basing on indirect tracers .the link between a particular parameter important for a solar dynamo model and observable tracers of solar activity may be quite complicated , so adding new tests for comparison between concepts of solar dynamo and observations is a very attractive and simultaneously highly nontrivial undertaking .recently a new physical entity solar small - scale magnetic field was suggested for observational verification of solar dynamo concepts .solar magnetic field obviously contains some small - scale details which hardly can be included in the global solar magnetic field produced by traditional mean - field dynamo models . of course, more detailed dynamo models based on direct numerical simulations of non - averaged mhd - equations give dynamo - driven magnetic configurations much more complicated than those produced by the mean - field models .this fact is in agreement with theoretical expectations from mean - field models because certain terms in mean - field equations appear as a result of averaging of magnetic fluctuations .the point however is that , apart from the mean - field ( or global ) dynamo based on a joint action of solar differential rotation and mirror - asymmetric convection , the dynamo theory ( in the framework of average description in terms of correlation tensor , see , e.g. , a review by zeldovich et al .( 1990 ) , as well as in direct numerical simulations , see , e.g. , a review by brandenburg et al .( 2012 ) predicts an additional mechanism of magnetic field self - excitation , so - called small - scale dynamo which produces small - scale magnetic field , i.e. , magnetic fluctuations . in other words, the dynamo theory suggests that the total dynamo - driven magnetic field can contain the following contributions : mean magnetic field , small - scale magnetic field connected with the mean field , and small - scale magnetic field generated independently of .theoretical distinction between and was known quite a long time ago ( see , e.g. , brandenburg & subramanian , 2005 ) .however a general presumption was that it is more or less hopeless to distinguish between and observationally .as a result , the relationship between and was very rare addressed in dynamo theories .a naive theoretical expectation was that both sources of small - scale magnetic field contribute somehow in magnetic fluctuations ( as soon as fluctuations are ubiquitous in cosmic phenomena ) , however any attempt to distinguish between them hardly can be a fruitful undertaking .modern progress in solar magnetic field observations ( e.g. , ishikawa et al ., 2007 , lites et al ., 2008 , a review by de wijn et al . , 2009 ,abramenko et al . , 2010 , martinez gonzalez et al . , 2012 , to mention a few ) makes it possible to explore in detail various solar small - scale magnetic structures which often look quite specific in comparison with mean solar magnetic field .a natural naive expectation appears that we clearly see imprints of the small - scale dynamo action , i.e. 
the field .the fundamental conceptual progress here occurs due to the papers by stenflo ( 2012 , 2013 ) .he critically examined `` the relative contributions of these two qualitatively different dynamos to the small - scale magnetic flux '' with the following conclusion : `` the local dynamo does not play a significant role at any of the spatially resolved scales , nearly all the small - scale flux , including the flux revealed by hinode , is supplied by the global dynamo '' .we appreciate the importance of the clear and convincing statement of the problem formulated by stenflo ( 2012 , 2013 ) .the point however is that , generally speaking , the small - scale dynamo can act in the solar interior and contribute very little into surface observables . on the contrary, certain versions of the dynamo theory ( e.g. , cattaneo & tobias , 2014 ) predict that `` large - scale dynamo action can only be observed if there is a mechanism that suppresses the small - scale fluctuations '' .so , we deal with a fundamental physical problem and it looks reasonable to spend efforts in order to find tiny surface spores of the small - scale dynamo action in the solar interior . in this paper , we suggest that an imprint of the small - scale dynamo action in solar interior can be hidden in statistics of sunspot groups which violate the hale polarity law .the law states that in odd , for example , cycles , bipolar groups in the northern ( southern ) hemisphere have a positive ( negative ) magnetic polarity of the leading sunspot .we refer to the groups , which violate this law as anti - hale groups .the key issue of the suggested test is that observations ( as well as direct numerical simulations ) deal with the total magnetic field while the underlying physical problem is formulated in terms of the statistical quantities originated from the mean - field approach . and it is far from straightforward how to make inferences about the later in terms of the observed .in other words , we need an explicit description of how we separate various contributions to the total magnetic field . in statistical studies ,such a description is referred as a probabilistic model .the start point of the observational test suggested is as follows .it is well - known that through a solar cycle , the polarity of sunspot groups follows the so - called hale polarity law , i.e. , leading sunspots have opposite polarities in northern and southern hemispheres .besides , from cycle to cycle , the leading - spot polarity changes .the hale polarity rule reflects symmetry of the mean solar magnetic field .we recall that in the framework of dynamo studies the number of sunspot groups is believed to be related to the mean field strength . of course , there are few sunspot groups which do not follow the hale polarity rule .suppose that anti - hale groups can be considered as a result of some magnetic fluctuations which break symmetry of the mean magnetic field .suppose also that the amplitude of fluctuations is governed by the mean field strength , as it should be if the small - scale dynamo does not work .then we have to expect that the number of sunspot groups which follow the hale polarity law should be proportional to the number of anti - hale groups , and the relative number ( the percentage ) of the anti - hale groups should be cycle independent .in contrast , if there is a substantial source of magnetic fluctuations independent on the mean field , then the number of anti - hale groups has to contain a cycle - independent component . 
in this case, the relative number of anti - hale groups has to be enhanced during each solar minimum .this expectation admits an observational verification ( at least in principle ) and looks reasonable at least at first sight .the point however is that realization of the above scheme is not a trivial task .below we present the key features of our approach in the simplest form as a toy model of the relationship between the sunspot polarity and two contributors to magnetic fluctuations .let us consider a given solar hemisphere in a given solar cycle and assume that according to the hale polarity law the toroidal component of the mean magnetic field ( which determines the polarity of sunspot groups ) is directed , say , westwards .let the magnetic field be organized in tubes of two types : the first type are tubes produced by the mean field , and the second type tubes are associated with the small - scale dynamo action . for the sake of simplicity , we adopt that the tubes are directed * longitudinally * , i.e. , oriented along the equator .let the tubes of the first type contain the magnetic field of two components .the first component , , is a non - random flux directed westwards .the number density of the tubes is denoted as , so that is the mean field .the number density is modulated by the 11-year cycle , which results in the 11-year modulation of .at the same time , is time - independent .we presume that sunspot formation is a threshold phenomenon with a threshold magnetic field strength , , slightly exceeding the magnitude of .the other magnetic field component is a random ( say , gaussian ) magnetic field which is directed * longitudinally * as well , however its mean value vanishes , and the field is directed with equal probabilities west- or eastwards .the r.m.s .value of this field , , is proportional to .if is directed westwards and exceeds , then a magnetic tube arises on the solar surface and creates a sunspot group which follows the hale polarity law .if , however , the field is directed eastwards and , by chance , it is so strong that exceeds , then the tube arises as well , but the newcomer violates the hale polarity law .simple estimates ( khlystova & sokoloff , 2009 ; sokoloff & khlystova , 2010 ) show that if , the number of anti - hale groups should be a few percent of the total number of sunspot groups .note that in order to explain why sunspots arise at solar minima ( when the mean magnetic field is low ) , we have to suppose that is time - independent and the mean field cyclic modulation comes from the modulation of .the tubes of the second type contain a random ( say , gaussian ) magnetic field with a cycle - independent r.m.s .value and zero mean value .they do not contribute to the mean magnetic field .when exceeds , a tube arises , and the polarity orientation depends on the occasional tube s orientation : a hale polarity law group appears if was directed westward , and an anti - hale group appears if was directed eastward .
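the cycle dependence anticipated from this picture can be illustrated with a short monte carlo sketch . the sketch below only mirrors the two tube populations described above ; the cycle profile , tube densities , r.m.s . amplitudes and the threshold are illustrative assumptions and are not values taken from this paper or from khlystova & sokoloff ( 2009 ) .

import numpy as np

rng = np.random.default_rng(0)

# illustrative toy-model parameters (assumed, not from the paper)
t = np.arange(0.0, 22.0, 1.0 / 12.0)        # two 11-year cycles, monthly sampling
b0 = 1.0                                    # non-random westward flux in type-1 tubes
b_cr = 1.05 * b0                            # formation threshold, slightly above b0
n1 = 0.1 + 0.9 * np.sin(np.pi * t / 11.0) ** 2   # cycle-modulated density of type-1 tubes
sigma1 = 1.0 * b0                           # r.m.s. of the fluctuation riding on b0
n2, sigma2 = 0.05, 1.2 * b0                 # cycle-independent type-2 (small-scale dynamo) tubes
n_samples = 200_000                         # tubes sampled per month and per type

hale = np.zeros_like(t)
anti = np.zeros_like(t)
for k in range(len(t)):
    total1 = b0 + rng.normal(0.0, sigma1, n_samples)   # type-1: mean flux plus fluctuation
    hale[k] += n1[k] * np.mean(total1 > b_cr)          # westward and above threshold
    anti[k] += n1[k] * np.mean(total1 < -b_cr)         # eastward and above threshold
    total2 = rng.normal(0.0, sigma2, n_samples)        # type-2: zero-mean field
    hale[k] += n2 * np.mean(total2 > b_cr)
    anti[k] += n2 * np.mean(total2 < -b_cr)

percentage = 100.0 * anti / (hale + anti)
# with n2 = 0 the percentage is flat in time; with n2 > 0 it peaks near the minima

removing the second population ( n2 = 0 ) makes the percentage essentially flat over the cycle , which is exactly the alternative the observational test is meant to distinguish .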
obviously , the _ number _ of anti - hale groups originated from the first - type tubes depends on the cycle ( as soon as the total number of sunspots diminishes at solar minima ) , while that originated from the second - type tubes is cycle - independent .on the contrary , the _ percentage _ of anti - hale groups originated from the first - type tubes does not depend on the cycle , while the percentage of that produced by the second - type tubes should reach its maximum value at the solar minima because the total number of sunspots is lowered at that times .thus , exploring a presence / absence of cyclic modulation of the anti - hale groups , we can shed light on a relationship between the small - scale field , originated from the small - scale dynamo , and the small - scale field , produced by the mean - field dynamo action .the above model is an obvious oversimplification .we only include the features that are ultimately required to illustrate our observational test .further steps will be to admit that the magnetic tubes might be directed not exactly * longitudinally * , to include temporal growth of magnetic field in a given tube due to the dynamo action , etc .the mount wilson observatory ( mwo ) has carried out magnetic observations of sunspot groups throughout a century . on the basis of those observations ,the catalog of magnetic classes of sunspot groups was compiled , which contains the information on number of anti - hale sunspot groups within certain time intervals .unfortunately , the digital format of the complete mwo catalog does not exist yet .monthly catalogs for 19201958 , which include the magnetic classification of sunspot groups , had been published in each issue of the _ _ publications of the astronomical society of the pacific _ _ starting with a paper titled : `` summary of mount wilson magnetic observations of sun - spots for may and june '' ( 1920 ) . from 1962 to 2009 , the catalog data had been published in the _ solar - geophysical data _, however there were gaps in the information on anti - hale sunspot groups .there is a digital mwo catalog covering the time interval from 1962 to 2004 .this one is available at the website of the _ _ national geophysical data center _ _ , however the information on anti - hale groups is presented for the 19892004 time interval only . note that in the mwo catalog , normal and anti - hale sunspot groups are determined by the sign of magnetic field in the leading sunspots and by the group s tilt relative to the e w direction .this method works well for a majority of bipolar groups .however , uncertainties can appear in some cases .thus , the tilt of the sunspot axis can be misaligned with the tilt of magnetic polarities , and as a result , a normal group with a rare unusual tilt can be classified as an anti - hale one ; groups of a singular sunspot can not be classified ; besides , in some cases , the identification of a group as an anti - hale one is ambiguous because the group can change the polarity orientation during its evolution . to frame a group from its neighbors sometimes is not a unique procedure ( see , for example , the magnetic complex ar noaa 09393/09394 ) .we acknowledge the above shortcomings , however we decided to use the catalog data .richardson ( 1948 ) studied in detail anti - hale sunspot groups from 1917 to 1945 using the mwo catalog data published in hale & nicholson ( 1938 ) and in _ publications of the astronomical society of the pacific_. 
he examined the anti - hale sunspot groups on the original records .table 5 in his paper gives yearly numbers of normal and anti - hale sunspot groups in the northern and southern hemispheres separately for the 15th , 16th and 17th solar cycles .richardson normalized the data : the number of sunspot groups observed at mwo during a year was multiplied by 365 and divided by the number of observation days in a year , which resulted in non - integer numbers of anti - hale groups in his table 5 .the data from this table for 19171945 were used in the present study . during periods of cycles overlapping ,the high - latitude groups are thought to belong to new cycle , and therefore , they have the opposite ( to the old cycle ) magnetic polarity of the leading sunspots . these groups must be marked as normal , not anti - hale ones .having this in mind , richardson introduced the cycle separation of groups , so that during the cycles overlapping , the low - latitude groups belong to the old cycle , whereas the high - latitude groups belong to the new cycle .the mark `` anti - hale '' was attributed depending on the cycle .figure 1 of richardson ( 1948 ) demonstrates this for anti - hale sunspot groups .we follow this rule in the present study . to obtain the annual number of normal ( anti - hale ) groups from table 5 from richardson ( 1948 ) , we combined the normal ( anti - hale ) groups of both hemispheres . besides , for years of cycles overlapping ( 19221924 , 19331935 , 19431945 ) , we combined low - latitude normal ( anti - hale ) groups of old cycle with high - latitude normal ( anti - hale ) groups of new cycle . as a result of these efforts , our first data set , named hereinafter as the 19171945 data set , was compiled .our second data set was compiled on the basis of the above mentioned 19892004 mwo data . in detail , these data were observed from january 1 , 1989 to august 31 , 2004 .they are available at the website of the _ national geophysical data center_. there , for anti - hale groups , the sign `` + '' is added to the magnetic class , according to the catalog description .total 354 groups were marked as anti - hale groups .( note , that during the time period under study , the data have gaps , namely , november 1995 , december 1998 , and january to march 2002 . )we scrutinized each ( out of 354 ) group marked as `` anti - hale '' . using the data about the magnetic polarity in sunspots , we determined the polarity of leading spots .the time - latitude diagram was compiled ( figure 1 , top frame ) . following the method used by mcclintock et al .( 2014 ) , we separated wings of activity waves to determine the boundaries between cycles ( dashes lines in figure 1 ) .the diagram shows that there are certain mistakes in attributing the `` anti - hale '' property ( e.g. , black dots are present in the midst of white , and visa versa ; two high - altitude sunspot groups of the new cycle were wrongly marked as anti - hale groups in 1996 ) .this forced us to re - examine all groups on the diagram by means of full disk magnetograms of different observatories available at the dpd website ( gyri et al . 
, 2011 ) .we found that about one - half of the identifications were erroneous .the mis - identifications are associated predominantly with long - lived spots in decaying active regions with formation of small spots or pores of opposite polarity to the west of the main spot , or mis - identifications of boundaries of sunspot groups in decaying activity complexes .the corrected 1989 - 2004 data set was used to explore the anti - hale statistics in the next section ( figure 1 , bottom frame ) . in our study, we also consider the results of mcclintock et al .the authors analyzed magnetic tilt angles of sunspot groups along with information on the magnetic polarities of leading spots .their statistics of anti - hale bipolar sunspot regions for 1974 - 2012 was obtained from the data of li & ulrich ( 2012 ) , who determined sunspot magnetic tilt angles from mwo sunspot records and daily averaged magnetograms , as well as from soho / mdi magnetograms .the results for the 1917 - 1945 data set are shown in figure 2 .the upper panel shows the annual average number of sunspot groups and demonstrates the 11-year solar activity cycle .the lower panel represents the annual average percentage of anti - hale groups .the percentage of anti - hale groups tends to be enhanced during the cycle minima .a similar result was obtained for the 1989 - 2004 data set , figure 3 .we found that the observed enhancement of the percentage becomes less pronounced when we average the data over time intervals longer than 1 year . on the contrary , averaging over shorter time intervals seems to be interesting , and it is considered below .thus , figure 4 presents results for the 1989 - 2004 data set as derived from three different temporal averaging routines .namely , the two top frames ( a ) show the outcome from one - month averaging , the two middle frames ( b ) present the result of two - month averaging , and , finally , the two bottom frames ( c ) refer to the three - month averaging outcome .data gaps in november 1995 , in december 1998 and from january till march 2002 are visible .the data show that during the time intervals of the active sun , the percentage of anti - hale groups predominantly does not exceed the 7% level ( the dashed horizontal lines on the percentage frames ) , however , during the minimum between the 22nd and 23rd cycles , the value of the percentage reaches up to 15 - 25 % for different averaging intervals .
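the monthly percentages and the boxcar averaging used for figure 4 can be reproduced with a few lines of code . the routine below is a generic sketch ; the counts fed into it are synthetic and only emulate a cycle with a constant anti - hale floor , they are not the mwo data .

import numpy as np

def anti_hale_percentage(n_total, n_anti, window=1):
    """trailing boxcar average of the anti-hale percentage.

    n_total, n_anti : monthly numbers of all groups and of anti-hale groups
    window          : averaging window in months (1, 2 or 3 in the text)
    months flagged as data gaps can be passed as nan and are skipped.
    """
    n_total = np.asarray(n_total, dtype=float)
    n_anti = np.asarray(n_anti, dtype=float)
    out = np.full(n_total.shape, np.nan)
    for i in range(len(n_total)):
        sl = slice(max(0, i - window + 1), i + 1)
        tot = np.nansum(n_total[sl])
        if tot > 0:
            out[i] = 100.0 * np.nansum(n_anti[sl]) / tot
    return out

# synthetic example: few groups near the activity minimum, many near the maximum
months = np.arange(120)
n_total = 5.0 + 40.0 * np.sin(np.pi * months / 120.0) ** 2
n_anti = 0.03 * n_total + 0.6          # cycle-dependent part plus a constant floor
pct = anti_hale_percentage(n_total, n_anti, window=3)
print(np.flatnonzero(pct > 7.0))       # months exceeding the 7% level cluster near the minima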
from figure 4one can conclude that the relative number of anti - hale groups overcomes the 7%-level when the total number of sunspot groups becomes low ( below approximately 20 , the level marked with the dashed line on the top frame of figure 4 ) .we compare the above results with findings of mcclintock et al .( 2014 ) who also reported an enhancement of percentage of anti - hale groups near the ends of solar activity cycles ( figure 5 ) .slight differences between our results and that reported by mcclintock and colleagues are visible , however we presume that they might be due to different applied routines to outline a sunspot group .note , that the local peak in the percentage of anti - hale groups in 1995 is present in both studies , compare figure 3 ( bottom frame ) and figure 5 ( top frame ) .thus , an enhancement of the relative number of anti - hale groups during the solar minima can be regarded as a solid result .we demonstrate that the relative number ( the percentage ) of sunspot groups which violate the hale polarity law ( anti - hale groups ) do increase during the minima of 11-year solar activity cycles . in accordance with the probabilistic model for the magnetic fields ,suggested in section 2 , an increase of the relative number of anti - hale groups during the solar minima implies the small - scale dynamo at work , because the small - scale dynamo provides a cycle - independent income in the number of anti - hale groups and the total number of groups becomes lower during the solar minimum .in other words , the observed statistics of anti - hale groups give a hint that small - scale dynamo is active in the solar interior .this conclusion is compatible with the inferences made by stenflo ( 2012 , 2013 ) , who considers the small - scale dynamo as a very negligible contributor to the total solar magnetic flux because the anti - hale groups are associated with a tiny part of solar magnetic flux only ( see table 1 in wang & sheeley , 1989 ) .we stress however that a verification of this hint in the framework of other approaches remains highly desirable .the point is that the link between small - scale magnetic field in solar interior and statistics of anti - hale groups is very far from straightforward , and it is problematic to exclude firmly alternative interpretations of the result obtained . in particular , in mcclintock et al .( 2014 ) , it is supposed that the enhancement of the relative number of anti - hale groups could be associated with the solar activity at low latitudes via interaction across the equator .we appreciate this option and its importance for solar dynamo , however we suppose that it is insufficient to explain the observed effect because , during solar minima , anti - hale groups were observed at both low and intermediate / high latitudes ( see figure 1 and time - latitude diagrams in sokoloff & khlystova 2010 ; mcclintock et al . , 2014 ) .d.s . and a.kh . are grateful for the rfbr financial support under the grant 15 - 02 - 01407 .efforts of v.a . were supported by the program of the presidium of russian academy of sciences no .abramenko , v. , yurchyshyn , v. , goode , p. , kilcik , a. , statistical distribution of size and lifetime of bright points observed with the new solar telescope , the astrophysical journal letters , volume 725 , issue 1 , pp .l101-l105 , 2010 .lites , b. w. , kubo , m. , socas - navarro , h. , berger , t. , frank , z. , shine , r. , tarbell , t. , title , a. , ichimoto , k. , katsukawa , y. , tsuneta , s. , suematsu , y. 
, shimizu , t. , nagata , s. , 2008 , apj , 672 , 1237 | in order to clarify a possible role of the small - scale dynamo in the formation of the solar magnetic field , we suggest an observational test for small - scale dynamo action based on statistics of anti - hale sunspot groups . as we have shown , according to theoretical expectations the small - scale dynamo action has to provide a population of sunspot groups which do not follow the hale polarity law , and the density of such groups on the time - latitude diagram is expected to be independent of the phase of the solar cycle . correspondingly , the percentage of anti - hale groups is expected to reach its maximum values during solar minima . for several solar cycles , we considered statistics of anti - hale groups obtained by several scientific teams , including ours , to find that the percentage of anti - hale groups indeed becomes maximal during a solar minimum . our interpretation is that this fact may be explained by the small - scale dynamo action inside the solar convective zone . magnetic fields sun : activity sun : magnetic fields
wireless sensor networks ( wsn ) consists of energy constrained sensor nodes having limited sensing range , communication range , processing power and battery .the sensors generally follow different hop - by - hop ad - hoc data gathering protocols to gather data and communicate .the sensors can sense the information whichever lies in its sensing range using rf - id ( radio frequency identification ) .the sensed data can be communicated to another sensor node which lies within communication range of the sender .the gathered data finally reaches the base station which may be hops apart from any sensor .since sensor transceivers are omni - directional , we assume the sensing and communication ranges as spheres of certain radii .the network is called homogeneous when all the sensors have the same radii and heterogeneous otherwise .coverage of a certain foi and deployment of sensors are an issue of research where the aim is to make energy efficient networks .sensor deployment and coverage in 2d requires simpler strategies and protocols as compared to 3d .3d sensor network is used generally for underwater sensor surveillance , floating lightweight sensors in air and space , air and water pollution monitoring , forest monitoring , any other possible 3d deployments etc .real life applications of wsns are mostly confined to 3d environments .the term -coverage in 3d is used to describe a scenario in which the sensors are deployed in such a way that sensors cover a common region .more precisely , a point in 3d is said to be -covered if it lies in the region that is common to the sensing spheres of -sensors , being termed as degree of coverage .indeed , the ultimate aim of this project is to * come up with a deployment strategy for sensors that guarantees -coverage of a given 3-dimensional field of interest ( foi ) for large values of *. a first step to address this issue is to come up with a 3-dimensional convex body ( tile ) that is guaranteed to be -covered by a certain arrangement of ( or more ) sensors , and then fill the foi with non overlapping copies of that shape by repeating the same arrangement . the term sixsoid has been coined in this paper to signify a geometrical shape that resembles a super - ellipsoid .sixsoid is created by the intersection of six sensors , each having the same sensing radius , which are placed on the six face centers of a cube of side length where is the radius of the sensing spheres .we compare the implications of this convex body with the previously proposed model on 3d -coverage based on reuleaux tetrahedron .recall that the reuleaux tetrahedron , is created by the intersection of four spheres placed on the vertices of a regular tetrahedron of side length . in an attempt to guarantee 4-coverage of the given field, considers a scenario in which four sensors are placed on the vertices of a regular tetrahedron of side length equal to the sensing radius .it is well known that the volume of of a reuleaux tetrahedron constructed out of a regular tetrahedron of side length is approximately .unfortunately it is not possible to obtain a tiling of the 3d space with non - overlapping copies of reuleaux tetrahedron .in fact , such a tiling is not possible even with a tetrahedron . in a plausible deployment strategy is hinted that exploits this construction by overlapping two reuleaux tetrahedrons , gluing them at a common tetrahedron s face , but this deployment does nt seem to be pragmatic . 
in this paper , we propose another 3d solid ( the sixsoid ) for this purpose and an extremely pragmatic deployment strategy . we show that using our deployment strategy one is guaranteed to have 6-coverage of approximately 68.5% and 4-coverage of 100% of a given 3d polycubical field of interest , which is a significant improvement over the guarantees provided in .there is relatively less work addressing the problem of 3d -coverage as compared to the 2d version of the problem . in authors make significant progress on this problem .they discuss the relevance of the reuleaux tetrahedron in various issues dealing with connectivity and -coverage in 3d .prior to that the following works discuss the 3d version ; in , authors propose a coverage optimization algorithm based on sampling for 3d underwater wsns . proposes an optimal polynomial time algorithm based on voronoi diagrams and graph search algorithms , in authors suggest algorithms to ensure -coverage of every point in a field where sensors may have the same or different sensing radii , defines the minimum number of sensors for -coverage with a probability value , studies the effect of the sensing radius on the probability of -coverage , proposes an optimal deployment strategy to deal with full coverage and 2-connectivity , brought forward a sensor placement model based on a voronoi structure to cover a 3d region , in , authors provide a study on connectivity and coverage issues in a randomly deployed 3d wsn . in this paper, we compare our proposed sixsoid based -coverage model with the existing reuleaux tetrahedron based model .the comparison has been done in terms of volume of -coverage , sensor spatial density in 3d , and placement and packing in 3d .in a previous work , the authors used a convex body that resembles a super - ellipse for -coverage in 2d homogeneous wsn . this model showed much better efficiency in terms of area of -coverage , energy consumption and the requirement of fewer sensors as compared to the -coverage model with the reuleaux triangle .this work done in is further extended to 3d by taking the reuleaux tetrahedron .motivated by the fact that considering a `` super - elliptical '' tile has proven to be much better than the reuleaux triangle based model , in this work we extend that idea to 3d .the main hurdle was to compute the volume of the solid ( sixsoid ) generated from our construction . unlike the reuleaux tetrahedron , its volume was not already known .another reason for opting for our construction is its resemblance to a superellipsoid , because of which it fits much better in 3d than a reuleaux tetrahedron .moreover , the sixsoid provides a practical and easier packing in 3d . once we fill the foi with cubes of side length , a sixsoid is formed by the overlapping of six sensors lying on the six face centers of a cube .so , every cube contains a sixsoid inside it .when we consider packing in 3d , cubes are space filling .we propose a result in a later section that states that our deployment strategy ensures 4-coverage of the entire foi along with the fact that approximately 68% of it is 6-covered .thus , the sixsoid based model ensures at least 4-coverage of the entire foi .packing of the reuleaux tetrahedron in 3d as proposed in is harder to achieve and it is not feasible for practical deployment . in the subsequent sections , we calculate the volume of the sixsoid with the supporting theorems and compare our results with the reuleaux tetrahedron based model .
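the face - center deployment that underlies this packing is easy to generate programmatically . the sketch below lists the sensor positions for a box - shaped foi tiled by cubes of side r ; a python set is used so that a face shared by two neighbouring cubes contributes a single sensor . the function name and the block dimensions are our own illustrative choices .

import numpy as np

def sixsoid_deployment(nx, ny, nz, r=1.0):
    """sensor positions for an nx x ny x nz block of cubes of side r.

    every sensor sits on a cube face center; faces shared by two cubes
    are de-duplicated through the set below.
    """
    sensors = set()
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                x0, y0, z0 = i * r, j * r, k * r
                face_centers = [
                    (x0 + r / 2, y0 + r / 2, z0),
                    (x0 + r / 2, y0 + r / 2, z0 + r),
                    (x0 + r / 2, y0,         z0 + r / 2),
                    (x0 + r / 2, y0 + r,     z0 + r / 2),
                    (x0,         y0 + r / 2, z0 + r / 2),
                    (x0 + r,     y0 + r / 2, z0 + r / 2),
                ]
                sensors.update(tuple(round(c, 9) for c in p) for p in face_centers)
    return np.array(sorted(sensors))

positions = sixsoid_deployment(4, 4, 4)
print(len(positions))   # 240 for a 4x4x4 block; roughly 3 sensors per cube for large blocks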
in the remaining part of the paper , we discuss the computation of volume of a sixsoid in section 3 . in section 4we discuss the deployment and packing strategy in 3d that exploits the structure of sixsoid .section 5 is devoted to the comparison of our model with the results proposed in .we end the paper with conclusion and potential research directions for future .the * sixsoid * which we denote by is defined as the 3-d object obtained due to the intersection of six spheres placed on the centers of the faces of a cube of side - length .the radius of each sphere is . in this section ,we compute the volume of .our basic strategy is to take a horizontal cross section ( via a sliding plane ) of the cube at a distance of from the top of the cube and perform the following integration , where is the area of the cross section and the last equality follows due to the symmetry of .henceforth , we will use the following notation regarding the geometry of the objects involved in this section .let (top ) , (bottom ) , (left ) , (right ) , (front ) , (back ) be the faces of the cube and we denote the respective spheres ( centered at face centers ) by .we will denote the sliding plane by and let be its radius .let be the circle which is obtained by the intersection of with where and . in the remainder of this sectionwe show the behavior of the cross - section as the plane slides from to . for all , and concentric circles and . also as varies from to , monotonically decreases , monotonically increases and at . for , . from to , where we observe that , is exactly as slides below .so to compute we only need to compute the radius of .this value can be computed as , which proves the claim regarding .now inorder to know ( the event when the cross section changes ) , we need to find the value of at which is tangential to , this is also the instance at which is tangential to each and . to find this we need to solve the following equation , the root of the above quadratic equation which is less than is .this proves the lemma . + where and from to where and recall that above , the cross - section is a circle and at , is tangential to . as slides slightly below ,the cross - section is a 8-sided closed region drawn in fig.[z2 ] .let be the center of the square and , .our aim is to find out the area of the region . from the notation used in the fig .[ z2 ] we deduce that .also it is easy to see that where and is the length of the line segment joining and .we first show how to compute in terms of and . + * computing * : notice that and are points of intersection of the circles and .let us write the equation of these circles assuming to be the origin and lines parallel to and axis passing though it as and axes .the equations of and in this coordinate system would be and respectively .subtracting the later from the former gives .now if we conside the equation of as a quadratic equation in with then the modulus of the difference of the roots of this equation will exactly be the length of .simple algebraic manipulation shows that the desired area can be decomposed into two parts .( i ) 4 times the area of sector ( ii ) 4 times the area of region .we compute each of these as follows : * area of sector is * area of the region = area of ( ) + area of the cap , which is equal to + + adding the aforementioned two values , multiplying by 4 , and replacing the values of , gives us the expression of in the lemma . 
+ * computing * : to find the value of at which the cross - section changes from the above mentioned 8-sided region , we need to look at the instant when the circle circumscribes the region formed due to the intersection of . at this instant the following equality holds ; which solves to and , since , the only root which is of concern for us is . + is equal to for to , where , , . _ ( sketch ) _ consider the cross - section as depicted in figure [ z5 ] .the area that is only 3-covered is the union of the regions .a simple but tedious calculation gives us the result . is equal to for to . _ ( sketch ) _ consider the cross - section as depicted in figure [ z5 ] .the area that is only 3-covered is the union of the regions .again a simple calculation gives us the result .the volume inside a cube that is 4-covered is , where .we again use numerical methods to estimate this integral , which is approximately .thus the volume of the region that is 4-covered is approximately .consider a tessellation of a ( poly)cubical 3d field of interest ( foi ) using cubes of side length .we place the sensors , each of sensing radius , on the centers of the faces of every cube .clearly , every sensor will be shared by 2 cubes in the tiling ( except the boundary ones ) . according to the aforementioned packing every point in the foi is 3-covered , approximately 95.2 % of the foi is 4-covered and about 68.5 % of the foi is 6-covered . by construction , every point inside a sixsoid formed will be covered by 6 sensors .we just have to prove that every other point will be 4-covered . according to the notations used in the construction of the sixsoid in section 3 , which we borrow here , this is equivalent to proving that every point in the cross - section ( which is a square of side length r ) obtained by slicing the cube with the plane is 3-covered for all . again by symmetry of , we only have to prove this for all values between . as we have noticed in section 3 , due to the nature of the sixsoid the topology of the cross section changes at two values of , namely and .consider the arrangement of circles formed on the square of side length ; it can be verified that for all the ranges where the topology of the cross - section remains fixed ( i.e. from 0 to , to and to ) the entire cross section is 3-covered . also from the previous section the claims regarding 4- and 6-coverage follow .recall that according to our deployment strategy every sensor is shared by 2 cubes . thus , given a 3d cubical foi of volume , the number of sensors of radius needed according to our deployment strategy is , where is the radius of the sensing spheres . table : volume comparison between sixsoid and reuleaux tetrahedron . in this section we present the comparison of our model with the reuleaux tetrahedron model .we compare the volume of the sixsoid with the volume of the reuleaux tetrahedron with varying sensing radius .this comparison is tabulated in table [ t2 ] .next , we evaluate the sensor spatial density for -coverage in the sixsoid model and compare it with the reuleaux tetrahedron model .
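the coverage fractions quoted above are easy to cross - check numerically : the sketch below samples points uniformly in one cube of side r , with the six sensing spheres of radius r centered on its face centers as in the construction of the sixsoid , and estimates which fraction of the cube is covered by at least 3 , 4 and 6 sensors . the sample size and the random seed are arbitrary .

import numpy as np

rng = np.random.default_rng(1)
r = 1.0
centers = r * np.array([
    [0.5, 0.5, 0.0], [0.5, 0.5, 1.0],
    [0.5, 0.0, 0.5], [0.5, 1.0, 0.5],
    [0.0, 0.5, 0.5], [1.0, 0.5, 0.5],
])                                        # face centers of the cube [0, r]^3

pts = r * rng.random((1_000_000, 3))      # uniform samples inside one cube
degree = np.zeros(len(pts), dtype=int)
for c in centers:
    degree += ((pts - c) ** 2).sum(axis=1) <= r * r   # is the sample inside this sphere?

for k in (3, 4, 6):
    print(f"fraction covered by >= {k} sensors: {np.mean(degree >= k):.3f}")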
recall that the minimum sensor spatial density per unit volume needed for full -coverage of a 3d field is defined as a function of sensing radius as where .we take the similar parameter for reuleaux tetrahedron from , which is where .table [ t3 ] shows the comparison of the minimum sensor spatial densities and we find that sixsoid requires much less sensors per unit volume to guarantee -coverage .in this paper , we discussed a new geometric model for addressing the problem of -coverage in 3d homogeneous wireless sensor networks which we have named as the sixsoid .we show in a number of ways how this model outperforms the existing model of reuleaux tetrahedron , namely the volume of the convex body involved , which in turn implies improved spatial density and a better ( in terms of coverage ) and more pragmatic deployment strategy for sensors in a given 3d field of interest . from the point of view of geometry ,our construction of sixsoid and its volume computation might be of independent interest .we suspect our construction might have interesting consequences for non - homogenoeus networks as well which we leave as a direction of future research .ammari , on the problem of -coverage in 3d wireless sensor networks : a reuleaux tetrahedron - based approach , 7th international conference on inteligent sensors , sensor networks and information processing ( 2011 ) x. bai , c. zhang , d. xuan , j. teng , w. jia , constructing low - connectivity and full - coverage three dimensional wireless sensor networks , ieee journal on selected areas in communications , vol .28 , issue : 7 a. jaklic , a. leonardis , f. solina , superquadratics and their geometrical properties , computational imaging and vision , chap .20 , kluwer academic publishers , dordrecht , http://lrv.fri.uni-lj.si/ franc / srsbook / geometry.pdf ( 2000 ) | coverage in 3d wireless sensor network ( wsn ) is always a very critical issue to deal with . coming up with good coverage models implies energy efficient networks . -coverage is a model that ensures that every point in a given 3d field of interest ( foi ) is guaranteed to be covered by sensors . in the case of 3d , coming up with a deployment of sensors that gurantees -coverage becomes much more complicated than in 2d . the basic idea is to come up with a convex body that is guaranteed to be -covered by taking a specific arrangement of sensors , and then fill the foi with non - overlapping copies of this body . in this work , we propose a new geometry for the 3d scenario which we call a * sixsoid*. prior to this work , the convex body which was proposed for coverage in 3d was the so called * reuleaux tetrahedron * . our construction is motivated from a construction that can be applied to the 2d version of the problem in which it implies better guarantees over the * reuleaux triangle*. our contribution in this paper is twofold , firstly we show how sixsoid gurantees more coverage volume over reuleaux tetrahedron , secondly we show how sixsoid also guarantees a simpler and more pragmatic deployment strategy for 3d wireless sensor networks . in this paper , we show the construction of sixsoid , calculate its volume and discuss its implications on the -coverage in 3d wsns . wireless sensor networks , -coverage , reuleaux tetrahedron , sixsoid . |
in many numerical simulations in general relativity one integrates einstein s field equations on a spatially compact domain with artificial timelike boundaries , effectively truncating the computational domain .this raises the question of how to specify boundary conditions . in this articlewe address this question within the context of the cauchy formulation , in which the field equations split into evolution and constraint equations .we adopt a free evolution approach , in which the constraints are solved on the initial time slice only and the future of the initial slice is computed by integrating the evolution equations .boundary conditions within this approach should ideally satisfy the following three requirements : ( i ) be compatible with the constraints in the sense of guaranteeing that initial data which solves the constraint equations yields constraint - satisfying solutions , ( ii ) permit to control , in some sense , the gravitational degrees of freedom at the boundary , and ( iii ) be stable in the sense of yielding a well posed initial - boundary value problem ( ibvp ) .there are several motivations for the construction of boundary conditions satisfying the above properties .first , recent detailed analysis of binary neutron star evolutions showed that the presence of artificial boundaries in current state of the art evolutions , which use rather ad hoc boundary conditions , dramatically affects the dynamics in the strong field region , near the stars .while it can be argued that this effect should disappear when placing the boundaries further and further away from the strong field region , until ideally , the region of interest in the computational domain is causally disconnected from the boundaries , in practice , this would require huge computer resources , especially in the three - dimensional case . even though some kind of mesh refinement should help in placing the boundaries far away , there is in any case a minimum resolution needed in the far region in order to reasonably represent wave propagation , thus constraining the size of the computational domain for a given amount of memory .next , an understanding of boundary conditions is important if the evolution equations include elliptic equations , since in this case effects from the boundaries can propagate with infinite speed , and so have an immediate effect on the fields being evolved .examples of cases in which elliptic equations arise include elliptic gauge conditions ( such as maximal slicing or minimal distortion conditions ) and constraint projection methods .finally , isolating the physical degrees of freedom at the boundaries should be important in view of cauchy - characteristic ( see for a review ) or cauchy - perturbative matching techniques , where a cauchy code is coupled to a characteristic or perturbative `` outer module '' which is well adapted to carry out the evolution in the far zone . 
in these cases, it is important to communicate only the physical degrees of freedom at the boundary between the two codes , since the cauchy code and the outer module might be based on completely different formulations .boundary conditions satisfying requirement ( i ) can be constructed by analyzing the constraint propagation system , which constitutes an evolution system for the constraint variables and is a consequence of bianchi s identities and the evolution equations .if it can be cast into first order symmetric hyperbolic form , the imposition of homogeneous maximally dissipative boundary conditions for the constraint propagation system guarantees that a smooth enough solution of the evolution system ( if it exists ) which satisfies the constraints initially automatically satisfies the constraints at later times .homogeneous maximally dissipative boundary conditions consist in a linear relation between the in- and outgoing characteristic fields of the system , which is chosen such that an energy estimate can be derived .this energy estimate implies that the unique solution of the constraint propagation system with zero initial data is zero , i.e. that the constraints are preserved during evolution .maximally dissipative boundary conditions for the constraint variables usually translate into differential conditions for the fields satisfying the main evolution equations , since the constraint variables depend on derivatives of the main variables .this means that the resulting boundary conditions for the main system usually are not of maximally dissipative type and , as discussed below , analyzing well posedness of the corresponding ibvp becomes more difficult .requirement ( ii ) , controlling the physical degrees of freedom , is a difficult one since there are no known _ local _expressions for the energy or the energy flux density in general relativity .nevertheless , one should be able to control the physical degrees of freedom in some approximate sense , as for example in the weak field regime approximation , in which one linearizes the equations around flat spacetime . in this approximation, it might be a good idea to specify conditions through the weyl tensor , since it is invariant with respect to infinitesimal coordinate transformations of minkowski spacetime . more precisely ,since there are two gravitational degrees of freedom , we should specify two linearly independent combinations of the components of the weyl tensor .below we will discuss boundary conditions which involve the newman - penrose complex scalars and ( see , for instance , ) with respect to a null tetrad adapted to the time - evolution vector field and the normal to the boundary .if the boundary is at null infinity these scalars represent the in- and outgoing gravitational radiation , respectively .furthermore , it turns out that these scalars are invariant with respect to infinitesimal coordinate transformations and tetrad rotations for linear fluctuations of any petrov type d solution represented in an adapted background tetrad .this class of solutions not only comprises flat spacetime but also the family of kerr solutions describing stationary rotating black holes . 
since in many physical situationsone is interested in modeling asymptotically flat spacetimes , such that if the outer boundary is placed sufficiently far away from the strong field region spacetime can be described by a perturbed kerr black hole , we expect the boundary condition to be a good approximation for a `` non - reflecting '' wave condition .these boundary conditions are actually part of the family of conditions imposed in the formulation of ref . , to date the only known well posed initial - boundary value formulation of the vacuum einstein equations , and were also considered in . requirement ( iii ) , the well posedness of the resulting ibvp , turns out to be a difficult problem as well : for quasilinear symmetric hyperbolic systems with maximally dissipative boundary conditions there are well - known well posedness theorems which state that a ( local in time ) solution exists in some appropriate hilbert space , that the solution is unique , and that it depends continuously on the initial and boundary data .the proof of ref . is based on these techniques . there , using a formulation based on tetrad fields , the authors manage to obtain a symmetric hyperbolic system by adding suitable combinations of the constraints to the evolution equations in such a way that the constraints propagate tangentially to the boundary . in this way, the issue of preserving the constraints becomes , in some sense , trivial .( see for a treatment in spherical symmetry in which the constraints propagate tangentially to the boundary as well . )however , for the more commonly used metric formulations it seems difficult to achieve tangential propagation for the constraints with a symmetric or strongly hyperbolic system , and therefore , one has to deal with either constraint propagation across the boundary or systems that are not strongly or symmetric hyperbolic .here we choose to deal with constraint propagation across the boundary and strongly or symmetric hyperbolic systems .there has been a lot of effort in understanding such systems , both at the analytical and numerical level .although partial proofs of well posedness have been obtained using symmetric hyperbolic systems with maximally dissipative boundary conditions , it seems that these kind of boundary conditions are not flexible enough since constraint - preserving boundary conditions usually yield differential conditions . in this articlewe construct constraint - preserving boundary conditions ( cpbc ) for a family of first order strongly and symmetric hyperbolic evolution systems for einstein s equations . for definiteness , we focus on the formulation presented in ref . , which is a generalization of the einstein - christoffel type formulations with a bona - mass type of gauge condition for the lapse . however , our approach is quite general and should also be applicable to other hyperbolic formulations of einstein s equations .in section [ sect : shlg ] we briefly review the family of formulations considered here and recall under which conditions the main evolution equations are strongly or symmetric hyperbolic . 
in section [sect : escv ] we discuss the constraint propagation system and analyze under what circumstances it is symmetric hyperbolic .having cast this system in symmetric hyperbolic form we impose cpbc via homogeneous maximally dissipative boundary conditions for the constraint variables .these conditions are differential boundary conditions when expressed in terms of the fields satisfying the main system .we complete these boundary conditions in section [ sect : bc ] by imposing extra conditions which control the physical and gauge degrees of freedom .there are several possibilities for doing so , of which we analyze the following two : 1 ) we first specify algebraic conditions in the form of a coupling between the outgoing and ingoing characteristic fields ( referred to as cpbc without weyl control later in this article ) ; 2 ) we specify boundary conditions via the weyl scalars and , as discussed above ( referred to as cpbc with weyl control later in this article ). section [ sect : fl ] is devoted to an analysis of the well posedness of the resulting ibvp . as mentioned above ,this is a difficult problem since our boundary conditions are not in algebraic form .i particular , they are not in maximally dissipative form , so we can not apply the standard theorems for symmetric systems with maximally dissipative boundary conditions .therefore , our goal here is more modest : we analyze the ibvp in the `` high frequency limit '' by considering high - frequency perturbations of smooth solutions . in this regimethe equations become linear with constant coefficients , and the domain can be taken to be a half plane . in this case solutionscan be constructed explicitly by performing a laplace transformation in time and a fourier transformation in the spatial directions that are tangential to the boundary .this leads to the verification that a certain determinant is non - zero . if this determinant condition is violated , the system admits ill posed modes growing exponentially in time with an arbitrarily small growth time .therefore , the determinant condition yields _ necessary _ conditions for well posedness of the resulting ibvp and allows us to discard several cases which would lead to an ill posed formulation ._ these ill posed formulations appear even in cases in which both the main and the constraint propagation system are symmetric hyperbolic_. we also stress that the determinant condition verified in this article is a weaker version of the celebrated kreiss condition , which yields well posed ibvp for hyperbolic problems with _ algebraic _ boundary conditions .however , the kreiss condition guarantees nothing for the present case of _ differential _ boundary conditions .see for an example of an ibvp with differential boundary conditions which satisfies the kreiss condition but fails to be well posed in .in section [ sect : nr ] we discretize the ibvp by the method of lines .the spatial derivatives are discretized using finite difference operators that satisfy the summation by parts property and we explain in detail how we numerically implement our cpbc . 
then, in sections [ sect : sim1 ] and [ sect : sim ] we perform the following numerical tests : first , we evolve ibvps which fail to fulfill the determinant condition and are therefore ill posed .the goal of evolving these systems is to confirm the expected lack of numerical stability .we do confirm such instability , finding an obvious lack of convergence : the results exhibit exponentially in time growing modes , where the exponential factor gets larger as resolution is increased , as predicted by our analytical analysis .next , we focus on evolutions of two systems that do satisfy the determinant condition , differing only on whether they control components of the weyl tensor at the boundary or not .finally , we compare stable evolutions of systems with cpbc with evolutions where maximally dissipative boundary conditions are given for the _ main _ evolution system , and are therefore expected to violate the constraints .we first analyze evolutions of these four systems through a _ robust stability test _ . in this test, random initial and boundary data is specified at different resolutions , and the growth rate in the time evolved fields is observed .a growth rate that becomes _ larger _ as resolution is increased is a strong indication of a numerical instability , while for numerical stability the growth rate should be _ bounded _ by a constant that is independent of resolution .we find that systems that violate the determinant condition fail to pass the robust stability test , as expected .however , we _ also _ find that at least some systems with cpbc with weyl control which satisfy the determinant condition are numerically unstable as well , although in a somehow weaker sense ( explained in the text ) that reminds the numerical evolution of weakly hyperbolic systems .in contrast to this , the systems with cpbc without weyl control which satisfy the determinant condition that we have evolved pass the robust stability test for the length of our simulations ( usually between and crossing times ) .next , we concentrate on evolutions of brill waves , and confirm the expectations drawn from the robust stability test in what concerns numerical stability .using these waves we further concentrate on a detailed comparison between the results using maximally dissipative boundary conditions for the main evolution system and stable cpbc .our convergence tests strongly suggest that in the former case the constraint variables _ do not _ converge to zero in the limit of infinite resolution , implying that one _ does not _ obtain a solution to einstein s field equations , while in the latter case for the same resolutions the constraint variables do converge to zero .next we concentrate on the stable cpbc case and evolve pure gauge solutions , using high order accurate finite difference operators which satisfy the summation by parts property .the operators used are eighth order accurate in the interior points and fourth order accurate at and near the boundary points . 
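the summation by parts property referred to above can be made concrete with the standard second order accurate operator ( the runs themselves use the higher order operators mentioned in the text , which we do not reproduce here ) . the sketch builds the diagonal norm h and the difference operator d and verifies the discrete integration by parts identity ; the grid size and spacing are arbitrary .

import numpy as np

def sbp_21(n, h):
    """second-order sbp first-derivative pair (h-norm, d) on n+1 grid points.

    interior: centered differences; boundary points: one-sided differences.
    the pair satisfies  h d + (h d)^t = diag(-1, 0, ..., 0, 1)  exactly,
    which is the discrete analogue of integration by parts.
    """
    D = np.zeros((n + 1, n + 1))
    for i in range(1, n):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[n, n - 1], D[n, n] = -1.0 / h, 1.0 / h
    H = h * np.diag(np.r_[0.5, np.ones(n - 1), 0.5])
    return H, D

n, h = 50, 0.02
H, D = sbp_21(n, h)
Q = H @ D
B = np.zeros_like(Q); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.max(np.abs(Q + Q.T - B)))   # of the order of round-off: the sbp identity holds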
as a final numerical experiment , we also concentrate on the stable cpbc case and inject pulses of gravitational radiation through the boundaries of an initially flat spacetime .we inject pulses of large enough amplitude to create very large curvature in the interior ( as measured by curvature invariants ) , showing that our cpbc are strong enough to handle very non - linear dynamics .the order of magnitude achieved by the curvature invariant measured corresponds to being at roughly from the singularity , in a schwarzschild spacetime of mass one . in these simulations this curvature is produced _ solely _ by the injected pulses .a summary of the results and conclusions is presented in sect .[ sect : conc ] .technical details , like the derivation of the constraint propagation system and of the characteristic fields , and a special family of solutions to the linearized ibvp with weyl control , are found in [ app : mecps ] , [ app : charfields&sh ] and [ app : sfs ] .in this section we review the family of hyperbolic formulations of einstein s field equations constructed in , which is an extension of the einstein - christoffel type of formulations and incorporates a generalization of the bona - masso slicing conditions .it consists of evolution equations for the variables , where is the lapse function , the three - metric , the extrinsic curvature , and where the extra variables and represent the first order spatial derivatives of the three - metric and of the logarithm of the lapse , respectively .the evolution equations are obtained from the adm evolution equations in vacuum by adding suitable constraints to the right - hand side of the equations ( see [ app : mecps ] for more details ) . following the notation of ref . , we have here , , and is a function that is smooth in all its arguments and that satisfies . for the simulations below we shall choose which corresponds to time - harmonic slicing , but for our analytical results we shall leave unspecified for generality .we assume that the shift vector is a fixed , a priori specified vector field .the parameters , , , , control the dynamics off the constraint hypersurface , defined by the vanishing of the following expressions \ldots , \qquad c^{(a)}_i \equiv a_i - \partial_i n / n , \qquad c^{(a)}_{ij} \equiv \frac{1}{n}\, \partial_{[i}( n a_{j]} ) . here , is the hamiltonian constraint , the momentum one , and , , and are artificial constraints that arise as a consequence of the introduction of the extra variables , . and are automatically zero if the constraints , are satisfied , and so they are redundant .however , we will need these variables in the next section in order to cast the constraint propagation system into first order form .the ricci tensor belonging to the three - metric is written as where , and the evolution equations ( [ eq : ndot],[eq : gdot],[eq : kdot],[eq : ddot],[eq : adot ] ) have the form of a quasilinear first order system , where and the matrix - valued functions , , and the vector - valued function are smooth .we are looking for solutions with given initial data on some three - dimensional manifold which satisfy the constraint equations .
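during a free evolution the reduction constraints just introduced serve as cheap diagnostics : one simply compares the evolved extra variables with the derivatives they are supposed to represent . the one - dimensional sketch below monitors the lapse constraint c^{(a)}_x = a_x - \partial_x n / n on a grid ; the grid , the lapse profile and the use of np.gradient are illustrative only and are not taken from our code .

import numpy as np

def lapse_constraint_residual(N, A_x, dx):
    """residual of c^(a)_x = a_x - (d/dx n)/n on a uniform 1-d grid."""
    dNdx = np.gradient(N, dx)            # second-order centered differences
    return A_x - dNdx / N

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
N = 1.0 + 0.1 * np.exp(-((x - 0.5) / 0.1) ** 2)   # smooth toy lapse profile
A_x = np.gradient(np.log(N), dx)                  # evolved variable, here set to its exact value

res = lapse_constraint_residual(N, A_x, dx)
print(np.max(np.abs(res)))           # small, and it converges to zero with grid refinement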
in order to guarantee the existence of solutions we restrict the freedom in choosing the parameters , , , , , by demanding that the corresponding initial - value formulation is well posed .this means that given smooth initial data , a ( local in time ) solution should exist in some appropriate hilbert space , be unique , and depend continuously on the initial data .although in this article we are interested in the numerical evolution of spacetimes on a spatially compact region with boundaries , well posedness of the problem in the absence of boundaries is a necessary condition for obtaining a numerically stable and consistent evolution inside the domain of dependence of the initial slice . an easy and intuitive way of finding necessary conditions for well posedness is to look at high frequency perturbations of smooth solutions : let be a fixed point and a smooth solution in a neighborhood of .perturb according to , with , real , and a constant one - form on which is normalized such that .if we evaluate the evolution equations at a point near , divide by , and take first the limit and then the limit , the evolution equations reduce to \partial_t \tilde{u} = \ldots\, \tilde{u} , where the matrix is given by \left( \begin{array}{c} \vdots \\ -2 n_k k_{ij} + \eta\, g_{k(i}\left( k_{j)n} - n_{j)} k \right) + \chi\, g_{ij}\left( k_{kn} - n_k k \right) \\ -2\sigma\, n_i k + \xi\, \left( k_{in} - n_i k \right) \end{array} \right) , where we have set and where the index refers to the contraction with .notice that by rescaling and rotating the coordinates one can always achieve that , and by rescaling the coordinate one can achieve that . for this reason , we drop the entry in the following .we call the system ( [ eq : frozen ] ) the associated frozen coefficient problem . a necessary condition for the well posedness of the initial - value formulation defined by the ( non - linear ) eqs .( [ eq : ndot],[eq : gdot],[eq : kdot],[eq : ddot],[eq : adot ] ) is the well posedness of the associated frozen coefficient problem .if some extra smoothness properties are satisfied ( see [ app : charfields&sh ] ) this condition is also a sufficient one .the frozen coefficient problem is well posed if the matrix is diagonalizable and has only real eigenvalues for each . clearly , this is true if and only if the matrix is diagonalizable and has only real eigenvalues .as we have shown in , this can be easily analyzed by taking advantage of the block structure of : suppose is an eigenvector of with eigenvalue . then where , , , and where the matrices and are read off from eq .( [ eq : principalsymb ] ) . a sufficient condition for to be diagonalizable and possess only real eigenvalues can be obtained by considering the equation .explicitly , we have where the coefficients , and are .we now demand that is diagonalizable and has only positive eigenvalues . as we have shown in , this guarantees that is diagonalizable and has only real eigenvalues .representing in an orthonormal basis , , such that , we find where and . from this we immediately see that is diagonalizable with only positive eigenvalues if and only if are positive and if whenever . in [ app : charfields&sh ] we derive the characteristic fields , which are given by the projections of onto the eigenspaces of .these fields play an important role in the construction of boundary conditions .
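the diagonalizability test described above lends itself to a simple numerical probe : sample unit one - forms , build the principal symbol , and check that the eigenvalues are real and that the eigenvector matrix is uniformly well conditioned . the helper below is generic ; the symbol it is applied to is a stand - in ( the first order form of the flat space wave equation ) , not the symbol of the einstein system discussed in this section .

import numpy as np

def probe_strong_hyperbolicity(symbol, n_dirs=200, tol=1e-10, seed=0):
    """sample directions and probe strong hyperbolicity of symbol(n).

    symbol(n) must return the square matrix obtained by contracting the
    principal part with the unit one-form n.  returns a flag for real
    eigenvalues and the worst eigenvector condition number encountered.
    """
    rng = np.random.default_rng(seed)
    worst_cond = 0.0
    for _ in range(n_dirs):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        w, V = np.linalg.eig(symbol(n))
        if np.max(np.abs(w.imag)) > tol:
            return False, np.inf
        worst_cond = max(worst_cond, np.linalg.cond(V))
    return True, worst_cond

def acoustic_symbol(n):
    # stand-in example: u = (p, v_x, v_y, v_z) with p_t = div v, v_t = grad p
    P = np.zeros((4, 4))
    P[0, 1:] = n
    P[1:, 0] = n
    return P

print(probe_strong_hyperbolicity(acoustic_symbol))   # (True, condition number of order one)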
using these fields, we also derive in [ app : charfields&sh ] sufficient conditions for the non - linear evolution system to be strongly hyperbolic and thus yield a well posed initial value formulation .in order to obtain a solution to einstein s equations , not only do we have to solve the evolution equations but also the constraints .we want to follow a free evolution scheme , in which the constraints are solved initially only . for this scheme to be valid, we have to guarantee that any solution for such initial data automatically satisfies the constraints in the computational domain everywhere and at every time . at the numerical level , since the constraints are already violated initially due to truncation or roundoff errors , we have to guarantee that the numerical solution to the evolution equations converge to a constraint - satisfying solution to the continuum equations in the limit of infinite resolution . in order to show this , the constraint propagation system , which gives the change of the constraint variables under the flux of the main evolution system , plays an important role . in this sectionwe show that for a suitable range of the parameters , , , , this system can be cast into first order symmetric hyperbolic form .we then specify boundary conditions that guarantee that zero initial data for this system leads to zero constraint variables at later times .the constraint propagation system is derived in [ app : mecps ] , up to lower order terms which are linear algebraic expressions in the constraint variables and whose precise form are not needed for the purpose of this article . in order to analyze under which conditions the system is symmetric hyperbolic , it is convenient to decompose into its trace and trace - less parts , where (ij)} ] and , ( notice that is symmetric trace - less while and are antisymmetric . ) in terms of the variables , where , the non - linear constraint propagation system has the form where the principal symbol is given by \\\left[\frac{1}{4}(2\chi-\eta ) + \xi \right]\ , n_{[i } c_{j ] } \\ ( \eta + 3\chi)\ , n_{[i } c_{j ] } \\ \xi\ ; n_{[i } c_{j ] } \end{array } \right),\ ] ] and where the matrix depends on the main fields and their spatial derivatives , but not on .the system eq .( [ eq : evolconstr ] ) is called _ symmetric hyperbolic _ if there exists a symmetric positive definite matrix which may depend on but not on such that is symmetric for all and . from the above representation of the principal symbol it is not difficult to see that the system is symmetric hyperbolic if the following conditions hold notice that these conditions automatically imply that and , which are necessary conditions for the main evolution system to be strongly hyperbolic .if the conditions ( [ eq : constrsym1],[eq : constrsym2],[eq : constrsym3 ] ) are satisfied , a symmetrizer is given by the quadratic form where the symmetrizer allows us to obtain an energy estimate for solutions to eq .( [ eq : evolconstr ] ) on a domain of with smooth boundary and suitable boundary conditions on . defining the energy norm differentiating with respect to and using the constraint propagation system , eq .( [ eq : evolconstr ] ) , we obtain \ ; d^3 x \nonumber\\ & = & \int_{\partial o } u^t { \bf h } n{\cal a}_c(n ) u \ ; d^2 x \nonumber\\ & + & \int_{o } u^t \left [ n { \bf h}{\cal b } + n { \cal b}^t { \bf h } - \partial_i(n{\bf h}{\cal a}_c^i + { \bf h}\beta^i ) \right ] u\ ; d^3 x,\end{aligned}\ ] ] where here denotes the unit outward one - form to the boundary . 
in the last stepwe have used the fact that is symmetric with respect to the scalar product defined by ( [ eq : sym ] ) and assumed that the shift is tangential to the boundary at .as one can easily verify , where are the in and out going characteristic constraint fields . therefore ,if we impose the boundary conditions where the matrix satisfies for all one - forms , we obtain the energy estimate where the constant only depends on bounds for and . since this proves that for all provided that . for this reason ,we call the three conditions ( [ eq : cpbc ] ) _ constraint - preserving boundary conditions_.in this section we consider the main evolution system , eqs .( [ eq : ndot],[eq : gdot],[eq : kdot],[eq : ddot],[eq : adot ] ) , on a open domain of with smooth boundary .we also assume that the shift vector is chosen such that it is tangential to at the boundary .this means that at the boundary we have six ingoing characteristic fields , denoted by , , , ( see [ app : charfields&sh ] for their definition ; here and in the following , , and quantities with a hat denote trace - free two by two matrices ) , and thus we have to provide six independent boundary conditions . following the classification scheme of ref . one can show that the first field is a gauge field , the second ones are physical fields , and the last are constraint - violating fields .we stress that this classification scheme does only make precise sense in the linearized regime for plane waves propagating in the normal direction to the boundary ( see for a more detailed discussion about this ) .if we forgot about the constraints , we could give data to the six ingoing fields .the simplest possibility would be to freeze the ingoing fields to their values given by the initial data . provided the evolution system is symmetric hyperbolic this would yield a well posed ibvp .however , in the presence of constraints , the boundary conditions have to ensure that no constraint - violating modes enter the boundary , i.e. the boundary conditions have to be compatible with the constraints .in fact , the numerical results of section [ sect : sim ] show explicitly that freezing of the ingoing fields to their initial values does not , in general , provide a solution to einstein s equations : the constraints are violated . instead of the freezing non - constraint preserving boundary conditions just mentioned ,we impose the three constraint - preserving boundary conditions ( [ eq : cpbc ] ) .this fixes three of the six conditions we are allowed to specify at .we complete these three conditions in the following two ways : 1 .* cpbc without weyl control * + here we adopt the simplest possibility and impose algebraic boundary conditions on the `` gauge '' and `` physical '' fields , where and are constants satisfying , , and and are functions on describing the boundary data . the justification for the bounds on and will become clear in the next section .notice that the choice results in dirichlet conditions in the sense that data is imposed on some components of the extrinsic curvature while the choice yields boundary conditions that impose data on combinations of spatial derivatives of the three metric ( see the definitions of and in [ app : charfields&sh ] ) . 
in our simulationsbelow , we choose which yields sommerfeld - like conditions .* cpbc with weyl control * + here we replace eq .( [ eq : maxdissmodes ] ) by a similar condition for the weyl tensor .we impose the boundary conditions where and where are defined in terms of the electric ( ) and magnetic ( ) parts of the weyl tensor in the following way : let , , be an orthonormal triad with respect to the three metric such that coincides with the unit outward normal to the boundary .then , , where is the volume element associated to .if the vacuum equations hold , the electric and magnetic components can be determined by where one uses the evolution equation ( [ eq : kdot ] ) in order to reexpress in terms of spatial derivatives of the main variables .the boundary conditions ( [ eq : maxdissweyl ] ) correspond to the conditions imposed by friedrich and nagy .in the symmetric hyperbolic system considered in , where the components of the weyl tensor are evolved as independent fields , these boundary conditions arise naturally when analyzing the structure of the equations since they give rise to maximally dissipative boundary conditions .in particular , a well posed initial - boundary value problem incorporating the condition ( [ eq : maxdissweyl ] ) is derived in .in contrast to this , the boundary conditions ( [ eq : maxdissweyl ] ) are not maximally dissipative for our symmetric hyperbolic system since they are not even algebraic .+ the conditions ( [ eq : maxdissweyl ] ) can also be expressed in terms of the newman - penrose scalars and with respect to a null tetrad which is adapted to the boundary in the following sense : let denote the future - directed unit normal to the slices .together with the above vectors , , it forms a tetrad . from this , we construct the following newman - penrose null tetrad : then , we find and the boundary condition ( [ eq : maxdissweyl ] ) is simply where the star denotes complex conjugation ( we could generalize this boundary condition by allowing for complex values of ) .notice that and are not uniquely determined by the unit normals and : with respect to a rotation of , about the angle , these quantities transform through , , the factor reflecting the spin of the graviton .however , the boundary condition ( [ eq : maxdissweylbis ] ) is indeed invariant with respect to such transformations . in our simulations below , we shall choose corresponding to an outgoing radiation condition .+ finally , as discussed in the introduction , and represent , respectively , in- and outgoing radiation when evaluated at null infinity and are gauge - invariant quantities for linearizations about a kerr background .we now analyze the well posedness of the ibvp defined by the evolution equations ( [ eq : ndot],[eq : gdot],[eq : kdot],[eq : ddot],[eq : adot ] ) , the cpbc ( [ eq : cpbc ] ) and the boundary conditions ( [ eq : gauge ] ) and ( [ eq : maxdissmodes ] ) or ( [ eq : gauge ] ) and ( [ eq : maxdissweyl ] ) .we derive necessary conditions for well posedness by verifying a certain determinant condition in the high frequency limit .we assume that the parameters are such that the evolution equations are strongly hyperbolic since otherwise the problem is ill posed even in the absence of boundaries , and such that the constraint propagation system is symmetric hyperbolic .let be a point on the boundary . 
by taking the high frequency limitwe obtain the associated frozen coefficient problem at .after rescaling and rotating the coordinates if necessary , we can achieve that , and that the domain of integration is . in this way, we obtain a constant coefficient problem on the half space . introducing the operator , it is given by notice that this system is equivalent to the one that one would obtain by linearizing the evolution equations around flat spacetime in a slicing with respect to which the three metric is flat , the lapse is one and the shift is constant and tangential to the boundary , but not necessarily zero .since we have a linear constant coefficient problem on the half plane , we can solve these equations by means of a laplace transformation in time and a fourier transformation in the and directions .that is , we write the solution as a superposition of solutions of the form , where with , , and .( notice that for such solutions , . ) substituting this into eqs .( [ eq : linkij],[eq : lindkij],[eq : linai ] ) one obtains a system of ordinary differential equations coupled to algebraic conditions . since there are six in- and six outgoing modes , there are twelve independent differential equations .the remaining equations which are algebraic can be used in order to eliminate the characteristic fields which have zero speeds , and one ends up with a closed system of twelve linear ordinary differential equations . because the system is strongly hyperbolic we expect exactly six linearly independent solutions that decay as , and six solutions that blow up as .since we require the solution to lie in we only consider the six decaying solutions .the determinant condition consists in verifying that the boundary conditions with homogeneous boundary data annihilate these six solutions .if the determinant condition is violated , the problem admits solutions of the form for some , where can be arbitrarily large , and the system is ill posed since as for each fixed .thus , the determinant condition is necessary for the well posedness of the ibvp and , as we will see , will yield nontrivial conditions .a convenient way for finding the six decaying solutions is to look at the second order equation for , which is a consequence of eqs .( [ eq : linkij],[eq : lindkij],[eq : linai ] ) , where the coefficients , and are given in eqs .( [ eq : coeffa],[eq : coeffb],[eq : coeffc ] ) .we show below that there are exactly six solutions to this second order system which have the form , with and such that decays as . since in fourier - laplace space the operator is just multiplication with the nonzero factor , corresponding solutions to the original systemcan be obtained by determining and from eqs .( [ eq : lindkij ] ) and ( [ eq : linai ] ) , respectively .in order to find the decaying solutions of eq .( [ eq : secondkij ] ) we make the ansatz , where here and in the following we set for simplicity , and where is a complex number with negative real part to be determined .it is also convenient to introduce a unit two - vector which is orthogonal to , and to decompose in the components , , , , and . 
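before carrying this out for the full system, it may help to see the determinant condition on the simplest possible half-space model: the one-dimensional wave equation written as a first-order system on x ≥ 0. this toy problem is of course not the system of this paper; it is only meant to show the mechanics of the check.

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # u = (v, w), u_t = A u_x on x >= 0

def decaying_mode(s):
    """Mode e^{s t + kappa x} with Re(kappa) < 0 for Re(s) > 0."""
    lams, V = np.linalg.eig(A)                 # s u0 = kappa A u0  =>  kappa = s / lambda
    k = int(np.argmin((s / lams).real))        # pick the decaying branch
    return s / lams[k], V[:, k]

def determinant_condition(B, s_samples, tol=1e-12):
    """The homogeneous boundary condition B u(0) = 0 must not annihilate a
    nontrivial decaying mode for any s with Re(s) > 0."""
    return all(abs(B @ decaying_mode(s)[1]) > tol for s in s_samples)

s_vals = [1.0, 0.3 + 2.0j, 5.0 - 1.0j]
print(determinant_condition(np.array([1.0, 0.0]), s_vals))  # data on v: passes
print(determinant_condition(np.array([1.0, 1.0]), s_vals))  # data on the outgoing field: fails
```

with this toy picture in mind, we return to the exponential ansatz made above for the full system.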
using this ansatz , eq .( [ eq : secondkij ] ) splits into the following two decoupled systems , where the matrices and are given by and , , are defined in eqs .( [ eq : lam1],[eq : lam2],[eq : lam3 ] ) .the matrix has the eigenvalue - eigenvector pair the two vectors are always linearly independent from each other since .this yields the solution where , are two constants and where here and in the following , ( ) where the branch of the square root for which for is chosen .similarly , after obtaining the eigenvalues and eigenvectors of one obtains the solution where therefore , we have obtained six linearly independent solutions which decay exponentially as and thus lie in .they are parameterized by the constants ( which depend on and ) , ... , .a necessary condition for the ibvp to be well posed is that these constants are uniquely determined by the boundary data . before checking this condition , it is instructive to have a closer look at the six - parameter family of solutions given by eqs .( [ eq : k1 ] ) and ( [ eq : k2 ] ) .let us first compute the momentum constraint variable : it has the form where where if and otherwise . thus , for this family of solutionsif and only if .in other words , the three - parameter subfamily of solutions parametrized by , and are _ constraint - violating _ modes .next , consider an infinitesimal coordinate transformation parametrized by a vector field , and assume zero shift for simplicity .with respect to such a transformation , the linearized lapse and extrinsic curvature change according to on the other hand , the linearization of eq.([eq : ndot ] ) around a minkowski background for all , which is satisfied by the time - harmonic slicing condition adopted in our simulations .] yields we see that the choice leaves eq .( [ eq : ndotlin ] ) invariant and induces the transformation while and remain invariant .therefore , it is possible to gauge away the solution parametrized by and we call this solution a _ gauge _ mode from hereon .the remaining family of solutions parametrized by and are _ physical _ modes : they satisfy the constraints and and are gauge - invariant .next , we verify that the integration constants , ... , are uniquely determined by the boundary conditions .first , we notice that the expressions ( [ eq : ceta],[eq : cx],[eq : comega ] ) for the fourier - laplace transformation of the momentum constraint yield a three - parameter family of solutions for the constraint propagation system , eq .( [ eq : evolconstr ] ) . since this system is symmetric hyperbolic and since we specify homogeneous maximally dissipative boundary conditions for it ( see eq .( [ eq : cpbc ] ) ) , the corresponding ibvp is well posed . in particular , zero is the only solution with trivial initial data .this implies that ,[eq : cx],[eq : comega ] ) into the fourier - laplace transformed of the cpbc ( [ eq : cpbc ] ) . ] .we stress that such a conclusion can not be drawn if the constraint propagation is strongly but not symmetric hyperbolic , see ref . for a counterexample . 
next , using eqs .( [ eq : lindkij ] ) and ( [ eq : linai ] ) , we find the following expressions for the relevant characteristic fields at the boundary ,\\ \hat{v}^{(\pm)}_{ab } & = & \hat{k}_{ab } \mp \frac{1}{\omega z}\left [ \partial_x\hat{k}_{ab } - ( 1+\zeta)\partial_{(a } k_{b)x } \right]^{tf},\\ \hat{w}^{(\pm)}_{ab } & = & \omega z\hat{k}_{ab } \mp \left [ \partial_x\hat{k}_{ab } - \partial_{(a } k_{b)x } \right]^{tf } + \frac{1}{\omega z}\left [ -2\sigma\partial_a\partial_b k + \xi\partial_{(a } c_{b ) } \right]^{tf},\\\end{aligned}\ ] ] where and where ^{tf} ] of side length . herewe consider initial data corresponding to flat space and add a random perturbation to it .therefore , initially , the fields are chosen to be where the different quantities are random numbers which are uniformly distributed in ] .note that the hatted quantities vanish if .next , we replace the gradient of the logarithm of the lapse by a new field minus a corresponding constraint variable , and rewrite using eqs .( [ eq : subst1 ] ) and ( [ eq : subst2 ] ) , we rewrite eq .( [ eq : fourricci ] ) as where groups together the four ricci tensor ( which vanishes in vacuum ) and the constraint variables .an evolution equation for in vacuum can be obtained from the identity ( [ eq : kijid ] ) by setting to zero . however , in order to obtain a strongly hyperbolic evolution system , one needs to set equal to suitable combinations of the constraint variables ( see below ) .next , in order to obtain evolution equations for the new fields and , we apply the operator on eqs .( [ eq : splitchris],[eq : splitgradnabn ] ) and use the commutation relation t_{i_1 i_2 ... i_r } & = & \frac{\partial_k n}{n } \partial_0 t_{i_1 i_2 ... i_r } \nonumber\\ & + & \frac{1}{n}\left ( t_{s i_2 ... i_r } \partial_k\partial_{i_1 } \beta^s + ... + t_{i_1 i_2 ...i_{r-1 } s } \partial_k\partial_{i_r } \beta^s\right ) , \label{eq : comrel}\end{aligned}\ ] ] for any -rank symbol , where the lie derivative of is formally defined by using the evolution equations and for the three metric and the lapse , respectively , we obtain where finally , we rewrite the hamiltonian and momentum constraint .let be the einstein tensor , and let denote the contraction with the vector field .then , we have where the expressions for and are given by eqs .( [ eq : c],[eq : ci ] ) and where the main evolution equations are , , and eqs .( [ eq : kijid],[eq : dkijid],[eq : aiid ] ) where one sets the quantities to zero . with this informationit is not very difficult to find the constraint propagation system using the commutation relation ( [ eq : comrel ] ) and the twice contracted bianchi identities ( written in 3 + 1 form ) substituting , and using the equations , and and eqs .( [ eq : lambdaij],[eq : lambdakij],[eq : lambdai ] ) , a lengthy but straightforward calculation yields } + \frac{1}{2 } c_{ijkl } - \zeta c_{k(ij)l } \right ) \nonumber\\ & - & g^{is}\partial_s c_{ij}^{(a ) } + l.o.,\\ \partial_0 c_{kij } & = & l.o.,\\ \partial_0 c_{lkij } & = & \frac{\eta}{2}\left ( g_{i[k } \partial_{l ] } c_j + g_{j[k } \partial_{l ] } c_i \right ) + \chi\ , g_{ij } \partial_{[l } c_{k ] } + l.o.,\\ \partial_0 c_k^{(a ) } & = & l.o.,\\ \partial_0 c_{ij}^{(a ) } & = & \xi\,\partial_{[l } c_{k ] } + l.o.,\end{aligned}\ ] ] where we have defined }^{(a)}) ] and the matrix is well defined . 
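for completeness, the random perturbation of flat-space data used in the robust stability test described above can be generated along the following lines; the field names, array layout and the symmetrization of the tensorial fields are illustrative choices and not the production code of this paper.

```python
import numpy as np

def perturbed_flat_data(n, eps, seed=0):
    """Flat-space data (lapse 1, identity three-metric, all other fields zero)
    plus independent uniform noise in [-eps, eps] on an n**3 grid."""
    rng = np.random.default_rng(seed)
    noise = lambda *shape: eps * (2.0 * rng.random(shape) - 1.0)
    sym = lambda T: 0.5 * (T + np.swapaxes(T, -1, -2))        # symmetrize last two indices
    N = 1.0 + noise(n, n, n)                                  # lapse
    g = np.eye(3) + sym(noise(n, n, n, 3, 3))                 # three-metric
    K = sym(noise(n, n, n, 3, 3))                             # extrinsic curvature
    d = sym(noise(n, n, n, 3, 3, 3))                          # d_{kij}, symmetric in (i, j)
    A = noise(n, n, n, 3)                                     # A_i
    return N, g, K, d, A
```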
using this ,compute + ( 2\sigma + \xi)\omega ( b_n - d_n ) , \\a_a & = & v_a^{(0 ) } + \xi\omega(b_a - d_a ) , \\d_{nab } & = & -2\hat{d}_{ab } + ( 1+\zeta ) d_{(ab)n } + \delta_{ab}\left [ \gamma(b_n - d_n ) - d^c_c \right ] , \\d_{nna } & = & \zeta^{-1}\left [ 2 d_{na } - \frac{1}{2}(1+\zeta)\delta^{bc}(d_{bca } - d_{abc } ) + a_a \right ] + b_a - \frac{1}{2}\ , d_a\ ; , \\d_{nnn } & = & 2\zeta^{-1}\left [ d_{nn } -\frac{1}{2}(1-\zeta ) b_n + \frac{1}{2}\ , d_n + a_n - \frac{\gamma}{2}\ , ( b_n - d_n ) \right].\end{aligned}\ ] ] from which one can compute the components of and with respect to the orthonormal basis .finally , one obtains the coordinate components of the main variables by contracting with the dual basis , which is defined by , : this appendix we show explicitly that the linearized ibvp with weyl control can not be well posed in if the parameter defined in eq .( [ eq : omega ] ) is one ; independent on whether or not the determinant condition is satisfied . in order to see this , we consider the following family of solutions : let be a one - form that satisfies , , and , and let it is not difficult to check that these expressions satisfy the evolution equations ( [ eq : linkij],[eq : lindkij],[eq : linai ] ) and the constraints , } = 0 $ ] , . for systems without boundaries ,these solutions are trivial if appropriate fall off conditions on the fields are demanded since then the harmonic condition on implies that it must vanish. however , if boundaries are present , may be nontrivial .the electric and magnetic components of the linearized weyl tensor corresponding to the solutions ( [ eq : sol ] ) are and vanishes if the one - form is closed .in particular , this is true if is exact , i.e. if for some time - independent harmonic function . in this case , we also have therefore , if , the family of solutions ( [ eq : sol ] ) with and harmonic and time - independent shows that the linearized ibvp with the boundary conditions ( [ eq : cpbc],[eq : gauge],[eq : maxdissweyl ] ) is not well posed in since then the boundary conditions are satisfied with homogeneous data and since the initial data depends only on second derivatives of whereas depends on third derivatives of for .this results in frequency dependent growth of the solution of the form , where is a characteristic wave number of the initial data .a. m. abrahams _ et al . _[ binary black hole grand challenge alliance collaboration ] , gravitational wave extraction and outer boundary conditions by perturbative matching , phys .* 80 * ( 1998 ) 18121815 .s. frittelli and r. gomez , boundary conditions for hyperbolic formulations of the einstein equations , _ class .grav . _ * 20 * ( 2003 ) 23792392 , einstein boundary conditions for the 3 + 1 einstein equations , _ phys .d _ * 68 * ( 2003 ) 044014 , einstein boundary conditions in relation to constraint propagation for the initial - boundary value problem of the einstein equations , _ phys . rev .d _ * 69 * ( 2004 ) 124020 , einstein boundary conditions for the einstein equations in the conformal - traceless decomposition , _ phys .d _ * 70 * ( 2004 ) 064008 .m. alcubierre , g. allen , b. brugmann , e. seidel and w. m. suen , towards an understanding of the stability properties of the 3 + 1 evolution equations in general relativity , phys .d * 62 * ( 2000 ) 124011 .g. calabrese , l. lehner , d. neilsen , j. pullin , o. reula , o. sarbach and m. tiglio , novel finite - differencing techniques for numerical relativity : application to black hole excision , class .quant . 
grav .* 20 * ( 2003 ) l245l252 . | outer boundary conditions for strongly and symmetric hyperbolic formulations of einstein s field equations with a live gauge condition are discussed . the boundary conditions have the property that they ensure constraint propagation and control in a sense made precise in this article the physical degrees of freedom at the boundary . we use fourier - laplace transformation techniques to find necessary conditions for the well posedness of the resulting initial - boundary value problem and integrate the resulting three - dimensional nonlinear equations using a finite - differencing code . we obtain a set of constraint - preserving boundary conditions which pass a robust numerical stability test . we explicitly compare these new boundary conditions to standard , maximally dissipative ones through brill wave evolutions . our numerical results explicitly show that in the latter case the constraint variables , describing the violation of the constraints , do not converge to zero when resolution is increased while for the new boundary conditions , the constraint variables do decrease as resolution is increased . as an application , we inject pulses of `` gravitational radiation '' through the boundaries of an initially flat spacetime domain , with enough amplitude to generate strong fields and induce large curvature scalars , showing that our boundary conditions are robust enough to handle nonlinear dynamics . we expect our boundary conditions to be useful for improving the accuracy and stability of current binary black hole and binary neutron star simulations , for a successful implementation of characteristic or perturbative matching techniques , and other applications . we also discuss limitations of our approach and possible future directions . |
observations of secondary eclipses in exoplanetary systems , starting with hd 209458b and tres-1 , have made it possible to measure the integrated day - side flux of hot jupiters ( for a review of transiting exoplanet science see * ? ? ?* ) . by carefully studying of the shape of the ingress and egress of secondaryeclipses , it should eventually be possible to map the day - side of such planets . to characterize the planet s longitudinal temperature profile at all longitudes , however , observations must be made at a variety of points in the planet s orbit .the first successful observations of this sort were reported by , and .the unprecedented quality of the data in and made it possible to model the planet not merely as day and night hemispheres , but to divide the planet into longitudinal slices , hence producing the first ( albeit coarse ) maps of an exoplanet under restricted assumptions . in this paperwe elaborate on the inversion techniques which one can use to obtain a longitudinal brightness map from the light curve of phase variations .such maps promise to be powerful diagnostic tools for simulations of hot jupiter atmospheric dynamics because they are nearly model - independent .the detailed study of coupled radiative transfer and dynamics in the atmospheres of hot jupiters is a tremendously complex science ( for a current review of the field , see * ? ? ?certain models of hot jupiter atmospheres predict significant variability in the integrated brightness of the planets ( * ? ? ?* and references therein ) .although it may be possible to glean useful information from the phase function light curves of planets with such variable atmospheres , we elect in this letter to ignore time variability , in line with the detailed modeling of . we also choose to ignore limb darkening / brightening since most atmospheric models of hot jupiters predict infrared photon absorption lengths much shorter than the scale height of temperature variations .in any case , the presence of limb darkening would not significantly change the present analysis since the limb does not contribute much to the integrated light of the planet ( we verify this in 3.4 ) .star spots can plague phase function observations since they change the overall brightness of a planet / star system as they rotate into and out of view .fortunately , star spots vary on longer timescales ( ) than the orbital periods of currently known transiting exoplanets ( ) , and have larger variations in the optical / near - ir which can be used to characterize and subtract the star spot variation ( eg : * ? ? ?furthermore , some instruments on the spitzer space telescope exhibit detector ramps which distort phase function observations , especially near the beginning of a time series ( eg : * ? ? ?* ) . both of these observational challenges can be overcome through longer observations : full orbits or more .for the purposes of this letter , we assume that any and all variations in the infrared brightness of an extrasolar planetary system are due to changes in the flux from the planet as different portions of the planet rotate in and out of view . 
stellar variability , which contributes to the photometric scatter in the light curve , will likely be the limiting factor in future phase function analysis .the mapping technique described here is predicated on the known rotation rate of the planet .as such , it is only applicable to tidally locked planets , where the rotation and orbital periods are identical .most hot jupiters are tidally locked so this is not a problematic restriction .eccentric hot jupiters are thought to be in pseudo - synchronous rotation , in which case their rotational period can be derived from their orbital parameters .it is unlikely that eccentric planets have steady - state atmospheric dynamics and hence , although the light curve inversion methodology described in this letter may be applicable , it is not clear that the result would be a `` map '' .the object of this letter is to demonstrate how to transform an observed light curve , , into a longitudinal brightness map of the planet , where the planet s phase , , is the angle in the plane of the planet s orbit between the planet , its host star and the planet s position at superior conjunction ( at superior conjunction ; at inferior conjunction ) . for edge - on systems, corresponds to the observer planet star angle ( at secondary eclipse ; at transit ) . in the interest of simplicity , we ignore the dips in light due to transit and secondary eclipse which occur in transiting systems , although these are crucial in pinning down the absolute flux of the planet ( for non - transiting planets is only known to within an additive constant ) . if and are the longitude and latitude on the planetary disk as seen by an observer and is the intensity map of the planet , the total flux from the planet as seen by that observer is : for a tidally locked planet , it is possible to define longitude , , and latitude , , in the planet s rotating frame , such that at the sub - stellar point , at the planet s north pole , and increases in the direction of rotation of the planet .the rotation that relates to can be expressed in terms of euler angles : , , , where is the inclination of the planet s orbit ( for a face - on orbit ; for an edge - on orbit ) .the rotation leads to a system of three coupled equations which can be solved for and . by inserting these expressions into equation [ flux_equation ] ,one obtains an equation for the observed flux from the planet as a function of the specific intensity of the planet at different longitudes and latitudes in effect , a transformation from a brightness map to a light curve : . although the formalism above is sufficient to produce a light curve from a two - dimensional planetary map , it is instructive to further constrain the problem by considering planets with inclinations of _ precisely _ , with the understanding that the resulting solutions should be approximately correct for transiting planets . in the _ worst - case _ scenario of a transiting system with and constant in ,the edge - on approximation leads to 5% errors in . 
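a direct way to evaluate the flux integral above for arbitrary inclination is brute-force quadrature over the planetary surface: locate the sub-observer point implied by the orbital phase and the inclination, weight each surface element by its projected area, and sum. the sketch below assumes the standard geometric weighting (a visibility factor max(0, mu), with mu the cosine of the angular distance to the sub-observer point) and a particular sign convention for the drift of the sub-observer longitude; it illustrates the formalism and is not the code used for the results quoted below.

```python
import numpy as np

def phase_flux(I, alpha, inc, n_lat=90, n_lon=180):
    """Integrated flux from a tidally locked planet with surface map I(phi, theta).

    I     : vectorized callable, intensity as a function of longitude and latitude
    alpha : orbital phase (0 at superior conjunction)
    inc   : orbital inclination (pi/2 for edge-on)
    """
    phi = np.linspace(-np.pi, np.pi, n_lon, endpoint=False)
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
    P, T = np.meshgrid(phi, theta)
    # sub-observer point: the sub-stellar point faces the observer at alpha = 0;
    # the sign of its longitude drift depends on the sense of rotation
    phi_o, th_o = -alpha, np.pi / 2 - inc
    mu = np.sin(T) * np.sin(th_o) + np.cos(T) * np.cos(th_o) * np.cos(P - phi_o)
    w = np.maximum(mu, 0.0) * np.cos(T)            # visibility times surface element
    dA = (2 * np.pi / n_lon) * (np.pi / (n_lat - 1))
    return np.sum(I(P, T) * w) * dA
```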
for edge - on orbits , the rotation relating to to and .the expression for can then be written as there are no current observations which can constrain the -dependence of , but for edge - on orbits the latitudinal dependence of the intensity is unimportant since one can define , which represents the flux contribution from an infinitesimal slice of the planet when viewed face - on .the integrated flux from the planet at a given point in its orbit is then it is useful to think of this integral as a convolution , with the piece - wise defined kernel : the kernel , which represents the response of the phase function to a delta function in , is very broad , with a full width at half - maximum of .in the previous section we developed an analytic expression , equation [ edge_on_integral ] , for the convolution . there is no closed expression for the deconvolution , , and there is no guarantee that solutions are unique .furthermore , two problems arise in practice : the light curve is only sampled at discrete values of which may not span a full planetary rotation , and the measurements of are not arbitrarily precise but instead have associated uncertainties . given these realities , it is useful to develop model maps which simplify the integral in equation [ edge_on_integral ] and then use numerical methods to solve the problem or better yet allow for direct inversion .the planet is divided into longitudinal slices of width .each slice has a uniform intensity in both longitude and latitude .this flux distribution is not ruled out by the observations and more importantly smoothing the steps does not significantly change the light curve , provided the total flux from each slice and their brightness - weighted longitude are unchanged .the slice is centered on , where the phase offset , , is useful to accommodate slight discrepancies between the light curve maximum and superior conjunction .the intensity map for the planet is given by : since , in practice , one is only ever concerned with comparing the model phase function to data at a finite number of discrete phases , the transformation from n - slice map to light curve can be expressed in matrix form : .the matrix is defined as , where and represent the leading and trailing edges of the slices : \right).\ ] ] sinusoidal basis maps have the advantage of producing sinusoidal light curves _ of the same frequency and phase offset_. if a planet map is composed of sinusoids , , the light curve is simply given by .the coefficients of are related to those of by : where must be even .sinusoidal modes with odd ( other than ) do not have a phase function signature . in figure [ sinusoidal_kernel ]we show the light curve contributions for a handful of sinusoidal modes , assuming that all of the modes have the same amplitude in .the higher frequency modes are strongly suppressed due to the broad smoothing kernel .this low - pass filter limits the number of modes which can be meaningfully fit with a given light curve .the uncertainty in a sinusoidal mode in the light curve is related to the uncertainty in the map sinusoidal modes by equation [ sinusoidal ] ( eg : ) . ) are invisible due to symmetry.,width=317 ] since both the n - slice and sinusoidal models described above provide computationally efficient ways to generate light curves from maps , one can use a fitting routine ( markov chain monte carlo , levenberg - marquardt , etc . 
) to produce a map from a given light curve .it is simply necessary to demand that be strictly positive .these techniques have the advantage of naturally producing error estimates for the resulting map .although a unique best - fit map can always be determined in this way , the uncertainty in the fit parameters may be very large if the number of parameters is not commensurate with the signal - to - noise of the light curve . for the n - slice model , the uncertainty balloons for _ all _ the if too many free parameters are used .we therefore suggest running multiple fits with different numbers of slices .when the addition of a slice does not improve the , one has achieved the best model that the data can support . instead of repeating the fit with fewer slices, one can apply smoothing to the map in the form of a bayesian prior ( eg : * ? ? ?* ) . for well - designed priors, however , there is no fundamental difference between models with large and long smoothing lengths , versus models with smaller and little or no smoothing . in the interest of simplicitywe recommend using fewer slices rather than smoothing .an observed light curve can be quickly deconvolved into sinusoidal maps by determining its fourier components , , and , then converting them via the equation [ sinusoidal ] .it is expedient to assume that there is no power in the odd modes ( other than ) to avoid degenerate solutions , but this may lead to a systematic error in the model map , depending on how much power is present in these modes in the real map .the uncertainty in the map parameters may be determined using a routine or monte carlo analysis .since the sinusoidal model has linearly independent modes , only the uncertainty in the highest - frequency modes explodes when too many modes are considered . as a rule of thumb , one should truncate the fourier series once the uncertainty in coefficients becomes greater than the coefficients themselves .we now turn to an example map and test the ability of the algorithms described above to recover the correct features of this map .the top panel of figure [ map_cho_03 ] shows the brightness map computed from a snapshot of the atmospheric dynamics model of .we performed an analogous test and obtained comparable results using a snapshot from the model of , which we do not include here in the interest of space .the brightness map was generated by treating each pixel of the temperature map as a blackbody and computing an associated intensity at m .the bottom panel of figure [ map_cho_03 ] shows the integrated longitudinal brightness map , .note that the term in the integral for attenuates the flux contribution from the poles of the planet . 
also shown in the bottom panelare the best fit maps for the n - slice and sinusoidal models .the light curve associated with the map , as well as the best - fit light curves for the models , are shown in figure [ lc_cho_03 ] .the map was converted to an idealized light curve , and mock observations ( comprised of 100 data points ) were generated by removing the segments of the light curve corresponding to the transit and secondary eclipse , then scaling the planet / star flux ratio and the photometric uncertainties to roughly match those of .both models reproduce the features of the map , as well as the light - curve .the best - fit 5-slice model was determined using a levenberg - marquardt -minimization routine .the sinusoidal map was determined by decomposing the light curve into sinusoidal components to , then converting the coefficients using equation [ sinusoidal ] . the uncertainties estimated by monte carlo analysis in the terms are larger than their amplitudes and are therefore ignored .the insensitivity of phase functions to the first and most important odd mode , , leads to % errors in the resulting map , based on the maps of and .we model the effect of limb darkening by adding ] to the integrand of equation [ edge_on_integral ] . we find the resulting light curve to differ by less than % , justifying our decision to neglect limb darkening in the formalism above .k and k. in the lower panel , the solid line is , the histogram and associated error bars represents the best - fit 5-slice model , and the gray band is the confidence interval for the sinusoidal map . note that positive are to the left of the plot , to facilitate comparison with the light curve in figure [ lc_cho_03].,width=317 ] , convolved with photometric scatter comparable to .the solid and dashed lines shows the light curves of the best - fit n - slice and sinusoidal models , respectively.,width=317 ]the best current light curves can be represented by the function which can be directly translated into a longitudinal brightness map using equation [ sinusoidal ] .the modes should cancel out by symmetry so their presence in the light curve would indicate systematic errors ; modes are generally lost in the noise .the longitudes and amplitudes of the primary hot - spots and cold - spots can be determined from the terms , while the relative strength of the and modes indicates whether there are secondary local maxima / minima . by the same token ,a 4-slice model with variable phase offset should be sufficient to model most phase function light curves . for light curves with incomplete phase coverage , the uncertainty in the sinusoidal map is the same at all longitudes whereas the n - slice model naturally has larger uncertainties for slices which were visible for less time , an intuitive and desirable property .so far only half - orbits have been allocated to phase function studies , but the warm spitzer mission will provide a perfect opportunity to obtain light curves spanning full planetary orbits . the contribution of sinusoidal modes to the observed light curve decreases precipitously with . not only does the transformation suppress the terms as , but the intrinsic power of these modes in the underlying map might drops as ( as is the case with themap from * ? ? ?* ) . as a result, the modes in light curves will be times weaker than the modes and will likely remain undetectable with jwst . 
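the low-pass behaviour described above, and the invisibility of the odd map modes beyond n = 1, can be verified directly by quadrature. the kernel is taken here in its standard projected-area form max(0, cos(phi + alpha)); the normalization and sign conventions may differ from those used in the figures.

```python
import numpy as np

phi = np.linspace(-np.pi, np.pi, 4001)

def lightcurve_amplitude(n):
    """Peak amplitude with which the map mode cos(n * phi) appears in the
    phase curve, relative to the uniform (n = 0) mode."""
    alphas = np.linspace(0.0, 2 * np.pi, 181)
    F = [np.trapz(np.maximum(0.0, np.cos(phi + a)) * np.cos(n * phi), phi)
         for a in alphas]
    F0 = np.trapz(np.maximum(0.0, np.cos(phi)), phi)   # the n = 0 mode
    return np.max(np.abs(F)) / F0

for n in range(6):
    print(n, round(lightcurve_amplitude(n), 3))
# n = 1 and n = 2 survive, odd n >= 3 vanish, higher even modes fall off quickly
```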
even when higher quality light curves are eventually obtained and assuming that stellar variability is not the limiting factor the physical significance of the model map would be questionable due to the insensitivity of the phase function to odd sinusoidal modes .the power in these modes can be constrained theoretically by dynamical atmospheric models and observationally through secondary eclipse mapping , which promises to be feasible with jwst .n.b.c . is supported by the natural sciences and engineering research council of canada .is supported by a national science foundation career grant .support for this work was provided by nasa through an award issued by jpl / caltech .the authors wish to thank e. rauscher and a. showman for use of their model temperature maps ., d. , agol , e. , charbonneau , d. , cowan , n. , knutson , h. , & marengo , m. 2007 , in american institute of physics conference series , vol .943 , american institute of physics conference series , ed .l. j. storrie - lombardi & n. a. silbermann , 89100 | we describe how to generate a longitudinal brightness map for a tidally locked exoplanet from its phase function light curve . we operate under a number of simplifying assumptions , neglecting limb darkening / brightening , star spots , detector ramps , as well as time - variability over a single planetary rotation . we develop the transformation from a planetary brightness map to a phase function light curve and simplify the expression for the case of an edge - on system . we introduce two models composed of longitudinal slices of uniform brightness , and sinusoidally varying maps , respectively which greatly simplify the transformation from map to light curve . we discuss numerical approaches to extracting a longitudinal map from a phase function light curve , explaining how to estimate the uncertainty in a computed map and how to choose an appropriate number of fit parameters . we demonstrate these techniques on a simulated map and discuss the uses and limitations of longitudinal maps . the sinusoidal model provides a better fit to the planet s underlying brightness map , although the slice model is more appropriate for light curves which only span a fraction of the planet s orbit . regardless of which model is used , we find that there is a maximum of free parameters which can be meaningfully fit based on a full phase function light curve , due to the insensitivity of the latter to certain modes of the map . this is sufficient to determine the longitudes of primary equatorial hot - spots and cold - spots , as well as the presence of secondary maxima / minima . |
we consider a system of hamiltonian differential equations on defined by the hamiltonian function .we denote the time flow map of these equations by .the flow of these differential equations has two important features .the first is that is conserved along trajectories .that is , for all and all .the second is that phase space volume is conserved by the flow : if is a bounded open set , then for all .the latter property is a consequence of the symplecticity of the flow .( see , for example , ) . for certain molecular dynamics applications , the ideal numerical integrator would retain these two properties of the flow .that is , if the integrator with time step defines a map on , we would like both for all ( energy conservation ) and for all bounded open subsets of (volume conservation ) .symplectic integrators such as the implicit midpoint rule conserve volume exactly for all hamiltonian systems , with any number of degrees of freedom , but do not conserve energy .it has already been shown that in certain circumstances it is unreasonable to expect that symplectic integrators which also conserve energy exist . however , volume - conservation is a weaker property than symplecticity for hamiltonian systems of more than one degree of freedom ( ) .thus it seems plausible that there is a consistent integration scheme which for any hamiltonian function and time step yields a map which conserves both volume and energy . in this articlewe will argue that there is no such integration scheme .we do this by showing that no numerical integrator is able to integrate _ all _ hamiltonian systems in while simultaneously conserving energy and phase space volume . for special hamiltonian systems in an integrator is possible .( see for examples of such systems of any number of degrees of freedom . ) however , our theorem states that for any energy - conserving numerical integrator from a very broad class , there will be at least one hamiltonian system on ( and in fact very many ) for which it does not conserve phase space volume . before stating and proving our main theorem , in section [ sec : integrator ] we will define what we mean by an integrator . from the class of integratorswe then define the _ computationally reasonable _ integrators .this will include any explicit or implicit formula for defining a new state as a function of a state and a timestep such that the only information used about is its value and the value of its derivatives at a finite number of points , . in section [ sec : main ] we will state and prove our main result in terms of this definition . we imagine that a numerical analyst has devised a computationally reasonable numerical integrator that is energy - conserving for all hamiltonian systems in .we apply this integrator to the system with hamiltonian function in . for two different inputs we observeat which points the integrator depends on the function and its derivatives . using this information, we construct another hamiltonian function which is arbitrarily close to and has the following property : the numerical integrator can not be volume - conserving for both and .the main result demonstrates that there is no integration scheme that conserves volume and energy for arbitrary hamiltonian systems of any number of degrees of freedom .however , we can not conclude that there is no scheme that is energy- and volume - conserving for hamiltonian systems in a particular number of dimensions , .this question is open . 
to partially address this issue , in section [ sec : multi ]we will show that under some reasonable but not essential conditions on an integrator , the problem reduces to that of the case .thus , there is no energy- and volume - conserving integrator for fixed that satisfies these additional assumptions .we now explain the relation between the results here and those of the well - known paper of zhong and marsden . in that paperthe authors consider a hamiltonian system for which there are no invariants except energy .they show that any integrator that is both symplectic and energy - conserving actually computes exact trajectories of the system up to a time reparametrization . from this resultthey conclude that energy - conserving symplectic integration is not possible in general , since presumably the set of hamiltonians for which one could compute trajectories exactly up to a time reparametrization are very small .though this argument is very plausible , it leaves open two questions : 1 .how do we make precise the idea of something not being possible for any numerical integrator ? 2 .is it possible to perform energy - conserving and symplectic integration for general hamiltonians when we restrict ourselves to the case ?the importance of the latter question is that if it were possible , then volume- and energy - conserving integration would be possible for general hamiltonian systems via -splitting .we provide an answer to the first question in section [ sec : integrator ] with the definition of a computationally reasonable integrator . as for the second question , since volume - conservation and symplecticityare identical in , zhong and marsden s result shows that in volume- and energy - conserving integration is equivalent to solving the original system exactly up to a time - reparametrization .( this result is stated and proved for this case as lemma [ lem : reparam ] in section [ sec : main ] . )the main result of our paper answers the second question in the negative by showing that it is not possible for general hamiltonian systems in using computationally reasonable integrators .before we begin we discuss some interesting related work . even though energy and volume conserving integrators may not exist, the paper does the next best thing .there the authors show how to approximate any hamiltonian function arbitrarily well by a special piece - wise smooth function of a form described by whose trajectories can be integrated while conserving volume and energy .the original hamiltonian function is not conserved .however , unlike for standard symplectic methods , a modified hamiltonian function close to the original is conserved exactly for all time .an integrator for a hamiltonian system of ordinary differential equations takes a hamiltonian , a step length , and an initial value , and produces a value .typically , depends on through components of and perhaps itself . if is an approximation to , we take to be an approximation to . [defn : integrator ] an integrator is a function that takes arguments , , and either returns or is not defined .we write we have allowed the integrator to not be defined for certain input values .this is often the case for implicit integrators when the vector field is insufficiently smooth or the time step is too large . 
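as a concrete instance of definition [defn:integrator], the sketch below implements the implicit midpoint rule for a one-degree-of-freedom hamiltonian and checks numerically the two properties discussed in the introduction: the jacobian determinant of the step map is 1 (area conservation), while the energy is not exactly preserved. the pendulum hamiltonian and the fixed-point solver are illustrative choices only.

```python
import numpy as np

def f(z):                                   # pendulum: H(q, p) = p**2 / 2 - cos(q)
    q, p = z
    return np.array([p, -np.sin(q)])

def midpoint_step(z, h, iters=100):
    """Implicit midpoint rule z' = z + h f((z + z') / 2), solved by fixed-point
    iteration; it is symplectic, hence area conserving in one degree of freedom."""
    zp = z.copy()
    for _ in range(iters):
        zp = z + h * f(0.5 * (z + zp))
    return zp

def step_jacobian_det(z, h, eps=1e-6):
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = eps
        J[:, j] = (midpoint_step(z + e, h) - midpoint_step(z - e, h)) / (2 * eps)
    return np.linalg.det(J)

H = lambda z: 0.5 * z[1] ** 2 - np.cos(z[0])
z0, h = np.array([1.0, 0.3]), 0.1
print(step_jacobian_det(z0, h))             # ~ 1: phase-space area is conserved
print(H(midpoint_step(z0, h)) - H(z0))      # small but nonzero: energy is not
```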
in order to get meaningful constraints on what is computationally reasonable, we can not let arbitrary maps be included in the class of algorithms we study .after all , the exact flow map has all the qualitative features one could want , but it is not feasible to compute it ( even to machine precision ) for most applications .we would like our definition to be broad enough to include most existing numerical integrators .informally , we say an integrator is _ computationally reasonable _ if for each and , depends on only through its value and the value of its derivatives at a finite number of points , . in the following formal definition we use multi - index notation to define higher - order derivatives : for and we let [ defn : reasonable ] an integrator is _ computationally reasonable _ if for each , and there exists 1 . , 2 . , 3 . , such that for any function that satisfies either or both are not defined . in the remainder of this section we discuss examples of the class of computationally reasonable integrators .first note that all explicit methods fit into this class .we informally define an integrator to be explicit if it can be implemented by an algorithm that terminates in a finite number of steps using function evaluations of or its derivatives , arithmetic operations , and logical operations .this includes all of the explicit runge - kutta and partitioned runge - kutta methods , for example .it also includes integrators that collect information adaptively to perform a step , such as the bulirsch - stoer method .taylor series methods are also included .the class of computationally reasonable integrators also includes implicit methods , such as the implicit runge - kutta methods .here we define an implicit method to be one where is specified by requiring it to be the solution to a nonlinear system of equations in and its derivatives .implicit algorithms can not in general be implemented exactly in a finite number of steps , but they still fit into the framework of definition [ defn : reasonable ] . to see this , note that even though solving a system of nonlinear equations exactly typically requires looking at and its derivatives at an infinite number of points ( while performing the newton iteration , for example ) , determining if we have a solution to a nonlinear system of equations only requires examining a finite number of points .so if solves the equations for , it will still solve the equation for any which is identical to at the points .similarly , step - and - project methods are included in this class .these are methods that consist of one step of a simpler method followed by a projection onto a manifold ( * ? ? ?what integrators do not satisfy definition [ defn : reasonable ] ?integrators that require the exact computation of integrals of or its derivatives do not . computing the integral of a general functionrequires knowing its value at an infinite set of points on the domain of integration . 
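the two remarks above about explicit and implicit methods can be made concrete: an explicit runge-kutta step depends on the vector field only through a fixed, finite set of evaluation points, and for an implicit method a candidate solution can be verified by evaluating the defining residual at finitely many points. both functions below are standard and are only meant to illustrate definition [defn:reasonable].

```python
import numpy as np

def rk4_step(f, h, x):
    """Classical explicit Runge-Kutta step: the output depends on f only through
    its values at the four stage points x, x + h k1 / 2, x + h k2 / 2, x + h k3."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def midpoint_residual(f, h, x, y):
    """Residual of the implicit midpoint equation: checking that a candidate y is
    a solution only requires f at the single point (x + y) / 2, which is why
    implicit Runge-Kutta methods also satisfy definition [defn:reasonable]."""
    return y - x - h * f(0.5 * (x + y))
```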
unlike in the case of solving nonlinear equations ,if we are given a value for the integral , there is no finite set of points at which we can examine the function to verify that the value is correct .of course , it is possible to define an numerical integrator that uses integrals of functions , but the actual computation of these integrals for general hamiltonian systems would require numerical quadrature .this in turn would require sampling the function at a finite number of points , and introducing truncation error .the new method with this additional truncation error does form a computationally reasonable integrator , while the original method with the exact integral does not .finally , we note that multistep methods are not even integrators according to definition [ defn : integrator ] .we believe our framework could be extended to multistep method but we do not do so here .to prove our main result theorem [ thm : main ] we use the following lemma .it shows that for a hamiltonian system in , the map defined by an energy and area preserving integrator is just a time - reparametrization of the flow map .as discussed in the introduction , this is essentially zhong and marsden s result in the two dimensional case .the only addition is that we show that the time - reparametrization is just a constant rescaling of time where locally the constant does not depend on energy .[ lem : reparam ] let be a smooth function and its induced hamiltonian flow map .let be a particular energy .let be an open set whose intersection with is a simple curve .suppose that on .let , be a continuous area - conserving map defined on that conserves . then there is a constant such that for all .* proof : * on we can define canonical action - angle coordinates in which the hamiltonian function is .the flow map is then where , since we still have ^t = \nabla h \neq 0 ] , since conserves energy and is continuous on , the set is mapped onto itself .as conserves volume , lemma [ lem : reparam ] shows that it is identical to the flow of the original hamiltonian system on with a rescaling of time : for all , where does not depend .the consistency condition at implies that for small enough we have that . from now on , we assume is small enough so that .consider the integrator applied to at the point . since is computationally reasonable ( definition [ defn : reasonable ] ), there are a finite number of points , , such that only depends on at these points .choose a big enough so that the interval ] .consider the integrator applied to at the point .there are points such that only depends on at these points .let be a function such that 1 . for not in ] .so .this is a contradiction .therefore , can not be simultaneously defined , continuous , and volume - conserving on . the following lemma asserts the intuitively clear fact that if is positive for ] .let and .let and be the respective hamiltonian flow maps of and .then for . *the trajectory for the hamiltonian as a function of time is for all .letting describe the position for the hamiltonian , the usual solution technique gives the function is strictly increasing and so has a well defined inverse .we can write now for , so for .hence for and the two flow maps can not be equal for . the previous section we showed that there can be no general energy- and volume - conserving integration schemes because there are no integrators that conserve energy and volume for all hamiltonians in . 
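lemma [lem:reparam] is easy to visualize for the harmonic oscillator: any energy- and area-conserving map near a level set is the exact flow run for a rescaled time. the sketch below builds such a map explicitly; it conserves both invariants to round-off but is consistent only when the rescaling constant is 1, which is the tension exploited in the proof of theorem [thm:main]. the value c = 0.7 is an arbitrary illustrative choice.

```python
import numpy as np

def flow(z, t):
    """Exact flow of H(q, p) = (q**2 + p**2) / 2: a rotation of phase space."""
    c, s = np.cos(t), np.sin(t)
    return np.array([c * z[0] + s * z[1], -s * z[0] + c * z[1]])

def Psi(h, z, c=0.7):
    """Energy- and area-conserving map: the exact flow with time rescaled by c."""
    return flow(z, c * h)

H = lambda z: 0.5 * (z[0] ** 2 + z[1] ** 2)
z0, h = np.array([1.0, 0.2]), 0.05
print(H(Psi(h, z0)) - H(z0))    # zero up to round-off: energy is conserved
# the jacobian of a rotation has determinant 1, so area is conserved as well,
# yet Psi is not consistent with the true flow unless c = 1
```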
however ,suppose we ask if such integrators exists for hamiltonian systems of dimension , .we conjecture that a result like theorem [ thm : main ] still holds in this case . however , the method of proof for the case does not extend to this case .instead we will state two conditions on an integrator , either one of which prevents it from being volume- and energy - conserving for general hamiltonian systems in .both of these conditions are desirable for an integrator to have , but unlike computational reasonibility , it is not difficult to imagine a practical integrator that did not satisfy them .the proof of the theorems in this section will work by showing that , if a computational reasonable energy - conserving integrator with either condition exists , a special hamiltonian in can be constructed for which it does not conserve volume . 1 . is consistent ( definition 3 ) . is computationally reasonable ( definition 2 ) .3 . conserves energy for any , and for which it is defined . 4 . satisfies condition [ cond : untouch ] 5 . for sufficiently small , is defined , continuous , and conserves volume for .then for sufficiently small , there is a function such that if , then is not simultaneously defined , continuous , and volume - conserving on . for each such ,the constructed can be replaced by for and the same result holds .* proof : * we will use the hypothesized integrator on to construct an integrator on satisfying the conditions of theorem [ thm : main ] .let be a given hamiltonian function .define by .we define the integrator by let be given by it is straightforward to check that satisfies the conditions of theorem [ thm : main ] on .thus , by the theorem , we have an arbitrarily small function such that is not simultaneously defined , continuous , and volume - conserving on for .let for .now suppose that is defined , continuous , and volume preserving on .we will derive a contradiction by showing this implies that is , in fact , defined , continuous , and volume - conserving on .first note that being defined and continuous on implies that is defined and continuous on . to check volume conservation , note that the jacobian of the map has structure \ ] ] where we have put the variables in order and is a 2-by-2 matrix .since the determinant of this matrix is 1 by volume - conservation , the determinant of must be 1 .but is the jacobian of , so this latter map must be area preserving .this contradicts our earlier assumption . second condition states that if the hamiltonian system consists of identical uncoupled one - degree - of - freedom systems , then the integrator itself should consist of identical uncoupled maps on the state - space of each subsystem . though this is certainly a nice property for the integrator to have ( since the flow map has the same property ) there are many integrators for which it does not hold .for example , step - and - project methods may not satisfy this condition , even if the underlying one - step method does . 1 . is consistent ( definition 3 ) . is computationally reasonable ( definition 2 ) .3 . conserves energy for any , and for which it is defined . satisfies condition [ cond : prod ] 5 . for sufficiently small , is defined , continuous , and conserves volume for .then for sufficiently small , there is a function such that if then is not simultaneously defined , continuous , and volume - conserving on . 
for each such , the constructed can be replaced by for and the same result holds .* proof : * this theorem is proven analogously to the previous theorem . for any we define by .we define the integrator by we define by as in the proof of the previous theorem , and satisfy the conditions of theorem [ thm : main ] .thus , by the theorem , we have an arbitrarily small function such that is not defined , continuous , and volume - conserving on for .now is defined and continuous on . to check volume conservation , note that the jacobian of the map in this case has structure where we have put the variables in order and each is 2-by-2 .since the determinant of this matrix must be 1 by volume - conservation and the determinants of the are identical , the determinant of must be . as in the proof of the previous theorem , this implies is area conserving on which is a contradiction . | we consider the numerical simulation of hamiltonian systems of ordinary differential equations . two features of hamiltonian systems are that energy is conserved along trajectories and phase space volume is preserved by the flow . we want to determine if there are integration schemes that preserve these two properties for all hamiltonian systems , or at least for all systems in a wide class . this paper provides a negative result in the case of two dimensional ( one degree of freedom ) hamiltonian systems , for which phase space volume is identical to area . our main theorem shows that there are no computationally reasonable numerical integrators for which all hamiltonian systems of one degree of freedom can be integrated while conserving both area and energy . before proving this result we define what we mean by a computationally reasonable integrator . we then consider what obstructions this result places on the existence of volume- and energy - conserving integrators for hamiltonian systems with an arbitrary number of degrees of freedom . ordinary differential equations , numerical integration , hamiltonian systems , geometric integration , no - go theorems , volume - conservation , energy - conservation 65p10 |
in recent years , various ideas have been proposed to realize intelligent highway systems to reduce traffic congestions and improve safety levels .it is envisioned that navigation , communication and automatic driver assistance systems are critical components . a great deal of monitoring and controlling capabilitieshave been implemented through roadside infrastructures , such as cameras , sensors , and control and communication stations .such systems can work together to monitor in real time the situations on highways and at the same time guide vehicles to move in a coordinated fashion , e.g. to keep appropriate distances from the vehicles in front of and behind each individual vehicle .in intelligent highway systems , the guiding commands are expected to be simple and formatted as short digital messages to scale with the number of vehicles and also to avoid conflict with the automatic driver assistance systems installed within the vehicles .similar guided formation control problems also arise when navigating mobile robots or docking autonomous vehicles . motivated by this problem of guiding platoons of vehicles on highways, we study in this paper the problem of controlling a one - dimensional multi - agent formation using only _ coarsely _ quantized information .the formation to be considered are rigid under inter - agent distance constraints and thus its shape is uniquely determined locally .most of the existing work on controlling rigid formations of mobile agents , e.g. , assumes that there is no communication bandwidth constraints and thus real - valued control signals are utilized .the idea of quantized control through digital communication channels has been applied to consensus problems , e.g. and references therein , and more recently to formation control problems .the uniform quantizer and logarithmic quantizer are among the most popular choices for designing such controllers with quantized information . moreover , the paper has discussed krasowskii solutions and hysteretic quantizers in connection with continuous - time average consensus algorithms under quantized measurements .the problem studied in this paper distinguishes itself from the existing work in that it explores the limit of the least bandwidth for controlling a one - dimensional rigid formation by using a quantizer in its simplest form with only two quantization levels . as a result , for each agent in the rigid formation , at most four bits of bandwidth is needed for the communication with the navigation controller .the corresponding continuous - time model describing the behavior of the overall multi - agent formation is , however , non - smooth and thus an appropriate notion of solution has to be defined first .we use both the lyapunov approach and trajectory - based approach to prove convergence since the former provides a succinct view about the dynamic behavior while the latter leads to insight into the set of initial positions for which the proposed controller may fail .we also discuss some situations when different assumptions about the quantization scheme are made and indicate those scenarios in which the formation control problem with quantized information can be challenging to solve .the rest of the paper is organized as follows .we first formulate the one - dimensional guided formation control problem with coarsely quantized information in section [ se : formulation ] . 
then in section[ se : analysis ] , we provide the convergence analysis results first using the lyapunov method and then the trajectory - based method . simulation results are presented in section [ se : simulation ] to validate the theoretical analysis .we make concluding remarks in section [ se : conclusion ] .the one - dimensional guided formation that we are interested in consists of mobile agents .we consider the case when the formation is rigid ; to be more specific , if we align the given one - dimensional space with the -axis in the plane and label the agents along the positive direction of the -axis by , then the geometric shape of the formation is specified by the given pairwise distance constraints , , where are desired distances .although the guidance system can monitor the motion of the agents in real time , we require that it can only broadcast to the mobile agents quantized guidance information through digital channels .in fact , we explore the limit for the bit constraint by utilizing the quantizer that only has two quantization levels and consequently its output only takes up one bit of bandwidth .the quantizer that is under consideration takes the form of the following sign function : for any , each agent , modeled by a kinematic point , then moves according to the following rules utilizing the coarsely quantized information : [ quantized.n.agent.system ] x_1 & = & -k_1 ( x_1-x_2 ) ( |x_1-x_2|-d_1 ) + x_i & = & ( x_i-1-x_i ) ( |x_i-1-x_i|-d_i-1)- + & & k_i ( x_i - x_i+1 ) ( |x_i - x_i+1|-d_i ) , + & & i=2, ,n-1 + x_n & = & ( x_n-1-x_n ) ( |x_n-1-x_n|-d_n-1 ) where is the position of agent in the one - dimensional space aligned with the -axis , and are gains to be designed . notethat since each agent is governed by at most two distance constraints , as is clear from ( [ quantized.n.agent.system ] ) , a bandwidth of four bits is sufficient for the communication between the guidance system and the agents and the required bandwidths for the guidance signals for agents and are both 2 bits .hence , in total only bits of bandwidth is used .the main goal of this paper is to demonstrate under this extreme situation of using coarsely quantized information , the formation still exhibits satisfying convergence properties under the proposed maneuvering rules . towards this end , we introduce the variables of relative positions among the agents [ z ] z_ix_i - x_i+1,i=1,2, ,n-1 .let us express the system in the -coordinates to obtain [ quantized.n.agent.system.z ] z_1 & = & -(k_1 + 1 ) ( z_1 ) ( |z_1|-d_1 ) + & & + k_2 ( z_2 ) ( |z_2|-d_2 ) + z_i & = & ( z_i-1 ) ( |z_i-1|-d_i-1 ) + & & -(k_i+1 ) ( z_i ) ( |z_i|-d_i ) + & & + k_i+1 ( z_i+1 ) ( |z_i+1|-d_i+1 ) , + & & i=2, ,n-2 + z_n-1 & = & ( z_n-2 ) ( |z_n-2|-d_n-2 ) + [ 2 mm ] & & -(k_n-1 + 1 ) ( z_n-1 ) ( |z_n-1|-d_n-1 ) . to study the dynamics of the system above , we need to first specify what we mean by the solutions of the system .since the vector field on the right - hand side is discontinuous , we consider krasowskii solutions , namely solutions to the differential inclusion , where denotes the involutive closure of a set , and is the ball centered at and of the radius .the need to consider these solutions becomes evident in the analysis in the next section . 
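To make the closed-loop equations (quantized.n.agent.system) concrete, here is a forward-Euler simulation sketch. The gains k_i and the desired spacings d_i have their numerical values stripped in this text, so the values below are hypothetical choices of ours (the initial positions are the ones used in the paper's simulation section); sgn(0) = 0 is a convention of this sketch. Euler stepping of a discontinuous right-hand side only approximates a Krasowskii solution and will typically chatter near the switching surfaces, consistent with the sliding-mode discussion later in the paper.

# Hypothetical-parameter sketch of the quantized control law.
import numpy as np

def sgn(v):
    return np.sign(v)                        # two-level quantizer (0 on the switching set)

def euler_step(x, d, k, dt):
    """x: agent positions x_1..x_n ordered along the axis, d: spacings d_1..d_{n-1},
    k: gains k_1..k_{n-1} (the last agent uses no gain)."""
    n = len(x)
    dx = np.zeros(n)
    dx[0] = -k[0] * sgn(x[0] - x[1]) * sgn(abs(x[0] - x[1]) - d[0])
    for i in range(1, n - 1):
        dx[i] = (sgn(x[i - 1] - x[i]) * sgn(abs(x[i - 1] - x[i]) - d[i - 1])
                 - k[i] * sgn(x[i] - x[i + 1]) * sgn(abs(x[i] - x[i + 1]) - d[i]))
    dx[n - 1] = sgn(x[n - 2] - x[n - 1]) * sgn(abs(x[n - 2] - x[n - 1]) - d[n - 2])
    return x + dt * dx

x = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 5.0])   # initial positions from the paper's example
d = np.full(5, 1.0)                            # hypothetical desired spacings d_1..d_5
k = np.array([5.0, 4.0, 3.0, 2.0, 1.0])        # hypothetical gains k_1..k_5
for _ in range(20000):
    x = euler_step(x, d, k, dt=1e-3)
print("final spacings:", np.abs(np.diff(x)))   # approaches d, up to chattering of size ~dt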
since the right - hand side of ( [ quantized.n.agent.system ] ) is also discontinuous ,its solutions are to be intended in the krasowskii sense as well .then we can infer conclusions on the behavior of ( [ quantized.n.agent.system ] ) provided that each solution of ( [ quantized.n.agent.system ] ) is such that defined in ( [ z ] ) is a krasowskii solution of ( [ quantized.n.agent.system.z ] ) .this is actually the case by , theorem 1 , point 5 ) , and it is the condition under which we consider ( [ quantized.n.agent.system.z ] ) .it turns out that the -system ( [ quantized.n.agent.system.z ] ) is easier to work with for the convergence analysis that we present in detail in the next section .in this section , after identifying the equilibria of the system , we present two different approaches for convergence analysis . the first is based on a lyapunov - like function and the second examines the vector field in the neighborhood of the system s trajectories .we start the analysis of system ( [ quantized.n.agent.system.z ] ) by looking at the discontinuity points of the system .a discontinuity point is a point at which the vector field on the right - hand side of the equations above is discontinuous .hence , the set of all the discontinuity points is : it is of interest to characterize the set of equilibria : [ lemma.equilibria ] let , for , and .the set of equilibria , i.e. the set of points for which with being the vector field on the right - hand side of ( [ quantized.n.agent.system.z ] ) , is given by for , if for , and , then ._ proof : _suppose by contradiction that .this implies that in a neighborhood of this point , the state space is partitioned into different regions where is equal to constant vectors . in view of ( [ quantized.n.agent.system.z ] ) , the component of these vectors is equal to one of the following values : , , , , if , or , , , , if .any is such that its component belongs to ( a subinterval of ) the interval ] if ) . in both cases ,if , then the interval does not contain and this is a contradiction .this ends the proof of the lemma . _ proof of proposition [ lemma.equilibria ] :_ first we show that if , then . as a first step , we observe that implies .in fact , suppose by contradiction that the latter is not true .this implies that at the point for which , any is such that the first component takes values in the interval ] . in both cases ,if , then does not belong to the interval and this contradicts that .hence , .this and lemma [ lemma.claim ] show that , consider the last equation of ( [ quantized.n.agent.system.z ] ) , and again suppose by contradiction that . then the last component of belongs to a subinterval of ] .if , then neither of these intervals contain and this is again a contradiction .this concludes the first part of the proof , namely that implies .now we let and prove that . by definition , if , then lies at the intersection of planes , which partition into regions , on each one of which is equal to a different constant vector .any is the convex combination of these vectors , which we call .we construct such that .we observe first that , the component of the vectors s can take on four possible values , namely , , , , and that there are exactly ( we are assuming that , as the case is simpler and we omit the details ) vectors among whose first component is equal to , whose first component is equal to and so on . 
as a consequence , if for all , then .+ similarly , the component , with , can take on eight possible values ( , , see the expression of in ( [ quantized.n.agent.system.z ] ) ) and as before , the set can be partitioned into sets , and each vector in a set has the component equal to one and only one of the eight possible values .moreover , these values are such that . + finally , if , the set can be partitioned into four sets , and each vector in a set has the last component equal to one and only one of the four possible values , , , .hence , .let now be such that , with for all . since for all , then and this proves that for all , we have .this completes the proof . next , we show that the equilibrium set is attractive .now we are in a position to present the main convergence result .if [ gains ] k_1k_2 , k_ik_i+1 + 1 , i=2, ,n-2 , k_n-11 , then all the krasowskii solutions to ( [ quantized.n.agent.system.z ] ) converge to ( a subset of ) the equilibria set ._ proof : _ let be a smooth non - negative function .we want to study the expression taken by , where is the vector field on the right - hand side of ( [ quantized.n.agent.system.z ] ) .we obtain : \\\qquad i=1\\ \qquad \\ z_i(z_i^2-d_i^2)[-(k_i+1 ) \textrm{sgn}(z_i ) \textrm{sgn}(|z_i|-d_{i } ) \\ + \textrm{sgn}(z_{i-1 } ) \textrm{sgn}(|z_{i-1}|-d_{i-1 } ) \\ + k_{i+1 } \textrm{sgn}(z_{i+1 } ) \textrm{sgn}(|z_{i+1}|-d_{i+1 } ) ] \\ \qquad i=2,\ldots , n-2\\ \qquad \\ z_{n-1}(z_{n-1}^2-d_{n-1}^2)[\textrm{sgn}(z_{n-2 } ) \textrm{sgn}(|z_{n-2}|-d_{n-2 } ) & \\-(k_{n-1}+1 ) \textrm{sgn}(z_{n-1 } ) \textrm{sgn}(|z_{n-1}|-d_{n-1 } ) ] \\ \qquadi = n-1 \ea \right.\end{aligned}\ ] ] if , i.e. if is not a point of discontinuity for , then : -(k_i - k_{i+1 } ) |z_i|\,|z_i^2-d_i^2| & i=2,\ldots , n-2\\[2 mm ] -k_{n-1 }|z_{n-1}|\,|z_{n-1}^2-d_{n-1}^2| & i = n-1 \ea \right.\end{aligned}\ ] ] where we have exploited the fact that . hence , if ( [ gains ] ) holds , then if , we look at the set we distinguish two cases , namely ( i ) and ( ii ) . in case ( i ) , , and therefore , . in case ( ii ), there must exist at least one agent such that and at least one agent such that .let ( respectively , ) be the set of indices corresponding to agents for which ( ) . clearly , .+ since if , then let and . in view of ( [ quantized.n.agent.system ] ) , for , it holds : \}\;,\end{aligned}\ ] ] with then by ( [ gains ] ) , for all , and therefore , if , then latexmath:[\[\ba{rcl } \nablav(z ) \cdot v & \le & -{\displaystyle}\sum_{i\in { \cal i}_2(z ) } .this shows that for all , either or . in summary , for all , either or , and if and only if .it is known ( lemma 1 in ) that if is a solution of the differential inclusion , then exists almost everywhere and .we conclude that is non - increasing .let , with a compact and strongly invariant set for ( [ quantized.n.agent.system.z ] ) .for any , such a set exists and includes the point ( hence ) , by definition of and because is non - increasing along the solutions of ( [ quantized.n.agent.system.z ] ) .since or for all , then by the lasalle invariance principle for differential inclusions , any solution to the differential inclusion converges to the largest weakly invariant set in ( is closed ) . since the choice ( [ gains ] ) yields that the gains s satisfy the condition in lemma [ lemma.equilibria ], is the set of equilibria of ( [ quantized.n.agent.system.z ] ) ( and therefore it is weakly invariant ) and since , we conclude that any solution converges to the set of points . 
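The gain condition in the theorem above has its inequality symbols lost in this extraction; the reading under which the Lyapunov estimates in the proof go through is k_1 >= k_2, k_i >= k_{i+1} + 1 for i = 2, ..., n-2, and k_{n-1} >= 1. The helper below builds one admissible set of gains by a backward recursion and checks the three inequalities; this particular choice is ours and is not claimed to be the one used in the paper's own simulations.

def admissible_gains(n):
    """Return [k_1, ..., k_{n-1}] satisfying k_1 >= k_2, k_i >= k_{i+1} + 1
    (i = 2, ..., n-2) and k_{n-1} >= 1, for an n-agent chain (n >= 3)."""
    k = [0.0] * (n - 1)
    k[n - 2] = 1.0                          # k_{n-1} = 1
    for i in range(n - 3, 0, -1):           # k_i = k_{i+1} + 1 for i = n-2, ..., 2
        k[i] = k[i + 1] + 1.0
    k[0] = k[1]                             # k_1 = k_2
    return k

def gains_ok(k):
    n = len(k) + 1
    return (k[0] >= k[1] and k[-1] >= 1.0
            and all(k[i] >= k[i + 1] + 1.0 for i in range(1, n - 2)))

print(admissible_gains(6), gains_ok(admissible_gains(6)))   # [4.0, 4.0, 3.0, 2.0, 1.0] True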
since the equilibrium set contains those points for which two agents coincide with each other , it is of interest to characterize those initial conditions under which the asymptotic positions of some of the agents become coincident . in the next subsection, we use a three - agent formation as an example to show how such analysis can be carried out .we specialize the rigid formation examined before to the case . letting ,the one - dimensional rigid formation becomes : [ quantized.3.agent.system ] x_1 & = & -(x_1-x_2 ) ( |x_1-x_2|-d_1 ) + x_2 & = & ( x_1-x_2 ) ( |x_1-x_2|-d_1)- + & & ( x_2-x_3 ) ( |x_2-x_3|-d_2 ) + x_3 & = & ( x_2-x_3 ) ( |x_2-x_3|-d_2 ) .let us express the system in the coordinates , , so as to obtain : [ quantized.3.agent.system.z ] z_1 & = & -2 ( z_1 ) ( |z_1|-d_1)+(z_2 ) ( |z_2|-d_2 ) + z_2 & = & ( z_1 ) ( |z_1|-d_1)- 2(z_2 ) ( |z_2|-d_2 ) .we study the solutions of the system above . in what follows ,it is useful to distinguish between two sets of points : clearly , .we now prove that all the solutions converge to the desired set except for solutions which originates on the - or the -axis : [ p1 ] all krasowskii solutions of ( [ quantized.3.agent.system.z ] ) converge in finite time to the set .in particular , the solutions which converge to the points must originate from the set of points .moreover , the only solution which converges to is the trivial solution which originates from . _proof : _ because of the symmetry of , it suffices to study the solutions which originate in the first quadrant only .in the first quadrant we distinguish four regions : ( i ) , ( ii ) , ( iii ) , ( iv ) .now we examine the solutions originating in these regions .+ ( i ) . if both and , then the system equations become and the solution satisfies . in other words, the solution evolves along the line of slop and intercept .if , then the solution converges to the point in finite time .in particular with .if , then converges in finite time to the semi - axis .this is a set of points at which is discontinuous , since for , , and for , .since at these points , denotes the smallest closed convex set which contains . ] and vectors in intersect the tangent space at the semi - axis in those points , a sliding mode along the semi - axis must occur . since , we conclude that the sliding mode must satisfy the equations and therefore , after a finite time , the solution converges to the point . on the other hand ,if , then the solution reaches the ray .similar considerations as before can show that a sliding mode occurs along the ray and that it satisfies the equations and again convergence in finite time to is inferred .finally we examine the case . at the point , i.e. and is an equilibrium point .similarly as before , one shows that the solution which originates from must stay in .+ ( ii ) . if and , then the map is equal to the vector and the solution satisfies . if , then converges to , while if , it first converges to the semi - axis , and then it slides towards . when , the solution reaches the segment . 
on this segment , , and since this intersects the tangent space at the segment , a sliding mode occurs .the sliding mode obeys the equations which show that the state reaches .+ if and , then the initial condition lies on another discontinuity surface of .observe that , for those points such that and , .hence , intersects the tangent space at the semi - axis in those points , and the solutions can slide along the semi - axis until they reach the point and stop , or can enter the region , and then converge to , or they can enter the region and converge to the point .+ the point is an equilibrium , and if , solutions stay at the equilibrium . +we review the remaining cases succinctly , as they are qualitatively similar to the cases examined above .+ ( iii ) . if for , then the solutions converge to possibly sliding along the segments or .if and , then the solution can converge to the points , or .if and , then the solutions can converge to , or . finally ,if for , the solutions can converge to any of the points in .in particular , a possible solution is the one which remains in .+ ( iv ) .solutions which start from initial conditions such that and converge to . if and , then the solution converge to possibly sliding on the segment .if and , the solutions can converge to one of the three possible points : , , . a few comments are in order : * sliding modes arise naturally for those situations in which , for instance , the state reaches the semi - axis .this forces us to consider krasowskii solutions rather than carathodory solutions . on the other hand, the set of krasowskii solutions may be too large in some cases , as it is evident for instance for those solutions which start on the - or -axis .* the occurrence of sliding modes are not acceptable in practice as they would require fast information transmission . a mechanism to prevent sliding modes in the system ( [ quantized.3.agent.system ] )can be introduced following .in this section , we present simulation results for the guided formation control with coarsely quantized information .we consider a formation consisting of 6 agents , labeled by .the distance constraints are , .the initial positions of agents 1 to 6 are 0 , 0.5 , 1 , 2 , 4 and 5 respectively .then the shape of the initial formation is shown in figure [ fig1 ] .we choose , , , and and simulate the agents motion under the control laws ( [ quantized.n.agent.system ] ) . in figure [ fig2 ] , we show the shape of the final formation . to see how the shape evolves with time, we present the curve of the lyapunov function in figure [ fig3 ] .since our analysis has been carried out using krasowskii solutions , when we further look into the dynamics of , it is clear that the sliding mode may still happen when the krasowskii solution converges . 
but this effect due to the system 's non - smoothness is within an acceptable level as shown in figure [ fig4 ] which presents the curve of . in this paper , we have studied the problem of controlling a one - dimensional guided formation using coarsely quantized information . it has been shown that even when the guidance system adopts quantizers that return only the one - bit sign information about the quantized signal , the formation can still converge to the desired equilibrium under the proposed control law . the point model we have used throughout the analysis is a simplified description of vehicle dynamics . when more detailed models are taken into consideration , we need to deal with collision avoidance and other practical issues as well . so it is of great interest to continue to study the same problem with more sophisticated vehicle models and more physical constraints from the applications . | motivated by applications in intelligent highway systems , the paper studies the problem of guiding mobile agents in a one - dimensional formation to their desired relative positions . only coarse information is used which is communicated from a guidance system that monitors in real time the agents motions . the desired relative positions are defined by the given distance constraints between the agents under which the overall formation is rigid in shape and thus admits locally a unique realization . it is shown that even when the guidance system can only transmit at most four bits of information to each agent , it is still possible to design control laws to guide the agents to their desired positions . we further delineate the thin set of initial conditions for which the proposed control law may fail using the example of a three - agent formation . tools from non - smooth analysis are utilized for the convergence analysis . |
pinterest is an online catalog used to discover and save ideas .hundreds of millions of users organize pins around particular topics by saving them to boards .each of the more than 50 billion pins saved on pinterest has an image , resulting a large - scale , hand - curated collection , with a rich set of metadata .most pins images are well annotated : when a person bookmarks an image on to a board , a _pin _ is created around an image and a brief text description supplied by the user . when the same pin is subsequently saved to a new board by a different user , the original pin gains additional metadata that the new user provides .therefore , the data structure surrounding each pin continues to get richer each time the pin is re - saved .furthermore , boards ( i.e. collections of pins ) reveal relations _ between _ pins : if many users save these two pins together , there is a high likelihood that another user may find them to be related as well. such aggregated _ image co - occurrence _ statistics are found to be useful for related content recommendation .this work explores how user curation signals can be used in conjunction with content - based features to improve recommendation systems .specifically we introduce related pins , an _ item - to - item _ content recommendation service triggered when a pin closeup is shown to the user , and describe in detail our experiments using visual features ( such as those obtained from convolutional neural networks ) , which are of particular interest since this system ultimately recommends visual content .as one of the most popular features on pinterest , related pins is a recommendation system that combines collaborative filtering with content - based retrieval . since may 2015 , the user engagement metric on pin recommendations has improved by more than 50% .note that the improvement is the result of using both visual features and other metadata signals in the learning - to - rank framework the scope of this paper is limited to the understanding of user curation and visual features in the context of recommendation systems .this work makes two contributions : first , we demonstrate that `` pinning , '' a form of user curation , provides valuable user signals for content recommendation .specifically we present our use of _ image / board co - occurrences _ , including the _ pinjoin _ data structure used to derive this signal .second , we demonstrate that combining collaborative filtering with content - based retrieval methods , such as applying a learning - to - rank framework to a set of semantic and visual features associated with the candidate pins , can significantly improve user engagement .in particular , our a / b experiments demonstrate that the use of recently developed visual features ( when used in conjunction with other text and graph signals ) , such as those obtained from vgg and faster r - cnn yield significant gains in recommendation quality .collaborative filtering using user - generated signals ( e.g. co - views , co - clicks ) is widely used in commercially deployed recommendation systems such as youtube related videos and amazon related items .this work investigates the use of _ user curation _ signals derived from pins image / board co - occurrences , which are unique to pinterest .visual features are widely used in both content - based recommendation systems and image search systems , and the the learning - to - rank framework used in this paper from has been widely used in industry . 
to our best knowledgethis work contains the first published empirical results on how the latest convolutional neural network ( cnn ) based visual features ( e.g. vgg ) and large - scale object detection using faster r - cnn can improve commercial recommendation systems .visual features are computed using a distributed process described in our previous work .content curation is the process of organizing and collecting content relevant to a particular topic of interest .pinterest is a user - powered content curation service as content is collected and grouped into topic boards , creating a rich set of metadata associated with pins images .for example , during pin creation , users typically provide a text description of the images as shown in figure 3 .although any single instance of text description can be noisy , an aggregated collection reveals important annotations relevant to the pin s image .furthermore , when a pin is saved to a board , one can infer the categorical information of the pin s image from the category the user selected for the board .formally , we denote the data structures associated with pins and boards in the following way : each _ pinjoin _ is a 3-tuple , where is the image url , is the collection of pins generated for that image , and is the aggregation of text annotations or keywords ( extracted from board titles and descriptions ) . each _boardjoin _ is represented as a 2-tuple , where t is the board title and p is a list of pins .pinjoin is conceptually similar to visual synsets except in this case , all the images within a visual synset are exact duplicates of each other and the collection is curated manually . in practice , both of these structures contain additional metadata , such as category and topic information , which is used for deriving other features during re - ranking .user curation reveals relations _ among _ images : we observed that images of pins on the same board are semantically ( and to some extent visually ) related to each other .therefore , if enough users save these two pins together , there is a high likelihood that a new user may also find them to be related .this is helped by the fact that users on pinterest actively curate content our engaged users have an average of 24 boards .an example of image / board co - occurrences is shown in figure [ fig : occur_examples ] .this section presents the architecture that powers the pinterest related pins recommendation system .the first step described relies on collaborative filtering over user curation signals ( image co - occurrences on boards ) to generate a candidate sets .the second step uses content - based ranking approach to rank the candidates based on content signals such as visual features , textual signals , and categories signals derived from _pinjoins_. the first step of the pipeline is to generate a set of image candidates for each query image , which will serve as candidate sets for content - based re - ranking .we adopt a classic collaborative filtering approach to exploit the pin / board co - occurrences as described in section 3 . for each pin , we select up to 10,000 pins with the highest number of shared boards . in practice, the candidate generation process is accomplished through a mapreduce job , which takes _ boardjoin _ as input .the mapping stage outputs image pairs for all image pairs in each board , and in the reduce stage all the related images are grouped by the same query image . 
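The candidate-generation step just described can be sketched in a few lines. The snippet below is a hypothetical in-memory stand-in for the production MapReduce job (function and field names are ours, not Pinterest's): the map phase emits all image pairs occurring on the same board, and the reduce phase groups the pairs by query image and keeps up to 10,000 images with the highest number of shared boards. The popularity-based sampling of images, mentioned next, is omitted here.

from collections import Counter, defaultdict
from itertools import permutations

def related_candidates(board_joins, max_candidates=10000):
    """board_joins: iterable of (board_title, [image_ids]) pairs.
    Returns, per image, co-occurring images ranked by number of shared boards."""
    co_counts = defaultdict(Counter)
    for _title, images in board_joins:                 # "map": pairs within one board
        for a, b in permutations(set(images), 2):
            co_counts[a][b] += 1                       # boards shared by a and b
    return {img: cnt.most_common(max_candidates)       # "reduce": top candidates per image
            for img, cnt in co_counts.items()}

boards = [("rustic kitchens", ["img1", "img2", "img3"]),
          ("farmhouse decor", ["img2", "img3", "img4"])]
print(related_candidates(boards)["img2"])
# img3 shares two boards with img2; img1 and img4 each share one.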
for computational efficiency , we sample images based on the quality / popularity of the images . for pins that do not generate enough candidates through board co - occurrence, we rely on a content - based retrieval service described in our previous paper . after generating a set of candidates , we re- rank with a set of both content pair - features ( defined between a query and a candidate ) and query - independent features .in addition to standard features such as text annotation match and topic vector similarity , we were particularly interested in the effectiveness of visual similarity features for our re - ranking step .examples of visual features include the _ fc6 _ and _ fc8 _ activations of intermediate layers of deep convolutional neural networks ( cnns ) based on alexnet and vgg .these features are binarized ( _ fc6 _ ) and sparsified ( _ fc8 _ ) for representation efficiency and compared using hamming distance ( _ fc6 _ ) and cosine similarity ( _ fc8 _ ) , respectively .we use the open - source caffe framework to perform training and inference of our cnns on multi - gpu machines . in this work ,we also trained an object detection module using faster r - cnn , initially fine - tuned on a dataset containing the most common objects found on pinterest , including home decor and fashion categories such as various furniture types , shoes , dresses , glasses , bags and more .to learn the weight vector for our linear model , we adopted the learning - to - rank approach from joachims .given training data in the form of relative ranking triplets , where document is considered to be more relevant to query than document , the ranksvm algorithm described in approximates a weight vector which maximizes the number of training examples satisfying , where gives features of the document in the context of query .the relevance triplets we use in training are generated through user clicks and impression logs , normalized by position and device to account for position bias , using a clicks over expected clicks ( coec ) model .for each query in our training set , given the set of observed results , we generate the training triplet : corresponding to the query , best engaged document , and worst engaged document .we also generate random negative examples : on the intuition that even poorly engaged candidates generated through our board co - occurrence signal should still be more relevant than a random document from our corpus of pins .in this subsection we present a qualitative analysis of using user curation signals to generate candidates for related pins .as we described previously , board co - occurrences for pins is a strong signal of relevance and has been the foundation of candidate generation for our system .figure [ fig : occur_examples ] illustrates that the relevance of the candidates grows gradually when the number of co - occurrences with the query pin increases .we also show the percentage of image pairs having different number of board co - occurrences in figure [ fig : cooccurrence ] .note that the majority of the image pairs ( around 80% ) only co - occur once on the same board , which suggests that ranking based on content features as the next step is important for finding high - quality recommendations . 
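As an illustration of the re-ranking step, here is a minimal pairwise-training sketch in the spirit of the RankSVM objective described above, fitting a weight vector w so that the score of the better-engaged document exceeds the score of the worse one on the engagement-derived triplets. The feature map phi, the shape of the triplets, and the hyperparameters are placeholders of ours; the production model uses the text-annotation, topic, category, visual-similarity, and query-independent features listed above rather than anything shown here.

import numpy as np

def train_pairwise_ranker(triplets, phi, dim, lam=1e-3, lr=0.1, epochs=10):
    """triplets: list of (query, better_doc, worse_doc); phi(q, d) -> np.ndarray of length dim.
    Subgradient descent on the L2-regularised pairwise hinge loss."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for q, better, worse in triplets:
            margin = w @ phi(q, better) - w @ phi(q, worse)
            grad = lam * w                              # regularisation term
            if margin < 1.0:                            # hinge constraint is violated
                grad -= phi(q, better) - phi(q, worse)
            w -= lr * grad
    return w

def rerank(query, candidates, w, phi):
    """Sort board-co-occurrence candidates by the learned linear score, best first."""
    return sorted(candidates, key=lambda d: float(w @ phi(query, d)), reverse=True)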
on the other hand , there are only a handful of image pairs which co - occur many times on the same boards .we found that one of the most important features for re - ranking the pin candidates generated through board co - occurrence is visual similarity .we validated this by setting up a series of a / b experiments , where we selected five million popular pins on pinterest as queries , and re - ranked their recommendations using different sets of features .the control group re - ranked related pins using a linear model with a set of standard features : text annotation similarity , topic vector similarity , category vector similarity , as well as query - independent features .the treatment group re - ranked using fine - tuned vgg _fc6 _ and _ fc8 _ visual similarity features along with indicator variables ( in addition to the features used in control ) . across the 5 m query pins ,the treatment saw a 3.2% increase in save / clickthrough rate . after expanding the treatment to 100 m query pins, we observed a net gain of 4.0% in propensity to engage with related pins , and subsequently launched this model into production .similar experiments with a fine - tuned alexnet model yielded worse results ( only 0.8% engagement gain ) .when broken down by category , we noted that the engagement gain was stronger in predominantly visual categories , such as art ( 8.8% ) , tattoos ( 8.0% ) , illustrations ( 7.9% ) , and design ( 7.7% ) , and lower in categories which primarily rely on text , such as quotes ( 2.0% ) and fitness planning ( 0.2% ) . given the difference in performance among categories , we performed a follow - up experiment where we introduced a cross feature between the category vector of the query and the scalar _ fc6_ visual similarity feature ( between the query and candidate ) .this introduces 32 new features to the model , one for each of our site - wide categories ( these features are sparse , since the pinterest category vector thresholds most values to zero ) .the result from this was a further 1.2% engagement increase in addition to the gains from the initial visual re - ranking model .further work into component and cross product features is of interest to us , as they are essentially free to compute at rank - time , since the raw feature data is already stored .users are sometimes interested in the _ objects _ in the pin s image , instead of the full image ( as shown in figure [ fig : visualobject ] ) .we therefore speculate that object detection , when feasible , should improve relevance targeting . after applying non - maximum suppression ( nms ) to the proposals generated by our fine - tuned faster r - cnn module mentioned in section , we considered query pins where the largest proposal occupies at least 25% of the pin s image , or if the proposal is smaller , it passes a confidence threshold of 0.9 in faster r - cnn .we categorize these images as containing a dominant visual object , and using the best - performing fine - tuned vgg re - ranking variant from the previous section as our control , we experimented with the following treatments : * _ variant a _ :if a dominant visual object is detected in the query pin , we compute visual features ( vgg ) on just that object . 
* _ variant b _ : same as _ variant a _ , but we also hand - tune the ranking model by increasing the weight given to visual similarity by a factor of 5 .the intuition behind this variant is that when a dominant visual object is present , visual similarity becomes more important for recommendation quality . * _ variant c _ :if a dominant visual object is detected in the query pin , we still use the features from the entire image ( as the control does ) , but increase the weight given to visual similarity by a factor of 5 , as in _variant b_. in this variant , we assume that the presence of detected visual objects such as bags or shoes indicates that visual similarity is more important for this query ..results when using cross features and object detection , measured over a 7 day period in oct .2015 [ cols="<,<,^",options="header " , ] results for these variants are listed in table [ tbl : visualsearchrel ] .variants a and b of the object detection experiments suggest that the tight bounding boxes from our object detection module do not provide enough context for our cnn models , but variant c , which results in an additional 4.9% engagement gain over the vgg similarity feature control , demonstrates that the presence of visual objects indicates that visual similarity should be weighed more heavily .based on these results , our future focus is scaling up the number of object categories we can detect , and tuning the weight given to visual similarity in variant b and c.the related pins system described in this work has improved user engagement metric and traffic on pin recommendations by more than 50% from may 2015 to november 2015 .this demonstrates that signals derived from user curation and the activity of users organizing content contain rich information about the images and are very effective when used in conjunction with collaborative filtering .we also demonstrate that visual features such as representations learned from cnns or presence of detected visual objects can be used in the learning - to - rank framework to improve item - to - item recommendation systems .one important component not discussed in this work is our use of user signal in the form of _ navboost _ , which also uses a model based on coec ( extended to actions beyond clicks ) to re - rank content based on user engagement .our future work includes exploring a richer set of features ( e.g. sparse features , dense features , cross - product features , more object categories ) and real - time recommendations ( enabling re - ranking based on locale , current search query , and other forms of personalization ) .we would like to thank our colleagues on the visual discovery and recommendations teams at pinterest , in particular dmitry chechik , yunsong guo , and many others .we d also like to acknowledge jeff donahue and trevor darrell from berkeley vision and learning center ( bvlc ) for their collaboration with pinterest and their work on caffe .s. baluja , r. seth , d. sivakumar , y. jing , j. yagnik , s. kumar , d. ravichandran , and m. aly .video suggestion and discovery for youtube : taking random walks through the view graph . in _ proceedings of the 17th international conference on world wide web _ , www 08 , pages 895904 , new york , ny , usa , 2008 .m. bendersky , l. garcia - pueyo , j. harmsen , v. josifovski , and d. lepikhin .up next : retrieval methods for large scale related video suggestion . 
in _ proceedings of the 20th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 14 , pages 17691778 , new york , ny , usa , 2014 .o. chapelle and y. zhang .a dynamic bayesian network click model for web search ranking . in _ proceedings of the 18th international conference on world wide web _ , www 09 , pages 110 , new york , ny , usa , 2009 .acm .r. datta , j. li , and j. z. wang .content - based image retrieval : approaches and trends of the new age . in _ proceedings of the 7th acm sigmm international workshop on multimedia information retrieval _ , mir 05 , pages 253262 , new york , ny , usa , 2005 .d. a. ferrucci , e. w. brown , j. chu - carroll , j. fan , d. gondek , a. kalyanpur , a. lally , j. w. murdock , e. nyberg , j. m. prager , n. schlaefer , and c. a. welty . building watson : an overview of the deepqa project ., 31(3):5979 , 2010 .y. jing , d. liu , d. kislyuk , a. zhai , j. xu , j. donahue , and s. tavel . visual search at pinterest . in _ proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 15 , pages 18891898 , new york , ny , usa , 2015 .t. joachims . optimizing search engines using clickthrough data . in _ proceedings of the eighth acmsigkdd international conference on knowledge discovery and data mining _ , kdd 02 , pages 133142 , new york , ny , usa , 2002 .acm .m. richardson , a. prakash , and e. brill . beyond pagerank : machine learning for static ranking . in _ proceedings of the 15th international conference on world wide web _ , www 06 , pages 707715 , new york , ny , usa , 2006 .b. sarwar , g. karypis , j. konstan , and j. riedl .item - based collaborative filtering recommendation algorithms . in _ proceedings of the 10th international conference on world wide web _ , www 01 , pages 285295 , new york , ny , usa , 2001 . | this paper presents pinterest related pins , an item - to - item recommendation system that combines collaborative filtering with content - based ranking . we demonstrate that signals derived from _ user curation _ , the activity of users organizing content , are highly effective when used in conjunction with content - based ranking . this paper also demonstrates the effectiveness of visual features , such as image or object representations learned from convnets , in improving the user engagement rate of our item - to - item recommendation system . |
in network communications , the communication could fail if some nodes or some edges are broken .though the failure of a modem could be considered the failure of a node , we can model this scenario also as the failure of the communication link ( the edge ) attached to this modem .thus it is sufficient to consider edge failures in communication networks .it is also important to note that several nodes ( or edges ) in a network could fail at the same time .for example , all brand x routers in a network could fail at the same time due to a platform dependent computer worm ( virus ) attack . in order to design survivable communication networks , it is essential to consider this kind of homogeneous faults for networks .existing works on network quality of services have not addressed this issue in detail and there is no existing model to study network reliability in this aspect . in this paper , we use the colored edge graphs which could be used to model homogeneous faults in networks .the model is then used to optimize the design of survivable networks and to study the minimum connectivity ( and design ) requirements of networks for being robust against homogeneous faults within certain thresholds .a colored edge graph is a tuple , with the node set , the edge set , the color set , and a map from onto .the structure be distinct nodes of . are called -_color connected _ for if for any color set of size , there is a path from to in such that the edges on do not contain any color in . a colored edge graph is -_color connected _ if and only if for any two nodes and in , they are -color connected .the interpretation of the above definition is as follows . in a network ,if two edges have the same color , then they could fail at the same time .this may happen when the two edges are designed with same technologies ( e.g. , with same operating systems , with same application software , with same hardware , or with same hardware and software ) .if a colored edge network is -color connected , then the network communication is robust againt the failure of edges of any colors ( that is , the adversary may tear down any types of devices ) . in practice, one communication link may be attached to different brands of network devices ( e.g. , routers , modems ) on both sides . for this case ,the edge can have two different colors .if any of these colors is broken , the edge is broken .thus from a reliability viewpoint , if one designs networks with two colors on the same edge , the same reliability / security can be obtained by having only one color on each edge . in the following discussion, we will only consider the case with one color on each edge .meanwhile , multiple edges between two nodes are not allowed either .we are interested in the following practical questions . for a given number of nodes in ( i.e. , the number of network nodes ) , a given number of the colors ( e.g. , the number of network device types ) , and a given number , how can we design a -color connected colored edge graphs with minimum number of edges ? in another words , how can we use minimum resources ( e.g. , communication links ) to design a network that will keep working even if types of devices in the network fail ? for practical network designs, one needs first to have an estimate on the number of homogeneous faults .for example , the number of brands of routers that could fail at the same time .then it is sufficient to design a -color connected network with colors ( e.g. 
, with different brands of routers ) .necessary and sufficient conditions for this kind of network design will be obtained in this paper .another important issue that should be taken into consideration in practical network designs is that the number of colors ( e.g. , the number of brands for routers ) is quite small .for example , is normally less than five .necessary and sufficient conditions for network designs with and with optimized resources will be obtained in this paper .note that for cases with small , we may have .the outline of the paper is as follows .section [ meqtp1sec ] describes the necessary and sufficient conditions for the case of without optimizing the number of edges in the networks .section [ generalnecsec ] gives a necessary condition for colored edge networks in terms of optimized number of edges .section [ practiffsec ] shows that the necessary conditions in section [ generalnecsec ] are also sufficient for the most important three cases : ( 1 ) ; ( 2 ) ; and ( 3 ) .section [ hardsec ] shows that it is * np*-hard to determine whether a given colored edge graph is -connected .though colored - edge graph is a new concept which we used to model network survivability issues , there are related research topics in this field . for example , edge - disjoint ( colorful ) spanning trees have been extensively studied in the literature ( see , e.g. , ) .these results are mainly related to our discussion in the next section for the case of .a colored edge graph is _ proper _ if whenever two edges share an end point they carry different colors . a spanning tree for a colored edge graph is called colorful if no two of its edges have the same color .two spanning trees of a graph are edge disjoint if they do not share common edges . for a non - negative integer , let denote the complete graph on vertices .a classical result of euler states that the edges of can be partitioned into isomorphic spanning trees ( paths , for example ) and each of these spanning trees can easily be made colorful , but the resulting edge colored graph usually fails to be proper .though it is important to design colored edge graphs with required security parameters , for several scenarios it is also important to calculate the robustness of a given colored edge graphs .roskind and tarjan designed a greedy algorithm to find -edge disjoint spanning trees in a given graph .this is related to the questions -color connectivity for the case of .we are not aware of any approximate algorithms for deciding -color connectivity of a given colored edge graph .indeed , we will show that this problem is * np*-hard .in this section , we show necessary and sufficient conditions for some special cases .[ lemma1s ] a colored edge graphs is -color connected if and only if , for all , , , , is a connected graph , where is a partition of under the different colors . as we have mentioned in the previous section , the classical result by euler states that can be partitioned into spanning trees .thus , by lemma [ lemma1s ] , we have the following theorem .[ eulerresult ] ( euler ) for , there is a coloration of such that is -color connected . in the following ,we extend theorem [ eulerresult ] to the general case of .[ lemma2s ] for and , there exists a graph with , and such that the following conditions are satisfied : 1 . is a connected graph for all ; 2 . for all .* we prove the lemma by induction on and . for and ,the lemma holds obviously .assume that the lemma holds for . 
in the following ,we show that the lemma holds for and for .let be the graph with , and such that the conditions in the lemma are satisfied : for the case of and , let where is a new node that is not in , and let , , , where are distinct nodes from .it is straightforward to show that , is a connected graph , and for all .thus the lemma holds for this case . for the case of and ,let where are new nodes that are not in , and define as follows .1 . set and , where is a temporary variable .2 . define : 1 .select an edge .2 . let .3 . let and .3 . define for : 1 .select .2 . let .3 . let and .it is straightforward to show that ( thus ) , is a connected graph , and for all .this completes the proof of the lemma .[ iffthmt1 ] given with , there exists a -color connected colored edge graphs with and if and only if .* by lemma [ lemma1s ] , a -color connected colored edge graphs with and contains at least edges .meanwhile , contains at most edges .thus for , we have . in nother words , for , there is no -color connected colored edge graphs with and .now the theorem follows from lemmas [ lemma1s ] and [ lemma2s ] .first we note that for a colored edge graph to be -color connected , each node must have a degree of at least .thus the total degree of an -node graph should be at least .this implies the following lemma .[ nesslemma1 ] for , and a -color connected colored edge graph with , , and , we have . in the following , we usecover free family concepts to study the necessary conditions for colored edge graphs connectivity .let be a finite set with and be a set of mutually disjoint subsets of with .then is called a -partition of if .let be positive integers .an -partition is called a -cover free family ( or -cff( ) ) if , for any elements , we have that it should be noted that our above definition of cover - free family is different from the generalized cover - free family definition for set systems in the literature ( see , e.g. , ) . in , a set system called a -cover free family if for any blocks and any blocks , one has .specifically , there are two major differences between our -partition system and the set systems in the literature . 1 . for a set system , may contain repeated elements . 2 .for a set system , the elements in are not necessarily mutually disjoint .it is straightforward to show that a colored edge graph is -color connected if and only if for any color set of size , after the removal of edges in with colors in , remains connected .assume that contains nodes .then a necessary condition for connectivity is that contains at least edges . from this discussion , we get the following lemma . for a colored edge graph , with , , , a necessary condition for to be -color connectedis that the -partition is a -cff( ) with and where . in the following ,we analyze lower bounds for the number of edges for the existence of a -cff( ) . for a set partition and a positive integer , let it is straightforward to see that a -partition is a -cff( ) if andonly if . given positive integers , let from the above discussion and lemma [ nesslemma1 ] , we have the following theorem .[ dogthm ] let be given positive integers . and are necessary conditions for the existence of a -color connected colored edge graph , with , , .[ sufbound ] let be given positive integers .then we have * proof . 
* for a given -partition , let be an enumeration of elements in such that for all .it is straightforward to show that .thus takes the maximum value if is maximized .it is straightforward to show that this value is maximized when the -partition satisfies the following conditions : 1 . for , and 2 . for .the theorem follows from the above discussion . for , and , we have . however , .this shows that the condition in theorem [ dogthm ] is not redundant .there are no -color connected colored edge graphs for the following special cases : 1 .2 . .3 . . * proof . *before we consider the specific cases , we observe that , when and are fixed , the function is nondecreasing when increases .in this case , the maximum value that could take is .thus . that is, there is no -cff( ) , which implies the claim .note that this result also follows from theorem [ iffthmt1 ] .\2 . in this case , the maximum value that could take is .thus .we only show this for the case . in this case , the maximum value that could take is .thus .note that this result also follows from theorem [ iffthmt1 ] .the following theorem is a variant of theorem [ dogthm ] .[ necessarybound ] for , a necessary condition for the existence of a -color connected colored edge graph with , , and is that and the following conditions are satisfied : * if for some integer , then . * if for some integer , then . *if for some integer , then .* * if for some integer , then .* proof . * for , by theorem [ sufbound ] , we have thus the necessary condition in theorem [ dogthm ] can be interpreted as the following conditions : in other words , for a -color connected colored edge graphs , the following conditions ( the disjunction not conjunction ) are satisfied : * , and . * , and . * * , and . by distinguishing the cases for , , , and , and by reorganizing above lines , these necessary conditions can be interpreted as the following conditions : * and for some .note that this follows from the last line of the above conditions ( one can surely take other lines , but then the value of would be larger ) .this comment applies to following cases also .* and for some .* and for some .* * and for some .generally we are interested in the question whether the necessary condition in theorems [ dogthm ] and [ necessarybound ] are also sufficient . in the following ,we show that this is true for several important practical cases .[ mistp1 ] the necessary condition in theorem [ dogthm ] is sufficient for the case of .* proof . *since is the remainder of divided by , we trivially have .now assume that . by theorem [ sufbound ] , we have .the rest follows from theorem [ iffthmt1 ] . before we show that the necessary conditions in theorems [ dogthm ] and [ necessarybound ] are sufficient for the case of , we first present two lemmas whose proofs are straightforward . for and ,the following -node circle graph is -color connected : with for and .[ lemmacircle ] for , , and , the graph in figure [ figcircle ] that is defined in the following is -color connected with [ tp1case ] the necessary conditions in theorems [ dogthm ] and [ necessarybound ] are sufficient for the case of .* for the case of and , it follows from theorem [ mistp1 ] .now assume that and . in this special case ,the necessary conditions in theorem [ necessarybound ] is as follows : * and for some .* and for some .* and for some .* * and for some . 
in the following we first show that the condition `` and '' is sufficient .let the graph in figure [ figt1basic ] be defined as follows : for each with , let .then it is straightforward to check that the colored edge graphs is -color connected , , and .now we show that the condition `` and for '' is sufficient .let be the colored edge graph that we have just constructed with , and .let .define a new colored edge graph ( see figure [ figt1case1 ] ) by attaching the following edges to the -node circle : the colors for the new edges are defined by letting for and .it is straightforward to check that is -color connected , , and . for and , , there exists an -color connected colored edge graph with and if and only if * proof . *it follows from the proof of theorem [ tp1case ] .[ m4t2 ] the conditions in theorems [ dogthm ] and [ necessarybound ] are sufficient for the case of .* proof . *it is sufficient to show that both of the conditions `` and '' and `` and '' are sufficient ( note that and ) . in the followingwe first show that the condition `` and '' is sufficient by induction on . for the case of , we have , and .let the graph in figure [ figm4t2 ] be defined as where means that the edge takes color .it is straightforward to check that is -color connected . for the case of , we have , and .let the graph in figure [ fign7m4t2 ] be defined as where means that the edge takes color .it is straightforward to check that is -color connected .now for ( ) , we have and .if we glue the node of copies of , we get a -color connected colored graph with and .thus the condition for the case of holds . for ( ), we have and .if we glue glue the node of copies of and one copy of , we get a -color connected colored graph with and .thus the condition for the case of holds .this completes the induction . for the condition `` and '', one can add one node to the graph for the case `` and '' with edges ( with distinct colors ) to any three nodes .the resulting graph meets the requirements .theorem [ m4t2 ] could be extended to the case of and .[ met2 ] the conditions in theorems [ dogthm ] and [ necessarybound ] are sufficient for the case of and . * proof .* it is sufficient to show that both of the conditions `` and '' and `` and '' are sufficient ( note that ) . in the following we first show that the condition `` and '' is sufficient by induction on and .for and , we have .the graph in figure [ fign5m5t3 ] shows that the condition is sufficient also . for the case of , we have .the graph in figure [ fign7m5t3 ] shows that the condition is sufficient also . for ( ) ,the condition becomes and .if we glue the node of copies of , we get a -color connected colored graph with and .thus the condition for the case of holds . for ( ) ,the condition becomes and .if we glue glue the node of copies of and one copy of , we get a -color connected colored graph with and .thus the condition for the case of holds .this completes the induction . for the condition `` and '', we have and .we can add one node to the graph for the case `` and '' with edges ( with distinct colors ) to any four nodes .the resulting graph meets the requirements .* open questions : * we showed in this section that the conditions in theorems [ dogthm ] and [ necessarybound ] are sufficient for practical cases. 
it would be interesting to show that these conditions are also sufficient for general cases .we leave this as an open question .we have given necessary and sufficient conditions for -color connected colored edge graphs .sometimes , it is also important to determine whether a given graph is -color connected .unfortunately , the following theorem shows that the problem ceconnect is * conp*-complete .the ceconnect problem is defined as follows .before we prove the hardness result , we first introduce the concept of color separator . for a colored edge graph , a color separator for two nodes and of the graph is a color set such that the removal of all edges with colors in from the graph will disconnect and .it is straightforward to observe that and are -color connected if and only there is no -size color separator for and . * proof . *it is straightforward to show that the problem is in * conp*. thus it is sufficient to show that it is * np*-hard .the reduction is from the vertex cover problem .the vc problem is as follows ( definition taken from ) : question : is there a vertex cover of size or less for , that is , a subset such that and , for each edge , at least one of and belongs to ? for a given instance of vc , we construct a colored edge graph as follows .first assume that the vertex set is ordered as in .let in the following , we show that there is a vertex cover of size in if and only if there is a -color edge separator for . without loss of generality , assume that is a vertex cover for .then it is straightforward to show that is a color separator for since each incoming path for in contains two colors corresponding to one edge in .for the other direction , assume that is a -color separator for .let . by the fact that is a color separator for , for each edge in , the path in contains at least one color from .since this path contains only two colors and , we know that or or both belong to . in another word , is a -size vertex cover for .this completes the proof of the theorem .y. desmedt , y. wang , and m. burmester . a complete characterization of tolerable adversary structures for secure point - to - point transmissions without feedback . in _ proc .isaac 2005 _ , pages 277287 .lecture notes in computer science 3827 .springer verlag 2005 .h. wang and d. m. blough .construction of edge - disjoint spanning trees in the torus and application to multicast in wormhole - routed networks . in _ proc .1999 intl conf . on parallel anddistributed computing systems _pages 178184 , 1999 . | in this paper , we use the concept of colored edge graphs to model homogeneous faults in networks . we then use this model to study the minimum connectivity ( and design ) requirements of networks for being robust against homogeneous faults within certain thresholds . in particular , necessary and sufficient conditions for most interesting cases are obtained . for example , we will study the following cases : ( 1 ) the number of colors ( or the number of non - homogeneous network device types ) is one more than the homogeneous fault threshold ; ( 2 ) there is only one homogeneous fault ( i.e. , only one color could fail ) ; and ( 3 ) the number of non - homogeneous network device types is less than five . |
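the color separator characterization used in the hardness proof above also gives a direct, if exponential, way to test color connectivity on small instances. the sketch below is purely illustrative (the graph encoding and function names are not from the paper): following the definition above, two nodes are t-color connected exactly when no set of t-1 colors disconnects them once all edges carrying those colors are removed.

```python
# brute-force check of t-color connectivity for a small colored edge graph,
# using the color-separator characterization quoted above.  illustrative only.
from itertools import combinations


def connected(nodes, edges, u, v):
    """simple reachability search over the remaining edges."""
    adj = {n: set() for n in nodes}
    for a, b, _color in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {u}, [u]
    while stack:
        n = stack.pop()
        if n == v:
            return True
        for m in adj[n] - seen:
            seen.add(m)
            stack.append(m)
    return False


def t_color_connected(nodes, edges, u, v, t):
    colors = {c for _, _, c in edges}
    for removed in combinations(colors, t - 1):
        kept = [e for e in edges if e[2] not in removed]
        if not connected(nodes, kept, u, v):
            return False          # found a (t-1)-size color separator
    return True


# toy example: a 4-node cycle whose edges carry four distinct colors
nodes = [0, 1, 2, 3]
edges = [(0, 1, 0), (1, 2, 1), (2, 3, 2), (3, 0, 3)]
print(t_color_connected(nodes, edges, 0, 2, 2))   # True: removing one color never cuts the cycle
print(t_color_connected(nodes, edges, 0, 2, 3))   # False: removing colors {0, 3} isolates node 0
```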
matrix multiplication is one of the most fundamental tasks in mathematics and computer science .while the product of two matrices over a field can naturally be computed in arithmetic operations , strassen showed in 1969 that arithmetic operations are enough .the discovery of this algorithm for matrix multiplication with subcubic complexity gave rise to a new area of research , where the central question is to determine the value of the exponent of square matrix multiplication , denoted , and defined as the minimal value such that two matrices over a field can be multiplied using arithmetic operations for any .it has been widely conjectured that and several conjectures in combinatorics and group theory , if true , would lead to this result .however , the best upper bound obtained so far is , as we explain below .coppersmith and winograd showed in 1987 that .their approach can be described as follows .a trilinear form is , informally speaking , a three - dimensional array with coefficients in a field . for any trilinear form can define its border rank , denoted , which is a positive integer characterizing the number of arithmetic operations needed to compute the form . for any trilinear form and any real number ] , the following statement hold : here the notation represents the trilinear form obtained by taking the -th tensor power of .coppersmith and winograd presented a specific trilinear form , obtained by modifying a construction given earlier by strassen , computed its border rank , and introduced deep techniques to estimate the value .in particular , they showed how a lower bound on can be obtained for any ] , due to the fact that the analysis of was finer , thus giving a better upper bound on via statement ( [ statement ] ) with and .solving numerically the new optimization problem , they obtained the upper bound . in view of the improvement obtained by taking the second tensor power ,a natural question was to investigate higher powers of the construction by coppersmith and winograph . investigating the third powerwas explicitly mentioned as an open problem in .more that twenty years later , stothers showed that , while the third power does not seem to lead to any improvement , the fourth power does give an improvement ( see also ) .the improvement was obtained again via statement ( [ statement ] ) , by showing how to reduce the computation of to solving a non - convex optimization problem .the upper bound was obtained in by finding numerically a solution of this optimization problem .it was later discovered that that solution was not optimal , and the improved upper bound was given in by exhibiting a better solution of the same optimization problem .independently , vassilevska williams constructed a powerful and general framework to analyze recursively powers of a class of trilinear forms , including the trilinear form by coppersmith and winograd , and showed how to automatically reduce , for any form in this class and any integer , the problem of obtaining lower bounds on to solving ( in general non - convex ) optimization problems .the upper bound was obtained by applying this framework with and , and numerically solving this optimization problem .obtained for the eighth power is stated as in the conference version , the statement has been corrected to in the most recent version , since the previous bound omitted some necessary constraints in the optimization problem .our results confirm the value of the latter bound , and increase its precision . 
]a natural question is to determine what bounds on can be obtained by studying for .one may even hope that , when goes to infinity , the upper bound on goes to two .unfortunately , this question can hardly be answered by this approach since the optimization problems are highly non - convex and become intractable even for modest values of . in this paperwe show how to modify the framework developed in in such a way that the computation of reduces to solving instances of _ convex _ optimization problems , each having variables . from a theoretical point a view , since a solution of such convex problems can be found in polynomial time , via statement ( [ statement ] ) we obtain an algorithm to derive an upper bound on from in time polynomial in . from a practical point of view, the convex problems we obtain can also be solved efficiently , and have several desirable properties ( in particular , the optimality of a solution can be guaranteed by using the dual problem ) .we use this method to analyze and , and obtain the new upper bounds on described in table [ table : chart ] . besides leading to an improvement for , these results strongly suggest that studying powers higher than 32 will give only negligible improvements .our method is actually more general and can be used to efficiently obtain lower bounds on for any trilinear forms and that have a structure similar " to .indeed , considering possible future applications of our approach , we have been attentive of stating our techniques as generally as possible .to illustrate this point , we work out in the appendix the application of our method to an asymmetric trilinear form , originally proposed in . | this paper presents a method to analyze the powers of a given trilinear form ( a special kind of algebraic constructions also called a tensor ) and obtain upper bounds on the asymptotic complexity of matrix multiplication . compared with existing approaches , this method is based on convex optimization , and thus has polynomial - time complexity . as an application , we use this method to study powers of the construction given by coppersmith and winograd [ journal of symbolic computation , 1990 ] and obtain the upper bound on the exponent of square matrix multiplication , which slightly improves the best known upper bound . |
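for readers who want to reproduce the classical starting point mentioned at the beginning of this article: strassen's construction multiplies two matrices with 7 block multiplications of half size instead of 8, and a divide-and-conquer scheme with r multiplications of blocks of size n/q gives the exponent bound log_q(r). the snippet below is only this textbook calculation, not the coppersmith-winograd tensor analysis developed in the article.

```python
# textbook sanity check: r multiplications of (n/q) x (n/q) blocks per recursion
# level yield the exponent bound log_q(r).  strassen has q = 2, r = 7.
import math

def exponent_bound(q: int, r: int) -> float:
    return math.log(r) / math.log(q)

print(exponent_bound(2, 7))   # 2.807...  (strassen, 1969)
print(exponent_bound(2, 8))   # 3.0       (the naive block algorithm)
```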
this paper presents an importance sampling based approximate optimal planning and control algorithm . optimal motion planning in deterministic and continuous systems is computationally np - complete except for linear time invariant systems . for nonlinear systems , there is a vast literature on approximate solutions and algorithms . in optimal planning , the common approximation scheme is discretization - based . by discretizing the state and input spaces , optimal planning is performed by solving the shortest path problem in the discrete transition systems obtained from abstracting the continuous dynamics , using heuristic - based search or dynamic programming . compared to discretization - based methods , _ sampling - based graph search _ , which includes probabilistic roadmap ( prm ) , rrt , and rrt * , is more applicable to high - dimensional systems . while rrt has no guarantee on the optimality of the path , rrt * computes an optimal path asymptotically provided the cost functional is lipschitz continuous . however , such lipschitz conditions may not be satisfied for some cost functions under specific performance considerations . the key idea in the proposed sampling - based planning method builds on a unification of importance sampling and approximate optimal control . in approximate optimal control , the objective is to approximate both the value function , i.e. , the optimal cost - to - go , and the optimal feedback policy function by weighted sums of _ known _ basis functions . as a consequence , the search space is changed from the infinite trajectory space or policy space to a continuous space of weight vectors , given that each weight vector corresponds to a unique feedback controller . instead of solving the approximate optimal control problem through training actor and critic neural networks ( nns ) using trajectory data , we propose a sampling - based method for sampling the weight vectors of a policy function approximation and searching for the optimal one . this method employs , a probabilistically complete global optimization algorithm , for searching for the optimal weight vector that parametrizes the approximate optimal feedback policy . the fundamental idea is to treat the weight vector as a random variable over a parameterized distribution ; the optimal weight vector corresponds to a dirac delta function , which is the target distribution . the algorithm iteratively estimates the parameter that possesses the minimum kullback - leibler divergence with respect to an intermediate reference model , which assigns a higher probability mass on a set of weights of controllers with improved performance over the previous iteration . in the meantime , a set of sampled weight vectors is generated using the parameterized distribution and the performance of their corresponding policies is evaluated via simulation - based policy evaluation . under mild conditions , the parameterized distribution converges , with probability one , to the target distribution that concentrates on the optimal weight vector with respect to the given basis functions . the employed algorithm resembles another adaptive search algorithm , the cross - entropy ( ce ) method , and provides faster and stronger convergence guarantees by being less sensitive to input parameters . the ce algorithm has been introduced for motion planning based on sampling in the trajectory space .
the center idea is to construct a probability distribution over the set of feasible paths and to perform the search for an optimal trajectory using ce .the parameters to be estimated is either a sequence of motion primitives or a set of via - points for interpolation - based trajectory planning .differ to these methods , ours is the first to integrate importance sampling to estimate parameterization of the optimal policy function approximation for continuous nonlinear systems .since the algorithm performs direct policy search , we are able to enforce robustness and stability conditions to ensure the computed policy is both robust and approximate optimal , provided these conditions can be evaluated efficiently . to conclude, the contributions of this paper are the following : first , we introduce a planning algorithm by a novel integration of model reference adaptive search and approximate optimal control .second , based on contraction theory , we introduce a modification to the planning method to directly generate stabilizing and robust feedback controllers in the presence of bounded disturbances . last but not the least , through illustrative examples , we demonstrate the effectiveness and efficiency of the proposed methods and share our view on interesting future research along this direction .notation : the inner product between two vectors is denoted or . given a positive semi - definite matrix , the -norm of a vector is denoted .we denote for being the identity matrix . is the indicator function , i.e. , if event holds , otherwise . for a real , is the smallest integer that is greater than .we consider continuous - time nonlinear systems of the form where is the state , is the control input , is the initial state , and is a vector field .we assume that and are compact .a feedback controller takes the current state and outputs a control input .the objective is to find a feedback controller that minimizes a finite - horizon cost function for a nonlinear system where is the stopping time , defines the running cost when the state trajectory traverses through and the control input is applied and defines the terminal cost . as an example, a running cost function can be a quadratic cost for some positive semi - definite matrices and , and a terminal cost can be where is a goal state .we denote the set of feedback policies to be . for infinite horizon optimal control ,the optimal policy is independent of time and a feedback controller suffices to be a minimizing argument of ( see ref . ) . for finite - horizon optimal control ,the optimal policy is time - dependent .however , for simplicity , in this paper , we only consider time - invariant feedback policies and assume the time horizon is of sufficient length to ignore the time constraints . algorithm , introduced in , aims to solve the following problem : where is the solution space and is a deterministic function that is bounded from below .it is assumed that the optimization problem has a unique solution , i.e. , and for all , .the following regularity conditions need to be met for the applicability of .[ assume1 ] for any given constant , the set has a strictly positive lebesgue or discrete measure .this condition ensures that any neighborhood of the optimal solution will have a positive probability to be sampled .[ assume2 ] for any constant , , where , and we define the supremum over the empty set to be . 
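a compressed sketch of the approach outlined above may help fix ideas before the individual components are spelled out: candidate weight vectors parameterize a feedback policy u = w . phi(x), each candidate is scored by forward-simulating the dynamics and accumulating a quadratic running cost plus a terminal cost, and the sampling distribution over weights is refit to the best-performing samples. everything below is illustrative: the dynamics, basis, horizon and numeric constants are made up, and the update shown is the plain cross-entropy style elite refit with smoothing rather than the exact reference-model weighting of the algorithm.

```python
# illustrative sampling loop: gaussian over policy weights, simulation-based
# policy evaluation, elite-quantile refit with smoothing.  not the paper's code.
import numpy as np

def phi(x):                      # assumed basis: plain linear state feedback
    return x

def simulate_cost(w, x0, dt=0.02, T=3.0):
    Q, R = np.eye(2), 0.1 * np.eye(1)
    x, cost = np.array(x0, float), 0.0
    for _ in range(int(T / dt)):
        u = np.clip(np.array([w @ phi(x)]), -5.0, 5.0)   # saturated input
        dx = np.array([x[1], -np.sin(x[0]) + u[0]])      # toy pendulum-like dynamics
        cost += (x @ Q @ x + u @ R @ u) * dt              # quadratic running cost
        x = x + dt * dx
    return cost + 10.0 * x @ x                            # terminal cost

def adaptive_search(x0, n_samples=200, iters=30, rho=0.1, smooth=0.7, seed=0):
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(2), 4.0 * np.eye(2)
    for _ in range(iters):
        W = rng.multivariate_normal(mean, cov, size=n_samples)
        costs = np.array([simulate_cost(w, x0) for w in W])
        elite = W[np.argsort(costs)[: int(rho * n_samples)]]
        mean = smooth * elite.mean(axis=0) + (1 - smooth) * mean
        cov = smooth * np.cov(elite.T) + (1 - smooth) * cov + 1e-6 * np.eye(2)
    return mean

print(adaptive_search([1.0, 0.0]))   # approximate optimal feedback gains
```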
*selecte a sequence of reference distributions with desired convergence properties .specifically , the sequence will converge to a distribution that concentrates only on the optimal solution .* selecte a parametrized family of distribution over with parameter .* optimize the parameters iteratively by minimizing the following kl distance between and . where is the lebesgue measure defined over .the sample distributions can be viewed as compact approximations of the reference distributions and will converge to an approximate optimal solution as converges provided certain properties of is retained in .note that the reference distribution is unknown beforehand as the optimal solution is unknown .thus , the algorithm employs the estimation of distribution algorithms to estimate a reference distribution that guides the search . to make the paperself - contained , we will cover details of in the development of the planning algorithm .in this section , we present an algorithm that uses in a distinguished way for approximate optimal feedback motion planning . the _ policy function approximation _ is a weighted sum of basis functions , where are basis functions , and the coefficients are the weight parameters , .an example of basis function can be polynomial basis ] and ^\intercal ] for specifying the quantile , the _ improvement parameter _ , a _ sample increment percentage _ , an initial sample size , a _ smoothing coefficient _ ] if or ] , and verify whether , at each time step along the nominal trajectory in the closed - loop system under control , the following condition holds . where is the component in the matrix .we verify this condition numerically at discrete time steps instead of continous time .further , if the function is semi - continuous , according to the extreme value theorem , this condition can be verified by evaluating at all critical points where and the boundary of the set .the modification to the planning algorithm is made in step 3 ) , if a controller of elite sample does not meet the condition , then is rejected from the set of elite samples . alternatively , one can do so implicitly by associating with a very large cost . however , since the condition is sufficient but not necessary as we have the matrix , constant and pre - fixed and is chosen to be a constant matrix , the obtained robust controller may not necessary be optimal among all robust controllers in . a topic for future work is to extend joint planning and control policies with respect to adaptive bound , , and a uniformly positive definite and time - varying matrix .in this section , we use two examples to illustrate the correctness and efficiency of the proposed method .the simulation experiments are implemented in matlab on a desktop with intel xeon e5 cpu and 16 gb of ram . to illustrate the correctness and sampling efficiency in the planning algorithm, we consider an optimal control of systems with non - quadratic cost . for this class of optimal control problems ,since there is no admissible heuristic , one can not use any planning algorithm facilitated by the usage of a heuristic function .moreover , the optimal controller is nonlinear given the non - quadratic cost .consider a system where and with and .the initial state is ] .suppose the magnitude of external disturbance is bounded by .the following parameters are used in stability verification : , at any time , for all such that , the controller ensures because . 
with this choice for stability analysis , the constraint in this case ,if we select nonpositive , and , then closed - loop system , which is a nonlinear polynomial system , will become globally contracting .0.32 under feedback controller computed with .,title="fig : " ] 0.33 under feedback controller computed with .,title="fig : " ] 0.33 figures [ fig : cost ] and [ fig : mean ] show the convergence result with in one simulation in terms of cost and the mean of the multivariant gaussian over iterations .the following parameters are used : initial sample size , improvement parameter , quantile percentage , smoothing parameter , sample increment parameter .the algorithm converges after iterations with samples to the mean ^\intercal ] . in simulation , .we select as basis functions and define ^\intercal ] .the basis vector is ^\intercal ] and thus the total number of basis functions is .the control input ^\intercal$ ] where and .the total number of weight parameters is twice the number of bases and in this case . 0.45 computed using the mean of multivariate gaussian over iterations ( from the lightest to the darkest ) .( b ) the convergence of the covariance matrix .( c ) the total cost evaluated at the mean of the multivariate gaussian over iterations ., title="fig : " ] 0.45 the following parameters are used : initial sample size , improvement parameter , smoothing parameter , sample increment percentage , and . in fig .[ fig : dubinstraj ] we show the trajectory computed using the estimated mean of multivariate gaussian distribution over iterations , from the lightest ( -th iteration ) to the darkest ( the last iteration when stopping criterion is met ) .the optimal trajectory is the darkest line . in fig .[ fig : dubinscost ] we show the cost computed using the mean of multivariate gaussian over iterations .converges after 22 iterations with samples and the optimal cost is .each iteration took about to seconds .however , it generates a collision - free path only after iterations . due to input saturation, the algorithm is only ensured to converge to a local optimum .however , in 24 independent runs , all runs converges to a local optimum closer to the global one , as shown in the histogram in fig .[ fig : dubins_histo ] .our current work is to implement trajectory - based contraction analysis using time - varying matrices and adaptive bound , which are needed for nonlinear dubins car dynamics .in this paper , an importance sampling - based approximate optimal planning and control method is developed . in the control - theoretic formulation of optimal motion planning ,the planning algorithm performs direct policy computation using simulation - based adaptive search for an optimal weight vector corresponding to an approximate optimal feedback policy .each iteration of the algorithm runs time linear in the number of samples and in the time horizon for simulated runs .however , it is hard to quantify the number of iterations required for to converge .one future work is to consider incorporate multiple - distribution importance sampling to achieve faster and better convergence results .based on contraction analysis of the closed - loop system , we show that by modifying the sampling - based policy evaluation step in the algorithm , the proposed planning algorithm can be used for joint planning and robust control for a class of nonlinear systems under bounded disturbances . in future extension of this work ,we are interested in extending this algorithm for stochastic optimal control .l. e. 
kavraki , p. svestka , j .- c .latombe , and m. h. overmars , `` probabilistic roadmaps for path planning in high - dimensional configuration spaces , '' _ ieee transactions on robotics and automation _ , vol . 12 , no . 4 , pp . 566580 , 1996 .s. c. livingston , e. m. wolff , and r. m. murray , `` cross - entropy temporal logic motion planning , '' in _ proceedings of the 18th international conference on hybrid systems : computation and control_.1em plus 0.5em minus 0.4emacm , 2015 , pp . | in this paper , we propose a sampling - based planning and optimal control method of nonlinear systems under non - differentiable constraints . motivated by developing scalable planning algorithms , we consider the optimal motion plan to be a feedback controller that can be approximated by a weighted sum of given bases . given this approximate optimal control formulation , our main contribution is to introduce importance sampling , specifically , model - reference adaptive search algorithm , to iteratively compute the optimal weight parameters , i.e. , the weights corresponding to the optimal policy function approximation given chosen bases . the key idea is to perform the search by iteratively estimating a parametrized distribution which converges to a dirac s delta that infinitely peaks on the global optimal weights . then , using this direct policy search , we incorporated trajectory - based verification to ensure that , for a class of nonlinear systems , the obtained policy is not only optimal but robust to bounded disturbances . the correctness and efficiency of the methods are demonstrated through numerical experiments including linear systems with a nonlinear cost function and motion planning for a dubins car . |
due to the central limit theorem a great deal of phenomena can be described by gaussian statistics . this also guides our perception of the risks of large deviations from an expectation value . consequently , the occurrence of any aggravated probability of extreme events is always cause for concern and subject of intense research interest . in a large variety of systems where heavy - tailed distributions are observed , gaussian statistics holds only locally ; the parameters of the distribution are changing , either in time or in space . thus , to describe the sample statistics for the whole system , one has to average the parametric distribution over the distribution of the ( shape ) parameter . this construction is known as _ compounding _ or _ mixture _ in the mathematics and as _ superstatistics _ in the physics literature . an important example for parameter distribution functions is the k - distribution , mentioned for the first time in 1978 by jakeman and pusey . it was introduced in the context of intensity distributions , and their significance for scattering processes of a wide range of length scales was stressed . moreover , the distribution is known to be an equilibrium solution for the population in a simple birth - death - immigration process which was already applied in the description of eddy evolution in a turbulent medium . the underlying picture of turbulence assumes that large eddies are spontaneously created and then give birth to generations of children eddies , which terminates when the smallest eddies die out due to viscous dissipation . in , jakeman and pusey use the k - distribution for fitting data of microwave sea echo , which turned out to be highly non - rayleigh . the k - distribution is also found as a special case of a full statistical - mechanical formulation for non - gaussian compound markov processes , developed in . field and tough find k - distributed noise for the diffusion process in electromagnetic scattering . experimentally the k - distribution appeared in the contexts of irradiance fluctuations of a multipass laser beam propagating through atmospheric turbulence , synthetic aperture radar data , ultrasonic scattering from tissues and mesoscopic systems . also in our study we will encounter the k - distribution for one of the systems under consideration . compounded distributions can be applied to very different empirical situations : they can describe aggregated statistics for many time series , where each time series obeys stationary gaussian statistics , the parameters of which vary only between time series . in this case it is straightforward to estimate the parameter distribution . the situation is more difficult when we consider the statistics of a single long time series with time - varying parameters . in this non - stationary case , one often makes an ad hoc assumption about the analytical form of the parameter distribution , and only the compounded distribution is compared to empirical findings . in this paper we address the problem of determining the parameter distribution empirically for univariate non - stationary time series . specifically we consider the case of gaussian statistics with time - varying variance . in this endeavor we encounter several problems : if the variance for each time point is purely random , as the compounding ansatz would suggest , we have no way of determining the variance distribution from empirical data . a prerequisite for an empirical approach to the parameter distribution is a time series which is quasi - stationary on short time intervals .
in other words ,the variance should vary only slowly compared to the time scale of fluctuations in the signal .the estimation noise for the local variances competes with the variance distribution itself .therefore the time interval on which quasi - stationarity holds , should not be too short .furthermore , we have to heed possible autocorrelations in the time series themselves , since they might lead to an estimation bias for the local variances .our aim is to test the validity of the compounding approach on two different data sets .the paper is organized as follows : in section [ sec2 ] we give a short summary of the compounding approach and present two recent applications where the k - distribution comes into play . in section [ sec3 ]we introduce the two systems we are going to analyse , a table top experiment on air turbulence and the empirical time series of exchange rates between us dollar and euro . in section [ sec4 ] we address the problem of estimating non - stationary variances in univariate time series .our empirical results are presented in section [ sec5 ] .we consider a distribution of random variables , ordered in the vector .it is also a function of a parameter that determines the shape or other features of the distribution , e.g. the variance of a gaussian .if , in a given data set , the parameter varies in an interval , one can try to construct the distribution of as the linear superposition of all distributions with . here , is the weight function determining the contribution of each value of in the superposition .since itself typically is a random variable , we assume that the function is a proper distribution . in particular , it is positive semidefinite . as each andthe resulting have to be normalized with respect to the random vector , eq .( [ comp1 ] ) implies the normalization the physics reasons for the variation of the parameter can be very different . in non equilibrium thermodynamics , might be the locally fluctuating temperature .although our systems are not of a thermodynamic kind , we also have in mind non stationarities . in recent experiments we studied the propagation of microwaves through an arrangement of disordered scatterers in a cavity .the distribution of the electric fields was measured at fixed frequencies as a function of position .then time - dependent wave fields were generated by superposition of patterns , here is the wave pattern at frequency , and is a random phase . for fixed positions ,always a rayleigh distribution was found in the time sequence for the distribution of intensities , this is nothing but a manifestation of the central limit theorem . the variance , _i.e. _ , the averaged depends on the position .the large amount of data made it possible to extract the distribution of the parameter .in good approximation , it turned out to be a distribution , see fig .[ fig : freak ] . the number of degrees of freedom was related to the number of independent field components and took a value of .the authors of refs . then used the compounding ansatz eq .( [ comp1 ] ) in the form the integral can be done and yields where is the modified bessel function of degree .this is the k - distribution introduced in the introduction . omitting local regions with extremely high amplitudes ,so called `` hot spots '' , the intensity distributions could be perfectly well interpreted in terms of k - distributions , see fig .[ fig : freak ] . 
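the compounding construction of eq . ( [ comp1 ] ) is easy to reproduce numerically: draw a local variance from a chi-square-type ( gamma ) distribution, then draw a gaussian value with that variance; the aggregated sample shows the heavy tails that the k-distribution describes. the shape parameter and sample size below are arbitrary illustration values, not the experimental numbers quoted in the text.

```python
# numerical illustration of the compounding ansatz: conditionally gaussian data
# whose variance fluctuates according to a gamma (chi-square type) law.
import numpy as np

rng = np.random.default_rng(1)
n_shape = 4.0                                    # illustrative "degrees of freedom"
sigma2 = rng.gamma(shape=n_shape, scale=1.0 / n_shape, size=1_000_000)  # mean variance 1
samples = rng.normal(0.0, np.sqrt(sigma2))       # compounded, heavy-tailed sample
fixed = rng.normal(0.0, 1.0, size=samples.size)  # stationary reference, variance 1

for q in (0.999, 0.9999):
    print(q, np.quantile(np.abs(samples), q), np.quantile(np.abs(fixed), q))
# the compounded sample has markedly larger extreme quantiles, i.e. heavier tails.
```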
[ figure fig : freak : ( top ) ... found for the 780 pixels of our measurement . the inset shows the same data using a semi - logarithmic scale . the solid curve is a distribution with degrees of freedom . ( bottom ) intensity distribution for the time - dependent wave patterns generated by eq . ( [ eq : pulse ] ) ; the solid ( red ) line is given by eq . ( [ eq : kbess ] ) . ] we briefly sketch another example which stems from finance . details can be found in ref . . the conceptual difference to the previous example is that the compounding formula ( [ comp1 ] ) appears as an intermediate result , not as an ansatz . we consider a selection of stocks with prices belonging to the same market . one is interested in the distribution of the relative price changes over a fixed time interval , also referred to as returns . as the companies operate on the same market , there are correlations between the stocks which have to be measured over a time window much larger than . the corresponding covariances form the matrix . the business relations between the companies as well as the market expectations of the stock market traders change in time . thus , the market is non stationary and the correlation coefficients fluctuate in time . only for time windows of a month or less , is approximately constant . the multivariate distribution of the returns ordered in the component vector at a given time is well described by the gaussian . the non stationarity over longer time windows can be modeled by observing that the ensemble of the fluctuating covariance matrices can be approximated by an ensemble of random matrices . in ref . a wishart distribution was assumed for this ensemble . the ensemble average of the distribution ( [ multivar ] ) over the wishart ensemble yields , where is the sample - averaged covariance matrix over the entire time window . as is fixed , this result has the form of the compounding ansatz ( [ comp1 ] ) . furthermore , it closely matches the result ( [ comp3 ] ) found in the context of microwave scattering . the number of degrees of freedom in the distribution determines the variance in the distribution of the random covariance matrices . the role of the locally averaged intensity is now played by an effective parameter which fully accounts for the ensemble average . again , a k - distribution follows , in which the bilinear form takes the place of the intensity in eq . ( [ eq : kbess ] ) . in both examples , averages over fluctuating quantities produce heavy - tailed distributions which describe the vast majority of the large events . the first data set is obtained by measuring the noise generated by a turbulent air flow .
for the turbulence generation we used a standard fan with a rotor frequency of 18.44hz .we restricted ourselves to standard audio technique handling frequencies up to 20khz and standard sampling rates of 48khz offering reliable quality at an attractive price .the microphone for the sound recording is a e 614 by sennheiser with a frequency response of 40hz - 20khz , a good directional characteristic and a small diameter of 20 mm .it guarantees a broadband frequency resolution and a point - like measuring position .an external sound card with matching properties was necessary to use the full capacity of the quality of microphone and to minimize the influence of the intrinsic noise of the pc .[ fig : sound ] shows a photograph or the used setup .a microphone has been placed in front of a fan running continuously and generating a highly turbulent air flow .the microphone records the time signal of the sound waves excited by the turbulence .the details about the analysed time series will be discussed in section [ sec4 ] .the foreign exchange markets have the peculiar feature of all - day continuous trading .this is in contrast to stock markets ,where the trading hours of different stock exchanges vary due to time zones , with partial overlap of different markets and very peculiar trading behavior at the beginning and the end of each trading day . therefore foreign exchange rates are particularly suited for the study of long time series .we consider the time series of hourly exchange rates between euro and us dollar in the time period from january 2001 to may 2013 .the empirical data were obtained from ` www.fxhistoricaldata.com ` .we denote the time series of exchange rates by . from thesewe calculate the time series of returns , i.e. , the relative changes in the exchange rates on time intervals , since we work with hourly data , the smallest possible value for is one hour .however , as we will see later on , a return interval of one trading day , , is preferable for the variance estimation .note that foreign exchange rates are typically modeled by a multiplicative random process , such as a geometric brownian motion , see , _e.g. _ , .therefore we consider the relative changes of the exchange rates instead of the exchange rates themselves .while the latter resemble at least locally - a lognormal distribution , the returns are approximately gaussian , conditioned on the local variance , that is .we consider the problem of univariate time series with time - dependent variance . more specifically , we consider time series where the variance is changing , but exhibits a slowly decaying autocorrelation function .this point is crucial , because otherwise it is not possible to make meaningful estimates of the local variances .time series with this feature show extended periods of large fluctuations interupted by periods with moderate or small fluctuations .this is illustrated in fig .[ fig : signal ] for the two data sets we are studying in this paper . in the top plot of fig .[ fig : signal ] , we show the sound signal for the ventilator measurement . in the bottom plot , the time series of daily returns for the foreign exchange data is plotted . 
in both caseswe observe the same qualitative behavior , which is well - known in the finance literature as volatility clustering .the compounding ansatz for univariate time series assumes a normal distribution on short time horizons , where the local variance is nearly stationary .however , we wish to determine the distribution of the local variances empirically , since it is a critical part in the compounding ansatz .if the variances were fluctuating without a noticable time - lagged correlation , this would not be feasible .still , we need to establish the right time horizon on which to estimate the local variances . introduced a method to locally normalize time series with autocorrelated variances . to this end, a local average was subtracted from the data and the result was divided by a local standard deviation . in this spirit , we determine the time horizon on which this local normalization yields normal distributed values and analyse the corresponding local variances . another aspect we need to take into account is a possible bias in the variance estimation which occurs for correlated events . in fig .[ fig : acf ] we show the autocorrelation function ( acf ) of the measured sound signal , as well as the autocorrelation function of the absolute value of hourly returns . both plots hint at possible problems for the variance estimation . due to the high sampling frequency , the sound signal is highly correlated . in other words ,the sampling time scale is much shorter than the time scale on which the turbulent air flow changes . after 2500 data points , or about 52 ms ,the autocorrelation function has decayed to zero .consequently , we consider only every 2500th data point for our local variance estimation . to improve statistics , we repeat the variance estimation starting with an offset of 1 to 2499 .the results are presented in the following section . in the case of the foreign exchange datawe are confronted with a different problem .the consecutive hourly returns are not correlated .while local trends may always exist , it is unpredictable when a positive trend switches to a negative one , and _vice versa _ , see ref . . however , the autocorrelation of the absolute values shows a rich structure which is due to characteristic intraday variability .this would lead to a biased variance estimation and , consequently , to a distortion of the variance distribution .therefore we consider returns between consecutive trading days at the same hour of the day .put differently , we consider for the returns and get 24 different time series , one for each hour of the day as starting point . + +we first discuss the results for the turbulent air flow . as described in the previous section , we sliced the single measurement time series into 2500 time series with lower sampling rate , taking only every 2500th measurement point .this is necessary to avoid a bias in the estimation of the local variances . 
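the estimation procedure described above is simple to write down explicitly: cut the series into short windows that are assumed quasi-stationary, estimate a local mean and variance per window, and locally normalize. the sketch below also includes the thinning step used for the sound signal (keeping only every k-th point so that the retained values are effectively uncorrelated); the window length and the stand-in data are placeholders, not the values or data of the paper, while k = 2500 follows the text.

```python
# sketch of local variance estimation with thinning and local normalization.
import numpy as np

def local_variances(x, window):
    n = (len(x) // window) * window
    blocks = x[:n].reshape(-1, window)          # non-overlapping quasi-stationary windows
    mu = blocks.mean(axis=1, keepdims=True)
    var = blocks.var(axis=1, ddof=1, keepdims=True)
    normalized = ((blocks - mu) / np.sqrt(var)).ravel()   # should be ~ standard normal
    return var.ravel(), normalized

def thinned(signal, k=2500, offset=0):
    """keep every k-th sample to suppress serial correlation, as done for the sound data."""
    return signal[offset::k]

raw = np.random.default_rng(0).normal(size=10_000_000)    # stand-in for a recording
variances, z = local_variances(thinned(raw, k=2500), window=20)
print(variances[:5], z.std())
```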
before proceeding ,each of these time series is globally normalized to mean zero and standard deviation one .figure [ fig : venti ] shows the empirical results for the distribution of local variances and the compounded distribution .the local variances are rather well described by a distribution with degrees of freedom .we find to provide the best fit to the data .the distribution of the empirical sound amplitudes is well described by a k - distribution with the same which fits the variance distribution .hence , we arrive at a consistent picture , which supports our compounding ansatz for this measurement .-distribution with degrees of freedom .( bottom ) distribution of the sound amplitudes , compared to the k - distribution with parameter . , title="fig:",scaledwidth=46.0% ] + -distribution with degrees of freedom . ( bottom )distribution of the sound amplitudes , compared to the k - distribution with parameter ., title="fig:",scaledwidth=46.0% ] the results for the daily returns of eur - usd foreign exchange rates are shown in fig . [fig : forex ] . as outlined in section [ sec4 ], we calculated the daily returns as the relative changes of the exchange rate between consecutive trading days with respect to the same hour of each day . this procedure yields 24 time series of daily returns .we normalize each time series to mean zero and standard deviation one .this allows us to produce a single aggregated statistics . in the top plot of fig .[ fig : forex ] we show the histogram of local variances , i.e. the variances estimated on 13-day intervals . in accordance with the finance literature , the empirical variances follow a lognormal distribution over almost three orders of magnitude , with only some deviations in the tail .the histogram of the daily returns is shown in the bottom plot of fig .[ fig : forex ] .the empirical result agrees rather well with the normal - lognormal compounded distribution .it is important to note , however , that we only achieve this consistent picture of variance and compounded return distribution because we have taken into account all the pitfalls of variance estimation , which we described in section [ sec4 ] .we applied the compounding approach to two different systems , a ventilator setup generating turbulent air flow and foreign exchange rates .both systems are characterized by univariate time series with non - stationary variances .our main objective was to empirically determine the distribution of variances and thus arrive at a consistent picture .the estimation of variances from a single , non - stationary time series presents several pitfalls , which have to be taken into account carefully .first of all , we have to avoid serial correlations in the signal itself .these might otherwise lead to an estimation bias . for the sound measurement, we had to reduce the sampling rate of the data to achieve this .the foreign exchange data presented another obstacle for variance estimation : we observed a characteristic intraday variability which had to be taken into account .last but not least , it is a prerequisite that the non - stationary variances are not purely stochastic , but exhibit a slowly decaying autocorrelation . otherwise we would not be able to determine a reasonable variance distribution for the compounding ansatz .when we take all these aspects into account , we arrive at the correct variance distribution . 
in good approximationwe found a distribution in the case of ventilator turbulence , which leads to a k - distribution for the compounded statistics .for the foreign exchange returns we observe lognormal distributed variances ; and the normal - lognormal compounded distribution fit the return histogram well .a central assumption in the compounding ansatz is the stationarity of the variance distribution .this assumption might not always be satisfied and lead to deviations from the compounded distribution .the sound measurements have been supported by the deutsche forschungsgemeinschaft via the forschergruppe 760 `` scattering systems with complex dynamics '' .j. s. lee , d. l. schuler , r. h. lang , and k. j. ranson , in _ geoscience and remote sensing symposium , 1994 .surface and atmospheric remote sensing : technologies , data analysis and interpretation ., international _ ( ieee , new york , 1994 ) , vol . 4 , pp . | a defining feature of non - stationary systems is the time dependence of their statistical parameters . measured time series may exhibit gaussian statistics on short time horizons , due to the central limit theorem . the sample statistics for long time horizons , however , averages over the time - dependent parameters . to model the long - term statistical behavior , we compound the local distribution with the distribution of its parameters . here we consider two concrete , but diverse examples of such non - stationary systems , the turbulent air flow of a fan and a time series of foreign exchange rates . our main focus is to empirically determine the appropriate parameter distribution for the compounding approach . to this end we have to estimate the parameter distribution for univariate time series in a highly non - stationary situation . |
the propagation of two pulses in resonant interaction in a - atomic medium has been widely studied during the last decade , in particular with application to quantum information processing ( see , e.g. , reviews ) .the possibility of quantum information storage in the gas phase has been demonstrated in cold atoms and in hot ensembles . achieving information storage in solid - state systems , which are more attractive due to their higher density , compactness , and absence of diffusion ,has been pursued .the main drawbacks preventing the efficiency of storage for such solid - state materials are huge inhomogeneous broadenings and high rates of decoherence .for instance , in crystalline films doped with rare - earth metals the rates of transverse relaxations amount to tens of ghz . in practice an efficient storage of information requires rather large optical length such that even a weak loss rate will ruin it . in order to reduce the inhomogeneous broadening ,it was proposed in a number of works to use the so - called hole burning technique .it has been shown that media prepared in such a way allow efficient coherent population transfer in systems .however , the hole burning technique faces a so far unsolved problem that leads to the reduction of the optical length of the samples , which will be detrimental in general for the efficiency of the storage . in the present workwe show that , for the storage of optical information ( i.e. of classical fields ) , the recording length can be dramatically reduced with the use of intense pulses .the possibility to store and retrieve optical information in resonant media has mainly been studied in the `` linear approximation '' with respect to the so - called probe field that has to be stored , i.e. for a weak probe pulse .it was usually assumed that the control pulse propagates in a medium without pulse shape change .numerical simulation of this problem without restrictions on the probe intensity was performed in .analytical studies of the problem taking into account the group velocities of both pulses were performed in in the limit of pulses of duration much shorter than all the relaxation times , but sufficiently long to allow adiabatic evolution during the interaction .it has been found that the length of information storage depends remarkably on the ratio between the oscillator strengths of the adjacent transitions that determine the group velocities of the pulses in a medium .the present work aims at a complete analytical study of information storage in case of arbitrary relaxation times and intensities of the probe and control fields .we show analytically and numerically that it is possible , for a proper choice of solid - state medium , to dramatically reduce the optical length needed to store intense short pulses .this would in particular make more efficient the use of the hole burning technique .the paper is organized as follows : we first describe the model , and analyze the propagation in the adiabatic limit .we next derive the conditions of storage for short lengths of the medium , before concluding . -systeminteracting with two laser pulses . 
]we consider a medium of three - level atoms with two ground states and a metastable excited state , of energy , , and two laser pulses referred to as the probe and control pulses , coupling the transition respectively and ( see fig .the interaction hamiltonian in the rotating wave approximation can be written in the basis as \ ] ] with the one - photon detuning , ( ) the probe ( coupling ) laser frequency , the rabi frequencies associated to the corresponding field amplitudes and to the dipole moments of the corresponding atomic transitions .we consider an exact two - photon resonance .the propagation of two laser pulses in the medium is described by the maxwell equations which , in the slowly - varying - amplitude approximation and in the moving coordinate system ( such that the original wave operator becomes ) , read here are the coupling factors with the density of atoms .the density matrix elements are determined by the system of equations ( where the dot denotes the derivative , since ) [ syst ] with and the rates of spontaneous emission from state 3 to states 1 and 2 respectively , the dephasing rate between the two ground states , and the transverse relaxation rate .we moreover assume pulses of durations much shorter than the characteristic time of decoherence of the ground states ( usually microseconds in solids ) such that will be neglected .the envelopes of the fields , the rabi frequencies , and the density matrix components depend on the position ] , , at the entrance of the medium with and : a ) usual scheme ( shown here for , ) ; b ) proposed scheme with , ., title="fig : " ] $ ] , , at the entrance of the medium with and : a ) usual scheme ( shown here for , ) ; b ) proposed scheme with , ., title="fig : " ] the storage requires the stopping of the excitation , , for all the characteristics connected with the initial excitation at the entrance of the medium . we can estimate the length required to completely store the excitation from ( [ xi2 ] ) using the longest characteristic , i.e. and : [ xmax ] with and the duration of respectively the probe and the control , and , .minimizing this quantity requires thus and a weak probe pulse , as is well known . on the other hand , adiabaticity of the interaction is required .criterion for this adiabaticity has been analyzed in , where it has been shown that , under the certain condition adiabaticity is broken with formation of shock - wave fronts for distances exceeding the critical length estimated as the latter approximation holds for a probe pulse of duration shorter than the control .this shows that reducing to lower will however also shorten .preserving adiabaticity usually requires thus a strong control pulse and an efficient storage will take place when through their asymmetry in and with respect to , eqs .( [ xmax ] ) and ( [ x0 ] ) show that we can shorten with the use of a _ weak control _ pulse in addition to using a small parameter , while preserving the condition of adiabaticity using a_ strong probe _ ( see fig . 2 for the pulse scheme ) . for : a ) usual scheme of pulses with gaussian envelopes at the entrance of the medium displayed in fig .2a with ; b ) novel scheme of pulses ( displayed in fig . 2b).,title="fig : " ] for : a ) usual scheme of pulses with gaussian envelopes at the entrance of the medium displayed in fig .2a with ; b ) novel scheme of pulses ( displayed in fig . 
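the matrix form of the rotating-wave hamiltonian referred to above did not survive the text extraction. for reference, the standard textbook form for a three-level lambda system driven at two-photon resonance, written in the basis ( |1> , |2> , |3> ) with real rabi frequencies and one-photon detuning, is given below; sign and factor conventions vary between papers, so this should be read as the generic expression rather than the exact equation of this article.

```latex
H = -\frac{\hbar}{2}
\begin{pmatrix}
0 & 0 & \Omega_p \\
0 & 0 & \Omega_c \\
\Omega_p & \Omega_c & 2\Delta
\end{pmatrix},
\qquad
\Omega_{p} = \frac{d_{13} E_p}{\hbar}, \quad
\Omega_{c} = \frac{d_{23} E_c}{\hbar}.
```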
2b).,title="fig : " ] + we have performed numerical calculations of the full set of the density - matrix and maxwell equations for both pulses taking into account a fast relaxation .it indeed shows that the storage of the probe field in inhomogeneously broadened media is more efficient when we use atoms of different adjacent transition strengths .3b presents the temporal evolution of the probe pulse at different propagation lengths ( normalized as ) for the novel scheme of the pulses at the entrance of the medium shown in fig .the chosen parameters lead to , , and , which satisfy the adiabatic conditions ( [ conds ] ) and ( [ cond_fast_relax ] ) at the entrance of the medium ( here the time of interaction is ) . for the comparison we present in fig .3a the dynamics for the usual scheme of pulses used for the optical information storage .one can see from figs . 3a and 3b that in the case of the novel scheme the length of information storage is dramatically reduced as compared with that of the usual case .namely , for the novel scheme at the propagation length of about 1 the probe pulse is completely absorbed by the medium while at the same length in the usual scheme the probe pulse remains still unabsorbed .we have shown that the optical information storage of a probe in inhomogeneously broadened media of atoms occurs for short medium length when strong probe pulses , weak control pulses and very different strengths of adjacent transitions are used .for such a scheme the optical length may be of the order of unity in contrast to the conventional scheme where the optical length must well exceed unity .the proposed scheme would allow one to reduce the unavoidable losses caused by decoherence .the required systems with very different strengths of adjacent transitions are expected to be found in , e.g. , rare - earth doped solid - state materials .study to elaborate efficient schemes to retrieve the stored pulse in this strong regime is in progress .we briefly recall here the characteristics method ( see for instance ) to solve the maxwell equation in the case of a non - linear group velocity of the form : .we want to transform this linear first order partial differential equation into an ordinary differential equation along the characteristic curves . using the chain rule , we obtain if we set ( choosing and denoting ) equation ( [ ode ] ) leads to the solution which is thus a constant along the characteristics . allows one to label a characteristics rewritten from eq .( [ cond_odeb ] ) as ( since ) the non - linear velocity derived from ( [ cond_odec ] ) : corresponds to the instantaneous velocity along a characteristic .we acknowledge support from intas 06 - 100001 - 9234 , the research project ansef ps - opt-1347 of the republic of armenia , the agence nationale de la recherche ( anr comoc ) and the conseil rgional de bourgogne .lukin , d.f .phillips , a. fleischhauer , a. mair , r.l .walsworth , phys .lett . * 86 * , 783 ( 2001 ) ; a.s .zibrov , a.b .matsko , o. kocharovskaya , y.v .rostovtsev , g.r .welch , m.o .scully , phys.rev.lett . *88 * , 103601 ( 2002 ) ; m.d .eisaman , a. andre , f. massou , m. fleischhauer , a.s .zibrov , m.d .lukin , nature * 438 * , 837 , ( 2005 ) ; r. pugatch , m. shuker , o. firstenberg , a. ron , n. davidson , phys .lett . * 98 * , 203601 ( 2007 ) ; p. k. vudyasetu , r. m. camacho , j. c. howell phys .rev . lett . * 100 * , 123903 ( 2008 ) .e. kuznetsova , o. 
kocharovskaya , ph .hemmer , m.o .scully , phys .a * 66 * , 063802 ( 2002 ) ; a.v .turukhin , v.s .sudarshanam , m.s .shahriar , j.a .musser , b.s .ham , p.r .* 88 * , 023602 ( 2002 ) ; s.e .yellin , p.r .hemmer , phys .a , * 66 * , 013803 ( 2002 ) ; l. alexander , j. j. longdell , m. j. sellars , and n. b.manson , phys .lett . * 96 * , 043602 ( 2006 ) ; m. u. staudt , s. r. hastings - simon , m. nilsson , m. afzelius , v. scarani , r. ricken , h. suche , w. sohler , w. tittel , and n. gisin , phys . rev . lett . * 98 * , 113601 ( 2007 ) ; s , m. d. lukin , and r. l. walsworth , phys . rev . lett . * 98 * , 243602 ( 2007 ) .i. novikova , a. v. gorshkov , d. f. phillips , a. s. s , m. d. lukin , r. l. walsworth , phys .lett . * 98 * , 243602 ( 2007 ) ; a. v. gorshkov , a. andr , m. fleischhauer , a. s. s , m. d. lukin , phys .lett . * 98 * , 123601 ( 2007 ) . | we propose a novel scheme of storage of intense pulses which allows a significant reduction of the storage length with respect to standard schemes . this scheme is particularly adapted to store optical information in media with fast relaxations . |
the internet of things ( iot ) and 5 g architectures require connecting heterogeneous devices including machine - to - machine ( m2 m ) and wireless sensor networking ( wsn ) units with potentially lower data rate but low latency , reliable , energy efficient and secure mechanisms .these applications require short packets and simple modulation / demodulation mechanisms for the widespread utilization of iot .the massive number of devices require not only low hardware complexity schemes but also methods of energy harvesting such as simultaneous wireless information and power transfer ( swipt ) transmitting data and energy by using radio frequency ( rf ) or magneto - inductive ( mi ) methods .however , the existing literature generally concentrates on the energy versus capacity trade - off including power allocation schemes .the recent studies analyze rf based swipt modulation methods including spatial domain ( sd ) and intensity based energy pattern changes .rf solutions are not mature today to achieve swipt and require specialized circuits for rf to direct current ( dc ) conversion . in this article , novel and practical network topology modulation and demodulation architecturesare presented for mi communications ( mic ) networks by changing the spatial pattern of power transmitting active coils . the proposed mic based iot architecture ( mi - iot ) provides reliable , simple and secure mechanisms with low cost and low latency performances for connecting everyday objects with direct power transmission capability .mic is an alternative method with the advantage of uniformity for varying environments without medium specific attenuation , multi - path and high propagation delay in challenging environments including underwater , underground and nanoscale medium with in - body and on - chip applications . in and , a trade - off analysis is presented for the problem of information and power transfer on a coupled - inductor circuit with power allocation policies . in , a nanoscale communication architecture with graphene coils is presented satisfying both power and data transmissions for in - body and on - chip applications . on the other hand , existing studies on mic networkstreat other coils as sources of interference including multiple - input multiple - output ( mimo ) and diversity architectures .however , swipt architectures , mac schemes utilizing the same time - frequency resources , and modulation methods other than classical signal waveform approaches are not discussed . 
in this article , the information is embedded to coil positions by _ modulating the frequency selective mi channel _ or _ network topology _ instead of classical signal waveform modulation .the proposed scheme fulfills the idea of fully coupled information and power transfer for widespread utilization of mi - iot applications .it eliminates signal modulation and does not waste any power for data transmission .furthermore , it does not require transmitter pre - coding , channel state information and intensity pattern modulation as in rf counterpart utilizing generalised pre - coding aided spatial modulation ( gpsm ) scheme .moreover , mac sharing problem is intrinsically solved due to including whole network topology as data symbol .the solution requires no receiver feedback and provides mac sharing without sacrificing resources for transmitter synchronization or interference prevention .the contributions achieved , for the first time , are summarized as follows : * network topology modulation mechanism directly modulating the frequency selective mi channel . * practical topology demodulation combined with swipt . * topology modulating macs fully utilizing time - frequency bands without synchronization or contention . * reliable , capable of energy harvesting , low cost , low hardware complexity and low latency iot networking .the proposed method supports the widespread adoption of mic networks for iot satisfying the following requirements : 1 . _low latency _ : continuous energy and data transmission in a mac topology without the overhead of resource sharing , synchronization or receiver feedback .high reliability _ : the robustness of the mic channel to fading and low probability of symbol error .low hardware complexity _ : simple and low - cost swipt transceiver with already available mi circuits , and without separate structures for data and power transmission .energy harvesting capability _ : intrinsically by mi coils without requiring separate rf to dc conversion circuits ._ security _ : immunity to rf fields and radiative effects ; potential detection of intruder coils as the changes in network topology symbol and power transmission levels. 6 . _ energy efficiency _ : no extra energy for signal waveform based data transmission ; the crowded set of mi coils forming a waveguide to enhance the communication range without consuming any active transmission power .the remainder of the paper is organized as follows . in section [ sysmodel ] , mi network system model for mac topologiesis presented . in sections[ stm ] and [ stdm ] , topology modulation and demodulation mechanisms are introduced , respectively .then , in section [ numersim ] , the proposed methods are simulated for two - user mac network .finally , in sections [ openis ] and [ conclusion ] , open issues and conclusions are discussed , respectively .coils at each transceiver with varying topologies consisting of different spatial symbols.,title="fig:",width=158 ] coils at each transceiver with varying topologies consisting of different spatial symbols.,title="fig:",width=158 ] + ( a ) ( b ) + in this paper , there are transmitters denoted by for ] where with resonance frequency .furthermore , each coil has the same properties to simplify the analysis , i.e. , , and for ] , is the operating frequency , denotes the voltage level in coil of user , ] and denotes transpose . 
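Once the impedance matrix whose entries are spelled out in the next sentences has been assembled, the voltage-current relation reduces to a linear solve, and each spatial pattern of active coils induces its own current in the receiver, which is the quantity the demodulator works with. The sketch below is only a toy numerical illustration of that idea with a single receiver coil: the coil values, mutual inductances, drive voltage and frequency are invented for the example and are not taken from the paper, and the full multi-coil demodulation scheme is not reproduced.

```python
import numpy as np

# --- all numerical values below are illustrative assumptions, not from the paper ---
f = 5e6                      # drive frequency (Hz), assumed equal to the coil resonance
w = 2 * np.pi * f
R, L = 0.5, 800e-9           # per-coil resistance (ohm) and self-inductance (H), assumed
C = 1.0 / (w**2 * L)         # capacitance chosen so that every coil resonates at f

def impedance(coils, M):
    """Impedance matrix restricted to the listed coils (receiver included)."""
    idx = np.array(coils)
    z_self = R + 1j * w * L + 1.0 / (1j * w * C)   # reduces to R at resonance
    Z = 1j * w * M[np.ix_(idx, idx)]               # mutual-coupling terms j*w*M_ij
    np.fill_diagonal(Z, z_self)
    return Z

n_tx, rx = 4, 4              # coils 0..3 form the transmit grid, coil 4 is the receiver
rng = np.random.default_rng(0)
M = rng.uniform(5e-9, 20e-9, (n_tx + 1, n_tx + 1))  # assumed mutual inductances (H)
M = (M + M.T) / 2

v_drive = 1.0                                       # volts applied to each active coil
symbols = {"S1": [0, 1], "S2": [2, 3], "S3": [0, 3]}  # example topology symbols

for name, active in symbols.items():
    coils = active + [rx]
    Z = impedance(coils, M)
    V = np.zeros(len(coils), dtype=complex)
    V[: len(active)] = v_drive                      # the receiver coil carries no source
    I = np.linalg.solve(Z, V)
    print(name, "received current magnitude (A):", abs(I[-1]))
```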
has the elements for , for ] where , is the complex unity and is the mutual inductance between and coil .next , the network topology modulation is introduced .the spatial grid topologies of each transmitter are changed to form varying mutual couplings .a signal modulating voltage waveform is not required but only varying spatial patterns of actively resonating coils .therefore , the network is continuously transmitting power to the receiver by also embedding information into the spatial structure that the energy is transmitted , however , without any signal modulation complexity or energy reservation for data .the network topology modulation is mainly realized by changing denoted by which is easily calculated by using matrix identities as follows : where the elements of are equal to for and otherwise , diagonal includes eigenvalues , , , is the identity matrix , is the row of , for ] by using pilot training phase or they are estimated based on the preliminary knowledge of spatial modulation patterns and physical size of lan topology .it is observed that is robust to angular disorientations while the detailed analysis is left as a future work . in the following discussions ,four different modulation methods are proposed in terms of oscillation frequencies of each coil and time sharing .[ 1]>m#1 .network topology modulation types and properties [ cols="^,^,^,^,^,^,^,^",options="header " , ] [ tab1 ] assume that each coil with the index in a grid transmits power at a different frequency for ] and becomes equal to the following : where is obtained by deleting the rows and columns of with the indices not in ] , where is the number of concurrent frequencies then , the transmitter diversity is realized and the received current is given as follows : if tdma is utilized then , due to is as follows : assume that a set of symbols consisting of different topologies for each transceiver is utilized .the total number of different symbol combinations excluding is denoted by and network symbol is denoted by .the possible set of modulation types indexed with for ] and is the power transmission frequency then , the measured noisy current denoted by at the coil receiver is found by combining ( [ eq3 ] ) and ( [ eq6 ] ) as follows : where is the row of in ( [ eq3 ] ) , , , and . the proof is given in appendix [ proof2 ] .it is further simplified with , and by using perturbation equality where and the proof is given in appendix [ proof3 ] .then , becomes as follows : then , inserting into ( [ demod1 ] ) and dropping , it becomes where , is the kronecker product and for ] is approximated by where ] is the complex receiver noise vector having spectral density for real and imaginary parts at each frequency , satisfying and equalizes the received or transmitted power for the symbol .+ ( a ) + + ( b ) there are two different demodulation methods with respect to the amount of knowledge about for each symbol such that it is either perfectly known by pilot aided transmission or is estimated based on pre - calculation for the communication ranges without any information about .it is observed in section [ numersim ] that the eigenvalues in have low variances for varying positions and orientations , and they are specific for each symbol .demodulation schemes are realized by utilizing this uniqueness as shown in the following .the receiver is assumed to have the full knowledge of , and for any with the aid of pilot symbols .the modulation schemes are shown in fig . 
[ fig5](a ) where in pilot training case .the demodulation mechanism is shown in fig . [fig5](b ) . each symbol is the vector with period , transmission delay and is the awgn component with independent and identically distributed elements having variance for ] for ] and ^t ] . and are set to three and five , respectively .modulation topology for a single coil is assumed to be one of two schemes , i.e. , either the coils with the indices or are active at any time .therefore , there is a total of three different network topology symbols as shown in fig . [ fig34](b ) .single channel use transmits bits of total mac data . can be easily increased , however , three symbols better clarify the proposed system .[ 1]>m#1 & the coil radius and inter - coil distance in the grids & cm , mm + & the resistance , inductance and capacitance of a single coil unit & nh , - nf + & operating resonance frequency & mhz , mhz + & transmission frequency interval & - , - + , , & the number of random freq .sets , freqs . in a set , andthe symbols & , , + & noise power spectral density & w / hz + & average trans .power per symbol & mw + [ tab3 ] the simulation parameters are shown in table [ tab3 ] . the coil radius and the boundary distance between the coilsare set to cm and mm , respectively , so that a transceiver with a planar grid of coils has the area of to be easily attached to daily objects as shown in fig .[ fig12](a ) .the number of turns is one for simplicity . is calculated by using for square cross - section copper wire of width and height mm compatible with and where the resistivity is , skin depth is and . is equal to nh where .the capacitance is nf or nf for mhz or mhz , respectively , with mhz to reduce parasitic effects .noise spectral density is simulated for \times \mbox{n}_{th } ] and zero otherwise , the equality is obtained by using , and is utilized in the equality .then , the current at receiver coil is found by replacing with .[ [ proof3 ] ] is calculated by using ( [ app_eq2 ] ) as follows : where . if , then the perturbation equality , i.e. , with , is utilized where , and .then , ( [ demoduoneuser2 ] ) is easily obtained .this work is supported by vestel electronics inc . ,manisa , 45030 turkey .b. gulbahar and o. b. akan , `` a communication theoretical modeling and analysis of underwater magneto - inductive wireless channels , '' _ ieee trans . on wireless communications _ ,11 , no . 9 , pp . 33263334 , 2012 .s. li , y. sun and w. shi , `` capacity of magnetic - induction mimo communication for wireless underground sensor networks , '' _ international journal of distributed sensor networks _ , article i d 426324 , 2015. b. gulbahar , `` theoretical analysis of magneto - inductive thz wireless communications and power transfer with multi - layer graphene nano - coils , '' to appear in _ ieee transactions on molecular , biological , and multi - scale communications _ , pp . 11 , 2017 .h. nguyen , j. i. agbinya and j. devlin , `` fpga - based implementation of multiple modes in near field inductive communication using frequency splitting and mimo configuration , '' _ ieee transactions on circuits and systems i _ , vol .62 , no . 1 ,pp . 302310 , 2015 .s. babic et al . , `` mutual inductance calculation between circular filaments arbitrarily positioned in space : alternative to grover s formula , '' _ ieee transactions on magnetics _ , vol .46 , no . 9 , pp . 
3591 - 3600 , 2010 | internet - of - things ( iot ) architectures connecting a massive number of heterogeneous devices need energy efficient , low hardware complexity , low cost , simple and secure mechanisms to realize communication among devices . one of the emerging schemes is to realize simultaneous wireless information and power transfer ( swipt ) in an energy harvesting network . radio frequency ( rf ) solutions require special hardware and modulation methods for rf to direct current ( dc ) conversion and optimized operation to achieve swipt which are currently in an immature phase . on the other hand , magneto - inductive ( mi ) communication transceivers are intrinsically energy harvesting with potential for swipt in an efficient manner . in this article , novel modulation and demodulation mechanisms are presented in a combined framework with multiple - access channel ( mac ) communication and wireless power transmission . the network topology of power transmitting active coils in a transceiver composed of a grid of coils is changed as a novel method to transmit information . practical demodulation schemes are formulated and numerically simulated for two - user mac topology of small size coils . the transceivers are suitable to attach to everyday objects to realize reliable local area network ( lan ) communication performances with tens of meters communication ranges . the designed scheme is promising for future iot applications requiring swipt with energy efficient , low cost , low power and low hardware complexity solutions . ps . simultaneous wireless information and power transfer , magneto - inductive communication , network topology modulation , internet - of - things |
the _ solar dynamics observatory ( sdo ) _ is a solar satellite launched by nasa on february 11 2010 .the scientific goal of this mission is a better understanding of how the solar magnetic field is generated and structured and how solar magnetic energy is stored and released into the helio- and geo - sphere , thus influencing space weather ._ sdo _ contains a suite of three instruments : * the _ helioseismic and magnetic imager ( sdo / hmi ) _ has been designed to study oscillations and the magnetic field at the solar photosphere . *the _ atmospheric imaging assembly ( sdo / aia ) _ is made of four telescopes , providing ten full - sun images every twelve seconds , twenty four hours a day , seven days a week .* the _ extreme ultraviolet variability experiment ( sdo / eve ) _ measures the solar extreme ultraviolet ( euv ) irradiance with unprecedented spectral resolution , temporal cadence , accuracy , and precision . the present paper deals with an important aspect of the image reconstruction problem for _ sdo / aia _ .the four telescopes of such instrument capture images of the sun s atmosphere in ten separate wave bands , seven of which centered at euv wavelengths .each image is a square array with pixel width in the range arcsec and is acquired according to a standard ccd - based imaging technique .in fact , each _ aia _ telescope utilizes a -megapixel ccd divided into four quadrants .as typically happens in this kind of imaging , _ aia _ ccds are affected by primary saturation and blooming , which degrade both quantitatively and qualitatively the _ aia _ imaging properties . _primary saturation _ refers to the condition where a set of pixel cells reaches the full well capacity , i.e. these pixels store the maximum number possible of photon - induced electrons . at saturation ,pixels lose their ability to accommodate additional charge , which therefore spreads into neighboring pixels , causing either erroneous measurements or second - order saturation . such spread of chargeis named _ blooming _ and typically shows up as a bright artifact along a privileged axis in the image .figure [ fig : saturation - blooming ] shows a notable example of combined saturation and blooming effects in an _ sdo / aia _ image captured during the september 6 , 2011 event .the recovery of information in the primary saturation region by means of an inverse diffraction procedure is the main goal of the present paper .further , we also introduce here an interpolation approach that allows a robust estimate of the background in the diffraction region as well as reasonable estimate of the flux in the central blooming region . 
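Before any de-saturation can be attempted, the affected pixels have to be located. The following is a minimal, purely illustrative sketch of that bookkeeping step on a synthetic frame: the full-well value is an assumed number, the blooming direction is assumed to run along the CCD columns, and nothing here reproduces the actual AIA calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.poisson(50.0, size=(64, 64)).astype(float)   # synthetic, unsaturated background
frame[30:34, 30:34] = 16383.0                            # synthetic primarily saturated core
frame[10:54, 31] = 16383.0                               # synthetic blooming streak (one column)
full_well = 16000.0                                      # assumed full-well threshold (DN)

saturated = frame >= full_well                           # mask of saturated pixels
bloom_cols = np.where(saturated.any(axis=0))[0]          # columns touched by saturation/blooming

print("saturated pixels:", int(saturated.sum()))
print("columns affected by blooming:", bloom_cols)
```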
[ cols="^,^ " , ]_ sdo / aia _ images are strongly affected by both primary saturation and blooming , that may occur at all different wavelengths and acquisition times , even in the case of flaring events characterized by a moderate peak flux .this paper describes the first mathematical description of a robust method for the de - saturation of such images at both a primary and secondary ( blooming ) level .the method relies on the description of de - saturation in terms of inverse diffraction and utilizes correlation and expectation - maximization for the recovery of information in the primarily saturated region .this approach requires to compute a reliable estimate of the image background which , for this paper , has been obtained by means of interpolation in the fourier space .the knowledge of the background permits to recover information in the blooming region in a very natural way .the availability of an automatic procedure for image de - saturation in the _ sdo / aia _ framework may potentially change the extent with which euv information from the sun can be exploited .in fact , armed with our computational approach , many novel problems can be addressed in _sdo / aia _ imaging .for example , one can study the impact of the choice of the model for the diffraction psf on the quality of the de - saturation . in this paperwe used a synthetic estimate of the diffraction psf provided by solar software ( ssw ) but other empirical or semi - empirical forms can be adopted .furthermore , this technique can be extended to account for the dependance of the psf from the passband wavelengths .finally , the routine implementing this approach is fully automated and this allows the systematic analysis of many events recorded by _ aia _ and their integration with data provided by other missions such as _ rhessi _ or , in the near future _ stix _ .this work was supported by a grant of the italian indam - gncs and by the nasa grant nnx14ag06 g .the _ sdo / aia _ hardware is equipped with a feedback system that reacts to saturation in correspondence of intense emissions by reducing the exposure time . as a result , for a typical _ aia _ acquisition along a time range of some minutes , during which saturation occurs , the telescope always provides some unsaturated frames that can be utilized to estimate the background . a possible way to realize such an estimateis based on the following scheme .let us denote with and two unsaturated images acquired at times and , respectively and with a saturated image acquired at with ( note that , , and are normalized at the same exposure time ) .the algorithm for the estimate of the background is : 1 . and are deconvolved with em ( using the global psf ) to obtain the reconstructions and ( the kl - kkt rule can be used to stop the iterations ) .both and are fourier transformed by means of a standard fft - based procedure to obtain and .a low - pass filter is applied to both and to obtain and , respectively .4 . for each corresponding pair of pixels in and that are not negligible , an interpolation routineis applied , both for the real and imaginary part .this provides in correspondence of .the resulting vector is fourier inverted to obtain the interpolated reconstruction .the core psf is finally applied to to obtain in the image domain . is a reliable estimate , for time , of the background introduced in section 2 . 
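A compact numerical sketch of the six steps listed above is given below, only to make the bookkeeping concrete. The frames, the Gaussian shapes chosen for the global and core PSFs, the low-pass cut-off, the number of EM iterations and the linear interpolation in time are all assumptions made for the example; the convolutions are done with FFTs under periodic boundary conditions, which is a further simplification of the real instrument model, and none of this is the operational SSW implementation.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via FFT (a simplification of the true imaging model)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def em_deconvolve(data, psf, n_iter=50):
    """Expectation-maximization (Richardson-Lucy type) deconvolution with the global PSF."""
    x = np.full_like(data, data.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blur = fft_convolve(x, psf) + 1e-12
        x *= fft_convolve(data / blur, psf_flip)
    return x

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def background_estimate(b1, t1, b2, t2, t, psf_global, psf_core, keep_fraction=0.1):
    """Steps 1-6: EM deconvolution of the two unsaturated frames, FFT, low-pass filter,
    pixel-wise linear interpolation in time (real and imaginary parts), inverse FFT,
    and re-application of the core PSF."""
    d1, d2 = em_deconvolve(b1, psf_global), em_deconvolve(b2, psf_global)   # step 1
    f1, f2 = np.fft.fft2(d1), np.fft.fft2(d2)                               # step 2
    fy = np.fft.fftfreq(b1.shape[0])[:, None]
    fx = np.fft.fftfreq(b1.shape[1])[None, :]
    lowpass = np.hypot(fy, fx) <= keep_fraction * 0.5                       # step 3
    f1, f2 = f1 * lowpass, f2 * lowpass
    w = (t - t1) / (t2 - t1)
    ft = (1.0 - w) * f1 + w * f2        # step 4: interpolation at the intermediate time
    dt = np.real(np.fft.ifft2(ft))      # step 5
    return fft_convolve(dt, psf_core)   # step 6

# --- synthetic demonstration; every number is an assumption ---
rng = np.random.default_rng(2)
shape = (64, 64)
psf_global, psf_core = gaussian_psf(shape, 2.5), gaussian_psf(shape, 1.0)
truth1 = rng.poisson(40.0, shape).astype(float)
truth2 = truth1 * 1.3
b1, b2 = fft_convolve(truth1, psf_global), fft_convolve(truth2, psf_global)
bg = background_estimate(b1, 0.0, b2, 24.0, 12.0, psf_global, psf_core)
print("estimated background at the intermediate time, mean flux:", bg.mean())
```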
on the other hand ,a reliable estimate of the image in the bloomed region is provided by the restriction of onto determined as in section 2 .we finally observe that , in this algorithm , the interpolation step is applied in the fourier domain because , after filtering , a lot of pixels are negligible and therefore the computational burden of the procedure is notably decreased .9999 pesnell w d , thompson b j and chamberlin p c 2012 the solar dynamics observatory ( sdo ) _ sol .phys . _ * 275 * 3 scherrer p h et al 2012 the helioseismic and magnetic imager ( hmi ) investigation for the solar dynamics observatory ( sdo ) _ solar phys . _ * 275 * 207 lemen j r et al 2012 the _ atmospheric imaging assembly ( aia ) _ on the _ solar dynamics observatory ( sdo ) _ _ solar phys ._ * 275 * 17 woods t n et al 2012 extreme ultraviolet variability experiment ( eve ) on the solar dynamics observatory ( sdo ) : overview of science objectives , instrument design , data products , and model developments _ solar phys ._ * 275 * 115 grigis p , su y and weber m 2012 _ aia psf characterization and image deconvolution _ , sdo documentation ( http://www.lmsal.com/sdodocs ) boerner p et al 2012 initial calibration of the atmospheric imaging assembly ( aia ) on the solar dynamics observatory ( sdo ) _ solar phys . _* 275 * 41 poduval b , deforest c e , schmelz j t and pathak s 2013 point - spread functions for the extreme - ultraviolet channels of sdo / aia telescopes _ astrophys. j. _ * 765 * 144 martinez p and klotz a 1997 _ a practical guide to ccd astronomy _( cambridge : cambridge university press ) bekefi g and barrett a h 1978 _ electromagnetic vibrations , waves , and radiation _ ( boston : mit press ) raftery c l , krucker s and lin r p 2011 imaging spectroscopy using _ aia _ diffraction patterns in conjunction with _ rhessi _ and _ eve _ observations _ astrophys . j. lett . _ * 743 * l27 schwartz r a , torre g and piana m 2014 systematic de - saturation of images from the _ atmospheric imaging assembly _ in the _ solar dynamics observatory _ _ astrophys . j. lett ._ * 793 * l23 shepp l a and vardi y 1982 maximum likelihood reconstruction for emission tomography ieee _ trans . med . imaging _* 1 * 113 benvenuto f and piana m 2014 regularization of multiplicative iterative algorithms with non - negative constraint _ inverse problems _ * 30 * 035012 benvenuto f , schwartz r , piana m and massone a m 2013 expectation maximization for hard x - ray count modulation profiles _ astron . astrophys . _* 555 * a61 de pierro a r 1987 on the convergence of the iterative image space reconstruction algorithm for volume ect ieee _ trans . med. imaging _ * 6 * 2 lin r p et al 2002 the reuven ramaty high energy solar spectroscopic imager ( rhessi ) _ solar phys . _* 210 * 3 benz a et al 2012 the spectrometer telescope for imaging x - rays on board the solar orbiter mission _ proc . _spie conference on space telescopes and instrumentation 2012 - ultraviolet to gamma ray * 8443 * 84433l | the _ atmospheric imaging assembly _ in the _ solar dynamics observatory _ provides full sun images every seconds in each of extreme ultraviolet passbands . however , for a significant amount of these images , saturation affects their most intense core , preventing scientists from a full exploitation of their physical meaning . 
in this paper we describe a mathematical and automatic procedure for the recovery of information in the primary saturation region based on a correlation / inversion analysis of the diffraction pattern associated to the telescope observations . further , we suggest an interpolation - based method for determining the image background that allows the recovery of information also in the region of secondary saturation ( blooming ) . |
constructing martingales with given marginal distributions has been an active area of research over the last decade ( e.g. ) .( here and in the entire paper , marginal distributions ( also marginals ) refer to the 1-dimensional distributions . ) a condition for the existence of such martingales is given by kellerer ( see hirsch and roynette for a new and improved proof ) .three constructions of markov martingales with pre - specified marginal distributions were given by madan and yor , namely the skorokhod embedding method , the time - changed brownian motion and the continuous martingale approach pioneered by dupire .recently , hirsch _et al_. gave six different methods for constructing martingales whose marginal distributions match those of a given family of probability measures .they also tackle the tedious task of finding sufficient conditions to ensure that the chosen family is indeed increasing in the convex order , or as they coined it , a peacock . in this paper , we deal with a different , albeit related , scenario. we do not start with a family of probability distributions , rather we start with a given martingale ( the existence of which is assumed ) and produce a large family ( as opposed to just a handful ) of new martingales having the same marginal distributions as the original process .we say that these martingales `` mimic '' the original process .this same task was undertaken in for the brownian motion .it gave rise to the papers and ( who coined the term faked brownian motion ) .albin and oleszkiewicz answered the question of the existence of a continuous martingale with brownian marginals .however , their constructions yield non - markov processes .et al_. then generalised albin s construction and produced a sequence of ( non - markov ) martingales with brownian marginals . in this paper , we extend the construction of to a much larger class of processes , namely self - similar markov martingales . before formulating a solution to this problemwe give a brief account on the origin and relevance of the mimicking question to finance , and more specifically to option pricing ; that is the pricing of a contract that gives the holder the right to buy ( or sell ) the instrument ( a stock ) at a future time for a specified price .the theoretical valuation of an option is performed in such a way as not to allow arbitrage opportunities arbitrage occurs when riskless trading results in profit .the first fundamental theorem ( e.g. , , page 231 ) states that the absence of arbitrage in a market with stock price , , is essentially equivalent to the existence of an equivalent probability measure under which the stock price is a martingale .( here without loss of generality , we let the riskless interest rate be zero . )the second fundamental theorem ( e.g. , , page 232 ) implies that the arbitrage - free price of an option is given by ] determines the distribution of or the marginal distribution , for example . therefore , if one wants to keep the option prices given by the original formula but without the limitations of the original process ( such as constant volatility ) one has to look for martingales ( to have the model arbitrage free ) with given marginals ( to keep the same option prices ) .this question received much attention in the last ten years , see the pioneering work of madan and yor . throughout this paper , we assume that all processes are cdlg and progressively measurable. 
we will use the notation to mean equal in distribution for random variables or equal in finite - dimensional distributions for processes , and this will be clear in the context . for a given random measure , the measure for defined by .we will also write ] for any positive function .we start with a martingale which is also a markov process , that is , for any bounded measurable function , = \mathbb{e}[g(z_t)|z_s] ] and a measurable set , and here we prove only the second equality , the first one is proved similarly .suppose that ] , where , the dirac measure at .suppose that for any bounded measurable function and , we have then defined as follows is a transition function , clearly , for each , is a probability measure and for each , is measurable in .note also that .next , using lemma [ lemmap ] , we obtain , for , and in other words , satisfies the chapman kolmogorov equations .if depends on and only through ( i.e. ) , then the scaling property of carries over to : this follows immediately from the definition of and the scaling of .let , for , be a random variable having distribution .property ( [ e : propg ] ) is equivalent to the property that if , for , and are independent random variables , then .further , if we let and write for the distributions of ( with ) , then property ( [ e : propg ] ) translates to the convolution identity as we seek to retain the scaling property of the original process , we assume that and immediately reduce property ( [ e : propg ] ) to the family defines a subordinator ( process with positive , independent and stationary increments ) and by lvy khintchine it has laplace transforms of the form the function , known as the laplace exponent , takes the form with drift and lvy measure satisfying and .conversely , to each ( i.e. , to each pair as above ) corresponds a convolution semigroup and in turn a family which satisfies property ( [ e : propg ] ) .for details see , for example , , section 1.2 .this ensures the existence of and a process with transition function .[ t : transition ] let be a -self - similar markov process . to each ,laplace exponent of a subordinator , corresponds a -self - similar markov process , starting from 0 and having the marginals of .furthermore , if is a martingale and , then is also a martingale .writing in terms of , , the new transition function ,\quad\quad s\le t\ ] ] can be seen as a randomisation of .furthermore , the condition on for to be a martingale can be written as = ( s / t)^{\kappa} ] .since is measurable in and right - continuous in , it is measurable as a function of .hence , for each , is a random variable . for ,let and , so that and . then we have , for and with , for , let be the law of .then the measures ( are consistent and by the kolmogorov extension theorem , there exists a process with finite - dimensional distributions ( .a similar argument shows that , from which we deduce that is -self - similar . as such, extends by continuity to by letting .the equality of marginal distributions of and follows from the scaling property of as for any fixed . using successively lemmas [ l : lamperti ] , [ l : genbochner ] , [ l : genproduct ] and [ l : gentimechange ] ( see the ) , we see that is markovian . 
by the scaling property of and the stationarity of subordinator , we obtain , for , the transition function of as .\ ] ] notice that the transition function does not depend on and it is the same as defined earlier with being the distribution of .the rest of the assertions of the proposition then follows immediately from theorem [ t : transition ] .( alternatively , we can carry out the proof independently , without referring to , following oleszkiewicz . )the process constructed in proposition [ t : timechange ] is identical ( in law ) to the process obtained in theorem [ t : transition ] with being the distribution of , or . for to be a martingale , in ( [ e : laplaceexp ] ) must be at most 1 , and is 1 if and only if ( and for any ) and .in proposition [ t : timechange ] , we can not replace with a two - sided subordinator for , where and are independent subordinators .this is because by doing that , we will not have independent increments . in particular , since , then for , the increment , where denotes the filtration generated by .in this section , we obtain the infinitesimal generators of the process and display some of their path properties .we will work within the martingale framework , that is , unless otherwise stated , we will assume that our initial process is a martingale and we will use a subordinator with drift , lvy measure and lapace exponent satisfying . [t : generator ] suppose that has infinitesimal generator .then the infinitesimal generator of the process is given by for differentiable and in the domain of .first , from lemma [ l : lamperti ] , is time - homogeneous with generator and transition semigroup .next , applying lemma [ l : genbochner ] , the generator of the process is then , let and using lemma [ l : genproduct ] , the generator of is since for .finally , we time - change the process with to get .thus , by lemma [ l : gentimechange ] , the generator of is due to a change of variable , the scaling property of and identity ( [ e : ssgen1 ] ) .the generator of is established by noting that does not depend on . since is self - similar, is in the domain of for all whenever is in the domain of .therefore , is in the domain of , if is also differentiable .note that when and , we recover the process and . for a measurable function ,if there exists a measurable function such that for each , almost surely and is a martingale , then is said to belong to the domain of the extended infinitesimal generator of and the extended infinitesimal generator . if belongs to the domain of the extended infinitesimal generator of , then has predictable quadratic variation see examples in section [ s : examples ] for the computation in some specific cases .suppose that is continuous in probability .then the process is also continuous in probability , that is , for every , we have , for , \\[-8pt ] & & { } + \mathbb{p } \biggl ( \bigl| s^{\kappa } \mathrm{e}^{-{\kappa}\zeta_{a+\ln s } } ( z_{\mathrm{e}^{\zeta_{a+\ln t } } } - z_{\mathrm{e}^{\zeta_{a+\ln s } } } ) \bigr| > \frac{c}{2 } \biggr).\nonumber\end{aligned}\ ] ] however , the first term which converges to 0 as , since is continuous in probability as a subordinator . to deal with the last term in ( [ e : contprob1 ] )we first observe that a process that is continuous in probability does not jump at fixed points so that .further , since is also continuous in probability , as soon as .therefore , for , = 0,\end{aligned}\ ] ] where . 
if is continuous in probability with finite second moments and has no drift , then is a purely discontinuous martingale .let so that for .first , we observe that with probability one , does not jump at if jumps at . indeed ,if is a countable set of points in , then , as is continuous in probability , .let and where , then = 0,\ ] ] where denotes the filtration of .taking to infinity , we obtain the desired result . to show that is purely discontinuous , that is , , we compute the sum of the square of jumps of . in general, we have & = & \mathbb { e } \biggl [ \sum _ { \mathrm{e}^{-a}<s\le t } ( \delta x_s)^2 \mathbf{1}_{\delta u_s > 0 , \delta z_{(u_{s-})}\neq0 } \biggr ] \\ & & { } + \mathbb{e } \biggl [ \sum_{\mathrm{e}^{-a}<s\le t } ( \delta x_s)^2 \mathbf { 1}_{\delta u_s > 0 , \delta z_{(u_{s-})}=0 } \biggr ] \\ & & { } + \mathbb{e } \biggl [ \sum_{\mathrm{e}^{-a}<s\le t } ( \delta x_s)^2 \mathbf{1}_{\delta u_s = 0 } \biggr].\end{aligned}\ ] ] as is continuous in probability , the first term is zero due to the observation at the start of the proof .we write ] . since is -self - similar , . as is a martingale , =0 ]thus , we obtain \\ & & \quad= s^{2\kappa } \bigl ( u_s^{-2\kappa } ( \theta_{u_s}-\theta_{(u_{s- } ) } ) + \theta_{(u_{s- } ) } \bigl(u_s^{-\kappa}-u_{s-}^{-\kappa } \bigr)^2 \bigr ) \\ & & \quad = 2 s^{2\kappa } \theta_1 \bigl ( 1 - u_s^{-\kappa}u_{s-}^{\kappa } \bigr).\end{aligned}\ ] ] since has probability one on the set , it follows that & = & \mathbb{e } \biggl [ \sum_{\mathrm{e}^{-a}<s\le t } 2 s^{2\kappa } \theta_1 \bigl ( 1 - u_s^{-\kappa}u_{s-}^{\kappa } \bigr ) \mathbf{1}_{\delta u_s>0 } \biggr ] \\ & = & \mathbb{e } \biggl [ \sum_{0<r\le a+\ln t } 2 \mathrm{e}^{2\kappa(r - a ) } \theta_1 \bigl ( 1 - \mathrm{e}^{-{\kappa}\delta\zeta_r } \bigr ) \mathbf{1}_{\delta\zeta _ r>0 } \biggr].\end{aligned}\ ] ] writing in terms of and , and using ( [ e : laplaceexp ] ) with , & = & \theta_1 \int_0^{a+\ln t } 2 \mathrm{e}^{2\kappa(r - a ) } \,\mathrm{d}r \int_{(0,\infty ) } \bigl(1-\mathrm{e}^{-z\kappa}\bigr ) \nu(\mathrm{d}z ) \\ & = & \theta_1 \bigl(t^{2\kappa}-\mathrm{e}^{-2a\kappa}\bigr ) ( 1-\beta).\end{aligned}\ ] ] adding those three terms and taking limit as , we have = \theta_1 t^{2\kappa } ( 1-\beta ) + \lim_{a\to\infty } l(a , t).\ ] ] since is square integrable on any finite interval if has finite second moments , it has quadratic variation with expectation ] = \mathbb{e } [ x_t^2 ] = \mathbb { e } [ z_t^2 ] = t^{2\kappa } \theta_1 ] and are non - negative .thus , when , we must have , which gives = 0 ] . examples of subordinators include poisson process , compound poisson process with positive jumps , gamma process and stable subordinators .for example , we can take as a poisson process with rate to satisfy . in the following ,we provide some examples of mimicking with the infinitesimal generators and the predictable quadratic variations computed explicitly to have a better understanding of the processes .we finish this section with a discussion on modifying our construction to mimic some brownian related martingales and its limitation . 
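The Poisson example just mentioned lends itself to a direct Monte-Carlo check. The sketch below follows our reading of the time-change construction, namely that the mimic at time t is obtained by rescaling the reference process evaluated at the randomised time exp(zeta(a + ln t)); the Brownian case corresponds to the exponent one half, and the Poisson rate is chosen so that the martingale condition on the Laplace exponent holds. The grid, the horizon and all variable names are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Our reading of the time-change construction (Brownian case, kappa = 1/2):
#   X_t = t**kappa * exp(-kappa * zeta(a + ln t)) * Z(exp(zeta(a + ln t))),
# with Z a Brownian motion and zeta a Poisson process whose rate lam satisfies
# lam * (1 - exp(-kappa)) = kappa, i.e. psi(kappa) = kappa (the martingale condition).
kappa = 0.5
lam = kappa / (1.0 - np.exp(-kappa))   # Poisson rate solving the martingale condition
a = 0.0                                # construction parameter; paths live on t >= exp(-a) = 1
t = np.linspace(1.0, 5.0, 81)
log_grid = a + np.log(t)
n_paths = 20000

X = np.empty((n_paths, t.size))
for p in range(n_paths):
    jumps = rng.poisson(lam * np.diff(log_grid, prepend=0.0))  # Poisson increments of zeta
    zeta = np.cumsum(jumps)                                    # zeta on the logarithmic grid
    U = np.exp(zeta)                                           # nondecreasing random times
    dZ = rng.normal(0.0, np.sqrt(np.diff(U, prepend=0.0)))     # Brownian increments
    Z_at_U = np.cumsum(dZ)                                     # Z evaluated at the times U
    X[p] = t**kappa * np.exp(-kappa * zeta) * Z_at_U

# the marginals should match those of Brownian motion: mean 0 and variance t
print("sample variance / t at a few times:",
      np.round(X.var(axis=0)[[0, 40, 80]] / t[[0, 40, 80]], 3))
```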
for any , define the process by , where is a brownian motion .note that this is a gaussian process with zero mean and covariance function .it is a markov process and also a martingale .moreover , it is -self - similar ( ) since for all , a key aspect of the construction in is the following representation of the mimic when : where has distribution , is standard normal and , , and are independent .this representation extends to the case of other gaussian continuous martingales .in fact , it also extends to the case of stable processes see proposition [ p : stablerep ] .[ p : gauscontmartrep ] with , the mimic has the representation where , and , , and are independent . since and have independent and stationary increments , for , where is an independent copy of , and is a random variable distributed as .note that this representation holds also for .knowing that has generator for , we can compute the generator of following theorem [ t : generator ] and obtain , for , taking , then equation ( [ e : quadvar ] ) , along with routine calculations and equation ( [ e : laplaceexp ] ) , gives the following result .the predictable quadratic variation of is since , we can also write when , is a brownian motion and our results agree with .a process is a squared bessel process of dimension , for some , if it satisfies , where denotes a brownian motion .the squared bessel process started at 0 is a continuous markov process satisfying the self - similarity with .let .then is a 1-self - similar markov process and satisfies the sde note that is stochastically dominated by the square of the norm of an -dimensional brownian motion , where , thus \leq nt ] .it follows that is a true martingale ( see , e.g. , , theorem 7.35 ) .the infinitesimal generator of is , , thus , that of is the predictable quadratic variation of is where is the laplace exponent of . from , we have for , and taking conditional expectation ( is a true martingale see above ) , = z_{t\mathrm{e}^{-u}}^2 + 4t\bigl(1-\mathrm{e}^{-u}\bigr)z_{t\mathrm{e}^{-u } } + 2\delta t^2\bigl(1-\mathrm{e}^{-2u}\bigr).\ ] ] thus , using equation ( [ e : laplaceexp ] ) we obtain , with , however .the result then follows from equation ( [ e : quadvar ] ) .note that if , for any and we recover .suppose is an -stable process with .then is a markov process and that is , is -self - similar with .it is a lvy process with lvy triplet , where for some positive constants and .assume that is a martingale , in which case the lvy triplet must satisfy [ p : stablerep ] the mimic has the representation where , and , , and are independent .see proposition [ p : gauscontmartrep ] .the stable process has infinitesimal generator where . to distinguish the characteristics of from that of , we add the subscript to the drift and lvy measure of the subordinator . then theorem [ t : generator ] gives the infinitesimal generator of for as let be an integrable , symmetric random variable ( i.e. ) and be a brownian motion independent of . following and extending , page 283 , for any let and . then is a markov martingale such that for each , .moreover , is -self - similar . indeed , using the brownian motion , we have it follows that and .hence , since is a markov process with transition semigroup where is the cumulative distribution function of , it has infinitesimal generator using theorem [ t : generator ] , we obtain the infinitesimal generator of the mimic , for , the predictable quadratic variation of is let . 
since and , the result follows immediately from equation ( [ e : quadvar ] ) .note that .the predictable quadratic variations of and are given by the same functional of the process .now we discuss how we can ( and can not ) alter our martingale condition to mimic some brownian related processes , including the martingales associated with the hermite polynomials and the exponential martingale of brownian motion .consider the hermite polynomials which are defined by equivalently , .let and .then , where denotes a brownian motion , is a local martingale for every since is space time harmonic , that is , .take , the process is markovian and 1-self - similar , thus can be mimicked using our mimicking scheme with any subordinator that satisfies . for , is -self - similar , but it is not markovian ( see ) .so we are not able to mimic this process by a direct application of the method described above. however , a slight modification of our construction proves sufficient to achieve our aim .let be a mimic of the brownian motion as in section [ s : gauscontmart ] with , but without the requirement that be a martingale .then we have the following result .[ p : hermite ] for each , the process has the same marginal distributions as and is a martingale if and only if , or = ( s / t)^{n/2} ] .therefore , in order to mimic the process , we can mimic , with the martingale requirement changed to , and then apply the function to the resulting process .it is of interest to ask whether the above trick extends to other space time harmonic functions .in particular , could this enable us to mimic the geometric brownian motion .unfortunately , this is not the case .in fact , , , are the only analytic functions for which this trick works .suppose that and there exists such that , in other words , is analytic on the set .suppose further that is space time harmonic , so that is a martingale .suppose that mimics with the martingale requirement replaced with for a positive integer .then has the same marginals as and is a martingale if and only if where is the hermite polynomial . for to be a martingale , we must have for any and , that is , = h(x , s) ] for all and , or = \sum _ { m , n=0}^\infty a_{m , n } x^ms^n.\ ] ] therefore , for all , and , we must have = a_{m , n} ] . recall that = \mathbb{e}[\exp(\lambda t-\lambda \zeta_t ) ] = \exp(-t(\psi(\lambda)-\lambda)) ] for a . 
then , for all such that , .therefore , furthermore , by pluciska and fitzsimmons , is the hermite polynomial .let be any mimic of in the sense of section [ s : scheme ] but without the martingale requirement .although the process has the same marginal distributions as , it is not a martingale unless , in which case .[ l : gentimechange ] let be a markov process with infinitesimal generator and be a deterministic , differentiable , increasing function in with derivative .then the time - changed process is also a markov process with infinitesimal generator .furthermore , if is in the domain of , then is in the domain of .let be the filtration of and be the filtration of .for any bounded measurable function , we have , for , = \mathbb { e}\bigl[g(y_{c_t})|\widetilde{\mathcal{f}}_s \bigr ] = \mathbb { e}\bigl[g(y_{c_t})|\mathcal{f}_{c_s}\bigr ] = \mathbb{e}\bigl[g(y_{c_t})|y_{c_s}\bigr ] = \mathbb{e}\bigl[g ( \widetilde{y}_t)|\widetilde{y}_s\bigr].\ ] ] for where the function is strictly increasing , the infinitesimal generator of is - f(x)}{c_u - c_t } \frac{c_u - c_t}{u - t } = a_{c_t}f(x)c'_t.\ ] ] if in a small neighbourhood of , then and let be a markov process with infinitesimal generator and be a deterministic , differentiable function in with derivative and for any . then the process is also a markov process and has generator where is an operator defined by .furthermore , if is in the domain of for any and is differentiable , then is in the domain of .let be the filtration of and be the filtration of .let be a function such that .since is one - to - one , .therefore , for any bounded measurable function , we have = \mathbb { e}\bigl[g\circ h_t ( y_t)|\widetilde { \mathcal{f}}_s\bigr ] = \mathbb{e}\bigl[g \circ h_t ( y_t)|\mathcal{f}_s\bigr ] = \mathbb{e}\bigl[g\circ h_t(y_t)|y_s\bigr ] = \mathbb { e}\bigl[g ( \widetilde{y}_t)|\widetilde{y}_s\bigr].\ ] ] the infinitesimal generator of is - \pi_{c_u}f\biggl(\frac{1}{c_t}x\biggr ) + f\biggl(\frac{c_u}{c_t}x\biggr ) - f(x)\biggr)\big/(u - t ) \\ & = & a_t\pi_{c_t}f \biggl(\frac{1}{c_t}x \biggr ) + \frac{c'_t}{c_t}xf'(x).\end{aligned}\ ] ] [ l : lamperti ] suppose is a -self - similar markov process .suppose and are , respectively , the transition function and infinitesimal generator of .let .then is a time - homogeneous markov process with transition semigroup and infinitesimal generator furthermore , if is in the domain of and differentiable , then it is in the domain of . by the scaling property of , we have it follows that is time - homogeneous and .the generator of can be obtained by applying lemma [ l : gentimechange ] and lemma [ l : genproduct ] , and seeing that from equation ( [ e : ssgen ] ) with and .note that for all , is in the domain of by the scaling property of .thus , writing as the generator of , is in the domain of by lemma [ l : gentimechange ] .[ l : genbochner ] suppose is a time - homogeneous markov process with semigroup and generator , and is a subordinator independent of with drift and lvy measure .set .then the process is a time - homogeneous markov process with generator where furthermore , if is in the domain of , then it is in the domain of .if has zero drift , then is a pure jump process .see sato , theorem 32.1 .the authors are grateful to two referees and an associate editor for their careful reading of an earlier version of the paper and a number of suggestions and improvements . | we construct a family of self - similar markov martingales with given marginal distributions . 
this construction uses the self - similarity and markov property of a reference process to produce a family of markov processes that possess the same marginal distributions as the original process . the resulting processes are also self - similar with the same exponent as the original process . they can be chosen to be martingales under certain conditions . in this paper , we present two approaches to this construction , the transition - randomising approach and the time - change approach . we then compute the infinitesimal generators and obtain some path properties of the resulting processes . we also give some examples , including continuous gaussian martingales as a generalization of brownian motion , martingales of the squared bessel process , stable lvy processes as well as an example of an artificial process having the marginals of for some symmetric random variable . at the end , we see how we can mimic certain brownian martingales which are non - markovian . ./style / arxiv - general.cfg , |
dropout is a technique to regularize artificial neural networks it prevents overfitting . a fully connected network with two hidden layers of 80 units each can learn to classify the mnist training set perfectly in about 20 training epochs unfortunately the test error is quite high , about 2% . increasing the number of hidden units by a factor of 10 and using dropout results in a lower test error , about 1.1% .the dropout network takes longer to train in two senses : each training epoch takes several times longer , and the number of training epochs needed increases too .we consider a technique for speeding up training with dropout it can substantially reduce the time needed per epoch . consider a very simple -layer fully connected neural network with dropout . to train it with a minibatch of samples ,the forward pass is described by the equations : \times w_{k}\qquad k=0,\dots,\ell-1.\ ] ] here is a matrix of input / hidden / output units , is a dropout - mask matrix of independent bernoulli( ) random variables , denotes the probability of dropping out units in level , and is an matrix of weights connecting level with level .we are using for ( hadamard ) element - wise multiplication and for matrix multiplication .we have forgotten to include non - linear functions ( e.g. the rectifier function for the hidden units , and softmax for the output units ) but for the introduction we will keep the network as simple as possible . the network can be trained using the backpropagation algorithm to calculate the gradients of a cost function ( e.g. negative log - likelihood ) with respect to the : ^{\mathsf{t}}\times\frac{\partial\mathrm{cost}}{\partial x_{k+1}}\\ \frac{\partial\mathrm{cost}}{\partial x_{k } } & = \left(\frac{\partial\mathrm{cost}}{\partial x_{k+1}}\times w_{k}^{\mathrm{\mathsf{t}}}\right)\cdot d_{k}.\end{aligned}\ ] ] with dropout training , we are trying to minimize the cost function averaged over an ensemble of closely related networks .however , networks typically contain thousands of hidden units , so the size of the ensemble is _ much _ larger than the number of training samples that can possibly be ` seen ' during training .this suggests that the independence of the rows of the dropout mask matrices might not be terribly important ; the success of dropout simply can not depend on exploring a large fraction of the available dropout masks .some machine learning libraries such as pylearn2 allow dropout to be applied batchwise instead of independently ] .this is done by replacing with a row matrix of independent bernoulli random variables , and then copying it vertically times to get the right shape .to be practical , it is important that each training minibatch can be processed quickly . a crude way of estimating the processing time is to count the number of floating point multiplication operations needed ( naively ) to evaluate the matrix multiplications specified above : however , when we take into account the effect of the dropout mask , we see that many of these multiplications are unnecessary .the -th element of the weight matrix effectively ` drops - out ' of the calculations if unit is dropped in level , or if unit is dropped in level . 
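The redundancy can be made concrete with a small NumPy sketch: when the dropout mask is shared by the whole minibatch, multiplying by the full masked weight matrix produces exactly the same active outputs as multiplying the corresponding submatrices, which is the trick developed in the following paragraphs. The layer widths, minibatch size and the 50% rate below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
B, n_in, n_out = 100, 1000, 1000          # minibatch size and layer widths (illustrative)
x = rng.standard_normal((B, n_in))
W = rng.standard_normal((n_in, n_out)) * 0.01

# batchwise dropout: one Bernoulli(0.5) mask per layer, shared by the whole minibatch
keep_in = rng.random(n_in) < 0.5
keep_out = rng.random(n_out) < 0.5

# dense computation: mask the inputs, multiply by the full weight matrix, mask the outputs
dense = ((x * keep_in) @ W) * keep_out

# submatrix computation: keep only the active rows/columns, i.e. only the useful multiplications
x_sub = x[:, keep_in]
W_sub = W[np.ix_(keep_in, keep_out)]
sub = x_sub @ W_sub

# the surviving outputs agree, while roughly three quarters of the multiply-adds were skipped
assert np.allclose(dense[:, keep_out], sub)
print("fraction of W actually used:", keep_in.mean() * keep_out.mean())
```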
applying 50% dropout in levels and renders 75% of the multiplications unnecessary .if we apply dropout independently , then the parts of that disappear are different for each sample .this makes it effectively impossible to take advantage of the redundancy it is slower to check if a multiplication is necessary than to just do the multiplication .however , if we apply dropout batchwise , then it becomes easy to take advantage of the redundancy .we can literally drop - out redundant parts of the calculations .the binary batchwise dropout matrices naturally define submatrices of the weight and hidden - unit matrices .let ] denote the submatrix of consisting of weights that connect active units in level to active units in level .the network can then be trained using the equations : the redundant multiplications have been eliminated .there is an additional benefit in terms of memory needed to store the hidden units : needs less space than . in section [ sec : implementation ]we look at the performance improvement that can be achieved using cuda / cublas code running on a gpu .roughly speaking , processing a minibatch with 50% batchwise dropout takes as long as training a 50% smaller network on the same data .this explains the nearly overlapping pairs of lines in figure [ fig : training - time ] .we should emphasize that batchwise dropout only improves performance during training ; during testing the full matrix is used as normal , scaled by a factor of .however , machine learning research is often constrained by long training times and high costs of equipment . in section [ sec: results - for - fully - connected ] we show that all other things being equal , batchwise dropout is similar to independent dropout , but faster .moreover , with the increase in speed , all other things do not have to be equal . with the same resources ,batchwise dropout can be used to * increase the number of training epochs , * increase the number of hidden units , * increase the number of validation runs used to optimize `` hyper - parameters '' , or * to train a number of independent copies of the network to form a committee .these possibilities will often be useful as ways of improving generalization / reducing test error . in section [ sec : convolutional - networks ]we look at batchwise dropout for convolutional networks .dropout for convolutional networks is more complicated as weights are shared across spatial locations .a minibatch passing up through a convolutional network might be represented at an intermediate hidden layer by an array of size : 100 samples , the output of 32 convolutional filters , at each of spatial locations .it is conventional to use a dropout mask with shape ; we will call this independent dropout .in contrast , if we want to apply batchwise dropout efficiently by adapting the submatrix trick , then we will effectively be using a dropout mask with shape .this looks like a significant change : we are modifying the ensemble over which the average cost is optimized . 
during training , the error rates are higher .however , testing the networks gives very similar error rates .we might have called batchwise dropout _ fast dropout _ but that name is already taken _ _ .fast dropout is very different approach to solving the problem of training large neural network quickly without overfitting .we discuss some of the differences of the two techniques in the appendix .in theory , for matrices , addition is an operation , and multiplication is by the coppersmithwinograd algorithm .this suggests that the bulk of our processing time should be spent doing matrix multiplication , and that a performance improvement of about 60% should be possible compared to networks using independent dropout , or no dropout at all .in practice , sgemm functions use strassen s algorithm or naive matrix multiplication , so performance improvement of up to 75% should be possible .we implemented batchwise dropout for fully - connected and convolutional neural networks using cuda / cublas ] .we found that using the highly optimized function to do the bulk of the work , with cuda kernels used to form the submatrices and to update the using , worked well .better performance may well be obtained by writing a sgemm - like matrix multiplication function that understands submatrices . for large networks and minibatches , we found that batchwise dropout was substantially faster , see figure [ fig : training - time ] .the approximate overlap of some of the lines on the left indicates that 50% batchwise dropout reduces the training time in a similar manner to halving the number of hidden units .the graph on the right show the time saving obtained by using submatrices to implement dropout .note that for consistency with the left hand side , the graph compares batchwise dropout with dropout - free networks , _ not _ with networks using independent dropout .the need to implement dropout masks for independent dropout means that figure 1 slightly undersells the performance benefits of batchwise dropout as an alternative to independent dropout . for smaller networks ,the performance improvement is lower bandwidth issues result in the gpu being under utilized .if you were implementing batchwise dropout for cpus , you would expect to see greater performance gains for smaller networks as cpus have a lower processing - power to bandwidth ratio .if you have hidden units and you drop out % of them , then the number of dropped units is approximately , but with some small variation as you are really dealing with a binomial random variable its standard deviation is .the sizes of the submatrices and are therefore slightly random . in the interests of efficiency and simplicity ,it is convenient to remove this randomness .an alternative to dropping each unit independently with probability is to drop a subset of exactly of the hidden units , uniformly at random from the set of all such subsets .it is still the case that each unit is dropped out with probability .however , within a hidden layer we no longer have strict independence regarding which units are dropped out .the probability of dropping out the first two hidden units changes very slightly , from also , we used a modified form of nag - momentum minibatch gradient descent .after each minibatch , we only updated the elements of , not all the element of . 
with the momentum matrix and its submatrix over the active units defined analogously to the weight submatrices , our update changed only the entries of the momentum submatrix . the momentum still functions as an autoregressive process , smoothing out the gradients ; we are simply reducing its rate of decay by a constant factor . ( figure caption ) test errors of dropout networks trained using a restricted number of dropout patterns ( each point is from an independent experiment ) ; the blue line marks the test error for a network with half as many hidden units trained without dropout . the fact that batchwise dropout takes less time per training epoch would count for nothing if a much larger number of epochs were needed to train the network , or if a large number of validation runs were needed to optimize the training process . we have carried out a number of simple experiments to compare independent and batchwise dropout . in many cases we could have produced better results by increasing the training time , annealing the learning rate , using validation to adjust the learning process , etc . we chose not to do this , as the primary motivation for batchwise dropout is efficiency , and excessive use of fine - tuning is not efficient . as datasets , we used : * the mnist set of handwritten digits . * the cifar-10 dataset of 32x32 pixel color pictures . * an artificial dataset designed to be easy to overfit . following earlier dropout work , for mnist and cifar-10 we trained networks with 20% dropout in the input layer and 50% dropout in the hidden layers . for the artificial dataset we increased the input - layer dropout to 50% , as this reduced the test error . in some cases we used relatively small networks so that we would have time to train a number of independent copies of each network ; this was useful in order to see whether the apparent differences between batchwise and independent dropout are significant or just noise . our first experiment explores the effect of dramatically restricting the number of dropout patterns seen during training . consider a network with three hidden layers of size 1000 , trained for 1000 epochs using minibatches of size 100 . the number of distinct dropout patterns is so large that we can assume we will never generate the same dropout mask twice . during independent dropout training we will see 60 million different dropout patterns ; during batchwise dropout training we will see 100 times fewer . for both types of dropout , we trained 12 independent networks for 1000 epochs with batches of size 100 . for batchwise dropout we got a mean test error of 1.04% [ range ( 0.92% , 1.1% ) , s.d . 0.057% ] and for independent dropout a mean test error of 1.03% [ range ( 0.98% , 1.08% ) , s.d . 0.033% ] . the difference in the mean test errors is not statistically significant . to explore further the reduction in the number of dropout patterns seen , we changed our code for ( pseudo)randomly generating batchwise dropout patterns so as to restrict the number of distinct patterns used : the generator was modified to repeat with a fixed period , measured in minibatches ; see figure [ fig : limited - dropout - patterns ] . a period of one corresponds to only ever using one dropout mask , so that 50% of the network 's 3000 hidden units are never actually trained ( and 20% of the 784 input features are ignored ) .
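one way to realize this restricted - pattern generator is to pre - compute a small pool of batchwise masks and cycle through it ; the sketch below is our own illustration of that idea , with made - up sizes , rather than the code used in the experiments .

import numpy as np

def make_mask_pool(n_masks, layer_sizes, p=0.5, seed=0):
    # pre-generate a fixed pool of batchwise dropout masks, one set of layer masks per entry
    rng = np.random.default_rng(seed)
    return [[rng.random(n) >= p for n in layer_sizes] for _ in range(n_masks)]

def masks_for_minibatch(pool, minibatch_index):
    # reuse the pool with a fixed period, so only len(pool) distinct patterns ever occur
    return pool[minibatch_index % len(pool)]

pool = make_mask_pool(n_masks=100, layer_sizes=[1000, 1000, 1000], p=0.5)
masks = masks_for_minibatch(pool, 7)

with a pool of size one , a single mask is reused for the whole of training , which is the case considered next .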
during trainingthis corresponds to training a dropout - free network with half as many hidden units the test error for such a network is marked by a blue line in figure [ fig : limited - dropout - patterns ] .the error during testing is higher than the blue line because the untrained weights add noise to the network .if is less than thirteen , is it likely that some of the networks 3000 hidden units are dropped out every time and so receive no training . if is in the range thirteen to fifty , then it is likely that every hidden unit receives some training , but some pairs of hidden units in adjacent layers will not get the chance to interact during training , so the corresponding connection weight is untrained . as the number of dropout masks increases into the hundreds , we see that it is quickly a case of diminishing returns .to test the effect of changing network size , we created an artificial dataset .it has 100 classes , each containing 1000 training samples and 100 test samples .each class is defined using an independent random walk of length 1000 in the discrete cube . for each class we generated the random walk , and then used it to produce the training and test samples by randomly picking points along the length of walk ( giving binary sequences of length 1000 ) and then randomly flipping 40% of the bits .we trained three layer networks with hidden units per layer with minibatches of size 100 .see figure [ fig : artificial - dataset.-100 ] .looking at the training error against training epochs , independent dropout seems to learn slightly faster .however , looking at the test errors over time , there does not seem to be much difference between the two forms of dropout . note that the -axis is the number of training epochs , not the training time .the batchwise dropout networks are learning much faster in terms of real time .learning cifar-10 using a fully connected network is rather difficult .we trained three layer networks with hidden units per layer with minibatches of size 1000 .we augmented the training data with horizontal flips .see figure [ fig : cifar - fc ] .dropout for convolutional networks is more complicated as weights are shared across spatial locations .suppose layer has spatial size with features per spatial location , and if the -th operation is a convolution with filters .for a minibatch of size , the convolution involves arrays with sizes : dropout is normally applied using dropout masks with the same size as the layers .we will call this independent dropout independent decisions are mode at every spatial location .in contrast , we define batchwise dropout to mean using a dropout mask with shape .each minibatch , each convolutional filter is either on or off across all spatial locations .these two forms of regularization seem to be doing quite different things .consider a filter that detects the color red , and a picture with a red truck in it .if dropout is applied independently , then by the law of averages the message `` red '' will be transmitted with very high probability , but with some loss of spatial information .in contrast , with batchwise dropout there is a 50% chance we delete the entire filter output .experimentally , the only substantial difference we could detect was that batchwise dropout resulted in larger errors during training . 
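the two convolutional dropout masks compared here differ only in their shape ; the numpy sketch below ( ours , with illustrative sizes ) builds an independent mask with one decision per sample , filter and spatial location , and a batchwise mask with one decision per filter that is broadcast over the whole minibatch .

import numpy as np

rng = np.random.default_rng(0)
batch, n_filters, s = 100, 32, 12          # illustrative sizes only
activations = rng.standard_normal((batch, n_filters, s, s))
p = 0.5

# independent dropout: an independent decision at every sample / filter / location
independent_mask = rng.random((batch, n_filters, s, s)) >= p
out_independent = activations * independent_mask

# batchwise dropout: one decision per filter for the whole minibatch,
# broadcast over samples and spatial locations
batchwise_mask = (rng.random(n_filters) >= p)[None, :, None, None]
out_batchwise = activations * batchwise_mask

in an efficient implementation the batchwise case would not multiply by a mask at all ; the convolution would simply be evaluated with the subarray of surviving filters , as described next .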
to implement batchwise dropout efficiently , notice that the dropout masks corresponds to forming subarrays of the weight arrays with size the forward - pass is then simply a regular convolutional operation using ; that makes it possible , for example , to take advantage of the highly optimized function from the https://developer.nvidia.com/cudnn[nvidia cudnn ] package . for mnist , we trained a lenet-5 type cnn with two layers of filters , two layers of max - pooling , and a fully connected layer . there are three places for applying 50% dropout : the test errors for the two dropout methods are similar , see figure [ fig : mnist - test - errors , ] . , scaledwidth=38.0% ] for a first experiment with cifar-10 we used a small convolutional network with small filters .the network is a scaled down version of the network from ; there are four places to apply dropout : the input layer is .we trained the network for 1000 epochs using randomly chosen subsets of the training images , and reflected each image horizontally with probability one half .for testing we used the centers of the images . in figure[ fig : cifar-10-results - using ] we show the effect of varying the dropout probability .the training errors are increasing with , and the training errors are higher for batchwise dropout .the test - error curves both seem to have local minima around .the batchwise test error curve seems to be shifted slightly to the left of the independent one , suggesting that for any given value of , batchwise dropout is a slightly stronger form of regularization . .batchwise dropout produces a slightly lower minimum test error .[ fig : cifar-10-results - using],scaledwidth=45.0% ] we trained a deep convolutional network on cifar-10 _ without _ data augmentation . using the notation of , our network has the form {2})_{12}-832c2 - 896c1-\mathrm{output},\ ] ]i.e. it consists of 12 convolutions with filters in the -th layer , 12 layers max - pooling , followed by two fully connected layers ; the network has 12.6 million parameters .we used an increasing amount of dropout per layer , rising linearly from 0% dropout after the third layer to 50% dropout after the 14th . even though the amount of dropout used in the middle layers is small, batchwise dropout took less than half as long per epoch as independent dropout ; this is because applying small amounts of independent dropout in large hidden - layers creates a bandwidth performance - bottleneck .as the network s max - pooling operation is stochastic , the test errors can be reduced by repetition .batchwise dropout resulted in a average test error of 7.70% ( down to 5.78% with 12-fold testing ) .independent dropout resulted in an average test error of 7.63% ( reduced to 5.67% with 12-fold testing ) .we have implemented an efficient form of batchwise dropout . all other things being equal, it seems to learn at roughly the same speed as independent dropout , but each epoch is faster . given a fixed computational budget , it will often allow you to train better networks .there are other potential uses for batchwise dropout that we have not explored yet : * restricted boltzmann machines can be trained by contrastive divergence with dropout . batchwise dropout could be used to increase the speed of training . 
*when a fully connected network sits on top of a convolutional network , training the top and bottom of the network can be separated over different computational nodes .the fully connected top - parts of the network typically contains 95% of the parameters keeping the nodes synchronized is difficult due to the large size of the matrices . with batchwise dropout, nodes could communicate instead of and so reducing the bandwidth needed .* using independent dropout with recurrent neural networks can be too disruptive to allow effective learning ; one solution is to only apply dropout to some parts of the network .batchwise dropout may provide a less damaging form of dropout , as each unit will either be on or off for the whole time period . *dropout is normally only used during training .it is generally more accurate use the whole network for testing purposes ; this is equivalent to averaging over the ensemble of dropout patterns .however , in a `` real - time '' setting , such as analyzing successive frames from a video camera , it may be more efficient to use dropout during testing , and then to average the output of the network over time .* nested dropout is a variant of regular dropout that extends some of the properties of pca to deep networks .batchwise nested dropout is particularly easy to implement as the submatrices are regular enough to qualify as matrices in the context of the sgemm function ( using the lda argument ) .* dropconnect is an alternative form of regularization to dropout . instead of dropping hidden units ,individual elements of the weight matrix are dropped out . using a modification similar to the one in section [ sub : fixed - dropout - amounts ] , there are opportunities for speeding up dropconnect training by approximately a factor of two. 10 d. ciresan , u. meier , and j. schmidhuber .link : www.idsia.ch/~juergen / cvpr2012.pd[multi - column deep neural networks for image classification ] . in _ computer vision and pattern recognition ( cvpr ) , 2012 ieee conference on _ , pages 36423649 , 2012 .ben graham .fractional max - pooling , 2014 .http://arxiv.org/abs/1412.6071 .hinton and salakhutdinov .http://www.cs.toronto.edu/~hinton/science.pdf[reducing the dimensionality of data with neural networks ] ., 313 , 2006 .alex krizhevsky .http://www.cs.toronto.edu/~kriz/cifar.html[learning multiple layers of features from tiny images ] .technical report , 2009 .alex krizhevsky .one weird trick for parallelizing convolutional neural networks , 2014 .http://arxiv.org/abs/1404.5997 .y. l. le cun , l. bottou , y. bengio , and p. haffner .gradient - based learning applied to document recognition ., 86(11):22782324 , november 1998 .oren rippel , michael a. gelbart , and ryan p. adams .learning ordered representations with nested dropout , 2014 .http://arxiv.org/abs/1402.0915 .nitish srivastava , geoffrey hinton , alex krizhevsky , ilya sutskever , and ruslan salakhutdinov .http://jmlr.org/papers/v15/srivastava14a.html[dropout : a simple way to prevent neural networks from overfitting ] . , 15:19291958 , 2014 .ilya sutskever , james martens , george e. dahl , and geoffrey e. hinton .http://jmlr.org/proceedings/papers/v28/[on the importance of initialization and momentum in deep learning ] . involume 28 of _ jmlr proceedings _, pages 11391147 .jmlr.org , 2013 . 
li wan , matthew zeiler , sixin zhang , yann lecun , and rob fergus .http://jmlr.org/proceedings/papers/v28/wan13.html[regularization of neural networks using dropconnect ] , 2013 .jmlr w&cp 28 ( 3 ) : 10581066 , 2013 .sida wang and christopher manning .http://jmlr.csail.mit.edu/proceedings/papers/v28/wang13a.html[fast dropout training ] ., 28(2):118126 , 2013 .wojciech zaremba , ilya sutskever , and oriol vinyals .recurrent neural network regularization , 2014 .we might have called batchwise dropout _ fast dropout _ but that name is already taken _ _fast dropout is an alternative form of regularization that uses a probabilistic modeling technique to imitate the effect of dropout ; each hidden unit is replaced with a gaussian probability distribution .the _ fast _ relates to reducing the number of training epochs needed compared to regular dropout ( with reference to results in a preprint of ) . training a network 784 - 800 - 800 - 10 on the mnist dataset with 20% input dropout and 50% hidden - layer dropout, fast dropout converges to a test error of 1.29% after 100 epochs of l - bfgs .this appears to be substantially better than the test error obtained in the preprint after 100 epochs of regular dropout training . however , this is a dangerous comparison to make .the authors of used a learning - rate scheme designed to produce optimal accuracy eventually , _ not _ after just one hundred epochs .we tried using batchwise dropout with minibatches of size 100 and an annealed learning rate of .we trained a network with two hidden layers of 800 rectified linear units each .training for 100 epochs resulted in a test error of 1.22% ( s.d .0.03% ) . after 200 epochsthe test error has reduced further to 1.12% ( s.d .moreover , per epoch , batchwise - dropout is faster than regular dropout while fast - dropout is slower . assuming we can make comparisons across different programs , the 200 epochs of batchwise dropout training take less time than the 100 epoch of fast dropout training . | dropout is a popular technique for regularizing artificial neural networks . dropout networks are generally trained by minibatch gradient descent with a dropout mask turning off some of the units a different pattern of dropout is applied to every sample in the minibatch . we explore a very simple alternative to the dropout mask . instead of masking dropped out units by setting them to zero , we perform matrix multiplication using a submatrix of the weight matrix unneeded hidden units are never calculated . performing dropout _ batchwise _ , so that one pattern of dropout is used for each sample in a minibatch , we can substantially reduce training times . batchwise dropout can be used with fully - connected and convolutional neural networks . |
increasing computational power and algorithmic advancements are making many computational materials problems more tractable .for example , density functional theory ( dft ) is used to assess the stability of potential metal alloys with high accuracy . however ,dft computational burdens prevent feasible exploration of all possible configurations of a system . in certain cases ,one can map first - principles results on to a faster hamiltonian , the cluster expansion ( ce ) . over the past 30 years, ce has been used in combination with first - principles calculations to predict the stability of metal alloys , to study the stability of oxides , and to model interaction and ordering phenomena at metal surfaces . numerical error and_ relaxation _ effects decrease the predictive power of ce models .the aim of this paper is to demonstrate the effects of both and to provide a heuristic for knowing when a reliable ce model can be expected for a particular material system .ce treats alloys as a purely configurational problem , i.e. , a problem of decorating a fixed lattice with the alloying elements .however , ce models are usually constructed with data taken from `` relaxed '' first - principles calculations where the individual atoms assume positions that minimize the total energy , displaced from ideal lattice positions .unfortunately , cluster expansions of systems with larger lattice relaxation converge more slowly than cluster expansions for unrelaxed systems .in fact , ce with increased relaxation may fail to converge altogether . no rigorous description of conditions for when the ce breakdown occurs exists in the literature .a persistent question in the ce community regards the impact of relaxation on the accuracy of the cluster expansion .proponents of ce argue that the ce formalism holds even when the training structures are relaxed because there is a one - to - one correspondence in configurational space between relaxed and unrelaxed structures . in this paper, we demonstrate a relationship between relaxation and loss of sparsity in the ce model .as sparsity decreases , the accuracy of ce prediction decreases .in addition to the effects of relaxation , we also examine the impact of numerical error on the reliability of the ce fits .there are several sources of numerical error : approximations to the physics of the model , the number of -points , the smearing method , basis set sizes and types , etc .most previous studies only examine the effect of gaussian errors on the ce model , but arnold et al . also investigated systematic error ( round - off and saturation error ) .they showed that , above a certain threshold , the ce model fails to recover the correct answer , that is , the ce model started to incorporate spurious terms ( i.e. , sparsity was reduced ) . a primary question that we seek to answeris how the shape of the error distribution impacts predictive performance of a ce model . in this study, we quantify the effects of : 1 ) relaxation , by comparing ce fits for relaxed and unrelaxed data sets , and 2 ) numerical error , by adding different error distributions ( i.e. , gaussian , skewed , etc . )to ideal ce models .we study more than one hundred hamiltonians ranging from very simple pair potentials to first - principles dft hamiltonians .we present a heuristic for judging the quality of the ce fits .we find that a small mean - squared displacement is indicative of a good ce model . 
in agreement with past studies ,we show that the predictive power of ce is lowered when the level of error is increased .we find that there is no clear correlation between the shape of the error profile and the ce predictive power .it is possible to decide whether the computational cost of generating ce fitting data is worthwhile by examining the degree of relaxation in a smaller set of 50150 structures .relaxation is distinct from numerical error it is not an error but it has a similar negative effect . when relaxations are significant , it is less likely that a reliable ce model exists .relaxation is a systematic form of distortion , the local adjustment of atomic positions to accommodate atoms of different sizes .atoms `` relax '' away from ideal lattice sites to reduce the energy , with larger atoms taking up more room , smaller atoms giving up volume .the type of relaxations ( i.e. , the distortions that are possible ) for a particular unit cell are limited by the symmetry of the initially undistorted case , as shown in fig .[ fig : symmetry_allowed_relaxations ] . in the rectangular case ( left ) ,the unit cell aspect ratio may change without changing the initial rectangular symmetry . at the same time , the position of the blue atom is _ not allowed _ to change because doing so would destroy rectangular symmetry .in contrast , the two blue atoms in the similar structure shown in the right panel of the figure can move horizontally without reducing the symmetry .( color online ) symmetry- allowed distortions for two different unit cells .the atomic positions of the cell on the left do not have any symmmetry - allowed degrees of freedom , but the aspect ratio of the unit cell is allowed to change . for the unit cell on the right , the horizontal positions of the atoms in the middle layer may change without destroying the symmetry .( the unit cell aspect ratio may also change.),scaledwidth=55.0% ] conceptually , the cluster expansion is a technique that describes the local environment around an atom and then sums up all the `` atomic energies '' ( environments in a unit cell ) to determine a total energy for the unit cell . for the cluster expansion model to be sparse to be a predictive model with few parameters it relies on the premise that any specific local neighborhood contributes the same atomic energy to the total energy regardless of the crystal in which it is embedded . for example , the top row of fig .[ fig : relaxex ] shows the same local environment ( denoted by the hexagon around the central blue atom ) embedded in two distinct crystals .if the contribution of this local environment to the total energy is the same in both cases , then the cluster expansion of the energy will be sparse .the effect of relaxation on the sparsity becomes clear in the bottom row of fig .[ fig : relaxex ] . in the left - hand case [ panel ( a )] , the crystal relaxes dramatically and the central blue atom is now _ four - fold coordinated _ entirely by red atoms .by contrast , in the right - hand case [ panel ( b ) ] , a collapse of the layers is not possible and the blue atoms are allowed by symmetry to move closer to each other . 
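the amount of relaxation sketched in these figures can be quantified ; the measure used later in this paper is a mean - squared displacement between unrelaxed and relaxed atomic positions , normalized by the system volume . the code below is a minimal numpy illustration ( ours , not the lammps implementation , and without the periodic - boundary unwrapping that the production analysis performs ) .

import numpy as np

def mean_squared_displacement(x_ideal, x_relaxed):
    # average over atoms of the squared displacement from the ideal lattice site;
    # both arrays have shape (n_atoms, 3) in the same cartesian frame
    d = np.asarray(x_relaxed) - np.asarray(x_ideal)
    return float(np.mean(np.sum(d * d, axis=1)))

def normalized_msd_percent(x_ideal, x_relaxed, volume):
    # ratio of the msd to the volume of the system, in percent,
    # following the definition given in the methods section below
    return 100.0 * mean_squared_displacement(x_ideal, x_relaxed) / volume

a measure of this kind registers the dramatic collapse of case ( a ) as a much larger displacement than the modest shift of the blue atoms in case ( b ) .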
from the point of view of the cluster expansion ,the local environments of the central blue atom are the same for both cases .this fact , that two different relaxed local environments have identical descriptions in the cluster expansion basis , leads to a slow convergence of cluster expansion models .the problem is severe when the atomic mismatch is large and relaxations are significant ( i.e. , when atoms move far from the ideal lattice positions . ) ( color online ) relaxation scheme .the top images show the original unrelaxed configurations , while the bottom figures show the relaxed configuration .the left images ( a ) shows the relaxation where the hexagon is contracted as shown by the black arrows in bottom left figure .the relaxation in the right images ( b ) is restricted to displacement of the blue atoms as shown by the black arrows in bottom right figure ., scaledwidth=55.0% ] we investigated the predictive power of cluster expansions using data from more than one hundred hamiltonians generated from density functional theory ( dft ) , the embedded atom method , lennard - jones potential and stillinger - weber potential . to investigate the effects of relaxation, we examined different metrics to measure the degree of atomic relaxation in a crystal configuration .first - principles dft calculations have been used to simulate metal alloys and for building cluster expansion models . however , dft calculations are too expensive to extensively examine the relaxation in many different systems ( lattice mismatch ) .thus , we examine other methods such as the embedded atom method ( eam ) which is a multibody potential .the eam potential is a semi - empirical potential derived from first - principles calculations .eam potentials of metal alloys such as ni - cu , ni - al , and cu - al have been parameterized from dft calculations and validated to reproduce their experimental properties such as bulk modulus , elastic constants , lattice constants , etc . .eam potentials are computationally cheaper , allowing us to explore the effects of relaxation for large training sets ; however , we are limited by the number of eam potentials available . therefore , we also selected two classical potentials , lennard - jones ( lj ) and stillinger - weber ( sw ) , to adequately examine various degrees of relaxation , which can be varied using free parameters in each model .the lennard - jones potential is a pairwise potential . using the lj potential , we can model a binary ( ) alloy with different lattice mismatch and interaction strength between the a and b atoms by adjusting the parameter in the model .additionally , we also examined the stillinger - weber potential which has a pair term and an angular ( three - body ) term . in attempting to determine the conditions under which the ce formalism breaks down , we implemented a set of parameters in the sw potential wherethe angular dependent term could be turned on / off using the coefficient .for example , depending on the strength of , the local atomic environment in 2d could switch between 3- , 4- and 6-fold coordination by changing a single parameter .thus , when the system relaxes to a different coordination , the ce fits would no longer be valid or at least not sparse .all first - principles calculations were performed using the vienna ab initio simulation package ( vasp ) .we used the projector - augmented - wave ( paw ) potential and the exchange - correlation functional proposed by perdew , burke , and ernzerhof ( pbe ) . 
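the tunable classical models mentioned above are simple enough to write down explicitly ; the sketch below is our own illustration , with parameter values that are not the ones used in this study , showing how a binary lennard - jones system with an adjustable size mismatch between a and b atoms can be evaluated for a small cluster of atoms .

import numpy as np
from itertools import combinations

def lj_pair_energy(r, sigma, eps):
    # standard 12-6 lennard-jones pair energy
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def binary_lj_energy(positions, species, sigma, eps):
    # positions : (n, 3) cartesian coordinates (no periodic boundaries here)
    # species   : length-n sequence of 'a' / 'b'
    # sigma, eps: dicts keyed by ('a','a'), ('a','b'), ('b','b')
    e = 0.0
    for i, j in combinations(range(len(species)), 2):
        r = np.linalg.norm(positions[i] - positions[j])
        key = tuple(sorted((species[i], species[j])))
        e += lj_pair_energy(r, sigma[key], eps[key])
    return e

# illustrative parameters only: a 5% size mismatch between a and b atoms
sigma = {('a', 'a'): 1.00, ('a', 'b'): 1.025, ('b', 'b'): 1.05}
eps = {('a', 'a'): 1.0, ('a', 'b'): 1.0, ('b', 'b'): 1.0}

changing the a - b entries of sigma and eps changes the lattice mismatch and the interaction strength , which is the handle used in this work to dial the degree of relaxation . returning to the first - principles calculations , the vasp settings were as follows .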
in all calculations, we used the default settings implied by the high - precision option of the code .equivalent -point meshes were used for brillioun zone integration to reduce numerical errors .we used 1728 ( ) -points for the pure element structures and an equivalent mesh for the binary alloy configurations .each structure was allowed to fully relax ( atomic , cell shape and cell volume ) .relaxation was carried out using molecular dynamics simulations for eam , lj and sw potentials .two molecular dynamics packages were used to study the relaxation : gulp and lammps .details for the lj , sw and eam potentials and the dft calculations can be found in the supplementary materials .the universal cluster expansion ( uncle ) software was used to generate 1000 derivative superstructures each of face - centered cubic ( fcc ) , body - centered cubic ( bcc ) and hexagonal closed - packed ( hcp ) lattice . for the dft calculations, we used only 500 structures instead of 1000 due to the computational cost .we generated a set of 1100 clusters , ranging from 2-body up to 6-body interactions .100 independent ce fits were performed for each system ( hamiltonian and lattice ) .we performed cluster expansions using the uncle software .we briefly discuss some important details about cluster expansion here , but for a more complete description , see the supplementary materials and past works .cluster expansion is a generalized ising model with many - body interactions .the cluster expansion formalism allows one to map a physical property , such as e , to configuration ( ) : where e is energy , is the correlation matrix ( basis ) , and is coefficient or effective cluster interaction ( eci ) . when constructing a ce model , we are solving for the effective cluster interactions , or .we used the compressive sensing ( cs ) framework to solve for these coefficients . the key assumption in compressivesensing is that the solution vector has few nonzero components , i.e. , the solution is sparse .the cs framework guarantees that the sparse solution can be recovered from a limited number of dft energies . using the , we can build a ce model to interpolate the configuration space .each ce fit used a random selection of 25% of the data for training and 75% for validation .results were averaged over the 100 ce fits with error bars computed from the standard deviation .we defined the percent error as a ratio of the prediction root mean squared error ( rms ) over the standard deviation of the input energies , this definition of percent error allowed us to consistently compare different systems . currently , there is no standard measure to indicate the degree of relaxation .we evaluated different metrics as a measure of the relaxation : normalized mean- squared displacement , ackland s order parameter , difference in steinhardt order parameter ( ) , soap , and the centro - symmetry parameter .we compared the metrics across various hamiltonians to find a criterion that is independent of the potentials and systems .we found that none of these metrics are descriptive / general enough except for the normalized mean- squared displacement . to measure the relaxation of each structure / configuration, we used the mean- squared displacement ( msd ) to measure the displacement of an atom from its reference position , i.e. 
, the unrelaxed atomic position .the msd metric is implemented in the lammps software , which also incorporates the periodic boundary conditions to properly account for displacement across a boundary .the msd is the total squared displacement averaged over all atoms in the crystal : - x[0])^2\ ] ] where is the final relaxed configuration and 0 is the initial unrelaxed configuration .additionally , we defined a normalized mean - square displacement ( nmsd ) percent : which is the ratio of msd to volume of the system .this allows for a relaxation comparison parameter that is independent of the overall scale . to explore the effects of relaxation on ce predictability ,we examine relaxation in various systems from very high accuracy ( dft ) to very simple , tunable systems ( lj and sw potentials ) .we examine more than one hundred different hamiltonians and we find several common trends among the different systems . in most cases , we find that the relaxed ce fits are worse ( higher prediction error and higher number of coefficients ) than the unrelaxed .for example , fig .[ fig : unrel - rel - vasp - eam ] shows the cluster expansion fitting for unrelaxed and relaxed data sets of ni - cu alloy system using dft and eam with two different primitive lattices .though it seems strange for us to model ni- cu using a bcc primitive lattice when ni - cu is closed - packed , this is a method for us to evaluate the relaxation of ni - cu for a highly relaxed system . as fig .[ fig : unrel - rel - vasp - eam ] shows , ni - cu alloy fitting for a fcc lattice is below 10% error , while bcc fitting result in more and higher percent error ( above 10% ) .we find similar results in the relaxation of ni - cu alloy using first - principles dft and eam potential .the difference between relaxed and unrelaxed ce fits are negligible when relaxations are small .this is shown in fig .[ fig : unrel - rel - vasp - eam ] for the relaxation of fcc superstructures using a ni - cu eam potential .( color online ) cluster expansion fits for ni - cu alloy using dft or eam potential .each bar represents the average percent error and error bar ( standard deviations ) for 100 independent ce fits .the blue bars represent the unrelaxed ce fits , while the red bars represent the relaxed ce fits .the colored number represents the average number of coefficients used in the ce models .when the configurations are relaxed , we find that the ce fits are often worse ( higher prediction error and higher number of ) than unrelaxed system .however , we show that in one case ( ni - cu eam ) the unrelaxed and relaxed ce fits are identical ( same error and same number of coefficients ) and this is due to a small relaxation.,scaledwidth=50.0% ] 0.55 [ fig : vasp - fcc - prob ] 0.55 0.180 0.178 0.55 [ fig : vasp - bcc - prob ] 0.55 0.178 0.192 fig . [fig : unrel - rel - vasp - eam ] shows that increased relaxation is associated with reduced sparsity ( increased cardinality of ) .one possible implication is that number of coefficients ( ) could be used to evaluate the predictive performance of the ce fits . the number of coefficients used in the fits ( such as in fig .[ fig : unrel - rel - vasp - eam ] ) is a simple way to determine whether or not a ce fit can be trusted .[ fig : vasp - fcc - ce ] and [ fig : vasp - bcc - ce ] show similar clusters across the 100 independent ce fittings ; thus , vertical lines indicate the presence of the same cluster across all ce fits . 
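each of the 100 fits referred to here solves a sparse linear problem , the energies being equal to the correlation matrix times the vector of effective cluster interactions . the sketch below is not the uncle compressive - sensing solver ; it is an l1 - regularized stand - in ( scikit - learn 's lasso , with an arbitrary regularization strength ) that reproduces the workflow used in this paper : a random 25% / 75% training / validation split , a fit , the percent error defined as the rms prediction error over the standard deviation of the input energies , and a count of the nonzero coefficients .

import numpy as np
from sklearn.linear_model import Lasso

def ce_fit_once(pi, energies, train_fraction=0.25, alpha=1e-3, seed=0):
    # pi : (n_structures, n_clusters) correlation matrix, energies : (n_structures,)
    rng = np.random.default_rng(seed)
    n = len(energies)
    train = rng.permutation(n)[: int(train_fraction * n)]
    test = np.setdiff1d(np.arange(n), train)

    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000)
    model.fit(pi[train], energies[train])

    rms = np.sqrt(np.mean((model.predict(pi[test]) - energies[test]) ** 2))
    percent_error = 100.0 * rms / np.std(energies)
    n_nonzero = int(np.count_nonzero(model.coef_))
    return percent_error, n_nonzero

repeating such a fit over many random training subsets and recording which coefficients appear is what produces the cluster - occurrence plots discussed here .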
when the fit is good , only a small subset of clusters is needed ( fig .[ fig : vasp - fcc - ce ] ) . on the other hand ,[ fig : vasp - bcc - ce ] shows some common clusters in all of the ce fits with several additional clusters .[ fig : rms - js ] shows the correlation of the percent error with the number of terms in the expansion .we find that as the number of coefficients increases the percent error increases .however , this is not a sufficient metric as shown in fig . [fig : rms - js ] where the number of coefficient varies a lot .nonetheless , the number of coefficients may be used as a general , quick test .0.5 displays the ce fitting error vs the number of coefficients , while plot [ fig : msd - js ] highlights the relationship between number of coefficients and relaxation .the dashed line approximates what we consider as the maximum acceptable error for a ce model ( 10% ) . the dashed line in fig .[ fig : msd - js ] marks the estimated threshold for acceptable relaxation level .each symbol represents 100 independent ce fittings for each hamiltonian .higher error correlates with a higher number of coefficients.,title="fig : " ] 0.5 displays the ce fitting error vs the number of coefficients , while plot [ fig : msd - js ] highlights the relationship between number of coefficients and relaxation .the dashed line approximates what we consider as the maximum acceptable error for a ce model ( 10% ) . the dashed line in fig .[ fig : msd - js ] marks the estimated threshold for acceptable relaxation level .each symbol represents 100 independent ce fittings for each hamiltonian .higher error correlates with a higher number of coefficients.,title="fig : " ] the degree of relaxation is crucial to define whether or not the ce model is accurate or not . however , there is no standard for _ when _ cluster expansion fails due to relaxation . thus far, we have made some remarks about relaxation and ce fits .but the question of how much relaxation is allowed has not been addressed . by examining a few metrics : nmsd , soap , d6 , ackland and centro - symmetry , we find that there is a relationship between degree of relaxation and the quality of ce fits .as shown in the supplementary information , we have used these metrics to investigate over 100 + systems ( different potentials , lattice mismatches , and interaction strengths ) . here , we present a heuristic to measure the degree of relaxation based on the nmsd. in general , cluster expansion will fail when the relaxation is large .figure [ fig : msd - js ] shows that a small nmsd weakly correlates with a small number of coefficients .however , figure [ fig : rms - msd ] highlights the correlation between degree of relaxation and prediction error .there is a roughly linear relationship between the degree of relaxation and the ce prediction .we partition the quality of the ce models into three regions : good ( nmsd 0.1% ) , maybe ( 0.1% nmsd 1% ) and bad ( nmsd 1% ) .the `` maybe '' region is the gray area where the ce fit can be good or bad .this metric provide a heuristic to evaluate the reliability of the ce models , i.e. 
, any systems that exhibit high relaxation will fail to provide an accurate ce model .as we have shown in the previous section , greater relaxation results in worse ce fitting .in addition to the effects of relaxation , we now investigate the effects of numerical error on reliability of ce models .numerical error arises from various sources such as the number of -points , the smearing method , minimum force tolerance , basis set sizes and types , etc .these errors are not stochastic error or measurement errors ; they arise from tuning the numerical methods .we assume that the relaxation - induced change in energy for each structure is an _ error term _ that the ce fitting algorithm must handle .the collection of these `` errors '' from all structures in the alloy system then form an error profile ( or distribution ) . using the simulated relaxation error profiles from the previous section together with common analytic distributions , we built `` toy '' ce models with known coefficients .we then examined whether or not the shape of the error distribution affects the ce predictive ability .the numerical errors in dft calculations are largely understood , but it is difficult to disentangle the effects of different , individual error sources . instead of studying the effects of errors separately , we added different distributions of error to a `` toy '' model in order to imitate the aggregate effects of the numerical error on ce models .hence , we opt to simplify the problem by creating a `` toy '' problem for which the exact answer is known . to restrict the number of independent variables, we formulated a `` toy '' cluster expansion model by selecting five non - zero values for a subset of the total clusters . using this toy ce ,we predicted a set of energies for 2000 known derivative superstructures of an fcc lattice , these values are used as the true energies for all subsequent analysis .we added error to , chosen from either : 1 ) `` simulated '' distributions obtained by computing the difference between relaxed and unrelaxed energies predicted by either dft , eam , lj or sw models ( fig .[ fig : experimentaldistrbutions ] ) ; or 2 ) common analytic distributions ( fig .[ fig : analyticequalwidthdistributions ] ) . 
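a toy problem of this kind is straightforward to set up ; the sketch below is our own illustration , with made - up sizes , coefficient positions and values , rather than the actual toy model of the paper . it builds a random correlation matrix , assigns five nonzero effective cluster interactions , computes the corresponding 'true' energies , and perturbs them with errors drawn from an arbitrary distribution at a chosen error level .

import numpy as np

rng = np.random.default_rng(0)
n_structures, n_clusters = 2000, 500        # illustrative sizes
pi = rng.choice([-1.0, 0.0, 1.0], size=(n_structures, n_clusters))

true_eci = np.zeros(n_clusters)
true_eci[[0, 3, 10, 42, 100]] = [1.0, -0.6, 0.3, -0.15, 0.05]   # five nonzero terms
true_energies = pi @ true_eci

def add_error(energies, draw, level=0.02):
    # perturb the toy energies with errors drawn from `draw` (a callable returning
    # an array of the requested length), scaled so that their spread is `level`
    # times the spread of the true energies
    raw = draw(len(energies))
    raw = (raw - raw.mean()) / raw.std()
    return energies + level * energies.std() * raw

noisy_gaussian = add_error(true_energies, rng.standard_normal, level=0.02)
noisy_skewed = add_error(true_energies, lambda n: rng.gamma(2.0, size=n), level=0.02)

refitting the perturbed energies with the same sparse solver and comparing the recovered coefficients with true_eci is the basic experiment repeated below for the different error profiles .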
to generate the simulated distributions , we chose a set of identical structures and fitted them using a variety of classical and semi - classical potentials , and quantum mechanical calculations using vasp .for each of the potentials we selected , we calculated an unrelaxed total energy for each structure and then performed relaxation to determine the lowest energy state , .the difference between these two energies ( ) was considered to be the `` relaxation '' error .( color online ) distributions from real relaxations using classical and semi - classical potentials , as well as dft calculations .the distributions are all normalized to fall within 0 and 1 .the widths , , were calculated by taking the difference between the 25 and 75 percentiles.,scaledwidth=50.0% ] ( color online ) the analytic , equal width distributions used for adding error to the toy model ce fit.,scaledwidth=50.0% ] certain assumptions are usually made about the error in the signal , namely that it is gaussian .the original cs paradigm proves that the error for signal recovery obeys : where bounds the amount of error in the data , is the cs solution , is the true solution , and is the vector with all but the largest components set to zero .this shows that , _ at worst , the error in the recovery is bounded by a term proportional to the error_. for our plots of this error , we first normalized so that ] .a physical quantity such as energy can be expressed as a linear combination of basis function : where the argument to the function is a vector of occupation variable , .the are the basis function or often referred to as the cluster functions .each cluster function corresponds to a cluster of lattice sites .the coefficients are the effective cluster interactions or eci s .the main task of building a ce model is to find the and their values .we can solved for the using the structure inversion method .however , we use a new approach based on compressive sensing to solve for these coefficients .this is a more extensive version of the method present in the paper including the various parameters , forms of the potential and relaxation metrics .two molecular dynamics packages were used to study the relaxation : gulp and lammps .gulp ( general utility lattice program ) is written to perform a variety of tasks based on force field methods such molecular dynamics , monte carlo and etc .gulp is a general purpose code for the modeling of solids , clusters , embedded defects , surfaces , interfaces and polymers .lammps ( large - scale atomic molecular massively parallel simulator ) is a widely used molecular dynamics program .we used these two programs to minimize / relaxed each structure .we computed the energy of each structure ( unrelaxed and relaxed ) .relaxation of each structure was obtained by minimization of total energy using gulp or lammps via a conjugate gradient scheme .molecular dynamics simulations were carried for the embedded atom method ( eam ) , lennard - jones ( lj ) and stillinger - weber ( sw ) potentials .the eam potential is a semi - empirical potential derived from first - principles calculations .the embedded atom method ( eam ) potential has following form : eam potentials of metal alloys such ni - cu , ni - al , cu - al have been parameterized from first - principle calculations and validated to reproduce experimental properties , bulk modulus , elastic constants , lattice constants , etc .compared to first - principles calculations , eam potentials are computationally cheaper .thus , this allows us to 
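given arrays of unrelaxed and relaxed energies for the same structures , the error profile and the summary statistics quoted in the figure can be reproduced along the following lines ; this is a sketch of ours , not the original analysis scripts , and it assumes a simple min - max rescaling for the normalization to the interval from 0 to 1 mentioned in the caption .

import numpy as np
from scipy.stats import skew, kurtosis

def error_profile(e_unrelaxed, e_relaxed):
    # relaxation 'error' per structure, rescaled to lie between 0 and 1
    err = np.asarray(e_unrelaxed) - np.asarray(e_relaxed)
    return (err - err.min()) / (err.max() - err.min())

def profile_summary(profile):
    q25, q75 = np.percentile(profile, [25, 75])
    return {
        'width': q75 - q25,          # difference of the 25 and 75 percentiles
        'skewness': float(skew(profile)),
        'kurtosis': float(kurtosis(profile)),
    }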
explore the effect of relaxation for large training sets . nonetheless , we are limited by the number of eam potentials available .we used various eam potentials to study the relaxation ; these binary eam potentials are shown in table [ eam - parameter ] . .eampotentials used to study the relaxation .lattice mismatch is shown in percentage .lattice mismatch = ( )/ ( ( )/2 ) , where is the lattice constant of the pure element . [ cols="^,^",options="header " , ] in order to distinguish and measure the relaxation of the atoms from their ideal positions , we examined several metrics ( order parameters ) to quantify the relaxation : normalized mean - squared displacement or nmsd ( see the method in the main article ) , ackland s order parameter , , soap , and centro - symmetry .we found that some of these order parameters are not descriptive / general enough for all cases ( potentials and crystal lattices ) .we used the ackland s order parameter to identify the crystal structure after relaxation .ackland s op identify each atomic local environment and assign it as fcc , bcc , hcp and unknown .we used this op to determine which structures remain the same or on lattice and which structures undergo a structural change .we can use this order parameter to separate / sort those structure that remain the same to examine the robustness of ce due the relaxation of crystal structure . similar to the ackland s order parameter , the centro - symmetry identifies the crystal structure of each atom based on the local arrangement ( neighbors ) . for example , figure [ fig : relaxed - sw - ack - fcc ] shows the mapping of msd and ackland order parameter for each structure at 5% ( top plot ) and 15% ( bottom plot ) .overall , we can see that the msd increases with higher lattice mismatch . the spread of the ackland s order parameter is also affected .going from 5% ( system a in table [ sw - parameter ] ) to 15% ( system c in table [ sw - parameter ] ) , the ce fitting error increases from 41.2% ( 110 clusters ) to 63.5% ( 156 clusters ) for the bcc .when the lattice mismatch increases , the mean - squared displacement also increases .ackland s order parameter and centro - symmetry are useful since they provide information about individual atoms . however , ackland s order parameter and centro - symmetry is too specific and it does not provide a useful measure of relaxation .in addition to using the crystallographic information as a measure of relaxation , we used a variant of the steinhardt s bond order parameter that we called the order parameter or the metric , which is a measure of the difference between the local atomic environment ( relaxed and unrelaxed ) .we computed the local atomic environment using the ( spherical harmonic with ) for the unrelaxed and relaxed configuration .we averaged the difference of the two configurations , .figure [ fig : d6-op ] shows the metric as a measure of relaxation vs the mean - squared displacement ( msd ) .we observe that metric does not correlate withe msd . 
as relaxation increases ( higher msd ) ,we expect that the value also increase .however , this metric is not robust for all systems , that is , we can not compare the relaxation across all hamiltonians ( potentials and crystal lattices ) .similar to the metrics , we used another metrics known as the soap ( smooth overlap of atomic position ) similarity kernel .the soap similarity kernel measures the difference in configuration ( 1 when it is identical and decreasing as the difference increases ) .the soap kernel is invariant to rotation and translation ; however , this metric is not applicable for multiple species cases .[ fig : soap - error ] shows the prediction error vs soap .similar to , the soap value does not correlate with the prediction error or displacement , that is , these metrics are too broad and vary too much for small displacements .this problem lies in the normalization of soap and values . as a measure of the relaxation using a lennard- jones potential .we show that the metric does not correlated with the displacement .we show the relaxation of three crystal lattice .lj favor fcc/ hcp ; thus , we should not observe high relaxation ( this is indicated by the displacement which is less than 0.001 .however , the metrics show a very broad range from 0.0 ( identical configuration ) up to 0.1 .although we only show this plot for lj , the results of sw and eam potential reveal the same conclusion , that is , is not a sufficient metric to analyze the various crystal lattices and potentials.,scaledwidth=50.0% ] none of the normal quantifying descriptions of distribution shape ( e.g. , width , skewness , kurtosis , standard deviation , etc . )show a correlation with the ce prediction error .the error increased proportionally with the level of error in each system ( 2 , 5 , 10 and 15% error ) .0.47 ( color online ) width , skewness and kurtosis using the relaxation energies .the relaxation energies are obtained by taking the absolute difference of unrelaxed and relaxed energies .we show only the 2% and 15% error instead of all four error levels .this allows us to illustrate the effect of error level on the width , skewness and kurtosis of the distribution.,title="fig : " ] 0.47 ( color online ) width , skewness and kurtosis using the relaxation energies .the relaxation energies are obtained by taking the absolute difference of unrelaxed and relaxed energies .we show only the 2% and 15% error instead of all four error levels .this allows us to illustrate the effect of error level on the width , skewness and kurtosis of the distribution.,title="fig : " ] 0.47 ( color online ) width , skewness and kurtosis using the relaxation energies .the relaxation energies are obtained by taking the absolute difference of unrelaxed and relaxed energies .we show only the 2% and 15% error instead of all four error levels .this allows us to illustrate the effect of error level on the width , skewness and kurtosis of the distribution.,title="fig : " ]one other possibility is that the presence of outliers has a large impact on the performance of the bcs fit . to rule out that possibility , we performed fits with 0 , 1 , 2 , 10 , 20 , 30 , 40 , 50 and 60 outliers added to the error ( representing between 0 and 3% of the total data ) .outliers were selected randomly from between 2 and 4 standard deviations from the mean and then appended to the regular list of errors drawn from the distribution ( the total number of values equaling 2000 again to match the number of structures ) . 
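the outlier experiment described above can be mimicked as follows ; the sketch is ours , and the choice of a random sign for each outlier is our own assumption , since the text only states that outliers lie between 2 and 4 standard deviations from the mean .

import numpy as np

def errors_with_outliers(n_total, n_outliers, draw, rng=np.random.default_rng(0)):
    # draw a base error profile plus n_outliers values placed 2 to 4 standard
    # deviations from its mean, keeping the total number of values at n_total
    base = draw(n_total - n_outliers)
    mu, sd = base.mean(), base.std()
    signs = rng.choice([-1.0, 1.0], size=n_outliers)
    outliers = mu + signs * rng.uniform(2.0, 4.0, size=n_outliers) * sd
    return np.concatenate([base, outliers])

rng = np.random.default_rng(0)
errs = errors_with_outliers(2000, 20, rng.standard_normal)   # 20 outliers, about 1% of the data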
the summary is plotted in figure [ fig : outliersummary ] .the difference between fits as the number of outliers changes is comparable to the variance in the individual fits .we conclude then that outliers have no direct effect on the error profile s performance .+ ( color online ) fitting errors as outliers are added to the error profile .the difference between fits as the number of outliers changes is comparable to the variance in the individual fits.,scaledwidth=50.0% ] to further elucidate the claims relative to the ce framework s failures , we investigated whether we could measure the change in configuration upon relaxation . since the values selected in the model are backed by geometric clusters , the presence or absence of certain clusters has some correlation to the configuration of the physical system . when the physics is mostly dependent on configuration , the function can be sparsely represented by the ce basis .thus , we expect that the sparsity will be a good heuristic in determining when the ce breaks down .when the expansion terms do not decay well in the representation , it shows a misapplication of the ce basis to a problem that is not mostly configurational . 1 .[ item : xidefinition ] : total number of unique clusters used over 100 ce fits of the same dataset .we also call this the model complexity as shown in fig .[ fig : errorvsmodelcomplexity ] .[ item : notindefinition ] : number of `` exceptional '' clusters .these are clusters that show up fewer than 25 times across 100 fits , implying that they are not responsible for representing any real physics in the signal , but are rather included because the ce basis is no longer a sparse representation for the relaxed alloy system ( shown in fig .[ fig : errorvssingleshow ] .it is sensitive to the training subsets .[ item : lambdadefinition ] : number of _ significant _ clusters in the fit ; essentially just the total number of unique clusters minus the number of `` exceptional '' clusters , ( see fig .[ fig : errorvssignificantterms ] ) . | cluster expansion ( ce ) is effective in modeling the stability of metallic alloys , but sometimes cluster expansions fail . failures are often attributed to atomic relaxation in the dft - calculated data , but there is no metric for quantifying the degree of relaxation . additionally , numerical errors can also be responsible for slow ce convergence . we studied over one hundred different hamiltonians and identified a heuristic , based on a normalized mean - squared displacement of atomic positions in a crystal , to determine if the effects of relaxation in ce data are too severe to build a reliable ce model . using this heuristic , ce practitioners can determine a priori whether or not an alloy system can be reliably expanded in the cluster basis . we also examined the error distributions of the fitting data . we find no clear relationship between the type of error distribution and ce prediction ability , but there are clear correlations between ce formalism reliability , model complexity , and the number of significant terms in the model . our results show that the _ size _ of the errors is much more important than their distribution . |
the resistive plate chamber ( rpc ) muon detector of the compact muon solenoid ( cms ) experiment utilizes a gas recirculation system called closed loop ( cl ) , to cope with large gas mixture volumes and costs .a systematic study of closed loop gas purifiers has been carried out in 2008 and 2009 at the isr experimental area of cern with the use of rpc chambers exposed to cosmic rays with currents monitoring and gas analysis sampling points .goals of the study were to observe the release of contaminants in correlation with the dark current increase in rpc detectors , to measure the purifier lifetime , to observe the presence of pollutants and to study the regeneration procedure .previous results had shown the presence of metallic contaminants , and an incomplete regeneration of purifiers , .the basic function of the cms cl system is to mix and purify the gas components in the appropriate proportions and to distribute the mixture to the individual chambers .the gas mixture used is 95.2% of c in its environmental - friendly version r137a , 4.5% of , and 0.3% sf to suppress streamers and operate in saturated avalanche mode .gas mixture is humidified at the 45% rh ( relative humidity ) level typically to balance ambient humidity , which affects the resistivity of highly hygroscopic bakelite , and to improve efficiency at lower operating voltage .the cl is operated with a fraction of fresh mixture continuously injected into the system .baseline design fresh mixture fraction for cms is 2% , the test cl system was operated at 10% fresh mixture .the fresh mixture fraction is the fraction of the total gas content continuously replaced in the cl system with fresh mixture .the filter configuration is identical to the cms experiment .in the cl system gas purity is guaranteed by a multistage purifier system : * the purifier-1 consisting of a cartridge filled with 5 ( 10% ) and 3 ( 90% ) type linde molecular sieve based on zeolite manufactured by zeochem ; * the purifier-2 consisting of a cartridge filled with 50% cu - zn filter type r12 manufactured by basf and 50% cu filter type r3 - 11 g manufactured by basf ; * the purifier-3 consisting of a cartridge filled with ni alo filter type 6525 manufactured by leuna . the experimental setup ( fig . 
[ fig : setup ] )is composed of a cl system and an open mode gas system .a detailed description of the cl , the experimental setup , and the filters studied can be found in .the cl is composed of mixer , purifiers ( in the subunit called filters in the fig .[ fig : setup ] ) , recirculation pump and distribution to the rpc detectors .eleven double - gap rpc detectors are installed , nine in cl and two in open mode .each rpc detector has two gaps ( upstream and downstream ) whose gas lines are serially connected .the the gas flows first in the upstream gap and then in the downstream gap .the detectors are operated at a 9.2 kv power supply .the anode dark current drawn because of the high bakelite resistivity is approximately 1 - 2 .gas sampling points before and after each filter in the closed loop allow gas sampling for chemical and gaschromatograph analysis .the system is located in a temperature and humidity controlled hut , with online monitoring of environmental parameters .chemical analyses have been performed in order to study the dynamical behaviour of dark currents increase in the double - gap experimental setup and correlate to the presence of contaminants , measure lifetime of unused purifiers , and identify contaminant(s ) in correlation with the increase of currents . in the chemical analysis set - up ( fig .[ fig : chem_setup ] ) the gas is sampled before and after each cl purifier , and bubbled into a set of pvc flasks .the first flask is empty and acts as a buffer , the second and third flasks contain 250 ml solution of lioh ( 0.001 mole / l corresponding to 0.024g / l , optimized to keep the ph of the solution at 11 ) .the bubbling of gas mixture into the two flasks allows one to capture a wide range of elements that are likely to be released by the system , such as ca , na , k , cu , zn , cu , ni , f. at the end of each sampling line the flow is measured in order to have the total gas amount for the whole period of sampling .( m ) have been installed upstream the flasks .the sampling points ( fig .[ fig : sampling_point ] ) are located before the whole filters unit at position hv61 , after purifier-1 ( zeolite ) at hv62 , after purifier-2 ( cu / zn filter ) at hv64 and after the ni filter at position hv66 .rpc are very sensitive to environmental parameters ( atmospheric pressure , humidity , temperature ) , this study has been performed in environmentally controlled hut with pressure , temperature and relative humidity online monitoring . the comparison of temperature and humidity inside and outside the hut is displayed in fig .[ fig : isrtemp ] and fig .[ fig : isrrh ] , respectively , over the whole time range of the test .the inside temperature shows a variation of less than ; the inside humidity still reveals seasonal structures between 35% and 50% , it is , however , much smaller than the variation outside .gas mixture composition was monitored twice a day by gaschromatography , which also provided the amount of air contamination , stable over the entire data taking run and below 300 ( 100 ) ppm in closed ( open ) loop .purifiers were operated with unused filter material .the data - taking run was divided into cycles where different phenomena were expected .we have four cycles ( fig .[ fig : closedloop ] ) , i.e. 
, initial stable currents ( cycles 1 and 2 ) , at the onset of the raise of currents ( cycle 3 ) , in the full increase of currents ( cycle 4 ) .cycle 4 was terminated in order not to damage permanently the rpc detectors .the currents of all rpc detectors in open loop were found stable over the four cycles .[ fig : closedloop ] shows the typical behaviour of one rpc detector in cl .while the current of the downstream gap is stable throughout the run , the current of the upstream gap starts increasing after about seven months .such behaviour is suggestive of the formation of contaminants in the cl which are retained in the upstream gap , thus causing its current to increase , and leaving the downstream gap undisturbed . while the production of f is constant during the run period , significant excess of k and ca is found in the gas mixture in cycles 3 and 4 .the production of f is efficiently depressed by the zeolite purifier ( fig .[ fig : fmeno ] ) . the observed excess of k and ca could be explained by a damaging effect of hf ( continuously produced by the system ) on the zeolite filter whose structure contains such elements .further studies are in progress to confirm this model .preliminary results show that the lifetime of purifiers using unused material is approximately seven months . contaminants ( k , ca ) are released in the gas in correlation with the dark currents increase .the currents increase is observed only in the upstream gap .the study suggests that contaminants produced in the system stop in the upstream gap and affect its noise behaviour , leaving the downstream gap undisturbed .the presence of an excess production of k and ca in coincidence with the currents increase suggests a damaging effect of hf produced in the system on the framework of zeolites which is based on k and ca .further studies are in progress to fully characterize the system over the four cycles from the physical and the chemical point of view .the main goal is to better schedule the operation and maintenance of filters for the cms experiment , where for a safe and reliable operation the filter regeneration is presently performed several times per week .a second run is being started with regenerated filter materials to measure their lifetime and confirm the observation of contaminants .finally , studies in high - radiation environment at the cern gamma irradiation facility are being planned . the technical support of the cern gas group is gratefully acknowledged .thanks are due to f. hahn for discussions , and to nadeesha m. wickramage , yasser assran for help in data taking shifts .this research was supported in part by the italian istituto nazionale di fisica nucleare and ministero dell istruzione , universit e ricerca .9 r. santonico and r. cardarelli , `` development of resistive plate counters , '' nucl .instrum .meth . * 187 * ( 1981 ) 377 .cms collaboration , `` the cms experiment at the cern lhc '' , jinst * 3 * ( 2008 ) s08004 .m. bosteels et al ., `` cms gas system proposal '' , cms note 1999/018 . l. besset et al . , `` experimental tests with a standard closed loop gas circulation system '' , cms note 2000/040 .m. abbrescia _ et al ._ , `` proposal for a systematic study of the cern closed loop gas system used by the rpc muon detectors in cms '' , frascati preprint lnf-06/27(ir ) , available at http://www.lnf.infn.it/sis/preprint/. g. saviano _ et al ._ , `` materials studies for the rpc detector in cms '' , presented at the rpc07 conference , mumbai ( india ) , january 2008 . 
s.bianco _, `` chemical analyses of materials used in the cms rpc muon detector '' , cms note 2010/006 . manufactured by zeochem , 8708 uetikon ( switzerland ) .basf technical bulletin .leuna data sheet september 9 , 2003 , catalyst kl6526-t .grace davison molecular sieves data sheet .linde technical bullettin .l. benussi _ et al ._ , `` sensitivity and environmental response of the cms rpc gas gain monitoring system , '' jinst * 4 * ( 2009 ) doi:10.1088 1748 - 0221 4 08 p08006 [ arxiv:0812.1710 [ physics.ins-det ] ]. m. abbrescia _ et al ._ , `` hf production in cms - resistive plate chambers , '' nucl .* 158 * ( 2006 ) 30 .nuphz,158,30 ; g. aielli _ et al ._ , `` fluoride production in rpcs operated with f - compound gases '' , 8th workshop on resistive plate chambers and related detectors , seoul , korea , 10 - 12 oct 2005 . published in nucl.phys.proc.suppl .* 158 * ( 2006 ) 143 . | the cms rpc muon detector utilizes a gas recirculation system called closed loop ( cl ) to cope with large gas mixture volumes and costs . a systematic study of cl gas purifiers has been carried out over 400 days between july 2008 and august 2009 at cern in a low - radiation test area , with the use of rpc chambers with currents monitoring , and gas analysis sampling points . the study aimed to fully clarify the presence of pollutants , the chemistry of purifiers used in the cl , and the regeneration procedure . preliminary results on contaminants release and purifier characterization are reported . rpc , cms , gas , purifier detectors hep muon |
the mechanical response of most cells arises from the mechanics of its cytoskeleton , a polymeric scaffold that spans the interior of these cells , and its interaction with the extra - cellular environment .the cytoskeleton is made up of complex assemblies of protein filaments crosslinked and bundled together by a variety of accessory proteins .for example , there are approximately 23 distinct classes of accessory proteins such as fascin , -actinin , and filamin a that crosslink filamentous - actin ( f - actin ) , a major component of the cytoskeleton that is resposible for the mechanical integrity and motility of cells .given the multitude of crosslinkers , several natural questions arise : are the different types of crosslinkers redundant , or do they each serve specific functions ?do they act independently or cooperatively ?what are the consequences of their mechanics for the mechanical integrity and response of the cell ?a mutation study of _ dictyostelium discoideum _ cells lacking a particular actin crosslinking can still grow , locomote , and develop , though with some defects , thereby suggesting at least partial redundancy in the crosslinker s mechanical function . on the other hand , two types of crosslinkers working cooperativelymay produce enhanced mechanical response .this cooperativity has been demonstrated in stress fibers crosslinked with the actin binding proteins ( abp ) -actinin and fascin , where stress fibers containing both -actinin and fascin were more mechanically stable than stress fibers containing only -actinin or fascin .in addition , it has been found that two different crosslinkers are required for actin bundle formation _ in vivo _ . it could also be the case that different crosslinkers work independently of one another such that the dominant crosslinker dictates the mechanical response of the network . given these various possibilities , how the cell uses different crosslinking proteins to optimize for certain mechanical characteristics is an important open issue in cytoskeletal mechanics . here, we address this redundancy versus cooperativity issue by studying a model network of semiflexible filaments crosslinked with two types of crosslinkers .we first study the mechanical properties of the model network with one type of crosslinker and then add the second type of crosslinker and look for mechanical similarities and differences with the original model network .in addition , we also address the redundancy versus cooperativity issue of two types of crosslinkers for networks made of flexible filaments . as for the two types of crosslinkers ,we consider crosslinkers that allow the crossing filaments to rotate freely ( freely - rotating crosslinks ) and crosslinkers that constrain the angle between two filaments .the abp -actinin is a candidate for the former type of crosslinking mechanics : optical trapping studies demonstrate that two filaments bound by -actinin can rotate easily . as an example of the latter , we consider filamin a ( flna ) , which binds two actin filaments at a reasonably regular angle of ninety degrees , suggesting that flna constrains the angular degrees of freedom between two filaments . 
here, we do not take into account the possible unfolding of flna since the energy to unfold filamin a is large , nor do we take into account the kinetics of flna since we seek to understand fully the mechanics in the static regime first .there exist other possible examples of angle - constraining crosslinkers such as arp2/3 that serves a dual role as an f - actin nucleator and a crosslinker .while its role as a nucleator has been emphasized in lamellipodia formation , its role constraining the angle between the mother and daughter filaments is presumably also important for lamellipodia mechanics .better understanding of the mechanical role of arp2/3 in lamellipodia may also help to distinguish between the dendritic nucleation model for lamellipodia formation and a new model wherearp2/3 only nucleates new filaments but does not produce branches .in studying the mechanical properties of compositely crosslinked filamentous networks , we focus on the onset of mechanical rigidity as the filament concentration is increased above some critical threshold .this onset is otherwise known as rigidity percolation . above this critical threshold ,both experiments and theoretical studies of f - actin networks have observed distinct mechanical regimes . for dense , stiff networksthe mechanical response is uniform or affine and the strain energy is stored predominantly in filament stretching modes . while for sparse , floppy networks one finds a non - affine response dominated by filament bendingwhere the observed mechanical response of the network is inhomogeneous and highly sensitive to the lengthscale being probed .it has been recently reported that there exists a _bend - stretch _ coupled regime for intermediate crosslinking densities and filament stiffnesses .while considerable progress has been made in understanding the mechanics of cytoskeletal networks that are crosslinked by one type of crosslinkers , compositely crosslinked networks are only beginning to be explored experimentally as are composite filament networks with one type of crosslinker theoretically . herewe investigate the mechanics of such networks as a function of the concentration and elasticity of the crosslinkers and the filaments .we arrange infinitely long filaments in the plane of a two - dimensional triangular lattice .the filaments are given an extensional spring constant , and a filament bending modulus .we introduce finite filament length into the system by cutting bonds with probability , where , with no spatial correlations between these cutting points .the cutting generates a disordered network with a broad distribution of filament lengths .when two filaments intersect , there exists a freely - rotating crosslink preventing the two filaments from sliding with respect to one another .next , we introduce angular springs with strength between filaments crossing at angles with a probability , where denotes non - collinear .these angular springs model the second type of crosslinker .see fig.[fig0 ] for a schematic .we study the mechanical response of this disordered network under an externally applied strain in the linear response regime . for simplicity we set the rest length of the bonds to unity .let be the unit vector along bonds and the strain on the bond . for small deformation ,the deformation energy is [ energies ] e & = & _ ij p_ij ( _ ij . 
_ij ) ^2 + _= p_ij p_jk ( ( _ ji + _ jk ) _ ji ) ^2 + & + & _p_ij p_jk p_nc ^2 where is the probability that a bond is occupied , represents sum over all bonds and represents sum over pairs of bonds sharing a node .the first term in the deformation energy corresponds to the cost of extension or compression of the bonds , the second term to the penalty for the bending of filament segments made of pairs of adjacent collinear bonds , and the last term to the energy cost of change in the angles between crossing filaments that meet at angle .furthermore , for small deformations .it is straightforward to see that the angular spring between and will contribute to an effective spring in parallel with , giving rise to an enhanced effective spring constant .we study the effective medium mechanical response for such disordered networks following the mean field theory developed in for central force networks and for filament bending networks .the aim of the theory is to construct an effective medium , or ordered network , that has the same mechanical response to a given deformation field as the depleted network under consideration .the effective elastic constants are determined by requiring that strain fluctuations produced in the original , ordered network by randomly cutting filaments and removing angular springs vanish when averaged over the entire network .let us consider an ordered network with each bond having a spring constant , a filament bending constant for adjacent collinear bond pairs , and an angular bending constant between bonds making angles . under small applied strain ,the filament stretching and filament bending modes are orthogonal , with stretching forces contributing only to deformations along filaments ( ) and bending forces contributing only to deformations perpendicular to filaments ( ) , and hence we can treat them separately . the angular forces due to the angular ( non - collinear ) springs , when present , contribute to stretching of filaments as discussed earlier , where we only consider three body interactions . for these springs to contribute to bending one needs to consider four - body interactions which is outside the scope of this paper and will be addressed in future work .we start with the deformed network and replace a pair of adjacent collinear bonds with bending rigidity by one with a rigidity , and a bond spring with extensional elastic constant by a spring with an elastic constant and the facing angular spring by .this will lead to additional deformation of the above filament segments and the angle which we calculate as follows .the virtual force that needs to be applied to restore the nodes to their original positions before the replacement of the bonds will have a stretching , a bending and an angular contribution : , , and .the virtual stretching force is given by , the virtual filament bending force is , while the virtual force to restore the angle is , where , and are the corresponding deformations in the ordered network under the applied deformation field . 
by the superposition principle , the strain fluctuations introduced by replacing the above bending hinges and bonds in the strained network are the same as the extra deformations that result when we apply the above virtual forces on respective hinges and segments in the unstrained network .the components of this `` fluctuation '' are , therefore , given by : d _ & = & + d _ & = & + d&= & the effective medium spring and bending constants , , and , respectively , can be calculated by demanding that the disordered - averaged deformations , , and vanish , i.e. , , and . to perform the disorder averaging , since the stretching of filaments is defined in terms of spring elasticity of single bonds , the disorder in filament stretchingis given by .filament bending , however , is defined on pairs of adjacent collinear bonds with the normalized probability distribution .similarly , for the angular springs , the normalized probability distribution is given by .this disorder averaging gives the effective medium elastic constants as a function of and as the constants , and for the network contribution to the effective spring constant of bonds , to the filament bending rigidity , and the bending rigidity of angular springs making angles respectively , are given by $ ] .the sum is over the first brillouin zone and is the coordination number .the stretching , filament bending and non - collinear bending contributions , respectively , to the full dynamical matrix , are given by : \rbold_{ij } \rbold_{ij } \nonumber \\ { \dbold_b}(q ) & = & \kappa_m \sum_{\langle ij \rangle } \left [ 4(1 - \cos(\qbold.\rbold_{ij } ) ) \right .\nonumber \\ & & \left . - ( 1 - \cos(2 \qbold.\rbold_{ij } ) ) \right ] \left(\itens-\rbold_{ij } \rbold_{ij } \right ) \nonumber \\ \dbold_{nc } ( q ) & = & \frac{3}{2 } \kappa_{nc , m } \sum \left [ 2 ( 1 - \cos(\qbold.\rbold_{ij } ) ) + 2 ( 1 - \cos(\qbold.\rbold_{ik } ) ) \right .\nonumber \\ & & \left .- 2 ( 1 - \cos(\qbold.\rbold_{jk } ) ) \right ] \rbold_{ij } \rbold_{ik}\label{dnc}\end{aligned}\ ] ] with the unit tensor and the sums are over nearest neighbors .note that for small , and have the expected wavenumber dependencies for bending and stretching . by definition , , where is the dimensionality of the system . at the rigidity percolationthreshold , , and vanish , giving , and . for semiflexible filament networks with only freely - rotating crosslinks i.e. filament stretching and bending interactionsonly , the rigidity percolation threshold is given by . for networks with angle - constraining crosslinks , at , we obtain rigidity percolation thresholds for the case of flexible filament networks , and for semiflexible filament networks .we also calculate how changes on continuously increasing from to .simulations were carried out on a triangular lattice with half periodic boundary conditions along the shear direction for the energetic terms whose small deformation limit is given in eq . 
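The symbols in the energy expression labelled [ energies ] above did not survive, so the following is a hedged reconstruction of its likely form, assuming the stretching constant is written as \alpha, the filament bending constant as \kappa, the angular (non-collinear) crosslink constant as \kappa_{nc}, the relative bond displacement as \mathbf{u}_{ij} and the change of the crossing angle as \delta\theta. Whether the bending term is written with a cross product or with a perpendicular projection cannot be recovered from the garbled line; both notations penalize non-collinearity of adjacent bond pairs.

```latex
% Hedged reconstruction of the small-deformation energy (symbol names assumed, see note above)
\begin{align}
E \;=\; \frac{\alpha}{2}\sum_{\langle ij\rangle} p_{ij}\,
        \bigl(\mathbf{u}_{ij}\cdot\hat{\mathbf{r}}_{ij}\bigr)^{2}
 \;+\; \frac{\kappa}{2}\sum_{\langle ijk\rangle_{\parallel}} p_{ij}\,p_{jk}\,
        \bigl[(\mathbf{u}_{ji}+\mathbf{u}_{jk})\times\hat{\mathbf{r}}_{ji}\bigr]^{2}
 \;+\; \frac{\kappa_{nc}}{2}\sum_{\langle ijk\rangle_{nc}} p_{ij}\,p_{jk}\,p_{nc}\,
        \bigl(\delta\theta_{ijk}\bigr)^{2},
\end{align}
% with the effective-medium constants \alpha_m, \kappa_m, \kappa_{nc,m} fixed, as described
% in the text, by the vanishing of the disorder-averaged fluctuations:
\begin{equation}
\langle \delta u_{\parallel}\rangle \;=\; \langle \delta u_{\perp}\rangle \;=\;
\langle \delta\theta\rangle \;=\; 0 .
\end{equation}
```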
.networks were constructed by adding bonds between lattice sites with probability .next , a shear deformation was applied to the two fixed boundaries of magnitude .the lattice was then relaxed by minimizing its energy using the conjugate gradient method allowing the deformation to propagate into the bulk of the lattice .once the minimized energetic state was found within the tolerance specified , in this case the square root of the machine precision , the shear modulus was then measured using the relation , , using small strains , with denoting the system length and denoting the area of the unit cell for a triangular lattice which is equal to in our units .system size was studied , unless otherwise specified , and sufficient averaging was performed .* mechanical integrity as measured by the shear modulus : * on a triangular lattice , networks made solely of hookean springs lose rigidity at a bond occupation probability around .this result corresponds to the central force isostatic point at which the number of constraints is equal to the number of degrees of freedom on average .in contrast , networks made of semiflexible filaments become rigid at a smaller due to extra constraints placed on the system via filament bending . for semiflexible networks with freely - rotating crosslinks , our effective medium theory shows that the shear modulus , , approaches zero at as shown in fig.[fig1 ] .this result is in good agreement with our simulation results yielding and previous numerical results .see fig.[fig1 ] . a different formulation of the emt yields .by introducing additional crosslinks that constrain angles between filaments at , the rigidity percolation threshold is lowered .our emt yields and our simulations yield for ( fig.[fig1 ] and ) .the cooperative mechanical interplay between these crosslinks and their interaction with filaments allows the network to form a rigid stress - bearing structure at remarkably low crosslinking densities , almost immediately after it attains geometric percolation , , which agrees with a calculation by kantor and webman . for flexible filament networks , introducing angle - constraining crosslinkers also lowers the rigidity percolation threshold as compared to the isostatic point with the network attaining rigidity at for our emt and in the simulations ( ( fig.[fig1 ] and ) .incidentally , our result agrees very well with a previous simulation .we also compute analytically and numerically how changes with .see fig.[fig2] , and .note that is lowered continuously as the concentration of angle - constraining crosslinks is increased . just above the rigidity percolation threshold ,for a semiflexible network with freely - rotating crosslinks , we find a bending - dominated regime for sparse networks with the shear modulus eventually crossing over to a stretch dominated affine regime at higher filament densities .the purely stretch dominated regime is represented by the macroscopic shear modulus staying almost constant with increasing , while in the purely bend dominated regime the network is highly floppy and is a sensitive function of , decreasing rapidly as is lowered . this behavior has been observed previously in . 
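A minimal sketch of the numerical procedure described above is given below. It keeps only the central-force (stretching) term on a small bond-diluted triangular lattice, clamps the top and bottom rows at the affine shear, relaxes the interior with scipy's conjugate-gradient minimizer, and reads off the shear modulus from the relaxed energy as G ≈ 2E/(γ²A) with A the sheared area. The lattice size, dilution p, spring constant and strain are placeholders, the bending and angular terms of the full model are omitted, and the area bookkeeping (total sheared area here versus unit-cell area times L² in the text) is matched only up to convention.

```python
# Hedged sketch: linear shear response of a bond-diluted triangular spring network.
# Only the stretching term is kept; all parameter values below are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nx, ny = 10, 8                        # nodes per row, number of rows (tiny demo lattice)
p, alpha, gamma = 0.9, 1.0, 0.01      # bond occupation, spring constant, shear strain

idx = lambda i, j: j * nx + i         # flatten (column, row) -> node index
pos = np.array([[i + 0.5 * (j % 2), j * np.sqrt(3) / 2]
                for j in range(ny) for i in range(nx)])

bonds = []                            # diluted nearest-neighbour bond list
for j in range(ny):
    for i in range(nx):
        nbrs = [(i + 1, j)]           # horizontal neighbour
        if j + 1 < ny:                # the two upward neighbours (row-parity dependent)
            nbrs += [(i, j + 1), (i - 1, j + 1)] if j % 2 == 0 else [(i, j + 1), (i + 1, j + 1)]
        for (k, l) in nbrs:
            if 0 <= k < nx and rng.random() < p:
                bonds.append((idx(i, j), idx(k, l)))
bonds = np.array(bonds)
rhat = pos[bonds[:, 1]] - pos[bonds[:, 0]]
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)

height = pos[:, 1].max()
fixed = (pos[:, 1] == 0) | (pos[:, 1] == height)                    # clamp bottom and top rows
u_aff = np.column_stack([gamma * pos[:, 1], np.zeros(len(pos))])    # affine shear field

def energy(u_free):
    u = u_aff.copy()
    u[~fixed] = u_free.reshape(-1, 2)            # boundary rows keep the affine shear
    du = u[bonds[:, 1]] - u[bonds[:, 0]]
    return 0.5 * alpha * np.sum(np.einsum("ij,ij->i", du, rhat) ** 2)

res = minimize(energy, u_aff[~fixed].ravel(), method="CG")
area = (nx - 1) * height                         # sheared area (open x boundaries)
G = 2.0 * res.fun / (gamma ** 2 * area)
print(f"relaxed energy = {res.fun:.4e},  shear modulus G ~ {G:.4f} (units of alpha)")
```

Lowering p toward the central-force threshold, or adding bending and angular terms to the energy function, would reproduce the crossover between the mechanical regimes discussed in the text.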
for , both the effective medium theory andthe simulations yield a bend - stretch coupled regime , which is characterized by an inflection in as a function of as observed most clearly for ( with ) .we find a similar non - affine to affine crossover for the compositely crosslinked flexible filament networks and semflexible filament networks as is increased .for the flexible filament networks , however , the bend - stretch coupling regime occurs for , i.e. replaces . for semiflexible filament networks , as long as , the bend - stretch coupled regime is robust ( for fixed ) .in contrast , for , the angle - constraining crosslinker suppresses the bend - stretch coupled regime and enhances the shear modulus to that of an affinely deforming network ( for fixed ) .the mechanics of the network has been altered with the introduction of the second type of crosslinker .* non - affinity parameter : * to further investigate how the interaction of the crosslinkers affects the affine and non - affine mechanical regimes , we numerically study a measure for the degree of non - affinity in the mechanical response , , defined in ref. as : = _ i^n(_i-_aff)^2 .[ na ] the non - affinity parameter can be interpreted as a measure of the proximity to criticality , diverging at a critical point as we approach infinite system size .we find that develops a peak at the rigidity percolation threshold , which progressively moves to smaller values of as the concentration of angular crosslinkers is increased ( fig.[fig3 ] ) .a second peak develops near the isostatic point for as seen in fig.[fig3 ] . as both the collinear and non - collinear bending stiffnesses tend to zero , the network mechanics approaches that of a central force network , and the second peak in at the isostatic point becomes increasingly more pronounced . on the other hand , this second peak can be suppressed by increasing ( fig.[fig3 ] ) , or by increasing the concentration ( fig.[fig3 ] ) even for very small values of .this further corroborates that adding angle - constraining crosslinkers to non - affine networks can suppress non - affine fluctuations , provided they energetically dominate over filament bending .the reason for this suppression can be understood by considering the effect of adding a constraint which prohibits the free rotation of crossing filaments .as the concentration of these non - collinear crosslinks is increased ( at fixed avg .filament length ) microscopic deformations will become correlated .the lengthscale associated with this correlation will increase on increasing either or , and will eventually reach a lengthscale comparable to system size even at at large enough concentration and/or stiffness of the angular springs . as a result the mechanical response of the network will approach that of an affinely deforming network . upon decreasing the value of relativeto we again recover the second peak because energetically the system can afford to bend collectively near the isostatic point .* scaling near the isostatic point : * finally , using scaling analysis we quantify the similarity in mechanics between freely - rotating crosslinked semiflexible networks and compositely crosslinked flexible networks . to do this , we examine the scaling of the shear modulus near the isostatic point with . for ( or ) , the shear modulus scales as ( or ) . for both , and , ,the emt predicts and as shown in fig.[fig4](a ) and ( b ) , indicating that both types of networks demonstrate redundant , or generic , mechanics . 
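The defining formula for the non-affinity measure in eq. [ na ] above was garbled. The small helper below assumes the common form Γ = (1/N) Σ_i |u_i − u_i^aff|², with an optional extra 1/γ² normalization; which prefactor the original uses cannot be recovered from the broken line, so treat the normalization as an assumption.

```python
# Hedged helper for the non-affinity parameter Gamma = (1/N) * sum_i |u_i - u_i_affine|^2.
# The optional 1/gamma^2 normalization is an assumption (see the note above).
import numpy as np

def non_affinity(u, positions, gamma, normalize_by_strain=False):
    """u: (N,2) relaxed displacements; positions: (N,2) reference node positions."""
    u_affine = np.column_stack([gamma * positions[:, 1], np.zeros(len(positions))])
    Gamma = np.mean(np.sum((u - u_affine) ** 2, axis=1))
    return Gamma / gamma**2 if normalize_by_strain else Gamma
```

Applied to the relaxed displacement field of the lattice sketch above, this quantity develops the peaks near the rigidity and isostatic thresholds described in the text.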
to compare the emt results with the simulations , we use the position in the second peak in to determine the central force percolation threshold , , and then vary and to obtain the best scaling collapse .for case ( a ) , , and . for case( b ) , , and .both sets of exponents are reasonably consistent with those found in ref . for a semiflexible network with freely - rotating crosslinks only .preliminary simulations for compositely crosslinked semiflexible networks indicate that the shear modulus scales as also with a similar and a similar with .in the limit of small strain , we conclude that the presence of multiple crosslinkers in living cells can be simultaneously cooperative and redundant in response to mechanical cues , with important implications for cell mechanics .redundant functionality helps the cytokeleton be robust to a wide range of mechanical cues . on the other hand , different crosslinkerscan also act cooperatively allowing the system to vary the critical filament concentration above which the cytoskeleton can transmit mechanical forces .this may enable the cytoskeleton to easily remodel in response to mechanical cues via the binding / unbinding of crosslinkers ( tuning concentration ) or their folding / unfolding ( tuning stiffness and type of crosslinker ) . since the cytoskeleton consists of a finite amount of material , the ability to alter mechanics without introducing major morphological changes or motifs may play important role in processes such as cell motility and shape change . * cooperativity : * in our study of two types of crosslinkers , crosslinkers that allow free rotations of filaments and crosslinkers that do not , we find two types of cooperative effects in the mechanics of such compositely crosslinked networks .the first cooperative effect depends on the relative concentration of the two types of crosslinkers and second depends on the relative stiffness of the angle - constraining crosslinkers to the bending stiffness of the individual filaments .the first cooperative effect can be most strikingly observed beginning with an actin/-actinin network and increasing the concentration of flna , with -actinin representing the freely - rotating crosslinker and flna representing the angle - constraining crosslinker . by tuning the concentration of flna, the cell can modulate the minimum concentration of actin filaments necessary to attain mechanical rigidity , which can be essentially as low as the filament concentration required to form a geometrically percolating structure .this is in good agreement with the experimental observation that flna creates an f - actin network at filament concentrations lower than any other known crosslinker . 
increasing the flna concentrationalso suppresses the non - affine fluctuations near the rigidity percolation threshold by increasing the shear modulus of the network and giving rise to a more affine mechanical response while keeping the filament concentration fixed .moreover , the cooperativity of -actinin and flna working to ehance the mechanical stiffness of actin networks has recently been observed in experiments .the addition of angle - constraining crosslinkers to flexible filament networks also decreases the concentration threshold required for mechanical rigidity , though the lower bound on the threshold is not as close as to geometric percolation as it is for semiflexible filaments .the lowering of the rigidity percolation threshold is independent of the energy scale of the crosslinker .it depends purely on the number of degrees of freedom the crosslinker can freeze out between two filaments , i.e. the structure of the crosslinker .the second cooperative interplay between the two crosslinkers depends on the energy scale of the angle - constraining crosslinker to the filament bending energy . for ,the freely - rotating semiflexible filament system exhibits large non - affine fluctuations near the isostatic point .upon addition of the angle - constraining crosslinkers , for , the non - affine fluctuations near this point become suppressed and the mechanics of the angle - constraining crosslinker dominates the system . once again , with a small change in concentration of the second crosslinker , the mechanical response of the network is changed dramatically .* redundancy : * we observe two redundant effects in these compositely crosslinked networks , the first of which depends on energy scales . for with ,the non - affine fluctuations near the isostatic point in the freely - rotating crosslinker semiflexible filament network remain large even with the addition of the angle - constraining crosslinker . 
in other words ,the angle - constraining crosslinkers are redundant near the isostatic point .their purpose is to decrease the amount of material needed for mechanical rigidity as opposed to alter mechanical properties at higher filament concentrations .redundancy is also evident in the mechanics of these networks sharing some important , generic properties .all three networks studied here ( free - rotating crosslinked semibflexible networks and compositely crosslinked semiflexible and flexible networks ) have three distinct mechanical regimes : a regime dominated by the stretching elasticity of filaments , a regime dominated by the bending elasticity of filaments and/or stiffness of angle - constraining crosslinkers , and an intermediate regime which depends on the interplay between these interactions .the extent of these regimes can be controlled by tuning the relative strength of the above mechanical interactions .in particular , the ratio of bending rigidity to extensional modulus of an individual actin filament is .since the bend - stretch coupled regime has not been observed in prior experiments on _ in - vitro _ actin networks crosslinked with flna only , we conjecture that the energy cost of deformation of angles between filaments crosslinked with flna is larger than the bending energy of filaments .the qualitative redundancy becomes quantitative , for example , near the isostatic point where we obtain the same scaling exponents for as a function of and (or ) for the free - rotating crosslinked semiflexible network and the compositely crosslinked flexible network .preliminary data suggests the same scaling extends to compositely crosslinked semiflexible networks .this result is an indication of the robustness of these networks and should not be considered as a weakness . whether or not this robustness extends to systems experiencing higher strains such that nonlinearities emerge is not yet known . * lamellipodia mechanics : * the interplay between cooperative and redundant mechanical properties may be particularly important for the mechanics of branched f - actin networks in lamellipodia . within lamellipodia, there exist some filament branches occuring at an angle of around with respect to the plus end of the mother filament ( referred to as junctions ) .these branches are due to the abp arp2/3 . during lamellipodia formation, these branches are presumed to be the dominant channel for filament nucleation .the mechanics of arp2/3 can be modeled as an angular spring between the mother and daughter filament with an angular spring constant of approximately . 
in other words, arp2/3 is an angle - constraining crosslinker for ( as opposed to ) , and thereby plays an important role in lamellipodia mechanics as demonstrated in this work .the mechanical role of arp2/3 in lamellipodia has not been investigated previously and may help to discriminate between the dendritic nucleation model and a new model by predicting the force transmitted in lamellipodia as a function of the arp2/3 concentration .in addition to arp2/3 , flna localizes at in the lamellipodia and is thought to stabilize the dendritic network .both angle - constraining crosslinkers lower the filament concentration threshold required for mechanical rigidity in the system .depending on the energy scale of flna as compared to the energy scale of arp2/3 , addition of the flna may or may not modulate , for example , the bend - stretch coupling regime at intermediate filament concentrations .again , at times mechanical redundancy is needed and at times not . with three crosslinkers ,the system can maximize the redundancy and the cooperativity .of course , lamellipodia are dynamic in nature and are anisotropic since the arp2/3 is activated from the leading edge of a cell .both attributes will modulate the mechanical response .* outlook : * we have demonstrated both cooperativity and redundancy in the mechanics of compositely crosslinked filamentous networks .we have done so while maintaining the structure of an isotropic , unbundled filament network .of course , crosslinkers can alter the morphology of the network via bundling , for example .in other words , different crosslinkers serve specific functions .this specificity results in a change in microstructure .this will presumably affect the mechanics such that the cooperative and redundant interactions between multiple crosslinkers may differ from the above analysis .for example , the crosslinker that dominates in terms of creating the morphology will presumably dominate the mechanics .schmoller and collaborators suggest that crosslinker with the higher concentration determines the structure and , therefore , the mechanics . instead of redundancy or cooperativity, the specificity leads to the simple additivity of two types of crosslinkers in that different crosslinkers act independently of one another . in this study, however , we find both cooperativity and redundancy in the network mechanics even in the absence of such structural changes , which , is arguably less intuitive and , therefore , more remarkable . finally ,while our focus here has been on the actin cytoskeleton as an example of a filamentous network , our results can be extended to collagen networks as well .daq would like to thank silke henkes and xavier illa for useful discussions regarding lattice simulations .md would like to thank alex j. levine , f. c. mackintosh , c. broedersz , t.c .lubensky , c. heussinger and a zippelius for discussions on the mechanics of semiflexible networks .md and jms also acknowledge the hospitality of the aspen center for physics where some of the early discussions took place .jms is supported by nsf - dmr-0654373 .md is supported by a veni fellowship from nwo , the netherlands .rivero f , furukawa r , fechheimer m , noegel aa ( 1999 ) three actin cross - linking proteins , the 34 kda actin - bundling protein , alpha - actinin and gelation factor ( abp-120 ) have both unique and redundant roles in the growth and development of _ dictyostelium_. _j. 
cell sci ._ 112:2737 - 2751 .tseng y , kole tp , lee jsh , fedorov e , almo sc , schafer bw , wirtz d ( 2005 ) how actin crosslinking and bundling proteins cooperate to generate an enhanced mechanical response . __ 334 : 183 - 192 .tilney lg , connelly ka , vranich mk , shaw mk , guild gm ( 1998 ) why are two different cross - linkers necessary for actin bundle formation in vivo and what does each cross - link contribute ?_ j. cell biol ._ 143 : 121 - 133 .nakamura f , osborn tm , hartemink ca , hartwig jh , stossel tp ( 2007 ) structural basis of filamin a functions _ j. cell biol _ 179 : 1011 - 1025 ; stossel tp , condeelis j , cooley l , hartwig jh , noegel a , schleicher m , shapiro ss ( 2001 ) filamins as integrators of cell mechanics and signalling _ nat rev mol cell biol _ 2:138 - 45 .gardel ml , nakamura f , hartwig jh , stossel tp , weitz da ( 2006 ) prestressed f - actin networks cross - linked by hinged filamins replicate mechanical properties of cells __ 103:1762 - 1767 .blanchoin l , amann kj , higgs hn , marchand j - b , kaiser da , pollard td ( 2000 ) direct observation of dendritic actin filament networks nucleated by arp2/3 complex and wasp / scar proteins _ nature _ 404 : 1007 - 1011 .latva - kokko m , timonen j ( 2001 ) rigidity of random networks of stiff fibers in the low - density limit _ phys .64:066117 , 5 pgs ; latva - kokko m , mkinen j , timonen j , rigidity transition in two - dimensional random fiber networks _ phys .63:046113 , 10 pages . head da , levine aj , mackintosh fc ( 2003 ) deformation of cross - linked semiflexible polymer networks _ phys .lett . _ 91:108102 , 4 pgs . ; head da , mackintosh fc , levine aj ( 2003 ) distinct regimes of elastic response and deformation modes of cross - linked cytoskeletal and semiflexible polymer networks _ phys .68 : 061907 , 15 pgs . ;head , f.c .mackintosh and a.j .levine ( 2003 ) nonuniversality of elastic exponents in random bond - bending networks _ phys .68:025101(r ) , 4 pgs .gardel ml , shin jh , mackintosh fc , mahadevan l , matsudaira pa , weitz da scaling of f - actin network rheology to probe single filament elasticity and dynamics ( 2004 ) _ phys ._ 93 : 188102 , 4 pgs . ; gardel ml , shin jh , mackintosh fc , mahadevan l , matsudaira pa , weitz da , elastic behavior of cross - linked and bundled actin networks _ science _ 304:1301 - 1305 .wagner b , tharmann r , haase i , fischer m , bausch a r ( 2006 ) cytoskeletal polymer networks : the molecular structure of cross - linkers determines macroscopic properties _ proc ._ 103:13974 - 13978 . , and angle - constraining crosslinker occupation probability the purple lines denote semiflexible filaments , the red arcs denote angle - constraining crosslinks , the black circles represent nodes where all crossing filaments are free to rotate , while the grey circles denote nodes where some of the crossing filaments are free to rotate . the filament bending stiffness relative to stretching stiffness and the stiffness of angular crosslinks relative to stretching stiffness . ] for semiflexible networks with freely - rotating crosslinks ( ( a ) and ( d ) ), flexible networks with freely - rotating and angle - constraning crosslinks ( ( b ) and ( e ) ) , and semiflexible networks with both crosslinkers ( ( c ) and ( f)).the top panels show results from the effective medium theory and bottom panels show results from the simulations.,scaledwidth=100.0% ] for the semiflexible filaments and for the flexible filaments . 
figures ( b ) and ( c ) show the shear modulus ( in logarithmic scale described by the colorbar ) as a function of and for flexible networks ( b ) and semiflexible networks ( c ) .the parameter values studied are ( b ) and ( c ) , .the black dashed lines in ( b ) and ( c ) correspond to the effective medium theory prediction of the rigidity percolation threshold ., scaledwidth=100.0% ] as a function of for semiflexible networks with both types of crosslinkers . in ( a )we show the effect of changing the concentration of the angle - constraining crosslinkers for , and , while in ( b ) we show the effect of changing their stiffness for . ]scales with and ( ) as .the effective medium theory predicts mean field exponents and for both semiflexible networks with freely - rotating crosslinkers ( a ) and compositely crosslinked flexible networks ( b ) , while simulations predict and for semiflexible networks with freely - rotating crosslinkers ( c ) and and for compositely crosslinked flexible networks ( d ) . ] | the actin cytoskeleton in living cells has many types of crosslinkers . the mechanical interplay between these different crosslinker types is an open issue in cytoskeletal mechanics . we develop a framework to study the cooperativity and redundancy in the mechanics of filamentous networks with two types of crosslinkers : crosslinkers that allow free rotations of filaments and crosslinkers that do not . the framework consists of numerical simulations and an effective medium theory on a percolating triangular lattice . we find that the introduction of angle - constraining crosslinkers significantly lowers the filament concentrations required for these networks to attain mechanical integrity . this cooperative effect also enhances the stiffness of the network and suppresses non - affine deformations at a fixed filament concentration . we further find that semiflexible networks with only freely - rotating crosslinks are mechanically very similar to compositely crosslinked flexible networks with both networks exhibiting the same scaling behavior . we show that the network mechanics can either be redundant or cooperative depending on the relative energy scale of filament bending to the energy stored in the angle - constraining crosslinkers , and the relative concentration of crosslinkers . our results may have implications for understanding the role of multiple crosslinkers even in a system without bundle formation or other structural motifs . |
[ cols=">,^,^,^",options="header " , ] [ tab : predictions ] full mars tracking and showering monte carlo simulations were conducted for 6 gev and 24 gev protons incident on the target , returning predictions for the pion yield and energy deposition densities .the detailed level of the mars simulations is illustrated by figure [ marsgeom ] , using the example of several 24 gev proton interactions in an inconel band .figure [ phadron ] shows the corresponding yield and momentum spectra for all hadrons ; figure [ ppion ] gives more detailed information for the pions .several scatter plots to illustrate the distribution in phase space of the produced pions are displayed in figure [ scatter ] .the plots are seen to be relatively symmetric in the x and y coordinates , which indicates that any asymmetries due to the band tilt and elliptical beam spot are largely washed out by the large phase space volume occupied by the produced pions .the yield per proton for positive and negative pions - plus - kaons - plus - muons at 70 cm downstream from the central intersection of the beam with the target was predicted for the kinetic energy range 32 mev that approximates the capture acceptance of the entire cooling channel .note that the material in the flanges of the i - beam for the inconel and nickel targets was not included in the calculation ; its inclusion might result in a small change in the predicted yield .table [ tab : predictions ] summarizes the yield and energy deposition results from the mars calculations .it includes the several rows of derived results that assume the scenario , taken from section [ sec : protons ] , of captured pions .these derived quantities are identified with a superscript `` 3.2 '' and include : the required number of protons per pulse , , the required total proton pulse energy , , the maximum localized energy deposition in the target material and corresponding temperature rise , and .approximately 7% of the proton beam energy is deposited in the target .detailed 3-dimensional maps of energy deposition densities were generated for input to the dynamic target stress calculations that are discussed in the following section .24 gev protons .this is a smaller bunch charge than would be typical for muon colliders ; the distribution of stress values will scale in approximate proportion to the bunch charge unless the material s fatigue strength is exceeded ., height=240 ] 6 gev protons with transverse dimensions as given in table [ target_band_specs ] .the time origin corresponds to the arrival of the proton pulse .the stress values are shown for the position of maximum stress in all cases . ,width=336 ] , but for an incident bunch of 24 gev protons . 
, width=336 ] and [ nick3 ] , for 6 gev and 24 gev proton beams respectively , showing the close correspondence in the stress time development ., width=336 ] , for 24 gev protons on a nickel target , but extended to larger time values to show the dissipation of the shock stresses after multiple reflections from the band surfaces ., width=336 ] , for 24 gev protons on a nickel target but for both 50 ns and 100 ns time steps in the ansys simulation .the reasonable agreement between the two curves suggests that the normal 100 ns step size is adequately short for approximate stress predictions ., width=336 ] probably the most critical issue faced in solid - target design scenarios for pion production at neutrino factories or muon colliders is the survivability and long - term structural integrity of solid targets in the face of repeated shock heating . to investigate this , finite element computer simulations of the shock heating stresseshave been conducted using ansys , a commercial package that is widely used for stress and thermal calculations .the target band geometry was discretised into a 3-dimensional mesh containing approximately 30 000 elements .this was as fine as the computing capacity and memory allowed and was judged to be adequate for the accurate modeling of shock wave propagation .the ansys simulations conservatively assumed that the deposited energy is all converted to an instantaneous local temperature rise .the dynamic stress analyses were preceded by a transient thermal analysis to generate temperature profiles using as input the 3-dimensional energy deposition profiles previously generated by mars for the production assumption of total captured pions ( see the preceding section ) .dynamic stress calculations were then performed both for a `` free edge '' band , i.e. , with no i - beam flanges , and with a `` fixed edge '' constraint , in which the edges of the band are constrained against displacement in both the radial and axial direction .the `` free edge '' boundary condition is appropriate for the titanium alloy band ; the `` fixed edge '' model is considered likely to provide an improved approximation to the inconel and nickel bands with their i - beam flanges without requiring the extra computing capacity that would be needed to simulate the more complicated true geometry . the von mises stress (i.e. 
, the deviation from the hydrostatic state of stress ) was found to be initially zero but to develop and fluctuate over time as the directional stresses relax or are reflected from material boundaries .figure [ band_vonmises_initial ] gives an example snapshot of the predicted von mises stress distribution at one microsecond after the arrival of a proton pulse , and the remaining figures [ nick4 ] to [ nick6 ] show various aspects of the predicted stress at the position of maximum stress , respectively : the time development for 6 gev protons and for all three band material candidates ; the same for 24 gev protons ; superimposed plots for 6 gev and 24 gev protons and for the nickel band ; the stress development over a long enough time - span to see the attenuation of the stress levels ; and a check on the time step used in the ansys calculations .table [ tab : predictions ] summarizes the ansys predictions for the maximum stress created at any time and any position in each of the band materials , .these values were obtained by reading off from figures [ nick4 ] and [ nick3 ] and then scaling to the bunch charge for a total yield of captured pions .the final row of table [ tab : predictions ] displays the percentage of the fatigue strength ( from table [ band_materials ] ) that this represents . for the inconel band ,the calculated fraction of the fatigue strength that the band would be exposed to in this `` worst case '' proton bunch scenario , 53 - 69% , is either close to or slightly above what could be considered a safe operating margin for the target band .a more definitive determination of the proton beam parameters that allow survivability and adequate safety margins for this target scenario could be provided by data from the ongoing bnl e951 targetry experiment , with planned stress tests for bunched 24 gev proton beams incident on several types of targets , including inconel 718 .the inconel target may well be appropriate for some proton beam specifications at a muon collider , and it has already been shown to likely give a wide safety margin for the more relaxed beam parameters of neutrino factories . the titanium alloy was predicted to have a very conservative safety margin even for the assumed muon collider beam parameters : only 10 - 14% of the fatigue strength . although the yield is about 20% lower than for the other two candidate materials , target bands from titanium alloys look likely to survive with any proton bunch charges that might reasonably be contemplated for muon colliders .finally , nickel targets are known to evade the predictions for fatigue strength limits , as already mentioned .test beam experiments would be required to establish the suitability or otherwise of a nickel band production target for any particular muon collider scenario .all of the above calculations apply for a circumferentially continuous band .it remains to check the level of von mises stresses at the gaps between the eight welded band sections , although it is noted that the bnl g-2 target was deliberately segmented longitudinally in order to reduce the beam stresses .for rotating band targets in muon colliders , additional periodic slots in the webbing may also be considered for thermal stress relief and eddy current reduction in rotating band targets for muon colliders .in summary , the inconel rotating band target design appears to be a promising option for pion production targets at muon colliders . 
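For orientation, the chain of estimates behind the stress discussion above — adiabatic temperature jump from the deposited energy density, a fully constrained thermoelastic stress from that jump, and a von Mises equivalent stress compared against a fatigue strength — can be sketched in a few lines. The material constants below are nominal room-temperature handbook values, and the peak deposition density and fatigue strength are placeholders rather than the MARS, ANSYS or table values, so this is only an order-of-magnitude cross-check, not a substitute for the dynamic finite element analysis.

```python
# Hedged order-of-magnitude cross-check of the shock-stress chain:
#   energy deposition -> adiabatic dT -> constrained thermoelastic stress -> fatigue fraction.
# All numerical inputs are nominal or placeholder values, not the paper's results.
import numpy as np

def von_mises(sx, sy, sz, txy=0.0, tyz=0.0, tzx=0.0):
    """Standard von Mises equivalent stress from the six Cauchy stress components."""
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

print(von_mises(100.0, 0.0, 0.0))   # uniaxial 100 MPa -> 100 MPa equivalent stress

# (density [g/cm^3], c_p [J/(g K)], E [GPa], alpha_th [1/K], Poisson ratio) ~ nominal values
materials = {
    "Inconel 718":         (8.19, 0.435, 200.0, 13.0e-6, 0.29),
    "Ti-6Al-4V (grade 5)": (4.43, 0.526, 114.0,  8.6e-6, 0.34),
    "Nickel":              (8.90, 0.444, 200.0, 13.4e-6, 0.31),
}
e_dep = 100.0             # placeholder peak energy deposition per pulse [J/cm^3]
fatigue_strength = 500.0  # placeholder fatigue strength [MPa]

for name, (rho, cp, E, a_th, nu) in materials.items():
    dT = e_dep / (rho * cp)                          # instantaneous temperature jump [K]
    sigma = E * 1e3 * a_th * dT / (1.0 - 2.0 * nu)   # fully constrained thermoelastic stress [MPa]
    print(f"{name:22s}: dT ~ {dT:5.1f} K, sigma ~ {sigma:5.0f} MPa "
          f"(~{100 * sigma / fatigue_strength:3.0f}% of a {fatigue_strength:.0f} MPa fatigue strength)")
```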
the design concept appears to be manageable from an engineering point of view , and initial simulations of target yields and stresses are encouraging for each of three candidate target materials : inconel 718 , titanium alloy 6al-4v grade 5 and nickel .priorities for further evaluation of this target scenario include engineering designs of the components , optimization of the band geometry for pion yield and calibration of the target stress predictions to experimental targetry results .we acknowledge helpful discussions with charles finfrock , george greene and charles pearson , all of bnl .this work was performed under the auspices of the u.s .department of energy under contract no .de - ac02 - 98ch10886 . ,b.j . king _et al . _ ,pac99 , ieee , pp .3041 - 3 . , b.j .king , nim a 451 ( 2000 ) pp .335 - 343 , proc . icfa / ecfa workshop `` neutrino factories based on muon storage rings ( nufact99 ) '' , physics/0005007 . | a conceptual design is presented for a high power pion production target for muon colliders that is based on a rotating metal band . three candidate materials are considered for the target band : inconel alloy 718 , titanium alloy 6al-4v ( grade 5 ) and nickel . a pulsed proton beam tangentially intercepts a chord of the target band that is inside a 20 tesla tapered solenoidal magnetic pion capture channel similar to designs previously considered for muon colliders and neutrino factories . the target band has a radius of 2.5 meters and is continuously rotated at approximately 1 m / s to carry heat away from the production region and into a water cooling tank . the mechanical layout and cooling setup of the target are described , including the procedure for the routine replacement of the target band . a rectangular band cross section is assumed , optionally with i - beam struts to enhance stiffness and minimize mechanical vibrations . results are presented from realistic mars monte carlo computer simulations of the pion yield and energy deposition in the target and from ansys finite element calculations for the corresponding shock heating stresses . the target scenario is predicted to perform satisfactorily and with conservative safety margins for multi - mw pulsed proton beams . 1.5 cm |
the finite - dimensional edwards - anderson spin glass is a model for disordered systems which has attracted much attention over the last decades .the opinion on its nature , especially for three dimensional systems , is still controversial . beside trying to address the problem with the help of analytic calculations and simulations at finite temperature , it is possible to investigate the behavior of the model by means of ground - state calculations . since obtaining spin - glass ground statesis computationally hard , the study is restricted to relatively small systems . recentlya new algorithm , the cluster - exact approximation ( cea ) was presented , which allows in connection with a special genetic algorithm the calculation of true ground states for moderate system sizes , in three dimensions up to size . by applying this methodit is possible to study the ground - state landscape of systems exhibiting a degeneracy . for a thermodynamical correct evaluationit is necessary that each ground state contributes to the results with the same weight , since all ground states have exactly the same energy .recently it was shown , that the genetic cea causes a bias on the quantities describing the landscape .the aim of this paper is to analyze the algorithm with respect to its ground - state statistics .the reasons for the deviation from the correct behavior are given and an extension of the method is outlined , which guarantees thermodynamical correct results . in this work , three - dimensional edwards - anderson( ea ) spin glasses are investigated .they consist of spins , described by the hamiltonian the sum runs over all pairs of nearest neighbors .the spins are placed on a three - dimensional ( d=3 ) cubic lattice of linear size with periodic boundary conditions in all directions .systems with quenched disorder of the interactions ( bonds ) are considered .their possible values are with equal probability .to reduce the fluctuations , a constraint is imposed , so that .the article is organized as follows : next a description of the algorithms is presented .then it is shown for small systems , that the method does not result in a thermodynamical correct distribution of the ground states . in section four, the algorithm and its different variants are analyzed with respect to the ground - state statistics . in the last section a summaryis given and an extension of the method is outlined , which should guarantee thermodynamical correct results .the algorithm for the calculation bases on a special genetic algorithm and on cluster - exact approximation .cea is an optimization method designed specially for spin glasses .its basic idea is to transform the spin glass in a way that graph - theoretical methods can be applied , which work only for systems exhibiting no bond - frustrations .now a short sketch of these algorithms is given , because later the influence of different variants on the results is discussed .genetic algorithms are biologically motivated .an optimal solution is found by treating many instances of the problem in parallel , keeping only better instances and replacing bad ones by new ones ( survival of the fittest ) .the genetic algorithm starts with an initial population of randomly initialized spin configurations (= _ individuals _ ) , which are linearly arranged using an array .the last one is also neighbor of the first one .then times two neighbors from the population are taken ( called _ parents _ ) and two new configurations called _ offspring _ are created . 
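As a concrete illustration of the model just defined, the sketch below draws one ±J disorder realization on an L³ cubic lattice with periodic boundaries, enforcing the zero-sum constraint on the bonds exactly, and evaluates the energy of a spin configuration. The sign convention H = −Σ_{⟨ij⟩} J_ij S_i S_j is assumed because the Hamiltonian itself was stripped from the text, and the lattice size is a placeholder.

```python
# Hedged sketch: one +-J Edwards-Anderson realization on an L^3 cubic lattice with
# periodic boundaries and the sum of all bonds constrained to zero, plus its energy
# H = -sum_<ij> J_ij S_i S_j (sign convention assumed; L is a placeholder).
import numpy as np

rng = np.random.default_rng(1)
L = 4
n_bonds = 3 * L**3                       # one bond per site in each lattice direction
assert n_bonds % 2 == 0                  # needed for an exact +1/-1 split

J_flat = np.array([+1] * (n_bonds // 2) + [-1] * (n_bonds // 2))
rng.shuffle(J_flat)                      # random placement; sum(J) = 0 by construction
J = J_flat.reshape(3, L, L, L)           # J[d, x, y, z]: bond from a site to its +d neighbour

def energy(S, J):
    """S: (L, L, L) array of +-1 spins; returns H = -sum_<ij> J_ij S_i S_j."""
    e = 0.0
    for d in range(3):                   # periodic neighbours via a cyclic shift
        e -= np.sum(J[d] * S * np.roll(S, -1, axis=d))
    return e

S = rng.choice([-1, 1], size=(L, L, L))
print("energy per spin of a random configuration:", energy(S, J) / L**3)
```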
for that purposethe _ triadic crossover _ is used which turned out to be very efficient for spin glasses : a mask is used which is a third randomly chosen ( usually distant ) member of the population with a fraction of of its spins reversed . in a first step the offspring are created as copies of the parents .then those spins are selected , where the orientations of the first parent and the mask agree .the values of these spins are swapped between the two offspring. then a _ mutation _ with a rate of is applied to each offspring , i.e. a randomly chosen fraction of the spins is reversed .next for both offspring the energy is reduced by applying cea : the method constructs iteratively and randomly a non - frustrated cluster of spins . during the construction of the cluster a local gauge - transformation of the spin variablesis applied so that all interactions between cluster spins become ferromagnetic .[ figceaexample ] shows an example of how the construction of the cluster works for a small spin - glass system . to increase the performance , spins adjacent to many unsatisfied bonds are more likely to be added to the clusterthis may introduce a bias on the resulting distribution of the ground states . later this scheme ( `` bias '' )is compared to a variant ( `` same '' ) , where all spins may contribute to the cluster with the same probability . for 3d glasses each cluster contains typically 55 percent of all spins .the non - cluster spins remain fixed during the following calculation , they act like local magnetic fields on the cluster spins . consequently , the ground state of the gauge - transformed cluster is not trivial , although all interactions inside the cluster are ferromagnetic .since the cluster exhibits no bond - frustration , an energetic minimum state for its spins can be calculated in polynomial time by using graph - theoretical methods : an equivalent network is constructed , the maximum flow is calculated and the spins of the cluster are set to orientations leading to a minimum in energy .please note , that the ground state of the cluster is degenerate itself , i.e. the spin orientations can be chosen in different ways leading all to the same energy .it is possible to calculate within one single run a special graph , which represents all ground states of the cluster , and select one ground state randomly .this procedure is called `` broad '' here . on the other hand, one can always choose a certain ground state of the cluster directly .usually this variant , which is called `` quick '' here , is applied , because it avoids the construction of the special graph .but this again introduces a certain bias on the resulting distribution of the ground states .later the influence of the different methods of choosing ground states is discussed .this cea minimization step is performed times for each offspring . afterwards each offspring is compared with one of its parents . the offspring/ parent pairs are chosen in the way that the sum of the phenotypic differences between them is minimal .the phenotypic difference is defined here as the number of spins where the two configurations differ .each parent is replaced if its energy is not lower ( i.e. not better ) than the corresponding offspring .after this whole step is conducted times , the population is halved : from each pair of neighbors the configuration which has the higher energy is eliminated . 
if more than 4 individuals remain the process is continued otherwise it is stopped and the best individual is taken as result of the calculation .the following representation summarizes the algorithm .= = = = = = genetic cea( , , , , ) + * begin * + create configurations randomly + ( ) * do * + + * to * * do * + + select two neighbors + create two offspring using triadic crossover + do mutations with rate + both offspring * do * + + * to * * do * + + construct unfrustrated cluster of spins + construct equivalent network + calculate maximum flow + construct minimum cut + set new orientations of cluster spins + + offspring is not worse than related parent + + replace parent with offspring + + + half population ; + + one configuration with lowest energy + * end * the whole algorithm is performed times and all configurations which exhibit the lowest energy are stored , resulting in statistically independent ground - state configurations ( _ replicas _ ) . a priori nothing about the distribution of ground states raised by the algorithm is known .thus , it may be possible that for one given realization of the disorder some ground states are more likely to be returned by the procedure than others .consequently , any quantities which are calculated by averaging over many independent ground states , like the distribution of overlaps , may depend on a bias introduced by the algorithm . for a thermodynamical correct evaluation all ground states have to contribute with the same weight , since they all have exactly the same energy . for the preceding work ,the distribution of the ground states determined by the algorithm was taken .the method was utilized to examine the ground state landscape of two - dimensional and three - dimensional spin glasses by calculating a small number of ground states per realization .some of these results depend on the statistics of the ground states , as it will be shown in the next section for the case . 
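for reference, the pseudocode printed above can be fleshed out into a runnable control-flow skeleton. in the sketch below the physics-specific ingredients (the triadic crossover, the mutation and, in particular, the cluster-exact approximation step with its maximum-flow computation) are passed in as callables, so only the population handling is reproduced: a ring-shaped population, neighbor tournaments in which an offspring replaces its parent if it is not worse, repeated halving, and return of the best remaining individual. the toy objective used in the demonstration is not the spin-glass energy and only shows that the loop runs; the pairing of offspring and parents by minimal phenotypic difference is simplified to a direct pairing here.

```python
import numpy as np

def genetic_cea_skeleton(make_random, energy, crossover, mutate, minimize,
                         pop_size, n_rounds, n_min, rng):
    """Control-flow skeleton of the genetic CEA routine (cf. the pseudocode)."""
    pop = [make_random(rng) for _ in range(pop_size)]
    while len(pop) > 4:
        for _ in range(n_rounds * len(pop)):
            i = int(rng.integers(len(pop)))
            j = (i + 1) % len(pop)                       # ring neighbor
            donor = pop[int(rng.integers(len(pop)))]     # mask donor
            o1, o2 = crossover(pop[i], pop[j], donor, rng)
            o1, o2 = mutate(o1, rng), mutate(o2, rng)
            for _ in range(n_min):                       # stand-in for the CEA step
                o1, o2 = minimize(o1), minimize(o2)
            for k, off in ((i, o1), (j, o2)):
                if energy(off) <= energy(pop[k]):        # not worse -> replace parent
                    pop[k] = off
        # halving: of each pair of neighbors keep the lower-energy one
        pop = [min(pop[m], pop[m + 1], key=energy)
               for m in range(0, len(pop) - 1, 2)]
    return min(pop, key=energy)

# toy demonstration with a stand-in objective (number of +1 entries)
rng = np.random.default_rng(2)
n = 32
best = genetic_cea_skeleton(
    make_random=lambda r: r.choice([-1, 1], size=n),
    energy=lambda c: float(np.sum(c == 1)),
    crossover=lambda a, b, m, r: (np.where(r.random(n) < 0.5, a, b),
                                  np.where(r.random(n) < 0.5, b, a)),
    mutate=lambda c, r: np.where(r.random(n) < 0.05, -c, c),
    minimize=lambda c: c,
    pop_size=16, n_rounds=4, n_min=1, rng=rng)
print("toy objective of the returned configuration:", float(np.sum(best == 1)))
```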
on the other hand ,the main findings of the following investigations are not affected by the bias introduced by genetic cea : the existence of a spin - glass phase for nonzero temperature was confirmed for the three - dimensional spin glass .the method was applied also to the random - bond model to investigate its ferromagnetic to spin - glass transition .finally , for small sizes up to all ground - state valleys were obtained by calculating a huge number of ground states per realization and applying a new method called _ ballistic search _ .in this section results describing the ground - state landscape of small three - dimensional spin glasses are evaluated .it is shown that the data emerging from the use of raw genetic cea and from a thermodynamically correct treatment differ substantially .several ground states for small systems of size were calculated .1000 realizations of the disorder for and 100 realizations for were considered .the parameters ( , , , ) , for which true ground states are obtained , are shown in .for all calculations the variants bias and quick were used to obtain maximum performance .the effect of different variants on the results is discussed in the next section .two schemes of calculation were applied : * for each realization runs of genetic cea were performed and all states exhibiting the ground - state energy stored .consequently , this scheme reflects the ground - state statistics which is determined solely by the genetic cea method .configurations which have a higher probability of occurrence contribute with a larger weight to the results .* for each realization the algorithm was run up to times .each particular state was stored only once . for later analysisthe number of times each state occurred was recorded .additionally , a systematic local search was applied to add possibly missing ground states which are related by flips of free spins to states already found .finally , a realization exhibits 25 different ground states on average . for a realization on average 240 stateswere found and 6900 states for .+ for the evaluation of physical quantities every ground state is taken with the same probability in this scheme .thus , the statistics obtained in this way reflect the true thermodynamic behavior . to analyze the ground - state landscape ,the distribution of overlaps is evaluated . for a fixed realization of the exchange interactions and two replicas , the overlap is defined as the ground state of a given realization is characterized by the probability density .averaging over the realizations , denoted by {j}$ ] , results in ( = number of realizations ) {j } = \frac{1}{z } \sum_{j } p_j(q ) \label{def_p_q}\ ] ] because no external field is present the densities are symmetric : and .so only is relevant .the result of for is shown in fig .[ figpqlfive ] . for the true thermodynamic resultsmall overlaps occur less frequent than for the data obtained by the application of pure genetic cea .large overlap values occur more often .this deviation has an influence on the way the spin glass behavior is interpreted .the main controversy about finite - dimensional spin glasses mentioned at the beginning is about the question whether for the infinite system shows a long tail down to or not . 
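for completeness, a sketch of how the overlap and the distribution of overlaps can be estimated from a set of stored ground states. the overlap formula itself was lost in extraction, so the standard edwards-anderson definition q = (1/N) sum_i S_i^a S_i^b is assumed here; the weighting argument allows one to compare the raw algorithmic frequencies with the flat, thermodynamically correct weighting. the binning and the random test configurations are illustrative only.

```python
import numpy as np

def overlap(spin_a, spin_b):
    """Replica overlap q = (1/N) * sum_i S_i^a S_i^b."""
    return float(np.mean(spin_a * spin_b))

def overlap_distribution(states, weights=None, bins=20):
    """Weighted histogram estimate of P(|q|) over all ordered pairs of the
    stored ground states of one realization.  Uniform weights give each
    state the same probability (thermodynamically correct average); passing
    the raw frequencies returned by the algorithm instead reproduces the
    biased estimate discussed in the text."""
    m = len(states)
    if weights is None:
        weights = np.full(m, 1.0 / m)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    qs, ws = [], []
    for a in range(m):
        for b in range(m):
            qs.append(abs(overlap(states[a], states[b])))
            ws.append(weights[a] * weights[b])
    hist, edges = np.histogram(qs, bins=bins, range=(0.0, 1.0),
                               weights=ws, density=True)
    return hist, edges

# illustration with random configurations standing in for ground states
rng = np.random.default_rng(3)
test_states = [rng.choice([-1, 1], size=4**3) for _ in range(8)]
hist, edges = overlap_distribution(test_states)
```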
to investigate the finite size behavior of fraction of the distribution below is integrated : the development of as a function of system size is shown in fig .[ figxl ] .the datapoints for the larger sizes , obtained using pure genetic cea , are taken from former calculations .these values are more or less independent of the system size , while the correct thermodynamic behavior shows a systematic decrease . whether for the long tail of persists can not be concluded from the data , because the systems are too small .nevertheless , the true behavior differs significantly from the former results .to understand , why genetic cea fails in producing the thermodynamical correct results , in this section the statistics of the ground states , which is determined by the algorithm , is analyzed directly . for the case where all ground states were calculated using a huge number of runs , the frequencies each ground state occurred were recorded . in fig .[ fighistogramm ] the result for one sample realization of is shown .the system has 56 different ground states . for each statethe number of times it was returned by the algorithm in runs is displayed .obviously the large deviations from state to state can not be explained by the presence of statistical fluctuations .thus , genetic cea samples different ground states from the same realization with different weights . to make this statement more precise ,the following analysis was performed : two ground states are called _ neighbors _ , if they differ only by the orientation of one spin .all ground states which are accessible from each other through this neighbor - relation are defined to be in the same ground - state _valley_. that means , two ground states belong to the same valley , if it is possible to move from one state to the other by flipping only free spins , i.e. without changing the energy . for all realizationsthe valleys were determined using a method presented in , which allows to treat systems efficiently exhibiting a huge number of ground states .then the frequencies for each valley were computed as the sum of all frequencies of the states belonging to . in fig .[ fighistosample ] the result is shown for a sample realization , which has 15 different ground state valleys .large valleys are returned by the algorithm more frequently , but seems to grow slower than linearly .a strict linear behavior should hold for an algorithm which guarantees the correct behavior .for averaging has to be normalized , because the absolute values of the frequency differ strongly from realization to realization , even if the size of a valley , i.e. the number of ground states belonging to it , is the same . for each realization, the normalized frequency is measured relatively to the average frequency of all valleys of size 1 : if a realization does not exhibit a valley consisting only of one ground state , the frequency of the smallest valley is taken .it is assumed , that the normalized frequency exhibits a dependence , which is justified by the results shown later .consequently , for the case the size of the smallest valley is larger than one , is chosen .the value of is determined self - consistently .the result for of as a function of the valley - size is presented in fig .[ fighistolthree ] . 
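the valley analysis described above amounts to finding connected components of the set of stored ground states under the "differ by exactly one spin" relation. the following sketch does this with a simple union-find over all pairs, which is sufficient for small numbers of states (the paper uses the more efficient ballistic-search machinery for realizations with huge degeneracies), and then accumulates the per-valley frequencies from the recorded counts. function names and the tiny example are illustrative.

```python
import numpy as np

def find_valleys(states):
    """Group ground states into valleys: two states are neighbors if they
    differ in exactly one spin, and a valley is a connected component under
    this relation (states reachable from one another by flipping free spins
    one at a time)."""
    m = len(states)
    parent = list(range(m))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a in range(m):
        for b in range(a + 1, m):
            if int(np.sum(states[a] != states[b])) == 1:
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[rb] = ra

    valleys = {}
    for i in range(m):
        valleys.setdefault(find(i), []).append(i)
    return list(valleys.values())

def valley_frequencies(valleys, counts):
    """F(V): total number of times the algorithm returned any state of the
    valley, summed from the per-state counts recorded during the runs."""
    return [sum(counts[i] for i in members) for members in valleys]

# tiny example: four 3-spin states forming two valleys
states = [np.array(s) for s in ([1, 1, 1], [1, 1, -1], [-1, -1, 1], [-1, -1, -1])]
valleys = find_valleys(states)
print(valleys, valley_frequencies(valleys, counts=[5, 3, 2, 2]))
```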
a value of was determined .please note , that the fluctuations for larger valleys are higher , because quite often only one valley was available for a given valley - size .the algebraic form is clearly visible , proving that genetic cea overestimates systematically the importance of small ground - state valleys .for a value of was obtained , while the case resulted in .consequently , with increasing system size , the algorithm fails more and more to sample configurations from different ground - state valleys according to the size of the valleys .this explains , why the difference of between the correct result and the values obtained in increases with growing system size .similar results were obtained for two - dimensional systems . for self - consistent value of was found , while the treatment of systems resulted in .here only a slight finite - size dependence occurs .this may explain the fact , that the width of the distribution of overlaps , even calculated only by the application of pure genetic cea , seems to scale to zero . in the second section of this papertwo variants of the algorithm were presented , which may be able to calculate ground states more equally distributed . to investigate this issue ,similar ground - state calculations were conducted for and again was calculated . for the case , were same was used instead of bias , a value was determined self - consistently . using broad instead of quick resulted in .finally , by applying same and broad together , was obtained .consequently , applying different variants of the method decreases the tendency of overestimating small valleys , but the correct thermodynamic behavior is not obtained as well .even worse , broad and same are considerably slower than the combination of quick and bias .so far it was shown , that genetic cea fails in sampling ground states from different valleys according the size of the valleys .now we turn to the question , whether at least states belonging to the same valley are calculated with the correct thermodynamic distribution . by investigating the frequencies of different ground states belonging to the same valley it was found again , that these configurations are not equally distributed .but it is possible to study this issue in a more physical way . for that purpose ground states of 100 realizationswere calculated .then the valley structure was analyzed .the average distribution of overlaps was evaluated , but only contributions of pairs of states belonging to the same valley were considered .for comparison , for the same realizations a long monte - carlo ( mc ) simulation was performed , i.e. randomly spins were selected and flipped if they were free .the ground states were used as starting configurations . 
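the monte-carlo step used for comparison can be sketched directly: spins are picked at random and flipped only if they are free, i.e. if their local field vanishes, so the move never changes the energy and a long enough run samples the states of one valley uniformly. the bond-array layout is the same illustrative one used in the earlier sketches (bonds[d] couples a site to its neighbor in the +d direction, periodic boundaries) and is an assumption, not the paper's data structure.

```python
import numpy as np

def ea_energy(s, b):
    """E = -sum over bonds, counting each bond once via the +d direction."""
    return -sum(float(np.sum(b[d] * s * np.roll(s, -1, axis=d))) for d in range(3))

def free_spin_mc(spins, bonds, steps_per_spin, rng):
    """T = 0 Monte Carlo inside a ground-state valley: flip randomly chosen
    spins only when their local field vanishes (free spins), so the energy
    is conserved and long runs sample the valley uniformly."""
    s = spins.copy()
    n = s.size
    for _ in range(steps_per_spin * n):
        x, y, z = np.unravel_index(int(rng.integers(n)), s.shape)
        h = 0
        for d in range(3):                    # field from the six neighbors
            plus = [x, y, z]
            plus[d] = (plus[d] + 1) % s.shape[d]
            minus = [x, y, z]
            minus[d] = (minus[d] - 1) % s.shape[d]
            h += bonds[d][x, y, z] * s[tuple(plus)]
            h += bonds[d][tuple(minus)] * s[tuple(minus)]
        if h == 0:                            # free spin: flipping costs nothing
            s[x, y, z] *= -1
    return s

# small self-check: the energy is unchanged by construction
rng = np.random.default_rng(4)
L = 4
bonds = rng.choice([-1, 1], size=(3, L, L, L))
config = rng.choice([-1, 1], size=(L, L, L))
moved = free_spin_mc(config, bonds, steps_per_spin=5, rng=rng)
print(ea_energy(config, bonds), ea_energy(moved, bonds))
```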
sincea mc simulation ensures the correct thermodynamic distribution of the states , all ground states of a valley appear with the same frequency , if the simulation is only long enough .a length of 40 monte - carlo steps per spin were found to be sufficient for .the result for the distribution of overlaps restricted to the valleys is displayed in fig .[ figpqvalley ] .significant differences between the datapoints from the pure genetic cea and the correct behavior are visible .consequently , the algorithm does not sample configurations belonging to the same ground - state valley with the same weight as well .in this work the genetic cluster - exact approximation method is analyzed .the algorithm can be used to calculate many independent true ground states of ea spin glasses .the results from the raw application of the method and from calculations of _ all _ ground states for small system sizes were compared . by evaluating the distribution of overlapsis was shown , that genetic cea imposes some bias on the ground - state statistics .consequently , the results from the application of the raw method do not represent the true thermodynamics . to elucidate the behavior of the algorithmthe statistics of the ground states were evaluated directly .it was shown , that different ground states have dissimilar probabilities of occurrence . to understand this effect better ,the ground - state valleys were determined .the genetic cea method finds configurations from small ground - state valleys relative to the size of the valley more often than configurations from large valleys .additionally , within a valley the states are not sampled with the same weight as well .it was shown that two variants of the algorithm , which decrease its efficiency , weaken the effect , but it still persists . summarizing , two effects are responsible for the biased ground - state sampling of genetic cea : small valleys are sampled too frequently and the distribution within the valleys is not flat. for small system sizes it is possible to calculate all ground states , so one can obtain the true thermodynamic average directly .but already for there are realizations exhibiting more than different ground states .since the ground - state degeneracy grows exponentially with system size larger systems can not be treated in this way .the following receipt should overcome these problems and should allow to obtain the true thermodynamic behavior for larger systems : * calculate several ground states of a realization using genetic cea . *identify the ground states which belong to the same valleys .* estimate the size of each valley .this can be done using a variant of ballistic search , which works by flipping free spins sequentially , each spin at most once .the number of spins flipped is a quite accurate measure for the size of a valley . 
*sample from each valley a number of ground states , which is proportional to the size of the valley .this guarantees , that each valley contributes with its proper weight .each state is obtained by performing a mc simulation of sufficient length , starting with true ground - state configurations .since mc simulations achieve a thermodynamical correct distribution , it is guaranteed that the states within each valley are equally distributed .please note , that it is not necessary to calculate all ground states to obtain the true thermodynamic behavior , because it is possible to estimate the size of a valley by analyzing only some sample ground states belonging to it .furthermore , it is even only necessary to have configurations from the largest valleys available , since they dominate the ground - state behavior .this condition is fulfilled by genetic cea , because large valleys are sampled more often than small valleys , even if small valleys appear too often relatively . from the results presented hereit is not possible to deduce the correct behavior of the infinite system , because the system sizes are too small . using the scheme outlined above , it is possible to treat system sizes up to .the author thanks k. battacharya and a.w .sandvik for interesting discussions .the work was supported by the graduiertenkolleg `` modellierung und wissenschaftliches rechnen in mathematik und naturwissenschaften '' at the _ interdisziplinres zentrum fr wissenschaftliches rechnen _ in heidelberg and the _ paderborn center for parallel computing _ by the allocation of computer time .the author announces financial support from the dfg ( _ deutsche forschungsgemeinschaft _ ) . for reviews on spin glassessee : k. binder and a.p .young , rev .phys . * 58 * , 801 ( 1986 ) ; m. mezard , g. parisi , m.a .virasoro , spin glass theory and beyond , world scientific , singapur 1987 ; k.h .fisher and j.a .hertz , spin glasses , cambridge university press , 1991 [ 0.45 ] for .the dashed line shows the old result obtained by computing about independent ground states per realization using genetic cea .the solid line shows the same quantity for the case , where all existing ground states were used for the evaluation , i.e. where the correct thermodynamic behavior is ensured ., title="fig : " ] [ 0.45 ] of the distribution of overlaps for as a function of system size .the upper points ( circles ) where obtained by calculating about independent ground states per realization using genetic cea .the lower points ( triangles ) show the result ( ) for the case , where all existing ground states were used for the evaluation , i.e. where the correct thermodynamic behavior is ensured ., title="fig : " ] [ 0.45 ] is found by genetic cea for .the frequency is normalized so that ( see text ) . the probability that a cluster is found increases with the size of the cluster , but slower than linearly .the line shows a fit with , title="fig : " ] [ 0.45 ] of overlaps for restricted to pairs of ground states belonging to the same valley .the full line shows the result for the case , where the statistics of the ground state is determined by the genetic cea algorithm .the data represented by the dashed line was obtained using states which are equally distributed within each valley , which was guaranteed by performing a mc simulation ., title="fig : " ] | the genetic cluster - exact approximation algorithm is an efficient method to calculate ground states of ea spin glasses . 
the method can be used to study ground - state landscapes by calculating many independent ground states for each realization of the disorder . the algorithm is analyzed with respect to the statistics of the ground states and the valleys of the energy landscape . furthermore , the distribution inside each valley is evaluated . it is shown that the algorithm does not lead to a true thermodynamic distribution , i.e. different ground states do not occur with the same frequency when performing many runs . an extension of the technique is outlined , which guarantees that each ground state occurs with the same probability . * keywords ( pacs - codes ) * : spin glasses and other random models ( 75.10.nr ) , numerical simulation studies ( 75.40.mg ) , general mathematical systems ( 02.10.jf ) . |
the arctic ocean soundscape is defined by its sea ice cover .internal frictional shearing , thermal stress fracturing , and interaction within leads generate distinct sounds that can exceed 100 db re 1 pa hz . at the same time , the upward refracting sound speed profile and nearly year round ice cover create a propagation channel that preserves low frequency signals while attenuating higher frequency components .this unique environment depends strongly on the properties of the arctic sea ice , including percentage of areal cover , thickness ( age ) , and lateral extent . in the past decade , the arctic sea ice has dramatically reduced in thickness as well as annual extent, resulting in unknown changes to the soundscape .sea ice noise and the arctic soundscape properties have historically been an area of interest in underwater acoustics. measurements of ice noises have shown that they are highly non gaussian, varying in characteristic by the generating mechanism, but often more prevalent near ice ridges. the cumulative ambient noise levels generated by ice noise have been shown to correlate with environmental variables like wind , air pressure , and temperature. near the marginal ice zone ( miz ) , where the ice is subject to increased wave forcing , noise levels have been shown to increase as much as 10 db from those further under the ice cap. it is well known that the sea ice is a strong scatterer that attenuates high frequencies at a much higher rate than the open ocean, although the exact attenuation coefficients depend on the local sea ice structure and have yet to be determined. due in large part to biological activity and experimental accessibility , the western arctic soundscape near the beaufort sea has been studied more extensively than the eastern arctic soundscape near and north of the fram strait .studies north of 85 are extremely rare . in april 2013 ,a bottom moored vertical hydrophone array was deployed at ice camp barneo near 89n , 62w .the experiment was designed to study the propagation properties and transmission loss under the sea ice , as well as the northern polar soundscape . aroundapril 15 , 2013 the mooring cable failed .the subsurface float rose to the surface where it remained constrained by the sea ice , with the array hanging below .it drifted southward with the transpolar current toward the fram strait , recording ambient noise as scheduled .microcat pressure measurements ( see sec . [subsec : microcats ] ) showed that the array was vertical under its own weight during much of the transit .the resulting data record the spatiotemporal variation of the far northern arctic soundscape ( 85 n ) . in this study ,the dataset is analyzed and the observations are interpreted in terms of previous studies of the arctic soundscape . the paper is organized as follows . in sec .ii , the acoustic experiment is described , data processing methods are explained , and collection of supplementary environmental data is discussed . in sec .iii , select noise events are discussed . in sec .iv , the results of statistical soundscape analyses in both time and depth are presented . 
in the final sec .v , arctic soundscape power estimates from previous studies are compared with summer 2013 .the goal of this paper is to establish an understanding of soundscape contributors and sound levels in the northeastern arctic during summer 2013 .p1.8 cm < p2 cm < p2 cm < p1.7 cm < hydrophone # & hydrophone depths ( m ) & microcat depths ( m ) & microcat # + 1 & 12 & 5 & 1 + 2 & 26.5 & 25 & 2 + 3 & 41 & - & - + 4 & 55.5 & 50 & 3 + 5 & 70 & - & - + 6 & 84.5 & - & - + 7 & 99 & 100 & 4 + 8 & 113.5 & - & - + 9 & 128 & - & - + 10 & 142.5 & - & - + 11 & 159.1 & 150 & 5 + 12 & 178.1 & - & - + 13 & 199.9 & 201 & 6 + 14 & 224.8 & - & - + 15 & 253.4 & 250 & 7 + 16 & 286.1 & - & - + 17 & 323.6 & - & - + 18 & 366.5 & 350 & 8 + 19 & 415.6 & - & - + 20 & 471.8 & 450 & 9 + 21 & 536.2 & - & - + 22 & 610 & 600 & 10 + a 600 m long bottom - moored acoustic receiving array was deployed at ice camp barneo , 8923.379n , 6235.159w , on april 14 , 2013 .twenty - two omnidirectional hydrophones were spaced along the array , with phones 110 separated by 14.5 m and phones 1122 separated by logarithmically increasing spacing starting at 16.5 m ( table [ table : array ] , fig .[ fig : array ] ) .the topmost hydrophone was located 12 m below the subsurface float .the hydrophones recorded underwater sound for 108 minutes six days per week ( sunday through friday ) , starting at 1200 utc , with a sampling frequency of 1,953.125 hz .acoustic recordings are available for 119 days between april 29 and september 20 , 2013 .the raw acoustic recordings were scaled to be in units of instantaneous sound pressure using the analog - to - digital conversion parameters , the gain , and the hydrophone receiving sensitivity given by the manufacturer .the system noise floor was taken from a model combining the known self noise of its individual components .the system was also tested in a faraday cage and by calculating the coherence between multiple sensors recording noise in a quiet room , which both fit well with the modeled system noise floor .median spectral estimates were created by segmenting data across a given time period into 4096-point windows ( 2 s ) , taking a 16,384-point fast fourier transform of each window , and calculating the median of the individual spectral estimates .the frequency bins are 0.12 hz for these estimates .spectrograms were estimated separately using 512-point windowed segments ( 0.25 s ) zero - padded to 2048 points ( df 1 hz ) .all data were recorded at 84.5 m ( hydrophone # 6 ) unless otherwise noted .ten sea - bird sbe 37sm / smp microcat instruments were co - located with the hydrophones , spaced 25 , 50 , 50 , 50 , 50 , 100 , 100 , and 150 m apart . the topmost microcat was located 5 m below the subsurface float ( table [ table : array ] , fig .[ fig : array ] ) .the microcats began recording on april 28 and sampled continuously until september 19 .the sampling period of the top four instruments was 480 s , the next five 380 s , and the deepest 300 s. 
a xeos technologies kilo iridium - gps mooring location beacon located on top of the subsurface float began transmitting alarm messages on may 3 , 2013 , indicating that the mooring had prematurely surfaced .the reported position at the time of surfacing was 8850.30n , 5117.91w , 63 km from the deployment location .analysis of an acoustic survey on april 14 , following deployment of the mooring , revealed that the acoustic release was significantly shallower than expected .the implication is that the mooring failed shortly after deployment , but the subsurface float was trapped beneath sea ice , preventing the location beacon from obtaining gps positions or transmitting alarm messages until it was exposed on may 3 .the float drifted southward in the transpolar drift .there were frequent gaps in transmissions from the location beacon , which are presumed to coincide with periods when the subsurface float was covered by sea ice ( fig .[ fig : map ] ) .the buoy was recovered on september 21 , 2013 , at 8403.50n , 00305.83w ( fig .[ fig : float ] ) . the mooring line was found to have parted immediately above the anchor ( fig .[ fig : array ] ) .a 56day discontinuous timeseries of the bathymetry along the array drift path between may 3 and september 21 was created using the international bathymetric chart of the arctic ocean from the national centers for environmental information ( fig . [fig : bathy ] ) .the georeferenced polar stereo projection bathymetry grid was indexed at the desired coordinate locations to obtain the timeseries data .acoustic propagation along the drift path is ray dominated and confined primarily to the top 200 m , so the variation in bathymetry has a minimal effect on the measured ambient noise ( fig .[ fig : ssp ] ) . low frequency ( hz ) cable strum was observed but kept in the spectral estimates for comparison purposes .strong spectral bands were also observed , exceeding 100 db re 1 hz and extending to the nyquist frequency ( 976.56 hz ) ( fig .[ fig : filters ] ) . periods of unexpectedly low pressures ( depth ) on the microcats corresponded to the periods of high acoustic power ( fig . [ fig : pressures ] ) .it therefore seems unlikely that these high spectral levels are due to propagating acoustic noise . with the buoyant subsurface float constrained to the surface ,flow past the mooring lifts and thus tilts the array and reduces the microcat pressures ( depths ) .potential non - acoustic noise sources associated with flow past the mooring include strumming , flow noise , and/or mechanical vibrations .to remove data affected by flow noise , the median microcat pressure for each day was computed .the pressure on the deepest microcat ( 600 m ) had the largest variation between days and was used as an indicator of flow - related noise ( fig .[ fig : mucats ] ) . 
by comparing the good and bad spectrograms ( fig .[ fig : filters ] ) with their median microcat pressures , it was found that most corrupted data had a median pressure level below 604.9 dbars .therefore days with dbars were not used .this method selected 19 days for further analysis : april 30 , may 1 , 2 , 7 , 8 , 9 , 12 , 14 , june 16 , 18 , july 3 , 14 , 19 , 24 , august 2 , and september 10 , 18 , 19 , 20 .the near polar arctic under - ice soundscape is generated by sea ice , wind , anthropogenic and biologic noise sources that travel long distances within the subsurface propagation duct .the following section discusses specific examples of underwater noise for the eastern arctic soundscape of summer 2013 .the soundscape observed during the array transit consists primarily of background noise from distant events .the spectrum is characterized by a broad spectral peak at 1020 hz and a power fall off above hz ( fig .[ fig : background_noise ] , log frequency ) .the hydrophone received pressure timeseries shows that the background noise and broadband noise vary by a factor of 10 , despite appearing similar in spectrogram estimates ( fig .[ fig : background_noise ] , [ fig : broadband_noise ] ) .the hydrophone timeseries also emphasizes the wide variability within a recording period due to ice noise ( fig .[ fig : broadband_noise ] ) . the median spectral estimate for each spectrogramcan be compared to the daily and monthly median spectral estimates ( fig .[ fig : dailymonthly ] ) , demonstrating the smoothing effect of using a longer recording for the median spectral estimate .ice noises were observed to be either broadband or tonal in nature , lasting from 1 to 100 s. broadband ice noise extended across the frequency band ( fig .[ fig : broadband_noise ] , log frequency ) . tonal ice noises are single frequency or harmonic signatures modulated in time ( figs .[ fig : squeak1]-[fig : squeak3 ] , linear frequency ) . xie andfarmer demonstrated that constant frequency ice tonals could be modeled as resonance in an infinitely long sea ice block of uniform height , density , and velocity generated by frictional shear stress on its edge .the nonlinear tonals observed here may indicate anomalies in the local height or composition of the sea ice or a frictional stress that is velocity dependent ( fig .[ fig : squeak1 ] ) .the degree of nonlinearity varies between hydrophone recordings ( fig .[ fig : squeak2 ] ) , indicating that significant changes in ice properties and dynamics may occur within the spatiotemporal span of 23 array drift days .another interesting case is ocean swell modulated ice tonals , with period s ( fig .[ fig : squeak3 ] ) .ocean waves impinging on the sea ice edge can generate seismic or flexural waves that propagate within the sea ice , if the frequency ice product is less than about 300 hz m. swell modulated ice tonals observed on the receiving array suggest that these effects can be seen as far as 230 km from the ice edge .broadband pressure pulses generated by airguns are used to image the ocean bottom subsurface during seismic surveys . at long distances ,higher frequencies ( f 100 hz ) are attenuated by scattering at the water - ice boundary .the resulting pulses can be observed on hydrophone receivers at f 50 hz .distant noise from seismic surveys can be observed almost daily in the fram strait during summer months .for example , airgun surveys were observed on 9095% of days between july and september 2009. 
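returning to the day-screening step described at the start of this passage, it is easy to reproduce: for each recording day the median pressure of the deepest (600 m) microcat is computed and the day is kept only if that median does not fall below the 604.9 dbar threshold. the direction of the comparison is an inference, since the inequality sign was lost in the extracted text, but it follows from the observation that flow past the mooring lifts the array and lowers the measured pressure. the day labels and pressure records in the example are made up.

```python
import numpy as np

def select_quiet_days(day_labels, deep_pressure_dbar, threshold=604.9):
    """Keep only days whose median deepest-microcat pressure stays at or
    above the threshold; low medians indicate a flow-tilted array and
    therefore flow-contaminated acoustic data."""
    day_labels = np.asarray(day_labels)
    deep_pressure_dbar = np.asarray(deep_pressure_dbar, dtype=float)
    kept = []
    for day in np.unique(day_labels):
        if np.median(deep_pressure_dbar[day_labels == day]) >= threshold:
            kept.append(str(day))
    return kept

# illustration with made-up pressure records for three recording days
rng = np.random.default_rng(6)
days = np.repeat(["day-1", "day-2", "day-3"], 200)
pressure = np.concatenate([
    605.2 + 0.2 * rng.standard_normal(200),   # quiet day
    598.0 + 2.0 * rng.standard_normal(200),   # strong flow, array lifted
    605.0 + 0.3 * rng.standard_normal(200),   # quiet day
])
print(select_quiet_days(days, pressure))      # expects day-1 and day-3
```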
airgun survey pulses were observed on 11 out of 19 selected recording days between may 7 and sep .[ fig : airgun ] , linear frequency ) . due to irregular sampling , the 13 seismic surveys recorded per month represented seismic surveys on 43%100% of the recording days in may through september .these pulses arrived at about 4from horizontal ( fig .[ fig : beam ] ) , as determined by incoherently averaged 2.1 s bartlett beamformer estimates across june 16 . for other recording days, the pulse arrivals ranged between 515 .the shallow arrival angles indicate that these low frequency signals travel great distances within the subsurface sound propagation channel .location , type and date of surveys in norwegian territory are available through the norwegian petroleum directorate . according to these data ,the array was between 1800 and 3500 km distant from seismic surveys at the beginning of its transit in april and 1000 to 3000 km at the end in september . using the advanced microwave scanning radiometer-2 ( amsr-2 ) 89-ghz channel satellite dataset ( ) , the array was found to be about 1000 km distant from the ice edge in april and 200 km distant in september ( fig .[ fig : source_distances ] ) .it is likely that the signals observed arrived from the closest surveys .thus airgun signals received at 7080 db re 1 hz had propagated approximately 800 km in open water and anywhere from 200 to 1000 km under the ice . frequent seismic activity and low magnitude earthquakes occur where the north american / eurasian plate boundary meets the gakkel ridge . because the nearest seismic stations are land based and up to 1000 km away , many of these earthquakesare not registered with the global seismic network . a receiving array deployed in the lincoln seahas been successful in detecting and locating earthquakes originating from this juncture. earthquakes can be observed acoustically through the phase arrival .the phase is an acoustic pressure wave coupled into the water column from the ocean bottom at an anomaly ( for example , a seamount ) .the phase arrives after the and seismic arrivals but is the most visible arrival .the differences in arrival time between the three waves can be used to localize the earthquake if appropriate propagation velocities are known . phase arrivals were observed during the array transit in summer 2013 , indicating that arctic basin earthquakes also contribute to the low frequency soundscape ( fig .[ fig : earthquake ] ) .the earthquake center was estimated to be about 100 km from the receiving array .in this section , spectral estimates from different months at several depths are compared to establish the spatiotemporal dependence of ambient noise in the eastern arctic during summer 2013 .three days were selected to examine the daily variation in median ambient noise levels : may 7 , july 14 , september 10 ( fig . 
[ fig : dailymonthly ] ) .the 10 , 50 ( median ) , and 90 percentiles for may 2013 show the variation in spectral shape between the most and least frequent generation mechanisms .for example , cable strum is visible at f 10 hz in the 90 percentile .other peaks near 10100 hz may be caused by flow related noise or seismic survey activity .the 10 percentile is defined by single , broad peak at 15 hz due to distant sources within the sound channel .the daily median estimates progress from least to most peaked , with may 7 showing a peak to peak ( 15 to 900 hz ) difference of 26 db and september 10 showing a peak to peak difference of 44 db .this trend indicates that the earlier spatiotemporal recordings contain more higher frequency noises whereas the later recordings were influenced by lower frequency sources .the trend of increasing peakedness in the median spectral is more apparent in the monthly median spectra ( fig .[ fig : dailymonthly ] ) . at low frequencies ( f 100 hz ) ,an increase in median ambient noise levels corresponds to decreasing distance to the ice edge and to seismic survey activity . at higher frequencies ( f 100 hz ) medianambient noise levels decrease with time , corresponding to fewer ice creak signatures observed in spectrograms . when limited data was available , such as in august , the median monthly spectra was likely to show evidence of flow related noise i.e. at about 75 and 90 hz , despite efforts to remove this noise .it is interesting to note these spatiotemporal differences in median ambient noise levels with proximity to the marginal ice zone ( miz ) ; the effect of the miz on ambient noise was previously studied only for distances less than 150 km .the decrease in high frequency median ambient noise levels may represent a physical change in the spatial or temporal ice stress field .the median ambient noise depth profile during summer 2013 was found to depend on frequency . 
at low frequencies ( 1050 hz ) , a local peak in median ambient noiseis centered in the subsurface propagation channel at the depth where ice reflections and upward refracting rays converge ( fig .[ fig : depth ] ) .below the sound channel and at higher frequencies ( 200500 hz ) , the median noise level is constant with depth as expected for an isotropic distribution of surface sources ( fig .[ fig : depth ] ) .there is a strong signal during may at about 400 600 m depth that is likely due to flow related noise , which could not be completely removed from the lower hydrophones using the noise filtering method ( sec .[ sssec : filtering ] ) .[ fig : thaaw_framiv ] cccccccccc + & location ( lat , lon ) & experiment & dates & 15 hz & 50 hz & 100 hz & 500 hz & 1 khz + & 86n 56.9w & may june 2013 & 05/2013 & 76.5 & 66 & 60.2 & 43.7 & - + & 89n 1e & & 06/2013 & & & & & + & 86n 1.3e & july sep .2013 & 07/2013 & 78.7 & 64.9 & 55.6 & 37.6 & - + & 83.8n 4.5e & & 09/2013 & & & & & + & 83n 20e & fram iv & 04/1982 & 90 & 79.5 & 73 & 60 & 53 + + & 82n 168e & mellen , marsh 1985 & 0910/1961 & 72 & 70 & 61 & 51 & 40 + & 75n 168w & & 0509/1962 & 63 & 64 & 49 & 37 & 32 + & & & & - & 75 & 72 & 61 & 52 + & 78.5n 105.25w & ice pack i & 27/04/1961 & 50 & 42 & 38 & 37 & 20 + & & & 28/04/1961 & 58 & 52 & 51 & 52 & 51 + & 74.5n 115.1w & ice pack ii & 9/23/1961 & - & 57 & 56 & 52 & 43 + & beaufort sea & prl & april 1975 & 73 & 68 & 62 & 48 & 43 + & & & & ( 10 hz ) & ( 32 hz ) & & + & 142w & aidjex & 08/1975 & 6585 & 6575 & - & - & 3855 + & & & 11/1975 & 7090 & 6588 & - & - & 4070 + & & & 02/1976 & 6590 & 6090 & - & - & 3570 + & & & 05/1976 & 6588 & 6090 & - & - & 3768 + & & & & & & & & + & 71n 126.07w & kinda et al .2013 & 11/2004 & 68 & 69 & 66 & 58 & 54 + & & & 06/2005 & & & & & + & 72.46n 157.4w & roth et al .2011 & 09/2008 & 84 & 80 & 74 & 60 & 56 + & & & 03/2009 & 84 & 70 & 62 & 48 & 48 + & & & 05/2009 & 76 & 61 & 56 & 44 & 44 + [ tab : spectrallevels ] the summer 2013 median ambient noise results are here compared with historical estimates from both western and eastern arctic stations .the median spectral estimates for may 2013 was below , but similarly structured to , a composite spectral estimate from april 1982. the peak at 15 hz appears less prominent at lower frequencies in 2013 than in 1982 . in comparison ,a spectral estimate recorded in the beaufort sea in april 1975 shows comparable ambient noise levels and structure to 2013 but does not extend to lower frequencies ( fig .[ fig : comparisons]). 
the differences in these spectra may be caused by environmental factors or by experimental factors , including recording length and post processing methods , which were not published alongside the results .the broad peak at 15 hz in all estimates can be attributed to distant ice and seismic survey noises propagating in the sound channel, as higher frequencies are more attenuated and lower frequencies have long wavelengths compared to the channel .the higher frequencies fall off at a consistent rate of f .it is likely that an increased cumulative ice noise source level increases high frequency noise , thus reducing the fall off rate ( fig .[ fig : dailymonthly ] ) .[ fig : results ] demonstrates the wide variability in arctic ambient noise estimates across frequency , year , and study .this variability arises from a complex relationship between the arctic soundscape and both environmental and anthropogenic factors , such as sea ice percent cover , sea ice age / thickness , barometric conditions and wind patterns , local subsurface currents , seismic survey activity , and marine biologic activity .the studies shown indicate that , without correction for environmental factors , there is not a significant trend in the arctic soundscape power levels between 1960 and 2013 , but that frequency dependent ambient noise levels are within a 3040 db range for both regions of the ice covered arctic .between april and september , 2013 , a 22element vertical hydrophone array recorded the eastern arctic for 108 minutes / day between 89n , 62w and svalbard ( north of the fram strait ) .the data were analyzed to produce spectral estimates of the median soundscape and demonstrate that ice noise and seismic airgun surveys were the dominant noise sources .the median ambient noise level for may 2013 was below a similar eastern arctic estimate from april 1982, but comparable to a western arctic estimate from april 1975. a multi decadal summary of arctic soundscape studies demonstrates that the estimated ambient noise levels depend strongly on experimental and environmental parameters and that there is not a significant trend in ambient noise levels among the studies examined .we would like to thank john colosi for providing the microcat data , john kemp and the whoi mooring operations and engineering group for their assistance , and hanne sagen and the norwegian coast guard for assistance in recovering the array mooring .this work is supported by the office of naval research under award numbers n00014 - 13 - 1 - 0632 and n00014 - 12 - 1 - 0226 .any opinions , findings , and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the office of naval research . 
daily sea ice concentration , defined as the areal percentage of satellite imagery above a certain brightness level , was obtained from the advanced microwave scanning radiometer-2 ( amsr-2 ) 89-ghz channel satellite dataset, provided in a 4 km x 4 km gridded format from the institute of environmental physics , university of bremen , germany ( fig .[ fig : environmental ] ) .the sea ice concentration ranges from 0 ( no ice ) to 100 ( solid ice ) .the georeferenced latitude and longitude grids were transformed into regular latitude and longitude grids with 0.1 resolution and the ice concentration was interpolated to the array location .n - s and e - w wind velocity components were obtained from the european center for medium - range weather forecasts ( ecmwf ) ( fig .[ fig : environmental ] ) .the era - interim reanalysis from which the data was drawn uses 30 minute time steps and a spectral t255 geographic resolution corresponding to 79 km spacing on a reduced gaussian grid. local wind velocity for the 10 m isobar at the array location was extracted .r. lindsay and a. schweiger , sea ice thickness loss determined using subsurface , aircraft , and satellite observations, _ t.c . _ * 9 * , 269283 ( 2015 ) .i. dyer , song of sea ice and other arctic ocean melodies, in _ arctic technology and policy _ , edited by i. dyer and c. chryssostomidis ( hemisphere , new york , 1984 ) , pp .w. c. cummings , o.i .diachok , and j. d. shaffer , transients of the marginal sea ice zone : a provisional catalog, _ naval research laboratory memorandum report 6408 _ , dtic number ada214142 . naval research lab , washington , d.c .veitch and a.r.wilks , characterization of arctic undersea noise, _ j. acoust .* 77 * , 989999 ( 1985 ) .kinda , y. simard , c. gervaise , j.i .mars , and l. fortier , underwater noise transients from sea ice deformation : characteristics , annual time series , and forcing in beaufort sea, _ j. acoust .am . _ * 138 * , 20342045 ( 2015 ) .diachok , of sea ice ridges on sound propagation in the arctic ocean, _ j. acoust .* 59 * , 11101120 ( 1976 ) .b.m . buck and j.h .wilson , noise measurements from an arctic pressure ridge, _ j. acoust .* 80 * , 256264 ( 1986 ) .greene and b.m .buck , ocean ambient noise, _ j. acoust .* 36 * , 1218 ( 1964 ) .makris and i. dyer , correlates of pack ice noise, _ j. acoust . soc .* 79 * , 1434 1440 ( 1986 ) .makris and i. dyer , correlates of arctic ice - edge noise, _ j. acoust .am . _ * 90 * , 32883298 ( 1991 ) .kinda , y. simard , c. gervaise , j.i .mars , and l. fortier , - ice ambient noise in the eastern beaufort sea , canadian arctic , and its relation to environmental forcing, _ j. acoust .* 134 * , 7787 ( 2013 ) .o.i . diachok and r.s .winokur , variability of underwater ambient noise at the arctic ice - water boundary, _ j. acoust .* 55 * , 750753 ( 1974 ) . f. geyer , h. sagen , g. hope , m. babiker , and p. f. worcester , and quantification of soundscape components in the marginal ice zone, _ j. acoust .* 139 * , 18731885 ( 2016 ) .o. diachok , hydroacoustics, , _ cold reg ._ * 2 * , 1861201 .d. tollefsen and h. sagen , exploration noise reduction in the marginal ice zone, _ j. acoust .* 136 * , el4752 ( 2014 ) .e. h. roth , j. a. hildebrand , s. m. wiggins , and d. ross , ambient noise on the chuckchi sea continental slope from 2006 - 2009, _ j. acoust .* 131 * , 104110 ( 2012 ) . c. l. berchok , p. j. clapham , j. crance , s. e. moore , j. napp , j. overland , m. wang , p. stabeno , m. guerra , and c. 
clark , acoustic detection and monitoring of endangered whales in the arctic ( beaufort , chukchi ) and ecosystem observations in the chukchi sea : biophysical moorings and climate modeling, annual report 2012 , contract m09pc00016 ( akc 083 ) , bureau of ocean energy management , regulation , and enforcement , anchorage , alaska . j. k. lewis and w. w. denner , ambient noise in the beaufort sea : seasonal space and time scales, _ j. acoust .* 82 * , 988997 ( 1987 ) .jensen , w.a .kuperman , m.b . porter , and h. schmidt , _ computational ocean acoustics _( springer , new york , new york , 2011 ) , pp . 2728 . y. xie and d. m. farmer , sound of ice break - up and floe interaction, _ j. acoust_ * 91 * , 22152231 ( 1992 ) .stein , of a few ice event transients, _ j. acoust .soc . am . _* 83 * , 617622 ( 1988 ) .moore , k.m .stafford , h. melling , c. berchok , .wiig , k.m .kovacs , c. lydersen , and j. richter menge , marine mammal acoustic habitats in atlantic and pacific sectors of the high arctic : year long records from fram strait and the chukchi plateau, _ polar biol .* 35 * , 475480 ( 2012 ) .g. spreen , l. kaleschke , and g.heygster ice remote sensing using amsr - e 89 ghz channels, _ j. geophys .res . _ * 113 * , c02s03 ( 2008 ) .sohn and j.a .hildebrand , earthquake detection in the arctic basin with the spinnaker array, _ b. seismol .* 91 * , 572579 ( 2001 ) . r.h .mellen and h.w .marsh , sound in the arctic ocean, , accession number ad718140 .navy underwater sound laboratory , new london , connecticut , 1965 .a.r milne and j.h .ganton , noise under arctic sea ice, _ j. acoust .* 36 * , 855863 ( 1964 ) .b. buck , preliminary under ice propagation models based on synoptic ice roughness, _ prl tr30 _ ( may 1981 ) .seattle , washington .dee , et al . , era - interim reanalysis : configuration and performance of the data assimilation system, _ q. j. r. meteorolog. soc . _ * 137 * , 1477870x ( 2011 ) . | the soundscape in the eastern arctic was studied from april to september 2013 using a 22 element vertical hydrophone array as it drifted from near the north pole ( 8923n , 6235w ) to north of fram strait ( 8345n 428 w ) . the hydrophones recorded for 108 minutes on six days per week with a sampling rate of 1953.125 hz . after removal of data corrupted by nonacoustic flow related noise , 19 days throughout the transit period were analyzed . major contributors include broadband and tonal ice noises , seismic airgun surveys , and earthquake phases . statistical spectral analyses show a broad peak at about 15 hz similar to that previously observed and a mid frequency decrease at f . the median noise levels reflect a change in dominant sources , with ice noises ( 200500 hz ) decreasing and seismic airgun surveys ( 1050 hz ) increasing as the array transited southward . the median noise levels were among the lowest of the sparse reported observations in the eastern arctic , but comparable to the more numerous observations of western arctic noise levels . |
polar codes provably achieves the capacity of channels by using the ( low complexity ) decoding algorithm , in the limit of infinite block length . at short block lengths polar codes under decoding tend to exhibit a poor performance . in was suggested that such a behavior might be due , on one hand , to an intrinsic weakness of polar codes and , on the other hand , to the sub - optimality of decoding w.r.t . decoding .an improved decoding algorithms were proposed in , while the structural properties of polar codes ( e.g. , their distance properties ) were studied , among others , in .the minimum distance properties of polar codes can be improved by resorting to concatenated schemes like the one of , where the concatenation of polar codes with an outer code is considered .this solution , together with the use of the list decoding algorithm of , allows short polar codes to become competitive against other families of codes .a theoretical characterization of the performance of concatenated -polar codes is still an open problem .furthermore , for a fixed code length , a concatenated scheme can be realized with several combinations of the component codes parameters ( e.g. , one may choose various polynomials and polar codes designed for different target signal - to - noise ratios ) . in this paper, we provide an analysis of the concatenation of polar codes with binary cyclic outer codes .the analysis is carried out by introducing concatenated ensembles and by deriving by following the well - known uniform interleaver approach . in the analysis , the knowledge of the of the inner polar codeis required . however , the polar code calculation through analytical methods is still an unsolved problem .hence , we restrict our attention to short , high - rate polar codes for which the problem can be solved through a pragmatic approach . more precisely , we consider the dual code of the selected polar code and then we find its by listing the codewords .subsequently , by using the generalized macwilliams identity , we obtain the of the original polar code .the of the outer cyclic code , instead , is computed by following the method presented in . by adopting the uniform interleaver approach , we subsume the existence of an interleaver between the inner and the outer code , andobtain the average performance of an ensemble composed by the codes obtained by selecting all possible interleavers .our analysis shows that the performance of the concatenated scheme with and without interleaver ( as proposed in ) may differ substantially .similarly , by considering both and outer codes , we show that the choice of the outer code plays an important role in the short block length regime .the paper is organized as follows .notation and definitions are introduced in section [ sec : definitions ] .the distance spectrum analysis of the concatenated scheme is discussed in section [ sec : upperunionbound ] .numerical results are reported in section [ sec : numericalresults ] .finally , section [ sec : conclusion ] concludes the paper .let be the hamming weight of a vector .we denote as and the outer cyclic code length and dimension , respectively , while and identify the same parameters for the inner polar code .therefore , the code rates of the outer and inner code are and , respectively .the two codes can be serially concatenated on condition that , thus the overall code rate of the concatenated code is .given a binary linear code , its is defined as where is the number of codewords with . 
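as a small illustration of the weight enumerator just defined, the sketch below computes the multiplicities A_w for a short binary linear code by direct enumeration of all 2^k messages. this brute-force route is exactly what becomes too expensive for larger dimensions, which is why the paper resorts to the dual code and the generalized macwilliams identity for its high-rate component codes; the (7,4) hamming generator matrix used in the example is illustrative and is not one of the codes studied in the paper.

```python
from itertools import product
import numpy as np

def weight_enumerator(G):
    """Weight enumerator A_w of the binary linear code generated by the rows
    of G over GF(2): A_w counts the codewords of Hamming weight w.  Direct
    enumeration over all 2^k messages, so only practical for small k."""
    G = np.asarray(G) % 2
    k, n = G.shape
    A = np.zeros(n + 1, dtype=np.int64)
    for msg in product((0, 1), repeat=k):
        codeword = np.mod(np.asarray(msg) @ G, 2)
        A[int(codeword.sum())] += 1
    return A

# example: a systematic generator matrix of the (7,4) Hamming code
G_hamming = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
print(weight_enumerator(G_hamming))   # expected: A_0=1, A_3=7, A_4=7, A_7=1
```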
in this workwe focus on systematic polar codes ( the reason of this choice will be discussed later ) so the first bits of a codeword coincide with the information vector , yielding with being the parity vector .the of a code is where is the multiplicity of codewords with and .the enumeration of the codeword weights entails a large complexity even for small code dimensions . in order to overcome this problem and obtain the of the considered polar codes, we focus on short , high - rate polar codes and we exploit the generalized macwilliams identity .this approach was followed also for cyclic codes , for example , in to compute the of several .denote by the dual code of .given the dual code , we can express the original code as where is the cardinality of the dual code .when the is of interest , a significant reduction of the computational cost can be achieved by considering systematic codes .for such reason , in this work we have used only systematic inner polar codes . in the case of a systematic code , it is convenient to derive the from the defined as where is the multiplicity of codewords with and , with and .hence , starting from the of the dual code , we have then , the is obtained as an ensemble of binary linear codes , the expected of a random code under decoding over a with erasure probability can be upper bounded as \leq p_b^{(s)}(n , k,\epsilon ) \nonumber \\ & + \sum_{e=1}^k \binom{n}{e } \epsilon^e(1-\epsilon)^{n - e } \min\left\{1,\sum_{\omega=1}^e\binom{e}{n}\frac{\bar{a}_\omega}{\binom{n}{\omega}}\right\ } \label{eq : upperunionbound}\end{aligned}\ ] ] where $ ] is the average multiplicity of codewords with and represents the of an ideal code , with parameters and .the in is applicable to every code ensemble whose expected is known . in order to use , the of the concatenated code ensemble must be known .when dealing with concatenated codes , it is commonplace to consider a general setting including an interleaver between the inner and the outer codes .the concatenated code ensemble is hence given by the codes obtained by selecting all possible interleavers . the special case without any interleaver can then be modeled as an identity interleaver . from , the of a concatenation formed by an inner polar code and an outer cyclic code can be obtained from the cyclic code and the polar code as where is the weight enumerator of the outer code and is the input - output weight enumerator of the inner code ( we remind that the average multiplicities resulting from are , in general , real numbers ) .the ensemble contains the codes generated by all possible interleavers .thus , also bad codes ( i.e. , characterized by bad error rate performance ) belong to the ensemble .it is clear that the bad codes adversely affect the obtained through causing a too pessimistic estimate of the error probability obtained through , with respect to that achieved by properly designed codes .a simple way to overcome this issue is to divide into the bad and good code subsets , and then derive the only of good codes through the expurgated .in fact , , where and denote the good and the bad codes ensemble , respectively . in this work we have assumed , hence at least one code belongs to the good codes ensemble .therefore , when the expurgation method is adoptable ( i.e. 
, the first term of the has a codewords multiplicity less than ) , through and the average performance of the good codes subset is derived .studying the dual codes and exploiting macwilliams identities allow considerable reductions in complexity of exhaustive analysis as long as the original code rate is sufficiently large .this will be the case for the component codes considered next , which are characterized by .therefore , in our case and can effectively be exploited to calculate and in .in this section we consider several examples of polar - cyclic concatenated codes and assess their performance through the approach described in the previous sections .our focus is on short , high rate component codes , which allow to perform exhaustive analysis of their duals in a reasonable time .our results are obtained considering a with erasure probability . as known , in the polar code construction a fixed value of the error transition probability is considered .therefore , as usual in literature , in each of the following examples the polar code is designed by using .all performance curves provided next are obtained through .we consider codes with bits and a or a outer code , with according to the polar code dimension . in this work ,we have used a and a with the following generator polynomials : * : ; * : . instead ,regarding the code , we study two codes that have the same redundancy as the -8 and -16 codes . thus , for the same value , a performance comparison of the results achieved by using a or a outer code is feasible and fair . in order to keep the selected polar codes unchanged ,we have considered shortened codes . for the case without interleaver considered in ,the generator matrix of the concatenated code can be obtained from the cyclic and polar code generator matrices and , respectively , as when we instead consider the more general case with an interleaver between the inner and outer codes , we have where is an permutation matrix representing the interleaver . considering all codes in ,leads to their and hence to the average performance in terms of , according to the uniform interleaver approach . in order to have an idea of the gap between this average performance and those of single codes in the ensemble , in each of the following figures we include the performance of codes randomly picked in . without changing the outer and inner code ,this can be easily done by introducing a random interleaver between the cyclic code and the polar code in the concatenated scheme .hence , in this case , the matrix in is a random permutation matrix . for readability reasons , in addition to the solution without interleaver , in each of the following examples , we have considered only random interleavers ; however , the obtained results allow to address some general conclusions .moreover , when the results of the expurgated differs from that achieved by the , its performance in terms of is also considered .the of the considered polar code alone is also included as a reference .clearly , the comparison between the performance of the polar code and those of the concatenated schemes is not completely fair at least because of the different code rates ; however , in this way , the performance gain achieved through concatenation can be pointed out .figures [ fig : polar64 - 48-crc8 ] and [ fig : polar64 - 48-bch48 - 40 ] show the of the concatenated scheme , with and without random interleaver , formed by a polar code with and an outer cyclic code with ( i.e. , ) in terms of . 
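before turning to the individual cases, it may help to sketch how the ensemble-average curves shown in these figures are obtained. two ingredients are needed: the uniform-interleaver combination of the outer weight enumerator with the inner input-output weight enumerator, and the union bound for maximum-likelihood decoding over the binary erasure channel quoted earlier in this section. the exact summation limits in the printed bound are garbled, so the sketch below uses the common form in which the covering probability of a weight-w codeword by e erasures is C(e,w)/C(n,w), the ideal-code (singleton-type) term covers e > n-k, and the union term runs up to e = n-k; these limits are therefore an inference, and the enumerators at the end are fictitious toy numbers that only demonstrate the calling convention, not those of the codes studied in the paper.

```python
from math import comb

def uniform_interleaver_wef(outer_wef, inner_iowef, k_inner):
    """Ensemble-average weight enumerator of the serial concatenation:
    A_bar[w] = sum_l outer_wef[l] * inner_iowef[l][w] / C(k_inner, l),
    where l is the weight of the outer codeword that enters the inner
    encoder (input length k_inner) through the uniform interleaver."""
    avg = {}
    for l, a_out in outer_wef.items():
        if l == 0:
            continue
        denom = comb(k_inner, l)
        for w, a_in in inner_iowef.get(l, {}).items():
            avg[w] = avg.get(w, 0.0) + a_out * a_in / denom
    return avg

def bec_union_bound(avg_wef, n, k, eps):
    """Union-type upper bound on the block error probability of ML decoding
    over a BEC with erasure probability eps (see the bound quoted above)."""
    # ideal-code term: an (n, k) MDS-like code fails only for e > n - k erasures
    p = sum(comb(n, e) * eps**e * (1 - eps)**(n - e)
            for e in range(n - k + 1, n + 1))
    # union term: probability that an erasure pattern covers some codeword
    for e in range(1, n - k + 1):
        inner = sum(a * comb(e, w) / comb(n, w)
                    for w, a in avg_wef.items() if 1 <= w <= e)
        p += comb(n, e) * eps**e * (1 - eps)**(n - e) * min(1.0, inner)
    return p

# fictitious toy enumerators, used only to demonstrate the calling convention
outer = {0: 1, 2: 3, 4: 3, 6: 1}
inner = {2: {4: 3.0}, 4: {6: 3.0}, 6: {8: 1.0}}
avg = uniform_interleaver_wef(outer, inner, k_inner=8)
print(bec_union_bound(avg, n=16, k=6, eps=0.1))
```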
in fig . [ fig : polar64 - 48-crc8 ] and fig . [ fig : polar64 - 48-bch48 - 40 ] a -8 and a ( 48 , 40 ) outer code is considered , respectively , the latter obtained by shortening the ( 255 , 247 ) code . in both examples , the result obtained through the corresponds to an average behavior , while some interleaver configurations achieve a smaller . this trend is due to the different minimum distance of the codes . the results in figs . [ fig : polar64 - 48-crc8 ] and [ fig : polar64 - 48-bch48 - 40 ] are very similar , showing that , for this particular case , there is no substantial difference in performance arising from the different type of outer code . this conclusion is also supported by the results in tab . [ tab : minimumdistance ] , where the number of concatenated schemes with random interleaver corresponding to each minimum distance value is reported . from these figures we can observe the performance gain introduced by the concatenated scheme with respect to the ( 64,48 ) polar code used alone , which has . in both examples the and the concatenated code without interleaver have and , respectively . [ figure : concatenated code over the channel under decoding ; the performance of the ( 64,48 ) polar code alone is also reported . ] [ table [ tab : minimumdistance ] : number of concatenated codes with random interleaver for a given minimum distance value . ] in fig . [ fig : polar64 - 48-crc16 ] and fig . [ fig : polar64 - 48-bch48 - 32 ] the of the concatenated codes , with and without interleaver , composed of a polar code with and an outer code with ( i.e. , ) in terms of is plotted . in figs . [ fig : polar64 - 48-crc16 ] and [ fig : polar64 - 48-bch48 - 32 ] a -16 and a ( 48 , 32 ) outer code is considered , respectively , the latter obtained by shortening the ( 255 , 239 ) code . unlike in the previous figures , the obtained with the expurgated is now available . as in figs . [ fig : polar64 - 48-crc8 ] and [ fig : polar64 - 48-bch48 - 40 ] , the curve obtained through the describes the ensemble average performance well , while we see that the curve corresponding to the expurgated belongs to the group of best codes . also in these cases the performance gap between the ( 64,48 ) polar code alone and the concatenated codes is remarkable . in both examples the has , while the expurgated and the concatenated scheme without interleaver have . however , from fig . [ fig : polar64 - 48-bch48 - 32 ] , we can observe that , contrary to fig . [ fig : polar64 - 48-crc16 ] , all curves of concatenated codes are very close . in fact , for the case in fig .
[ fig : polar64 - 48-bch48 - 32 ] only the results in , but with a codeword multiplicity equal to 0.0336 , whereas the realizations of concatenated codes have , as shown in tab . [ tab : minimumdistance ] . therefore , differently from the , in this case the use of a code is able to increase the minimum distance of the concatenated code ( also for the solution without interleaver ) ; thus , for this specific case , the code should be preferred to the code . [ figure : concatenated code over the channel under decoding ; the performance of the ( 64,48 ) polar code alone is also reported . ] in all the previous figures the curve of the concatenated scheme without interleaver proposed in falls within the group of best performing codes . this may lead to the conclusion that this configuration always produces a good result ; in reality this trend is not preserved for every choice of the code parameters . an example of the latter kind is shown in fig . [ fig : polar64 - 56-crc16 ] , where a polar code with and a -16 code ( i.e. , and ) are considered . from the figure we observe that , in this case , the introduction of a random interleaver can improve on the scheme without interleaving . also for this example , tab . [ tab : minimumdistance ] summarizes the number of concatenated schemes with interleaver for each value of the code minimum distance . in this case the polar code has , while both the and the concatenated scheme without interleaver have . instead , the expurgated belongs to the group of good codes with . we have found a similar result also by using the -8 code in place of the -16 , but it is omitted for the sake of brevity . so , these counterexamples ( others can be found ) clearly demonstrate that the use of a selected interleaver may be beneficial from the error rate viewpoint . [ figure : concatenated code over the channel under decoding ; the performance of the ( 64,56 ) polar code alone is also reported . ] in this paper , the analysis of the performance of the concatenation of a short polar code with an outer binary linear block code is addressed from a distance spectrum viewpoint . the analysis is carried out for the case where an outer cyclic code is employed together with an inner systematic polar code . by introducing an interleaver at the input of the polar encoder , we show that remarkable differences in the block error probability at low erasure probabilities can be observed for various permutations . the variations are due to the change in the overall concatenated code minimum distance ( and minimum distance multiplicity ) induced by the choice of the interleaver . bounds on the achievable error rates under maximum likelihood decoding are obtained by applying the union bound to the ( expurgated ) average weight enumerators . the results point to the need for careful optimization of the outer code , at least in the short block length regime , to attain low error floors . e. arikan , `` channel polarization : a method for constructing capacity - achieving codes for symmetric binary - input memoryless channels , '' _ ieee trans . inf . theory _ , vol . 55 , no . 7 , pp . 3051 - 3073 , jul . 2009 . m. bardet , v. dragoi , a. otmani , and j. p. tillich , `` algebraic properties of polar codes from a new polynomial formalism , '' in _ proc . ieee int . symp . inf . theory ( isit ) _ , barcelona , spain , jul . 2016 , pp . 230 - 234 . a. bhatia , v. taranalli , p. h. siegel , s. dahandeh , a. r. krishnan , p. lee , d. qin , m. sharma , and t. yeo , `` polar codes for magnetic recording channels , '' in _ proc . ieee inf . theory workshop ( itw ) _ , jerusalem , israel , apr . 2015 , pp . 1 - 5 .
j. k. wolf and r. d. blakeney , `` an exact evaluation of the probability of undetected error for certain shortened binary crc codes , '' in _ proc . 21st ieee military commun . conf . _ , vol . 1 , san diego , ca , usa , oct . 1988 , pp . 287 - 292 . f. chiaraluce and r. garello , `` extended hamming product codes analytical performance evaluation for low error rate applications , '' _ ieee trans . on wireless commun . _ , vol . 3 , no . 6 , pp . 2353 - 2361 , nov . | in this paper , the analysis of the performance of the concatenation of a short polar code with an outer binary linear block code is addressed from a distance spectrum viewpoint . the analysis targets the case where an outer cyclic code is employed together with an inner systematic polar code . a concatenated code ensemble is introduced by placing an interleaver at the input of the polar encoder . the introduced ensemble allows deriving bounds on the achievable error rates under maximum likelihood decoding , by applying the union bound to the ( expurgated ) average weight enumerators . the analysis suggests the need for careful optimization of the outer code to attain low error floors . |
information about the health outcomes in many epidemiological studies is obtained from multiple data sources or over a certain time - period with multiple observations . the multiple data sources provide multiple measures of the same underlying variable , measured on a similar scale . as an example , subjects are chosen for a high - blood pressure study at the age of and are asked about their diet , smoking and drinking habits , and their blood pressures are measured . the same subjects are monitored again over the course of time on the basis of the same choice of variables as before . this illustrates the standard data collecting exercise to measure health risk . once such data is available we can begin the risk modelling to ascertain the factors which contribute towards high blood pressure and those that contribute towards low blood pressure . to put it mathematically , given subjects and data sources or time - points the data is collected as a dimensional vector of covariates , where and . given such a vector the outcome is reported as a variable for the subject and from the data source . then we can construct a dimensional vector as . in a dimensional space the above vector for a given value of is a point . the total number of points in the dimensional space equals . these points follow a certain distribution which is a priori unknown . in the conventional analysis the outcome variables are treated as independent variables and nothing is assumed about the joint distribution in the dimensional space . the assumption about independence is not correct , but as we will see in section [ [ tradition ] ] this does not affect the statistical analysis that we intend to carry out . we propose a novel analysis tool to carry out health risk modelling by transforming the dimensional a priori unknown density to that of a gaussian density whilst keeping the shannon entropy constant . to do so we transform the number of dimensional vectors in a basis set consisting of divergence - free vector fields . the condition that the basis set can only consist of divergence - free vector fields enforces the entropy conserving condition . the entropy conserving condition can be enforced by having volume preserving maps , and our choice of the basis set represents a flow of an incompressible fluid and hence is a volume preserving map . to determine the coefficients of these basis vectors such that the dimensional density is a gaussian , the karplus theorem is used . we will demonstrate in section [ [ vecs ] ] how this theorem allows for the determination of the basis coefficients in such a way that the dimensional density is transformed to a gaussian . the paper is organized as follows : in section [ [ tradition ] ] the traditional approach to model health risk is reviewed and a novel approach is proposed . in section [ [ basis ] ] the construction of the basis set consisting of high dimensional vector fields is shown . in section [ [ vecs ] ] the question of determining the coefficients of the basis set is settled . the algorithm and program structure are discussed in section [ [ algo ] ] .
in section [ [ test ] ] a test case is computed to see how the transformation works in practice . finally we conclude the work done so far and make proposals for future work . consider a trial to model high blood pressure with subjects monitored over a time - period . the response obtained from the subject can be written as a dimensional vector as . traditionally a joint distribution for the subjects is not specified . instead a working generalized linear model ( glm ) is used to describe the marginal distribution of , as in liang and zeger ( 1986 ) , \label{marginal} . if the outcome is a binary random variable then the parameters for the above exponential family are \theta_{ij}=\log\left[ \frac{\mu_{ij}}{1-\mu_{ij}} \right] , \quad b(y_{ij},\theta_{ij})=0 . the probability of a favorable outcome , if is a binary random variable , can be modelled via a logit function as \mathrm{logit}(\mu_{ij}) = \vec{x}_{ij}\vec{\beta} , \label{logit} where represents the favourable outcome , i.e. , low risk of high blood pressure , whereas corresponds to high risk of high blood pressure . given eq [ [ marginal ] ] the log - likelihood function can be written as . to determine the regression parameters we differentiate the log - likelihood with respect to ; this gives the following equation to estimate . this is the traditional approach to determine the regression parameters that help us evaluate the factors which contribute towards high or low health risks given the data . equation [ [ assump ] ] was derived assuming that the outcome is a binary random variable ; however , a similar equation can be derived when the outcome is of any other type . in this approach it is assumed that all the are independent observations . this assumption , although not correct , yields estimates for which are valid , but their variances are not . however , using techniques such as the empirical variance estimator , valid standard errors can be obtained . having reviewed the traditional approach we now present our idea . as remarked earlier the joint distribution of the subjects is not specified in the dimensional space as this is a priori unknown . however , if the unknown distribution is transformed to that of a dimensional gaussian then a valid analysis tool can be developed . we have developed an algorithm and a program which perform this transformation in an entropy conserving way , i.e. , the shannon entropy is preserved during the transformation . preserving the shannon entropy requires transforming the high dimensional vectors in a basis set consisting of divergence - free vector fields , as that represents a volume preserving map and thereby preserves the shannon entropy . such a construction of an orthocomplete basis set is available in any dimension . we use this mathematical construct to transform the number of dimensional vectors . to determine the basis coefficients such that the dimensional density is transformed to that of a gaussian , the karplus theorem discussed in section [ [ vecs ] ] is invoked . once the coefficients are determined the dimensional joint distribution of the subjects is a gaussian having the same shannon entropy as the starting distribution . this has been implemented in entra . once the joint distribution for the subjects is known , the log - likelihood function can be written for such a distribution and the regression parameters determined thereby .
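as a small illustration of the traditional marginal approach reviewed above , the sketch below fits the logit model by newton - raphson under the working independence assumption ; the simulated covariates , coefficients and sample sizes are arbitrary placeholders and the code is not part of entra .

import numpy as np

rng = np.random.default_rng(0)

# simulated data: n subjects, j observations each, p covariates (placeholders)
n_subjects, j_sources, p = 200, 5, 3
x = rng.normal(size=(n_subjects * j_sources, p))
true_beta = np.array([0.8, -0.5, 0.3])
prob = 1.0 / (1.0 + np.exp(-(x @ true_beta)))
y = rng.binomial(1, prob)                       # binary outcomes y_ij

# newton-raphson for the logistic log-likelihood under the working
# independence assumption of the traditional approach
beta = np.zeros(p)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(x @ beta)))      # fitted means mu_ij
    weights = mu * (1.0 - mu)                   # bernoulli variance function
    score = x.T @ (y - mu)                      # score vector
    hessian = x.T @ (x * weights[:, None])      # observed information
    step = np.linalg.solve(hessian, score)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("estimated beta:", beta)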
in the next section we show the construction of the divergence - free vector fields used in our program entra . a mathematically rigorous construction of divergence - free smooth vector fields in any dimension was provided in . we use it to construct a basis set consisting of dimensional divergence - free vector fields . to do so we define the following matrix valued operator . here the first term is a dimensional laplacian operator , being the identity matrix , and the second term consists of the column and row vectors of the gradient operator in dimensions . this operator acts on a smooth scalar function which we construct from a dimensional vector as , where the symbol is the euclidean distance between two dimensional vectors . is chosen to be , where is the spacing between the basis vectors . the vector is chosen as a constant , where goes from . now we define a matrix valued function by applying the operator in eq [ [ operator ] ] to the scalar field in eq [ [ scalar ] ] , obtaining a matrix whose entries all contain the factor e^{-\lVert \vec{x}-\vec{x}_l \rVert^2/(2\sigma^2 ) } . it was proven rigorously in that the columns of the above matrix consist of divergence - free vector fields , so for a given choice of centre we therefore obtain a dimensional vector field . for a given centre there are number of dimensional mutually orthogonal basis vectors . hence for each centre we have a complete basis set . due to such a construction the basis vectors enforce the divergence - free condition strictly . each vector has a unique coefficient attached to it , . in the next section we demonstrate this for a simple case . to demonstrate what the vector field looks like we take a simple 2d case and define the scalar operator in eq [ [ scalar ] ] as . the matrix valued function becomes . the vectors and defined from the columns of the above matrix as below are then divergence free , and any linear combination of these divergence - free fields is also divergence free . we plot these fields in fig . [ div ] for and . [ figure [ div ] : the two divergence - free basis fields for the 2d case . ] from the above plot we can see that for the case we have two mutually orthogonal divergence - free basis vectors which constitute the complete basis set in two dimensions . the result holds in general for any dimension . in the dimensional space of the outcome variable the subjects are represented by points . these points are arranged according to a certain density . the karplus theorem states that for a given covariance the gaussian distribution maximizes the entropy . mathematically this can be written as s[\rho_g]-s[\rho]\geq 0 , \label{basic} where is the gaussian density . the covariance matrix is defined as . here the symbol denotes the ensemble average over the subjects , implying that is a dimensional matrix , and denotes a dimensional vector or a point in the configuration space .
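the 2d construction described above can be checked numerically ; the sketch below evaluates the two columns of the matrix valued field for a gaussian bump and verifies that the divergence of the first field vanishes up to finite - difference error ( the second can be checked analogously ) . the centre , width and grid used here are arbitrary illustrative choices .

import numpy as np

def gaussian_bump(x, y, cx, cy, sigma):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def div_free_fields(x, y, cx=0.0, cy=0.0, sigma=1.0):
    # columns of (laplacian * identity - grad grad^T) applied to a gaussian
    # bump; both returned 2d fields are divergence free by construction.
    phi = gaussian_bump(x, y, cx, cy, sigma)
    dx, dy = x - cx, y - cy
    s2 = sigma ** 2
    phi_xx = (dx ** 2 / s2 ** 2 - 1.0 / s2) * phi
    phi_yy = (dy ** 2 / s2 ** 2 - 1.0 / s2) * phi
    phi_xy = (dx * dy / s2 ** 2) * phi
    u = (phi_yy, -phi_xy)   # first column of the matrix valued function
    v = (-phi_xy, phi_xx)   # second column
    return u, v

# finite-difference check of the divergence of the first field
h = 1e-4
xs, ys = np.meshgrid(np.linspace(-2.0, 2.0, 21), np.linspace(-2.0, 2.0, 21))
(u1_px, _), _ = div_free_fields(xs + h, ys)
(u1_mx, _), _ = div_free_fields(xs - h, ys)
(_, u2_py), _ = div_free_fields(xs, ys + h)
(_, u2_my), _ = div_free_fields(xs, ys - h)
divergence = (u1_px - u1_mx) / (2 * h) + (u2_py - u2_my) / (2 * h)
print("max |divergence| of the first basis field:", np.abs(divergence).max())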
the equality sign in eq [ [ basic ] ] holds only if the underlying density distribution is a gaussian . now we introduce the transformation that preserves the entropy , i.e. , a transformation for which s[\rho ] is left unchanged . with this transformation we want to deform the density towards a gaussian density ; this then becomes the following minimization problem over , where is the group of all smooth entropy preserving transforms . since the transformation leaves the entropy unchanged , and as the entropy of a gaussian density is a monotonically increasing function of the determinant of its covariance matrix , to solve the above minimization problem we have to minimize the determinant of the covariance matrix of the gaussian density . since by the karplus theorem the covariance of is the same as that of , we can use the covariance in eq [ [ covar ] ] and write the above minimization problem as . as a consequence of the above corollary , if we have a basis set consisting of divergence - free vector fields under which the dimensional vectors for the subjects are transformed , then the basis coefficients can be determined by minimizing the determinant of the covariance matrix of the transformed vectors with respect to the basis coefficients . in the next section we describe the construction of the basis set consisting of divergence - free smooth vector fields . this construction is implemented in the program package entra , which i have developed . given subjects , the outcome variable for each subject is a dimensional vector [ [ master ] ] . we compute the covariance matrix eq [ [ covar ] ] , which is a dimensional matrix . the matrix is then diagonalized by an orthogonal transformation . the covariance matrix can then be written as , with being the eigenvalue matrix and the columns of matrix being the eigenvectors of . we construct a dimensional vector by appending all the number of dimensional vectors and label it as the trajectory . to transform the trajectory to principal coordinates , where the mean of the trajectory is centered at , we project the eigenvectors onto the trajectory to get the mean - centered trajectory . for the trajectory we choose two dimensional vectors and and transform them in the vector field shown before ; the coefficients are chosen by minimizing the determinant of the matrix with respect to the basis coefficients . the minimization is performed via the conjugate gradients method . the process is repeated for all the number of dimensional vectors . this in the end yields a trajectory which has the least determinant of the covariance matrix , while the underlying density has the same shannon entropy as that of our starting system . the classes that build up the core of the program are entra , trajectory and grid . objects of class trajectory represent real - valued arrays . the arrays are stored as high dimensional vectors and the dimensionality of the vectors is defined by the grid class . the grid class also defines the number and spacing of the basis vectors . the methods provided by the trajectory and grid classes allow one to initialize the corresponding arrays , to manipulate them , and to store ( load ) them to ( from ) files . to start using the program a few parameters have to be provided ; these are :
long nsources = j ;
long nsubjects = n ;
double deltx = spacing between basis functions ;
long ngpsx = number of basis functions = l ;
long maxiter = maximum iterations to find basis coefficients ;
the goal of this example is to generate a trajectory having a random underlying density and then to transform it towards a trajectory having a gaussian underlying density .
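before turning to the test case , the core idea of the procedure , minimizing the covariance determinant over an entropy preserving family of maps , can be illustrated compactly ; the sketch uses a toy volume preserving shear map and an off - the - shelf nelder - mead minimizer instead of the divergence - free basis expansion and conjugate gradients used in entra , so all names , sizes and data are placeholders .

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# toy data: n points in d dimensions with a deliberately non-gaussian density
n, d = 1000, 4
data = rng.normal(size=(n, d))
data[:, 0] += 0.6 * data[:, 1] ** 2            # nonlinear dependence

def shear_transform(points, coeffs):
    # volume preserving triangular map: only the first coordinate is shifted
    # by a function of the remaining ones, so the jacobian determinant is 1
    # and the shannon entropy is unchanged.  this is a stand-in for the
    # divergence-free basis expansion used by entra.
    out = points.copy()
    rest = points[:, 1:]
    out[:, 0] += rest @ coeffs[: d - 1] + (rest ** 2) @ coeffs[d - 1:]
    return out

def log_det_covariance(coeffs):
    transformed = shear_transform(data, coeffs)
    _, logdet = np.linalg.slogdet(np.cov(transformed, rowvar=False))
    return logdet

start = np.zeros(2 * (d - 1))
result = minimize(log_det_covariance, x0=start, method="Nelder-Mead")
print("log det of covariance before:", log_det_covariance(start))
print("log det of covariance after :", result.fun)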
to generate a random trajectory we use the utility of eigen to generate a random matrix of any dimension . the parameters chosen for this test are as follows :
long nsources = 10 ;
long nsubjects = 1000 ;
double deltx = 0.05 ;
long ngpsx = 80 ;
long maxiter = 500 ;
results of the transformation ( after a single iteration cycle ) are shown in fig . [ [ results ] ] . in this example we transform a dimensional vector having configurations . the data is along these axes , which are labelled as . here , i.e. , the component of the dimensional vector . we plot two dimensional subspaces of the original dimensional space along different axes as labelled in fig . [ [ results ] ] . to prove that the transformation is indeed entropy conserving , we computed the subspace entropy via histogramming for the 2d subspaces shown in fig . [ [ results ] ] for the original and the transformed subspaces . the matlab code file along with the output to compute the entropy of the subspaces is provided here :
matlab file to estimate entropy via histogramming for the plot in fig . [ [ results ] ] a :
load simulation_examplehist.dat
x1 = simulation_examplehist(:,8) ;
x2 = simulation_examplehist(:,9) ;
x3 = simulation_examplehist(:,11) ;
x4 = simulation_examplehist(:,12) ;
x11 = [x1;x3] ;
x12 = [x2;x4] ;
x = [x11,x12] ;
plotmatrix(x) ;
defaultn = 500 ;
error(nargchk(1,2,nargin)) ;
if nargin
  n = defaultn ;
end
x = double(x) ;
xh = hist(x(:),n) ;
xh = xh / sum(xh(:)) ;
i = find(xh) ;
h = -sum(xh(i) .* log2(xh(i))) ;
initialentropy = h ;
display(initialentropy) ;
y1 = simulation_examplehist(:,2) ;
y2 = simulation_examplehist(:,3) ;
y3 = simulation_examplehist(:,5) ;
y4 = simulation_examplehist(:,6) ;
y11 = [y1;y3] ;
y12 = [y2;y4] ;
y = [y11,y12] ;
plotmatrix(y) ;
error(nargchk(1,2,nargin)) ;
if nargin
  n = defaultn ;
end
y = double(y) ;
yh = hist(y(:),n) ;
yh = yh / sum(yh(:)) ;
i = find(yh) ;
htwo = -sum(yh(i) .* log2(yh(i))) ;
transformedentropy = htwo ;
display(transformedentropy) ;
entropydifference = initialentropy - transformedentropy ;
display(entropydifference) ;
result of the above file :
run simpleentropy
initialentropy = 8.7636
transformedentropy = 7.5625
entropydifference =
result for the plot fig . [ [ results ] ] b :
run simpleentropy
initialentropy = 8.7770
transformedentropy = 7.6632
entropydifference =
result for the plot fig . [ [ results ] ] c :
run simpleentropy
initialentropy = 8.7754
transformedentropy = 7.6483
entropydifference =
two dimensional histogram plots for the initial and transformed configurations are shown in fig . [ [ results ] ] . now we can also compute the entropy in a higher dimension via histogramming . as the data is 30 dimensional , we now look at the data along the first three axes as seen in fig . [ [ threedorg ] ] ; in this plot we plot the data along one axis versus another as labelled , along with a histogram along each axis . we also plot the same plot for the transformed data as seen in fig . [ [ threedtrans ] ] . we compute the entropy of the three dimensional data using a matlab code similar to the above and its results are shown here :
result for the computation of entropy for the 3d data :
run highdimentropy
initialentropy = 8.8220
transformedentropy = 7.5756
entropydifference =
[ figures [ [ threedorg ] ] and [ [ threedtrans ] ] : data along the first three axes for the original and the transformed configurations ; more details in the text . ]
these results imply that in just a single iteration cycle the unknown configuration space density is transformed to a gaussian density with the entropy being conserved approximately . clearly points are not sufficient to get an accurate entropy estimate and more statistics is required . this will form part of the work to be done . the entra package has been presented . the aim of this package is to perform data transformations on high dimensional data sets as found in epidemiology . with this transformation the underlying high dimensional density function is transformed to a high dimensional gaussian , and due to the nice properties associated with a gaussian distribution the further data analysis can be accomplished more easily than before . proposals for future work are :
1 ) the appropriate choice of the basis set vectors . the number of basis vectors depends on the data and needs to be appropriately estimated beforehand . how exactly that can be achieved needs to be determined .
2 ) building an example with enough statistics to be able to prove that the entropy conservation is maintained to a high degree of accuracy .
3 ) furthermore , developing full file support for epidemiologists to enable them to load their data and get the transformed data .
4 ) also , developing a complete regression analysis to estimate conditional probabilities of the kind in equation [ [ logit ] ] within the ecosystem of entra .
i. andricioaei and m. karplus , on the calculation of entropy from covariance matrices of the atomic fluctuations , j. chem . phys . , 115:6289 - 6292 , 2001 . m. karplus and j. n . kushick , method for estimating the configurational entropy of macromolecules . macromolecules , 14(2):325 - 332 , 1981 . schlitter , j. ( 1993 ) . estimation of absolute entropies of macromolecules using the covariance matrix . chemical physics letters , 215 , 617 - 621 . liang k - y , zeger sl . longitudinal data analysis using generalized linear models . biometrika 1986 ; 73:13 - 22 . | the traditional approach of health risk modelling with multiple data sources proceeds via regression - based methods assuming a marginal distribution for the outcome variable . the data is collected for subjects over a time - period or from data sources . the response obtained from subject is . for subjects we obtain a dimensional joint distribution for the subjects .
in this work we propose a novel approach of transforming any dimensional joint distribution to that of a dimensional gaussian keeping the shannon entropy constant . this is in stark contrast to the traditional approaches of assuming a marginal distribution for each by treating the as independent observations . the said transformation is implemented in our computer package called entra . |
the subject of arithmetic constraints on reals has attracted a great deal of attention in the literature .for some reason arithmetic constraints on integer intervals have not been studied even though they are supported in a number of constraint programming systems .in fact , constraint propagation for them is present in ecl , sicstus prolog , gnu prolog , ilog solver and undoubtedly most of the systems that support constraint propagation for linear constraints on integer intervals . yet , in contrast to the case of linear constraints see notably we did not encounter in the literature any analysis of this form of constraint propagation .in this paper we study these constraints in a systematic way .it turns out that in contrast to linear constraints on integer intervals there are a number of natural approaches to constraint propagation for these constraints . to define them we introduce integer interval arithmetic that is modeled after the real interval arithmetic , see e.g. , are , however , essential differences since we deal with integers instead of reals .for example , multiplication of two integer intervals does not need to be an integer interval . in passing by weshow that using integer interval arithmetic we can also define succinctly the well - known constraint propagation for linear constraints on integer intervals . in the second part of the paperwe compare the proposed approaches by means of a set of benchmarks .we review here the standard concepts of a constraint and of a constraint satisfaction problem .consider a sequence of variables where , with respective domains associated with them .so each variable ranges over the domain . by a _ constraint _ on we mean a subset of . given an element of and a subsequence of we denote by ] denotes .a _ constraint satisfaction problem _ , in short csp , consists of a finite sequence of variables with respective domains , together with a finite set of constraints , each on a subsequence of . we write it as , where and . by a _ solution _to we mean an element such that for each constraint on a sequence of variables we have \in c ] ' for each and the binary function symbol ` ' written in the infix notation .for example {(y^2 \cdot z^4)/(x^2 \cdot u^5 ) } \label{eq : extended}\ ] ] is an extended arithmetic expression . here, in contrast to the above is a term obtained by applying the function symbol ` ' to the variable .the extended arithmetic expressions will be used only to define constraint propagation for the arithmetic constraints .fix now some arbitrary linear ordering on the variables of the language . by a _ monomial _ we mean an integer or a term of the form where , are different variables ordered w.r.t . , and is a non - zero integer and are positive integers .we call then the _ power product _ of this monomial . next , by a _ polynomial _ we mean a term of the form where , at most one monomial is an integer , and the power products of the monomials are pairwise different . finally , by a _ polynomial constraint _we mean an arithmetic constraint of the form , where is a polynomial with no monomial being an integer , , and is an integer .it is clear that by means of appropriate transformation rules we can transform each arithmetic constraint to a polynomial constraint .for example , assuming the ordering on the variables , the arithmetic constraint ( [ eq : arithcon ] ) can be transformed to the polynomial constraint so , without loss of generality , from now on we shall limit our attention to the polynomial constraints . 
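to fix ideas , one possible machine representation of such polynomial constraints stores each monomial as an integer coefficient together with its power product ; this is only an illustrative sketch , not the representation used by any particular solver , and the example constraint is invented .

from dataclasses import dataclass

@dataclass(frozen=True)
class Monomial:
    coefficient: int
    powers: tuple          # tuple of (variable, exponent) pairs, variables ordered

@dataclass
class PolynomialConstraint:
    monomials: list        # monomials with pairwise different power products
    relation: str          # one of "=", "<=", ">="
    constant: int

    def evaluate(self, assignment):
        # value of the left-hand side polynomial for a full integer assignment
        total = 0
        for m in self.monomials:
            term = m.coefficient
            for var, exp in m.powers:
                term *= assignment[var] ** exp
            total += term
        return total

# invented example: 2*x^3*y - 4*z = 10
c = PolynomialConstraint(
    monomials=[Monomial(2, (("x", 3), ("y", 1))), Monomial(-4, (("z", 1),))],
    relation="=",
    constant=10,
)
print(c.evaluate({"x": 1, "y": 7, "z": 1}))    # 2*1*7 - 4*1 = 10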
next ,let us discuss the domains over which we interpret the arithmetic constraints . by an_ integer interval _ , or an _ interval _ in short , we mean an expression of the form \ ] ] where and are integers ; ] the _ empty interval _ and denote it by . finally , by a _ range _we mean an expression of the form where is a variable and is an interval .to reason about the arithmetic constraints we employ a generalization of the arithmetic operations to the sets of integers . for sets of integers we define the following operations : * addition : * subtraction : * multiplication : * division : * exponentiation : for each natural number , * root extraction : {x } : = { \mbox{}},\ ] ] for each natural number .all the operations except division are defined in the expected way .we shall return to it at the end of section [ sec : third ] . at the moment it suffices to note the division operationis defined for all sets of integers , including and .this division operation corresponds to the following division operation on the sets of reals introduced in : for a ( n integer or real ) number and we identify with and with . to present the rules we are interested in weshall also use the addition and division operations on the sets of real numbers .addition is defined in the same way as for the sets of integers , and division is defined above . in is explained how to implement these operations .further , given a set of integers or reals , we define when limiting our attention to intervals of integers the following simple observation is of importance .[ note:1 ] for integer intervals and an integer the following holds : * , are integer intervals .* is an integer interval .* does not have to be an integer interval , even if or .* does not have to be an integer interval .* for each does not have to be an integer interval .* for odd {x} ] is an integer interval or a disjoint union of two integer intervals .for example we have + [ 3 .. 8 ] = [ 5 .. 12],\ ] ] - [ 1 .. 8 ] = [ -5 .. 6],\ ] ] \cdot [ 1 .. 2 ] = { \mbox{}},\ ] ] /[-1 .. 2 ] = { \mbox{}},\ ] ] /[-1 .. 2 ] = { \cal z},\ ] ] ^ 2 = { \mbox{}},\ ] ] {[-30 .. 100 ] } = [ -3 .. 4],\ ] ] {[-100 .. 9 ] } = [ -3 .. 3],\ ] ] {[1 .. 9 ] } = [ -3 .. -1 ] \cup [ 1 .. 3].\ ] ] to deal with the problem that non - interval domains can be produced by some of the operations we introduce the following operation on the subsets of the set of the integers : for example /[-1 .. 2 ] ) = [ -5 .. 5] ] . to define constraint propagation for the arithmetic constraints on integer intervals we shall use the integer set arithmetic , mainly limited to the integer intervals .this brings us to the discussion of how to implement the introduced operations on the integer intervals .since we are only interested in maintaining the property that the sets remain integer intervals or the set of integers we shall clarify how to implement the intersection , addition , subtraction and root extraction operations of the integer intervals and the closure of the multiplication , division and exponentiation operations on the integer intervals .the case when one of the intervals is empty is easy to deal with .so we assume that we deal with non - empty intervals ] , that is and .[ [ intersection - addition - and - subtraction ] ] intersection , addition and subtraction + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + it is easy to see that \cap [ c .. d ] = [ max(a , c) .. min(b , d)],\ ] ] + [ c .. d ] = [ a+c\ .. \ b+d],\ ] ] - [ c .. d ] = [ a - d\ .. 
\ b - c].\ ] ] so the interval intersection , addition , and subtraction are straightforward to implement . [[ root - extraction ] ] root extraction + + + + + + + + + + + + + + + the outcome of the root extraction operator applied to an integer interval will be an integer interval or a disjoint union of two integer intervals .we shall explain in section [ sec : first ] why it is advantageous not to apply to the outcome .this operator can be implemented by means of the following case analysis ._ suppose is odd. then {[a .. b ] } = [ { \left \lceil \sqrt[n]{a } \right \rceil } .. { \left \lfloor \sqrt[n]{b } \right \rfloor}].\ ] ] _ case 2 ._ suppose is even and .then {[a .. b ] } = { \mbox{}}.\ ] ] _ case 3 ._ suppose is even and .then {[a .. b ] } = [ -{\left \lfloor |\sqrt[n]{b}| \right \rfloor } .. -{\left \lceil |\sqrt[n]{a^{+}}| \right \rceil } ] \cup [ { \left \lceil |\sqrt[n]{a^{+}}| \right \rceil } .. { \left \lfloor |\sqrt[n]{b}| \right \rfloor } ] \ ] ] where . [ [ multiplication ] ] multiplication + + + + + + + + + + + + + + for the remaining operations we only need to explain how to implement the closure of the outcome .first note that \cdot [ c .. d ] ) = [ min(a ) ..max(a ) ] , \ ] ] where .using an appropriate case analysis we can actually compute the bounds of \cdot [ c .. d]) ] and ] we distinguish the following cases . _ case 1 ._ suppose ] .then by definition /[c .. d ] ) = { \cal z} ] and .then by definition /[c .. d ] ) = { \mbox{}}\emptyset ] and and .it is easy to see that then /[c .. d ] ) = [ -e .. e],\ ] ] where . for example ,/[-2 .. 5 ] ) = [ -100 .. 100].\ ] ] _ case 4 ._ suppose \{{0}\} ] .for example /[-7 .. 0 ] ) = int([1 .. 100]/[-7 .. -1]).\ ] ] this allows us to reduce this case to case 5 below . _ case 5 ._ suppose ] indirectly .first , observe that we have /[c .. d ] ) { \mbox{}}[{\left \lceil min(a ) \right \rceil } .. { \left \lfloor max(a ) \right \rfloor}],\ ] ] where .however , the equality does not need to hold here . indeed , note for example that /[9 .. 11 ] ) = [ 16 .. 16] ] so that its bounds are actual divisors of an element of ] such that \ \exists u \in { \cal z}\ u\cdot c ' = x ] for which an analogous condition holds ./ [ c .. d ] ) = [ \lceil\textit{min\/}(a)\rceil .. \lfloor\textit{max\/}(a)\rfloor] ] of ( [ eq : extended ] ) we have {int(int(y^2 ) \cdot int(z^4))/int(int(x^2 ) \cdot int(u^5))}),\ ] ] where ranges over , etc .the discussion in the previous subsection shows how to compute given an extended arithmetic expression and the integer interval domains of its variables .the following lemma is crucial for our considerations .it is a counterpart of the so - called ` fundamental theorem of interval arithmetic ' established in .because we deal here with the integer domains an additional assumption is needed to establish the desired conclusion .[ lem : correctness ] let be an extended arithmetic expression with the variables .assume that each variable of ranges over an integer interval .choose for ] : _ linear equality _where * for * - \{j\ } } a_i \cdot x_i)/a_j\big{)}.\ ] ] note that by virtue of note [ note:1 ] - \{j\ } } int(a_i \cdot d_i))/a_j.\ ] ] to see that this rule preserves equivalence suppose that for some we have .then for ] : _ linear inequality_ where * for * - \{j\ } } a_i \cdot x_i)/a_j)\ ] ] to see that this rule preserves equivalence suppose that for some we have .then - \{j\ } } a_i \cdot d_i ] and ] , so we conclude . by reusing ( [ eq : x3 ] ) ,now with the information that ] , from which it follows that . 
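as an aside on implementation , the straightforward parts of the integer interval arithmetic described above translate directly into code ; the sketch below implements intersection , addition , subtraction and the int ( ) closure of multiplication and exponentiation , leaving out division , whose exact closure needs the divisor search discussed in the text . it is a standalone illustration rather than the authors' implementation , and non - empty intervals are assumed where noted .

def intersect(i, j):
    # [a..b] intersected with [c..d]; the empty interval is represented by None
    if i is None or j is None:
        return None
    lo, hi = max(i[0], j[0]), min(i[1], j[1])
    return (lo, hi) if lo <= hi else None

def add(i, j):
    # [a..b] + [c..d] = [a+c .. b+d]
    return (i[0] + j[0], i[1] + j[1])

def sub(i, j):
    # [a..b] - [c..d] = [a-d .. b-c]
    return (i[0] - j[1], i[1] - j[0])

def int_mul(i, j):
    # int([a..b] * [c..d]): the extreme values are attained at products of bounds
    products = [x * y for x in i for y in j]
    return (min(products), max(products))

def int_pow(i, n):
    # int([a..b]^n): smallest interval containing { x^n : x in [a..b] }
    a, b = i
    candidates = [a ** n, b ** n]
    if n % 2 == 0 and a < 0 < b:
        candidates.append(0)
    return (min(candidates), max(candidates))

# small usage examples (non-empty intervals assumed throughout)
print(add((2, 4), (3, 8)))         # (5, 12)
print(sub((3, 7), (1, 8)))         # (-5, 6)
print(int_mul((-3, 4), (1, 2)))    # (-6, 8)
print(int_pow((-3, 5), 2))         # (0, 25)
print(intersect((1, 9), (4, 20)))  # (4, 9)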
in the case of we can isolate it by rewriting the original constraint as from which it follows that , since by assumption .so we could reduce the domain of to ] .this interval reduction is optimal , since and are both solutions to the original constraint .more formally , we consider a polynomial constraint where , no monomial is an integer , the power products of the monomials are pairwise different and is an integer .suppose that are its variables ordered w.r.t . .select a non - integer monomial and assume it is of the form where , are different variables ordered w.r.t . , is a non - zero integer and are positive integers .so each variable equals to some variable in .suppose that equals to .we introduce the following proof rule : _ polynomial equality _ where * for * {int\left(({b- \sigma_{i \in [ 1 .. m ] - \{{\ell}\ } } m_i})/s\right)}\ \right)\ ] ] and to see that this rule preserves equivalence choose some .to simplify the notation , given an extended arithmetic expression denote by the result of evaluating after each occurrence of a variable is replaced by .suppose that .then - \{{\ell}\ } } m'_i,\ ] ] so by the correctness lemma [ lem : correctness ] applied to - \{{\ell}\ } } m'_i ] , ] is not an interval .so to properly implement this rule we need to extend the implementation of the division operation discussed in subsection [ subsec : implementation ] to the case when the numerator is an extended interval .our implementation takes care of this . in an optimized version of this approachwe simplify the fractions of two polynomials by splitting the division over addition and subtraction and by dividing out common powers of variables and greatest common divisors of the constant factors .subsequently , fractions whose denominators have identical power products are added .we used this optimization in the initial example by simplifying to .the reader may check that without this simplification step we can only deduce that . to provide details of this optimization , given two monomials and , we denote by \ ] ] the result of performing this simplification operation on and .for example , ] equals . in this approachwe assume that the domains of the variables , of do not contain 0 .( one can easily show that this restriction is necessary here ) . for a monomial involving variables ranging over the integer intervals that do not contain 0 , the set either contains only positive numbers or only negative numbers . in the first case we write and in the second case we write . the new domain of the variable in the _ polynomial inequality _rule is defined using two sequences and of extended arithmetic expressions such that \ \textrm{and}\ \frac{m_i'}{s_i ' } = - [ \frac{m_i}{s } ] \ \textrm{for .}\ ] ] let - \{{\ell}\ } \} ] .we denote then by the polynomial .the new domains are then defined by { ^{\leq}int\left ( \sigma_{t \in s}\ \frac{p_{t}}{t}\right)}\ \right)\ ] ] if , and by { ^{\geq}int\left(\sigma_{t \in s}\ \frac{p_{t}}{t}\right)}\ \right)\ ] ] if . 
herethe notation used in the correctness lemma [ lem : correctness ] is extended to expressions involving the division operator on real intervals in the obvious way .we define the operator applied to a bounded set of real numbers , as produced by the division and addition operators in the above two expressions for , to denote the smallest interval of real numbers containing that set .in this approach we limit our attention to a special type of polynomial constraints , namely the ones of the form , where is a polynomial in which each variable occurs _ at most once _ and where is an integer .we call such a constraint a _ simple polynomial constraint_. by introducing auxiliary variables that are equated with appropriate monomials we can rewrite each polynomial constraint into a sequence of simple polynomial constraints .this allows us also to compute the integer interval domains of the auxiliary variable from the integer interval domains of the original variables .we apply then to the simple polynomial constraints the rules introduced in the previous section . to see that the restriction to simple polynomial constraints can make a difference consider the constraint in presence of the ranges ] .it is easy to check that the _ polynomial equality _ rule introduced in the previous section does not yield any domain reduction when applied to the original constraint . in presence of the discussed optimization the domain of reduced to ] ) and consequently can conclude that the original constraint has no solution in the ranges ] , ] . the first approach without optimization and the second approach can not find a solution without search . if , as a first step in transforming this constraint into a linear constraint , we introduce an auxiliary variable to replace , we are effectively solving the constraint with the additional range ] , and \cap int([16 .. 16 ] ) = [ 16 .. 16] ] and \cap int([10 .. 10 ] ) = [ 10 .. 10] ] and \cap int([160 .. 160 ] ) = [ 160 .. 160]\emptyset ] and consequently by the _ multiplication _ _ 2 _ rule we could conclude ,z \in [ -8 .. 10 ] \rangle}.\ ] ] so we reached an inconsistent csp while the original csp is consistent . in the remainder of the paper we will also consider variants of this third approach that allow squaring and exponentiation as atomic constraints . for this purposewe explain the reasoning for the constraint in presence of the non - empty ranges and , and for . to this endwe introduce the following two rules in which to maintain the property that the domains are intervals we use the operation of section [ sec : interval ] : _ exponentiation __ root extraction _ {d_x } ) \rangle}}\ ] ] to prove that these rules are equivalence preserving suppose that for some and we have . then , so and consequently .also {d_x} ] , and consequently {d_x}) ] for which the value of is maximal .[ [ fractions ] ] fractions + + + + + + + + + this problem is taken from : find distinct nonzero digits such that the following equation holds : there is a variable for each letter .the initial domains are ] as multiplication by a constant , which is more efficient in our implementation .such decisions are reflected in the numbers reported in table [ tab - numbers ] .in this paper we discussed a number of approaches to constraint propagation for arithmetic constraints on integer intervals . 
to assess them we implemented them using the dice ( distributed constraint environment ) framework of , and compared their performance on a number of benchmark problems .we can conclude that : * implementation of exponentiation by multiplication gives weak reduction . in our third approach be an atomic constraint .* the optimization of the first approach , where common powers of variables are divided out , can significantly reduce the size of the search tree , but the resulting reduction steps rely heavily on the division and addition of rational numbers . these operations can be expected to be more expensive than their integer counterparts , because they involve the computation of greatest common divisors . * introducing auxiliary variables can be beneficial in two ways : it may strengthen the propagation , as discussed in sections [ sec : second ] and [ sec : third ] , and it may prevent the evaluation of subexpressions the variable domains of which did not change . * as a result , given a proper scheduling of the rules , the second and third approach perform better than the first approach without the optimization , in terms of numbers of interval operations .actual performance depends on many implementation aspects .however for our test problems the results of variants 2a , 2b and 3c do not differ significantly .in general , our implementation is slow compared to , for example , ilog solver .a likely cause is that we use arbitrary precision integers .we chose this representation to avoid having to deal with overflow , but an additional benefit is that large numbers can be represented exactly . a different approach would be to use floating - point arithmetic and then round intervals inwards to the largest enclosed integer interval .this was suggested in and implemented in for example realpaver .a benefit of this inward rounding approach is that all algorithms that were developed for constraints on the reals are immediately available .a disadvantage is that for large numbers no precise representation exists , i.e. , the interval defined by two consecutive floating - point numbers contains more than one integer .but it is debatable whether an exact representation is required for such large numbers .we realize that the current set of test problems is rather limited .in addition to puzzles , some more complex non - linear integer optimization problems should be studied .we plan to further evaluate the proposed approaches on non - linear integer models for the sat problem .also we would like to study the relationship with the local consistency notions that have been defined for constraints on the reals and give a proper characterization of the local consistencies computed by our reduction rules .this work was performed during the first author s stay at the school of computing of the national university of singapore .the work of the second author was supported by nwo , the netherlands organization for scientific research , under project number 612.069.003 .f. benhamou , f. goualard , l. granvilliers , and j .- f .puget . revising hull and box consistency . in _ proceedings of the 16th international conference on logic programming ( iclp99 )_ , pages 230244 . the mit press , 1999 | we propose here a number of approaches to implement constraint propagation for arithmetic constraints on integer intervals . to this end we introduce integer interval arithmetic . each approach is explained using appropriate proof rules that reduce the variable domains . 
we compare these approaches using a set of benchmarks . |
language is the most characteristic trait of human communication but takes on many heterogeneous forms .dialects , in particular , are linguistic varieties which differ phonologically , gramatically or lexically in geographically separated regions .however , despite its fundamental importance and many recent developments , the way language varies spatially is still poorly understood .traditional methodological approaches in the study of regional dialects are based on interviews and questionnaires administered by a researcher to a small number ( typically , a few hundred ) of selected speakers known as informants .based on the answers provided , linguistic atlases are generated that are naturally limited in scope and subject to the particular choice of locations and informants and perhaps not completely free of unwanted influences from the dialectologist .another approach is the use of mass media corpora which provide a wealth of information on language usage but suffer from the tendency of media and newspapers to use standard norms ( the `` bbc english '' for example ) that limits their usefulness for the study of informal local variations . on the other hand ,the recent rise of online social tools has resulted in an unprecedented avalanche of content that is naturally and organically generated by millions or tens of millions of geographically distributed individuals that are likely to speak in vernacular and do not feel constrained to use standard linguistic norms .this , combined with the widespread usage of gps enabled smartphones to access social media tools provides a unique opportunity to observe how languages are used in everyday life and across vast regions of space . in this work ,we use a large dataset of geolocated tweets to study local language variations across the world .similar datasets have recently been used to map public opinion and social behavior and to analyze planetary language diversity .preliminary results demonstrating the feasibility of this approach have thus far been limited to considering only few words or just a few geographical areas . here, we move beyond the mere proof of concept and provide a detailed global picture of spatial variants for a specific language . for definiteness , we choose spanish as it is not only one of the most spoken in the world but it has the added advantage of being spatially distributed across several continents . several other languages such as mandarin or english have more native speakers or higher supra - regional status but their use is hindered by the limited local availability of twitter ( mandarin ) or a high abundance of homographs that percludes a detailed lexicographic analysis ( english ) .we used the twitter gardenhose to gather an unbiased sample of all tweets written in spanish that contained gps information over the course of over two years .language detection was performed using the state of the art chromium compact language detector software library .the resulting dataset contained over geolocated tweets written in spanish distributed across the world ( see fig .[ fig_map ] ) .as expected , most tweets are localized in spain , spanish america and extensive areas of the united states .these results are consistent with recent sociolinguistic data , providing an initial level of validation to our approach .interestingly , we also find significant contributions from major non - spanish - speaking cities in latin america and western europe , likely due to considerable population of temporary settlers and tourists . 
we refer to for further details and results on this dataset . traditional approaches in dialectology have preferred rural , male informants , while modern analyses include interactions with urban speakers regardless of age and gender . on average , twitter users are young , urban and more likely to be technologically savvy , thus providing a more modern perspective on the use of language . to be able to determine exactly what the major local varieties of spanish are , we use a list of concepts and utterances selected from an exhaustive study of lexical variants in major spanish - speaking cities . reference provides a comprehensive list of possible words representing several concepts , such as `` popcorn '' , `` car '' , `` bus '' , etc . we selected a subset of concepts that minimized possible semantic ambiguities by ensuring that they contained no common words . in our initial set of tweets we observed geolocated instances where words from our catalogue were used . individual instances were then aggregated geographically into cells of , which corresponds to an approximate area of km at the equator . [ figure [ fig_popcorn_car ] : geographical distribution of the dominant word for the concepts computer ( left ) and car ( right ) . map locations are colored according to the most common expression found in the corresponding cell ; the area of the circle is proportional to the number of tweets . ] finally , we define the dominant word for each concept in each geographical cell by a simple majority rule and generate a matrix where element is when word is the dominant one for a given concept in cell and otherwise . the resulting matrix has rows and columns and constitutes the dataset used for the analysis presented in the remainder of this paper . figure [ fig_popcorn_car ] shows two illustrative concepts ( computer and car ) that are both associated with multiple utterances . each utterance is represented with a different color . we draw a circle centered on each cell with an area proportional to the number of tweets that use the corresponding expression . it is clear from the map that some expressions ( _ computadora _ , _ ordenador _ , _ computador _ ) are strongly clustered in space , allowing us to easily define regional dialects characterized by the set of dominant words used to express the concepts in our list . due to the unique resolution of our data we could delimit the isoglosses ( boundaries ) of the regions corresponding to each concept - word pair with a high degree of precision . however , the isoglosses corresponding to different concepts can overlap and bundle , rendering any simple arrangement of dialect areas almost impossible .
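the construction of the binary cell - by - word matrix described above can be sketched as follows ; the tweet counts , cell identifiers and word lists are invented for illustration and do not correspond to the real dataset .

from collections import defaultdict

# toy counts: (cell, concept, word) -> number of geolocated tweets
# (all identifiers and numbers are invented for illustration)
counts = {
    ("cell_A", "computer", "computadora"): 120,
    ("cell_A", "computer", "ordenador"): 15,
    ("cell_B", "computer", "ordenador"): 90,
    ("cell_A", "car", "carro"): 80,
    ("cell_B", "car", "coche"): 60,
    ("cell_B", "car", "carro"): 10,
}

# group counts by (cell, concept)
grouped = defaultdict(dict)
for (cell, concept, word), n in counts.items():
    grouped[(cell, concept)][word] = n

# majority rule: the dominant word per concept in each cell
dominant = {key: max(words, key=words.get) for key, words in grouped.items()}

# binary cell-by-word matrix: entry is 1 iff that word dominates its concept
cells = sorted({cell for cell, _ in grouped})
columns = sorted({(concept, word)
                  for (_, concept), words in grouped.items()
                  for word in words})
matrix = [[1 if dominant.get((cell, concept)) == word else 0
           for concept, word in columns]
          for cell in cells]

for cell, row in zip(cells, matrix):
    print(cell, row)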
the natural way to overcome this difficulty and characterize the various regional dialects present in modern day spanish is to apply machine learning ( ml ) approaches to automatically cluster the matrix and identify which cells are closely related to one another . we start by applying principal component analysis to reduce the dimensionality of the matrix . pca determines the linear combinations of the columns ( features in the ml literature ) that explain most of the variance observed in the rows ( observations ) . we find that by projecting the data onto the principal components ( see fig . [ pca ] ) we are able to maintain over of the variance in the data while reducing by the dimension of the matrix , with clear numerical advantages . the task of identifying meaningful clusters in this matrix is now simplified . we proceed by applying the well known -means algorithm , which iteratively refines the position of the centers of clusters until it finds a stable set of locations . the main difficulty in utilizing this algorithm lies in identifying the correct number of clusters to use . here , we apply the metric introduced by pham et al . to establish the best value for . we run -means with values of up to using different random initializations and depict the results in fig . [ population ] a ) . for verification purposes , we also plot the value of the silhouette of the clusters found with each value of . both metrics agree that is the correct number of clusters ( both curves show an extremum at that point ) , leading to two clusters of size ( cluster ) and ( cluster ) , respectively . a geographic plot of the location of the cells belonging to each cluster ( and ) provides a fundamental clue to their meaning ( see fig . [ population ] ) . strikingly , we find a profound correlation between the location of cells belonging to cluster ( red dots ) and areas of high population density . we validate this idea using estimates of the population living within each cell provided by the landscan dataset . hence , we plot the population distribution boxplot for each cluster in fig . [ population ] . the results clearly confirm our intuition . cluster corresponds to cells with a typical population that is significantly larger than cluster . this suggests a natural lexical bipartition of spanish into two superdialects . superdialect is utilized by speakers in main american and spanish cities and corresponds to an international variety with a strongly urban component , while superdialect comprises mostly rural areas and small towns . our result provides some evidence that the increasing globalization of major languages leads to a homogenization that is especially apparent in the active lexicon . cities ( our superdialect ) naturally exert an intrinsic linguistic centripetal force that favors dialect unification , smoothing possible lexical differences . this leveling process , present in all countries ( hence its international denomination ) , is reinforced by the rapid increase of worldwide social ties and the powerful influence of mass media precisely located in important metropolitan areas ( madrid , mexico city , miami ) . several other sociolinguistic aspects ( prestige , higher educational status ) also have a role that is more visible in urban environments . in contrast , rural areas ( superdialect ) are generally more conservative and keep a larger number of characteristic lexical items and native words .
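a minimal version of the clustering pipeline described above can be assembled from off - the - shelf components ; the sketch below uses synthetic binary data and the silhouette score as the model selection criterion , since the real matrix , the exact retained variance and the pham et al . metric are not reproduced here .

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)

# synthetic binary cell-by-word matrix with two planted groups (placeholder data)
n_cells, n_words = 300, 60
base = rng.integers(0, 2, size=(2, n_words))
labels_true = rng.integers(0, 2, size=n_cells)
noise = rng.random((n_cells, n_words)) < 0.1
m = np.abs(base[labels_true] - noise.astype(int))   # flip ~10% of the entries

# dimensionality reduction keeping most of the variance
reduced = PCA(n_components=0.9, svd_solver="full").fit_transform(m)

# choose the number of clusters with the silhouette score
best_k, best_score = None, -1.0
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reduced)
    score = silhouette_score(reduced, km.labels_)
    if score > best_score:
        best_k, best_score = k, score

print("selected number of clusters:", best_k, "silhouette:", best_score)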
as a result , the dialectal area corresponding to superdialect is much more geographically diverse and can be further split , as discussed below . * characterization of major cluster . * geographical representation of regional dialects . for visualization purposes we increased the size of each cell . three well separated regions are indicated with dashed lines . the size imbalance between the two clusters , when combined with our intuition , suggests that we can also employ the statistical procedure discussed above to further divide the largest cluster ( ) . we apply -means recursively until the remaining cluster has a size similar to the previous ones . in the end , we obtain six well defined clusters that we display in fig . [ cluster ] . clearly , three regions can be distinguished . yellow dots span a wide area covering mexico , central america , the caribbean and north - western areas of south america . green dots correspond to the southern cone , while blue dots are almost exclusively accumulated within spain . the first region is quite diverse . in fact , smaller cells can be aggregated into two additional clusters ( depicted with magenta and orange dots in fig . [ cluster ] ) . interestingly , the magenta and orange dots seem to be localized in the mexican plateau , the interior of central america and andean colombia , in contrast with the speech of venezuela , the antilles and coastal areas represented with yellow dots . this division between highland and lowland varieties agrees with classifications discussed previously in the linguistics literature . the two regions marked in fig . [ cluster ] partly reflect the settlement patterns and the formal colonial spanish administration within the empire . conquerors and settlers first occupied the territories of mexico , peru and the caribbean , and only much later did colonists establish permanent residence in the southern cone , which stayed away from prestigious linguistic norms . this strong cultural heritage , which can still be observed in our datasets centuries later , deserves to be further analysed in future works . using a large dataset of user generated content in vernacular spanish , we analyse the diatopic structure of modern day spanish at the lexical level . by applying standard machine learning techniques , we find , for the first time , two large spanish varieties which are related , respectively , to international and local speech . we can also identify regional dialects and their approximate isoglosses . our results are relevant to empirically understand how languages are used in real life across vastly different geographical regions . we believe that our work has considerable latitude for further applications in the computational study of linguistics , a field full of rewarding opportunities . one can envisage much deeper analyses pointing the way towards new developments in sociolinguistic studies ( bilingualism , creole varieties ) . our work is based on a synchronous approach to language . however , the possibilities presented by the combination of large scale online social networks with easily affordable gps enabled devices are so remarkable that they might permit us to observe , for the first time , how diatopic differences arise and develop in time . we thank i. fernández - ordóñez for useful discussions .
this product was made utilizing the landscan 2007 high resolution global population data set copyrighted by ut - battelle , llc , operator of oak ridge national laboratory under contract no .de - ac05 - 00or22725 with the united states department of energy .the united states government has certain rights in this data set .neither ut - battelle , llc nor the united states department of energy , nor any of their employees , makes any warranty , express or implied , or assumes any legal liability or responsibility for the accuracy , completeness , or usefulness of the data set .bauer , l ( 2004 ) inferring variation and change from public corpora . in _ the handbook of language variation and change _ , ed . by chambersj k , trudgill p , and schilling - estes n , backwell publishing , 97:114 tumasjan a , sprenger t , sandner p , welpe i ( 2010 ) predicting elections with twitter : what 140 characters reveal about political sentiment . in : proceedings of the fourth international aaai conference on weblogs and social media : .178185 . eisenstein j , oconnor b , smith n and xing , e. ( 2010 ) a latent variable model for geographic lexical variation . in : proceedings of the 2010 conference on empirical methods in natural language processing : 1277 - 1287 .lpez morales , h ( 2005 ) ltimas investigaciones sobre lxico hispanoamericano : unidad y variedad . in : _homenaje a jos joaqun montes giraldo :estudios de dialectologa , lexicografa , lingstica general , etnolingstica e historia cultural _ : 333358 . | we perform a large - scale analysis of language diatopic variation using geotagged microblogging datasets . by collecting all twitter messages written in spanish over more than two years , we build a corpus from which a carefully selected list of concepts allows us to characterize spanish varieties on a global scale . a cluster analysis proves the existence of well defined macroregions sharing common lexical properties . remarkably enough , we find that spanish language is split into two superdialects , namely , an urban speech used across major american and spanish citites and a diverse form that encompasses rural areas and small towns . the latter can be further clustered into smaller varieties with a stronger regional character . * crowdsourcing dialect characterization through twitter * + bruno gonalves , david snchez + * 1 aix - marseille universit , cnrs , cpt , umr 7332 , 13288 marseille , france + * 2 universit de toulon , cnrs , cpt , umr 7332 , 83957 la garde , france + * 3 instituto de fsica interdisciplinar y sistemas complejos ifisc ( uib csic ) , e-07122 palma de mallorca , spain + e - mail : corresponding bgoncalves.com + * * * |
microscope technology is used in various fields due to its ( 1 ) high resolution , ( 2 ) extensive field of view , and ( 3 ) three - dimensional observation . an optical microscope can improve the resolution power using a lens with high magnification . however , the field of view then becomes narrow . it is theoretically difficult to solve this problem with an optical microscope . digital holographic microscopy ( dhm ) attempts to solve this problem using a principle different from that of the optical microscope . dhm introduces holographic technology into microscope technology , giving both a high resolution power and an extensive field of view , for example ref. . furthermore , dhm enables observation of an object three - dimensionally . although the optical system of dhm is large and expensive , low - cost dhm systems have been reported . both studies used a consumer digital camera as the recording device for the hologram . in ref. , the optical system was constructed with a digital camera made by canon , a laser diode , and some lenses . the cost was approximately . and indicate the fourier and inverse fourier transforms , and the subscripts , and indicate the horizontal , vertical , and depth components . ( , ) and ( , ) are the coordinates of the hologram side and the observation side , respectively . is the distance between the hologram side and the observation side , and is the wavelength of the light . , and are the spatial frequency components of each axis , and is shown as follows . if we calculate using the angular spectrum method , the discrete fourier transform ( dft ) can be computed at large scale and at high speed by the fast fourier transform ( fft ) . therefore , it is possible to perform recording of a hologram and reconstruction by computer simulation , and if we use a gpu , the reconstruction is obtained in real time . in order to realize a low - cost , compact dhm , we build hardware as shown in figure [ fig : dhm - fig.eps ] . we used a web camera , the hd pro webcam c910 made by logicool , as the ccd camera , and removed the lens of the web camera , which focuses an image on the ccd , because it is not required for the dhm application . the photographic resolution of this web camera is 1920 pixels . using a commercial web camera , the cost can be sharply reduced . the ostcxbc1c1s , a high - power rgb light - emitting diode ( led ) made by optosupply , is used as the light source . this led can be driven at a current of 350 ma . another feature is that it emits at three wavelengths : 470 nm , 525 nm , and 625 nm . the light of this led is used as the point light source by passing it through a pinhole . the diameter of the pinhole is 5 µm ( the product number of the pinhole made by koyo is 2412 ) . although the pinhole is generally positioned on the focal side of an objective lens , in our study a completely lensless optical system is used instead of an objective lens for cost reduction . instead , the pinhole is attached directly to the led . moreover , in order to adjust the distances between the light source , the object , and the ccd camera , variable - length supports are used . the actual dhm system is shown in figure [ fig : hardware.eps ] . when recording a hologram , the actual dhm system of figure [ fig : hardware.eps ] is housed in a shading case . software is required for recording a hologram and for reconstruction from the hologram . opencv , an open source library originally developed and released by intel , is used to record the hologram .
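the recording - plus - reconstruction workflow just outlined can be sketched in a few lines of python . this is only an analogue of the setup described here ( which uses the c++ opencv interface and , as explained next , a dedicated optical - calculation library ) : numpy 's fft stands in for that library , and the wavelength , pixel pitch and reconstruction distance below are placeholder values rather than the system 's calibration .

```python
import cv2
import numpy as np

def angular_spectrum(u0, wavelength, pitch, z):
    """propagate the complex field u0 over a distance z with the angular spectrum method
    (wavelength, pitch and z in metres; pitch is the sensor pixel size)."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    w = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(2j * np.pi * w * z) * (arg > 0)   # transfer function, evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# grab one frame from the webcam (device index and settings are illustrative)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the webcam")
hologram = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)

# back-propagate the in-line hologram to the object plane (placeholder parameters)
wavelength = 470e-9   # blue led line
pitch = 2.0e-6        # assumed pixel pitch of the sensor
z = -10e-3            # assumed reconstruction distance

field = angular_spectrum(hologram, wavelength, pitch, z)
intensity = np.abs(field) ** 2
out = cv2.normalize(intensity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("reconstruction.png", out)
```

the spherical - wave geometry of the point source ( and hence the magnification of the lensless setup ) is not modelled in this sketch .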
using opencv , control of a web camera and editing of a recorded hologram can be performed . moreover , it is necessary to calculate the diffraction of light for reconstruction from a hologram . therefore , the cwo++ library , which is an open source library for optical calculation , is used . the cwo++ library provides a number of functions for optical calculations . therefore , the angular spectrum method adopted in this research can be computed on a cpu or a gpu with a simple description . thus , use of the opencv and cwo++ libraries is cost free , simplifies programming , and improves the calculation speed . the costs of the parts used are shown in table [ tab : cost - parts ] . as shown in table [ tab : cost - parts ] , the system is constructed for less than 20,000 yen ( approximately 250 us dollars at 80 yen / dollar ) . moreover , the size of the dhm is equal to the size of the shading case , which is 120 mm × 80 mm × 55 mm . table [ tab : cost - parts ] : parts used and costs for the dhm system . in order to measure the lateral resolution power of the dhm system created in this study , the usaf1951 test pattern was used . in addition , the blue light , which has the shortest wavelength , was used as the light source . to increase the lateral resolution power , we adjusted the positions of the ccd camera and the object using the variable - length supports . in this system , the lateral resolution power was highest when the distance from the light source to the object was 3 mm and the distance from the light source to the ccd camera was 13 mm . the hologram recorded in this condition is shown in figure [ fig : hologram.eps ] . furthermore , the reconstruction image obtained from figure [ fig : hologram.eps ] is shown in figure [ fig : reconstruction.eps ] . figure [ fig : hologram.eps ] and figure [ fig : reconstruction.eps ] show that element number 5 of group number 4 is resolved . as mentioned above , the lateral resolution power of this dhm is 17.5 µm . in this study , we reduced the cost of the dhm to less than 20,000 yen ( approximately 250 us dollars at 80 yen / dollar ) , and reduced the size to 120 mm × 80 mm × 55 mm . the lateral resolution power of this dhm was 17.5 µm . the downsizing of the dhm is useful for fieldwork , and the reduction in the cost of the dhm and the use of open source libraries are of great benefit in educational environments . moreover , since an rgb led light source is used in this study , various wavelengths can be controlled . this is applicable to wavefront recovery using two - wave digital holography or two - wave laplacian reconstruction . in this study , since the angular spectrum method was used , a reconstruction image could be obtained at the same resolution as the hologram . the scaled angular spectrum method , which can calculate diffraction with different sampling rates on the hologram and the reconstructed image , is useful for this dhm system because it enables observation of a detailed reconstructed image with a smaller sampling rate on the reconstruction . adding such a function to the dhm and increasing its convenience are topics for future study . u. schnars and w. jüptner , `` direct recording of holograms by a ccd target and numerical reconstruction , '' appl . opt . * 33 * , 179–181 ( 1994 ) . t. g. dimiduk , e. a. kosheleva , d. kaz , r. mcgorty , e. j. gardel , and v. n.
manoharan , `` a simple , inexpensive holographic microscope , '' in digital holography and three - dimensional imaging , osa technical digest ( cd ) ( optical society of america , 2010 ) , paper jma38 . t. shimobaba , j. weng , t. sakurai , n. okada , t. nishitsuji , n. takada , a. shiraki , n. masuda and t. ito , `` computational wave optics library for c++ : cwo++ library , '' comput . phys . commun . * 183 * , 1124–1138 ( 2012 ) . t. shimobaba , n. masuda and t. ito , `` numerical investigation of holographic wavefront retrieval using laplacian reconstruction , '' digital holography and three - dimensional imaging ( dh ) , dh2012 , * jm3a.51 * ( 2012 ) . t. shimobaba , n. masuda and t. ito , `` angular spectrum method for different sampling rates on source and destination planes : scaled angular spectrum method , '' digital holography and three - dimensional imaging ( dh ) , dh2012 , * dtu2c.3 * ( 2012 ) . | this study developed a handheld and low - cost digital holographic microscopy ( dhm ) system by adopting an in - line type hologram , a webcam , a high - power rgb light - emitting diode ( led ) , and a pinhole . it cost less than 20,000 yen ( approximately 250 us dollars at 80 yen / dollar ) , and was approximately 120 mm × 80 mm × 55 mm in size . in addition , by adjusting the recording distance of a hologram , the lateral resolution power at the most suitable distance was 17.5 µm . furthermore , this dhm was developed using open source libraries , and is therefore low cost and can be easily developed by anyone . a feature of this research , compared with existing reports , is that it further cuts down cost and size and improves the lateral resolution power . this dhm will be a useful tool in fieldwork , education , and so forth . keywords dhm , handheld , low - cost , web camera , high power rgb led
quantum cosmology studies the relation between the observed universe and its boundary conditions in the hope that a natural _ theory _ of the boundary condition might emerge ( see for an outstanding review of this enterprise . )assessment of a particular theory requires an understanding of its implications for the present day . to that end , this paper elaborates on work of gell - mann and hartle and davies and twamley by examining the observable consequences for the diffuse extragalactic background radiation ( egbr ) of one possible class of boundary conditions , those that are imposed time symmetrically at the beginning and end of a closed universe , and sketches some of the considerable difficulties in rendering this kind of model credible . assuming such difficulties do not vitiate the consistency of time symmetric boundary conditions as a description of our universe , the principal conclusion is that these boundary conditions imply that the bath of diffuse optical radiation from extragalactic sources be at least twice that due only to the galaxies to our past , and possibly much more . in this sense ,observations of the egbr are observations of the final boundary condition .this conclusion will be seen to follow ( section [ sec : limit ] ) because radiation from the present epoch can propagate largely unabsorbed until the universe begins to recollapse ( , and section [ sec : opacity ] ) , even if the lifetime of the universe is very great . by time symmetry ,light correlated with the thermodynamically reversed galaxies of the recollapsing phase must exist at the present epoch .minimal _ predicted excess " egbr in a universe with time symmetric boundary conditions turns out to be consistent with present observations ( section [ sec : observations ] ) , but improved observations and modeling of galactic evolution will soon constrain this minimal prediction very tightly .in addition , many physical complications with the ansatz that time symmetric boundary conditions provide a reasonable and consistent description of the observed universe will become apparent .thus this work may be viewed as outlining some reasons why even if very long - lived , our universe is probably not time symmetric .the plan of the paper is as follows .section [ sec : motivations ] discusses a model universe that will define the terms of the investigation .section [ sec : tsbc ] provides some perspective on doing physics with boundary conditions at two times with an eye toward section [ sec : difficulties ] , where some aspects of the reasonableness of two time boundary conditions not immediately related to the extragalactic background radiation are discussed .section [ sec : opacity ] generalizes and confirms work of davies and twamley in showing that for processes of practical interest , our future light cone ( flc ) is transparent all the way to the recollapsing era over a wide range of frequencies , even if the universe is arbitrarily long - lived .section [ sec : limit ] explains why this fact implies a contribution to the optical extragalactic background radiation in a universe with time symmetric boundary conditions in excess of that expected without time symmetry .in the course of this explanation , some rather serious difficulties will emerge in the attempt to reconcile time symmetric boundary conditions , and a transparent future light cone , into a consistent model of the universe which resembles the one in which we live .section [ sec : observations ] compares the predictions of section [ sec : limit ] 
for the optical egbr to models of the extragalactic background light and observations of it . section [ sec : summation ] is reserved for summation and conclusions . the possibility that the universe may be time symmetric has been raised by a number of authors . of course , what is meant is not _ exact _ time symmetry , in the sense that a long time from now there will be another earth where everything happens backwards . rather , the idea is that the various observed `` arrows of time '' are directly correlated with the expansion of the universe , consequently reversing themselves during a recontracting phase if the universe is closed . of central interest is the thermodynamic arrow of entropy increase , from which other time arrows , such as the psychological arrow of perceived time or the arrow defined by the retardation of radiation , are thought to flow ( are some reviews ) . however , the mere reversal of the universal expansion is insufficient to reverse the direction in which entropy increases . in order to construct a quantum physics for matter in a recollapsing universe in which the thermodynamic arrow naturally reverses itself , it appears necessary to employ something like the time neutral generalization of quantum mechanics in which boundary conditions are imposed near both the big bang _ and _ the big crunch . these boundary conditions take the form of `` initial '' and `` final '' density operators $\rho_{\alpha}$ and $\rho_{\omega}$ which , when cpt - reverses of one another , define what is meant here by a time symmetric universe ( the corresponding probability formula is recalled below ) . $$\left\{\begin{array}{ll} \cdots & p > \frac{1}{2}\ ,\ p \neq \frac{3}{2}\\ \cdots & p = \frac{3}{2}\\ \cdots & p = \frac{1}{2}\\ \cdots & -\frac{1}{2} < p < \frac{1}{2}\\ \cdots & p = -\frac{1}{2}\\ \cdots & p < -\frac{1}{2} \end{array}\right. \qquad [\,\mbox{eq : g}\,]$$ in each case only the leading order correction in has been retained . the most important thing to notice is that for , is perfectly finite even as the lifetime of the universe becomes arbitrarily big , and as becomes very small the opacity converges to the value it would have in a flat universe . it is clear that for the result is a local lower limit on the opacity of the future light cone , as may be verified directly also from the available exact results . however , for reasonable the corrections to the flat universe result are only a factor of order one . ( in fact , examination of the exact results reveals that the _ maximum _ of as one varies is at most larger than the flat universe result for . this is a good thing , because unless is fairly close to one , is not a particularly small parameter ! in terms of familiar cosmological parameters , . ) [ eq : t / m ] physically , what is at work is the competition between the slower expansion rates of universes with larger s , which tends to increase the opacity because the scattering medium is not diluted as rapidly , and the decrease in the opacity due to the shortened time between the present epoch and the moment of maximum expansion .
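for orientation , the probability assignment used by this time neutral formulation can be written down explicitly . the expression below is the standard gell - mann and hartle two - boundary form , with $\rho_{\alpha}$ and $\rho_{\omega}$ the initial and final density operators introduced above ; the class operator $C_{h}$ representing a coarse - grained history $h$ is notation adopted here for the statement , not taken from the surrounding text :

$$ p(h) \;=\; \frac{ {\rm tr}\!\left( \rho_{\omega}\, C_{h}\, \rho_{\alpha}\, C_{h}^{\dagger} \right) }{ {\rm tr}\!\left( \rho_{\omega}\, \rho_{\alpha} \right) } . $$

setting $\rho_{\omega} \propto \mathbb{1}$ recovers ordinary quantum mechanics with an initial condition only , which is one way of stating how a nontrivial final condition modifies the usual theory .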
) and ( [ eq : t / m ] ) that taking the limit holding fixed requires to vary as well , converging to the flat universe relation .it is possible to repeat the entire analysis holding the observable quantity fixed instead of ( for this purpose the more standard redshift representation is more useful than that in terms of conformal time used above ) , but unsurprisingly the conclusions are the same : the opacity is always finite for ; as approaches one , the opacity approaches the flat universe result ; and the _ maximum _ opacity for reasonable is only a factor times the flat universe result .the resultant opacities are of course related in these limits via .similarly , it is possible to perform a related analysis of more complicated extinction coefficients than ( [ eq : sigma= ] ) , for example incorporating the exponential behaviour encountered in free - free absorption ( see ( [ eq : sff ] ) ) or in modeling evolving populations of scatterers with , for instance , a schecter function type profile .however , these embellishments are not required in the sequel , and the techniques are tedious and fairly ordinary , so space will not be taken to describe them here . ] to summarize , all of the processes relevant to extinction in the intergalactic medium have extinction coefficients that can be bounded above by a coefficient of the form ( [ eq : sigma= ] ) with . using the limiting relationship , we have from ( [ eq : tau= ] ) and ( [ eq : g ] ) the simple result that for these processes , the upper limit to the opacity between the present epoch and the moment of maximum expansion , no matter how long the total lifetime of the universe , is of order ( i have returned to conventional units in this formula . ) in this section i apply the asymptotic formula ( [ eq : tau ] ) for the upper limit to the optical depth of the flc in a long - lived universe to show that if our universe is closed , photons escaping from the galaxy are ( depending on their frequency ) likely to survive into the recollapsing era .that is , the finite optical depths computed in the previous section are actually small for processes of interest in the intergalactic medium ( igm ) .for simplicity , i focus on photons softer than the ultraviolet at the present epoch ; the cosmological redshift makes it necessary to consider absorption down to very low frequencies .it is important to note that in employing standard techniques for computing opacities the effects of the assumed statistical time symmetry of the universe are being neglected . as discussed in section [ sec : tsbc ] and in section [ sec : difficulties ] , when the universe is very large the thermodynamic and gravitational behaviour of matter will begin to deviate from that expected were the universe not time symmetric . due to the manifold uncertainties involved here it is difficult to approach the effects of time symmetric boundary conditions on the opacity of the future light cone with clarity . ) .this complication is closely related to the difficulty , mentioned in section [ sec : retardation ] , in deriving the retardation of radiation in a universe which is time symmetric and in which the future light cone is transparent . ]i shall assume they are not such as to increase it .this is reasonable as the dominant contribution to the opacity comes when the universe is smallest , where in spite of the noted complications the rth is assumed to hold .what are the processes relevant to extinction of photons in the intergalactic medium ? 
because the igm appears to consist in hot , diffuse electrons , and perhaps a little dust , extinction processes to include are thomson scattering , inverse bremsstrahlung ( free - free absorption ) , and absorption by dust .in addition , absorption by material in galaxies ( treated as completely black in order to gauge an upper limit ) is important .these processes will treated in turn .( a useful general reference on all these matters is . )the conclusion will be that while absorption by galaxies and thomson scattering are most significant above the radio , none of these processes pose a serious threat to a photon that escapes from our galaxy .this confirms the results of davies and twamley , who however did not consider the possibly significant interactions with galaxies .consequently i will be brief .some results of davies and twamley regarding absorption mechanisms which may be important when the universe is very large and baryons have had time to decay are quoted at the end of this section .these do not appear to be significant either .( for high energy photons compton scattering , pair production , photoelectric absorption by the apparently very small amounts of neutral intergalactic hydrogen , and interactions with cmbr photons will be important , but as none are significant below the ultraviolet i do not discuss them here .all can be treated by the same methods as the lower energy processes . ) to begin , following davies and twamley , i quote barcons _et al . _ on current beliefs regarding the state of the igm in the form where with the values somewhat preferred by the authors .in addition , the present upper limit on a smoothly distributed component of neutral hydrogen is about .thus , the intergalactic medium consists in hot ( but non - relativistic ) electrons , protons , and essentially no neutral hydrogen .the lack of distortions in the microwave background indicates its relative uniformity , at least to our past . from now on , and will be used to refer to the number density and temperature of intergalactic electrons .finally , very little is known about a possible diffuse component of intergalactic dust , except that there is probably very little of it .most dust seems to be clumped around galaxies .therefore i will ignore possible extinction due to it , subsuming it into the black galaxy " opacity .davies and twamley make some estimates for one model for the dust , finding its contribution to the opacity insignificant . at any rate , models for the absorption coefficient due to dust all give a cross section that falls with increasing wavelength , with , so that ( neglecting of course a clumping factor expressing the fact that clumping decreases the opacity . ) thus , the dust opacity is bound to be finite , and with a small present density of diffuse dust it is not surprising to find its contribution to be small . before considering the optical depth due to interactions with intergalactic electrons, i will show that it is reasonable to approximate that most photons escaping our galaxy will travel freely through intergalactic space .that is , few photons will end up running into another galaxy . 
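before going through the individual processes , it may help to have in hand the simple flat - universe estimate that sets the scale of the numbers quoted below . for a scatterer population with constant comoving density and fixed cross section , $n\sigma \propto a^{-3}$ , and in an einstein - de sitter model ( $a \propto t^{2/3}$ , $t_{0} = \tfrac{2}{3} H_{0}^{-1}$ ) the opacity accumulated from the present epoch onward is

$$ \tau \;=\; \int_{t_{0}}^{\infty} n_{0}\,\sigma\,\Bigl(\frac{t_{0}}{t}\Bigr)^{2} c\, dt \;=\; n_{0}\,\sigma\, c\, t_{0} \;=\; \tfrac{2}{3}\, n_{0}\,\sigma\, \frac{c}{H_{0}} . $$

with round illustrative values ( assumptions made for this sketch , not values taken from the text ) $n_{\rm gal} \sim 0.02\,h^{3}\,{\rm Mpc^{-3}}$ and $\sigma_{\rm gal} = \pi R^{2}$ with $R \sim 10\,h^{-1}\,{\rm kpc}$ , this gives $\tau_{\rm gal} \sim 10^{-2}$ , of the same order as the `` one percent of lines of sight '' quoted below ; for thomson scattering , $n_{e} \sim 10^{-7}\,{\rm cm^{-3}}$ and $\sigma_{\rm T} = 6.65 \times 10^{-25}\,{\rm cm^{2}}$ give $\tau_{\rm T}$ of order $10^{-3}$ or below .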
drastically overestimating the opacity due to galaxies by pretending that any photon which enters a galaxy or its halo will be absorbed by it ( the black galaxy " approximation ) , and taking the number of galaxies to be constant , where is the cross - sectional area of a typical galaxy and is their present number density .thus from ( [ eq : tau ] ) , the upper limit on the opacity due to collisions with galaxies is as noted above , this is finite ( even as the lifetime of the universe becomes very large ) because the dilution of targets due to the expansion of the universe is more important than the length of the path the photon must traverse .notice that assuming target galaxies to be perfectly homogeneously distributed only overestimates their black galaxy " opacity .volume increases faster than cross - sectional area , so clustering reduces the target area for a given density of material . as galaxy clustering is not insignificant today andwill only increase up to the epoch of maximum expansion even in a time symmetric universe , the degree of overestimation is likely to be significant .taking , ( where ) , and ( here captures as usual the uncertainty in the hubble constant ) gives the upper limit this can be interpreted as saying that at most about one percent of the lines of sight from our galaxy terminate on another galaxy before reaching the recollapsing era . by time symmetry , neither do most lines of sight connecting the present epoch to its time - reverse .use of the thomson scattering cross section is acceptable for scattering from non - relativistic electrons for any photon softer than a hard x - ray ( ) .thus , for the frequencies i will consider , will suffice , giving recalling that is at worst one order of magnitude , it is clear that thomson scattering is not signficant for intergalactic photons .it is perhaps worth mentioning that quantum and relativistic effects only tend to decrease the cross section at higher energies . more significant for the purposes of this investigationis the observation that , at the considered range of frequencies , thomson scattering does not change a photon s frequency , merely its direction .thus thomson scattering of a homogeneous and isotropic bath of radiation by a homogeneous and isotropic soup of electrons has _ no effect _ as regards the predictions of section [ sec : egbr ] .even less significant than thomson scattering for frequencies of interest is free - free absorption by the igm . from , _ , the linear absorption coefficient for scattering from a thermal bath of ionized hydrogen is in cgs units .here , the factor contains the effect of stimulated emission , and is a gaunt factor " expressing quantum deviations from classical results .it is a monotonically decreasing function of which is of order one in the optical ( _ cf . _ for a general discussion and some references . ) as increases as the universe expands , taking , a constant of order one , will only overestimate the opacity . similarly , following in dropping the stimulated emission term will yield an upper limit to the free - free opacity . with , when , so stimulated emission will only lead to a noticeable reduction in well below the optical .( actually , methods similar to that employed in section [ sec : transparent ] can be employed to calculate this term , but as will turn out to be insignificant even neglecting it there is no need to go into that here . 
) with these approximations , and thus recalling that seem likely physical values , and noting that at worst , taking is not unreasonable for an order of magnitude estimate .thus ( long radio ) is required to get . since it drops sharply for photons with present frequency above that .for instance , at 5000 and inverse bremsstrahlung is completely negligible .finally , i mention that davies and twamley consider what happens if baryons decay in a long lived universe . following the considerations of , they conclude that the positronium atoms " which will form far in the future ( when the universe is large ) remain transparent to photons with present frequencies in the optical .this is because the redshifted photons havent enough energy to cause transitions between adjacent ps energy levels .similarly , if in the nearer future the electrons and protons in the igm recombine to form more neutral hydrogen , this will also be transparent at the considered frequencies .the goal of this section is to explain why , in a statistically time symmetric universe ( such as one with the cpt - related boundary conditions discussed in section [ sec : motivations ] ) , the optical extragalactic background radiation should be at least twice that expected in a universe which is not time symmetric , and possibly considerably more .thus , assuming consistency with the rth ( _ i.e . _the predictive assumption that physics near either boundary condition is practically insensitive to the presence of the other boundary condition , _ cf . _ section [ sec : tsbc ] ) , it is possible to discover _ experimentally _ whether our universe is time symmetric .section [ sec : observations]compares this prediction with present observations , concluding that the minimal prediction is consistent with upper limits on the observed optical egbr . however , better observations and modeling may soon challenge even this minimal prediction . at optical wavelengths , the isotropic bath of radiation from sources outside our galaxyis believed to be due almost exclusively to galaxies on our past light cone ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* are some good general references ) .there is no other physically plausible source for this radiation . in a model universe with timesymmetric boundary conditions , however , there must in addition be a significant quantity of radiation correlated with the time - reversed galaxies which will exist in the recollapsing era , far to our future .the reason for this is that light from our galaxies can propagate largely unabsorbed into the recollapsing phase no matter how close to open the universe is , as shown in and in section [ sec : opacity ] .this light will eventually arrive on galaxies in the recollapsing phase , or , depending on its frequency , be absorbed in the time - reversed equivalent of one of the many high column density clouds ( lyman - limit clouds and damped lyman- systems ) present in our early universe , in the intergalactic medium , or failing that , at the time - reversed equivalent of the surface of last scattering .this will appear to observers in the recontracting phase as emission by one of those sources sometime in their galaxy forming era . 
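the requirement being invoked in this argument can be stated compactly . write $u_{\nu}(t)$ for the comoving specific energy density of the isotropic extragalactic background at cosmic time $t$ , and let $t_{\rm max}$ denote the moment of maximum expansion ( the notation is introduced here only to summarize the argument of the text ) . statistical time symmetry of the boundary conditions requires

$$ u_{\nu}(t) \;=\; u_{\nu}(2t_{\rm max} - t) , $$

and if the future light cone is transparent in some band , the background at the mirror epoch $2t_{\rm max}-t_{0}$ contains the ( undiminished , since the scale factor is the same ) light already emitted by galaxies to our past in addition to the component correlated with its own `` past '' galaxies , so that

$$ u_{\nu}(t_{0}) \;=\; u_{\nu}(2t_{\rm max}-t_{0}) \;\ge\; 2\, u_{\nu}^{\rm past\ galaxies}(t_{0}) , $$

which is the minimal factor of two developed in the following paragraphs .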
since future galaxies , up to high time - reversed redshift , occupy only a small part of the sky seen by today s ( on average ) isotropically emitting galaxies , much of the light from the galaxies of the expanding phase will proceed past the recontracting era s galaxies .thus most of this light will be absorbed in one of the other listed media .because of the assumption of global homogeneity and isotropy , the light from the entire history of galaxies in the expanding phase will constitute an isotropic bath of radiation to observers at the time - reverse of the present epoch that is _ in addition _ to the light from the galaxies to _ their _ past . by time symmetry, there will be a similar contribution to our egbr correlated with galaxies which will live in the recollapsing phase , over and above that due to galaxies on our past light cone .to us this radiation will appear to arise in isotropically distributed sources _ other _ than galaxies .this picture of a transparent , time symmetric universe is illustrated in figure 1 .[ fig : egbr ] a lower limit to this excess background can be obtained by considering how much light galaxies to our past have emitted already ( _ cf . _ section [ sec : observations ] ) . according to observers at the time reverse of the present epoch , this background will ( in the absence of interactions )retain its frequency spectrum and energy density because the size of the universe is the same .thus , by time symmetry , at a _ minimum _ the predicted optical egbr in a universe with time symmetric boundary conditions is twice that expected in a universe in which the thermodynamic arrow does not reverse .if much of the luminous matter in galaxies today will eventually be burned into radiation by processing in stars or galactic black holes , the total background radiation correlated with galaxies in the expanding phase could be several orders of magnitude larger , a precise prediction requiring a detailed understanding of the future course of galactic evolution .several other processes may also contribute significant excess backgrounds .these topics are discussed further below .a number of points in the summary argument above require amplification .first , however , i summarize the _ minimal _ predictions for the excess " extragalactic background ( _ i.e . 
_ radiation from non - galactic sources to our past that is correlated with time - reversed galaxies ) in bands for which the future light cone is transparent : * isotropy : the future starlight " should appear in the comoving frame as an approximately isotropic background .this conclusion depends crucially on the assumed global validity of the cosmological principle .* energy density : comparable to the present energy density in starlight due to the galaxies on our past light cone .this assumes the future light cone ( flc ) is totally transparent .* spectrum : similar to the present spectrum of the background starlight due to galaxies on our past light cone .again , neglect of further emissions in the expanding phase makes this , by time symmetry , a lower limit in each band .this conclusion relies on the assumption of a transparent flc in part to the extent that this implies a paucity of standard astrophysical mechanisms for distorting spectra .thus , at for instance optical frequencies , time symmetry requires an isotropic extragalactic background at least twice that due to galaxies on our past light cone alone .was of order , mostly due to a liberal ( black galaxy " ) assessment of the rate of interception of photons by galaxies , and i will therefore neglect such losses .further , it is worth remembering that processes like thomson scattering do not destroy photons or change their frequency , but only scatter them . thus mere scattering processes may introduce isotropically distributed ( via the cosmological principle ) fluctuations in the background , but not change its total energy .similarly , line or dust absorption usually result in re - radiation of the absorbed photons , conserving the total energy in an isotropic background ( if the size of the universe does nt change much before the photons are re - radiated ) , if not the number of photons with a given energy . ]the potentially far greater background predicted ( by time symmetry ) if further emissions in the expanding phase are accounted for is a subject taken up in the sequel . before proceeding ,a comment on the consistency of this picture is in order .as the excess " radiation is correlated with the detailed histories of future galaxies , the transparency of the future light cone does not appear consistent with the predictive assumption ( the rth ) that physics in the expanding era should be essentially independent of the specifics of what happens in the recontracting phase . at a minimum ,if the model is to be at all believable it is legitimate to demand that the required radiation appears to us to arise in sources in a fashion consistent with known , or at least plausible , astrophysics .thus it may be that given a transparent flc , the only viable picture of a time symmetric universe is one in which the radiation correlated with future galaxies _ should " be there anyway , i.e. 
_ be predicted also in some reasonable model of our universe which is _ not _ time symmetric , and consequently not be excess " radiation at all , but merely optical radiation arising in non - galactic sources during ( or before ) the galaxy forming era .on the basis of present knowledge this does not describe our universe .the presence of the radiation required by time symmetry and the transparency of the flc appears to be in significant disagreement with what is known about our galaxy forming era , as will become apparent below .were it the case for our universe that non - galactic sources provided a significant component of the optical egbr , the difficulties with time symmetric boundary conditions would from a _practical _ point of view be less severe .it is true that the non - galactic sources emitting the additional isotropic background would have to do so in just such a way that the radiation contain the correct spatial and spectral correlations to converge on future galaxies at the appropriate rate .this implies a distressingly detailed connection between the expanding and recontracting phases .however , if the emission rate and spectrum were close to that expected on the basis of conventional considerations these correlations ( enforced by the time symmetric boundary conditions ) would likely be wholly unobservable in practice , existing over regions that are not causally connected until radiation from them converges onto a future galaxy , and thus not visible to local coarse grainings ( observers ) in the expanding era . in any event, the meaning of the transparency of the flc is that starlight is by no means a short relaxation time process . " of course , on the basis of the models discussed in section [ sec : tsbc ] , perhaps the conclusion to draw from this apparent inconsistency between the rth and the transparency of the flc should rather be , that physical histories would unfold in a fashion quite different from that in a universe in which , namely , in such a way that such detailed correlations would never be required in the first place . the very formation of stars might be suppressed ( _ cf . 
_ section [ sec : difficulties ] ) .be that as it may , to the extent that the universe to our past is well understood , there are _ no _ sources that could plausibly be responsible for an isotropic optical background comparable to that produced by galaxies .such an additional background , required in a transparent , time symmetric universe , requires significant , observable deviations from established astrophysics .this is in direct contradiction with the rth .thus the entire structure in which an additional background is predicted appears to be both internally inconsistent ( in that it is inconsistent with the postulate which allows predictions to be made in the first place ) , and , completely apart from observations of the egbr discussed in section [ sec : observations ] , inconsistent with what is known about our galaxy forming era .this should be taken as a strong argument that our universe does not possess time symmetric boundary conditions .nevertheless , in order to arrive at this conclusion it is necessary to pursue the consequences of assuming the consistency of the model .the situation may be stated thus : _ either _ our universe is not time symmetric , _ or_ there is an unexpected contribution to the optical egbr due to non - galactic sources to our past ( and there are indeed detailed correlations between the expanding and recontracting eras ) , _ or _ perhaps our future light cone is not transparent after all .this latter possibility , perhaps related to the considerable uncertainty regarding the state of the universe when it is large , seems the last resort for a consistent time symmetric model of our universe .it is now appropriate to justify further some of the points made in arriving at the prediction of an excess contribution to the egbr .claims requiring elaboration include : i ) most of the light from the expanding era s galaxies wo nt be absorbed by the galaxies of the recontracting phase , and _ vice - versa _ ; ii ) it will therefore be absorbed by something else , and this is inconsistent with the early universe as presently understood ; iii ) a detailed understanding of the old age and death of galaxies , as well as other processes when the universe is large , may lead to a predicted egbr in a time symmetric universe that is orders of magnitude larger than the minimal prediction outlined above .i deal with these questions in turn .a photon escaping our galaxy is unlikely to encounter another galaxy before it reaches the time reverse of the present epoch .in fact , as shown in section [ sec : opacity ] , galaxies between the present epoch and its time reversed equivalent subtend , at most , roughly a mere of the sky ( [ eq : hs ] ) ( neglecting curvature and clumping . ) in light of the present lack of detailed information about our galaxy forming era ( and via time symmetry its time - reverse ) , a photon s fate after that is more difficult to determine . a straightforward extrapolation of the results of section [ sec : opacity ] ( or _ cf .e.g. _ section 13 of peebles ) shows that the optical depth for encounters with galaxies of the same size and numbers as today is between a redshift of and today .again this neglects curvature ( hence overestimating ) and clumping ( which now underestimates . 
) assuming the bright parts of galaxies form at , this gives only , and the sky is nt covered with them until .this however is roughly at the upper limit on how old galaxies are thought to be .on the other hand , examination of quasar spectra ( out to ) show that most lines of sight pass through many clouds of high column densities of hydrogen called lyman- forest clouds and , at higher densities , damped lyman- systems .( peebles is a useful entry point on all of these matters , as is . are the proceedings of a recent conference concerned with these lyman systems . )the highest density clouds may be young galaxies , but if so galaxies were more diffuse in the past as the observed rate of interception of an arbitrary line of sight with these clouds is a factor of a few or more greater than that based on the assumption that galaxy sizes are constant .( obviously , this would not be too surprising . ) for instance , for the densest clouds peebles relates the approximate formula for the observed interception rate per unit redshift of a line of sight with a cloud of column density greater than or equal to ( in units of ) , in a range of redshifts about .for lyman- forest clouds , , the interception rate is considerably higher .( for some models see . )thus an arbitrary line of sight arriving on our galaxy from a redshift of five , say , is likely to have passed through at least one cloud of column density comparable to a galaxy , and certainly many clouds of lower density . what might this mean for time symmetry ?( for specificity i shall concentrate on photons which are optical today , say around 5000 .this band was chosen because at these wavelengths we have the luxury of the coincidence of decent observations , relatively well understood theoretical predictions for the background due to galaxies , the absence of other plausible sources for significant contributions , and a respectable understanding of the intergalactic opacity , including in particular some confidence that the future light cone is transparent . )photons at 5000 today are at the lyman limit ( 912 ) at , and so are ionizing before that . at these redshifts the bounds on the amount of smoothly distributed neutral hydrogen ( determined by independent measures such as the gunn - peterson test ) are very low ( _ cf ._ section [ sec : flc ] ) , presumably because that part of the hydrogen formed at recombination which had not been swept into forming galaxies was ionized by their radiation . before the galactic engines condensed and heated up ,however , this neutral hydrogen would have been very opaque to ionizing radiation .similarly , near lyman - limit clouds with are opaque at these frequencies .the upshot is that most photons from our galaxies which are optical today will make it well past the time reverse of the present epoch , likely ending up in the ( time - reversed ) l- forest or in a young ( to time reversed observers ! ) galaxy by .( here is the epoch corresponding to the time - reverse of redshift . 
)the very few that survive longer must be absorbed in the sea of neutral hydrogen between and ( their ) recombination epoch , .now , the important point is that on average , galaxies radiate isotropically into the full of sky available to them .the lesson of the previous paragraph is that most lines of sight from galaxies in the expanding phase will not encounter a high column density cloud until a fairly high time - reversed redshift , , at which point many lines of sight probably _ will _ intersect one of these proto - galaxies or their more diffuse halos . if most photons from our galaxies have not been absorbed by this point , _ this is not consistent with time symmetry _ : the rate of emission of ( what is today ) optical radiation by stars in galaxies could not be time symmetric if the light of the entire history of galaxies in the expanding phase ends up only on the galaxies of the recollapsing phase at high ( due consideration of redshifting effects is implied , of course . )put another way , time symmetry requires the specific energy density in the backround radiation to be time symmetric .thus the emission rate in the expanding era must equal ( what we would call ) the absorption rate in the recontracting phase .if stars in galaxies were exclusively both the sources ( in the expanding phase ) and sinks ( as we would call them in the recontracting phase ) of this radiation , galactic luminosities in the expanding phase would have to track the falling rate of absorption due to photon collisions " with galaxies .this is absurd . at the present epoch , for example , _ at all frequencies_ galaxies would ( by time symmetry ) have to be absorbing the diffuse egbr ( a rate for which the upper limit is determined entirely by geometry in the black galaxy " approximation ) at the same rate as their stars were radiating ( a rate that , in a time symmetric universe which resembles our own , one expects to be mostly determined by conventional physics . )that is , stars would be in radiative equilibrium with the sky !this may be called the no olber s paradox " argument against the notion that a single class of localized objects could be exclusively responsible for the egbr in a transparent universe equipped with time symmetric boundary conditions .( it might be thought that this problem would be solved if the time symmetric boundary conditions lead galaxies to radiate preferentially in those directions in which future galaxies lie .this is not a viable solution , because as noted above , only a small fraction of the sky is subtended by future galaxies up to high time reversed redshift .the deviations from isotropic emission would be dramatic . ) thus , the option consistent with time symmetry is that most galactic photons which are optical today will ultimately be absorbed in the many ( by time symmetry ) time - reversed lyman- forest clouds or lyman - limit clouds believed to dwell between galaxies , and not in the stars of the time - reversed galaxies themselves .fortunately for the notion of time symmetry this indeed appears to be the case .careful studies of the opacity associated with lyman systems , indicates , within the bounds of our rather limited knowledge , that the light cone between and is essentially totally opaque to radiation that is 5000 at , and that this is due largely to lyman clouds near and in the middle range of observed column densities , or so .( to be honest , it must be admitted that hard data on just such clouds is very limited . 
)we have now arrived at a terrible conundrum for the notion of time symmetry .even if one is willing to accept the amazingly detailed correlations between the expanding and recontracting eras that reconciling a transparent future light cone with time symmetry requires , and even if the excess " radiation correlated with the galaxies of the recollapsing era were to be observed , this picture is incompatible with what little is known about the physical properties of the lyman- forest . recalling the minimal prediction above for the excess background required by time symmetry ,the prediction is that the lyman- forest has produced an amount of radiation at least comparable to that produced by the galaxies to our past .there is no mechanism by which this is reasonable ._ there is no energy source to provide this amount of radiation . _more prosaically , the hydrogen plasma in which the clouds largely consist is observed ( via determination of the line shape , for example ) to be at kinetic temperatures of order , heated by quasars and young galaxies .thermal bremsstrahlung is notoriously inefficient , and line radiation at these temperatures is certainly insufficient to compete with nuclear star burning in galaxies ! at for instance5000today , essentially _ no _ radiation is expected from forest clouds at all , let alone an amount comparable to that generated by galaxies . remembering that by redshifts of 4.5 the lyman forest is essentially totally opaque shortward of 5000 ( observed ) , it might have been imagined that an early generation of galaxies veiled by the forest heated up the clouds sufficiently for them to re - radiate the isotropic background radiation required by time symmetry . while it is true that quasars and such are likely sources of heat for these clouds ( * ? ? ?* for example ) , aside from the considerable difficulties in getting the re - radiated spectrum to resemble that of galaxies , the observed temperatures of the clouds are entirely too low to be compatible with the _ minimum _ amount of energy emission in the bands required .( a related restriction arises from present day observations of cosmological metallicities , which constrain the amount of star burning allowed to our past .if observed discrete sources came close to accounting for the required quantity of heavy elements , the contribution of a class of objects veiled completely by the lyman forest would be constrained irrespective of observations of the egbr .at present direct galaxy counts only provide about 10% of the current upper limits on the extragalactic background light ( _ cf . _ section [ sec : observations ] ) , the rest conventionally thought to arise in unresolvable galactic sources . consequently , correlating formation of the heavy elements with observed discrete sources does not at present provide a good test of time symmetry . at any rate ,such a test is likely to be a less definitive constraint on time symmetry because it is possible that a portion of the radiation lighting the lyman forest from high redshift is not due to star burning , but to accretion onto supermassive black holes at the centers of primordial galaxies .thus the best observational test is the most direct one , comparison of the observed egbr with the contribution expected from galaxies . 
)the possibility that somehow the excess radiation does _ not _ come from the lyman- forest , but somehow shines through from other isotropically distributed sources even further in the past , is hardly more appealing .familiar physics tells us that the forest is totally opaque to radiation that is 5000at .the conclusion had better be that the universe is not time symmetric , rather than that time symmetry engineers a clear path only for those photons correlated with galaxies in the recollapsing epoch ( and not , say , the light from quasars . ) moreover , even if that were the case , analagous difficulties apply to the vast sea of neutral hydrogen that existed after recombination , totally opaque to ionizing radiation , and again to the highly opaque plasma which constituted the universe _ before _ recombination .it is possible to conjure progressively more exotic scenarios which save time symmetry by placing the onus on very special boundary conditions which engineer such rescues , but this is not the way to do physics .the only _ reasonable _ way time symmetry could be rescued would be if it were discovered that for reasons unanticipated here , the future light cone were not transparent after all , thus obviating the need for an excess background radiation with all its attendant difficulties .otherwise , it is more reasonable to conclude that a universe with time symmetric boundary conditions would not resemble the one in which we actually live .now that we have seen what kind of trouble time symmetry can get into with only the _ minimal _ required excess background radiation , it is time to make the problems worse .the background radiation correlated with the galaxies of the recollapsing era was bounded from below , via time symmetry , by including only the radiation that has been emitted by the galaxies to our past .but as our stars continue to burn , if the future light cone is indeed transparent it is possible a great deal more radiation will survive into the recollapsing era .how much more ? to get an idea of what s possible it is necessary to know both what fraction of the baryons left in galaxies will be eventually be burned into starlight , and when . for a rough upper bound , assume that _ all _ of the matter in galaxies today , including the apparently substantial dark halos ( determined by dynamical methods to contribute roughly ) , will eventually be burned into radiation . to get a rough lower bound ,assume that only the observed luminous matter ( ) will participate significantly , and that only a characteristic fraction of about 4% of _ that _ will not end up in remnants ( jupiters , neutron stars , white dwarfs , brown dwarfs , black holes , _ etc ._ ) to overestimate the energy density of this background at , assume that all of this energy is released in a sudden burst at some redshift .then by time symmetry , further star burning will yield a background of radiation correlated with time - reversed galaxies ( expressed as a fraction of the critical density and scaled to ) somewhere in the range ( here i have used the fact that the mass fraction released in nuclear burning as electromagnetic radiation is .007 . 
) when the upper limit is two orders of magnitude more than is in the cmbr today and three orders of magnitude more than present observational upper limits on a diffuse optical extragalactic background ( _ cf ._ section [ sec : observations ] ) .the lower bound , however , is comparable to the amount of radiation that has already been emitted by galaxies .thus if the lower bound obtains , the prediction for the optical egbr in a time symmetric universe is only of order three times that due to the galaxies to our past ( if the excess background inferred from continued star burning is not distributed over many decades in frequency , and if most of this burning occurs near . ) as will be seen in section [ sec : observations ] , this may still be consistent with present observational upper limits . on the other hand , if something closer to the upper limit obtains this is a clear death blow to time symmetry .a more precise prediction is clearly of interest .this would entail acquiring a detailed understanding of further galactic evolution , integrating over future emissions with due attention to the epoch at which radiation of a given frequency is emitted .( naturally , this is the same exercise one performs in estimating the egbr due to galaxies to our past . )some idea of the possible blueshift ( ) involved comes from estimating how long it will take our galaxies to burn out .this should not be more than a factor of a few greater than the lifetime of the longest lived stars , so a reasonable ballpark figure is to assume that galaxies will live for only another ten billion years or so . for convenience ,assume that galaxies will become dark by for some , where is the present age of the universe . to overestimate the blueshift at this time , assume the universe is flat , so that for reasonable s this does not amount to a large ( in order of magnitude ) transfer of energy to the radiation from cosmological recontraction . in a similar fashion to continued burning of our stars ,any isotropic background produced to our future might by time symmetry be expected to imply an additional contribution to the egbr in an appropriately blueshifted band .for instance , even if continued star burning does not ( by time symmetry ) yield a background in contradiction with observations of the egbr , it is possible that accretion onto the supermassive black holes likely to form at the centers of many galaxies could ultimately yield a quantity of radiation dramatically in excess of that from star burning alone . in the absence of detailed information about such possibilitiesit is perhaps sufficient to note that ignoring possible additional contributions leads to a lower limit on the egbr correlated with sources in the recontracting era , and i will therefore not consider them .there is one worrying aspect , however . 
as discussed in some detail by page and mckee for an approximately universe , andcommented on in a related context in section [ sec : collapse ] , if baryons decay then considerable photons may be produced by for instance the pair annihilation of the resulting electrons and positrons .should not this , by time symmetry , yield a further contribution to the egbr ?the answer may well be yes , but there is a possible mechanism which avoids this conclusion .somehow , with cpt symmetric boundary conditions , the density of baryons must be cpt symmetric .therefore either baryons do not decay , or they are re - created in precisely correlated collisions .( in the absence of a final boundary condition , the interaction rate would be too low for ( anti-)baryon recombination to occur naturally . )the latter ( boundary condition enforced ) possibility appears extraordinary , but if baryon decay occurs in a universe with cpt symmetric boundary conditions , it could be argued that the best electrons and photons for the job would be just those created during baryon decay in the expanding phase , thereby removing this photon background . the no olber s paradox " argument , that most of an isotropically emitting source s light must end up in some homogeneous medium , and not , if time symmetry is to be preserved , equivalent time - reversed point sources , may not apply here if matter is relatively homogeneously distributed when the universe is large .baryon decay _might _ smooth out inhomogeneities somewhat before the resulting electrons and positrons annihilate .( this requires the kinetic energies of the decay products to be comparable to the gravitational binding energy of the relevant inhomogeneity . )then the picture is no longer necessarily of localized sources emitting into , but of a more homogeneous photon - producing background that might cover enough of the sky to more reasonably secrete the required correlations for reconstruction of more massive particles in the recontracting phase .nevertheless the extreme awkwardness of this scenario is not encouraging .the former possibility , clearly more palatable , is that baryons do not decay significantly either because is not so near one after all that they have time enough to do so , or because the presence of the final boundary condition suppresses it . either way , in this ( possibly desperate ) picture there is no additional background due to decaying baryons .a very similar question relates to the enormous number of particles produced in the last stages of black hole evaporation .this time , however , the objection that our black holes cover only a small portion of the recontracting era s sky , and consequently their isotropic emissions could not do the job of forming the white holes of the recontracting era ( black holes to observers there ) time symmetrically , would seem to be forceful .thus if the universe is indeed very long - lived , black hole evaporation may well require yet an additional observable background .this may not be such a serious difficulty if is not very close to one , however , as the time scales for the evaporation of galactic - scale black holes are quite immense. 
further discussion of black holes in time symmetric cosmologies may be had in .one last point regarding the predictions described in this section needs to be made .clearly , a loose end which could dramatically change the conclusions is the condition of matter in the universe when it is very large .this is uncertain territory , not the least because that is the era in a statistically time symmetric universe when the thermodynamic arrow must begin rolling over . neglecting this confusing complication ( reasonable for some purposes as many interactions are most significant when the universe is small ), there is not a great deal known about what the far future should look like .the study of page and mckee gives the most detailed picture in the case of a flat universe .as mentioned at the end of section [ sec : flc ] , davies and twamley find from this work that interactions of optical ( at ) backgrounds with the electrons produced by baryon decay do not appear to be significant , primarily due to their diffuseness . on the other hand ,if supermassive black holes ( or any large gravitational inhomogeneities ) appear , interactions with them may induce anisotropies in the future starlight .however , clumping only decreases the probability a line of sight intersects a matter distribution. therefore large overdensities probably never subtend enough solid angle to interfere with most lines of sight to the recollapsing era unless gravitational collapse proceeds to the point where it dramatically alters homogeneity and isotropy on the largest scales .because collapse is rather strongly constrained by time symmetry ( _ cf ._ section [ sec : collapse ] ) i will not consider this possibility . thus , insofar as the prediction of an excess " background radiation correlated with galaxies in the recollapsing era is concerned , the state of the universe when it is large would does not obviously play a substantial role . nevertheless , given the manifold difficulties cited , the sentiment expressed above is that the best hope a time symmetric model has of providing a realistic description of our universe is that some unforseen mechanism makes the future light cone opaque after all . to summarize , because our future light cone is transparent , neglecting starburning to our future and considering only the contribution to the egbr from stars in our past provides an estimate of the total egbr correlated with galaxies in the recollapsing era that is actually a lower limit on it . as mechanisms for distortingthe spectrum generically become less important as the universe expands ( barring unforseen effects in the far future ) , it is reasonable to take models for the present egbr due to stars in our past as a minimal estimate of the isotropic background of starlight that will make its way to the recollapsing era . by time symmetry we can expect that at the same scale factor in the recollapsing era similar ( but time - reversed ) conditions obtain . 
as argued above , by time symmetry and geometry this future starlight " must appear to us as an additional background emanating from homogeneously distributed sources to our past _ other _ than galaxies .therefore , if the universe has time symmetric boundary conditions which ( more or less ) reproduce familiar physics when the universe is small , and our future light cone is transparent , the optical extragalactic background radiation should be at least twice that expected to be due to stars in our past alone , and possesses a similar spectrum .if a considerable portion of the matter presently in galaxies will be burned into radiation in our future , by time symmetry the expected background is potentially much larger , and observations of the egbr may already be flatly incompatible with observations . nevertheless ,in the next section i shall be conservative and stick with the minimal prediction in order to see how it jibes with observations . at optical wavelengths, it is generally believed that the isotropic background of radiation from extragalactic sources is due entirely to the galaxies on our past light cone . as shown in the previous section ,if our universe is time symmetric there must in addition be a significant contribution correlated with the galaxies of the recollapsing era which arises , not in galaxies , but in some homogeneously distributed medium , say for instance the lyman clouds .the apparent inconsistency of this prediction with what is known about the forest clouds has been discussed above , and may be taken as an argument that our universe is not time symmetric . in this section judgement will be suspended , and the prediction of an excess " egbr at least comparable to , but over and above , that due to galaxies to our past will be compared with experiment .the conclusion will be that current data are still consistent with time symmetry _ if _ our galaxies will not , in the time left before they die , emit a quantity of radiation that is considerably greater than that which they already have .a useful resource on both the topics of this section is .tyson has computed how much of the optical extragalactic background is accounted for by resolvable galaxies , concluding that known discrete sources contribute at 4500 . however , because very distant galaxies contribute most of the background radiation it is believed that unresolvable sources provide a significant portion of the egbr . at presentit is not possible to directly identify this radiation as galactic in origin .however , as understanding of galactic evolution grows so does the ability to model the optical extragalactic background due to galaxies .these predictions naturally depend on the adopted evolutionary models , what classes of objects are considered , the cosmological model , and so on . as representative samples i quote the results of code and welch for a flat universe in which all galaxies evolve , at 5000 , and of cole _et al . _ for a similar scenario , also at 5000 .these figures are to be compared with the results of ( extraordinarily difficult ) observations . as surveyed by mattila , they give at 5000 an upper limit of as far as i am aware , there has been no direct detection of an optical radiation background of extragalactic origin .this upper limit represents what is left after what can be accounted for in local sources is removed . 
comparing these results ,it is clear that if current models of galactic evolution are reliable , present observations of the extragalactic background radiation leave room for a contribution from non - galactic sources that is comparable to the galactic contribution , but not a great deal more .these observations therefore constrain the possibility that our universe is time symmetric .if believable models indicate that further galactic emissions compete with what has been emitted so far , time symmetry could already be incompatible with experiment . a direct detection of the extragalactic background radiation , or even just a better upper limit , could rule out time symmetry on _ experimental _ grounds soon .( the ideal observational situation would be convergence of all - sky photometry and direct hst galaxy counts , allowing one to dispense with models completely . )in this section i comment on some issues of a more theoretical nature which must be faced in any attempt to construct a believable model of a time symmetric universe . among other questions , in such a universe the difficulties with incorporating realistic gravitational collapse , and in deriving the fact that radiation is retarded , appear to be considerable . a careful account of the growth of gravitational inhomogeneities from the very smooth conditions when the universe was small is clearly of fundamental importance in any model of the universe , time symmetric or not , not the least because it appears to be the essential origin of the thermodynamic arrow of time from which the other apparent time arrows are thought to flow .however , if matter in a closed universe is to be smooth at both epochs of small scale factor then it is incumbent to demonstrate that einstein s equations admit solutions in which an initially smooth universe can develop interesting ( non - linear ) inhomogeneities such as galaxies which eventually smooth out again as the universe recollapses .this is because the universe is certainly in a quasiclassical domain now , and if it is assumed to remain so whenever the scale factor is large classical general relativity must apply .laflamme has shown that in the linear regime there are essentially no perturbations which are regular , small , and growing away from both ends of a closed frw universe , so that in order to be small at both ends a linear perturbation must be too small to ever become non - linear .but that is not really the point .interesting perturbations must go non - linear , and there is still no proof of which i am aware that perturbations which go non - linear can not be matched in the non - linear regime so as to allow solutions which are small near both singularities .put differently , what is required is something like a proof that weyl curvature must increase ( _ cf . _ ) , _ i.e. _ that given some suitable energy conditions the evolution of gravitational inhomogeneity must be monotonic even in the absence of trapped surfaces , except possibly on a set of initial data of measure zero . 
while perhaps plausible given the attractive nature of gravity , proof has not been forthcoming .( it is important for present purposes that genericity conditions on the initial data are not a central part of such a proof , for physically realistic solutions which meet the time symmetric boundary conditions must describe processes in the recollapsing era which from our point of view would look like galaxies disassembling themselves .reducing such a solution to data on one spacelike hypersurface at , say , maximum expansion , said data will be highly specialized relative to solutions with galaxies which do not behave so unfamiliarly . if such solutions with physically interesting inhomogeneities do exist , the real question here is whether they are generic _ according to the measure defined by the two - time boundary conditions . _ since it ought to be possible to treat this problem classically , in principle this measure is straightforward to construct .the practical difficulty arises in evaluating the generic behaviour of solutions once they go non - linear .my own view is that it is highly likely that such solutions remain exceedingly improbable according to a measure defined by a _generic _ set of ( statistical ) boundary conditions requiring the universe to be smooth when small . as noted ,laflamme has already shown that when the initial and final states are required to be smooth , the growth of inhomogeneity is suppressed if only linear perturbations are considered .unless boundary conditions with very special correlations built in are imposed , probable solutions should resemble their smooth initial and final states throughout the course of their evolution , never developing physically interesting inhomogeneities .note the concern here is not with occurrences which would be deemed unlikely in a universe with an initial condition only , but occur in a time symmetric universe because of the fate " represented by the final boundary condition , but with occurrences which are unlikely _ even in a universe with ( generic ) time symmetric boundary conditions . _ ) a related question concerns collapsed objects in a time symmetric universe .page and mckee have studied the far future of a frw universe under the assumption that baryons decay but electrons are stable .assuming insensitivity to a final boundary condition their conclusions should have some relevance to the period before maximum expansion in the only slightly over - closed ( and hence very long - lived ) model universes that have mostly been considered here .as discussed in section [ sec : amplifications ] , if the universe is very long lived it might be imagined that the decay of baryons and subsequent annihilation of the produced electrons and positrons could smooth out inhomogeneities , and also tend to destroy detailed information about the gravitational history of the expanding phase ( by eliminating compact objects such as neutron stars , for instance . )thus , even though interactions are unlikely to thermalize matter and radiation when the universe is very large ( _ cf ._ section [ sec : opacity ] ) , there may be an analagous information loss via the quantum decay of baryons which could serve a similar function . ( for a completely different idea about why quantum mechanics may effectively decouple the expanding and recollapsing eras , see . 
) in any case , if there is no mechanism to eliminate collapsed objects before the time of maximum expansion , then the collapsed objects of the expanding phase are the same as the collapsed objects of the recontracting phase , implying detailed correlations between the expanding and recontracting era histories of these objects which might lead to difficulties of consistency with the rth .a particular complication is that it is fairly certain that black holes exist , and that more will form as inhomogeneity grows .the only way to eliminate a black hole is to allow it to evaporate , yet the estimates of page and mckee indicate that it is more likely for black holes to coalesce into ever bigger holes unless for some reason ( a final boundary condition ? ) there is a maximum mass to the black holes which form , in which case they may have time enough to evaporate ( though this requires to be _ exceedingly _ close to one . ) in fact , it may be imperative for a time symmetric scenario that black holes evaporate , else somehow they would have to turn into the white holes of the recollapsing era ( black holes to observers there .) this is because we do not observe white holes today .( a related observation is that in order for the universe to be smooth whenever it is small , black / white hole singularities can not arise . ) here the evaporation of black holes before maximum expansion would be enforced by the time symmetric boundary conditions selecting out only those histories for which there are no white holes in the expanding era and _ mutatis mutandis _ for the recollapsing era . on the other hand , if evaporating black holes leave remnants they too must be worked into the picture .again , if the results of the stochastic models with two - time low entropy boundary conditions discussed above are to be taken seriously , the conclusion should probably be that boundary conditions requiring homogeneity when the universe is small suppress histories in which significant gravitational collapse occurs by assigning low probabilities to histories with fluctuations that will go non - linear .it hardly needs emphasizing that all of these considerations are tentative , and greater clarity would be welcome .besides gravitational considerations , radiation which connects the expanding and recollapsing eras provides another example of a physical process which samples conditions near both boundary conditions .while gravitational radiation and neutrinos are highly penetrating and are likely to provide such a bridge , in neither case are we yet capable of both effectively observing , and accurately predicting , what is expected from sources to our past . therefore the primary focus of this investigation has been electromagnetic radiation . above it was confirmed that modulo the obviously substantial uncertainty regarding the condition of the universe when it is large , even electromagnetic radiation is likely to penetrate to the recollapsing era .section [ sec : egbr ] was concerned with one relatively prosaic consequence of this prediction if our universe possesses time symmetric boundary conditions . herei comment briefly on another .maxwell s equations , the dynamical laws governing electromagnetic radiation , are time symmetric . 
it is generally believed that the manifest asymmetry in time of radiation _ phenomena _ , that is , that ( in the absence of source - free fields ) observations are described by retarded solutions rather than advanced , is ascribable fundamentally to the thermodynamic arrow of time without additional hypotheses .( for a contemporary review see . ) however , if our universe possesses time symmetric boundary conditions then near the big bang the thermodynamic arrow of entropy increase runs oppositely to that near the big crunch .since radiation can connect the expanding and recollapsing eras , the past light cone of an accelerating charge in the expanding era ends up in matter for which the entropy is increasing , while its future light cone terminates in matter for which entropy is supposed to be _decreasing_. if the charge radiates into its future light cone this implies detailed correlations in this matter with the motion of the charge which are incompatible with the supposed entropy decrease there ( entropy increase to time reversed observers ) , although it is true that these correlations are causally disconnected to time - reversed observers , and consequently invisible to _ local _ coarse grainings defining a local notion of entropy for such observers .this state of affairs makes it difficult to decide whether radiation from an accelerating charge ( if its radiation can escape into intergalactic space ) should be retarded ( from the perspective of observers in the expanding era ) or advanced or some mixture of the two .the conditions under which the radiation arrow is usually derived from the thermodynamic arrow of surrounding matter do not hold .( notice how this situation is reminiscent of the requirements necessary to derive retardation of radiation in the wheeler - feynman absorber theory " of electrodynamic phenomena . )hence the ability of radiation to connect the expanding and recollapsing epochs brings into question the self - consistency of assuming time symmetric boundary conditions on our universe together with physics as usual " ( here meaning radiation which would be described as retarded by observers in both the expanding and recollapsing eras ) near either end .the retardation of radiation is another important example of a physical prediction which would be expected to be very different in a universe with time symmetric boundary conditions than in one without . 
once again , if the results of the simple stochastic models are generally applicable , the retardation of radiation should no longer be a prediction in such a universe .to summarize this section , consideration of gravitational collapse and radiation phenomena reveals that construction of a model universe with time symmetric boundary conditions which resembles our own may be a difficult task indeed .there are strong suggestions that a model with time symmetric boundary conditions which mimic our own early universe would behave nothing like the universe in which we live .such a model would most likely predict a universe which remained smooth throughout the course of its evolution , with coupled matter components consequently remaining in the quasi - static equilibrium " appropriate to a dynamic universe .in spite of the oft - expressed intuitive misgivings regarding the possibility that our universe might be time symmetric , it has generally been felt that if sufficiently long - lived , there might be no way to tell the difference between a time symmetric and time asymmetric universe .building on suggestions of cocke , schulman , davies , and gell - mann and hartle ( among others ) , this work has explored in some detail one physical process which , happily , belies this feeling : no matter how long our universe will live , the time symmetry of the universe implies that the extragalactic background radiation be _ at least _ twice that due to the galaxies to our past .this is essentially due to the fact that light can propagate unabsorbed from the present epoch all the way to the recollapsing era .moreover , geometry and time symmetry requires this excess " egbr to be associated with sources _ other _ than the stars in those galaxies , sources which , according to present knowledge about the era during which galaxies formed , are not capable of producing this radiation ! thus the time symmetry of a closed universe is a property which is _ directly accessible to experiment _ ( present observations are nearly capable of performing this test ) , as well as extremely difficult to model convincingly on the basis of known astrophysics . in addition , the other theoretical obstacles remarked upon briefly in sections [ sec : amplifications]and [ sec : difficulties ] make it difficult to see how a plausible time symmetric model for the observed universe might be constructed .in particular , any such attempt must demonstrate that in a universe that is smooth whenever it is small , gravitational collapse can procede to an interesting degree of inhomogeneity when the universe is larger .this is necessary in order that the universe display a thermodynamic arrow ( consistently defined across spacelike slices ) which naturally reverses itself as the universe begins to recollapse .furthermore , it appears unlikely that the usual derivation of the retardation of radiation will follow through in a time symmetric universe in which radiation can connect regions displaying opposed thermodynamic arrows . finally , in the context of the time neutral generalized quantum mechanics employed as the framework for this discussion , unless the locally observed matter - antimatter asymmetry extends globally across the present universe , .] natural choices of cpt - related boundary conditions yield a theory with trivial dynamics if the deviations from exact homogeneity and isotropy are specified in a cpt invariant fashion ( see the last paragraph of section [ sec : motivations ] ) . 
in sum , were the excess " egbr which has been the primary concern of this investigation to be observed , it would appear necessary to place the onus of explanation of the fact that the final boundary condition is otherwise practically invisible upon very specially chosen boundary conditions which encode the details of physics in our universe .this would make it difficult to understand these boundary conditions in a natural way . on the dual grounds of theory and experiment, it therefore appears unlikely that we live in a time symmetric universe .( a definitive expurgation must await more thorough investigation of at least some of the aforementioned difficulties . )for the malcontents in the audience , this appendix offers a flat space model explicitly illustrating the no olber s paradox " argument of section [ sec : amplifications ] . as cosmologicalredshifting is time symmetric , the complications due to curvature are inconsequential for present purposes .( curvature may be included in a straightforward fashion , but that and many other embellishments are , out of courtesy , foregone . )therefore , consider the universe of figure 1 as flat . for convenience ,relocate the zero of conformal time to be at the moment of maximum expansion .the specific energy density in radiation obeys a transfer equation here represents sources ( according to expanders ; thus in the recontracting era may be negative ) , and sinks ( same comment ) , of radiation .time symmetry implies that and now , suppose it is imagined that a time symmetric universe contains only one class of localized , homogeneously and isotropically distributed sources ( _ i.e ._ galaxies ) in the expanding era , with corresponding time - reversed sinks ( in the language used by expanding era observers ) in the recontracting era ( _ i.e . _ thermodynamically reversed galaxies . ) for isotropically emitting sources , in the black galaxy " approximation the absorption rate ( emission rate to thermodynamically reversed observers ) in the recontracting era can be thought of as being controlled by the amount of radiation from the expanding era which the recontracting era s galaxies intercept .that is , the thermodynamically reversed observers of the recontracting era would see their galaxies emitting at a rate given ( at most ) by using time symmetry , and where is as in equation ( [ eq : hardspheresigma ] ) .( this is merely the expression of the fact that in the essentially geometric black galaxy " approximation , galaxies do not care if they are intercepting radiation from " the past or the future . ) but from this it is obvious that and galaxies are in radiative equilibrium with the sky , a situation reminiscent of the historically important olber s paradox ( why is the night sky dark ? " ) thus in a transparent , time symmetric universe in which the night sky is dark , there must be an additional class of sources emitting the radiation which is correlated with the galaxies of the recontracting era .( in reality , . indeed , using a characteristic galactic luminosity , ( _ cf . _( [ eq : hs ] ) ) , ( [ eq : cw ] ) , and taking .it should not come as a surprise that the energy density in the optical egbr is still increasing ! )i wish to thank r. antonucci , o. blaes , t. hurt , and r. geller for useful conversations on matters astrophysical , j. t. whelan for discussions , comments , and questions , and j. b. hartle for raising this question and for provocative discussions .this work was supported in part by nsf grant no .phy90 - 08502 . 
m. gell - mann and j. b. hartle ( 1994 ) , _ in _ proceedings of the nato workshop on the physical origins of time asymmetry " , j. j. halliwell , j. perez - mercader , and w. zurek , eds .( cambridge university press , cambridge ) , and references therein .l. s. schulman ( 1991 ) , _ physica _ * a177 * , 373 , and l. s. schulman ( 1994 ) , _ in _ proceedings of the nato workshop on the physical origins of time asymmetry " , j. j. halliwell , j. perez - mercader , and w. zurek , eds .( cambridge university press , cambridge ) .r. laflamme ( 1993 ) , _ class .* 10 * , l79 , and r. laflamme ( 1994 ) , _ in _ proceedings of the nato workshop on the physical origins of time asymmetry " , j. j. halliwell , j. perez - mercador , and w. zurek , eds .( cambridge university press , cambridge ) . | this paper examines an observable consequence for the diffuse extragalactic background radiation ( egbr ) of the hypothesis that if closed , our universe possesses time symmetric boundary conditions . by reason of theoretical and observational clarity , attention is focused on optical wavelengths . the universe is modeled as closed friedmann - robertson - walker . it is shown that , over a wide range of frequencies , electromagnetic radiation can propagate largely unabsorbed from the present epoch into the recollapsing phase , confirming and demonstrating the generality of results of davies and twamley . as a consequence , time symmetric boundary conditions imply that the optical egbr is at least twice that due to the galaxies on our past light cone , and possibly considerably more . it is therefore possible to test _ experimentally _ the notion that if our universe is closed , it may be in a certain sense time symmetric . the lower bound on the excess " egbr in a time symmetric universe is consistent with present observations . nevertheless , better observations and modeling may soon rule it out entirely . in addition , many physical complications arise in attempting to reconcile a transparent future light cone with time symmetric boundary conditions , thereby providing further arguments against the possibility that our universe is time symmetric . this is therefore a demonstration by example that physics today can be sensitive to the presence of a boundary condition in the arbitrarily distant future . |
a recent entry in wikipedia , an internet - based encyclopedia , defines the quantum zeno effect as follows : `` the quantum zeno effect is a quantum mechanical phenomenon first described by george sudarshan and baidyanaith misra of the university of texas in 1977 . it describes the situation that an unstable particle , if observed continuously , will never decay . this occurs because every measurement causes the wavefunction to ` collapse ' to a pure eigenstate of the measurement basis . '' this definition is close to the original language of misra and sudarshan , but is not sufficiently general to describe the many situations that are considered to be examples of the quantum zeno effect . it is true that the quantum zeno effect describes the situation in which the decay of a particle can be prevented by observations on a sufficiently short time scale . however , the quantum zeno effect is much more general , since it describes the situation in which the time evolution of _ any _ quantum system can be slowed by sufficiently frequent `` observations . ''
the references to observations and to wavefunction collapse tend to raise unnecessary questions related to the interpretation of quantum mechanics . actually , all that is required is that some interaction with an external system disturb the unitary evolution of the quantum system in a way that is effectively like a projection operator . finally , the word `` never '' describes a limiting case . a slowing of the time evolution , as opposed to a complete freezing , is generally regarded as a demonstration of the quantum zeno effect . [ figure [ citesfig ] : graph of the number of citations per year to misra and sudarshan and to itano _ et al _ . ] the 1977 article `` the zeno s paradox in quantum theory '' by misra and sudarshan studied the evolution of a quantum system subjected to frequent ideal measurements . they showed that , in the limit of infinitely frequent measurements , a quantum system would remain in its initial state . applied to the case of an unstable particle whose trajectory is observed in a bubble chamber or film emulsion , this result seemed to imply that such a particle would not decay , in contradiction to experiment . in this case , the resolution to the apparent paradox lies in the fact that the interactions between the particle and its environment that lead to the observed track are not sufficiently frequent to modify the particle s lifetime . the time distribution of literature citations to misra and sudarshan is shown in fig . [ citesfig ] . the total number of citations listed in the web of science database in october 2006 was 535 . the graph shows a relatively low but steady number of citations per year for about a decade , followed by a large increase that continues for over a decade , possibly peaking about 25 years after the original publication date . the great increase in the rate of citations in recent years is partially due to the increased interest in quantum information processing , where the quantum zeno effect may find practical applications . [ figure [ 3levelfig ] : energy - level diagram for the ihbw demonstration of the quantum zeno effect . ] the quantum zeno effect can be derived in an elementary way by considering the short - time behavior of the state vector . ( the treatment of misra and sudarshan is more general since it involves the density matrix . ) let $ |\psi(t)\rangle$ be the state vector at time $ t$ . if $ H$ is the hamiltonian , in units where $ \hbar = 1 $ , then the state vector at time $ t$ is $ |\psi(t)\rangle = \exp(-iHt)|\psi(0)\rangle$ , and the survival probability is $ P(t ) = |\langle\psi(0)|\psi(t)\rangle|^2 = |\langle\psi(0)|\exp(-iHt)|\psi(0)\rangle|^2 $ . if $ t$ is small enough , it should be possible to make a power series expansion : \[ |\psi(t)\rangle \approx \left [ 1 - iHt - \tfrac{1}{2}H^2t^2 \right ] |\psi(0)\rangle , \] so that the survival probability is \[ P(t ) \approx 1 - ( \Delta H)^2 t^2 , \label{quadratic } \] where \[ ( \Delta H)^2 \equiv \langle\psi(0)|H^2|\psi(0)\rangle - \langle\psi(0)|H|\psi(0)\rangle^2 . \] many quantum systems have states whose survival probability appears on ordinary time scales to be a decreasing exponential in time . this is inconsistent with the quadratic time dependence of eq . ( [ quadratic ] ) and implies that in such cases eq . ( [ quadratic ] ) holds only for very short times . consider the survival probability $ P(T)$ , where the interval $ [ 0 , T ] $ is interrupted by measurements at times $ T / n , 2T / n , \ldots , T$ . ideally , these measurements are instantaneous projections and the initial state is an eigenstate of the measurement operator . in that case , the survival probability is \[ P(T ) = \left [ 1 - ( \Delta H)^2 ( T / n)^2 \right]^n , \label{survival_n } \] which approaches 1 as $ n \rightarrow \infty$ . it is important to note that at this level there should be nothing controversial or problematic about the existence of the quantum zeno effect . the quantum zeno effect should be observed as long as the physical system can be made to display the behavior shown in eq . ( [ survival_n ] ) .
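the freezing behavior implied by eq . ( [ survival_n ] ) is easy to see numerically . the following short python sketch is illustrative only : the values chosen for $ \Delta H$ and $ T$ are arbitrary and are not taken from any of the systems discussed here ; it simply evaluates the idealized expression for an increasing number of equally spaced projective measurements .

```python
def survival_probability(delta_h: float, total_time: float, n: int) -> float:
    """Idealized survival probability [1 - (delta_h * T / n)**2]**n for n
    equally spaced instantaneous projections onto the initial state.
    Meaningful only while (delta_h * T / n)**2 << 1 (the quadratic regime)."""
    step = delta_h * total_time / n
    return (1.0 - step ** 2) ** n

# arbitrary illustrative values, not taken from the experiments cited here
delta_h, total_time = 1.0, 1.0
for n in (2, 4, 8, 16, 32, 64, 128):
    print(n, survival_probability(delta_h, total_time, n))
# the printed probabilities rise toward 1 as n grows, which is the
# quantum zeno effect in the ideal-measurement limit
```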
for a given system, it may be difficult or impossible to make measurements quickly enough for the quadratic time dependence of the survival probability to be observed , so that , as a practical matter , the quantum zeno effect can not be observed .it should be noted that the semantic arguments over terms such as `` measurement '' or `` observation '' can be avoided if we accept that a `` measurement '' is an operation that interrupts the unitary time evolution governed by in such a way as to yield eq .( [ survival_n ] ) as a good approximation .that is , the `` measurement '' should effectively act as a projection operator .according to this view , it is not necessary that the `` measurements '' be recorded by a macroscopic apparatus or that they be instantaneous .timing of the radio frequency and optical fields applied to the beryllium ions in the ihbw experiment , width=384 ]the experiment of itano , heinzen , bollinger , and wineland ( ihbw ) was based on a proposal of cook for observing the quantum zeno effect in a three - level atom ( see fig . [ 3levelfig ] ) .levels 1 and 2 are stable on the time scale of the experiment .level 3 decays to level 1 with the emission of a photon . in the experiment of ihbw ,levels 1 and 2 were two of the hyperfine sublevels of the ground state of the be ion .level 3 was a sublevel of the excited state that decayed only to level 1 .the experiment was carried out with a sample of about 5000 be ions confined by electric and magnetic fields in a penning trap .the steps in the experiment were as follows : 1 .the ions were prepared in level 1 by optical pumping with the laser beam .2 . a resonant radio frequency ( rf )magnetic field was applied for the interval required to drive the ions to level 2 .3 . during the time that the rf pulse was applied, a variable number of equally spaced short laser pulses was applied to the ions ( see fig . [ pulsefig ] ) .the laser ( resonant with the 1-to-3 transition ) was turned on , and the induced fluorescence was recorded .probability of making the 1-to-2 transition as a function of the number of optical `` measurement '' pulses.,width=480 ] the intensity of the laser - induced fluorescence at the end of the experiment was proportional to the population of level 1 . if there are no optical pulses during the long rf pulse , the population of level 2 as a function of the time that the rf pulse is applied is ,\label{rabiflop}\ ] ] where is proportional to the amplitude of the rf field .if the duration of the rf pulse is chosen to be ( a pi - pulse ) , then all of the population is transferred from level 1 to level 2 .if equally - spaced laser pulses of negligible duration are applied during the rf pi - pulse , the population of level 2 at time is ,\label{simplecalc}\ ] ] which approaches 0 as goes to infinity .figure [ datafig ] compares the data to theory .the solid bars represent the transition probability as a function of according to the simplified calculation of eq .( [ simplecalc ] ) .the bars with horizontal stripes represent the data .the bars with diagonal stripes represent a calculation that takes into account the finite duration of the laser pulses and optical pumping effects .the data are in reasonably good agreement with the simplified calculation and in better agreement with the improved calculation .the decrease in as increases demonstrates the quantum zeno effect .a variation of the experiment was carried out by initializing the ion in level 2 and then applying the rf field and the laser pulses . 
in this case , the transition from level 2 to level 1 was inhibited as increased .this is another example of the quantum zeno effect . in this case, the inhibition of the transition is accompanied by the _ absence _ of laser - induced fluorescence .recently , the quantum zeno effect was observed for an unstable quantum system by fischer _ et al _ . the quantum zeno effect for induced transitions and for unstable systemsare not fundamentally different , since they both follow from the general arguments of misra and sudarshan , but it has been difficult to observe in the latter case , because of the short times over which the decay is nonexponential .et al _ were able to create an artificial system ( atoms tunneling from a standing - wave light field ) in which the interactions could be controlled so as to observe the desired effects .as can be seen by the history of citations ( fig .[ citesfig ] ) , the publication of the ihbw experiment generated considerable interest . initially , some of the responses were critical in one way or another . some ( e. g. , ref . ) objected to the use of the term `` wavefunction collapse '' in describing the experiment .the authors responded that the concept of wavefunction collapse was not essential , and that any interpretation of quantum mechanics that yielded the same prediction of the experimental results should be regarded as valid .some objected to the fact that photons were not actually observed during the intermediate `` measurements , '' in the sense of having the scattered photons registered by a detector , so that the experiment did not actually demonstrate the quantum zeno effect .however , the results are predicted to be the same whether or not the intermediate measurements are made .it is enough that the measurements _ could _ have been made .as long as the laser interactions act effectively as projection operators , so that the algebra of eqs .( [ quadratic])([survival_n ] ) is followed , the experiment should be regarded as a demonstration of the quantum zeno effect .it should be noted that none of the criticisms were directed at the execution of the experiment itself , only at the interpretation . for the most part , the citations to ref . simply accept it as a demonstration of the quantum zeno effect .in fact , it is cited in quantum mechanics textbooks and popular science books .while misra and sudarshan originally used the term `` quantum zeno paradox , '' as did peres and others , the more recent work usually uses the term `` quantum zeno effect , '' perhaps because the effect no longer seems paradoxical .some authors distinguish between the quantum zeno _ paradox _ and the quantum zeno _ effect _ , but they do so in differing ways .pascazio and namiki call the situation in which the frequency of measurements is finite and the evolution is slowed the quantum zeno _ effect , _ and the limiting case in which the frequency of measurements is infinite and the evolution is frozen the quantum zeno _paradox_. block and berman call the inhibition of spontaneous decay the quantum zeno _ paradox _ and the inhibition of induced transitions ( as in the ihbw experiment ) the quantum zeno _ effect_. in ref . , home and whitaker reserve the term quantum zeno _ paradox _ for a negative - result experiment involving observations with a macroscopic apparatus .this definition of the quantum zeno paradox seems to exclude most , if not all , feasible experiments . 
in this context , the ihbw experiment is not regarded as an example of the quantum zeno paradox because a local interaction is present between the laser field and the atoms , and also because the electromagnetic field , containing zero or a few scattered photons , is not regarded as a macroscopic observation apparatus .they regard the type of experiments where the time evolution of a quantum system is affected by a direct interaction , for example with an external field , as examples of the quantum zeno _effect_. however , in a later publication the same authors treat the terms quantum zeno _ paradox _ and quantum zeno _ effect _ as synonymous and restrict both to nonlocal negative - result experiments involving a macroscopic observation apparatus . experiments that do not meet these criteria would not be examples of _ either _ the quantum zeno paradox _ or _ the quantum zeno effect , according to their later definition .several variations on the general theme of quantum zeno effects have been described .soon after the ihbw experiment was carried out , peres and ron showed that a partial quantum zeno effect results if the measurements are too weak to completely destroy the coherence of the state of the measured system .a modification of the ihbw experiment was proposed in which the measurement laser pulses are weakened .et al _ showed that a related effect , damped oscillations of the state populations , can occur if the duration of the experiment is extended , while weak measurements are made . some , including kofman and kurizki and facchi _ et al _ have shown that the decay of an unstable quantum system can be accelerated by frequent observationsthis is called the quantum anti - zeno effect or the inverse quantum zeno effect . as is the case for the quantum zeno effect, the observations must take place before the decay becomes exponential . unlike the quantum zeno effect , which follows from rather general arguments , e. g. eqs .( [ quadratic])([survival_n ] ) , the possibility of observing a quantum anti - zeno effect depends on the details of the system .the experiment of fischer _et al _ demonstrated the quantum anti - zeno effect as well as the quantum zeno effect .an interesting generalization of the quantum zeno effect is the concept of quantum zeno dynamics .frequent measurements can confine the evolution of a quantum system to a subspace of the hilbert space rather than simply to the initial state . compared to the ordinary quantum zeno effect, the difference is that the measurements distinguish not between the initial _ state _ and all other states but between a _subspace _ and the rest of the hilbert space .this form of quantum zeno effect may find application in quantum information processing .as already noted , the recent increase in the rate of citations to the articles of misra and sudarshan and ihbw is partially related to increased interest in quantum information processing . 
in this context, there have been various proposals to use the quantum zeno effect to preserve quantum systems from decoherence .beige _ et al _ have proposed an arrangement of atoms inside an optical cavity capable of carrying out quantum logic operations with low error rates within a decoherence - free subspace of the hilbert space .states outside the decoherence - free subspace are coupled strongly to the environment .the quantum zeno effect then leads to effective confinement of the system to the decoherence - free subspace , which is an example of a quantum zeno subspace .franson _ et al _ have proposed use of the quantum zeno effect to suppress errors in a linear optics implementation of quantum computation . in this implementation , the presence of two photons in the same mode indicates an error .the presence of a strong two - photon absorber in an optical fiber takes the role of the `` observer '' and suppresses the errors .other proposed applications of the quantum zeno effect to error prevention in quantum computation are discussed in refs . .quantum `` bang - bang '' control and related dynamical decoupling techniques utilize frequent , pulsed interactions to effectively prevent decoherence of a quantum system by confining the dynamics to a subspace .this is not exactly the quantum zeno effect , since the interactions are unitary , but the results are mathematically similar to those for the quantum zeno effect .dhar _ et al _ have discussed the `` super - zeno effect , '' which preserves a state ( or more generally , keeps a quantum system within a subspace of the hilbert space ) with a set of pulsed interactions unequally spaced in time .the timing of these interactions can be arranged so as to be more efficient than can be done with the same number of equally spaced interactions ( ordinary quantum zeno effect ) .also , it should be noted that the pulsed interactions are unitary kicks , as in the so - called `` bang - bang control '' , and not observations in the usual sense .the 1977 publication of misra and sudarshan stimulated a great deal of theoretical and experimental work that has enhanced our understanding of the time development of quantum systems , such as the short - time nonexponential decay of unstable quantum systems .the results of the ihbw experiment , published in 1990 , was a clear confirmation of the existence of the quantum zeno effect for the case of the inhibition of an induced transition .interest in the quantum zeno effect continues to be high , partially due to the possibility of practical applications in quantum information processing . | as of october 2006 , there were approximately 535 citations to the seminal 1977 paper of misra and sudarshan that pointed out the quantum zeno paradox ( more often called the quantum zeno effect ) . in simple terms , the quantum zeno effect refers to a slowing down of the evolution of a quantum state in the limit that the state is observed continuously . there has been much disagreement as to how the quantum zeno effect should be defined and as to whether it is really a paradox , requiring new physics , or merely a consequence of `` ordinary '' quantum mechanics . the experiment of itano , heinzen , bollinger , and wineland , published in 1990 , has been cited around 347 times and seems to be the one most often called a demonstration of the quantum zeno effect . 
given that there is disagreement as to what the quantum zeno effect _ is _ , there naturally is disagreement as to whether that experiment demonstrated the quantum zeno effect . some differing perspectives regarding the quantum zeno effect and what would constitute an experimental demonstration are discussed . |
carry value transformation ( cvt ) is one of the first and foremost transformations in the field of manipulating strings of bits , and other transformations of a similar nature , such as the extreme value transformation ( evt ) [ 2 ] , the 2-variable boolean operation ( 2-vbo ) [ 4 ] , integral value transformations ( ivts ) [ 5 ] and so on , came after it . all these transformations have many applications in fields like pattern formation [ 2 , 4 ] , solving the round robin tournament problem [ 6 ] , collatz - like functions [ 5 ] and so forth . in [ 7 ] , with the help of cellular automata , cvt has been used for efficient hardware design of some basic arithmetic operations . as cvt is one of the important transformations in this area of study , its further properties should be thoroughly developed for its future scope . in the field of data structures for the organization of non - linear data we have seen many tree structures like the binary tree , avl tree , b tree , b+ tree etc . and many of their applications . in this paper , we have designed a new tree structure named the cvt - xor tree in the domain of cvt and xor . for this tree two fundamental results on cvt and xor from [ 3 ] are used : ( 1 ) in [ 3 ] it is proved that the addition of any two non - negative integers expressed as binary numbers is the same as the addition of their cvt and xor values . this result is also shown to be true for any base of the number system . ( 2 ) it has also been proved that in the successive addition of the cvt and xor of any two non - negative integers , the maximum number of iterations required to get either cvt=0 or xor=0 is equal to the length of the bigger integer expressed as a binary string . the organization of this paper is as follows : in section 2 some preliminaries of cvt and xor along with a recursive algorithm for the addition of two numbers are discussed . section 3 deals with the fundamentals of the cvt - xor tree and two approaches for the construction of the cvt - xor tree . the first part of section 4 discusses the different properties of the cvt - xor tree and in the second part some other significant properties of the cvt - xor tree from the p , d and f matrices are enumerated . section 5 deals with the conclusion of this paper along with some future research directions . cvt and xor are two transformations defined on a pair of non - negative integers expressed in binary . the interested reader can refer to [ 1 , 2 , 3 ] for the definition of cvt , which we omit to keep the paper short . some of the properties which we will be using in this paper are enumerated as follows : a ) cvt is always an even number and its length is ( n+1 ) bits . b ) it has been proved in [ 3 ] that the addition of two non - negative integers x and y is equal to the sum of their cvt and xor values , i.e. x+y = cvt(x , y ) + xor(x , y ) , and also that the recurrence scheme always converges to ( 0 , x+y ) in at most ( n+1 ) iterations , where n is the maximum number of bits required to represent the bigger number .
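as a quick companion to properties ( a ) and ( b ) , the short python sketch below spells out one standard realization of the two transformations , assuming the usual definition of cvt from the cited references ( the bitwise and of the two operands shifted one place toward the msb , i.e. the carries generated in binary addition ) ; the function names are ours , chosen only for illustration .

```python
def cvt(x: int, y: int) -> int:
    # carry value transformation: the carries produced when adding x and y
    # in binary, i.e. the bitwise AND shifted one position to the left
    return (x & y) << 1

def xor(x: int, y: int) -> int:
    # the sum bits of x and y, ignoring carries
    return x ^ y

# check property (b): x + y == cvt(x, y) + xor(x, y), and that cvt is even
for x in range(64):
    for y in range(64):
        assert x + y == cvt(x, y) + xor(x, y)
        assert cvt(x, y) % 2 == 0
print("x + y = cvt(x, y) + xor(x, y) verified for 0 <= x, y < 64")
```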
* _ algorithm 1 : _ * recursive algorithm for the addition of two non - negative integers x and y : if cvt(x , y ) = 0 , return xor(x , y ) ; otherwise apply the algorithm recursively to the pair ( cvt(x , y ) , xor(x , y ) ) . as at the time of execution every recursive algorithm forms an activation record in a tree form , we are motivated to construct the tree for the above recurrence , named the cvt - xor tree , and to analyze some of its significant features . if we start from a pair of numbers as a root node of the form ( 0 , n ) and backtrack from it using the iterative process x+y= cvt(x , y)+xor(x , y ) [ 3 ] , we come across many intermediate nodes in the form of ( cvt , xor ) pairs , and then we get an m - ary tree . for a given number n , each node in the tree is represented by a pair or co - ordinate of the form ( x , y ) such that x+y = n . the first co - ordinate , i.e. x , is the resultant cvt of two numbers and the second co - ordinate , i.e. y , is the resultant xor of two numbers whose sum is n. as ( cvt ( 0 , n ) , xor ( 0 , n ) ) = ( 0 , n ) , the root maps to itself and this forms a self - loop at the root node . thus a cvt - xor tree is like an m - ary tree with a loop only at the root node along with several non - leaf and leaf nodes . depending on the value of n we have two types of tree . if n is even , then the tree is called an even cvt - xor tree and if n is odd , then the tree is called an odd cvt - xor tree . * _ illustration 1 : _ * let an even integer n=40 , then ( 0 , 40 ) is the root node and the other leaf and non - leaf nodes in the form ( x , y ) ( such that x+y=40 ) are shown in fig 1 . as 40 is an even number , each pair ( x , y ) is in the form of either ( even , even ) or ( odd , odd ) . * _ illustration 2 : _ * let an odd integer n=25 , then ( 0 , 25 ) is the root node and the other leaf and non - leaf nodes in the form ( x , y ) ( such that x+y=25 ) are shown in fig 2 . as 25 is an odd number , each pair ( x , y ) is in the form of either ( even , odd ) or ( odd , even ) . using section 2.(b ) , the maximum height of the tree is ( k+1 ) , where k is the number of significant bits required to represent n. the total number of nodes in the cvt - xor tree is n+1 , which is discussed in section 4 . depending on the values of n , either even or odd , two types of trees can be analyzed , named the even cvt - xor tree and the odd cvt - xor tree . as the cvt of two numbers is always an even number , the first co - ordinate in the node is always even for all internal nodes . therefore for these pairs the second co - ordinate should be an even number when n is even and an odd number when n is odd . but when n is even the leaf nodes can be any pair of the form either ( even , even ) or ( odd , odd ) , otherwise they are of the form ( even , odd ) or ( odd , even ) . the following algorithm 2 shows whether a given node ( x , y ) is a leaf node or not . * _ algorithm 2 : _ * algorithm to check whether a given node is leaf or not : if x = 0 then ( x , y ) is the root node ; if x is odd then ( x , y ) is a leaf node in the x+y cvt - xor tree ; otherwise ( x , y ) may be a leaf node or an internal node and checking is required . + for finding the structure of the tree for a given number n , two different approaches are discussed below . in the first ( top - down ) approach , the root is always of the form ( 0 , n ) ; this is considered as level-0 , and using the iterative process to find the nodes in level-1 we search for the pairs of integers x and y such that x+y = n , cvt ( x , y)=0 and xor(x , y)=n , except the node ( 0 , n ) . all the nodes found in this way are kept in level-1 . this process is continued until all the leaf nodes are found , i.e. the nodes with no predecessor . the second ( bottom - up ) approach starts from all the leaf nodes whose sum is n. all the leaf nodes for the number n can be found easily .
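the recurrence behind algorithm 1 , and hence the parent relation of the cvt - xor tree just described , can be traced directly . the sketch below is again only illustrative ( the helper definitions repeat the ones given earlier so that it runs on its own ) : it repeatedly replaces a pair ( x , y ) by ( cvt(x , y ) , xor(x , y ) ) until the first co - ordinate becomes 0 , and for a pair summing to 40 the printed pairs form exactly a leaf - to - root path in the tree of illustration 1 .

```python
def cvt(x: int, y: int) -> int:
    return (x & y) << 1          # carries, shifted toward the msb

def xor(x: int, y: int) -> int:
    return x ^ y                 # sum bits without carries

def add_by_cvt_xor(x: int, y: int):
    """Apply (x, y) -> (cvt(x, y), xor(x, y)) until cvt becomes 0.
    Returns every visited pair; the last one is (0, x + y)."""
    path = [(x, y)]
    while x != 0:
        x, y = cvt(x, y), xor(x, y)
        path.append((x, y))
    return path

# example: a pair summing to 40, as in illustration 1
for node in add_by_cvt_xor(13, 27):
    print(node)   # (13, 27), (18, 22), (36, 4), (8, 32), (0, 40)
# the number of iterations is bounded by one more than the bit length
# of the larger operand, in line with section 2.(b)
```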
for each such pair we apply the cvt and xor operations and keep those nodes whose pair sum is equal to n. many nodes may converge towards a single node after the cvt and xor operations . this process is continued until we reach the root node , of the form ( 0 , n ) , from all the leaf nodes . in the cvt - xor tree , if we backtrack from the root node to the leaf nodes we find different paths reaching the leaf nodes at different depths . it appears that the parent - child relationship in the cvt - xor tree depends entirely on the characterization , or bit patterns , of the node pairs along the path . to understand these dynamics we have analyzed some of the properties of the cvt - xor tree , discussed in the following section . the number of nodes or vertices in the cvt - xor tree is n+1 . the number of non - negative pairwise integer partitions of n is n+1 , of the form ( 0 , n ) , ( 1 , n-1 ) , , ( n , 0 ) . from section 2(b ) all these pairs must converge to ( 0 , n ) , so all these n+1 nodes are connected and form a single cvt - xor tree . hence the total number of nodes in the cvt - xor tree is n+1 . * _ illustration 3 : _ * let an even integer n=18 ; we have 19 pairs , namely ( 0 , 18 ) , ( 18 , 0 ) , ( 1 , 17 ) , ( 2 , 16 ) , ( 3 , 15 ) , ( 4 , 14 ) , ( 5 , 13 ) , ( 6 , 12 ) , ( 7 , 11 ) , ( 8 , 10 ) , ( 9 , 9 ) , ( 10 , 8) , ( 11 , 7 ) , ( 12 , 6 ) , ( 13 , 5 ) , ( 14 , 4 ) , ( 15 , 3 ) , ( 16 , 2 ) , ( 17 , 1 ) , in the cvt - xor tree shown in fig 3 . in the cvt - xor tree , if the pair ( a , b ) is at depth - d , then its symmetric pair ( b , a ) is also at depth - d , except for the root node . let cvt(a , b)=p and xor(a , b)=q , where ( p , q ) is the parent node for the child pair ( a , b ) . + we know that cvt(a , b)=cvt(b , a ) and xor(a , b)=xor(b , a ) , + so cvt(a , b)=cvt(b , a)=p and xor(a , b)=xor(b , a)=q , + which implies that both the pairs ( a , b ) and ( b , a ) are child nodes of the same parent node ( p , q ) , i.e. the nodes ( a , b ) and ( b , a ) are two predecessors of ( p , q ) . + therefore ( a , b ) and ( b , a ) both map to ( p , q ) , so they must belong to the same level , i.e. lie at the same depth . + according to the cvt properties , if the xor value is zero in one iteration , then the cvt value is zero in the next iteration . therefore ( n , 0 ) maps to ( 0 , n ) , and since ( 0 , n ) is the root node in the cvt - xor tree paradigm , if ( 0 , n ) is at level-0 then the node ( n , 0 ) will always be at level-1 in the cvt - xor tree . * _ illustration 4 : _ * from fig . 3 , the pairs ( 2 , 16 ) and ( 16 , 2 ) are at level-1 ; similarly the pairs ( 8 , 10 ) and ( 10 , 8) are at level-2 , and so on . but the root node ( 0 , 18 ) and its symmetric pair ( 18 , 0 ) are at level-0 and level-1 respectively . for any pair ( p , q ) where p is an even number and q is either even or odd , if there exists a bit position i such that q has a 1 at position i and p has a 1 at position i+1 ( equivalently , p/2 and q share a common 1 bit ) , then the pair ( p , q ) is a leaf node ; when both entries are even such a pair is called a contradictory ( even , even ) pair . indeed , a predecessor would require a bit column in which the two children simultaneously produce a carry ( both bits equal to 1 ) and an xor equal to 1 ( bits different ) , which is impossible , so such a pair ( p , q ) must be a leaf node . * _ illustration 5 : _ * as per the definition of cvt , we know that the cvt of two numbers is always an even number . therefore there is no predecessor for a pair whose first co - ordinate is odd . but some contradictory ( even , even ) pairs are also leaf nodes .
fig . 4 shows that the pairs ( 3 , 5 ) and ( 5 , 3 ) are ( odd , odd ) leaf nodes and that ( 6 , 2 ) is a leaf node although its first co - ordinate is even , because the third bit of 6 ( 110 ) and the second bit of 2 ( 10 ) , counted from the lsb position , are 1 and 1 respectively . so the pair ( 6 , 2 ) is a contradictory leaf node . if the pair ( p , q ) , where p is the cvt value and q is the xor value , is a non - leaf node in the cvt - xor tree and q has the bit 1 in m positions , then the number of child nodes ( predecessors ) of the ( p , q ) node is 2^m . these nodes can be generated by filling those m column positions with either ( 1 , 0 ) or ( 0 , 1 ) and keeping the other ( n - m ) column values fixed . as each of the m column values can be chosen in 2 different ways , by the fundamental principle of counting the total number of successor pairs is 2^m . * _ illustration 6 : _ * consider the ( even , even ) non - leaf node ( 2 , 6 ) shown below : + + here two positions of a and b , i.e. the two starred columns , can be filled up in 2^2 ways . so , according to theorem 4 , the predecessors of ( 2 , 6 ) are ( 1 , 7)=(001 , 111 ) , ( 7 , 1)=(111 , 001 ) , ( 3 , 5)=(011 , 101 ) and ( 5 , 3)=(101 , 011 ) , shown in fig . 4 . let ( p , q ) map to ( m , n ) ; if ( p , q ) is an odd - odd pair ( i.e. a leaf node ) and ( m , n ) is its immediate even - even parent pair , then the second last lsb bit of m must always be 1 . * 1st part : * as the binary representations of two odd numbers both end with the lsb bit 1 ( for both p and q ) , according to the cvt definition the second last lsb bit of m must be 1 , since m = cvt(p , q ) . + * 2nd part : * we prove this part by the principle of mathematical induction . + for n=1 , m=4 - 2=2 ; the binary representation of 2 is 10 , i.e. its second last lsb bit is 1 , so the formula is true for n=1 . + assuming the formula is true for n , it can be shown to hold for n+1 , + so the formula is also true for n+1 and therefore it is true for all values greater than or equal to 1 . now we are ready to write the algorithm that checks whether a given input node ( x , y ) is a leaf node or not . algorithm 2 is now modified using theorems 3 & 4 , so that whether a given node belongs to the leaf or non - leaf class can be verified easily , as follows : * _ algorithm 2 : _ * algorithm to verify whether a given node is a leaf or not : given ( x , y ) with x+y = n , if ( x , y ) is the root node ( 0 , n ) it is handled separately ; if ( x , y ) is an ( odd , odd ) pair , or an ( even , even ) contradictory pair in the sense of theorem 3 , then ( x , y ) is a leaf node ; otherwise ( x , y ) is a non - leaf node . any even integer e of the form 2^k always yields the cvt value 2 for all possible ( odd , odd ) pairs ( say ( x , y ) ) of e , where e = x+y . note the two following facts : 1 . for every pair ( x , y ) of e , if e is an n - bit number then the maximum number of bits for x and y is n-1 . 2 . every number of the form m = 2^k - 1 is always the sum of two complementary numbers ( say p and q ) , one of which is even and the other odd . we can represent e as e = m + 1 . + so to get e we only have to add 1 ( binary 1 ) to p ( if p is even ) or to q ( if q is even ) , such that p+q+1 = x+y . + in this way we get two odd numbers x and y such that e = m+1 = x+y , with both lsb positions equal to binary 1 and the other bit positions complementary ; therefore , according to the cvt definition , their cvt value must be 2 . now the question can be asked : what is the total number of leaf nodes and of internal nodes in the cvt - xor tree whose root is ( 0 , n ) ?
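before the three matrices introduced next , the question just raised can be answered by brute force ; the following python sketch tabulates , for every node of the tree rooted at ( 0 , n ) , its parent , its depth and its number of children ( the same information that the dm , pm and fm matrices below store ) and counts the leaf and internal nodes . it again assumes the bitwise cvt definition , and all names are illustrative .

def cvt(a, b):
    # assumed bitwise carry value transformation
    return (a & b) << 1

def tree_tables(n):
    # parent, number-of-children and depth for every pair (x, y) with x + y = n
    parent, n_children, depth = {}, {}, {}
    for x in range(n + 1):
        y = n - x
        parent[(x, y)] = (cvt(x, y), x ^ y)
        n_children[(x, y)] = 0
    for v, p in parent.items():
        if v != (0, n):                     # skip the root's self-loop
            n_children[p] += 1
    for v in parent:
        d, u = 0, v
        while u != (0, n):                  # walk up to the root to measure the depth
            u = parent[u]
            d += 1
        depth[v] = d
    return parent, n_children, depth

if __name__ == "__main__":
    n = 18
    parent, n_children, depth = tree_tables(n)
    leaves = sum(1 for c in n_children.values() if c == 0)
    internal = sum(1 for c in n_children.values() if c > 0)
    print(f"n = {n}: nodes = {len(parent)}, leaf nodes = {leaves},",
          f"internal nodes = {internal}, maximum depth = {max(depth.values())}")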
for answering this and other similar questions we have defined three different matrices , namely the cvt - xor depth matrix , the cvt - xor parent matrix and the cvt - xor frequency matrix , to store the depth information , parent information and frequency information respectively . 1 . the diagonal from ( n , 0 ) to ( 0 , n ) in the cvt - xor depth matrix gives the following information regarding the cvt - xor tree whose root is ( 0 , n ) : 1 . if dm ( i , j)=h then the vertex ( i , j ) is at depth - h in the cvt - xor tree . 2 . the number of nodes at depth - h equals the number of times h is present on the diagonal of the cvt - xor depth matrix ( the frequency of occurrence of h ) . 3 . the average height of the tree is ( the sum of all the diagonal values ) / ( n+1 ) . 4 . if the binary bit pattern of n is such that only the lsb bit is 0 , i.e. the root is of the form ( 0 , 2^k - 2 ) , then for such even integers n=2 , 6 , 14 , 30 , , 2^k - 2 the maximum value along the diagonal from ( n , 0 ) to ( 0 , n ) is 2 , which implies that the maximum height of the tree is always 3 . 5 . if the binary bit pattern of n is such that all bits are 1 , i.e. the root is of the form ( 0 , 2^k - 1 ) , then for such odd integers n= 3 , 7 , 15 , 31 , , 2^k - 1 the maximum value along the diagonal from ( n , 0 ) to ( 0 , n ) is 1 , which implies that the maximum height of the tree is always 2 . once we have found the number of nodes at the different depths , the link information among the nodes in the cvt - xor tree can be found from the cvt - xor parent matrix . the diagonal from ( n , 0 ) to ( 0 , n ) in the cvt - xor parent matrix gives the following information regarding the cvt - xor tree whose root is ( 0 , n ) : 1 . if pm ( i , j)=(p , q ) this implies that ( p , q ) is the parent of the node ( i , j ) . 2 . if ( p , q ) is repeated m times in the pm matrix then there are m such nodes ( i , j ) whose parent is ( p , q ) . the frequency matrix ( fm ) helps us to find the following : let fm ( i , j ) = m , where m indicates the number of predecessor or child nodes of the node ( i , j ) . 1 . if i is odd then the corresponding value of m is 0 ; this indicates that if the first co - ordinate of the node is odd then the node has no child or predecessor , i.e. the node is a leaf node . 2 . through the diagonal values we can find what the leaf nodes are for a given n. below , fig . 8 shows the cvt value for all ( odd , odd ) pairs . according to theorem 5 , the cvt values for all ( odd , odd ) pairs in this figure are contained in the set s , where 2 is the smallest number and the difference between two consecutive numbers is 4 , i.e. s = { 2 , 6 , 10 , 14 , } . it also shows the beautiful fractal studied in [ 1 ] . if we traverse a diagonal path ( k , 1 ) ( where k is any odd number , gradually decreasing ) , ( k-2 , 3 ) , ... , ( 1 , k ) , a palindrome - like triangle ( fig . 9 ) is obtained . this figure demonstrates , for an arbitrary given positive integer , all the possible cvt values , where bold cvt values refer to the prime pair solutions . this paper presents a new tree structure called the cvt - xor tree along with two different approaches for the construction of the tree . we have also seen different properties of the tree , such as the number of nodes at any depth and the characterization of child , parent , internal and leaf nodes . some other significant properties can be obtained easily from the three different matrices , along with an easy construction process of the cvt - xor tree from the matrices .
in future this work will help us in further studies in the domain of cvt - xor to characterize the pattern formation by different cvt - xor trees , where it is conjectured that there always exists a path from the root ( zero , even integer ) to a leaf ( prime pair ) , signifying the goldbach conjecture ; the unfinished work that remains is how to distinguish this kind of path from the rest . we can further explore the tree structure by considering combinations of three numbers such that x+y+z = n and by studying the different properties of such trees . the authors would like to thank ... 1 . s. pal , s. sahoo and b. k. nayak , _ properties of carry value transformation _ , international journal of mathematics and mathematical sciences , volume 2012 , article id 174372 , 10 pages , http://dx.doi.org/10.1155/2012/174372 , 2012 . 2 . p. pal choudhury , sk . s. hassan , s. sahoo and b. k. nayak , _ act of cvt and evt in the formation of number theoretic fractals _ , international journal of computational cognition ( http://www.ijcc.us ) , vol . 1 , march 2011 . 3 . p. pal choudhury , s. sahoo and b. k. nayak , _ theory of carry value transformation and its application in fractal formation _ , ieee international advance computing conference , doi : 10.1109/iadcc.2009.4809146 , 2009 . 6 . p. pal choudhury , sk . s. hassan , s. sahoo and b. k. nayak , _ theory of rule 6 and its application to round robin tournament _ , arxiv : 0906.5450v1 , cs.dm , cs.gt , int . journal of computational cognition , vol . 8 , pp . 33 - 37 , sep . 2010 . 7 . p. pal choudhury , s. sahoo and m. chakraborty , _ implementation of basic arithmetic operations using cellular automata _ , ieee computer society , isbn : 978 - 0 - 7695 - 3513 - 5 , http://doi.ieeecomputersociety.org/10.1109/icit.2008.18 , pp . 79 - 80 , dec . 17 , 2008 to dec . 20 , 2008 . | cvt and xor are two binary operations used together to calculate the sum of two non - negative integers using a recursive mechanism . in the present study the convergence behavior of this recursive mechanism has been captured through a tree - like structure named the cvt - xor tree . we have analyzed how to identify the parent nodes , leaf nodes and internal nodes in the cvt - xor tree . we also provide the parent information , depth information and the number of children of a node in different cvt - xor trees by defining three different matrices . lastly , one observation is made towards the very old mathematical problem of the goldbach conjecture . carry value transformation ( cvt ) , exclusive or ( xor ) , cvt - xor tree , fractal , goldbach conjecture .
the use of radiation - tolerant cmos active pixel sensors ( aps ) for direct detection in transmission electron microscopy ( tem ) , opens new opportunities for fast imaging with high sensitivity .one of the key figures of merit for a tem imaging sensor is the point spread function ( psf ) , which affects both the imaging resolving capabilities of the sensor and the absolute image contrast .the psf of a pixellated aps used in direct detection depends on several parameters of which the most important are the pixel size , the electron multiple scattering in the sensor and the charge carrier diffusion in the active layer .this letter discusses the psf achieved using an aps in two different regimes . in traditional bright field illumination ,the electron flux is such that each pixel is illuminated by one or more electrons per acquisition frame . in this regimethe point spread function has a contribution from the lateral charge spread due to charge carrier diffusion in the active volume .since the epitaxial layer of a cmos aps is nearly field - free , charge carriers reach the collection diode through thermal diffusion , with collection times of .the typical cluster size for a cmos aps with 10 m pixels is 4 - 5 pixels for 300 kev electrons and about 45 % of the charge is collected on the central pixel of the cluster . with bright field illumination , at high rate , the signal recorded on each individual pixel is the superposition of the charge directly deposited by a particle below the pixel area with that collected from nearby pixels through diffusion , multiple scattering and backscattering from the bulk si . if the electron rate is kept low enough so that individual electron clusters can be reconstructed , a new regime of operation becomes available .the electron impact position is reconstructed by calculating the centre - of - gravity of the observed pulse heights on the pixels in the cluster . for pixel detectors with pixelpitch this technique makes it possible to obtain an point resolution in tracking applications for accelerator particle physics .the same technique can now be adopted for imaging , provided electron fluxes are low enough so that the detector occupancy is 0.05 , and individual clusters can easily be resolved . under these conditionsthe psf is expected to depend only on the detector pixel size and cluster s / n ( determining the single point resolution ) and on multiple scattering .the image is reconstructed by combining a large number of frames , each giving 2 - 5 m accuracy for 0.01 - 0.05 % of the field of view , provided the frame rate is much faster than the dynamics being observed on the sample . 
in this paper we refer to this technique as `` cluster imaging '' . the principle of cluster imaging for tem has been tested using the team1k cmos monolithic pixel sensor , developed as part of the multi - institutional team ( transmission electron aberration - corrected microscope ) project . the team1k sensor and its performance will be discussed in detail in a forthcoming paper in this journal . the chip is produced in the ams 0.35 μm cmos - opto process and has a 1,024 × 1,024 pixel imaging area with pixels arrayed on a 9.5 μm pitch . the detector employed in this test has been back - thinned to 50 μm and mounted on a carrier circuit board which is cut out below its active area to minimise back - scattering effects . a 75 μm - diameter au wire is mounted along the pixel rows on top of the sensor at a distance of mm above its surface . tests have been conducted with the sensor installed at the bottom of a fei titan microscope column at the national center for electron microscopy ( ncem ) . the energy deposited by electrons in the sensor active layer and the lateral charge spread are simulated with the geant-4 program using the low - energy electromagnetic physics models . the cmos pixel sensor is modelled according to the detailed geometric structure of the oxide , metal interconnect and silicon layers . electrons are incident perpendicular to the detector plane and tracked through it . charge collection in the sensor is simulated with pixelsim , a dedicated digitisation module . the simulation has the diffusion parameter , used to determine the width of the charge carrier cloud , as a free parameter . its value is extracted from data by a fit to the pixel multiplicity in the clusters of 300 kev electrons , where multiple scattering is lower . we find a value of ( 14.5 ) μm , which is compatible with the value obtained for 1.5 gev electrons in a prototype sensor with 20 μm pixels produced in the same cmos process . the response to single electrons is characterised in terms of the cluster size . we operate with a flux of 50 electrons mm^-2 per frame , which allows us to resolve individual electrons . electron hits are reconstructed as clusters of pixels . a clustering algorithm with two thresholds is used . first the detector is scanned for `` seed '' pixels with pulse height values over a s / n threshold set to 3.5 . seeds are sorted according to their pulse heights and the surrounding neighbouring pixels are added to the cluster if their s / n exceeds 2.5 . for cluster imaging , the electron flux is reduced to 50 e mm^-2 per frame and 20000 subsequent frames are acquired at each energy . figure [ fig : plotwire ] shows the image of the wire obtained in the two regimes .
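the two - threshold cluster finding and the centre - of - gravity position reconstruction described above can be sketched as follows ; this is a hedged illustration and not the actual team1k analysis code : the frame is assumed to be a 2d array of pixel signal - to - noise values , the seed and neighbour thresholds ( 3.5 and 2.5 ) follow the text , while the 3x3 neighbourhood and the array handling are illustrative assumptions .

import numpy as np

SEED_SN, NEIGHBOUR_SN = 3.5, 2.5          # thresholds quoted in the text, in s/n units

def find_clusters(frame_sn, pitch_um=9.5):
    # return (x_um, y_um, summed_signal, n_pixels) for each reconstructed cluster
    used = np.zeros(frame_sn.shape, dtype=bool)
    clusters = []
    seeds = np.argwhere(frame_sn > SEED_SN)
    seeds = sorted(seeds, key=lambda ij: -frame_sn[tuple(ij)])   # highest pulse height first
    for i, j in (tuple(s) for s in seeds):
        if used[i, j]:
            continue
        pixels = [(i, j)]
        used[i, j] = True
        # add neighbouring pixels above the lower threshold (3x3 window assumed)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < frame_sn.shape[0] \
                        and 0 <= nj < frame_sn.shape[1] \
                        and not used[ni, nj] and frame_sn[ni, nj] > NEIGHBOUR_SN:
                    pixels.append((ni, nj))
                    used[ni, nj] = True
        # centre of gravity weighted by the pulse heights, converted to microns
        weights = np.array([frame_sn[p] for p in pixels])
        rows = np.array([p[0] for p in pixels], dtype=float)
        cols = np.array([p[1] for p in pixels], dtype=float)
        x_um = pitch_um * np.average(cols, weights=weights)
        y_um = pitch_um * np.average(rows, weights=weights)
        clusters.append((x_um, y_um, weights.sum(), len(pixels)))
    return clusters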
with cluster imaging , due to the continuous spatial sampling , the information is no longer discretised by the pixel pitch as it is with bright field illumination .charge diffusion , which limits the psf in bright field illumination , now rather improves the cluster resolution by spreading the charge on more pixels and the relevant parameter for psf becomes the ratio of pixel pitch to signal - to - background , which determines the cluster spatial resolution .we also observe that the image contrast improves by a factor of three for cluster imaging compared to bright field illumination , as it is apparent in figure [ fig : plotwire ] .the lsf is extracted by a fit , as discussed above .we measure ( 2.72.25 ) m at 300 kev , ( 3.65.20 ) m at 200 kev and ( 6.80.35 ) m at 80 kev .these results show that cluster imaging improves the point spread function by a factor of two compared to traditional bright field illumination , for the team1k detector parameters .we perform the study on simulation describing the detector and wire geometry , for both operation regimes and compare the results to the measured lsf values .results are summarised in figure [ fig : plotpsf ] .simulation reproduces well the data both in absolute terms and in the observed scaling with the electron energy .we also note that the cluster imaging lsf closely follows the simulation predictions for single point resolution .the team cmos aps has demonstrated a psf for direct detection under bright field illumination comparable to its pixel size .image reconstruction by clustering at the impact point of individual electrons at low flux allows to reduce the psf down to ( 2.72.25 ) m and improves the image contrast . in order to practically exploit this method in temit is necessary to ensure that the sensor can be read out at a rate of several hundred frames s and cluster reconstruction can be performed in real time .this work was supported by the director , office of science , of the u.s .department of energy under contract no.de-ac02-05ch11231 .the team project is supported by the department of energy , office of science , basic energy sciences . | a cluster imaging technique for transmission electron microscopy with a direct detection cmos pixel sensor is presented . charge centre - of - gravity reconstruction for individual electron clusters improves the spatial resolution and thus the point spread function . data collected with a cmos sensor with 9.5.5 pixels show an improvement of a factor of two in point spread function to 2.7 m at 300 kev and of a factor of three in the image contrast , compared to traditional bright field illumination . , , , monolithic active pixel sensor , transmission electron microscopy ; |
the most noted power law in economics is perhaps the pareto law , first observed by vilfredo pareto more than a century ago . it was found that in an economy the higher end of the distribution of wealth follows a power - law , with an exponent ( now known as the pareto exponent ) which pareto estimated to be 3/2 . for the last hundred years the changes in the value 3/2 in time and across the various capitalist economies seem to be small . this implies that there is inequality in the wealth distribution , and only a few persons hold the majority of wealth . in 1931 , gibrat suggested that while pareto 's law is valid only for the high wealth range , the middle wealth range is given by a log - normal probability density , where is a mean value and is the variance . the factor is also known as the gibrat index , and a small gibrat index corresponds to an uneven wealth distribution . an unequal wealth distribution is associated not only with these functions , but also with any other one which is not of the form of a dirac delta - function . the problem of the appearance of inequalities therefore seems to be rather general and not necessarily related to a particular shape , e.g. the power - law form of the pareto law of the wealth distribution , even though distributions can vary from case to case , assuming qualitatively different shapes . in fact wealth distribution has always been a prime concern of economics . classical economists such as adam smith , thomas malthus and david ricardo were mainly concerned with factor wealth distribution , that is , the distribution of wealth between the main factors of production : land , labor and capital . modern economists have also addressed this issue , but have been more concerned with the distribution of wealth across individuals and households . important theoretical and policy concerns include the relationship between wealth inequality and economic growth . wealth inequality metrics or wealth distribution metrics are techniques used by economists to measure the distribution of wealth among the participants in a particular economy , such as that of a specific country or of the world in general . these techniques are typically categorized as either absolute measures or relative measures , and in the literature one of the most important debates is on the issue of measuring inequality . while one type deals with the objective sense of inequality , usually employing some statistical measure of the relative variation of wealth , the other type deals with indices that try to measure inequality in terms of some normative notion of social welfare for a given total of wealth . _ absolute criteria_. absolute measures define a minimum standard , and then calculate the number ( or percent ) of individuals below this threshold . these methods are most useful when determining the amount of poverty in a society . examples include the poverty line , which is a measure of the level of wealth necessary to subsist in a society . it varies from place to place and from time to time , depending on the cost of living and people 's expectations . _ relative criteria_.
relative measures compare the wealth of one individual ( or group ) with the wealth of another individual ( or group ) . these measures are most useful when analyzing the scope and distribution of wealth inequality . examples include the gini coefficient , which is a summary statistic used to quantify the extent of wealth inequality depicted in a particular lorenz curve . the gini coefficient is a number between 0 and 1 , where 0 corresponds to perfect equality ( where everyone has the same wealth ) and 1 corresponds to perfect inequality ( where one person has all the wealth , and everyone else has zero wealth ) . however , it has to be noted that `` wealth '' is here understood differently with respect to its common meaning : it represents the total amount of goods and services that a person receives , and thus there is not necessarily money or cash involved . services like public health and education are also counted in , and often expenditure or consumption ( which is the same in an economic sense ) is used to measure wealth . thus , it is not clear how wealth should be defined . there is also the question of whether the basic unit of measurement should be households or individuals . the gini value for households is always lower than for individuals because of wealth pooling and intra - family transfers . the metrics will be biased either upward or downward depending on which unit of measurement is used . these and many other criticisms need to be addressed for the proper use of inequality measures in a well - explained and consistent way . in the attempt to answer some of these basic questions and provide a foundation for the complex issues related to the appearance of wealth inequalities , various authors have independently formulated minimal models of wealth exchange which , while being general enough to capture some universal features of economic exchanges , are simple enough to be simulated numerically in detail and studied analytically . in these models a set of agents , representing individuals or companies whose state is defined by their respective wealth , interact with each other from time to time by exchanging ( a part of ) their wealth . such exchanges are defined by laws depending on the wealths involved and also contain some random elements , as detailed below . a striking analogy between the statistical mechanics of molecule collisions in a gas and these minimal models of economy was recognized , and actually motivated the introduction of some of these models , which are therefore sometimes referred to as _ kinetic wealth - exchange models_. such an analogy should be noticed not so much for its peculiarity as because it signals a possible universal statistical mechanism in action in the dynamical evolution of systems composed of single units , from gases composed of molecules colliding with each other and exchanging their energy , to economic societies in which single units interact by exchanging wealth . this analogy is analysed here in more detail than has been done previously , and this represents the goal of the investigations presented here . we begin by recalling the main features of kinetic wealth - exchange models in sec . [ features ] , concentrating on an example of a model with a fixed _ saving propensity _ .
in sec . [ variational ] it is shown how the fact that for a non - zero saving propensity one obtains an equilibrium gamma - distribution of a given order , instead of the boltzmann law obtained for zero saving propensity , actually strengthens the kinetic analogy between economy systems and a gas in an arbitrary number of dimensions , the number of dimensions being a known function of the saving propensity : through a general variational approach based on the boltzmann entropy it is shown that the gamma - distribution of a given order is the equilibrium canonical distribution of a system with a quadratic hamiltonian and the corresponding number of degrees of freedom . the analogy is further discussed in sec . [ kinetic ] , this time through a complementary microscopic approach based on the analysis of the dynamics of particle collisions in a space of arbitrary dimension . through mechanical considerations based only on momentum and energy conservation , it is shown that collision dynamics in a generic dimension can be recast in the form of the evolution laws of kinetic wealth - exchange models , both for zero saving propensity and in the case with a finite saving propensity , corresponding to a well - defined number of effective dimensions . finally , in sec . [ conclusion ] , conclusions are drawn . simple social models of wealth exchange have been shown to reproduce well many features of real wealth distributions . for instance , the exponential law observed at intermediate values of wealth is reproduced by a many - agent model system composed of agents who are assumed to exchange wealth in pairs at each iteration , according to the wealth - conserving evolution equations ; here the random factor is a uniform random number in [ 0 , 1 ] , the two agents involved are chosen randomly at each iteration , and their wealths before and after the trade enter the evolution equations . more general versions of this model assign the same saving propensity to all agents , which represents the minimum fraction of wealth saved during a trade . as an example , in the model of ref . the evolution law is such that , while the total wealth is still conserved during a trade , only a fraction of the initial total wealth is reshuffled between the two agents during the trade . these models relax toward an equilibrium distribution well fitted by a gamma - distribution , as also noted by angle , where the scaling parameter rescales the wealth and the order parameter , as shown below , represents an effective dimension of the system , explicitly given as a function of the saving propensity . as the saving propensity varies between 0 and 1 , the effective dimension continuously assumes values in the interval from 1 to infinity . a comparison between the results of numerical simulations of wealth - exchange models and the gamma - distribution is shown in fig . [ realdata3 ] , together with some real data for the uk . the zero - saving - propensity model of eqs . ( [ sp1 ] ) is described by the exponential curve . the main difference of the other curves with respect to the exponential distribution is the appearance of a peak : while the average wealth is unchanged ( the system is closed , so the total wealth divided by the total number of agents is constant ) , the number of agents with a wealth close to the average value increases or , in other words , the wealth distribution becomes fairer for larger saving propensities and therefore larger values of the effective dimension . eventually , as shown in ref . , for a saving propensity approaching unity ( an effective dimension going to infinity ) the distribution tends to a dirac delta - function , thus becoming a perfectly fair distribution . as a consequence , measures of the inequality of the wealth distribution , such as the gini coefficient , decrease for increasing saving propensity and tend to zero in this limit .
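as a concrete illustration of these statements , the following python sketch simulates the trading rule with a fixed saving propensity and computes the gini coefficient of the resulting wealth distribution . the update rule written in the code is the standard chakraborti - chakrabarti rule , assumed here to correspond to eqs . ( [ sp2 ] ) , and the numbers of agents and trades are illustrative choices rather than those used in the paper .

import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=1000, n_trades=500_000, lam=0.5):
    # closed economy: every agent starts with unit wealth, total wealth is conserved
    w = np.ones(n_agents)
    for _ in range(n_trades):
        i, j = rng.choice(n_agents, size=2, replace=False)
        eps = rng.random()
        pool = (1.0 - lam) * (w[i] + w[j])          # only a fraction (1 - lam) is reshuffled
        w[i], w[j] = lam * w[i] + eps * pool, lam * w[j] + (1.0 - eps) * pool
    return w

def gini(w):
    # gini coefficient: 0 for a perfectly fair distribution, close to 1 for maximal inequality
    w = np.sort(w)
    n = w.size
    cum = np.cumsum(w)
    return (n + 1 - 2.0 * np.sum(cum) / cum[-1]) / n

if __name__ == "__main__":
    for lam in (0.0, 0.5, 0.9):
        print(f"saving propensity {lam:.1f}: gini = {gini(simulate(lam=lam)):.3f}")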
in the rest of the paperwe discuss the statistical mechanical interpretation of the equilibrium -distribution defined by eqs .( [ f - global ] ) and ( [ n ] ) .it may seem that in going from the exponential wealth distribution obtained for to the -distribution corresponding to a the link between wealth - exchange models and kinetic theory has been lost .in fact , the -distribution represents but the maxwell - boltzmann equilibrium kinetic energy distribution for a gas in dimensions , as shown in the appendix of ref . . herea more general derivation of the -distribution is presented , in which it is shown , solely using a variational approach based on the boltzmann entropy , that is the most general canonical equilibrium distribution of an -dimensional system with a hamiltonian quadratic in the coordinates .entropy - based variational approaches have been suggested ( e.g. by mimkes , refs . ) to be relevant in the description and understanding of economic processes .for instance , the exponential wealth distribution observed in real data and obtained theoretically in the framework of the dragulescu - yakovenko model discussed above , can also be derived through entropy considerations . considering a discrete set of economic subsystems which can be in one of possible different states labeled with and characterized by a wealth , one can follow a line similar to the original boltzmann s argumentation for the states of a mechanical system , by defining the total entropy as where represents the occupation number of the -th state ; by variation of respect to the generic , with the constraints of conservation of the total number of systems and wealth , one obtains the canonical equilibrium distribution where defines the temperature of the economic system . herewe repeat the same argumentation for an ensemble of systems described by a continuous distribution , to show that this approach not only can reproduce the exponential distribution , but also the -distribution obtained in the framework of wealth - exchange models with a saving parameter , a natural effective dimension being associated to systems with .the representative system is assumed to have degrees of freedom and a homogeneous quadratic hamiltonian , that for convenience is written in the rescaled form where is the distance from the origin in the -dimensional -space . as a mechanical example, one can think of the momentum cartesian components and the corresponding kinetic energy function , where is the particle mass ; or of the cartesian coordinates of an isotropic harmonic oscillator with elastic constant and potential energy . it can be checked e.g. using the stirling approximation for the factorial function that in the limit of large occupation numbers , the discrete version ( [ w ] ) of the boltzmann distribution becomes proportional to , where the continuous variable replaces the discrete label .for an -dimensional system , the entropy will be proportional to ] , leads to the equilibrium -distribution ( [ f - global ] ) with dimensionless variable and index . as a simple example of application of this formula , one can obtain the maxwell - boltzmann probability density in three dimensions for the kinetic energy letting . 
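for reference , a hedged reconstruction of the gamma form and of its three - dimensional maxwell - boltzmann special case mentioned above , written in a standard parameterization that may differ from the paper 's own symbols and normalization :

% hedged reconstruction, standard convention (symbols may differ from the paper's):
% gamma distribution of order n with inverse scale beta, and the n = 3/2 case that
% reproduces the maxwell-boltzmann kinetic-energy density in three dimensions.
\begin{align}
  \gamma_n(\xi) &= \frac{\beta^{\,n}}{\Gamma(n)}\,\xi^{\,n-1}\,e^{-\beta\xi},\\
  f(\varepsilon) &= \gamma_{3/2}(\varepsilon)\Big|_{\beta=1/k_{\mathrm B}T}
    = \frac{2}{\sqrt{\pi}}\,\frac{\sqrt{\varepsilon}}{(k_{\mathrm B}T)^{3/2}}\,
      e^{-\varepsilon/k_{\mathrm B}T}.
\end{align}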
in turn this suggests the interpretation of the order parameter of the wealth - exchange models as an effective dimension of the system and of the lagrange multiplier as the effective temperature , which is in fact consistently recovered according to the equipartition theorem . the deep analogy between kinetic wealth - exchange models of closed economy systems , where agents exchange wealth at each trade , and kinetic gas models , in which energy exchanges take place at each particle collision , can be further investigated and justified by studying the microscopic dynamics of interacting particles . in this section we make more rigorous an argument only mentioned in ref . . in one dimension , particles undergo head - on collisions , in which they can exchange the total amount of energy they have , i.e. the exchanged fraction can be as large as one . alternatively , one can say that the minimum fraction of energy that a particle saves in a collision is zero in this case . in the framework of wealth - exchange models , this case corresponds to the model of dragulescu and yakovenko mentioned above , in which the _ total _ wealth of the two agents is reshuffled during a trade . in an arbitrary ( larger ) number of dimensions , however , this does not take place , unless the two particles are travelling exactly along the same line in opposite directions . on average , only a fraction of the total energy will be lost or gained by a particle during a collision , that is , most of the collisions will in practice be characterized by a non - zero energy saving parameter . this corresponds to the model of chakraborti and chakrabarti , in which there is a fixed maximum fraction of wealth which can be reshuffled . consider a collision between two particles in a space of an arbitrary number of dimensions , with initial velocities represented by the vectors and . for the sake of simplicity , the masses of the two particles are assumed to be equal to each other and are set equal to 1 , so that momentum conservation implies that , where and are the velocities after the collision and is the momentum transferred . conservation of energy implies that , which , by using eq . ( [ v1 ] ) , leads to . introducing the cosines of the angles between the momentum transferred and the initial velocities of the two particles , and using eq . ( [ v2 ] ) , one obtains the modulus of the momentum transferred . from this expression one can now compute explicitly the differences in particle energies due to a collision . with the help of the relation ( [ v2 ] ) one obtains the corresponding result . comparison with eqs . ( [ sp1 ] ) for the kinetic model of dragulescu and yakovenko clearly shows their equivalence : consider that also here the corresponding coefficients are in the interval [ 0 , 1 ] and , furthermore , they can be considered as random variables , if a hypothesis of molecular chaos is made concerning the random initial directions of the two particles entering the collision . however , these coefficients are not uniformly distributed and their most probable value depends drastically on the space dimension : the greater the dimension , the more unlikely it becomes that they assume values close to 1 and the more probable that they instead assume a small value close to zero . this can be seen by computing their average over the incoming directions of the two particles or , equivalently , over the orientation of the initial velocity of one of the two particles and of the momentum transferred .
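the average invoked here can also be estimated numerically before carrying out the integration below ; the following python sketch samples pairs of independent isotropic directions in a given number of dimensions ( the molecular - chaos assumption stated in the text ) and measures the mean squared cosine between them . the sampling scheme and sample sizes are illustrative assumptions .

import numpy as np

rng = np.random.default_rng(1)

def mean_cos2(n_dim, n_samples=200_000):
    # isotropic unit vectors obtained by normalizing standard gaussian vectors
    u = rng.normal(size=(n_samples, n_dim))
    v = rng.normal(size=(n_samples, n_dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return float(np.mean(np.sum(u * v, axis=1) ** 2))

if __name__ == "__main__":
    # the mean squared cosine between two isotropic directions decreases as 1/dimension,
    # so values of the cosine close to 1 become increasingly unlikely in high dimensions
    for n_dim in (1, 2, 3, 6, 12):
        print(f"dimension {n_dim:2d}: <cos^2> = {mean_cos2(n_dim):.4f}   (1/{n_dim} = {1/n_dim:.4f})")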
in dimensions ,the cartesian components of a generic velocity vector are related to the corresponding hyper - spherical coordinates the velocity modulus and the angular variables through the following relations , using these transformations to express the initial velocity of the first particle and the momentum transferred in terms of their respective moduli and and angular variables and , the expression ( [ cos ] ) for the cosine becomes [ \sin\theta_1 \cos\theta_2 ] + \nonumber\\ & & \!\!\!\!\dots \nonumber\\ & & \!\!\!\!=\ ! [ \sin\phi_1 \sin\phi_2 \dots \cos\phi_n ] [ \sin\theta_1 \sin\theta_2 \dots \cos\theta_n ] .\end{aligned}\ ] ] the average of the square cosine is performed by first taking the square of ( [ v5 ] ) , integrating over the angular variables , considering that the -dimensional volume element is given by ^{{j-1 } } d\varphi_{j } \, , \ ] ] and finally by dividing by the total solid angle. in the integration only the squared terms survive , obtained from the square of ( [ v5 ] ) , since all the cross - terms are zero after integration they contain at least one integral of a term of the form which averages to zero . by direct integration over the angle and , it can be shown that in dimensions this means that the center of mass of the distribution for , considered as a random variable due to the random initial directions of the two particles , shifts toward smaller and smaller values as increases .the dependence of can be compared with the wealth - exchange model with .there a similar relation is found between the average value of the corresponding coefficients or in the evolution equations ( [ sp2 ] ) for the wealth exchange and the effective dimensions : since is a uniform random number in , then and from eq .( [ n ] ) one finds .the appearance of wealth inequalities in the minimal models and examples of closed economy systems considered above appears to reflect a general statistical mechanism taking place for a wide class of stochastic exchange law besides closed economy models in which the state of the units is characterized by the values of a certain quantity ( e.g. wealth or energy ) exchanged when units interact with each other .the mechanism involved seems to be quite general and leads to equilibrium distributions with a broad shape . in the special but important case of systems with a homogeneous quadratic hamiltonian or equivalently with evolution laws linear in the quantities and ( effective ) degrees of freedom , the canonical equilibrium distribution is a -distribution of order .the corresponding distribution for the closed economy model with a fixed saving propensity has the special property that it becomes a fair ( dirac- ) distribution when or .the possibility for single units to exchange only a fraction of their wealth during a trade corresponding from a technical point of view to a wealth dynamics in a space with larger effective dimensions seems to be the key element which makes the wealth distribution less inequal .j. angle , the surplus theory of social stratification and the size distribution of personal wealth , in : proceedings of the american social statistical association , social statistics section , alexandria , va , 1983 , p. 395 .m. patriarca , a. chakraborti , k. kaski , g. germano , kinetic theory models for the distribution of wealth : power law from overlap of exponentials , in : a. chatterjee , s.yarlagadda , b. k. chakrabarti ( eds . ) , econophysics of wealth distributions , springer , 2005 , p. 
93 .arxiv.org : physics/0504153 | we discuss the equivalence between kinetic wealth - exchange models , in which agents exchange wealth during trades , and mechanical models of particles , exchanging energy during collisions . the universality of the underlying dynamics is shown both through a variational approach based on the minimization of the boltzmann entropy and a complementary microscopic analysis of the collision dynamics of molecules in a gas . in various relevant cases the equilibrium distribution is the same for all these models , namely a -distribution with suitably defined temperature and number of dimensions . this in turn allows one to quantify the inequalities observed in the wealth distributions and suggests that their origin should be traced back to very general underlying mechanisms : for instance , it follows that the smaller the fraction of the relevant quantity ( e.g. wealth or energy ) that agents can exchange during an interaction , the closer the corresponding equilibrium distribution is to a fair distribution . + + presented to the international workshop and conference on : statistical physics approaches to multi - disciplinary problems , january 07 - 13 , 2008 , iit guwahati , india |
human beings are extraordinarily complex agents .remarkably , in spite of that complexity a number of striking statistical regularities are known to describe individual and societal human behavior .these regularities are of enormous practical importance because of the influence of individual behaviors on social and economic outcomes .even though the analysis of social and economic data has a long and illustrious history , from smith to pareto and to zipf , the recent availability of digital records has made it much easier for researchers to _ quantitatively _ investigate various aspects of human behavior . in particular , the availability and omnipresence of e - mail communication records is attracting much attention .recently , barabsi studied the e - mail records of users at a university and reported two patterns in e - mail communication : the time interval between two consecutive e - mails sent by the same user , which we will denote as the interevent time , and the time interval between when a user sends an e - mail and when a recipient sends an e - mail back to the original sender , which we will denote as the waiting time , follow power - law distributions which decay in the tail with exponent .additionally , barabsi proposed a priority queuing model that reportedly captures the processes by which individuals reply to e - mails , thereby predicting the probability distribution of . here , we demonstrate that the empirical results reported in ref . are an artifact of the data analysis .we perform a standard bayesian model selection analysis that demonstrates that the interevent times are well - described by a single log - normal while the waiting times are better described by the superposition of two log - normals .our analysis rejects beyond any doubt the possibility that the data could be described by truncated power - law distributions .we also critically evaluate the priority queuing model proposed by barabsi to describe the observed waiting time distributions .we show that neither the assumptions nor the predictions of the model are plausible .we thus conclude that the description of human e - mail communication patterns remains an open problem .the remainder of this paper is organized as follows . in sectionii , we describe the preprocessing of the data .we then analyze the distribution of interevent times ( section [ sect : tau ] ) and the distribution of waiting times ( section [ sect : tauw ] ) .finally , in section [ sect : pqm ] we investigate the priority queuing model of ref .we consider here the database investigated by barabsi , which was also the focus of an earlier paper by eckmann et al . .this database consists of e - mail records for 3,188 e - mail accounts at a university covering an 83-day period .each record comprises a sender identifier , a recipient identifier , the size of the e - mail , and a time stamp with a precision of one second . before describing our analysis of the data , we first note some important features of the data which impact the analysis .the first important fact is that the data were gathered at an e - mail server , not from the e - mail clients of the individual users .it is quite possible that some users have e - mail clients , like microsoft outlook , which permit users to send multiple e - mails at once regardless of when the e - mails were composed .moreover , servers may parse long recipient lists into several shorter lists . 
for this reason , e - mails to multiple recipientswere occasionally recorded in the server as _ multiple _ e - mails .each of these duplicate e - mails was then sent in rapid succession to a different subset of the list of recipients in the actual e - mail .both the client - side and server - side uncertainties introduce artifacts in the time series of interevent times for each user as it could appear that a user is sending several e - mails over a very short time interval . to minimize these uncertainties , we preprocessed the data in order to focus on _ actual _ human behavior .first , we identify sets of e - mails sent by a user that have the _ exact _ same size but whose time stamp differs by at most five seconds . ] .we then remove all but the first e - mail from the time series of e - mails sent , while adjusting the list of recipients to the first e - mail to include all recipients in the removed e - mails .an additional important fact to note is that some of the e - mail accounts do not belong to `` typical '' users .for example , user 1962 only sent 5 e - mails while receiving 2,284 e - mails .this individual s e - mail use is too infrequent to provide useful information on human dynamics .meanwhile user 4099 sent 9,431 e - mails while receiving no e - mails .although it can not be confirmed due to the anonymous nature of the data , this e - mail account was in all likelihood used for bulk e - mails , implying that it can not provide information on _ human _ e - mail usage . to avoid having our analysis distorted ,we first restrict our attention to users which sent at least 11 e - mails over the 83-day experiment , yielding a minimum of 10 interevent times .our reasoning is that users sending fewer e - mails do not use e - mail regularly enough to allow us to truly infer patterns of human dynamics .this procedure excludes 1,976 of the 3,188 original e - mail accounts .next we examine the ratio of the number of e - mails received to the number of e - mails sent to determine what constitutes a `` typical '' user .this ratio is well - described by a log - normal distribution , and we use this fact to consider only those users in our study who are within three standard deviations from the mean .this added constraint excludes an additional 46 users .we thus focus here on the 1,152 users who fulfill the above criteria ( fig .[ parsing ] ) .reference reports that the probability distribution of time intervals between consecutive e - mails sent by an individual follows a power - law with .a basic examination of barabsi s results , however , quickly reveals a number of issues . 1 . figure 2a of ref . features three bins corresponding to interevent times seconds , an unphysical interval ( fig .[ fig2a]a ) .the events in those bins in fact account for 9% of all eventsfigure 2a of ref . features at least one bin confined to interevent times second ( fig .[ fig2a]b c ) while the data have a precision of one second . .two of us sent about twenty e - mails trying to minimize the time interval between consecutive e - mails .to be as fast as possible , we sent replies to an e - mail already in our inboxes .additionally , we did not even write any text or read the e - mail to which we were responding .we find that humans need at least 3 seconds to send consecutive e - mails . *b * , reproduction of fig.2a of ref . obtained with vistametrix and , * c * , the same data with the boundaries of the bins clearly marked .we assumed that the data points in fig .2a of ref . 
were placed in the middle of the bin .note that there is a bin recording data for second , whereas the data have a resolution of one second .the shaded bins indicate values with second , which contain 9% of all events for the unidentified user . ]we next quantitatively compare the plausibility of our log - normal hypothesis with the plausibility of the power - law hypothesis of ref . for interevent times . to simplify the analysis, we do not consider , but its logarithm .if a random variable is log - normally distributed , then follows a gaussian distribution , whereas if is distributed according to a power - law with exponent , then is uniformly distributed in the interval ] , where and . to properly specify the power - law distribution, we must have at least two data points in [ ] . in order to determine which of the two models provides a more accurate description of the empirical data , we use the results of the ks test as inputs in a bayesian model selection analysis .bayes rule states that where is the posterior probability of selecting model given an observation , is the probability of observing given a model , and is the prior probability of selecting model . assuming no prior knowledge about the correctness of the power - law and log - normal models , one would select - normal - law for each model . however , to eliminate any bias on our part , we perform the bayesian model selection analysis for two cases : ( _ i _ ) no prior knowledge , - normal - law , and ( _ ii _ ) the power - law model is far more likely to be correct , - law .the availability of data for multiple users enables us to perform this analysis recursively to obtain posterior probabilities of selecting each model given the available data .concretely , the analysis of the interevent times from user updates the posterior probabilities of the two models using eq .( [ bayes ] ) .these updated posterior probabilities are then used as prior probabilities for the next user . when all of the users have been included, this analysis reveals the posterior probability of the model given all of the available data .the bayesian model selection analysis demonstrates that the likelihood of the truncated power - law model being a good description of the data vanishes to zero when all data is considered ( fig .[ ks_sent]c ) .before we present our analysis of the waiting times , we must note that the database collected by eckmann et al . and analyzed by barabsi is not particularly well - suited for identifying the waiting times for replying to an e - mail .the data merely records that an e - mail was sent by user a to user b at time .the data does not specify whether the e - mail from a to b is , in fact , a reply to a prior message .imagine the following scenario : user a sends an e - mail to user b. three days later , user b sends an unrelated e - mail to user a. barabsi s approach , which we follow , is to classify this e - mail as a reply to the e - mail sent by user a three days earlier . as this case illustrates ,_ the analysis of waiting times is significantly less reliable than that of interevent times . _ reference reports that the probability distribution of time intervals between receiving a message from a sender and sending another e - mail to that sender follows a power - law distribution with .a cursory analysis of this result again reveals several problems .1 . figure 2b of ref . features three bins corresponding to waiting times seconds , an unphysical interval ( fig . 
[ fig2b]a ) .the events in those bins account for 1% of all eventsfigure 2b of ref . features two bins confined to waiting times second ( fig .[ fig2b]b c ) while the data have a precision of one second . .two of us sent about twenty replies to an e - mail already in our inboxes . to minimize the time required to do this, we did not read the e - mail to which we were responding but simply wrote `` yes '' at the top of our reply and then clicked send .we find that 6 seconds is the smallest waiting time feasible for a human . *b * , reproduction of fig.2a of ref . obtained with vistametrix and , * c * , the same data but with the boundaries of the bins clearly marked .we assumed that the data points in ( b ) are placed in the middle of the bin .note that there are two bins recording data for second , while the data have a resolution of one second .the shaded bins indicate values with seconds consisting of 1% of the data . ]we characterize the actual distribution of waiting times following the same procedure outlined in section [ sect : tau ] .after parsing the data , we are left with 724 users which have sent at least 10 response e - mails over 83 days and have at least two waiting times in the interval seconds .we then perform ks tests and bayesian model selection to determine whether the waiting times are better described by a power - law or log - normal distribution .the bayesian model selection analysis demonstrates that the likelihood of the truncated power - law model being a good description of the data vanishes to zero when all data is considered ( fig .[ ks_waiting ] ) . for three users in the database .the top panels show data and power - law predictions over intermediate values of whereas the middle and bottom panels depict gaussian and double gaussian model predictions for the entire range of . * b**c * , scatter plot of the ratio of the two values for all available users depending on the number of e - mails sent for each user .the larger circles highlighted in green , yellow , and red correspond to the data shown in ( a ) .users outside the domain are indicated by the arrows .* d**e * , recursively calculated posterior probability of accepting the power - law model for in comparison with the log - normal and double log - normal models .we use bayesian model selection to recursively calculate the posterior probability of selecting the power - law distribution for two different prior probabilities : ( solid black ) and ( red dashed ) .the posterior probability of selecting the power - law model vanishes after considering 140 and 49 of the 724 users for the log - normal and double log - normal comparisons , respectively . ]analysis of the data for the users with the largest number of replies suggests that may actually be better described by a superposition of two log - normal peaks : the first peak which contains most of the probability mass typically corresponds with waiting times of an hour , and the second peak typically corresponds with waiting times of two days .this finding prompted us to investigate whether the superposition of two log - normals would provide a better description of the data than a single log - normal .the probability function in this case has the functional form : \ , , \label{eqn : dln}\ ] ] where and are the means the two peaks , and are the standard deviations of the two peaks , and is the probability mass in the first peak . 
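a hedged python sketch of this two - component log - normal mixture and of a maximum - likelihood fit of its five parameters , obtained by minimizing the negative log - likelihood as outlined in the appendix ; the parameter handling , the starting values and the use of scipy.optimize are illustrative assumptions , not the authors ' analysis code .

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def double_lognormal_pdf(tau, p, mu1, s1, mu2, s2):
    # mixture of two log-normals in tau, with probability mass p in the first peak
    x = np.log(tau)
    return (p * norm.pdf(x, mu1, s1) + (1.0 - p) * norm.pdf(x, mu2, s2)) / tau

def fit_double_lognormal(tau):
    # maximum-likelihood estimate of (p, mu1, s1, mu2, s2) for the sample tau
    x = np.log(tau)

    def neg_log_likelihood(theta):
        p, mu1, s1, mu2, s2 = theta
        if not (0.0 < p < 1.0 and s1 > 0.0 and s2 > 0.0):
            return np.inf
        mix = p * norm.pdf(x, mu1, s1) + (1.0 - p) * norm.pdf(x, mu2, s2)
        return -np.sum(np.log(mix + 1e-300))

    # preliminary estimates from the mean and standard deviation of log(tau), as in the appendix
    theta0 = (0.8, x.mean() - x.std(), 0.5 * x.std(), x.mean() + x.std(), 0.5 * x.std())
    return minimize(neg_log_likelihood, theta0, method="Nelder-Mead").x

if __name__ == "__main__":
    # synthetic example: a peak near one hour and a smaller peak near two days (in seconds)
    rng = np.random.default_rng(2)
    tau = np.concatenate([np.exp(rng.normal(np.log(3600.0), 1.0, 800)),
                          np.exp(rng.normal(np.log(2 * 86400.0), 0.7, 200))])
    print(fit_double_lognormal(tau))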
distributions .after considering all of the 724 available users , the posterior probability of the power - law and log - normal vanishes . ] in order to conduct the ks tests and bayesian model selection , we must first estimate the parameters of the double log - normal distribution , eq .( [ eqn : dln ] ) . unlike the earlier analyses , it is not possible to estimate the parameters of the distribution without performing a fit of eq .( [ eqn : dln ] ) to the data .we perform maximum likelihood estimation to determine the best estimate parametrization of eq .( [ eqn : dln ] ) ; see appendix [ sect : mle ] for details .after determining the parameters of the double log - normal distribution , we conduct ks tests and bayesian model selection as before , and we find that a double log - normal has a posterior probability of one when compared with the power - law model ( fig . [ ks_waiting ] ) .in fact , if we consider all three candidate models simultaneously , we still find that the posterior probability of the double log - normal is one ( fig .[ threeway ] ) .recently , barabsi and co - workers have reinterpreted the definition of the waiting times introduced in ref .barabsi and co - workers note that the actual waiting time should not be counted from the time the original e - mail was sent , but from the time the original e - mail was first read .this appears perfectly logical , but the database under investigation does not provide us with information on when the user _ actually _ first read the e - mail .in fact , as we explained earlier the database does not even provide information that would enable one to decide whether an e - mail is a reply to a previous message or whether it is a totally unrelated message . nonetheless , it is worthwhile to analyze in greater detail the manner in which the authors of refs . measure the waiting time since they characterize it as an improvement over the original method . at time a sends an e - mail to user b. at time , user b sends an e - mail . at time , user b sends an e - mail to user a. the `` real '' waiting time is now defined as , instead of .note that still is _ not _ the actual time when the user actually first read the e - mail ., ) in figs .1a b of ref . . * a * , reproduction of the empirical probability density ( open circles ) and purported model solution ( red line ) from fig .1a of ref . using vistametrix . to match the model with the empirical data , the authors of ref . claim that is actually ( arrow ) .moreover , the authors of ref . do not use the actual model solution from ref . ( blue dashed line ) as claimed .* b * , probability function for the empirical waiting times , the purported model prediction , and the actual model solution .even if the purported model solution was correct , it is visually apparent that it does not match the large gap in waiting times between and seconds . ]we find three major problems with the reported predictive ability of the priority queuing model to capture the peak , the power - law regime , and the exponential cut - off of the waiting time distributions .first , we are troubled that the `` agreement '' for the peak at is obtained by making the transformation if , instead of as would be expected from the definition . in other words , to match the peak at , barabsi and co - workers state that . moreover , we are surprised that barabsi and co - workers claim to use the exact model solution to predict the empirical waiting times . unlike fig .1a of ref . 
, the exact probability density has a large , discontinuous drop at .when we compare the data presented in ref . with the actual solution , it is clear that the model does not , in fact , match the empirical data ( fig .[ reply_critique]a ). finally , there are no waiting time values for between 1 and 60 seconds whereas the priority queuing model predicts a smooth continuous decrease of the probability density function in that region . while the difference between the two functions is difficult to discern in the plot of ref . , the difference is actually quite marked ( fig . [ reply_critique]b ) .we also examined the priority queuing model presented to explain the reported power - law in e - mail communication .this model is defined as follows .an individual has a priority queue with tasks .each task is assigned a priority drawn from a uniform distribution ] denoted with the gray shading .when new tasks are added to the queue from a uniform distribution on the interval ] ( fig .[ x_evo ] ) , while new tasks arrive with a priority drawn from $ ] .thus , in the limit , an e - mail user has a queue consisting of extremely low priority tasks and consequently performs new tasks with probability immediately upon arrival .this results in a peak at that accounts for nearly all of the probability mass ( fig .[ trans_ss_tauw ] ) .clearly this situation is not representative of e - mail activity , let alone human behavior as ref . claims .( red ) and steady - state for times ( black ) .notice that the likelihood that a task is executed after spending more than one unit of time on the queue is vanishingly small . ]more fundamentally , the priority queuing model predicts a distribution of waiting times that decays as a power - law with an exponent whereas the actual data clearly rejects that description ; cf . section [ sect : tauw ] .in fact , a superposition of two log - normals , one corresponding to waiting times of less than a day and another corresponding to a waiting time of several days provides an excellent description of the empirical data .more importantly , that description agrees with the common experience of e - mails users : one replies to e - mails within the day , if not , within the next few days , and if not then , never .here , we have quantitatively analyzed human e - mail communication patterns . 
in particular , we have found that the interevent times are well - described by a log - normal distribution while the waiting times are well - described by the superposition of two log - normal distributions .we have simultaneously rejected the hypothesis that either quantity is adequately described by a truncated power - law with exponent .we have also critically examined the priority queuing model proposed by barabsi to match the empirically observed waiting time distributions .after detailed analysis , we conclude that neither the assumptions nor the predictions of the model are plausible .we note that the model does not match the empirically observed waiting time distribution , and we therefore contend that the theoretical description of human dynamics is an open problem .barabsi and coworkers have also examined the dynamics of letter writing , web browsing , library loans , and stock broker transactions .they argue that these processes also follow power - law distributions and are consequences of similar priority queuing processes .our analysis demonstrates that care must be taken when describing data with fat - tails , particularly when the apparent scaling exponent is close to one and the probability distribution is concave .in maximum likelihood estimation , the likelihood function for distribution model given the data is where is the number of data points in the sample , the are the empirical data points and is the probability density function for the candidate model evaluated at each empirical data point .we then maximize the likelihood to find the parametrization of the model distribution that best approximates the data .for the double log - normal model , log - normal . to find the best estimate of the five parameters , we obtain preliminary estimates for and from the mean and standard deviation of and subsequently maximize likelihood to find the appropriate double log - normal parameters . in practice , however , one typically performs a minimization of . | following up on barabsi s recent letter to _ nature _ [ * 435 * , 207211 ( 2005 ) ] , we systematically investigate the time series of e - mail usage for 3,188 users at a university . we focus on two quantities for each user : the time interval between consecutively sent e - mails ( interevent time ) , and the time interval between when a user sends an e - mail and when a recipient sends an e - mail back to the original sender ( waiting time ) . we perform a standard bayesian model selection analysis that demonstrates that the interevent times are well - described by a single log - normal while the waiting times are better described by the superposition of two log - normals . our analysis rejects the possibility that either measure could be described by truncated power - law distributions with exponent . we also critically evaluate the priority queuing model proposed by barabsi to describe the distribution of the waiting times . we show that neither the assumptions nor the predictions of the model are plausible , and conclude that a theoretical description of human e - mail communication patterns remains an open problem . |
in networks , packets have to be routed between nodes through a series of intermediate relay nodes . each intermediate node in the network may receive packets via multiple data streams that are routed simultaneously from their source nodes to their respective destinations . in such conditions , packets may have to be stored at intermediate nodes for transmission at a later time .if buffers are unlimited , intermediate nodes need not have to reject or drop arriving packets .however , in practice , buffers are limited in size .although a large buffer size is preferred to minimize packet drops , large buffers have an adverse effect on the _ latency _ , i.e. , the delay experienced by packets stored in the network .further , using larger buffer sizes at intermediate nodes would also result in secondary practical issues such as increased memory - access latency .though our work is motivated by such concerns , our work is far from modeling realistic conditions .this work modestly aims at providing a theoretical framework to understand the fundamental limits of single information flow in finite - buffer line networks and investigates the tradeoffs between throughput , packet delay and buffer size .the problem of computing capacity and designing efficient coding schemes for lossy wired and wireless networks has been widely studied . however , the study of capacity of networks with finite buffer sizes has been limited .this can be attributed solely to the fact that analysis of finite buffer systems are generally more challenging . with the advent of network coding as an elegant and effective tool for attaining optimum network performance ,the interest in finite - buffer networks has increased .the problem of studying lossy networks with finite buffers has been investigated in the area of queueing theory under a different but similar framework .the queueing theory framework attempts to model packets in a network as customers , the delay due to packet loss over links as service times in the nodes , and the buffer size at intermediate nodes as the queue size .further , the phenomenon of packet overflow in communications network is modeled by blocking ( commonly known as _ type ii _ or _ blocking after service _ ) in queueing networks . however , this packet - customer equivalence fails in general network topologies due to the following reason .when the communications network contains multiple disjoint paths from the source to the destination , the source node can choose to duplicate packets on multiple paths to minimize delay .this replicating strategy can not be captured directly in the customer - server based queueing model .therefore , the queueing framework can not be directly applied to study packet traffic in general communications networks . however , queueing theory offers solid foundation for studying buffer occupancies and packet flow traffic in line networks .there has been extensive study in queueing theory literature on the behavior of open tandem queues , which are analogous to line networks .however , approaches from queueing theory literature predominantly consider a continuous - time model for arrival and departure of customers / packets . 
in this work, we consider a discrete - time model for packet arrival and departure processes by lumping time into epochs .this model is similar to those in .the broad contributions of this paper can be summarized as follows .the bulk of this work operates under the assumption of perfect hop - by - hop feedback .we present a markov - chain based model for exact analysis of line networks .the capacity of a line network is shown to be related to the steady - state distribution of an underlying chain , whose state space grows exponentially in the number of hops in the network .simple assumptions of renewalness of intermediate packet processes are employed to estimate the capacity of such networks .the estimates are exact for two - hop networks . however , the estimates extend the results of to networks of any number of hops and buffer sizes of intermediate nodes . using the estimates ,the profile of packet delay is derived and studied .using the exact markov chain model in conjunction with network coding , it is shown that the throughput capacity is not affected by the absence of feedback in line networks .this result is similar to the information - theoretic result that feedback does not increase capacity of point - to - point channels .finally , simulations reveal that our estimates closely predict the trends and tradeoffs between hop - length , buffer size , latency , and throughput in these networks .this paper is organized as follows .first , we present the formal definition of the problem and the network model in section [ fb - sec1 ] .next , we present our framework for analyzing capacity of finite - buffer line networks in section [ fb - sec2 ] . the proposed markovian framework is then employed to investigate packet delay in section [ fb - delay ] .we compare our analytical results with simulations in section [ fb - sec4 ] and conclude it with a brief discussion on the inter - dependence of buffer usage , capacity and delay .finally , section [ conc ] concludes the paper .this work focuses on the class of line networks . as illustrated in fig .[ fig1 ] , denotes the number of hops in the network , and and to denote the set of nodes and the set of links in the network , respectively .such a network has intermediate nodes , which are shown by black squares in the figure .each intermediate node is assumed to have a buffer of packets .note that buffer sizes of different nodes can be different . without loss of generality , we assume and , for . further , it is assumed that the destination node has no buffer constraints and that the source node possesses an infinitude of innovative packets at all times .the system is analyzed using a discrete - time model , where each node can transmit at most one packet over a link in any epoch .intermediate buffers are assumed to be empty at epoch and the dynamics for are steered by the loss processes on the edges of the network .the loss process on each link is assumed to be memoryless and statistically independent of the loss processes on other links .we let to denote the erasure probability on the link for . in this model, a node receives a packet on an incoming link when the neighboring upstream node transmits a packet and when the packet is not erased over the link .the reader is directed to appendix [ app.0-discvscts ] for a discussion on how the assumed discrete - time model relates to continuous - time exponential model that is commonly employed in queueing theory . 
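The relation alluded to above can be illustrated with a short numerical sketch; this is our own illustration and not a reproduction of the appendix. When an event of continuous-time rate lambda is discretized into epochs of length delta, the per-epoch success probability is 1 - exp(-lambda*delta), and the resulting geometric inter-event times approach the exponential ones as delta shrinks; the rate and epoch length below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, delta = 2.0, 0.01                       # illustrative continuous-time rate and epoch length
p = 1.0 - np.exp(-lam * delta)               # per-epoch success probability of the discretized process

# Geometric number of epochs until the first success, converted back to physical time.
gaps = delta * rng.geometric(p, size=100_000)

print("discretized mean inter-event time:", gaps.mean())   # approaches 1/lam as delta -> 0
print("exponential mean inter-event time:", 1.0 / lam)
```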
for the bulk of this work , we assume that the network has a perfect hop - by - hop feedback mechanism indicating the transmitting node of the receipt and storage of the transmitted packet by the receiving node. however , a subsequent section of this paper drops this assumption to study the capacity of line networks without feedback .it is also assumed in this work that nodes operate in a _ transmit - first _mode , i.e. , each node first generates a packet ( if it has a non - empty buffer ) and transmits it on the outgoing edge .the node then processes the buffer after receiving the acknowledgement from the next - hop node before accepting / storing the packet on its incoming edge . for notational convenience ,the random process on the link is denoted by . if and only if the packet transmitted at epoch is deleted by the channel , and otherwise . for the sake of succinctness , we let and buffer sizes . the focus of this paper is two - fold .the foremost aim is to identify the _ supremum _ of all rates that are achievable by the use of any coding strategy between the ends of a line network with erasure probabilities and buffer sizes . in the line network illustrated in fig .[ fig1 ] , we first aim to identify the maximum rate of information that the node can transmit to node , which is denoted by . the next issue on which we focus is the delay experienced by packets in intermediate node buffers when the network operates near the throughput capacity . in our analysis , we employ the following notations .vectors will be denoted by boldface letters , eg . , .the indicator function for the set is represented by ] , .the convolution operator is denoted by and is used as a shorthand for the -fold convolution of with itself . for , denotes the probability mass function of a positive random variable that is geometric with mean . for a discrete random variable with probability mass function , and both used to denote the mean of the random variable .lastly , for appropriate , denotes the galois field of size .in this section , we investigate the effect of finite buffers on the capacity of line networks .first , we present a framework for exact computation of the capacity of line networks that have perfect hop - by - hop feedback . we then present bounds on the capacity using techniques from queueing theory .subsequently , we present our approaches to approximate the capacity of a line network . in the concluding subsection , we illustrate that the throughput capacity remains unaltered when feedback is absent provided packet - level network coding is allowed .the problem of identifying capacity is related to the problem of identifying schemes that are _ rate - optimal_. in the presence of lossless hop - by - hop feedback , the scheme performing the following steps in the given order is rate - optimal . *if the buffer of a node is not empty at an epoch , then it must transmit one of the stored packets at that time . *a node deletes the packet transmitted at an epoch if it receives an acknowledgement of packet storage from the next - hop node at that epoch . * after performing and , a node accepts an arriving packet if it has space in its buffer and sends an acknowledgment of packet storage to the previous node .notice that in the above scheme , at each epoch , the buffer of the last intermediate node is updated first , and the buffer of the first intermediate node is updated last . 
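The rate-optimal scheme R1-R3 can also be simulated directly, which provides a useful reference point for the analysis that follows. The sketch below is our own illustration rather than part of the original development: it assumes the transmit-first convention just described, updates buffers from the last intermediate node to the first, and reports the empirical throughput as the fraction of epochs in which the destination stores a packet; the function name, the example parameters, and the Monte-Carlo settings are ours.

```python
import random

def simulate_line_network(eps, m, epochs=200_000, seed=1):
    """Monte-Carlo sketch of the rate-optimal scheme R1-R3 on an H-hop line (H >= 2).

    eps[k] is the erasure probability of link k+1 and m[j] the buffer size of
    intermediate node j+1; the source always has packets and the destination
    never blocks.  Returns the empirical throughput in packets per epoch.
    """
    H = len(eps)
    rng = random.Random(seed)
    n = [0] * (H - 1)                      # occupancies of the H-1 intermediate nodes
    delivered = 0
    for _ in range(epochs):
        x = [0 if rng.random() < e else 1 for e in eps]     # 1 = link does not erase
        y = [0] * (H + 1)                  # y[i] = 1 if a packet is stored over link i
        y[H] = x[H - 1] if n[H - 2] > 0 else 0              # last hop: destination never full
        for i in range(H - 1, 1, -1):      # middle hops, downstream buffers settled first
            has_packet = n[i - 2] > 0                       # node i-1 has something to send
            has_space = m[i - 1] - n[i - 1] + y[i + 1] > 0  # node i has room after its own departure
            y[i] = x[i - 1] if (has_packet and has_space) else 0
        y[1] = x[0] if m[0] - n[0] + y[2] > 0 else 0        # source is never empty
        for j in range(H - 1):             # n_i <- n_i + Y_i - Y_{i+1}
            n[j] += y[j + 1] - y[j + 2]
        delivered += y[H]
    return delivered / epochs

# Hypothetical example: a three-hop line with intermediate buffers of four packets each.
print(simulate_line_network([0.2, 0.5, 0.3], [4, 4]))
```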
to determine the throughput capacity of the network , we need to track the number of packets that each node possesses at every instant of time by using the rules of buffer update under the above optimal scheme .let be the vector whose component denotes the number of packets possesses at time .the variation of state at the epoch can be tracked using auxiliary random variables defined by (l ) & i = h\\ x_i(l)\sigma[n_{i-1}(l)(m_{i}-n_{i}(l)+y_{i+1}(l ) ) ] & 1<i< h\\ x_i(l)\sigma[m_{i}-n_{i}(l)+y_{i+1}(l ) ] & i=1 \end{array}\right.\hspace{-2mm}.\label{fb - eqn2}\end{aligned}\ ] ] from the definition of the auxiliary binary random variables in ( [ fb - eqn2 ] ) , we see that only if all the following three conditions are met : * node has a packet to transmit to .* the link does not erase the packet at the epoch , i.e. , , and * node is not full after its buffer update due to its transmission over at the epoch .the changes in the buffer states can then by seen to be given by the following . note that since is a function of and , depends only on its previous state and the channel conditions at the epoch .hence , forms a markov chain .the number of states corresponds to the number of possible assignments to , which amounts to possibilities .however , since at each time instant the number of packets that can be transmitted over any link is bounded by unity , we see that for every , therefore , the number of non - zero entries in each row of the probability transition matrix term of the matrix represents the probability that the next state is given that state is presently . ] representing the transitions in the occupancy is bounded above by . a detailed categorization of the states that enables further understanding can be performed thus .we can order the states of the chain in such a way that the state corresponds to the row index in the matrix .denote to be the set of states that have for .let represent the transition matrices for transitions from states in to those in , , , respectively .then , it can be shown that , , and for ( see lemma [ fb - lem1 ] ) .therefore , the transition matrix of the chain can be structurally represented as follows . the dynamics given by the above equation can be depicted pictorially by the chain in fig .note that due to the finite buffer condition and the non - negativity of occupancy , the transitions from the first block and from the last block differ from the transitions from the blocks between them .further , the states within each , , can be organized into sets in a similar fashion .in addition to this structural property , the transition sub - matrices satisfy the following algebraic properties .[ fb - lem1 ] in a generic line network , the following hold .* , , and for . * for , is non - singular and upper triangular for . * for , is singular and lower triangular for .* is non - singular .see appendix [ app.1 - 1 ] .to illustrate the implications of the above lemma , consider the markov chain for a three - hop line network with erasure probabilities , and with buffer sizes presented in fig .[ fb - mc ] . for this network ,the algebraic properties of lemma [ fb - lem1 ] can be understood as follows . *any transition involving a decrease in the second component involves a non - negative change in the magnitude of the first component . *any horizontal transition involving a decrease in the second component is always feasible ( provided the second component of the starting state is positive ) . 
*any transition involving an increase in the second component involves a non - positive change in the magnitude of the first component . *not all horizontal transitions involving an increase in the second component are feasible .for example , the transitions from the state to and from the state to are infeasible , and hence .while the first two facts relate to the upper triangular structure and non - singularity of , the latter two relate to the lower triangular and singularity properties of .this markov chain for the dynamics of the state of a line network with perfect feedback is _ irreducible , aperiodic , positive - recurrent _ , and _ ergodic _ . by ergodicity, we can obtain temporal averages by statistical averages .therefore , the throughput capacity can be identified by appropriately scaling the likelihood of the event that the system is in a state wherein the last node buffer is non - empty .this quantity is given by \label{fb - eqn5.1 } \end{aligned}\ ] ] notice that packets are not erased from the buffers without a receipt of acknowledgement of storage from the next - hop node .therefore , the packet - flow rate is conserved .therefore , for , the throughput capacity can also be identified from .\label{fb - eqn5.1b } \end{aligned}\ ] ] thus , the problem of identifying the capacity of line networks is reduced to the problem of computing the steady - state probabilities of the aforementioned markov chain . however , due to the size of the markov chain and its transition matrix , and the presence of multiple reflections due to the limited buffers at intermediate nodes , the problem of computing the steady - state distribution and capacity is computationally tedious even for networks of reasonable hop - lengths and buffer sizes . as the first step towards estimation ,we can define a finite sequence of matrices by note that these matrices relate the steady - state distribution of the states in , by . using these relations ,we can , in theory , estimate the capacity by however , this matrix - norm approach does not provide insight into occupancy statistics of various nodes .therefore , we focus on an approximations - based approach to capacity estimation in the remainder of this work . in queueing theory , problems of identifying the steady - state probability of stochastic networks have often been dealt with approximations .most approaches to problems in this area have been to approximate the dynamics of the network by focussing on local dynamics of the network around each node and the edges incident with it .the key idea in this section is to modify the exact markov chain to derive bounds on throughput capacity .to do so , notice that the main reason for intractability of the exact system is the strong dependence of on not only , but also .this dependence translates to a strong dependence of on both and .relaxation of this strong dependence will be a step towards possible decoupling of the system , and a deeper understanding of the tradeoffs in such networks .consider a network operation mode where each intermediate note transmits an acknowledgement whenever it _ receives _ a packet ( as opposed to the rate - optimal setting where it sends an acknowledgement whenever it _ receives and stores _ a packet successfully ) . 
under this new mode of operation, we notice that the dependence of the state of node on that of nodes further downstream is eliminated .this mode of operation is equivalent to assuming that a packet that arrives at a node whose buffer is full gets lost / dropped unlike the optimal mode of operation where it gets re - serviced . in this mode ,the state updates are given by a simplified markov chain that is generated by the following rule for all and .\hspace{-0.45mm}-\hspace{-0.45mm}\tilde{y}_{i+1}(l ) , \hspace{-1.5mm}\label{amceqn}\end{aligned}\ ] ] where (l ) & 1<i\leq h\\ x_i(l ) & i=1\end{array}\right ..\label{fb - eqn6}\end{aligned}\ ] ] to avoid confusion , we appellate the chain that is obtained by the dynamics defined by ( [ fb - eqn2 ] ) and ( [ fb - eqn3 ] ) as the exact markov chain ( emc ) and the one defined by ( [ amceqn ] ) and ( [ fb - eqn6 ] ) as the approximate markov chain ( amc ) . also , we allow and to always denote the state of an instance of the process generated by the emc and the amc , respectively. then , the following property holds . _( temporal boundedness property of the amc)_[fb - thm1 ] consider a line network with hops and an instance of channel realizations .suppose we track the variation of the states of the emc and the amc using this instance of channel realizations with the same initial state .then , for any and , the following holds . the proof is detailed in appendix [ app.1 - 2 ] .the temporal boundedness property guarantees that statistically , the probability that a node has an empty buffer is overestimated by the amc .in fact , if we can identify the steady - state distribution of the states of amc , we can provide a lower bound for the steady - state probability of any subset of states that have the form where for .using the temporal boundedness property in conjunction with ( [ fb - eqn5.1 ] ) , we can provide a lower bound for the capacity of the line network by underestimating the probability in ( [ fb - eqn5.1 ] ) by using the steady - state distribution of the amc instead of that of the emc .equivalently , the capacity of the line network is at least that of the throughput achievable by the amc .this above idea of lower bound extends easily to an upper bound using the following result .the fundamental idea behind the following bound is to manipulate the buffer sizes at each node so that the packet drop in the modified network is provably infrequent than in the actual network .[ fb - ubthm ] let the operator be defined by . for a given network with distinct erasure probabilities and buffer sizes , denote to be the throughput computed from the steady - state distribution of the amc defined by ( [ amceqn ] ) and ( [ fb - eqn6 ] ) with erasure probabilities and buffer sizes , i.e. , . then , a detailed proof is presented in appendix [ app.1 - 4 ] .thus , the problem of bounding capacity is reduced to identifying the steady - state probability of the amc .notice that the above bounds are not in a computable form , since they still involve identifying the steady - state distribution of the amc .even though the amc is significantly simpler than the emc , the output process from each intermediate node is not renewal .therefore , the distribution of inter - departure times from each intermediate node is insufficient to completely describe the arrival process at intermediate nodes for .therefore , a straightforward hop - by - hop analysis ( without further assumptions ) seems insufficient to identify the capacity of such networks . 
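The lower bound associated with the drop-when-full mode can also be checked empirically by simulating that mode directly. The following sketch reflects our own reading of the modified operation (acknowledgements sent on receipt rather than on storage, so a packet finding a full downstream buffer is lost); the function name, the transmit-first space check at the receiver, and the Monte-Carlo settings are assumptions, and the resulting estimate should under-estimate the feedback capacity in line with the temporal boundedness property.

```python
import random

def simulate_drop_when_full(eps, m, epochs=200_000, seed=1):
    # Same line network as before, but a node deletes its packet as soon as the
    # next hop *receives* it; if the receiver has no room (after its own
    # departure this epoch), the arriving packet is simply dropped.  The
    # empirical throughput of this mode lower-bounds that of the feedback scheme.
    H = len(eps)
    rng = random.Random(seed)
    n = [0] * (H - 1)
    delivered = 0
    for _ in range(epochs):
        x = [0 if rng.random() < e else 1 for e in eps]
        # Departures depend only on the sender's occupancy and the channel.
        dep = [0] * (H + 1)
        dep[1] = x[0]                                      # source always has packets
        for i in range(2, H + 1):
            dep[i] = x[i - 1] if n[i - 2] > 0 else 0
        # Arrivals are stored only if there is room after the node's own departure.
        stored = [0] * (H + 1)
        for i in range(1, H):
            stored[i] = dep[i] if m[i - 1] - n[i - 1] + dep[i + 1] > 0 else 0
        for j in range(H - 1):
            n[j] += stored[j + 1] - dep[j + 2]
        delivered += dep[H]
    return delivered / epochs
```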
in this section , we present two iterative estimates for the capacity of line networks that is based on certain simplifying assumptions regarding the emc .we notice that the difficulty of exactly identifying the steady - state probabilities of the emc stems from the finite buffer condition that is assumed .the finite buffer condition introduces a strong dependency of state update at a node on the state of the node that is downstream .this effect is caused by blocking when the state of a node is forced to remain unchanged because the packet that it transmitted is successfully delivered to the next - hop node , but the latter is unable to store the packet due to lack of space in its buffer .additionally , the non - tractability of the emc is compounded by a non - renewal packet departure process from each intermediate node . in this section ,we ignore some of these issues to develop iterative methods for estimation .figure [ fb - ie ] encapsulates the assumptions made in both estimation approaches . while both approaches ignore the non - renewal nature of packet arrival process at each node , the first approach makes an additional memoryless assumption on the arrival process .additionally , both approaches model the effect of blocking by the introduction of a single parameter that represents the probability that an arriving innovative packet will be blocked .this estimate makes the following assumptions to decouple the dynamics of the system and enable capacity estimation . *the packet departure process at each intermediate node is memoryless . in other words ,each node sees a packet arrival process that is memoryless with ( average ) rate packets / epoch .this assumption allows us to track information rates over links while simplifying the higher order statistics . *any packet that is transmitted unerased by the channel is blocked independently with a probability .that is , for any , = { \overline}{\varepsilon}_{i+1}{p_b}_{i+1}.\nonumber\end{aligned}\ ] ] + here , denotes the blocking probability due to full buffer state at .this assumption allows us to track the blocking probability ignoring higher order statistics of the blocking process . * for each node and epoch , the event of packet arrival and the event of blocking from are independent of each other . under the simplifying assumptions.,width=326 ] the above assumptions are valid in the limiting case of large buffers provided the system corresponds to a stable queueing configuration . by assuming that they hold in general , the effect of blocking is spread equally over all non - zero states of occupancy at each node .similarly , the assumptions also spread the arrival rate equally among all occupancy states .given that the arrival rate of packets at the node is packets / epoch , and the blocking probability of the next node is , the local dynamics of the state change for the node under assumptions a1-a3 is given by the markov chain of fig . [ fb-1hmc ] with the parameters set to the following . using these parameters , the steady - state distribution , then we set . ] of the chain of fig .[ fb-1hmc ] can be computed to be assuming that observes a packet arrival rate of from and a blocking probability of from , the blocking probability that the node perceives from the node and the arrival rate that observes can be computed via ( [ fb - ss ] ) using the following equations . 
note that the blocking probability is computed using the full occupancy probability of the node .while in reality , a packet is blocked by only if at the arriving instant , the node has full occupancy , a2 models any arriving packet to be blocked with the above probability irrespective of the occupancy of .also , in ( [ fb - pb ] ) and ( [ fb - r ] ) the arrival rate from the node is and the blocking probability . given two vectors ^{h} ] ,we term as a rate - approximate solution to emc , if they satisfy the equations ( [ fb - pb ] ) and ( [ fb - r ] ) in addition to having and . since these relations were obtained from making assumptions on the emc , it is _ a priori _ unclear if there exist rate - approximate solutions for a given system .fortunately , the following result guarantees both the uniqueness and an algorithm for identifying the rate - approximate solution to the emc .[ fb - uniqthm ] given a line network with link erasures and intermediate node buffer sizes , there is exactly one rate - approximate solution to the emc .further , the rate - approximate solution satisfies flow conservation .that is , the proof is detailed in appendix [ app.1 - 3 ] finally , the estimate of the capacity can be obtained from the rate - approximate solution by computing the average rate of packet storage at each node using note that by the conservation of flow , any can be used in the above equation to identify capacity . as an illustration ,consider a simple four - hop network with erasures and buffer sizes . from the above estimation method, we arrive at from simulations , the throughput capacity was found to be packets / epoch for the same network . in this section , we assume that the given line network satisfies for . since the capacity of a line network is a continuous function of the system parameters , this assumption is not restrictive .a system with non - distinct erasure parameters can be approximated to any degree of precision by a system with distinct erasure probabilities .before we introduce the second approach for estimation , we present the following technical results or all be positive . we only need that their sum be unity and that they generate a valid probability distribution , respectively . ] wherein we denote to be the identity distribution for the convolution operator .[ fb - lbthm ] consider a tandem queueing system of two nodes where the first node possessing buffer slots is fed by a renewal process whose inter - arrival time distribution is with and for .suppose that the distribution of service time is , where for .further , suppose that the second node blocks an arriving packet memorylessly with probability , and that any blocked packet gets re - serviced .then , the distribution of inter - arrival times as seen by the second node is given by where for some and , , with .a detailed analysis including the means of identifying is given in appendix [ app.2 ] .just as in the rate - based iterative estimate , this estimate also makes three assumptions to simplify the emc . 
while the distribution - based iterativeestimate makes assumptions a2 and a3 , it relaxes assumption a1 to the following : * the packet departure process at each intermediate node is renewal .note that assumption a1 allows for tracking only the average rate of information flow on edges whereas a1 allows tracking of the distribution of packet inter - arrival times .however , a1 ignores the fact that the distribution of an inter - arrival time changes with the knowledge of past inter - arrival times .to track the inter - arrival distribution and blocking probabilities at each node , the distribution - based iterative estimate uses theorem [ fb - lbthm ] in a hop - by - hop fashion .assuming that the packet arrival process at is renewal with an inter - arrival distribution , and that the memoryless blocking from occurs with probability , we see that the packet inter - arrival distribution seen by is given by notice that just like in ( [ eqn-24 ] ) , uses the effective erasure probability to incorporate the effect of blocking by .however , this corrective term does not appear in term , because represents the distribution of packet inter - arrival times at , and not the distribution of the time between two adjacent successful packet storages at .further , the blocking probability of as perceived by is given by \\ & \stackrel{(\ref{fb - blockingeqn})}{= } \mathcal{p}(f_i , m_i,{\varepsilon}_{i+1},{p_b}_{i+1 } ) .\label{fb - pb - die}\end{aligned}\ ] ] just as in the rate - based iterative estimate , we call a solution to ( [ fb - r - die ] ) and ( [ fb - pb - die ] ) with boundary conditions and as a distribution - approximate solution .though the existence and uniqueness of the distribution - approximate solution for a given system has eluded us , simulations reveal that for each system , the solution is unique and can be found by iteratively using the following algorithm .note that during any round of in the above algorithm , ( [ fb - r - die ] ) can be iteratively used to identify ] for which a good choice of buffer allocation needs to be identified under the constraint that the total number of buffers in the network must be no more than 30 packets . to this end, we use the rate - based estimate to study the effect of individual buffer sizes on throughput and delay .[ delay - throughput ] shows the variation of the throughput and delay contributed by each node when its memory is varied from 1 to 20 packets , while the buffer sizes of other intermediate nodes are kept at 20 packets . in this example, it is noticed that maximum throughput estimate for all choices of memory estimates is packets / epoch when .this setting offers a mean packet delay of epochs .however , minimum delay configuration amongst those that offer a throughput more than packets / epoch is , which offers a throughput of packets / epoch and a mean packet latency of 28.46 epochs . the actual capacity and delay for these configurations were found to be packets / epoch , epochs and packets / epoch , epochs , respectively . to understand further these patterns , we present in fig .[ buffer - occupancy ] the steady - state occupancy of the three intermediate nodes when buffer sizes are set to packets , packets and packets , respectively . in all settings, it is noted that the node is congested because the sub - network from to has a min - cut capacity of , whereas it receives packets at the rate of .therefore , the steady - state occupancy of the node for and are translates of that of . 
due to congestion ,an arriving packet at such a node usually sees very high occupancy .hence , in a first - come first - serve mode of operation , the arriving packet has to wait long before getting serviced .therefore , it is critical that the buffer size of congested nodes ( such as ) be kept to absolute minimum to minimize average packet delay .similarly , can at most receive packets at a rate of , however the outgoing link can communicate packets at a much higher rate .therefore , the buffer of is never full as long as the buffer size is greater than five .nodes such as that are never congested contribute little to the delay experienced by packets .hence , limiting buffer sizes of such nodes is not critical for delay as long as the sizes are bigger than their threshold sizes ( beyond which throughput increase is marginal ) .occupancy in nodes like that are neither congested nor starved undergo non - trivial changes with changes in buffer sizes .these nodes contribute significantly to both the throughput and average packet delay in the network .for example , in the example network has a near - uniform distribution for both and . just like congested nodes ,such nodes have to be allocated buffer sizes so that the they neither block packets nor contribute to delay significantly .though the classification of nodes as congested , starved or neither can usually be done by focusing on , good memory allocation requires knowledge of trends of latency and throughput with buffer sizes , which in turn require the help of more sophisticated estimates such as those proposed in this work . as a second example , consider another four - hop network with ] .however , notice that no matter how large the buffer sizes are , the probability of blocking at any node is always non - zero .hence , the rate of arrival that and see is smaller than that noticed by .therefore , it is meaningful to assign a larger buffer size to minimize blocking at and maximize throughput .although this intuition is correct , it is unclear as to how to allocate buffers .the strength of the iterative technique is in resolving exactly this issue by assigning estimates to each buffer allocation configuration . by searching around the neighborhood of ,the maximum throughput configuration is found to be ] .hence , given the event , depends only on and and not on .this guarantees that for .\(b ) first suppose .consider for some and the state of the system at some time . represents transitions from states that have the form to states of the form . since , it must be that and that the channel must have erased the packet transmitted by .denote for and .then , it is seen that for any realization of , it is true that the state transition must obey however , is the index of the row corresponding to the state within and is the index of the column corresponding to within .therefore , all possible transitions in correspond to transitions from states to other state that involve a non - positive change in the row - index .therefore , is upper triangular .finally , since each diagonal term of is bounded below by , we conclude that finally , if , it is easy to see that ] , and hence follows .* : assume let .then , (k)\nonumber\\ & \leq{{\tilde{n}}}_i(k)+ 1 - y_{i+1}(k ) \leq n_i(k)-y_{i+1}(k)\nonumber\\ & \leq n_i(k)+y_i(k)-y_{i+1}(k ) = n_i(k+1).\nonumber\end{aligned}\ ] ]lastly , if , then the claim can be violated only if and , which can happen only if . however , under this channel instance , .thus , .thus , we have the following . 
the proof is then complete by following the above argument for and interpreting =\sigma[{{\tilde{n}}}_{0}(k)]=1 ] as described in algorithm [ alg : rbie ] . note that step 2 of the algorithm can be replaced by a convergence - type step that halts if -\mathbf{r}[l-1]\|_1 ] to be the following map .for each ^h ] for each . also , for this sequence of rates and blocking probabilities , we note that ,{\mathbf{p_b}}[l])-\mathbf{w}^*\|_\infty \nonumber\\ & + \|\xi(({\mathbf{r}}[l],{\mathbf{p_b}}[l]))-({\mathbf{r}}[l],{\mathbf{p_b}}[l])\|_\infty \label{ap1-conv}\\ & + \|\xi(\mathbf{w}^*)-\xi(({\mathbf{r}}[l],{\mathbf{p_b}}[l]))\|_\infty .\nonumber\end{aligned}\ ] ] however , the right - hand side of ( [ ap1-conv ] ) is true for any . by allowing ,the three limits vanish and hence we see that is a fixed point of the map and hence the unique solution to the system of non - linear equations . finally , to see the conservation of flow , notice that the rate - based iterative estimate models the system using a discrete - time m / m/1/ system by the introduction of additional assumptions and parameters . in the model ,the number of innovative packets that are successfully stored by as the system progresses from to is given by +{\overline}{\varepsilon}_{i+1}{\overline}{p_b}_{i+1}^*\pr[n_i = m_i]\big)+o(n)\nonumber\\ & = nr_i^*\big(1-\varphi(m_i|r_i^*,{\varepsilon}_{i+1},{p_b}_{i+1}^*)\big(1-{\overline}{\varepsilon}_{i+1}^*{\overline}{p_b}_{i+1}^*\big)\big)+o(n)\nonumber\\ & \hspace{-1.5mm}\stackrel{(\ref{fb - pb})}{= } nr_i^*{\overline}{p_b}_{i}^*+o(n)\label{fb - cons1}.\end{aligned}\ ] ] similarly , the number of packets successfully output by is given by \big){\overline}{p_b}^*_{i+1}+o(n)\nonumber\\ & = n{\overline}{\varepsilon}_{i+1}^*\big(1-\varphi(0|r_i^*,{\varepsilon}_{i+1},{p_b}_{i+1}^*)\big){\overline}{p_b}^*_{i+1}+o(n)\nonumber\\ & \hspace{-1.5mm}\stackrel{(\ref{fb - r})}{=}nr_{i+1}^*{\overline}{p_b}^*_{i+1}+o(n)\label{fb - cons2}\end{aligned}\ ] ] since the m / m/1/ system is lossless , all stored packets eventually leave the system .thus , the average rate of packet storage at a node must match the average rate of packets output from that node . comparing ( [ fb - cons1 ] ) with ( [ fb - cons2 ] ), the conservation of packet flow for the rate - approximate solution follows .the proof elaborates the behavior of a tandem system via a formal setup for the discrete - time equivalent of the queue . to illustrate the complications in the setup , fig .[ app2-fig1 ] presents a section of an inter - arrival period for the first node .the number of customers in the queue of the node just before an arrival or a departure is presented on the axis .the arrival and departure of customers is marked by incoming and outgoing arrows , respectively . in scenarioa , we see that the queue is never starved and as a result all the inter - departure times are instances of the service process .however , in scenario b , we notice that all the five customers that are in the queue after the arrival are serviced much ahead of the next arrival and hence there is a period of time during which the queue is starved .if the queue were not starved , it could have possibly serviced a customer at the instance marked by the outgoing dotted arrow .hence , this duration of time denoted by in the figure , adds a delay to the inter - departure time .thus , if we are able to extract the distribution of this duration , we can identify the inter - arrival distribution as seen by the second node to be a weighted sum of and . 
in order to identify the distribution , we need to identify the probability distribution of the number of customers in the first node s buffer just after an arrival .the first step in identifying from the imbedded markov chain for the occupancy of the first node is to construct the distribution of the number of packets that could be potentially transmitted during an inter - arrival duration provided the queue were infinite .this distribution can be computed from the arrival and departure processes in the following manner .\binom{k}{j}\tilde{{\zeta}}_{n}^{k - j}{\overline}{\tilde{\zeta}}_{n}^j \nonumber\\&=\sum_{k=1}^\infty \bigl(\sum_{l=1}^{n-1 } p_l { \overline}{{\zeta}}_l{\zeta}_l^{k-1}\bigr)\binom{k}{j}\tilde{\zeta}_{n}^{k - j}{\overline}{\tilde{\zeta}}_{n}^{j}\nonumber\\ & = \sum_{l=1}^{n-1 } p_l\frac{{\overline}{{\zeta}}_l}{{\zeta}_l}\big[\frac{{\overline}{\tilde{\zeta}_n}}{\tilde{\zeta}_n}\big]^j\bigl(\sum_{k=1}^{\infty } \binom{k}{j}({\zeta}_l\tilde{\zeta}_n)^k\bigr)\nonumber\\ & \stackrel{(a)}{=}\big[\frac{{\overline}{\tilde{\zeta}_n}}{\tilde{\zeta}_n}\big]^j\sum_{l=1}^{n-1 } \frac{p_l{\overline}{{\zeta}}_l}{{\zeta}_l}\big(\frac{({\zeta}_l\tilde{\zeta}_n)^j}{(1-{\zeta}_l\tilde{\zeta}_n)^{j+1}}-\sigma[1-j]\big),\end{aligned}\ ] ] where in the above , we use to incorporate the actual parameter of the memoryless service time , and in ( a ) , we use , . for each , the entry of the probability transition matrix for the imbedded markov chain that tracks the number of customers just after an arrival can be computed by \bigl(\sum_{k = i}^\infty d_k\bigr)&+\sigma[j-1]d_{i+1-j}\nonumber\\&+\sigma[j - m+1]d_{i - j}.\label{app2-eqn1}\end{aligned}\ ] ] note that in ( [ app2-eqn1 ] ) , we set when .the distribution can then be solved from the eigenvector relation .note that a packet arriving at the first node will not be accepted if the node is in full buffer and no packet had left in the preceding inter - arrival duration .the probability of this blocking event at the first node is given by finally , we can identify the distribution of by conditioning on the number of customers just after a customer arrival .it is seen that for , & = \sum_{j=1}^\infty\hspace{-0.5 mm } \bigg[\hspace{-1.5mm}\begin{array}{ll}\pr[\textrm{the queue is emptied at time }]\\ \times \pr[t_a = i+j]\end{array}\hspace{-1.5mm}\bigg ] \nonumber\\ & = \sum_{j=1}^\infty\binom{j-1}{k-1}{\overline}{\tilde{\zeta}}_n^k{\tilde{\zeta}_n}^{j - k}\bigl[\sum_{l=1}^{n-1 } p_i { \overline}{{\zeta}}_l{\zeta}_l^{i+j-1}\bigr]\nonumber\\ & = \sum_{l=1}^{n-1}p_l{\overline}{\zeta}_l{\zeta}_l^{i-1}\bigl[\frac{{\overline}{\tilde{\zeta}}_n^k}{{\tilde{\zeta}_n}^k}\sum_{j=1}^\infty \binom{j-1}{k-1}({\zeta}_l{\tilde{\zeta}_n})^j\bigr]\nonumber\\ & = \sum_{l=1}^{n-1}\bigl(p_l\frac{({\zeta}_l{\overline}{\tilde{\zeta}}_n)^k}{(1-{\zeta}_l{\tilde{\zeta}_n})^k}\bigr)({\overline}{\zeta}_l{\zeta}_l^{i-1 } ) .\label{app2-eqn3}\end{aligned}\ ] ] from ( [ app2-eqn3 ] ) , we notice that the distribution of conditioned on is a weighted sum of geometric distributions . 
the distribution of can then be computed as follows .}{\sum_{k=0}^m \pi_k \pr[x\geq 1|m = k ] } = \sum_{l=1}^l \beta_l { \overline}{\zeta}_l{\zeta}_l^{i-1},\nonumber\\ \beta_l&= p_l \bigg[{\mathop{\sum_{k\in\{0,\ldots , m\}}}_{l=\{1,\ldots , n-1\ } } } \hspace{-1 mm } p_l\frac{\pi_k({\zeta}_l{\overline}{\tilde{\zeta}_n})^k}{(1-{\zeta}_l{\tilde{\zeta}_n})^k}\bigg]^{-1}{{\sum_{0\leq k \leq m } } } \frac{\pi_k({\zeta}_l{\overline}{\tilde{\zeta}_n})^k}{(1-{\zeta}_l{\tilde{\zeta}_n})^k}\nonumber\end{aligned}\ ] ] also , we notice that the distribution of inter - arrival times as seen by the second node is either an instance of or that of , and hence can be written as for some ] .therefore , by choosing a large field size , the probability of the event can be made arbitrarily close to unity .finally from lemma [ rlcinnolem ] , we see that if , then is innovative w.h.p .it is straightforward to see that in this setting , if , an innovative packet is conveyed w.h.p . from to and finally , if , we notice that both occupancies remain unaltered w.h.p . * + suppose that by updating the buffers of with ( using a randomly selected ) , we introduce a linear dependency in the newly formed buffer entries . that is , such that \end{array}}\in \underbrace{{\mathcal{w}}_{h - j+2}(l , j-1)}_{\subseteq { \mathcal{w}}_{h - j+1}(l , j-1)}\nonumber\\ & \leftrightarrow ( \sigma_k a_kb_k ) m_{h - j+1}(l ) \in { \mathcal{w}}_{h - j+1}(l , j-1)\nonumber\\ & \leftrightarrow(\sigma_k a_kb_k=0)\vee\big(m_{h - j+1}(l ) \in { \mathcal{w}}_{h - j+1}(l , j-1)\big)\nonumber\end{aligned}\ ] ] however , from lemma [ rlcinnolem ] and cor . [ linnolemcor ] , <o(\frac{1}{q}) ] . therefore , w.h.p . there is no linear dependency introduced after update and the occupancy is unaltered in this case .* + just like before , the aim here is to show that there will be no change in occupancy . since in this case, has no innovative packets , the message it generates will be a linear combination of packets in , .therefore , we can write , where and . let be used to update the buffer of and let , then , \end{array}{\in w_{h - j+2}(l , j)}\nonumber\\ & \leftrightarrow { \sigma_{k , l } \big[a_k + e_ka_l b_l\big]p_{h - j+1,k}(l , j-1 ) \in w_{h - j+2}(l , j)}\nonumber\end{aligned}\ ] ] note that the above is true if only if for , since . therefore, a linear dependency of stored packets arises if and only if there is a non - trivial solution for , where ] .however , this occurs if and only if .finally , note that this determinant is zero if and only if .however , this event occurs with probability , since the vector is chosen uniformly at random from .therefore , w.h.p . there is no linear dependency induced in the contents of and the occupancy of remains .* although the dynamics of the system are driven by the spaces , the transitions and their probabilities depend only on , , , and not the spaces as such. 
therefore , the system can be equivalently modeled using just these occupancy vectors as states .* the transition probabilities for the chain given by occupancies , approach that of the emc as the field size is made large .finally , since the steady - state probability is a continuous function of the probability transition matrix , the steady - state probabilities of the chain for networks without feedback approaches that of the emc , thereby guaranteeing that the throughput achieved by the random coding scheme over a line network without feedback is asymptotically the same as that of a line network with identical parameters and perfect feedback .a. f. dana , r. gowaikar , and b. hassibi , `` on the capacity region of broadcast over wireless erasure networks , '' _ in proc .annual allerton conference on communication , control and computing _ , october 2004 .d. s. lun , n. ratnakar , r. koetter , m. mdard , e. ahmed , and h. lee , `` achieving minimum - cost multicast : a decentralized approach based on network coding , '' _ in proc .24th annual joint conference of the ieee computer and communications societies , infocom _ , vol . 3 , mar .2005 .b. n. vellambi , n. rahnavard , and f. fekri , `` the effect of finite memory on throughput of wireline packet networks , '' _ in proc .information theory workshop ( itw 2007 ) , lake tahoe , ca _, september 2007 .d. s. lun , p. pakzad , c. fragouli , m. mdard , and r. koetter , `` an analysis of finite - memory random linear coding on packet streams , '' _2nd workshop on network coding , theory , and applications ( netcod 2006 ) , boston , ma , april 3 - 7 , 2006_. | this work investigates the effect of finite buffer sizes on the throughput capacity and packet delay of line networks with packet erasure links that have perfect feedback . these performance measures are shown to be linked to the stationary distribution of an underlying irreducible markov chain that models the system exactly . using simple strategies , bounds on the throughput capacity are derived . the work then presents two iterative schemes to approximate the steady - state distribution of node occupancies by decoupling the chain to smaller queueing blocks . these approximate solutions are used to understand the effect of buffer sizes on throughput capacity and the distribution of packet delay . using the exact modeling for line networks , it is shown that the throughput capacity is unaltered in the absence of hop - by - hop feedback provided packet - level network coding is allowed . finally , using simulations , it is confirmed that the proposed framework yields accurate estimates of the throughput capacity and delay distribution and captures the vital trends and tradeoffs in these networks . finite buffer , line network , markov chain , network coding , packet delay , throughput capacity . |
when the characteristic size of the semiconductor device reaches the wavelength of an electron , the quantum effects become important even dominant and can not be neglected .the accurate electromagnetic theory for the case is quantum electrodynamics ( qed ) , i.e. the second quantization for the matter and quantization for the electromagnetic field .however , so far it is extremely difficult even impossible to employ qed to analyze the interaction between the matter and the electromagnetic field for some complex systems . the semiclassical ( or semi - quantum )electromagnetic models are widely used in the semiconductor quantum devices .the basic idea is that we use the maxwell s equations for the electromagnetic field while we use the schrdinger equation of the non - relativistic quantum mechanics for the matter ( see ) .the maxwell - schrdinger coupled system ( m - s ) is written as follows : ^{2 } + q \phi(\mathbf{x},t)+v_{0 } \right\rbrace\psi(\mathbf{x},t),}\\[2 mm ] { \displaystyle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(\mathbf{x},t)\in\omega\times(0,t),}\\[2 mm ] { \displaystyle -\frac{\partial}{\partial t}\nabla\cdot\big(\epsilon\mathbf{a}(\mathbf{x},t)\big ) -\nabla\cdot\big(\epsilon\nabla\phi(\mathbf{x},t)\big ) = q |\psi(\mathbf{x},t)|^{2 } , \,\, ( \mathbf{x},t)\in\omega\times(0,t ) , } \\[2 mm ] { \displaystyle \epsilon\frac{\partial ^{2}\mathbf{a}(\mathbf{x},t)}{\partial t^{2}}+\nabla\times \big({\mu}^{-1}\nabla\times \mathbf{a}(\mathbf{x},t)\big ) + \epsilon \frac{\partial ( \nabla \phi(\mathbf{x},t))}{\partial t } = \mathbf{j}_{q}(\mathbf{x},t),}\\[2 mm ] { \displaystyle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad ( \mathbf{x},t)\in \omega\times(0,t),}\\[2 mm ] { \displaystyle \mathbf{j}_q=-\frac{\mathrm{i}q\hbar}{2m}\big(\psi^{\ast}\nabla{\psi}-\psi\nabla{\psi}^{\ast}\big)-\frac{\vert q\vert^{2}}{m}\vert\psi\vert^{2}\mathbf{a } , } \\[2 mm ] { \displaystyle \psi , \phi , \mathbf{a } \,\ , \mathrm{subject \ to \ the \ appropriate \ initial \ and\ boundary \ conditions } , } \end{array } \right.\ ] ] where , is a bounded lipschitz polyhedral convex domain , denotes the complex conjugate of , and respectively denote the electric permittivity and the magnetic permeability of the material and is the constant potential energy .it is well - known that the solutions of the maxwell - schrdinger equations ( [ eq:1 - 1 ] ) are not unique .in fact , for any function , if is a solution of ( [ eq:1 - 1 ] ) , then is also a solution of ( [ eq:1 - 1 ] ) .it is often assumed that the further equations can be adjoined to the maxwell - schrdinger equations by means of a gauge transformation . in this paperwe consider the m - s system ( [ eq:1 - 1 ] ) under the temporal gauge ( also called weyl gauge ) , i.e. . in this paper we employ the atomic units , i.e. . for simplicity , we also assume that without loss of generality .hence , and satisfy the following maxwell - schrdinger equations : { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) + \frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big)+\vert\psi\vert^{2}\mathbf{a}=0 , \,\ , ( \mathbf{x},t)\in \omega\times(0,t),}\\[2 mm ] \end{array } \right.\ ] ] here we omit the initial and boundary conditions for and temporarily . 
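As a small illustration of the coupling term, the quantum current density defined above can be evaluated numerically from the wave function and the vector potential. The one-dimensional finite-difference sketch below is ours and is not part of the paper's discretization: it uses atomic-unit defaults, treats the charge and mass as parameters, approximates the gradient with central differences, and the Gaussian test data are purely illustrative.

```python
import numpy as np

def current_density_1d(psi, A, dx, q=-1.0, m=1.0, hbar=1.0):
    # J_q = -(i q hbar / 2m)(psi* grad psi - psi grad psi*) - (|q|^2 / m)|psi|^2 A,
    # evaluated on a uniform 1-D grid.  The paramagnetic term is real analytically;
    # taking .real only discards the numerical residue of the finite differences.
    dpsi = np.gradient(psi, dx)
    paramagnetic = -(1j * q * hbar / (2.0 * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))
    diamagnetic = -(abs(q) ** 2 / m) * np.abs(psi) ** 2 * A
    return paramagnetic.real + diamagnetic

# Illustrative data: a normalized Gaussian wave packet with a small constant potential A.
x = np.linspace(-10.0, 10.0, 2001)
psi = np.exp(-x ** 2 / 2.0) * np.exp(1j * 0.5 * x)
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))
J = current_density_1d(psi, A=0.1 * np.ones_like(x), dx=x[1] - x[0])
```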
under the temporal gauge ,the second equation in ( [ eq:1 - 1 ] ) involving the divergence of can be rewritten as which can be derived from ( [ eq:1 - 2 ] ) if the solutions of ( [ eq:1 - 2 ] ) are sufficiently smooth and the initial datas are consistent .integrating with respect to on the both sides of ( [ eq:1 - 2 - 0 ] ) , we have where . for the purpose of theoretical analysis, we take the gradient of ( [ eq:1 - 3 ] ) , multiply it by a parameter and add it to the second equation of ( [ eq:1 - 2 ] ) , to obtain { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) -\gamma \nabla(\nabla \cdot \mathbf{a } ) + \frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big)+\vert\psi\vert^{2}\mathbf{a } } \\[2 mm ] { \displaystyle \quad\,+\ , \gamma \nabla(\nabla \cdot \mathbf{a}(\mathbf{x},0 ) ) - \gamma \nabla \int_{0}^{t}\rho(\mathbf{x } , \tau ) d\tau=0 , \,\ , ( \mathbf{x},t)\in \omega\times(0,t).}\\[2 mm ] \end{array } \right.\ ] ] the parameter is referred to as the penalty factor .the choice of depends on how much emphasis one places on the equality ( [ eq:1 - 2 - 0 ] ) . in this paper, we keep fixed . to avoid the difficulty for integro - differential equations , assuming that the change of the density function is smooth with respect to for all , we give an approximation of as follows . first denoting by ,we divide the time interval ] . for ] , we can solve the coupled differential equations ( [ eq:1 - 4 ] ) in the subinterval ] , can be calculated as follows : { \displaystyle\quad\quad \quad \approx \int_{0}^{t_1}\rho(\mathbf{x } , \tau ) d\tau + ( t - t_1)\rho(\mathbf{x } , t_1)+\frac{1}{2}(t - t_1)^{2 } \frac{\partial \rho}{\partial t}(\mathbf{x } , t_1 ) , \quad \forall x \in \omega}. \end{array}\ ] ] now we solve the maxwell - schrdinger equations ( [ eq:1 - 4 ] ) in the subinterval ] successively .therefore , we decompose the original system in ] , respectively . for , j = 1 , 2,\cdots , m ] and ^{d} ] and a banach space .the weak formulation of the maxwell - schrdinger system ( [ eq:1 - 6])- ( [ eq:1 - 8 ] ) can be specified as follows : given , find such that , { \displaystyle ( \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}},\mathbf{v})+(\nabla\times \mathbf{a},\nabla\times\mathbf{v } ) + \gamma(\nabla\cdot \mathbf{a},\nabla\cdot\mathbf{v})+(\frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big),\mathbf{v } ) } \\[2 mm ] { \displaystyle\quad \quad \quad + \ , ( |\psi|^{2}\mathbf{a},\mathbf{v})= ( \mathbf{g},\mathbf{v } ) , \qquad \qquad \qquad\qquad\qquad\forall\mathbf{v}\in\mathbf{h}^{1}_{t}(\omega ) , } \end{array } \right.\ ] ] with the initial conditions , and .let m be a positive integer and let be the time step .for any k=1,2, , we introduce the following notation : { \displaystyle \overline{u}^{k } = ( u^{k}+u^{k-1})/2 , \quad \widetilde{u}^{k}=(u^{k}+u^{k-2})/2,}\\[2 mm ] \end{array}\ ] ] for any given sequence and denote for any given functions with a banach space .let be a regular partition of into triangles in or tetrahedrons in without loss of generality , where the mesh size .for any , we denote by the spaces of polynomials of degree defined on .we now define the standard lagrange finite element space we have the following finite element subspaces of , and we shall approximate the wave function and the vector potential by the functions in and , respectively .let and be the conventional pointwise interpolation operators on and , respectively . 
for , , ,standard finite element theory gives that : { \displaystyle \vert \mathbf{v } - { \bm \pi}_{h } \mathbf{v } \vert_{\mathbf{w}^{s}_{p } } \leq ch^{m - s}\vert \mathbf{v } \vert_{\mathbf{w}^{m}_{p } } \quad \forall \ ; \mathbf{v } \in \mathbf{w}^{m}_{p}(\omega ) . }\end{array}\ ] ] for convenience , assume that the function is defined in the interval ] .[ lem3 - 2 ] for the solution of ( [ eq:2 - 9 ] ) , for , we have where c is a constant independent of k , and . choosing in and taking the imaginary part , we can complete the proof of .let us turn to the proof of .it is obvious that = \frac{1}{2}\partial b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k},\psi_{h}^{k } ) } \\[2 mm ] { \displaystyle \quad\quad+ \frac{1}{2\delta t}\left[b(\overline{\mathbf{a}}_{h}^{k-1};{\psi}_{h}^{k-1},\psi_{h}^{k-1 } ) -b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k-1},\psi_{h}^{k-1})\right]}\\[2 mm ] { \displaystyle \quad\quad+ \frac{1}{2 \delta t}\mathrm{re}\left[b(\overline{\mathbf{a}}_{h}^{k};\psi_{h}^{k-1},\psi_{h}^{k } ) -b(\overline{\mathbf{a}}_{h}^{k};\psi_{h}^{k},\psi_{h}^{k-1})\right ] . }\end{array}\ ] ] by a direct computation , we get { \displaystyle b(\mathbf{a};\psi,\varphi)-b(\hat{\mathbf{a}};\psi,\varphi)=\left((\mathbf{a } + \hat{\mathbf{a}})\psi \varphi^{*},\mathbf{a}-\hat{\mathbf{a}}\right ) + 2(f(\psi,\varphi),\mathbf{a}-\hat{\mathbf{a } } ) , } \end{array}\ ] ] and consequently = 0.}\ ] ] we thus have = -\left(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1}}{2},\frac{\overline{\mathbf{a}}_{h}^{k } -\overline{\mathbf{a}}_{h}^{k-1}}{\delta t}\right ) } \\[2 mm ] { \displaystyle \quad+\frac{1}{2}\partial b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k},\psi_{h}^{k})-\left(f(\psi_{h}^{k-1},\psi_{h}^{k-1}),\frac{\overline{\mathbf{a}}_{h}^{k } -\overline{\mathbf{a}}_{h}^{k-1}}{\delta t}\right ) . }\end{array}\ ] ] it is not difficult to check that =\frac{v_{0}}{2}\partial(\psi_{h}^{k},\psi_{h}^{k}).}\ ] ] we choose in and take the real part . combining ( [ eq:3 - 11 ] ) and ( [ eq:3 - 12 ] ) gives { \displaystyle \quad -\left(f(\psi_{h}^{k-1},\psi_{h}^{k-1}),\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\delta t}\right)=0 . } \end{array}\ ] ] taking in , and combining with ( [ eq:3 - 13 ] ) , we get { \displaystyle \quad+\partial\left(\frac{1}{4}d(\mathbf{a}_{h}^{k},\mathbf{a}_{h}^{k } ) + \frac{1}{4}d(\mathbf{a}_{h}^{k-1},\mathbf{a}_{h}^{k-1})\right)=(\mathbf{g}^{k-1},\frac{1}{2}(\partial \mathbf{a}_{h}^{k}+\partial \mathbf{a}_{h}^{k-1 } ) ) . }\end{array}\ ] ] it follows that { \displaystyle \qquad \leq c\left(\vert \mathbf{g}^{k-1 } \vert_{\mathbf{l}^{2}}^{2 } + \vert \partial \mathbf{a}_{h}^{k}\vert_{\mathbf{l}^{2}}^{2 } + \vert \partial \mathbf{a}_{h}^{k-1}\vert_{\mathbf{l}^{2}}^{2}\right ) . 
} \end{array}\ ] ] multiply ( [ eq:3 - 13 - 0 ] ) by , sum , to discover now follows from the discrete gronwall s inequality and thus we complete the proof of lemma [ lem3 - 2 ] .[ rem3 - 2 ] lemma [ lem3 - 2 ] shows that the numerical scheme presented in this paper for the modified maxwell - schrdinger equations ( [ eq:2 - 8 ] ) is stable in some senses .[ thm3 - 1 ] the solution of the full discrete system ( [ eq:2 - 9 ] ) fulfills the following estimates { \displaystyle\quad + \|\nabla \times\mathbf{a}_{h}^{k}\|_{\mathbf{l}^2 } + \gamma\|\nabla \cdot\mathbf{a}_{h}^{k}\|_{{l}^2}\leq c , } \end{array}\ ] ] and where is a constant independent of , .( [ eq:3 - 14 ] ) is the direct result of lemma [ lem3 - 2 ] .next we give the proof of ( [ eq:3 - 15 ] ) . since the semi - norm in is equivalent to -norm , from ( [ eq:3 - 14 ] )we get then sobolev s imbedding theorem implies that with for and for . using young s inequality and the interpolation inequalities ( [ eq:3 - 3 ] ) ,we further prove { \displaystyle \quad\leq c\|\psi^{k}_{h}\|_{\mathcal{l}^2}^{\frac{1}{2}}\|\psi^{k}_{h}\|_{\mathcal{l}^6}^{\frac{1}{2 } } \leq c\|\psi^{k}_{h}\|_{\mathcal{l}^2}^{\frac{1}{2}}\|\nabla\psi^{k}_{h}\|_{\mathbf{l}^2}^{\frac{1}{2 } } \leq c+\frac{1}{2}\|\nabla\psi^{k}_{h}\|_{\mathbf{l}^2}}. \end{array}\ ] ] hence we have consequently , we obtain combining ( [ eq:3 - 16 ] ) , ( [ eq:3 - 17 ] ) and ( [ eq:3 - 18 ] ) , we complete the proof of ( [ eq:3 - 15 ] ) . this section , we will give the proof of theorem [ thm2 - 1 ] .let denote the interpolation functions of in .set , . by applying the interpolation error estimates ( [ eq:2 - 3 ] ) andthe regularity assumptions ( [ eq:2 - 10 ] ) , we have { \displaystyle \|i_{h}\psi\|_{\mathcal{l}^{\infty}}+\|{\bm \pi}_h\mathbf{a}\|_{\mathbf{h}^{1 } } + \|\nabla i_{h}\psi \|_{\mathbf{l}^{3}}\leq c , } \end{array}\ ] ] where is a constant independent of . for convenience , we give the following identities , which will be used frequently in the sequel . { \displaystyle \sum_{k=1}^{m}{(a_{k}-a_{k-1})b_{k}}=a_{m}b_{m}-a_{0}b_{0}-\sum_{k=1}^{m}{a_{k-1}(b_{k}-b_{k-1})}. } \end{array}\ ] ]let , . by using the error estimates of the interpolation operators , we only need to estimate and . subtracting ( [ eq:2 - 8 ] ) from ( [ eq:2 - 9 ] ) ,we get the following equations for and : { \displaystyle \quad+2v_0\left(\psi^{k-\frac{1}{2}}-\overline{\psi}_{h}^{k},\varphi\right)+ b(\mathbf{a}^{k-\frac{1}{2}};(\psi^{k-\frac{1}{2}}-i_{h}\overline{\psi}^{k}),\varphi)}\\[2 mm ] { \displaystyle \quad+\left(b(\mathbf{a}^{k-\frac{1}{2}};i_{h}\overline{\psi}^{k},\varphi)-b(\overline{\mathbf{a}}^{k}_{h } ; i_{h}\overline{\psi}^{k},\varphi)\right),\quad \forall \varphi\in\mathcal{v}_{h}^{r } , } \end{array}\ ] ] and { \displaystyle \quad + d(\mathbf{a}^{k-1}-\widetilde{{\bm \pi}_{h}\mathbf{a}^{k}},\mathbf{v } ) + \left(|\psi^{k-1}|^{2}\mathbf{a}^{k-1}-|\psi^{k-1}_{h}|^{2}\frac{\overline{\mathbf{a}}^{k}_{h}+\overline{\mathbf{a}}^{k-1}_{h}}{2 } , \;\mathbf{v}\right)}\\[2 mm ] { \displaystyle \quad+ \left(f(\psi^{k-1},\psi^{k-1})-f(\psi^{k-1}_{h},\psi^{k-1}_{h}),\;\mathbf{v}\right),\quad \forall\mathbf{v}\in\mathbf{v}^{r}_{h } , } \end{array}\ ] ] where , , and are similarly given in ( [ eq:2 - 2 ] ) .now we briefly describe the outline of the proof of ( [ eq:2 - 12 ] ) .first , we take in ( [ eq:4 - 3 ] ) and obtain the estimate of .second , we choose in ( [ eq:4 - 3 ] ) and derive the energy - norm estimate for .finally , let in ( [ eq:4 - 4 ] ) and acquire the estimate involving . 
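Because many symbols are lost in this copy, it may help to restate the strategy just outlined in schematic form; this is our summary, and the sign convention for the theta quantities as well as the precise norms appearing in (2-12) are assumptions.

% error splitting into an interpolation part and a discrete part (schematic)
\psi^{k}-\psi^{k}_{h}
  =\underbrace{\bigl(\psi^{k}-I_{h}\psi^{k}\bigr)}_{\text{interpolation error}}
   +\underbrace{\bigl(I_{h}\psi^{k}-\psi^{k}_{h}\bigr)}_{\displaystyle \theta_{\psi}^{k}},
\qquad
\mathbf{A}^{k}-\mathbf{A}^{k}_{h}
  =\bigl(\mathbf{A}^{k}-{\bm \pi}_{h}\mathbf{A}^{k}\bigr)+\theta_{\mathbf{A}}^{k}.

The interpolation parts are controlled by the estimates in (4-1), and the three test-function choices listed above feed a discrete Gronwall argument that bounds the squared discrete parts by a constant times h^{2r}+(\delta t)^{4}; as we read it, taking square roots then yields the O(h^{r}+(\delta t)^{2}) energy-norm estimate of theorem [thm2-1].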
combining the above three estimates, we will complete the proof of ( [ eq:2 - 12 ] ) . to begin with ,choosing , as the test function in ( [ eq:4 - 3 ] ) , we get where { \displaystyle i_3^{(k)}=b(\mathbf{a}^{k-\frac{1}{2}};(i_{h}\overline{\psi}^{k}-\psi^{k-\frac{1}{2}}),\overline{\theta}_{\psi}^{k } ) , \quad i_4^{(k)}=b(\overline{\mathbf{a}}^{k}_{h};i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k } ) -b(\mathbf{a}^{k-\frac{1}{2}};i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k } ) . }\end{array}\ ] ] using the error estimates ( [ eq:4 - 1 ] ) for the interpolation operator and the regularity of in ( [ eq:2 - 10 ] ) , it is easy to see that we observe that { \displaystyle \quad \leq \|\nabla\psi\|_{\mathbf{l}^2}\|\nabla\varphi\|_{\mathbf{l}^2}+\|\mathbf{a}\|^{2}_{\mathbf{l}^6}\|\psi\|_{\mathcal{l}^6}\|\varphi\|_{\mathcal{l}^2 } + \|\mathbf{a}\|_{\mathbf{l}^6}\big(\|\psi\|_{\mathcal{l}^3}\|\nabla\varphi\|_{\mathbf{l}^2}}\\[2 mm ] { \displaystyle \quad+\|\nabla\psi\|_{\mathbf{l}^2}\|\varphi\|_{\mathcal{l}^3}\big)\leq c\|\nabla\psi\|_{\mathbf{l}^2 } \|\nabla\varphi\|_{\mathbf{l}^2},\quad \forall \mathbf{a}\in \mathbf{l}^6(\omega ) , \,\,\,\psi,\varphi\in\mathcal{h}_{0}^{1}(\omega ) , } \end{array}\ ] ] and it follows from ( [ eq:4 - 1 ] ) , ( [ eq:4 - 7 ] ) and ( [ eq:4 - 8 ] ) that notice that +\big[b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})}\\[2 mm ] { \displaystyle \quad -b(\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})\big]+\big[b(\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k } ) -b(\mathbf{a}^{k-\frac{1}{2 } } ; i_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})\big]}\\[2 mm ] \end{array}\ ] ] by using ( [ eq:3 - 1])-([eq:3 - 3 ] ) and ( [ eq:3 - 10 ] ) , we prove { \displaystyle\qquad\quad+c\big\{h^{2r}+(\delta t)^{4}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2}\big\}. } \end{array}\ ] ] taking the imaginary part of ( [ eq:4 - 5 ] ) , we have { \displaystyle\qquad\quad\leq |i_1^{(k)}|+|i_2^{(k)}|+|i_3^{(k)}|+|i_4^{(k)}| \leq c \left(d ( { \theta}^{k}_{\mathbf{a}},{\theta}^{k}_{\mathbf{a } } ) + d({\theta}^{k-1}_{\mathbf{a}},{\theta}^{k-1}_{\mathbf{a}})\right)}\\[2 mm ] { \displaystyle\qquad\quad+c\big\{h^{2r}+(\delta t)^{4 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2}\big\ } , } \end{array}\ ] ] and therefore { \displaystyle \quad \leq c\big(h^{2r}+(\delta t)^{4}\big ) + c\delta t\sum_{k=1}^{m}{d({\theta}^{k}_{\mathbf{a } } , { \theta}^{k}_{\mathbf{a}})}+c\delta t\sum_{k=1}^{m } { \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}}. } \end{array}\ ] ] here we have used the fact . to proceed further ,we take , in ( [ eq:4 - 3 ] ) , to find where { \displaystyle j_3^{(k)}=b(\mathbf{a}^{k-\frac{1}{2}};(\psi^{k-\frac{1}{2}}-i_{h}\overline{\psi}^{k}),\theta_{\psi}^{k}-\theta_{\psi}^{k-1}),}\\[2 mm ] { \displaystyle j_4^{(k)}=b(\mathbf{a}^{k-\frac{1}{2 } } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})-b(\overline{\mathbf{a}}^{k}_{h } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1 } ) . 
}\end{array}\ ] ] by virtue of ( [ eq:4 - 2 ] ) , we get { \displaystyle \quad= 2\mathrm{i}\left(\partial i_{h}\psi^{m}-(\psi_{t})^{m-\frac{1}{2}},\theta_{\psi}^{m}\right)-2\mathrm{i}\left(\partial i_{h}\psi^{1 } -(\psi_{t})^{\frac{1}{2}},\theta_{\psi}^{0}\right)}\\[2 mm ] { \displaystyle \quad-2\mathrm{i}\sum_{k=1}^{m-1}\left(\partial i_{h}\psi^{k+1 } -\partial i_{h}\psi^{k}-(\psi_{t})^{k+\frac{1}{2 } } + ( \psi_{t})^{k-\frac{1}{2}},\,\theta_{\psi}^{k}\right ) . } \end{array}\ ] ] it follows from ( [ eq:4 - 1 ] ) and ( [ eq:4 - 17 ] ) that to estimate the term , we rewrite it as { \displaystyle \quad-2v_0\left(\frac{1}{2}(\theta_{\psi}^{k}+\theta_{\psi}^{k-1}),\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right ) \stackrel{\mathrm{def}}{=}j_2^{(k),1}+j_2^{(k),2}. } \end{array}\ ] ] by applying a standard argument , we find that { \displaystyle \quad \leq c\big(h^{2r+2}+(\delta t)^{4}\big ) + c \|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+ c\delta t \sum_{k=1}^{m-1}{\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2}}. } \end{array}\ ] ] we recall ( [ eq:3 - 10 ] ) and rewrite as follows : { \displaystyle\quad \qquad + \left(|\mathbf{a}^{k-\frac{1}{2}}|^2(\psi^{k-\frac{1}{2}}-i_{h}\overline{\psi}^{k}),\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle \quad \qquad+ \mathrm{i}\left(\nabla(\psi^{k-\frac{1}{2}}-i_{h}\overline{\psi}^{k})\mathbf{a}^{k-\frac{1}{2}},\;\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle \quad \qquad-\mathrm{i}\left((\psi^{k-\frac{1}{2}}-i_{h}\overline{\psi}^{k})\mathbf{a}^{k-\frac{1}{2}},\;\nabla \theta_{\psi}^{k } -\nabla \theta_{\psi}^{k-1}\right).}\\[2 mm ] \end{array}\ ] ] by employing ( [ eq:4 - 1 ] ) , ( [ eq:4 - 2 ] ) , the regularity assumption ( [ eq:2 - 10 ] ) and young s inequality , we can prove the following estimate of . the proof is standard but tedious . due to space limitations, we omit it here . in order to estimate ,we rewrite in the following form : }\\[2 mm ] { \displaystyle \quad\quad + \big[b(\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})-b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})\big]}\\[2 mm ] { \displaystyle \quad\quad + \big[b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})-b(\overline{\mathbf{a}}^{k}_{h } ; i_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})\big]}\\[2 mm ] { \displaystyle\quad\quad \stackrel{\mathrm{def}}{=}j_4^{(k),1}+j_4^{(k),2}+j_4^{(k),3}. } \end{array}\ ] ] by applying ( [ eq:3 - 15 ] ) and ( [ eq:4 - 2 ] ) , we deduce { \displaystyle \quad+ \frac{1}{16 } \|\nabla\theta_{\psi}^{m}\|_{\mathbf{l}^2}^{2}+c\delta t \sum_{k=1}^{m-1}{(\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2})}. } \end{array}\ ] ] in order to estimate , we rewrite it as follows . { \displaystyle\quad\quad\quad\quad-\sum_{k=1}^{m}{\mathrm{i}\left ( i_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k}-\overline{\mathbf{a}}^{k}_{h}),\ ; \nabla\theta_{\psi}^{k}-\nabla\theta_{\psi}^{k-1}\right)}}\\[2 mm ] { \displaystyle \quad\quad\quad\quad+\sum_{k=1}^{m}{\mathrm{i}\left(\nabla i_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k } -\overline{\mathbf{a}}^{k}_{h}),\;\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}}\\[2 mm ] { \displaystyle\quad\quad\quad \stackrel{\mathrm{def}}{=}q_1+q_2+q_3 . 
}\end{array}\ ] ] note that { \displaystyle \quad = -\left(i_{h}\overline{\psi}^{m}({\bm\pi}_{h}\overline{\mathbf{a}}^{m } + \overline{\mathbf{a}}^{m}_{h } ) \overline{\theta}_{\mathbf{a}}^{m},\;\theta_{\psi}^{m}\right ) + \left(i_{h}\overline{\psi}^{0}({\bm\pi}_{h}\overline{\mathbf{a}}^{0}+\overline{\mathbf{a}}^{0}_{h } ) \overline{\theta}_{\mathbf{a}}^{0},\;\theta_{\psi}^{0}\right)}\\[2 mm ] { \displaystyle \quad + \sum_{k=1}^{m}\left(i_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k } + \overline{\mathbf{a}}^{k}_{h } ) \overline{\theta}_{\mathbf{a}}^{k}-i_{h}\overline{\psi}^{k-1}({\bm\pi}_{h}\overline{\mathbf{a}}^{k-1 } + \overline{\mathbf{a}}^{k-1}_{h } ) \overline{\theta}_{\mathbf{a}}^{k-1},\;\theta_{\psi}^{k-1}\right ) . } \end{array}\ ] ] by applying the young s inequality and ( [ eq:3 - 15 ] ) , we can estimate the first two terms on the right side of ( [ eq:4 - 36 ] ) as follows . { \displaystyle \quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m})+ c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+ ch^{2r}. } \end{array}\ ] ] from ( [ eq:3 - 14 ] ) and ( [ eq:3 - 15 ] ) , we further deduce { \displaystyle \quad\leq \delta t \|i_{h}\overline{\psi}^{k}\|_{\mathcal{l}^6}\| { \bm\pi}_{h}\overline{\mathbf{a}}^{k}+\overline{\mathbf{a}}^{k}_{h}\|_{\mathbf{l}^6 } \|\frac{1}{\delta t } ( \overline{\theta}_{\mathbf{a}}^{k}-\overline{\theta}_{\mathbf{a}}^{k-1})\|_{\mathbf{l}^2}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad \quad+ \delta t \|\frac{i_{h}\overline{\psi}^{k}-i_{h}\overline{\psi}^{k-1}}{\delta t}\|_{\mathcal{l}^2}\| { \bm\pi}_{h}\overline{\mathbf{a}}^{k}+\overline{\mathbf{a}}^{k}_{h}\|_{\mathbf{l}^6 } \|\overline{\theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^6}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad\quad + \delta t \|i_{h}\overline{\psi}^{k-1}\|_{\mathcal{l}^6}\| \overline{\theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^6}\|\frac{{\bm\pi}_{h}\overline{\mathbf{a}}^{k } -{\bm\pi}_{h}\overline{\mathbf{a}}^{k-1}}{\delta t}+\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\delta t}\|_{\mathbf{l}^2}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad\leq c\delta t\big\{\|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + \|\partial { \theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^2}^{2 } + d({\theta}_{\mathbf{a}}^{k-1},{\theta}_{\mathbf{a}}^{k-1 } ) + + d({\theta}_{\mathbf{a}}^{k-2},{\theta}_{\mathbf{a}}^{k-2 } ) + \|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2}\big\},}\\[2 mm ] \end{array}\ ] ] where we have used the fact : hence we get the following estimate : { \displaystyle \quad\quad\quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m } ) + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+ c h^{2r } } \\[2 mm ] { \displaystyle \quad\quad\quad + c\delta t \sum_{k=1}^{m}\left(\|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k})+ \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . 
} \end{array}\ ] ] employing ( [ eq:4 - 1 ] ) and integrating by parts , we discover { \displaystyle \quad\quad + \sum_{k=1}^{m}\left(i_{h}\overline{\psi}^{k}\overline{\theta}_{\mathbf{a}}^{k}- i_{h}\overline{\psi}^{k-1}\overline{\theta}_{\mathbf{a}}^{k-1},\;\nabla\theta_{\psi}^{k-1}\right ) , } \end{array}\ ] ] by using the young s inequality , we can estimate the first three terms on the right side of ( [ eq:4 - 41 ] ) as follows : { \displaystyle \quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m})+ c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+c h^{2r}. } \end{array}\ ] ] using ( [ eq:4 - 1 ] ) , the last term on the right side of ( [ eq:4 - 41 ] ) can be estimated by { \displaystyle \quad\leq c\delta t \sum_{k=1}^{m}\left(\|\overline{\theta}_{\mathbf{a}}^{k}\|_{\mathbf{h}^{1 } } \|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}+\|\partial \overline{\theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}\right)}\\[2 mm ] { \displaystyle \quad\leq c\delta t \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] hence we get { \displaystyle \quad\quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m } ) + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+c h^{2r}}\\[2 mm ] { \displaystyle \quad \quad\quad+ c\delta t \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k})+\|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] reasoning as before , we can estimate as follows : { \displaystyle \quad\quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m } ) + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+c h^{2r}}\\[2 mm ] { \displaystyle \quad\quad\quad + c\delta t \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k})+\|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . } \end{array}\ ] ] combining ( [ eq:4 - 40 ] ) , ( [ eq:4 - 44 ] ) and ( [ eq:4 - 45 ] ) implies { \displaystyle \quad\quad\quad + c\delta t \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] it follows from ( [ eq:4 - 30 ] ) , ( [ eq:4 - 34 ] ) and ( [ eq:4 - 46 ] ) that { \displaystyle \qquad+c\delta t \sum_{k=0}^{m}\big\{d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\big\ } } \\[2 mm ] \end{array}\ ] ] now take the real part of ( [ eq:4 - 16 ] ) , and we get = \mathrm{re } \big(j_1^{(k)}\big)+\mathrm{re}\big(j_2^{(k)}\big)+\mathrm{re } \big(j_3^{(k)}\big)+\mathrm{re } \big(j_4^{(k)}\big ) . 
} \end{array}\ ] ] similarly to ( [ eq:3 - 11 ] ) , we have = -\left(\frac{1}{2}(\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1})|\theta_{\psi}^{k-1}|^{2},\frac{1}{2}(\partial \mathbf{a}_{h}^{k}+\partial \mathbf{a}_{h}^{k-1})\right)}\\[2 mm ] { \displaystyle\quad + \frac{1}{2}\partial b(\overline{\mathbf{a}}_{h}^{k};\theta_{\psi}^{k},\theta_{\psi}^{k})-\left(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\frac{1}{2}(\partial \mathbf{a}_{h}^{k}+\partial \mathbf{a}_{h}^{k-1})\right ) . } \end{array}\ ] ] substituting ( [ eq:4 - 49 ] ) into ( [ eq:4 - 48 ] ) and summing over , we get }\\[2 mm ] { \displaystyle\quad+ \frac{1}{2}b(\overline{\mathbf{a}}_{h}^{0};\theta_{\psi}^{0},\theta_{\psi}^{0})+\delta t\sum_{k=1}^{m } \big(\frac{1}{2}(\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1})|\theta_{\psi}^{k-1}|^{2},\frac{1}{2}(\partial \mathbf{a}_{h}^{k}+\partial \mathbf{a}_{h}^{k-1})\big)}\\[2 mm ] { \displaystyle \quad+\delta t\sum_{k=1}^{m}\big(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\frac{1}{2}(\partial \mathbf{a}_{h}^{k}+\partial \mathbf{a}_{h}^{k-1})\big ) . }\end{array}\ ] ] combining ( [ eq:4 - 19 ] ) , ( [ eq:4 - 22 ] ) , ( [ eq:4 - 23 - 0 ] ) and ( [ eq:4 - 47 ] ) implies }\\[2 mm ] { \displaystyle\quad \leq |\sum_{k=1}^{m}j_1^{(k)}|+|\sum_{k=1}^{m}j_2^{(k)}|+|\sum_{k=1}^{m}j_3^{(k)}|+|\sum_{k=1}^{m}j_4^{(k)}|}\\[2 mm ] { \displaystyle \leq c\left(h^{2r}+(\delta t)^{4}\right)+\frac{3}{16}d(\overline{\theta}_{\mathbf{a}}^{m } , \overline{\theta}_{\mathbf{a}}^{m } ) + \frac{1}{8 } \|\nabla\theta_{\psi}^{m}\|_{\mathbf{l}^2}^{2 } + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}}\\[2 mm ] { \displaystyle \qquad+c\delta t \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . } \end{array}\ ] ] by employing theorem [ thm3 - 1 ] , we discover setting and applying ( [ eq:4 - 53 ] ) and ( [ eq:4 - 53 - 0 ] ) , we obtain { \displaystyle \qquad + \delta t \sum_{k=1}^{m } j_5^{(k ) } + c\delta t \sum_{k=0}^{m}\left\{d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right\}. } \end{array}\ ] ] arguing as in the proof of theorem [ thm3 - 1 ] , we discover and thus substituting ( [ eq:4 - 57 ] ) into ( [ eq:4 - 55 ] ) , we obtain { \displaystyle \quad\quad\quad+c\delta t \sum_{k=0}^{m}\left\{d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right\}. } \end{array}\ ] ] multiplying ( [ eq:4 - 15 ] ) with and adding to ( [ eq:4 - 59 ] ) , we end up with { \displaystyle \quad\quad\quad+c\delta t \sum_{k=0}^{m}\left\{d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right\}. 
} \end{array}\ ] ] setting { \displaystyle k_2^{(k)}= d(\mathbf{a}^{k-1}-\widetilde{{\bm\pi}_{h}\mathbf{a}^{k}},\mathbf{v}),}\\[2 mm ] { \displaystyle k_3^{(k ) } = \big(|\psi^{k-1}|^{2}\mathbf{a}^{k-1}-|\psi^{k-1}_{h}|^{2}\frac{\overline{\mathbf{a}}^{k}_{h } + \overline{\mathbf{a}}^{k-1}_{h}}{2},\;\mathbf{v}\big),}\\[2 mm ] { \displaystyle k_4^{(k)}= \left(f(\psi^{k-1},\psi^{k-1})-f(\psi^{k-1}_{h},\psi^{k-1}_{h}),\;\mathbf{v}\right ) , } \end{array}\ ] ] we rewrite ( [ eq:4 - 4 ] ) as follows : we first estimate , and . under the regularity assumption of in ( [ eq:2 - 10 ] ) , we have by applying the regularity assumption ( [ eq:2 - 10 ] ) , the interpolation error estimates ( [ eq:4 - 1 ] ) and theorem [ thm3 - 1 ] , it is easy to deduce + c\sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k})+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2 } + \|\mathbf{v}\|_{\mathbf{l}^2}^{2}\right).\ ] ] in order to estimate , we rewrite in the following form : { \displaystyle \quad + \left(f(i_{h}\psi^{k-1 } , i_{h}\psi^{k-1})-f(\psi^{k-1}_{h},\psi^{k-1}_{h}),\;\mathbf{v}\right)\stackrel{\mathrm{def}}{= } k_4^{(k),1}+k_4^{(k),2}. } \end{array}\ ] ] we observe that { \displaystyle = -\frac{\mathrm{i}}{2}\left(\varphi^{\ast}\nabla(\varphi-\psi)-\varphi\nabla(\varphi-\psi)^{\ast}\right ) + \frac{\mathrm{i}}{2}\left((\varphi-\psi)\nabla\psi^{\ast}-(\varphi-\psi)^{\ast}\nabla\psi\right ) . } \end{array}\ ] ] we obtain from ( [ eq:4 - 1 ] ) and ( [ eq:4 - 70 ] ) that similarly , from ( [ eq:4 - 1 ] ) and ( [ eq:4 - 70 ] ) , we deduce { \displaystyle \quad = -\frac{\mathrm{i}}{2}\left((\theta_{\psi}^{k-1})^{\ast}\nabla\theta_{\psi}^{k-1 } -\theta_{\psi}^{k-1}\nabla(\theta_{\psi}^{k-1})^{\ast},\;\mathbf{v}\right ) } \\[2 mm ] { \displaystyle \quad\quad-\frac{\mathrm{i}}{2}\left((i_{h}\psi^{k-1})^{\ast}\nabla\theta_{\psi}^{k-1 } -i_{h}\psi^{k-1}\nabla(\theta_{\psi}^{k-1})^{\ast},\;\mathbf{v}\right ) } \\[2 mm ] { \displaystyle\quad \quad + \frac{\mathrm{i}}{2}\left(\theta_{\psi}^{k-1}\nabla ( i_{h}\psi^{k-1})^{\ast}-(\theta_{\psi}^{k-1})^{\ast}\nabla i_{h}\psi^{k-1},\;\mathbf{v}\right)}\\[2 mm ] { \displaystyle \quad \leq -\left(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\mathbf{v}\right ) + c \| i_{h}\psi^{k-1}\|_{\mathcal{l}^{\infty}}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}\|\mathbf{v}\|_{\mathbf{l}^2}}\\[2 mm ] { \displaystyle \quad\quad+c\|\nabla i_{h}\psi^{k-1}\|_{\mathbf{l}^{3}}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6 } \|\mathbf{v}\|_{\mathbf{l}^2}}\\[2 mm ] { \displaystyle \quad \leq -\left(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\mathbf{v}\right ) + c\left(\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2 } + \|\mathbf{v}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] combining ( [ eq:4 - 69 ] ) , ( [ eq:4 - 71 ] ) and ( [ eq:4 - 72 ] ) gives from ( [ eq:4 - 63 ] ) , ( [ eq:4 - 68 ] ) and ( [ eq:4 - 73 ] ) , we obtain { \displaystyle \qquad \quad+ c\sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k } ) + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}+\|\mathbf{v}\|_{\mathbf{l}^2}^{2}\right ) . 
}\end{array}\ ] ] now taking in ( [ eq:4 - 62 ] ) , we find { \displaystyle \quad = \frac{1}{2\vardelta t}\left(\|\partial \theta_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } -\|\partial \theta_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^2}^{2}\right)+\frac{1}{4\delta t}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k } ) -(d({\theta}_{\mathbf{a}}^{k-2 } , { \theta}_{\mathbf{a}}^{k-2})\right)}\\[2 mm ] { \displaystyle \quad = k_1^{(k)}+ k_2^{(k)}+ k_3^{(k)}+ k_4^{(k)}. } \end{array}\ ] ] note that thus we have { \displaystyle = -\sum_{k=1}^{m } j_5^{(k ) } + \sum_{k=1}^{m}\left(f(\theta_{\psi}^{k-1 } , \theta_{\psi}^{k-1}),\frac{\partial { \bm \pi}_{h}{\mathbf{a}}^{k } + \partial { \bm \pi}_{h}{\mathbf{a}}^{k-1}}{2}\right ) } \\[2 mm ] { \displaystyle \leq -\sum_{k=1}^{m } j_5^{(k ) } + c\sum_{k=1}^{m}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^{2}}^{2}}. \end{array}\ ] ] here we have used the definition of in ( [ eq:4 - 54 ] ) . multiplying ( [ eq:4 - 75 ] ) by , and using ( [ eq:4 - 74 ] ) and ( [ eq:4 - 75 - 0 ] ) ,we obtain { \displaystyle \quad \leqc\left\{h^{2r}+(\delta t)^{4 } \right\ } + c\delta t\sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k})+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2 } + \|\partial \theta_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } \right ) } \\[2 mm ] { \displaystyle \quad+ \delta t \sum_{k=1}^{m } k_2^{(k ) } -\delta t\sum_{k=1}^{m } j_5^{(k)}. } \end{array}\ ] ] since , by applying ( [ eq:4 - 2 ] ) and the young s inequality , we get { \displaystyle \leq c\left\{h^{2r } + ( \delta t)^{4}\right\ } + \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m},\theta_{\mathbf{a}}^{m}\right ) + \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m-1},\theta_{\mathbf{a}}^{m-1}\right ) + c\delta t \sum_{k=0}^{m}d\left(\theta_{\mathbf{a}}^{k},\theta_{\mathbf{a}}^{k}\right)}\\[2 mm ] \end{array}\ ] ] combining ( [ eq:4 - 60 ] ) , ( [ eq:4 - 76 ] ) and ( [ eq:4 - 77 ] ) implies { \displaystyle \quad\leq c\left\{h^{2r}+(\delta t)^{4}\right\ } + c\delta t \sum_{k=0}^{m}\left\{d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right\}. } \end{array}\ ] ] by applying the discrete gronwall s inequality , we have combine ( [ eq:4 - 82 ] ) with the interpolation error estimates ( [ eq:4 - 1 ] ) and we can complete the proof of theorem [ thm2 - 1 ] .to validate the developed algorithm and to confirm the theoretical analysis reported in this paper , we present numerical simulations for the following case studies . [ exam6 - 1 ] we consider the maxwell - schrdinger system ( [ eq:1 - 2 ] ) , where the initial - boundary conditions are as follows : { \displaystyle \psi(\mathbf{x},0 ) = 2\sqrt{2}\sin(\pi x_1)\sin(\pi x_2)\sin(\pi x_3 ) , \quad \mathbf{a}_{t}(\mathbf{x},0)=\mathbf{a}_1(\mathbf{x})=0 , } \\[2 mm ] { \mathbf{a}(\mathbf{x},0)=\mathbf{a}_0(\mathbf{x } ) = \big(10x_1 x_2 x_3(1-x_2)(1-x_3 ) , 10x_1 x_2 x_3(1-x_1)(1-x_3 ) , } \\[2 mm ] { \displaystyle\qquad \qquad \qquad \qquad 10x_1 x_2 x_3(1-x_1)(1-x_2)\big ) . }\end{array}\ ] ] here we take , , , and the time step . note that the initial wave function is the eigenfunction of the stationary schrdinger s equation .the numerical results are displayed in fig .5.1 . :( a ) the evolution of the density function on the line at time ; ( b ) the evolution of , and , where , and . 
, title="fig:",width=226,height=226 ] : ( a ) the evolution of the density function on the line at time ; ( b ) the evolution of , and , where , and . , title="fig:",width=226,height=226 ][ fig:6 - 1 ] [ rem6 - 1 ] the numerical results illustrated in fig .5.1 clearly show that the change of is smooth with respect to and the assumption on which the modified maxwell - schrdinger equations are based is valid in this case .[ exam6 - 2 ] we consider the modified maxwell schrdinger s equations as follows : { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) - \gamma\nabla(\nabla \cdot \mathbf{a } ) + \frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big)}\\[2 mm ] { \displaystyle \quad\quad+\,\,\vert\psi\vert^{2}\mathbf{a } = \mathbf{g}(\mathbf{x},t),\quad \,\ , ( \mathbf{x},t)\in \omega\times(0,t ) , } \end{array } \right.\ ] ] where the initial - boundary conditions are given in ( [ eq:1 - 7])-([eq:1 - 8 ] ) .we take , , and . the exact solution of ( [ eq:6 - 11 ] ) is given by { \displaystyle \qquad\qquad\qquad+ 5.0e^{\mathrm{i}\pi t } \sin(2\pi x_1)\sin(2\pi x_2)\sin(2\pi x_3 ) , } \end{array}\ ] ] { \displaystyle \quad \sin(2\pi x_1)\sin(2\pi x_2)\cos(2\pi x_3)\big ) + \cos(\pi t)\big(\cos(\pi x_1)\sin(\pi x_2)\sin(\pi x_3),}\\[2 mm ] { \displaystyle \quad \sin(\pi x_1)\cos(\pi x_2)\sin(\pi x_3 ) , \sin(\pi x_1)\sin(\pi x_2)\cos(\pi x_3)\big ) . }\end{array}\ ] ] the functions and in ( [ eq:6 - 11 ] ) are chosen correspondingly to the exact solution .a uniform tetrahedral partition is generated with nodes in each direction and elements in total .we solve the system([eq:6 - 11 ] ) by the proposed crank - nicolson galerkin finite element scheme ( [ eq:2 - 9 ] ) with linear elements and quadratic elements , respectively . to confirm our error analysis, we take for the linear element method and for the quadratic element method respectively .numerical results for the linear element method and the quadratic element method at time are listed in tables [ table6 - 1 ] and [ table6 - 2 ] , respectively .. error of linear fem with and . [ cols="^,^,^,^,^",options="header " , ] [ rem6 - 2 ] numerical results in tables [ table6 - 1 ] and [ table6 - 2 ] are in good agreement with the theoretical analysis , see theorem [ thm2 - 1 ] .[ exam6 - 3 ] we consider the following modified maxwell schrdinger s equations { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) - \gamma\nabla(\nabla \cdot \mathbf{a } ) + \frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big)}\\[2 mm ] { \displaystyle \quad\quad+\,\,\vert\psi\vert^{2}\mathbf{a } = \mathbf{g}(\mathbf{x},t),\quad \,\ , ( \mathbf{x},t)\in \omega\times(0,t).}\\[2 mm ] { \displaystyle \psi(\mathbf{x},t)=0,\quad \mathbf{a}(\mathbf{x},t)\times\mathbf{n}=0 , \quad \nabla \cdot \mathbf{a}(\mathbf{x},t ) = 0 , \quad ( \mathbf{x},t)\in \partial \omega\times(0,t),}\\[2 mm ] { \displaystyle \psi(\mathbf{x},0 ) = \psi_0(\mathbf{x}),\quad\mathbf{a}(\mathbf{x},0)=\mathbf{a}_{0}(\mathbf{x}),\quad \mathbf{a}_{t}(\mathbf{x},0)=\mathbf{a}_{1}(\mathbf{x } ) , } \end{array } \right.\ ] ] with { \displaystyle \mathbf{g}(\mathbf{x } , t ) = \left(10\sin(1.5{\pi}^{2 } t ) , 10\sin(1.5{\pi}^{2 } t ) , 10\cos(1.5{\pi}^{2 } t)\right ) . } \end{array}\ ] ] in this example we take , , , . 
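Referring back to the error tables for example [exam6-2] above: the observed convergence orders reported in such tables are usually extracted from pairs of successive refinements, as in the following generic sketch. The error values below are placeholders chosen to illustrate first-order behaviour, not numbers taken from the paper.

```python
# Generic sketch of how observed convergence orders are confirmed from error tables;
# the data below are placeholders, not the paper's results.
import math

def observed_orders(h_values, errors):
    """Observed order p between successive refinements, assuming e ~ C * h^p."""
    pairs = zip(zip(h_values, errors), zip(h_values[1:], errors[1:]))
    return [math.log(e0 / e1) / math.log(h0 / h1) for (h0, e0), (h1, e1) in pairs]

# placeholder data illustrating first-order (linear element, energy-norm) behaviour
h = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
err = [2.1e-1, 1.0e-1, 5.2e-2, 2.6e-2]
print(observed_orders(h, err))  # values close to 1 indicate O(h) convergence
```

With quadratic elements the analogous computation would be expected to return values close to 2, matching the h^r rate of theorem [thm2-1] when the time step is refined accordingly.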
using the mesh in example [ exam6 - 2 ] with m = 50 , we solve the system ( [ eq:6 - 12 ] ) by the proposed crank - nicolson galerkin finite element scheme ( [ eq:2 - 9 ] ) with linear elements .the time step . in fig .5.2 we display the numerical results of on the line and on the intersection at time . in fig .5.3 we plot the numerical results of on the intersections and at time , respectively . :numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the line and the contour plots of on the intersection at time .( a ) on the line at time ; ( b ) the contour plot at time ; ( c ) the contour plot at time ; ( d ) the contour plot at time ; ( e ) the contour plot at time ; ( f ) the contour plot at time ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the intersection and at time .( a ) on the intersection ; ( b ) on the intersection ; ( c ) on the intersection ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the intersection and at time .( a ) on the intersection ; ( b ) on the intersection ; ( c ) on the intersection ; , title="fig:",width=226,height=226 ] : numerical results of the density function on the intersection and at time .( a ) on the intersection ; ( b ) on the intersection ; ( c ) on the intersection ; , title="fig:",width=226,height=226 ] althouth the maxwell - schrdinger system and the time - dependent ginzburg - landau equations are somehow formally similar , they describe the different physical phenomenons .the time - dependent ginzburg - landau equations describe the vortex dynamics of superconductor while the maxwell - schrdinger equations describe the wave packet dynamics of an electron . 
as can be seen in figs . 5.2 and 5.3 , the wave packet of the electron is located at the center of the computational domain at first , and the external and its self - induced electromagnetic fields cause the motion of the wave packet . unlike the time - dependent ginzburg - landau equations , no stable state is observed in our computation . we have presented the optimal error estimates of a crank - nicolson galerkin finite element method for the modified maxwell - schrdinger equations , which are derived from the original equations under some assumptions . the techniques used in this paper may also be applied to other nonlinear pdes , such as the ginzburg - landau equations . the original maxwell - schrdinger system is challenging for both numerical computation and theoretical analysis . our work can serve as a first step towards the numerical analysis of this system . we will study the original system in future work using the mixed finite element method . , _ coupled analysis of maxwell - schrdinger equations by using the length gauge : harmonic model of a nanoplate subjected to a 2d electromagnetic field _ , intern . j. of numer . model . : electronic networks , devices and fields ( 2013 ) , 26(6 ) : pp . 533 - 544 . , _ a new 3-d transmission line matrix scheme for the combined schrdinger - maxwell problem in the electronic / electromagnetic characterization of nanodevices _ , ieee transactions on microwave theory and techniques ( 2008 ) , 56(3 ) : pp . 654. | in this paper we consider the initial - boundary value problem for the time - dependent maxwell - schrdinger equations , which arises from the interaction between matter and the electromagnetic field in semiconductor quantum devices . a crank - nicolson finite element method for solving the problem is presented . the optimal energy - norm error estimates for the numerical algorithm without any time - step restrictions are derived . numerical tests are then carried out to confirm the theoretical results . time - dependent maxwell - schrdinger equations , finite element method , crank - nicolson scheme , optimal error estimate . 65n30 , 65n55 , 65f10 , 65y05
the ability to simulate the final inspiral , merger , and ring - down of black hole binaries with numerical relativity plays a key role in understanding a source of gravitational waves that may one day be observed with gravitational wave detectors . while initial simulations focused on binaries of equal - mass , zero spin , and quasi - circular inspirals, there currently is a large effort to explore the parameter space of binaries , e.g. .a key part of studying the parameter space is to simulate binaries with intermediate mass - ratios . to date , the mass ratio furthest from equal masses that has been numerically simulated is 10:1 .these simulations use the baumgarte - shapiro - shibata - nakamura ( bssn ) formulation with 1+log slicing , and the driver condition for the shift . in , it was noted that the stability of the simulation is sensitive to the damping factor , , used in the driver condition , here , is the shift vector describing how the coordinates move inside the spatial slices , , and is the contraction of the christoffel symbol , , with the conformal metric , . the standard choice for is to set it to a constant value , which works well even for the most demanding simulations as long as the mass ratio is sufficiently close to unity . in binary simulations , a typical choice is a constant value of about , with the total mass of the system .this choice , however , leads to instabilities for the mass ratio 10:1 simulation , although stability was obtained for .the value of is chosen to damp an outgoing change in the shift while still yielding stable evolutions . as we will show , if is too small , there are unwanted oscillations , and values that are too large lead to instabilities . by itself , this observation is not new , see e.g. .the key issue for unequal masses is that , as evident from ( [ eq : gammadriver ] ) , the damping factor has units of inverse mass , .therefore , the interval of suitable values for depends on the mass of the black holes . for unequal masses , a constant cannot equally well accommodate both black holes .a constant damping parameter implies that the effective damping near each black hole is asymmetric since the damping parameter has dimensions . for large mass ratios, this asymmetry in the grid can be large enough to lead to a failure of the simulations because the damping may become too large or too small for one of the black holes . to cure this problem , we need a position - dependent damping parameter that adapts to the local mass. in particular , we want it to vary such that , in the vicinity of the puncture with mass , its value approaches .a position - dependent was already considered when the driver condition was introduced , but such constructions were not pursued further because for moderate mass ratios a constant works well .recently , we revived the idea of a non - constant for moving puncture evolutions in order to remove the limitations of a constant for large mass ratios . in , we constructed a position - dependent using the the conformal factor , , which carries information both about the location of the black holes , and about the local puncture mass .the form of was chosen to have proper fall - off rates both at the punctures and at large distance from the binary . in ,this approach was used successfully for mass ratio 10:1 .( we note in passing that damping is useful in other gauges as well , e.g. in the modified harmonic gauge condition includes position - dependent damping by use of the lapse function . 
) in the present work , we examine one potential short - coming of the choice of , which leads us to suggest an alternative type of position - dependent . using , we find large fluctuations in the values of that , and this might lead to instabilities in the simulation of larger mass - ratio binary black holes . to address this ,we have tested two new explicit formulas for the damping factor designed to have predictable behavior throughout the domain of computation .we find the new formulas to produce only small changes in the waveforms that diminish with resolution , and there is a great deal of freedom in the implementation .independently of our discussion here , in the stability issues for large are explained , and a non - constant is suggested ( although not yet explored in actual simulations ) , that , in its explicit coordinate dependence , is similar to one of our suggestions .the paper is organized as follows .we first describe the reasons for the damping factor and some of the reasons for limiting its value in sec [ sec : motivation ] . in sec .[ sec : forms_of_eta ] , we discuss some previous forms of that have been used .we also present two new definitions and why we investigated them . in sec .[ sec : results ] , we find that these new definitions agree well with the use of constant in the extracted gravitational waves for mass ratios up to 4:1 . finally , in sec . [sec : discussion ] , we discuss further implications of this work .in order to define a position - dependent form for , it is important to determine what this damping parameter accomplishes in numerical simulations . for this reason , we examine the effects of running different simulations while varying between runs .first we use evolutions of single non - spinning black - holes to identify the key physical changes .then we examine equal - mass binaries to determine specific values desired in at both large and small radial coordinates .for all the work in this paper , we have used the bam computer code described in .it uses the bssn formalism with 1+log slicing and driver condition in the moving puncture framework .puncture initial data with bowen - york extrinsic curvature have been used throughout this work , solving the hamiltonian constraint with the spectral solver described in . for binaries, parameters were chosen using to obtain quasi - circular orbits , while the parameters for single black holes were chosen directly .we extract waves via the newman - penrose scalar .the wave extraction procedure is described in detail in .we perform a mode decomposition using spin - weighted spherical harmonics with spin weight , , as basis functions and calculate the scalar product we further split into mode amplitude and phase in order to cleanly separate effects in these components , . in this paper , we focus on one of the most dominant modes , the mode , and report results for this mode unless stated otherwise . the extraction radius used here is .the damping factor , , in eq .( [ eq : gammadriver ] ) , is included to reduce dynamics in the gauge during the evolution . to examine the problem brought up in the introduction , we compare results of a single , non - spinning puncture with mass .we use a courant factor of and 9 refinement levels centered around the puncture . 
the resolution on the finest gridis , and the outer boundary is situated at .varying the damping constant between and , two main observations can be made .first , as designed , a non - zero attenuates emerging gauge waves efficiently .second , an instability develops for values of that are too large . figs .[ fig : spbetax1_t15.2 ] , and [ fig : spbetax1_t30.4 ] illustrate the first observation .both figures show the -component of the shift along the -coordinate using .-component of the shift , , for a single non - spinning puncture of mass at time .the three lines were taken for different values of the damping factor .the solid line ( black ) is for .the dashed line ( red ) is for and the dotted - dashed line ( green ) is for .this shows the beginning of a pulse in for smaller values of . ]-component of the shift , , for a single non - spinning puncture of mass at time .the three lines were taken for different values of the damping factor with the same line type and color scheme as in fig .[ fig : spbetax1_t15.2 ] .here it is clear a pulse radiates outward in the shift with smaller values of . ] apart from the usual shift profile , fig .[ fig : spbetax1_t15.2 ] shows the beginnings of a pulse in the case ( solid line ) at after of evolution . examining fig .[ fig : spbetax1_t30.4 ] , where we zoom in at a later time , , one can see that the pulse has started to travel further out ( solid line ) . looking carefully , one can also see a much smaller pulse in the line ( dashed ) .lastly , by examination , one can find almost no traveling pulse in the curve ( dotted - dashed line ) .the observed pulse in the shift travels to regions far away from the black hole and effects the gauge of distant observers .this might have undesirable implications for the value of such numerical data when trying to understand astrophysical sources .-component of the shift vector in the -direction for a single non - spinning puncture of mass at times and .the three different lines mark three values of the damping constant .the solid line ( black ) is for , the dashed line ( red ) for and the dotted - dashed line ( green ) for . at ,the simulation using develops an instability in the shift vector and fails soon afterward , the same happens for at . in the simulation using , no such instability develops ( not shown ) . ] for values of larger than , an instability arises in the shift at larger radius .[ fig : betax_fail ] shows the -component of the shift vector using damping constants ( solid line ) , ( dashed line ) and ( dotted - dashed line ) .the plots show an instability in simulations with developing in , which eventually leads to a failure of the simulations .contrary to this , the simulation using does not show this shift related instability . in test runs we found that by decreasing the courant factor used , we could increase the value of the damping factor and still get stable evolutions .this agrees with where it was shown that the gamma driver possesses the stiff property , which limits the size of the time - step in numerical integration based on the value of the damping .figures [ fig : spbetax1_t15.2 ] , [ fig : spbetax1_t30.4 ] , and [ fig : betax_fail ] make clear how the choice of the damping factor affects the behavior of the simulations .the value we choose for should be non - zero and not larger than to allow for effective damping and stable simulations . 
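The time-step restriction mentioned above can be illustrated with a toy computation. Near a puncture the damping term makes the driver behave locally like dy/dt = -eta*y plus source terms, and an explicit integrator remains stable only while eta*dt stays below an O(1) threshold. A classical fourth-order Runge-Kutta step is assumed below purely for illustration and may not match the integrator actually used in the code.

```python
# Toy illustration (not the simulation code) of why large eta*dt is problematic:
# one explicit RK4 step applied to the pure decay mode y' = -eta*y.
def rk4_growth(z):
    """Amplification factor of one classical RK4 step for y' = -eta*y, with z = eta*dt."""
    return 1.0 - z + z**2 / 2.0 - z**3 / 6.0 + z**4 / 24.0

for eta_dt in (0.5, 1.0, 2.0, 2.5, 2.8, 3.0):
    g = abs(rk4_growth(eta_dt))
    print(f"eta*dt = {eta_dt:3.1f}  |growth per step| = {g:5.3f}  "
          + ("stable" if g <= 1.0 else "unstable"))
# halving dt (e.g. via a smaller Courant factor) halves eta*dt, so roughly twice as
# large a damping factor can be tolerated -- consistent with the test runs described above
```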
the exact cutoff value between stable and unstable simulations is not relevant here since the position dependent form we develop in sec .[ sec : forms_of_eta ] gives us the flexibility we need to obtain stable simulations . to examine the effect of on the extraction of gravitational waves , we compare the results from simulations of an equal mass binary with total mass in quasi - circular orbits with initial separation , using .again , the courant factor is chosen to be and we use , in the terminology of , the grid configuration ] , ] , which corresponds to resolutions on the finest grids of ( ) , ( ) and ( ) , respectively .when referring to results from different resolutions , we will from here on use the number of grid points on the finest grid , , to distinguish between them .in this subsection , we use and in eq .( [ eq : etas6 ] ) . as test systemwe use an unequal mass black hole binary with mass ratio and an initial separation of without spins in quasi - circular orbits . for orientation ,[ fig : um4amp ] shows the amplitude of the 22-mode , , computed with the standard gauge ( displayed as solid lines ) and with the new using eq .( [ eq : etas6 ] ) ( displayed as non - solid lines ) .the three different colors correspond to the three resolutions .the inset shows a larger time range of the simulation , while the main plot concentrates on the time frame around merger .the plot gives a course view of the closeness of the results we obtain with standard and new gauges . in fig .[ fig : um4s6ampdevres2 ] , we plot the relative differences between the amplitudes at low and medium ( solid lines ) , and medium and high resolution ( non - solid lines ) obtained with ( light gray lines ) as well as ( eq . ( [ eq : etas6 ] ) ) ( black lines ) . here , we find the maximum error between the low and medium resolution of the series using amounts to about ( solid gray curve ) . between medium and high resolution ( dashed gray curve ) , we find a smaller relative error , but it still goes up to at the end of the simulation . employing eq .( [ eq : etas6 ] ) , the maximum amplitude error between low and medium resolution ( solid black line ) is only about , and therefore even smaller than the error between medium and high resolution for the constant damping case . between medium and high resolution , the relative amplitude differences for eq .( [ eq : etas6 ] ) are in general smaller than the ones between low and medium resolution , although the maximum error is comparable to it ( dot - dashed black line ) .we repeat the previous analysis for the phase of the 22-mode , . again , we compare the errors between resolutions in a fixed gauge .figure [ fig : um4phasedevres ] shows that the error between lowest and medium resolution using ( solid gray line ) grows up to about 0.31 radians . for the differences between medium and high resolution ( dashed line )we find a maximal error of 0.2 radians for .for following eq .( [ eq : etas6 ] ) , the phase error between low and medium resolution is only about 0.19 radians ( solid black line ) and decreases to 0.1 radians between medium and high resolution ( dot - dashed line ) .again , employing the position dependent form of , eq .( [ eq : etas6 ] ) , the error between lowest and medium resolution is lower than the one we obtain for constant between medium and high resolution .the results for amplitude and phase error suggest that we can achieve the same accuracy with less computational resources using a position - dependent . 
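The amplitude and phase comparisons quoted above reduce to a simple post-processing step on the extracted 22-mode. The sketch below is not the actual analysis pipeline: it uses synthetic stand-in signals instead of simulation output and assumes both resolutions are available on a common time grid; in practice one would interpolate and possibly align the waveforms first.

```python
# Sketch of the amplitude/phase comparison between two resolutions
# (synthetic stand-in data, NOT simulation output).
import numpy as np

def amp_phase(c22):
    """Amplitude and continuous (unwrapped) phase of a complex mode time series."""
    return np.abs(c22), np.unwrap(np.angle(c22))

t = np.linspace(0.0, 1000.0, 4001)
low  = (1.0 + 1.000e-3 * t) * np.exp(1j * (0.05 * t + 1.000e-5 * t**2))  # "low resolution"
high = (1.0 + 1.001e-3 * t) * np.exp(1j * (0.05 * t + 1.002e-5 * t**2))  # "high resolution"

amp_lo, phi_lo = amp_phase(low)
amp_hi, phi_hi = amp_phase(high)

rel_amp_diff = np.abs(amp_lo - amp_hi) / np.abs(amp_hi)   # relative amplitude difference
phase_diff = np.abs(phi_lo - phi_hi)                      # phase difference in radians

print("max relative amplitude difference:", rel_amp_diff.max())
print("max phase difference [rad]:", phase_diff.max())
```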
of a binary with mass ratio 4:1 and initial separation .the different colors correspond to three different resolutions according to the grid setup described in the text .the solid lines are results for , the dashed , dotted and dot - dashed ones are for ( eq .( [ eq : etas6 ] ) ) .the inset shows the simulation from shortly after the junk radiation passed , in the main plot we zoom into the region of highest amplitude ( near the merger ) ., title="fig : " ] ( -187,20 ) of a binary with mass ratio 4:1 and initial separation .the different colors correspond to three different resolutions according to the grid setup described in the text .the solid lines are results for , the dashed , dotted and dot - dashed ones are for ( eq .( [ eq : etas6 ] ) ) .the inset shows the simulation from shortly after the junk radiation passed , in the main plot we zoom into the region of highest amplitude ( near the merger ) ., title="fig : " ] ( -50,105 ) of a binary with mass ratio 4:1 and initial separation .the different colors correspond to three different resolutions according to the grid setup described in the text .the solid lines are results for , the dashed , dotted and dot - dashed ones are for ( eq .( [ eq : etas6 ] ) ) .the inset shows the simulation from shortly after the junk radiation passed , in the main plot we zoom into the region of highest amplitude ( near the merger ) ., title="fig : " ] between resolutions and ( gray solid curve ) as well as and ( gray dashed curve ) when using . the same for ( eq . ( [ eq : etas6 ] ) ) between and ( black solid curve ) and and ( black dot - dashed curve ) .the physical situation is the same as in fig .[ fig : um4amp ] .the maximum differences are above , comparing low and medium resolution of the constant simulations ( gray solid line ) . ,title="fig : " ] ( -200,100 ) between resolutions and ( gray solid curve ) as well as and ( gray dashed curve ) when using . the same for ( eq . ( [ eq : etas6 ] ) ) between and ( black solid curve ) and and ( black dot - dashed curve ) .the physical situation is the same as in fig .[ fig : um4amp ] .the maximum differences are above , comparing low and medium resolution of the constant simulations ( gray solid line ) . , title="fig : " ] ( solid gray line ) and ( eq . ( [ eq : etas6 ] ) ) ( solid black line ) as well as between medium and high resolution for ( dashed gray line ) and for ( eq . ( [ eq : etas6 ] ) ) ( dot - dashed black line ) .the physical situation is the one of fig .[ fig : um4amp ] ., title="fig : " ] ( -215,105 ) ( solid gray line ) and ( eq . ( [ eq : etas6 ] ) ) ( solid black line ) as well as between medium and high resolution for ( dashed gray line ) and for ( eq .( [ eq : etas6 ] ) ) ( dot - dashed black line ) .the physical situation is the one of fig .[ fig : um4amp ] ., title="fig : " ] we repeated the analysis of sec . 
[ sec : resultss6 ] with the waveforms we obtain using eq .( [ eq : etas5 ] ) ( with and ) .we use the same initial conditions ( mass ratio , , no spins ) , and compare the amplitudes and phases of the -mode of with the results of the -runs .the grid configurations remain the same .the results are very similar to the ones we obtained in figs .[ fig : um4s6ampdevres2 ] and [ fig : um4phasedevres ] , and we therefore do not show them here .although eqs .( [ eq : etas6 ] ) and ( [ eq : etas5 ] ) result in different shapes for , is very similar .therefore , the comparison to naturally gives very similar results , too .the phase differences between results from eqs .( [ eq : etas6 ] ) and ( [ eq : etas5 ] ) at a given resolution are shown in fig .[ fig : um4phasedev_s5_s6 ] . these are , with a maximum phase error of 0.004 radians , very small compared to the phase errors between resolutions , which , at minimum , are about 0.1 radian ( see fig . [fig : um4phasedevres ] ) . fig .[ fig : um4phasedevress5s6psim2 ] compares the phase error between low and medium ( solid lines ) , and medium and high resolution ( dotted - dashed and dashed line ) of eq .( [ eq : etas5 ] ) ( gray ) to the ones of eq .( [ eq : etas6 ] ) ( black ) . for comparison ,the error between medium and high resolution is also plotted for eq .( [ eq : etapsimm ] ) in this figure ( dotted line ) .the plot indicates that the errors between resolutions are in good agreement for the different position dependent formulas of . ) and( [ eq : etas5 ] ) in three different resolutions ( solid , dashed , dotted - dashed lines ) for mass ratio 4:1 , ., title="fig : " ] ( -215,100 ) ) and eq .( [ eq : etas5 ] ) in three different resolutions ( solid , dashed , dotted - dashed lines ) for mass ratio 4:1 , . ,title="fig : " ] ) ( black lines ) or eq .( [ eq : etas5 ] ) ( gray lines ) for mass ratio 4:1 , . for comparison, we also show the phase difference obtained with eq .( [ eq : etapsimm ] ) between medium and high resolution ( dotted line ) ., title="fig : " ] ( -215,80 ) ) ( black lines ) or eq .( [ eq : etas5 ] ) ( gray lines ) for mass ratio 4:1 , . for comparison, we also show the phase difference obtained with eq .( [ eq : etapsimm ] ) between medium and high resolution ( dotted line ) ., title="fig : " ] -component of the shift vector in -direction after of evolution of the system with mass ratio and .the black , dot - dashed line refers to the use of a constant damping , while the black , solid line uses eq .( [ eq : etapsimm ] ) .the gray , dashed line is for the use of eq .( [ eq : etas5 ] ) and the gray , dotted one for eq .( [ eq : etas6 ] ) . except for the constant ( black , dot - dashed line ) ,the results in this plot are indistinguishable . , title="fig : " ] ( -90,80)-component of the shift vector in -direction after of evolution of the system with mass ratio and .the black , dot - dashed line refers to the use of a constant damping , while the black , solid line uses eq .( [ eq : etapsimm ] ) .the gray , dashed line is for the use of eq .( [ eq : etas5 ] ) and the gray , dotted one for eq .( [ eq : etas6 ] ) . 
in , we found an unusual behavior of the shift vector . this is illustrated in fig . [ fig : um4betax ] , where we plot the component of the shift shown there after the evolution ( this means approximately after merger ) for all four versions of the damping we used for comparison in this paper before , and for the same binary configuration as the one used in secs . [ sec : resultss6 ] and [ sec : resultss5 ] . like in , we find that using eq . ( [ eq : etapsimm ] ) results in a shift which falls off to zero too slowly towards the outer boundary , and which develops a `` bump '' ( black , solid line ) , while the constant damping case ( black , dot - dashed line ) falls off to zero quickly . employing eqs . ( [ eq : etas6 ] ) or ( [ eq : etas5 ] ) avoids this undesirable feature . after merger , the shift falls off to zero when going away from the punctures as it does in the constant damping case ( gray dashed and dotted lines ) . using eq . ( [ eq : etas6 ] ) or ( [ eq : etas5 ] ) prevents unwanted coordinate drifts at the end of the simulations . in this work , we examined the role that the damping factor , , plays in the evolution of the shift when using the gamma driver . in particular , we examined the range of values allowed in various evolutions , and what effects showed up because of the value chosen . we then designed a form of the damping for the evolution of binary black holes which provides appropriate values both near the individual punctures and far away from them , with a smooth transition in between . in sec . [ sec : results ] , we directly examined the waveforms for the case using eq . ( [ eq : etas6 ] ) , where and . while the form of the damping is predictable , and can be easily adjusted for stability , we also saw that the waveforms produced using this definition showed less deviation with increasing resolution than using a constant . when examining the waveforms produced using eq . ( [ eq : etas5 ] ) , we found similar results . in the absence of a noticeable difference in the quality of the waveforms , eq . ( [ eq : etas6 ] ) is computationally cheaper , and , as such , is our preferred definition for the damping .
[ figure ( fig : ah_coordarea ) : ratio of the apparent - horizon coordinate areas for the binary with the initial separation given in the text ; the black , blue and red lines use eq . ( [ eq : etas6 ] ) with varying values of the width parameter ; the orange ( dash - dot - dot ) line uses the constant damping and the green ( dash - dot ) one refers to the result obtained with eq . ( [ eq : etapsimm ] ) ; using eq . ( [ eq : etas6 ] ) , the coordinate areas can be varied with respect to each other depending on the choice of the width parameter ; a ratio of 1 means the black holes have the same size on the numerical grid . ]
we have already pointed out a certain freedom to pick parameters in eqs . ( [ eq : etas6 ] ) and ( [ eq : etas5 ] ) . we did perform some experimentation along this line where we varied the width parameter to see if we could get a useful effect on the coordinate size of the apparent horizons on the numerical grid . in , it was noticed that the damping coefficient affects the coordinate location of the apparent horizon , and therefore the resolution of the black hole on the numerical grid . fig . [ fig : ah_coordarea ] plots the ratio of the grid - area of the larger apparent horizon to the smaller apparent horizon as a function of time for three values of the width parameter , all with the remaining settings as described in the text .
also plotted is the relative coordinate size for the same binaries using a constant in dashed , double - dotted line , and for using eq .( [ eq : etapsimm ] ) in a blue dashed - dotted line .all the evolutions show an immediate dip , and then increase in the grid - area ratio during the course of the evolution . while a very low ratio was found using eq .( [ eq : etapsimm ] ) , the orange dotted line was later found for the choices of with and with eq .( [ eq : etas6 ] ) . due to this freedom in the implementation of our explicit formula for the damping, it may be possible to further reduce the relative grid size of the black holes .this effect could be important in easing the computational difficulty of running a numerical simulation for unequal mass binaries . having a form of that leads to stable evolutions for any mass - ratio is an important step towards the numerical evolution of binary black holes in the intermediate mass - ratio .we believe the form given in eq .( [ eq : etas6 ] ) provides such a damping factor at a low computational cost , although the test results presented are limited to mass ratio .we plan to examine larger mass ratios in future work .the new method should allow binary simulations for mass ratio , or even .it remains to be seen whether other issues than the gauge are now the limiting factor for simulations at large mass ratios .it is a pleasure to thank zhoujian cao and erik schnetter for discussions .this work was supported in part by dfg grant sfb / transregio 7 `` gravitational wave astronomy '' and the dlr ( deutsches zentrum fr luft und raumfahrt ) .d. m. was additionally supported by the dfg research training group 1523 `` quantum and gravitational fields '' .computations were performed on the hlrb2 at lrz munich . | certain numerical frameworks used for the evolution of binary black holes make use of a gamma driver , which includes a damping factor . such simulations typically use a constant value for damping . however , it has been found that very specific values of the damping factor are needed for the calculation of unequal mass binaries . we examine carefully the role this damping plays , and provide two explicit , non - constant forms for the damping to be used with mass - ratios further from one . our analysis of the resultant waveforms compares well against the constant damping case . |
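the explicit expressions for eqs . ( [ eq : etas6 ] ) and ( [ eq : etas5 ] ) are lost in this extraction , but the qualitative requirement stated in the conclusions above — a damping that takes suitable values near each puncture and far away from them , with a smooth transition of adjustable width — can be illustrated with a toy profile . the sketch below is only an assumed , generic form ( the gaussian transition , the function name `eta_toy` and all parameter values are illustrative choices , not the paper's definitions ) .

```python
import numpy as np

def eta_toy(points, punctures, masses, eta_far=0.25, eta_near=2.0, width=2.0):
    """Toy position-dependent damping: roughly eta_near close to each puncture,
    eta_far in the wave zone, with a smooth (Gaussian) transition whose extent
    is set by 'width'.  Purely illustrative -- NOT eq. (etas6) or (etas5)."""
    points = np.atleast_2d(points)                 # shape (npoints, 3)
    bump = np.zeros(len(points))
    for xp, m in zip(punctures, masses):
        r = np.linalg.norm(points - np.asarray(xp), axis=1)
        # heavier puncture -> wider region of large damping (an assumption)
        bump = np.maximum(bump, np.exp(-(r / (width * m)) ** 2))
    return eta_far + (eta_near - eta_far) * bump

# 4:1 binary: punctures on the x-axis, total mass normalized to 1 (illustrative)
punctures = [(-0.8, 0.0, 0.0), (3.2, 0.0, 0.0)]
masses = [0.8, 0.2]
line = np.column_stack([np.linspace(-10, 10, 9), np.zeros(9), np.zeros(9)])
print(np.round(eta_toy(line, punctures, masses), 3))
```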
describing a fluid flow , one labels particles of the fluid by means of lagrangian coordinates .one supposes , that any lagrangian coordinates label the same fluid particle all the time . applying laws of newtonian dynamics to any fluid particle , one obtains hydrodynamic equations for the fluid flow in the lagrangian representation ( in the lagrangian coordinates ) .the lagrangian representation is sensitive to the correct labeling of the fluid particles in the sense , that there are situations , when in different time moments the same lagrangian coordinates describe different fluid particles . for instance , if two like gas beams , consisting of noninteracting molecules , pass one through another , the particle labeling changes after `` collision '' of the two beams .the picture is shown in the figure , which describes world lines of gas particles in the space - time .the solid lines shows gas particle with the same labeling , whereas dashed lines show world lines of real gas molecules .in this example a violation of the fluid particle labeling after `` collision '' is evident .the stream lines , represented by solid lines , do not describe motion of real fluid particles .is it important ?can we observe stream lines of a fluid ?let us imagine that we introduce several flecks of dust in one of gas flows and follow their motion .we suppose that the size of flecks is larger , than the size of gas molecules , and any fleck interacts with many gas molecules .then we may think , that any fleck of dust moves along stream line of the fluid .as far as collision of any fleck of dust with gas molecules is random , the flecks of dust after `` collision '' appear in both gas beams , although before the `` collision '' they were placed only in one of them . on the other hand ,it is generally assumed , that flecks of dust moves together with the gas , and any fleck moves along the stream line of the fluid .observation of flecks of powder is a usual method of the stream lines investigation .the hydrodynamic description of a fluid is valid at the supposition , that one stream of a fluid can penetrate into the other one only to the depth of the mean length of collision path . in this casethe interfusion of different streams of a fluid will be infinitesimal .however , such a small interfusion will take place , and this interfusion may appear to be essential for the shape of stream lines .such a physical phenomenon as turbulence can be discovered only , if one traces the irregular behavior of stream lines . in other words , for observation of turbulence a displacement of fluid particles is important , but not only their velocities .the velocities are important only as a source of displacement .a motion of the inviscid barotropic fluid is described by the euler equations where is the fluid density , is the fluid velocity , is the pressure and is the fluid internal energy per unit mass .stream lines are described by a system of ordinary differential equations .one supposes that fluid particles move along the stream lines , which are defined by the equation where is a solution of the euler equations ( [ a1.1 ] ) , ( [ a1.2 ] ) . 
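equation ( [ a1.3 ] ) is an ordinary differential equation of the form dx/dt = v(t, x) driven by the velocity field that solves the euler system . as a minimal illustration of how stream lines ( particle paths ) are obtained once the velocity is known , the sketch below integrates such an equation for an arbitrarily chosen , analytically prescribed velocity field ; the field and all numbers are illustrative assumptions , not a solution of eqs . ( [ a1.1 ] ) , ( [ a1.2 ] ) .

```python
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, x):
    """Prescribed 2D velocity field (illustrative only): a solid-body
    rotation plus a weak time-dependent strain."""
    u = -x[1] + 0.1 * np.sin(t) * x[0]
    v = x[0] - 0.1 * np.sin(t) * x[1]
    return [u, v]

# particle path x(t) with x(0) = x0, i.e. the analogue of eq. (a1.3)
x0 = [1.0, 0.0]
sol = solve_ivp(velocity, (0.0, 10.0), x0, dense_output=True, rtol=1e-8)
for t in (0.0, 2.5, 5.0, 10.0):
    print(f"t = {t:4.1f}   x = {sol.sol(t)}")
```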
the euler system ( [ a1.1 ] ) , ( [ a1.2 ] ) is a closed system of differential equations , which may be solved independently of equations ( a1.3 ) .the euler system is a system of nonlinear partial differential equations .it is difficult for solution .the system of ordinary differential equations ( [ a1.3 ] ) is simpler , than the euler system .besides , it can be solved only after solution of the euler system ( [ a1.1 ] ) , ( [ a1.2 ] ) .it is a reason , why researchers investigate mainly the euler system .the system of equations ( [ a1.3 ] ) for the stream lines remains usually outside the region of consideration as some triviality .however , the euler system ( [ a1.1 ] ) , ( [ a1.2 ] ) and equations ( a1.3 ) are dynamic equations of one dynamic system , and they should be considered together . this dynamic system will be referred as the complete hydrodynamic system .the euler equations ( [ a1.1 ] ) , ( [ a1.2 ] ) are not dynamic equations of a wholesome dynamic system , because the can not be deduced from a variational principle , whereas dynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) , ( [ a1.3 ] ) can .it seems , that dynamic equations do not influence on the solution of dynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) . in reality , it is true only for irrotational flows . in vorticalflows a situation changes . inthe vortical flows an interfusion appears .the interfusion is not so large as in figure with colliding gas beams .this interfusion is infinitesimal .it is conditioned by different velocities of adjacent fluid volumes .this infinitesimal interfusion influences on the shape of stream lines and on the labeling of the fluid particles , although it does not influence on quantities , which are solution of the euler system ( [ a1.1 ] ) , ( [ a1.2 ] ) . in the case of the euler representation of hydrodynamic equations ,the fluid particle labeling is not necessary .the four hydrodynamic euler equations are obtained as a result of the conservation laws of the energy and momentum . however , in this case the conservation law of the angular momentum is not used , and one can not be sure , that the system of four euler equations for barotropic fluid describes completely rotational degrees of freedom of molecules and those of fluid particles .the rotational degrees of freedom may be essential in turbulent flows , where close consideration of rotational degrees of freedom may appear to be essential .in this paper we try to take into account interfusion , which appears in the rotational flows of the barotropic fluid .influence of interfusion manifests itself in labeling of the fluid particles by means of the lagrangian coordinates .this change of labeling is not important for solution of euler equations ( [ a1.1 ] ) , ( [ a1.2 ] ) , which form a closed system of differential equations .however , this change of labeling may appear to be important for such physical phenomena , where the shape of stream lines is essential ( such as turbulence ) .we consider connection between the labeling and the interfusion on the formal mathematical level .we shall consider the euler dynamic equation for inviscid barotropic fluid we are interesting in the question , whether the labeling , generated by the equation ( [ a1.3 ] ) is an unique possible way of labeling . 
to solve this problem, we shall consider the lagrangian coordinates to be dependent dynamic variables .the eulerian coordinates are considered to be independent dynamic variables .thus , we consider dynamic system , described by seven dependent dynamic variables * * , * * which are functions of four independent variables .note that system of dynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) is a closed system of dynamic equations .however , the dynamic system , described , by four dependent variables is not a wholesome dynamic system in the sense , that dynamic equations ( a1.1 ) , ( [ a1.2 ] ) can not be obtained from some a variational principle . to obtain dynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) from the variational principle , one needs to add so - called lin constraints .this conditions have the form is easy to see , that characteristics of the linear differential equation ( [ b1.3]) with the equation ( [ a1.3 ] ) .vice versa , any integral of the equation system ( [ a1.3 ] ) is a solution of the equation ( [ b1.3 ] ). the lin constraints ( [ b1.3 ] ) are interesting in the relation , that independent dynamic variables in ( [ b1.3 ] ) are the same , as in the dynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) .hence , dynamic equations ( a1.1 ) , ( [ a1.2 ] ) and dynamic equations ( [ b1.3 ] ) may be considered as dynamic equations of one dynamic system .it is rather difficult to consider system of equations ( [ a1.1 ] ) , ( [ a1.2 ] ) , ( [ a1.3 ] ) as a dynamic equations of a dynamic system , because independent variables are different in equations ( [ a1.1 ] ) , ( [ a1.2 ] ) and ( [ a1.3 ] ) .let us note that the quantities may be considered to be the generalized stream function ( gsf ) , because have two main properties of the stream function .gsf labels stream lines of a fluid .some combinations of the first derivatives of any satisfy the continuity equation identically. is the 4-vector of flux .here and in what follows , a summation over two repeated indices is produced ( 0 - 3 ) for latin indices and ( 1 - 3 ) for greek ones .the jacobian determinant considered to be a four - linear function of .the quantity is the temporal lagrangian coordinate , which appears to be fictitious in expressions for the flux 4-vector a use of jacobians in the description of the ideal fluid goes up to clebsch ( * ? ? ?* ; * ? ? ?* clebsch , 1857,1859 ) , who used jacobians in the expanded form .it was rather bulky .we use a more rational designations , when the 4-flux and other essential dynamic quantities are presented in the form of derivatives of the principal jacobian . dealing with the generalized stream function ,the following identities are useful details of working with jacobians and the generalized stream functions in ( * ? ? ?* rylov,2004 ) . _example_. _application of the stream function for integration of equations , describing the 2d stationary flow of incompressible fluid . _ dynamic equations have the form and are velocity components along -axis and -axis respectively . introducing the stream function by means of relations the first equation ( [ a2.6 ] ) identically , and we obtain for the second equation ( [ a2.6 ] ) the relations can be rewritten in the form is the vorticity of the fluid flow .the general solution of equation ( [ a2.8 ] ) has the form is an arbitrary function of . 
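the defining property of the stream function used in the 2d example above — that velocities derived from it satisfy the continuity equation identically — can be checked symbolically . in the sketch below the sign convention u = dψ/dy , v = −dψ/dx is an assumption ( the extraction does not show which convention eq . ( [ a2.7 ] ) uses ) ; with the opposite convention only the sign of the vorticity changes .

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)        # arbitrary stream function

u = sp.diff(psi, y)                   # assumed convention: u =  d(psi)/dy
v = -sp.diff(psi, x)                  #                     v = -d(psi)/dx

continuity = sp.simplify(sp.diff(u, x) + sp.diff(v, y))
vorticity = sp.simplify(sp.diff(v, x) - sp.diff(u, y))

print("du/dx + dv/dy =", continuity)          # 0 for any psi
print("omega = dv/dx - du/dy =", vorticity)   # -(psi_xx + psi_yy), i.e. minus the Laplacian of psi
```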
for the irrotational flow the vorticity ,and we obtain instead ( [ a2.9 ] ) one obtains the unique solution of ( [ a2.10 ] ) inside of a closed region of 2d space provided , that the value of the stream function is given on the boundary of this region .the differential structure of equations ( [ a2.9 ] ) and ( [ a2.10 ] ) is similar .one should expect , that giving the value of the stream function on the boundary , one obtains the unique solution of the equation ( [ a2.10 ] ) .but it is not so , because the indefinite function is not given , and it can not be determined from the boundary condition , because the nature of the function is another , than the nature of the boundary conditions .first , if the flow contains closed stream lines , which do not cross the boundary , one can not determine the values of on these stream lines from the boundary conditions .but for determination of the unique solution the values of on the closed stream lines must be given .second , the boundary conditions are given arbitrarily .the function can not be given arbitrarily .for those stream lines , which cross the boundary more than once , the values of on the different segments of the boundary are to be agreed .thus , the nonuniqueness of the solution , connected with the indefinite function has another nature , than the nonuniqueness , connected with the insufficiency of the boundary conditions .we use the variational principle for the derivation of the hydrodynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) , ( [ b1.3 ] ) .the action functional has the form = \dint\limits_{v_{x}}\left\ { \frac{\mathbf{j}^{2}}{2\rho } -\rho e\left ( \rho \right ) -p_{k}\left ( j^{k}-\rho _ { 0}\left ( \mathbf{\xi } \right ) \frac{\partial j}{\partial \xi _ { 0,k}}\right ) \right\ } d^{4}x , \label{a3.1}\]]where , are the lagrange multipliers , which introduce the designations for the 4-flux , the expression for the 4-flux ( [ a3.2 ] ) satisfies the first equation ( [ a2.1 ] ) identically , because the expression ( [ a3.2 ] ) may be reduced to the form of the second relation ( [ a2.1 ] ) by means of a change of variables besides according to the first identity ( [ a2.4 ] ) the relation ( [ a3.2 ] ) satisfies the lin constraint ( [ b1.3 ] ) .variation of the action ( [ a3.1 ] ) with respect to gives relations ( [ a3.2 ] ) .another dynamic equations have the form the third relation ( [ a2.4 ] ) , we obtain now using ( [ a2.5 ] ) , we obtain the first relation ( [ a2.4 ] ) , we obtain there are two ways of dealing with this equation : \1 . elimination of gsf , which leads to the euler equations .integration , which leads to appearance of arbitrary functions ._ the first way : elimination of gsf _ convoluting ( [ a3.8 ] ) with and using dynamic equations ( a3.2 ) , we obtain substituting and from relations ( [ a3.3 ] ) and ( [ a3.4 ] ) , we obtain the euler dynamic equations ( [ a1.1]) continuity equation ( [ a1.2 ] ) is a corollary of equations ( [ a3.2 ] ) and identity ( [ a2.1 ] ) . finally the lin constraints ( [ b1.3 ] ) are corollaries of the first identity ( [ a2.4 ] ) and dynamic equations ( a3.2 ) . _ the second way : integration of the equation for _ let us consider the equations ( [ a3.8 ] ) as linear differential equations for .the general solution of ( [ a3.8 ] ) has the form are arbitrary functions of , is a new variable instead of fictitious variable .let us differentiate ( [ a3.11 ] ) and substitute the obtained expressions in ( [ a3.8 ] ) . 
using the first identity ( [ a2.4 ] ) , we see , that the relations ( [ a3.12 ] ) satisfy the equations ( [ a3.8 ] ) identically . we may substitute ( [ a3.11 ] ) in the action ( [ a3.1 ] ) , or introduce ( [ a3.11 ] ) by means of the lagrange multipliers .( the result is the same ) .we obtain the new action functional = \dint\limits_{v_{x}}\left\ { \frac{\mathbf{j}^{2}}{2\rho } -\rho e\left ( \rho \right ) -j^{k}\left ( \partial _ { k}\varphi + g^{\alpha } \left ( \mathbf{\xi } \right ) \partial _ { k}\xi _ { \alpha } \right ) \right\ } d^{4}x , \label{a3.14}\]]which contains arbitrary integration functions .here integration functions are considered as a fixed functions of .the term omitted , because it does not contribute to dynamic equations .variation of ( [ a3.14 ] ) with respect to , and gives respectively variation of ( [ a3.14 ] ) with respect to gives is determined by the relation ( [ a3.19 ] ) if , then the lin constraints from ( [ a3.20 ] ) however , the matrix is antisymmetric and it follows from ( [ a3.20]) is an arbitrary quantity , and is the weight function from ( [ a3.2 ] ) . the obtained equation ( [ a3.24 ] ) contains the initial dynamic equation ( [ b1.3 ] ) as a special case . for irrotational flow , when , the equation ( [ a3.24 ] ) turns to ( [ b1.3 ] ) . in the action functional ( [ a3.1 ] ) the initial relation ( [ b1.3 ] ) is used as a side constraint .it is a reason , why the equation ( [ a3.24 ] ) is not obtained from the action functional ( [ a3.1 ] ) .note , that eliminating the variables and from dynamic equations ( [ a3.18 ] ) - ( [ a3.20 ] ) , we obtain the euler dynamic equations ( [ a1.1 ] ) .the vorticity and are obtained from ( [ a3.19 ] ) in the form let us form a difference between the time derivative of ( [ a3.19 ] ) and the gradient of ( [ a3.18 ] ) . eliminating from the obtained equation by means of equations ( [ a3.20 ] ) , one obtains ( [ b3.29 ] ) and ( [ b3.28 ] ) , the expression ( [ b3.30 ] ) reduces to in virtue of the identity equation ( [ b3.31 ] ) is equivalent to ( [ a1.1 ] ) .note , that the euler equations ( [ a1.1 ] ) are obtained at any form of the arbitrary function in the equations ( [ a3.24 ] ) , because the equations ( [ a3.24 ] ) are used in the form ( a3.20 ) , where the form of is unessential .solution of the euler system ( [ a1.1 ] ) , ( [ a1.2 ] ) in the form , does not depend on the form of the indefinite function . 
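the explicit form of the relations obtained by varying the action ( [ a3.14 ] ) is lost in this extraction . from the visible term −j^k ( ∂_k φ + g^α(ξ) ∂_k ξ_α ) in the integrand , variation with respect to the spatial components of the flux presumably yields a clebsch - type representation of the velocity ; the reconstruction below should therefore be read as a hedged guess consistent with that term , not as a verbatim copy of eqs . ( [ a3.18 ] ) – ( [ a3.19 ] ) .

```latex
% hedged reconstruction (not verbatim from the source) of the velocity and
% vorticity implied by the Clebsch-type action term
%   -j^k ( \partial_k \varphi + g^\alpha(\xi)\,\partial_k \xi_\alpha )
\begin{aligned}
\mathbf{v} &= \frac{\mathbf{j}}{\rho}
            = \nabla\varphi + g^{\alpha}(\boldsymbol{\xi})\,\nabla\xi_{\alpha},\\
\boldsymbol{\omega} &= \nabla\times\mathbf{v}
            = \nabla g^{\alpha}(\boldsymbol{\xi})\times\nabla\xi_{\alpha}.
\end{aligned}
```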
if , the dynamic equations ( a3.24 ) describe a violation of the lin constraints ( [ b1.3 ] ) .one obtains another labeling of the stream lines , than that one , which is described by the lin constraints ( [ b1.3 ] ) .if the flow is irrotational , and , the labeling does not depend on .let us consider two different labeling and of the the same fluid flow described by the variables , .the initial conditions are supposed to have the form according to ( [ a3.19 ] ) , ( [ a3.21]) to ( [ a3.24 ] ) the dynamic equations for labeling and have the form the velocity is the same in both equations and the function is defined by the relation the velocity is defined by relations ( [ a3.19 ] ) , it satisfies the euler equations and associates with the generalized stream function , whose evolution is described by the equations ( [ a3.24 ] ) in general , the evolution of the quantities and is different , although the coincide at .let follows from ( [ b3.40 ] ) and ( [ b3.41 ] ) that mismatch between and is determined by the relation system of ordinary differential equations , associated with the equation ( [ a3.33 ] ) , has the form solution of the system of ordinary equations at the initial conditions has the form thus , although solution of the cauchy problem for the euler system of hydrodynamic equation ( [ a1.1 ] ) , ( [ a1.2 ] ) is unique , the solution for the cauchy problem of the complete system of hydrodynamic equations ( [ a1.1 ] ) , ( [ a1.2 ] ) , ( [ a3.24 ] ) is not unique .the reason of this nonuniqueness is consideration of interfusion .this consideration is formal .one can not understand mechanism of the interfusion influence from this consideration .nevertheless this influence takes place , and it should be investigated more closely .it seems , that in the two - dimensional flow instead of determinant ( a3.23 ) we have the determinant does not vanish , in general .then the problem of nonuniqueness of thelabeling is removed and the solution of the cauchy problem for the complete hydrodynamic system becomes to be unique . in reality , we may control the solution only via initial conditions .we may give the two - dimensional initial conditions , i.e. in this case determinant the relations ( [ a3.24 ] ) take the form can not control indefinite quantity , which may depend on .the equation ( [ a4.6 ] ) generates nonunique solution of the cauchy problem of vortical flow for the complete hydrodynamic system .the flow with the two - dimensional initial conditions turns into three - dimensional vortical flow .solution of the cauchy problem for the vortical flow of inviscid barotropic fluid is not unique , if we solve seven dynamic equations of the complete hydrodynamic system , which includes description of the shape of stream lines and has seven dependent variables , , .nonuniqueness is connected with the fact , the initial conditions for variables , , do not control the intermixing effect .solution of the cauchy problem for the vortical flow of inviscid barotropic fluid is unique , if we solve only four dynamic equations of the euler system and ignore shape of stream lines . in this casedynamic equations are written for four dependent variables , . 
the intermixing effect , which generates nonunique solutions , is associated with the turbulence phenomenon in the following respects : ( 1 ) neither effect is controlled by the initial data for the variables , , of the hydrodynamic equations in the euler representation , ( 2 ) both effects occur in vortical flows and are absent in irrotational flows , ( 3 ) both effects are strong at vanishing viscosity . we suggest that the interfusion may be connected with turbulence phenomena , although we do not yet insist on this statement . if , indeed , the turbulent phenomena are connected with the interfusion and with the shape of stream lines , it becomes clear why numerous investigations of hydrodynamic equations describing only density and velocity , but not the shape of stream lines , have not led to progress : researchers have looked for turbulence in a place where it is not located . y. a. rylov , hydrodynamic equations for incompressible inviscid fluid in terms of generalized stream function , int . j. math . math . sci . , vol . 2004 , no . 11 , 21 february 2004 , pp . 541 - 570 . ( available at http://arxiv.org/abs/physics/0303065 ) . | it is shown that the euler system of hydrodynamic equations for inviscid barotropic fluid for density and velocity is not a complete system of dynamic equations for the inviscid barotropic fluid . it is only a closed subsystem of four dynamic equations . the complete system of dynamic equations consists of seven dynamic equations for seven dependent variables : density , velocity and labeling ( lagrangian coordinates , considered as dependent variables ) . solution of the cauchy problem for the euler subsystem is unique . solution of the cauchy problem for the complete hydrodynamic system , containing seven equations , is unique only for irrotational flows . for vortical flows the solution of the cauchy problem is not unique . the reason for the nonuniqueness is interfusion , which can not be taken into account properly in the framework of hydrodynamics . there are some arguments in favour of a connection between interfusion and turbulence . |
an application of quantum phenomena to securing optical network has received much attention . in this case , instead of mathematical encryption , a guarantee of security by a physical principle is expected .so far the quantum key distribution(qkd ) based on very weak optical signals has been developed and demonstrated in many institutions .however , these have inherent difficulty such as quantum implementations and very low bit rates compared to current data transmission rates . in order to cope with such a drawback , in 2000 ,a new quantum cipher was proposed [ 1 ] .it is a kind of stream cipher with randomization by quantum noise generated in measurement of coherent state from the conventional laser diode .the scheme is called yuen-2000 protocol(y-00 ) or scheme[2,3 ] which consists of large number of basis to transmit the information bit and pseudo random number generator(prng ) for the selection of the basis . a coherent state as the ciphertext which is transmitted from the optical transmitter(alice ) is determined by the input data and the running key from the output sequence of prng with a secret key .the legitimate receiver(bob ) has the same prng , so he can receive the correct ciphertext under the small error , and simultaneously demodulate the information bit .the attacker ( eve ) , who does not know the key , has to try to discriminate all possible coherent state signals . since the signal distance among coherent state signals are designed as very small , eve s receiver suffers serious error to get the ciphertext .such a difference of the accuracy of the ciphertext for bob and eve brings preferable security which can not be obtained in any conventional cipher .unfortunately , it is very difficult to clarify the quantitative security evaluation for this type of cipher , because all physical parameter for the cipher system is finite .so far , complexity theory approach [ 4 ] and information theoretic approach [ 5 ] have been tried , but still there is no rigorous theoretical treatment .recently , yuen has pointed out that it is possible to show the rigorous security analysis when the parameters are allowed to be asymptotical , and showed a sketch of the properties using a model of coherent pulse position modulation ( cppm ) [ 6 ] .this may open a new way for the quantum key distribution with coherent states of considerable energy and high speed . in this paper , we clarify a generation method of cppm quantum signal by using a theory of unitary operator and symplectic transformation , and show a security property and its numerical examples . in the sectionii , the back ground for the information theoretic security and the shannon limit are explained . in the section iii and iv, we describe a theory of cppm . in the sectionv and vi , we discuss on the security and implementation problem .in the conventional cipher , the ciphertext is determined by the information bit and running key . thisis called non random cipher .however , one can introduce more general cipher system so called random cipher by noise such that the ciphertext is defined as follows : where is noise .such a random cipher by noise may provide a new property in the security . in shannon theory for the symmetric key cipher, we have the following theorem .+ + * theorem 1*(shannon , 1949 [ 7 ] ) + the information theoretic security against ciphertext only attack on data has the limit this is called shannon limit for the symmetric key cipher . 
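the formula of theorem 1 is stripped in this extraction . in the standard statement of the shannon limit for a symmetric - key cipher with data x , eve's ciphertext y_e and secret key k , the bound presumably intended here reads as follows ( a reconstruction of the textbook result , to be read with that caveat ) :

```latex
% standard statement of the Shannon limit for a symmetric-key cipher:
% Eve's equivocation about the data X given her ciphertext Y_E is bounded
% by the entropy of the secret key K
H(X \mid Y_E) \;\le\; H(K)
```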
to be beyondthe shannon limit is essential for fresh key generation by communication or information theoretic security against known plaintext attack in the symmetric key cipher . in the context of random cipher, one can exceed this limit .it is known that the necessary condition for exceeding the limit is [ 6,8,9 ] .that is , the ciphertexts for bob and eve are different .still the necessary and sufficient condition is not clear , but if the following relation holds , one can say that the cipher exceeds the shannon limit this means that eve can not pin down the information bit even if she gets a secret key after her measurement of the ciphertext while bob can do it . in the following sections, we will clarify that cppm has indeed such a property .the coherent pulse position modulation ( cppm ) cryptosystem has been proposed as a quantum cipher permitting asymptotical system parameters [ 1,6 ] .alice encodes her classical messages by the block encoding where -bit block ( ) corresponds to the pulse position modulation ( ppm ) quantum signals with slots , in addition , alice apply the unitary operator to ppm quantum signals , where the unitary operator is randomly chosen via running key generated by using prng on a secret key .thus , alice gets cppm quantum signal states , which are sent to bob .let us assume an ideal channel .since the secret key , prng and the map are shared by alice and bob , bob can apply the unitary operator to the received cppm quantum signal and obtain the ppm quantum signal .bob decodes the message by the direct detection for , which is known to be a suboptimal detection for ppm signals [ 10 ] .then bob s block error rate is given by here holds for enough large signal energy .in contrast , eve does not know the secret key and hence she must detect cppm quantum signals directly .this makes eve s error probaility worse than bob s one .in this section , we discuss a method for the construction of cppm quantum signals from ppm ones by the unitary operator associated with a symplectic transformation . 
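before turning to that construction , the ppm layer itself and bob's direct detection described above can be sketched numerically . the sketch below assumes ideal photon counting on coherent - state slots ( empty slots register no counts , the signal slot registers a poisson number of counts with mean s = |α|² , and a block with no counts anywhere is decoded by a uniform guess ) ; these modelling choices , the closed - form comparison and all parameter values are assumptions for illustration , not taken from the source .

```python
import numpy as np

rng = np.random.default_rng(0)

def ppm_direct_detection_error(n_bits, mean_photons, trials=200_000):
    """Monte Carlo block-error rate of M = 2**n_bits slot PPM with ideal
    photon counting: an error can only occur if the signal slot gives zero
    counts, in which case the receiver guesses uniformly among all M slots."""
    m_slots = 2 ** n_bits
    counts = rng.poisson(mean_photons, size=trials)   # counts in the signal slot
    dark_blocks = counts == 0                         # nothing detected anywhere
    # wrong guess with probability (M-1)/M when forced to guess
    errors = dark_blocks & (rng.random(trials) < (m_slots - 1) / m_slots)
    return errors.mean()

for n in (4, 8, 12):
    est = ppm_direct_detection_error(n, mean_photons=5.0)
    approx = (1 - 2.0 ** (-n)) * np.exp(-5.0)   # what this toy model predicts
    print(f"n = {n:2d}   simulated = {est:.5f}   (1 - 1/M) e^-S = {approx:.5f}")
```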
the classical probability distribution is characterized by the _characteristic function _\pi(dx) ] .we extend this argument to define the quantum gaussian state .we consider self adjoint operators on a hilbert space , satisfying heisenberg ccr : =i\delta _ { jk}\hbar i,\;\;[q_{j},q_{k}]=0,\;\;[p_{j},p_{k}]=0,\ ] ] where for and for .let us introduce unitary operators for a real column -vector and ^{t}.\ ] ] the operators satisfy the weyl - segal ccr (z+z^{\prime } ) , \label{weyl}\ ] ] where is the canonical symplectic form with .\ ] ] the weyl - segal ccr is the rigorous counterpart of the heisenberg ccr , involving only bounded operators .now we can define the quantum characteristic function as the transformation satisfying is called a _ symplectic transformation_.we denote the totality of symplectic transformations by .( [ s_cond ] ) can be rewritten as the symplectic transformation preserves weyl - seagl ccr ( [ weyl ] ) and hence it follows from stone - von neumann theorem that there exists the unitary operator satisfying [ 11 ] we call such derived operator the _ unitary operator associated with symplectic transformation _ .the density operator is called gaussian if its quantum characteristic function has the form .\ ] ] in an -mode gaussian state , is a -dimensional mean vector and is a -corralation matrix .the mean can be arbitrary vector ; the necessary and sufficient condition on the correlation matrix is given by the coherent state ( ) is the quantum gaussian state with the mean and the correlation matrix ,\ ] ] and the -ary coherent state ( ) is the quantum gaussian state with the mean and the correlation matrix .\ ] ] we study a way to generate cppm quantum signals by the unitary operator associated with a symplectic transformation .any unitary operator composed of beam splitters and phase shifts can be described by a symplectic transformation .first , let us consider the state for a general -ary coherent state . by using eq .( [ stone von neumann ] ) , the quantum characteristic function of the state is given as , \end{split}\ ] ] where and is the mean vector and the correlation matrix given by eqs .( [ c ave ] ) and ( [ c cor ] ) respectively .( [ cppm ch ] ) shows that the state is the quantum gaussian state with the mean and the correlation matrix .our interest is devoted to the case where the state is an -ary coherent state .then the symplectic transformation should satisfy the condition , that is , where is the identity matrix and is the set of real matrices . denoting the totality of -unitary matrices by , we have the relation here the matrix with real numbers and rotation matrices , corresponds to the matrix we can find that the unitary operator associated with transforms the state to the state , where and are related in the equation in particular , from the ppm quantum signals , , the cppm ones are generated as in other words , -ary coherent states are the cppm quantum signals generated by applying the unitary operator associated with to the ppm quantum signals if and only if the matrix with -elements is unitary .we give a foundation for discussing security of cppm cryptosystem . without knowingthe secret key eve can not apply the appropriate unitary operator to cppm quantum signals , and hence she has to receive directly cppm quantum signals . 
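before analysing eve's measurement , the statement above — that applying the unitary operator associated with the symplectic map to the ppm signals yields m coherent states exactly when the corresponding m×m amplitude matrix is unitary — can be illustrated at the level of the coherent amplitudes . in the sketch below the mixing matrix is taken to be the discrete fourier transform matrix purely as an example ; any unitary would do , and nothing here is specific to the source's choice .

```python
import numpy as np

n_bits = 3
m = 2 ** n_bits                    # number of slots / modes
alpha = 2.0                        # coherent amplitude of the single PPM pulse

# columns of `ppm`: amplitude vector of message j (alpha in slot j, 0 elsewhere)
ppm = alpha * np.eye(m, dtype=complex)

# example mode-mixing unitary: normalized DFT matrix (any unitary would work)
k = np.arange(m)
U = np.exp(2j * np.pi * np.outer(k, k) / m) / np.sqrt(m)
assert np.allclose(U.conj().T @ U, np.eye(m))

cppm = U @ ppm                     # column j: amplitudes of the j-th CPPM signal

# each CPPM signal carries the same total energy as the PPM pulse ...
print(np.allclose(np.sum(np.abs(cppm) ** 2, axis=0), alpha ** 2))    # True
# ... but spreads it over all m modes, so the per-slot amplitudes are small
print(np.round(np.abs(cppm[:, 0]), 3))
# Bob, who knows U, undoes the mixing and recovers the plain PPM signals
print(np.allclose(U.conj().T @ cppm, ppm))                            # True
```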
since the quantum optimum receiver is unknown for such signals ,we apply the heterodyne receiver , which is suboptimum and appropriate to discuss the performance of error .this scheme is called heterodyne attack .our main target in this subsection is to study the heterodyne attack on , where is the the unitary operator associated with , and is a general -ary coherent state .heterodyne detection is characterized by a family of operators with a parameter , the outcomes of the heterodyne detection for a coherent state appears with the probability density function which represents the normal distribution with the correlation matrix .the outcomes of the indivisual heterodyne detection for obeys the probability density function , with . here , putting and taking account of eq ( [ rel alpha ] ) , we get note that represents the conjugate transpose and corresponds to the unitary operator . substituting this equation to eq .( [ pdf ] ) , we obtain where is the probability density function with which the outcomes of heterodyne detection for the state appears . eq .( [ pu ] ) shows that the vectors given by applying the unitary matrix to the outcomes obeys the probability density function .it is difficult to evaluate the error performance for cppm quantum signals by heterodyne attack , because the randomness of prng has to be taken into account .yuen showed the lower bound of the error performance by using heterodyne detection for the original ppm quantum signals [ 6 ] .but it may be not tight one .here we try another approach .we allow eve to get the secret key after her measurement by heterodyne for cppm quantum signals and hence to know the unitary operator and the corresponding unitary matrix .then , from the discussions in the subsection [ 7_1 ] , eve can apply the unitary matrix to obtain the vector , which obeys to the probability density function .this fact enables us to apply the decoding process for ppm signals .that is , eve may use maximum - likelihood decoding for , whose rule is to pick the for which is largest , and her error probability is given as follows [ 12 ] : (y)dy,\ ] ] where , and ^{n-1},\\ \phi(y)&=\frac{1}{\sqrt{2\pi}}\int _ { -\infty}^{y } \exp(-v^2/2)dv .\end{split}\ ] ] we will compute the lower bounds of eve s error probability to evaluate its convergence speed .the error probability is lower bounded as : (y)dy\\ \geq & q_n(z)\phi(z-\sqrt{2s } ) , \end{split}\ ] ] where the parameter can take any real number value .putting and in ( [ kagen ] ) , we obtain let us consider the case of . then bob s error probability is less than , while converges to .figure [ fig1 ] shows convergence behavior of lower bound for . in this figure ,the lower bounds ( [ kagen ] ) for , are plotted with respect to . since the parameter in the lower bound ( [ kagen ] ) can take arbitrary real number , values of error probability exist the region above the graphs in figure [ fig1 ] .note that the above values of are chosen as they give better lower bounds for . from figure , it is found that the convergence speed of lower bound for is very slow ; ( ) is needed to achieve the error probability 0.9 .thus , in the cppm scheme , eve can not pin down the information bit even if she gets the true secret key and prng after her measurement , and consequently it has been proved that cppm satisfies eq(3 ) .after her measurement for cppm quantum signals ]according to the above analysis , one needs large number of when the signal energy is large . 
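the convergence behaviour discussed above can be reproduced from the bound fragment visible in the text , p ≥ q_n(z) φ( z − √(2s) ) . the sketch below assumes q_n(z) = 1 − φ(z)^{n−1} , which is the natural completion of the truncated definition ( the " ^{n-1 } " fragment ) ; since the original definition is not fully recoverable , this identification is an assumption , as is the chosen signal energy .

```python
import numpy as np
from scipy.stats import norm

def eve_error_lower_bound(n_slots, signal_energy):
    """max over z of (1 - Phi(z)**(n_slots - 1)) * Phi(z - sqrt(2*signal_energy)),
    assuming Q_N(z) = 1 - Phi(z)**(N - 1)  (see the caveat above)."""
    z = np.linspace(0.0, np.sqrt(2.0 * signal_energy) + 8.0, 4001)
    q = 1.0 - norm.cdf(z) ** (n_slots - 1.0)
    return np.max(q * norm.cdf(z - np.sqrt(2.0 * signal_energy)))

S = 20.0                                   # mean photon number per signal (assumed)
for log2_n in (10, 20, 40, 60):
    bound = eve_error_lower_bound(2.0 ** log2_n, S)
    print(f"number of slots N = 2^{log2_n:2d}:  Eve's error probability >= {bound:.3f}")
```

with these assumed numbers the bound only approaches values near 0.9 for astronomically many slots , in line with the slow convergence described above .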
herewe give a requirement of channel bandwidth for the secure communication by cppm .let us assume that the signal band width is when there is no coding . in our scheme , first one has to transform the input information bit sequence to ppm signal with slots .second , such ppm signals are converted into cppm with the same number of slots .if one wants to transmit such cppm signal with no delay , the required bandwidth is thus , the bandwidth exponentially increases with respect to .since one needs the large to ensure the security , one needs a huge bandwidth . on the other hand, we need to realize the unitary transformation to generate cppm quantum signals from ppm ones .such transformations may be implemented by combination of beam splitters and phase shifts [ 6 ] , but to generate the cppm quantum signals with uniform distance for all signal , we need also large number of elements .thus we need more detailed consideration for the practical use . in future work, we will specify the realization method .a crucial element of the coherent pulse position modulation cryptosystem is a generation of cppm quantum signals from ppm ones by a unitary operator . in this paper , we have given a proper theory for a unitary operator and a symplectic transformation basing on the quantum characteristic function .furthermore , by using the above results , we have shown the lower bound of error probability in the case where eve gets the secret key after her measurement .this result clearly guarantees the fresh key generation supported by the secret key encryption system .we are grateful to dr .usuda and the research staff in tamagawa university for the discussion on this subjects . | a quantum cipher supported by a secret key so called keyed communication in quantum noise ( kcq ) is very attractive in implementing high speed key generation and secure data transmission . however , yuen-2000 as a basic model of kcq has a difficulty to ensure the quantitative security evaluation because all physical parameter for the cipher system is finite . recently , an outline of a generalized scheme so called coherent pulse position modulation(cppm ) to show the rigorous security analysis is given , where the parameters are allowed to be asymptotical . this may open a new way for the quantum key distribution with coherent states of considerable energy and high speed . in this paper , we clarify a generation method of cppm quantum signal by using a theory of unitary operator and symplectic transformation , and show an asymptotic property of security and its numerical examples . |
we consider an investment model in continuous time where the decision maker has the option to invest in a given project yielding uncertain returns .the investors objective is to choose the entry time such that a particular objective functional ( often of either discounted or ergodic type ) is maximised . at the time of the entry , a known fixed lump sum of entry costs must be paid .once the entry is made , the investment incurs a known constant instantaneous running cost . in the classical perpetual version of this problem , see , e.g. , , it is assumed that once the entry is made , the return from the investment will accrue from the investment date to infinity .a variant of this problem includes the possibility of voluntary exits , see , e.g. , .the problem becomes then of sequential nature , where a sequence of entry and exit times is determined such that the objective functional is maximised .the purpose of this paper is to study a class of multiple entry problems where the exits are not voluntary but forced .this type of problem was first studied in and can be informally described as follows .subject to return uncertainty modelled by a time homogeneous diffusion process , and known entry and running costs and , the investor chooses the time of entry . however , the investment is subject to exogenous interventions , which will terminate the flow of returns .these interventions occur uniformly over time and are modelled by the jumps of a poisson process independent of .once the investment is terminated , the decision maker can make a new entry .the objective is then to maximise the expected present value of the total revenue from the investment .as was discussed already in , this setting lends itself to a possible application to a so - called liquidation risk . consider the case where the investor is funding the investment with borrowed money . to increase liquidity on the lenders side , it is possible that the lender is given ( or requires ) the right to seize the asset and put it to alternative use , that is , liquidate .thus the intervention would be forced liquidation from the lenders side .after a forced liquidation , the investor can find a new lender to make a new entry .this paper makes two contributions .first , we allow the underlying stochastic process to follow a general one - dimensional diffusion process with natural boundaries . in comparison to , where the case of geometric brownian motion is considered ,this is a substantial generalisation which has not , to our best knowledge , been studied earlier in the literature .furthermore , we introduce a so - called catastrophe risk in the model as follows : we attach iid bernoulli trials to the jump times of and if the trial fails , no further re - entries are allowed . in the liquidation risk application described above , the catastrophe event describes a fundamental change in the economical environment of the investment opportunity such that all lenders lose interest in financing a new entry .such a change could be , for instance , due to the emergence of a new technology .this is again a substantial generalisation and it effectively means that the number of forced exits up to the catastrophe is geometrically distributed . somewhat remarkably we find that the optimal investment threshold is independent of the success probability of the bernoulli trials . the paper is organised as follows .in section 2 we set up the probabilistic framework . 
in section 3the entry problem is introduced .the solution is derived in section 4 .the paper is concluded in section 5 with illustrative examples .we set up the probabilistic framework for the entry problem , for details , see .let , where , be a complete filtered probability space satisfying the usual conditions .assume that the filtration is rich enough to carry the underlying state process and a poisson process .we assume that the process has jump intensity and that it is independent of .the process is a linear diffusion evolving on with the infinitesimal generator and initial state . in what follows , we assume that the functions and are continuous and that the process does not die inside the state space .the densities of the speed measure and the scale function of are defined , respectively , as and for all , where .we denote as , respectively , and the increasing and decreasing solution of the ode , where , defined on the domain of the characteristic operator of . by posing appropriate boundary conditions depending on the boundary classification of , the functions and are defined uniquely up to a multiplicative constant and can be identified as the minimal -excessive functions . to impose the boundary conditions , we assume , using the terminology of , that the boundaries and are natural .this means that finally , we denote as the probability measure conditioned on the initial state and as the expectation with respect to . for , we denote as the class of real valued functions on satisfying the condition <\infty ] satisfying the constraint for all , where .let be a measurable function , and two non - negative constants , and the constant rate of discounting .motivated by the discussion in the introduction , consider the following multiple entry problems : \end{aligned}\ ] ] and .\end{aligned}\ ] ] where and independent of . here, the random variable is realized as follows : we attach an independent bernoulli trial with success probability to each jump time of .when the first failure occurs , the re - entry possibility expires permanently .this problem was studied in in the case where the diffusion is a geometric brownian motion and no catastrophes occur , i.e. , .we study the problems and under the following assumptions .[ sa ] let the function be , continuous , non - decreasing , non - constant and satisfy . furthermore , let at least one of the constants and be strictly positive .it is helpful to write the value functions and as infinite sums instead of sums of random length . to this end, we use the well - known thinning procedure of poisson processes , see , e.g. , . as we have labeled the jump times of with outcomes of independent bernoulli trials , we can split the process into two independent poisson processes and corresponding to outcomes and , respectively .moreover , the intensities of and are and , respectively .now , denote the jump times of as .these jump times correspond to events where further entries are still possible .on the other hand , the first jump of will terminate the whole investment opportunity .denote this jump time as denote as an arbitrary sequence of stopping times taking values in ] as . for each , we write = \psi_{r+(1-p)\lambda}(x)\hat{\mathbf{e}}_x \left [ \frac{g_i(x_{\tau_{j}})}{\psi_{r+(1-p)\lambda}(x_{\tau_{j}})}\right],\end{aligned}\ ] ] where is the expectation with respect to the probability associated with doob s -transform of , see , e.g. 
, .we find using the representation that as .since the -transform of is the process conditioned to exit the state space via the upper boundary , the claim follows .we prove first case ( 2 ) .let be an arbitrary sequence of stopping times such that where and the inter - arrival times .by using the supermartingale property of , the fact that for all , then lemma [ bellmanlemma ] , and the supermartingale property again , we obtain \nonumber \\ & \geq \mathbf{e}_x\left [ e^{-(r+(1-p)\lambda)\tau_1}g_a(x_{\tau_1 } )- e^{-(r+(1-p)\lambda)\tau_1}k \right ] \\ & = \mathbf{e}_x\left [ \int_{\tau_1}^{\sigma_1 } e^{-(r+(1-p)\lambda)t}(h(x_t)-c)dt + e^{-(r+(1-p)\lambda)\sigma_1 } g_i(x_{\sigma_1 } ) - e^{-(r+(1-p)\lambda)\tau_1}k \right ] \nonumber \\ & \geq \mathbf{e}_x\left [ \int_{\tau_1}^{\sigma_1 } e^{-(r+(1-p)\lambda)t}(h(x_t)-c)dt - e^{-(r+(1-p)\lambda)\tau_1}k \right ] + \mathbf{e}_x \left[e^{-(r+(1-p)\lambda)\tau_2 } g_i(x_{\tau_2})\right ] .\nonumber\end{aligned}\ ] ] by repeating this argument , we find that \\ & + \mathbf{e}_x \left[e^{-(r+(1-p)\lambda)\tau_{j+1 } } g_i(x_{\tau_{j+1}})\right ] \\ & \geq \mathbf{e}_x\left [ \sum_{j=1}^n \int_{\tau_j}^{\sigma_j } e^{-(r+(1-p)\lambda)t}(h(x_t)-c)dt - e^{-(r+(1-p)\lambda)\tau_j}k \right ] , \end{aligned}\ ] ] for all .let and apply dominated convergence theorem .then , by taking supremum over all , we obtain for all . to establish the opposite inequality ,let , where is given by .we find using remark [ suffiremark ] that in this case , all inequalities in become equalities .therefore \\ & + \mathbf{e}_x \left[e^{-(r+(1-p)\lambda)\tau^*_{j+1 } } g_i(x_{\tau^*_{j+1}})\right],\end{aligned}\ ] ] for all .clearly .this implies that almost surely .using lemma [ limitlemma ] , we find that \rightarrow 0,\ ] ] as .letting , we obtain by dominated convergence , \end{aligned}\ ] ] which implies that for all .thus and the sequence yields the optimal value .next , we consider case ( 1 ). it is sufficient to show that for every , we have \leq 0,\ ] ] for all . by the strong markov property of and the memoryless property of exponential distribution , we only need to establish that = ( r_{r+\lambda}\bar{h})(x)\leq k.\ ] ] by the monotonicity of , we find that the proof is now complete . under assumptions [ sa ] , it is possible to prove , similarly to theorem [ mainthrm ] , that the candidate and that the optimal entry threshold is .the properties of the value functions and the optimal entry threshold with respect to the parameter were studied in detail in in the case of gbm dynamics . for a general diffusion process ,a similar analysis is a formidable task and is left for future research .we illustrate in this section some of our results using explicit examples .assume that the process follows a geometric brownian motion , that is , a diffusion process with the infinitesimal generator we assume that .in this case , the process almost surely as .it is easy to check that for an arbitrary , the functions and read as we verify readily that the densities and read as and .moreover , the wronskian determinant . 
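the explicit expressions for the minimal excessive functions are stripped from this extraction ; for a geometric brownian motion with generator ( a f )( x ) = ½σ²x²f''(x) + μx f'(x) they are the standard power functions x^{β±} , with β± the roots of ½σ²β(β−1) + μβ − r = 0 ( with r replaced by the effective rate used in the text where appropriate ) . the sketch below computes these roots and checks the ode numerically ; the parameter values are arbitrary and only for illustration .

```python
import numpy as np

def gbm_power_exponents(mu, sigma, r):
    """Roots beta_- < 0 < 1 < beta_+ (for r > mu > 0) of
    0.5*sigma**2*b*(b - 1) + mu*b - r = 0; then psi(x) = x**beta_plus
    (increasing) and phi(x) = x**beta_minus (decreasing) solve
    (A f)(x) = r f(x) for the GBM generator."""
    a = 0.5 * sigma ** 2
    b = mu - 0.5 * sigma ** 2
    disc = np.sqrt(b ** 2 + 4.0 * a * r)
    return (-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)

mu, sigma, r = 0.03, 0.25, 0.06              # illustrative parameters, r > mu
beta_minus, beta_plus = gbm_power_exponents(mu, sigma, r)
print("beta_- =", round(beta_minus, 4), "  beta_+ =", round(beta_plus, 4))

# finite-difference check that psi(x) = x**beta_plus satisfies (A psi)(x) = r psi(x)
x, h = 1.7, 1e-4
f = lambda y: y ** beta_plus
second = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2
first = (f(x + h) - f(x - h)) / (2.0 * h)
print(np.isclose(0.5 * sigma ** 2 * x ** 2 * second + mu * x * first, r * f(x)))
```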
using this information and the formula , we find that the optimal entry threshold is characterized by which can be further simplified to this is the expression ( 4.19 ) in .as we have observed already in the general case , the optimal entry threshold is independent of the parameter .we illustrate the effect of the parameter on the value .let .straightforward integration yields since for all $ ] , the term becomes smaller as approaches zero .similarly , we observe that the integral term becomes smaller as approaches zero .finally , since the wronskian increases as approaches zero , we conclude that as the probability of success becomes smaller , the value of the idle problem decreases on .let .straightforward integration and an application of yields .\end{aligned}\ ] ] we observe by standard differentiation that in the expression above , both integrands decrease on their respective intervals as approaches zero . summarizing , we conclude that the value decreases on as decreases .this observation is in line with the general result and illustrates that increased catastrophe risk decreases value . to conclude, we graphically illustrate the value function for various values of .let .it is easy to verify that the optimal entry threshold ^ 2.\ ] ] in figure [ fig1 ] we illustrate the curves and and around the optimal entry threshold under the parameter configuration , , , , and .the figure suggests that the curves tangent at .this is in line with the smoothness requirement of . in figure [ fig2 ]we illustrate the effect of parameter on the value under the parameter configuration , , , and .the values of are , , , and and the curves are colored such that the hue becomes lighter as the probability decreases .the figure shows that decreasing decreases value , this is in line with our general result . as a generalization of the geometric brownian motion ,we consider the diffusion with infinitesimal generator with , , and positive .this process exhibits mean reversion and has been applied successfully in investment theory , see .we point out that when , this process reduces to a geometric brownian motion . a straightforward computation yields the functions and : and for all .furthermore , it is known from the literature that for an arbitrary , the functions and reads as where and are , respectively , the confluent hypergeometric functions of first and second type , see . due to the complicated nature of these functions , we resort to numerical solution of the optimal entry threshold and the value function . in figure [ fig3 ]we illustrate the curves and and around the optimal entry threshold under the parameter configuration , , , , , and .the figure suggests that the curves tangent at . in figure [ fig4 ]we illustrate the effect of parameter on the value under the parameter configuration , , , , and .the values of are , , , and and the curves are colored such that the hue becomes lighter as the probability decreases .the figure shows that decreasing decreases value , this is in line with our general result . | we study an optimal investment problem with multiple entries and forced exits . a closed form solution of the optimisation problem is presented for general underlying diffusion dynamics and a general running payoff function in the case when forced exits occur on the jump times of a poisson process . furthermore , we allow the investment opportunity to be subject to the risk of a catastrophe that can occur at the jumps of the poisson process . 
more precisely , we attach iid bernoulli trials to the jump times and if the trial fails , no further re - entries are allowed . we show in the general case that the optimal investment threshold is independent of the success probability of the bernoulli trials . the results are illustrated with explicit examples . |
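the thinning construction used in the solution above — splitting the intervention process by iid bernoulli( p ) marks into independent poisson processes of rates pλ and (1−p)λ , so that the number of allowed re - entries before the catastrophe is geometric — can be checked by simulation ; all parameter values below are arbitrary illustrations .

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, horizon, paths = 2.0, 0.7, 50.0, 20_000

survive_counts = []                 # number of 'success' jumps before the first failure
rate_success, rate_failure = [], []
for _ in range(paths):
    n_jumps = rng.poisson(lam * horizon)
    outcomes = rng.random(n_jumps) < p            # iid Bernoulli(p) marks on the jumps
    rate_success.append(outcomes.sum() / horizon)
    rate_failure.append((~outcomes).sum() / horizon)
    failures = np.flatnonzero(~outcomes)
    survive_counts.append(failures[0] if failures.size else n_jumps)

print("rate of 'allowed re-entry' jumps :", np.mean(rate_success), " (expect", p * lam, ")")
print("rate of 'catastrophe' jumps      :", np.mean(rate_failure), " (expect", (1 - p) * lam, ")")
print("mean # re-entries before failure :", np.mean(survive_counts),
      " (geometric mean p/(1-p) =", p / (1 - p), ")")
```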
in high energy physics ( hep ) experiments , people usually need to select some events with specific interest , so called signal events , out of numerous background events for study . in order to increase the ratio of signal to background, one needs to suppress background events while keeping high signal efficiency . to this end, some advanced techniques , such as adaboost , -boost , -logitboost , -hingeboost , random forests etc . , from statistics and computer sciences were introduced for signal and background event separation in the miniboone experiment at fermilab .the miniboone experiment is designed to confirm or refute the evidence for oscillations at found by the lsnd experiment .it is a crucial experiment which will imply new physics beyond the standard model if the lsnd signal is confirmed .these techniques are tuned with one sample of monte carlo ( mc ) events , the training sample , and then tested with an independent mc sample , the testing sample .initial comparisons of these techniques with artificial neural networks ( ann ) using the miniboone mc samples were described previously .this work indicated that the method of boosted decision trees is superior to the anns for particle identification ( pid ) using the miniboone mc samples .further studies show that the boosted decision tree method has not only better event separation , but is also more stable and robust than anns when using mc samples with varying input parameters .the boosting algorithm is one of the most powerful learning techniques introduced in the past decade .the motivation for the boosting algorithm is to design a procedure that combines many `` weak '' classifiers to achieve a final powerful classifier . in the present work numerous trials are made to tunethe boosted decision trees , and comparisons are made for various algorithms . for a large number of discriminant variables ,several techniques are described to select a set of powerful input variables in order to obtain optimal event separation using boosted decision trees .furthermore , post - fitting of weights for the trained boosting trees is also investigated to attempt further possible improvement .this paper is focussed on the boosting tuning .all results appearing in this paper are relative numbers .they do not represent the miniboone pid performance ; that performance is continually improving with further algorithm and pid study .the description of the miniboone reconstruction packages , the reconstructed variables , the overall and absolute performance of the boosting pid , the validation of the input variables and the boosting pid variables by comparing various mc and real data samples will be described in future articles .boosting algorithms can be applied to any classifier , here they are applied to decision trees .a schematic of a simple decision tree is shown in figure 1 , s means signal , b means background , terminal nodes called leaves are shown in boxes .the key issue is to define a criterion that describes the goodness of separation between signal and background in the tree split .assume the events are weighted with each event having weight .define the purity of the sample in a node by where is the sum over signal events and is the sum over background events . note that is 0 if the sample is pure signal or pure background . 
for a given node let where is the number of events on that node .the criterion chosen is to minimize to determine the increase in quality when a node is split into two nodes , one maximizes at the end , if a leaf has purity greater than 1/2 ( or whatever is set ) , then it is called a signal leaf , otherwise , a background leaf .events are classified signal ( have score of 1 ) if they land on a signal leaf and background ( have score of -1 ) if they land on a background leaf . the resulting tree is a _decision tree_. decision trees have been available for some time .they are known to be powerful but unstable , i.e. , a small change in the training sample can produce a large change in the tree and the results . combining many decision trees to make a `` majority vote '' , as in the random forests method , can improve the stability somewhat .however , as will be discussed in section 6 , the performance of the random forests method is significantly worse than the performance of the boosted decision tree method in which the weights of misclassified events are boosted for succeeding trees .if there are total events in the sample , the weight of each event is initially taken as .suppose that there are trees and is the index of an individual tree .let * the set of pid variables for the event .* if the event is a signal event and if the event is a background event . * the weight of the event . * if the set of variables for the event lands that event on a signal leaf and if the set of variables for that event lands it on a background leaf .* if and 0 if .there are several commonly used algorithms for boosting the weights of the misclassified events in the training sample .the boosting performance is quite different using various ways to update the event weights .the first boosting method is called `` adaboost'' or sometimes discrete adaboost .define for the tree : calculate : is the value used in the standard adaboost method . change the weight of each event , . renormalize the weights . score for a given event is which is just the weighted sum of the scores of the individual trees .a second boosting method is called `` -boost '' , or sometimes `` shrinkage '' . after the tree ,change the weight of each event , . where is a constant of the order of 0.01 .renormalize the weights . score for a given event is which is the renormalized , but unweighted , sum of the scores over individual trees .a third boosting method is called `` -logitboost '' .this method is quite similar to -boost , but the weights are updated according to : where for the m tree iteration .a fourth boosting method is called `` -hingeboost '' . againthis method is quite similar to -boost , but here the weights are updated according to : where for the m tree iterations .a fifth boosting method is called `` logitboost'' .let for signal events and for background events .initial probability estimates are set to for event , where is the set of pid variables .let : where is the weight of event .let be the weighted average of over some set of events . instead of the gini criterion , the splitting variable and point to divide the events at a node into two nodes and determined to minimize the output for tree is for the node onto which event falls .the total output score is : the probability is updated according to : a sixth boosting method is called `` gentle adaboost'' .it uses same criterion as described for the logitboost . 
here is same as .the weights are updated according to : for signal events and for background events , where is the weighted purity of the leaf on which event falls .a seventh boosting method is called `` real adaboost'' .it is similar to the discrete version of adaboost described in section 3.1 , but the weights and event scores are calculated in different ways .the event score for event in tree is given by : where is the weighted purity of the leaf on which event falls .the event weights are updated according to : and then renormalized so that the total weight is one .the total output score including all of the trees is given by mc samples from the february 2004 baseline mc were used to tune some parameters of the boosted decision trees .there are 88233 intrinsic signal events and 162657 background events .20000 signal events and 30000 background events were selected randomly for the training sample and the rest of the events were the test sample .the number of input variables for boosting training is 52 .the relative ratio is defined as the background efficiency divided by the corresponding signal efficiency and rescaled by a constant value .the left plot of figure 2 shows the relative ratio versus the signal efficiency for adaboost with 45 leaves per decision tree and various values for 1000 tree iterations .the boosting performances slightly depend on the values .adaboost with works slightly better in the high signal efficiency region ( eff 65% ) but worse in the low signal efficiency region ( eff 60% ) than adaboost with smaller values , 0.8 , 0.5 or 0.3 . to balance the overall performance , is selected to replace the standard value 1 for the adaboost training .the right plot of figure 2 shows the relative ratio versus the signal efficiency for adaboost with and 1000 tree iterations for various decision tree sizes ranging from 8 leaves to 100 leaves .adaboost with a large tree size worked significantly better than adaboost with a small tree size , 8 leaves ; the latter number has been recommended in some statistics literature . typically , it takes more tree iterations for the smaller tree size to reach optimal performance . for this application ,even with more tree iterations ( 10000 trees ) , results from boosting with small tree size ( 8 leaves ) are still significantly worse ( %-20% ) than results obtained with large tree size ( 45 leaves ) . here , 45 leaves per decision tree is selected ( this number is quite close to the number of input variables , 52 , for the boosting training . ) how many decision trees are sufficient ?it depends on the mc samples for boosting training and testing . for the given set of boosting parameters selected above, we ran boosting with 1000 tree iterations .the left plot of figure 3 shows the relative ratio versus the signal efficiency for adaboost with tree iterations of 100 , 200 , 500 , 800 and 1000 , respectively . the boosting performance becomes better with more tree iterations. the right plot of figure 3 shows the relative ratio versus the number of decision trees for signal efficiencies of 50% , 60% and 70% which cover the regions of greatest interest for the miniboone experiment .typically , the boosting performance for low signal efficiencies converges after few hundred tree iterations and is then stable . for high signal efficiency , boosting performance continues to improve as the number of decision trees is increased .for these particular mc samples , the boosting performance is close to optimal after 1000 tree iterations . 
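to make the weight updates of the discrete adaboost and -boost algorithms concrete , a minimal python sketch is given below . it is an illustration only , not the miniboone code : it assumes the standard discrete adaboost form ( weighted error err_m , tree weight alpha_m = beta * ln ( ( 1 - err_m ) / err_m ) , misclassified weights multiplied by exp ( alpha_m ) ) and the -boost update ( misclassified weights multiplied by exp ( 2 * epsilon ) ) , and it borrows scikit - learn decision trees as stand - in weak learners with the default gini splitting criterion . the default parameter values follow the ones quoted above ( beta = 0.5 , epsilon = 0.01 , 45 leaves per tree ) .

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_boosted_trees(X, y, n_trees=1000, n_leaves=45,
                        method="adaboost", beta=0.5, eps=0.01):
    # y must be +1 for signal events and -1 for background events.
    n = len(y)
    w = np.full(n, 1.0 / n)                      # initial event weights 1/N
    trees, alphas = [], []
    for m in range(n_trees):
        tree = DecisionTreeClassifier(max_leaf_nodes=n_leaves)   # gini splits
        tree.fit(X, y, sample_weight=w)
        miss = tree.predict(X) != y              # misclassified events
        if method == "adaboost":
            err = np.clip(w[miss].sum(), 1e-6, 0.5 - 1e-6)   # weighted error err_m
            alpha = beta * np.log((1.0 - err) / err)         # tree weight alpha_m
            w[miss] *= np.exp(alpha)             # boost misclassified events
        else:                                    # epsilon-boost ("shrinkage")
            alpha = 1.0
            w[miss] *= np.exp(2.0 * eps)
        w /= w.sum()                             # renormalize the weights
        trees.append(tree)
        alphas.append(alpha)
    return trees, np.array(alphas)

def boosting_score(trees, alphas, X):
    # final classifier: weighted sum of the +1 / -1 scores of the individual trees
    return sum(a * t.predict(X) for t, a in zip(trees, alphas))

clipping err_m away from 0.5 in the sketch serves the same purpose as the lower limit on the tree weight mentioned later in the text : it prevents the succeeding trees from becoming impotent once each individual tree is only a weak classifier .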
for the sake of comparison , the adaboost performance of the boosting training mc samples is also shown in the right plot of figure 3 .the relative ratios drop quickly down to zero ( zero means no background events left after selection for a given signal efficiency ) within 100 tree iterations for 50%-70% signal efficiencies .the adaboost outputs for the training mc sample and for the testing mc sample for 1 , 100 , 500 and 1000 are shown in figure 4 . the signal and background separation for the training sample becomes better as the number of tree iterations increases .the signal and background events are completely distinguished after about 500 tree iterations . for the testing samples , however , the signal and background separations are quite stable after a few hundred tree iterations .the corresponding relative ratios are stable for given signal efficiencies as shown in right plot of figure 3 .the tuning parameter for -boost is .the left plot of figure 5 shows the relative ratio versus the signal efficiency for -boost with values of 0.005 , 0.01 , 0.02 , 0.04 , respectively .-boost with fairly large values for 45 leaves per decision tree and 1000 tree iterations has better performance for the high signal efficiency region ( eff 50% ) .the results from adaboost with =0.5 are comparable to those from -boost .-boost with 0.01 works slightly better because -boost converges more quickly with larger values .however , with more tree iterations , the final performances for different values are very comparable . here = 0.01 is chosen for further comparisons .the right plot of figure 5 shows the relative ratio versus the signal efficiency for adaboost and -boost using two different ways to split tree nodes .one way is to maximize the criterion based on the gini index to select the next tree split , the other way is to split the left tree node first . for adaboost , the performance for the `` left node first '' method gets worse for signal efficiency less than about 65% . at about the same signal efficiency , the performance for the two -boosts are quite comparable and are comparable with adaboost based on the gini index . however , the -boost method based on the gini index becomes worse than the others for high signal efficiency .larger makes -boost converge more quickly , but increasing the the size of decision trees also makes -boost converge more quickly .the performance comparison of adaboost with different tree sizes shown in the right plot of figure 2 is for the same number of tree iterations ( 1000 ) . to make a fair comparison for the boosting performance with different tree sizes , it is better to let them have a similar number of total tree leaves .the top left plot of the figure 6 shows the relative ratio versus the signal efficiency for adaboost and -boost with similar numbers of the total tree leaves , 1800 tree iterations for 45 leaves per tree and 10000 tree iterations for 8 leaves per tree . for a small decision tree size of 8 leaves , the performance of the -boost is better than that of adaboost for 10000 trees . 
for a large decision tree size of 45 leaves , -boost has slightly better performance than adaboost at low signal efficiency ( % ) , but worse at high signal efficiency ( % ) .the comparison between small tree size ( 8 leaves ) and large tree size ( 45 leaves ) with comparable overall decision tree leaves indicates that large tree size with 45 leaves yields %-20% better performance for the miniboone monte carlo samples .the other five plots in figure 6 show the relative ratio versus the number of tree iterations for adaboost and -boost with 45 leaves and 8 leaves assuming signal efficiencies of 40% , 50% , 60% , 70% , 80% , respectively .the maximum number of tree iterations is 5000 for the large tree size of 45 leaves and 10000 for the small tree size of 8 leaves .usually , the performance of the boosting method becomes better with more tree iterations in the beginning ; then at some point , it may reach an optimal value and gradually get worse with increasing number of trees , especially in the low signal efficiency region .the turning point of the boosting performance depends on the signal efficiency and mc samples used for training and test .generally , if the number of weighted signal events is larger than the number of weighted background events in a given leaf , it is called a signal leaf , otherwise , a background leaf . here ,the threshold value for signal purity is 50% for the leaf to be called a signal leaf .this threshold value can be modified , say , to 30% , 40% , 45% , 60% or 70% .it is seen in figure 7 that the performance of boosted decision trees with adaboost degrades for threshold values away from the central value of 50% . especially for threshold values away from 50% , the of m tree often converges to 0.5 within about 100 tree iterations ; after that the weights of the misclassified events do not successfully update because if . then remains the same as for the previous tree .typically , the value increases for the first 100 - 200 tree iterations and then remains stable for further tree iterations , causing the weight of m tree , , to decrease for the first 100 - 200 tree iterations and then remain stable . for practical use of the adaboost algorithm , a lower limit , say , 0.01 , on will avoid the impotence of the succeeding boosted decision trees .this problem is unlikely to happen for -boost because the weights of misclassified events are always updated by the same factor , . if differing purity threshold values are applied to boosted decision trees with -boost , the performance peaks around 50% and slightly worsens , typically within 5% , for other values ranging from 30% to 70% . the unweighted misclassified event rate , weighted misclassified event rate and for the boosted decision trees with the adaboost algorithm versus the number of tree iterationsare shown in figure 8 , for a signal purity threshold value of 50% . from this plot ,it is clear that , after a few hundred tree iterations , an individual boosted decision tree has a very weak discriminant power ( i.e. 
, is a `` weak '' classifier ) .the is about 0.4 - 0.45 , corresponding to of around 0.2 - 0.1 .the unweighted event discrimination of an individual tree is even worse , as is also seen in figure 8 .boosted decision trees focus on the misclassified events which usually have high weights after hundreds of tree iterations .the advantage of the boosted decision trees is that the method combines all decision trees , `` weak '' classifiers , to make a powerful classifier as stated in the introduction section . when the weights of misclassified events are increased ( boosted ), some events which are very difficult correctly classify obtain large event weights . in principle , some outliers which have large event weights may degrade the boosting performance . to avoid this effect ,it might be useful to set an upper limit for the event weights to trim some outliers .it is found that setting a weight limit does nt improve the boosting performance , and , in fact , may degrade the boosting performance slightly .however , the effect was observed to be within one standard deviation for the statistical error .one might also trim events with very low weights which can be correctly classified easily to provide a better chance for difficult events .no apparent improvement or degradation was observed considering the statistical error .these results may indicate that the boosted decision trees have the ability to deal with outliers quite well and to focus on the events located around the boundary regions where it is difficult to correctly distinguish signal and background events .besides adaboost and -boost , there are other algorithms such as -logitboost , and -hingeboost which use different ways of updating the event weights for the misclassified events .the four plots of figure 9 show the relative ratio versus the signal efficiency for various boostings with different tree sizes .the top left , top right , bottom left , and bottom right plots are for 500 , 1000 , 2000 , and 3000 tree iterations , respectively . boosting with a large tree size of 45 leavesis seen to work better than boosting with a small tree size of 8 leaves as noted above .adaboost and -boost have comparable performance , slightly better than that of -logitboost .-hingeboost is the worst among these four boosting algorithms , especially for the low signal efficiency region .the top left , top right , bottom left and bottom right plots of figure 10 show the relative ratio versus the signal efficiency with 45 leaves of -boost , adaboost , -logitboost and -hingeboost for varying numbers of tree iterations . generally , boosting performance continuously improves with an increase in the number of tree iterations until an optimum point is reached . 
from the two top plots , it is apparent that -boost converges more slowly than does adaboost; however , with about 1000 tree iterations , their performances are very comparable .there is only marginal improvement beyond 1000 tree iterations for high signal efficiency , and the performance may get worse for the low signal efficiency region if the boosting is over - trained ( goes beyond the optimal performance range ) .similar plots for the four boosting algorithms with 8 leaves per decision tree are shown in the figure 11 .results for -hingeboost with 30 and 8 tree leaves are shown in the bottom right plots of figures 10 and 11 .the performance for 200 tree iterations seems worse than that for 100 tree iterations .this may indicate that its performance is unstable in the first few hundred tree iterations , but works well after about 500 tree iterations .however , the overall performance of -hinge boost is the worst among the four boosting algorithms described above . for some purposes , logitboost has been found to be superior to other algorithms .for the miniboone data , it was found to have about 10%-20% worse background contamination for a fixed signal efficiency than the regular adaboost .logitboost converged very rapidly after less than 200 trees and the contamination ratio got worse past that point .a modification of logitboost was tried in which the convergence was slowed by taking , the extra factor of slowing the weighting update rate .this indeed improved the performance considerably , but the results were still slightly worse than obtained with adaboost or -boost for a tree size of 45 leaves .the convergence to an optimum point still took fewer than 300 trees , which was less than the number needed with adaboost or -boost .gentle adaboost and real adaboost were also tried ; both of them were found slightly worse than the discrete adaboost .relative error ratio versus signal efficiency for various boosting algorithms are listed in table.[table : ratio ] ..[table : ratio ] relative error ratio versus signal efficiency for various boosting algorithms for miniboone data .differences up to about 0.03 are largely statistical .b=0.5 means smooth scoring function described in section 9 . [cols="^,^,^,^,^,^,^,^",options="header " , ]the random forests is another algorithm which uses a `` majority vote '' to improve the stability of the decision trees .the training events are selected randomly with or without replacement .typically , one half or one third of the training events are selected for each decision tree training .the input variables can also be selected randomly for determining the tree splitters .there is no event weight update for the misclassified events . for the adaboost algorithm ,each tree is built using the results of the previous tree ; for the random forests algorithm , each tree is independent of the other trees .figure 12 shows a comparison between random forests of different tree sizes and adaboost , both with 1000 tree iterations .large tree size is preferred for the random forests ( the original random forests method lets each tree develop fully until all tree leaves are pure signal or background ) . in this studya fixed number of tree leaves were used .the performance of the random forests algorithm with 200 or 400 tree leaves is about equal . 
compared with adaboost ,the performance of the random forests method is significantly worse .the main reason for the inefficiency is that there is no event weight update for the misclassified events .one of main advantages for the boosting algorithm is that the weights of misclassified events are boosted which makes it possible for them to be correctly classified in succeeding tree iterations .considering this advantage , an event weight update algorithm ( adaboost ) was used to boost the random forests .the performances of the boosted random forests algorithm are then significantly better than those of the original random forests as can be seen in figure 12 .the performance of the adaboost with 100 leaves per decision tree is slightly better than that of the boosted random forests .other tests were made using one half training events selected randomly for each tree together with 30% , 50% , 80% or 100% of the input variables selected randomly for each tree split .the performances of the boosted random forests method using the adaboost algorithm are very stable .the boosted random forests only uses one half or one third of the training events selected randomly for each tree and also only a fraction of the input variables for each tree split , selected randomly ; this method has the advantage that it can run faster than regular adaboost while providing similar performance .in addition , it may also help to avoid over - training since the training events are selected partly and randomly for each decision tree .some recent papers indicate that post - fitting of the trained boosted decision trees may help to make further improvement .one possibility is that a selected ensemble of many decision trees could be better than the ensemble of all trees . herepost - fitting of the weights of decision trees was tried .the basic idea is to optimize the boosting performance by retuning the weights of the decision trees or even removing some of them by setting them to have 0 weight .a genetic algorithm is used to optimize the weights of all trained decision trees .a new mc sample is used for this purpose .the mc sample is split into three subsamples , mc1 , mc2 and mc3 , each subsample having about 26700 signal events and 21000 background events .mc1 is used to train adaboost with 1000 decision trees . 
the background efficiency for mc1 , mc2 and mc3 for a signal efficiency of 60%are 0.12% , 5.15% and 4.94% , respectively .if mc1 is used for post - fitting , then the corresponding background efficiency can be driven down to 0.05% , but the background efficiency for test sample mc3 is about 5.5% .it has become worse after post - fitting .it seems that it is not good to use same sample for the boosting training and post - fitting .if mc2 is used for post - fitting , then the background efficiency goes down to 4.21% for the mc2 , and 4.76% for the testing sample mc3 .the relative improvement is about 3.6% and the statistical error for the background events is about 3.2% .suppose the mc samples for post - fitting and testing are exchanged , mc3 is used for post - fitting while mc2 is used for testing .the background efficiency is 4.38% for training sample mc3 and 5.06% for the testing sample mc2 .the relative improvement is about 1.5% .a second post - fitting program was tried , the pathseeker program of j.h .friedman and b.e .popescu , a robust regularized linear regression and classification method .this program produced no overall improvement , with perhaps a marginal 4% improvement for 50% signal efficiency .it seems that post - fitting makes only a marginal improvement based on our studies .one of the major advantages of the boosted decision tree algorithm is that it can handle large numbers of input variables as was pointed out previously . generally speaking ,more input variables cover more information which may help to improve signal and background event separation .often one can reconstruct several hundreds or even thousands of variables which have some discriminant power to separate signal and background events .some of them are superior to others , and some variables may have correlations with others .too many variables , some of which are `` noise '' variables , wo nt improve but may degrade the boosting performance .it is useful to select the most useful variables for boosting training to maximize the performance .new mc samples were generated with 182 reconstructed variables . in order to select the most powerful variables ,all 182 variables were used as input to boosted decision trees running 150 tree iterations .then the effectiveness of the input variables was rated based on how many times each variable was used as a tree splitter .the first variable in the sorted list was regarded as the most useful variable for boosting training .the first 100 sorted input variables were selected to train adaboost with , 45 leaves per decision tree and 1000 tree iterations .the dependence of the number of times a variable is used as a tree splitter versus the number of tree iterations is shown for some selected input variables ( variables number 1 , 5 , 10 , 20 , 50 , 80 and 100 ) in the top left plot with linear scale and in the top right plot with log scale . in this way ,the first 30 , 40 , 60 , 80 , 100 , 120 and 140 input variables were selected from the sorted list to train boosted decision trees with 1000 tree iterations .a comparison of their performance is shown in the left plot of figure 14 .the boosting performance steadily improves with more input variables until about 100 to 120 . 
adding further input variablesdoes nt improve and may degrade the boosting performance .the main reason for the degradation is that there is no further useful information in the additional input variables and these variables can be treated as `` noise '' variables for the boosting training .however , if the additional variables include some new information which is not included in the other variables , they should help to improve the boosting performance . so far only one way to sort the input variables has been described .some other ways can also be used and work reasonably well as shown in the right plot of figure 14 .list1 means the input variables are sorted based on the how many times they were used as tree splitters for 150 tree iterations , list2 means the input variables are sorted based on their gini index contributions for 150 tree iterations , and list3 means the variables are sorted according to which variables are used earlier than others as tree splitters for 150 tree iterations .list1001 , list1002 and list1003 are similar to list1 , list2 and list3 , but use 1000 tree iterations .the first 100 input variables from the sorted lists are used for boosting training with 1000 tree iterations .the performances are comparable for 100 input variables sorted in different ways .however , the boosting performances for list1 and list3 are slightly better than the others .if an equal number of input variables of 100 are selected from each list , the number of variables which overlap typically varies from about 70 to 90 for the different lists . in other words ,about 10 to 30 input variables are different among the various lists . in spite of these differences ,the boosting performances are still comparable and stable . further studies with mc samples generated using varied mc input parameters corresponding to systematic errors show that the boosting outputs are very stable even though some input variables vary quite a lot .if these same varied mc samples are applied to the anns , it turns out that boosted decision trees work significantly better than the anns for both event separation performance and for stability .in the standard boost , the score for an event from an individual tree is a simple square wave depending on the purity of the leaf on which the event lands .if the purity is greater than 0.5 , the score is 1 and otherwise it is .one can ask whether a smoother function of the purity might be more appropriate .if the purity of a leaf is 0.51 , should the score be the same as if the purity were 0.99 ?two possible alternative scores were tested .let . where and are parameters .tests were run for various parameter values for scores a and b and compared with the standard step function .performance comparisons of adaboost for various parameters ( left ) and ( right ) values are shown in figure 15 . for a smooth function with , boosting performance converges faster than the original adaboost algorithm for the first few hundred decision trees , as shown in figure 16 .however , no evidence was found that the optimum was reached any sooner by the smooth function .the reason is that the smooth function of the purity describes the probability of a given event to be signal or background in more detail than the step function used in the original adaboost algorithm . 
with an increase in the number of tree iterations, however , the `` majority vote '' plays the most important role for the event separation .the ultimate performance of the smooth function with is comparable to the performance of the standard adaboost .in miniboone , one is trying to improve the signal to background ratio by more than a factor of 100 .one might expect that one should start by giving the background a greater total weight than the signal .in fact , giving the background two to five times the weight of the signal slightly degraded the performance . giving the background 0.5 to 0.2 of the weight of the signal gave the same performance as equal initial weights .for one set of monte carlo runs the pid variables were carefully modified to be flat as functions of the energy of the event and the event location within the detector .this decreased the correlations between the pid variables .the performance of these corrected variables was compared with the performance of the uncorrected variables . as expected ,the convergence was much faster at first for the corrected variable boost .however , as the number of trees increased , the performance of the uncorrected variable boost caught up with the other . for 1000 trees ,the performance of the two boost tests was about the same . over the long run ,boost is able to compensate for correlations and dependencies , but the number of trees for convergence can be considerably shortened by making the pid variables independent . the number of mc events used to train the boosting effectively is also an important issue we have investigated .generally , more training events are preferred , but it is impractical to generate unlimited mc events for training .the performance of adaboost with 1000 tree iterations , 45 tree leaves per tree using various number of background events ranging from 10000 to 60000 for training are shown in figure 17 , where the number of signal events is fixed 20000 .for the miniboone data , the use of 30000 or more background events works fairly well ; fewer background events for training degrades the performance .pid input variables obtained using the event reconstruction programs for the miniboone experiment were used to train boosted decision trees for signal and background event separation .numerous trials were made to tune the boosted decision trees .based on the performance comparison of various algorithms , decision trees with the adaboost or the -boost algorithms are superior to the others .the major advantages of boosted decision trees include their stability , their ability to handle large number of input variables , and their use of boosted weights for misclassified events to give these events a better chance to be correctly classified in succeeding trees .boosting is a rugged classification method .if one provides sufficient training variables and sufficient leaves for the tree , it appears that it will , eventually , converge to close to an optimum value .this assumes that for -boost or for adaboost are not set too large .there are modifications of the basic boosting procedure which can speed up the convergence .use of a smooth scoring function improves initial convergence . in the last section, it was seen that removing correlations of the input pid variables improved convergence speed .for some applications , the use of a boosted natural forests technique may also speed the convergence . 
for a large set of discriminant variables ,several techniques can be used to select a set of powerful input variables to use for training boosted decision trees .post - fitting of the boosted decision trees makes only a marginal improvement in the tests presented here .we wish to express our gratitude to the miniboone collaboration for the excellent work on the monte carlo simulation and the software package for physics analysis .this work is supported by the department of energy and the national science foundation of the united states .j. friedman , _ greedy function approximation : a gradient boosting machine _ , annals of statistics , 29(5 ) , 1189 - 1232(2001 ) ; j. friedman , t. hastie , r. tibshirani , _ additive logistic regression : a statistical view of boosting _ , annals of statistics , 28(2 ) , 337 - 407(2000 ) b.p .roe , h.j .yang , j. zhu , y. liu , i. stancu , g. mcgregor , _ boosted decision trees as an alternative to artificial neural network for particle identification _ , nuclear instruments and methods in physics research a , 543 ( 2005 ) 577 - 584 , physics/0408124 .l. breiman , j.h .friedman , r.a .olshen , and c.j .stone , _ classification and regression trees _, wadsworth international group , belmont , california ( 1984 ) .schapire , _ the boosting approach to machine learning : an overview _ , msri workshop on nonlinear estimation and classification , ( 2002 ) . y. freund and r.e .schapire , _ a short introduction to boosting _ ,journal of japanese society for artificial intelligence , 14(5 ) , 771 - 780 , ( september , 1999 ) .( appearing in japanese , translation by naoki abe . )t. hastie , r. tibshirani , j. friedman , _ elements of statistical learning , data mining , inference and prediction _ , chapter 10 , section 11 , page 324 , springer , ( 2001 ) .m. dettling , p. buhlmann , _ boosting for tumor classification with gene expression data _ , bioinformatics , vol.19 , no.9 , pp1061 - 1069 , ( 2003 ) z.h .zhou , j.x .wu , w. tang , _ ensembling neural networks : many could be better than all _ , artificial intelligence , 137(1 - 2):239 - 263,(2002 ) ; z.h .zhou , w. tang , _ selective ensemble of decision trees _, nanjing university , ( 2003 ) . | boosted decision trees are applied to particle identification in the miniboone experiment operated at fermi national accelerator laboratory ( fermilab ) for neutrino oscillations . numerous attempts are made to tune the boosted decision trees , to compare performance of various boosting algorithms , and to select input variables for optimal performance . * * studies of boosted decision trees * * for miniboone particle identification |
the pupil provides a window into some of the processing that otherwise takes place invisibly inside the human brain .hess and polt , as well as later kahneman and beatty , found evidence that linked emotional and cognitive processes to pupil dilations , and aston - jones et al . , and joshi have provided a framework for understanding some of the anatomical processes that take place in regulating the gain of the networks involved , and why pupillary reactions are visible : at the core , the locus coeruleus - norepinephine ( lc - ne ) system operates in two different modes , _ tonic mode _ that regulates the overall level of preparedness or arousal and _ phasic mode _ that is involved in responding to task - relevant stimuli . as task difficulty increases , so will tonic mode activity , modulating the gain , which in turn leads to a increased performance and a stronger phasic response to task - relevant stimuli .if , however , the arousal system and tonic activity mode increase beyond a certain peak point , the phasic responses decrease , leading to an explanation of the classical trade - off between arousal and optimal performance first analysed by yerkes and dodson . , where the largest phasic dilations are seen as compared to at [ a ] and [ c ] .note that the graphs are not actual data to scale but is drawn for illustrative purposes .( adapted from , resembling the classical yerkes - dodson relationship ) ] ' '' '' , highly focused task - specific attention [ b ] and distractible , scanning attention [ c ] .the blue curve illustrate the level and fluctuations of the pupil size at each condition .the baseline pupil size is shown in black , with the green area denoting the size of the response present when a task - relevant stimuli appear .note that the drawing is not to scale . ] ' '' '' activity in lc - ne cells are further reflected in pupillary dilations , and the pupil can thus be interpreted as a marker of lc - ne activity ( see also fig . [fig : phasictonic ] and fig .[ fig : eyedilation ] ) .baseline pupil size varies on a large scale of 3 - 4 mm as a response to changes in light levels , whereas variations caused by cognitive processes are much smaller , typically on the order of 0.5 mm or 15% compared to typical pupil sizes found in normal conditions .the baseline pupil size is modulated by the tonic activity in lc - ne , and is never at rest ; it has been known for a long time to vary .stark et al . speculated that this could be part of an `` economical '' construction of the eye in the sense that there is no need for the eye to operate at a more narrow range , and in our previous study we also noted slow variations of the baseline pupil size of + /-10% on a timescale of 3060s .task - evoked pupillary responses , ( tepr ) above the current baseline are caused by phasic activity in the lc - ne system , and by averaging over many similarly conditioned tests time - locked to the stimuli , other factors can be filtered out .recent fmri studies by kuchinsky et al . have further established that activity in saliency networks triggered by attentional tasks are reflected in increased tonic pupil size , in contrast to the decreased pupil dilation typically observed when we are in a default resting state .phasic activations of the lc - ne system in the noradrenergic ( ne ) neurons also play a role in rapid adaptation to changing conditions , as demonstrated by bouret et al . 
, in that it may facilitate reorganisation of the innervated areas .this allows for adaption of behaviour to changes in task conditions ; real or when they deviate from anticipation .preuschoff et al . have found that pupil dilations not only reflect decision making per se or the level of engagement , but also indicates surprise when committing mistakes in decision tasks , suggesting that ne plays a role in error signalling .this appear similar to the negativity feedback components , which is an event related potential ( erp ) typically observed in eeg neuroimaging experiments 250 - 300 ms after participants realize that an incorrect choice was made .while attention can be broadly understood as `` the appropriate allocation of processing resources to relevant stimuli '' , posner and petersen , , have shown that three systems , which regulate attention , are anatomically separate from other processing systems and carry out different cognitive roles as part of the attention networks .these are : * alerting , * orienting and * executive control. fan et al . designed a behavioral experiment , known as the attention network test ( ant ) , to assess which of the network components are activated based on differences in reaction time when responding to visual cues .we have in a previous experiment , measured task - evoked pupillar responses during the ant test in a longitudinal study of two subjects .a stronger response was triggered by incongruent conditions in the conflict resolution decision task , likely involving the executive network .this study expands the number of subjects , investigates the changes in mean pupil size over the experiment , and look at the relationship between the tonic level and the accuracy of the responses .the procedure followed and the equipment used is identical to that described in , and is further illustrated in fig .[ fig : ant - run - exp ] . in this present study , in total n=18 participants (7 female and 11 male ) with a mean age of 25.3 years were tested once .none used glasses or contact lenses , and all but one were right - handed .the participant were all volunteers that were only allowed to complete the test if they gave consent to their data being used anonymously for research purposes .the ant test itself is a standard paradigm in widespread use .pupil size is recorded at 60 hz , and blink - affected periods are removed .a hampel filter with a centered window of + /-83ms and a limit of is applied to remove outliers , and when data is not present in at least half the window , the center point is also removed .this later part takes care of removing any samples immediately before and after blinks , to avoid accidental pupil size changes caused by distortion of the visible part of the eye .finally data is downsampled to 100ms resolution with a windowed averaging filter . for the tepr calcuation ,data is epoched to the cue presentation and individually scaled to the value at the start of the epoch . 
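the preprocessing chain just described ( hampel outlier rejection over a centred +/-83 ms window , removal of centre points when less than half the window is populated , windowed - average downsampling to 100 ms , and epoching scaled to the epoch start ) can be sketched in python as follows . the sketch makes assumptions where details are not reproduced here : the outlier limit is taken as 3 median absolute deviations , +/-83 ms is mapped to +/-5 samples at 60 hz , and all function names are ours .

import numpy as np

def hampel_clean(p, half_win=5, k=3.0):
    # p: pupil trace sampled at 60 hz, with NaN where blink periods were removed.
    # half_win = 5 samples is roughly +/-83 ms at 60 hz; k is an assumed
    # limit in median absolute deviations (the actual limit is not quoted here).
    src = np.asarray(p, dtype=float)
    out = src.copy()
    n = len(src)
    for i in range(n):
        lo, hi = max(0, i - half_win), min(n, i + half_win + 1)
        valid = src[lo:hi][~np.isnan(src[lo:hi])]
        if len(valid) < (hi - lo) / 2.0:   # less than half the window present:
            out[i] = np.nan                # drop; this also trims samples next to blinks
            continue
        med = np.median(valid)
        mad = 1.4826 * np.median(np.abs(valid - med))
        if mad > 0 and abs(src[i] - med) > k * mad:
            out[i] = np.nan                # remove the outlier
    return out

def downsample_100ms(p, fs=60.0, bin_s=0.1):
    # windowed averaging to 100 ms resolution, ignoring missing samples
    per_bin = int(round(fs * bin_s))       # 6 samples per bin at 60 hz
    n_bins = len(p) // per_bin
    binned = np.asarray(p, dtype=float)[:n_bins * per_bin].reshape(n_bins, per_bin)
    return np.nanmean(binned, axis=1)

def epoch_and_scale(p, cue_idx, n_samples):
    # epoch to the cue presentation and scale to the value at the start of the epoch
    seg = p[cue_idx:cue_idx + n_samples]
    return seg / seg[0]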
for the tonic pupil size, a period of 1s immediately before target presentation is sampled to give a representative value without the phasic response , that in most conditions appear to fade away after around 2.5s after stimuli .the pupil was further corrected for variations in head - distance by means of the eye - to - eye distance reported by the eye - tracker .the mean pupil size was calculated in each of the 4 periods ( the initial trial round and the three actual blocks of reaction time tests ) as the average value of the filtered pupil data corrected for head - distance variations , which means it is representative of both the tonic pupil size and the ovelaid phasic responses .' '' '' [ tab : rtant ] .average reaction- and attention network - times over all correct tests across all users ( sample standard deviation listed in parenthesis ) , in seconds .[ cols="^,^,^,^",options="header " , ] left eye tonic pupil size , as measured immediately before target presentation , relative to each session s mean , and the reaction times , are listed across all subjects of the present study , and for both participant a and b over all sessions of the longitudinal study , for incongruent conditions , divided into groups of correct and incorrect responses .the mean reaction time ( ) differ between correct and incorrect responses in a significant way ( welch t - test , and respectively , p<0.000001 ) for both a and b. the means of the tonic pupil size ( psz ) differ significantly between correct and incorrect responses for a ( welch t - test , ) ; for b and the participants of the present study , the means between the conditions do not show a statistically significant difference .almost identical results are found for right eye pupil sizes ( not listed here ) . [tab : ok_vs_nok_incongruent ] ) are 0.345 and 0.301 respectively .initiation of each of the 3 rounds of the session are marked with dashed lines .an initial increased pupil dilation diminishes over time as entraining takes place , with a slight increase towards the end .it can also be seen that , in this case , each round starts out with an increase pupil dilation . ][ fig : tonic_pupil_20160112150851 ] shows an illustrative sample of how the mean pupil size ( corrected for variations in head distance ) varies over the course of the initial training round and the three actual trial blocks .left and right pupil size are slightly different for this particular subject , but there is good correlation between variations of the two ( pearson s ) . an regression corresponding to a low pass filter ( a 2nd order polynomial fit ) is shown overlaid , and can explain approximately 30 - 35% of variance ( explained variance and respectively ) .it also appears as if each block has a slightly larger tonic pupil size initially followed by a decline of approximately 10% .the means for each block also apper to differ , with the initial training round having the larger tonic pupil size . , , , all with a confidence level ) .the differences between the other blocks are not statistically significant . ] , , , all with a confidence level ) .the differences between the other blocks are not statistically significant .however , for a there are no statistically significant differences between the blocks ; the variations between the 4 blocks are comparatively much smaller than than what is seen for other participants . 
] when comparing the mean tonic pupil size between the initial training round and the three actual trial blocks , there are statistically significant differences across all participants of the present study , and also for participant b of the longitudinal study . for participant a , however , there are no statistically significant differences .see fig .[ fig : qmeans_all ] and fig .[ fig : qmeans_ab ] average fixation density maps , adjusted for accidental mis - calibrations , were built for each experiment , and were compared between conditions .we did , as expected , see recognisable differences when the target presentation was above vs below the fixation cross , but we were not able to detect any significant spatial differences between congruency conditions nor between cue conditions .the results of this study , with a larger population , supports our previous findings : there is a difference in the incongruent vs congruent / neutral flanker scenarios in that an incongruent condition solicit a larger pupillary response compare to the other two conditions .as the age group is different compared to the previous study , there are indications that the results may be robust and can translate to different settings .. in most cases we see a high correlation ( r values from 0.80.95 ) between left and right pupil size , although a few have what may be less than optimal tracking .we can not conclude any significant difference in the pupil dilation responses between the two eyes , but we notice that the significance level of the difference between the incongruent and the neutral condition is higher and lasts slightly longer for the right eye .in addition , we also found a significantly different response when subjects replied incorrectly , which happens much more frequently for the inconguent condition .this response may be related to the adaptation and required reorganization reported by bouret et al . and/or to the surprise elements reported by preuschoff et al . .thus , the phasic response reported here as well as in our previous study can be divided into two components that apparently cause a higher level of lc - ne activations : one related to the incongruent condition and one to the incorrect reply . 
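the welch t - tests quoted above and in table [ tab : ok_vs_nok_incongruent ] are ordinary two - sample tests with unequal variances ; a short , hypothetical illustration using scipy is given below . the column names are ours and this is not the analysis code used for the study .

from scipy.stats import ttest_ind

def correct_vs_incorrect(df):
    # df: one row per incongruent trial, with hypothetical columns
    # 'correct' (boolean), 'rt' (reaction time, s) and 'tonic_rel'
    # (tonic pupil size relative to the session mean).
    ok, nok = df[df["correct"]], df[~df["correct"]]
    for col in ("rt", "tonic_rel"):
        t, p = ttest_ind(ok[col], nok[col], equal_var=False)   # Welch's t-test
        print(f"{col}: t = {t:.2f}, p = {p:.3g}")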
comparing the mean relative pupil size over the 4 parts of the experiment ( training round and 3 blocks of tests ) we found that for the subjects of the present study , as well as for subject b of the longitudinal study , the training round had a statistically significant higher level , around 5% , compared to the other three blocks that averaged around -2% .the subject a of the longitudinal study , however , did not show any such variation between the blocks .we hypothesize that this may point to differences in individual characteristics , behavoiur or preferences .further , comparing relative normalized tonic pupil sizes ( excluding the phasic responses ) showed a statistically significant difference between the level immediately before an incorrect reply compared to the level before a correct reply for subject a of the longitudinal study but not for subject b nor for the participants of the present study .however , while for subject b the levels are almost identical , there is a larger difference even if not statistically significant for the participants of this study , and it therefore can not be ruled out that participants could fall in different groups that , with more data , would reveal more individual variation .we also point out that the possible familiarity effects of higher pupillary responses mainly in the two first complete experiments were not tested for in the present experiment , since it was performed only once for each participant .we do , however , see hints at an overall adaptation , as the average ( tonic ) level decreases as initial entraining to the tasks take place , with a flat or in some cases slightly increased tonic levels towards task completion .this appear similar to the _ familiarity effect _ reported by hyn et al . we were not able to find any spatial differences in eye movements , at the resolution we worked with , that was related to the conditions of the test , apart from the up - down position of the target .this work is supported in part by the innovation fund denmark through the project eye tracking for mobile devices .hess eh , polt jm . .available from : http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=retrieve{&}db=pubmed{&}dopt=citation{&}list{_}uids=14401489[http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=retrieve\{&}db=pubmed\{&}dopt=citation\{&}list\{_}uids=14401489 ] .the british journal of ophthalmology .available from : http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1039657{&}tool=pmcentrez{&}rendertype=abstract[http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1039657\{&}tool=pmcentrez\{&}rendertype=abstract ] .bkgaard p , petersen mk , larsen je . in : antona m , stephanidis c , editors . assessing levels of attention using low cost eye tracking .cham : springer international publishing ; 2016 .p. 409420 .available from : http://dx.doi.org/10.1007/978-3-319-40250-5{_}39[http://dx.doi.org/10.1007/978-3-319-40250-5\{_}39 ] .kuchinsky se , panda nb , haarmann hj . in : schmorrow dd , fidopiastis mc , editors . linking indices of tonic alertness :resting - state pupil dilation and cingulo - opercular neural activity .cham : springer international publishing ; 2016 .p. 218230 .available from : http://dx.doi.org/10.1007/978-3-319-39955-3{_}21[http://dx.doi.org/10.1007/978-3-319-39955-3\{_}21 ] .posner mi , petersen se . 
annual review of neuroscience .available from : http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3413263{&}tool=pmcentrez{&}rendertype=abstractnhttp://www.ncbi.nlm.nih.gov/pubmed/2183676[http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3413263\{&}tool=pmcentrez\{&}rendertype=abstractnhttp://www.ncbi.nlm.nih.gov/pubmed/2183676 ] . | cognitive processes involved in both allocation of attention during decision making as well as surprise when making mistakes trigger release of the neurotransmitter norepinephrine , which has been shown to be correlated with an increase in pupil dilation , in turn reflecting raised levels of arousal . extending earlier experiments based on the attention network test ( ant ) , separating the neural components of alertness and spatial re - orientation from the attention involved in more demanding conflict resolution tasks , we demonstrate that these signatures of attention are so robust that they may be retrieved even when applying low cost eye tracking in an everyday mobile computing context . furthermore we find that the reaction of surprise elicited when committing mistakes in a decision task , which in the neuroimaging eeg literature have been referred to as a negativity feedback error correction signal , may likewise be retrieved solely based on an increase in pupil dilation . |
in biological systems , the survival of a species depends on the frequencies of its kin and its foes in the environment . in some cases, the chance of survival of a certain species _ improves _ as the frequency of its kind increases , since this might enhance the chance for reproduction or other benefits from group interaction .this is denoted as _ positive _ frequency dependence . in other casesa _ negative _ frequency dependence , that is the increase of the survival chance with _ decreasing _ frequency , is observed .this is the case , when individuals compete for rare ressources .moreover , negative frequency dependence is known to be important for maintaining the genetic diversity in natural populations .frequency dependent dynamics are not only found in biological systems , but also in social and economic systems . in democracies , a simple example is a public vote , where the winning chances of a party increase with the number of supporters . in economics ,e.g. the acceptance of a new products may increase with the number of its users . in stock markets ,on the other hand , positive and negative frequency dependencies may interfere .for instance , the desire to buy a certain stock may increase with the orders observed from others , a phenomenon known as the _ herding effect _ , but it also may decrease , because traders fear speculative bubbles . in general, many biological and socio - economic processes are governed by the frequency dependent adoption of a certain behavior or strategy , or simply by frequency dependent reproduction . in order to model such dynamics more rigorously ( but less concrete ) , different versions of _ voter models _ have been investigated .the voter model denotes a simple binary system comprised of _ voters _ , each of which can be in one of two states ( where _ state _ could stand for opinion , attitude , or occupation etc . ) , . here, the transition rate from state to state is assumed to be proportional to the frequency . in this paper , we extend this approach by assuming a _ nonlinear _ voter model , where the frequency dependence of the transition rate , , includes an additional nonlinearity expressed in terms of the ( frequency dependent ) prefactor .linear voter models have been discussed for a long time in mathematics .recently , they gained more attention in statistical physics because of some remarkable features in their dynamics described in sect .[ sec : vm ] .but voter models also found the interest of population biologists dependent on how the frequency is estimated , one can distinguish global from local voter models . 
in the latter casethe transition is governed only by the local frequency of a certain state in a given neighborhood .in contrast to global ( or mean - field ) models , this leads to local effects in the dynamics , which are of particular interest in the current paper .if space is represented by a two - dimenional lattice and each site is occupied by just one individual , then each species occupies an amount of space proportional to its presence in the total population .local effects such as the occupation of a neighborhood by a particular species or the adoption of a given opinion in a certain surrounding , can then be observed graphically in terms of domain formation .this way , the invasion of species ( or opinions ) in the environment displays obvious analogies to spatial pattern formation in physical systems .physicists have developed different spatial models for such processes .one recent example is the so - called `` sznajd model '' which is a simple cellular automata ( ca ) approach to consensus formation ( i.e. complete invasion ) among two opposite opinions ( described by spin up or down ) . in , we have shown that the sznajd model can be completely reformulated in terms of a linear voter model , where the transition rates towards a given opinion are directly proportional to the frequency of the respective opinion of the _ second - nearest _ neighbors and independent of the nearest neighbors . other spatial models are proposed for game - theoretical interactions among nearest neighbors . here, the dynamics are driven by local payoff differences of adjacent players , which basically determine the nonlinearity .dependent on these payoff differences , we could derive a phase diagram with five regimes , each characterized by a distinct spatio - temporal dynamic .the corresponding spatial patterns range from complete invasion to coexistence with large domains , coexistence with small clusters , and spatial chaos . in this paper, we are interested in the local effects of frequency dependent dynamics in a homogeneous network , where each site has nearest neighbors . in this case, the nonlinearity can be simply expressed by two constants , , .this is a special form of a nonlinear voter model , which for also includes majority voting and for minority voting .we investigate the dynamics of this model both analytically and by means of computer simulations on a two - dimensional stochastic ca ( which is a special form of a homogeneous network with ) .the latter one was already studied in , in particular there was a phase diagram obtained via numerical simulations . in our paper , we go beyond that approach by deriving the phase diagram from an analytical approximation , which is then compared with our own simulations . in sects .[ sec : formal ] , [ sec : micro ] we introduce the microscopic model of frequency dependent invasion and demonstrate in sects .[ sec : a1a2 ] , [ 2.4 ] the role of , by means of characteristic pattern formation . based on the microscopic description , in sect .[ 3.1 ] we derive the dynamics for the global frequency , which is a macroscopic key variable .an analytical investigation of these dynamics is made possible by pair approximation , sect .[ 3.2 ] , which results in a closed - form description for and the spatial correlations . 
in sect .[ 4.1 ] , we verify the performance of our analytical approximations by comparing them with averaged ca computer simulations .the outcome of the comparison allows us to derive in sect .[ 4.3 ] a phase diagram in the parameter space , to distinguish between two possible dynamic scenarious : ( i ) complete invasion of one of the species , with formation of domains at intermediate time scales , and ( ii ) random spatial coexistence of two species .a third dynamic regime , the nonstationary coexistence of the two species on long time scales together with the formation of spatial domains , can be found in a small , but extended region that separates the two dynamic regimes mentioned above .we further discuss in sect . [ 5 ] that the usual distinctions for the dynamics , such as positive or negative frequency dependence , do not necessarily coincide with the different dynamic regimes . instead , for positive frequency dependence ,all of the three different dynamic regimes ( and the related spatio - temporal patterns ) are observed . in the appendix , calculation details for the pair approximationare given .we consider a model of two species labeled by the index .the total number of individuals is constant , so the global frequency ( or the share of each species in the total population ) is defined as : in the following , the variable shall refer to the global frequency of species 1 .the individuals of the two species are identified by the index and can be seen as nodes of a network .a discrete value indicates whether the node is occupied by species 0 or 1 .the network topology ( specified later ) then defines the nearest neighbors of node . in this paper , we assume homogeneous networks where all nodes have the same number of nearest neighbors . for further use , we define the local occupation of the nearest neighborhood ( without node ) as : a specific realization of this distribution shall be denoted as , while the function assigns to a particular neighborhood : for later use , it is convenient to define these distributions also for the nearest neighborhood _ including _node : for , denotes a binary string , e.g. , where the first value refers to the center node and the other values indicate the particular values of the nearest neighbors .the assignment of these values to a particular neighborhood of node is then described by . in the voter model described in the following section , the dynamics of governed by the _ occupation distribution _ of the _ local neighborhood_. that surrounds each node .using a stochastic approach , the probability to find node in state therefore depends in general on the local occupation distribution of the neigborhood ( eqn .( [ occupat ] ) , in the following manner : hence , is defined as the marginal distribution of , where in eqn .( [ eq : marg_dist ] ) indicates the summation over all possible realizations of the local occupation distribution , namely different possibilities . for the timedependent change of we assume the following master equation : \end{aligned}\ ] ] where denotes the transition rate for state of node into state in the next time step under the condition that the local occupation distribution is given by . the transition rate for the reverse process is .again , the summation is over all possible realizations of , denoted by .it remains to specify the transition rates , which is done in the following section .our dynamic assumptions for the change of an individual state are taken from the so - called voter model ( see also sect . 
[ 1 ] ) , abbreviated as vm in the following .the dynamics is given by the following update rule : a voter , i.e. a node of the network , is selected at random and adopts the state of a randomly chosen nearest neighbor .after such update events , time is increased by 1 .the probability to choose a voter with a given state from the neighborhood of voter is directly proportional to the relative number ( or frequency ) of voters with that particular state in that neigborhood .let us define the _ local frequencies _ in the neighborhood as : where is the kronecker delta , which is 1 only for and zero otherwise .then the transition rate of voter to change its state does not explicitly depend on the local distribution , but only on the _ occupation frequency _ , i.e. on the number of nodes occupied by either 0 or 1 in the neighborhood of size .hence , the vm describes a frequency dependent dynamics : the larger the frequency of a given state in the neighborhood , the larger the probability of a voter to switch to that particular state _ if _ it is not already in that state .i.e. the transition rate , to _ change _ state increases only with the local frequency of _ opposite _ states , , in the neighborhood : the prefactor determines the time scale of the transitions and is set to .( [ eq : linvm ] , describes the dynamics of the _ linear _ vm because , according to the above update rule , the rate to _ change _ the state is directly proportional to the frequency . the linear ( or standard ) vm has two remarkable features .first , it is known that , starting from a random distribution of states , the system always reaches a completely ordered state , which is often referred to as _ consensus _ in a social context , or complete _ invasion _ in a population biology context . as there are individuals with two different states , the complete ordered state can be either all 0 or all 1 . which of these two possible attractors of the dynamics is eventually reached , depends ( in addition to stochastic fluctuations ) on the initial global frequency , i.e. .it has been shown that , for an ensemble average , the frequency of the outcome of a particular consensus state is equal to the initial frequency of state .this second remarkable feature is often denoted as conservation of magnetization , where the magnetization is defined as .hence , consensus means .thus we have the interesting situation that , for a single realization , the dynamics of the linear vm is a fluctuation driven process that , for finite system sizes , always reaches consensus , whereas on average the outcome of the consensus state is distributed as .the ( only ) interesting question for the linear vm is then how long it may take the system to reach the consensus state , dependent on the system size and the network topology .the time to to reach consensus , , is obtained through an average over many realizations . as the investigation of is not the focus of our paper ( see ) ,we just mention some known results for the linear vm : one finds for one - dimensional regular lattices ( ) and for two - dimensional regular lattices ( ) . 
for system does not always reach an ordered state in the thermodynamic limit .in finite systems , however , one finds .while the linear vm has some nice theoretical properties , it also has several conceptual disadvantages when applying the model to a social or population biological context .first of all , the `` voters '' do not vote in this model , they are subject to a random ( but frequency based ) assignment of an `` opinion '' , without any choice . secondly , the state of the voter under consideration does not play any role in the dynamics .this can be interpreted in a social context as a ( blind ) herding dynamics , where the individuals just adopt the opinion of the majority . in a population model of two competing species , it means that individuals from a minority species may be replaced by those from a majority species without any resistance . in order to give voter at least some weight compared to the influence of its neighbors , one can simply count its state into the local frequency , i.e. instead of eqs .( [ occupat ] ) , ( [ sigm01 ] ) we may use eqn . ( [ occupat0 ] ) . using for voter the notation ( i.e ), we can still use eqn .( [ eq : linvm ] ) for the transition rates , with the noticable difference that the local frequency of eqn .( [ sum ] ) is now calculated from a summation that starts with .the explicit consideration of thus has the effect of adding some _ inertia _ to the dynamics .in fact , extending the summation to multiplies the transition rate , eqn .( [ eq : linvm ] ) , by a factor , where is the number of nearest neighbors .i.e. , for a local configuration would lead to a transition rate _ without _ the additional inertia , but by counting in the state of voter .i.e. , taking into account the state of voter considerably reduces the transition rate towards the opposite state .we find it useful for conceptual reasons to include some resistance into the model and therefore will use from now on the description which takes the current state of voter into account .this also has the nice advantage that for the case , which describes e.g. square lattices , we avoid stalemate situations , .however , we note that the addition of the constant resistance does not change the dynamics of the model , as it only adjusts the _ time scale _ towards a new factor .so , keeping constant and equal for all voters , we can rescale .we note that there are of course other ways to give some weight to the opinion of voter . in , we have discussed a modified vm , where voters additionally have an inertia $ ] which leads to a decrease of the transition rate to change their state : here is given by the linear vm , eqn .( [ eq : linvm ] ) .the individual inertia is evolving over time by assuming that it increases with the persistence time the voter has been keeping its current state .while this intertia may slow down the microscopic dynamics of the vm and thus may increase the time to reach consensus , , we found the counterintuitive result that under certain circumstances a decelerated microdynamics may even accelerate the macrodynamics of the vm , thus decreasing compared to the linear vm .the addition of a nonlinear inertia to the vm , eqn .( [ eq : reluc ] ) , is a special case for turning the linear vm into a nonlinear one ( wheras the fixed resistence would not change the linear vm ) . 
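As a concrete reference point for the update rule just described, the following is a minimal sketch (our own illustration, not the authors' code) of the linear VM on a two-dimensional periodic lattice with k = 4, where the state of the focal node is counted into the local frequency as discussed above. Lattice size, random seed and run length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 50                      # illustrative lattice size (L x L, periodic boundaries)
N = L * L
k = 4                       # von Neumann neighbourhood, as assumed in the text

state = rng.integers(0, 2, size=(L, L))   # random initial occupation, x(0) ~ 0.5

def local_opposite_frequency(state, i, j):
    """Frequency of the opposite state in the neighbourhood of (i, j),
    counting the focal node itself (k + 1 = 5 sites)."""
    s = state[i, j]
    neigh = [state[(i - 1) % L, j], state[(i + 1) % L, j],
             state[i, (j - 1) % L], state[i, (j + 1) % L], s]
    return sum(1 for v in neigh if v != s) / (k + 1)

def sweep(state):
    """One generation = N random sequential update events (time -> time + 1)."""
    for _ in range(N):
        i, j = rng.integers(0, L, size=2)
        # linear voter model: the flip probability equals the local frequency
        # of the opposite state (majority voting / herding)
        if rng.random() < local_opposite_frequency(state, i, j):
            state[i, j] = 1 - state[i, j]
    return state

for t in range(200):
    sweep(state)
    x = state.mean()        # global frequency of species 1
    if x in (0.0, 1.0):     # consensus / complete invasion reached
        print(f"ordered state reached at t = {t}, winner = {int(x)}")
        break
else:
    print("no consensus within 200 generations, x =", state.mean())
```

Averaging the winning state over many such runs should reproduce the conservation property mentioned above, i.e. the probability that species 1 ends up as the sole survivor equals its initial global frequency.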
in general ,nonlinear vm can be expressed as where is a nonlinear , frequency dependent function describing how voter reacts on the occurence of opposite `` opinions '' in its immediate neighborhood .[ fig : nonlin ] shows some possible examples which have their specific meaning in a social context .whereas any function describes the linear vm , i.e. a majority voting or herding effect , a decreasing means minority voting , i.e. the voter tends to adopt the opinion of the minority .nonmonotonous can account for voting against the trend , i.e. the voter adopts an opinion as long as this is not already the ruling opinion a phenomenon which is important e.g. in modeling the adoption of fashion .an interpretation of these functions in a population biology context will be given in sect .[ sec : a1a2 ] in conclusion , introducing the nonlinear response function will allow us to change the global dynamics of the linear vm . instead of reaching always consensus , i.e. the exclusive domination of one `` opinion '' or species, we may be able to observe some more interesting macroscopic dynamics , for example the coexistence of both states .it is one of the aims of this paper to find out , under which specifications of we may in fact obtain a dynamic transition that leads to a a structured , but not fully ordered state instead of a completely ordered state . ) .note the piecewise linear functions , as the number of neighbors and thus the frequencies have mostly discrete values.,width=264 ]in order to give a complete picture of the dynamics of the nonlinear vm , we have to derive the stochastic dynamics for the _ whole _ system of nodes , whereas eqn .( [ master ] ) gives us `` only '' the _ local _ dynamics in the vicinity of a particular voter .for voters , the distribution of states is given by note that the state space of all possible configurations is of the order . in a stochastic model, we consider the probability of finding a particular configuration at time .if is measured in discrete time steps ( generations ) and the network is synchronously updated , the time - dependent change of is described as follows : where denotes all possible realizations of and denote the conditional probabilities to go from state at time to at time ( [ eq : t+1 ] ) is based on the markov assumption that the dynamics at time may depend only on states at time . with the assumption of small time steps and the definition of the transition rates eqn .( [ eq : t+1 ] ) can be transferred into a time - continuous master equation as follows : \ ] ] in eqn .( [ master0 ] ) , the transition rates depend on the _ whole _ distribution .however , in the frequency dependent dynamics introduced in sect .[ 2.1 ] , only the occupation distribution of the _ local _ neighborhood of node needs to be taken into account .therefore , it is appropriate to think about some reduced description in terms of lower order distributions , such as the local occupation , eqn .( [ occupat ] ) . in principle, there are two different ways to solve this task . the first one , the _ top - down _ approach starts from the _ global _ distribution in the whole state space and then uses different approaches to factorize .however , a markov analysis can only be carried out exactly for small , because of the exponential -dependence of the state space .thus , for larger suitable approximations , partly derived from theoretical concepts in computer science need to be taken into account . 
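To illustrate why the top-down route is restricted to small systems, the toy sketch below (our own construction; the ring of N = 6 nodes with k = 2 and the linear VM rates, focal node counted, are illustrative assumptions) builds the full 2^N x 2^N transition matrix for synchronous updates and checks that the two consensus configurations are absorbing. Already for moderate N this matrix becomes intractable, which motivates the bottom-up route followed next.

```python
import itertools
import numpy as np

N = 6                                                  # a small ring; the 2**N state
states = list(itertools.product((0, 1), repeat=N))     # space is why N must stay small

def flip_prob(sigma, i):
    """Linear VM flip probability of node i on a ring (k = 2), counting the
    node itself into the local frequency of the opposite state."""
    s = sigma[i]
    opposite = sum(1 for j in (i - 1, (i + 1) % N, i) if sigma[j] != s)
    return opposite / 3.0

# synchronous update: all nodes flip independently with their local probability
P = np.zeros((2**N, 2**N))
index = {s: n for n, s in enumerate(states)}
for a, sigma in enumerate(states):
    probs = [flip_prob(sigma, i) for i in range(N)]
    for target in states:
        p = 1.0
        for i in range(N):
            p *= probs[i] if target[i] != sigma[i] else 1.0 - probs[i]
        P[a, index[target]] = p

print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))
print("consensus states are absorbing:",
      P[index[(0,) * N], index[(0,) * N]] == 1.0 and
      P[index[(1,) * N], index[(1,) * N]] == 1.0)
```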
in this paper ,we follow a second way which is a _ bottom - up _ approach based on the _ local _ description already given in sect . [ 2.1 ] .i.e. starting from node and its local neighborhood , we want to derive the dynamics for some appropriate _ macroscopic _ variables describing the nonlinear vm . instead of one equation for in the top - down approach , in the bottom - upapproach we now have a set of stochastic equations for , eqn .( [ master0 ] ) , which are locally coupled because of overlapping neighborhoods , . in order to solve the dynamics, we need to discuss suitable approximations for these local correlations .as we are interested in the macroscopic dynamics , these approximations will be done at the macroscopic level . in order to do so, we first derive a macroscopic equation from the stochastic eqn .( [ master ] ) , which is carried out in the following section .the key variable of the macroscopic dynamics is the global frequency , defined in eqn .( [ nconst ] ) . in order to compare the averaged computer simulations with results from analytical approximations later in sect . [ 4 ] , we first derive an equation for the expectation value .we do this without an explicit determination of the transition rates and wish to emphasize that the formal approach presented in sect .[ 2.2 ] remains valid not just for the voter model , but also for other dynamic processes which depend on neighbor interactions ( not only nearest neighbors ) in various network topologies .for the derivation of the expectation value we start from the stochastic description given in sect . [sec : micro ] , where denoted the probability to find a particular distribution , eqn .( [ vector ] ) , at time and denoted all possible realizations of eqn .( [ master0 ] ) . on one hand : and on the other hand : by differentiating eqn .( [ av2 ] ) with respect to time and inserting the master eqn .( [ master ] ) , we find the following macroscopic dynamics for the network : \end{aligned}\ ] ] for the further treatment of eqn .( [ eq : master_macro1 ] ) , we consider a specific distribution of states on nodes defined by .this distribution is assigned to a particular neighborhood of node by ( eqn .( [ occupat0 ] ) ) .since we are interested in how many times a special realization of a specific distribution is present in the population , we define an indicator function that is if the neighborhood of node has the distribution , and otherwise . therefore , we write the frequency of the -tuplet in the population as : the expectation value is inserting eqn .( [ eq : x_ind ] ) into eqn .( [ eq : av_x ] ) , we verify that because of the definition of the marginal distribution . using the identity , we may rewrite eqn .( [ eq : master_macro1 ] ) by means of eqn .( [ eq : av_x1 ] ) to derive the macroscopic dynamics in the final form : \end{aligned}\ ] ] denotes the possible configurations of a specific occupation distribution , eqn .( [ sigm01 ] ) ) . in the following , we use .then , the dynamic for reads : \end{aligned}\ ] ] the solution of eqn .( [ x - fin ] ) would require the computation of the averaged global frequencies and for all possible occupation patterns , which would be a tremendous effort .therefore , in the next section we will introduce two analytical approximations to solve this problem . in sect . [4.1 ] we will further show by means of computer simulations that these approximations are able to describe the averaged dynamics of the nonlinear vm . 
as a first approximation of eqn .( [ x - fin ] ) , we investigate the mean - field limit . here the state of each node does not depend on the occupation distribution of its neighbors , but on randomly chosen nodes . in this casethe occupation distribution factorizes : for the macroscopic dynamics , eqn .( [ x - fin ] ) , we find : \end{aligned}\ ] ] for the calculation of the we have to look at each possible occupation pattern for a neighborhood .this will be done in detail in sect .[ sec : a1a2 ] .before , we discuss another analytical approximation which solves the macroscopic eqn .( [ x - fin ] ) with respect to _this is the so - called _ pair approximation _, where one is not interested in the occupation distribution of a whole neighborhood , eqn .( [ occupat0 ] ) ) but only in _ pairs _ of nearest neigbor nodes , with .that means the local neighborhood of nearest neighbors is decomposed into pairs , i.e. blocks of size 2 that are called _ doublets_. similar to eqn .( [ eq : x_ind ] ) , the global frequency of doublets is defined as : the expected value of the doublet frequency is then given by in the same way as in eqn .( [ eq : av_x ] ) .we now define the correlation term as : neglecting higher order correlations .thus can be seen as an approximation of the conditional probability that a randomly chosen nearest neighbor of a node in state is in state . using the above definitions, we have the following relations : for the case of two species , and are the _inter - species _ correlations , while and denote the _ intra - species _ correlations . using ,these correlations can be expressed in terms of only and as follows : now , the objective is to express the global frequency of a specific occupation pattern , eqn .( [ eq : av_x ] ) , in terms of the correlation terms . in pair approximation, it is assumed that the states are correlated only through the state and uncorrelated otherwise. then the global frequency terms in eqn .( [ eq : master_macrof ] ) can be approximated as follows : for the macroscopic dynamics , eqn .( [ x - fin ] ) , we find in pair approximation : \end{aligned}\ ] ] note that the can be expressed in terms of by means of eqn .( [ eq : cond_prob ] ) .thus , eqn .( [ macro - pair ] ) now depends on only two variables , and . in order to derive a _ closed _ form description ,we need an additional equation for . that can be obtained from eqn .( [ eq : c - prob ] ) : eqn .( [ eq : local1 ] ) also requires the time derivative of the global doublet frequency . even in their lengthy form, the three equations for , , can easily be solved numerically .this gives the approach some computational advantage compared to averaging over a number of microscopic computer simulations for all possible parameter sets .although the approach derived so far is quite general in that it can be applied to different network topologies and neighborhood sizes , specific expressions for these three equations of course depend on these .therefore , in the appendix , these three equations are explicitly derived for a 2d regular lattice with neighborhood using the specific transition rates introduced in the next section . 
in sect .[ 4.1 ] , we further show that the pair approximation yields some characteristic quantities such as for the 2d regular lattice in very good agreement with the results of computer simulations .so far , we have developed a stochastic framework for ( but not restricted to ) nonlinear voter models in a general way , without specifying two of the most important features , namely ( i ) the network topology which defines the neighborhood , and ( ii ) the nonlinearity which defines the response to the local frequencies of the two different states . for ( i ) , let us choose a regular network with , i.e. each voter has 4 different neighbors .we note explicitly that our modeling framework and the general results derived hold for _ all homogeneous networks _ , but for the visualization of the results it will be most convenient to choose a regular square lattice , where the neighbors appear next to a node .this allows us to observe the formation of macroscopic ordered states in a more convenient way , without restricting the general case . eventually , to illustrate the dynamics let us now assume a population biology context , where each node is occupied by an individual of either species 0 or 1 .the spreading of one particular state is then interpreted as the invasion of that respective species and the local disappearence of the other one , while the emergence of a complete ordered state is seen as the complete invasion or domination of one species together with the extinction of the other one .keeping in mind that we also consider the state of node itself , we can write the possible transition rates , eqn .( [ eq : nonlinvm ] ) , for the neighborhood of and in the following explicit way ( cf also ) : eqn .( [ trans2 ] ) means that a particular node currently in state , or occupied by an individual of species where is either 0 or 1 , will be occupied by an individual of species with a rate that changes with the local frequency in a _ nonlinear _ manner .the different values of denote the products for the specific values of given .i.e. , the define the piecewise linear functions shown in fig .[ fig : nonlin ] .the general case of six independent transition rates ( n0, ... ,5 ) in eqn . ( [ trans2 ] ) can be reduced to three transition rates , , by assuming a symmetry of the invasion dynamics of the two species , i.e. , and .further , assuming a pure frequency dependent process , we have to consequently choose , because in a complete homogeneous neighborhood , there is no incentive to change to another state ( there are no other species around to invade ) .we recall that if the transition rates , are directly proportional to , i.e. and , this recovers the linear vm , eqn .( [ eq : linvm ] ) .( note that without the resistence of node discussed in sect .[ sec : vm ] the linear voter point would read as and instead . ) dependent on the relation of the two essential parameters , , we also find different versions of _ nonlinear _ vm , which have their specific meaning in a population biology context : note , that the parameters can be ordered in different ways .these reduce to inequalities under the conditions and . in eqn .( [ eps - alpha ] ) , ( pf ) means ( pure ) _ positive freqency dependent invasion _, where the transition rate _ increases _ with an increasing number of individuals of the _ opposite _ species in the neighborhood , and ( nf ) means ( pure ) _ negative freqency dependent invasion _ because the transition rate _decreases_. 
the two other cases describe positive ( pa ) and negative ( na ) _ allee effects _these regions are described by inequalities each , all of which show the same relative change in parameter values , if going from to .similar to the drawings in fig .[ fig : nonlin ] this can be roughly visualized as an up - down - up change in region ( pa ) and a down - up - down change in region ( na ) . the different parameter regions are shown in fig .[ region ] . on a first glimpse, one would expect that the dynamics as well as the evolution of global variables may be different in these regions .thus , one of the aims of this paper is to investigate whether or to what extent this would be the case .( 1,1 ) ( 0,0.5)(0.5,0.5)(1,1)(0,1)(0,0.5 ) ( 0,0)(0.5,0.5)(1,0.5)(1,0)(0,0 ) ( 0.,0.)(0.,0.5)(0.5,0.5)(0,0 . ) ( 0.5,0.5)(1,0.5)(1,1)(0.5,0.5 ) ( 0,0)(0,1)(1,1)(1,0)(0,0 ) ( 0,0)(1,1 ) ( 0.2,0.4) ( 0.13,0.30)*(pf ) * ( 0.85,0.65)*(nf ) * ( 0.3,0.75)*(pa ) * ( 0.7,0.2)*(na ) * ( 0.5,-0.2) ( -0.2,0.5) in order to find out about the influence of the nonlinear response function , which is specified here in terms of , , let us start with the mean - field approach that lead to eqn .( [ eq : master_macro_mean ] ) . as we outlined in sect .[ 3.2 ] , the calculation of the in eqn .( [ eq : master_macro_mean ] ) requires to look at each possible occupation pattern for a neighborhood , for instance , .the mean - field approach assumes that the occurence of each 1 or 0 in the pattern can be described by the global frequencies and , respectively ( for simplicity , the abbreviation will be used in the following ) . for the example of string find . the same result yields for and for any other string that contains the same number of 1 and 0 , i.e. there are exactly different possibilities . for strings with two nodes of each species , times the contribution results , etc . inserting eqn .( [ trans2 ] ) for the transition rates , we find with the equation for the mean - field dynamics : \\ \ ; - x & \big [ ( 1-\alpha_{1 } ) ( 1-x)^{4 } + 4 ( 1-\alpha_{1 } ) x(1-x)^{3 } \\ & + 6 \alpha_{2 } x^{2}(1-x)^{2 } + 4 \alpha_{1 } x^{3 } ( 1-x ) \big ] \end{aligned}\ ] ] the fixed points of the mean - field dynamics can be calculated from eqn .( [ eq : mean_field ] ) using .we find : the first three stationary solutions denote either a complete invasion of one species or an equal share of both of them .`` nontrivial '' solutions , i.e. 
a _ coexistence _ of both species with different shares of the total population , can only result from , provided that the solutions are ( i ) real and ( ii ) in the interval .the first requirement means that the two functions , are either both positive or both negative .the second requirement additionally results in if and if .this leads to the phase diagram of the mean - field case shown in fig .[ phase ] .( 1,1 ) ( 0,0)(0.2,0)(0.2,0.4)(0,0.3)(0,0 ) ( 0.2,1)(0.2,0.4)(1,0.8)(1,1)(0.2,1 ) ( 0.2,0.4)(0.4667,0)(1,0)(1,0.8)(0.2,0.4 ) ( 0.2,0.4)(0,0.3)(0,0.7)(0.2,0.4 ) ( 0,0.7)(0.4667,0.0 ) ( 0,0.3)(1,0.8 ) ( 0.2,0)(0.2,1 ) ( 0,0)(1,1 ) ( 0.2,0.4) ( 0.1,0.1)*a * ( 0.075,0.45)*b * ( 0.1,0.9)*c * ( 0.5,0.9)*d * ( 0.7,0.3)*e * ( 0.3,0.1)*f * ( 0.5,-0.2) ( -0.2,0.5) ( 0.68,0.72) ( 0.55,0.1) and .( top ) , ( bottom ) .the solid lines refer to stable solutions , the dashed lines to unstable ones .the notations a - f refer to the respective areas in the phase diagram , fig .[ phase ] .[ a1a2],title="fig:",width=245 ] and .( top ) , ( bottom ) .the solid lines refer to stable solutions , the dashed lines to unstable ones .the notations a - f refer to the respective areas in the phase diagram , fig .[ phase ] .[ a1a2],title="fig:",width=245 ] in order to verify the stability of the solutions , we have further done a perturbation analysis ( see also sect . [sec : pertub ] ) .the results can be summarized as follows : * in the regions and of the mean - field phase diagram , fig .[ phase ] , and are the only stable fixed points of the dynamics , while is an unstable fixed point ( cf . also fig .[ a1a2]top ) .species 1 with will most likely become extinct , while it will remain as the only survivor for .thus , the region can be characterized as the region of _ invasion_. * in region , the mean - field limit predicts the three stable fixed points , and . the attractor basin for is the largest as fig .[ a1a2](top ) indicates .the separatices are given by the unstable solutions , eqn .( [ zero ] ) . in this parameter region , the mean - field limit predicts either _ coexistence _ of both species with equal shares , or _invasion _ of one species , dependent on the initial condition .* in the regions and , only one stable fixed point can be found , while the solutions and are unstable ( cf . also fig .[ a1a2]bottom ) .thus , the mean - field approach predicts the _ coexistence _ of both species with equal share . * finally , in region the solutions 0 , 1 and are unstable fixed points , but the two remaining solutions , eqn .( [ zero ] ) are stable fixed points ( cf .[ a1a2]bottom ) .thus , this region is the most interesting one , since it seems to enable `` nontrivial '' solutions , i.e. an _ asymmetric coexistence _ of both species with different shares .we note again , that this is a prediction of the mean - field analysis . at the intersection of regions and , these two solutions approach and , while at the intersection of regions and they both converge to . we will compare these mean - field predictions both with computer simulations and analytical results from the pair approximations later in this paper .before , in sects .[ sec : limit ] , [ sec : pertub ] we would like to point to some interesting combinations in this phase diagram where the mean - field analysis does not give a clear picture of the dynamics . 
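The mean-field fixed-point structure summarized above can be reproduced numerically with a short script. The sketch below is our own reconstruction from the transition rates introduced before (flip rate alpha_1, alpha_2, 1 - alpha_2, 1 - alpha_1 for one to four opposite neighbours, and zero for a homogeneous neighbourhood); the parameter values in the example call are illustrative.

```python
import numpy as np
from math import comb

def mean_field_rhs(x, a1, a2):
    """dx/dt in the mean-field limit, reconstructed from the transition rates:
    with the focal node counted, a node flips with rate a1, a2, 1-a2, 1-a1
    for 1, 2, 3, 4 opposite neighbours (and 0 for none)."""
    w = [0.0, a1, a2, 1.0 - a2, 1.0 - a1]
    gain = sum(comb(4, m) * x**m * (1 - x)**(4 - m) * w[m] for m in range(5))
    loss = sum(comb(4, m) * (1 - x)**m * x**(4 - m) * w[m] for m in range(5))
    return (1 - x) * gain - x * loss

def fixed_points(a1, a2, n_grid=2001, eps=1e-9):
    """Roots of the mean-field equation in [0, 1], located by a naive
    sign-change search plus bisection; stability from the local slope."""
    xs = np.linspace(0.0, 1.0, n_grid)
    fs = np.array([mean_field_rhs(x, a1, a2) for x in xs])
    roots = [0.0, 1.0]                              # always stationary
    for i in range(n_grid - 1):
        if fs[i] == 0.0 and 0.0 < xs[i] < 1.0:
            roots.append(xs[i])
        elif fs[i] * fs[i + 1] < 0.0:
            lo, hi = xs[i], xs[i + 1]
            for _ in range(60):                     # bisection
                mid = 0.5 * (lo + hi)
                if mean_field_rhs(lo, a1, a2) * mean_field_rhs(mid, a1, a2) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    out = []
    for r in sorted(set(np.round(roots, 6))):
        slope = (mean_field_rhs(min(r + eps, 1.0), a1, a2)
                 - mean_field_rhs(max(r - eps, 0.0), a1, a2)) / (2 * eps)
        out.append((float(r), "stable" if slope < 0 else "unstable"))
    return out

# at the linear voter point the right-hand side should vanish for every x
print(max(abs(mean_field_rhs(x, 0.2, 0.4)) for x in np.linspace(0, 1, 11)))
# an illustrative parameter set with positive frequency dependence
print(fixed_points(0.1, 0.3))
```

The first print confirms that at the linear voter point the mean-field dynamics is degenerate, an observation taken up again below.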
the first set of interesting points are combinations of values 0 and 1 , such as etc .these cases are special in the sense that they describe the _ deterministic _ limit of the nonlinear voter dynamics . whereas for a finite probability exist to change to the opposite state , for the state of node _ never _ changes as long as at least half of the nearest neighbor nodes are occupied by the same species .on the other hand , it will _ always _ change if more then half of the neighboring nodes are occupied by the _ other _ species .this refers to a _ deterministic _ positive frequency invasion process .similarly , a _deterministic _ negative frequency invasion process is described by .the deterministic dynamics , as we know from various other examples , may lead to a completely different outcome as the stochastic counterpart . in order to verify that we have conducted computer simulations using a _ cellular automaton _ ( ca ) , i.e. , a two - dimensional regular lattice with periodic boundary conditions and _ synchronous update _ of the nodes. the latter one can be argued , but we verified that there are no changes in the results of the computer simulations if the sequential update is used . the time scale for the synchronous update is defined by the number of simulation steps . if not stated otherwise , the initial configuration is taken to be a random distribution ( within reasonable limits ) of both species , i.e. initially each node is randomly assigned one of the possible states , .thus , the initial global frequency is .[ determ ] shows snapshots of computer simulations of the deterministic dynamics taken in the ( quasi-)stationary dynamic regime .if we compare the snapshots of the _ deterministic _ voter dynamics with the mean - field prediction , the following observations can be made : 1 .a spatial coexistence of both species is observed for the values , fig .[ det00 ] , , fig .[ det10 ] , , fig .[ det11 ] , where the global frequency in the stationary state is .this contradicts with the mean - field prediction for , which is part of region ( a ) and thus should display complete invasion .a complete invasion of one species is observed for , fig .[ det01 ] .this would agree with the mean - field prediction of _ either _ coexistence _ or _ invasion. a closer look at the bifurcation diagram , fig .[ a1a2](top ) , however tells us that for the given intial condition the stationary outcome should be _ coexistence _ , whereas the deterministic limit shows _ always invasion _ as was verified by numerous computer simulations with varying initial conditions .3 . for the deterministic frequencydependent processes , the spatial pattern becomes stationary after a short time .for the negative frequency dependence the pattern flips between two different configurations at every time step .so , despite a constant global frequency in the latter case , local reconfigurations prevent the pattern from reaching a completely stationary state , but it may be regarded as ( quasi-)stationary , i.e. the macroscopic observables do not change but microscopic changes still occur .4 . in both positive and negative deterministic frequency dependence cases , individuals of the same species tend to aggregate in space , albeit in different local patterns . for the positive frequency dependence, we see the occurence of small clusters , fig .[ det00 ] , or even complete invasion , fig .[ det01 ] , based on the local feedback within the same species . 
for the negative frequency dependence however we observe the formation of a meander - like structure that is also known from physico - chemical structure formation .it results from the antagonistic effort of each species to avoid individuals of the same kind , when being surrounded by a majority of individuals of the opposite species . in conclusion ,the mean - field analysis given in this section may provide a first indication of how the nonlinearities may influence the voter dynamics .this , however , can not fully extended to the limiting cases given by the deterministic dynamics .the other interesting combination is the linear voter point where _ all _ different regions of the mean - field phase diagram , fig .[ phase ] , intersect . inserting into eqn .( [ eq : mean_field ] ) yields regardless of the value of , i.e. for _ all _ initial conditions .this important feature of the linear vm was already discussed in sect .[ sec : vm ] .we recall that , while on the one hand the microscopic realizations always reach consensus ( complete invasion of one species ) in the long term , on the other hand an averaged outcome over many realizations shows that the share of the winning species is distributed as . to put it the other way round , the mean - field limit discussed failes here because it does not give us any indication of the fact that there is a completely ordered state in the linear vm .the averaged outcome , for example , can result both from complete _ invasion _ of species 1 ( 50__coexistence _ _ of the two species .both of these outcomes exist in the immediate neighborhood of the linear voter point as fig .[ phase ] shows . in order to get more insight into this, we will later use the pair approximation derived in sect . [3.2 ] . here ,we first follow a perturbation approach , i.e. we add a small perturbation to the solution describing the macroscopic ordered state of complete invasion ( consensus ) . in terms of the nonlinear response function , expressed by the in eqn .( [ trans2 ] ) , this means a nonzero value of , i.e. a small parameter indicating the perturbation . with this, we arrive at a modified mean - field equation : + \frac{dx}{dt}\ ] ] where is given by the nonperturbated mean - field eqn .( [ eq : mean_field ] ) and the index shall indicate the presence of the perturbation .consequently , this changes both the value of the fixed points , previously given by eqn .( [ zero ] ) and their stability . instead of a complete analysis in the parameter state, we restrict the investigations to the vicinity of the linear voter point where .( [ eq : mean_field - p ] ) then returns only one real stationary solution , , which is independent of and stable .consequently , any finite perturbation will destroy the characteristic feature of reaching an ordered state in the linear vm , i.e. complete invasion , and leaves only _ coexistence _ of both species as a possible outcome .this is little surpising because adding an to the dynamics transforms the former attractor into a repellor , i.e. it prevents reaching the ordered state .more interesting the question is , how the perturbated linear voter dynamics looks in detail .this is investigated in the next section by means of computer simulation , and in sect . [ 4.1 ] by means of the pair approximation approach . 
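Before turning to those simulations, we add a minimal numerical illustration of the perturbation argument (our own sketch; implementing the perturbation as a small flip rate epsilon in an otherwise homogeneous neighbourhood is one possible reading of the modified mean-field equation above).

```python
from math import comb

def perturbed_rhs(x, a1, a2, eps):
    """Mean-field dx/dt with a small perturbation eps: a node may now flip
    even in a completely homogeneous neighbourhood (our reconstruction)."""
    w = [eps, a1, a2, 1.0 - a2, 1.0 - a1]          # rate vs. number of opposite neighbours
    gain = sum(comb(4, m) * x**m * (1 - x)**(4 - m) * w[m] for m in range(5))
    loss = sum(comb(4, m) * (1 - x)**m * x**(4 - m) * w[m] for m in range(5))
    return (1 - x) * gain - x * loss

a1, a2, eps, dt = 0.2, 0.4, 0.01, 0.1              # linear voter point plus perturbation
for x0 in (0.05, 0.5, 0.95):
    x = x0
    for _ in range(20000):                         # simple Euler integration
        x += dt * perturbed_rhs(x, a1, a2, eps)
    print(f"x(0) = {x0:4.2f}  ->  x(inf) ~ {x:.3f}")
```

Independently of the initial condition, the trajectory relaxes to one half, in line with the statement that a finite perturbation leaves coexistence as the only stable outcome of the mean-field dynamics at the voter point.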
For further insight into the dynamics of the nonlinear VM, we perform some computer simulations using the CA approach already described in sect. [sec:limit]. It is important to notice that we have chosen different sets of the parameters from the region of _positive frequency dependence_, as defined in fig. [region], i.e., the transition towards the opposite state strictly increases with the number of neighbors in that state (_majority voting_). So one would naively expect a similar macroscopic dynamics throughout that region, as was done in previous investigations. This, however, is not the case, as the following simulations indicate. A thorough analysis is presented in sect. [4.3]. In order to study the stability of the global dynamics for the different parameter settings in the vicinity of the linear voter point, we have added a small perturbation. As the investigations of sect. [sec:pertub] have indicated, we should no longer expect consensus for the perturbated linear VM, but some sort of coexistence. In fact, we observe an interesting nonstationary pattern formation which we call _correlated coexistence_. Fig. [posit] (obtained for another range of parameters) shows an example of this. We find a long-term coexistence of both species, which is accompanied by spatial structure formation. Here, the spatial pattern always remains nonstationary and the global frequency randomly fluctuates around a mean value, as shown by fig. [x-t-2]. [Fig. [posit]: snapshots (a)-(d) of the correlated coexistence regime; fig. [x-t-2]: global frequency over time for two runs with the same setup and parameters as in fig. [posit], initial frequency x(t=0)=0.5 in both runs.] In more specific terms, the regime defined as correlated coexistence is a paramagnetic phase with finite domain length, typical of partially phase-separated systems. We mention that such a regime was also observed in some related investigations of the VM and other nonlinear spin models with Ising behavior. Also, a similar transition was observed in the Abrams-Strogatz model, where the transition rate is a power of the local field; the stability of the solutions then changes at a critical value of this exponent, from coexistence below it to dominance above it. In order to find out about the range of parameters in the nonlinear VM resulting in the quite interesting phenomenon of correlated coexistence, we have varied the parameters within the region of positive frequency dependence. Fig. [posit] actually shows results from a set picked from region (f) in fig. [phase], where the mean-field analysis predicts an asymmetric coexistence of both species. Obviously, the nonstationarity results from the perturbation. However, for other parameter sets in the positive frequency dependence region, the perturbation does _not prevent_ the system from reaching a globally ordered state, i.e. invasion of one species, as fig. [stoch] verifies. This process is accompanied by clustering and eventually a segregation of both species, indicated by the formation of spatial domains. Fig. [x-t] depicts the evolution of the global frequency of species 1 for different initial frequencies. In every case, one species becomes extinct. For larger initial frequencies species 1 is the most likely survivor, while for smaller ones it is most likely to become extinct; for intermediate initial values, random events during the early stage decide about the outcome.
on the other hand, the perturbation also does _ not induce _ an ordered state as the random coexistence in fig .[ random ] shows , which was again obtained from parameter settings in the region of positive frequency dependence .so , we conclude that computer simulations of _ positive _ frequency dependent processes show three different dynamic regimes ( dependent on the parameters , ) : ( i ) complete invasion , ( ii ) random coexistence , and ( iii ) correlated coexistence . while for ( i ) and ( ii ) the outcome is in line with the mean - field prediction shown in fig .[ a1a2 ] , this does not immediately follows for ( iii ) .so , we are left with the question whether the interesting phenomenon of correlated coexistence is just _ because _ of the perturbation of some ordered state , or whether it may also exist _ inspite _ of .( a)(b ) ( c)(d ) .the initial frequencies =0 ) of the four different runs are : ( a ) 0.6 , ( b ) 0.5 , ( c ) 0.5 , ( d ) 0.4.,width=245 ] .[ x - t ] we just add that for _ negative _ frequency dependent invasion , eqn .( [ eps - alpha ] ) , the the spatial pattern remains random , similar to fig .[ random ] .furthermore , regardless of the initial frequency , on a very short time scales , a global frequency =0.5 is always reached .that means we always find _ coexistence _ between both species .we conclude that for _ negative _ frequency dependence =0.5 is the only stable value , which is in agreement with the mean - field prediction , whereas for _ positive _ frequency dependence the situation is not as clear .to answer the question what ranges of , eventually lead to what kind of macroscopic dynamics , we now make use of the pair approximation already derived in sect . [ 3.2 ] as a first correction to the mean - field limit . here , we follow a two - step strategy : first , we investigate how well the pair approximation , eqs .( [ eq : master2d ] ) , ( [ eq : doublet2d ] ) , ( [ eq : master2dc ] ) of the macroscopic dynamics , eqn . ( [ x - fin ] ) , predict the global quantities and . in order to specify the network topology , we use againthe ca described above .second , we use the pair approximation to derive a phase diagram in the case of local interaction . eventually , we test whether these findings remain stable against perturbations of the ordered state .all predictions are tested by comparison with computer simulations of the microscopic model , from which we calculate the quantities of interest and average them over 50 runs .here we have to distinguish between the three different dynamic regimes already indicated in sect . [ 2.4 ] . regime ( i ) , _ complete invasion _ , is characterized by fixed points of the macroscopic dynamics of either or .the ca simulations as well as also the pair approximation of the dynamics quickly converge to one of these attractors , dependent on the initial conditions . regime ( ii ) , _ random coexistence _, has only one fixed point , , to which the ca simulations quickly converge .the pair approximation converges to after some initial deviations from the ca simulation , i.e. 
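For reference, here is a compact sketch of the kind of CA run described in this section (our own code, not the authors'): synchronous update on a periodic square lattice, flip rates alpha_1, alpha_2, 1 - alpha_2, 1 - alpha_1 for one to four opposite neighbors, an optional perturbation epsilon for homogeneous neighborhoods, and measurement of the global frequency x and the local correlation c_{1|1}. The parameter values are illustrative and not meant to reproduce a particular figure.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, a1, a2, eps=0.0):
    """One synchronous update of the whole lattice (periodic boundaries)."""
    # number of opposite neighbours in the von Neumann neighbourhood (k = 4)
    nb_sum = (np.roll(state, 1, 0) + np.roll(state, -1, 0)
              + np.roll(state, 1, 1) + np.roll(state, -1, 1))
    opposite = np.where(state == 1, 4 - nb_sum, nb_sum)
    rates = np.array([eps, a1, a2, 1.0 - a2, 1.0 - a1])   # vs. # opposite neighbours
    flip = rng.random(state.shape) < rates[opposite]
    return np.where(flip, 1 - state, state)

def observables(state):
    """Global frequency x of species 1 and local correlation c_{1|1}."""
    x = state.mean()
    nb_sum = (np.roll(state, 1, 0) + np.roll(state, -1, 0)
              + np.roll(state, 1, 1) + np.roll(state, -1, 1))
    # fraction of nearest-neighbour pairs of a node in state 1 that are also 1
    c11 = (state * nb_sum).sum() / (4.0 * state.sum()) if state.sum() > 0 else 0.0
    return x, c11

L = 100
state = rng.integers(0, 2, size=(L, L))
a1, a2, eps = 0.1, 0.3, 0.004          # illustrative parameters near the voter point

for t in range(1000):
    state = step(state, a1, a2, eps)

x, c11 = observables(state)
print(f"x = {x:.3f}, c_1|1 = {c11:.3f}")
```

Averaging x and c_{1|1} over many such runs, for a raster of (alpha_1, alpha_2) values, is the procedure used in the next section to locate the phase boundaries.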
it relaxes on a different time scale ( ) , but is correct in the long run .the approximation of the local correlation shows some deviations from the predicted value , .we have tested the case of random coexistence for various parameter values and found values for between 0.4 and 0.6 .the discrepancy is understandable , since in the case of long - term coexistence some of the spatial patterns flip between two different random configurations with high frequency .thus , while the global frequency settles down to 0.5 , the microscopic dynamics is still nonstationary . regime ( iii ) , _ correlated coexistence_ , the most interesting one , is chacterized by an average global frequency of again , however the existing local correlations lead to a much higher value of .this is shown in fig .[ variat ] , where we find from the ca simulations and from the pair approximation .i.e. , for the case of spatial domain formation the long - range correlations can be well captured by the pair approximation , whereas this was less satisfactory for the short - range correlations of the random patterns . with min - max deviations ( top ) and spatial correlation ( bottom ) for the case of correlated coexistence .the two curves shown in the bottom part result from averaging over 50 ca simulations ( black dotted line ) and from the pair approximation ( green solid line ) the parameters are as in figs .[ posit ] , [ x - t-2 ] .( top ) [ variat],title="fig:",width=245 ] ) , ( bottom ) pair approximation .phase boundaries for resulting from ( upper limit ) : left red solid line , right black solid line , for comparison phase boundaries resulting from ( lower limit ) : left red solid line ( identical with ) , right red solid line on the far right . dashed red lines indicate the shift of the phase boundaries for if .the linear voter point ( 0.2,0.4 ) is indicated by .further , marks those three parameter sets from the positive frequency dependence region where ca simulations are shown in figs .[ stoch ] , [ posit ] , [ random ] ( see also global frequency and local correlation in figs .[ x - t-2 ] , [ x - t ] , [ variat ] ) .the straight dashed lines mark the different parameter areas given in eqn .( [ eps - alpha ] ) which are also shown in fig .[ region].,title="fig:",width=245 ] the insights into the dynamics of the nonlinear vm derived in this paper are now summarized in a phase diagram that identifies the different parameter regions for the possible dynamic regimes identified in the previous sections . in order to find the boundaries between these different regimes , we carried out ca simulations of the complete parameter space , .precisely , for every single run the long - term stationary values of and were obtained and then averaged over 50 simulations .as described above , the three different regimes could be clearly separated by their values , which were used to identify the phase boundary between the different regimes .the outcome of the ca simulations is shown in the phase diagram of fig .[ phasepl ] ( top ) and should be compared with fig .[ phase ] , which results from the mean - field analysis in sect .[ 3.2 ] and thus neglects any kind of correlation . instead of the six regions distinguished in fig .[ phase ] , in the local case we can distinguish _ two _ different regions divided by one separatrix : the parameter region left of the separatrix refers to the _ complete invasion _ of one of the species , with _high _ local correlations during the evolution . 
in the ca simulations ,we observe the formation of domains that grow in the course of time until exclusive domination prevails .asymptotically , a _stationary _ pattern is observed , with ( or 0 ) and ( or 0 ) ( cf the simulation results show in figs .[ stoch ] , [ x - t ] ) .i.e. , the system converges into a frozen state with no dynamics at all .the region to the right of the separatrix refers to _ random coexistence _ of both species with and _ no _ local correlations , i.e. . in the ca simulations , we observe _ nonstationary _ random patterns that change with high frequency ( cf .[ random ] ) .both regions are divided by a _separatrix_. as shown in fig .[ phasepl ] ( top ) , the separatrix is divided into two pieces by the linear voter point , ( 0.2,0.4 ) . above that point, the separatrix is very narrow , but below the voter point it has in fact a certain extension in the parameter space .looking at the dynamics _ on _ the separatrix , we find that the mean frequency is both above and below the voter point ( see also fig .[ variat ] , top ) .the local correlation holds within the whole extended area of the separatrix which identifies it as the region of _ correlated coexistence _( see also fig .[ variat ] , bottom ) .the question is how well this phase diagram can be predicted by using the pair approximation of the dynamics , described by eqs .( [ eq : master2d ] ) , ( [ eq : master2dc ] ) . for a comparison, the coupled equations were numerically solved to get the asymptotic solutions for ( which looks complicated but is numerically very fast and efficient ) . in order to distinguish between the three regimes , we have to define a critical value for the correlations .whereas in the ca simulations indicated a correlated nonstationary coexistence , this value was never reached using the pair approximation ( see fig .[ variat ] , bottom ) , so could be regarded as an _ upper limit _ for the case of correlated coexistence . random coexistence , on the other end , yielded in the ca simulations and a value between 0.4 and 0.6 in the pair approximation .so , given suitable initial conditions , we can regard as a _lower limit _ , to identify correlated coexistence .the results are shown in fig .[ phasepl ] ( bottom ) which shall be compared with the phase diagram above ( fig .[ phasepl ] , top ) .[ phasepl ] ( bottom ) shows the influence of the threshold value . with the lower limit, we find a quite broad region of correlated coexistence , which for example also includes the point for which a random coexistence in the ca simulations was shown in fig .[ random ] . using the upper limit results in a much smaller region of correlated coexistence . 
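The classification just described can be condensed into a small helper (our own illustration): given the long-term averages of x and c_{1|1}, either from averaged CA runs or from the pair approximation, the three regimes are separated by a threshold on the local correlation. The default threshold and the tolerance for detecting invasion below are illustrative choices, guided by the random-coexistence values (0.4-0.6) and the correlated-coexistence value of about 0.7 quoted in the text.

```python
def classify_regime(x_mean, c11_mean, c_crit=0.6, x_tol=0.05):
    """Assign one of the three dynamic regimes from long-term averages of the
    global frequency x and the local correlation c_{1|1}.  c_crit separates
    random from correlated coexistence (lower/upper limits as discussed in
    the text); x_tol is an illustrative tolerance for detecting invasion."""
    if x_mean < x_tol or x_mean > 1.0 - x_tol:
        return "complete invasion"
    if c11_mean >= c_crit:
        return "correlated coexistence"
    return "random coexistence"

# illustrative values, roughly matching the cases discussed in the text
for x_mean, c11_mean in [(1.00, 1.00), (0.50, 0.50), (0.50, 0.70)]:
    print(x_mean, c11_mean, "->", classify_regime(x_mean, c11_mean))
```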
comparing this with the ca simulations above, we can verify that the pair approximation correctly predicts the extended region below the voter point and also shows how it becomes more narrow above the voter point .one should note that the left border of the separatrics is not affected by the threshold value , whereas the right border shifts considerably .the left border also contains the voter point ( independent of the threshold value ) , for which a complete invasion can be observed .therefore , it is quite interesting to look into changes of the phase diagram if additional perturbations are considered ( see also sect .[ sec : pertub ] ) .[ phasepl ] ( bottom ) shows ( for the threshold ) that this does _ not _ affects the existence of the three dynamic regimes and most notably of the extended separatrix below the voter point , but only shifts the boundaries toward the left , dependent on the value of ( this can be also verified for but is omitted here , to keep the figure readable ) .thus , the consideration of perturbations in the phase diagram reveals that it is indeed the _nonlinearity _ in the voter model which allows for the interesting phenomenon of the correlated coexistence and _ not _ just the perturbation . a closer look into fig .[ phasepl ] ( bottom ) also shows that in the perturbated phase diagram the voter point no longer lies on the boundary towards the region of complete invasion but clearly _ within _ the region of correlated coexististence .this is in agreement with the findings in sect .[ sec : pertub ] which showed that for the linear vm complete invasion is an unstable phenomenon and changes into correlated coexistence for finite .in this paper , we investigated a local model of frequency dependent processes , which for example models the dynamics of two species in a spatial environment .individuals of these species ( also called voters ) are seen as nodes of a network assumed as homogeneous in this paper ( i.e. all nodes have the same number of neighbors , ) . the basic assumption for the microscopic dynamicsis that the probability to occupy a given node with either species or depends on the frequency of this species in the immediate neighborhood .different from other investigations , we have counted in the state of the center node as well ( see sect . [ sec : vm ] ) and have further considered a _ nonlinear response _ of the voters to the local frequencies .studies of a nonlinear version of the traditional voter model ( without counting the state of the central node and with sequential dynamics ) have already been analyzed before . , as pointed out before , is closest to our investigations , but restricted itself to the mean - field analysis and computer simulations of the 2d case , to obtain a phase diagram similar to fig .[ phasepl ] ( top ) . , on the other hand , have provided a markov analysis which is restricted only to very small ca .the two - parameter model in is for a nonlinear voter model that exhibits at the voter point ( ) a transition from a ferromagnetic phase , i.e. invasion , for to a paramagnetic phase ( correlated coexistence ) for . 
alsothe case of the perturbated linear voter model is included in the model , for and .similar results are also presented in , which points out relations to random branching processes , and in , where the emphasis is on investigations of the interface density , to describe the coarsening process .a recent paper also shows for spin systems with two symmetric absorbing states ( such as the vm ) that the macroscopic dynamics only depends on the first derivatives of the spin - flip probabilities . in our paper, we set out for a formal approach that allows to derive the dynamics on different levels : ( i ) a _ stochastic dynamics _ on the microlevel , which is used for reference computer simulations but also allows a derivation of the ( ii ) _ macroscopic dynamics _ for the key variables , given in terms of differential equations .this macroscopic dynamics is then analysed by two different approximations , ( i ) a _ mean - field approximation _ that neglects any local interaction in the network , and ( ii ) a _ local approximation _ considering correlations between pairs of nearest neighbors . in order to test the validity of these approximations, we compare their preditions with the averaged outcome of the microscopic computer simulations .we like to emphasize that our approach is general enough to be applied to various forms of _ frequency dependent processes _ on homogeneous networks with different number of neighbors .even if a two - dimensional regular lattice is used to illustrate the dynamics , the approach is not restricted to that .our main result , in addition to the general framework for nonlinear frequency dependent processes , is the derivation of a _ phase diagram _ using the _ pair approximation _ derived in this paper .this approach predicts correctly both the type of the dynamics and the asymptotic values of the key variables dependent on the possible nonlinearities for the case of _ local interaction _ , .the predicted phase diagram was verified by comparision with extensive microscopic computer simulations rastering the whole parameter space .while the structure of the phase diagram was already known from previous computer simulations presented in we could demonstrate that the pair approximation works very well both for predicting the correct phase boundaries and the dynamics within these phases .it should be noticed that the pair approximation is a valuable tool , particularly with respect to computational efforts .the computer simulations are much more timeconsuming , since the results of the different runs have to be averaged afterwards .the pair approximation , on the other hand , is based on only 2 coupled equations and therefore needs less computational effort . in the following ,we discuss some of the interesting findings ._ the region of correlated coexistence : _ analysing the nonlinear vm with _ local interaction _ has shown that there are in fact only _ three _ different dynamic regimes dependent on the nonlinearities : ( i ) complete invasion , ( ii ) random coexistence , and ( iii ) correlated coexistence . 
the first one is already known as the standard behavior of the _ linear _ vm .consequently the only interesting feature , namely the time to reach the ordered state dependent on the network size and topology , has been the subject of many investigations .number ( ii ) , on the other hand , only leads to trivial results as no real dynamics is observed .thus , the most interesting regime is ( iii ) correlated coexistence , which can be found in a small , but not negligible parameter region below the voter point .this region separates the two dominant regimes ( i ) and ( ii ) and therefore was called a separatrix here . going over from the right to the left side of the phase diagram in that region, we notice a transition from 0.5 to 1.0 ( or 0.0 respectively ) in the mean frequency , and from 0.5 to 0.7 to 1.0 in the local correlations .thus , in fact separates the two dynamic regimes ( i ) and ( ii ) ( below the voter point ) . for parameters chosen from that region, we find in the ca simulations a long - term and nonstationary _ coexistence _ between the two species as on the _ right _ side of the phase diagram .but we also find the long - range spatial correlations that lead to the formation of spatial domains as shown e.g. in fig .[ posit ] which is characteristic for the _ left _ side of the phase diagram .the spatial pattern formation is also indicated by large fluctuations of shown in fig . [variat](top ) . a single run , as shown in fig .[ x - t-2 ] , clearly indicates the long - term nonstationary coexistence of both species .we emphasize that the separatrix between the two dynamic regimes is well predicted by the macroscopic dynamics resulting from the pair approximation ( as can be clearly seen in fig .[ phasepl ] ) .most importantly , we could verify that the correlated coexistence of both species is not simply the effect of an additional perturbation , but results from the nonlinear interaction ._ comparison with the mean - field phase diagram : _ in our paper , the mean - field approximation plays the role of a reference state used to demonstrate the differences of the local analysis . the phase diagram of fig .[ phase ] distinguishes between six different regions , whereas the one in the local case , fig .[ phasepl ] ( top ) shows only three . comparing the two phase diagrams, we realize that the most interesting regions in fig .[ phase ] , namely ( c ) and ( f ) , have simply collapsed into the separatrix shown in fig .[ phasepl ] .the region ( c ) of unstable asymmetric coexistence or multiple outcome , respectively ( see sect . [ 3 ] ) , relates to the separatrix line above the voter point .it should be noticed that the phase diagram for local interaction , fig .[ phasepl ] , correctly predicts that the deterministic behavior for leads to complete invasion ( see sect . 
[4.3 ] and fig .[ det01 ] ) .the region ( f ) of stable asymmetric coexistence relates to the extended area of the separatrix shown below the voter point in fig .[ phasepl ] , where we still see a coexistence of both species - but the asymmetry between the two species relates to their changing dominance over time , as figs .[ x - t-2 ] , [ variat ] ( top ) clearly illustrate .we conclude that in the local case no regions of stationary _ and _ asymmetric coexistence between the two species exist , in contrast to what was predicted by the mean - field analysis .however , we find a ( small but extended ) region _ on _ the separatrix that shows the _ nonstationary _ and asymmetric coexistence of the two species for _ single _ realizations ( which results in a symmetric coexistence averaged over runs , see fig .[ variat ] , top ) . _ the role of positive frequency dependence : _ the possible nonlinear responses in frequency dependent processes can be distinguished in four parameter areas of positive and negative frequency dependence and positive and negative allee effects , as fig .[ region ] shows .previous investigations assigned a dynamic leading to invasion to a positive frequency dependence , while associating a spatial coexistence with negative frequency dependence .our investigations have shown that such an assignment does not unambiguously hold .in particular , a random coexistence can be found for _ negative _ frequency dependent dynamics as well as for _ positive _ frequency dependence , which was so far assigned to complete invasion only . on the other hand , complete invasion is not observed only for positive frequency dependence , but also for positive and negative allee effects .a random spatial coexistence can be found for positive and negative allee dynamics as well .the only case where just one dynamic regime can be observed is the case of negative frequency dependence .we note , however , that the nonstationary long - term coexistence with spatial pattern formation occurs both for the positive frequency dependence and the negative allee dynamics , given that the parameters are chosen from the most interesting zone of the separatrix below the voter point . in conclusion , the region of positive frequency dependence bears in fact a much richer dynamics , as it is transected by the separatrix we identified in the local analysis and thus shows all three types of dynamics we could identify for nonlinear voter models , namely ( i ) complete invasion of one of the species via the formation of large domains , ( ii ) long - term coexistence of both species with random distribution , ( iii ) long - term coexistence of both species with formation of nonstationary domains .
however , the most interesting regime ( iii ) is _ not _ restricted to positive frequency dependent processes , but can also be found for some negative allee effects .we summarize our findings by pointing out that _ nonlinear _ vm show indeed a very rich dynamics which has not been much investigated yet .in addition to the phenomenon of complete invasion ( or consensus ) which occurs also beyond the linear vm , we find most interesting that certain parameter settings lead to a dynamics with nonstationarity and long - term correlations .thinking about possible applications of the vm , we see that in particular this region has the potential to model relevant observations , be it the temporal dominance of certain species in a habitat or the temporal prevalence of certain opinions ( or political parties ) in a social system .the nonstationarity observed gives rise to the prediction that such dominance may not last forever , and change happens ( even without additional perturbation ) .the authors want to thank thilo mahnig and heinz mühlenbein for discussions on an early version of this paper .here we derive some explicit expressions for the three equations of the pair approximation discussed in sect . [ 3.2 ] , for the global frequency ( eqn .( [ macro - pair ] ) ) , the doublet frequency and the correlation term ( eqn .( [ eq : local1 ] ) ) .the equations are derived for the neighborhood .we use the notation . using eqn .( [ eq : cond_prob ] ) and the transition rates of eqn .( [ trans2 ] ) , we find for , eqn .( [ macro - pair ] ) in pair approximation : \\ & + 4\alpha_1 \left [ \frac{x}{(1-x)^3 } ( 1 - 2x+xc_{1|1})^3 ( 1-c_{1|1 } ) \right .\\ & \qquad \quad - x ( 1-c_{1|1 } ) { c_{1|1}}^3 \big ] \\ & + 6 \alpha_2 \left [ \frac{x^2}{(1-x)^3 } ( 1 - 2x+xc_{1|1})^2 ( 1-c_{1|1})^2 \right . \\ & \qquad \quad - x ( 1-c_{1|1})^2 { c_{1|1}}^2 \big ] \\ & + ( 1-\alpha_1 ) \left [ \frac{x^4}{(1-x)^3 } ( 1-c_{1|1})^4 - x ( 1-c_{1|1})^4 \right ] \\ & + 4(1-\alpha_2)\left [ \frac{x^3}{(1-x)^3}(1 - 2x+xc_{1|1 } ) ( 1-c_{1|1})^3 \right .\\ & \qquad \quad - x ( 1-c_{1|1})^3 c_{1|1 } \big ] \end{aligned}\] we note that and in the mean - field limit , in which case eqn .( [ eq : master2d ] ) reduces to eqn .( [ eq : mean_field ] ) . in order to calculate the time derivative of the doublet frequency we have to consider how it is affected by changes of in a specific occupation pattern of size , , considering the as constant .again , in a frequency dependent process it is assumed that the transition does not depend on the exact distribution of the , but only on the frequency of a particular state in the neighborhood .let describe a neighborhood where the center node in state is surrounded by nodes of the same state .for any given , there are such occupation patterns .the global frequency of neighborhood is denoted as with the expectation value .obviously , can be calculated from the global frequencies of all possible occupation distributions ( eqn .( [ sigm01 ] ) ) , that match the condition i.e. it is defined as regarding the possible transitions , we are only interested in changes of the doublet ( 1,1 ) , i.e. transitions or . the transition rates shall be denoted as and respectively , which of course depend on the local neighborhood .
with this , the dynamics of the expected doublet frequency can be described by the rate equation , eqn .( [ d_frequency2 ] ) . in order to specify the transition rates of the doublets , with and , we note that there are only 10 distinct configurations of the neighborhood .let us take the example .a transition of the center node would lead to the extinction of 4 doublets . on the other hand , the transition rate of the center node is as known from eqn .( [ trans2 ] ) .this would result in .however , for a lattice of size the number of doublets is , whereas there are exactly neighborhoods .therefore , if we apply the transition rates of the single nodes , eqn .( [ trans2 ] ) , to the transition of the doublets , their rates have to be scaled by 2 .similarly , if we take the example , a transition of the center node would occur at the rate and would create 3 new doublets . applying the scaling factor of 2 , we verify that .this way we can determine the other possible transition rates : note that two of the transition rates are zero , because the respective doublets ( 1,1 ) or ( 0,1 ) do not exist in the assumed neighborhood .finally , we express the in eqn .( [ d_frequency2 ] ) by the of eqn .( [ s - m ] ) and apply the pair approximation , eqn .( [ eq : pairapprox ] ) , to the latter one . this way , we arrive at the dynamic equation for : \\ & + 6 \alpha_2 \left[\frac{x^2}{(1-x)^3 } ( 1-x+xc_{1|1})^2 ( 1-c_{1|1})^2 c_{1|1}^{2 } \right.\\ & \qquad \quad -x ( 1-c_{1|1})^2 \big ] \\ & + 2(1-\alpha_1 ) \left [ \frac{x^4}{(1-x)^3 } ( 1-c_{1|1})^4 \right .\\ & \qquad \quad -x ( 1-c_{1|1})^3 \big ] \\ & + 2(1-\alpha_2 ) \left [ \frac{x^3}{(1-x)^3}3 ( 1 - 2x+xc_{1|1})\times \right .\\ & \qquad \quad \times ( 1-c_{1|1})^3 - x ( 1-c_{1|1})^3 c_{1|1 } \big ] \end{aligned}\] the third equation , eqn .( [ eq : local1 ] ) , for the correlation term can be obtained in explicit form by using eqn .( [ eq : master2d ] ) for and eqn .( [ eq : doublet2d ] ) for : \\ & + 6\alpha_2 ( 1-c_{1|1})^3 \left[\frac{x}{(1-x)^3 } ( 1-x+xc_{1|1})^2 -c_{1|1}\right]\\ & + ( 1-\alpha_1)(1-c_{1|1})^4 \left[\frac{x^3}{(1-x)^3 } ( 2-c_{1|1 } ) + c_{1|1}\right ] \\ & + ( 1-\alpha_2)(1-c_{1|1})^3 \big[2c_{1|1}(2c_{1|1}-1 ) \\ & \quad + \frac{x^2}{(1-x)^3}(1 - 2x+xc_{1|1})(6 - 4c_{1|1 } ) \big ] \end{aligned}\] kendall , b. e. ; bjørnstad , o. n. ; bascompte , j. ; keitt , t. h. ; fagan , w. f. ( 2000 ) .dispersal , environmental correlation , and spatial synchrony in population dynamics ._ the american naturalist _ * 155(5 ) * , 628 - 636 .oomes , n. a. ( 2002 ) .emerging markets and persistent inequality in a nonlinear voter model . in : d. griffeath ; c. moore ( eds . ) , _ new constructions in cellular automata _ , oxford university press , 207 - 230 . | in nonlinear voter models the transitions between two states depend in a nonlinear manner on the frequencies of these states in the neighborhood . we investigate the role of these nonlinearities on the global outcome of the dynamics for a homogeneous network where each node is connected to neighbors . the paper unfolds in two directions . we first develop a general stochastic framework for frequency dependent processes from which we derive the macroscopic dynamics for key variables , such as global frequencies and correlations . explicit expressions for both the mean - field limit and the pair approximation are obtained . we then apply these equations to determine a phase diagram in the parameter space that distinguishes between different dynamic regimes .
the pair approximation allows us to identify three regimes for nonlinear voter models : ( i ) complete invasion , ( ii ) random coexistence , and most interestingly ( iii ) correlated coexistence . these findings are contrasted with predictions from the mean - field phase diagram and are confirmed by extensive computer simulations of the microscopic dynamics . _ pacs _ : 87.23.cc population dynamics and ecological pattern formation , 87.23.ge dynamics of social systems |
for millennia humans have observed the classical planets as unresolved points of light moving on the celestial sphere .now we are on the verge of seeing _ extrasolar _ planets as unresolved , moving points of light .we also are exploiting , or soon will exploit , alternative methods of imaging extrasolar planets , one of which we propose in this paper : sea - surface glints .astronomical research and human exploration have a strong historical precedent .the global oceanic explorations of captain james cook and others were in large part motivated by the possibility of measuring the scale of the solar system in physical units , e.g. meters , by geometrical triangulation of the silhouette of venus transiting the sun as viewed from nearly opposite sides of the earth during the transit of venus of 1769 ( maor 2000 ) .cook s expedition required the significant resources of the greatest naval power on earth to sail for nearly 8 months from england to tahiti . that is , he journeyed as far as was humanly possible in the 18th century in order to attempt to answer a scientific question as old as humankind , `` how far is the sun , and hence by elementary geometry , how far are the other planets ? ''although cook and others were blessed with cloud - free skies on the day of the transit , the scientific objective of his expedition failed due to turbulence in the earth s atmosphere . from the long view of history, we do not recall such expeditions costs in lives or gold , nor does it matter much that they failed in their scientific objectives .instead , we revere such expeditions for their power to inspire humans to collectively seek great accomplishments for the good of all humankind . in this paper , we outline a similar scientific quest to understand humankind s place in the cosmos : a historically significant and achievable scientific goal for nasa would be to identify _ oceanic _ extrasolar planets .this goal is simpler than , and also related to , the goal of former nasa administrator dan goldin to image the surfaces of extrasolar planets .if we were to attempt such imagery by diffraction - limited optics , the minuscule angular size of an extrasolar planet would require truly enormous separations between the optical components .however , an indirect imaging technique such as that presented in section [ sec : glints ] may permit mapping the surfaces of extrasolar planets with telescopes that are well within the capabilities of current human technologies .in section [ sec : context ] i provide some context to the lunar exploration initiative and some digressions on possible impacts it may have on society .we need not spatially resolve an extrasolar planet in order to determine if it has an atmosphere and an ocean like those of earth .instead , we propose to exploit the linear polarization generated by rayleigh scattering in the planet s atmosphere and specular reflection ( glint ) from its ocean to study earth - like extrasolar planets . 
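to make the polarization of the glint concrete , the following sketch evaluates the standard fresnel equations for specular reflection from a flat air - water interface ; the refractive index n = 1.33 , the flat ( wave - free , foam - free ) surface and the neglect of wavelength dependence are simplifying assumptions for illustration only .

```python
import numpy as np

def glint_polarization(theta_i_deg, n_water=1.33):
    """fresnel reflectances and degree of linear polarization for specular
    reflection (glint) from a flat air-water interface.
    theta_i_deg : angle of incidence in degrees (0 = normal incidence).
    returns (R_s, R_p, P) with P = (R_s - R_p) / (R_s + R_p)."""
    ti = np.radians(np.asarray(theta_i_deg, dtype=float))
    tt = np.arcsin(np.sin(ti) / n_water)          # snell's law, air -> water
    r_s = (np.cos(ti) - n_water * np.cos(tt)) / (np.cos(ti) + n_water * np.cos(tt))
    r_p = (n_water * np.cos(ti) - np.cos(tt)) / (n_water * np.cos(ti) + np.cos(tt))
    R_s, R_p = r_s ** 2, r_p ** 2
    return R_s, R_p, (R_s - R_p) / (R_s + R_p)

theta_b = np.degrees(np.arctan(1.33))             # brewster angle, ~53 deg
print(theta_b, glint_polarization([0.0, theta_b, 80.0]))
```

near the brewster angle the reflected glint is almost completely linearly polarized ; if the incidence angle of the glint geometry is taken to be half the star - planet - observer phase angle , this maximum polarization occurs at a phase angle near 106 degrees , i.e. in crescent phase , consistent with the phase dependence summarized further below .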
in principle we can map the extrasolar planet s continental boundaries by observing the glint from its oceans periodically varying as the rotation of the planet alternately places continents or water at the location on the sphere at which light from the star can be reflected specularly to earth .the concepts in this section have been described by mccullough ( 2007 ) and independently by others .seager et al .( 2000 ) , saar & seager ( 2003 ) , hough & lucas ( 2003 ) , stam , hovenier , & waters ( 2004 ) , and stam & hovenier ( 2005 ) have examined the rayleigh - scattered light of a hot jupiter .williams & gaidos ( 2004 ) and gaidos et al . ( 2006 ) examined the unpolarized variability of the sea - surface glint from an earth - like extrasolar planet .stam & hovenier ( 2006 ) independently examined the observability of the polarized signatures of an earth - like extrasolar planet , including rayleigh scattering and sea - surface glint .kuchner ( 2003 ) and léger et al .( 2004 ) have proposed that `` ocean planets '' may form from ice planets that migrate inward and melt ; the surfaces of these planets would be liquid water exclusively , i.e. no continents , and it remains to be seen whether such planets would be entirely obscured by a thick steam atmosphere , or instead might have clear atmospheres like earth s .specularly reflected light , or glint , from an ocean surface may provide a useful observational tool for studying extrasolar terrestrial planets .an interesting parallel exists between using glints to image the oceans of extrasolar planets and a similar technique to image within the earth s turbid ocean ( e.g. moore et al .2000 ) . in the latter , one aims an underwater camera toward the sea floor while illuminating the camera s field of view with a laser beam .laser light scattered by the ocean water creates a haze of light visible to the camera , but the laser light that reaches the sea floor creates a well - defined spot that is also detected by the camera . by scanning the laser across the sea floor and simultaneously recording the location and brightness of the peak of the image , the light scattered by the turbid water is suppressed and detection of objects on the sea floor is enhanced . in the proposed technique for imaging extrasolar planets , the glint acts like the localized spot of the laser beam , and the rotation of the planet under the glint serves much the same purpose as the scanning of the laser beam .the polarization of the glint allows one to isolate it from the less - polarized reflection from other regions on the surface , and from the nearly - unpolarized star light scattered in the telescope optics . in the underwater technique , the monochromatic color of the laser s light can be used to increase its contrast over any ambient light .analogously , the two primary mechanisms of polarization , the glint and atmospheric rayleigh scattering , can be differentiated with color , since the former is nearly achromatic whereas the latter depends strongly on wavelength .detection of sea - surface glints would differentiate ocean - bearing terrestrial planets , i.e.
those similar to earth , from other terrestrial extrasolar planets .the brightness and degree of polarization of both sea - surface glints and atmospheric rayleigh scattering are strong functions of the phase angle of the extrasolar planet ( see mccullough 2007 ) .the difference of the two orthogonal linearly polarized reflectances may be an important observational signature of rayleigh scattering or glint .the difference attributable to rayleigh scattering peaks near quadrature , i.e. near maximum elongation of a circular orbit .the difference attributable to glint peaks in crescent phase , and in crescent phase the total ( unpolarized ) reflectance of the glint also is maximized .the reader intrigued by this short summary will find additional detail in mccullough ( 2007 ) and references therein .as quoted in his son s book ( dyson 2002 ) , freeman dyson wrote in may 1958 , _ we shall know what we go to mars for , only after we get there .... you might as well ask columbus why he wasted his time discovering america when he could have been improving the methods of spanish sheep - farming . it is lucky that the u.s . government like queen isabella is willing to pay for the ships . _ in the case of the earth s moon , we have been there already , and to me , the potential tangible benefits of lunar exploration are not as clear as the intangible benefit of inspiring people to reach collectively for grand accomplishments .preparing for this conference has caused me to consider many things that i would not have otherwise . in this section i digress to consider ( in order ) the relative cost of nasa , some potential risks of the us not exploring the moon , a significant budgetary challenge to sustaining that initiative , and some benefits and concomitant risks of the lunar - exploration initiative .humans commonly redefine large quantities in appropriate units , such as the a.u . for solar system distances , or the parsec for stellar distances .i need to do the same for the large sums of money associated with lunar exploration .financially , nasa is approximately equivalent to a single large corporation .
the median market capitalization of the 30 corporations in the dow jones industrial average is 108 b usd ( as of oct 31 , 2006 ) .one such corporation , pfizer , a pharmaceutical company founded in 1849 and headquartered in nyc , in 2005 had annual revenue of 51 b usd and spent 7 b usd on r&d .nasa by comparison was awarded 16 b usd in 2005 by the us congress .the net worth of us households in 2000 was 50 t usd .the aggregate value of corporate equities directly held was 9 t usd in 2000 and was 4 t usd in 2003 , so it declined approximately 2 t usd per year for three consecutive years .that is 2000 b usd per year , or 1 b usd per hour of each and every working day for three years .although it is hard for me as an individual to grasp these large sums , these comparisons may help put into context that nasa s lunar exploration initiative is expensive in one sense but not relative to the richness and power of our society , and especially not in comparison to those of the global society .returning to the moon may make sense when one considers the potential risks or costs to the usa of _ not _ returning to the moon when other nations do .each nation may have a self - interest in establishing a presence on the moon , much like antarctica , in order to assure that no single nation monopolizes it . as someone once said when asked how much the us should spend on science , _ we should spend exactly as much in each field as makes us first in that field . _ if a competitor were to establish a presence on the moon , the us might worry that it was missing out on something of which it was not aware . here , i am reminded of seward s folly , the purchase of alaska , which in fact was both a strategic windfall and an economic bargain . over the many years required to return to the moon , there are large risks that could jeopardize the political will to continue the initiative .the largest risk , in my estimation , is the us federal budget .the aging of the `` baby boom '' demographic of the usa and western europe will soon greatly increase the rate of retirements .beginning approximately in 2010 , which is 1945 ( the end of world war ii ) plus 65 years ( the nominal age of retirement ) , wage earners that had been paying income taxes and capitalizing equity markets will retire and begin to _ withdraw funds _ . political recognition of this impending financial bust is evident ; for example , on jan 18 , 2007 , us federal reserve chairman ben bernanke testified on this specific topic before the senate budget committee , `` we are experiencing what seems likely to be the calm before the storm ....
'' if somehow that bust does not materialize , or the lunar exploration initiative somehow is immune to its effects , i expect a very substantial , albeit largely intangible , benefit of lunar exploration may be to bring the peoples of the world closer together , to save ourselves from ourselves .an excellent treatise on the latter topic was written at the time of the apollo program , `` the tragedy of the commons '' by garrett hardin ( 1968 ) .hardin argues well that some problems do not have a _ technical _ solution .he also points out that scientists , and policy makers as well , often assume that a technical solution exists and fruitlessly seek one in their faith that one will eventually be found .( consider global climate change . )perhaps colonization of the moon will be a metaphor for solving earth s geopolitical problems .for example , a quasi - sustainable presence on the moon would not use fossil fuels ; it would use solar or nuclear power with a considerable emphasis on conservation . due to the large expense , in energy or dollars , of moving mass from the surface of earth to the surface of the moon , nuclear power , in any of the various forms of radioisotope thermoelectric generators , reactors , or explosives , may be convenient for lunar exploration for electrical power , transmuting elements _ in situ _ , or excavations .however , any such convenience , i.e. of anything brought from earth , should be considered a negative compared to the long - term strategic benefit of `` living off the land ( regolith ) . '' for example , solar power is readily and abundantly available on the lunar surface , and a large , slow flywheel utilizing compacted regolith for mass could provide for the variable power demands of human habitation and/or store power through the lunar night away from the poles .an alternative approach would be to bring from earth a high - speed , precision flywheel of relatively small mass or a chemical battery , but those are antithetical to the strategic benefit of utilizing lunar resources .
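as a purely illustrative estimate of what `` large '' means here ( the load and rim speed below are assumptions , not figures from this paper ) : a uniform flywheel disk stores e = ( 1/4 ) m v_rim^2 , so at a modest rim speed of 150 m / s the specific energy is about 5.6 kj / kg . carrying a continuous load of 10 kw through the roughly 14 - day ( about 1.2 x 10 ^ 6 s ) lunar night requires about 1.2 x 10 ^ 10 j , i.e. on the order of 2 x 10 ^ 6 kg of compacted regolith in the rotor , which is exactly why the rotor mass would have to come from the regolith rather than be brought from earth .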
from the opposite perspective , utilizing any water ice mined from lunar craters , for human consumption and/or rocket fuel , could be myopic exploitation and destruction of an important scientific resource .the benefits of knowledge always have had concomitant negative consequences : _ the idea that curiosity leads to disaster has an ancient pedigree .pandora opened the gods box and let loose all the evils of the world ; the descendants of noah built the tower of babel to reach heaven , but god scattered them and confounded their language ; icarus flew so close to the sun that his homemade wings melted , and he fell into the sea and drowned ; eve ate the apple of knowledge and was exiled from the garden of eden ; faust traded his soul for sorcery and spent eternity in hell .... we believe that curiosity is the beginning of knowledge and especially of science , but we know that the application of science has led to disaster . _ finkbeiner ( 2006 ) .
niels bohr , as attributed by rhodes ( 1986 ) , believed the knowledge of nuclear weapons would `` foreclose the nation - state , '' by which he meant that knowledge of nuclear weapons was so powerful that its proliferation would make the nation - state obsolete .today , one might imagine any number of pandora s boxes equally well in that role , but none yet that have been opened ( at least not for long ) outside governmental control .it seems to me an inevitability akin to the second law of thermodynamics that diffusion of knowledge will occur , whether or not that diffusion is good for humankind .technology increasingly amplifies the power of an individual , or a small group of individuals , for good or for evil . as archimedes said of the lever , _ give me a place to stand on , and i will move the earth . _
here i suggest the lesser task of moving an asteroid , as an example of something that seems entirely fanciful now , but which may not be so in decades hence .consider again the example given in the introduction , of captain cook s voyage : at that time , it required government sponsorship , whereas today s transportation and technical infrastructure make replicating the feat entirely within the capacity of a private individual .astronomers have suggested that they may give early warning of an asteroid on a collision course with earth .technologies useful to measuring the orbit of such an asteroid with precision sufficient to enable the prediction of an impending collision , and especially those technologies useful to perturbing it to prevent a collision , are vaguely understood and only potentially available to powerful nations today .mostly , those nations lack a specific motivation to act .however , in some number of decades , what would have required the concerted effort of a nation or nations might be accomplished by a group of individuals , but instead of preventing a collision , that group could attempt clandestinely to create a collision from what naturally would have been a near miss . even a credible threat to do so would be influential .presumably it is more difficult to turn a near miss into a collision than vice versa , and today we may take solace in our confidence that such a concept is entirely impractical .my purpose is to illustrate the duality of technology s amplification of power to individuals with a novel example potentially relevant to the lunar exploration initiative and related technologies .for those that reasonably consider the proposition patently absurd , i recite with intentional irony margaret mead , _ never doubt that a small group of thoughtful , committed citizens can change the world . indeed , it is the only thing that ever has . _ an indirect method of imaging the oceans and continental boundaries of extrasolar planets is outlined .results from simulations of earth - like extrasolar planets ( mccullough 2007 ) are presented in section [ sec : glints ] and demonstrate that the difference of fluxes in two orthogonal linear polarizations is modulated by the planet s rotation , as it alternately places continents or water at the location on the sphere at which light from the star can be reflected specularly to earth .section [ sec : context ] digresses into the lunar exploration initiative s potential impacts on society and vice versa .gaidos , e. , moskovitz , n. , & williams , d. m. 2006 , iau colloq . 200 : direct imaging of exoplanets : science & techniques hardin , g. 1968 , science , 162 , 1243 hough , j. h. , & lucas , p. w. 2003 , esa sp-539 : earths : darwin / tpf and the search for extrasolar terrestrial planets , 11 kuchner , m. j. 2003 , apj l , 596 , l105 léger , a. , et al . 2004 , icarus , 169 , 499 mccullough , p. r.
2007 , apj , submitted , also arxiv astrophysics e - prints , arxiv : astro - ph/0610518 moore , k.d . , jaffe , j. s. , and ochoa , b. l. 2000 , journal of atmospheric and oceanic tech .17 , no . 8 , pp . 1106 - 1117 , also http://jaffeweb.ucsd.edu/pages/instruments/lls/lls.html rhodes , r. 1986 , the making of the atomic bomb , simon & schuster , new york , p. 783 .seager , s. , whitney , b. a. , & sasselov , d. d. 2000 , apj , 540 , 504 stam , d. m. , de rooij , w. a. , cornet , g. , & hovenier , j. w. 2006 , a&a , 452 , 669 stam , d. m. , & hovenier , j. w. 2005 , a&a , 444 , 275 stam , d. m. & hovenier , j. w. 2006 , iau colloq . 200 : direct imaging of exoplanets : science & techniques stam , d. m. , hovenier , j. w. , & waters , l. b. f. m. 2004 , a&a , 428 , 663 williams , d. m. , & gaidos , e. 2004 , bulletin of the american astronomical society , 36 , 1173 yardeni , e. 2004 , consumer handbook ( with baby boom charts ) , prudential equity group , llc , dated aug 13 , 2004 . | ambitious studies of earth - like extrasolar planets are outlined in the context of an exploration initiative for a return to the earth s moon . two mechanisms for linearly polarizing light reflected from earth - like planets are discussed : 1 ) rayleigh - scattering from a planet s clear atmosphere , and 2 ) specular reflection from a planet s ocean . both have physically simple and predictable polarized phase functions . the exoplanetary diurnal variation of the polarized light reflected from an ocean but not from a land surface has the potential to enable reconstruction of the continental boundaries on an earth - like extrasolar planet . digressions on the lunar exploration initiative also are presented . |